8:00 - 9:15 **POSTER SESSION**

1. **Use of the Communication Profile for the Hearing Impaired (CPHI) in a Cochlear Implant Population** Susan M. Binzer and Timothy A. Holden; *Washington University School of Medicine, St. Louis, Missouri*

This poster will compare: (a) normative data for the CPHI (Erdman & Demorest, 1998) with preoperative values from 60 cochlear implant candidates; (b) average scale scores for users of the SPEAK strategy for the Nucleus Cochlear Implant Systems preoperatively and at 3 months post initial stimulation; and (c) scale difference scores of these users, with their ability to communicate (categorized as above average, average, and below average) at 3 months post initial stimulation. Results indicate that, preoperatively, implant clients respond differently than the general population with hearing impairment. The average difference scores for several scales increase with improved performance.

2. **Determining Handicap and Coping Abilities: A Comparison of Methods** L. Dillon Edgett, N. Lamb, K. Roodenburg, M.K. Pichora-Fuller, and C. Johnson; *University of British Columbia, Vancouver, British Columbia, Canada*

Many clinicians attempt to estimate a client's handicap and coping ability as part of the rehabilitative process. What is the most accurate and reliable method to obtain this information? We will compare and contrast results from several approaches in a case study format. The methods used include (a) the Hearing Performance Inventory (HPI); (b) discourse analysis of conversations held between the client and a rehabilitative audiologist in a controlled environment (e.g., quiet/noisy background, defined/undefined topic); and (c) a qualitative interview regarding the importance of communication in everyday life, attitudes toward communication, and the impact of hearing loss.

3. **The Effects of Hearing Loss on Conversational Interactions of Couples: A Preliminary Study** Susan K. Harned, Alice E. Holmes, and Norman Markel; *University of Florida, Gainesville, Florida*

How does hearing loss affect communication within a marriage? This preliminary study analyzed the content of conversational samples from two pairs of couples, young and old, hearing impaired and normal hearing. Attention was paid to identifying conversational elements that could be a consequence of hearing impairment. Non-verbal indicators were primarily used for motivational and perceptual judgments of conversational commitment. Analyses were conducted to identify positive and negative conversational elements, as well as elements not used or attempted in spontaneous interactions, which can contribute to reported spousal feelings of disengagement and loss of intimacy.

4. **Comparison of Performance Intensity Functions Obtained With Isophonemic, W-22 and NU-6 Word Lists** Laura J. Kelly; *Miami University, Oxford, Ohio*

There are few data comparing speech recognition scores obtained using isophonemic, W-22, and NU-6 word lists. Yet isophonemic word lists may represent a faster, more efficient means of obtaining speech information for use in diagnostics and hearing loss management. The administration of two 10-word isophonemic word lists takes about the same time as a half list of the W-22 or NU-6. This study compares performance intensity functions obtained with the three lists and discusses interpretation and implications of the results for diagnostics and hearing loss management.

5. **Use of DYALOG to Measure Conversational Fluency of Children With Cochlear Implants** Elizabeth Mauzé, Nancy Tye-Murray, and Ann Geers; *Central Institute for the Deaf, St. Louis, Missouri*

This poster will illustrate ways in which DYALOG (Erber, 1997) can be used to quantify important aspects of a fluent conversation beyond its original intent, which was to record communication breakdown.
Ten-minute segments of videotaped conversations with deaf children were used to code the occurrence of selected events. Ratios were calculated to quantify dominance of the conversation by the child or by the adult, and the tendency of the conversation to be characterized by silence or by breakdown. These metrics are examined in relation to other measured characteristics of this sample in order to determine factors predictive of conversational fluency.

6. **User-Friendly Approaches to Information Dissemination** Mary Pat Moeller; *Boys Town National Research Hospital, Omaha, Nebraska*

The Center for Hearing Loss in Children is an NIDCD-sponsored research and training center at the Boys Town National Research Hospital. A major charge of the Research and Training Center grant is to disseminate information related to hearing loss to parents, family members, and the general public. This poster will review strategies implemented to date, products developed, and future needs. The poster will include examples of dissemination approaches, including a www site, public information fact sheets, videotapes (information and support for parents, sign language curricula, etc.), instructional manuals, and public service announcements.

7. **Effects of In-Service Training on Nursing Staff Perception and Knowledge of Hearing Loss** Joanne E. Morgan and Laura J. Kelly; *Northern Kentucky Easter Seal Center, Covington, Kentucky, and Miami University, Oxford, Ohio*

The role of audiologists in nursing homes includes staff education about hearing loss. In this study, 30-min in-services were presented in seven nursing homes. Nursing assistants, licensed practical nurses, and registered nurses participated. Participants completed the DanPat questionnaire pre- and post-training. The questionnaire has 31 items divided into four major categories: (a) attitudes and perceptions, (b) hearing aids, (c) hearing loss, and (d) speechreading and aural rehabilitation.
Results demonstrate the effectiveness of the in-services. Discussion includes implications for future audiologic care in nursing homes and recommendations for improving in-services and hearing care in nursing homes.

8. **Reducing Hearing Aid Returns Through Patient Education** Jerry L. Northern and Cindy Beyer; *HEARx, West Palm Beach, Florida*

Audiologists often question the necessity of aural rehabilitation as it relates to hearing aid dispensing. Records were reviewed from a large sample of patients (N = 9,868) who ordered hearing aids between January and June of 1997. Approximately one-third of the patients (n = 3,306) elected to attend a free series of aural rehabilitation classes. Rehabilitation participants showed a 3.5% hearing aid return rate, compared to a 12% return/cancellation rate for patients who did not attend the classes.

9. **Soundscape: The Concept and its Potential Application in Audiologic Rehabilitation** M. Kathleen Pichora-Fuller, P. Kooner, and B. Truax; *University of British Columbia, Vancouver, British Columbia, Canada*

A soundscape is a sound sample of auditory events in a specific environmental context. An extensive library of sounds found in the Vancouver soundscape has been recorded and published. We have extended this method to record the sound world of specific hard-of-hearing individuals so as to learn about their listening environments and their responses to these environments. This approach will be demonstrated using a case study of a hearing-impaired senior in a care facility. Audio recordings of her sound world will be presented along with her reflections on the meaning of sound to her in daily life.

10. **Interdisciplinary Training of Early Intervention Specialists for Children with Hearing Impairment and Their Families** Ronald Sommers, Irvin Gerling, John Hawks, Harold Johnson, and Carol Sommer; *Kent State University, Kent, Ohio*

Traditionally, audiologists and deaf educators have not worked together, although both disciplines advocate for clients and perform rehabilitation with hearing-impaired clients. There is a pressing need for specialized personnel to serve very young children (birth to 3 years) and their families in a family-centered program. This federally funded training program brings deaf education and audiology students together in a comprehensive transdisciplinary climate.

9:30 - 10:15 **1998 KEYNOTE ADDRESS**

**Effects of Hearing Impairment on Family Life** Lillemor Hallberg, 1998 Keynote Speaker; *University of Göteborg, Göteborg, Sweden*

The burden of hearing disability is shared with close relatives. Studies on the effects of hearing disability, however, commonly focus on the person with the impairment. The purpose of this qualitative study was to describe, from the perspective of spouses, their experiences of living with a man with a severe noise-induced hearing loss. Transcribed in-depth interviews were analyzed in the grounded theory tradition. Two primary issues emerged from the data: the husband's reluctance to acknowledge hearing difficulties and the impact of hearing loss on the intimate relationship.

10:15 - 10:45 **Reducing Hearing Aid Returns Through Patient Education: Implications for Rehabilitation** Jerry L. Northern, Invited Speaker; *HEARx, West Palm Beach, Florida*

The difference in hearing aid return rates between patients who do and do not attend aural rehabilitation classes is evidence of the effectiveness of these services. All too often, dispensing audiologists do not include formal counseling or patient education programs beyond a hearing aid orientation.
The rationale for formalizing rehabilitative intervention, the implications for practitioners, and suggestions for implementing these services in a variety of clinical settings will be discussed.

10:45 - 11:15 **Update on the Fitting of Amplification in Infants and Young Children** Richard Seewald, Invited Speaker; *University of Western Ontario, London, Ontario, Canada*

In theory, the identification of hearing impairment in infancy leads directly to family-centered habilitation, including the fitting and optimal use of amplification. This presentation will review the key elements of a contemporary approach to pediatric hearing aid fitting as described in the recent Position Statement on Amplification for Infants and Children with Hearing Loss. Specific procedures discussed in the Position Statement will be demonstrated through video presentation.

11:15 - 11:45 **The World Health Organization's Definition of Handicap: Implications for Training in Audiological Rehabilitation** Louise Getty and Jean-Pierre Gagné; *Université de Montréal, Montréal, Québec*

As early as 1980, the World Health Organization (WHO) presented a conceptual framework for the classification of handicaps, clarifying the concepts of impairment, disability, and handicap. Since then, researchers in rehabilitation have worked to improve this model. In 1989 it became the International Classification of Impairments, Disabilities and Handicaps (ICIDH) model, which is more interactive and takes into account life habits and environmental factors in the creation of situations of handicap. The presentation will review this more recent conceptual framework, discuss the training of students in rehabilitative audiology, and identify, in a curriculum, the courses that target these concepts. Conclusions will follow.

**1:15 - 2:00** **Psychological Factors in the Outcomes Experienced by Adult Cochlear Implant Users: Implications for Rehabilitation** John F. Knutson, Invited Speaker; *University of Iowa, Iowa City, Iowa*

Although modern cochlear implants provide considerable optimism for the restoration of hearing in post-lingually deafened adults, the documented variability in the audiological outcome of implants can temper that optimism. In an effort to understand the variability in outcome and to evaluate alternative indices of benefit, psychological factors in implant use and benefit have been studied for almost two decades. Based on a sample of adult implant candidates consecutively referred between 1980 and 1998, findings related to three specific topics will be presented: (a) psychological characteristics of post-lingually deafened adults seeking a cochlear implant, (b) psychological factors that predict implant benefit among post-lingually deafened users of multi-channel implants, and (c) changes in psychological function following sustained implant use. Within each topic, the implications of the findings for clinicians working with implant recipients will also be considered.

**2:00 - 2:30** **A Computerized Music Training Program for Adult Cochlear Implant Recipients** Kate Gfeller and Shelley Witt; *University of Iowa, Iowa City, Iowa*

This presentation describes the development and assessment of a self-administered computerized music training program for adult cochlear implant recipients. The format and content of the program are based on: (a) models of adult aural rehabilitation, (b) existing knowledge of music cognition and pedagogy, (c) models of adult learning, (d) feedback from implant recipients garnered through surveys and interviews with regard to music listening, and (e) data from a pilot study using a workbook and cassette tape. Components of the training program consist of pitch sequence perception, song recognition, timbre recognition and appraisal, and appraisal of different musical styles.
2:30 - 3:00 **Multichannel Cochlear Implant Update for Postlingually Deafened Adults** Cathleen O'Connor and Joanne Schupbach; *Chicago Otology Group, Hinsdale, Illinois*

Recent advances in cochlear implant technology have resulted in significantly improved patient performance, necessitating changes in cochlear implant candidacy criteria to include severely and profoundly impaired persons with more residual hearing and better aided speech understanding. The goal of this paper is to review changes in candidacy criteria and steps in the referral, evaluation, and rehabilitation process. We will also present distinguishing characteristics of four multi-channel cochlear implants. Objectives will be geared toward audiologists working outside an implant center so they may appropriately refer candidates for cochlear implantation.

3:15 - 3:45 **Preparing Aural Rehabilitation Students for Clinical Interaction and Functional Assessment** Norman P. Erber; *LaTrobe University, Melbourne, Victoria, Australia*

A brief self-teaching AR activity is described in which university students: (a) experience different amounts of simulated hearing/vision loss, (b) play the roles of both client and clinician, (c) apply clarification requests and behaviors, and (d) learn to apply two different methods for evaluating conversational performance. Students converse under six perceptual conditions via closed-circuit audio/video (CONAN) incorporating adjustable hearing-loss (HELOS) and vision-loss simulators (Erber, 1996). A videotape will permit participants to observe interactions under various hearing-loss and vision-loss conditions, rate conversational fluency, and observe how computer-based DYALOG conversation-analysis data are obtained. Summary data will be presented and discussed.
3:45 - 4:15 **Evaluation of Behind-the-Ear FM Technology** Linda Thibodeau; *Callier Center for Communication Disorders and University of Texas at Dallas, Dallas, Texas*

With the development of smaller FM components, options for receiving the signal through ear-level instruments have increased. Following a description of ear-level options, several issues regarding performance evaluation, maintenance, variations in antenna strength, and battery drain will be reviewed.

4:15 - 5:00 **1998 ROUND TABLE** *Everything You Ever Wanted to Ask About AR!*

**Moderator:** Patricia A. McCarthy; *Rush University, Chicago, Illinois*

**Panelists:** Jerome G. Alpiner; *Hear Now, Denver, Colorado* Jerry L. Northern; *HEARx, West Palm Beach, Florida* John J. O'Neill; *University of Illinois at Urbana-Champaign, Champaign, Illinois* Laura A. Wilber; *Northwestern University, Evanston, Illinois*

This special session brings together a group of individuals whose careers have exemplified rehabilitative audiology. Their collective experience, perspective, and wisdom provide a unique opportunity for participants to pose some of those questions for which they have yet to find satisfactory answers. The panelists have agreed to address questions from the audience and to discuss the ways in which their experiences as clinicians, researchers, and teachers contributed to their commitment to rehabilitative audiology.

**SATURDAY, JUNE 13**

8:30 - 9:30 **Effectiveness of Early Intervention** Mary Pat Moeller, Invited Speaker; *Boys Town National Research Hospital, Omaha, Nebraska*

This presentation explores three fundamental questions related to the effectiveness of early intervention for children with hearing loss: (a) Why should we intervene early? – with a focus on the need for prevention and a broadening of the outcome variables we examine; (b) What do we know about early intervention effectiveness? – including literature reviews and outcome data from the Diagnostic Early Intervention Project at BTNRH; and (c) What else do we need to know? – with implications for research and practice.

9:30 - 10:00 **A Unique Listening Experience for Children With Cochlear Implants** Linda Thibodeau, Diana Terry, Jennifer Basham, and Emily Tobey; *Callier Center for Communication Disorders and University of Texas at Dallas, Dallas, Texas*

An annual summer day camp for children with cochlear implants was initiated at the Callier Center for Communication Disorders at the University of Texas at Dallas in 1996. Through an interdisciplinary team approach including psychologists and speech pathology/audiology graduate students, faculty, and practicum supervisors, a multifaceted program was presented for children ages 2 to 10. Families also participate through specialized sibling and parent workshops. The activities center around a theme and are designed to develop listening skills.

**10:00 - 10:15** **The Impact of Otitis Media in Infancy on Maternal Language in Mother-Infant Interactions** Sheila Pratt and Michele Morrissey; *University of Pittsburgh, Pittsburgh, Pennsylvania*

The purpose of this study was to examine the linguistic behaviors of mothers of infants with positive histories of otitis media. Twenty-six infants were monitored monthly for otitis media from 1 through 12 months of age. Hearing was also monitored monthly. At 12 months of age, infants and their mothers were video-recorded in a laboratory play setting. From the recordings, the mothers' MLUs and verbal density were calculated and verbal directiveness was analyzed. The productions of the mothers of otitis media-positive infants were similar to those of the mothers of otitis media-negative infants.
**10:30 - 11:15** **Life After Hearing Loss: Providing Services for Adults with Late-Onset Hearing Loss** Mary Clark and Cathleen O'Connor; *Hearing Loss Link and Chicago Otology Group, Chicago, Illinois*

Individuals who experience acquired hearing loss can face a challenging identity crisis. The rehabilitative audiologist must understand this type of hearing loss and the grief process in order to enable the client to cope with the loss, to determine communication needs, and to develop and employ appropriate communication strategies. The workshop will also address issues that affect family members, friends, workplace associates, and professionals who work with the individual with acquired hearing impairment.

**11:15 - 11:45** **Group Rehabilitation of Men With Noise-Induced Hearing Loss and Their Spouses** Lillemor Hallberg, 1998 Keynote Speaker; *University of Göteborg, Göteborg, Sweden*

According to an environmentally related definition of handicap, imperfections and barriers in the physical and social milieu are the triggers for a handicap to arise. Based on these assumptions and on results from earlier research on predictors of handicap, a group rehabilitation program for men with noise-induced hearing loss (NIHL) and their spouses was designed. The aim of the program was to offer the couples psychosocial support, adequate knowledge of the nature of NIHL, and training in effective coping strategies and hearing tactics. Short- and long-term results will be discussed with emphasis on the implications for program planning.
11:45 - 12:15 **The Application of a Client-Centered Approach to Evaluate the Effectiveness of Intervention Programs for Persons with a Hearing Loss** Jean-Pierre Gagné, Stéphane McDuff, Denis Charron, and Louise Getty; *Université de Montréal and Institut Raymond-Dewar, Montréal, Québec, Canada*

The findings of a preliminary investigation designed to evaluate the effectiveness of intervention programs based on the principles underlying a client-centered, problem-solving approach to rehabilitation will be presented. The results of the investigation provided some insights into the benefits provided by assistive listening devices (sound transmission systems and alerting devices) in solving specific situations of handicap. In addition, interviews conducted with the participants (N = 10) provided important information concerning the implementation of the intervention programs designed for individual participants. Finally, the documented impacts and consequences of the intervention programs will be discussed.

**SUNDAY, JUNE 14**

8:00 - 8:45 **Casting Audiologic Rehabilitation Within a Model of Miscommunication** M. Kathleen Pichora-Fuller, Carolyn Johnson, Noelle Lamb, Kristen Roodenburg, and Lisa Dillon Edgett; *University of British Columbia, Vancouver, British Columbia, Canada*

Two major functions of communication are transaction (information exchange) and interaction (forming and maintaining social relations). Many audiologists have been biased in their concern with the transactional function and their relative neglect of the interactional function. In our work, we have tried to understand when and how listeners choose conversational behaviors that achieve either transactional or interactional goals, and how these may be pitted against each other. We propose a new way of organizing conversation therapy that deploys a model of "miscommunication" taken from the social psychology literature (Coupland, Wiemann, & Giles, 1991).
8:45 - 9:30 **The Influence of Personal Beliefs in Adaptation to Hearing Loss** Tom Conran and Susan Binzer; *St. Louis University and Washington University School of Medicine, St. Louis, Missouri*

The objective of this study is to describe the belief systems and coping mechanisms of patients who appear to deny typical reactions to communication problems as interpreted by the Communication Profile for the Hearing Impaired (CPHI). Using narrative ethnographies, we distinguish between those individuals who appear to have accepted the limitations of their hearing impairment and those who appear to deny their anger, sadness, and panic. We have found that strong, adaptive belief systems assist some patients in achieving a high level of acceptance. Early identification and continuous amplification of positive belief systems can significantly aid audiologic rehabilitation.

9:30 - 10:00 **Gender, Age, and Hearing Loss-Related Behaviors Among Older Adults** Susan F. Erler and Dean C. Garstecki; *Northwestern University, Evanston, Illinois*

There is a need to address the heterogeneity among older adults with impaired hearing. This paper reports on three investigations of hearing loss, giving consideration to gender, age, and degree of impairment. Results suggest that: (a) among adults with mild-moderate hearing loss, women perceive greater communication and personal adjustment difficulties; (b) in addition to gradual declines in hearing sensitivity, financial and emotional resources vary among women of different ages; and (c) moderate-severe hearing loss has a greater emotional impact on women. These findings underscore the need to develop evaluation and rehabilitative techniques that accommodate diverse segments of the aging population.

10:15 - 10:45 **Hearing Impairment, Coping, and Perceived Handicap** Lillemor Hallberg, 1998 Keynote Speaker; *University of Göteborg, Göteborg, Sweden*

Coping has a central role in adaptation to illness and disability.
The purpose of these studies was to describe coping with demanding auditory situations from the perspective of hearing-impaired persons. In-depth interviews were conducted, transcribed verbatim, and analyzed according to the grounded theory method. Two qualitatively different coping patterns, or concepts, were revealed: controlling the social scene and avoiding the social scene. A handicap can be created in situations related to environmental factors and in situations related to life habits and social roles. These conditions can be distinguished by the hearing-impaired person's control over what is happening. There is an apparent relationship between the use and choice of coping strategies and the desire to prevent or minimize stigmatization.

10:45 - 11:15 **Psychological and Marital Adjustment Among Adults With Hearing Impairment** Sue Ann Erdman and Marilyn E. Demorest; *UMBC, Baltimore, Maryland*

Results of a multicenter investigation refute the notion that hearing impairment is associated with specific psychological disturbances. Normative data mirror those for the general population on measures of anxiety, depression, loneliness, and coping behavior. Degree of hearing impairment did not predict any of the psychological measures; the psychological measures do, however, predict adjustment to hearing impairment. In a related investigation, spouses underestimated communication importance, environmental problems, and their hearing-impaired partners' use of communication strategies, and overestimated their partners' self-acceptance. Correlations between the spouses' scores are moderate, indicating that couples' views of adjustment to hearing impairment do not strongly agree. Marital adjustment for these couples, as assessed by three marital satisfaction scales, is highly similar to that of the standardization samples.

11:15 - 11:45 **Adjustment to Hearing Impairment: A Model** Marilyn E. Demorest and Sue Ann Erdman; *UMBC, Baltimore, Maryland*

Studies of psychosocial and behavioral adjustment to hearing impairment have revealed relationships among several domains that form the basis of a working model of hearing impairment. Structural equation modeling was used to portray these patterns of association. Relationships between Hearing Ability and constructs assessed by the CPHI were modeled with data obtained from a heterogeneous sample of hearing-impaired adults (N = 1,051). Variables assessing Psychological Distress were added (N = 179), with previously estimated parameters of the measurement model fixed. Direct and indirect effects in the model, and the direction of these effects, will be described for the following: Hearing Ability, Communication Performance, Communication Importance, Communication Strategies, Environmental Demand, Psychological Adjustment, and Psychological Distress.
**Widdowson's Use of Speech Act Theory in his Approach to Analyzing Discourse**

Walter DAVIES; *Institute for Foreign Language Research and Education, Hiroshima University*

My aims in this article are to discuss Widdowson's (1978) ideas on cohesion and coherence and the underlying philosophy that is used to establish the distinction, and then to examine whether the definition of cohesion can be changed in order to make the analysis of stretches of discourse easier for language teachers. While Widdowson uses his analysis of discourse to argue for the use of certain kinds of written texts in language teaching, the analysis of discourse itself, particularly in relation to correcting students' written work or providing reading texts for students, is an important part of most teachers' work. I argue that Widdowson's analysis of discourse, while giving support to his teaching ideas, is not easy to use as a framework of analysis in this more practical, everyday aspect of teaching. Consequently, I explore the underlying philosophy and propose a change to the definition of cohesion.

This article focuses primarily on one book, Widdowson's *Teaching Language as Communication*. In a previous article (Davies, 2010), I noted that Widdowson draws on ideas that emerge in the philosophy of language to develop ideas on teaching languages. In this article I consider how he links the two fields by examining his use of ideas drawn from J. L. Austin and John Searle. I argue that he uses a hybrid approach, based on the ideas of both philosophers, and that this creates ambiguity in the analysis. I argue that Widdowson has much more in common with the ordinary language philosophy of Austin, where problems are analyzed through natural language, than with the analyses of Searle, who incorporates symbolic devices drawn from logic.
I then consider whether an analysis of discourse using Austin's concept of the *locutionary act* gives greater clarity than Searle's concept of the *propositional act*, and examine the implications of this change for the concept of cohesion. In the first section, I give a brief summary of *Teaching Language as Communication*, particularly with reference to its first chapter, which sets up the overarching ideas of 'usage' and 'use' that are a constant theme within the book. In the second section, I consider the link to the two philosophers of language, J. L. Austin and John Searle, examining some of the differences between them, particularly in relation to the difference between a *locutionary act* and a *propositional act*. In the third section, I consider Widdowson's chapter on discourse, where he takes the concepts of the *propositional act* and the *illocutionary act* and uses them to develop the ideas of cohesion and coherence, which can aid the analysis of stretches of language. I then consider two key problems connected to the use of Searle's concepts, given that Widdowson's analysis is primarily based on an ordinary language approach. These problems are the absence of a definition of the *proposition*, and the overlap that exists between *propositional acts* and *illocutionary acts*. Finally, I consider the implications of using the concept of the *locutionary act* in contrast to the *propositional act*, and re-define the concept of cohesion.

Several points should initially be made in relation to *Teaching Language as Communication*, these being the spirit in which the book is written, the differences between philosophy and applied linguistics, the limitations of the analysis, and the necessity of prescriptivism in language teaching. In his introduction, Widdowson (1978) makes clear the spirit in which he has written the book.
He distinguishes between the classical view of publication, in which a writer has essentially worked out all the ideas he/she is writing about and reveals them in a way that is as definitive and precise as possible, and the romantic approach, which is less cautious and regards publication as a device for public speculation. He notes that "the aim here is to stimulate interest by exposure, to suggest rather than to specify, to allow the public access to personal thinking" (p. x). Widdowson subscribes to this latter view, and it is in this spirit that I have written this article: I treat the book as open to interpretation and discussion.

Given that it is a book on applied linguistics, I make my case in this field rather than in relation to issues that emerge in the philosophy of language. While applied linguistics draws from the philosophy of language, I argue that it is a different discipline with different overall aims. Some of the analysis in this article focuses on the purposes of the three thinkers (Austin, Searle, and Widdowson). In philosophy, I have argued that Austin was a pioneer (Davies, 2010), breaking philosophy at Oxford University out of the strait-jacket of logical positivism: having established the concept of the speech act, he attempted to identify, categorize, and list speech acts through the use of *illocutionary verbs*. Searle is also a philosopher, one with several clear aims for his research, who seeks to develop precise tools for achieving them. Widdowson, an applied linguist, draws on the philosophy of language to shed light on language teaching and to open possibilities for teaching methodology and materials development. This leads to differences in stress and emphasis when the same sets of ideas are being used. Issues that may be central to an argument in philosophy may have less centrality when they are used to develop ideas in applied linguistics.
A further consideration in applied linguistics relates to the usefulness of an analysis of discourse for the purposes to which it is put. This article is written from an applied linguistics perspective, with a focus on the limitations of a particular set of ideas. Widdowson (1990) notes that researchers working in applied linguistics "continually fall into the error of supposing a solution designed to match one problem must be applicable to a quite different problem as well" (p. 8). In terms of the discourse analyzed in *Teaching Language as Communication*, his examples are oriented towards facts and processes. While Widdowson (1992) has examined areas such as poetry in later work, the discourse examined in *Teaching Language as Communication* is different. Widdowson (1978) argues that English language teachers could teach English through other subjects on the curriculum, but his examples tend to focus on a narrow range, where the communication of facts and processes is very important. Most of his examples are taken from geography, chemistry, and physics. In one example he also uses history, but this example is more to do with the conveying of established knowledge than with historical interpretation. Given that the discourse examined is of a particular type, it is important to consider the issue of prescriptivism in language teaching. While linguistics itself is categorized as a descriptive science, Stern (1983) argues that language teaching involves a prescriptive approach. Language teachers are ultimately involved in raising the level of English of their students. Widdowson (1978) is making his analyses based on clear ideas of what 'good' writing involves, with a particular kind of discourse in mind: clear, empirical and concise, eliminating unnecessary repetition, and taking into account the levels of knowledge of the interlocutors in spoken discourse and the target audience in written discourse.
In analyzing Widdowson's (1978) arguments, I wish to state my own position, which is agonistic: In this article, I suggest that Widdowson's analysis leads to certain problems in relation to the defining of a *propositional act* and the ability to demarcate between cohesion and coherence on the basis of *illocutionary acts* and *propositional acts*. I suggest an alternative definition of cohesion, which I do not believe alters most of the key arguments relating to teaching in the book. In using the concept of the *locutionary act*, rather than the *propositional act*, and in re-defining the concept of cohesion, something is gained and something is lost. My argument is that the re-definition of cohesion brings it closer to the ordinary language of teachers, and so makes it easier for a teacher to analyze discourse, whether this is in assessing materials for classes or students' writing. Against this position, critics can argue that a certain tightness of analysis is lost. In applied linguistics, conceptual schemes are used as tools in the process of analysis. It is up to teachers to decide whether they are useful or not, and I leave it to the individual readers of this article to judge the merits of my suggestions.

**USAGE AND USE**

As the title of the book clearly shows, its key theme is teaching language as communication, where the target language is used in a way that would be natural to a competent speaker of the language, rather than in a way that is unnatural beyond the confines of the language classroom. Widdowson notes that the ability to produce syntactically correct sentences, while important, is not a sufficient condition to be able to communicate in a language. Consequently he makes the important distinction between "usage" and "use". In terms of usage, Widdowson gives the example of a teacher-student dialogue:

Teacher: What is on the table?
Students: There is a book on the table. (p. 6)

He points out that the exercise is unnatural for several reasons.
The students' response is too long; it would be natural to respond with "A book". In addition, if the book is clearly in view to everyone in the classroom, then it is an unnatural dialogue. Consequently, he categorizes such activities under "usage". In contrast, instances of use occur in situations that are natural to the classroom. Also, depending on context, a grammatical structure may be used to focus either on usage or on use: Widdowson observes that "This is a pen" is an instance of usage because all the people in the classroom will know what a pen is, but "This is a barometer" is an instance of use where a teacher is introducing a new piece of equipment to students. In order to achieve his aim, Widdowson raises the question of what should be taught, and his provisional answer to this is that the foreign language should be used as a medium for teaching other subjects on the school curriculum. By doing this he creates a focal point for deciding what should be taught and how language might be organized into units. It is also important to note that, while the analysis of language covers all four skills of speaking, listening, reading, and writing, the main focus of the book in relation to teaching language is on the skills of reading and writing. A further point of importance is that Widdowson accepts that language teaching designed to focus on usage is useful:

This does not mean that exercises in particular aspects of usage cannot be introduced where necessary; but these would be auxiliary to the communicative purposes of the course as a whole and not introduced as an end in themselves. (pp. 19-20)

This then creates the overall structure of the book: Teaching language as a medium for teaching other subjects on the curriculum.

**THE PHILOSOPHY OF LANGUAGE: AUSTIN AND SEARLE**

In relation to *Teaching Language as Communication*, Widdowson cites both Searle and Austin in his notes to Chapter 2 (Discourse).
However, the pairing of terms he uses in his analyses, *propositional acts/illocutionary acts*, is Searle's rather than Austin's, whose paired terms are *locutionary acts* and *illocutionary acts*. Searle (1973) argues that this is not a simple change in nomenclature; rather, it underpins an important difference between the two thinkers. The argument is explored in this section to clarify the analysis used by Widdowson.

**John Austin**

Austin's work, cited in *Teaching Language as Communication*, is *How to Do Things with Words*, a book compiled by Urmson and Sbisà from Austin's original lecture notes for the 1955 William James lectures at Harvard University. The ideas contained in the book are developments on and re-workings of a set of lectures that Austin gave in the early 1950s, which he called "Words and Deeds". The original distinction that Austin (1961; 1962; 1971) chooses to make is between utterances that state facts and those which are clearly meaningful but cannot be evaluated on the basis of truth and falsity. The former are labelled 'constatives', and an example would be "Paris is in France", while the latter are labelled 'performatives', and an example would be "I bet £5 that Silver Blaze will win the 2:30 at Epsom". Where constatives are true or false, performatives are felicitous or infelicitous (successful or unsuccessful). In investigating this division, he comes to the conclusion that constatives also have felicity conditions and performatives have truth conditions. The clearest example of the felicity conditions connected to a constative is Moore's statement "The cat is on the mat but I do not believe it". This is a nonsensical utterance, because to say the sentence also signals a belief in it to which the speaker commits. In relation to a performative involving truth conditions, a verdict of "guilty" is associated with a set of facts that are true or false.
Consequently, Austin comes to the conclusion that to say something is to do something and to perform a speech act. Thus, he abandons the original distinction, eliminating both categories and replacing them with this general term speech act. The resulting analysis of the speech act leads to key ideas that are used by Widdowson in *Teaching Language as Communication*. In *How to Do Things with Words*, Austin divides the speech act into six component acts: phonetic, phatic, and rhetic acts; locutionary, illocutionary, and perlocutionary acts. In terms of the first triplet, to say something is to utter certain noises (phonetic act), to utter words following a certain syntax and vocabulary belonging to a particular language (phatic act), and to utter these words with a certain sense and reference (rhetic act). In terms of the second triplet, the key terms are the locutionary and illocutionary acts, illustrated by Austin in the following way:

Act (A) or locution: He said to me, "You can't do that."
Act (B) or illocution: He protested against my doing it. (p. 102)

In the case of reporting direct speech, there is no judgement of how the original words were meant. In the case of reporting an utterance through indirect speech, a judgement is made of what the speaker did by using those words. It is important to note here that the examples cited by Austin are 'reports' of the locutionary act and the illocutionary act, and this creates a certain amount of ambiguity when Widdowson analyzes discourse. A further key point involves the generality or specificity of illocutionary acts. Austin seeks to categorize speech acts according to illocutionary verbs such as 'warn' and 'promise'. However, the full illocutionary act reported above could be made explicit, an example being 'I protest against your doing it'.
In this article I make a distinction between general illocutionary acts (warnings, promises, requests) and specific illocutionary acts (a warning not to smoke, a promise to be at the station at 10:00).

**John Searle**

Searle, whose work *Speech Acts* is also cited by Widdowson (1978), further develops Austin's ideas. However, he also has a variety of important criticisms of Austin's original analysis. It is important to consider these because a Searle-type analysis of speech acts is different from an Austin-type analysis, and the purposes of the two philosophers are slightly different. Austin, an ordinary language philosopher, produces an analysis using ordinary language. Once he has established the component acts of the *speech act* as a whole, he uses reported speech to collect and categorize sets of *illocutionary verbs*, which he feels are the best linguistic indicators of the variety of speech acts. In examining Austin's analysis, Searle (1971; 1973) argues that there are a variety of weaknesses, and these lead Searle, in contrast to Austin's *locutionary/illocutionary acts*, to use a different pairing: the *propositional act* and the *illocutionary act*. In defining the *propositional act*, Searle (1973) is very clear that the terms *locutionary act* and *propositional act* are not interchangeable. Searle observes that Austin works with ordinary language and uses direct speech and indirect speech to illustrate differences between *phatic acts* and *rhetic acts*, and also between *locutionary acts* and *illocutionary acts*. Thus, "He said 'Is it Oxford or Cambridge?'" reports a *phatic act*, and "He asked whether it was Oxford or Cambridge" reports a *rhetic act*. By using direct speech, the speaker does not make an interpretation of the reported words. He/she simply reports the actual words. By using indirect speech, the speaker ascribes sense and reference to what was said.
Similarly, an example of reporting a locution and illocution is given by Austin (1962):

He said to me "You can't do that."
He protested against my doing it. (p. 102)

Searle (1973) argues that in reporting the *rhetic act*, Austin is using *illocutionary verbs* of a rather general type. He notes that there is a problem with the six categories that make up the *speech act*. In creating the categories, Austin states that to perform a *phatic act*, a speaker must also perform a *phonetic act*. Similarly, to perform a *rhetic act*, a speaker must also perform a *phatic act*. A *rhetic act* is a *phatic act* spoken with sense and reference. However, at the next stage of the analysis, Austin introduces the term *locutionary act*, which appears to be a simple re-naming of the *rhetic act*. The *locutionary act* and the *rhetic act* are one and the same. Consequently, one or other of the terms can be dropped, leaving four acts relating to the speaker:

- phonetic act
- phatic act
- locutionary act (*rhetic act*)
- illocutionary act

However, Searle remains dissatisfied with the categorization because, using Austin's method of direct and indirect speech to make distinctions, he notes that problems still remain. In my example below, direct speech is used to report the locution while indirect speech is used to report the illocution:

He said, "I'll do it tomorrow." (Reported locution)
He promised to do it tomorrow/the next day. (Reported illocution)

Searle (1973) argues that the 'direct speech/indirect speech' way of differentiating between the *phatic act* and the *rhetic act* is the same as the way of differentiating between the *locutionary act* and the *illocutionary act*. In fact, as the *locutionary act* is the *rhetic act*, the *locutionary (rhetic) act* is sometimes established through reporting direct speech and sometimes through reporting indirect speech.
A further important criticism by Searle (1973) is that there is often an overlap between *locutionary acts* and *illocutionary acts*. In the example above, the promise is reported in indirect speech. However, Searle observes that there are many cases where a promise is made specific in direct speech, and he argues that the *locutionary act* and the *illocutionary act* are then the same. "I promise to do it tomorrow" is consequently both a *locution* and an *illocution*. In his analysis, Searle (1969) makes the distinction between *propositional content* and *illocutionary force-indicating devices*. Using a different approach from Austin, he argues that, in the case of utterances, the *propositional content* can always be separated from the *illocutionary force-indicating device*. *Propositional acts* can be evaluated on the basis of truth or falsity; *illocutionary acts* can be evaluated on the basis of felicity or infelicity. To illustrate this, Searle (1973) uses a symbolic device:

Symbolically, we might represent the sentence as containing an illocutionary force-indicating device and a propositional content indicator. Thus: \[ F(p). \] Where the range of possible values for \( F \) will determine the range of illocutionary forces, and the \( p \) is a variable over the infinite range of propositions. (p. 156)

In this way Searle (1969) is able to make a strong distinction between *propositional* and *illocutionary* aspects of a sentence. However, to do this, he uses a partially symbolic system to represent utterances. Thus, "How many people were at the party?" is represented as \(?(\text{X number of people were at the party})\), and "Why did he do it?" is represented as \(?(\text{He did it because…})\) (p. 31).

**Widdowson's hybrid approach**

Depending on the analysis used (Austin's or Searle's), different problems arise. Widdowson (1978) cites both philosophers in his chapter on discourse, so that the analysis appears to be a hybrid of the two thinkers.
This leads to some ambiguity in his analysis. At a surface level, with regard to Searle's terminology (*propositional act/illocutionary act*), it seems that Searle's framework of analysis is used in *Teaching Language as Communication*. However, much is also drawn from Austin: On the level of terminology, Widdowson's deployment of similar sounding terms, such as 'usage' and 'use', and 'cohesion' and 'coherence', seems to echo Austin, who favoured such terms as 'misexecution' and 'misapplication'; from the much more important perspective of approach, Widdowson's preference for an analysis free of logical symbols also has more in common with Austin than with Searle. There therefore appears to be an ambivalence in the writing between an Austin-style approach to discourse and a Searle-style one. However, as noted above, the two thinkers work with slightly different conceptual schemes based on different philosophical approaches. My argument is that Searle did identify some weaknesses in Austin's argument. However, his work is not simply an upgrading of Austin's system. It is related, but the two philosophies exist to some extent in parallel. The challenge in this article is to evaluate both philosophies in the light of language and language teaching, and to establish a framework that is effective within the discipline of applied linguistics. While Widdowson (1978) produces a generally powerful analysis of discourse, the hybrid approach he uses leads to the important problems of defining propositions and clearly separating illocutionary acts from propositional acts. In the next section, I examine Widdowson's arguments and consider these problems that emerge from what appears to be a hybrid approach.

**DISCOURSE ANALYSIS**

One of the purposes of this article is to clarify terms in a way that makes them easy to use in evaluating discourse. Central to this section is the question of classification: What is a proposition? What is an illocution? How are they related?
**The elusive proposition**

I have noted that one of the criticisms of Austin was his line-by-line approach to an analysis of texts (Davies, 2010). Widdowson's (1978) main focus is to examine stretches of text (discourse) from the perspective of propositional acts and illocutionary acts. In his chapter on discourse, Widdowson starts with an analysis that almost exactly replicates Austin's approach, introducing an example in which a speaker (A) makes a remark to a listener (B):

A: My husband will return the parcel tomorrow. (p. 22)

Widdowson notes that if B talks to a third party, then two ways of doing so are to use direct and indirect speech:

B: She said: 'My husband will return the parcel tomorrow.' (p. 22)
B: She said that her husband would return the parcel tomorrow. (p. 22)

Regarding the first (direct speech) example, Widdowson states that B is reporting A's sentence. In relation to the reported speech, Widdowson makes the following observation: "In this case it is not A's sentence that is being reported but the proposition that her sentence is being used to express" (p. 22). The introduction of the concept of a proposition is important, as Widdowson observes that the proposition can be reported in a variety of ways:

B: (i) She said that the parcel would be returned by her husband tomorrow.
(ii) She said that it would be her husband who would return the parcel tomorrow.
(iii) She said that it would be the parcel that her husband would return tomorrow.
(iv) She said that what her husband would do tomorrow would be to return the parcel. (p. 23)

He then notes that B can also specify what illocutionary act he thought that A performed, in a way that at the same time reports A's proposition:

B: She promised that her husband would return the parcel tomorrow.
She threatened that her husband would return the parcel tomorrow.
She warned me that her husband would return the parcel tomorrow.
She predicted that her husband would return the parcel tomorrow.
She mentioned in passing that her husband would return the parcel tomorrow. (p. 23)

Most of the analysis to this point appears to reflect Austin: The argument is couched in ordinary language, and Widdowson uses direct and indirect speech to highlight the difference between propositions and illocutionary acts. However, he has carefully chosen the term proposition over locution, and he has introduced the term sentence. Thus, in B's direct speech report of A's utterance, "She said: 'My husband will return the parcel tomorrow'", B reports A's sentence. Given the method that Widdowson is using, where B reports the sentence, it is very straightforward to identify the sentence itself: 'My husband will return the parcel tomorrow.' In the indirect speech example, "She said that her husband would return the parcel tomorrow", B reports the proposition. Widdowson gives an example of the sentence and the report of the proposition, but he does not give the proposition itself. With direct speech the sentence is clear. However, in indirect speech, what is the *proposition*? It appears to be an unspecified abstraction that floats above the level of the *sentence*. This problem is addressed by Searle (1969) through a combination of symbols drawn from logic and through the re-writing of the utterance on the basis of $F(p)$. However, Widdowson is using Austin's ordinary language approach and avoiding Searle's symbolism. Consequently, when Widdowson writes about *propositional development*, he does in fact refer to a set of unspecified abstractions that the reader is expected to infer on the basis of sentences and reported speech. The questions that I raise in the discussion section are whether this is satisfactory and whether Austin's *locutionary/illocutionary* distinction can be used to avoid this problem.
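As an illustration (my own sketch, not an example given by Searle or Widdowson), the illocutionary reports of the parcel utterance can be recast in Searle's $F(p)$ notation, holding the propositional content constant while the illocutionary force varies:

```latex
% Searle's F(p): one propositional content p under varying illocutionary forces F
% p = (A's husband returns the parcel tomorrow)
\begin{align*}
\text{Promise}(p)  &: \text{``She promised that her husband would return the parcel tomorrow.''}\\
\text{Threaten}(p) &: \text{``She threatened that her husband would return the parcel tomorrow.''}\\
\text{Warn}(p)     &: \text{``She warned me that her husband would return the parcel tomorrow.''}\\
\text{Predict}(p)  &: \text{``She predicted that her husband would return the parcel tomorrow.''}
\end{align*}
```

The notation makes Searle's separation explicit: the reports differ only in the value of $F$, while $p$ itself remains the unstated abstraction that the reader must infer.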
**Propositional and illocutionary demarcation**

Having chosen and explained the categories of the *proposition* and the *illocutionary act*, Widdowson then uses them to analyze discourse, initially through an examination of conversational exchanges, and later through an examination of written texts. In developing his analysis, he introduces two key terms: 'Cohesion' is used to describe the links between *propositions*, and 'coherence' is used to describe the links between *illocutionary acts*. Widdowson notes that "we may say that a discourse is cohesive to the extent that it allows for effective propositional development. Further, this appropriacy will often require sentences not to express complete propositions" (p. 27). An example of cohesion is given by the following dialogue:

A: What happened to the crops?
B: They were destroyed by the rain.
A: When?
B: Last week. (p. 26)

In contrast, Widdowson examines *illocutionary development*, where it is possible to make sense of discourse that is not cohesive, by focusing on the *illocutionary acts* the speakers are performing. One of his key examples is the following exchange:

A: That's the telephone.
B: I'm in the bath.
A: OK. (p. 29)

In this example, Widdowson points out that there are no *propositional links* between the three lines, arguing that the text is not cohesive. However, it is reasonably easy to identify an *illocutionary link* between the lines, and he consequently expands the example to make it into a cohesive text:

A: That's the telephone. (Can you answer it, please?)
B: (No, I can't answer it because) I'm in the bath.
A: OK. (I'll answer it). (p. 29)

Once again, the question arises whether the analysis is following Searle or Austin. Given the focus on *propositions*, there remains the issue of whether he is drawing on Searle. As noted earlier, Searle aims for a strict separation between the *illocutionary* part and *propositional* part of the utterance through the use of $F(p)$.
His decision to use this approach is based on his observation that, while Austin's examples involve reported speech to draw out *illocutionary verbs*, it is possible to have a situation where an utterance includes an *illocutionary verb*. To take a hypothetical stretch of discourse that resembles Widdowson's bathroom example, the following dialogue is possible:

A: That's the front door.
B: I'm busy, Mum.
A: With what?
B: Errm.
A: Look I'm telling you to answer the front door.
B: Errm, I would if I could, but the handle's come off the bathroom door.

In this case, the *illocutionary verb* is explicit in the direct speech: "I'm telling you...". It is due to cases such as these that Searle uses the more complex strategy of representing *illocutionary force-indicating devices* by various symbols and formulating *propositions* in ways which have little similarity to ordinary language. Searle registers his dissatisfaction with Austin's approach on the basis that there is an overlap between *locutions* and *illocutions* in the form of direct speech involving *illocutionary verbs*. Another case of overlap occurs with certain forms of *illocutionary marker*. In his discussion of written texts, Widdowson examines what happens when two sentences are combined to form a discourse or part of a discourse, using the following:

The committee decided to continue with its arrangements. Morgan left London on the midnight train. (p. 30)

He notes that when the sentences are put together, the reader starts to look for a connection between them. Once the reader has inferred the *illocutionary value* of the sentences, he/she can use an *illocutionary marker* to make the situation clearer:

We might, for example, interpret the second proposition as having the value of a qualifying statement of some kind which in some sense 'corrects' what is stated in the first proposition. We can make this interpretation explicit by using what we will call an illocutionary marker: *however*. (p.
30)

In the case of this category of *illocutionary markers*, a problem emerges that is related to the one created by the inclusion of *illocutionary verbs* in utterances: The *illocutionary marker* has both an *illocutionary* and a *propositional* aspect. From the point of view of Widdowson's *propositional development*, it links sentences. It also signals a qualification. It is both *propositional* and *illocutionary*, in contrast to pronouns, which usually help with *propositional development* only. In this case, it is not possible to demarcate clearly between *illocutions* and *propositions*. Another example of this difficulty occurs when Widdowson examines the following sentences, and considers what happens if they are combined into a paragraph:

1. Rocks are composed of a number of different substances.
2. The different substances of which rocks are composed are called minerals.
3. It is according to their chemical composition that minerals are classified.
4. Some minerals are oxides.
5. Some minerals are sulphides.
6. Some minerals are silicates.
7. Ores are minerals from which we extract metals.
8. What gold is is an ore. (p. 32)

He analyzes the paragraph from the point of view of cohesion and coherence. One problem he describes in his analysis of cohesion relates to Sentence 3. He notes that it is an example of a cleft sentence, which is normally used to correct something written earlier. However, there is nothing to correct in the previous sentence, and so he re-writes Sentence 3 as "Minerals are classified according to their chemical composition" (p. 36). This argument does not appear to have much to do with cohesion. His re-writing of the cleft sentence is because it 'does' something: It corrects previous information. His argument deals with the *illocutionary* effect of a cleft sentence, and consequently with coherence. It is coherence that is dominant, with cohesion dependent upon it.
Thus, the attempt to separate an analysis into coherence and cohesion on the basis of *illocutionary* and *propositional* development becomes increasingly difficult. Yet cohesion and coherence are very useful terms. If they are not about *propositional* and *illocutionary development*, what are they about? My provisional answer is that they relate much more to what is overt in the text and what is not overt, and if this is so, the use of Austin's *locutionary/illocutionary* distinction has a number of advantages.

**RE-DEFINING COHESION USING LOCUTIONARY DEVELOPMENT**

So far, I have argued that there are two problems with Widdowson's analysis: The lack of definition of the *proposition* itself, and the overlap of *illocutionary markers* with *propositional links*. Of these two problems, the first seems more important, and there seem to be two possible solutions. The first is to accept that the *propositions* are generally undefined, but that they are theoretically definable through a Searle-type analysis. Returning to Widdowson's earlier example, the following are all sentences related to one *proposition*:

B: (i) She said that the parcel would be returned by her husband tomorrow.
(ii) She said that it would be her husband who would return the parcel tomorrow.
(iii) She said that it would be the parcel that her husband would return tomorrow.
(iv) She said that what her husband would do tomorrow would be to return the parcel. (p. 23)

For this example, the *proposition* seems to hover close to the sentences themselves. However, an example taken from Searle (1969) is less intuitively easy to follow. Searle observes that the same proposition is expressed in the following five sentences:

1. Sam smokes habitually. (p. 22)
2. Does Sam smoke habitually? (p. 22)
3. Sam, smoke habitually! (p. 22)
4. Would that Sam smoked habitually. (p. 22)
5. Mr Samuel Martin is a regular smoker of tobacco. (p.
24)

There seems to be something generally unsatisfactory with assuming an undefined *proposition* that can be abstracted from all sentences. An alternative approach is to remove the concept of a *proposition* and to focus on the *sentence* itself, which is Austin's concept of a *locution*. How then would this differ from the *proposition/illocutionary act* distinction? In considering this it is useful to try moving away from *reports* to actually stating *locutions* and *illocutions*; one of the problems that emerges from Widdowson's example is in the use of reported speech itself. This does not seem to be necessary, and Austin, in his early chapters of *How to Do Things with Words*, when he is considering the constative/performative distinction, uses a different approach:

(1) Primary utterance: 'I shall be there.'
(2) Explicit performative: 'I promise I shall be there.' (p. 69)

After collapsing the distinction, Austin prefers to use reported speech for his analysis of speech acts, and while this is helpful in distinguishing the *locutionary act* from the *illocutionary act*, there are alternatives. For example, his basic argument is that all communicative utterances are performatives; constatives are essentially incorporated into the performative category, and both are re-labelled as speech acts; they 'do' something. In Widdowson's example of the parcel, there is a primary utterance and an explicit performative:

Primary utterance: My husband will return it tomorrow.
Explicit performative: I promise that my husband will return it tomorrow.

Here, the primary utterance can be identified with the *locution* and the explicit performative with the *explicit specific illocution*. Searle's criticism of the *locutionary act* is that some *locutionary acts* overtly signal the *illocution*. For example, "I promise that I'll do it tomorrow" shows that the speaker is making a promise.
In the case of primary and explicit utterances this is unsatisfactory in a Searle-type analysis:

*Locution*: I promise I'll do it tomorrow.
*Explicit specific illocution*: I promise I'll do it tomorrow.

Searle wishes to separate *propositional content* from *illocutionary force-indicating devices*. However, as noted earlier, applied linguistics is a different discipline with different aims, and does not require a total separation of the *propositional content* from the *illocutionary force-indicating device*. If the *illocutionary force-indicating device* is overt in the utterance, this makes the interpretation of the message much easier. Similarly, the use of various grammatical devices appearing in the text to link sentences helps the reader interpret the text. Cohesion, therefore, is concerned with overt links between sentences. *Coherence*, the dominant term of the pair, is related to *illocutionary development*, which is both overt and non-overt. In reading a text, a reader creates a coherent understanding by identifying illocutionary signals in the text and bringing his/her experience and knowledge to bear on the text. A reader can analyze a text by considering what each sentence is doing, and is aided in this by its cohesion. *Locutionary development* is now connected with cohesion, while *illocutionary development* remains connected to coherence. In re-defining cohesion and coherence, does this damage Widdowson's overall arguments in *Teaching Language as Communication*? In general, the main arguments do not appear to be affected. Widdowson is arguing for a kind of teaching that primarily focuses on 'use' rather than 'usage' on the basis that there is always an element of interpretation in communication. Teaching that focuses on use has more chance of helping students to develop this interpretative faculty. The key argument for reading is that there are linguistic clues in the text that students can use to build up their understandings of the text.
Meaning emerges through the interaction of a reader with a text; it is not contained solely in the text. This argument is central to *Teaching Language as Communication*: it is about the overt signals in a text and the skills of the reader to interpret those signals and construct meaning. Cohesion concerns the overt signals which link the text together; coherence concerns building an understanding of the text by trying to establish the writer's intent for each sentence as part of an overall discourse. Similarly, with writing, the writer tries to give enough explicit signals to his/her readership to allow them to follow his/her line of thought. The move to a *locutionary/illocutionary* analysis does not affect this overall argument.

**CONCLUSION**

In this article, I have argued that Widdowson's use of Searle's conceptual structure (propositional acts and illocutionary acts) creates ambiguity in his analysis. He draws on Searle's conceptual structure combined with Austin's ordinary language style of analysis. My main concerns have been the absence of a definition of propositions, and the difficulty of separating the analysis into cohesion and coherence on the basis of propositional and illocutionary development. I have argued that the units of analysis most relevant to an applied linguist are utterances in spoken language and written sentences in written language. These I have defined, in Austin's terminology, as locutions. The structure of the locutions is dependent on their illocutionary purpose. Cohesion relates to the overt signals contained within and between the locutions, while coherence relates to both the overt illocutionary markers and the non-overt illocutionary links that the reader is able to discern in the discourse. As I noted at the beginning of the article, my position in relation to the analysis is agnostic.
My view is that, by changing from a propositional/illocutionary analysis to a locutionary/illocutionary analysis, something is gained and something is lost. On the loss side, the removal of the concept of a proposition removes a superordinate term. In Widdowson's analysis, the same proposition can be represented through a number of different sentences: (i) The parcel will be returned by my husband tomorrow. (ii) It will be my husband who returns the parcel tomorrow. (iii) It will be the parcel that my husband returns tomorrow. (iv) What my husband will do tomorrow will be to return the parcel. Using a locutionary/illocutionary analysis, a locution may be re-written to form a related locution, but this lacks the elegance and simplicity of saying that different messages can be conveyed using the same propositional content. On the gain side, I have noted that when a propositional/illocutionary analysis is applied to discourse and, in particular, to more complicated discourse, it becomes very difficult to demarcate the analysis: illocutionary markers and links appear alongside non-illocutionary links and content. Cohesion is much more about overt links within the discourse than about a narrower analysis of links between factual content. Coherence is about establishing what the locutions in the discourse are doing. It has not been possible to address a number of key issues in an article of this length. In this article I have closely examined the theoretical framework that Widdowson uses, and I have suggested changes to it. However, it is beyond the scope of this article to see how such changes work in analyzing texts, and to establish more fully the advantages of a locutionary/illocutionary analysis of discourse in contrast to a propositional/illocutionary one. In addition, one distinction that has emerged in the course of writing this article is the difference between a general and a specific illocutionary act.
While Searle criticises Austin for having too many component acts making up the overall speech act, a new issue has developed in relation to the number of acts: is the *specific illocutionary act* equal to the overall *speech act*? If the component acts are like a set of Russian dolls, with the *phonetic act* as the smallest, central doll, does the *specific illocutionary act*, incorporating all the other acts, then become the *speech act*? Finally, there is the question of method. I have argued that Widdowson uses an ordinary language analysis in a way that is similar to Austin. However, Austin tended to use reports of *speech acts* in his analyses and to work towards surfacing *illocutionary verbs* for the purposes of categorization. For the purposes of applied linguistics, other methods for investigating *speech acts* through ordinary language analysis might help in the analysis of discourse.

**REFERENCES**

Austin, J. L. (1961). *Philosophical papers*. Oxford: Oxford University Press.
Austin, J. L. (1962). *How to do things with words* (2nd ed.). Cambridge, MA: Harvard University Press.
Austin, J. L. (1971). Performative-constative. In J. Searle (Ed.), *The philosophy of language* (pp. 1-12). Oxford: Oxford University Press.
Berlin, I. (1973). Austin and the early beginnings of Oxford philosophy. In I. Berlin (Ed.), *Essays on J. L. Austin* (pp. 1-16). Oxford: Oxford University Press.
Davies, W. (2010). The influence of J. L. Austin. *Hiroshima Studies in Language and Language Education*, 13, 93-108.
Searle, J. R. (1969). *Speech acts*. Cambridge: Cambridge University Press.
Searle, J. R. (1971). What is a speech act? In J. Searle (Ed.), *The philosophy of language* (pp. 39-53). Oxford: Oxford University Press.
Searle, J. R. (1973). Austin on locutionary and illocutionary acts. In I. Berlin (Ed.), *Essays on J. L. Austin* (pp. 1-16). Oxford: Oxford University Press.
Searle, J. R. (1979). *Expression and meaning*. Cambridge: Cambridge University Press.
Stern, H. H. (1983). *Fundamental concepts in language teaching*. Oxford: Oxford University Press.
Widdowson, H. G. (1978). *Teaching language as communication*. Oxford: Oxford University Press.
Widdowson, H. G. (1990). *Aspects of language teaching*. Oxford: Oxford University Press.
Widdowson, H. G. (1992). *Practical stylistics*. Oxford: Oxford University Press.

**ABSTRACT** (translated from the Japanese 要約)

Widdowson's Use of Speech Act Theory in Discourse Analysis. Walter Davies, Institute for Foreign Language Research and Education, Hiroshima University. This article examines Widdowson's treatment of speech act theory in discourse analysis, particularly in relation to the concepts of cohesion and coherence. Drawing on the work of John Austin and John Searle, Widdowson attempts an analysis that combines Austin's approach with the pair of concepts advocated by Searle: propositional and illocutionary acts. Two problems arise in this analysis: (1) Widdowson does not directly clarify what a proposition is; and (2) an analysis based on this pair of concepts makes it difficult to distinguish cohesion from coherence. These problems make it difficult for teachers to use the concepts of cohesion and coherence when analyzing learners' writing or examining texts. The author therefore proposes excluding the concept of the proposition from discourse analysis and focusing the analysis at the sentence level, an approach consistent with Austin's concept of the *locution*. This approach makes it possible to define cohesion straightforwardly in terms of overt links within and between locutions.
Women, Water, and Sanitation: Household Behavioral Patterns in Two Egyptian Villages

by Samiha El Katsha, Social Research Center, American University in Cairo, 113 Kasr el Aini, Egypt, and Anne U. White*, Institute of Behavioral Science, Box 482, University of Colorado, Boulder, CO 80309, U.S.A.

ABSTRACT

Understanding the behavior patterns of women in rural households regarding water and sanitation may be the key to solving the problem of why improvements in facilities may not be accompanied by a reduction in disease prevalence. An interdisciplinary team surveyed 312 households in two Egyptian delta villages, examining 46 of them in depth with participant observation. Their patterns of storing water, and its use for drinking, cooking, washing, animal rearing, and waste disposal, are rooted in the woman’s beliefs regarding cleanliness and what enhances the health and well-being of her family. The local environment of surface and groundwater availability, quality, and available drainage affects her choices. Other factors include local government institutions, available technology, information and educational facilities, time and energy expended on various practices, and social values held by the women and the community. The women suggest practical solutions for their water and sanitation problems, such as carts for collecting waste water, but feel powerless to influence local governments, or even their husbands, to institute new practices. Such targeted studies can disclose linkages among significant factors in the household environment, and should be undertaken for any project designed to provide effective and lasting water and sanitation in rural villages.

Much of the activity under the International Drinking Water Supply and Sanitation Decade (1980-90) presumes that provision of safe and adequate water supplies, and of safe systems for disposal of human waste, is essential for good health and a contribution to improved quality of life [3].
Such effects, however, are by no means certain to bring large health benefits, and it is important to know why that is so in specific areas. Egypt, for example, was the first developing country to extend potable water supplies to all its rural population [4], and possesses more doctors, nurses, and clinics than many lower middle-income countries; yet its infant mortality rate remains as high as that of countries with much less infrastructure, with diarrheal diseases reported as a major cause of death. The rate has been reduced in recent years, but was still estimated at 88 deaths per 1,000 in 1986 [5]. It is also known that the prevalence of schistosomiasis has declined in sample Egyptian villages, and that this may be ascribed in part to water hydrants eliminating the need to wade in canals [6]. (* Anne U. White died on April 10, 1989.) Much useful work has been performed during the Decade on the components of health-related behavior regarding water and sanitation. Examples are the comprehensive studies of water-related diseases and of the range of technological possibilities done by the World Bank. Other studies have looked at the cost-effectiveness of individual measures such as hand washing [7]. Research on the role of women [8] and the growing understanding of the realities and possibilities of community participation [9] have slowly broadened understanding. Just recently, a discussion paper for the World Bank acknowledged that the time of rural women in developing countries does have an economic value, albeit a small one, that must be factored into cost-effectiveness calculations for water and sanitation improvements [10]. But some of the questions asked nearly 20 years ago [11] still have not had, in many areas, a more precise answer than the recognition that physical improvements may not be sufficient. Why do people not use facilities such as a hydrant or standpipe when it delivers purer water, or not use a pit latrine when available?
Why does new and valid information about health hazards and safe practices not result in reduced mortality or disease morbidity? The challenge is to find answers, and to put them together in a way that makes prevention of disease a reality in the household. An interdisciplinary team in Egypt, in consultation with a larger task force, recently addressed this set of problems [12]. It set out to 1) determine the pattern of women's behavior related to the handling and utilization of water and waste, 2) identify some linkages between behavioral patterns and the transmission of diseases, and 3) seek an understanding of the cultural framework and household economy within which the patterns fit. The underlying assumption was that the woman in the household is the determining influence on the health-related behavior of the other members. Her roles as manager of the household use of water, sanitation, and other facilities, acceptor or rejecter of new technology, and agent of behavioral change [13] are considered to be of primary importance in any attempt to improve community health and well-being. The task force participants were administrators, scientists, and technicians with a wide range of training and skills. They included anthropologists, economists, bacteriologists, communications experts, demographers, directors of community and national programs, engineers, geographers, health educators, parasitologists, physicians, public health specialists, social workers, and sociologists. The authors of the final report are two anthropologists and two physicians, all concerned with environmental health.

THE STUDY AREA

The survey sample (312 households), 25% of the total number, was drawn from two agriculturally oriented villages, both in Menoufia governorate: Kafr Shanawan (hereinafter referred to as K) in Markaz Shebeen El Kom, and Babil (hereinafter referred to as B) in Markaz Tala (see Fig. 1).
Of these households, 46 took part in intensive observation and an in-depth examination of behavioral patterns related to health, and an environmental assessment including water and stool sampling. These village households are considered representative of many in the Nile delta, where about two-fifths of the Egyptian population live. They illustrate a considerable range in household facilities but are similar in the range of socioeconomic status.

Table 1. Facilities in Two House Types in 312 Households (percentages)

| Characteristic | Adobe (137) | Red Brick (175) |
|-----------------------|-------------|-----------------|
| Piped water | 21 | 71 |
| Latrine | 86 | 95 |
| Dust floor | 85 | 55 |
| Separate kitchen | 13 | 60 |
| Animal raising | 88 | 27 |
| Electricity | 98 | 99 |

Table 2. Environmental Conditions Within 46 Households (number of households)

| Crowding index | Babil | Kafr Shanawan |
|----------------|-------|---------------|
| Less than 2 | 14 | 10 |
| 2-4 | 8 | 11 |
| 4-6 | 1 | 2 |

| Ventilation index | Babil, Adobe | Babil, Red Brick | Kafr Shanawan, Adobe | Kafr Shanawan, Red Brick |
|-------------------|--------------|------------------|----------------------|--------------------------|
| Good | 6 | 8 | 4 | 12 |
| Fair | 5 | 1 | 2 | 2 |
| Poor | 3 | - | - | 3 |

| Fly index | Babil | Kafr Shanawan |
|---------------|-------|---------------|
| Less than 110 | 3 | 3 |
| 110-43 | 20 | 20 |

Water International

Laboratory examination was made of the water supply in each of two periods (April-May and July) for six parameters of chemical water quality, and for occurrence of total bacteria and faecal coliforms. Stool tests for parasites were made of children and females in the 46 observation households. The survey solicited reported diseases for all members of the 312 households.
For the observation households, three indices believed to have relevance to disease transmission were computed: a crowding index, the number of people per sleeping room; a ventilation index, the ratio between the area of floor space and the area of openings for sleeping rooms; and a fly index, the number of flies observed on a square meter of floor at designated places and times (see Table 2). Among these observation households, ten were investigated in greater depth by 24-hour observation of the activities of individual family members.

WATER SUPPLY AND SANITATION

A piped water supply from deep wells was introduced in B in 1965, but the output per 12-hour period amounts to only about 31% of the estimated daily consumption, and the flow is irregular. Three heavily used public standpipes; occasionally polluted, shallow, hand-pumped wells; and the canals constitute the remaining sources for the village. K was supplied with piped water in 1952, also from deep wells, and with recent renovations has an abundant and regular supply. Most people are connected to the piped water supply, and the one public standpipe is used infrequently. Two house types prevail in the villages: one is the traditional adobe, subject to damage from rising groundwater; the other is a more recent design made of water-resistant red brick. In 1965 groundwater damage affected the adobe houses in K, with the result that 76% of the houses there are now red brick, as contrasted with only 24% of those in B, the rest being the traditional type (see Table 1). In both villages the ground water is less than 1 meter below ground level, and has a higher content of dissolved solids and chlorides than surface water. There are no central sewerage or waste-water systems. In B, a water tap next to the door is the predominant pattern, with no drainage facilities. Waste water is emptied into the street or, if the neighbors in nearby adobe houses object, taken to the canals.
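The three household indices described earlier are simple ratios. As an informal illustration only (the function names and sample values below are our own, not the study's instrument), they could be computed as:

```python
# Sketch of the three indices defined in the study; names and sample
# values are hypothetical, not the authors' data.

def crowding_index(people: int, sleeping_rooms: int) -> float:
    """People per sleeping room (study bands: <2, 2-4, 4-6)."""
    return people / sleeping_rooms

def ventilation_index(floor_area_m2: float, opening_area_m2: float) -> float:
    """Ratio of sleeping-room floor area to area of openings."""
    return floor_area_m2 / opening_area_m2

def fly_index(flies_counted: int, observed_area_m2: float = 1.0) -> float:
    """Flies observed per square meter of floor at designated
    places and times."""
    return flies_counted / observed_area_m2

# A hypothetical household of 7 people with 2 sleeping rooms:
print(crowding_index(7, 2))          # 3.5 -> falls in the 2-4 band
print(ventilation_index(20.0, 4.0))  # 5.0
print(fly_index(120))                # 120.0
```

The point of the sketch is simply that each index reduces household conditions to a single number that can then be cross-tabulated against reported disease, as the study does in Table 2.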
Most of the houses in both villages have pit latrines with no drain or, less frequently, latrines connected to barrels or a rudimentary septic tank, both requiring frequent emptying. The two villages have populations with very similar socio-economic characteristics. In terms of persons per household (B 7.7 and K 7.0); proportion of nuclear families (67% and 63%); availability of electricity (99% and 97%); proportions with television sets (83% and 87%), washing machines (64% and 59%), and refrigerators (16% and 21%); latrines in the household (86% and 97%); and illiteracy of females (75% and 73%), they were much alike. They differ notably in the percentage of households having a piped water connection (B 17% and K 82%) and separate animal accommodations in courtyards (71% and 35%). In both villages the households in adobe buildings and those in red brick buildings with wooden or concrete roofs differed in several major dimensions found to have an association with health-related activities. The chemical analysis showed that in both villages the water delivered in the public supplies had a hardness ranging from 300 to 530 mg/l, in contrast to 100 to 480 in the adjacent canals. Dissolved solids were 435-633 mg/l in the pipes compared to 213-665 in the canals (see Table 3). Much of the variation can be linked to conditions of canal flow and irrigation. The differences in bacterial quality were more pronounced, according to season and discharges into ground and surface waters (see Table 4). The canals at washing sites showed total bacterial counts between 10,000 and 115,000 per ml; in the standpipes these ranged between 30 and 580 per ml. No faecal coliforms were found in the K supply, but the B reports showed 0 to 7, and in the canals they fluctuated between 115 and 11,175. In hand pumps in B the faecal coliform count was 33 to 45. Table 3.
Chemical and Bacteriological Quality of Water Sources, Spring and Summer 1985, Babil (B) and Kafr Shanawan (K)

| Parameter | Public Tank B | Public Tank K | Standpipe B | Standpipe K | Hand Pump B | Hand Pump K | Canal B | Canal K |
|---|---|---|---|---|---|---|---|---|
| Dissolved solids (mg/l) | 625-644 | 480-535 | 633-681 | 435-555 | 631-707 | 441-484 | 227-264 | 302-213 |
| Sulfate (mg/l) | 35-34 | 18-72 | 32-42 | 28-76 | 45-22 | 28-16 | 10-22 | 7-41 |
| Total hardness (mg/l) | 360-560 | 300-480 | 357-530 | 300-450 | 340-492 | 267-500 | 80-263 | 135-313 |
| Chloride (mg/l) | 135-120 | 90-150 | 128-122 | 95-95 | 135-153 | 107-115 | 35-32 | 79-58 |
| Nitrate (mg/l) | .04-.52 | .08-.6 | .05-.17 | .12-.12 | .36-.04 | .12-1.0 | .03-.07 | .12-1.3 |
| Turbidity (units) | 4.7-6 | 7.9 | 6.2-5.7 | 7-10 | 2-6.3 | 17.3-30 | 23-18.7 | 12.5-17.5 |
| Total bacterial count (per ml) | 19,090-1,000 | 44-10 | 5,000-629 | 580-30 | 125-155 | 222-300 | 39,837-148,600 | 2,670-97,926 |
| Faecal coliforms | 0-0 | 18-0 | 7-0 | 0-0 | 45-33 | 0-0 | 115-8,523 | 156-11,175 |

Public tank values were measured on leaving the tank; canal values were taken upstream from the washing site.

Table 4. Bacteriological Analysis of Water Sources from 46 Households in Babil and Kafr Shanawan

| | B Tap water (March 1985) | B Stored water | K Tap water (April 1985) | K Stored water |
|---|---|---|---|---|
| Total bacterial count per ml | 229,408 | 597,100 | 41,629 | 89,700 |
| Faecal coliforms per 100 ml | 14 | 1,094 | 1 | 877 |

BEHAVIOR PATTERNS

The study examined the patterns of water use for drinking, cooking, laundering, washing, animal rearing, and waste disposal in terms of where the water is obtained, how it is transported to and used within the household, and how it leaves the household.
In each pattern an effort was made to identify the principal considerations that appear to influence the women in deciding what to do in their local circumstances. Seven major sets of factors were taken into account: a) the local environment of surface and ground water availability and quality, and available drainage; b) the local organization and institutions for dealing with water; c) available technology, such as pumps and washing machines; d) information and educational facilities to which the villagers have access; e) the time and energy expended on various practices; f) social values held by the women and men of the community; and g) perceived health effects as measured by reported mortality and prevalence of disease.

CLOTHES LAUNDERING

The mode of analysis is illustrated by examination of the practice of laundering clothes. In the observation sample, 43% of the households in B and 87% in K choose to take their clothes to the canal to wash, even though 32% of them have a water tap connected to the village supply. Their reasons are complex but discernible. The canal water lathers more readily and yields whiter clothes than the ground water pumped in the village pipes (total dissolved solids average 681 mg/l for the reservoirs in B, 681 for the piped supply, and 236 for the canals; the shallow wells are even higher, at 707 mg/l). In K the difference is slightly less extreme. Given the limited capacity of the latrines, of septic tanks where they exist, and of other sullage facilities, the disposal of waste water into the latrines, septic systems, or the street carries the hazard of weakening the foundations of adobe houses and pooling water in areas adjoining both adobe and brick houses, with subsequent complaints from neighbors. Although possessed by over 60% of the households, washing machines are largely a status symbol.
While many respondents claimed to wash at home, in-depth interviewing and observation revealed contrary results. (To this and similar questions, most women initially tended to give what they believed to be the expected answers rather than describe actual practice.) Water quality, cost, and the difficulty of sullage disposal are important factors. The women know that washing in the canal carries a risk of exposure to bilharziasis, but feel there is no viable alternative when they take into account the time and energy of carrying waste water back to the canal, the high premium placed by women and men alike on very white clothes, the objections by neighbors to dumping water in the street, and the value attached to water quality. All of these considerations enter into the choices made by the women who carry their wash to the canal and carry it back to be possibly machine washed, boiled, and sun-dried in the household. These choices might be altered by changes in drainage, in waste collection, in standards of clothing appearance, or in information about the health hazards of the canal. A similar analysis is applied to other activities involving water. Without attempting a presentation of all of them, a few of the findings are summarized here to indicate their scope and also to suggest conclusions of a more general character.

FETCHING DRINKING WATER

In both villages, drinking water receives the best care possible in respect to source, fetching, and storage. With a few exceptions, the women believe that the water must be clear and free of odor, and they are concerned about taste as an index of quality. The tasks of fetching and storing it are usually delegated to the cleanest and most energetic women within the household; in an extended family, the mother-in-law delegates the most appropriate candidate, regardless of status. Commonly an adult female or a sibling over 14 years is designated, and spends 20-30 minutes per load, one or more times a day, using clay, metal, or plastic containers.
In households with piped connections drinking water is drawn from the tap. In the other half of the households it is carried from public hydrants or private hand pumps, the exceptions being mostly elderly women who go to the canal because they believe canal water quenches their thirst better. Those going to public hydrants complain of the waiting time to fill their containers, low water pressure at certain hours, the time required to clean the container before filling it, and irregular periods of supply. They try to minimize contamination by washing the container with soap and rice hay, and by avoiding contact with waste water. Thus, until the water is available in the house, generally effective measures are taken to maintain its relative purity.

STORING DRINKING WATER

Once drawn, water generally is kept in one of four types of containers, the first three being used for cooling: the oullah, a traditional narrow-necked clay jug holding one to two liters; glass or plastic bottles; the zir, another traditional porous clay container holding 20 or more liters; and aluminum storage containers with either a fitted cover or a tap. Care is taken to cover all open tops to keep out dust (see Fig. 2). The family drinks directly out of the oullah and the bottles, but must dip into the zir or open-topped storage container with cups that rarely are washed. Laboratory examination of water sampled at the source and in the various containers reveals major differences in quality. The women, however, consider all of their practices to be free from danger of disease transmission.

EXCRETING

Latrines are present in all but 8% of the houses, red brick and adobe alike. Those in the red brick houses, however, are better constructed and somewhat better maintained. The latrines are used mostly by the females, with a few using the roofs, whereas the males frequently use the mosques when not in the field.
A tin of water is usually placed near the latrine for rinsing after defecation. Most pre-school children of both sexes are not trained to use the latrines; they generally defecate in the street. The prevalent type of latrine is a hand-dug hole lined with red brick and covered with a concrete slab. Drainage is into the surrounding earth and groundwater. These latrines are cheap and practical, and the contents can be used as manure, but they are odorous, collect flies, and threaten groundwater contamination. Among all household facilities, latrines have the lowest priority for upkeep and cleanliness. This appears to be related to the predominantly dirt floors, the lack of drainage, the concern about water affecting the adobe walls, and the expense of emptying the pits or cess-pools, all of which inhibit the use of water for cleaning. There is no evidence that the women regard the latrine areas as health hazards.

BATHING AND HAND WASHING

Since the inhabitants of the villages are largely Moslem, they follow the Islamic teachings regarding purification before prayer. Women wash hands, face, mouth, ears, and feet daily before prayers, and take a complete bath after sexual intercourse, menstruation, and childbirth. The use of soap is not required. General baths are taken once or twice a week, using soap and a loofah (gourd sponge). The women state that washing with soap is essential 1) after handling things with strong odors, e.g. fish, kerosene, or cow dung for fuel cakes, 2) before baking or dairy chores, 3) after defecation, and 4) after eating greasy food. But during the intensive study of 10 cases, little hand washing was observed, and little use of soap. Hands are not washed on a regular basis before cooking, eating, or nursing infants, nor after changing an infant who has defecated.

DOMESTIC ANIMALS

Water usage for animals, usually cattle and poultry, is limited to animal watering and the cleaning of utensils used for feed.
Cleaning of the *zereeba*, the room where the animals are kept, takes priority over other tasks and is usually performed by a responsible older woman. She is also in charge of accumulating the dung for fuel cakes, prepared by hand on the dirt floor and stored when dry for sale or for use in cooking.

FOOD HANDLING

Cooking is done once a day, and baking once a week. Women use water sparingly in cooking, often reusing vegetable wash water to wash a dirty cup or utensil. Where more abundant water is required, as for washing lettuce, they use the canal. The eating of raw vegetables without washing, the use of canal water, the lack of hand washing with soap before food preparation or after handling dung cakes, coupled with the room-temperature storage of leftover foods, all provide routes for faecal-oral and other disease transmission.

INFORMATION AND EDUCATION

Although the two villages are blanketed by television and radio, and have access to public health units and social centers in addition to the regular primary school system, many of the women do not perceive the role of water in hindering or fostering disease transmission. The household surveys show them to be deeply concerned with maintaining the health of their families. They go to great pains to wash out the water containers and to protect them from dust. They generally know that schistosomiasis may be contracted by body contact with canal water. Yet they permit a child fresh from defecating to dip its hand into a carefully covered *zir*, they stand in the canal while laundering, and they do not wash their hands before preparing food. Much of this behavior may be traced to misinformation or misunderstanding. By and large, they believe that once water is obtained from a pipe it will remain pure regardless of further modes of use; hence the slight regard for human contact with the *zir*, in contrast to the meticulous effort to clean that container periodically.
They also believe that naturally running water is not harmful to health, in contrast to standing water. The majority regard schistosomiasis as a real danger that can be contracted only by swimming in the canal and swallowing water while swimming; hence their willingness to wade in the canal while laundering when there is no satisfactory alternative, or to permit others to do so. The women seek to avoid the pooling of water in the street, but they feel no urgency to clean the household latrine or to curb defecation by children on the street or on interior dirt floors. They pay careful attention to the feeding of cattle and the cleaning of the courtyard stalls. Given the fragmentary understanding they have of disease transmission, their behavior is not unreasonable. And it is not likely to change simply by adding to the already adequate quantity of water or by installing a tap in every household. If that understanding is to be improved, the channels of education (schools, television and radio, health center programs, community discussion) will have to be used to add to or correct the current information and motivation. Finding the effective message and communication channel may not be easy. In the 46 observation households the major reported cause of death for children under 10 years of age was gastrointestinal (55% in B, 44% in K). A measles epidemic took a heavy toll, accounting for 7 of the 25 child deaths in K and two of the 29 in B. Respiratory disease accounted for 20% of the child deaths in K, and congenital anomalies for 20% of those in B. For the total population of the 46 households, gastrointestinal complaints were most numerous (39.3% in B and 35.0% in K), eye complaints next most common (31.8% and 30.9%), with respiratory disease at 7.5% and 5.4%, other fevers at 4.8% and 6.3%, and all others amounting to 16.6% and 22.4%.
The examination of stool samples indicated positive cases of parasitic diseases, as shown in Table 5. Comparing these with several indicators of conditions in the reporting households, a strong association was found between crowding and respiratory disease prevalence (see Table 2). The minority of families without a separate room for animals report a higher rate of respiratory and gastrointestinal disease. In both villages there is a rough association between the crowding index and prevalence of disease. Likewise, there is an association between quality of ventilation and mortality due to respiratory disease and measles among children under 10 years of age. The same applies to the fly index: the prevalence of disease is high where the fly index is high and low where it is low. The provision of private solid waste containers is associated with a higher fly index in the streets outside the B houses, while there is no association for the K streets.

Table 5. Incidence of Parasitic Disease from Stool Examination Survey of 46 Households (percentage of positive cases among total number examined)

| | School Children, Babil | School Children, Kafr Shanawan | All Other Members, Babil | All Other Members, Kafr Shanawan |
|---|---|---|---|---|
| Bilharziasis (Schistosomiasis) | 2.9 | 10.6 | 7.4 | 7.7 |
| Oxyuris | 1.5 | 0.7 | - | - |
| Ascaris | 3.5 | 0.6 | 7.4 | 15.4 |
| Trichuris | - | 0.2 | - | - |
| Ancylostoma | - | 0.4 | - | - |
| Amoebiasis | - | - | 11.1 | 15.4 |
| Trichostrongloides | - | - | - | 3.8 |

WOMEN'S AWARENESS OF HEALTH PROBLEMS

The majority of the women surveyed are aware of, and have stated their dissatisfaction with, the existing sanitary conditions in their immediate environment.
They have suggested feasible solutions to some of their problems, such as communal septic tanks or cart collection of waste water, solid waste collection with regular service, or the use of educated elderly women to communicate health-related information within their neighborhoods. However, the main constraint on community participation appears to be their feeling of powerlessness: a sense that responsible bodies would never listen to their complaints or suggestions, and that such action is beyond the women's domain. A second phase of the study, under the auspices of the International Development Research Centre in Canada, is attempting to identify problems in the institutional management of water supply and sanitation, and to find ways of enhancing the capacity of the women particularly, as well as of the whole community, to solve these problems. It is aimed at mobilizing the local-level service and community workers to actively assist the villagers in improving village sanitary facilities, working through various levels of government, as well as at initiating viable water and sanitation educational programs. One lesson from this study has relevance in many other situations where village water supply and sanitation improvements are planned or completed. As Briscoe and de Ferranti put it, "The hardware components of water projects are only one link in a long chain. The other links involve changing hygiene habits and other factors and can require actions ranging from providing better education to promoting public health programs. If any one link is missing, health indicators may not improve. But this does not mean that investing in individual links is futile. Where it is not possible to upgrade all links simultaneously, one must take a longer view, proceed step by step, and not expect to see large health improvements until the last step has been completed" [9].
To identify these individual links in a given community, the design and management of physical facilities should be accompanied by actions based on a study of the prevailing behavior patterns and their components. Without analyzing household behaviors and the factors affecting them, and without bringing the community into the planning process, it is hazardous to initiate improvements in community information and facilities with the expectation that any change will decrease disease transmission. It is conventional wisdom that water supply and sanitation measures do not necessarily contribute to the enhancement of human health in villages. It is also recognized that programs of health education relating to water and waste do not necessarily result in reduced disease prevalence. What is not yet conventional wisdom is that there are common-sense methods of finding out why this is so and of identifying possible means to correct it. A reasonable share of the total funds for a project must be directed to this end. The recent Egyptian study demonstrates the opportunity. Such inquiry need not be highly expensive by comparison with the cost of designing physical improvements. When it is undertaken with the participation of local women and of social scientists who are familiar with the demographic characteristics and social structure of the types of communities involved, it is possible to draw samples of the major types of households. For those, carefully targeted interviews can reveal a great deal about linkages among significant factors in the household environment: the physical conditions of water supply and waste disposal as perceived by the women who make the critical behavioral choices, their information as to health consequences, their values affecting water and waste handling, and their perception of community problems and of possible ways to deal with them.
This is bound to lead to appraisal of the roles of women in community decisions and of means of strengthening them.

ACKNOWLEDGMENTS

This article is based upon the large report prepared by El Katsha, Younis, El Sebaie, and Hussein. We are indebted to the other three authors for review of an earlier draft. We also thank F.D. Miller for comments on the draft. The project was supported by the Ford Foundation and was carried out in the Social Research Center, American University in Cairo, under Leila El Hamamsy, Director.

REFERENCES

1. Lindskog, P. *Why Poor Children Stay Sick*. Linköping Studies in Arts and Science 16. Linköping University, Linköping, Sweden. 1987.
2. Rahaman, M., Aziz, K.M.S., Hasan, K.Z., Munshi, M.H., Patwari, M.K., & Alam, N. "The Teknaf Health Impact Study: Methods and Results." In *Evaluating Health Impact*, ed. Briscoe, J., Feachem, R.G., & Rahaman, M.M. International Development Research Centre, Ottawa, Canada. 1986.
3. World Health Organization. "Benefits to Health of Safe and Adequate Drinking Water and Sanitary Disposal of Human Wastes." In *Imperative Considerations for the International Drinking Water Supply and Sanitation Decade*. World Health Organization, Geneva, Switzerland. 1981.
4. White, G.F. & White, A.U. "Potable Water for All: The Egyptian Experience with Rural Water Supply." *Water International*, 11, 1986, 54-63.
5. World Bank. *World Development Report*. Oxford University Press, New York, N.Y., USA. 1988. Table 33.
6. Miller, F.D., Hussein, F.D., Maney, K.H., & Hilbert, M.S. "An Epidemiological Study of Schistosoma haematobium and S. mansoni in Thirty-five Rural Egyptian Villages." *Geographical Medicine*, 33, 1981, 355-365.
7. Esrey, S.A., Feachem, R.G., & Hughes, J.M. "Interventions for the Control of Diarrheal Diseases among Young Children: Improving Water Supplies and Excreta Disposal Facilities." *Bulletin of the World Health Organization*, 63(4), 1985, 757-72.
8. Wijk-Sijbesma, C. *Participation of Women in Water Supply and Sanitation: Roles and Realities*. International Reference Centre for Community Water Supply and Sanitation, The Hague, the Netherlands. 1985.
9. Briscoe, J. & Ferranti, D. de. *Water for Rural Communities*. The World Bank, Washington, D.C., USA. 1988. pp. 1, 3.
10. Churchill, A.A. *Rural Water Supply and Sanitation: Time for a Change*. World Bank Discussion Paper 18. The World Bank, Washington, D.C. 1987.
11. White, G.F., Bradley, D.J., & White, A.U. *Drawers of Water: Domestic Water Use in East Africa*. University of Chicago Press, Chicago, IL, USA. 1972.
12. El Katsha, S., Younis, A., El Sebaie, O., & Hussein, A. *Women, Water, and Sanitation: Behavioral Patterns Related to Handling and Utilizing Water for Household Purposes: An Interdisciplinary Study in Two Egyptian Villages*. Social Research Center, American University in Cairo. 1986.
13. Elmendorf, M.L., & Isely, R.B. "Role of Women in Water Supply and Sanitation." *World Health Forum*, 3, 1982, 227-30.
For monolithic image sensors: MOS or CCD?
Minicomputers in action: tracking a city's rainfall
Special: previewing the National Computer Conference
SPECIAL: SMOOTHING DATA FLOW WITH COMMUNICATIONS PROCESSORS

IT TAKES THE LEADER TO PUT IT ALL TOGETHER

MSC Again Advances the State-of-the-Art in Microwave Power Transistors with the AMPAC™ TACAN - RADAR - TELECOMMUNICATIONS

FEATURES:
- Input Impedance Matching—Ease of Broadbanding
- Output Impedance Matching—Higher Collector Efficiency / Lower Junction Temperature
- Hermetic Construction—Improved Reliability

Call or write us for further information on Microwave Semiconductor Corp.'s AMPAC power transistors and other components—all available from stock and priced to minimize your system cost. Also feel free to contact us for advance information on any of our latest developments.

| | AMPAC 912-50 | AMPAC 1214-30 | AMPAC 1417-24 | AMPAC 1720-10 | AMPAC 2023-10 | AMPAC 3135-5 | AMPAC 3742-5 |
|---|---|---|---|---|---|---|---|
| Frequency (f₁-f₂) | 0.9-1.2 GHz | 1.1-1.4 GHz | 1.4-1.7 GHz | 1.7-2.0 GHz | 2.0-2.3 GHz | 3.1-3.5 GHz | 3.7-4.2 GHz |
| Power Output (P₀) | 50-65 W (pk) | 30-35 W (pk) | 22-26 W (cw) | 10-11 W (cw) | 9-10 W (cw) | 5-6 W (cw) | 5 W (cw) |
| Power Gain (Gₚ) | 9 dB | 9 dB | 7 dB | 7 dB | 6 dB | 5-6 dB | 4-5 dB |
| Collector Efficiency (ηc) | 60-75% | 55-65% | 50% | 45% | 40-45% | 30-35% | 28-30% |
| Supply Voltage (Vcc) | 35 V | 28 V | 28 V | 24 V | 24 V | 28 V | 24 V |
| Impedance (Zin/Zout) | 6/10 Ω | 25/10 Ω | 25/15 Ω | 50/25 Ω | 50/25 Ω | 50/50 Ω | 50/50 Ω |
| Thermal Resistance (θjc) | <1.5 °C/W | <2.5 °C/W | <3.0 °C/W | <5.0 °C/W | <5.0 °C/W | <6.0 °C/W | <6.0 °C/W |

MICROWAVE SEMICONDUCTOR CORP. 100 School House Road, Somerset, N.J. • (201) 469-3311 • TWX (710) 480-4730 • TELEX 833473 Circle 900 on reader service card Now... 
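As a rough check on what the gain column of the AMPAC table implies, here is a minimal sketch. The 50 W output and 9 dB gain are the 912-50's table figures; the helper name and everything else are illustrative, not from the ad.

```python
# Hypothetical helper: estimate the RF drive power a stage needs,
# given its rated output power and its power gain in decibels.
def drive_power_w(p_out_w: float, gain_db: float) -> float:
    """Input power = output power / linear gain, where gain = 10**(dB/10)."""
    return p_out_w / (10 ** (gain_db / 10))

# AMPAC 912-50: 50 W peak output at 9 dB gain needs roughly 6.3 W of drive.
print(round(drive_power_w(50, 9), 1))
```

At these power levels even a "driver" is itself a power stage, which is one reason cascaded amplifier chains were the norm at L-band and above.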
High Efficiency, Switching Regulated Power Supplies The new Switching Regulated Series is the most recent addition to the expanding line of Hewlett-Packard Modular Power Supplies. The MIGHTY MODS started with the 62000 Series — a complete line of modular power supplies with coverage from 3 to 48 volts, up to 192 watts. The new Switching Regulated Supplies, Series 62600, feature advanced transistor switching design with up to 80% efficiency. You get more power in a smaller, cooler operating package... with 4 to 28 volts, up to 300 watts, 0.2% combined line and load regulation, 20mV rms/30mV p-p ripple and noise. And, HP thinks ahead to give you all the protection you need: overvoltage, overcurrent, over-temperature, reverse voltage and protected remote sensing. What it all adds up to is: selection, performance, reliability plus competitive pricing (with quantity and OEM discounts). Whether it's a modular, laboratory, or digitally programmable power supply — be confident when you specify... specify HP. For detailed information, contact your local HP field engineer. Or write: Hewlett-Packard, Palo Alto, California 94304. In Europe, Post Office Box 85, Meyrin-Geneva, Switzerland. What's it worth to get lab capability in an instrumentation recorder with field portability? ($4,270!) If you think you've got to sacrifice performance for portability, it'll be worth your while to price Hewlett-Packard's 3960 instrumentation recorder. At $4,270, it gives you field portability along with performance and features found only in the most expensive laboratory machines. Built on a precision milled aluminum casting, the 3960 weighs less than 50 pounds. Signal-to-noise ratio is better than 46db when operating in the FM mode at 15/16 ips. The 3960 Portable Instrumentation Recorder from Hewlett-Packard. The one that you can pack off into the field, or rack in the lab. From just $4,270. And worth it. 
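The 80% efficiency quoted for the Series 62600 supplies translates directly into input power and heat load. A small sketch (the function name is ours; the 300 W and 80% figures come from the copy above):

```python
def supply_losses(p_out_w: float, efficiency: float):
    """Return (input power, dissipated heat) in watts for a given efficiency."""
    p_in = p_out_w / efficiency
    return p_in, p_in - p_out_w

# A 300 W output at 80% efficiency draws about 375 W and dissipates
# about 75 W as heat inside the module.
p_in, p_loss = supply_losses(300, 0.80)
print(round(p_in, 1), round(p_loss, 1))
```

A linear regulator at the same output might run nearer 50% efficiency, dissipating 300 W; the switching design's lower heat load is what permits the smaller, cooler package the ad claims.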
For complete details write Hewlett-Packard, 1501 Page Mill Road, Palo Alto, California 94304. In Europe, Post Office Box 85, CH-1217 Meyrin 2, Geneva, Switzerland. In Japan, Yokogawa-Hewlett-Packard, 1-59-1, Yoyogi, Shibuya-Ku, Tokyo, 151.

Electronics review

- COMMUNICATIONS: Bell Labs, Corning push fiber optics, 33; Stabilized laser communicator, 34
- AUTO ELECTRONICS: Solid-state sensor monitors car fumes, 34; Anti-skid unit is digital, 35; VW uses ICs in seat-belt system, 35
- COMMERCIAL ELECTRONICS: Highway call box moves to corner, 36
- COMPUTERS: Microcomputer from unlikely source, 36
- INDEX OF ACTIVITY, 38
- PRODUCTION: Ribbon wire virtues leave users unmoved, 38
- COMPONENTS: Military holds out against plastic packs, 40
- SPACE ELECTRONICS: FCC okays Marsat, but questions remain, 42
- NEWS BRIEFS, 42

Electronics International

- JAPAN: Calculator has drive, display on one substrate, 56
- WEST GERMANY: Solar cells power cigarette lighters, 56
- SWEDEN: Pocket pagers cover nation, 56

Probing the News

- LOGIC: C-MOS catches on as standard logic, 71
- COMMERCIAL ELECTRONICS: Hotels like what they see in pay TV, 74
- COMPUTERS: Moscow unveils its ES series, 76
- COMPANIES: Pennies from heaven, 80
- MEDICAL ELECTRONICS: Finnish firm takes giant steps, 82

Technical Articles

- COMMUNICATIONS: Processors pace growth in data-net traffic, 89
- DESIGNER'S CASEBOOK: Transistor temperature compensation, 102; Gating circuit monitors real-time inputs, 103; Audio amplitude leveler minimizes signal distortion, 104
- SOLID STATE: Tradeoffs in monolithic image sensors: MOS vs CCD, 106
- COMPUTERS: Minicomputer helps in sewer-system improvements, 114
- ENGINEER'S NOTEBOOK: Capacitor selection and ac power, 120; General-purpose op amp forms active voltage divider, 122
- COMPUTERS: End user and engineer are targets of first NCC, 124

New Products

- NATIONAL COMPUTER CONFERENCE PRODUCT PREVIEW, 129: No-refresh display holds full printout page, 129; Bare-bones and stand-alone microcomputers to bow, 130; Printer/plotter is designed for minicomputers, 130; Plug-in processor speeds computer arithmetic, 132; Drum plotter completes IC mask in 6½ minutes, 132; Low-priced printer offers upper/lower case, 134; Disk memory system stores 50 megabits, 134
- INSTRUMENTS: Solid-state generator puts out 50 watts, 137
- SEMICONDUCTORS: High-speed IC '741' sells for $1.25
- DATA HANDLING: Computer-aided design program adds models, 153
- INDUSTRIAL: Hybrid transistor ICs challenge SCRs in power, 163
- MATERIALS: 170

Departments

- Publisher's letter, 4
- Readers comment, 6
- 40 years ago, 8
- People, 14
- Meetings, 24
- Electronics newsletter, 29
- Washington newsletter, 53
- Washington commentary, 54
- International newsletter, 59
- Engineer's newsletter, 123
- New literature, 172
- Personal Business, 183

Highlights

The cover: Getting the data through, 89. In data-communications networks today, data flow between central computer and remote terminals is organized and facilitated by special-purpose processors. Cover by graphic designer Ann Dalton symbolizes how such communications processors sort and concentrate disparate data into orderly sequences for transmission over the communications lines.

C-MOS sets a new logic standard, 71. Transistor-transistor logic has a rival in complementary-MOS circuits, and semiconductor manufacturers are jockeying for position in a market that is expected to total $100 million in 1975.

An a-d converter that does it differently, 97. The charge-balancing analog-to-digital converter does much the same job for much the same price as the dual-slope converter. But the fact that it requires fewer, less critical components makes it the better option in some applications.

The first National Computer Conference, 124. Replacing the spring and fall shows of past years, the first annual computer show takes place next month in New York. Its scope has been greatly enlarged to attract the computer user as well as the computer designer. A preview of products to be exhibited starts on page 129.

And in the next issue... 
Special report on custom hybrid technology ...a 16-bit computer-on-a-board for less than $1,000... video refresh and two-way television. With society's need for information skyrocketing, it's no wonder that the growth rate of data-communications gear is hefty. To cut the cost of moving the growing volume of data, more and more users are turning to communications processors. These processors, based on small and medium computers, handle the details of transmission, even do some local processing to reduce the data flow, thus freeing the central computer to do its thing—data processing. We think you'll find the eight-page in-depth wrap-up of the state of the art in communication processors (see p. 89) by Communications Editor Lyman Hardeman valuable and timely. It's especially timely when you consider that in the past decade those processors have zoomed from virtually nothing to several hundred millions of dollars in sales and, currently growing at about 30% a year, may reach $1 billion by 1976. Even in the highly volatile computer and communications segments of electronics, that's some activity. In early June, the first National Computer Conference—which one of its officials calls "a complete department store of computer equipment"—will open its doors in New York. You can just open our pages now to get a preview of what engineers and users alike can expect from the show, which replaces the AFIPS spring and fall meetings. On page 124, you'll find an article by our New York Bureau Manager, Alfred Rosenblatt, describing what the sponsors have done to tailor the conference to the needs of today's engineers and users. Then, on page 129 starts a detailed run-down on some of the most interesting of the new products to be shown. Publisher's letter And speaking of computers, you will find the latest article in our "Minicomputers in Action" series on page 114. 
It's about San Francisco's attempt to figure out why, when a single rain gage downtown showed only light rainfall, torrents of rain water would flood some local sewage plants and send pollution into San Francisco Bay. In the quest for a solution, city engineers set up a minicomputer-controlled rain-sensing and sewage-monitoring system. Ultimately, they hope to upgrade the system to give real-time control of the sewers' storage capabilities as a storm moves across the city. "Significantly," says Computers Editor Wally Riley, "San Francisco's problems are typical of many American cities. And because of the sophistication of the system, delegations from around the country have come to observe it in action." Every issue we pack a wide variety of subjects into our Probing the News department. Take this issue's section as an example. From what's happening in the computer market in Russia to the market for CMOS devices in the U.S. From an electronics success story in Finland to the down-to-earth uses for satellite data. From medical electronics to pay-TV in hotels and motels. As its name implies, the section brings you the stories behind the news events, pointing out the trends that add meaning and significance to the spot news. Waveforms you can trust from 10Hz to 10MHz. Model 4300 oscillator with 0.025 db frequency response and precision calibrated attenuator eliminates need for constant monitoring and adjustment. You can tune it and forget it. No meter is necessary, because the output is virtually transient-free. Push button controls provide rapid frequency tuning and output control. Unlike function generators which offer sine and square waves, the Model 4300 is basically a Wien Bridge oscillator that generates true sine waves without discontinuities or peaks. The sine wave exhibits less than 0.1 percent distortion and frequency stability is .002 percent. Price for Model 4300 is $475. Model 4200 offers all but square wave for $395. 
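The Model 4300 is described above as a Wien bridge oscillator; for the ideal Wien bridge the oscillation frequency is f = 1/(2πRC). A short illustrative sketch (the R and C values here are assumptions for the example, not figures from the ad):

```python
import math

def wien_bridge_freq_hz(r_ohms: float, c_farads: float) -> float:
    """Oscillation frequency of an ideal Wien bridge: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Illustrative values: R = 16 kOhm and C = 1 nF give roughly 9.95 kHz,
# comfortably inside the Model 4300's 10 Hz - 10 MHz range.
print(round(wien_bridge_freq_hz(16e3, 1e-9)))
```

Covering 10 Hz to 10 MHz (six decades) in practice means switching capacitor ranges and tuning R within each range, which is what the push-button frequency controls suggest.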
For fast action, call (617) 491-3211, TWX 710 320 6583, or contact your local representative listed below. KROHN-HITE CORPORATION 580 Massachusetts Avenue, Cambridge, Massachusetts 02139 SALES OFFICES: ALA., Huntsville (205) 524-9771; CAL., Santa Clara (408) 243-2891; Inglewood (213) 674-6850; COLO., Littleton (303) 795-0240; CONN., Glastonbury (203) 633-0777; FLA., Orlando (305) 894-4401; HAWAII, Honolulu (808) 941-1574; ILL., Des Plaines (312) 298-3600; IND., Indianapolis (317) 244-2456; MASS., Lexington (617) 861-8620; MICH., Detroit (313) 526-8800; MINN., Minneapolis (612) 884-4336; MO., St. Louis (314) 423-1234; N.C., Burlington (919) 227-2581; N.J., Bordertown (609) 298-6700; N.M., Albuquerque (505) 255-2440; N.Y., E. Syracuse (315) 437-6666, Rochester (716) 328-2230, Wappingers Falls (914) 297-7777, Vestal (607) 785-9947, Elmont (516) 488-2100, OHIO, Cleveland (216) 261-5440, Dayton (513) 425-5551; PA., Pittsburgh (412) 371-9449; TEX., Houston (713) 468-3877, Dallas (214) 356-3704; VA., Springfield (703) 321-8630; WASH., Seattle (206) 762-2310; CANADA, Montreal, Quebec (514) 636-4411, Toronto, Ontario (416) 444-9111, Stittsville, Ontario (613) 836-4411, Vancouver, British Columbia (604) 688-2619. Circle 5 on reader service card Intronics multiplies your design flexibility with low-cost, high accuracy M530 IC multiplier/dividers Intronics IC multiplier/dividers provide the packaging flexibility you need when space is at a premium. These low-cost, fully self-contained, four-quadrant monolithic devices are capable of multiplication $\frac{XY}{10}$, division $\frac{10Z}{Y}$, squaring $\frac{X^2}{10}$, and square rooting $\sqrt{10Z}$, and feature high accuracy to 0.5% with excellent stability and a wide bandwidth of one megahertz. Prices start as low as $20. Applications include: modulation and demodulation, phase detection and measurement, ratio measurement, power measurement, function generation and frequency discrimination. 
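The four M530 operating modes listed above (XY/10, 10Z/Y, X²/10, √(10Z), all referenced to a 10 V scale factor) can be modeled as ideal transfer functions. A hedged sketch: the function name and test voltages are ours, only the transfer functions come from the ad.

```python
def m530_model(x: float, y: float = 10.0, z: float = 0.0,
               mode: str = "multiply") -> float:
    """Idealized transfer functions of the four M530 operating modes."""
    if mode == "multiply":   # XY/10
        return x * y / 10.0
    if mode == "divide":     # 10Z/Y
        return 10.0 * z / y
    if mode == "square":     # X^2/10
        return x * x / 10.0
    if mode == "sqrt":       # sqrt(10Z)
        return (10.0 * z) ** 0.5
    raise ValueError(f"unknown mode: {mode}")

# 5 V times 4 V gives 2 V out; square-rooting a 2.5 V input gives 5 V out.
print(m530_model(5, 4), m530_model(0, z=2.5, mode="sqrt"))
```

The /10 scale factor keeps every output inside the same ±10 V range as the inputs, so stages can be cascaded without level shifting.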
Write for complete applications information in our designer's guide, "Optimizing Analog Multiplier Performance." When you're in a tight spot, specify Intronics M530 IC multiplier/dividers.

Readers comment

Display corrections

To the Editor: I have found two errors in my article "Matching driver circuitry to multidigit numeric displays" (April 26, p. 95) that I think merit a correction. Equation (4) should read:

$$C_{\text{eff}} = \frac{F(L_{on} - L_n)}{(L_n + L_{nSG})} + 1$$

In the chart, the off switching time for thin-film electroluminescent devices should be 1 millisecond rather than 1 microsecond.

Alan Sobel, Zenith Radio Corp., Chicago, Ill.

Warping on wrapping

To the Editor: My comments on the automatic vs. semi-automatic wire-wrapping machines as published under "Wire Wrapping Takes a New Twist" in the April 12 issue (page 86) were distorted. This was partly due to the semantics of the term "automated." My definition as applied to wire-wrapping equipment encompasses all semiautomatic and fully automatic equipment and excludes hand wrapping. The comment "Automatics can be changed quickly to handle different types of ICs" should have read "Automated wiring systems. . . ." The program tapes of either automatic or semi-automatic machines can be changed almost as quickly, but the automatics usually require more extensive tooling and setup. Semi-automatic systems still offer the advantages of allowing less precise dimensional control of the pin positions, use of twisted pairs and triplets, use of miniature coaxial cable, complex routing patterns, and easy intermingling of different sizes and colors of wires—all of which the automatics cannot accommodate. The new lower-cost, higher-speed automatic machine should make it more competitive with the semi-automatic, reduce the per-wire cost on longer runs, and make wire wrapping even more competitive with the alternate interconnection methods than it is today.

Jack J. Staller, Techstal Associates, Norwood, Mass. 
Liquid Rivets, Bolts, Nails, Staples, Etc. One drop goes a long way in fastening almost anything to almost anything. Metals, for instance. And plastics. And ceramics. And rubber. Eastman 910® adhesive bonds fast, too. Almost instantaneously. With only contact pressure. Tensile strength? Up to 5,000 psi at room temperature. New Eastman 910 MHT and THT grades hold when the heat is on. Even over 400°F. For further data and technical literature, write: Eastman Chemical Products, Inc., Kingsport, Tennessee 37662. Distributors Arizona: Hamilton/Avnet, Phoenix (602) 257-7331 Kierulf, Phoenix (602) 257-7331 Compar, Tempe (602) 947-4336 Californian: Hamilton/Avnet Mountain View (415) 961-7000 San Diego (714) 286-2421 Hamilton Electro, Colton (909) 302-7773 Kierulf, Palo Alto (415) 362-2100 San Diego (714) 278-2112 Compar, Roseville (415) 362-2100 Gardena (313) 327-6550 Californian: Hamilton/Avnet, Denver (303) 514-1912 Kierulf, Denver (303) 343-7090 Florida: Hamilton/Avnet, Hollywood (305) 925-5401 Georgia: Hamilton/Avnet, Mariacross (404) 351-1100 Illinois: Allied, Chicago (312) 243-1100 Hamilton/Avnet, Schiller Park (312) 671-1100 Kierulf, Rosemont (312) 671-8560 Kansas: Hamilton/Avnet Prairie Village (Kansas City) (913) 621-1100 Maryland: Hamilton/Avnet, Hanover (410) 684-3300 Pioneer, Rockville (301) 427-3300 Compar, Baltimore (301) 484-5400 Massachusetts: Electrical Supply, Cambridge (617) 491-3300 Gorleben, Danvers (617) 529-2400 Hamilton/Avnet, Burlington (617) 372-1120 Kierulf, Waltham (617) 849-3600 Compar, Boston (617) 849-3600 Highlands (617) 968-7140 Michigan: Hamilton/Avnet, Livonia (313) 522-4700 Milwaukee: Hamilton/Avnet Bloomington (812) 854-4800 Milwaukee: Hamilton/Avnet, Hazelwood (St. 
Louis) (314) 731-1144 New Jersey: Arrow, Toms River (609) 531-1331 Hamilton/Avnet, Cherry Hill (609) 531-1331 Cedar Grove (201) 239-0800 Haddad, Englewood Cliffs (201) 229-1526 Compar, Clifton (201) 229-1526 New Mexico: Century, Albuquerque (505) 822-4000 Kierulf, Albuquerque, (505) 822-4000 New York: Semiconductor Components, Long Island (516) 273-1234 Hamilton/Avnet, Syracuse (315) 437-3642 Westbury, L.I. (516) 541-3812 Arrow, New York (516) 694-6800 Summit, Buffalo (716) 884-3450 Compar, New York (516) 884-3450 North Carolina: Pioneer, Greensboro (919) 273-4441 Compar, Charlotte (704) 371-1000 Salem (919) 723-1002 Ohio: Arrow, Dayton (513) 421-1100 Kierulf, Dayton (513) 421-1100 Texas: Hamilton/Avnet, Dallas (214) 271-2471 Houston (713) 526-4661 Kierulf, Houston (713) 526-4661 Dallas (214) 271-2471 Utah: Kierulf, Salt Lake City (801) 262-8451 Salt Lake City: Hamilton/Avnet (801) 262-8451 Washington: Hamilton/Avnet Seattle (206) 744-1330 Kierulf, Seattle (206) 744-1330 Compar, Kirkland (206) 744-1330 Canada: Pacific, Montreal (514) 875-1100 Ottawa (613) 237-4550 Electro Sonic Ind. Sales, Toronto (416) 943-1100 Hamilton/Avnet, Montreal (514) 875-1100 Toronto (416) 677-7432 Ontario: Kierulf, Toronto (416) 677-7432 L. A. Varah, Vancouver, B.C. (604) 684-3300 Representatives Alabama: Twentieth Century Marketing, Birmingham (205) 772-9237 Arizona: Q. T. Wilkes & Assoc., Scottsdale (602) 943-5791 California: Century, San Diego (714) 279-7961 Tidewater, Inc., Mountain View (415) 967-3871 Q. T. Wilkes & Assoc., Los Angeles (213) 449-1322 Colorado: Parker-Kelley, Denver (303) 770-1972 Florida: W. M. & M. 
Assoc., Alumina, Miami (305) 831-4645 Clearwater (813) 726-8071 Pompano Beach (305) 943-3091 Illinois: Allied, Chicago (312) 243-1100 Des Plaines (312) 824-0104 Indicatronic, Naperville (312) 824-0104 Fort Wayne (219) 747-0402 Maryland: Mechronic Sales, Rockville (301) 821-2429 Massachusetts: Contact Sales, Inc., Burlington (617) 273-1520 Michigan: Greiner Assoc., Grosse Pointe Park (313) 499-0388 Minnesota: Command, Inc. Minneapolis (812) 560-5300 Missouri: Coombs Assoc., St. Louis (314) 771-1397 New Mexico: Electronic Marketing, Albuquerque (505) 503-7837 New York: Win-Cor Electronics, Manhasset (516) 627-9474 Tritech, DeWitt (315) 446-2801 Ohio: Kierulf, Cleveland (Dayton) (513) 432-3800 Aurora (Cleveland) (216) 562-6104 Pennsylvania: G. C. M., Ambler (215) 646-7335 Texas: Semiconductor Sales, Richland (914) 231-8181 Houston (713) 461-4192

Electronics/May 24, 1973

THIS IS NO ORDINARY DISGUISED MAN, IT'S THE SUPERMAN DIGIT. Replaces everything else. The Data-Lit 707 is designed in the standard 14-pin dual in-line package. It's pin-for-pin identical with the MAN-1 and DL-10. The Data-Lit 704 is pin-for-pin identical to the MAN-4 and DL-4. And while it isn't pin-for-pin identical with tubes, the total system cost will beat them penny-for-penny. The Data-Lit 707 second generation LED display has all the qualities you would like to see in a Superman digit. Low cost, low power, full solid segments with minimum gaps, low cost, availability, standard pins, high reliability, low cost. It's Cheap. Everyone wanted us to say economical. But the DL-707 is cheap compared to what you've been used to for LED displays and tube displays. The total system cost of power supplies, drivers, digits and mounting hardware is now less in the 2 to 8 digit range using LED's than any other display technology. ELD makes it all happen. Encapsulated Light Diffusion (ELD) was developed in our Krypton lab. 
We've produced a high quality diffusing light channel in a single encapsulating step. This allows us to use 85% less GaAsP material without sacrificing brightness. The only thing it cuts is cost. This looks like a job for Superman DL-707. If you're anywhere in the thriving metropolis of desk top calculators, POS equipment, digital panel meters, small instrumentation and so on, you have to see the Data-Lit 707 and get our volume prices. Here's the first of our Superman Data-Lit 700 Series: | Model | Description | |---------|--------------------------------------------------| | DL-707 | Common anode, left decimal | | DL-707R | Common anode, right decimal | | DL-701 | Common anode, polarity and overflow | | DL-704 | Common cathode, right decimal | So step into a phone booth and call one of our distributors. The Data-Lit 700 Series is going at $3.25 in 100-999 quantities. No surprises, the Bright Guys did it again. Litronix, Inc. • 19000 Homestead Road • Cupertino, California 95014 • (408) 257-7910 TWX: 910-338-0022 Circle 7 on reader service card Cut package count... Simplify board layout... Reduce equipment size... 
with DIP MULTI-COMP® RESISTOR-CAPACITOR NETWORKS (Metanet® Film Resistors, Monolythic® Ceramic Capacitors) STANDARDIZED DESIGNS* FOR BETTER AVAILABILITY, BETTER PRICES | R (Ω) | C₁ | |-------|----| | 100 | 470| | 150 | 500| | 200 | 680| | 220 | 1000| | 330 | 1500| | R (Ω) | C₂ | |-------|----| | 2000 | 100pF| | 2200 | 330pF| | 3300 | 0.01µF| | 4700 | 0.05µF| BYPASSED PULL-UP AND R-C COUPLING NETWORKS | R (Ω) | C | |-------|---| | 100 | 470| | 150 | 500| | 200 | 680| | 220 | 1000| | 330 | 1500| SPEED-UP NETWORKS | R (Ω) | C (pF) | |-------|--------| | 100 | 470 | | 150 | 500 | | 200 | 680 | | 220 | 1000 | | 330 | 1500 | ACTIVE TERMINATOR NETWORKS | R (Ω) | C (pF) | |-------|--------| | 100 | 470 | | 150 | 500 | | 200 | 680 | | 220 | 1000 | | 330 | 1500 | * OTHER PACKAGES, CIRCUIT CONFIGURATIONS, AND RATINGS AVAILABLE ON SPECIAL ORDER Sprague puts more passive component families into dual in-line packages than any other manufacturer: - TANTALUM CAPACITORS - CERAMIC CAPACITORS - TANTALUM-CERAMIC NETWORKS - RESISTOR-CAPACITOR NETWORKS - PULSE TRANSFORMERS - TOROIDAL INDUCTORS - HYBRID CIRCUITS - TAPPED DELAY LINES - SPECIAL COMPONENT COMBINATIONS - THICK-FILM RESISTOR NETWORKS - THIN-FILM RESISTOR NETWORKS - ION-IMPLANTED RESISTOR NETWORKS For more information on Sprague DIP components, write or call Ed Geissler, Manager, Specialty Components Marketing, Sprague Electric Co., 509 Marshall St., North Adams, Mass. 01247. Tel. 413/664-4411. THE BROAD-LINE PRODUCER OF ELECTRONIC PARTS 40 years ago From the pages of Electronics, May 1933 "Auditory perspective" by which the sounds of the instruments in a great orchestra seemed to come from different sides of an empty stage, exactly as if the orchestra itself were seated on that stage, instead of being in another city 150 miles away, was the striking feature of the Bell Laboratories transmission of Dr. Stokowski's Philadelphia orchestra to the audience of the National Academy of Sciences, meeting in Washington, April 27. 
In addition, new extensions of the frequency band were transmitted, including tones from 40 cycles to 16,000 cycles per second, affording new degrees of utter realism, in the reproduction of wind instruments, bells, snare-drums and other effects. Dire plans are apparently underway among the newspaper publishers for the elimination of broadcasting-program material from their reading pages. Unfortunately the newspaper men look upon radio as something competitive, and they are unwilling to give it further support. Broadcasting needs advance printed programs to which listeners can refer. Mere oral announcements of program features to come are ineffective, except in the case of single outstanding events. But the radio industry has its own defensive means all ready, in the shape of facsimile reproduction. The radio listener of the near future, when turning off his receiver on going to bed, might merely switch it over onto "facsimile"; the receiver would then go on recording during the night. And on coming down to breakfast next morning, the listener would find issued from his set his morning tabloid newspaper. Following a nationwide broadcast over the Columbia network, April 17, the infra-red fog-eye developed by Commander Paul H. Macneil of Huntington, Long Island, N.Y., was demonstrated on the Furness liner "Queen of Bermuda" with the aid of British destroyers. A sensitive thermopile has its output amplified so that it is sensitive to one fifty-thousandth of a degree Centigrade. interface-ability Or why systems people buy more S-D counters 6 remote programming options 4 BCD outputs (low and high level) Special codes, formats, logic levels Universal counter/timer functions 50, 200, 512 MHz, 3 GHz For details or a demo on series 6150 counters, call your Scientific Devices sales and service office (listed below). Or contact Concord Instruments Division, 10 Systron Drive, Concord, CA 94518. Phone (415) 682-6161. In Europe: Systron-Donner GmbH, Munich, W. 
Germany; Systron-Donner Ltd., Leamington Spa, U.K.; Systron-Donner S.A. Paris (Le Port Marly) France. Australia: Systron-Donner Pty. Ltd., Melbourne. Systron Donner Albuquerque, (505) 268-6729; Baltimore, (301) 788-6611 Boston, (617) 894-5637; Burlington, NC (919) 228-6279; Chicago, (312) 297-5240; Cleveland, (216) 261-2000; Denver, (303) 573-9466; Dayton, (513) 296-9904; Dallas, (214) 231-8106; Detroit, (313) 363-2282; Ft. Lauderdale, (305) 721-4260; Hamden CT (203) 249-3361; Huntsville, AL (205) 536-1969; Houston, (713) 623-4250; Indianapolis, (317) 783-2111; Kansas City, KS (913) 631-3816; Los Angeles, (213) 641-4800; Minneapolis, (612) 544-1616; New York City area (201) 871-3916; Norfolk, (703) 499-8133; Orlando, (305) 424-7932; Philadelphia, (215) 625-9515; Phoenix, (602) 524-1080; Rochester, NY (716) 334-2445; San Antonio, (512) 694-6251; San Diego, (714) 249-6642; San Francisco area (415) 964-4230; Seattle, (206) 454-0900; St. Louis, (314) 731-2332; Syracuse, (315) 457-7420; Washington, DC area (703) 451-8500. Electronics/May 24, 1973 Circle 9 on reader service card 9 All Panel Meters are not created equal. We try to build an edge into General Electric panel meters. For instance, you won't see a GE panel meter turn yellow, because we use a special white paint that stays white. You won't get eyestrain either. GE panel meters come with extra-wide scales, big numerals, tapered pointers, and shadow-free cover plates for quick, sure readings. We're fussy about things like that. Once you've installed them, forget 'em. GE's famous reliability just doesn't happen, we build it in! We designed-out a lot of extra parts that might fail, just to give you extra instrument reliability. To make sure, we added a 20% overload capability to our voltmeters and ammeters. Still not satisfied, we decided to measure instrument quality from parts to finished product in order to screen out anything marginal. Now, it's just too tough for a lemon to squeeze through. 
GE panel meters come from a good family. They look good individually and they look good together. Choose the rounded BIG LOOK® design for unique style and wide-eyed readability. Or choose the clean HORIZON LINE® case for its behind-panel mounting flexibility (without the usual bezel), and its snap-off mask available in six colors. GE makes panel meters to suit you and to add snap to your application. You can count on General Electric panel meters. They're built to help you do a better job. At GE, we're not interested in product equality. We want ours to be better than the rest. For a complete catalog of competitively priced and readily available GE panel meters, see your nearby authorized GE distributor. Or write to General Electric Company, Section 592-43, One River Road, Schenectady, N.Y. 12345. Specify General Electric... just for good measure. Circle 10 on reader service card

[Photo: meter face marked A-C AMPERES, GENERAL ELECTRIC]

THE CHEAPEST MINICOMPUTER VS. THE CHEAPEST SOLUTION

Before you buy a minicomputer, do yourself a favor. Make a very fundamental decision. Do you want the cheapest machine you can find or the cheapest total solution to your problem? We think it's the latter. Because the cheapest machine is just that. It's raw hardware at a rock bottom price. And virtually every minicomputer supplier offers a product like this. Including us. But your goal should be to get the lowest cost total solution for your problems. And paying less now could cost you more later if the machine you buy has been designed for rock bottom price alone. Be careful. You should look beyond raw iron. You need a computer package that saves you money at both ends. One that's been designed with the total solution in mind. A powerful blend of hardware, systems software, and extensive peripherals. You also should look for a supplier that has built his business on fulfilling this need. That's us. The world's most powerful mini.
We've developed the most effective minicomputer package you can buy: the SPC-16. Six different models to choose from and the most powerful instruction set available anywhere. The SPC-16 does more things in less time with less memory. That's why it can actually save you money on your total system. And we've recently enhanced the capability of our SPC-16 family with a number of new products including: Multi-user BASIC, and the real-time, multi-programming capability of our RTOS-16 operating system. And our new extended FORTRAN IV. New peripherals like a low speed line printer, head per track disk and a floppy disk. High speed floating point processor, 8K memory board, heavy duty process I/O boards, A/D and D/A converters and digital I/O boards. And completely new asynchronous communications multiplexer system. Here's another reason for choosing us: We've already had our tryouts. Today all the big mini manufacturers are announcing that they're "in the systems business". We've been in it from the start. And while everybody else was churning out iron, we were building systems and piling up applications know-how. We got involved with our customers' problems. We listened and we learned. Then we rolled up our sleeves and went to work. As a result our people don't have to be retrained for this new approach because it isn't new at all. Not to us. Over the years we've supplied systems to solve some very tough problems in the automotive industry, in production machine control, in electrical testing and communications. And this experience has built a fund of systems expertise no mini manufacturer can match. There's a good chance we already have a system that fits your needs. If not, we have the know-how to design it for you. Or with you. In fact, we can probably utilize our experience to solve your system problem faster than others can deliver a bid. Read all about it. If you're determined to reduce systems cost, we have a book for you. It's titled "The Value of Power." 
It covers everything you'll need to know to make the right decisions, for the right reasons, to end up with the right system for your specific needs. It's free. Write for a copy. The address is 1055 South East Street, Anaheim, California 92804. Or phone (714) 778-4800. Tubes? Forget them. HERE'S 100 WATTS OF SOLID-STATE RF POWER! A state-of-the-art amplifier. ENI's new Model 3100L all-solid-state power amplifier provides more than 100 watts of linear power and up to 180 watts of pulse power from 250 kHz to 105 MHz. This state-of-the-art class A unit supplies over 50 watts at frequencies up to 120 MHz and down to 120 kHz. All this capability is packaged in a case as small as an oscilloscope, and it's just as portable. Extraordinary performance. Featuring a flat 50 dB gain, the Model 3100L is driven to full power by any signal generator, synthesizer or sweeper. AM, FM, SSB, TV and pulse modulations are faithfully reproduced by the highly linear output circuitry. Immune to damage due to load mismatch or overdrive, the 3100L delivers constant forward power to loads ranging from an open to a short circuit. Solid-state reliability is here. The price? $5,690. Write for complete information: ENI, 3000 Winton Road South, Rochester, N.Y. 14623 Call (716)-473-6900 or TELEX 97-8283 Dept. E 524 ELECTRONIC NAVIGATION INDUSTRIES ENI . . . The world's leader in solid-state power amplifiers. People RPI head prepares for technological shifts As Rensselaer Polytechnic Institute nears its 150th year, its president, Richard J. Grosh, is organizing a five-year plan to maintain academic excellence and prepare its graduates for the technological shifts of the future. Grosh says the program must consider the technical problems the nation is going to face in the next 40 years, and "we know what these problems are going to be: energy generation and distribution, ecology, information retrieval and distribution." 
Within these broad categories, he sees the need for continued growth in the areas of computer diagnostics, pattern recognition, materials and materials processing, as well as circuit theory. Grosh emphasizes engineering. Many students spend their first three years studying chemistry, physics, and mathematics and do not really get involved in engineering until their senior year. "I think we have to make the educational process a little more amenable to their interest," he says, and therefore advocates more engineering courses earlier in students' academic schedules and more laboratory work. Overall, he stresses that the professional obsolescence faced today by many engineers can only be overcome by the understanding that education is a lifelong experience. Here the universities have an obligation, too. "We tend to focus on men and women between the ages of 18 and 22, and I think there should be many more opportunities for those later on in life." Although, according to Grosh, some of the techniques of business, such as cost centering, goal setting, and programmed budgets, can be applied to a university, "our payoff is difficult to measure. In business there is a P&L statement that can be looked at to help determine if the actions of the past year were reasonably correct. In education, there is no chance to measure the quality of your graduates or their success—if that can be measured." Previously dean of Purdue University's Schools of Engineering, Grosh acts as an adviser to such organizations as Bell Telephone Laboratories and the National Science Foundation, and to other engineering schools besides RPI. While he appears to be a sharp, corporate executive type, he is described by an associate as also a "perennial student," with a serious interest in Greek history, Shakespeare, and classical music. When he's not riding his racing bike, skiing, or sailing his boat, he drives his gray Corvette, which can hardly contain his six children and his wife.
Silicon General profits from Beck

The route to the top of a semiconductor company rarely starts at an electronics distributor, but Fred Beck feels that his journey along that path has given him an advantage over his counterparts who rose through operations engineering at the IC houses. As president of linear-IC specialist Silicon General Inc., Westminster, Calif., Beck thinks his 13 years at the major distributor Hamilton Avnet have given him a better insight into customer needs.

GREAT MOMENTS IN MOS

Ion Implantation Revolutionizes MOS

Mostek's Ion Implantation process is relatively simple, yet its results have literally increased MOS array performance by a factor of 10 and at the same time have yielded smaller size, lower thresholds, and a significant reduction in manufacturing costs. Further, it has made possible the co-existence of digital and linear MOS circuitry on the same chip. Nevertheless, Ion Implantation is a young technology and its full potential is yet to be realized. AT MOSTEK we were quick to realize the implications of Ion Implantation and we were the first manufacturer to apply it to volume production. The results have been extremely gratifying, both to us and to our many customers who have taken advantage of MOS technology for their products. Unknown just a few years ago, MOS products are now at work in one-chip calculators, multi-function calculator systems, micro clocks and calendar circuitry, organ keyboards and numerous industrial and consumer products. Ion Implantation has made MOS/LSI technology available to virtually all industries no matter how unique their requirements may be. And it's only natural that they turn to a leader to solve their MOS problems. Whether off-the-shelf or customized IC's, Ion Implantation and MOSTEK know-how are both benchmarks in Great Moments in MOS.

REGIONAL SALES OFFICES: Western: 11222 La Cienega Blvd., Inglewood, Calif. 90301 (213) 628-2881; John Turner Sales, Waltham, Mass.
02154 (617) 899-9107; Central: 8180 Brecksville Rd., Brecksville, Ohio 44141 (216) 526-6747; International Europe: MOSTEK GmbH, Stuttgarter Strasse 60, D-7000 Stuttgart 19, West Germany (Telex-7255792 MK D); Japan: System Marketing Inc., 4 Floor, Minasu Bldg., 3-14-1 Kanda Surugadai, Chiyoda, Tokyo, Japan (Telex-0222-5276 SMITOK); Far East: Marketing Assoc., Inc., 525 W. Remington Dr., #108, Sunnyvale, Calif. 94087 (Telex-35-7453); Hong Kong: Asic Components Ltd., 12-14 Crown Court, Flat "C" 5th Floor, 70 Nathan Rd., Kowloon, Hong Kong (Telex-HX4899); Mid East: Racel Electronics, 68 Pinkas St., Tel Aviv, Israel (Telex-33-808 RACEL); Canada: Electronics Inc., 4252 Braille Ave., Montreal, Quebec (TWX-610-421-332-2)

MOSTEK Corporation, 1215 West Crosby Road, Carrollton, Texas 75006, (214) 242-0444, TWX 910-860-5975, TELEX 73-0423. © Copyright 1973 by MOSTEK Corporation

Your card reader and interface problems end here.

Hickok designs static card readers with the user in mind. Starting with two rugged, reliable, economical models, we tailor the reader you need for use in programming system control and data collection. You also receive the help you need. You select among a variety of electronic packages to interface the reader to your system. Packages like TTL-compatible scanners with two operating modes, sequential scanning and addressable by column number. Reliability is built into Hickok readers with the multistrand continuous brush design. This technique eliminates errors caused by contaminants on the card and allows reading even of cards punched out of tolerance. This design also saves you money, because it's easier to make. Even in single lots, the 264A Badge Reader is only $175, and the 960A Card Reader, $495. When you're considering static card readers, call Hickok. We have the right unit at the right price for you.
Model 264A reads first 22 columns of tab card and all columns of plastic badge — $175 Model 960A reads all 80 columns of tab card — $495 Model 80 Scanner to interface to your system HICKOK the value innovator Instrumentation & Controls Division The Hickok Electrical Instrument Co. 10514 Dupont Ave. • Cleveland, Ohio 44108 (216) 541-8060 People And judging by the turnaround in corporate profits, he may have a point. For three years, Silicon General had steadily increasing losses, touching $400,000 in 1971 on sales of $1.5 million. Then Beck took charge. Today the company is fully profitable and reported sales of $3 million in 1972. Although the strong semiconductor market accounts for part of the turnaround, much of the result stems from changes Beck has made. As a distributor, he learned that what makes the difference among businesses offering the same products is service, and so he modified Silicon's strategy to serve its customers better, by down-playing engineering and emphasizing marketing and distribution. First, he stopped development of new proprietary products, feeling that second-sourcing popular parts made better use of the limited resources available. The firm also started concentrating on the limited military and industrial markets, sidestepping consumer and computer products for the time being. Commitment to commitments. By doing this, and by raising some new capital, Beck says the firm offers fast turnaround and assured delivery. He explains the company won't make commitments it can't keep, and consequently "we turn down more business than we accept." Beck, a trim 35-year-old, attended the University of Redlands in California. Although he had a 44-ft sloop and expected to enter the 2,000 mile Transpac race from California to Hawaii, business came first. He sold the boat and now won't even go out on the water. Connector or IC panel—we can give you exactly what you need. Single-, double- or multilayer. 
Mother/daughter board connectors, IC receptacle packaging, feedthrough posts, low-profile DIP headers, or cable-to-board connectors. Prewired or ready to wire by automatic techniques. Panels with high reliability, competitive cost and ease of repairability. We built our reputation for quality and low applied cost in the connector field. And carried it over into back panels—the very heart of modern electronic systems. To give you the kinds of connectors, manufacturing techniques and equipment which ensure reliability, performance and repairability—at a competitive cost.

High reliability. We eliminate plated through-hole distortion and possible damage caused by force fit insertion. This is done by selectively pre-depositing bands of solder on posts and receptacles before inserting and reflow-soldering them into panels. This process also greatly increases the reliability and performance of our panels by eliminating wicking, bridging, peaks, icicles and board delamination. Fillets are more uniform and complete, with full solder top to bottom. And posts are left clean and solder-free for automatic wiring. AMP has also developed connector housings which snap on over the contacts after contacts are flow soldered, so there's better use of printed circuit real estate. For information on our panels circle Reader Service Number 150.

Ease of repair. When snap-on connector housings are used, individual contacts can be exposed for quick, easy removal and replacement, without the need to desolder all contacts.

Competitive cost. There are several important ways in which we keep the cost of our panels competitive. First, by inserting contact posts with high-speed, automated machines. Second, by soldering all contacts simultaneously instead of individually. And third, by conducting rigorous electrical and mechanical quality checks on every single panel we make, eliminating the cost and burden of incoming inspection for our customers.
Additional economies can be achieved by using snap-on housings which do not require time-consuming individual contact loading. We can design with you or for you. If you customarily design your own panels, we can assist in optimizing your circuit patterns. Or, we can take your parameters and complete the entire panel-making operation, sparing you considerable investment. Using computer-driven plotters, we “pack” the greatest number of circuit paths into the smallest possible board space, consistent with other design parameters. We’ll set you up to wire or do your wiring for you. Give us your parameters. We’ll give you assembled connector or IC panels, pre-wired or ready for your automatic wiring. If you choose the TERMI-POINT clip system, you’ll get highly-reliable, spring-action terminations that are easier to test, maintain and service. Panel construction is AMP-engineered and manufactured. One main reason we can control the quality and cost of our panels so well is the fact that we design, engineer and manufacture literally everything that goes into them. DIP headers are ideal for low-cost, high-density packaging. Our low-profile DIP headers provide some of the industry's lowest-cost, highest-density packaging for 14- and 16-lead IC's. Standard headers accept a full range of lead sizes—round, rectangular or both, and are compatible with high-speed, automated wiring methods. Low-profile headers (.150-inch high) accept rectangular leads up to .015 x .030-inch. Low-profile miniature spring socket offers maximum retention and conductivity. Designed specifically for electronic and wiring applications that require low profile miniature sockets, this product has an inner spring member and a body with either a .022 x .036-inch or .025" post configuration. The inner spring member maintains consistent pressure against the lead, providing excellent retention and conductivity. 
A "barbed" design allows the socket to be self-retained in the panel and, at the same time, prevents socket "pullout." IC receptacles have unique anti-overstress design. The unique built-in anti-overstress stop on our IC receptacles assures tight, constant contact. The receptacle will accommodate any known IC configuration or package with round or flat leads up to .022-inch diameter or .022 x .040-inch dimensions. Removable gold-over-nickel-plated contact springs provide excellent performance. Posted card connectors offer great versatility in panel design. Our TERMI-TWIST Connectors are available in a variety of configurations, depending on your requirements for post size, number of positions and center-line spacing. Board area contacts are bifurcated for redundancy. Connectors can all be wired by high-speed, automatic techniques. Engineering backup...worldwide. At AMP, nearly 900 application, service and sales engineers are prepared to assist you with every phase of panel-making, connectors and programming systems. At your domestic manufacturing plant, or wherever you use AMP products and machines throughout the world. You'll find AMP manufacturing and service facilities in most major international markets. In the United States, district offices are located in California, Georgia, Illinois, Massachusetts, Michigan, Minnesota, New Jersey, Ohio, Pennsylvania, Texas, and the District of Columbia. Write for Panel Packaging Folder Find out how we're able to give you exactly the panel you need. Write on your company letterhead for our Panel Packaging Folder. It contains full documentation of our various processes, with suggestions of how they can work best for you. AMP Industrial Division, Harrisburg, Pa. 17105. AMP, TERMI-POINT, TERMI-TWIST are trademarks of AMP Incorporated. A tough little competitor. AMPSCO. American Power Systems Corporation. Maybe you heard of us when we were Armour Electronics. Maybe you didn't. 
Either way we're a scrappy little powerhouse you need to know now. Because when your full time talents are needed in putting out a sophisticated, multi-thousand dollar unit, why waste even a little time designing its power supply? And since a reliable power supply is so critical, why buy it from a big company that spends only some of its time making it? Why not find yourself a sharp little outfit that spends all its time on power supplies? A company that stakes its entire reputation on one kind of product and has to do a mighty good job. For a mighty good price. AMPSCO. O.E.M., off the shelf, slot and multiple power systems — When you make only one thing you make it better. Power Supplies by AMPSCO. American Power Systems Corporation, 51 Jackson Street, Worcester, Mass. 01608, (617) 753-8103

Siemens introduces the lowest profile in PC-board EMR's.

[Illustration: a common low-profile relay beside the Siemens low profile; 6PDT, 4PDT, and DPDT models shown]

These new low profile relays with only 0.4" height let you put twice as many PC boards in a rack yet give you over twice the current rating. Siemens, one of the world's leading relay manufacturers, has come up with another major relay innovation. This time it's a complete family of general-purpose Electro-Mechanical Relays with a lower profile combined with higher current rating than has been possible with any available design. The new Siemens family consists of DPDT, 4PDT, and 6PDT models which have uniformly the same 0.4 inch height above the PC-board face and have contact ratings of 1 A at 24VDC (0.3 A at 115 VAC). No longer need the relay be a limiting design factor. You can use Siemens low profiles on racks with 0.5" center-to-center PC-board spacing instead of up to one inch spacing. Thus you can pack up to twice the circuitry in the same space. It also means you can design to switch twice the current you had been limited to by earlier PC-board relay types. Or if you don't need more current, you have a much higher safety margin.
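The packing claim above is easy to sanity-check. A minimal sketch, assuming a hypothetical card cage with 10.5 inches of usable slot width (the cage width and the function name are our illustration, not figures from the ad):

```python
# Back-of-the-envelope check of the relay ad's density claim.
# CAGE_WIDTH_IN is an assumed usable cage width, not from the ad.
CAGE_WIDTH_IN = 10.5

def boards_that_fit(spacing_in: float) -> int:
    """Number of PC boards at a given center-to-center spacing."""
    return int(CAGE_WIDTH_IN // spacing_in)

tall_relays = boards_that_fit(1.0)   # conventional relay height forces ~1" spacing
low_profile = boards_that_fit(0.5)   # 0.4" relay height permits 0.5" spacing

print(tall_relays, low_profile)      # 10 vs. 21 boards in the same cage
```

At any cage width the halved spacing at least doubles the board count, which is the ad's "twice the circuitry" claim.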
The new Siemens relays have bifurcated contacts for high reliability, and a sealed base that keeps flux or solder from contaminating the contacts. Siemens has many additional high-reliability, general-purpose relays. Write or call us for more information on the new low profile line or for relays for other applications. Siemens Corporation, Special Components Department, 186 Wood Avenue South, Iselin, New Jersey 08830. (201) 494-1000. At left, the first complete family of low-profile relays. WHY CHOOSE RENTAL ELECTRONICS WHEN YOU RENT, LEASE, OR RENTAL-PURCHASE? Because REI is the recognized leader when it comes to supplying you the most complete selection of electronic/scientific test equipment—to rent, to lease, or to rental-purchase—at the most attractive costs. Now, more than ever, you must expand along with the pace of economic and technological development. To avoid the handicap of obsolete equipment, to help you maintain a flexible budget, to keep abreast of the competition, to assure growth with increased production and sales—Rental Electronics offers you the instruments you need, when you need them, for as long as you need them. REI offers you precisely the right instruments—everything from amplifiers to oscilloscopes to synthesizers—with a plan custom-designed to meet your specific requirements! Our staff of sophisticated financial planners is ready to help you choose the rental, lease, or rental-purchase package that best fits your situation. And your needed equipment is ready for almost instantaneous delivery, direct from one of nine strategically-located "Instant Inventory" Centers across the U.S. and Canada. Every Rental Electronics customer is our very special customer, receiving the service he needs under a rental, lease, or rental-purchase plan custom-tailored especially for him. The results are increased PROFITS for you! Ask for our full catalog today! Write or call: Rental Electronics, Inc. A PEPSICO LEASING COMPANY 99 Hartwell Avenue, P. O. 
Box 223 Lexington, Massachusetts 02173 Tel. 617/862-6905

Meetings

- International Microwave Symposium: IEEE, U. of Colorado, Boulder, June 4–6.
- National Computer Conference and Exposition: AFIPS, New York Coliseum, June 4–8.
- Consumer Electronics Show: EIA, McCormick Place, Chicago, June 10–13.
- Chicago Spring Conference on Broadcast and TV Receivers: IEEE, Marriott, Chicago, June 11–12.
- Power Electronics Specialists Conference: IEEE, California Institute of Technology, Pasadena, June 11–13.
- International Conference on Communications: IEEE, Washington Plaza, Seattle, Wash., June 11–13.
- Frequency Control Symposium: ECOM, Howard Johnson's Motor Lodge, Atlantic City, N.J., June 12–14.
- National Cable TV Association Annual Convention: NCTA, Convention Center, Anaheim, Calif., June 17–20.
- International Symposium on Electromagnetic Compatibility: IEEE, New York Hilton, New York, June 20–22.
- International Symposium on Fault-Tolerant Computing: IEEE, Palo Alto, Calif., June 20–22.
- Design Automation Workshop: ACM, IEEE, Sheraton, Portland, Ore., June 25–27.
- International Symposium on Information Theory: IEEE, Ashkelon, Israel, June 25–29.
- International IEEE G/AP Symposium and USNC/URSI Meeting: IEEE, U. of Colorado, Boulder, Aug. 21–24.
- 17th Annual Meeting and Equipment Display: SPIE, Town and Country, San Diego, Calif., Aug. 27–29.

Here's a dependable, quick-delivery source for Zero Defect High Voltage Silicon Rectifiers

- MEETS STRINGENT ENVIRONMENTAL REQUIREMENTS
- HIGH TRANSIENT VOLTAGE RATINGS
- EXTREMELY LOW LEAKAGE
- WORKING VOLTAGE RANGE ... 200V. THROUGH 50kV.

If you're looking for on-time delivery of miniature and microminiature High Voltage Silicon Rectifiers, look no further than ERIE. You simply can't beat our zero defect rectifiers since these units were first designed for high reliability night vision, lunar and aerospace applications.
Their small size makes ERIE rectifiers ideal for thick film substrates, miniature power supplies, airborne displays, CRT displays, color TV, microwave ovens and other industrial and commercial applications where small size, reliability and superior performance are critical. All ERIE High Voltage Silicon Rectifiers feature conservative voltage ratings, fast recovery time, fast turn-on time, wide operating temperature range, high transient voltage ratings, low reverse leakage and unsurpassed reliability. ERIE also offers double sealed, miniature Full Wave Bridge Rectifiers perfect for P.C. use, with ratings up to 1000 volts per leg. So think ERIE for your High Voltage Silicon Rectifiers. Write TODAY for our new 24-page catalog High Voltage Components and Devices. ERIE TECHNOLOGICAL PRODUCTS • Erie, Pennsylvania 16512 Circle 25 on reader service card The world didn’t need another darned-good-and-expensive rack and panel connector. Uniform resistors reduce costs. If you're really serious about cost, be serious about quality. That could be money in your pocket. Allen-Bradley's exclusive hot molding process offers physical consistency that can reduce your installation costs. Bodies are a uniform size with clean squared ends, free from coatings which adversely affect automatic handling equipment. Lead lengths and diameters are precise. Resistors with uniform physical characteristics, accurately placed on tape reels, eliminate insertion machine jam-ups. And trouble free assembly means less production down-time; lower cost. That's A-B quality. Consistent shipment after shipment. If you think all resistors are the same, send for our free booklet, "7 ways to tell the difference in fixed resistors." Allen-Bradley Electronics Div., 1201 S. 2nd St., Milwaukee, WI 53204. Export: Bloomfield, NJ 07003. Canada: Allen-Bradley Canada Ltd., Cambridge, Ont. U. K.: Jarrow, Co. Durham NE32 3EN. 
Tiny tape head ups disk density by seven times VRC California, Los Angeles, has developed 0.4-mil flying tape heads for disk drives. The prototype units permit disk densities of about 1,500 tracks per inch, over seven times that of present IBM 3330 disks with their 4.3-mil heads. VRC California, a subsidiary of Vermont Research Company, is also using slightly larger 1-mil heads, plus a special locating system earlier developed for amorphous laser memories, to produce disk drives the size of a small drawer-type OEM disk but with 3330-type storage. The 600 track-per-inch density permits a 60-megabyte memory in one IBM System-3-type cartridge using a 3330 disk; other small systems typically store 10 megabytes. Tektronix starts OEM design and sales program After a 25-year tradition of catalog sales, Tektronix Inc. has started up an original-equipment-manufacturer design and sales program in its Information Display Products division. Under the new policy, the division will disclose new developments to systems manufacturers months before they would normally be introduced and will also design special versions for OEMs. Among the first developments being offered are a video scan converter for computer-graphics and analog-instrumentation systems, a hard-copy printer for computer-display terminals, and a 19-in.-diagonal storage CRT for computer-terminal applications. The tube can display more than 8,000 characters, compared with some 2,500 for today's most advanced unit. West Coast gets fully automated stock system What appears to be the first fully automated stock-transaction system has gone on line at the Pacific Stock Exchange. Its Comex system should particularly benefit the small investor who buys up to 199 shares. An earlier version of the system, useful for odd lots (under 100 shares) helped eliminate odd-lot charges at PSE since it reduces the cost of handling the small orders for the brokers. 
The system uses displays and keyboards from Quotron Systems, Inc., two IBM 370/145 computers, and two DEC computers as communications processors. It ties together the exchange's San Francisco and Los Angeles branches.

Gold bumps offer beam-lead reliability, flip-chip strength

A new process for preparing integrated circuits for automated packaging is being developed by the Solid-State Electronics Center of Honeywell Inc., Plymouth, Minn. Gold bumps, which offer the reliability of beam leads and the ruggedness of conventional solder-bump flip-chip techniques, are plated onto an evaporated, intermediate, multilayered base of chrome, copper, and gold. The contact resistance between the bump and the chip is typically 30 milliohms, while the pull strength is as high as 20 to 50 grams. Because a plating process is used, gold costs are kept to a minimum. The gold bumps can be bonded to tin-plated-copper lead frames. The technique is compatible with most semiconductor manufacturing processes.

MECL 10,000 moves to peripherals

Officials at Motorola's Semiconductor Products division think they have a bonus on their hands because of the apparent acceptance of the MECL 10,000 emitter-coupled logic line by computer-peripheral-equipment makers.

Electronics newsletter

**Italian firm enters U.S. hi-fi market with high-power IC**

SGS-Ates, the Italian-based semiconductor company, has pushed integrated-circuit technology into the high-fidelity realm with its development of a 10–15-watt audio amplifier chip—two to three times more powerful than previously available single chips—which will be available by the end of the year. Pietro Fox, U.S. marketing manager of SGS-Ates, estimates there are one million hi-fi amplifier sockets in the U.S. that this product could fill. And unlike audio amplifier ICs now available, this chip will have the high power and low distortion—1% at 15 W—that is required for hi-fi service.
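The 15-watt rating above implies a substantial output swing for a monolithic chip of the day. A rough sizing sketch, assuming a conventional 8-ohm loudspeaker load (the load value is our assumption; the item quotes only the wattage and distortion figures):

```python
import math

# Output swing a 15 W audio amplifier must deliver into an assumed 8-ohm load.
P_WATTS = 15.0   # continuous output power, from the item
R_LOAD = 8.0     # assumed loudspeaker impedance, not from the item

v_rms = math.sqrt(P_WATTS * R_LOAD)   # from P = V^2 / R
v_peak = v_rms * math.sqrt(2)         # sine-wave peak

print(f"{v_rms:.1f} V rms, {v_peak:.1f} V peak")  # ~11.0 V rms, ~15.5 V peak
```

About 31 volts peak-to-peak, which is why supply voltage and on-chip dissipation, not transistor count, were the limiting factors for monolithic audio power.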
Also, SGS-Ates, which has been marketing complementary MOS in Europe on a limited basis, is planning to bring its C-MOS products to this country late this year. SGS-Ates has a licensing agreement with RCA to build the 400-series COS/MOS line.

**Chopper-stabilized op amp is packaged in standard DIP**

Texas Instruments has built the first chopper-stabilized op amp to be marketed in a standard 14-pin DIP. Previously, the low offset, low drift, and high gain of chopper-stabilized devices were available only in bulkier module packages. The two-chip op amp has a differential capability and fast slew rate (25 volts per microsecond) that in most cases are found only in modules. Sample quantities of the SN62/72088 are available at $70 for the device specified over 0° to 70°C and $120 for the -25°-to-85°C version.

**Boston visitors try domestic-satellite communications**

People attending the International Communications Association's annual meeting in Boston early this month were among the first in the nation to communicate via domestic satellite. The link was set up by RCA Global Communications Inc., using Telesat Canada's Anik II satellite. By September of this year, public voice-grade circuits should be operating between New York City and Los Angeles or San Francisco at a monthly charge of $1,400, roughly 40% less than terrestrial circuits. The initial system is an interim arrangement, the forerunner of a more extensive domestic-satellite communications system that will serve all 50 states and Puerto Rico when RCA Global Communications completes it in two years' time.

**Addendum**

A third-party leasing agreement has been reached between Memory Technology Inc., Sudbury, Mass., and Alanthus Corp., White Plains, N.Y., for $16 million worth of MTI's add-on memories for the IBM 370 Models 155 and 165. The MTI semiconductor memories have up to four megabytes on one port, twice IBM's core-memory capacity in less than half the space.
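The 25-V/µs slew rate quoted above for TI's chopper-stabilized op amp bounds how fast a large sine wave the device can follow undistorted, via the standard relation f = SR / (2πV_peak). A minimal sketch; the 10-V peak swing is our illustrative assumption, not a figure from the item:

```python
import math

def full_power_bandwidth(slew_rate_v_per_us: float, v_peak: float) -> float:
    """Highest sine frequency (Hz) reproducible at v_peak amplitude
    without slew limiting: f = SR / (2 * pi * Vpeak)."""
    slew_rate_v_per_s = slew_rate_v_per_us * 1e6
    return slew_rate_v_per_s / (2 * math.pi * v_peak)

# 25 V/us (the SN62/72088 spec) at an assumed 10-V peak swing
print(f"{full_power_bandwidth(25, 10.0):.0f} Hz")  # roughly 400 kHz
```

At small signal swings the usable bandwidth is correspondingly higher, since the slew limit scales inversely with peak amplitude.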
**C-LINE POWER SWITCHING TRANSISTORS**

**100 WAYS TO GET MORE INDUSTRIAL SWITCHING PERFORMANCE FOR YOUR MONEY**

Unitrode's UPT Power Switching Transistor series offers the optimum combinations of price and performance from 0.5A to 20A, and up to 400V in 3 package types. Choose from 100 different transistor types for more efficient and simplified circuit design in power supplies, switching regulators, inverters, converters, solenoids, stepper motors and other inductive-load driving applications. They're available off-the-shelf from your local Unitrode distributor or representative. For the one closest to you, dial (800) 645-9200 toll free, or in N.Y. State (516) 294-0990 collect. For immediate action on any specific problem, call Sales Engineering collect at (617) 926-0404, Unitrode Corporation, Department 6Y, 580 Pleasant Street, Watertown, Massachusetts 02172. For specific data sheets containing full characterization of devices, check the table/coupon below.

| Check Here | I<sub>c</sub> | V<sub>ce(sat)</sub> | SERIES/PACKAGE | t<sub>on</sub> | t<sub>off</sub> | 100-qty prices each |
|------------|--------|------------|--------------------------|-------|--------|-----------------|
| | 0.5ADC | up to 400V | UPT011-T05, UPT021-T066 | 50ns | 400ns | $1.02 to 2.30 |
| | 1ADC | up to 150V | UPT111-T05, UPT121-T066 | 100ns | 250ns | $0.83 to 1.86 |
| | 2ADC | up to 150V | UPT211-T05, UPT221-T066 | 130ns | 300ns | $1.08 to 2.42 |
| | 2ADC | up to 400V | UPT311-T05, UPT321-T066 | 200ns | 800ns | $1.25 to 2.73 |
| | 3ADC | up to 400V | UPT521-T066, UPT531-T03 | 200ns | 900ns | $2.30 to 3.80 |
| | 5ADC | up to 150V | UPT611-T05 | 250ns | 550ns | $1.25 to 2.72 |
| | 5ADC | up to 400V | UPT621-T066 | | | |
| | 10ADC | up to 150V | UPT721-T066 | 250ns | 800ns | $3.38 to 5.43 |
| | 10ADC | up to 400V | UPT731-T03 | | | |
| | 15ADC | up to 150V | UPT821-T066 | 250ns | 550ns | $3.14 to 5.05 |
| | 15ADC | up to 400V | UPT831-T03 | | | |
| | 20ADC | up to 150V | UPT921-T066 | 500ns | 1200ns | $7.67 to 13.92 |
| | 20ADC | up to 400V | UPT931-T03 | | | |
| | | up to 150V | UPT1021-T066 | 450ns | 350ns | $8.29 to 9.93 |
| | | up to 400V | UPT1031-T03 | | | |
| | | up to 150V | UPT1131-T03 | 300ns | 600ns | $6.91 to 9.52 |

Please send data sheets on the specific C-Line Power Switching Transistor series checked.

Name: ____________________________ Title: _____________________________ Co.: ______________________________ Address: __________________________ City: _____________________________ State: ____________________________ Zip: ___________ Telephone: _________________________

See EEM Section 4800 and EBG Semiconductors Section for more complete product listing. UNITRODE quality takes the worry out of paying less.

---

**NEW SOLID STATE RELAYS FROM GENERAL ELECTRIC**

All the technology that went into making General Electric a leader in couplers and power semiconductors is in our first solid state relay. Two models, 5-Amp GSR10AU5 and 10-Amp GSR10AU10, feature:

- 120 V line operation
- Zero-voltage switching, 5V max.
- T2L operation, -30 to 100°C
- Operates from 6.3 to 140 V RMS
- 1500 V RMS photon isolation

AVAILABLE NOW FROM YOUR AUTHORIZED GE DISTRIBUTOR. GENERAL ELECTRIC. Circle 32 on reader service card

---

**Optical-fiber communications spurred by new waveguide and low transmission losses**

Bell Labs and Corning Glass take the lead in innovating light pipes for communication systems

Fiber-optic waveguides for communication systems look more practical now that Bell Laboratories has developed single-material fibers with losses as low as 5 decibels per kilometer and Corning Glass has reported more conventional core-cladding fibers with losses of 2 dB per kilometer.
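The attenuation figures above are easier to compare as surviving power fractions. A minimal sketch (function name and the 10-km span are ours, for illustration only) converts a dB/km loss over a given length into the fraction of launched light that arrives:

```python
def surviving_fraction(loss_db_per_km: float, length_km: float) -> float:
    """Fraction of launched optical power remaining after the given
    fiber length, from total loss in dB: fraction = 10^(-dB/10)."""
    total_loss_db = loss_db_per_km * length_km
    return 10 ** (-total_loss_db / 10)

# Compare the 2- and 5-dB/km fibers reported here with a 20-dB/km fiber
for loss in (2, 5, 20):
    print(f"{loss} dB/km over 10 km -> {surviving_fraction(loss, 10):.1e} of input power")
```

Over 10 km, a 2-dB/km fiber delivers about 1% of the input light, while a 20-dB/km fiber delivers a vanishing 10^-20, which is why the drop from 20 dB/km to a few dB/km matters so much for repeater spacing.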
In mid-1971 the best fibers had losses of about 20 dB per kilometer [Electronics, July 5, 1971, p. 46], and even today a 20-dB/km loss is considered very good [Electronics, April 26, p. 53]. While much work remains to be done, scientists at Bell Labs now consider glass fibers practical replacements for, or more likely additions to, copper wire, particularly in cities, where conduit space below ground, and hence communication capacity, is becoming limited. Hair-thin glass fibers packed together to form a cable a quarter of an inch in diameter could carry as many communication signals as thousands of ordinary telephone cables, according to Bell Labs.

**Modulation.** A working system could use semiconductor lasers, modulated at rates between a few megahertz and a few gigahertz, to generate the light signals. Avalanche photodiodes could be used as detectors at the receiving end. Research thus far has shown that optical fibers can carry signals modulated at rates up to 6 MHz, equal to about 3,000 telephone calls.

The Corning research was reported last month in a paper at the American Ceramic Society meeting in Cincinnati, Ohio, by Peter C. Schultz, senior ceramicist. Schultz ran his experiments at a wavelength of 1,050 nanometers with a glass fiber 1.2 km long. The fiber, made of two high-silica glasses, had a core with an index of refraction somewhat higher than that of the cladding. Although this is an experimental fiber, scientists at the Corning, N.Y., firm believe that its relatively simple configuration will ease production.

**Three in one.** Three Bell Labs scientists (Stewart E. Miller, Enrique A. J. Marcatili, and Peter Kaiser) devised a glass-fiber structure with three elements, all made of the same low-loss glass. Fibers made with differing glass materials, according to Bell Labs, contain undesired impurities that interfere with the passage of light and cause transmission losses in the fiber. The Bell Labs design consists of a tube, a solid inner rod, and a supporting plate for the rod.
The technique of centering the light involves wave, rather than geometric, optics. Marcatili says that the changing height between the rod and the supporting plate is equivalent to a change in refractive index. The tube serves as protection for the fiber assembly. The preformed waveguide consists of a tube, 1 centimeter in diameter, with the interior plate supporting a rod a few millimeters in diameter. As this is heated and pulled, all the elements retain their proportions while the external dimension is reduced to only a few mils.

**Stronger than steel.** The tensile strength of newly drawn fibers varies from several thousand to several million pounds per square inch, depending on the specific fiber and the conditions under which it is used. This is considerably greater than that of copper and even better than that of steel. Thus there should be little difficulty in drawing such fiber cables through conduits. The strength of optical fibers will, however, degrade with time, but this is not considered to be a problem, since the fibers will be in bundles and will also gain strength from a sheathing.

**Needed: laser reliability.** While the work on low-loss glass fibers has proceeded at a rapid rate, long-range optical systems require long-life semiconductor lasers, and these light sources have not had the reliability required for communication systems.

---

**Communications**

**Stabilized laser communicator operates from moving vehicle**

Laser beams offer hope for secure communications, but the major problem has been keeping the two ends of the circuit aligned. A new development from American Laser Systems, Santa Barbara, Calif., may change that, and both military and police organizations are interested [Electronics, May 10, p. 26]. The system permits 15-to-20-mile two-way communications between moving land vehicles, ships, or helicopters.
The laser communicator combines infrared-laser transceivers with stabilized optics, which are modifications of binoculars made by Stabilized Optics Corp., Cupertino, Calif. The binoculars, which permit the use of high-power, 20x magnification on moving ships and vehicles, are already in use by police and naval forces. Conventional binoculars are limited to 10x or less in these situations. The stabilized communications system uses a simple but patented physical principle that cancels the magnified effects of platform movement with opposite-phase reflected light. A small gyroscope is also included, but it serves only to overcome the friction of bearings at very low vibration frequencies.

**Avalanche detector used.** In the communicator, the optical path to one eyepiece of the binocular is replaced by a sensitive silicon avalanche detector. The stabilized system permits the sensitive but sharply focused detector to give a 100x system gain over conventional laser systems. The transmitter in the system is a small semiconductor laser diode with a peak output of 2 watts and an average output of 1 milliwatt. The range of the unit is 16 to 20 miles, but of course depends on the visibility of the receiver. The communicator does not have to be aimed accurately: the bridge of a ship or a whole automobile is an adequate target. The American Laser Systems unit operates from flashlight-size cells, giving 5 to 7 hours of operation. Power drain is about a third that of a flashlight bulb.

**Computer transmission.** According to Duncan Campbell, president of American Laser Systems, the receiver and transmitter are basically digital in nature. The U.S. Navy has shown interest in data communications between ships using the technique, and Campbell also sees a future for the individual transmitter and receiver modules in computer transmission between fixed points.
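The 2-watt-peak, 1-milliwatt-average figures quoted for the laser diode above imply a very low transmit duty cycle, which is consistent with the modest battery drain. A quick sketch, assuming simple rectangular pulses (the article does not give the actual pulse format):

```python
def duty_cycle(avg_power_w: float, peak_power_w: float) -> float:
    """Fraction of the time a rectangular-pulsed transmitter is on:
    average power / peak power."""
    return avg_power_w / peak_power_w

# 1 mW average out of a 2-W peak
print(f"{duty_cycle(1e-3, 2.0):.2%}")  # 0.05%
```

A transmitter that is on only 0.05% of the time can deliver strong, easily detected pulses while drawing far less battery power than its peak rating suggests.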
---

**Auto electronics**

**Solid-state sensor monitors car fumes**

A new type of diffused semiconductor pressure sensor for automotive emission control has been developed by Bell and Howell. The device is expected to meet the long-term environmental requirements of cars while also surviving the legendary sharp pencils of Detroit's automotive economists. The reliability required of semiconductors used in automobiles is actually higher than in the aerospace industry. For instance, conventional wire and tape strain gages for aerospace are not suitable for monitoring pressures in emission control, whether by fuel injection, exhaust-gas recirculation, or ignition control. According to Robert L. Cheney, project engineer at Bell and Howell's electronics and instruments group, Pasadena, Calif., the basic problem is that an aerospace transducer may be subjected to several temperature cycles and hundreds of pressure cycles, but a transducer in a car traveling 50,000 miles will be subjected to hundreds of temperature cycles and thousands of pressure cycles.

The Bell and Howell device is not the first strain gage to use the piezoresistivity of a semiconductor, but earlier ones used a whole piece of silicon as a sensor, with separate support. The new diffused device uses the silicon slice as both support and pressure diaphragm, with only small areas actually serving as sensors. This eliminates the mounting-interface problem between sensor and support, a vital consideration in view of the tiny displacements involved. Also unlike conventional gages, the technique seems well adapted to low-cost automated assembly and checkout, for manufacturers can use the proven photochemical methods used in making integrated circuits and transistors.

**Tolerance.** One area in which semiconductor gages are basically inferior to conventional ones is temperature tolerance.
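A textbook remedy is to place matched sensors in a Wheatstone bridge, so that a common temperature shift in both sensors cancels at the output while opposite strain-induced shifts add. A numerical sketch; the resistances, supply voltage, and the 1% shifts below are our illustrative assumptions, not figures from the article:

```python
def bridge_output(vs: float, r1: float, r2: float, r3: float, r4: float) -> float:
    """Differential output of a Wheatstone bridge driven by supply vs.
    r1/r2 form one divider, r3/r4 the other; sensors sit at r2 and r4."""
    return vs * (r2 / (r1 + r2) - r4 / (r3 + r4))

R = 1000.0   # nominal sensor resistance, ohms (illustrative)
VS = 10.0    # bridge supply, volts (illustrative)

# Temperature: both sensors shift by the same +1% -> output stays zero
temp_out = bridge_output(VS, R, R * 1.01, R, R * 1.01)

# Pressure: strain drives the two sensors in opposite directions -> net signal
press_out = bridge_output(VS, R, R * 1.01, R, R * 0.99)
print(temp_out, press_out)
```

Common-mode (temperature) shifts leave the bridge balanced; only the differential (strain) term produces an output voltage.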
However, this temperature dependence is virtually eliminated in the Bell and Howell device, since it combines two or more sensors in a bridge configuration. As the sensors are simply small areas on a silicon slice, they cost no more than a single sensor. The usual automatic-adjustment techniques of laser-trimmed thick-film resistors and temperature-sensitive resistance elements permit this unit to respond to a change of less than 0.002% of full scale per degree Fahrenheit. This is five times better than typical aerospace standards. An unusual feature of the gage is the bond between the silicon diaphragm-sensor and the glass tube that supports it. The bond was developed by P.R. Mallory Co., Indianapolis, Ind., and it produces a stable, pressure-tight joint. The stability of the semiconductor devices also eliminates the need for periodic calibration, which would be impractical in automotive use. Tests of transducers over a million cycles from -65° to 250°F while being pressure-cycled from full vacuum to ambient pressure indicate stability of better than 0.5% of full scale. Conventional transducers subjected to the same cycling have shown an order of magnitude greater change, says Cheney. Robert W. Meyers, product manager at Bell and Howell, feels that the new device can beat both the performance and price requirements of the auto industry and expects such sensors to be picked for use on 1975-model-year cars, at least those sold in California.

---

**Anti-skid unit is digital**

At the first public showing of its SKID-TROL anti-wheel-lock braking system for heavy trucks, Rockwell International Corp. said it hopes to capture at least 30% of the $100 million market [Electronics, May 10, p. 70]. The electronics for SKID-TROL's digital computer comes from Rockwell's Microelectronics division, Anaheim, Calif. The firm's Rockwell Standard division is handling the system integration.
The computer is the heart of the only fully digital system for the function that will be required by new Federal safety regulations. It uses an MOS LSI calculator-type chip programmed for this special function. Other manufacturers of anti-skid systems are expected to adopt the digital approach in the future.

---

**VW interlock is built around IC**

The electronics firm that helped Volkswagen with its new seatbelt-interlock equipment is Intermetall GmbH, the German member of the ITT Semiconductor Group and developer of the integrated circuit around which the equipment is built. Intermetall calls its IC the first made in Europe for interlock units. Designated the SAJ 280, the device will be marketed in the U.S. through the ITT group's American facilities. With its new equipment, Volkswagen is complying with U.S. regulation MVSS 208, the motor vehicle safety standard that requires seatbelt interlock systems on all 1974 passenger models [Electronics, March 1, p. 70]. Such systems prevent the driver from starting his vehicle unless he and his front-seat companions have fastened their lap and shoulder straps.

**Signs and alarms.** Like any seatbelt interlock system, VW's equipment lights a "fasten seat belts" sign and sounds an acoustical warning when a set of conditions is not satisfied. Electronic circuits monitor the proper sequence of events, from seat occupancy and belt fastening to handbrake loosening and engine turn-on. Using inputs from sensors at the seats, in the seat belts, in the oil-pressure system, and at the handbrakes, the circuits regulate an interlock solenoid so that the engine can be started only when the sequence is correctly followed. The solenoid blocks the starter when, for example, the belts are fastened before the seat is occupied. In addition to performing the basic functions spelled out in the U.S. regulation, the German system has a few refinements. One is a time delay that allows an engine restart within three minutes of shutoff, regardless of whether seat belts are fastened or not.
This feature will be welcomed by a driver who, for example, turns off the engine and gets out of his car to open a garage door. The time delay lets the driver put his vehicle into the garage without the need for him to fasten his belts. Another feature is a 10-second delay that prevents the starter from becoming blocked when a passenger, whose seat belts are already fastened, temporarily lifts himself from his seat in trying to find a comfortable position. Without this delay, the driver would have to go through the whole sequence of getting into the car, fastening his belt and loosening the brakes every time he pulls himself slightly off the seat. While other German electronics and car-accessory makers have chosen simple discrete solutions for the interlock circuitry, the Intermetall/Volkswagen designers have opted for an IC approach. "Although more difficult to realize, it makes for a more reliable and relatively inexpensive system when produced in volume," says Alfred P. Prillmann, sales manager for professional products at Intermetall. "The price of the system," Prillmann says, "is about the same as that of a discrete-transistor version." Now that it has delivered a limited number of ICs for the car maker's first production versions of the system, Intermetall will start mass producing them at its Freiburg plant next month. **Picked bipolar.** The 280 circuit packs onto a 3.5-millimeter-square chip roughly 100 transistors in addition to a number of diodes and resistors. The company has picked bipolar instead of MOS technology to insure circuit operation even when, as a result of cold weather, the supply drops to 6 volts, half of the normal 12-v supply. A bipolar design also makes it easier to get up to the current levels needed for driving the output stages and for relay operation, Prillmann says. The trend in U.S. seatbelt interlock systems is to use C-MOS, which operates from as low as 3 V and is more tolerant of power-supply variations than bipolar. 
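The sequence rules and delays described above can be sketched as a small decision routine. This is our illustrative reconstruction of the behavior the article describes (function name, time handling, and structure are assumptions), not Intermetall's actual circuit logic:

```python
def starter_enabled(seat_occupied_at, belt_fastened_at, now,
                    last_engine_off=None, seat_lift_duration=0.0):
    """Return True if the interlock would allow the starter (times in seconds).

    Illustrative reconstruction of the VW/Intermetall rules:
    - the belt must be fastened AFTER the seat is occupied
      (blocks pre-buckled belts)
    - within 3 minutes of engine-off, a restart is allowed unconditionally
    - a seat lift shorter than 10 seconds does not re-trigger the sequence
    """
    if last_engine_off is not None and now - last_engine_off < 180:
        return True        # three-minute restart window (garage-door case)
    if seat_lift_duration >= 10:
        return False       # occupant left the seat too long: redo sequence
    if seat_occupied_at is None or belt_fastened_at is None:
        return False       # sequence not yet completed
    return belt_fastened_at > seat_occupied_at   # correct order of events

# Belt fastened before sitting down: starter stays blocked
print(starter_enabled(seat_occupied_at=5.0, belt_fastened_at=2.0, now=10.0))  # False
# Normal sequence: starter enabled
print(starter_enabled(seat_occupied_at=2.0, belt_fastened_at=5.0, now=10.0))  # True
```

Checking the order of events, rather than just their presence, is what lets the system reject a belt that was buckled behind an empty seat to defeat the interlock.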
The Intermetall circuit, which comes in a 14-pin dual in-line plastic package, handles up to 25 milliamperes. Its current consumption under engine-off conditions is less than 5 mA, and leakage current is less than 1 microampere.

---

**Commercial electronics**

**Highway call box moves to corner**

The highway aid box is a familiar sight along major thoroughfares across the country, but these hard-wired telephone devices are not vandal-resistant and are prone to destruction in severe weather. Now hard-wired systems are being replaced by radio call boxes, which overcome these problems, and many have been set up on interstate highways. The first installation, on a 43-mile stretch of Florida Interstate 75, was put up by the ADT Corp., New York. It was followed by a string of boxes from Motorola along 20 miles of roadway between Fort Lauderdale and Miami, Florida [Electronics, March 1, p. 32]. Further, ADT is currently installing 248 help boxes along the 60 miles of Massachusetts I-495 and in various locations along interstate highways in southern Illinois. Moreover, the company, moving its system onto the street corner, is set to install 40 radio boxes in the small industrial-residential town of Weehawken, N.J., at a cost of $400,000. Motorola's installation of 90 boxes cost $328,000.

The ADT box, unlike Motorola's, uses neither voice communications nor batteries. Instead, the user activates a magneto when opening the door of the box, which in turn provides power to send a radio signal to a computer-type console. When the signal reaches the console, the type of assistance requested and the location of the box flash on a digital readout and are permanently recorded on paper tape. An operator returns a signal that the message has been received. The console can handle as many as 9,999 remote call boxes. In the event that a pole holding the help box is knocked over, a tilt alarm is activated on the console.
Components for the system are manufactured by Solid State Technology Inc., Wilmington, Mass.

---

**measurements on the move...**

With TEKTRONIX you make your measurements quicker and with greater accuracy. The light-weight 465 and 475 portables combine ease of operation with laboratory precision to reduce your repair time at your customer's location. Some of the functions that make the 465 and 475 value leaders are: push-button trigger view, ground-reference button at probe tips, delayed and mixed sweep, CRT positioned between the vertical and horizontal controls, easy-to-interpret push-button mode selection, and more. With 200 MHz at 2 mV/div, the 475 offers lasting measurement capability. A linear 8 x 10-cm display and one-nanosecond sweep speed illustrate the ability to make complex, precise time measurements. The 465, with a bandwidth of 100 MHz at 5 mV/div and 5 ns/div, qualifies for most of today's measurement needs.

A different approach to battery operation: a 12- and 24-VDC option combined with a detachable battery pack provides continuous operation under a variety of situations. Measurements can be made when power availability is restricted to 12 and 24 VDC, when commercial power is limited, or when isolation from line or ground is desired. With the detachable battery pack you carry the weight of the batteries only when needed. Also available are rackmount versions of both the 465 and 475.

465 Oscilloscope ........... $1725 (includes delayed sweep and probes)
475 Oscilloscope ........... $2500 (includes delayed sweep and probes)
DC Operation (Option 7) .... add $75
1106 Battery Pack .......... $250
Rackmount .................. add $75

Let us help you make your measurements. To see one of these scopes, call your local Tektronix field engineer; he'll be glad to demo one for you. If you prefer, for additional information write Tektronix, Inc., P.O. Box 500, Beaverton, Oregon 97005. In Europe, write Tektronix Ltd., P.O. Box 36, St. Peter Port, Guernsey, C.I., U.K. TEKTRONIX: committed to technical excellence, "the value leaders." U.S. sales prices FOB Beaverton, Oregon.

---

**Computers**

**Systems house in microcomputer race**

The ever-growing commercial microcomputer derby has a new and seemingly unlikely entry: Teledyne Systems Co., historically best known for its military equipment. Taking advantage of the availability of chip sets from several semiconductor makers [Electronics, March 1, p. 63], Teledyne has come up with a family of microcomputers that are 2.5 in. in diameter, 0.10 in. high, and cost $1,000 in small lots. The Northridge, Calif., firm developed the microcomputers for a Government program and is now attempting to develop commercial customers. The company has several varieties in its line, from a basic, all-in-one-package unit with about four to five times the capability of the Intel MCS-4 microprocessor set, up to one that requires two packages and has close to the capability of a minicomputer, according to Earl Kanter, vice president of advanced systems. Typical add times for this level are 10 microseconds for 16 bits. The price is about $1,000 per package independent of the specific circuitry, although memory is less expensive than logic.

**P-MOS now, n-MOS later.** The basic technology in the computers is p-MOS. It makes use of available components from Intel, National, and Rockwell, but with architecture and other components different from standard sets. Kanter expects to use n-channel sets when they become available. Teledyne buys the parts in wafer form, then separates, tests, and applies them with hybrid techniques. Kanter says that the most popular package is the 2.5-in. round one, but other, rectangular units are available.
The sealed unit requires no maintenance, calibration, or service, and can easily be replaced in the field. The compactness of the microcomputer, its ease of replacement, and its projected high reliability (mean time between failures is 25 years) make it especially attractive for the automotive, process-control, chemical, and petroleum industries. It requires about 7 watts. In addition to the unique packaging, Teledyne is also offering a comprehensive set of software, and in fact Kanter feels that this is the unit's major advantage over the capabilities of the stock chip sets.

---

**3 interchangeable CPUs. That's modularity.**

SUE's basic CPU gives you a minicomputer that's high in flexibility yet low in cost. A second CPU provides decimal arithmetic functions. And the third meets the requirements of scientific or industrial applications that call for improved math capability. These all slide easily in and out of the chassis. Without any wiring. In fact, you can change CPUs at your plant (or even in the field if need be) in about 60 seconds. So a SUE system can change and grow as fast as your customer's needs change and grow.

The component computer. And you're not limited to one CPU at a time. SUE's multiprocessor capability lets you hook up as many as four on a single Infibus. Just choose the combination of processors that suits the system best. That's because SUE (the System User Engineered minicomputer) is the first of its kind: a component computer for systems. Its modular processors, memories and controllers all plug together in almost any combination to solve your application problems. That includes I/O controllers, but you'll never need more than two basic types with SUE: one bit-serial, one word-parallel. These will adapt to any I/O device.

Wider choice of peripherals. We offer a full line of peripherals to go with SUE: IBM-compatible 5440 disk drives, CRT/keyboards, printers from 100 cps to 600 lpm, magnetic tapes, cassettes, punched-card devices and paper tapes. Anything your system needs.

Complete software tools. To make your programming burden lighter, we offer a full set of software tools: sort/merge, DOS, assemblers, utilities and RPG/SUE. That last item is 98% compatible with RPG II, by the way. And we're the only company we know of that unconditionally warrants all our software for a full year.

Built for systems builders. SUE's built-in flexibility makes it fit your systems now, makes it easily changeable later on. You can be sure we'll be here later on, too. Which is one more advantage of dealing with an established, reputable company like Lockheed Electronics. Let's talk. Call (213) 722-6810, collect, or write us at 6201 E. Randolph St., Los Angeles, California 90040. That's SUE. Lockheed Electronics Data Products Division. See SUE systems at the National Computer Conference, booth #2831.

---

**Production**

**Ribbon wire virtues don't move users**

Bonding with round wire has been a headache for IC and hybrid manufacturers, but a switch from round wire to flat ribbon wire could provide a cure. Alvin H. Sher and Herbert K. Kessler of the National Bureau of Standards have recently completed studies showing that an ultrasonic bonding tool can operate over a much broader time-amplitude range yet still provide a given pull-strength value when the wire used is ribbon-shaped. Potential users also believe it is the way to go, but not right now.

Leo W. Czarnecki, manager for equipment engineering at Fairchild Camera & Instrument, Mountain View, Calif., says Fairchild tried ribbon bonding and found its strength to be its most outstanding characteristic. He says it takes current surges better and its assembly is easier. Czarnecki believes the greatest demand for ribbon bonding will be in power devices. But for now, Fairchild is staying with round wire for economic reasons.
Frank Stevens, general manager of Sigmund Cohn Manufacturing Co. Inc., Mount Vernon, N.Y., which supplies bonding wire to the industry, says that ribbon wire is just "a few days after birth," and few are using the technique in volume. In one series of tests, Sher and Kessler learned that tool displacement amplitude can be varied over wide margins while still meeting a 10-gram pull test. The margin of adjustment is much tighter for bonding round wire. A second advantage of ribbon wire is that bonds can be stacked. This gives the device maker greater flexibility in point-to-point wiring and should be particularly attractive to hybrid users. But the technique has been around for more than two years, so the puzzling question is: why have so few users adopted it? One answer may be that IC device makers, in a boom period and only just now beginning to recover from 1970 and 1971, are too busy to innovate.

**Does ribbon wire cost more?** Cost depends on the tolerance requirements of the user. If tolerances are relaxed, as Stevens feels they should be, then the price of ribbon wire would probably rise no more than 10%. However, if the user sticks with ±3% tolerance on the cross-section dimensions, as is often the case with round wire, then the price might jump as much as 25%. Either way, going to ribbon wire shouldn't affect IC fabrication cost much, since the value of the wire is no more than 3% of the material costs in the IC.

---

**Components**

**Military holds out against plastic packs**

Intermittent or open bonds continue to plague plastic-packaged semiconductors tested by the U.S. Army for military applications, despite acknowledged gains in bond reliability and device moisture resistance.
This, coupled with the Army's view that "vendors are not interested in supplying plastic devices to 'hi-rel' specifications," have led the military to hold fast to its restrictions against general usage of plastic discrete and microcircuit semiconductors, the 1973 Electronic Components Conference was told in Washington. The steadily increasing preference of semiconductor makers for plastic packages could pose a future procurement problem for military users seeking hermetically sealed glass, metal, or ceramic packages, says Edward B. Hakim of the Army Electronics Command's Electronics Technology and Devices Laboratory, Ft. Monmouth, N.J. Citing Electronic Industries Association estimates that plastic packages accounted for about 70% of U.S. transistor production and approximately 55% of monolithic-integrated-circuit output in 1972, Hakim told the symposium that growth rates for plastic packages in these two market categories approximate 2% and 4% a year. Compared to hermetic packages, prices for plastic devices continue to decline. In 1972, transistors averaged 59 cents for hermetics compared to 13 cents for plastics, while hermetic ICs cost $1.56 compared to 63 cents for plastics. **Volume.** Following the symposium session on interconnection, where he outlined Army data on tests of interconnection reliability of plastic devices in a paper authored jointly with ECOM's Bernard Reich, Hakim identified the potential problem for military semiconductor users this way: vendors "can sell a million plastic devices a month to the computer industry. We may not buy more than a million a year." Moreover, the industrial customer's test requirements are much less rigid than the military's, making the market more appealing to semiconductor makers. Vendor disinterest in supplying plastic packages to 'hi-rel' specifications, he said, stems primarily from lot testing costs relative to the cost of the devices themselves. 
"Plastic device vendors would rather rely on short-term indicators than be confronted with the rigors of the normal 'hi-rel' specifications," Hakim explained to the ECC meeting. The three-day May meeting was sponsored jointly by the EIA and the IEEE. The Army semiconductor specialist was not prepared to write off military use of plastic devices indefinitely, however, noting that surveillance of device developments and performance should continue because their future use—"possibly within this decade"—may be justified. **Defects down.** Bonding defects, for example, are about 0.15% based on 1972 experience, he noted, "down significantly from the period when the bond problems first began getting visibility." Hakim forecast that this figure could drop to 0.01% by 1975 with present technology. However, if beam-lead devices or devices employing bump technology are generally applied, the rate of improvement could be significantly increased. Fort Monmouth, he said, is considering the use of "a liquid-to-liquid, -60°-to-+150°C thermal shock screen to cope with the bonding problem," but added "there is a great reluctance on the part of vendors" to accept it on the ground that such a test "is potentially destructive and capable of creating latent defects."
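Hakim's forecast implies a steep rate of improvement: going from 0.15% in 1972 to 0.01% by 1975 is a 15-fold reduction over roughly three years, as a quick calculation shows (figures from the text):

```python
# Implied improvement in the bonding-defect rate, from 0.15% (1972
# experience) to the forecast 0.01% (by 1975) -- figures from the text.
rate_1972 = 0.15   # percent defective
rate_1975 = 0.01   # percent defective, forecast
years = 3

total_factor = rate_1972 / rate_1975
annual_factor = total_factor ** (1 / years)
print(f"{total_factor:.0f}x overall, about {annual_factor:.2f}x per year")
```

That works out to roughly a 2.5x improvement per year on average, sustained over the whole period.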
Preliminary results at ECOM show the test is not destructive. Noncontrolled field-reliability data on plastic and hermetic transistors gathered by ECOM indicates dramatic improvements and suggests that the performance of plastics should approach that of hermetics by 1974–75. **Space electronics** **FCC okays Marsat, but questions remain** When the Communications Satellite Corp. announced that it planned to use the interim Navy navigation satellite as a civilian maritime satellite as well [Electronics, March 15, p. 36], it brought a storm of protest from the international record (message) carriers and potential equipment builders. They complained to the Federal Communications Commission that the deal was too quick, could give Comsat dominance in international maritime-satellite communications, and would effectively freeze them out of the system. Now, after much discussion, the FCC decided this month that Comsat could proceed "at its own risk" to contract with Hughes Aircraft Co. for three Anik-like satellites to start the $70 million program. But it left some touchy issues undecided—how to achieve workable joint ownership among competing companies and how those parties would choose a system manager. To find out who's really interested, the FCC ordered U.S. common carriers now providing maritime service and known to want to join—AT&T, ITT, RCA, TRT Communications Inc., and Western Union International—to sign up with the commission by June. **Comsat comeback.** Naturally preferring to keep its lead, Comsat argues that the commission's decision to let the participating companies select the system manager could breed disruptive discontinuity. By piggybacking a commercial maritime satellite system on the Navy's two-ocean interim satellite requirement, Comsat and the participating carriers could engineer a lucrative combination.
Satellite costs, including spare parts and development, would equal $39 million, launch costs would come to $26.5 million, and system development and filing fees would take up the rest, the company says. Moreover, participating in such a system would give companies entrée into a bigger market beyond—a projected global ship-shore-satellite navigation and communications system that maritime interests are seeking [Electronics, Feb. 1, p. 50]. **News briefs** **Wescon gains ERA as co-sponsor** The Western Electronic Show and Convention (Wescon), to be held Sept. 11–14 in San Francisco's Brooks Hall, will have a new co-sponsor with IEEE—the Electronic Representatives Association. WEMA withdrew as a co-sponsor earlier [Electronics, March 15, p. 25], but ERA's presence "guarantees some management and marketing flavor to the Wescon board," says a Wescon spokesman. All the exhibits will be contained in Brooks Hall this year, which means the show will be limited to about 500 booths. One new feature planned for the show this year is a two-day manufacturing seminar to be held at the San Francisco Hilton. **Bendix in $1 million deal with Argentina** Bendix International, New York, has received a $1 million contract from the Argentine Ministry of Agriculture and Cattle to supply an airborne remote-sensing and ground data-processing system for conducting surveys of the country's agricultural lands. Data will be gathered and stored on high-density magnetic tape and will be processed by a general-purpose computer provided by the Argentinian government. **Color sets have built-in cable converter** RCA Corp. says it is the first color-TV manufacturer to build into its sets the capability of receiving 24 cable channels in addition to conventional vhf and uhf signals. The new feature is going into RCA's top-of-the-line XL-100 solid-state units and will eliminate the need for a separate converter or selector device to obtain cable TV reception. 
**Analog Devices starts C-MOS line** Believing that "C-MOS is on the upswing and ready to replace TTL in many applications," Analog Devices' microcircuits operation in Santa Clara, Calif., announced its entry into the C-MOS field with three new products (see p. 71). These include a differential four-channel multiplexer, a single eight-channel analog multiplexer, and an uncommitted quad analog switch. All are compatible with TTL, DTL, and C-MOS logic. The new devices are aimed at applications in analog-to-digital and d-a converters, digital amplifiers, frequency multipliers, and digital filters. Analog Devices says part of the move was in anticipation of growth in the minicomputer and IC markets. **E-Systems wins airborne command post job** The Air Force Electronics Systems Division, Hanscom Field, Mass., has awarded a $20.5 million fixed-price-incentive contract to E-Systems Inc., Greenville, Texas, to equip two 747-200B Advanced Airborne Command Post (AACP) aircraft with electronics.
American component manufacturers believe they have won a significant victory in the development of a voluntary international components certification system under the International Electrotechnical Commission. This is the message just delivered by Leon Podolsky, chief negotiator for the U.S. IEC committee, to the Electronic Industries Association, following a round of meetings in Geneva last month. EIA, as well as a number of U.S.
officials, have long regarded the proposed multipartite standards pact as a nontariff barrier to trade, since the pact initially involved only European manufacturers [Electronics, March 30, 1970, p. 69]. Podolsky's report spelled out progress in getting "a generally satisfactory compromise" by most of the 13 member nations on two key U.S. points. The first is a provision that an "inspectorate in the country in which the product is released is responsible for the supervision of all testing and inspection necessary," thereby covering multinational manufacturing operations. The second provides for use of other than IEC specifications when there are none covering a given product. Following the Geneva round, which Podolsky dubbed "unexpectedly difficult" because some national delegates altered earlier positions, the IEC Council will vote at Munich in June on whether to accept the draft statutes for the new certification system. Odds on acceptance are put by Podolsky at "about 5 to 1." Participating nations are Australia, Belgium, Brazil, Canada, France, Germany, Israel, Italy, Japan, the Netherlands, the UK, U.S.A., and USSR. Domestically, the EIA says it is exploring a means of developing a national supervising inspectorate through existing Government or industry inspection organizations, "or some brand-new organization." It is also determining how much U.S. participation will cost manufacturers—and how much more a components user will be willing to pay for certified parts. A preliminary EIA estimate of a 3% increase in component costs has been criticized by some members as too high. In any event, the earliest the new system could be operational is late 1974. Bad news may be in store for those suppliers of interconnection and switching equipment who have been eyeing the General Services Administration's plans to install and operate telephone centers for Federal use. 
Following its cancellation of the publicized Middle River, Md., project to install its own Centrex-type system, GSA is re-evaluating projects at Erie, Pa., Winston-Salem, N.C., and Denver, Colo., as well as others as yet only on the drawing boards, to see whether the total telephone tariffs permit economical operation of its own equipment. "We may have to make modifications in the program," says one source, indicating that some cutbacks may be in order. GSA dropped the small, 200-line Maryland project after the Chesapeake and Potomac Telephone Co. produced a new tariff which the agency thought was too high to allow it to run its own equipment. Observers view that tariff as a way of stifling GSA's idea and discouraging large users from buying non-Bell equipment. Winston-Salem is planned to use 600-800 lines, while Denver has about 3,000. **Trade reform and the impact of Watergate** "When Nixon's trade bill makes it through this Congress... perhaps I should say if Nixon's trade bill makes it through this Congress, you won't be calling it 'the Trade Reform Act of 1973' anymore. You will be calling it 'the Mills bill.'" Thus, with a deft bow to Rep. Wilbur Mills—the Arkansas Democrat invariably described in the press as "the powerful Chairman of the House Ways and Means Committee"—one Administration trade specialist summarized the view of many in the capital following the opening round of congressional hearings on the President's proposal. The fact that even some Nixon Administration loyalists seem uncertain about the bill's fate is one of the less heralded consequences of the Watergate affair. Apart from all its other national and international implications, Watergate is expected to severely handicap the President's request for new and extraordinary powers to deal with the nation's mounting balance of payments and trade problems.
"The Administration's bill would give unprecedented powers to the President" to deal with trade problems, explains one congressional economic specialist, "and there are a lot of members up here having second thoughts about that, considering the way the White House has been handling itself lately." The electronics and aerospace industries, anxious to see the legislation passed, would do well to listen closely to the views of Chairman Mills, whose committee will shape the bill. **Another crisis.** While the President is confronted by a personal crisis of substantial proportions, the nation's trade problems present a national crisis of far greater significance for the long term. This was made plain by the mid-May disclosure by the Department of Commerce that the $10.2 billion balance of payments deficit in the first three months of 1973 was only $100 million less than that recorded during all 12 months of 1972. The red ink for the quarter was more than three times the $3.22 billion deficit shown in the first quarter of 1972. What is intriguing to anyone taking a closer look at the new Government figures is that this massive increase in the payments deficit came despite a significant improvement in the U.S. trade balance for the same period. Commerce blames it largely on a $5.9 billion flow of liquid private capital out of the country in the quarter, reflecting a lack of confidence in the dollar by those seeking to capitalize on the relative stability of European currencies. All of this presents two significant complications for industries such as electronics and aerospace that have heavy multinational interests. The first and most obvious of these is that action to turn the payments balance around cannot wait on the passage of new trade legislation later this year.
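The Commerce figures quoted above hang together arithmetically, as a quick sanity check shows (all values in billions of dollars, from the text):

```python
# Sanity check of the Commerce Department payments figures quoted above
# (billions of dollars, from the text).
q1_1973 = 10.2
q1_1972 = 3.22
full_year_1972 = q1_1973 + 0.1   # Q1 1973 was "only $100 million less"

print(f"full-year 1972 deficit: ${full_year_1972:.1f} billion")
print(f"Q1 1973 was {q1_1973 / q1_1972:.2f} times the Q1 1972 deficit")
```

A single quarter's deficit nearly matching an entire year's, and running better than three times the year-earlier quarter, is the scale of deterioration driving the alarm in Washington.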
If all the mounting signs of economic crisis are not dealt with swiftly and comprehensively, then the combination of payments deficits, inflation, and international monetary controls could easily snowball, taking a shaky securities market down with it. The Mills committee, among many others in Washington, is cognizant of this, of course. The second complication is that congressional and other Government fiscal leaders are also suspicious that a disproportionate share of the liquid capital outflow from the country is the responsibility of the multinational corporations, protest as some of them might to the contrary. Thus are the multinational technologists likely to find in the Congress a diminishing sympathy and a more intense questioning of the motives behind their pleas to eliminate suspension of Items 806.30 and 807.00 of the U.S. Tariff Schedules from the Nixon trade package [Electronics, April 26, p. 29]. **Shifting power.** Mills at one point questioned the wisdom of giving the President power to grant relief to industries threatened by imports by imposing temporary tariff surcharges or quotas. But such suspicions do not carry over to the White House request for power to suspend Items 806.30 and 807.00, under which products assembled abroad using U.S. components enter the country duty-free except for the value added, despite the strong opposition of such groups as the Electronic Industries Association and the Aerospace Industries Association of America. As AIAA president Karl G. Harr, Jr., put it strongly in early testimony before the Mills committee, "If the manufacturing is not done in this manner, it either will not be done at all or the components will be produced locally instead of in the United States. This would mean a real loss in American jobs—those of an estimated 37,000 workers" for AIAA's membership. Despite their persuasive ring, such arguments do not seem to have much influenced Mr.
Mills or many of his colleagues, who are being subjected to just as intensive lobbying by organized labor, which holds the opposite view. —Ray Connolly **Calculator has drive and display on a single substrate** Circuits of a new low-power calculator developed at Sharp Corp.
are on the same glass panel as the liquid-crystal display, making it a calculator on a single substrate. Even the key-switch interdigitated contacts—which are shorted by conductive rubber buttons embedded in an insulating rubber mat when the key is depressed—are on the opposite side of the same substrate. The only part of the calculator not on the same substrate is the direct-current voltage-stepup converter, which is mounted on a small substrate of its own—piggy-backed onto the main substrate. **In one.** Liquid-crystal displays are made by Sharp itself and feature dynamic drive. The optimum oblique viewing angle for best display contrast is assured by the use of a snap-up hood coupled to the on-off switch. Light to be reflected by the display enters from both the open front and from a plastic window toward the rear of the hood. The black-matte hood makes all but the selected segments appear black. Both the display segments and the two-layer printed wiring for the electronic circuits are fabricated on the lower surface of the substrate, while the key-switch contacts are fabricated on the upper surface. Connections between the printed circuits and the switch contacts are made around the edge of the substrate to simplify fabrication. The low power drain of both the liquid-crystal display and the C-MOS circuits enables the calculator to run for about 100 hours on a single penlite-size dry cell. What's more, the unit is complete in itself. "With this combination of long battery life and low-cost replacement, it didn't seem worthwhile to build an ac adapter," says Atsushi Asada, general manager, business machine division. Although the new calculator represents a far greater developmental effort than other recent units introduced by the company, a significant portion of it is not new at all. The C-MOS LSI calculator chip and clock generator and register MSI chip are the same units used in an earlier calculator that featured a light-emitting-diode display [Electronics, Aug. 14, p.
8E]. It is the lower power drain of the liquid-crystal display compared with an LED display that gives the almost two orders of magnitude reduction in battery cost and the doubling of battery life, even though only one cell, rather than four, is required. Two new ICs, both manufactured by Sharp, are required for the new calculator, though. Dynamic drive of the liquid-crystal display uses two identical packages of C-MOS segment drivers. It was the 28-lead maximum of the packages used, rather than the size of the chips, that dictated the decision to use two chips. For the backplate driver, a bipolar IC is used. This simplifies fabrication processing because it is much easier to build bipolar circuits that operate at the 28 V applied to the backplate than it would be to fabricate C-MOS ICs for this voltage. Total power drain is only about 12 milliwatts under standard operating conditions. The dc-to-dc converter, which has an efficiency in the range of 60–70%, ups the power drain from the battery to the nominal value of 20 mW. Display. The liquid-crystal display, which makes possible the low-power and single-substrate features of the calculator, differs a bit from other dynamic-scattering displays. The front-panel glass carrying the transparent segments is the substrate on which the calculator circuits are fabricated. A smaller glass plate carries the mirrored backplates. The transparent segment electrodes are indium oxide, while the mirrored backplates are aluminum. Sharp went to indium oxide for its longer life and its ease of fabrication to the required precision, compared with the usual tin oxide. Two coatings cover both the front and rear electrodes. The coating nearest the electrodes insulates them from the liquid-crystal material, preventing direct current from flowing between the electrodes and also preventing electrolytic breakdown of the liquid crystal at the electrodes.
Overlying the insulation layer is an activation layer, which has a surface roughness comparable to the size of the liquid-crystal molecules and causes the molecules to line up perpendicular to the panel when the display is undriven. The layer also acts to speed up the random orientation of the molecules when the display is driven. West Germany Solar cells charge the batteries in table-top cigarette lighters If two West German companies are right, solar cells may turn up on coffee tables. Rowenta-Werke GmbH and Braun AG, majority-owned by Gillette Co., Boston, Mass., are putting the space-age products into prototype table-model cigarette lighters and are looking into the possibilities of solar-cell-powered pocket lighters. Because patents are still pending, neither company is willing to give all technical details on their new lighters. Rowenta volunteers enough information, though, to give a rough idea of how its units work. Called the Solartronic, its lighter uses four 2-square-centimeter cells mounted on top of the case. There, the cells' bluish color imparts something of a decorative effect. Capacity. Rowenta says that it manages with only an 8-square-centimeter cell area "thanks to special circuit-design measures". The photocurrent charges a small nickel-cadmium battery with a capacity in excess of 100,000 milliwatt-seconds. The battery supplies the energy for a high-voltage ignition system which ignites the gas. A full battery lasts for at least 1,500 ignitions. That means the Solartronic can be used to light an average of 50 cigarettes a day for a full month without the battery having to be recharged by exposing the solar cells to a light source. Since a lighter on a living-room table is exposed to some form of light every now and then, the Solartronic stays charged much longer than that. Lighter operation then depends only on the amount of gas inside.
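Rowenta's battery figures are self-consistent, as a quick back-of-envelope check shows (illustrative arithmetic only, using the numbers quoted in the text):

```python
# Back-of-envelope check of Rowenta's Solartronic battery claims,
# using only figures quoted in the article.

battery_capacity_mws = 100_000    # NiCd capacity, milliwatt-seconds (quoted minimum)
ignitions_per_charge = 1_500      # quoted minimum ignitions per full battery

# Implied upper bound on energy spent per high-voltage ignition
energy_per_ignition = battery_capacity_mws / ignitions_per_charge
print(f"about {energy_per_ignition:.0f} mWs per ignition")

# Cross-check against "50 cigarettes a day for a full month"
lights_per_day = 50
days_between_charges = ignitions_per_charge / lights_per_day
print(f"{days_between_charges:.0f} days between charges")
```

At roughly 67 milliwatt-seconds per spark, 1,500 ignitions indeed cover 50 lights a day for a 30-day month.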
Wilfred Grosch, marketing manager at Rowenta, is convinced that solar-cell-powered lighters are here to stay and that more companies will follow suit in making them. Braun, which does not yet share Rowenta's optimism, wants to see whether demand justifies full-scale entry into the market. Whatever their prospects, solar-cell lighters will initially be a prestige product. Grosch points out that, when they hit the market in September, the least expensive of the company's new lighters will retail for "between 500 and 600 marks, probably closer to 600." At the current exchange rate, the price tag comes to between $175 and $210. Braun, which does not yet have any marketing plans, is tight-lipped on price but says it will be high. Both companies believe, however, that prices will come down eventually. They are pinning their hopes on electronics firms being able to supply less-expensive cells. Sweden Paging system would cover nation Sweden, the fourth-largest nation in Europe, is about the size of California, and it's half covered with forests. But no matter where any of the 8 million Swedes happens to be, he will never be out of touch if he's carrying a new pocket pager. The Swedish telecommunications authority has been given half a green light for a nationwide personal paging system, which engineers here say is unique in the world. The system has been under development and field test for almost four years. The half green light means that all that's left is to work out a deal with the Swedish Broadcasting Corp. to share its fm bands. Engineers at the telecommunications authority are optimistic that the broadcasting company will be willing to work out a deal—and this would mean that commercial launching of the system could come in 1975. The system involves equipping each pocket pager with a special 3-tone-code signal receiver. These tones range from 52 to 75 kilohertz.
When a caller wants to page a person, he uses an ordinary telephone, dials a special code number to get into the automatic-paging transmitter, and then dials the personal page number of the person he is seeking. Since each person in Sweden is already assigned a personal number for tax and census purposes, that number could be used. The caller hears a confirmation signal that the paging message has been sent out. Before hanging up, he dials his own phone number, which is recorded. When the person being paged hears the beep on his pocket receiver, he telephones a central exchange. The number of the caller is given to him automatically by a voice device that transforms the telephone number into a vocal message. Today, the Swedish Broadcasting Corp. broadcasts three programs on the fm band. Program 1 is educational shows and news, 2 is classical music, and 3 is light music. Broadcasting in stereo is done on a test basis on program 2. Since fm transmitters now cover the nation, the telecommunications authority could use these transmitters to broadcast the paging tones. Seeker. One nuance that the engineers would work into production receivers is automatic transmission-seeking, a capability that would tune the receiver to the most powerful transmitter. By using high-speed transmission, engineers figure that the present fm network (using 87–100 megahertz) could handle up to 400,000 customers—and they certainly don't expect this kind of business. However, they plan on a first production run of 5,000 receivers, which the authority would market and lease, just to get the ball rolling. After that, any manufacturer could offer pocket pagers, which the authority estimates could be sold for between $100 and $200. Users would pay a regular fee for the service—which is estimated to run about $50 for a full year.
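The article does not say how the 3-tone codes map onto the quoted 400,000-customer capacity, but a back-of-envelope sketch suggests the order of magnitude. The tone count below is an assumption for illustration, not a figure from the source:

```python
# Hypothetical sizing of the paging address space. The number of
# distinguishable tones (74) is an ASSUMPTION -- the article gives only
# the 52-75-kHz tone range and the 400,000-customer capacity figure.

distinct_tones = 74      # assumed number of usable tones in the 52-75-kHz band
code_length = 3          # each pager address is a sequence of 3 tones (quoted)

addresses = distinct_tones ** code_length
print(addresses)         # 405224 -- the same order as the quoted 400,000
```

If the receivers can resolve on the order of 74 distinct tones, ordered 3-tone sequences yield about 405,000 addresses, consistent with the engineers' 400,000-customer estimate.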
Norplex...your reliable hand for printed circuit boards right on through turn-on time Just the right laminates to handle your applications. Just the right team to help assure they keep on performing, right on through turn-on time. That's Norplex. Because reliability is what Norplex is all about. In finest-quality, true-to-specs printed circuit board materials. And in customer service through Norplex representatives who are more than just salesmen. They are laminate specialists who want the job done right . . . your way. And Norplex can offer you a wide choice of grades, foil thicknesses and sheet sizes...the widest in the industry. Plus a research and engineering staff geared to designing laminates with special properties for unusual applications. They're available when you need them. With plants in both Franklin, Indiana and La Crosse, Wisconsin, Norplex has the answers for all your printed circuit board requirements. Start the Norplex team working for you by calling or writing for our latest literature. Norplex Division, UOP (Universal Oil Products Company), Norplex Drive, La Crosse, Wisconsin 54601. Telephone: 608/784-6070. Norplex laminates by UOP © 1973 UOP Circle 58 on reader service card French avionics producers look abroad Ambitious armaments and aerospace programs that spawn new technology are noticeably lacking in France these days so the country's avionics producers won't have much in the way of brand-new technology to unveil at the upcoming Paris Air Show. All the same, they'll have some considerable commercial achievements to talk about. The supersonic Concorde transport and the medium-range, wide-body Airbus projects don't look like they'll turn into major outlets for hardware as once hoped. But French electronics companies are compensating for this setback at home by aggressive marketing around the world. Thomson-CSF, the biggest company in French avionics, did some 1 billion francs ($220 million) in aerospace business last year. 
This year the company's general manager, Jean-Pierre Bouyssonnie, expects aerospace sales will climb 15%. About 55% comes from outside France. Where Thomson-CSF has scored best on the ground is in its export push. Its big order at the moment is a nationwide air-traffic control system for Brazil, a project worth 350 million francs (roughly $77 million). Thomson-CSF radars also are on tap for U.S. airports. General Dynamics has an FAA contract to build 37 ASR-8 airport radars, developed in France. Texas Instruments has sold 135 sets of instrument-landing system hardware built under license from Thomson-CSF. ... but nation's telecommunications makers thrive at home French telecommunications equipment producers will have full order books through 1974 at least. In an effort to meet a fast-rising demand for telephones, the postal ministry plans to spend $2.3 billion next year for telecommunications. That's a hefty 25% more than the ministry has earmarked for this year. The goal now, promised by Prime Minister Pierre Messmer during the election campaign this spring, is 12 million lines in service by 1978. The current five-year economic plan calls for only 9.6 million lines by 1975. Plasma etches and cleans ICs in Japanese process Plasma gas ions replace chemical etchants and organic solvents in a new integrated-circuit fabrication process developed by Mitsubishi Electric Corp.—and quietly put into operation in the company's production facilities in the second half of last year. The company claims the process is superior to the ones it replaces by every yardstick applicable to production. Yield and uniformity of MOS threshold voltage are improved and finer line widths can be obtained. The number of processing operations and processing time are decreased, and the labor content of processing operations is cut. What's more, since the gas ions react to form harmless gases and water, the problems of disposal of spent acids and organic solvents can be forgotten. 
The plasma approach can automatically perform masked etching of silicon nitride layers or polycrystalline silicon layers and the removal of photoresist after the etching process is completed. Mitsubishi is now using these processes in the production of LSI chips for calculators, silicon-gate LSI memories, and linear bipolar circuits. German firm studies "wired nation" Now that the Wired City has become a household word, a new one, the Wired Nation, is cropping up in European electronic circles. In a study prepared by Siemens AG, communications experts at that company are proposing a scheme whereby cable television, which has thus far been limited to population centers, would be extended to the whole of West Germany. To wire up the country—which is about the size of Oregon—the company estimates that $7–14 billion would be needed. The Siemens Wired-Nation scheme would be based on a broad-band communications concept in which not only the functions of present-day cable-TV networks would be handled, but also services like viewer participation during program transmissions, question-and-answer educational programs, and various information services. In successive steps of service expansion, remote shopping, conference television, and video-phone transmissions would also be handled. **Surface regrowth improves material for magnetic-bubble use** Scientists at the Philips Research Labs in Eindhoven, the Netherlands, have found a way of producing a near-perfect bubble material—which is needed for practical memories. In the new Philips method, the thin magnetic monocrystalline layers needed for high bubble density are grown from a liquid phase on a nonmagnetic monocrystalline substrate, which has previously been subjected to a predipping process. The surface of the treated substrate shows considerably fewer crystal imperfections than bubble materials produced thus far. This, in turn, enhances bubble displacement in the magnetic layers on the substrate. 
In conventional bubble materials, imperfections in the crystal structure at or near the substrate surface greatly impede bubble displacement. In the Philips method, faults in crystal structure are minimized by first dipping the substrate in a bath containing the substrate constituents and heated to a temperature at which a thin layer of the substrate dissolves. Next, the temperature of the bath is allowed to drop slightly. This causes a new and more nearly perfect layer of substrate material to form on the substrate surface. Finally, the magnetic layer, from 3 to 5 micrometers thick, is epitaxially grown. **Activity surges in green LEDs** Look for an upswing in sales of light-emitting diodes in coming months. Following Siemens' new production line [Electronics, May 10, p. 56], Ferranti Ltd. has halved prices for its gallium-phosphide green-emitting dice to 15 cents each in quantities of 100,000, or about 20% more than similar red gallium-arsenide-phosphide dice. Similarly, a new line of small green monolithic seven-segment numerics is priced at $1.50 each for 100,000, not much more than comparable red numerics. And Monsanto is making proportionately similar cuts on its reflective-mode green numerics, which aren’t as bright as monolithics. **Addenda** Motorola has received advance notice of permission from the Japanese government to form a 50-50 joint venture with Alps Electric Co. Ltd. to produce semiconductors in Japan. . . . West Germany’s Grundig AG is readying the country’s first portable color-TV set. The unit will debut at the International Radio Show in West Berlin this August. Since no German—or European—component houses are yet producing picture tubes for color portables, Grundig had to turn to a Japanese supplier, Tokyo Shibaura Electric Co., and its shadow-mask tube with vertical slots and three in-line electron guns.
GREATEST YIELD IN THE ELECTRICAL FIELD ALLEGHENY LUDLUM STEEL GIVES YOU MORE FOR YOUR MONEY Critical demands for leadframes in semiconductors are more than semifilled by Allegheny Ludlum. Our product range and consistently high quality fill numerous needs. Wider strip? In widths up to 25 inches, our electrical alloy strip yields more good parts faster to cut costs. Sheet? Plate? Bar? Get the best...for laminations and shieldings. Motors. Transformers. Generators. Relays. Solenoids. Vibrators. Cores. Check our Sealmet alloys. Or Ohmaloy. Mumetal. Moly Permalloy, etc.: all products of strict A-L quality control...from computerized melting to final shipment...for consistently superior characteristics. We can earn your seal of approval...with special steels for glass-to-metal seals: AL 42 and 4750. Sealmet 1 and 4. AL 430Ti. And others. All meet tightest tolerance requirements...for sealed-beam headlights, fluorescent lights, electronic tubes. A pioneer in developing magnetic shielding materials, we stock Mumetal and Moly Permalloy for prompt delivery. Our Research Center, most elaborate in stainless and specialty alloys, helps solve special shielding problems. For more on how America's leading producer of stainless and specialty alloys can help you in the electrical field, write: Allegheny Ludlum Steel, Dept. 331, Oliver Building, Pittsburgh, Pa. 15222. Allegheny Ludlum Steel Division of Allegheny Ludlum Industries Electronics/May 24, 1973 Solid State vs. Hybrid vs. Electromechanical? Your Guardian Angel puts it all together! Control confusion? Contact Guardian. Perhaps the solution will be a solid state device. Or, an electromechanical relay. Or, a money-saving assembly combining both! Our vast experience in solid state, hybrid and electromechanical devices assures you the most effective possible solution. Choose the one best solution to your control design problems: - Solid State - Hybrid - Electromechanical - All of the above - None of the above - Ask your Guardian Angel! 1. **CUSTOM CONTROL PACKAGES** for sequential switching at precise intervals. In most cases, numerous standard Guardian solid state devices could do the job. But, to lower total cost, Guardian can combine electromechanical and solid state into a single package. The result? Unique solutions to specific control problems demanding adjustable time delay, priority logic, voltage sensing, circuit isolation, or virtually any switching problem your system might demand. 2. **CUSTOM SOLID STATE CONTROL PACKAGES** that put a dozen functions ranging from temperature sensing to time delay and voltage regulation in a single, miniaturized, low-cost module. 3. **SOLID STATE RELAYS** that perform the function of electromechanical relays with total isolation between control circuit and switching output. Standard designs for most applications—custom designs at near-standard prices! 4. **SOLID STATE TIME DELAYS** in just about any size, shape, form or delay range your application can demand. Need a 30-minute delay? Guardian’s got it. A 25-millisecond delay? We’ve got that, too. And they’re yours right off the shelf or in custom designs. 5. **HYBRID TIME DELAYS AND RELAYS** for low-cost, dependable solutions to perplexing design problems. At Guardian they’re yours in standard and custom designs. 6. **SOLID STATE VOLTAGE SENSORS** that protect other controls and motors from the damaging effects of under-voltage or phase loss during power outages and “brownouts.” 7. **ELECTRONIC “ZERO CROSSOVER” SWITCHES** that form an electrical “cushion” between signal input and load power by switching at 0.0VAC±10V. COMPLETE APPLICATION DATA IS IN THESE TWO FREE CATALOGS. GUARDIAN ELECTRIC MANUFACTURING CO. • 1550 W. Carroll Ave. • Chicago, Illinois 60607 Circle #3 on reader service card At MEPCO/ELECTRA, we've got 'em all. We can supply passive components to almost any military or industrial specification or for the less critical high-volume application. ER...it's here.
Commercial/industrial...here too. Delivery? In quantity, fast. From a shelf inventory of better than 50,000,000 pieces. Standard components from 10 worldwide plant locations. Complete? Everything from ceramic and electrolytic capacitors to metal and carbon film resistors. From carbon and cermet trimmers to DIP networks, hybrid microcircuits and thermistors. Our point is this... if you need high production or short runs for EDP, instrumentation, communications, entertainment or military application, check with us. It pays. The best buys are in our bag. Sold through North American Philips Electronic Component Corporation Talk to the man from MEPCO/ELECTRA FACTORY LOCATIONS: Columbia Road, Morristown, New Jersey 07960; 11468 Sorrento Valley Rd., San Diego, California 92121; P.O. Box 760, Mineral Wells, Texas 76067. Circle 65 on reader service card For years, people thought Teletype machines only talked to themselves. Ever since the information explosion and solid-state technology, our machines have been running in a very fast crowd. With computers. In fact, Teletype equipment is compatible with practically every computer-based communications system. For proof, you don't have to look any further than our product line. We built the model 33 to offer economy and reliability. For an economical wide-platen terminal, look at our new model 38. If you need heavy-duty operation, we make the model 35. And for the utmost in flexibility and vocabulary, check out our model 37. Teletype's keyboard terminals operate at standard speeds. But if your speed requirements are greater, all our terminals are compatible with the 2400-wpm Teletype 4210 mag tape unit. We also manufacture a series of paper tape senders and receivers with speeds up to 2400 wpm. When you look into our product line-up, you'll find we're very big on flexibility. In assembled ASR, KSR and RO terminals. Or in individual components—printers, keyboards, readers and punches. Take interface options. We offer three.
Built-in modems, current interfaces and EIA Standard RS-232-C interfaces. We offer platen widths that range all the way up to 1.5 inches. And optional character sets. Like Greek letters, algebraic and chemical symbols, as well as other graphics for charts and molecular structures. We also cover error detection and station control with a complete group of solid-state accessories. We're big on economy, too. Because on a price/performance basis, you won't find a better buy than Teletype equipment. And we didn't forget service. Our applications engineers will work with you to make sure what you get is exactly what you need. And after the sale, we'll set up a maintenance program for you. Or, if you prefer, we'll train your people in the proper maintenance procedures. It takes more than manufacturing facilities to build the machines Teletype Corporation offers. It also takes commitment. From people who think service is as important as sales. In terminals for computers and point-to-point communications. That's why we invented a new name for who we are and what we make. The computer-communications people. For more information about any Teletype product, write or call TERMINAL CENTRAL: Teletype Corporation, Dept. 53F, 5555 Touhy Avenue, Skokie, Illinois 60076. Phone 312/982-2500 Teletype is a trademark registered in the United States Patent Office. Think Twice: When is a portable really portable? HP's 1700 Series Portable Scopes Always Are... They're tough go-anywhere scopes: weatherproof, dustproof, completely self-contained. Not the kind of "portable" that's gently moved from bench to bench, trailing a power cord. With a 1700 Series scope you don't worry about the rain. Or the rough ride. Or whether, when you get there, you'll find ac or dc power—or no line power. An HP portable gives you features you'd expect only in a big lab scope. Like a large, bright CRT that lets you see even difficult signals in high ambient lighting.
ECL trigger circuits and a trigger hold-off control, and sweep linearity over the full 10 divisions of horizontal display—ideal for maximum resolution in making those critical timing measurements. But that's just the beginning. Then the 1700 Series allows you to pick the specific features you need for your field service application: conventional or variable-persistence storage CRT; bandwidths of 35, 75, or 150 MHz; sweep speeds as fast as 2 ns/div; delayed or non-delayed sweep; selectable input impedance; bright-scan viewing mode; and a built-in rechargeable battery pack for complete measurement independence. And we're just as proud of the things you don't get with a 1700 Series portable. No heat sinks. No fans. No ventilation holes to let in dust and moisture. That's because our circuits are designed for very low power consumption—and for long, trouble-free operation. And there's no challenge in servicing our portables. In fact, you can completely recalibrate some models in an hour or less, even if all the internal adjustments are misaligned. It's not very sporting, but this ease of servicing quickly adds up to impressive savings. So before you choose a scope, check your requirements. Then think twice about costs and benefits. Remember, Hewlett-Packard portables let you make any measurement you need—and they cost from $100 to $250 less than comparable scopes. These 1700 Series portables are priced from $1475 to $2300 for non-storage models and from $2375 to $2725 for models with variable-persistence storage. For help in choosing the HP portable that's best for you, send for a free copy of our "No-Nonsense Guide to Oscilloscope Selection." Or contact your local HP field engineer. Hewlett-Packard, Palo Alto, California 94304. In Japan: Yokogawa—Hewlett-Packard, 1-59-1, Yoyogi, Shibuya-Ku, Tokyo 151, Japan. In Europe: HPSA, P.O. Box 85, CH-1217 Meyrin 2, Geneva, Switzerland. Scopes Are Changing; Think Twice.
HEWLETT PACKARD OSCILLOSCOPE SYSTEMS Circle 69 on reader service card The T317 Incoming-Inspection Department Our new T317 tests transistors, diodes, SCRs, FETs, zeners, triacs, and arrays. It can multiplex up to three different jobs simultaneously, and it interfaces to handlers and probers. It stores programs by built-in semiconductor memory or by a magnetic card programmer. It is powerful enough for production-line testing, but is priced low enough for incoming inspection. Learn more. Get a fact-filled brochure by writing: Teradyne, 183 Essex St., Boston, Mass. 02111. In Europe, Teradyne Europe S.A., 11 bis, rue Roquépine, 75 Paris 8e, France. IC makers are betting on C-MOS They expect it to replace TTL as the standard logic line, with sales soaring to $100 million in 1975 by Howard Wolff, Associate Editor With complementary MOS building muscle as a standard logic technology, integrated-circuit makers are getting their production lines ready for a plunge into the market. And that market is a dazzling one: from $8 million to $10 million last year, total sales are expected to grow to $100 million in 1975 [Electronics, Jan. 4, p. 77]. Robert Mason, sales manager at Solid State Scientific of Montgomeryville, Pa., sums it up best, saying: "We're obviously not replacing transistor-transistor logic at this stage of the game, but I certainly believe C-MOS eventually will be its successor—except maybe where high speed is needed." Adds RCA's Harry Weisberg, manager of MOS IC product lines: "Standard C-MOS can go beyond TTL's capability, with one C-MOS circuit often equivalent to up to 30 TTL parts." Pick a number. There are two major C-MOS families. RCA's 4000 series has the advantage of being older and established, with a choice of second sources. National Semiconductor's 54C/74C has the advantage of being pin-to-pin compatible with TTL; however, its only second source at present is in Japan.
As a result, more companies are putting their money on the 4000 parts [see "Choosing sides"]. However, National says it's sailing along. Robert Bennett, C-MOS marketing manager, claims that sales have doubled every four weeks since 74C was introduced last August and now surpass National's sales of 4000 types. The company started second-sourcing the 4000s last June to "support our diffusion furnaces" while the 74C line was developed. But whichever family dominates, everyone agrees that C-MOS is the standard logic of the future. RCA's Weisberg expects it to break in at two main points in the market. One is where TTL is already used but where designers are interested in the improved noise immunity and stingier power consumption of C-MOS. Here RCA sees increasing application of C-MOS in industrial and numerical-control circuits, point-of-sale equipment, line printers, peripherals, and medical electronics. The other market is where p-channel MOS circuits, mostly custom, have held sway. Now engineers find they can rely on a standard logic family, accomplishing medium- and even large-scale integrated functions with TTL compatibility and lower power dissipation. Choosing sides More and more IC manufacturers eager to get into the standard C-MOS logic business are choosing the older, established RCA 4000 family with its wide choice of parts over the 54C/74C line introduced nine months ago by National Semiconductor. Fairchild Semiconductor, which expects to be in production by the end of the year, will go initially with 4000 parts because of their popularity. It might also second-source some Motorola proprietary parts, besides building some of its own. There has been no decision yet on whether Fairchild eventually will also second-source 74C. Texas Instruments has said it is planning some 4000 parts—including a high-reliability 4000A line in a ceramic dual in-line package [Electronics, April 12, p. 82]. And one industry observer says that TI will announce 30 parts next month: 19 RCA and 11 Motorola types. Motorola makes the RCA line, which it calls MC14000, as well as its proprietary line, MC14500. John Ekiss, group MOS operation manager, is so certain that his company has taken the right route that he says flatly: "54C MOS is dead." Also in the 4000 camp are Solid State Scientific, Solitron Devices—which had been National's sole domestic second source—Signetics, and General Instrument. Signetics MOS marketing manager Robert Dwyer says 74C was considered because of its compatibility with TTL functions. But 4000 won out since, among other reasons, Dwyer says it's more thoroughly debugged. However, no one at National is ready to roll out the hearse. C-MOS marketing manager Robert Bennett says that 74C is "coming on so fast" that the bulk of his development support is going to the new line. By the end of this month, he says, National will be second-sourcing 17 RCA designs and prime-sourcing 27 of the 74C designs. About 20 more 74C parts, including a 256-bit tri-state random-access memory, a four-bit adder, and other complex devices, are on the "future" list, compared with only two 4000-series designs. Among the characteristics of 74C that make National so optimistic: - The same logic configurations as standard TTL, making it unnecessary for designers already familiar with bipolar logic to learn how to design with C-MOS. - Higher speed, current drive, and noise immunity. National says the first two are a 50% improvement over the 4000 series, and specifies noise immunity in volts similar to TTL specs. - Better compatibility with bipolar logic and more consistent output-current specs than the 4000 series. Increased productivity is an important national priority. It can mean the difference between a higher standard of living and the status quo. It can help stem runaway inflation too. Use of Panasonic Data Entry Terminals in payroll time recording systems is just one example of how productivity can be increased. Workers carrying identification badges insert them into a Data Entry Terminal when arriving or departing work. In each case, the worker's identity and the time of punch are instantaneously recorded in the CPU. Time cards, the extensive work of converting them to punched Hollerith cards for computer use, and the resultant errors are entirely eliminated. Panasonic's Data Entry Terminal is the device that can make source-data acquisition a practicality almost anywhere. It combines a unique optical card reader with advanced C-MOS ICs to read punched cards or badges in the stationary condition. So it's remarkably lightweight, compact, reliable, and competitively priced. Because it's manufactured in a variety of specifications and options, Panasonic's Data Entry Terminal lets you buy just the amount of data handling capability you need. What's more, it's designed to interface with most multiplexers or CPUs with no need for a special polling device or a controller. It has the flexibility to adapt to most user-designed systems. Panasonic's Data Entry Terminal. One of the more than 30,000 systems and components we manufacture. Including one you may be in the market for. Or perhaps you need something we aren't presently manufacturing. No problem. More than likely we have the capability to conceive, design, and manufacture it. With the speed, efficiency, and quality you expect from a world-leading electronics manufacturer. In fact, we may even be working on a new idea or introducing a re-designed system or component that fits your particular specs right now. Panasonic prides itself on its ability to engineer virtually any system or component you happen to be in the market for. To your exact specifications. We'd like to help you. Call us. (212) 973-8216.
Another important area, says Weisberg, involves equipment that wouldn't have been feasible with either TTL or p-channel MOS technologies. This includes such items as pocket paging systems using digital C-MOS addressing circuitry, satellite and airborne computers, and telephone dial-tone generators and coin changers. New standard. At Motorola Semiconductor, which second-sources 4000 and also builds proprietary parts, MOS development and planning manager Ronald Komatz expects C-MOS to replace TTL. "On new designs, I think C-MOS will become standard. Where it can't be used because it's not fast enough, users will go 74S [Schottky] or ECL," he said. C-MOS is about 5 megahertz at 5 volts, 10 MHz at 10 to 15 v. Standard TTL is 25 MHz at 5 v. Komatz thinks the TTL shortage may have helped accelerate early redesigns to C-MOS from TTL, "but now they can't get C-MOS either." He says Motorola's production has doubled since the first of the year, "but orders have tripled." Komatz concedes that backlogs are building, with wafer fabrication the real crunch. One of RCA's first C-MOS second sources, Solid State Scientific, put together a pair of $1 million-plus deals earlier this year. The rapidly growing company signed contracts with the Philco-Ford division of the Ford Motor Co. and the Chrysler Corp. for custom C-MOS circuits for auto seat-belt interlock systems. As for standard logic, sales manager Mason believes that there will be no problem with designers learning a new set of logic rules. If they've used TTL they're accustomed to a system concept of gates, flip-flops, MSI, and building blocks, he says, and "even though the logic family may be different, conceptually it's the same." This year will be a dramatic one for C-MOS, he continues. "It will be the first year of real production." He estimates that C-MOS sales should hit $35 million by the end of the year. 
Solid State Scientific has been increasing production facilities "dramatically" since last fall, according to Mason. But sales are booming to such an extent that "the faster we go, the behinder we get," Mason says, paraphrasing an old saying from the nearby Pennsylvania Dutch country. Cheers. "We're very excited about the C-MOS market," declares William Maxwell, digital products marketing manager at the Harris Semiconductor division of Harris Intertype Corp., Melbourne, Fla. By the end of 1973, Harris plans to have 39 circuits on the market, up from 15 so far. "60% of them will be 4000-series parts, while only a few will be Motorola types," Maxwell says. The rest will be proprietary designs, relying on a dielectric isolation process that results in a circuit roughly two to three times as fast as the 4000 series and with quiescent power dissipation an order of magnitude lower. Right now, commercial units with temperature ranges of -40° to +85°C sell at a premium of 25% to 30% over conventional 4000-series units. However, the full-temperature-range devices are competitive. Maxwell says that improvement in yields, which Harris is already seeing as production builds up, and the introduction of a plastic package (to go with the ceramic dual in-line packages) should bring this price premium down to only about 10%. Like its fellow IC makers, Solitron Devices Inc. in San Diego, Calif., is betting heavily on C-MOS. The company has 32 of the 4000 parts, with 13 to be added by June. It also has three proprietary parts, with two more due next month. And while Solitron is second-sourcing only RCA, after dropping its 74C parts [Electronics, May 10, p. 25], MOS applications engineer James Everett says that it is also interested in Motorola's seven-segment driver with bipolar output. Going east. Back on the East Coast, General Instrument Corp. also has its eye on the end of the C-MOS rainbow. 
One of the product marketing managers in the company's Semiconductor Components division in Hicksville, N.Y., says it will announce eight 4000-type products by the end of the year, with the first—a quad bilateral switch—to be available this month. As C-MOS invades more and more of what is now TTL territory, production facilities are showing the strain. Solid State Scientific has already increased capacity; Motorola is building a new plant in Austin, Texas, dedicated to MOS. And demand in Europe has been so heavy that Motorola has decided to begin making C-MOS in Scotland.

**The difference**

All complementary-MOS logic families are based on the inverter concept of circuit functions. The circuits consist of two types of MOS enhancement-mode transistors—a p-MOS type and an n-MOS type—in various parallel and series combinations on a single chip. The 74C logic family from National Semiconductor differs from the basic 4000 family because its pin arrangements are identical with those of standard bipolar logic (TTL, low-power TTL, DTL, and so on), so in some cases it can be used as a direct pin-for-pin replacement in standard logic configurations. This is not true of the other C-MOS types, which have been uniquely partitioned to be used specifically for C-MOS applications. The compatibility of 74C with standard bipolar logic circuits is one of its strongest selling points. National says that the 74C series can be driven directly from TTL, low-power TTL, and DTL over the commercial temperature range without external pull-up resistors, but the 4000 series cannot be driven directly by bipolar logic, because that family does not guarantee a direct interface with no pull-up resistors.
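The inverter concept described above can be sketched in software. A minimal Python model (purely illustrative, not from the article) treats the p-MOS pull-up and n-MOS pull-down networks as complementary switch conditions:

```python
# Illustrative model of the C-MOS inverter concept. A p-MOS device
# conducts when its gate is low; an n-MOS device conducts when its
# gate is high. Logic functions arise from series/parallel
# combinations of the two types on one chip.

def inverter(a):
    # p-MOS pulls the output high when the input is 0;
    # n-MOS pulls it low when the input is 1.
    return 0 if a else 1

def nand(a, b):
    # n-MOS pull-down network in series, p-MOS pull-up in parallel:
    # the output is low only when both inputs are high.
    return 0 if (a and b) else 1

def nor(a, b):
    # n-MOS devices in parallel, p-MOS in series:
    # the output is low if either input is high.
    return 0 if (a or b) else 1

# Print the truth tables for the two-input gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b), nor(a, b))
```

Because one transistor of each complementary pair is always off at rest, a gate built this way draws essentially no quiescent current, which is the property behind the low power dissipation cited throughout the article.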
As for the other question, whether 74C can drive bipolar logic, the answer in many cases is "yes"—but it is also more complicated. —Laurence Altman

**Hotels like what they see in pay TV**

1973 shapes up as boom year for private delivery systems enabling guests to watch first-run movies in their rooms

by Alfred Rosenblatt, New York bureau manager

The average audience is not likely to be as numerous or as husky as the 22 football players who crowded into a single Atlanta hotel room one Friday night last fall to see the movie "Deliverance." In town for a big college football game, the players were viewing the first-run show over a pay television system that had just been installed at the Regency Hyatt House by the Trans-World Communications division of Columbia Pictures Industries Inc. The charge: a flat $3 for "unlocking" the room's TV set so the movie could be seen, or a little less than 14 cents per footballer. Most often, the personal economics will not be this attractive. But for travelers finding themselves in strange cities with little to do, pay TV systems may prove to be a cheap and convenient entertainment boon—so much so that 1973 is shaping up as a boom year in the number of pay systems installed in hotels and motels across the country. Trans-World's Tele/Theatre movie system, for instance, has in the past 15 months gone into 29,210 hotel rooms in 27 hotels in seven cities. The company expects this figure to double by the end of the year. Another company in the field, Computer Television Inc.'s Computer Cinema division, New York, reports it has just signed on with the Hilton Hotel chain, with a potential 40,000 rooms across the country. Players Cinema Systems Inc., Englewood Cliffs, N.J., boasts some 14,500 hotel rooms with more "signing up like crazy," according to a company spokesman. And a newcomer to the field, Telebeam Corp., has signed on the Americana Hotel in New York. The designs of the various systems differ markedly.
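The per-footballer figure quoted above is simple arithmetic; a quick sketch (illustrative only):

```python
# Checking the quoted figure: a flat $3 "unlocking" charge split
# among the 22 football players watching one room's set.
flat_charge = 3.00
viewers = 22
cents_each = flat_charge / viewers * 100
print(round(cents_each, 1), "cents per footballer")  # -> 13.6 cents
```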
In any given city, Trans-World may distribute its programs—sport and theatrical events as well as movies may be offered—from a central studio to its hotel clients over specially installed coaxial cables. Or, with the FCC approval it has received in at least four cities, it may beam the signal from the studio to the hotels over a private microwave link. Riding a LED. Telebeam, however, since the installation of either a coax or microwave link is expensive, encodes its programs on the beam from a 20-milliwatt light-emitting diode, and then transmits them to a receiver at the hotel. From one location, then, the company can transmit to any hotel within a range of a half mile. Eschewing a distribution net from a central studio is Players Cinema Systems. It actually sets up video tape players with program tapes in each hotel subscribing to its service. Other elements in the pay TV system include the central control station in a hotel through which the room units can be unlocked to unscramble the picture for a paying guest, plus maybe a control unit in a hotel room for selecting one of several pay TV channels. The central control station, often directed by a minicomputer, is used to keep track of the programs ordered by guests so that they can be billed later. It usually distributes the programs to the individual rooms over the master antenna network that is already installed. Computer Cinema, however, does not believe the master antenna systems are generally in good enough shape to provide a suitable signal for a customer paying for a program. This company, therefore, installs its own coaxial network in the hotel, connecting each room to a "switching nest" that may pick up all the rooms on a floor, or all the rooms in a vertical line through the building. The switch itself, specially built for the company by Data Architects Inc., Waltham, Mass., is "one of the most sophisticated rf switching units in the business," says executive vice president Paul von Schreiber. 
The company's three pay channels are transmitted in a 50- to 100-megahertz band. The fact that, instead of an elaborate control unit, only a very simple switch is needed in the hotel room at the TV set compensates somewhat for the expense of wiring the hotel with the dedicated coax. Von Schreiber estimates he can fit up a hotel for about $100 per room, plus the cost of the studio origination equipment. Others find the costs as high as $150 per room. Most of the systems, however, do not have any provision in the individual rooms for ordering a program. Rather, the guest is expected to telephone down to the control station and request the program he wants to see. An operator at the station then switches the program to the room. **Payment plans.** Most of the systems, too, operate with the customer paying a flat rate—$3 is a usual figure—for each program he wants. Others, such as Players Cinema Systems II, are "subscription" services in which the hotel pays the fees and the guests can then see whatever programs are being sent into the system. Critics of this system assert that the payments to motion-picture producers would not be attractive enough for them to furnish popular, first-run motion pictures, because they would receive a flat rate rather than a payment based on the number of viewers. Players Cinema also has another pay-as-you-see system that does not employ a scrambled picture that must be decoded. Rather, a hotel guest receives a clear, unimpeded picture, but his viewing is monitored by a central station in the hotel. If he's tuned into the program for, say, longer than 10 minutes, he's billed. **Two-way control.** Perhaps the most sophisticated system in terms of the number of services offered is Telebeam's.
It provides five channels of pay TV, relying on an interactive, two-way control unit in the room that allows a guest to do such things as dial up various information services, such as restaurant menus or airline and train schedules, or keep track of whether the door to the room is opened without authorization. Both Trans-World and Computer Cinema, among others, are also developing similar expanded services around their pay TV system. The latter company plans to upgrade one of its Hilton hotel systems to a two-way system by next summer. In the Telebeam system, the information is displayed on the TV set using a frame grabber device manufactured by Systems Resources Corp., Plainview, N.Y. [Electronics, March 17, 1972, p. 30]. In addition, Telebeam provides a terminal, to be deployed beside the hotel's various point-of-sale terminals, that immediately bills a guest when he presents a check at, for example, a hotel coffee shop or restaurant. Employing a 7-inch TV set, the terminal displays the guest's original registration card with his signature; it's designed to eliminate the "charges after departures" that are the bane of the hotel manager. Telebeam also has a room-status terminal for the front desk that keeps track of rooms as they are sold, vacated, or cleaned by the maids. But the security provision should be of greatest interest to hotel keepers. In this mode, the central computer station can monitor through the hotel's master antenna system as many as 2,000 doors per second. An unauthorized entrance made, for example, without the proper key, or when a room guest has checked out, immediately triggers an alarm. As the pay TV systems prove successful in hotels they will undoubtedly be applied to large apartment houses and hospitals, as well as being introduced to already installed community-antenna TV systems. In fact, this latter type of system is already operating in many parts of the country. For many companies, hotels are a proving ground for their system. 
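The security mode just described can be sketched as a simple per-door check. Everything in this Python fragment (function names, the event format, the alarm conditions) is an assumption for illustration, not Telebeam's actual implementation:

```python
# Hypothetical sketch of the security mode: the central station polls
# door sensors over the master antenna system and raises an alarm on
# an opening made without the proper key, or after the guest has
# checked out.

def should_alarm(opened, valid_key_used, checked_out):
    """Return True if this door event warrants an alarm."""
    if not opened:
        return False
    return (not valid_key_used) or checked_out

# Simulated poll of a few rooms: (room, opened, valid key, checked out)
events = [
    ("101", True, True, False),   # normal entry: no alarm
    ("102", True, False, False),  # entry without proper key: alarm
    ("103", True, True, True),    # entry after checkout: alarm
    ("104", False, False, True),  # door stayed closed: no alarm
]
for room, opened, key_ok, gone in events:
    if should_alarm(opened, key_ok, gone):
        print("ALARM in room", room)
```

At the claimed rate of 2,000 doors per second, a full sweep of even a very large hotel would take well under a second per cycle.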
"They're simpler systems to do, more controllable, and they can be adjusted more thoroughly," says Marvin Korman of Trans-World. Comments Computer Cinema's von Schreiber: "Hotels are a natural learning environment for both the economics and technology for pay TV." The environment is relatively easy to set up and service, he continues, and "you don't have to worry about upgrading 300 miles of outside cable plant."

**Headquarters.** Trans-World's originating studio pipes first-run movies to hotel pay TV systems. Charge is usually $3.

Computers

**Moscow strides into the marketplace**

Westerners at first showing of the East Bloc's Unified System agree it will be strong competitor, even as they press sales efforts

by Axel Krause, World News

The Soviet Union's four-year effort to make the Comecon nations a force in the European computer marketplace has been successful. That's the virtually unanimous conclusion of Western computer experts who traveled to Moscow this month for their first look at the line of computers, software, and peripherals produced by the Soviet-led Eastern European bloc. However, even though they were impressed by the display, the Western marketers have no intention of easing up on their Eastern Europe sales efforts. The series, formerly RJAD but now officially called ES (from the Russian for Unified System) [Electronics, Sept. 25, 1972, p. 72], is on display until June 10 in Moscow. It demonstrates that "concretely, [the Soviets] now will be able to handle all their routine, commercial data-processing needs on their own," said a senior marketing executive of Britain's International Computers Ltd. after an inspection tour of the exhibit. And, as though to underscore that, a top Soviet computer official, pointing toward the equipment—ranging from the small, Hungarian-made ES-1010 to the powerful, Soviet-made ES-1050—flatly declared: "What you see here either is being mass-produced already, or will be by the end of 1973." **Consequences.**
While Western computer experts questioned how fast or in what quantities the series actually would be manufactured and installed, Soviet officials provided a glimpse into some of the inner workings and implications of the joint project, which initially included the Soviet Union, Bulgaria, Czechoslovakia, East Germany, Hungary, and Poland. Soviet officials disclosed:

- An ES-series intergovernmental coordinating agency will be in charge of all future development and design work. Moreover, the agency will, for the first time, include Rumania, which had decided to stay out of the ES effort but from now on will participate in all future work, including design and possibly software development. "Eventually, we hope to go beyond the ES-1060," said a Soviet official, referring to the most powerful computer in the series, which is still under development by the Soviet Union. The 1060 will be a 2,048,000-byte machine.
- Experience, knowhow, and equipment from Western computer firms will be welcomed in complementing the ES-series development.
- Soviet planners have no serious ambitions now to export substantial amounts of equipment from the present ES series.

Western computer experts viewed the developments with mixed feelings. A seasoned executive of a U.S. electronics firm, with long experience in East European countries, said that "as with others in this league, our technology is so far advanced that we aren't worried in the slightest—but some large computer firms should be." As the executive sees it, the market for computers in the range of the IBM 360/50 and below will be "gradually squeezed and eventually eliminated." He predicts that Western computer and electronics companies increasingly will be competing against ES-series equipment as it becomes available. For the immediate future anyway, there should be some consolation for marketers in development gaps noted in the series by Western experts. These include advanced disk systems, memory storage units, communications equipment, and high-speed printers.

What looks like data here could be garbage there. And vice versa. If you've ever tried to combine data interchangeability and low-cost digital recording, you know the problem: data in, but sometimes only garbage out. So we've come up with a system for cutting costs without cutting corners: the "Scotch" Brand Data Cartridge and 3M Data Cartridge Drive. Fully compatible with proposed ANSI standards for 1/4-inch cartridge devices, our system records—and retrieves—up to 5.5 million bits of data on each of 1 to 4 tracks. Shuttles bidirectionally at 90 ips. Reads, writes, and backspaces at 30 ips. And provides a data transfer rate of 48,000 bps. You get reel-to-reel performance for about the price of a 0.150-inch cassette system. The medium is our own 1/4-inch "Scotch" Brand Data Cartridge, with its unique endless drive band. A new concept in tape handling, it requires no external tape guidance and external power at only a single point. Tape handling is so gentle, so precise, that tape can't cinch, spill, stretch, or break, and cartridge life expectancy is in excess of 5,000 passes. And to ensure data interchangeability, we've developed a drive with the same simplicity and reliability. A single dc motor with a tachometer feedback speed control. Servo control of the start/stop ramps for accurate inter-record gap lengths. A die-cast top plate for exact cartridge positioning. And foolproof, drop-in loading that automatically positions the cartridge in precisely the same position even after thousands of load/unload operations. In OEM quantities, the basic DCD-3 Data Cartridge Drive—with a single-channel, read-while-write head and servo electronics—is less than $350. Full-blown evaluation units with data and control electronics in a 4-track, read-while-write configuration are just $961 and available now. It's your chance to clean up your data interchange problems. And your costs. Contact Data Products, 3M Company, 300 S. Lewis Rd., Camarillo, Calif. 93010. Telephone (805) 482-1911. TWX: 910 336-1676. We've been there. And brought the answers back. Clean up at NCC Booth #2741.

Tung-Sol® 3 to 500 Amp Power Rectifiers. Tung-Sol rectifiers are conservatively rated to assure the widest possible margin of safety and reliability. Press-fit rectifiers to 44 Amps. Stud-mounted types to 420 Amps. Ratings to 500 Amps in flat-base construction. Modular Bridge Rectifiers, 10 to 35 Amps. Highest surge ratings give designers maximum protection against voltage overloads. Single Phase: B-50 Series—To 10 Amps DC. 50 to 600 PRV. 300 Amps surge. B-40 Series—To 15 Amps DC. 50 to 1,000 PRV. 300 Amps surge. B-10 Series—To 30 Amps DC. 50 to 1,000 PRV. 400 Amps surge. Three Phase: B-20 Series—To 35 Amps DC. 50 to 1,000 PRV. 400 Amps surge. WRITE FOR TECHNICAL INFORMATION. SPECIFY BRIDGES, OR POWER RECTIFIERS. SILICON PRODUCTS SECTION, WAGNER ELECTRIC CORPORATION, 63C West Mt. Pleasant Ave., Livingston, N.J. 07039. TWX: 710-994-4865. PHONE: (212) 962-1100, (212) 733-5426. Trademark TUNG-SOL Reg. U.S. Pat. Off. and Marcas Registradas.

**IC problems.** Soviet and Bulgarian officials, jointly developing the ES-1020 computer—with its 256,000-byte main memory roughly equivalent to an IBM 360/40—admitted to visiting Westerners that they were still encountering difficulties with the production and installation of integrated circuits. Commented a Western computer engineer after seeing the unit: "The kinds of circuitry problems they described were exactly the same as those besetting us six or seven years ago." But he added that they seem to be finding solutions. While Soviet ambivalence is not making the marketing tasks of companies any easier, there are some clear-cut signs that several leading U.S. computer companies are pressing ahead to expand their business in the Soviet Union.
With Washington's continuing embargo on sale of computers above the range of the IBM 360/50, any of these deals that materialize into sales and deliveries would represent breakthroughs. At the center of things is IBM, which last month signed a contract to supply a large computer system to the Soviet travel company Intourist. It is described by Moscow sources as a 370/155 with enough memory to make it totally unacceptable to Washington under existing embargo rules. Western diplomats in Moscow are convinced, however, that IBM would not have signed the contract unless it had some strong indication that the Nixon Administration would reverse current restrictive policy. The June 18-26 visit of Soviet party leader Leonid Brezhnev to the U.S. could result in relaxation of the embargo. If the embargo were lifted, several other pending major computer projects could be consummated, according to Western diplomats in Moscow. These include an estimated $25 million data-processing system for the $2 billion heavy-duty truck plant the Russians plan to build on the Kama River, east of Moscow. Though ICL, IBM, and Honeywell-Bull have been mentioned as among those interested, some trade sources in Moscow report that IBM already has signed or is about to sign a contract. Soviet officials also have requested bids for an estimated $15 million computer network for Aeroflot, the Soviet airline. But this deal, too, is running up against Washington's embargo, since the equipment involved calls for extensive memory storage capacity. CDC deal. Control Data Corp. is continuing negotiations with Soviet organizations for a range of projects. Among them is the possible sale of several computers, ranging from the Cyber 70 up to the Cyber 76, as well as cooperation in developing large-scale, advanced, computer-manufacturing potential. Several U.S. companies are bidding on a Soviet proposal to build a plant specializing in manufacturing disk packs and magnetic tapes. 
While the investment called for is believed to be in the range of $10 million, progress was described by Moscow sources as slow, pending approvals by Washington.

**The waiting game**

Not only are Western computer and peripheral makers busily knocking at Russian doors, but large components firms, among them Texas Instruments, have been actively canvassing the Soviet market. Again, for what the Soviets want most—the latest, highly sophisticated IC equipment, technology, and knowhow—the embargo remains in force. "Unless Kissinger, Nixon, and Brezhnev have come to some kind of new understanding on the embargo we don't know about yet, I would say that the chances of anything happening soon are about nil," says a highly knowledgeable U.S. electronics executive. He notes that the easing of restrictions on the sale of electronic products announced last year has not yet been implemented by U.S. Government agencies. "We are still waiting," he says.

These digital panel meters are changing your thinking about digital panel meters. They all operate on 5 volts DC. A new class of DPM's. Most of your electronic systems have lots of digital logic all over the place, along with 5 volts of DC to power it. We pioneered a way to use the same 5 volts to power the DPM as well. The first thing this means is that you don't need a separate power supply just for the DPM. That saves money. It saves space. Less heat is generated. The design becomes simpler and the reliability is improved. Then, because line-power voltage is kept away from the DPM and its inputs, internally generated noise is virtually eliminated. You get more reliable readings. Now you can think of a DPM as a component just like any other logic component in your system. We offer DPM's optimized for economy display applications. Like the AD2001, 3½ digits — $89*. The AD2002, a $50* 2½-digit replacement for analog meters. And the smallest, the AD2010, a 3½-digit LED display DPM with full latched BCD outputs at only $79*.
Then, for system interfacing requiring exceptionally clean digital outputs, good isolation, and high noise immunity, we offer the AD2003, a 3½-digit DPM with differential input, CMR of 80 dB, and normal-mode rejection of 40 dB at 60 Hz. All for $93*. If you need 4½ digits, there's the AD2004 LED display DPM with an optically isolated analog section and fully floating differential input providing CMR of 120 dB at ±300 volts and normal-mode rejection of 60 dB at 60 Hz or 50 Hz. This one's $189*. BCD outputs on all. All small. All given a seven-day burn-in for added reliability. Our thinking hasn't stopped because yours hasn't either. And our DPM's give you a lot more to think about. Analog Devices, Inc., Norwood, Mass. 02062. *All prices are the 100-piece price. Call 617-329-4700 for everything you need to know about 5-volt DC-powered DPM's.

Companies

**Many pennies from heaven**

By interpreting satellite data on the earth's natural resources, Earth Satellite Corp. benefits both itself and clients the world over

by William F. Arnold, Aerospace Editor

The Federal Government develops a new technology—and the private sector, in exploiting it, opens up a whole new market. The latest version of that script starts from the NASA-derived technology of electronic remote sensing using meteorological and earth-resources satellites and stars a small Washington-based company named Earth Satellite Corp., known as Earthsat. Earthsat specializes in integrating satellite remote-sensing data and aerial photographs with electronic and computer techniques to produce resource analyses tailored to the special needs of its clients. The result is a growing clientele among Federal agencies, state and foreign governments, and private companies that need to survey an area or to inventory resources. What a company like Earthsat offers is the speedy and precise inventory of an area's resources, thanks to satellite sensing and computer-based analysis.
For example, Iran, concerned about increasing the amount of protein in its citizens' diet, wants to know the extent of its grazing land so it can plan cattle production. Conventional surveying techniques could take decades, but satellite technology will let Earthsat give the Iranian government an exact inventory of its approximately 100 million hectares of potential grazing land in 120 days. **Jersey, too.** The uses and users are varied. Besides Iran, Earthsat has contracts with the state of New Jersey to map the wetlands; with the Brazilian government to chart the Amazon River basin; with the state of New York to identify danger zones in strip-mining areas as a preliminary to better control of land use; and with the states of Maryland and Arizona, and the governments of Argentina, Ecuador, Greece, and Venezuela for various land-use inventories. Applying satellite data to commercial and public needs is successful for a variety of reasons. It responds to an urgent demand for resource planning in a shrinking world. A factor, too, is that the company's president, J. Robert Porter Jr., was chief of NASA's earth resources technology program until he quit four years ago to form Earthsat. His vice president, Arch B. Park, succeeded Porter at NASA until he left last March to join the company. Whether by performing the surveys itself, by consulting, or by training a client's cadre to perform analysis, Earthsat applies satellite technology to social and economic uses. Refined ERTS technology gives analysts a lot to work with. A multispectral scanner onboard the craft picks up images simultaneously in four spectral bands—green, red, and two infrared—stores them, and dumps the pictures digitally as it passes over U.S. receiving stations, explains Charles Sheffield, Earthsat's computer applications manager and president of Image Processing Inc., of which Earthsat is majority owner. Each ERTS picture covers an area 115 miles on a side with a high resolution of 70 meters, he says.
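The scene geometry Sheffield quotes is easy to sanity-check with back-of-envelope arithmetic (the sketch below is illustrative only):

```python
# Sanity-checking the quoted ERTS scene geometry: 115 miles on a side
# at 70-meter resolution implies roughly 7 million resolution elements
# per spectral band.
MILE_M = 1609.344                  # meters per statute mile
side_m = 115 * MILE_M              # scene edge, about 185 km
elements_per_side = side_m / 70    # 70-m elements per edge

print(round(side_m / 1000), "km per side")                       # -> 185
print(round(elements_per_side), "elements per side")             # -> 2644
print(round(elements_per_side ** 2 / 1e6, 1), "million elements per band")
```

At one eight-bit byte per grey element, that works out to the order of the 7.6 million bytes per band that the article cites for each image.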
Iran can be covered in 107 pictures, of which 42 have to be studied for grazing land and only 10 of those intensively, Park adds. Since ERTS passes over the same area many times during its journeys, an agriculturalist, for example, can chart the maturation of a wheat field or the cancerous growth of corn blight by studying the frequency distributions and changes in the images. Earthsat's IBM 360/50 computer helps handle the voluminous data. One image from one of the four bands contains 7.6 million eight-bit bytes, each of which corresponds to one grey element in the picture (the computer processes in black and white), explains Sheffield. Each ERTS picture element equals about one acre of land. But it isn't quite as simple as that, Sheffield goes on. On the data-collection side, the ERTS pictures have to be "underpinned" with ground or aircraft surveys to establish reference points. According to Park, however, only a millionth of the sample area is needed, instead of the 10% sampled in other survey methods. On the processing side, Earthsat uses a highly interactive process employing optical, analog, and digital techniques with "the human operator very much in the loop," Sheffield says. A biologist trained to use the computerized processing can employ his special and sometimes intuitive knowledge to get the most out of the data. This way, "we avoid the problem of trying to encode into the computer what's in a person's head," Sheffield says. Armed with the computer and equipment such as additive color viewers and a Digicol unit (which translates grey tones into color) made by International Imaging Systems, Mountain View, Calif., Earthsat personnel prefer to work with the unprocessed computer-compatible ERTS tape. The data can be "squeezed" using a variety of picture-enhancing techniques, such as frequency histograms, Fourier transforms, scale changes, and edge enhancement. A current project is development of an automatic crop-classification system, Sheffield says.
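Of the picture-"squeezing" techniques listed, the simplest to illustrate is a contrast stretch driven by the grey-level frequency distribution. A toy Python sketch (the function name and sample data are inventions for illustration, not Earthsat's software):

```python
# Illustrative contrast stretch: map the occupied grey-level range of
# an image linearly onto the full 0-255 output range, so detail
# crammed into a narrow band of grey values becomes visible.

def contrast_stretch(pixels, lo_out=0, hi_out=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (p - lo) * scale) for p in pixels]

# Toy 8-bit "image" occupying only grey levels 100-110 (low contrast)
image = [100, 102, 101, 104, 108, 110, 103, 105]
print(contrast_stretch(image))
```

The same histogram that drives the stretch is also what an analyst would inspect to chart changes between repeated passes over the same field.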
With the technology it's possible to completely classify an agricultural area by vegetation codes, even before a team goes over to find out what plants go into the classifications, he adds. Porter had two lean years before Earthsat began to take off. Now, fattened by the New Jersey wetlands survey at $800,000 and the $1 million Interior Department contract, among others, the company is looking up. Current backlog is more than $2 million, and assets equal $1 million plus.

**Earthsat brass.** J. Robert Porter Jr., left, is president, and S. Benedict Levin is executive vice president.

ANNOUNCING THE FIRST PAGE READER ON A CHIP. You can read a standard 8½-inch-wide page at 16-mil resolution with only a single RL-512 self-scanned array. With only two of the 512-element devices aligned, you can improve resolution to 8 mil on paper and still read at up to 10-MHz scan rates. The Reticon RL-512 array offers 512 photodiodes on 1-mil centers, self-scanned by on-chip shift registers and multiplex switches. The device offers high sensitivity, charge-storage-mode operation, scan rates from 10 kHz to 10 MHz, and operation on a 15-V supply. An optical-quality quartz window seals the 18-pin standard ceramic DIP. Other applications include OCR, facsimile, surveillance, industrial control, size and edge monitoring, laser detection, and many others. This and other devices of 16, 64, 128, and 256 elements are available from inventory. RETICON™, 450 E. MIDDLEFIELD ROAD, MOUNTAIN VIEW, CA 94040. (415) 964-6800. Note: The IEEE Std 162a-1973 High Definition Facsimile Test Chart was reproduced by permission of the Institute of Electrical and Electronics Engineers, Inc., 345 East 47th Street, New York, N.Y.
10017.

Medical electronics

**'Olli' is taking giant steps**

Small Finnish company entered medical market in 1969; to log $2.5 million in '73 with monitor, lab systems

by Martin Schultz, World News; Arthur Erikson, Managing Editor, International

You don't judge a book by its cover, and you don't judge a company simply by its size. And even by Finnish standards, Ollitutoe Oy is small. Only some 150 workers report for work each day at the company's modern but modest plant in the wooded outskirts of Helsinki. Nonetheless, Ollitutoe has compiled a track record many a larger corporation would envy: in 1973, a scant three and a half years after it plunged into medical electronics, the company has a line of hospital hardware that's selling strongly in export markets as well as in Finland. And largely as a result, Ollitutoe's sales are spurting. For 1972, the firm logged roughly $1.65 million in sales, about half in medical and half in "conventional" electronics. This year, Ollitutoe figures its sales will jump more than 50% to reach $2.58 million. Some two thirds of sales will come from items like the Olli 3000 automated chemical analyzer, cardioscopes, defibrillators, and arterial pressure meters. The other third will come from Ollitutoe's former bread-and-butter hardware—warning flashers, electronic controls for electric fences, and subassemblies for elevator controls.

**Diagnosis.** When Ollitutoe got into the hospital field, says Harry Timonen, managing director, "there was no domestic production at all. It was all imported." Further, the company realized that Finland had a reservoir of medical technologists eager to cooperate with a Finnish company in the design and domestic production of hospital electronics. And there was adequate electronics expertise in the country's technical universities. Ollitutoe plunged into the world of hospital electronics in 1969 in a joint project with the huge state-run engineering corporation, Valmet Instrument Works.
The pair developed a 12-patient coronary-care unit and by 1970 had the first one installed in the hospital at Tampere, Finland's second largest city. Timonen maintains that the Ollitutoe intensive-care system costs about 20% less than imported systems. To be sure, the Ollitutoe equipment doesn't provide the diagnostic capacities—not usually needed, Timonen insists—of most other hardware. But the company has managed to sell several hundred. This first success in medical electronics obviously made Timonen eager to try again, and he soon had a team looking into operations at several general hospitals. One thing the team found was that hospital labs tended to be overloaded in the morning, and that pointed to a market for an automated chemical analyzer. After three years of development work and an outlay of more than $500,000, the company had its Olli 3000 system ready. The first went into operation in August 1972, and two others have gone on stream since. During the next couple of years, Ollitutoe expects to deliver another 10 or so of the $77,000 systems to Finnish customers. These estimates back up Timonen's belief that Ollitutoe is among the first to get onto the market with hardware for automating a wide range of clinical tests. The Olli 3000 can easily handle 1,000 analyses an hour, using up to 20 different wavelengths. The analyses are made by a photometer, and the readout comes from a data processor built around a Data General Nova 1200.

**Mass production.** Actually, Ollitutoe's big stride forward has not been in the chemical analysis itself, for the photometer performs no better and no worse than many others now on the market. But Ollitutoe grouped 24 of them in one block, vastly improving the handling of the blood samples, the identification of sample and patient, and the processing of the photometer results. When a patient enters the laboratory, an identification tag is made up in binary code and attached to his wrist.
Then the blood sample is taken and identified by the code, and the syringe tube is put into the 24-tube carrier. This carrier, after passing under a dispenser that drops test reagent into the tubes, is manually put into the photometer, which has 24 channels. Because the samples are coded, the computer has no problem matching test results with the right patient. (In manual systems, there's always a chance of error here.) Having drawn blood with its Olli 3000, the Finnish firm plans to expand its operations in hospital electronics. The next product will be a terminal for computer analysis of electrocardiograms. The terminal records patients' EKGs, digitizes the records, and then feeds the data to a remote computer. The computer, backed up by a diagnostic specialist, analyzes the EKGs and then sends the results back to the terminal, where the doctor can see them.

**Automated tester.** Engineer checks out Olli 3000. It can handle 1,000 analyses an hour, using up to 20 different wavelengths.

---

Reticon Corporation has long been an acknowledged leader in the field of solid-state image sensing for optical character recognition (OCR), facsimile, and surveillance equipment. They now offer complete electronic camera systems for industrial measurement and inspection applications. Using these cameras with an appropriate lens option, any field of view can be imaged onto a self-scanning linear array of 16 to 512 photodiodes. The diodes are placed on 1- to 16-mil centers with better than 0.01-mil accuracy. Charge-storage-mode operation provides high sensitivity at up to 10-MHz scan rates, thus allowing reading or measuring of fast-moving objects. The Reticon LC 600 camera system is available from inventory. There are 56 salesmen and 14 distributors to serve you worldwide.

---

1. Is this circuit a thick-film or thin-film hybrid? - Thick - Thin
2. Is this hybrid's package designed for a hermetic seal? - Yes - No
3. Has this resistor been abrasive or laser trimmed?
- Abrasive - Laser
4. Is this semiconductor chip an integrated circuit or a transistor chip? - IC - Transistor
5. Is this passive chip a resistor or a capacitor chip? - Resistor - Capacitor

**Test your Hybrid IQ.** Eight out of ten people in this business can't get 100% on the Boeing Electronics Hybrid IQ test. That's not surprising. It's a highly technical, complicated science. If you wound up with five right answers, we'd like to give you special recognition. It's a Hybrid Genius identification card, made of metal and stamped with your name. But you have to be absolutely honest with us. Did you, or did you not, get all five correct, without peeking? Even if you missed one or two, there's still another chance. Just ask for the Second Chance Hybrid IQ test. This little examination is our way of letting you know Boeing knows quite a lot about hybrid microcircuits. Each of the circuits shown in the test was produced by Boeing for very specialized product requirements. Boeing is especially adept in supplying the right technical support to the equipment designer. Our engineers know how to design with your unique specifications in mind, and how to keep the price in line. But just as important, they know the importance of keeping your job on schedule. In other words, you'll never get lost in the shuffle at Boeing Electronics. We'd like to tell you more about our abilities.

The right answers: 5. Capacitor. This circuit is an accelerometer restoring amplifier used with a gyro in a guidance and control system. 4. IC. This 16-channel multiplexer hybrid is used in an aircraft on-board maintenance and test system. 3. Laser. This hybrid dual current switch circuit handles 4 amps per switch in an airborne computer. 2. Yes. This is a 4096-bit random-access memory circuit for a digital computer. 1. Thin film. This is a digital logic circuit being used in a military guidance and control system.
Computer builders get more than a motor from TRW/Globe. TRW/Globe customers can achieve a combination of motorized functions in minimum space while eliminating unnecessary assembly operations and simplifying inventory requirements. They do it by ordering a Globe functional "package" instead of just a motor. For example, take the items above, produced for three of the leading builders of business machines and computers: The package on the left drives the printing ball on a serial printer and indexes it horizontally, with a DC motor. A DC tach generator provides feedback to the rest of the system. And a hollow motor shaft permits another shaft from a linear solenoid to index the ball vertically. At the top is a drive for a banking terminal carriage. Globe's integrally cast heat sink permits the high-torque motor to operate reliably without burning up and causing costly downtime. The third package drives a computer tape reader. Widened poles and spiralled armature slots assure smooth motion even though torque changes constantly. The tach wheel is read by an electric eye for feedback, and Globe supplies the drive hub. When you can't afford the cost of failure, call or write: TRW/Globe Motors, an Electronic Components Division of TRW Inc., Dayton, Ohio 45404 (513-228-3171).

---

**Communications processors pace growth in data-network traffic**

Based on small and medium computers, programmable front-end processors, message switchers, remote concentrators, and remote-terminal controllers coordinate communications among many widely dispersed data terminals.

by Lyman J. Hardeman, Communications Editor

The current explosion in central processing of data for widely dispersed terminals has created a tidal wave of demand for equipment to cut down the expenses of data communications.
In order to slash the heavy expenses of transmitting data among an ever-increasing number of terminals, manufacturers over the past few years have developed a new class of equipment known collectively as communications processors. The amount of data transmitted is substantially reduced by partial processing at the terminal sites and at intermediate locations. And the built-in processing power of this equipment also relieves the host computer of the burden of having to perform line-control and other communications functions, thus leaving the host more time and resources for the higher-level information processing for which it is designed. In the span of about 10 years, the market for communications processors has skyrocketed from virtually zero to a present annual level of several hundred million dollars. Sales of communications processors are expected to grow at a rate of approximately 30% per year for the next several years, to a volume that has been estimated as high as $1 billion by 1976. This growth and market size easily place communications processors among the most active segments of both the computer and the communications industries.

**1. Teleprocessing added.** In the span of only a few years, the exploding demand for connecting data terminals at points remote from the host computer has complicated the requirements of communications networks. To simplify network coordination, several levels of communications processors have been inserted into the network in order to enhance overall teleprocessing efficiency.

**Processors classified**

It is convenient and instructive to categorize programmable communications processors into four classes of equipment. These classes, shown in Fig. 1, are:

- Front-end processor, which serves as the communications controller interface between the host computer and the communications network.
- Message switcher, which receives data messages in a distributed communications network, analyzes the messages to determine their proper routing, then forwards them to other points in the network.
- Data concentrator, which receives a number of low-speed transmission lines, multiplexes them, and transmits on one line at a higher data rate.
- Remote-terminal controller, which coordinates a cluster of peripheral units at a location remote from the host computer, and often performs limited local processing.

Functions within each of these classes of communications processors overlap substantially, and a single processor may, in fact, serve multiple functions. A message switcher, for example, while switching messages in and out of a network node, may function as a data concentrator. Similarly, a front-end processor may perform additionally as a message switcher or a local terminal controller. A detailed description of design trends and system applications for each of the four types of communications processors will be more meaningful if the typical communications-network tasks that must be performed are examined first. These include network control, code and data-speed conversion, buffering for character and message assembly, error control, compilation of system statistics, message validation, and record keeping.

**Network control**

Network-control functions are usually performed by a front-end processor or a message switcher in conjunction with remote terminals. The communications processor must decide when each terminal is to be connected to the system. Connection is accomplished by a well-defined protocol and signaling procedure called handshaking. Also, the processor must determine priorities for connection of the various peripherals, and set up message queues. Line discipline must be established to identify and distinguish between the data itself and the control information that coordinates transmissions within the communications network.
Several factors must be considered. First, unlike peripheral devices local to the host computer, a remote terminal is often not continuously connected to the system. Therefore, means must be provided for a terminal to establish contact with the computer to supply an input message, or, conversely, the computer must contact the terminal (via an automatic dialing unit in the dial-up telephone network, for example) prior to transmitting output data. Also, since several terminals may be attached to a common private or leased line to reduce communications costs, it becomes necessary to provide a procedure for sharing the line.

**IBM's blessing—at last**

IBM entered the communications-processor business in March of last year with the announcement of its model 3705 front-end processor, which coordinates data transmissions between a host computer and as many as 352 voice-grade communications lines. A more recent unit, model 3704, can accommodate a maximum of 32 lines. It is intended for use mainly as a remote data concentrator. The announcement of both models significantly impacted the communications-processor field. Because IBM System/360 and 370 computers represent about 65% of the host computers that either actually or potentially interface with data-communications networks, that company has been the market target of the communications-processor industry. But until last year, IBM's products in data communications consisted of a line of hardwired controllers to interface data-communications lines with the input-output ports of mainframe computers. These controllers, however, depend on the processing powers of the host computers and therefore cannot be classified as stand-alone communications processors. Thus, IBM's approach to data communications before last year had been to let the host computer handle the communications tasks. In effect, IBM was saying that there was no need to add processing capability external to the host computer.
This network philosophy, propagated by the company with the dominant role in data processing, made it difficult for other companies to sell the stand-alone-communications-processor concept to their customers. IBM's belated endorsement of the need for communications processors, therefore, has generally been considered a blessing by competing equipment vendors. But this IBM blessing, of course, brings with it the formidable competition of the computer giant in the communications-processor field.

**Message switch.** In complex communications networks, the message switch analyzes data messages to determine their proper routing, then forwards them to other points in the network. The medium-size switch shown is based on Honeywell's H-716 minicomputer. Other panels in the rack contain line adapters, automatic calling units, and an interface to the teleprinter control unit.

Similarly, data from terminals is transmitted over communications lines at "standard" rates ranging from 50 baud to more than 9,600 baud, and the processor must be able to accept these different speeds and merge them into a fixed higher speed to the host computer. As binary data is received by a communications processor, it is assembled into characters, blocks, or complete message texts before being forwarded to the host computer or another network node. Thus, sufficient storage must be provided to buffer these messages for further routing. This message buffering is one of the most important tasks of the front-end processor or message switcher. It reduces high-speed-line costs. In addition, message blocks can often be edited to remove unneeded address information. Stripping messages of unneeded characters at the earliest points in a network decreases communications-line requirements and reduces the load on the host computer. Another important function of the communications processor is that of controlling errors introduced by noise on the communications lines. Although highly sophisticated techniques are available for correcting errors introduced on one-way communications lines (forward-error correction), practical error-control techniques simply use another channel to request retransmission of characters or messages when errors are detected. In all error-control methods, however, redundant "parity" bits or characters for checking errors are transmitted along with information bits, and the communications processor codes and decodes messages containing such check bits. The communications processor, with its built-in storage capability, can also keep a running record of all message traffic, including such statistics as the total number of messages processed, number of line errors, overflow information, and lengths of time messages are in queues. These statistics, which can also be used to analyze future communications requirements, can be printed at periodic intervals in the processing cycle. With this general outline of the tasks that must be performed by the communications-processing system, it is now convenient to look more closely at each of the four classes of processors. Since the front-end processor often serves as the principal node in a network, it will be discussed first. As communications networks came into being, hardwired line controllers (such as IBM models 2701, 2702, and 2703) were provided to interface the computer with the communications system. The problem with hardwired controllers, however, is that they cannot easily adjust to the almost unending changes in network configurations and terminal types that characterize a large system. Also, a network using the hardwired line controller totally depends on the processing ability of the host computer to perform such important communications functions as network control and message storage. The programmable front-end processor, however, keeps most communications tasks outside of the host computer.
This separation of communications-processing from information-processing functions is justified in terms of over-all system cost and efficiency. The host computer, including its associated software, is generally optimized for arithmetic operations at extremely high speeds. Therefore, when communications processing must be handled intermittently and at relatively slow speeds, the effectiveness of the host is decreased. Without a front-end processor, the large communications-oriented mainframe computer typically dedicates from 15% to over 30% of its time to communications processing. This overhead can be reduced to 1–4% by adding a separate front-end processor. In addition to savings in host processing time, a front-end processor may reduce memory requirements for communications functions in a typical host computer from a level of 24 to 64 kilobytes to only 1 or 2 kilobytes. And there are corresponding decreases in the traffic loading at the host computer's input-output ports. Line-controller emulators Even when used as direct plug-for-plug replacements for IBM 270X hard-wired line controllers, minicomputer-based front-end processors generally offer as much as 30% lower equipment costs than the line controllers they replace. While such line-control emulators do not remove the communications-processing functions from the host, they can be installed without its being necessary to rewrite existing user applications programs for the host computer. And the processing capability external to the host computer serves as a base for future expansion to a true stand-alone front end. As a stepping stone from the line-control emulator to the full front-end processor, several companies have developed the intelligent emulator, or the "emulator-plus." In a typical application, the intelligent 270X emulator may be used with the IBM System/360 or 370 mainframe to support terminals that are not IBM-compatible. 
It may also provide limited network-control functions when the host processor fails, alerting remote terminals not to send new messages.

**Message-switching systems**

Unlike front-end processors, which usually funnel data into a large-scale computer for information processing, the message switch generally serves as a central clearinghouse for messages between all points in a communications network. Large stand-alone message-switching systems actually existed before the computer terminal began to leave the data-processing center. Military switching systems, such as Autodin, and, more recently, public message networks, such as the Western Union Corp.'s TWX and Telex, are typical systems that are dedicated solely to message switching. Today, however, there are new and expanding requirements by companies with geographically dispersed computing facilities (as well as terminal locations) for efficient means of exchanging messages. Companies in the banking, transportation, and retailing businesses are perhaps the best examples, but large multilocation manufacturing operations, law-enforcement agencies, and others can benefit by the use of message switches. Although the concept of store-and-forward message switching is readily understood, the message switch is usually the most difficult class of communications processor to implement. This is because of the seemingly unlimited number of alternatives of routing, queueing, assigning of priorities, recording, and other control actions that must be taken for any given message. A look at the requirements for a typical corporate data switching center will help in determining the requirements for a stand-alone message switcher. The example that follows is also representative of the requirements for a small to medium-size front-end processor.
Consider a message-switching system supporting a peak load of 30 terminals actively sending data into the message switch (a much larger number of inactive terminals may actually be connected, or have access, to the network, but only 30 are assumed to be sending data at any one instant). With an average message length of 150 characters and an average store-and-forward buffering of two messages per input line, a quick-access buffer storage capacity of 9 kilobytes is required. For a system of this size, another 6 to 9 kilobytes of memory would typically be required to store the necessary software programming, which would result in a total quick-access storage requirement of about 18 kilobytes. In addition, assuming that 3,500 such messages are switched through the system each day and that all messages must be kept on record for six months, this switcher system would require up to 100 million bytes of peripheral magnetic-tape storage.

**Remote data concentrators**

Compared with front-end processors and message switchers, the design requirements for remote data concentrators and intelligent terminals are generally much less demanding. The basic function of the data concentrator is to reduce telecommunications line costs by concentrating data from multiple input lines onto, generally, one high-speed line. The data concentrator is generally more efficient than the simpler hard-wired data multiplexer, which competes with the concentrator for similar network applications. With its built-in processor and memory, however, the concentrator can smooth the intermittent traffic loads that occur in data systems. This reduces the peak loading of the high-speed output of the concentrator, often by a factor of 3 or 4, allowing a corresponding decrease in line costs. The remote concentrator based on the minicomputer can also be expanded into a local message switcher to help reduce the load on switchers located at more central nodes in a network.
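Returning to the message-switch sizing example above, the arithmetic is easy to verify with a short script. The traffic figures are the article's; one byte per character and 182 days for "six months" are our assumptions.

```python
# Storage sizing for the hypothetical message switch described above:
# 30 active input lines, 150-character average messages, two messages
# buffered per line, and 3,500 messages/day kept on record for six months.
ACTIVE_LINES = 30
AVG_MSG_CHARS = 150        # one byte per character assumed
BUFFERED_PER_LINE = 2
MSGS_PER_DAY = 3_500
RETENTION_DAYS = 182       # "six months" -- our assumption

buffer_bytes = ACTIVE_LINES * AVG_MSG_CHARS * BUFFERED_PER_LINE
archive_bytes = MSGS_PER_DAY * AVG_MSG_CHARS * RETENTION_DAYS

print(f"quick-access message buffer: {buffer_bytes / 1000:.0f} kilobytes")
print(f"six-month archive: {archive_bytes / 1e6:.0f} million bytes")
```

The buffer works out to 9 kilobytes and the archive to roughly 96 million bytes, consistent with the article's "9 kilobytes" and "up to 100 million bytes" figures.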
However, the objective of remote concentration is to reduce system communications costs. Therefore, the cost of added processing functions must constantly be weighed against resulting savings in line costs.

**Data concentrator.** To reduce communications line costs, a data concentrator terminates a number of low-speed transmission lines and multiplexes them into fewer high-speed lines. In the unit shown, a processor and a maximum of 32,000 16-bit words of memory allow such additional functions as code conversion, message assembly, and terminal-usage accounting.

The intelligent terminal can be thought of simply as a remote data concentrator that controls one or more local peripherals. However, there seem to be two diverging trends in technologies and markets for intelligent terminals. Along one path, more and more use is being made of the processing powers of remote minicomputers, which form the heart of most of these terminals. Small data files have even been attached to local computers to avoid unnecessarily having to use communications lines and central computers for simple data-processing functions. Taking an entirely different approach to intelligent-terminal design, some terminal vendors are using the sophistication of LSI technology to produce specialized communications controllers at prices that cannot be approached by even the least expensive communications processor. And since many relatively simple communications functions required of the intelligent terminal can be handled with only a few LSI packages, the trend toward use of such technology is sure to gain momentum.

**Intelligent terminal.** Minicomputer-based terminal controller (center) provides code conversion, error correction, and even limited local processing for a cluster of terminals remote from the host computer. Any of a number of terminal units can be attached to the controller, such as the 300-card-per-minute card reader (left) and the 600-line-per-minute printer (right) marketed by Harris Communications Systems.

**Communications processor components**

The essential modules for each of the four general classes of communications processors are identified in Fig. 2. The six functional modules identified are building blocks for the communications-systems designer. In many practical systems, each of these building blocks is on an individual printed-circuit board or group of boards. These same modules serve as the basis for programming into functionally divided software packages. It is therefore helpful to take a closer look at what hardware (and software) goes into each of these modules. For all but the largest data-communications networks, minicomputers (both general-purpose and, in some cases, custom-designed units) often serve the central processor and memory functions in the communications processors. Software, in conjunction with the associated line and host-computer interface hardware, adapts the minicomputer to communications applications.

**Built-in ROMs**

In several recent designs, microprogramable read-only memories have been incorporated into the processors of some communications-oriented minicomputers, following the trend that exists in other segments of the computer industry. The use of such microprogramable circuits shortens the processing time associated with commonly recurring communications functions, such as code conversion, by reducing the number of times in the conversion process that main memories must be accessed to obtain software instructions. An example of one of these modified minicomputers is a microprogramable design used in Interdata's model 50 communications processor to handle the numerous repetitive tasks. "By making use of ROMs," says Jon Gould, director of data communications at the Oceanport, N.J., company, "throughput in a moderately sized communications processor is increased from 10,000 to over 30,000 characters per second."
In addition, claims Gould, "the amount of memory required to store software programs is typically reduced by a factor of 75." Taking a similar approach to design of the minicomputer processor, Teleswitcher Corp., a Dallas-based vendor of turn-key data communications systems, has designed an array of 32 field-programable plug-in ROMs into its latest custom-designed processor board. In addition to requiring less software and allowing greater throughput than could be achieved by implementing the same functions with software, the use of such microprogramming reduces the over-all processor hardware costs, reports Wayne Pratt, manager of hardware design at Teleswitcher. To supplement these basic processing and memory functions, both minicomputer manufacturers and others have developed the necessary special-purpose hardware and software to interface the minicomputers with communications lines, terminals, and host computers.

**Line adapters**

The line-adapter units shown in Fig. 2 bring data on the communications line into the communications processor by three basic techniques—bit-by-bit, character-buffered, and direct memory access. The simplest line interface unit looks at a message bit by bit, transferring it through the processor and into memory one bit at a time. Such a "bit-banging" technique requires little hardware, but it requires substantial software and is therefore usually wasteful of processor time. A much more efficient line adapter converts serial bits into characters, then transfers complete (parallel-bit) characters into the processor and memory. In addition to reducing processor overhead, a character-converter interface is capable of handling much higher data rates—up to about 9,600 bits per second. A line adapter that works with characters instead of bits can also perform such tasks as code conversion and simple error correction without depending on the computing powers of the processor, thus further increasing over-all system efficiency.
The most efficient method of interfacing high-speed data with the communications processor is through the computer's direct-memory-access channel. Here, data rates of 50 kilobits per second and higher can be loaded directly into memory without having to be processed character by character by the central processing unit. As might be expected, choices among the three techniques used in line adapters are governed by tradeoffs in price and performance. Bit-by-bit adapters are adequate only for small systems where the burden on the processor is light. The use of character-conversion techniques in the line adapter satisfies all but the most rigorous high-speed-network requirements, where direct-memory-access methods can be used.

**Cost tradeoffs**

The price for line adapters, however, increases substantially as performance level is increased. According to John Chyzik, data-communications specialist at Data General Corp., Southboro, Mass., a minicomputer maker that also supplies communications-processor accessories, "the system using bit-banger-type line adapters might average $130 per line, while the character-converter hardware for high-speed lines can cost between $200 and $600 per line, not counting the cost of the minicomputer and other communications-processor functions." On the high end, adds Chyzik, the complex hardware in the line adapter with direct memory access brings its price up to about $2,500 per line.

**2. Building blocks.** Both hardware and software needed to implement the tasks of a communications network are often separated into six functional blocks. Two of the blocks—the central processor and memory—are often general-purpose minicomputers. The manner in which the six blocks are configured defines each of the four fundamental classifications of communications processors.

**Microprogramed.** Microprogramable read-only memories have been designed into some minicomputers in order to shorten the processing time that is associated with commonly recurring communications functions. The read-only-memory board shown here is part of Interdata's model 50 communications processor.

Typical of the classes of line-adapter hardware developed in the last couple of years are the plug-in printed-circuit cards produced by Digital Equipment Corp., Maynard, Mass., for use with communications processors employing that company's minicomputers. At first, DEC announced a line of nonprogramable cards with options for interfacing with codes and data speeds associated with standard teletypewriter equipment or CRT terminals. These plug-in cards operate in a basic bit-by-bit mode and, being hard-wired, have to be physically changed every time a remote terminal with a different code or data speed is substituted. Then in late 1971, the company introduced a programmable single-line interface for asynchronous communications. With such a unit, software can be used to adapt the line unit to varying codes, code speeds, and either half- or full-duplex operation. More recently, an assembly was introduced which multiplexes 16 separate lines. "For systems that require a large number of terminals, such programmable multiline adapters offer much lower costs per line because of shared logic for all lines," says Dimitri Dimancesco, senior market development specialist.

**Multiline adapter.** For a system that includes a large number of terminals, programmable multiline interfaces, such as this 16-line interface assembly produced by Digital Equipment Corp., offer a flexibility and a reduction in costs that could not be achieved in earlier hard-wired single-line units.

Product evolution in other companies has followed similar patterns. Some of the latest extras that add flexibility to programable line adapters include automatic dialing options, provisions to allow direct memory access between the line unit and the communications processor's memory, and logic to fully control modems through EIA RS-232 interfaces.
**Host-computer interface** If there is one thing that vendors agree on, it's that the interface between the communications processor and the host computer is the most difficult of all the communications-processor functions to build. To make the front-end processor appear to the computer as just another peripheral at one of its input-output ports is no mean feat. As in the communications-line adapter, the host-computer adapter must be made to interface with signal levels and operating procedures over which the maker of the communications processor has little control. The adapter therefore must be designed around existing systems, and it must be flexible enough to adjust easily to future changes, in both hardware and software, dictated by the mainframe manufacturer. As a result, the host-computer interface modules are often more expensive and more complex than the central processing unit in the front-end processor. For example, the price for Interdata's interface adapter for the IBM 360 mainframe computer is $8,180, while the company sells its processor boards for $6,800. And this price differential does not account for associated software. The problem of adapting software to interface with the IBM 360 mainframe can be even more difficult, says Royce Pipes, head of product planning at Harris Communications Systems Inc., Dallas. "We spent close to a half-million dollars developing software to adapt our new model 4705 front-end processor to the 360," Pipes asserts, "and we currently commit about $5,000 per month in maintaining and upgrading these programs." Harris' host-interface task is particularly difficult, however, since its software also converts IBM's half-duplex protocol to a full-duplex line discipline. **A strong future** Most communications-processor installations have been added to existing mainframe-computer installations.
Even so, only a small percentage of these batch installations have been converted to teleprocessing applications, so this retrofit market is not expected to be saturated for a number of years. But adding teleprocessing capability to existing mainframe computers is only a small part of the future for communications processors. A large portion of these processors will, as they do today, go into stand-alone message-switched information networks that are independent of, or at least incidental to, mainframe processors. Such data-transfer and data-retrieval networks appear now to have the strongest long-term future. And as data communications and two-way television technologies advance, the communications processor will eventually be central to consumer-oriented networks with terminals in the home.

---

**The charge-balancing a-d converter: an alternative to dual-slope integration** Like dual-slope analog-to-digital conversion, the new technique basically is an integration scheme; but simple design and relaxed tolerances on components may give it an economic edge in some applications. by Robert C. Kime, Jr., Keithley Instruments Inc., Cleveland, Ohio

Given the problem of designing a cheap, simple, reliable analog-to-digital converter that exhibits high accuracy and low power consumption, most engineers will probably first think of the dual-slope converter. Recently, however, the charge-balancing converter was developed for pretty much the same purpose. The two circuits have much in common. Both center on charging and discharging a capacitor, both are integrating circuits, both are quite economical, and neither will ever set a conversion speed record. But the techniques are not identical, and one or the other may prove superior in any given application.
The dual-slope unit can be designed to have outstanding normal-mode rejection at one particular frequency, so that line rejection is very easy, while the charge-balancing converter can generally be implemented with fewer parts and with looser component specs. Basically, the dual-slope converter works by applying the unknown input signal to an uncharged capacitor for a fixed length of time, and then measuring the time needed to discharge the capacitor at a constant rate. In the charge-balancing converter, however, there is no fixed charging period; the charging continues for as long as necessary to get the capacitor voltage to cross a fixed threshold level. Then a reference current is subtracted from the input current, and the capacitor discharges until the threshold level is crossed again. The process repeats itself until the conversion period is over. At that time, a counter, which accumulates clock pulses only when both the input signal and the reference current are applied to the capacitor, contains a number of counts proportional to the input voltage. **One conversion cycle** The charge-balancing unit got its name from the fact that the net charge put into an integrator over one integration cycle is zero. The converter (Fig. 1), which accepts only positive input voltages, operates as follows. Initially the current switch is open, and only the input voltage is applied to the integrator. Since the integrator contains an inverting operational amplifier and a capacitor, the output voltage, $V_o$, is a negative-going ramp that has a slope proportional to $V_{in}$. As $V_o$ passes the threshold level, $V_d$, of the threshold detector, the detector's output voltage, $V_t$, switches to a logic 1 state which, among other things, opens a gate that allows the counter to start accumulating clock pulses. It also makes a flip-flop close the current switch on the next clock pulse.
Closing this switch causes the known, constant reference current to be subtracted from the input current. The difference current is applied to the integrator as before. The reference current is chosen to be greater than the input current for all allowable input voltages, so that subtracting the reference current, $I_{ref}$, from the input current, $I_{in}$, is guaranteed both to change the polarity of the input to the integrator and to start $V_o$ on a positive-going ramp. During this part of the integration cycle, gated clock pulses are being fed to the counter and accumulated. When $V_o$ again passes $V_d$, $V_t$ switches to a logic 0 state, and the next clock pulse opens the current switch and closes the clock gate. Since the converter is a free-running system, this process keeps repeating itself until the conversion period is over. The conversion period is defined as a certain number of clock pulses, $N_t$, received by the digital conversion circuitry. $N_t$ is a system constant and is fixed when the converter is designed. The number of clock pulses accumulated by the counter during one conversion cycle, $N_c$, is a variable quantity that is directly proportional to the input voltage. $N_c$, in fact, is the digital output of the converter. Probably the most unusual feature of the charge-balancing converter is the diversity among the waveforms observed at the output of the integrator.

**1. Balanced.** By switching reference current on and off, a-d converter puts zero net charge into integrator over full integration cycle.

**2. Timing.** Number of integration cycles in one conversion cycle depends upon input voltage. When voltage is very small, negative ramp is very slow, and converter makes only one integration per conversion (a). Doubling input voltage doubles number of integration cycles (b). After half-way point (c), trend reverses and at top of measurement range unit is back at one integration per conversion (d).
In one conversion cycle, there can be anywhere from zero integration cycles to approximately $N_t/2$ integration cycles, depending on the value of the input voltage. **Changing waveform** For instance, assume that $N_t = 2,000$, $I_{\text{ref}} = 1.0$ milliampere, and $R_{in}$—the converter input resistance—is 2 kilohms. Now, if a small input voltage, say 1.0 millivolt, is applied to the input, $I_{in}$ will be only 0.5 microampere and $V_o$ will move very slowly in the negative direction until it crosses zero (Fig. 2a). Then, at the next clock pulse, $I_{\text{ref}}$ is connected to the integrator (point A), causing it to climb steeply in the positive direction. A zero crossing occurs almost immediately, and the next clock pulse disconnects $I_{\text{ref}}$, leaving $V_o$ at point B. From here on the process repeats itself, with $V_o$ dropping slowly toward point A'. Since this is a charge-balancing converter, the charge removed from the integrator between points A and B must equal the charge applied between points B and A'. Because $I_{\text{ref}}$ is 1.0 mA and $I_{in}$ is 0.5 $\mu$A, the reference current must be off for about 2,000 times as long as it is on. Thus, if $I_{\text{ref}}$ is on for the minimum duration of one clock pulse, it will be off for about 2,000 clock pulses, and there will be only one integration cycle in the conversion cycle. But if $V_{in}$ is twice as large—2.0 mV—$I_{in}$ becomes 1.0 $\mu$A, and there are two integrations per conversion (Fig. 2b). The negative-going ramp is twice as steep as in the preceding case, while the positive-going portion has practically the same slope. (To be exact, the charging current, $I_c = I_{\text{ref}} - I_{in}$, is 0.9995 mA in the first case and 0.9990 mA in the second.) As the input voltage increases further, the slope of $V_o$ from B to A' becomes steeper because $I_{in}$ increases, while the slope from A to B becomes less steep because $I_c$ decreases.
Again, when the input voltage is 1.000 volt, $I_{in} = 0.5$ mA, and $I_c = 0.5$ mA too. Thus the two slopes of the $V_o$ waveform are equal and opposite, and the wave shape is a triangular wave centered about zero (Fig. 2c). Also, the peak amplitude of $V_o$ is half of what it was when $V_{in}$ was 0.001 V, and there are 1,000 integration cycles in one conversion cycle. Finally, when the input voltage is 1.999 V, $I_{in}$ is 0.9995 mA and the waveform picture is an inverted version of the situation pertaining to the 1-mV input (Fig. 2d). The segment of the $V_o$ waveform from B to A in Fig. 2d corresponds to the segment from A to B in Fig. 2a. The reference current is connected for all but one count during a conversion cycle, so there is one integration per conversion again. If the input voltage is 2.000 V or greater, $V_o$ never gets back to zero, and the counting is continuous. **How it works** An implementation of the converter block diagram (Fig. 3) shows one of the advantages of the charge-balancing converter—its simplicity. The integrator, for example, consists of only an op amp, a capacitor, and a resistor. Since the high open-loop gain of the op amp keeps node A at ground potential, the resistor in the input lead determines the input resistance of the converter. Note that the system is designed to work properly only with positive inputs. The threshold detector is even simpler. Although two different threshold levels could have been used—one for when $V_o$ is moving in the positive direction, and one for the negative direction—this is not necessary. For the implementation shown here, it was sufficient to choose $V_d^+ = V_d^- = 0$. The only further requirement is that the threshold voltages remain stable for at least one conversion cycle (about 200 milliseconds in this case).
Usually, a threshold detector contains positive feedback that, by setting up a hysteresis voltage at the switching levels, assures positive switching when the input is moving slowly near the threshold level, and also prevents the detector from oscillating under the influence of input noise, input bias-voltage shifts, and so on. But in the charge-balancing a-d converter, the detector requires only an open-loop operational amplifier, and the hysteresis is provided instead by the digital portion of the circuitry. The digital section consists of a J-K flip-flop, an inverter, an AND gate, and a few diodes. The circuitry works as follows: first, assume that Q is low, $\overline{Q}$ is high, and $V_t$ is low. Since Q is low, there are no pulses coming through the AND gate. Since $\overline{Q}$ is high, it supplies the current $I_{ref}$ through $D_2$, and $D_1$ is back-biased so that $I_{ref}$ is not connected to node A of the integrator. Under these conditions $V_o$ ramps negatively and eventually passes zero—the threshold voltage—causing $V_t$ to go high. This puts a high level on J and a low level on K. At the next clock pulse, Q goes high and $\overline{Q}$ goes low. The AND gate now starts to pass clock pulses, which go to a counter. At the same time, $I_{ref}$ is switched to node A because $D_2$ is reverse-biased, and reference current is drawn from the integrator. When the integrator output $V_o$ again passes $V_d$, $V_t$ again goes low, putting a low level on J and a high level on K. At the next clock pulse, Q goes low and $\overline{Q}$ goes high, which stops clock pulses from passing through the AND gate and disconnects $I_{ref}$ from the integrator. A known current has now been removed from the integrator over a measured integral number of clock cycles, and thus a known amount of charge has been removed. This process continues as long as $V_{in}$ is positive.
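The switching sequence just described can be checked numerically by simulating the converter one clock pulse at a time. The sketch below (Python) uses the article's 2-kilohm input resistance, 1.0-mA reference current, and 2,000-pulse conversion period; the integrating-capacitor value and clock period are arbitrary assumptions, not figures from the article.

```python
def charge_balance_convert(v_in, r_in=2_000.0, i_ref=1.0e-3,
                           n_total=2_000, c=1.0e-6, t_clk=1.0e-4):
    """Clock-by-clock sketch of one charge-balancing conversion period."""
    i_in = v_in / r_in           # input current into the summing node
    v_o = 0.0                    # integrator output voltage
    switch_closed = False        # state of the reference-current switch
    count = 0                    # clock pulses passed by the gated counter
    for _ in range(n_total):
        i_net = i_in - (i_ref if switch_closed else 0.0)
        v_o -= i_net * t_clk / c         # inverting integrator ramp
        if switch_closed:
            count += 1                   # count only while I_ref is applied
        # the flip-flop samples the threshold detector at each clock edge;
        # the detector trips once V_o has crossed the 0-V threshold
        switch_closed = v_o < 0.0
    return count
```

With these values, a 1.0-V input accumulates about 1,000 counts and a 0.5-V input about 500, matching the proportionality between input voltage and accumulated count that the article describes.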
The variable time period between when $V_t$ changes state and when the clock pulse changes the state of the J-K flip-flop is the digitally generated hysteresis of the system. This time can vary from zero to one clock cycle. After a period of time, under the application of a constant input voltage $V_{in}$, the output of the integrator establishes an average voltage. For this to occur, the average voltage on the integrating capacitor must be constant. Consequently the charge removed must equal—or balance—the charge applied. The current reference is really a voltage reference and a resistor. It makes use of the fact that the integrator summing junction (node A) is at 0 V, which makes $I_{ref} = V_{ref}/R_r$ (see Fig. 3). Since diode $D_1$ is also in the circuit when $I_{ref}$ is connected to node A, transistor $Q_1$ is used to buck out and temperature-compensate $D_1$. $D_3$ is a temperature-compensated zener diode. **Fixing the variables** Since the same capacitor is used for both the applied and removed charge, the charge-balancing equations can be written in terms of currents. Over any conversion cycle, the average input current, $I_{in} = V_{in}/R$, must equal the average current removed from the integrator, $I_o = I_{ref}N_c/N_t$. Here, $N_c$ is the number of clock pulses over which current was removed from the integrator, and $N_t$ is the total number of clock pulses in one conversion cycle. Thus, $V_{in}/R = I_{ref}N_c/N_t$, or $V_{in} = RI_{ref}N_c/N_t$. The only variable on the right-hand side of this equation is $N_c$. Thus, by proper selection of R, $I_{ref}$, and $N_t$, the converter can easily be scaled to cover any desired voltage range. For example, choosing $R = 2$ kilohms, $I_{ref} = 1.0$ mA, and $N_t = 2,048$ counts provides a 3½-digit converter that can measure up to 2.000 V with 1.0-mV resolution.
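The scaling relation can be verified with a few lines of arithmetic. The sketch below treats the total count as the 2,000 measuring pulses (the remainder of the 2,048-pulse period is housekeeping, as the article goes on to explain), with `n_c` standing for the number of pulses the counter accumulates.

```python
def counts_to_volts(n_c, r=2_000.0, i_ref=1.0e-3, n_t=2_000):
    """The article's charge-balance relation: V_in = R * I_ref * (count / total).

    n_c is the count accumulated while the reference current is applied;
    n_t is taken as the 2,000 measuring counts of the 2,048-pulse period.
    """
    return r * i_ref * n_c / n_t

resolution = counts_to_volts(1)      # smallest step: about 1.0 mV
full_scale = counts_to_volts(2_000)  # maximum count: about 2.000 V
```

With the example's 2-kilohm resistor and 1.0-mA reference, one count corresponds to 1.0 mV and a full count of 2,000 to 2.000 V, as the article states.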
Actually, only 2,000 counts are needed for a 3½-digit machine; the extra 48 clock pulses are used for such housekeeping chores as transferring the contents of the counter into some type of memory (so that the count can be displayed during the next conversion cycle) and resetting the counter to zero. **Limitations** The major disadvantage of the charge-balancing converter is its speed. It requires at least as many clock pulses as the maximum count to complete a conversion cycle. As an example, a 3½-digit (11-bit) converter requires more than 2,000 clock pulses to effect a conversion. A successive-approximation converter could probably do the job in 14 to 16 clock cycles. Inexpensive linear IC's have sufficient gain-bandwidth product and slew rate for satisfactory operation at clock frequencies of 10 to 20 kilohertz, which allow a conversion cycle time of 0.1 second. If 10-ms conversion times were required, faster devices would be needed. As for sensitivity, the 3½-digit system described here can resolve 1 mV. To achieve 100-microvolt resolution would require either an integrator amplifier with less than 30 $\mu V/^\circ C$ of drift, or an autozero circuit. Precision, too, must be considered. The present system is free-running—each conversion cycle does not necessarily start with the same initial integrator condition. This leads to a ±1-digit error. The converter could be synchronized to remove this source of imprecision. Finally, as already mentioned, this a-d converter is unipolar, so that a bipolar input must be preconditioned before a-d conversion can take place. Still, despite these disadvantages, all of which can be overcome with the exception of speed, the performance of this system is quite impressive in light of its low cost and simplicity.

**4. Competitor.** In dual-slope converter, input voltage is applied for fixed period, and time needed to return integrator voltage to zero at constant discharge rate provides measure of applied input voltage.
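The speed figures quoted above follow directly from the clock rate and the length of the conversion period, which a couple of lines of arithmetic confirm (taking the full 2,048-pulse period and the top of the quoted clock range):

```python
# Conversion-time arithmetic for the speed limitation discussed above
clock_hz = 20_000             # upper end of the 10-to-20-kilohertz range
pulses = 2_048                # full conversion period, housekeeping included
cycle_s = pulses / clock_hz   # roughly 0.1 second per conversion
```

At the 10-kHz end of the range the same period stretches to about 0.2 second, which is why faster devices would be needed for 10-ms conversions.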
**Comparison with dual-slope conversion** The speed limitation of the charge-balancing converter is shared by the dual-slope converter, so it should prove worthwhile to compare these two approaches in some detail. Different versions of the dual-slope converter can be made to handle bipolar inputs, to be auto-zeroed, to reject power-line interference, and to be insensitive to variations in the values of some of its components. But for purposes of comparison with the charge-balancing circuit, a unipolar system with the same input configuration will be used. (Actually, many of the input-circuit variations used with the dual-slope technique can also be applied to the charge-balancing converter.) A dual-slope converter has basically the same parts complement and block diagram as the charge-balancing unit. Of course, the digital conversion circuitry would have to be different. For the dual-slope circuit in Fig. 4, input voltage $V_{in}$ is converted to a current and applied to the integrator for a fixed period of time, which allows the integrator to ramp to some arbitrary voltage, $V_x$. Then a reference current of opposite polarity is connected to the integrator until the integrator output crosses zero. Once $V_{ref}$ is connected to the integrator, the counter starts, and clock pulses are counted until the zero crossing occurs. Since the voltage on the integrating capacitor is again zero, the charge applied equals the charge removed, and a digital conversion has been performed. A small reset time is needed to strobe the latches and reset the counter. From the standpoint of circuit complexity, the digital sections of the dual-slope and charge-balancing converters would be about the same, with slightly different timing. The dual-slope circuit would need more counts per conversion cycle, to cover the period during which $V_{in}$ alone is applied, and this would increase the size of the required ripple counter.
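For contrast, the dual-slope cycle of Fig. 4 can be sketched the same way: the charge accumulated during the fixed charging period is removed at the reference current while the counter runs. The component values below are carried over from the charge-balancing example purely for comparison; the article gives none for this circuit.

```python
import math

def dual_slope_convert(v_in, r_in=2_000.0, i_ref=1.0e-3, n_fixed=2_000):
    """Idealized dual-slope conversion: fixed charge time, counted discharge."""
    # charge put on the integrator while V_in alone is applied,
    # measured in (amperes x clock periods)
    charge = (v_in / r_in) * n_fixed
    # clock pulses needed to remove that charge at the reference current;
    # the counter stops at the zero crossing
    return math.ceil(charge / i_ref)
```

The transfer function is the same as the charge-balancing unit's (a 1.0-V input again yields about 1,000 counts), which is why the two techniques compete so directly on everything except circuit details.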
The analog sections would also be about the same, with two exceptions: the threshold detector and the switching devices. In the dual-slope circuit, the threshold detector must have good long-term zero stability; in the charge-balancing circuit, good zero stability for one conversion cycle is all that's required. For both converter types, the integrating capacitance and the system clock frequency must be stable for one conversion cycle. To connect the reference, the same diode switching could be used in the dual-slope as in the charge-balancing technique, but for reset, additional switching devices would be required for the former. The integrator in the dual-slope circuit must be initialized before the start of a conversion cycle because its starting level determines the accuracy of a conversion. Consequently, the charge-balancing approach winds up using fewer components, which are less critical, than does an equivalent dual-slope circuit. Dual-slope converters can be made bipolar—by adding an opposite-polarity reference, another high-quality threshold detector, and polarity-sensing circuitry to determine which reference and which threshold detector to use. To give the charge-balancing converter bipolar capability, an absolute-value detector can be used in front of it. In this case, over-all complexity and cost of each system are about equal. Even so, the charge-balancing circuit still enjoys an advantage over the dual-slope circuit because the absolute-value detector doubles as an ac-dc converter, which adds ac measurement capability at no extra cost. The dual-slope converter has one distinct advantage over the charge-balancing converter. By proper selection of the period for which the input is applied, the dual-slope converter can be made to reject a specific frequency (such as line frequency) and its harmonics. This is impossible to do with the charge-balancing converter; ac rejection must be accomplished with input filtering techniques.
The operation of the charge-balancing converter has been proven by over a year of use in the Keithley 167 Autoprobe Digital Multimeter (Fig. 5). This instrument is fully auto-ranging, measures ac and dc voltage and resistance, and is battery-operated. In fact, the charge-balancing converter seems to have met the meter's overall design objectives as well as or better than any other a-d conversion system.

**5. Proven performance.** Charge-balancing converter is heart of this portable multimeter. Extensive field experience proves that new conversion technique is worth considering for new system designs.

---

**Temperature compensation for high-frequency transistors** by Bert K. Erickson, General Electric Co., Syracuse, N.Y. If the operating temperature of a high-frequency grounded-emitter power transistor varies widely, the collector resistance of a second transistor can provide temperature compensation, without causing excessive power dissipation in the stage's bias circuit.
The technique is suitable for operating frequencies of 300 to 3,000 megahertz, if the power levels are at least 200 milliwatts and ambient temperature variations range from 0°C to 70°C. For this broad a temperature range, the quiescent collector current of a class-A transistor amplifier will change enough to cause noticeable gain variation and waveform distortion. Conventionally, a current-feedback approach is employed to obtain temperature stability. The resistance in the transistor's emitter circuit is maximized, while the resistance in the base circuit is minimized. But this technique presents assembly problems because of the very high operating frequencies involved. The emitter of these transistors is usually connected to ground with very short wire bonds to eliminate the series resonance of the bypass capacitor. And, although the transistor's emitter is grounded, temperature stability cannot be obtained with a voltage-feedback approach, since this would reduce power-conversion efficiency. The typical grounded-emitter transistor stage of (a), which is drawn without isolating, coupling, and tuning components for simplicity, has a current stability factor of:¹ \[ S_i = \frac{\Delta I_C}{\Delta I_{CO}} = \frac{R_L + R_1 + R_E[1 + (R_1 + R_L)/R_2]}{R_L + R_1(1 - \alpha) + R_E[1 + (R_1 + R_L)/R_2]} \] And the voltage stability factor is:¹ \[ S_v = \frac{\Delta I_C}{\Delta V_{EB}} = \frac{-\alpha[1 + (R_1 + R_L)/R_2]}{R_L + R_1(1 - \alpha) + R_E[1 + (R_1 + R_L)/R_2]} \] Voltage stability is the preferred sensitivity parameter for a power transistor because the emitter-base voltage, $V_{EB}$, is easily measured and is often used to find the temperature of the collector depletion layer.² The voltage stability factor is negative because collector current $I_C$ increases as junction voltage $V_{EB}$ decreases. Since the term $R_1(1 - \alpha)$ is very small, one way to diminish $S_v$ is to make the load resistance, $R_L$, as large as possible. Unfortunately, $I_C$ flows through $R_L$, and the stage's conversion efficiency will be reduced substantially. However, a large $R_L$ can be obtained without the usual voltage-drop degradation by using the collector resistance of a second transistor, as in (b). With the T-model equivalent circuit for the common-emitter transistor, the output resistance of this configuration can be expressed as: $$r_o = r_c(1 - \alpha) + r_e(r_b + \alpha r_c + R_g)/(r_b + r_e + R_g)$$ For this equation: $$1/[r_c(1 - \alpha)] = \Delta i_c/\Delta v_{ce}\big|_{i_b}$$ $$\beta = \Delta i_c/\Delta i_b\big|_{v_{ce}}$$ $$\alpha = \beta/(\beta + 1)$$ All of these quantities can be readily obtained from the transistor's collector characteristics.³ The upper transistor in (b) provides an output resistance of 1,200 ohms, which yields a predicted voltage stability factor of $14.9 \times 10^{-3}$ amperes per volt. If a fixed resistor of 1,200 ohms were used, it would have an IR drop of 60 V across it. But the voltage drop across both the transistor and its emitter resistor is only 3 V. The graph shows the actual characteristics of the transistor stage as temperature rises from 30°C to 100°C. As temperature increases by 70°C, the base-emitter voltage drops by only 0.15 V, and the collector current rises only 5 milliamperes. Without temperature compensation, the collector current would be 32 mA higher.

**Nailing down Q point.** Grounded-emitter power transistor stage (a) can be compensated for a 70°C temperature range by employing a second transistor as the load resistance (b). The collector resistance of upper transistor improves the voltage and current stability of lower transistor without causing an efficiency-robbing voltage drop. The graph depicts the stage's temperature performance.

\textbf{REFERENCES} 1. M.V. Joyce and K.K. Clark, "Transistor Circuit Analysis," Addison-Wesley, 1964, Chap. 3. 2. R.L. Pritchard, "Electrical Characteristics of Transistors," McGraw-Hill, 1967, p. 613. 3.
R.P. Nanavati, "An Introduction to Semiconductor Electronics," McGraw-Hill, 1963, p. 163. --- \textbf{Simple gating circuit monitors real-time inputs} by David F. Hood Bell-Northern Research, Ottawa, Canada In normal operation, the set and reset inputs of the simple flip-flop circuit are not allowed to become active simultaneously, although both can remain at logic 0. But if this elementary rule is violated, a new gating function that can arbitrate real-time inputs is realized. The circuit is particularly useful in signal-processing applications where interrupt requests may arrive asynchronously to be processed by a simple sequencer, rather than by a computing-type device. In such applications, the simplicity of the circuit also makes possible considerable cost savings. When circuit (a) is used as a flip-flop, its $S_1$ and $S_2$ inputs are both low in the quiescent operating state. If the $S_1$ and $S_2$ inputs are both high instead, outputs $Y_1$ and $Y_2$ are low (in the quiescent mode). Now, when $S_1$ goes low, $Y_1$ goes high, and $Y_2$ does not change. But if $S_2$ then goes low, neither $Y_1$ nor $Y_2$ changes. And since the circuit is symmetrical, if $S_2$ goes low while $S_1$ is high, $Y_2$ will go high and lock out $S_1$. The signal paths of $S_1-Y_1$ and $S_2-Y_2$ may be regarded as inverters with real-time priority arbitration. The addition of a third gate to the circuit provides an INPUT REQUEST lead. With a third gate, the circuit can also be extended to accept three inputs, as shown in (b). Further extension is done in a similar manner. As with circuit (a), the first input that goes to logic 0 inhibits all the other inputs, while producing an output itself. Two or more inputs going to logic 0 simultaneously will produce a race condition that, nevertheless, can have only a single victor. One input can be handicapped relative to another by using RC delay networks at the input or output of the handicapped gate. 
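The lock-out behavior described above can be modeled as two cross-coupled NOR gates with both inputs held high in the quiescent state. This is a minimal logic sketch of circuit (a), not Bell-Northern's actual gate-level implementation; the update loop simply iterates until the outputs settle.

```python
def nor(a, b):
    """Two-input NOR gate (inputs and outputs are 0 or 1)."""
    return 0 if (a or b) else 1

def settle(s1, s2, y1=0, y2=0):
    """Propagate the cross-coupled NOR pair until its outputs are stable."""
    for _ in range(4):              # a few passes are enough to settle
        y1 = nor(s1, y2)
        y2 = nor(s2, y1)
    return y1, y2

# Quiescent: both inputs held high, both outputs low
quiet = settle(1, 1)                # -> (0, 0)
# S1 falls first: Y1 goes high...
first = settle(0, 1, *quiet)        # -> (1, 0)
# ...and S2 falling afterward is locked out, leaving Y2 low
locked = settle(0, 0, *first)       # -> (1, 0)
```

By symmetry, if S2 had fallen first, Y2 would go high and S1 would be locked out instead, which is exactly the real-time priority arbitration the article describes.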
The same operating description and circuit configurations apply if all the logic levels are inverted and NAND gates are substituted for the NOR gates.

\textbf{Lock-out gate.} Both set and reset ($S_1$ and $S_2$) inputs to flip-flop (a) are kept high in quiescent state. When either $S_1$ or $S_2$ goes low, its output ($Y_1$ or $Y_2$, respectively) will go high, but the other output stays low even if its input goes low. Since only one signal can pass to the output at a time, this gate can arbitrate asynchronous interrupt signals. An additional gate (b) accommodates another input.

---

\textbf{Audio amplitude leveler minimizes signal distortion} by Edward E. Pearson, Burr-Brown Research Corp., Tucson, Ariz. An ac current-controlled bridge in the feedback loop of an operational amplifier can provide very close control of signal amplitude, while contributing negligible distortion at or near a predetermined optimum input signal level. The resulting circuit is well suited for amplitude leveling in test oscillators, communications equipment, and telemetry systems. It can be built for around $4 and offers extremely close amplitude control over the entire audio spectrum. Unlike conventional circuits that apply increasing amounts of feedback along the entire span of the input voltage range, this amplitude leveler applies zero feedback (and, therefore, zero distortion) at an optimum input level and produces positive or negative feedback above and below this level. The differential output from a bridge is used to get the desired feedback. The bridge, which is outlined in color, employs two devices, T₁ and T₂, whose resistance varies with current. Such components as incandescent lamps, thermistors, or even active devices can be used. Here, T₁ and T₂ are incandescent lamps. Resistors R₁ and R₂ are chosen to be within the resistance range of T₁ and T₂. A specific voltage, V, will shift the resistance of T₁ and T₂, balancing the bridge and producing a zero differential output (e₁ − e₂).
As voltage V is varied above and below the zero output level, the bridge is unbalanced in opposite directions and develops differential outputs of opposite phase. Letting R₁ = R₂ = R and T₁ = T₂ = T, voltage e₁ can be expressed as: \[ e_1 = TV/(T+R) \] and voltage e₂ is: \[ e_2 = RV/(T+R) \] so that the differential voltage becomes: \[ e_1 - e_2 = (T - R)V/(T+R) \] When T is greater than R, \( e_1 - e_2 \) is positive; when T = R, \( e_1 - e_2 = 0 \); and when T is less than R, \( e_1 - e_2 \) is negative.

Depending on the input signal level present at amplifier A₁, the network formed by the bridge and amplifiers A₂ and A₃ produces positive, negative, or zero feedback. For the component values indicated, an input voltage of approximately 0.4 volt is just sufficient to drive the bridge to a balanced condition (zero feedback). At this input level, the components in the feedback network cannot contribute to distortion in the output. If the input voltage varies from the optimum 0.4-V level, the inputs to amplifier A₃ will become unbalanced, and an amplified differential voltage (from the bridge) will produce gain compensation at amplifier A₁. The table indicates the range and degree of amplitude control obtained.

The circuit's output voltage can be made higher by increasing the value of resistors R₁ and R₂; the higher resistance values increase the voltage needed to balance the bridge. Or, the output voltage can be made smaller by increasing the gain of amplifier A₂. To lower the optimum input voltage level, the gain of amplifier A₁ is made higher. Because increasing amounts of positive feedback are present at the input to amplifier A₁, the circuit becomes unstable at very low or zero input levels. The table shows the minimum permissible input levels; the circuit must be modified to accommodate input signal drop-outs.
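The sign behavior of the bridge's differential output follows directly from the expressions above. A quick numerical sketch (the resistance and voltage values here are illustrative, not from the article):

```python
def bridge_diff(T, R, V):
    """Differential output e1 - e2 of the bridge with R1 = R2 = R and T1 = T2 = T."""
    e1 = T * V / (T + R)  # lamp-side tap
    e2 = R * V / (T + R)  # resistor-side tap
    return e1 - e2

# Lamp resistance above, at, and below the balance point (R = 100 ohms, V = 1 V):
for T in (150.0, 100.0, 50.0):
    print(T, bridge_diff(T, 100.0, 1.0))
```

The three cases reproduce the positive, zero, and negative feedback regions the text describes: the output is positive for T > R, exactly zero at balance, and negative for T < R.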
--- **Leveling audio signals.** Rather than increasing feedback with increasing input voltage, the audio amplitude leveler operates at zero feedback for an optimum input voltage. A current-controlled bridge in the feedback loop of amplifier A₁ develops the differential voltage needed to keep the output level steady. The incandescent lamps act as current-variable resistors that balance the bridge when the input voltage is 0.4 volt.
The tradeoffs in monolithic image sensors: MOS vs CCD

With two kinds of solid-state imaging devices now available, designers have a choice in systems that will replace image tubes, especially in low-light-level applications.

by Roger Melen, Stanford Electronics Laboratories, Stanford, Calif.

Designers of information-display and recording systems have long dreamed of the coming of solid-state imaging devices that are small and fast, operate at low power without high voltages, and work at wide dynamic ranges of ambient light. Those dreams are becoming a reality as the charge-coupled device comes to the marketplace to join the older MOS photodiode image sensor. Both types of monolithic image sensors offer fundamental improvements over earlier imaging methods, especially for optical character recognition, facsimile systems, and video communications, where high-voltage devices often requiring high light levels are being used. But the capabilities of the two types of devices overlap, and designers are evaluating the strong points of each.

Preliminary experience indicates that both the MOS diode array and the CCD imaging array are suitable for OCR and facsimile displays that require only small arrays, while the CCD appears to be the only one of the two suitable for television applications, both at high and at unusually low light levels.

MOS diode linear arrays ranging in size from 64 to 1,000 diode elements (photograph, right, from Reticon Corp.) have been available for some time from several manufacturers. These devices are capable of imaging rates higher than 5 megahertz, and they offer real-time facsimile-quality performance. The MOS diode array also is useful for small-area imaging applications where resolution need not be of television quality.
In these applications, the MOS diode array is an excellent replacement for low-resolution image tubes because of its capability of operating at low light levels with self-contained power supply, drive circuitry, and displays. Complete camera systems with 50-by-50 diode arrays are already on the market and are being used for surveillance, OCR, and defect-detection systems.

The value of the CCD must be considered as lying mainly in its potential for supplying full video-quality imaging at both high and low light levels, a performance that is beyond the capability of present MOS diode technology at comparable cost. Moving from concept to marketplace in less than three years, the CCD is already available as a linear device with 500 elements (photograph, far right, from Fairchild), an incredible achievement when compared to the earliest, 64-element MOS image sensors. CCD technology already offers products with a dynamic range of 1,000 to 1, making it possible to image objects having widely different intensities. The sensor can detect light levels as low as 15 microfootcandles.

Full-scale CCD area imaging devices that are expected within the next year include video cameras with resolution of 250 lines, which is adequate for most data-communications systems. Line imagers with 1,500 elements for page readers are on their way, and not far behind is the ultimate imaging goal—well within the immediate developmental capabilities of CCDs—full-video-quality cameras that have 550-line resolution and that operate at ordinary ambient-light levels.

The article that follows compares the two competing types of monolithic image sensors from the standpoint of important performance criteria—dynamic range, sensitivity, noise, and image clarity. But it must be emphasized that the comparison is between MOS arrays that have a product history of three to four years and CCDs that are only now entering production.
—Laurence Altman Monolithic image sensors provide a major new dimension in the fabrication of information system displays—be they video cameras, facsimile equipment, or process control instrumentation for optical character-recognition systems. Both MOS photodiode arrays, now three or four years old, and charge-coupled devices, which are now entering the marketplace, offer the user a new standard in small size, high speed, high reliability, and ease of use. But because of subtle differences in structure and readout, each has its unique performance characteristics. The design engineer should clearly understand the operation and performance limitations of each device before committing it to an expensive, complex system design. Superficially, both types of arrays appear to be similar in operation. Both are fabricated with basically the same integrated-circuit technology. Images formed on the face of semiconductors are scanned off in conventional shift-register fashion. However, their performance differs because of different methods of projecting the image on the chip and reading out the signals. The MOS image sensor (Fig. 1a) is essentially a high-performance diode scanning circuit built with standard photodiode and either metal- or silicon-gate technology [Electronics, Nov. 8, 1971]. The scanning circuit is made up of MOS transistors that are embedded in the same monolithic structure containing the array of photodiodes. After an object is imaged onto the surface of the photodiode array, the MOS scanning circuit shifts the signals off the chip by accessing the diodes sequentially through an analog switch to a common bus line. A simple CCD (Fig. 1b) is essentially an analog-signal shift register (a delay line) fabricated from a closely spaced array of MOS capacitors [Electronics, March 29, p.25]. It also is usually built with some sort of silicon-gate buried-layer technology. 
The input signal takes the form of minority carriers generated in the semiconductor beneath the capacitor plates by the absorption of incident light. The signal charge, consisting of minority carriers, is stored in packets in the semiconductor beneath the capacitor plates. Since the signals are stored in packets, they appear at the output as sampled signals, with each sample representing a packet of charge. In operation, the signal charge may be transferred from capacitor to capacitor throughout the array by application of a sequence of biasing pulses. The charge-transfer efficiency is typically greater than 99.9% because of the close spacing of the capacitors in the arrays. Recent devices have efficiencies as high as 99.999%.

**Differences in devices**

Despite the similarities in fabrication technologies, the performance of the MOS and CCD image sensors differs because they have different methods of imaging the light and different techniques of reading out signals. In the CCD image sensor, the signal charge is collected by a field-induced junction beneath an MOS capacitor electrode, and readout is accomplished by multiple transfers of charge through the array of induced junctions to the output circuitry. But in the MOS image sensor, the charge is collected by a diffused junction in the photodiode, and readout is accomplished by a single charge transfer from the diffused junction to the video-output circuitry.

**Closing the loop** Readers who are interested in discussing this article with the author may call Roger Melen during business hours on June 7 and 8 at (415) 321-2300, ext. 2642.

1. **Structures.** Two image sensor types operate differently. MOS sensor (a) passes charge directly from the imaging photodiode into an MOS shift register, which then carries the charge to a detection circuit. CCD sensor (b) uses the imaging array itself as the transfer mechanism.
These rather subtle differences in structure and readout result in wide differences in performances at high and low light levels, in image clarity, and in device complexity. At low light levels, the minimum light that can be resolved by the image sensor depends on the efficiency with which the image sensor can collect the light incident on it, as well as noise introduced by the sensor and its associated circuitry. The MOS sensor converts light to signals more efficiently than does the CCD, a property that results from the differences in the amount of light reflected from the imaging surface of each device and from the differences in the site at which the signal charge (generated by the incident light) is collected. **Illumination: front vs. back** Two common techniques (Fig. 2) are used to illuminate the semiconductor substrate in monolithic image sensors—front and back illumination. Although either technique could be used with either CCD or MOS image sensors, only back illumination is used for CCDs because most CCD structures have electrodes on the front that are opaque. Unfortunately, back lighting introduces fabrication problems and performance limitations. The substrate die must be made very thin so that the light-generated carriers, which are generated within 4 micrometers of the semiconductor surface for visible light, may be efficiently collected and stored in the depletion layer beneath the capacitor electrodes on the front side. About the thinnest substrate that can be fabricated has a thickness of about 25 micrometers. This means that device elements cannot be spaced less than 25 μm apart—thicker substrates would cause charges to spread from one electrode to another—a restriction that limits the potential resolution of back-illuminated CCDs. This limitation on element spacing is especially damaging for image sensors containing large numbers of elements because it means that a great deal of silicon must be used to accommodate the density. 
Clearly, front illumination is desirable for simple structures to give good resolution. MOS image sensors, fortunately, have silicon oxide covering the semiconductor substrate. Not only is this oxide transparent, but it also acts as an optical coating that matches the optical impedance of the silicon to the impedance of air. Some CCD sensors also have been built with polycrystalline electrodes that can be illuminated from the front, but these polycrystalline structures, unfortunately, provide poor impedance matches with the oxide beneath, which causes reflection at the poly-oxide interface. These mismatches create interference patterns in the surface reflections, resulting in a decrease in the photocurrent output.

2. **Backlighting.** Monolithic image sensors can be illuminated on either the back or the front of the substrate. Most CCDs, however, have front metallization that is opaque to light and therefore requires the imaging to be done from the back.

**That villain, noise**

But whether the array is illuminated from the front or the back, noise introduced into the video signal by the image sensors and associated circuitry is probably the greatest factor that limits operation at low light levels. The noise, which masks small photosignals in both types of arrays, comes from mismatches in parasitic capacitances and from thermally generated carriers. Moreover, CCDs suffer noise from transfer losses.

In MOS image sensors, capacitor noise results from mismatches between the parasitic gate-source and gate-drain MOS capacitances of transistors in the scanning circuit and the photodiodes and video output port with which these capacitances are in series. These MOS transistors are analog switches that address the individual photoelements in the array. When these transistors are turned on or off, there is a corresponding voltage spike on the analog photosignal line being switched.
Although these spikes may be reduced by filtering, because they occur at twice the maximum video frequency, they cannot be eliminated completely. The variation in the magnitude of these spikes throughout the MOS photoarray gives rise to fixed-pattern noise (FPN) in the video passband—noise that cannot be filtered. Fortunately, the variation in the noise is small compared to the absolute magnitude of the spikes. Indeed, with no incident illumination, a low-level noise image resulting from FPN may be observed at the output of the image sensor.

Spike noise, indicated in Fig. 3 as observed at the sensor output, is referenced to an equivalent noise voltage across the capacitance of the photosensing element in a representative 512-element device. Values of noise range from $10^{-3}$ to $0.5 \times 10^{-2}$ volt, well within practical operating levels. The saturated output signal referred to the diode is typically 5 volts, resulting in dynamic ranges of 100 to 1 and more.

**Night vision**

Monolithic image sensors may be operated at incident light levels below those found in the average office, a capability that has resulted in their being used in monitoring and surveillance applications. For this kind of work, exposure range is the important parameter for evaluating the sensor's ability to perform at low light levels. The exposure range is the ratio of the maximum to minimum intensity that can be resolved by the image sensor, and it is often expressed in f stops (factors of two in light intensity). Exposure range may be calculated by:

\[ ER = \frac{\log_{10}(I_{\text{max}} / I_{\text{min}})}{0.301} \]

where:
- \( ER \) = exposure range in f stops
- \( I_{\text{max}} \) = maximum resolved light intensity
- \( I_{\text{min}} \) = minimum resolved light intensity

Maximum resolved light intensity can be limited by either the brightness range of the scene or the saturation level of the image sensor. At the low illumination levels being considered, the exposure range is often limited by the brightness range of the scene. Typical values of exposure range for MOS image sensors are six to 10 f stops, which translates into intensity ratios of 64 to 1 up to roughly 1,000 to 1. Experimental MOS image sensors and some recent CCD products have already been built with dynamic ranges as high as 1,000 to 1.

**3. The limit.** Noise is the limiting factor in the level of light that can be detected by a monolithic image sensor. In MOS imagers, spike noise in the range of \( 0.5 \times 10^{-2} \) volt is low enough to allow use in poorly lit rooms. In CCDs, charge-transfer noise can be as low as \( 10^{-3} \) V.

**4. Noise.** Low-light-level detection is limited by noise in the detection amplifier and reset resistor (a). Noise from the latter ($R_1$) is more prevalent in MOS imagers and dictates the use of low-noise amplifiers, such as the charge amplifier shown in (b).

While CCDs are not affected by FPN from the spikes in switching transistors, they have fixed-pattern noise resulting from capacitance between the clock lines and the output lines. Luckily, these noise pulses are all the same height and can be filtered out by low-pass filters, but the filters consume power and occupy space. A better method of reducing this parasitically coupled noise is to fabricate video preamplifiers on the same image-sensor chips; the magnitude of the parasitic coupling capacitance can be made smaller for on-chip amplifiers than for off-chip amplifiers.

Fixed-pattern noise in both MOS sensors and CCDs can also come from thermal effects. CCD image sensors, however, are more susceptible to thermal effects than are MOS sensors because the surface of a CCD is not in equilibrium, which causes thermal imbalance.
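The f-stop conversion in the "Night vision" sidebar is just a base-2 logarithm of the intensity ratio, since each f stop is a factor of two. A minimal check:

```python
import math

def exposure_range_fstops(i_max, i_min):
    """Exposure range in f stops: log2 of the max/min intensity ratio,
    equivalently log10(ratio) / 0.301."""
    return math.log10(i_max / i_min) / 0.301

# A 64:1 intensity ratio is 6 f stops; roughly 1,000:1 is about 10 stops.
print(exposure_range_fstops(64.0, 1.0))
print(exposure_range_fstops(1000.0, 1.0))
```

This is how a six-to-ten-stop exposure range corresponds to intensity ratios from 64:1 up to about 1,000:1.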
This form of noise is most troublesome at illumination levels below 10 microwatts per square centimeter and for light-integration periods longer than 100 milliseconds for typical devices, because at these levels the noise comprises a significant portion of the dark current and represents the ultimate operating limitation.

**Transfer noise**

But with CCDs, transfer-loss noise (also shown in Fig. 3) is more damaging than fixed-pattern noise. This type of noise, the result of charges left behind after transfer operations, appears in the sensed image as a white smear to one side of a sensed white spot. It is most noticeable when large quantities of charge are being transferred, corresponding to a high-intensity spot. For example, a loss of $10^{-5}$ per transfer (99.999% efficient) will, in a 512-element array (1,024 transfers for a two-phase device), result in a total loss of one part in 100. Three-phase clocking would increase the total loss of charge for the same transfer efficiency. A white spot transferred through the entire array will appear as a smear at the output port, with the biggest smear coming from spots starting farthest from the output port. Transfer-loss noise also reduces a CCD's exposure range, and it basically decreases the contrast the sensor can detect.

One method of reducing transfer noise is to bury the transfer channels about 1 micrometer beneath the surface of the substrate by ion implantation. Charges transferred in the buried channels are not subjected to transfer inefficiencies caused by charges trapped in surface states at the semiconductor-oxide interface. A 500-element buried-channel device just appearing on the market has a transfer efficiency of 99.999%.

**Fighting noise**

However, some noise sources cannot be readily overcome, such as thermally generated noise, which will always be present. This limits sensor performance at low light levels.
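The one-part-in-100 figure quoted for the two-phase 512-element array follows from compounding the per-transfer loss over all transfers, as this sketch shows:

```python
def total_loss(loss_per_transfer, n_transfers):
    """Fraction of a charge packet lost after n transfers, each of which
    retains (1 - loss_per_transfer) of the charge."""
    return 1.0 - (1.0 - loss_per_transfer) ** n_transfers

# 99.999% efficiency over the 1,024 transfers of a two-phase 512-element array:
print(total_loss(1e-5, 1024))  # about 0.01, i.e. one part in 100
# Three-phase clocking means more transfers (1,536) and hence more total loss:
print(total_loss(1e-5, 1536))
```

It also makes plain why the smear is worst for spots farthest from the output port: those packets undergo the most transfers.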
All amplifiers and all resistors are subject to thermal noise; in an imaging system, the circuitry connected to the output of the image sensor (Fig. 4a) generates this noise. In this example, the thermal noise signal appearing at the amplifier output is a function of the source impedance and the noise parameters of the amplifier. In monolithic image sensors, since the source impedance is the capacitance between the output terminal and ground, the larger the capacitance, the greater the noise. This type of noise is greater in MOS image sensors than in CCD arrays, because the MOS image sensor has a high-capacitance bus line connected to its output, but this noise is not a limiting factor because high-performance low-noise amplifiers are available at low cost.

Still another source of noise is the resetting resistor, $R_1$, also shown in Fig. 4a. This resistor can introduce an equivalent noise charge (called Johnson noise) on the video signal of magnitude $q_{\text{noise}} = \sqrt{kTC_1}$. Thus, the greater the capacitance across the reset resistor, the greater is the noise charge. Fortunately, the charge amplifier shown in Fig. 4b may be used to reduce the influence of this fundamental noise source by allowing capacitor $C_1$ to be very small. The magnitude of Johnson noise relative to the other noise sources previously discussed is also shown in the generalized noise plot of Fig. 3 for a representative 512-element MOS line-image sensor.

**Performance at high light levels**

Saturation exposure, a parameter that describes sensor performance at high light levels, generally is a function of the maximum charge that can be stored during the light-integration period of the sensing elements in the photoarray. The light-integration period is the time used by the photoelements to collect charges representing the illuminated image. Typically, the light-integration period corresponds to the frame period.
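The $\sqrt{kTC_1}$ reset-noise charge is easy to evaluate numerically; the 1-pF capacitance below is an illustrative value, not one taken from the article:

```python
import math

K_BOLTZMANN = 1.380649e-23       # J/K
ELECTRON_CHARGE = 1.602176634e-19  # C

def ktc_noise_electrons(capacitance_farads, temp_kelvin=300.0):
    """RMS reset ('kTC') noise charge sqrt(kTC), expressed in electrons."""
    q = math.sqrt(K_BOLTZMANN * temp_kelvin * capacitance_farads)
    return q / ELECTRON_CHARGE

# A 1-pF node at room temperature carries roughly 400 electrons of reset noise,
# which is why the charge amplifier of Fig. 4b tries to keep C1 small.
print(ktc_noise_electrons(1e-12))
```

Note the square-root dependence: shrinking $C_1$ by a factor of 100 reduces the noise charge only tenfold, but that is still the lever the charge amplifier exploits.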
For MOS image sensors, the maximum signal charge that can be stored depends on the bias applied to the photodiodes. For CCDs, it depends on the potential of the storage surface. Because the photoelements of CCDs and MOS image sensors have similar geometries and similar storage potentials, the saturation light levels of both devices are similar.

The high-light-level capability of both sensors can be maximized by increasing the storage capacitance of the photoelements while masking the other regions from incident light. This type of structure, called a monolithic aperture, allows the light-handling capability of line-scanning arrays to be significantly increased, while keeping noise in the unexposed areas small. Area-sensing monolithic arrays do not benefit as much from this technique because of the loss of spatial resolution resulting from the large area required to achieve the increased capacitance. MOS image sensors benefit most from this technique because a pn junction is the photoelement, and it can be read out quickly. In a CCD, on the other hand, an adjacent-capacitor photoarray would increase the size of the photoelements and, in turn, the time required to transfer the signal charge from the adjacent photocapacitor to the analog CCD shift register.

CCDs with low charge-transfer efficiency are subject to blurring, since the charges left behind during transfer between electrodes may appear later at the device's output terminal. Unfortunately, the transfer losses that cause blurring in CCDs increase not only with the number of transfers but also with lower light levels. The MOS imager, however, does not blur, because there is only one transfer: the signal flows through a single analog switch before reaching the output.
**Large-area imaging**

There is nearly universal agreement that CCD image sensors, largely because of their smaller cell size, are more likely than MOS sensors to achieve television quality of 525 lines in a two-dimensional scanned-area image sensor. The highest-density MOS area image sensor available has a cell size of 2 mils by 2 mils, whereas CCD area image sensors have already been built in the laboratory with cell sizes on the order of 1 mil by 1 mil—four times denser. Indeed, a CCD area image sensor with half the resolution required for data-transmission systems, such as Picturephone, has already been built on today's LSI-size silicon dice. Industry observers are hopeful that a sensor with full TV resolution can be built in the next couple of years by using larger 500-by-500-mil dice.

However, this device won't have the simple structure that was first conceived for the CCD scanner. It will probably incorporate diffusions for low-noise charge detection, blooming control, and high charge-transfer efficiency. The fabrication process will most likely include ion implantation, two layers of metal, special annealing steps, and multiple diffusions to obtain the necessary high performance. The elegance and simplicity of the original device may have to be sacrificed to attain the high level of technology required to mass-produce a competitive CCD device having television-quality resolution.

On the other hand, TV-quality systems can be constructed with existing 512-bit MOS line-image sensors by adding a rotating mirror to optically scan the images. However, the mechanical scanning mirror adds volume to the camera. But less silicon real estate, which is expensive, is required for the mirror-scanned line-image sensor than for a corresponding area image sensor, which results in a correspondingly lower component cost.
But the 512-element MOS line scanner requires higher light levels, because less time is available for integration of each scanned element than in area-sensing devices. These tradeoffs make it clear that the system designer must carefully evaluate the relative merits and disadvantages of both technologies. CCD image sensors, which are free of spike noise, are more likely to be built in high-density arrays, whereas MOS image sensors tend to be less susceptible to image degradation. In any case, since both types of sensors are fabricated with silicon semiconductor technologies and offer similar performance, interchanging the two types of device should be straightforward.
Minicomputer points the way to sewer-system improvements

By tracking rainfall variations and sewage flow rates, a data acquisition system in San Francisco shows why and where sewage overflows, and how deficiencies in the sewer system can be corrected economically.

by W. R. Giessner, F. H. Moss, and R. T. Cockburn, Division of Sanitary Engineering, San Francisco, Calif.

According to the limited rainfall data available—from a single rain gage downtown—San Francisco's sewer system should have been functioning satisfactorily. Yet sewage overflowed into the bay and ocean in just about every heavy rainstorm. Evidently, a broader data base was a very necessary first step in upgrading the system. Over a year ago, therefore, a minicomputer began collecting data from a city-wide network of rain gages and sewage-level monitors in order to define the precise link between the rain and the overflows.
Ultimately, the project is expected to lead to a real-time control capability for deciding, on a minute-by-minute basis, whether to store or treat sewage from all 30 subdistricts in the city, in response to storm movements. For the electronics engineer, the challenge in developing either the data acquisition system or the eventual control system lies largely in interfacing the central minicomputer to the gages and monitors that collect data and to the valves, blowers, and pumps that will control the mixtures of rain runoff and sewage flow. There is also the need to design a data acquisition system capable of being expanded into a fully integrated on-line control system.

**Storms and sewers**

San Francisco resembles many long-established United States communities in having a combined sewer system—that is, one that carries domestic and industrial waste water as well as the runoff from rainfall in one set of pipes. In dry weather, waste water flows from its sources, as shown in Fig. 1, into collector sewers, and thence through interceptor sewers to three sewage treatment plants, which normally handle about 39 billion gallons of such dry-weather waste water per year. Because this dry-weather flow fluctuates, the plants have been designed to handle up to three times the average daily flow without being overloaded. But during a rainstorm, the flow of sewage may for a short period increase to as much as 100 times the average daily flow—causing a massive overflow of untreated sewage into San Francisco Bay and the Pacific Ocean. Over a period of years such overflows occurred on an average of 82 times per year during 46 rainstorms, mostly during the winter months.

Part of the problem seemed likely to lie in an assumption made by designers of conventional storm sewer systems like San Francisco's. Most designs are based on a determination of how heavily rain falls, how long it falls, and how often particular combinations of these rainfall intensities and durations occur.
By statistical methods these combinations are converted into pipe sizes and other specifications of a sewer system—but the conversion usually assumes that rain falls uniformly over the entire area drained by the sewers. Moreover, the rainfall data on which the conversion was based in San Francisco's case was obtained from a single gage atop the Federal Office Building, from which measurements had been taken for 62 years.

**The preliminaries**

When the city's sanitary planning and studies engineers undertook the difficult task of correcting the overflows, they first had to determine what kinds of waste flow were involved and when and in what quantities they occurred. At this point a computer enters the picture—not the minicomputer in the data acquisition system, but a larger system used by several municipal agencies. On this computer the planning and studies staff developed a historical analysis of actual rainfall over that 62-year period, from 1906 to 1968, using the only data available—that from the solitary rain gage. But instead of looking for combinations of intensity and duration, as in the conventional method, they worked with the actual rain occurrences and the time between rains. The result of this analysis appears in Fig. 2: the scale of inches against hours—rainfall intensity—turned out to be less important than the relation of the volume of storage to treatment capacity in a given period of time. As the diagram of the analysis shows, when the cumulative rainfall volume exceeds the treatment capacity, the system overflows. To control this overflow, three avenues are open. The first is to increase the treatment rate to match the rainfall rate plus the daily rate of sanitary sewage flow, with its hourly variations. This is impractical because it would require a large treatment plant of undefined capacity that could respond almost instantaneously to large fluctuations in rate.

**1. San Francisco sewer system.** Ordinary waste and the runoff from rainfall both pass through the same pipes, to be treated and turned into landfill or to be discharged into San Francisco Bay or the Pacific Ocean. Normally 39 billion gallons per year (BG/Y) are processed. Runoff from rain can rise to 100 times the average daily sewage flow and cause massive overflows (color).

A second, more feasible, approach is to provide storage for the excess flow. Then the excess can be treated as capacity becomes available. The third, and most sophisticated, approach is to allocate the total treatment capacity to different segments of the city at different times, taking into account the way rainfall varies with time at any given point and from point to point at any given time during a storm. For example, if a third of the city had relatively intense rain and the rest had little or none at that time, then devoting the total treatment capacity to that area would effectively triple the available treatment capacity. However, the third approach depends on temporal and spatial variations in rainfall—that is, on the assumption that the rainfall extrapolated from a single gage is not representative of rainfall in San Francisco. Further measurement was necessary to establish whether such variations occurred, and if so, where and when. For the initial evaluation, 17 rain gages were installed by staff members on the roofs of their houses at various places in the city. Each of these 17 gages recorded rainfall data on a drum recorder for the 1969–70 season. They provided a relatively poor data base, for several reasons: the drum recorders on the various gages were not synchronized, and the mechanical or battery-driven clock mechanisms were neither very accurate nor capable of resolution better than five minutes. In a few cases several days of rainfall data were overwritten on a single sheet because a staffer was out of town, or just forgot to replace the charts on the recorders.
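The storage-versus-treatment bookkeeping behind the Fig. 2 rainfall analysis can be sketched in a few lines (a modern Python illustration with made-up unit volumes, not the planning staff's program):

```python
def overflow_profile(rain_per_hour, treat_rate, storage_cap):
    """Hour-by-hour stored volume and overflow for one storm.

    rain_per_hour -- runoff volume entering the system each hour
    treat_rate    -- constant volume treated per hour
    storage_cap   -- maximum volume that can be held back
    """
    stored, profile = 0.0, []
    for rain in rain_per_hour:
        stored = max(0.0, stored + rain - treat_rate)  # treat first, hold the rest
        spill = max(0.0, stored - storage_cap)         # beyond storage -> overflow
        stored -= spill
        profile.append((stored, spill))
    return profile

# A burst that exceeds the treatment rate for three hours
# overflows once storage fills:
profile = overflow_profile([5, 5, 5, 0, 0], treat_rate=2, storage_cap=5)
```

With these illustrative numbers the system spills in the second and third hours, then draws the stored excess down once the rain stops, which is exactly the behavior the diagram describes.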
Nevertheless, the distributed network recorded data that was significantly different from that of the single gage at the Federal Office Building. For example, the volume of rainfall was consistently at least 15% less, and ranged to as much as 25% less, than that extrapolated from the single gage—pointing to a drastically reduced need for added treatment and/or storage capacity. Spatial differences were quite evident. Significant time variations were indicated, although they could not be well defined because the gages were not synchronized.

**The present system**

This data justified the expenditure on the data acquisition system. It consists of 30 remote rain-gage stations, 120 remote waste-water-level monitors, a central recording station, and all necessary software to operate the system. The central recording station includes a Honeywell H316 computer with 16,384 words of core memory, a real-time clock, and a power-failure detection and power-restarting unit. Its peripherals include two magnetic tape drives with a controller, and a teletypewriter. The data collection requirements of the system are not severe—almost any minicomputer available at the time the specifications were drawn up could have done the job. Even heavy rainstorms keep the computer busy only about 10% of the time. Software supplied with the system includes the manufacturer's basic package plus application programs: a data communications interface, processors for accepting data from the remote stations, a timer routine, loggers for printed output, formatters and recorders for storing processed data on magnetic tape, and a routine for calculating the five-minute interval of maximum rainfall intensity for each hour of the day. An executive program sequences these other programs in response to timing and interrupt signals. Leased telephone lines connect the remote stations to the central recording facility.
Whenever any of the 150 circuits changes state (from off to on, or from on to off), a corresponding relay in a telephone equipment rack is activated, interrupting the computer; the computer then services the line, identifying it and adding the data that the state change signifies to the accumulated base. Each rain gage consists of a beam with two indented buckets balanced on a central pivot in such a way that each bucket alternately receives and discharges rainfall collected by a funnel at the top of the gage. When the equivalent of 0.01 inch of rain has accumulated in the bucket, it overturns and empties out the water; this brings the other bucket into position to catch more rain. As a bucket tips, a normally closed mercury switch opens momentarily to signal the tipping; the normally closed position permits the line to be checked for continuity. These gages are identical to the 17 used in the initial evaluation except for the absence of drum recorders. Each remote waste-water-level monitor contains an electrically driven air compressor and regulatory equipment that produces a constant flow of air out of the reservoir at 1½ standard cubic feet per hour and 28 pounds per square inch. The air bubbles up through the water in the sewer pipe, and as it bubbles, it overcomes the back pressure created by the depth of the water. This back pressure is measured by a bellows that sets the position of a cam lever. As the cam rotates, once every 15 seconds, it lifts the lever; this action closes a mercury switch for a length of time that is proportional to the back pressure and thus to the depth of water in the sewer pipe. The switch closure completes the circuit in the telephone line and is detected at the central recording facility.

**Raw measurement cycle**

The 15-second cycle of the remote monitors establishes the raw data-recording cycle.
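A modern sketch of the signal handling these sensors imply (not the original Honeywell H316 software): tip counts convert to rainfall at 0.01 inch per tip, closure times convert to depth through a calibration constant (assumed here, since the article gives none), and, per the application-program list, each hour of gage data reduces to its wettest five-minute interval:

```python
def rainfall_inches(tip_count, inches_per_tip=0.01):
    """Accumulated rainfall implied by a count of bucket tips (0.01 in/tip)."""
    return tip_count * inches_per_tip

def depth_inches(closure_seconds, inches_per_second=40.0):
    """Sewer depth implied by one 15-second-cycle switch closure.

    Closure time is proportional to bubbler back pressure, hence to depth;
    inches_per_second is a made-up calibration constant for illustration.
    """
    return closure_seconds * inches_per_second

def wettest_five_minutes(tips_per_minute, inches_per_tip=0.01):
    """(start_minute, inches) of the heaviest five-minute span in one hour."""
    assert len(tips_per_minute) == 60
    best_start, best_tips = 0, sum(tips_per_minute[:5])
    window = best_tips
    for start in range(1, 56):
        # Slide the five-minute window one minute to the right.
        window += tips_per_minute[start + 4] - tips_per_minute[start - 1]
        if window > best_tips:
            best_start, best_tips = start, window
    return best_start, best_tips * inches_per_tip
```

With an otherwise dry hour whose minutes 20 through 24 log 3, 4, 5, 4, and 3 tips, `wettest_five_minutes` reports minute 20 and 0.19 inch.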
From the 120 monitors thus come 480 measurements each minute; these are recorded on magnetic tape in one-minute blocks, followed by accumulated rainfall data. The tape also records summaries every five minutes, and maximum sewer levels, accumulated rainfall, and peak rainfall intensities every hour. As a whole, the system is modular and flexible, and makes provision for its eventual expansion into a real-time control system.

**2. Rainfall analysis.** Colored steps show rainfall recorded at intervals during a typical storm. The heavy dark line plots the accumulated rainfall. When this line rises above the lower dotted line, which represents the volume of sewage treated at a constant rate, some of the sewage is stored. When storage capacity (upper dotted line) is exceeded, sewage overflows.

**3. Storm front.** Computer-generated maps of San Francisco showing a rainstorm moving across the city definitely invalidate the previous assumption that rain fell uniformly on all sections. Degree of shading shows the amount of rain. The 12 small maps represent three-minute accumulations between 11:49 a.m. and 12:25 p.m. on March 12, 1971, while the large map shows part of the printout for the period ending at 11:58 a.m., with accumulations registered at individual rain gages. Asterisks represent the shoreline and boundaries between districts within the city. Each dot represents about 3¾ acres.

Installation of the system began in 1970. The remote stations turned out to be more difficult to set up than had been anticipated, because telephone lines and other facilities that were presumed to exist often did not, and because locations that required new telephone lines had to await the utility company's engineering and construction schedule. Other problems and weaknesses appeared during the 1971–72 rainfall season—the system's first full season in operation.
For example, noise in the telephone lines and contact "bounce"—or more precisely "slosh"—in the mercury switches were sometimes recorded as tips of the rain gages. Though the "bounce" was more a matter of alignment than a basic fault in the equipment, it did cause extra data to be recorded, and this extra data overflowed the buffers in the computer's memory and was written into the memory area reserved for the real-time clock. As a result, the clock indication was thrown off, and one week's data was without value. The clock itself was quite inaccurate at first. Its specifications indicated accuracy within 2%—which meant it could drift by almost 30 minutes within 24 hours. It had to be modified to synchronize with the ac line frequency, which is much more accurate. Other problems arose because the computer is considerably more precise than the instrumentation, which is subject to drift; the software had to be modified to reject the resulting spurious data, and algorithms had to be devised to distinguish between significant and insignificant indications of malfunction. The staff had had no experience with any system such as this—nor had anyone else, since the rainfall monitoring system is unique—and therefore had no basis for deciding when they had sufficiently valid data. Attempting to insure 100% valid data was obviously not feasible. But in spite of all these problems, the system's operational status is being continually improved, providing data that previously was not available and indicating many new aspects of rainfall. The system was baptized in March, 1971, when fairly heavy rain fell. By this time, also, a computer program called Symap, developed by the Harvard University Laboratory for Computer Graphics and Spatial Analysis, had been purchased and modified to produce computer-drawn contour maps of rainfall patterns. Figure 3 shows several printouts from the March, 1971, storm.
They reveal a frontal storm entering the city from the northwest, with a front about a mile wide, progressing southeasterly at about eight miles per hour. This depiction of a storm's shape, size, and rate of travel is the most outstanding and useful benefit of the computerized data acquisition system, because it provides the information needed to upgrade the city's sewer system. This was the first definitive indication of the degree of rainfall variability in San Francisco. It also indicated clearly the storm's frontal nature and the brevity of its highest intensity—which was evidently dissipated by the high hills in the center of the city. Since these hills are just under 1,000 feet high, they were not expected to affect the storms significantly. But they were evidently responsible for preventing all but a very light rain from reaching the southeastern part of the city. The rapid response of sewer flow to the course of the storm was equally noteworthy. One of the level monitors showed a depth in three successive 15-second intervals of 44, 86, and over 120 inches. That 120 inches represents a substantial surcharge, because the sewer pipe in question is only 72 inches in diameter. No such rapid fluctuation had ever been taken into consideration when the sewers were being designed. More data became available during the 1971–72 rainfall season, providing much information about storms and sewer flow. For example, the degree of variability of rainfall over San Francisco is vividly illustrated by a study of 27 rainstorms recorded during that season. Of these 27, which occurred between November 1971 and April 1972, one entered the city from the north, nine from the northwest, eight from the west, six from the southwest, two from the south, and one from the southeast. None came from the northeast or east. Thus the dominant direction of storm travel is from west to east.
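A monitor-side plausibility check for the kind of surcharge described above (level readings standing higher than the pipe crown) is easy to sketch; the threshold logic below is an illustration, not the city's actual software:

```python
PIPE_DIAMETER_IN = 72   # diameter of the sewer pipe cited in the article

def surcharged_readings(levels_in, diameter=PIPE_DIAMETER_IN):
    """Return the level readings that stand above the pipe crown."""
    return [lvl for lvl in levels_in if lvl > diameter]

# The three successive 15-second readings quoted in the text:
print(surcharged_readings([44, 86, 120]))   # [86, 120]
```

Two of the three quoted readings exceed the 72-inch crown, which is what makes that surge a surcharge rather than ordinary full-pipe flow.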
Further analysis of 24 of these 27 storms showed that the western part of the city received more rain than either the northeastern or southeastern section in 12 storms, while the northeastern area received the least rain in a different set of 12. Nine of these two sets coincided, and the excess of rainfall in the western over the northeastern sector in those nine ranged from 6% to 112%. These same 24 storms also provide a data base for the primary analysis effort to date—to define the runoff following a rain. In a relatively small drainage subdistrict, the incident rainfall has been quantified with a modified Symap program and compared to the runoff measured by one level monitor. Initially the emphasis has been on determining the consistency of the runoff between storms traveling in the same direction—primarily to check the method of quantification. When these techniques have been proven, and the significant parameters identified, the analysis will be expanded to cover other subdistricts.

**The forecast**

Eventually a predictive capability for real-time control will be developed. Once the response of the sewer system to rainfall inputs can be reliably and consistently predicted, the control logic can be developed. This control logic will be tested on physical scale models of actual storage basins in the laboratory, using real data input obtained from recorded storms. Control mechanisms and new computer hardware and software will also be tested with these models, and later with full-size prototypes. Speed will probably be a much more important factor in the real-time control computer than in the present data acquisition computer. While most of the elements incorporated in the San Francisco plan are not new, nowhere else have they been integrated into a single system. And in helping to make the plan a reality, minicomputers will play a crucial role.

2K MOS RAMs: Clear your boards for action.
Try this in your sockets: our industry standard 2K MOS RAM, the 2548, delivers twice the bit storage in less than half the board space required by 1K RAMs. Just the cost/performance edge you're looking for, to give your competition a lot tougher run for the money. Available now, volume-stocked and field-proven, with over two years of production and testing experience. Much easier to use than previous alternatives, the 2548 jazzes up capability while aggressively lowering system costs. 2K density in a single MOS RAM device gets you out of core memory. TTL-MOS level shifters? The 2548 requires only three—the usual 1K requires fourteen. Non-overlapping clocks simplify design and debugging. Fast access time gets processing moving at a livelier clip. Our 8T25 sense amp assures smooth conversion between your 2K MOS RAM and TTL. Take the most dense RAM available, and unlock its real potential. That's Signetics' user-dedicated technology every time. Now it's a true 2K MOS RAM that goes you 1024 bits better than ever before. Call, write, or wire us today for specs and quotes. And profit from our experience. Signetics-MOS, 811 E. Arques Avenue, Sunnyvale, California 94086, (408) 739-7700. Signetics Corporation, a subsidiary of Corning Glass Works.

**Ac power considerations in capacitor selection**

*by John Kropp, Mepco/Electra, Inc., a North American Philips Co., Morristown, N.J.*

There are as many different ways of calculating power dissipation in a capacitor as there are ways to use a capacitor. The dissipation due to an impressed ac voltage is often overlooked or considered negligible, resulting in capacitor degradation, excessive heating, and early failure. The ac voltage capability of a capacitor is quite different from its dc rating and is a function of its construction.
Fortunately, dissipation due to dc leakage adds to dissipation due to ac components, permitting them to be calculated separately and superimposed. Film capacitors are rated in terms of a frequency-dependent equivalent series RC product, which is labeled the $R_S C$ product. And since nonsinusoidal waveforms can be broken down into their harmonic components, the dissipation of each significant component can be calculated separately and added arithmetically to obtain a conservative estimate of power dissipation. Ceramic capacitors are rated in terms of Q (quality factor) or its inverse, the dissipation factor, from which the $R_S C$ product can be computed. The equivalent series resistance of electrolytic capacitors can be found similarly, but this is rarely necessary since ripple current ratings for electrolytics are generally specified. The limitation on power dissipation is, of course, the maximum temperature the capacitor can tolerate. This is, in turn, a function of the internal structure and case size, which determines the surface area available for dissipating the power. The approximate relationship (assuming free-air convection around the entire surface) between surface area and temperature rise above ambient is: $$T_{\text{rise}} = 133(P/A) \text{ °C}$$ where P is the dissipation expressed in watts, and A is the surface area of the case expressed in square inches. The typical frequency curves show how the maximum $R_S C$ product varies with frequency for polycarbonate and polyester film capacitors, how $Q$ varies with frequency for ceramic capacitors, and how dissipation factor varies with frequency for electrolytic capacitors. For film capacitors, the temperature curves illustrate how the maximum permissible power dissipation is related to ambient temperature for various capacitor sizes. The table associated with each temperature graph gives approximate capacitor dimensions.
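Read together with the frequency curves, the temperature-rise relation above supports a quick sizing check. A sketch in Python (a modern illustration; the dissipation formula follows from P = I²R_S with I = V_acωC, and the case area used here is an assumed value, not a rated part):

```python
import math

def ac_dissipation_w(rsc_ohm_farad, cap_f, v_ac, freq_hz):
    """P = (RsC) * Vac^2 * w^2 * C, from P = I^2 * Rs and I = Vac * w * C."""
    w = 2 * math.pi * freq_hz
    return rsc_ohm_farad * v_ac ** 2 * w ** 2 * cap_f

def temp_rise_c(power_w, area_sq_in):
    """T_rise = 133 * (P / A): free-air convection over the whole case."""
    return 133.0 * power_w / area_sq_in

# Operating point: RsC = 5e-7 ohm-farad, 0.33 uF, 180 V at 1 kHz;
# the 2-square-inch case area is an assumed value for illustration.
p = ac_dissipation_w(5e-7, 0.33e-6, 180.0, 1000.0)
print(f"{p:.2f} W")                      # ~0.21 W
print(f"{temp_rise_c(p, 2.0):.1f} deg C rise above ambient")
```

The dissipation figure can then be checked against the maximum-power-versus-ambient-temperature curves for candidate case sizes.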
The Group A plots are representative of Mepco/Electra series C280A/C280M units, Group B plots represent series C280M units, and Group C plots represent series C281 units. A sample power computation will show how to use the graphs. Suppose a polycarbonate capacitor of 0.33 microfarad must handle an impressed voltage ($V_{ac}$) of 180 volts at a frequency of 1 kilohertz ($\omega = 2\pi \times 1{,}000$ radians per second) in an ambient temperature of 50°C. Since the power dissipated is: $$P = I^2 R_S$$ and: $$I = V_{ac} \omega C$$ then: $$P = R_S V_{ac}^2 \omega^2 C^2$$ or: $$P = (R_S C) V_{ac}^2 \omega^2 C$$ The film capacitor frequency curves indicate that the $R_S C$ product is $5 \times 10^{-7} \Omega F$. Substituting this product and the capacitor's operating conditions in the last equation yields: $$P = (5 \times 10^{-7})(180)^2(2\pi \times 1{,}000)^2(0.33 \times 10^{-6})$$ $$P = 0.21 \text{ w}$$ If the Group A capacitors are chosen, those with curve numbers of 8 to 12 can be used at 50°C, and the minimum size capacitor is 0.374 by 0.8666 by 0.571 inch. When curves for maximum power dissipation versus ambient temperature are not given for a capacitor, the power dissipation must be limited to a value that will not cause the capacitor's internal temperature to rise above its maximum rated value. Some conservative estimates for this maximum internal hot-spot temperature are: 100°C for ceramic plate, polycarbonate, polyester foil, and metalized polyester capacitors; 125°C for solid electrolytic capacitors; and 90°C for conventional aluminum electrolytics. Other factors can also limit the level of the applied ac voltage. For example, in film capacitors, the maximum ac voltage rating at line frequency must be respected at all frequencies, since it is determined by dielectric strength, not power dissipation. Similarly, some capacitors are rated for voltage steepness, a rating that must be respected regardless of waveform or dissipation.
(Voltage transients on the order of 20 to 50 volts/microsecond can cause dielectric breakdown in metalized film capacitors.) Finally, if a capacitor current rating is given, it must also be observed, no matter what the result of other calculations.

---

**General-purpose op amp forms active voltage divider**

*by Peter Church, Parsec Laboratory, St. Thomas, U.S. Virgin Islands*

The everyday 741-type operational amplifier easily transforms a single-ended power supply into a dual supply. For less than $1, the active voltage divider of (a) can be built. It is useful for powering circuits that require a balanced supply with a ground, but draw only a little current through the ground line. The output-voltage ratio, \( V_1/V_2 \), is determined by resistors \( R_1 \) and \( R_2 \): \[ V_1/V_2 = R_1/R_2 \] This ratio can be kept fixed or made adjustable by using potentiometers for \( R_1 \) and \( R_2 \). More current, up to 1 ampere, can be handled by the active divider by adding a heat-sinked pass transistor, as shown in (b). For breadboarding, either divider configuration, (a) or (b), may simply be included as part of the circuit being laid out. The 0.1-microfarad capacitors in divider (a) can be removed if no fast transients will be encountered in the circuit to be powered, provided that the op amp's level of internal noise can be tolerated. The 741-type op amp is well suited for this application because of its high gain over a wide power-supply voltage range and its excellent internal protection circuitry. The single-ended supply voltage should not exceed the op amp's 36-volt maximum supply rating.

---

**Active divider.** Ordinary op amp (a) changes a single-voltage supply to a dual-voltage supply. The resistance ratio \((R_1/R_2)\) determines the output-voltage ratio \((V_1/V_2)\). Additional output current is made available by following the op amp with a pass transistor, as shown in (b).
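The divider relation above, V1/V2 = R1/R2, is easy to tabulate when choosing resistor or pot values; a small sketch, with supply and resistor values that are examples only, not from the circuit note:

```python
def split_supply(v_total, r1, r2):
    """Split a single supply into (V1, V2) with V1/V2 = r1/r2.

    V2 is the output-to-negative-rail voltage set by the divider,
    V1 the remainder, so V1 + V2 = v_total.
    """
    v2 = v_total * r2 / (r1 + r2)
    return v_total - v2, v2

print(split_supply(15.0, 10e3, 10e3))   # equal resistors: (7.5, 7.5)
print(split_supply(12.0, 20e3, 10e3))   # 2:1 ratio: (8.0, 4.0)
```

Equal resistors give the symmetric split most balanced-supply loads want; a pot in place of the pair lets the ratio be trimmed.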
Transmission-line analysis gets computer aid Computer-aided design programs are finally becoming available for transmission-line circuit analysis—a major design tool that's important because almost any circuit can be reduced to a transmission line. Besides its power and microwave applications, transmission-line analysis is essential for high-frequency logic circuits that use ECL and Schottky-TTL: at megahertz data rates, the tiny metal interconnects on a chip must be regarded as transmission lines. Aedcap, a general-purpose circuit analysis program, now includes transmission-line models (see p. 153). Also, a newly released program called Nacap, written by the Nanodyne Corp. in Sudbury, Mass., employs transmission-line analysis as a computational tool. A FET makes an excellent rf switch For switching high-frequency signals, designers of rf circuits are finding that some of the analog FET switches now on the market make a good alternative to p-i-n diodes or electromechanical devices. FET switches are available that can switch wideband (to 100 MHz) signals at rf levels with excellent off isolation and low insertion loss—and they do it directly from 5-volt TTL levels without external circuitry. Check out Siliconix's DG 181/191 family of n-channel JFETs with MOS bipolar drivers. How to get power and voltage stability from rf transistors To compensate for variations in rf-transistor temperature at high power levels and maintain high efficiency at the same time, use the collector resistance of a second transistor in series with the load. This has the virtue of increasing voltage stability without degrading power efficiency because now the desired increase in load resistance has been achieved with only a small voltage drop across the load transistor's collector-emitter junction ($V_{ce}$). An ordinary resistor would maintain stability but produce a large IR drop (see p. 102). Falling behind? 
Study at home If you are worried that the good jobs are passing you by because your old college courses left you unprepared for today's technology, look into the IEEE home study program. Prepared by Britain's IEE, courses are available in this country in field-effect transistors, pulse-code modulation, digital instrumentation, and modern control and processing theory. On a post-graduate level, the program is aimed at the graduate engineer, and each student is assigned an individual instructor with whom he communicates. Cost is $75 per course, culminating in a certificate of completion. Write: Education Registrar, IEEE, 345 E. 47th St., New York, N.Y. 10017. The packaged breadboard For keeping your breadboard circuits from looking like rats' nests, consider the neat, packaged breadboard kit available from E&L Instruments. It contains a 5-volt power supply, a four-frequency clock generator, lamps, positive and momentary switches, and a socket system that is capable of holding dozens of ICs and passive components. Ask for Digi Designer from E&L Instruments Inc., 61 First Street, Derby, Conn. 06418. Both end user and engineer are targets of first National Computer Conference At the June conference in New York, which replaces AFIPS' spring and fall meetings, the scope of the exhibits and technical program has been expanded in hopes of attracting a record attendance by Alfred Rosenblatt, New York bureau manager It will be "a complete department store of computer equipment," declares Gerard L. (Jerry) Van Dijk, conference manager of the 1973 National Computer Conference and Exposition. And just as in a department store, the equipment may be sporting price tags—to Van Dijk, "the biggest innovation" to hit a technical conference in the last 20 years. Altogether, the first National Computer Conference, to be sponsored June 4–8 at the New York Coliseum by the American Federation of Information Processing Societies Inc. 
(AFIPS), is shaping up to be a complete sell-out, predicts the show's management confidently. Some 70,000 square feet of space will be occupied by 250 companies displaying their wares in 700 exhibit booths. Some of the big mainframe houses like IBM and Control Data Corp. are also back after having abandoned AFIPS' conferences round about the beginning of the recent recession. Attendance will set an AFIPS record, topping 30,000, predicts Van Dijk. And to make things more comfortable for everybody, he is covering the entire exhibit area with plush, red carpet. By way of comparison, the IEEE's annual meeting at the Coliseum in March attracted 25,000 people, 226 exhibitors, and occupied but 43,000 square feet of bare floor. What is attracting the exhibitors to the NCC? It isn't simply the upturn in the economy, says Van Dijk. "For the first time we're also making the conference relevant to the computer user," he declares. This has been accomplished by expanding the technical program and attracting just about every kind of hardware and software supplier to the Coliseum, he continues. From mainframe, peripheral, and communications-equipment manufacturers to systems houses and maintenance and service companies, "the computer user is not going to miss a thing." The conference's management has expanded its definition of the type of person it wants to attract. The term "end user" no longer refers just to the manager of a data processing installation, but also includes the executive who would be the one to benefit from having a computer-based system installed in his business operation. Accordingly, the 1973 NCC is appealing in the promotional mailings that solicit attendance, as well as in its program and exhibits, to the upper level of management, which is concerned less with the computer itself than with what the total system can do. 
"I'm after people like the marketing manager at a tobacco company, the president of a department store, or the circulation manager at a magazine," Van Dijk explains. "The data processing manager may select a computer system, but he too often has to sell the need for a machine upwards within his company," he continues. "And this may be difficult, if not suspect, in the eyes of a management not attuned to a computer system's benefits." The goal of the AFIPS conference is to "get that buyer there, that creator of computer usage, so that he's going to say, 'Buy one of them,'" Van Dijk concludes. In the past, the exhibits at the AFIPS were oriented too much to those building the computer-system hardware, Van Dijk continues. "The electronics engineers took over," he laments, with companies exhibiting components like cores and knobs and switches. Such companies are apparently dropping out this year. Another long-term plus for AFIPS' new national conference, a good many agree, is the decision to substitute a single show for the two Joint Computer Conferences it had sponsored for 20 years in the fall and spring. The spring show was held in the northeast, often in Atlantic City, and the fall show was held in a city out West. But things began turning sour for the two traditional shows after the fall meeting in 1969. CDC became the first large-scale mainframe manufacturer to drop out of the joint conferences, according to Tom Johnston, manager of exhibits and special promotions for Control Data Corp. Other companies dropped out, too, so that the conferences came to resemble shadows of their former selves. Only 14,000 came to the 1972 spring show in Atlantic City, down precipitously from the peak attendance of 30,000 in Boston in the spring of 1969. "We didn't feel we were getting our money's worth participating in two similar computer conferences," says CDC's Johnston. 
Adds Roy Gould, exhibits manager for Digital Equipment Corp.: "The load of a twice-a-year show on many exhibitors was just too great." Many also felt attendance at the joint conferences lagged because of their locations, often away from major population centers where computers are heavily used. Reacting to such criticism, AFIPS is now shifting its annual conferences from New York this year to Chicago in May, 1974, and to a metropolitan area yet to be chosen out in the West in 1975. Then the plan is to go back into New York again. Still, though the move to a once-a-year show in major cities seems generally well received, many exhibitors are waiting to see how the first one turns out before committing themselves to Chicago. The decision to allow prices to be posted—"discreetly," cautions Van Dijk—is also likely to be popular. "We exhibit our products to sell them, don't we?" asks a spokesman at one peripherals supplier. But a man at minicomputer maker Interdata Corp. is less certain of the value. "Hardware prices are only part of a system's total cost," he points out. **The technical program** As for the technical program at the NCC, it will consist of 103 sessions, panel discussions, and seminars—a number that the conference's general chairman, Harvey L. Garner of the University of Pennsylvania, terms "unprecedented." Generally, the joint AFIPS conferences had only a quarter to a third this number, points out AFIPS' communications director, Tom White. Many of the sessions will include not just the formal presentation of papers but also panel discussions and opinions from expert commentators. Giving evidence of the shift in focus toward the end user, the methods and applications section will have 37 out of the 103 sessions. At the spring conference a year ago, "you would have been hard pressed to pull out five papers of interest to the user," observes White. But the science and technology section still dominates with 56 sessions. 
There are also five sessions of broad interest to management and five devoted to computer arts. The 56 sessions in the science and technology portion of the program will include about 110 papers and almost 400 participants. And for the first time, a computer conference contains at least one session organized by each of AFIPS' 13 constituent societies. Computer architecture and hardware is one of the major areas of concentration. Several new hardware developments bearing on computer architecture are to be discussed during the session, "Advanced Hardware." Included are papers dealing with optical interconnections, the new feasibility of distributed processing systems made possible by the computer-on-a-chip, and methods of tuning special-purpose hardware, or firmware, to an application with the help of a high-level language like Algol. The session called "The Growing Potential of Mini/Small Systems" will review new techniques and applications for small systems, particularly those made possible through microcoding. Associative hardware devices will be the subject of "Associative Processors," with emphasis on their application to data management. A session on "Storage Systems" includes papers on hierarchy and virtual systems, while military needs are addressed in "What's Different About Tactical Military Computer Systems." And finally, the architectural implications of virtual machine systems, as well as performance and applications aspects, will be discussed at the session, "Virtual Machines." Another important group of sessions at NCC deals with communications networking and terminals. Both the economic and technical viability of computer networks are to be treated in "Network Computers: Economic Considerations—Problems and Solutions." And in a session organized by the American Institute of Aeronautics and Astronautics, "Data Communications Via Satellite," the burgeoning area of commercial data communications by satellite will be discussed.
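The satellite sessions center on random-access packet broadcasting, in which many ground stations transmit addressed packets over one shared channel without coordination. The conference program does not name a specific protocol, but the canonical analysis of this technique is pure ALOHA, whose expected throughput at offered load G (packets per packet-time) is S = G·e^(−2G). A minimal sketch of that relationship:

```python
import math

def aloha_throughput(G: float) -> float:
    """Expected useful channel utilization of pure ALOHA at offered load G
    (packets per packet-time): a packet succeeds only if no other packet
    starts within its two-packet-time vulnerable window."""
    return G * math.exp(-2 * G)

# Scan offered loads to locate the classic maximum.
loads = [i / 1000 for i in range(1, 2001)]
best = max(loads, key=aloha_throughput)
print(f"peak throughput {aloha_throughput(best):.3f} at G = {best:.2f}")
# Peak is 1/(2e), about 0.184, at G = 0.5: even at its best, an
# uncoordinated broadcast channel carries under a fifth of raw capacity.
```

The low ceiling is exactly why the packet-broadcast sessions were worth a slot on the program: squeezing useful capacity out of a single wideband satellite channel shared by many stations was an open engineering problem in 1973.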
A related session, "Satellite Packet Communications," will examine techniques for using a single wideband satellite channel in a multi-access broadcast mode by transmitting addressed data packets from many ground stations. "Intelligent Terminals" will concern the division of labor between terminals and central computer, as well as the limitation of power inherent in such terminals. Pattern recognition gets its share of attention with two sessions. One, "Ingredients of Pattern Recognition," discusses a device for inputting pictorial information into a computer, and advanced techniques for recognition. The other covers "Applications of Pattern Recognition," in medical diagnosis, character recognition, aircraft control, and screening of large masses of data.

**Conference manager** Gerard Van Dijk says, "The computer user will not miss a thing" at the first National Computer Conference.

Another important subject area is data-base management. At a session on "Trends in Data Base Management" papers will discuss specialized processors, relational data bases, a technique for data-base sharing, and an algorithm for optimal distribution of data within a computer network. The session on "Performance Evaluation" will be concerned with measuring the performance of computer and teleprocessing systems according to economic and human criteria. And in the area of data security, sessions are devoted to an "Interim Report from the IBM Data Security Study Sites," "Data Security in Government," and "Secure Data Systems." Finally, there will be sessions on computer graphics, computers in education, simulation and process control, and software.

**For the end user** Appealing to the end user, the methods and applications portion of the program falls into four sections: computer applications in industry, Government, and merchandising, plus installation management.
Four of the six industry-oriented sessions are devoted to the use of the computer in the manufacture and operation of the automobile. Looking furthest down the road is a session, "Onboard Computers for Automobiles," at which papers to be presented include one by an IBM spokesman, "Automobiles and Computer Architecture," and one by a man from Ford Motor Co., "Tradeoff Considerations for Automotive Computers." Another session involved with projecting future applications is "Off Vehicle Diagnostics," which contains papers describing how computers are used to diagnose malfunctions and assist in tune-ups. A session dealing with "Computers in Automotive Design and Manufacturing" pays close attention to how minicomputers have invaded the auto plants. Capping the discussion of the automotive industry will be a luncheon address by Edward N. Cole, president and chief operating officer of General Motors Corp., and a session, "Automobiles, Computers and the Consumer," dealing with the impact of computers in terms of consumerism, safety, and emission control. Still in the industry category, "Manufacturing Automation" will touch on the hardware and software available today for computer control of manufacturing, as well as on the shortcomings of existing systems. Another session looks at computers applied to publishing. Seven sessions concentrate on data processing within various Federal, state and local agencies. "Computers in the Elective Process" will debate the pros and cons of computers in political solicitation and campaigning, and in the vote-counting process, and will concern the scope of such use and the possibilities of fraud. 
Other sessions include "Five Year Master Plans for Computers in State Government," discussing approaches to plans in three sizable states—Texas, Illinois, and Michigan; a panel discussion on "Computer Operations of State Agencies and Universities," which deals with the sharing of computer systems; "Urban Services" which deals with such things as the application of computers to housing and welfare policies in New York City, traffic control, and deployment of police and fire fighters, and "Computers in the Congress," which will attract as speakers the data processing managers of both the Senate and the House of Representatives. The three sessions about merchandising include discussion of "Point-of-Sale Systems" and "Data Processing Directions in the Retail Industry." The sessions on installation management stress discussions of cost effective operation. "Economics and Remote Terminals" tries to show how installation costs can be cut as a result of strategic use of remote operations and communications. At least three sessions deal with confidentiality, security and privacy with respect to data processing systems, and one of them, "Four Major Reports on Privacy and Computers" will present a quartette of recent national studies. Protection against data errors or losses is handled in "Data Integrity." In addition, a separate one and a half day seminar will be held on "Managing the Impact of Generalized Data Bases." Software design and marketing is treated in "Development of Generalized Software Products" and in "Status and Future of Software Products Worldwide," and so is its legal status in "Legal Protection for Software." Also touching on legal matters is "Regulation of the Computer/Communications Industry." 
Other sessions worth noting include "Voice Answerback Comes of Age," reflecting the increasing use of voice answerback with computer systems, "Metrication," dealing with how the computer is being applied to metric system conversion, and "Reliability for Integration into Human Affairs," which is concerned with real-time computer systems in such areas as health care, air and surface traffic control, criminal systems, and credit systems.

For the really tough applications, OEM's like VIDAR choose HP. How do you record millions of telephone calls daily, process this data, and bill millions of customers monthly — without any errors? The VITEL division of VIDAR tackled this problem and solved it with their unique new telephone message metering system. To record the raw data, VIDAR needed a magnetic tape drive with proven reliability at a competitive price. That's why VIDAR chose HP's 7970E Tape Drive. They needed the best of both worlds and knew that HP quality was the result of 33 years of experience in engineering and mass production techniques that lower costs and improve reliability. The VITEL system records "one-shot" data at a telephone company central office to provide accurate usage information. For instance, one system in a major metropolitan area handles 3.6 million telephones in over 100 offices. The system replaces mechanical message registers to bring a new level of accuracy to customer billing procedures. But OEM's like VITEL want — and need — more than rugged construction, reliable performance, and competitive pricing. They want a broad range of data rates. Like 200, 556, and 800 cpi NRZI, or 1600 cpi phase-encoded recording that's ANSI IBM compatible. And flexibility, like 7 and 9 track, multi-density, NRZI and PE; all in one read-only tape drive. Plus OEM Specials. Like 50-Hz 230-volt operation. Or personalized labels or logos. Even different paint on the front panel. And how about OEM discounts, and a one-page OEM agreement written in plain English.
For the full story call your local HP sales engineer or write: Hewlett-Packard, 1501 Page Mill Road, Palo Alto, California 94304; Europe: P.O. Box 85, CH-1217 Meyrin 2, Geneva, Switzerland; Japan: Yokogawa — Hewlett-Packard, 1-59-1 Yoyogi, Shibuya-ku, Tokyo, 151.

Now that the world has flipped over our OEM Model 74, can we make them fast enough? When we introduced our Model 74 in 1972, we knew we had a great little OEM minicomputer. We just didn't know how great. We knew a lot of OEMs would like the hardware multiply/divide, 16 general registers and directly addressable 8KB core — expandable to 64KB. But we didn't know so many OEMs would beat down our door to sign up for it within the first 6 months. What you need is what you get. We had an idea that the 80-nanosecond solid-state Read-Only-Memory and the multiplexor providing an I/O system for communicating with up to 255 peripheral-oriented device controllers would turn on a lot of OEMs. But who would've guessed we'd have the big machine tool manufacturers, electronics companies, peripherals houses and controls companies standing in line for it? The $3600* OEM Model 74. We were pretty sure a lot of OEMs would appreciate the upward compatibility of the Model 74 and our Mix and Match discount schedule, which gives cumulative credit for all machines bought, regardless of model. But we never even dreamed we'd have to tell our manufacturing people to make them by the bushel to keep our 30-day delivery schedule. Maybe it's the $3600 price. Maybe it's the no-frills design. Maybe it's just the way it does so many jobs so well. Whatever it is, we'll keep making them just as fast as our OEM customers want them. The more the merrier. *Basic 8KB Model 74 list. With OEM discount, quantity of 61 — $2,160.
Exhibits at computer show to hard-sell managers of industries In common with all areas of electronics technology, the computer segment is moving aggressively into virtually every type of industrial activity. This will be very evident at the National Computer Conference, to be held June 4–8 at the New York Coliseum. Following are some of the significant products to be introduced. Others are in the section starting on page 153. No-refresh display holds full printout page In two computer-graphics terminals, the displays will hold, without memory refresh, a full computer-printout page, a complex circuit diagram, or a large map. Built by the Information Display Products division of Tektronix Inc., the models 4014 and 4015 interactive terminals provide all the hardware and software features of the company's 4010 display family, plus new ones made practical by development of a direct-view storage cathode-ray tube with a screen size of 11 by 15 inches. The 19-inch diagonal measurement gives the new tube about four times the display area of earlier Tektronix storage terminals. The earlier models in the family display 35 lines of 72 characters and plot graphics in a vector mode on a matrix of 1,024 by 1,024 addressable points. In a line-printer format, the new types display 64 lines of 132 characters and, with the graphics resolution extended by a discrete plotting option, will generate graphics displays with 12-bit resolution. That resolution gives a matrix of 4,096 by 4,096 points. As in the previous systems, graphics inputs are made with cursor cross-hair controls. Three other page formats are provided by the new models: 121 characters by 58 lines; 74 characters by 35 lines (compatible with the 4010-family format); and 81 characters by 38 lines, with a computer addressable scratchpad area (compatible with the model 4002A terminal). Sizes of the 7-by-9 dot-matrix characters are proportioned to the display format used.
Like the model 4012, the model 4014 generates a set of Ascii uppercase and lower-case characters. And the 4015 has both Ascii and APL character sets (A Programming Language originated by IBM). The model 4015's keyboard is optimized for APL entries. Tektronix 4010-family software packages include, besides APL graphics routines, software for terminal control and packages that allow the terminals to be used with minicomputers, IBM 360 and 370 systems, and time-shared computer services. According to Robert Peterson, marketing program manager for computer terminal products, the new displays are not only the first to be large enough to allow a full magazine page to be set up by a computer-controlled typesetting system, but they will also make more efficient such new techniques as geocoding and entry of geophysical data from map displays. Geocoding is used, for example, to enter into a data processing system code words for the location of gas mains, street lights, and other map details. With the 19-inch displays, details on large maps can be entered on the new display with the cursor inputs. Also, Peterson says, the models 4014 and 4015 will make it easier to design complex integrated circuits and printed-circuit boards with computers. "People in these fields have been telling us for years that they need larger displays," he remarks, referring to CAD jobs. The new tube is a single-gun CRT with a directed beam. The beam scans in an analog vector mode to display graphics and in a digital mode to display characters and to scan the display in the computer-entry mode. Data may be stored indefinitely on the screen. However, Tektronix recommends the display be erased after an hour to prevent stationary images from being permanently retained by the phosphors on the screen. The 4014 is priced at $8,450, and the 4015 at $8,950. Tektronix Inc., P.O. Box 500, Beaverton, Ore. 
97005 [341] Bare-bones and stand-alone microcomputers to bow Some 30 companies have gone into business designing, programming, and packaging microcomputers with the MOS LSI chip sets that Intel Corp. introduced in 1971 and 1972 [Electronics, March 1, p. 63]. Now, Intel is jumping on the bandwagon it created. At the National Computer Conference, Intel will offer two microcomputers—the Intellec 4 and 8—that are comparable to minicomputers, except that they are slower and less costly. Unlike the IMP-16C microprocessor cards that National Semiconductor Corp. introduced last month [Electronics, April 12, p. 42], the Intellecs are completely assembled, down to the cooling fans in the chassis. They will be sold in two versions that can be expanded by modules: “bare-bones” chassis-mounted computers that plug into host systems, and stand-alone table-top models with cabinets, control panels, and power supplies. Intel is also planning big-board variations for equipment manufacturers. The Intellec 4, aimed primarily at system-control markets, features a large input-output structure. It handles 12 to 64 I/O channels through interface cards compatible with transistor-transistor logic. The memory, also expandable by modules, stores up to 3 kilobytes (eight-bit words) of instructions and up to 2,560 four-bit data words. Basic programs go into read-only memories on the processor card. Data and additional programs are stored in random-access memories. The mainframe, controlled by a set of 45 instructions, processes either decimal or binary words at a cycle time of 10.8 microseconds. The Intellec 8 is a more powerful system with a 48-instruction set. It processes eight-bit bytes in 12.5 microseconds. The basic add and subtract routines take 40 μs. 
From 4 kilobytes to 16 kilobytes of programs and data, stored in read-only or random-access memories, can be addressed directly by the processor, which also handles real-time interrupts and runs from 12 to 32 I/O channels at a rate of 12,500 bytes per second under program control. Both computers have monitor programs stored in ROMs and assembler software that can be loaded into the RAMs from tape. Once the monitor starts the system and loads the RAMs, the processors are controllable through teletypewriter channels. Software-development packages that will run on general-purpose computers are also available. They include assemblers and simulators for the Intellec 4 and assemblers, simulators, compilers, and a text editor for the Intellec 8. Henry Smith, microcomputer systems manager, expects the basic Intellec 8 system to cost about $1,500, and $2,000 when packaged. The Intellec 4 will cost less. In cabinets, the computers weigh 30 pounds and measure 7 by 12 by 17 inches. The computers are backed up by development accessories, including breadboards with wire-wrapped socket mounts, programmable ROM modules, and a ROM pulse-programer controlled by software. With these accessories and the Intellec control panels, an engineer can work up systems having custom programs and peripheral interfaces. The panel has controls for debugging and programmer operation. Conceptually, the development aids are similar to the hardware simulators and programmers that Intel supplies to chip-set buyers. In fact, Phil Tai, microcomputer engineering manager, says the simulators were the forerunners of the Intellec computers. Intel discovered to its surprise last year, Tai recalls, that simulator-card sales were rapidly mounting into the $1 million-a-year range. Customers making one-of-a-kind systems and those that needed only a few microcomputers were using programmed simulators, rather than buying and assembling chip sets. Intel Corp., 3065 Bowers Ave., Santa Clara, Calif. 
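The Intellec 8's quoted I/O figure is internally consistent: 12,500 bytes per second under program control corresponds to a fixed 80-microsecond service loop per byte, i.e. a handful of its 12.5-microsecond instruction cycles spent moving each byte. A quick sanity check (the per-byte loop time is our inference from the article's numbers, not a stated Intel specification):

```python
def programmed_io_rate(per_byte_seconds: float) -> float:
    """Maximum transfer rate when the processor moves each byte itself:
    one byte per fixed service interval, with no overlap or DMA."""
    return 1.0 / per_byte_seconds

# An 80-us per-byte program loop yields the article's 12,500 bytes/s.
rate = programmed_io_rate(80e-6)
print(f"{rate:.0f} bytes/s")
```

This is the usual ceiling for programmed (CPU-driven) I/O: the channel can never run faster than the instruction loop that services it, which is why the figure scales with the 12.5-microsecond cycle time rather than with memory bandwidth.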
95051 [342]

FEEL OUR PULSE. THOMSON-CSF's new planar triodes give long, healthy life and reliable service in your radar and communication equipment.

| Tube Type | Frequency MHz | Peak power output kW | Anode dissipation capability W | Peak anode voltage kV |
|-----------|---------------|----------------------|-------------------------------|-----------------------|
| TH 318 | 1500 | 40 | 700 | 6.2 |
| 6886 | 3000 | 15 | 250 | 6 |
| TH 363 | 3000 | 8 | 100 | 8 |
| TH 364 | 3000 | 8 | 100 | 8 |
| TH 366 | 3000 | 2 | 350 | 3.5 |
| TH 368 | 3000 | 8 | 100 | 8 |

THOMSON-CSF ELECTRON TUBES, INC. / 50 ROCKEFELLER PLAZA / NEW YORK, N.Y. 10020 / TEL. (212) 489.0400 THOMSON-CSF Electron Tubes Ltd / Bilton House, Uxbridge Road, Ealing / LONDON W 5 2TT / Tel. (01) 579 55.11 / Telex : 25 659

New products

Printer/plotter is designed for minicomputers Convinced that the minicomputer market needs lower-priced peripheral equipment, Gould Data Systems has developed an electrostatic printer/plotter that has a unit price of $7,600 for the hardware. Designated the Gould 5000, it prints alphanumeric data at 1,200 lines per minute and plots graphic material at 3 inches per second. "Like other nonimpact printers," observes Peter A. Highberg, manager of printer products, "this unit operates quietly and requires minimum maintenance since it contains few moving parts." It offers minicomputer users extra flexibility, he adds. The electronics for the 5000 is solid-state, has an 8-bit data path for input from the minicomputer, and comes with a 64-Ascii-character, 7-by-9-dot matrix font. The printer/plotter generates 132 characters per line and has a resolution of 100 dots per inch vertically and horizontally. A full 96-character, 7-by-9-dot matrix font with upper and lower cases and a 128-character, 7-by-9-dot matrix font are available as options. Computer printout is on 11-inch-wide coated paper.
In the course of traveling through the printer/plotter, the paper first becomes electrically charged with invisible images, then is doused with fluid toner, which adheres to the charged areas, and finally emerges dry from the machine with the images visible. The Gould 5000 has a 1,000-sheet fanfold paper-handling capacity. It will accept 400 feet of paper rolled on a three-inch internal diameter core. A six-button control panel is recessed at the top of the unit, which comes in a floor-cabinet model 28 inches wide, 18 in. deep, and 39 in. high. Weight of the Gould 5000 is 195 pounds. Printer and plotting software packages, as well as interface hardware packages, are available for most minicomputer systems. Gould Data Systems, 20 Ossipee Rd., Newton Upper Falls, Mass. 02164 [343] Plug-in processor speeds computer arithmetic The prototype of a plug-in processor, which makes the Nova computer line multiply 10 times faster and divide 20 times faster, will be shown for the first time at the National Computer Conference by Floating Point Systems of Portland, Ore. The firm also will show a prototype of a floating point processor that's compatible with the Nova line. The first processor will sell for $3,000. It includes 16 Boolean functions, n-bit shift, double-precision floating-point-compatible instructions, and high speed. The processor, a plug-in, fits on two standard-size cards and can be retrofitted in the field—no backplane wiring is required, says company president C. Norman Winningstad. He says the processor "doesn't fit everyone's needs," but is most advantageous for the business office that demands accuracy in nine-digit figures, for example. Winningstad attributes the high speed to hardware rather than software, pointing out that not only are multiplication and division much faster, but the processor also performs addition and subtraction. The processor's instructions are microcoded in the card, and it can handle up to 64 bits.
The second new product—a floating-point processor for the Nova—is aimed at the scientific user. The $3,500 processor adds, subtracts, divides, multiplies, and is fixed-point- and floating-point-compatible with the Nova software set. The processor works with Fortran Four and Five, notes Winningstad. The set plugs in on two boards and provides double-integer multiplication and division which, he says, "avoids scaling errors." The processor is designed for large dynamic ranges, where the user moves orders of magnitude during a single computation. Delivery time is 90 days for both units. Floating Point Systems, 3160 S.W. 87th St., Portland, Ore. 97225 [344] Drum plotter completes IC mask in 6½ minutes In the wide realm of graphic plotters, the special attraction of the drum plotter is its speed, and a new plotter from California Computer Products (Calcomp) offers what the company claims is the highest speed in its price range. The model 1036 drum plotter draws at 10.25 inches per second axial speed or 14.4 in./s diagonal rate, and costs $22,720. It can plot an IC mask in 6½ minutes, a job that took earlier Calcomp drum models anywhere from 18 minutes to as much as an hour and a half to complete. Main applications for the plotter include automated drafting, computer-aided design output, mapping and isometric drawings, and medical plots. The drum plotter is often used at test sites by checkout engineers. A side consequence of a drum plotter's speed is a degree of inaccuracy—which is hardly surprising in view of the shrinkage and stretching in the 36-inch wide, 120-foot long paper roll it accepts. To reduce these problems, the new plotter includes a scale factor adjustment to compensate linearly for paper shrinkage. It's especially useful for the gridded paper normally used for IC-mask checking plots. Narrower paper widths can also be used with optional drums. The model 1036 includes three programmable pens, since IC makers seem to prefer three colors to help distinguish the layers, and map makers often use three line widths for variation. Either pressure-flow ballpoint pens or liquid ink can be used at full speed. Operation is digital, with minimum step size of 2 mils (or 0.05 millimeter in the metric model). Plot area is 33 in. by 120 ft. Most machines are expected to be sold to end users, who will typically combine the 1036 with a model 915 controller. Input is from standard magnetic tape prepared offline, but the system can also be used with a minicomputer or on-line to a large-scale computer. It operates from common supply voltages. California Computer Products Inc., 2411 W. La Palma, Anaheim, Calif. 92801. [345]

Celanex. It even sounds electrical. For electrical-electronic applications, Celanex thermoplastic polyester performs small wonders. One reason is that glass-filled Celanex combines all the advantages of DAP, alkyds and phenolics. With none of their disadvantages. The parts illustrated feature some other good reasons for choosing Celanex. In the Airpax slide switch (a), for example, Celanex SE-O grade combines excellent electrical properties with wear resistance, low coefficient of friction. And it received sole support approval from UL. In the Permonite TV cathode ray tube socket (b), Celanex 3310 replaced polysulfone. Celanex withstands high voltage and high temperatures. Remains dimensionally stable. Replacing alkyds and nylons, Celanex combines fine electrical properties with fast cycling and ease of molding in this high voltage contactor coil (c) by Essex International Controls Division, Inc. And the small grey TV tuner shaft (d) takes good advantage of another Celanex property—the lowest moisture absorption of any high-strength engineering plastic. Celanex is also the high-strength insulating material for Magnum Electric Corporation's new, slimmer terminal strips (e). And Celanex's high dielectric strength assures an RMS breakdown voltage of more than 3,000 volts for the thin barriers between terminals. Celanex also contributes high arc track resistance and chemical inertness. Plus all that, Celanex is one of the most processable plastics available. Molding is easier. Cycles faster. Which adds up to a very remarkable, performance-boosting, cost-saving engineering resin. Get the facts on Celanex. And on Celcon and Celanese Nylons. Write Celanese Plastics Co., Dept. X-607, 550 Broad Street, Newark, N.J. 07102.

Time coding can be simpler than you think. Systron-Donner can help you cut through the profusion of time code formats and equipment—to the selection of the right format and the right equipment for the job. Our new 90-page handbook will guide you to the best technique for your application, and enable you to select appropriate equipment from the comprehensive line offered by Systron-Donner. Systron-Donner equipment ranges from compact time code generator/readers costing as little as $1495, through portable battery-powered generators for field use, to high-precision generator/readers with automatic tape search. Send for free copy of "Selecting time code format," Chapter 3 of our new 90-page, illustrated handbook on time coding techniques. Complete handbook: $3.00. Data Products Division, Systron-Donner Corporation, 10 Systron Dr., Concord, Calif. 94518. Phone (415) 682-6161.

Printer is versatile True printers with both upper- and lower-case character fonts can cost twice as much as printers with only upper-case fonts, says Printer Technology Inc., but typewriters priced competitively with upper-case printers are comparatively slow, operating at 10 to 15 characters per second. With the introduction of the Printec 100-A, a low-priced, upper/lower-case serial impact printer, Printer Technology hopes to fill the gap. Priced at $2,800 in single units, the 100-A uses a 96-character font and prints 70 characters per second from the company's multiple-split helix wheel.
The wheel contains two full character sets and four associated hammers. Throughput is 26 lines per minute for 132-column lines, and 44 lines per minute for 72-column lines. The Printec 100-A includes a two-channel vertical format unit and an 8-bit Usascii interface. Optional interfaces include buffered bit serial, buffered bit parallel, and remote control. Printer Technology can provide a complete packaged interface to a PDP-11 or Nova for about $4,000. Applications include communications, word processing, text management, key-to-storage systems, preparation of CRT hard copy, and editorial and typesetting tasks. Delivery time is 60 days. Printer Technology, Inc., Sixth Road, Woburn, Mass. 01801 [346] Disk system holds 50 megabits Designed for main-memory extension, software storage, and similar applications, a compact single-disk drive is particularly suitable where low price and moderately fast access are required. It is also said to be designed for long-term reliability. The series N from Wangco Inc. uses a single, nonremovable fixed disk in a package 5¼ inches high. Models 1211 and 1212 offer a capacity of 25 megabits, recording 2,200 bits per inch on 100 tracks per inch. The 1211 has a transfer rate of 1,562 kilobits per second, with a rotation speed of 1,500 rpm. The 1212 has a transfer rate of 2,500 kb/s, at 2,400 rpm. The model N-2212 has a capacity of 50 megabits, with 2,200 bits/in. on 200 tracks per inch. Transfer rate is 2,500 kb/s at 2,400 rpm. Track-to-track access time for all models is 15 milliseconds, with an average of 70 ms. Recoverable error rate is a maximum of 1 in $10^{12}$ bits. Nonrecoverable error rate is 1 in $10^{12}$ bits. The series N is 5.25 in. high, 16.60 in. wide, and 22 in. deep. It weighs about 75 pounds with a built-in power supply. Wangco Inc., 2400 Broadway, Santa Monica, Calif. 90494 [347]

Announcing the first fully loaded, full autoranging 5-digit multimeter to break the $1,000 barrier.
Cimron DMM-51 For only $995, the Cimron DMM-51 offers autoranging in all functions and all 24 ranges. Including 5 dc ranges, 5 dc ratio ranges, 4 ac/dc ratio ranges, 6 resistance ranges and 4 ac ranges. If that's more than you need now, drop the ac and ohms converter and you drop the price to $795. But you still get 1 microvolt sensitivity. Automatic input zeroing. And basic accuracy of .004%. Stripped or loaded, the DMM-51 comes complete with such Cimron quality features as a rugged, all-metal case. Built-in calibration instructions and simple adjustments that cut calibration time to 20 minutes. Plus the same great service network, the same comprehensive guarantee that back all our instruments. For more facts on the DMM-51, write today. Or call 714/774-1010 for a demonstration. You'll know a winner when you see one. Cimron Instruments, Lear Siegler, Inc., 714 North Brookhurst, Anaheim, California 92803.

We're in two classes by ourselves. Our design capability makes us the only company that can deliver both side-arm and coaxial tubes. If you need a tube that will go in a small package, we can provide a coaxial. If size is not a problem, we can provide a side-arm tube at considerable savings. Example: a 2.0mW, TEMoo, coaxial tube in quantities is $90.00; a 2.0mW, TEMoo, side-arm tube in quantities is $85.00. Our capability gives us the broadest line of Helium/Neon plasma tubes in the business. Over 30 different standard plasma tubes, from 0.5mW to 50mW. Most off the shelf. With internal or external mirrors. All internal mirror tubes and lasers warranted for 18 months. Moreover, if you need a Helium/Neon laser specially designed, we can design it just for you—using the same parameters that we have tested to 26,000 hours MTTF and 15,000 hours average lifetime. We can even handle such special requests as a modulated laser to 50 KHz, or a .01% noise laser, or you name it. One more thing to brag about.
We make more lasers than anybody, and so we have the expertise that brings you both high quality and low cost. Who says you can't have it both ways? And at Spectra-Physics you can have it both ways in more ways than anyplace else. Spectra-Physics

Instruments

Solid-state unit generates 50 W

Custom transistors help generator deliver in 225–400-MHz range

Using power transistors fabricated to its own specifications, Ailtech, a Cutler-Hammer company, has developed a solid-state sweep power generator that delivers a fat 50 watts across the 225-to-400-megahertz communications band. This 50-w output represents an order-of-magnitude improvement over the output power obtainable from the solid-state generators available until now, asserts Thomas D. Eccles, manager of rf instrumentation products at Ailtech's West Coast operation. Eccles explains that although transistors capable of producing the 50-w output have been available for some time, it's only recently that their reliability has been considered "good enough." A metal-migration phenomenon across the interleaved fingers in the power-transistor structure would cause the unit to fail under extended high-power operation. Ailtech's specifications for the transistors are designed to overcome this, he says. Another important feature of the new unit, the model 473, says Eccles, is that it's easier to set up and operate than are conventional vacuum-tube designs. The reason is that both power and frequency outputs are programmable via an external dc voltage or binary-coded-decimal signal, and the output power is automatically leveled to within ±0.5 decibel. The result is that the model 473 "requires practically no effort to tune and adjust," so that it is particularly useful for repetitive production-line testing, says Eccles.
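As a rough check on what the ±0.5-dB leveling spec means in power terms, the ratio can be computed directly. This is an illustrative sketch; only the standard decibel-to-power-ratio convention is assumed, not anything about the instrument's internals.

```python
# A ±0.5-dB leveling band expressed as a power ratio,
# using the usual power convention: dB = 10*log10(P/Pref).

def db_to_power_ratio(db):
    """Convert a decibel figure to a linear power ratio."""
    return 10 ** (db / 10)

hi = db_to_power_ratio(+0.5)   # upper edge of the leveled band
lo = db_to_power_ratio(-0.5)   # lower edge

# At the 50-W setting, the leveled output stays within roughly:
print(f"{50 * lo:.1f} W to {50 * hi:.1f} W")  # about 44.6 W to 56.1 W
```

So "leveled to within ±0.5 dB" corresponds to holding the output power within about +12%/−11% of the set point.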
Also, the generator's all-solid-state design—basically a voltage-controlled oscillator followed by an amplification chain of transistors—is considered much more reliable than its vacuum-tube equivalent. The model 473 power generator can be used in systems for checking such things as the output flatness of directional couplers and detectors; filter characteristics like cutoff frequency, insertion loss, and passband flatness; and antenna impedance characteristics. The generator's output, which ranges from 50 w down to 1 w into a 50-ohm load, can be amplitude- or frequency-modulated. It is also completely protected against any load failure. Amplitude modulation can range from 0% to 95% of full 50-w peak power with a frequency range of dc to 20 kilohertz. Distortion is specified at 5% at full power and 50% modulation. The frequency modulation can range from dc to 1 MHz for less than a 10-MHz total deviation. Distortion is 1% for a 1-MHz deviation; symmetry is within 1%. Frequency of the under-40-pound model is tuned with a single continuous control. Readout is on a direct-reading analog meter. Sweep sensitivity for an external sweep is 20 MHz per volt. Linearity over the total 175-MHz-wide range is within ±10%. Price of the model 473 is $5,950. Ailtech, 19535 East Walnut Drive, City of Industry, Calif. 91748 [351]

Wideband multiplier provides fast measurements

Resolution of low-frequency events is usually costly, time-consuming, or inaccurate. A line of wideband multipliers developed recently is immune to noise and zero-crossing distortion and reduces measurement time significantly. For example, a 10-hertz signal is resolved to 0.1% in 1 second with the model 2100, and the company says the measurement is essentially error-free. Computations in revolutions per minute, gallons per hour, etc., can be generated with multiplication factors of 2, 4, 5, 6, 10—and up to 1,000.
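The resolution claim is consistent with a simple counting model of frequency multiplication. The sketch below is illustrative only (it is not Valhalla's published design): multiply the input frequency by N, count cycles for a fixed gate time, and take the ±1-count uncertainty as the resolution.

```python
# Simplified counting model (an assumption for illustration):
# multiplying the input frequency by N and counting for t seconds
# accumulates f*N*t counts; a +/-1-count uncertainty then gives a
# fractional resolution of 1/(f*N*t).

def fractional_resolution(f_hz, n, gate_s):
    """Fractional frequency resolution for a multiplied-and-counted signal."""
    counts = f_hz * n * gate_s
    return 1.0 / counts

# A 10-Hz input multiplied by 100 and counted for 1 second yields
# 1,000 counts, i.e. 0.1% resolution -- matching the article's claim.
print(fractional_resolution(10, 100, 1.0))  # 0.001
```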
Key design technique is a phase-lock loop and frequency comparator-multiplier combination with inherent phase stability. Price of the model 2100 multiplier is $295, and delivery is from stock. Valhalla Scientific Inc., 7707 Convoy Ct., San Diego, Calif. 92111 [353]

Wouldn't it be nice if you could get one of National's new LM 321 preamps plus one of their famous LM 308 op amps to use with it, plus data sheets and application notes all in one swell Designers Kit for 1/3 off—only $4.95? Guess what? To get your Designers Kit for only $4.95 with data sheets and application notes, call or take this coupon to your nearest franchised National Semiconductor distributor. Do not mail to National Semiconductor Corp. NATIONAL Electronics/May 24, 1973

Swell Designers Kits Here

ALABAMA Powell Electronics Corp., Huntsville (205) 539-2731 · Hall-Mark Electronics, Huntsville (205) 539-0691
ARIZONA Hamilton/Avnet Electronics, Phoenix (602) 269-1391 · Liberty Electronics, Phoenix (602) 264-4438
ARKANSAS Carlton Bates, Little Rock (501) 562-9100
CALIFORNIA Elmar Electronics, Mountain View (415) 961-3611 · Hamilton/Avnet, Mountain View (415) 961-7000 · Hamilton/Avnet, San Diego (714) 279-2421 · Hamilton Electro Sales, Culver City (213) 870-7171, (714) 522-8200 · Liberty Electronics, El Segundo (213) 322-8100, (714) 638-7601 · Western Radio Corp., San Diego (714) 235-0571 · Newport Industries, Santa Ana (714) 540-2283
COLORADO Hamilton/Avnet, Denver (303) 534-1212 · Elmar Electronics, Denver, Commerce City (303) 287-9611
CONNECTICUT Connecticut Electro Sales, Hamden (203) 288-8266 · Harvey-Connecticut Ind. Elect., Inc., Norwalk (203) 853-1515
FLORIDA Powell Electronics Corp., Miami (305) 885-8761 · Hamilton/Avnet, Hollywood (305) 925-5401 · Hammond Electronics Inc., Orlando (305) 241-6601
GEORGIA Hamilton/Avnet, Norcross (404) 448-0800
ILLINOIS Hall-Mark Electronics Corp., Elk Grove Village (312) 437-8800 · Hamilton/Avnet Electronics, Schiller Park (312) 678-6310
INDIANA Fort Wayne Electronics Supply, Inc., Fort Wayne (219) 742-4346 · Graham Electronics Supply, Inc., Indianapolis (317) 634-8486
KANSAS Hall-Mark Electronics Corp., Lenexa (913) 888-4747 · Hamilton/Avnet Electronics, Prairie Village (913) 362-3250
MARYLAND Hamilton/Avnet Electronics, Baltimore (301) 796-5000 · Kierulff/Schley Electronics, Gaithersburg (301) 948-0250
MASSACHUSETTS Hamilton/Avnet Electronics, Burlington (617) 273-2120 · Kierulff/Schley, Needham Heights (617) 449-3600 · Harvey Electronics, Lexington (617) 861-9200
MICHIGAN Hamilton/Avnet Electronics, Livonia (313) 522-4700 · Harvey-Michigan Inc., Farmington (313) 477-1650
MINNESOTA Hall-Mark Electronics, Minneapolis (612) 925-2944 · Hamilton/Avnet Electronics, Minneapolis (612) 854-4800
MISSOURI Hall-Mark Electronics Corp., St. Louis (314) 521-3800 · Hamilton/Avnet Electronics, Hazelwood (314) PE 1-1144
NEW JERSEY Hamilton/Avnet Electronics, Cherry Hill (609) 662-9337 · Hamilton/Avnet Electronics, Cedar Grove (201) 239-0800 · Kierulff/Schley, Rutherford (201) 935-2120
NEW MEXICO Century Electronics, Inc., Albuquerque (505) 265-7839
NEW YORK Harvey Federal Electronics, Binghamton (607) 748-8211 · Hamilton/Avnet Electronics, Syracuse (315) 437-2641 · Hamilton/Avnet Electronics, Westbury, L.I. (516) 333-5800 · Kierulff/FJR Electronics, Hicksville, L.I. (516) 433-5530 · Semiconductor Concepts, Inc., Hauppauge, L.I. (516) 273-1234
NORTH CAROLINA Hammond Electronics of Carolina, Inc., Greensboro (919) 275-6391 · Pioneer/Carolina, Greensboro (919) 273-4441
OHIO Gibson Electronic Components, Cleveland (216) 731-6820 · Gibson Electronics Marketing, Inc., Dayton (513) 433-4055 · Pioneer Standard, Dayton (513) 236-9900 · Pioneer Standard, Cleveland (216) 587-3600
OKLAHOMA Hall-Mark Electronics Corp., Tulsa (918) 835-8458 · Radio, Inc., Oklahoma City (405) CE 5-1551 · Radio Inc., Industrial Electronics, Tulsa (918) 587-9124
OREGON Almac/Stroum Electronics, Portland (503) 292-3534
PENNSYLVANIA Cameradio, Pittsburgh (412) 391-4000 · Hall-Mark Electronics Corp., Huntingdon Valley (215) 355-7300 · Mace Electronics, Erie (814) 838-3511 · Pioneer Standard, Pittsburgh (412) 391-4846
SOUTH CAROLINA Hammond Electronics of Carolina, Inc., Greenville (803) 239-5125
TEXAS Hall-Mark Electronics Corp., Dallas (214) 231-6111 · Hall-Mark Electronics Corp., Houston (713) 781-6100 · Hall-Mark Electronics Corp., Austin (512) 454-4839 · Hamilton/Avnet Electronics, Houston (713) 526-4661 · Hamilton/Avnet Electronics, Dallas (214) 638-2850
UTAH Hamilton/Avnet, Salt Lake (801) 262-8451
WASHINGTON Almac/Stroum Electronics, Seattle (206) 763-2300 · Hamilton/Avnet Electronics, Seattle (206) 624-5930 · Liberty Electronics Northwest, Seattle (206) RO 3-8200
WISCONSIN Taylor Electric Company, Milwaukee (414) 241-4321

Digital panel meters use plasma displays

A family of four computer-compatible digital panel meters using Sperry seven-segment plasma displays includes one 3-digit unit, two 3½-digit units, and one 3¾-digit unit. The displays have characters high enough for long-distance viewing, and continuous characters and planar construction for improved readability.
In 100-piece quantities, the AN2530, a logic-powered (5-volt) 3-digit meter, sells for $52; the AN2532, a line-powered 3½-digit unit, for $95; the AN2535, a logic-powered model, for $85; and the AN2534, a line-powered 3¾-digit unit, for $130. Analogic, Audubon Rd., Wakefield, Mass. 01880 [354]

It's better to be near a major transportation artery than in the middle of one. A company in a congested industrial center is like a human being in a congested city. Frustrated. By the monumental problem of simply getting around. To escape that, many people are moving to smaller towns. And many companies are moving with them. Not just to any smaller towns. But to the smaller towns of Georgia. The transportation and distribution center of the Southeast. In Georgia's smaller towns, you can be on or near the Interstate highway network. Plus a 100,000-mile system of public highways within the State. Served by some 500 motor carriers. You can be near two deep-water ports that put you amazingly closer to the heartland of America. You can have access to over 200 public and private airports. You can be near pipelines. Near some 6,000 miles of railroad. In Georgia's smaller towns, your company can be near all that. Close to your resources. Close to your markets. Close to transportation. But rarely frustrated by it. So may we direct your attention to our coupon? Send it to a better place. A much better place for a company to find itself. Georgia.

New products

3½-digit panel meter has unit price of $99

Offering a floating, bipolar differential input, the DM-2000 digital panel meter uses light-emitting diodes for its 3½-digit readout and sells for $99 in single quantity. Features include an input bias current of 20 nanoamperes and an input impedance of 100 megohms, plus automatic polarity. The DM-2000 is accurate to within ±0.05%, and it can resolve to 100 microvolts over a range of 0° to 70°C.
Input settling time is 50 microseconds, and up to 200 readings can be made asynchronously or synchronously. Datel Systems Inc., 1020 Turnpike St., Canton, Mass. 02021 [355]

Pocket-sized instrument tests ICs during operation

Designed for rapid diagnostic and functional testing of DTL and TTL integrated circuits during operation, the Logiscope, type IFP, simultaneously displays the logic state of all 14 or 16 pins of an IC soldered into a module. A clip-on connector and a 1-meter cable connect the Logiscope, a pocket-sized instrument, to the circuit under test. Requiring no power supply of its own, the tester receives its operating voltage from the test item, locating automatically the positive and negative poles. The influence of the cable capacitance on short clock pulses is balanced out by decoupling coils, so the functioning of the module under test is not affected. Price is $260. Rohde & Schwarz Sales Co., 111 Lexington Ave., Passaic, N.J. 07055 [356]

Burroughs panel displays help you sell your products

"The SELF-SCAN® panel display provides a CompuWriter® feature never before available... now an average typist can set quality typography with ease because the SELF-SCAN panel displays each character and function as she keyboards so that she can verify or make corrections by individual character, word, or complete line. Burroughs panel displays DO help you sell your product." — Mr. John Peterson, Vice President, Domestic Marketing, for Compugraphic Corporation, 80 Industrial Way, Wilmington, Massachusetts 01887. Helping you to sell your product helps us sell our product. That's why the Burroughs family of panel displays is designed to provide the most pleasing, most readable character available today. Whether your application is for 8, 16, 32, 80, or 256 alphanumeric characters, Burroughs SELF-SCAN panel displays provide an economical approach to your readout requirement plus offer the extra advantage of adding aesthetic quality to your product. Ask a Burroughs salesman to drop by and demonstrate his terminal in a briefcase. His SELF-SCAN panel display demonstrator is our best salesman. Write or call Burroughs Corporation, Electronic Components Division, Box 1226, Plainfield, N.J. 07061 (201) 757-3400. Only Burroughs manufactures NIXIE® tubes, SELF-SCAN® panel displays and PANAPLEX™ numeric panel displays.

This is a Hughes helium-neon laser. It might give you a big edge on competition. Then again, it might not.

New uses for helium-neon lasers are being discovered every day. Who knows? There may be an application that makes your product lighter, smaller, faster, more accurate or more versatile. First, let's make sure we're talking about the same kind of laser. He-Ne lasers are low-cost, low-power, and safe. They send out a visible beam of parallel light waves—continuous wave or modulated. It travels for miles with very little diffusion. He-Ne lasers don't cut steel. Or perform brain surgery. Those are other lasers. He-Ne lasers have revolutionized surveying and construction engineering. They shoot a perfect straight line for building bridges. Digging tunnels. Laying pipes and cables. Or leveling road beds. But there are many other applications. They align car wheels. And tell auto repair shops when a damaged body is straight. They find surface blemishes as products pass down a production line. Measure flow rates and machine-tool distances. Gauge thickness. Position automated machines. Inspect large prisms and lenses. He-Ne lasers scan production lines to keep quantitative records. Read bar-code patterns on packages and letters and supermarket items—adding totals and keeping inventory count. They carry large amounts of audio and video information over short distances. (Like transporting TV signals from a football stadium to a transmitter van.) Non-contact printing. Video playback systems. Security systems. Computer readout. Holographic recording for checking filed data such as the validity of credit cards. Spectroscopic particle counting in pollution monitors. He-Ne lasers might be the answer to your product improvement, too. Then again, they might not. After all, there are system interfacing considerations. And a hundred other angles. That's why you need us. We made the first working laser. We pioneered the use of He-Ne lasers in many of the fields we've listed. That means we can anticipate many of your problems. And opportunities. We can tell you whether you've got a practical He-Ne laser application. Or not. Free Advice: (213) 530-6272. Ask for Dick Roemer. Or write: 3100 W. Lomita Blvd., Torrance, Calif. 90509. HUGHES AIRCRAFT COMPANY ELECTRON DYNAMICS DIVISION

Function generator/filter keeps distortion under 0.1%

A novel design using an active filter permits the model 765 function generator to produce high-purity sine, triangle, and square waves from 1 hertz to 100 kHz. Sine-wave distortion is less than 0.1%, the company says. The active filter is switch-selectable and can be used as a band-pass filter (nominal Q of 50) or as a notch filter (40-dB notch depth). Output impedance is 600 ohms, and output amplitude is adjustable from 0 to 10 volts peak-to-peak with no load. Price is $195 assembled, $145 in kit form. Dytech Corp., 391 Mathew St., Santa Clara, Calif. 95050 [357]

Dc high-potential tester weighs less than 20 pounds

Digital readout of test voltage and leakage current is provided in a portable dc high-potential tester that weighs less than 20 pounds. The tester, designated the model 16300, operates either from an internal storage battery or from line voltage, and it contains an integral battery charger. Test voltage is adjustable from 500 to 25,000 volts dc, and the high-voltage power supply is electronically regulated against line and load changes.
An adjustable current limiter protects the unit and provides nondestructive testing of components. The model 16300 tester is priced at $1,950, and delivery is from stock. Marketing Dept., ITT Jennings, 970 McLaughlin Ave., San Jose, Calif. 95116 [358]

Elapsed-time meters include back-of-panel models

For use where space is at a premium, a back-of-panel type is included in a line of elapsed-time meters designated the 240 series. The meters are offered in 2½- and 3½-inch Big Look styles. They have a six-digit display in hours and 10ths of hours, or in minutes and 10ths of minutes. All of the meters, which are interchangeable with GE's type 236 and 235 units, are designed to meet ANSI shock and vibration specifications. General Electric Co., Display Devices Marketing, 40 Federal St., Lynn, Mass. 01910 [359]

Low-cost voltage references built for production-line use

For applications that do not require laboratory accuracy, two voltage references offer substantial savings. The model E-10-D, which sells for $450, provides a voltage output that is selectable between 0 and ±11 volts dc in 100-microvolt steps. Output current is 50 milliamperes, maximum, and is both short-circuit- and overload-protected. The model E-100-E, which is priced at $525, provides two output voltage ranges: ±10 v and ±100 mv, plus 10% over-range in both cases. Resolution on the 10-v scale is 10 parts per million, or 100 μV; on the 100-mv range, it is 1 μV. Output current is 50 mA at 10 v, with an output impedance of 50 milliohms. These economy instruments are accurate to within 0.01% of the dial setting. Electronic Development Corp., 11 Hamlin St., Boston, Mass. 02127 [360]

"Scotchflex" Flat Cable Connector System makes 50 connections at a time

Build assembly cost savings into your electronics package with "Scotchflex" flat cable and connectors. These fast, simple systems make simultaneous multiple connections in seconds without stripping or soldering.
Equipment investment is minimal; there's no need for special training. The inexpensive assembly press, shown above, crimps connections tightly, operates easily and assures error-free wiring. Reliability is built in, too, with "Scotchflex" interconnects. Inside the connector bodies, unique U-contacts strip through flat-cable insulation and grip each conductor for dependable gas-tight connections. "Scotchflex" offers you design freedom, with a wide choice of cable and connectors. From off-the-shelf stock you can choose: 14- to 50-conductor cables. Connectors to interface with standard DIP sockets, wrap posts on standard grid patterns, printed circuit boards. Headers for de-pluggable connection between cable jumpers and PCB. Custom assemblies are also available on request. For more information, write Dept. EAH-1, 3M Center, St. Paul, Minn. 55101. 3M COMPANY. "Scotchflex". Your systems approach to circuitry.

High-speed '741' sells for $1.25

IC replacement for standard op amp slews at minimum rate of 10 V per µs

A high-speed replacement for the popular 741 operational amplifier offers 20 times the speed, yet costs only $1.25 in 100-up quantity. Some available high-speed op amps are priced in the $20 range and, unlike the new Motorola MC1741S, are not direct plug-in replacements. This device, according to Ronald Campo of Motorola's marketing department, has the performance and characteristics of the 741 except that it has a guaranteed minimum slew rate of 10 volts per microsecond, compared to 0.5 V/µs for the conventional part. Correspondingly, power bandwidth is guaranteed to be 150 kilohertz, and is typically 200 kHz, 20 times the 10 kHz typical for a regular 741. The high-speed part also offers a typical time of 3 µs to settle within 0.1%, an important property in digital-to-analog converters. When combined with a Motorola MC1408 DAC, the part produces a voltage-mode output DAC that settles in 4 µs and sells for $7.20 in 100-up lots.
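The quoted power-bandwidth figures follow from the usual slew-rate relation f = S / (2πV_peak). In the sketch below, the 10-V peak swing is an assumed value typical of ±15-V operation; it is not a figure stated in the article.

```python
import math

# Full-power bandwidth from slew rate: f = S / (2*pi*Vpeak).
# Vpeak = 10 V is an assumed full output swing for +/-15-V supplies.

def power_bandwidth_hz(slew_v_per_s, v_peak):
    """Highest frequency a full-swing sine can reach without slew limiting."""
    return slew_v_per_s / (2 * math.pi * v_peak)

print(power_bandwidth_hz(10e6, 10))   # ~159 kHz, above the 150-kHz guarantee
print(power_bandwidth_hz(0.5e6, 10))  # ~8 kHz, near the 10-kHz typical 741 figure
```

Both results line up with the article's numbers, which is a useful sanity check that the slew figures are volts, not microvolts, per microsecond.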
Apart from the higher speed, Campo says, the new part is designed to act like other 741s, has a similar gain, and includes their standard features of short-circuit protection and internal frequency compensation. "It's very easy to use for anyone familiar with the 741," he says. "It looks and acts just like a standard 741 except for the higher-speed capability." The improved performance is obtained by changes in the internal circuitry of the part. Besides the applications in digital-to-analog converters (a swiftly growing market, according to Campo), the part should be popular in any large-signal amplifier where distortion is undesirable. The part is available in two packages, a TO-5-size eight-lead metal can, for both commercial (0° to 75° C) and military (-55° to +125° C) temperature ranges, and an eight-lead plastic mini DIP (the MC1741SCP1) having the commercial rating only. A ceramic DIP will be offered in the future. Regular and high-speed 741s feature offset null voltage capability, power consumption of 50 milliwatts, no latch-up, and differential voltage ranges. They are designed for ±15-volt operation, have open-loop voltage gains of 100,000, and can supply output currents of more than 10 milliamperes. Technical Information Center, Motorola Semiconductor Products Inc., P. O. Box 20924, Phoenix, Ariz. 85036 [411] P-i-n diode aims at uhf/vhf switches and attenuators A low-capacitance, planar-passivated, silicon p-i-n diode is designed for rf switching, modulating, and automatic-gain-control applications. Designated the HP 5082-3077, it is intended for use in rf duplexers, antenna switching matrixes, electronically tuned filters, and variable rf attenuators. Effective minority-carrier lifetime is greater than 100 nanoseconds, resulting in low harmonic distortion in the range from 100 megahertz to 1 gigahertz. 
Dynamic range is from 1 ohm to 10 kilohms; reverse bias capacitance is less than 0.3 picofarad, and continuous-wave power switching capability is 2.5 watts. Price is $2.75 each for 1 to 99, and $2.20 for 100 to 999. Inquiries Manager, Hewlett-Packard Co., 1501 Page Mill Rd., Palo Alto, Calif. 94304 [413]

ECL gate propagates in 2 nanoseconds

For use in high-speed comparator and parity functions, a triple two-input, exclusive-OR/NOR gate has been developed in the form of an emitter-coupled-logic integrated circuit. Propagation delay of the model 10107 is brief: for one set of inputs, it is 2.0 nanoseconds, and for the other, delay is 2.8 ns. Each input is terminated with a 50-kilohm pull-down resistor, which eliminates the need to tie unused inputs "low." The dc loading factor is one. Typical no-load power dissipation is 115 milliwatts per package. The unit has a high fanout capability together with high-impedance inputs, making the circuit useful in a transmission-line environment. Typical output rise and fall times with all outputs loaded are 3.5 ns between 10% and 90%, and 2 ns between 20% and 80%. In lots of 100, price of the plastic-packaged version is $1.70 each. Signetics, 811 East Arques Ave., Sunnyvale, Calif. 94086

Overall analog dynamic range: 132 db. Automatic/programmable gains to 1024. NEW APPROACH TO LOW LEVEL DATA ACQUISITION: Phoenix Data's new 8000 Series. Phoenix Data's floating point 8000 Series data acquisition system features adaptability to virtually any analog input signal currently in use—offering automatic or programmed gain selection with 11 binary ranges from ±10 millivolts to ±10.24 volts full scale. The data word (12 binary bits) is combined with the range data (4 binary bits) for a 16-bit output word in the automatic ranging mode. The system will resolve input changes of 5 microvolts on the ±10 millivolt range for an overall analog dynamic range of 132 db. FEATURES:
• ADC resolution of 12 binary bits.
• 11 binary gain ranges.
• ±10 mv to ±10.24 v input ranges.
• Solid state MOSFET multiplexing.
• Thruput rates from 1 to 20 kHz.
• Auto or programmable gains.
• Up to 128 channels per chassis.
• System accuracy of .05% of reading.
• System T.C.: 0.001% FSR ±1 µv RTI/°C.
If it's stability, accuracy, speed, or all-around quality you need in data conversion, contact Phoenix Data now! PHOENIX DATA, INC., 3384 West Osborn Road, Phoenix, Arizona 85017. Ph. (602) 278-8528. TWX 910-951-1364

Any system requiring memory capability—from small programmable controllers to sophisticated computers—also requires data security. So here's a statement of fact that's well worth remembering when you're considering memory elements for any application: ECD's new family of Read-Mostly Memories gives you a much higher degree of data security than any other read/write memory on the market today—bar none! No conditions; no reservations; no exclusions. No need for costly power-fail detection circuitry and a battery back-up source to protect their stored data content, either. Because these unique Ovonic amorphous semiconductor memory arrays are inherently non-volatile. In fact, you can take them completely out of your system at will without losing one bit of stored information. But 100% data security is only one of the basic advantages offered by amorphous RMMs. The other is repetitive alterability. An inherent capability that lets you correct program errors on the spot—and change, up-date or re-alter stored data as often as you like. Quickly, easily and selectively—by simple electrical means. Other key operating characteristics include:
- In-system read/write
- Random access operation
- High noise immunity
- Non-destructive readout
- Write lock-out protection
- TTL/DTL compatibility
Availability? Here and now! In standard units for 2 x 4, 1 x 15 and 8 x 4 bit configurations all the way up to 256-bit and 2048-bit arrays that can be easily arranged in 512 x 4 and 256 x 8 expandable systems. Plus write current generators and read multiplexer units that permit easy interfacing with existing logic forms to give you full in-system read/write operation. AMORPHOUS RMM: Non-Volatile/Repetitively Alterable Semiconductor Memory Arrays. Technical data sheets on standard RMMs are yours for the asking. And our Systems Engineering Group will welcome the opportunity to be of helpful service to you—any time. Simply call or write: Energy Conversion Devices, Inc., 1675 West Maple Road, Troy, Michigan 48084. Telephone: 313/549-7300

Integrating man's creativity with a computer's speed and memory isn't easy. But Gerber has done it with an interactive system which can create and plot drawings as complex as the operator can imagine... all at the touch of a button. Think what this can mean in your department. Drafting drudgery eliminated. More accurate drawings, faster. More time for designers to think and create. Lead time drastically reduced to give you a competitive edge in bringing new products to the marketplace. Translated into dollars, this means more profits from your engineering department profit center. More profits whether you make automobiles, aircraft, electronic equipment, machine tools. Or anything else that requires drafting. The Gerber interactive system is delivered complete and ready to go to work for you. And, of course, the entire system is designed, built and serviced by Gerber Scientific, the internationally recognized pioneering leader of the industry. Find out how the Gerber Interactive Design System can produce for you. A note or phone call will bring a prompt response. The Gerber Scientific Instrument Company, Hartford, Connecticut 06101. (203) 644-1551. See us at NEPCON East, Booth 4540, and NCC, Booth 1102. Push Button Drawing.

Selected Wiley-Interscience Books

A User's Handbook of Integrated Circuits, by Eugene R. Hnatek, Signetics Corporation. *A User's Handbook of Integrated Circuits* enables the electronics designer to take full advantage of the wide selection of circuitry available to him. With it as a guide, the practicing engineer should be able to make an informed choice between circuits made by different technologies. 1973, 464 pages, $24.95.

Applied Maintainability Engineering, by C. E. Cunningham and Wilbert Cox, both with Philco-Ford Western Development Labs. A volume in the Wiley Series in Human Factors, edited by David Meister. A practical guide, *Applied Maintainability Engineering* will help you develop a maintainability engineering program which conforms to the specifications delegated by the Department of Defense. Backed by ten years of implementation experience using MIL-STD-470, this handbook describes every facet of developing and implementing a maintainability engineering program with specific examples and methodology for each maintainability task. 1972, 414 pages, $19.95.

Lightning Protection, by J. L. Marshall, Canadian Broadcasting Corporation. A consolidation of the available information, *Lightning Protection* provides a lucid examination of lightning—its nature, effects, and principles of protection. An invaluable resource for electrical, communication, and broadcasting engineers, it discusses topics ranging from the magnitude of the lightning discharge to grounding communication towers and systems. 1973, 224 pages, $14.95.

Low-Noise Electronic Design, by C. D. Motchenbacher, Honeywell Corporate Research Center, and F. C. Fitchen, University of Bridgeport, Connecticut. *Low-Noise Electronic Design* offers the electrical engineer and technician a practical, yet comprehensive guide to the problems of low-noise design. Among the materials presented are a computer program for the calculation and integration of noise, new information on noise in passive components, and many practical design examples. 1973, 358 pages, $19.95.

Digital Signal Processing, edited by Lawrence R. Rabiner, Bell Telephone Laboratories, and Charles M. Rader, M.I.T. Lincoln Laboratory. A volume in the IEEE PRESS Selected Reprint Series, prepared under the sponsorship of the IEEE Audio and Electroacoustics Group. Since digital signal processing now has applications in radar, speech, seismic exploration, analysis of vibration, analysis of biomedical signals, picture processing, reliable communications, and sonar, scientists and engineers should find this book to be a very valuable reference. A compilation of 57 articles, it is divided into three sections: Digital Filters, the Fast Fourier Transform, and Effects of Finite Word Length. 1973, 518 pages, $13.95.

Prices subject to change without notice. Available at your bookstore or from Dept. 093-A-1268-WI, Wiley-Interscience, a division of John Wiley & Sons, Inc., 605 Third Avenue, New York, N.Y. 10016. In Canada: 22 Worcester Road, Rexdale, Ontario.

Quad line receivers aimed at data communications

Offering built-in threshold hysteresis, two quad line receivers called the SG1489J and -AJ are designed for data-interfacing applications. The AJ version provides a greater margin of hysteresis, more than double that of the J version. Both types offer logic threshold shifting and input noise filtering capability. Input resistance is 3.0 to 7.0 kilohms, and input signal range is ±30 volts. Primary application for the devices is in interfacing terminals with data-communications equipment that meets EIA standard RS-232C. Package style is 14-pin Cerdip. In 100-lots, price of the J version is $4; for the AJ type, $4.50. Silicon General Inc., 7382 Bolsa Ave., Westminster, Calif.
92683 [416] Crystal oscillators clock four logic families Multipurpose crystal-controlled low-frequency (10- to 250-kHz) clock oscillators feature bipolar design with buffered output. Designated the SQXO-2 series, the units are built for use in circuit applications requiring square-wave outputs that are compatible with CMOS, TTL, DTL, and RTL. Oscillator and all related components are housed in a TO-5 package. The quartz crystal, which is photolithographically produced in a tuning-fork configuration, is then laser-tuned to the precise frequency. Power requirements are 0.2 to 2 milliamperes at 5 volts, depending on frequency. Prices start at $19.50 each in 100-lots and at $10 each in quantities of 1,000. Delivery time is stock to four weeks. Statek Corp., 1233 Alvarez Ave., Orange, Calif. 92668 [417] Voltage regulators housed in plastic packages Seven fixed-voltage regulators housed in plastic power-transistor packages are designed for applications where a simple, low-cost unit is needed that can provide a moderate amount of current without complex current-boosting circuitry. The MC7805/24 series posiPROBLEM: keeping your cables interference-free in all environments. SOLUTION: the right combination of shielding and cable. There can be as many types of electrical interference as there are operating environments. That's why Brand Rex makes such a variety of shield constructions. Shields for communications, telemetering, instrumentation, signal transmission and computer interconnecting cables you can use in just about any situation. Where the problem is electrostatic interference, we offer everything from conventional braided or served round wire types; to corrugated copper, aluminum or bimetallic tapes longitudinally applied; to metal/polyester systems with drain wires. Where magnetic effects are the problem, we supply special conductor lay-ups, and, if required, shields of high permeability materials. 
To help answer your basic questions on electrical interference, Brand-Rex has published a comprehensive shielding guide. If it doesn't offer a solution to your particular problem, our product engineers will. For your copy write to Industrial Market Manager, Brand-Rex Company, Willimantic, Conn. 06226 or call 203/423-7771. BRAND-REX 4,000 solutions in search of a problem. ECL 10K: Now easy as ABC The big three basics come first at Signetics. **Availability.** The fastest turnaround anywhere in 2ns high speed logic. What you see here, is what you can get from Signetics. Now. No delays, no alibis, no fooling around for months. Standard parts straight from proven, line-ready stock. MIL STD 883 Class B takes just a little longer. **Broad line.** Twelve new memory, MSI and interface ECL 10K devices join the logic functions we’re already shipping in volume world-wide. One-call access to the full range of part types and parameters, packaged in plastic DIP, Cerdip, or chips. A complete high speed logic family, from one single source: for greater design flexibility, plus significant cost advantages on a mixed buy. **NEW ECL 10,000 FUNCTIONS:** - **Supergate** - 10100 Quad 3-input gate. Most useful 10K function, most reasonably priced. - **Interface** - 10124/10125 Quad differential line drivers/receivers, ECL-TTL translators. 5ns high-performance delivers density and flexibility, below the cost of any other similar devices. - **Multiplexers** - 10132/10134/10173 Logically powerful, 2.5ns high speed Mux-latches. - 10164/10174 8 to 1/dual 4 to 1. Large fan-in multiplexers, operating at 3.5ns. - **Parity** - 10170 9-plus-2 input expandable parity circuit. - **Storage** - 10133 Quad D-type latch, with gated outputs. - **Memory** - 10139* Extremely fast 17ns PROM, 32x8 organization. - 10145* Extremely fast 9ns RAM, 16x4 organization. *In characterization now. Coming soon. 
**ALTERNATE SOURCED FUNCTIONS** | | | | | | | |-------|-------|-------|-------|-------|-------| | 10101 | 10106 | 10110 | 10116 | 10119 | 10131 | | 10102 | 10107 | 10111 | 10117 | 10121 | 10161 | | 10105 | 10109 | 10115 | 10118 | 10130 | 10162 | **SIGNETICS ORIGINATED FUNCTIONS** | | Description | |-------|--------------------------------------------------| | 10112 | Dual 3-Input 1OR/2 NOR Gate | | 10113 | Quad Exclusive OR with enable | | 10171 | Dual 1-of-4 Demux/Decoder (Low) | | 10172 | Dual 1-of-4 Demux/Decoder (High) | **Compatibility.** Pin-for-pin identical with Motorola MECL 10,000, with industry-accepted temperature coefficients and ranges (-30° to +85°C). Two in-depth, production-proven sources insure service and delivery. You can use Signetics ECL 10K in mixed systems without the subtle penalties or noise immunity reductions that occur with compensated 10K families. Switching rise/fall times are compatible with conventional system layouts. **FREE ECL 10K PARTS KIT** To introduce our new ECL 10,000 products, we’re offering an Evaluation Kit: six free parts to give you first-hand experience with Signetics optimized ECL 10K. --- Please rush us your free sample ECL 10K Evaluation Kit. Our design/production application is__________________________ Send complete specs, data and application notes on your complete ECL 10K line. Name_____________________________________________________ Title_____________________________________________________ Attach this coupon to your company letterhead, and mail to: Signetics-ECL 811 East Arques Avenue Sunnyvale, California 94086 (408) 739-7700 Signetics Corporation, A subsidiary of Corning Glass Works. New products Voltage regulators can supply more than 1 ampere at nominal voltages of 5, 6, 8, 12, 15, 18, or 24 volts. The devices have only three terminals—input, output, and ground—and they require no external components. 
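A practical aside on three-terminal regulators like these: the linear pass element dissipates roughly (Vin - Vout) * Iload, which is what any heat-sinking must remove. A minimal sketch with assumed example values, not data-sheet figures:

```python
# Rough power-dissipation check for a fixed-output three-terminal
# linear regulator (e.g. a 5 V part from the series described above).
# The voltages and load current below are illustrative assumptions.

def linear_reg_dissipation(v_in: float, v_out: float, i_load: float) -> float:
    """Power (watts) burned in the pass element: (Vin - Vout) * Iload."""
    if v_in <= v_out:
        raise ValueError("input must exceed output for a linear regulator")
    return (v_in - v_out) * i_load

# 10 V in, 5 V out, 1 A load: 5 W must be removed by the heat sink.
p = linear_reg_dissipation(10.0, 5.0, 1.0)
print(p)  # 5.0
```

The same arithmetic explains why running such a part from a needlessly high input voltage wastes power as heat.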
The regulators can be attached to a heat-sink surface with a machine screw through a hole in the package. Maximum input voltage is 35 V on all types except the MC7824, which is specified at 40 V. Price is $1.75 in 100-lots. Technical Information Center, Motorola Semiconductor Products Division, P.O. Box 20912, Phoenix, Ariz. 85036 [418] Voltage regulator handles wide range of inputs A variable, dual-tracking voltage regulator offers a 45-volt input and an output range of 50 millivolts to 42 V. Only one external resistor is required to set the desired output voltage, and load regulation is 0.2%. The RM/RC4194 provides 200 milliamperes at both outputs simultaneously; with external pass transistors, it can supply up to 10 A or more. The unit provides thermal-shutdown protection when temperatures approach 175°C. Prices start at less than $2 for 100-lot quantities. Raytheon Semiconductor, 350 Ellis St., Mountain View, Calif. 94040 [419] Now, for the first time 3-inch diameter sapphire presents a competitive alternate to silicon for the fabrication of MOS IC devices. Sapphire's significant advantages in strength, reliability, and cost savings are making it today's new substrate standard. Sapphire is compatible with thin-film processing technology . . . it substantially reduces masking costs . . . and it can be recycled. Sapphire is the ideal material for high-speed, low-power MOS devices, high-reliability microwave IC's, and is less expensive and more versatile for LED applications and other devices. FREE . . . write for your 12-page technical manual that describes the physical characteristics of sapphire and its many applications. Crystal Products Dept. 
8888 Balboa Avenue San Diego, California 92123 Phone: (714) 279-4500 Sales offices throughout the world CRYSTAL PRODUCTS UNION CARBIDE THE DISCOVERY COMPANY Circle 151 on reader service card YOU CAN COUNT ON US EIGHT FUNCTION CALCULATOR ARRAY WITH MEMORY FIVE FUNCTION CALCULATOR ARRAY WITH MEMORY + - x ÷ % √x 1/x x² M↓ + - x ÷ % • Displays up to 8 digits and mathematical sign during entry of operands • Displays intermediate calculations as well as final result • Performs chain calculations involving sequential algebraic entry • Permits fixed or floating decimal point • Internal clock generator • Auto-clear • Display blanking • Low battery detector • Internal memory for "M" operation • Direct LED segment drive optional • Supplied in a 28-lead DIP MOS TECHNOLOGY, INC.—Continually coming up with new methods and more answers. So today, more than ever, YOU CAN COUNT ON US! MOS TECHNOLOGY, INC. VALLEY FORGE INDUSTRIAL PARK, VALLEY FORGE, PA. 19481 (215) 666-7530 an affiliate of ALLEN-BRADLEY Need Details? We've A Data Sheet That Delivers. Simply Write Or Phone. HEADQUARTERS — Sales: Mr. Jack Turk; Applications Mgr.: Mr. Julius Hertsch; MOS Technology, Inc., Valley Forge, Pa. 19481 EASTERN REGION — Mr. William Whitehead, Suite 307 — 88 Sunnyside Blvd., Plainview, N.Y. 11803 • (516) 822-4240 CENTRAL REGION — MOS Technology, Inc., 838 S. Des Plaines St., Chicago, Illinois 60607 • (312) 922-0288 WESTERN REGION — Mr. Chuck Martin, 2172 Dupont Drive, Suite 221, Patio Bldg., Newport Beach, Calif. 92660 • (714) 833-1600 Data handling CAD program adds models Aedcap extends capability to transmission lines; improves other models First introduced about two years ago, Aedcap, one of the principal general-purpose circuit analysis computer programs in use today, is now being updated. Half a dozen new built-in models are being added to its library, allowing the user to take advantage of the latest modeling techniques. 
Aedcap (Automated Engineering Design Circuit Analysis Program) can perform both linear and nonlinear circuit analysis [Electronics, March 27, 1972, p. 123]. The program can now directly analyze transmission lines, as well as circuits that are described by either Y or S parameters. Besides being important for electrical power applications and microwave circuits, transmission-line modeling is also a significant factor in the analysis of emitter-coupled-logic circuits and Schottky-TTL circuits. Such high-speed logic circuits require the metal interconnection paths on a chip to be regarded as transmission lines having a characteristic impedance, an attenuation constant, and a specified length. Also, Aedcap now makes use of the Gummel-Poon bipolar transistor model. It is more accurate than the well-known Ebers-Moll model because it accounts for the variation of transistor beta with collector current and for base-width modulation, which causes the transistor's collector-emitter resistance to change with collector current. The figure shows how this resistance is constant (parallel slopes) for the Ebers-Moll transistor characteristics, but is variable (changing slopes) for the Gummel-Poon characteristics. (These curves converge at the Early voltage, $V_{A}$.) In addition, Aedcap now has a more accurate model for the MOSFET—a model that is based on device geometry, rather than processing parameters. This makes MOSFET model parameters easier to measure. Moreover, n-channel devices can now be handled as easily as p-channel devices, making the new MOSFET model ideal for analyzing complementary-MOS circuits. Two new diode models round out the additions to Aedcap's model library: the Schottky-barrier diode and the zener diode. The program's existing junction-diode model was modified to get the two new diode models. SoftTech Inc., 391 Totten Pond Rd., Waltham, Mass. 
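The Early-voltage behavior that distinguishes the two bipolar models can be sketched numerically. This is an illustrative simplification, not Aedcap's actual model code; the saturation current, Early voltage, and bias points are assumed values:

```python
import math

# Why Gummel-Poon's converging output curves imply a finite
# collector-emitter resistance while Ebers-Moll's parallel ones do not.
VT = 0.026   # thermal voltage, volts
I_S = 1e-14  # saturation current, amperes (assumed)
V_A = 50.0   # Early voltage, volts (assumed)

def ic_ebers_moll(v_be, v_ce):
    # No base-width modulation: I_C is independent of V_CE (flat curves).
    return I_S * math.exp(v_be / VT)

def ic_gummel_poon(v_be, v_ce):
    # Early effect: curves tilt, extrapolating back toward -V_A.
    return I_S * math.exp(v_be / VT) * (1.0 + v_ce / V_A)

v_be = 0.65
flat = ic_ebers_moll(v_be, 10.0) - ic_ebers_moll(v_be, 5.0)   # exactly 0
slope = (ic_gummel_poon(v_be, 10.0) - ic_gummel_poon(v_be, 5.0)) / 5.0
r_out = ic_gummel_poon(v_be, 5.0) / slope  # finite, roughly V_A / I_C
```

With these numbers `r_out` comes out to 55 ohms-per-unit-current form, i.e. (V_A + V_CE)/I_C in volts over amperes, while the Ebers-Moll difference is identically zero.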
02151 [361] Channel concentrator allows sharing of computer ports Designated the model C-32, a data channel concentrator connects data from modems, terminals or multi- Improved transistor modeling. The parallel slopes of the Ebers-Moll curves (left) do not account for base-width modulation; the converging slopes of the Gummel-Poon curves (right) do. "Airpax UPL Series Circuit Protectors make our entire systems possible." AIRPAX™ TYPE UPL Electromagnetic Circuit Protectors Airpax Type UPL Circuit Protectors provide reliable, low cost power switching, circuit protection, and circuit control. UL Recognized, the UPL line offers many configurations including series, shunt and relay. Multipole assemblies are available with a mix of current ratings, delays, and internal configurations. Full load current ratings from 0.05 to 100 amperes. Avtec Industries designs and manufactures pre-fabricated modular Utility Distribution Systems for one-point connection to multiple-unit batteries of equipment or to special laboratory equipment; and Electra-Poles, unique systems that plug into the ceiling when power is needed and unplug when the equipment is removed. These systems require positive protection — AIRPAX protection! In the words of Mr. Bruce A. Zimmerman, Vice President of Avtec, "These entire systems are possible because we use Airpax UPL Series Circuit Protectors, which provide built-in 'point-of-use' protection and eliminate the need for running branch circuits from the panel box." Shouldn't you investigate Airpax Circuit Protectors for your next design? Write for specifications. Airpax Electronics / CAMBRIDGE DIVISION / Cambridge, Md. 21613 / Phone (301) 228-4600 New products Timeplex Inc., Box 202, 65 Oak St., Norwood, N.J. 07648 [363] Security system protects privacy of computer data Designed for continuous on-line use, a security system called the Identimat 2000H protects confidential data or programs stored in computers. 
The system consists of a video terminal and a device that measures the geometry of the hand. The latter device allows only authorized persons to access the computer, and the unit continuously monitors the line to the terminal. A user of the system keys in his employee number or code on the ter- NEW FAST ANSWER FOR ELECTRONIC NOISE PROBLEMS... Ceramag® Ferrite Beads on Lead Tape Stackpole Ceramag® ferrite beads provide a simple, inexpensive means of obtaining RF decoupling, shielding and parasitic suppression without sacrificing low frequency power or signal level. Now beads are available with leads, cut and formed or on lead tape. Most equipment that is capable of automatic insertion of lead tape components can be modified to accept this special Stackpole bead. No other filtering method is as inexpensive...and now as fast to insert in your circuit. Starting with a simple ferrite bead (a frequency-sensitive impedance element) which slips over the appropriate conductor, Stackpole has available a variety of materials and shapes providing impedances from 1 MHz to over 200 MHz. The higher the permeability, the lower the frequency at which the bead becomes effective. CERAMAG® FERRITE BEAD CHARACTERISTICS | GRADE NUMBER | 24 | 7D | 5N | 11 | |--------------|------|------|------|------| | Initial Permeability | 2500 | 850 | 500 | 125 | | Volume Resistivity @ 25°C | $1.0 \times 10^2$ | $1.4 \times 10^5$ | $1.0 \times 10^3$ | $2.0 \times 10^7$ | | *Effective Suppression At: | 1 MHz | 20 MHz | 50 MHz | 100 MHz | | Curie Temperature (°C) | 205 | 140 | 200 | 385 | * A tutorial guide on how these passive components behave with frequency and geometry is available from the Electronic Components Div. Impedance varies directly with the bead length and log [O.D./I.D.]. Beads are available in sleeve form in a range of sizes starting at .020” I.D., .038” O.D., and .050” long. The bead on lead tape is .138” O.D. and .175” long. 
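The scaling rule quoted above (impedance proportional to bead length times the log of the O.D./I.D. ratio) can be sketched as follows. The absolute scale factor, which folds in permeability and frequency, is deliberately omitted, so only ratios between two geometries are meaningful:

```python
import math

# Relative ferrite-bead impedance from the rule quoted in the ad:
# Z proportional to (length * ln(OD/ID)).  Constant of proportionality
# (material permeability, frequency dependence) is left out on purpose.

def bead_z_relative(length_in: float, od_in: float, id_in: float) -> float:
    return length_in * math.log(od_in / id_in)

# Smallest sleeve quoted above: .020" I.D., .038" O.D., .050" long.
small = bead_z_relative(0.050, 0.038, 0.020)
# Same bore and wall, doubled length: impedance doubles under this rule.
double = bead_z_relative(0.100, 0.038, 0.020)
print(double / small)  # 2.0
```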
Where quantities warrant, other beads on leads and/or lead tape are a design possibility. Tight mechanical tolerances are held in sizes and shapes as varied as the pair of giant, mating channels shown on the left which are used to eliminate the effect of transient noise in computers. Sample quantities of beads are available for testing. Consult Stackpole Carbon Company, Electronic Components Div., St. Marys, Pa. 15857. Phone: 814-781-8521. TWX: 510-693-4511. STACKPOLE ELECTRONIC COMPONENTS DIV. Circle 155 on reader service card POLYSKOP® III INTEGRATED SWEEP TEST SYSTEM 100 kHz - 1250 MHz MODULAR DESIGN PERMITS TAILORING TO YOUR REQUIREMENTS AND CASH FLOW - Solid State - Narrow Band Measurements in IF Ranges, e.g., 400-510 kHz/9.5-12.5 MHz/28-45 MHz. Crystal Filter Measurements in conjunction with a synthesizer - Medium Band Measurements: 10 Subranges from 100 kHz-1250 MHz - Wideband Measurements: covering the VHF (1-300 MHz) and UHF (460-860 MHz) and 300-460 MHz - Four Parameters can be displayed simultaneously Simultaneous display of forward and return sweep with magnified display during flyback Forward and return sweep times adjustable between 20 msec and 10 sec and a manually controlled CW signal Crystal-controlled frequency markers 1, 10, 100 MHz+Ext. Marker Input. Pulse or vertical line marker selectable Parallax-free superimposed reference lines for frequency and level Available in 50 or 75 Ω impedance Get The Extra Capability, Greater Reliability, and Longer Useful Life Of... ROHDE & SCHWARZ 111 LEXINGTON AVENUE, PASSAIC, N.J. 07055 • (201) 773-8010 Western Office: 510 S. Mathilda Avenue, Sunnyvale, Calif. 94086 (408) 736-1122 New products minal; the computer relays a signal to the terminal and security unit, and the user places the hand on the security unit for decoding and verification. Identification Corp., 408 N. Paulding Ave., Northvale, N.J. 
07647 [364] Reader-punch handles up to 285 cards per minute The RP-100, a reader-punch for 80-column cards, features a side-friction picking system with positive roller-controlled transport, a cam-driven punching system, and automatic verification in its punch operation. The unit punches 100 to 285 cards per minute, the latter when only the first column is punched. Reading speed is 400 cards per minute. Price is $11,750. Documation Inc., Box 1240, Melbourne, Fla. 32901 [365] Disk memories provide 12.8-megabit storage A family of fixed-head disk memories for OEM minicomputer applications provides up to 12.8 megabits of storage capacity. Designated the 6000 series, the memories feature double-density phase-modulated recording, and noncontact flying heads. Units are available in eight- to 128-track configurations, with memory capacity of 100,000 bits per track. Alternately, 4,096 16-bit words per track can be accommodated, providing up to 512,000 To display people this is an "electrical harp". Molded of Plenco, it gives lights and displays a whirl that strums up customer interest. CUSTOM DIE MOLD, Bellwood, Ill., injection-molds the motor-driven harp of our Plenco 482 General-Purpose Phenolic Compound. In use the harp holds fluorescent lights and revolves pictorial elements. Leading advertisers and display houses switch it on to give more life to point-of-purchase displays. Says the molder, "Plenco 482 was selected because the part is long and flat, and this compound gave us the molding latitude we wanted. Assured us of no freezing in the cylinder, and no warpage." Black or Brown, Plenco 482 is easy flowing and fast curing. Is formulated to perform well under a wide latitude of molding conditions. Harp or other molding problem, you have a whole symphony orchestra of Plenco selections and service to tune in on. We invite your requests. 
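As a quick check on the 6000-series figures above, taking the per-track capacity as 100,000 bits (the reading consistent with 128 tracks yielding the stated 12.8-megabit maximum):

```python
# Sanity arithmetic for the 6000-series fixed-head disk capacities.
tracks = 128
bits_per_track = 100_000
total_bits = tracks * bits_per_track       # 12,800,000 = 12.8 megabits

# Formatted alternative quoted above: 4,096 16-bit words per track.
words_per_track = 4_096
total_words = tracks * words_per_track     # 524,288, i.e. roughly the
                                           # "512,000 words" quoted (512K)
formatted_bits = total_words * 16          # 8,388,608 usable bits
```

The gap between 12.8 megabits raw and about 8.4 megabits formatted is the overhead of sector and word framing, which the item does not break out.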
PLENCO THERMOSET PLASTICS PLASTICS ENGINEERING COMPANY Sheboygan, Wisconsin 53081 Through Plenco research...a wide range of ready-made or custom-formulated phenolic, melamine, epoxy and alkyd thermoset molding compounds, and industrial resins. New products words of storage capacity. Packing density is nominally 2,700 bits per inch; 1,800 or 3,600 rpm is offered; access time is 8.3 milliseconds at 3,600 rpm; and bit serial transfer rate is 6 MHz. A typical price is $10,000 for a single unit and $7,000 in quantity. Information Data Systems Inc., 2020 Winner St., Walled Lake, Mich. 48088 [366] CRT displays 1,920 character positions The series TDV 200 video display terminal is a CRT with keyboard, control logic, character generator, refresh memory and interface. Features include capability for 80 characters per line, 24 lines, and 1,920 maximum displayable character positions. Character generation is accomplished by a five-by-seven dot matrix displayed as five-by-14 using interlace. The memory is a dynamic MOS shift register, and the scan method is designed for a standard raster of 625 lines. Tandbergs Radiofabrikk A/S., P.O. Box 9, Korsvoll, Oslo 8, Norway [367] Disk, tape drives are aimed at OEM, systems applications For applications in both OEM equipment and as parts of systems, two disk drives and a tape transport are aimed at the minicomputer market. The model 6000 tape transport is a 10½-inch IBM-compatible unit offering file search and rewind at 200 inches per second and a maximum data transfer rate of 72,000 characters per second. Two models of the disk drive are available; the 8100/5 storing 25 million bits, and the 8200/5 with a capacity of 50 million bits. Price of the transport is under Scotchpar Brand Flame Retardant Polyester Film. The built-in fire extinguisher. Fire. The common enemy of transformers, flexible circuits, flat cables and other electrical/electronic components. Until now. 
Now 3M introduces "Scotchpar" Type 7300 Flame Retardant Polyester Film. With an Oxygen Index of 28 minimum, Type 7300 film, when ignited, will melt but not burn. It extinguishes itself, easing further danger to equipment and lives. Type 7300 film can save you money, too. For example, "fly-back" transformers may no longer need encapsulation in silicone rubber. And, "Scotchpar" Flame Retardant Film, available in 1 to 5 mil thicknesses, has the electrical and physical properties of standard polyester films, with the added benefit of a much better winding surface. Learn more about "Scotchpar" Type 7300 film, the built-in fire extinguisher, by writing 3M Company, 3M Center, 220-6E, St. Paul, Minnesota 55101. X-Y and Y-T recording . . . and PORTABLE, too? (only 8" x 10" and 7 lbs.) YES... only the Simpson Model 2745 offers all this—and more: - Makes X-Y Recordings with independent selection of X and Y axis sensitivity - Makes Y-T Recordings with a built-in selectable time sweep - Has Fast Servo-Drive Response of 0.7 second on X axis and 0.5 second on Y axis for a full scale change - Makes Bi-Polar Recordings and segmental scale recordings - Records on Chart Rolls OR Sheets using ink OR inkless writing systems BATTERY POWERED... NO LINE RESTRICTIONS. Operates 75 hours or more on a single set of "D" cells with dependable ±1.0% accuracy. All solid state circuitry with high input impedance—FET chopper for long term stability. Only $750... ready to operate. Supplied with 2 Y-T chart rolls, 2 X-Y chart pads, inkless stylus pen, fiber tip ink pen, 6 test leads, dust cover, batteries and instruction manual. ASK YOUR SIMPSON REPRESENTATIVE FOR A DEMONSTRATION... OR WRITE FOR BULLETIN L-1012. SIMPSON ELECTRIC COMPANY 5200 W. Kinzie St., Chicago, Ill. 60644, (312) 379-1121 Export Dept.: 5200 W. Kinzie St., Chicago, Ill. 
60644, Cable SIMELCO IN CANADA: Bach-Simpson, Ltd., London, Ontario IN INDIA: Ruttencha-Simpson Private Ltd., International House, Bombay-Agra Road, Vikhroli, Bombay New products $3,000 in OEM quantities; the 8100/5 disk drive, $2,800; and the 8200/5, $3,100. Microdata Corp., 17481 Red Hill Ave., Irvine, Calif. 92705 [369] Print-plot system is for use with matrix technique An off-line print-plot system is built to operate with Versatec's 8½-, 11-, and 20-inch line of printers, plotters and printer-plotter units that use the Matrix Electrostatic Writing Technique—MEWT. The new matrix print/plot system is designed to be used with IBM-compatible 37.5-inches-per-second magnetic tape: nine-track NRZI, 800 bits/inch; nine-track phase-encoded, 1,600 bits/inch; or seven-track NRZI, 200, 556, or 800 bits per inch. Versatec Inc., 10100 Bubb Rd., Cupertino, Calif. 95014 [368] Computer's cycle time is 750 nanoseconds A medium-scale computer designated the Slash 4 is aimed at scientific and real-time users and offers a cycle time of 750 nanoseconds. The basic price of $19,900 includes 24,000 bytes of memory, 128 to 356 24-bit words in 8,000 word increments; parity; hardware multiply/divide/square root; priority interrupt control system; four external priority interrupts; five registers; one eight-bit input-output channel; and software. Aimed at the end-user, the computer uses a core memory. A combination of semiconductor and core types is planned for future models. Datacraft, 1200 Northwest 70th St., Box 23550, Fort Lauderdale, Fla. 33307 [370] Our customers... what do they say about Rotron fans and blowers? When a company is chosen to supply air moving devices and systems to so many of the best known and most respected names in American business, it says something about that company. About the quality of its products. About the level of its engineering abilities. About the realism of its pricing. 
About all the things intelligent, sophisticated buyers of fans and blowers find important. It says, we think, that Rotron® is a company other buyers would like to know better. Buying is believing. SURPRISE! HP's new $2.95* displays! Now a great looking solid-state display for only $2.95.* HP's new low-cost digit is really something to see. Wide viewing angle and bright, evenly-lighted segments offer excellent readability. Designed for commercial applications, the 5082-7730 Series is a pin-for-pin replacement for other displays such as the MAN 1, MAN 7, DL 10 and DL 707 and offers a large 7-segment 0.3 inch character with right or left hand decimal points. Quality? Still the best around. Contact your nearby HP distributor for immediate delivery. Or write us for more details. This display is worth a closer look. *1K quantity; Domestic USA Price Only. New products Industrial electronics ICs challenge SCRs in power Three hybrid transistor units can put out from 2 to 24 kilovolt-amperes In the high-power conversion field, the transistor had been considered a "soft" pulse device unable to withstand instantaneous overloads. That has changed now with Texas Instruments' development of three integrated power-transistor switches designed specifically for jobs usually performed by silicon controlled rectifiers. The three hybrids supply power outputs ranging from 2 to 24 kilovolt-amperes. The TIXH807 is rated at 150 amperes and 100 volts, the TIXH808 at 200 A and 100 V, and the TIXH809 at 60 A and 400 V. And, TI is working toward a switch rated at 100 A and 500 V. In its design TI has combined a power transistor drive with sufficient overload protection to enable the devices to compete in the high-power conversion market. Housed in a conduction-cooled aluminum case measuring 7 by 3.5 by 1.6 inches, each switch weighs 3.3 pounds. 
Applications include dc choppers for motor controls, dc to ac converters, and ac to dc converters that are used in a broad gamut of machine tool, industrial control, communications, hand tool, power supply, induction heating, frequency conversion and other systems. Each switch—actually a dual integrated unit—contains two identical circuits which may be connected together for a single push-pull output, or operated as two independent switches. A signal generated by diode-transistor or transistor-transistor logic can be used to control the peak power of 24 kilovolt-amperes at a frequency range between dc and 10 kilohertz. Typical turn-off time is 0.5 microsecond, or about an order of magnitude faster than an SCR's. The units also feature optically coupled isolation between input circuitry and the power system. Internal circuitry turns off each switch within about two microseconds if its load is short-circuited, and operation begins again in 2.5 microseconds. If the short circuit continues, the switch will turn off again and recycle at a frequency of about 400 hertz until the short condition is eliminated. Protection is also provided against overheating. The signal for this condition is fed into a Schmitt trigger which, with its hysteresis characteristic, ensures that the temperature recovers by a safe margin before operation resumes. The integrated power switches can absorb a transient overvoltage up to an energy level of 5 joules. Evaluation quantities are available in 12 weeks after receipt of order. Tentative prices range from $785 for single units to $450 each in 1,000-piece quantities. Texas Instruments, Inquiry Answering Service, P.O. Box 5012, M/S 308, Dallas, Texas 75222 [371] Plug-in construction makes counters flexible All-modular plug-in construction enables the user to tailor a line of presettable counters to his exact specifications. 
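Returning to the TI power switches above: their Schmitt-trigger thermal protection can be sketched as a two-threshold state machine. The trip and resume temperatures here are assumed for illustration only; the item does not publish TI's actual limits:

```python
# Sketch of hysteresis-based thermal shutdown: the switch turns off at a
# trip temperature and, because of the Schmitt trigger's hysteresis, does
# not restart until the temperature has fallen back by a safe margin.
TRIP_C = 150.0    # shut-off temperature (assumed)
RESUME_C = 120.0  # restart temperature; the gap is the hysteresis band

def step(temp_c: float, running: bool) -> bool:
    """Return the switch state after one temperature sample."""
    if running and temp_c >= TRIP_C:
        return False       # overtemperature: turn off
    if not running and temp_c <= RESUME_C:
        return True        # cooled by a safe margin: resume
    return running         # inside the band: hold state (hysteresis)

state = True
history = []
for t in (100, 155, 140, 125, 118, 130):
    state = step(t, state)
    history.append(state)
# Stays off at 140 and 125 (inside the band); resumes only at 118.
print(history)  # [True, False, False, False, True, True]
```

Without the hysteresis gap, the switch would chatter on and off right at the trip point instead of recovering cleanly.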
The input, output, and counting circuit boards all plug in, permitting combinations of 17 types of inputs, 10 types of output, a 1- to 5-digit readout, one or two presets, or totalizer only. Counting speeds ranging from 50 to 20,000 counts per second are possible. The Multi-Octave Double Balanced Imageless Mixer 1-12 GHz COVERAGE 1-12 GHz coverage in a single device has been achieved through the use of a unique double balanced* configuration which provides superior dynamic range and high output levels while maintaining low noise performance. LEADING PARTICULARS | Parameter | Value | |----------------------------|--------------------------------------------| | Frequency Coverage | 1-12 GHz | | Noise Figure | 8 db mid band (12.5 db max.) | | Image Rejection | 20 db nom. (15 db min.) | | LO:RF Isolation | >20 db across band | | RF VSWR | <1.5:1 across band | Completely self contained, the unit includes a broadband RF hybrid consisting of two tapered line directional couplers in cascade; two ultra wide double balanced mixers utilizing substrate surface balanced microstrip construction; a local oscillator and divider for phase trim; and an IF hybrid providing high percentage bandwidth at various center frequencies. Units are available with or without built in IF preamplifiers. The reciprocal properties of the device make it useable as a single sideband up-converter. Complete data on operation in this mode is available. WRITE FOR DETAILED TECHNICAL DESCRIPTION AND LISTING OF STANDARD MODELS. *U.S. Pat. #3,652,941 RHG ELECTRONICS LABORATORY • INC 161 EAST INDUSTRY COURT • DEER PARK NEW YORK 11729 • (516) 242-1100 • TWX 510-227-6083 for Reliability, Innovation and Service New products counter will accept almost any type of input, including ac or dc voltage, pulse, photoelectric, proximity, vane pickup, magnetic pickup, and shaft encoders. The numerical readout is a bright green that is legible to 20 feet. Prices start at $100. 
JMR Electronics Corp., 1424 Blondell Ave., Bronx, N.Y. 10461 [374] DIP audio indicators eliminate arcing, rf noise Called DIP-Alarm, a line of audio indicators in dual inline packages plugs into a standard 16-pin DIP socket or printed-circuit board on 0.300-by-0.700-inch centers. They contain no moving contacts, so problems of arcing, electrical interference, and rf noise are eliminated. Sound output is 80 decibels in the range of 400 hertz to provide extra audible penetration. Units range in dc voltage from 1.5 to 15 and will operate from -55 to +55 C. They weigh slightly under 8 grams. Typical applications are in automotive warning systems, metal detectors, fume detectors, depth finders, audible tuning devices, monitoring systems, alarm circuits, continuity test sets, timers, medical instrumentation, telephone sensors, and various types of home alarm systems. Projects Unlimited, 3680 Wyse Road, Dayton, Ohio 45414 [375] Temperature controllers have triac output, operate to 650°F Featuring 0.1° sensitivity over the range from room temperature to 650°F, a line of solid-state temperature controllers designated Quantem is priced at $37.50 each in quantities of 50. The controller is supplied as a compact 4.4-inch-square protected circuit board with built-in power supply and triac output, plus matched precision setpoint potentiometer, dialplate, and encapsulated sensor. The unit connects directly to any 120/240-volt, 60-hertz power source. The controller is suited for appli- METOXILITE RECTIFIERS SET INDUSTRY STANDARDS Metoxilite, the material that pushed Semtech to the forefront of the industry for sub-miniature medium power silicon rectifiers, now makes its debut in a whole new spectrum of "state-of-the-art" devices. Metoxilite (metal oxides) is fused to the metallurgically bonded junction-tungsten pin assembly forming a "tough" sub-miniature package. 
Designed to electrically approach the theoretical, the Metoxilite rectifier, introduced in 1969, is the result of years of applied research and extensive testing. You'll see them used in stringent military and space environments as well as industrial and commercial applications. JAN and SIN parts available in most types.

PRESENT STANDARDS. The Metoxilite 3-amp series is the first family introduced by Semtech. Supplied in an axial-leaded package, it filled the product gap in the industry between the lower-current axial-leaded rectifiers and the higher-current stud packages.

THE WORK HORSE. The Metoxilite 1-amp rectifier family, introduced with the 3-amp family, has since become the workhorse of the industry. Utilizing the .060" diameter die, it offers more capability than similar devices now available in the industry.

NEW METOXILITE MINISTAC

MEDIUM RECOVERY (Trr) 2 µs
- Average Inverse Voltage: 2, 3, 4 & 5 kV
- Average Rectified Current: 125 mA
- Reverse Current @ 25°C: 100 nA; @ 100°C: 7.0 µA
- Forward Voltage @ 125 mA, 25°C: 5 V
- Single Cycle Surge: 5 A; Recurrent Surge: 1.25 A
- Body Dimension: .070" D x .215" L
- Types: M20, M30, M40 & M50

FAST RECOVERY (Trr) 250 ns
- Peak Inverse Voltage: 1.5, 2.0, 2.5 & 3 kV
- Average Rectified Current: 100 mA
- Reverse Current @ 25°C: 100 nA; @ 100°C: 7.0 µA
- Forward Voltage @ 100 mA, 25°C: 5 V
- Single Cycle Surge: 5 A; Recurrent Surge: 1.25 A
- Body Dimension: .070" D x .215" L
- Types: F15, F20, F25 & F30

SUB-MINIATURE HIGH VOLTAGE METOXILITE RECTIFIERS. A sub-miniature high-voltage rectifier obtained by Semtech's unique technology: a multi-junction device, high-temperature metallurgically bonded and fused in a non-cavity Metoxilite case. This small device is designed to solve packaging problems where size and environmental criteria are critical.

NEW GENERATION 1N645. Our new ½-amp Metoxilite rectifier is small enough to replace the unreliable whisker-type devices (1N645-7).
This rectifier is now available in the Metoxilite non-cavity case with a high-temperature metallurgically bonded internal assembly.

NEW ½-AMP METOXILITE RECTIFIER

MEDIUM RECOVERY (Trr) 2 µs
- Peak Inverse Voltage: 200, 400, 600, 800 & 1000 V
- Average Rectified Current: 0.5 A
- Reverse Current @ 25°C: 100 nA; @ 100°C: 7 µA
- Forward Voltage @ 0.5 A, 25°C: 1 V
- Single Cycle Surge: 25 A; Recurrent Surge: 5 A
- Body Dimension: .070" D x .165" L
- Types: M2, M4, M8 & M9

FAST RECOVERY (Trr) 300 ns
- Peak Inverse Voltage: 1000, 1500, 2000 & 2500 V
- Average Rectified Current: 250 mA
- Forward Voltage @ 100 mA, 25°C: 4 V
- Reverse Current @ 25°C: 1 µA; @ 100°C: 20 µA
- Single Cycle Surge: 10 A; Recurrent Surge: 2.5 A
- Body Dimension: .110" D x .215" L
- Types: 1N3643-47 & SM20, SM25 & SM30

LO-DYNAMIC Z-ZENERS. Semtech's new Metoxilite low-dynamic-impedance zeners offer voltages of 6.8 to 120 volts for 1-, 3-, and 5-watt applications. This new series offers one-half the dynamic impedance, compared at the same operating current, of those presently available. As an added plus, the device is radiation resistant. The zener body measures .165" long (max.) and .110" in diameter (max.). Types SX30-120.

FOR VOLTAGE MULTIPLIERS. Introducing the Ministac in Metoxilite, a multi-chip high-voltage rectifier particularly adaptable to high-frequency applications such as voltage multipliers.

RECTIFICATION EFFICIENCY IMPROVED. The Metoxilite LO VF rectifiers open the door previously barred to the designer who required high-efficiency rectification with ultra-fast recovery times. These units are ideally suited to today's power supply design technology.

LO-VF WITH FAST RECOVERY (Trr) 100 ns
- Peak Inverse Voltage: 30 and 50 V
- Reverse Current @ 25°C: 1.0 µA; @ 100°C: 20 µA
- Forward Voltage @ 3 A, 25°C: 0.9 V
- Single Cycle Surge: 150 A; Recurrent Surge: 25 A
- Body Dimension: .165" D x .165" L
- Types: 3L03 & 3L05

RADIATION RESISTANT RECTIFIERS. Specifically designed to operate in a radiation environment. Now available in Metoxilite.
Extremely rugged, the part is ideally suited for missile and space applications.
- Peak Inverse Voltage: 100, 200, 300 & 400 V
- Average Rectified Current: 1 A
- Forward Voltage @ 1 A, 25°C: 1.2 V
- Reverse Current @ 25°C: 1 µA; @ 100°C: 25 µA
- Single Cycle Surge: 30 A; Recurrent Surge: 6 A
- Reverse Recovery (Trr): 300-1000 ns
- Body Dimension: .070" D x .165" L
- Types: R1, R2, R3 & R4

Here's an optical tachometer for tape drive control (that won't blow your manufacturing costs!)
- Mounts directly to drive motor
- Frequency output < 0.7% FM deviation
- Uses LED light sources, extremely reliable
- Square wave, TTL compatible output
- Temp. range of -20°C to +80°C
- Priced under $35.00 in quantity

Write or call for a spec sheet on Disc's new 990 Series Tachometers. DISC Instruments, Inc., 102 E. Baker St., Costa Mesa, Calif. 92626. Phone (714) 979-5300. Circle 166 on reader service card

Cut Navy Standard Hardware packaging costs 50%. This free book tells how. Now you can configure Level II packages for any assortment of Navy SHP circuit modules in any quantity using a screwdriver and our off-the-shelf components. This Level II erector-set concept means savings in design, tooling, and production that add up to rugged, thermally efficient SHP packaging for half what it would cost you to do it yourself. We've written a 17-page paper that gives all the details, including information and drawings that let you design your own packaging from our six standard parts. Write for it today if you're involved in Navy SHP or ever expect to be. Don't bid Navy SHP without it. IERC, a subsidiary of Dynamics Corporation of America, 135 W. Magnolia Blvd., Burbank, Calif. 91502. IERC Heat Sinks

Controller design eliminates rfi problem

A time-proportioning temperature controller, the model 5CX-62, has a complete solid-state control circuit. Zero-voltage-crossing firing of the triac virtually eliminates the rfi noise commonly associated with phase-modulated controllers.
The circuit design will not permit half-cycling, which assures freedom from possible transformer-saturation problems. An isolated-case output is provided to facilitate heat-sink mounting without need for electrical insulation. Quick-connect spade-lug terminals are provided for easy installation. Thermistor sensors are used in a Wheatstone-bridge arrangement for good control sensitivity. Standard TX series probes provide temperature control from…

WESTON: the newest family name in frequency counters.

Meet the newest name in frequency counters, from the country's oldest manufacturer of test equipment. For the engineer, service technician, or serious hobbyist, there's the new Model 1252, an autoranging crystal-clock counter with a 5 Hz to 30 MHz range, 6-digit display, and all-solid-state circuitry. It features four autoranging gates plus two preset gates, automatic blanking of leading zeroes, and an unbelievably low price. Those wanting more capacity in a counter can find it in Model 1253, a 1 Hz to 200 MHz instrument with separate 1-megohm and 50-ohm inputs, 7-digit LED display with overrange indicator, 1 MHz time base, and external clock input. Great for precision work. Comparably low priced. Scientists, technicians, and experimenters requiring the utmost from a counter can find the answer in either Model 1254 or Model 1255. Both have high-stability TCXO time bases, 1 Hz to 200 MHz range, BCD output, push-button reset (first display is always correct), and remote programming capability. Model 1255 also has a prescaled 600 MHz capability. Oh, yes, the Westons have a couple of cousins, too. There's Model 1251, a programmable 20 MHz timer with 100-nsec resolution. It provides time-interval, period, time-average, event, and ratio measurements on a 5-digit LED display. And there's Model 1259, a 600 MHz scaler which will automatically extend the range of any counter to 6 MHz–600 MHz. You can recognize the strong family resemblance in all these new members of the Weston test equipment team.
They're all lightweight, compact, and easy to use. Each has a lockable, multi-position handle which serves as a tilt-stand. See them all at your nearby Weston distributor today. Circle 217 on reader service card.

Try any one for 10 days free. (Your WESTON distributor is betting you'll never want to give it up.) Weston, the leader in portable test equipment, is willing to bet that once you try one of our digital DMM's, you'll never want to give it up. So, during the next few months you can try any one of these compact, portable, almost indestructible multimeters for 10 days, FREE! Just pick the one that suits your needs best.

**Model 4440** Lowest cost, plus high performance. Full 3½ digits. 17 ranges, 20 mV to 1000 V, 200 ohms to 2 megohms, plus AC/DC current. Blinking overrange indicator. Self-contained, rechargeable battery gives over 8 hours continuous operation. Or use the AC line converter. ($285 list)

**Model 4442** 20 ranges, plus amazing accuracy: ±.05% of reading ± one digit, guaranteed! All-solid-state circuitry built around an MOS/LSI chip. Automatic blanking of unused digits conserves power. Battery or AC line converter. ($325 list)

**Model 4443** Lowest-priced, most precise DC-only DMM available. 13 ranges, 200 mV to 1000 V (100 µV res.), 200 ohms to 20 megohms (0.1-ohm res.), plus current. ±.05% of reading ± one digit, guaranteed! Battery operated. Complete with charger. ($285 list)

**Model 4444** The autoranging maximinimultimeter. Choose VAC, VDC, ohms, or DC current; the 4444 does the rest. Instant automatic range selection, plus a full four-digit display including polarity. Ultimate accuracy in a portable: ±.02% of reading ± one digit. No other portable offers more, does more. ($575 list)

So try one today. Send in the attached postal reply card for more details. Your nearby Weston distributor will contact you. Weston Instruments, Inc., 614 Frelinghuysen Avenue, Newark, N.J. 07114. Circle 218 on reader service card. We're either first or best. Or both.
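A "±.05% of reading ± one digit" accuracy clause like the ones quoted above combines a proportional term with a fixed one-count term. As a hedged illustration (the 2 V range with a 1 mV least-significant digit is assumed here, not taken from the Weston manuals):

```python
def dmm_worst_case_error(reading_v, pct_of_reading=0.05, digit_v=0.001):
    """Worst-case error for a '± pct of reading ± one digit' meter spec.

    The percentage term scales with the reading; the digit term is the
    value of one count of the least-significant displayed digit
    (assumed: 1 mV on a 2 V range of a 3 1/2-digit meter).
    """
    return reading_v * pct_of_reading / 100.0 + digit_v

# A 1.500 V reading could be off by 0.75 mV (percentage term)
# plus 1 mV (one digit): 1.75 mV worst case.
print(dmm_worst_case_error(1.5))  # 0.00175
```

Note that near the bottom of a range the fixed one-digit term dominates, which is why accuracy specs of this form read much worse, in relative terms, for small readings.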
WESTON (fold card at center, seal or staple open ends)

Send me detailed data on these new Weston products:
☐ #217. Weston Frequency Counters: All ☐ Model 1252 ☐ Model 1253 ☐ Model 1254 ☐ Model 1255 ☐ Model 1251 Timer ☐ Model 1259 Scaler
☐ #218. Send me detailed information on Weston's 10-day free trial offer on DMM's.
☐ Or, have my nearby Weston distributor contact me. Phone____ Name____ Title____ Company____ Address____ City____ State____ ZIP____

-65° to +425°C in four ranges. Price is $29.20. Oven Industries Inc., P.O. Box 229, Mechanicsburg, Pa. 17055 [377]

Infrared analyzer measures gas-mixture constituents

By sensing the attenuated infrared energy, the NDIR gas analyzer measures the concentration of the gas constituents in a gas mixture. The unit was designed for various gases including CO, CO₂, HC, NO, and SO₂. Optional features include linear outputs, internal optical calibration (which eliminates the need for span gas), and corrosion-resistant fittings for process streams. The analyzer is capable of measuring up to three gases simultaneously and is suited for applications in the petrochemical, food, medical, and metals industries. Infrared Industries Inc., P.O. Box 989, Santa Barbara, Calif. 93102 [379]

Centering computer added to precision gaging systems

An electronic centering computer that will automatically center the chart recording of parts being measured on roundness machines has been added to the Gould line of precision gaging systems. CompuCenter I operates as a component of the company's spindle-drive Surfanalyzer 136 and 360 roundness-gaging systems. Used in combination with a simple locating fixture, the CompuCenter makes repetitive production-line measurements almost automatic.
When connected to a roundness system, CompuCenter I accepts electronic signals from the

THE MINIATURE PC ROTARY SWITCH. Very big in communications circuits. The screwdriver-operated PC-mount rotary is 0.6 inch in length and half that in diameter. (A shaft-actuated, bushing-mounted version is also available.) Both provide a 36° angle of throw with one- or two-pole circuitry. Rated to make or break 200 milliamps at 115 VAC resistive load for 5,000 cycles (or 50 milliamps at 25,000 cycles). For more information on all Grayhill products, write today for our newest Engineering Catalog. Grayhill, Inc., 523 Hillgrove Avenue, La Grange, Illinois 60525. (312) 354-1040. Circle 167 on reader service card

WISH TO IMPORT EXCLUSIVELY TO JAPAN ELECTRICAL MATERIALS, EQUIPMENT, PARTS, CHEMICALS FOR ELECTRICAL USE, ETC., WHICH ARE NEWLY DEVELOPED OR PATENTED IN USA. WE ARE WHOLESALERS AND IMPORTERS OF ALL KINDS OF ELECTRICAL MATERIALS, ETC. IN JAPAN, HAVING BRANCHES ALL OVER JAPAN: TOKYO, NAGOYA, FUKUOKA, SHIKOKU, KASHIMA, HIROSHIMA, ETC. OUR MONTHLY TURNOVER IS ABOUT US$2,400,000.00. OUR BANKERS: SUMITOMO, FUJI, KYOWA, DAI-ICHI KANGIN, KOBE. PLEASE CONTACT US DIRECTLY. Z. KURODA & CO., LTD., 56-2, 5-CHOME KIGAWA HIGASHINOCHO, HIGASHI YODOGAWAKU, OSAKA, JAPAN. TELEX: 523-8426. CABLE: KURODEN OSAKA

AO StereoStar/Zoom High Resolution Microscope combines unsurpassed optics, magnification, and convenience. American Optical makes a StereoStar/Zoom microscope that combines high resolution, an extra-large field of view, a wide total magnification range (3.5 through 210X), and the most convenient zoom control. What's more? A rotatable, interchangeable power body. Crisp, sharp images throughout the magnification range. A free brochure is available with full details. Write for it, or contact your AO Sales Representative. ©TM Reg., American Optical Corp. AMERICAN OPTICAL CORPORATION, SCIENTIFIC INSTRUMENT DIVISION, BUFFALO, N.Y.
14215. Circle 168 on reader service card

YOU'RE WHISTLING IN THE DARK... if you think that heart disease and stroke hit only the other fellow's family. GIVE... so more will live. HEART FUND. Contributed by the Publisher.

recorder for readings up to two times full scale. CompuCenter I is set up to work at 4 revolutions per minute. The full centering process takes 30 seconds. A green "ready" light indicates to the operator that the computation is complete, and the recording pen can then be lowered onto the chart. Marketing Manager, Measurement Systems Division, Gould Inc., 4601 Arden Dr., El Monte, Calif. 91731 [378]

Digital countdown timer made for industrial jobs

The Model I-ms digital countdown timer displays both minutes and seconds in large luminous numbers for close control. Push-button controls provide virtually uninterrupted operation and include "reset," "preset," and "power." A digital readout indicates the time required to complete. An audio alarm is also available. The standard unit operates on 115 volts, 60 cycles, with other voltages and time ranges available. The price is $75 in small quantities. Nucon Co. Inc., 2557 Charleston Rd., Mountain View, Calif. 94040 [380]

Oh, you Darlington! 10 Beautiful AMPS of Unequalled Performance

Some of the best things come in small packages, such as Solitron's 10-amp single-diffused Darlingtons. These NPN silicon power transistors are designed into a compact TO-66 package, but offer 30 watts dissipation and BVCEO ranges of 50, 80, and 100 volts. They have excellent second-breakdown capability and low leakage characteristics. Typical ICEO at an elevated temperature of 100°C is less than 1.0 mA, with typical gains of 5000 @ 5.0 A and VCE(sat) @ 5.0 A typically 1.2 volts. Identified as the SDM 2501-6 Series, these Darlingtons are ideal for power switching, inverters, converters, and audio-amplifier applications.
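A typical gain of 5000, as quoted for the Darlingtons above, is what you get by compounding two ordinary transistors' current gains. As a hedged back-of-the-envelope sketch (the individual betas here are assumed for illustration, not Solitron data):

```python
def darlington_gain(beta1, beta2):
    """Composite current gain of a Darlington pair.

    The first transistor's emitter current drives the second's base,
    so the pair's gain is beta1*beta2 + beta1 + beta2 (roughly the
    product of the two betas).
    """
    return beta1 * beta2 + beta1 + beta2

# Two modest power transistors with beta ~ 70 at high current already
# compound to the order of the 5000 typical gain quoted above.
print(darlington_gain(70, 70))  # 5040
```

The trade-off is the higher saturation voltage (the 1.2 V VCE(sat) above), since the output transistor cannot saturate below the driver's VBE.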
For complete information, prices, and engineering application assistance, dial toll-free 1-800-327-3243. Or write: Solitron Devices, Inc., 1177 Blue Heron Blvd., Riviera Beach, Florida 33404 / (305) 848-4311 / TWX: (510) 952-7610.

SAN DIEGO, CALIF., 8808 Balboa Ave.: CMOS, PMOS circuits; diodes, rectifiers & zeners; FET & dual-FET transistors; high-voltage assemblies; linear & monolithic IC's. RIVIERA BEACH, FLA., 1177 Blue Heron Blvd.: hi-rel power transistors; Si & GaAs power transistors; hi-rel power hybrids; PNP-NPN industrial transistors. PORT SALERNO, FLA., Cove Road: microwave connectors; precision RF cable; precision RF coaxial connectors. JUPITER, FLA., 1440 W. Indiantown Rd.: microwave stripline; feedthrough devices; microwave diodes; RF and microwave transistors; hi-rel power transistors; hybrid circuits. TAPPAN, N.Y., 256 Oak Tree Road: diodes & rectifiers; feedthrough devices; high-voltage assemblies; power rectifiers; thick film. KENT, ENGLAND, 100 London Road, Sevenoaks: Solivder, Ltd., full line of Solitron devices. BEN BARAQ, ISRAEL, Tubbs Hill House: Alfa Israel, Ltd., full line of Solitron devices. TOKYO 105, JAPAN, No. 4, 2-chome Shinbashi, Minato-ku: full line of Solitron devices.

Electronics/May 24, 1973. Circle 169 on reader service card.

Need a reliable way to meet budget? Specify HYBRID VOLTAGE REGULATORS by MICROPAC
- 4 to 10 amp output current
- 4 to 36 V fixed voltage range
- Positive & negative voltages
- 120 watts power dissipation
- Internal short-circuit protection
- External components not required
- Standard TO-3 package
- Available from stock
- Economically priced

Other standard products available or in design:
- PIN diode switch driver
- Active filter
- High current / high voltage driver

Custom products also available. Contact MICROPAC INDUSTRIES, INC., 905 E. Walnut St., Garland, Texas 75040. Tel. 214-272-3571. TWX 910-860-5186.

New products/materials

A silver/urethane conformal coating consists of a pure silver filler in a polyurethane resin.
The one-component conductive air-drying system adheres and conforms to various substrate materials and cures at room temperature. When dry, the material is reflective to microwave energy. Tecknit Wire Products Inc., 129 Dermody St., Cranford, N.J. 07016 [476]

A silver metalization process for mica electronic applications comprises two products: mica silver type 1 for screening and mica silver type 2 for brushing applications. The preparations are applied to mica to form electrodes for capacitors and for conductive lines. Price per ounce is $10. Transene Co. Inc., Rte. 1, Rowley, Mass. 01969 [477]

A one-component silver epoxy with a work life of one week is designated Ablebond 36-2. The conductive material retains 0.0001 ohm-cm resistivity to 400°C and has a 1,000-psi lap-shear strength after 250 hours at 200°C. It can be dispensed in 4-mil dots or screened. Price for the material in a 1-cc syringe is $3.30. A one-ounce jar costs $13.75. Ablestick Laboratories, 833 W. 182nd St., Gardena, Calif. [478]

Eccomold 195R is an epoxy molding powder with a flow soft enough to encapsulate glass-enclosed reed switches. Although higher molding temperatures may be used, the material can be molded at 250°F. Molding pressures are from 100 to 1,000 psi. Price is $1.75 per pound in 2,500-pound lots. Emerson & Cuming Inc., Canton, Mass. [479]

The all-new ACTION RACK provides a top-quality package with an exciting range of features and applications.
- Multibay Systems: The ACTION RACK is at its best arranged in series, wedges, and wrap-arounds. Future system expansion is a simple and easy operation.
- Turret Display Modules: Isolate and focus attention as required in display, monitor, and CRT applications.
- Accessories: A decade of experience has gone into the development of these useful accessories: panels and handles, recessed mounting rails, trim inserts, doors, leveling feet, casters, center-plate stabilizers, power outlets, chassis slides and support angles, storage shelf, blowers, vent grilles and louvers, drawers, and a variety of writing surfaces.

Engineered to provide full use of the expanded OPTIMA line of accessories and styling features at a money-saving cost. Available in 7 panel heights (22", 28", 35", 42", 52", 61", and 70") for 19" panel width and 24" panel-to-panel depth. Depth is adjusted with recessed mounting rails. ACTION RACK is delivered completely assembled and finished in two colors from a great selection of 16 vinyl colors. Write or call OPTIMA ENCLOSURES, Division of Scientific-Atlanta, Inc., 2166 Mountain Industrial Blvd., Tucker, Ga. 30084. (404) 939-6340. Circle 243 on reader service card

Buying your carbon comps from distributors? Then here are five good reasons for buying IRC.

**Delivery.** TRW/IRC carbon composition resistors are available. A full line of RCR S-level ¼-, ½-, and 1-W units from 2.7 ohms to 22 megohms is in stock at your local TRW distributor. If you need his name and address, contact us.

**Quality.** IRC carbon comp performance has been proven in use by the billions in consumer, industrial, and military applications. Ours were the first resistors to receive hi-rel qualification and are backed by nearly a billion test hours, with no failures.

**Price.** TRW distributor stock, with the same competitive price for both industrial and mil-spec applications, is standardized on S-level RCR types (the best failure-rate level available in the industry).

**Packaging.** Standard distributor packaging is card pack. Cost-saving lead tape, body tape, or cut-and-formed leads can be supplied.

**Complete resistor choice.** TRW offers a total resistor capability: carbon composition, thin film, Metal Glaze™, wirewound, and other types. Broadest line in the business.
All from one source: your local TRW distributor. He'll be glad to work with you on all your resistor requirements. Contact Distributor Operation, TRW/IRC Resistors, an Electronic Components Division of TRW Inc., 401 N. Broad Street, Philadelphia, Pa. 19108. Phone (215) 923-8230. **TRW IRC RESISTORS**

TAUT BAND, 0.5% AND... Low Cost*

There are many other words that describe our complete line of portable standard instruments, too, like rugged, versatile, and available. Yokogawa Electric has over 55 years of experience in the manufacture of precision instruments and is now introducing this complete line of taut-band portable standards to the U.S. These instruments are renowned, worldwide, for their quality and precision. Write for our complete technical data and prices today. We're confident that we have a portable AC or DC standard voltmeter, ammeter, or wattmeter that will more than meet your requirements, at prices that will surprise you.

*Low cost like... AC Ammeter, 2/5/10/20 A, $99.00. DC 17-range Volt/Ammeter, 3 to 1,000 V, 1 mA to 30 A, $195.00. AC/DC Wattmeter, 5/25 A, 120/240 V, $250.00.

Yokogawa Electric. In the USA, contact: Yewtec Corporation, 1995 Palmer Ave., Larchmont, N.Y. 10538. 914-834-3550. Circle 172 on reader service card

World's smallest power supply for op amps... $14

This 2.3 x 1.8 x 1-inch module has tracking outputs of ±15 V @ 25 mA with regulation of ±0.1% and ripple of 1 mV. It costs $14.00 in 1,000 lots and only $24.00 for one. Requisition Model D15-03. (For ±12 V @ 25 mA, order Model D12-03.) Three-day shipment guaranteed. Acopian Corp., Easton, Pa. 18042. Telephone: (215) 258-5441. Circle 194 on reader service card

New literature

Crystal oscillators. A 14-page brochure from Vectron Laboratories Inc., 121 Water St., Norwalk, Conn., covers crystal and clock oscillators ranging from 1 hertz through 200 megahertz. Circle 421 on reader service card.

Relay sockets. Viking Industries Inc., 21001 Nordhoff St., Chatsworth, Calif.
91311, has issued a catalog covering the company's relay socket connectors and related hardware. [422]

Switching. A two-page catalog from Fifth Dimension Inc., Box 483, Princeton, N.J., describes the theory, application, and specifications of a family of mercury-wetted switching devices that operate in any mounting position. Both dc and rf products are covered. [423]

Power supply. Elcom Industries Inc., Civilian Terminal, Hanscom Field, Bedford, Mass. 01730. A line of modular power supplies is described in a data sheet providing information on features, specifications, and ordering. [424]

Microwave. Sivers Lab, Box 42018, S-126 12 Stockholm 42, Sweden, is offering a 104-page catalog providing information on microwave instruments, YIG devices, ferrites, rotary joints, electromechanical

GOING TO EXTREMES TO INSURE QUALITY

If you could take an express trip from the North Pole to the equator, you'd experience roughly the same effect as our special environmental testing has on the fingertip-size Toko Pulse Transformer. More than a thousand hours of exhaustive testing in temperatures ranging from -55°C to +85°C and humidity increasing from zero to 98% allow no rest for these electronic machine parts. In addition, anti-vibration and anti-shock tests are repeatedly performed to satisfy uncompromising Toko standards. Only after repeated and severe testing can we determine if the quality of each piece of circuitry is acceptable. From the radio and TV components that brighten our everyday lives to the electronic miracles that will enable man's exploration of the universe, even a single flaw in the smallest electronic part could spell failure... even disaster. No wonder, then, that we strive to maintain such strict standards for all Toko products.
Achievements like the development of the Pulse Transformer, Delay Lines, Wire Memories, the Super-Miniature Coil, and other innovative Toko products require this conscientious attitude, combined with a pioneering spirit ready to meet every demand placed on the Toko brand wherever it appears in the world. EXCELLENT SET WITH TOKO PARTS. TOKO, INC. For further information, just call or write: Head Office: 1-17, 2-chome, Higashi-Yukigaya, Ohta-ku, Tokyo, Japan. Toko New York Inc.: 350 Fifth Avenue, New York, N.Y. 10001, U.S.A. Tel: 565-3767. Toko, Inc. Los Angeles Liaison Office: 1830 West Olympic Blvd., Los Angeles, Calif. 90006, U.S.A. Tel: 380-0417. Toko Elektronik GmbH: 4 Düsseldorf, Lakronstrasse 77, W. Germany. Tel: 0211-284211. Toko (U.K.) Limited: 59-67 Gresham Street, London

CAMBION Double "QQ" Service

Have a "wrap" session with Cambion. When Cambion decided to develop a custom wire-wrapping service, we started with the idea of complete customer satisfaction. This meant analyzing every step from customer preparation to shipment of completed boards. In doing this, it became obvious that errors must be eliminated if we were to achieve customer satisfaction. While we can't (and shouldn't) fix your logic, we can (and do) eliminate any potential wiring errors through a simple yet highly effective "cross-indexing" computer check. In fact, we use our computers to produce four different lists to ensure proper wiring according to your specifications. The computer also checks for seven different types of wiring errors. Only after the customer has been notified of any, and they are corrected, is the computer allowed to produce a set of wiring instructions from which the boards will actually be produced. Cambion's wiring service is fully guaranteed. You can order your design in any quantity, no matter how large. The Quality stands up as the Quantity goes on. That's the Cambion Double "QQ" approach to customer satisfaction.
For complete details on this newest of Cambion services, write to Wm. G. Nowlin, Gen. Mktg. Mgr., Cambridge Thermionic Corporation, 445 Concord Avenue, Cambridge, Mass. 02138. Phone: 617-491-5400. In Los Angeles, 8703 La Tijera Blvd. Phone: 213-776-0472. Standardize on CAMBION® The Guaranteed Digital Products!

switches, and transmission-line components. Included are calculation charts for high-power designs. [425]

Calculator. A four-page brochure provides information on the model 1175 electronic printing calculator available from Facit-Addo Inc., 501 Winsor Dr., Secaucus, N.J. 07094. A chart is included, as well as a diagram of the keyboard and information on operating procedures. [426]

Switches. Seacor Inc., 598 Broadway, Norwood, N.J. 07648. An eight-page catalog describes the series MX push-button-switch system. [427]

Core memories. Standard Memories Inc., 2801 E. Oakland Park Blvd., Fort Lauderdale, Fla. 33306. A six-page brochure describes the company's line of add-on compatible core memories, which expand IBM 360 computers to up to four times capacity. [428]

Temperature controller. A high-low limit protection device designed to limit process temperature is described in a brochure available from Eagle Signal Division, Gulf + Western Industries Inc., 736 Federal St., Davenport, Iowa 52803. [429]

Packaging hardware. A short-form catalog describes the product line of the Scanbe Manufacturing Corp., 3445 Fletcher Ave., El Monte, Calif. 91731. The products include socket cards, files, sockets and strips, and wiring. [430]

Don't overdraw. Use these Kodak shortcuts. The snappy restoration shortcut: Why waste time retracing your old, battered drawings? Restore them by making sharp, clean photographic reproductions on Kodagraph film. Weak lines come back strong and clear. Stains virtually disappear. And instead of gray lines on yellow, you'll have snappy, contrasty, black-on-white prints. The drop-of-water shortcut:
Why retrace the whole design for a few revisions? Just order a second original on Kodagraph wash-off film. Then use a drop of water and erase unwanted details. Draw your design revisions on the film and you're done. The multiplication shortcut: Why draw the same detail over and over? Kodagraph film will do the job for you. That way you draw the detail just once. Make as many photoreproductions as you need. Cut them out, paste them down, and make a Kodagraph film print of the paste-up. Now you have a superb second original for subsequent printmaking. Get the facts from Kodak. Drop us a line for more facts on how you can reduce drafting time and save money, too, with Kodagraph films and papers. Eastman Kodak Company, Business Systems Markets Division, Dept. DP832, Rochester, N.Y. 14650. Kodak products for engineering data systems.

Disk drives. Diva Inc., 607 Industrial Way West, Eatontown, N.J. 07724. Specifications and descriptions are given in a brochure covering seven magnetic-disk-drive systems. [431]

Connectors. A 28-page catalog from Solitron/Microwave, Connector Division, Cove Rd., Port Salerno, Fla. 33492, contains photographs and dimensional drawings of several types of TNC connectors. [432]

Disk drives. International Memory Systems, 14609 Scottsdale Rd., Scottsdale, Ariz. 85260. A 24-page manual covers the line of moving-head disk drives and controllers and provides schematics and specifications. [433]

Product catalog. A comprehensive catalog from BLH Electronics Inc., 42 Fourth Ave., Waltham, Mass. 02154, describes wire and foil strain gages; pressure transducers; strain-gage, process-control, and calibration instrumentation; and proprietary systems. [434]

Image recorder. Dicomed Corp., 7600 Parklawn Ave., Minneapolis, Minn. 55435. A six-page brochure describes the company's line of digital-image color-film recorders. The brochure details how computer output is converted to high-resolution imagery and recorded on color film. [435]

Capacitors.
Sprague Electric Co., 35 Marshall St., N. Adams, Mass. 01247, has released an eight-page engineering bulletin to simplify ordering MIL-style CE70 and CE71 electrolytic capacitors to the D revision of MIL-C-62/12. [436]

Fiber optics. American Optical Corp., Fiber Optics Division, Southbridge, Mass. 01550. A fiber-optics catalog covers the company's standard line of products as well as custom-design capabilities. Featured in the catalog are flexible Fiberscopes used to view inaccessible areas, light guides, image conduit, clad rod, faceplates, and components. [439]

From stock: write for catalogs covering the largest line of stock sizes and shapes, including special instrument and Mil-spec knobs. Custom: send prints for a fast quote; the "Family Mold" process assures lower cost. Electronic Hardware Corp., a Division of Hi-Tech Industries, Inc., 180-08 Liberty Ave., Jamaica, N.Y. 11433. Telephone: 212-291-2710.

The Dade County Public Safety Department is looking for a qualified individual to fill the position of Electronic-Electrical Systems Supervisor. Interested applicants should have a degree from an accredited college or university in Electrical Engineering and a minimum of 3 years supervisory or managerial experience in the field of installation, maintenance, and repair of UHF-VHF and microwave systems. Complete resumes should be submitted in writing to: Dade County Public Safety Department, Attn: Personnel Section, 1320 N.W. 14th Street, Miami, Florida 33125.

Lead Technician: Immediate opening as Shop Foreman/Lead Technician in Michigan's largest avionics facility. Benefits and working conditions best in industry. Top salary, plus percentage of net profit. Experienced and current aircraft electronic technicians send full resume to: Northern Air Service, Dept. P, Kent County Airport, Grand Rapids, Mich. 49501.

CONTRACT WORK WANTED: Winding through rectangular unbroken, uncut cores. Coils snug-fit cores, fill window, only usual clearances. Gerelo, Box 642, Cooper Sta., New York, N.Y. 10003.
RESUME KIT Free Engineer's Resume Kit for Electrical Mechanical and Industrial Engineers. Scientific Placement, Inc., Employment Service 5051 Westheimer, Houston, TX 77027. Job-seekers... be the first to know with McGraw-Hill’s Advance Job Listings By having our new weekly ADVANCE JOB LISTINGS sent First-Class (or by Air Mail, if you prefer) to your home every Monday you can be the first to know about nation-wide openings you qualify for both in and out of your field. This preprint of scheduled employment ads will enable you to contact anxious domestic and overseas recruitment managers BEFORE their advertisements appear in upcoming issues of 22 McGraw-Hill publications. To receive a free sample copy, plus information about our low subscription rates (from one month to 12), fill out and return the coupon below. ADVANCE JOB LISTINGS / P.O. BOX 900 / NEW YORK NY 10020 PLEASE SEND A SAMPLE COPY OF ADVANCE JOB LISTINGS TO: NAME ADDRESS CITY STATE ZIP E5/24/73 Mike Truitt, in cap and gown, the day he graduated as a dental technician. While he was in the Navy, Mike Truitt went to class. Had experienced teachers. And, in very little time, became a dental technician. But that's not all he learned in the Navy. He also learned about people. About life. About himself. And while he learned, he got paid for it. If your son joins today's Navy, there are many different career opportunities he may be able to choose from. The Navy recruiter in your neighborhood can tell him all about it. Or, send this coupon for a full-color information brochure. In the last 20 years, the Navy has graduated over a million young men. So when you suggest us, you know you're giving some good advice. Please send complete information about the U.S. Navy. COMMANDER, NAVY RECRUITING COMMAND U.S. NAVY BLDG. 157 WASHINGTON NAVY YARD WASHINGTON, D.C. 
20390 Call toll-free: (800) 841-8000 Your Name Your Son's Name Age Address Phone # ( ) City State Zip The Navy Electronics/May 24, 1973 ## Electronics advertisers | Company | Agency | Page | |---------|--------|------| | Acopian Corp. | Mori Barish Associates, Inc. | 172 | | Alfa Laval America | Welch Mirabile & Co., Inc. | 154 | | Allegheny Ludlum Steel Corporation | Van Tassell, Haines and Company, Inc. | 61 | | Allen-Bradley Company | Hoffman, Young, Baker & Johnson, Inc. | 28 | | American Optical Corporation, Scientific Instruments Div. | Fuller & Smith & Ross, Inc. | 168 | | American Power Systems Corp. | Ehrlichman Nusbaum/Richard/Advertising | 21 | | AMP Incorporated | Aitkin-Kynett Co., Inc. | 172-170 | | Analog Devices, Inc. | S. Berman & Partner Guy, Inc. | 79 | | Anritsu Electric Co., Ltd. | Diamond Agency Co., Ltd. | 22E | | Atlantic Richfield Company (ARCO) | Newham, Hackett & Steers, Inc. | 201 | | Bartlett Tree Experts | Bruce Angus Advertising, Inc. | 194 | | Bayer AG | Werbeagentur | 62-63 | | The Boeing Aerospace Company | Cope & Weber, Inc. | 84-87 | | Bower/ALLI | Rosenfeld, Sirowitz & Lawson, Inc. | 187 | | Brand-Rex | Schreiber, Trowbridge, Case & Basford, Inc. | 149 | | Burndy | International Publicien | 11E | | Burroughs Corp. | Cook Advertising Agency, Inc. | 141 | | Cambridge Thermionic Corporation | Ching & Cairns, Inc. | 174 | | Carlo Erba | Studio Dema | 13E | | Celanese Corporation | D'Arcy-MacManus International, Inc. | 133 | | CIT Alcatel | Promotion Industrielle | 12E | | Conrad Electronics Corp. | Scoville & Associates | 158 | | A. T. Cross Company | Potter Hazelhurst, Inc. | 190 | | Dakin Industries, Inc. | A Sub. of The Itonel Corporation | 4th cover | | Data General Corporation | Swanson, Sinkley, Ellis, Inc. Advertising | 46-47 | | Data General Corporation | The Advertising Group | 2BE | | * Data I/O | Delco Electronics Division, General Motors Corp. | 48-49 | | Disc Instruments, Inc. 
| Campbell-Ewald Company | 166 | | Eastman Chemical Products, Inc., Industrial Chemicals | Wavner, Goldstein Company | 6 | | Eastman Kodak Company, Business Systems Markets Div. | J. Walter Thompson Co. | 175 | | Eaton Corp./ATA Foundation | Meldrum and Fewsmith, Inc. | 196 | | Electronic Associates, Inc. | General Clarke, Inc. | 82 | | Electronic Hardware Co. | Div. of Hi-Tech Inc. | 177 | | Electronic Industries Association | Patterson & Farrell, Inc. | 14 | | Electronic Navigation Industries | Hart Conway Co., Inc. | 14 | | * Electro Science Industries | Wavner, Goldstein Communications Group | 25E | | Energy Conversion Devices, Inc. | Williams Rogers, Inc. | 146 | | Erie Technical Products Co., Inc. | Altman Hall Associates Advertising | 25 | | * Excellon Industries | 21E | | Fairchild Semiconductor, Inc. | Carson/Roberts, Inc. Adv. | 43-45 | | | Division of Optima Technology, Inc. | 14 | | Fairchild Systems Technology | Hall Butler Blatherwick, Inc. | 112-113 | | General Automation | D'Arcy-MacManus Advertising | 12-13 | | General Electric Co. | Semiconductor Products Department | 32 | | | Adcock & Sales Promotion | Syracuse NY | 32 | | General Electric Company, Quality & Process Auto. Prod. Dept. | Moore Design, Inc. | 10-11 | | General Electric | Result/Name & Marketing NV | 4E | | General Instrument Europe S. P. A. | Staub, Hedy | 18E | | Georgia Department of Industry & Trade | Wilson & Acree, Inc. Adv. | 140 | | The Gerber Scientific Instrument Co. | Charles Palm & Co., Inc. | 147 | | Grayhill, Incorporated | D'Arcy-MacManus Advertising, Inc. | 167 | | Grinnell Fire Protection Systems | Hutchins/Darcy, Inc. | 198 | | * Guardian Electric Mfg. Co. | Koen-Thompson & Associates, Inc. | 62-63 | | Hansen Mfg. Company | Keller-Crescent Co. | 158 | | Harshaw Chemical Company | Industry Advertising Agency | 137 | | Hewlett-Packard | Richardson, Seigle, Rolls & McCoy, Inc | 162-163 | | Hewlett-Packard | Talman Yates Adv., Inc. 
| 68-69 | | Hewlett-Packard | Thomas, Seigle, Rolls & McCoy, Inc. | 127 | | Hewlett-Packard | McCarthy, Scelba, DeBiasi Adv. | 1 | | Hewlett-Packard | Phillips Ramsey Advertising & Public Relations | 2 | | Hickok Electrical Instrument Company | Key Marketing Associates | 16 | | Hughes Aircraft Company | Foote, Cone & Belding | 142-143 | | * Industrie Oltosi S.A.S. | C E T I | 20E | | Interdata | S. Berman Elliott, Inc. | 128 | | International Electronic Research Corporation | Van Der Boom, McCarron, Inc. Advertising | 166 | | Intronica | Impact Advertising Incorporated | 6 | | Johnson Company, E. F., | Martin Whalen Advertising | 26-27 | | | 26-27 | | Krohn-Hite Corporation | Impact Advertising, Inc. | 5 | | * Lambda Electronics Corporation | Michael Cairns, Inc. | 3rd Cover | | Lear Siegler | Citron Instruments | 135 | | Linear Digital Systems Corporation | Manning/Bowen and Associates | 182 | | Lithronix, Inc. | Techno Publications Corporation | 7 | | Lockheed Electronics Company | Regis McKenna, Inc. | 39 | | MEPCO/ELECTRA, INC. | McLennan Erickson, Inc. | 27E | | LTT | Publible | 202 | | Lyman Products, Inc. | Reineke, Meyer & Finn, Inc. | 64-65 | | MEPCO/ELECTRA, INC. | Welborn Advertising, Inc. | 170 | | Micropac Industries, Inc. | 2nd cover | | Microwave Semiconductor Corp. | N. W. Ayer & Son, Inc. | 144 | | 3M Electro Products Division | Batten, Barton, Durstine & Osborn, Inc. | 159 | | 3M Company/Film & Allied Products Div. | Yaw, R. H., Inc. Advertising | 77 | | 3M Company—Mincom Division | D'Arcy-MacManus & Masius, Inc. | 183 | | Monogram | The Ad Agency | 152 | | MOS Technology, Inc. | Reineke, Goebel/Martin Advertising, Inc. | 15 | | Mostek Corporation | Kaufmann Advertising, Inc. | 138-139 | | National Semiconductor Corp. | N. W. Ayer & Son Advertising | 23 | | Nippon Kogaku K.K. | K. L. Advertising Agency | 21E | | Norton | Technik Marketing | 19E | | OscilloQuartz SA, Neuchatel | M.R. W. W. 
Advertising BSR/EAAA Bern | 72 | | Panasonic Industrial Div. | Dentsu Corporation of America | 58 | | * Philips NV, PN/T & M Division | Koen-Thompson & Associates, Inc. | 145 | | Phoenix Data | Craig Miller Advertising | 195,197,199 | | Pitney Bowes | D'Arcy-Turner, Inc. | 157 | | Plastics Engineering Company | Kuttner, Kuttner, Inc. | 148 | | Pleiss Arco | A & E Brodina | 153 | | Princeton Electronic Products, Inc. | Wavner, Goldstein Associates, Inc. | 176 | | PROCON S.p.A. | QUADRAGONO | 176 | | Radiometer Copenhagen | 7E | | RCA Ltd. | Marsteller, Ltd. | 2E-3E | | RCA Electronic Components | A. J. Smith-Lutton Company, Inc. | 17E | | RCA (Mobile Communications Systems) | J. Walter Thompson Company | 191 | | Renishaw Precision | Communications Unlimited, Inc. | 24 | | Relicron Corporation | Koen-Thompson & Associates, Inc. | 81,83 | | RHG Electronic Laboratory, Inc. | Samuel H. Goldstein | 164 | | Rockland Systems | N. W. Ayer & Son, Inc. | 41 | | Rohde & Schwarz, Inc. | R + S Sales Company | 156 | | Rohde & Schwarz | 1E | | Rotron, Incorporated | Black-Russell-Morris | 161 | | Schauer Manufacturing Corp. | N. W. Ayer & Son, Inc. | 176 | | Scientific-Atlanta, Inc. | NOW Advertising | 170 | | Sentron Corporation | Burress/Advertising | 165 | | Sesocom | Perez Publicite | 23E | | SGS-ATES | Studio B Communications | 181 | | Siemens | Cappiello Colwell, Inc | 22-23 | | Siemens Aktiengesellschaft | Lohmann & Rosse Union GmbH | 56 | | Signetics Corp. | Corning Glass Works | 119,150 | | Silicron | Hall Butler Blatherwick, Inc. | 105 | | Simpson Electric Co. | Simpson Advertising Services, Inc. | 160 | | Sinclair-Marshall, A Textron Company | Sperry-Bohm, Inc. | 186 | | Solitron Devices, Inc., Transistor Division | S. Berman Elliott Advertising, Inc. | 169 | | Sorensen Company, A Unit of Raytheon Company | Providence Eastover & Lombardi, Inc. 
| 50 & 51 | | Spectra-Physics, PD | The Maxwell Arnold Agency | 136 | | Sprague Electric Company | 8 | | Starkey Laboratories, Inc. | 8 | | Stackpole Carbon Company, Electronic Components Division | Thomas Associates, Inc. | 155 | | System Dynamics Conrad Instruments | Bonfield Associates | 9,134 | | TEAC Corp. | 21 | | The Advertising Advertising Ltd. | 51 | | Techmashexport | Vneshtorgpreklama | 37 | | Tektronix, Inc. | 55 | | Teledyne Semiconductor | Regis McKenna, Inc. | 66-67 | | Teradyne, Inc. | N. W. Ayer & Son, Inc. | 70 | | Thomson CSF | Bazaine Publicite | 131 | | Thomson CSF | 9E | | Tokai Publicite | 182 | | Tokyo Electric Co., Ltd. | PAL Inc. /Export Advertising | 173 | | Toko, Inc. | 173 | | TRW/Globe Motors | 88 | | TRW/Industrial Advertising, Inc | 171 | | TRW/IRC Research | Gray & Rogers, Inc. | 101 | | TRW Electronics, Semiconductor Division | 78 | | Tung Sol Division, Wagner Electric Corp. | Vinus Brandon Company | 151 | | Union Carbide Crystal Products | Wavner, Goldstein Associates | 31 | | Unitrode Corporation | Impact Advertising | 58 | | Universal Oil Products, Norplex Div. | Campbell-Ewald Company | 24E | | Wandel and Goltermann | Werbeagentur | 52 | | Wayne Kerr Co., Ltd. | 10E | | Westinghouse Electric & Machine Ltd. | 192-193 | | Westinghouse Fluorescent Lamp Div. | Ketchum, MacLeod & Grove, Inc. | 166A-166B | | Western Instruments, Inc. | 148 | | John Wiley & Sons | Shaw Elliott Incorporated | 184 | | Winfield Industries, Inc. | Bozlee & Associates, Inc. | 188-189 | | Xerox Corporation | Wavner, Goldstein & Steers, Inc. | 172 | | YEWTEC Corporation | 167 | | Z. Kuroda & Co., Ltd. | JTS Co., Ltd. | 167 | ### Classified & Employment Advertising F. J. Ebene, Manager 212-971-2557 **EQUIPMENT (Used or Surplus New) For Sale** American Used Computer Corp. ........................................... 178 Radio Research Instr. Co., Inc. ........................................... 
178 * For more information on complete product line see advertisement in the latest Electronics Buyer's Guide ‡ Advertisers in Electronics International † Advertisers in Electronics domestic edition Plastic power with guaranteed reliability. Thanks to the excellent thermal cycling characteristics of the Versawatt plastic power package, the SGS-ATES range of power devices in plastic are guaranteed against damage due to thermal fatigue. Two basic pin configurations are available, the version with bent leads being a direct plug-in for TO-66 sockets. The first six devices in the SGS-ATES plastic power range are NPN epitaxial transistors, completely free from secondary breakdown. They have low saturation voltage, high current capability and power dissipation as high as 75W even at maximum specified voltage. | Type | Package | $V_{CEO}$ (min) | $V_{GEO}$ (min) | $I_C$ (max) | $h_{FE}$ (min.) at $I_C$ | |------------|-----------|-----------------|-----------------|-------------|--------------------------| | BDX70/2N6098 | TO-220AA | 70 | 60 | 10 | 20 | | BDX71/2N6099 | TO-220AB | | | | | | BDX72/2N6100 | TO-220AA | 80 | 70 | 10 | 20 | | BDX73/2N6101 | TO-220AB | | | | | | BDX74/2N6102 | TO-220AA | 45 | 40 | 16 | 15 | | BDX75/2N6103 | TO-220AB | | | | | SGS-ATES Semiconductor Corporation 435 Newtonville Ave. · Newtonville · Mass. 02160 Tel.: (617) 969-1610 Circle 181 on reader service card CONTROL—precise SIZE—fits SUPPLY—fast PRICE—fine Miniature Relay Type MPM-100 ● Available for various voltages AC & DC. ● Both 4PDT and DPDT contact switching, requiring minimum driving power, are available. ● Highly economical. ● Incorporates an anti-insulation-fatigue device which prevents short-circuits. ● Uses UL-approved resin bobbin. Digital Line Printer LP-108 ● Compact size with simplified mechanism. ● Up to 18 columns. ● 14 characters per column. ● High reliability. ● Red/black printing. ● Print rate of 2.5 to 3.0 lines/sec. 
● Low cost Miniature Motor Timer Type UT-500 ● Economical due to simplified pointerless mechanism. ● Available in a variety of types ranging from 10 seconds to 24 hours, surface-mounted or flush-mounted. Whichever way you take the measure of TEC control instruments you're getting top value. They work longer; give you more reliable performance for your money. TOKYO ELECTRIC CO., LTD. 14-10, 1-chome, Uchi-Kanda, Chiyoda-ku, Tokyo, Japan Advertising Sales Staff Pierre J. Braudé [212] 997-3485 Advertising Sales Manager Atlanta, Ga. 30308: Joseph Lane 100 Colonnade Drive, 1175 Peachtree St., N.E. [404] 892-2688 Boston, Mass. 02118: James R. Pierce 607 Boylston St, [617] 262-1160 Chicago, Ill. 60611: 645 North Michigan Avenue Robert L. Reiss [312] 751-3739 Paul L. Reiss [312] 751-3738 Cleveland, Ohio 44113: William J. Boyle [716] 586-5040 Dallas, Texas 75201: Charles G. Hubbard 2001 Bryant Tower, Suite 1070 [214] 742-1747 Denver, Colo. 80202: Harry B. Doyle, Jr. Tower Building, 700 Broadway [303] 266-3863 Detroit, Michigan 48202: Robert W. Bartlett 1400 Fisher Bldg. [313] 873-7410 Houston, Texas 77002: Charles G. Hubbard 2270 Hurdle Bldg. [713] CA 4-8361 Los Angeles, Calif. 90010: Robert J. Rielly Bradley K. Jones 3200 Wilshire Blvd., South Tower [213] 487-1160 New York, N.Y. 10020 1221 Avenue of the Americas Warren E. Ball [212] 997-3617 Michael J. Stoller [212] 997-3616 Philadelphia, Pa. 19102: Warren H. Gardner Three Parkway, [212] 997-3617 Pittsburgh, Pa. 15222: Warren H. Gardner 4 Gateway Center, [212] 997-3617 Rochester, N.Y. 14584: William J. Boyle 9 Grove Street, Pittsford, N.Y. [716] 586-5040 San Francisco, Calif. 94111: Don Farris Robert J. 
Rielly, 425 Battery Street, [415] 362-4600 Paris: Alain Offergeld 17 Rue des Grands Etsel, 75 Paris 16, France Tel: 720-73-01 Geneva: Alain Offergeld 1 rue du Temple, Geneva, Switzerland Tel: 32-35-63 United Kingdom: Keith Mantle Tel: 01-493-1451, 34 Dover Street, London W1 Milan: Robert Sadel 1 via Baracchini Phone 86-90-656 Brussels: Alain Offergeld 23 Chemin de Wavre Brussels 1040, Belgium Tel: 13-65-03 Stockholm: Brian Bowes Office 17, Kontor-Center AB, Hagagarten 29, 113 47 Stockholm. Tel: 24 72 00 Frankfurt/Main: Fritz Krusebecker Liebigstrasse 27c Phone 72 01 81 Tokyo: Tatsumi Katagiri, McGraw-Hill Publications Overseas Corporation, Kasumigaseki Building 2-5, 3-chome, Kasumigaseki, Chiyoda-Ku, Tokyo, Japan [501] 9811 Osaka: Ryi Kobayashi, McGraw-Hill Publications Overseas Corporation, Kondo Bldg., 163, Umegae-cho Kita-ku [362] 8771 Australia: Warren E. Ball, P.O. Box 5106, Tokyo, Japan Business Department Sheila R. Weiss, Manager [212] 997-3140 Thomas M. Egan, Production Manager [212] 997-3140 Carol Gallagher Assistant Production Manager [212] 997-2045 Dorothy Carter, Contracts and Billings [212] 997-2908 Francois Vallone, Reader Service Manager [212] 997-3657 Electronics Buyers' Guide George F. Warner, Associate Publisher [212] 997-3139 Regina Hera, Directory Manager [212] 997-2544 YOUR personal Digital Multimeter is here! The functions you need most — at the right price — with no sacrifice in quality. Full factory warranty. - Autopolarity - 100% Overrange - LSI & C/MOS Circuitry - Large 0.55" Digits $145 SPECIFICATIONS FOR MODEL 3000 DC VOLTS 1, 10, 100, 1000 AC VOLTS 1, 10, 100, 1000 OHMS 1K, 10K, 100K, 1M, 10M RESOLUTION 0.1% All Ranges ACCURACY DC & OHMS ±0.5% ±1 Digit AC VOLTS ±1% ±1 Digit POWER 105-125 VAC 60 Hz 3W Payment with personal check, money order, BankAmericard, and Master Charge card acceptable. Include all raised information on charge cards. Net 30 days to well rated firms. No COD orders. Add $3 for postage and insurance. 
EXPORT: For further information, write for details. LINEAR DIGITAL SYSTEMS, INC. P.O. BOX 941, GLENWOOD SPRINGS, COLORADO 81601 Circle 182 on reader service card S.S. HOPE, M.D. Doctor... teacher... friend to millions on four continents—this floating hospital is a symbol of America's concern for the world's disadvantaged. Keep HOPE sailing. PROJECT HOPE Dept. A, Washington, D.C. 20007 FREE MONEY! It's a safe bet that you, like the average American, are completely unaware of the incredible bonanza recently granted you by Congress in the form of whopping new Social Security benefits. Item: When today's average worker of 22 retires, he and his wife, according to Social Security actuaries, will receive an annual pension of $38,000. Moreover, the total amount of Social Security he and his wife can expect to collect will surpass half a million dollars! Item: The average American doesn't know it, but the single most valuable asset he now possesses is his Social Security. It is equivalent, in maximum brackets, to a guaranteed 5% income on cash in bank, stocks or real estate worth over $100,000. Moreover, every cent of this bounteous income is TAX FREE! Item: Most Americans still believe, mistakenly, that Social Security is a dole exclusively for the aged. The fact is, however, that 10 million Americans under the age of 60 (and averaging a mere 30) are now collecting Social Security. These non-old-age pensioners receive $13 billion annually, and both their number and the amounts of money they collect are bound to increase in years ahead. So generous has Social Security become for younger people, in fact, that it amounts to Free Money. The biggest problem in connection with Social Security - as the government itself is first to admit - is giving the money away. That is, the public's woeful ignorance of the availability of funds has prevented its full distribution. Over one billion dollars, according to experts, remains undistributed in U.S. 
Treasury vaults simply because no one steps forward to claim it. To help overcome this shocking public ignorance, and so that you get your share of the Social Security largess, the editors of Moneysworth, the authoritative new consumer-affairs and personal-finance fortnightly, have prepared - as a public service - a comprehensive, lucid, savvy, astonishing new manual entitled STAKE YOUR CLAIM! How to Work the Social Security Gold Mine. A copy is yours ABSOLUTELY FREE with a subscription to Moneysworth. STAKE YOUR CLAIM! How to Work the Social Security Gold Mine is more than just an encyclopedic reference work with charts, tables, descriptions of benefits and sample application forms. It is a personal adviser in a field of finance where impartial advice is otherwise almost impossible to obtain (the government, of course, is biased and lawyers are almost never willing to accept Social Security cases because, you know, they are not permitted to charge more than about $10 per case). Among the priceless nuggets of information you will pick up from STAKE YOUR CLAIM! are answers to such questions as: - How can you qualify for a pension even though you have never worked a day in your life, or contributed a cent in Social Security taxes, or even nearly reached the age of 65? - How can you arrange to collect Social Security from both Canada and the U.S.? - Why is it crucial to check the balance of your Social Security account periodically? - Does it ever pay to take out two Social Security cards? - Is it true, as some say, that you should "shop" at different Social Security offices since different interpretations of regulations can result in pensions of different amounts? - What steps, if any, are necessary to protect your pension from attachment by creditors? - Since, as studies have shown, two out of three people overpay their Social Security taxes, how can you check on your payments and possibly obtain a refund? 
- What forms of deception have people employed in order to maximize their Social Security benefits and collect pensions early? - What federal programs help retired persons get jobs to supplement Social Security? - What happens to your pension if an employer deducts Social Security taxes but fails to forward them to Washington for credit to your account? What special steps should you take if the firm you work for is financially shaky? - How do you go about getting one of Social Security's huge "lump sum" payments? In short, STAKE YOUR CLAIM! How to Work the Social Security Gold Mine is a treasure map to the Social Security mother lode, telling what pitfalls to avoid, what tools to use, how to find your way through the maze of regulations and how to hit pay dirt. Its editor and compiler is Ralph Ginzburg, the 43-year-old publisher of Moneysworth, who himself collects $99.40 in Social Security every month and has been getting Social Security since he was 25. To repeat, a copy of STAKE YOUR CLAIM! How to Work the Social Security Gold Mine is yours ABSOLUTELY FREE with a subscription to Moneysworth. In case you're not familiar with Moneysworth, we'll explain that it is America's most ingenious periodical dealing with personal finance and consumer affairs. It will positively flabbergast you with its inventiveness for making and saving money. In less than three years it has bestowed the Midas touch upon nearly a million ecstatic subscribers and has become the most widely read newsletter IN THE WORLD. 
Perhaps the best way to describe Moneysworth is to list the kinds of articles it prints: How to Earn 100% on Your Savings Account Digital Wristwatches: A Product Rating Buying a New Car for $125 over Dealer's Cost Shrewd Buys in Life Insurance Air Travel at 50% Off Minicalculators under $100 Professional Sex Counseling, $00 Per Hour Belted Tires: A Rating without Bias How to Buy a Pistol for Protection Dog Foods Fit for a King How to Contest a Bad Credit Rating Quadraphonic Hi-Fi: Innovation or Hype? Wheeling and Dealing for a New Bike Free Stock Advisory Services Easy-Riding Motorcycles Pianos of Note Trailers with No Hitches Home Burglar-Alarm Systems Cheap Skates Stoves that Are a Turn-On The ABC's of Buying Vitamins Scholarships that Go Begging Sailboats that Are Winners How to Break a Lease Legal Ways to Beat Sales Taxes How to Protect Your Hips Earn Interest on Your Checking Account How to Fight a Traffic Ticket 14 Ways to Save on Your Phone Bill In sum, Moneysworth is a shrewd, trustworthy financial mentor. It is the quintessence of sophisticated gainsmanship. The staff of Moneysworth is a team of hard-nosed, experienced journalists with a record of genius in the field of consumer affairs. Its publisher, as we mentioned earlier, is Ralph Ginzburg, creator of the daring and flamboyant magazines Fact, Eros and Avant-Garde (Mr. Ginzburg was first to publish Ralph Nader). Moneysworth's editor-in-chief is Albert Lee, a former top editor of Better Homes & Gardens. Radiating from this nucleus of editorial energy are reporters, writers and test-teachers throughout the U.S.A. Together they create America's first and only consumer periodical with charisma. Moneysworth is available by subscription only. The cost of a year is ONLY $5! This is a MERE FRACTION of the price of familiar, old-fashioned consumer publications. 
Moreover, we are so confident of Moneysworth's value to you that we are about to make what is probably the most generous offer of its kind in the history of publishing today: We will absolutely and unconditionally guarantee that Moneysworth in combination with STAKE YOUR CLAIM! How to Work the Social Security Gold Mine - will increase the purchasing power of your income by at least 15% or you get your money back IN FULL. In other words, if you now earn $10,000 a year, we guarantee that Moneysworth and the Social Security manual will increase the value of your income by at least $1,500, or we'll refund your money. Meanwhile, you will have enjoyed a year of Moneysworth ABSOLUTELY FREE and you may keep STAKE YOUR CLAIM! WITH OUR COMPLIMENTS!! What could be more foolproof? To enter your subscription, and obtain a free copy of STAKE YOUR CLAIM!, simply fill out the coupon below and mail it with $5 to: Moneysworth, 110 W. 40 St., New York 10018. Then sit back and prepare to receive your first copy of a gleeful, irreverent, wallet-fattening magazine whose motto is: "Ask not what you can do for your country, but what your country can do for you." 110 West 40th St. New York 10018 I enclose $5 for a one-year subscription to Moneysworth, the shrewd, authoritative new consumer fortnightly. I understand that I am paying A MERE FRACTION of the going rate for such a publication. Also, I will receive ABSOLUTELY FREE a copy of STAKE YOUR CLAIM! How to Work the Social Security Gold Mine. Furthermore, if STAKE YOUR CLAIM! does not increase the purchasing power of my income by at least 15%, I will get my money back IN FULL! Moreover, I may keep STAKE YOUR CLAIM! with your compliments and enjoy a year of Moneysworth ABSOLUTELY FREE!! Name Address City State Zip © MONEYSWORTH 1973 T.M REG PEND 9RC Makin' friends Winnebago style. This year you can see America in one of thirteen new motor home models from Winnebago. 
They've got all the comforts you need for easy living: complete kitchen, bath, beds, furniture, carpeting, everything. And they come in every price range—one reason why they're America's best-selling motor homes. Go see for yourself. Then come make some friends Winnebago style. We give you more. Still available: Your own 'Girl Friday' Are you looking for a private secretary with the appeal of an Ali MacGraw, the style of a Lauren Bacall, and the tact and polish of a man like yourself? It may just be possible to find her among the new crop of young commercial school, junior college, and even four-year college grads who will be eyeing the secretarial market next month. Or, your search may well encompass a lightly seasoned crop, say, those above age 30. Whichever way, though, if the screening is to be for top talent, the finalists will be vintage '73—you won't find an old-fashioned girl in the bunch. What this means, actually, is that you will be confronted with some fresh viewpoints and attitudes, including some on Women's Lib—and demands—that may or may not come as a surprise. The evidence adds up: In major urban areas, Women's Lib, though no earth shaker among secretarial types, has kindled some degree of determination to rid the office of its (so the gals say) ancient anti-female inequities. And on top of this, the record shows, too, that women in business have lately been making many solid gains, oftentimes as executives themselves. And so the coin is two-sided. For his part, the businessman in today's competitive climate needs a highly skilled secretary—as his job gets more complex, the girl who sits outside his office must be just that much better qualified. But conversely, he too needs to be better qualified, especially in terms of having broader ideas on delegating duties to his secretary. Today's secretary has a No. 1 complaint: she gets too little responsibility and not enough authority. 
"What the executive's secretary wants makes common sense," says a Boston company president. "She wants work on a higher level, a little clout to do it, and money to go with it. As her boss, you expect more—and you get more." Management consultants and psychologists to the trade for the most part agree with this proposition, and feel that this kind of awareness of Women's Lib and its broad implications is surely a sensible idea. At the same time, they point to a veritable officeful of ideas on how to sift and screen secretarial talent to find, as one says, "the particular Girl Friday you're looking for." The professionals suggest these rules for starters: - **Type.** Get a girl who wants to be a good, highly regarded and well paid secretary. Says New York consulting psychologist Dr. John Drake, of Drake-Beam Assocs.: "Get someone whose needs will be met by the job you offer—not one who wants to step up into management herself. Ask her very directly about her ambitions; you may be surprised at the candid answers you get." If "emotional needs" are delved into, the psychologists will often center in on the secretarial applicant who has a need to be helpful. "For example," says Dr. Drake, "you might pick a girl who's been in activities that demonstrate the helpfulness quality—church work, hospital volunteering, and such." Some very open questioning along these lines can be fruitful. "How do you honestly feel about getting coffee for the boss?" is the kind of question that may prove revealing. Some very fine private secretaries—capable of doing many things in an office—actually like getting coffee for the boss. - **Balance.** Pick a secretary who will balance your own shortcomings. Dr. Mortimer Feinberg, a Manhattan psychologist who has consulted with a number of major companies, cites speech-writing as the sort of chore that can be part of such balancing. 
"If you make a lot of speeches," he says, "but find that writing them is an agony, hire a bright girl with the skill to take bare ideas and put them in the form you want." Dr. Drake offers the example of the executive who is introverted (possibly in science or engineering) and somewhat removed from the office staff about him. "He needs a secretary who is gregarious and can handle the people in the outer office." Or, he adds, the man who abhors routine and whose desk shows it should hire a good organizer. Katherine Gibbs School, the New York-based training ground for countless high-powered secretaries in business, reports that such tasks as speech-writing are increasingly part of the workload carried by many of its 18,000 graduates. "One of the tricks in hiring a secretary," says Barbara Lyon who is in charge of the school's alumni relations, "is to take advantage of these special talents and make up for some abilities you lack. Don't hire a girl just like you." - **Mood.** Pick a secretary who is pleasant, but not obsequious. The ever-smiling, fawning gal is a type to screen out, say the pros, because she is apt to be concealing some sort of inner frustration. "It can mean bitterness," says Dr. Alexander Wesman, of Psychological Corp. "—for instance, wishing that she didn't have to work in an office at all." The obsequious type also may fall short in other ways. Here the prime example is that of the secretary who is so "nice" that she is unable to tell the boss when he is dead wrong and heading into some embarrassing blunder. "Let's be honest," notes the Boston company president, "whether you're top man or middle management, you want a girl who can help you out of possible jams and deal with others for you—so you want a girl with some spunk." One of the important jobs of the top-notch secretary, he adds, is to shoo away the idle talkers, favor-seekers, and problem children that tend to converge on the office of the senior executive. 
- **Vitality.** Try, if you can, say the pros, to pick a girl who has that intangible spark that somehow attaches the word "success" to whatever she does. "You can find all sorts of abilities—that's fine," notes a Columbia Business School psychologist. "But when you hire her, get someone who manages to convince you at her first interview that she's vitally interested in the job. If she's passive, she lacks the quality you want and need most: enthusiasm." However depressing it may be, the evidence says clearly that the cute, good-looking girl of, say, age 20 or 22, though as well trained and bright as one might wish, will oftentimes find it a lot harder to make the grade as an executive secretary than someone five or 10 years her senior. The ideal age bracket, say consultants, is 30 to 45—at least, where fairly sophisticated duties are in store. The 30-year-old remains ripe for training on the job, a major plus; and this holds, too, for the woman of 40, provided she isn't set in her habits and attitudes. "Remember that a person of 40 or more may relate better to older men in the office," says Dr. Drake. "This may or may not fit your situation." If the woman to be considered is single, fine—though the warning sounded is to steer clear of the "old-maid" type. If she is married, make certain that her husband is steadily employed. But there is a twist here that some psychologists like to emphasize: You'll likely be better off, they say, if his job is below the management or professional level. At the same time, don't hire a secretary if she will be earning as much as, or more than, her husband. This can mean trouble, and a phone call to his employer may keep you out of it. If she has children under 16, be sure that they have steady daytime home supervision. This can, of course, be crucial, and it's a point to clear early in the interview.
Ideally, the experts maintain, a top executive or key middle-management executive should have a girl with two years of college, or more, plus the usual secretarial training. In any case, don't reject a girl as being "overqualified" because she is a college graduate—unless she has an advanced degree. The gal hired should score well above average in secretarial-skills and language-usage tests—but beware "personality tests" which can be faked by a job-seeker. What about hiring a male secretary in this day of liberation? Do changing attitudes mean you'll have to hire a "Man Friday" one of these days? Probably not, says Dr. Wesman: "He's a rare bird, and likely to remain so." I got my looks from Mom, my drive from Dad, and my Brain from Aunt Tillie. Adding, subtracting, multiplying, dividing. They really used to do a number on my head. Decimals made me completely crazy. And when I saw percentages my whole life flashed before my eyes. Then my Aunt Tillie gave me a Bowmar Brain. It has a floating decimal and an automatic constant. It has an 8 digit read-out. It will never ever drop zeros like I do. If I want to know what 3.2% of 7,228.34 is, I don’t panic. I punch the percentage key. And my brand new Brain is fully guaranteed for one whole year. Now I can do everything my friend Leon, the brilliant Math major can do. Except it takes Leon longer. Thank you, thank you, thank you Aunt Tillie. You always said I’d need a Brain to get ahead. The Bowmar Brains® America’s No. 1 selling personal calculators. CIRCLE 706 ON READER SERVICE CARD New York to Dallas in minutes. By way of Xerox. To Xerox, making great copies in just minutes is certainly no great challenge. We proved we could do it, and do it better. But proving we could get a copy from, say, New York to Dallas in a matter of minutes was another story entirely. But we solved it. We unveiled the Xerox Telecopier transceiver. 
Merely by dialing the phone in one place, and answering it in another, we could actually transmit copies of documents. The Xerox Telecopier. Takes a piece of paper anywhere in the country, and in a matter of minutes makes a copy of it appear somewhere else. It’s the next best thing to mental telepathy. Xerox. The duplicating, computer systems, education, telecommunications, micrographics, copier company. And to think you knew us when. Looking over the choppy sea of used craft A Newport Beach, Calif., boat dealer is rubbing his hands with joy. In one recent 30-day period more than $250,000 worth of second-hand craft sailed away from his docks with new owners. Up the coast, a prominent Seattle jurist is selling his California-built Columbia 36 for $19,000—$1,000 more than he paid for it five years ago. They are just two of the happier beneficiaries of a phenomenon of the nearly $4-billion boating industry—the perky market in used boats. It's moving right along with new-boat sales, which, due to an early spring, were booming by mid-March this year, even in usually late-to-thaw New England. What's sparking the used-boat business? Trade-ins for new boats, for one. But there are also pocketbook reasons. Approached with proper care, the better parts of the used-boat market—reputable brokers, dealers, and responsible private owners—can be where the best dollar-for-dollar deals can be made, particularly by newcomers to the boating craze. "You have to be some kind of rich eccentric," snorts a sun-baked Miami salt, "to buy your first boat right off a dealer's showroom floor. Read the classified ads. Even we old-timers do; it's a comfort to know what your own boat might be worth. And talk to boatmen. They're a loquacious lot. If there's a real bargain around, it won't remain a secret long." Admittedly, most boatmen agree, the better deals are usually made in the moderately-priced ranges and up—with "moderately priced" meaning between $6,000 and $12,000 these days. 
Boats, indeed, are high, and good ones hold their value in the market. So why buy used when you can get new for a few thousand more? The reason veteran seamen give is that you are buying not just a bare boat but one to which equipment has been added. "The buyer of a new $20,000 boat may easily add $3,000 worth of equipment right away," explains Seattle yacht dealer Jay Wheeler. "You might have to pay him most of the $20,000 for his boat, but you won't pay anything for the equipment." The market for second-hand smaller boats—those which originally cost $5,000 or less—operates pretty much like the used-car market. These craft have a quicker depreciation, as do their motors and other gear. But they are often the best place for real beginners to look. Before even scouting for bargains, old salts strongly advise tyros, first, to do a considerable amount of sailing or motor-boating to jell their thinking: Do you prefer cruising, day-sailing or racing? Where will you boat—lake, river, sheltered harbor, sound, or open sea? Does your family want to come along, and what are their preferences? Even with all questions pinned down, few beginners come up with the right boat for all time. While some old-timers insist that there are too many caveats for beginners in the second-hand market, and that they should stick with brand-new equipment from established dealers, they offer these bits of advice for a safer passage in such waters: - Fiberglass hulls are all the rage, and for a reason—they are less work and resist worms and other deteriorators. Avoid wooden hulls unless maintenance is your joy. Plywood hulls are out of fashion, but are good values if in good condition. - Have any boat you consider pulled out of the water and examined by a qualified boat surveyor. His fee—usually no more than $2 to $2.50 a linear foot, plus haul-out costs—might be the best investment you make that day. Banks dealing in boat loans can refer one to you. - Favor brand-name products over unknowns.
They hold their resale value better, and some even appreciate. Learn the better foreign names, too. In some cases, reputed superior craftsmanship has given them the value edge over comparable American makes. - If you can afford it, go for the sailboat with an inboard auxiliary motor. "Outboards are a pain," says one California sailor. "They're either hung up on the stern or down in a well where they're hard to use." - Be prepared, psychologically as well as financially, to spend more than you ever intended. Off-season storage fees, for instance, can run up—and are a reason why many boats come on the market at season's end. In Miami, for instance, open-air space costs $1.50 per linear foot of hull, per month. Scarce covered storage is $2 a foot, per month. When the time comes to get rid of a boat—as it does eventually—the bigger it is the harder it may be to sell. An alternative for yachtsmen in this category is to give it away—as a tax-deductible contribution to support research and education (IRS allows gifts up to 50% of adjusted income for this purpose). In the past five years, for instance, the University of Miami School of Marine and Atmospheric Science has received $800,000 in donations of boats, some of which came to it in bequests fully tax deductible from the donors' estates. Your company is different. Your radio system should be, too. If your business is like most, it's different! For "different" businesses we put together custom designed 2-way communications systems using standard RCA solid state equipment. We do it by using versatile but standard building blocks. You get a system tailored to your business. And it costs you a lot less than a fancy overdesigned system. Whether you need mobile or portable 2-way radio or personal radio paging, RCA has the products to exactly meet your needs. Your RCA Communications Consultant can show you many ways to cut your operating expenses with a business radio system. Our free guide explains how we do it. 
We're RCA—specialists in radio. We've been in communications from the beginning and have everything you need in equipment and experience. And, most important of all, we specialize in personal service. RCA Mobile Communications Systems, Dept. PB23, Meadow Lands, Pennsylvania 15347 Please send me your RCA Mobile Radio Guide. Name ____________________________ Type of Business ______________________ Company ___________________________ Address _____________________________ City State Zip _______________________ Telephone ___________________________ CIRCLE 708 ON READER SERVICE CARD The first mercury lamp for lighting people Our exclusive Beauty Lite™ lamp represents the most significant color breakthrough since the development of mercury vapor lighting. Its unprecedented color characteristics eliminate the yellow-green light typical of ordinary mercury lamps... and for the first time, make it possible for people and things to look natural. The result: Now mercury lamps can come indoors to provide effective and efficient lighting in stores and offices. Anywhere, in fact, where color comfort is important—indoors or out. For complete information, call your Westinghouse Agent-Distributor. Or write: Westinghouse Electric Corporation, Fluorescent-Mercury Division, Bloomfield, N.J. You can be sure...if it's Westinghouse Westinghouse Beauty Lite Mercury Lamps are available in 400, 250 and 175 watt sizes. Ecology is everybody's baby... It's not just the concern of the fellow with the big smoke stack or the one polluting the stream, but everybody's job, to keep America beautiful. For most of us it means recognizing and preserving the beauty of our own environment. It means giving our plants and shade trees, which like so many things these days are becoming more and more dependent on scientific research for survival, the opportunity to develop their full potential.
As a company with a long history of effort on behalf of the environment, we are continuing to spend substantial amounts of time, talent and money in research to provide the means of preserving the shade trees of tomorrow. But we are more than just a laboratory. We have the people with the ability and experience to help you improve the health and beauty of your trees. Call your local Bartlett representative today—together there is so much that we can do. BARTLETT TREE EXPERTS Home Office, 2770 Summer Street, Stamford, Conn. Research Laboratories and Experimental Grounds, Pineville, N.C. Local Offices from Maine to Florida and west to Illinois and Alabama. CIRCLE 710 ON READER SERVICE CARD Tips for survival in the dizzying commodities whirl In the pits of the Chicago Board of Trade, the cries of the commodity men are fierce, the arm-waving and hand signals a language in themselves, and fortunes come and go with each tick of the clock. Until recently this was the almost-total domain of large commercial accounts. But the scene is changing. More and more small investors ("Call them speculators," insists one broker) are throwing themselves into the frantic futures market. Commodity speculation could be the most hazardous "investment" around. THE MARKETS Brokers say you can plan to lose on at least 60% of your transactions, and only hope that the remaining 40% result in big enough wins to make it all worthwhile. How worthwhile? "Most successful speculators believe that if they don't make 50% on their money in bad years and more than 100% in good years, they have been wasting their time," says one speculator. These markets have long been arenas only for those sophisticated enough to understand such gambits as fundamental analysis, margin calls, "spreading" and "going long." Lately, though, the opportunities to capitalize on big earnings have evoked the gambling instinct in the "small" investor—which means anyone with between $3,000 and $10,000 of risk capital.
Many of these, unfortunately, know about as much about commodities as pseudonymous author "Adam Smith," who lost a bundle investing in cocoa futures, admitting, "All I know about cocoa is that it comes in little red cans . . . ." The typical new speculator is 45 years old, an executive type with a college degree and an income between $10,000 and $25,000 (although a large percentage make well over $25,000). There are 10 active commodity exchanges, but the biggest is the Chicago Board of Trade, which trades $123 billion of the nation's $200 billion market each year—more than the value of all securities on the New York Stock Exchange. But any similarity between commodities and securities trading ends there. The purpose of a commodities market is not conventional investment. It's to transfer the risk of producing, selling and buying a commodity at a future date from the producer (seller) and ultimate buyer to the speculator. The profits can be much higher, and the initial investment (on margin) much smaller, than in the securities market. For example, last November, a speculator could have purchased one contract for delivery of 5,000 bushels of March soybeans at $3.69¼ per bushel, and sold the contract three months later at $6.03¼ per bushel. Since the commodities buyer puts up only a percentage of the total contract (the performance bond, or "margin," is set by the brokerage house or exchange), and the margin at the time for soybeans was $1,000 per contract, the three-month profit (after the $30 commission) amounted to $11,670. The speculator who went short and agreed in November to deliver March soybeans at $3.69¼ would have lost $11,670 if he hung on until the delivery date. However, more than likely he would have sold his contract beforehand and gotten out with minimal losses. Every brokerage house has its standards for accepting and rejecting small accounts.
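The profit arithmetic in that soybean example is easy to verify. A minimal sketch in Python, using the contract size, prices, margin, and $30 commission from the example (the `futures_profit` helper is purely illustrative, not any real brokerage API):

```python
# Check the long side of the soybean trade: one 5,000-bushel contract
# bought at $3.69 1/4 and sold three months later at $6.03 1/4,
# with a $30 commission deducted.

def futures_profit(bushels, buy_price, sell_price, commission):
    """Net profit on a long futures position, after commission."""
    return (sell_price - buy_price) * bushels - commission

profit = futures_profit(5_000, 3.6925, 6.0325, 30)
print(f"Net profit: ${profit:,.2f}")   # Net profit: $11,670.00

# The speculator risked only the $1,000 margin, not the full
# contract value, which is why the percentage return is so large.
print(f"Return on margin: {profit / 1_000:.1f}x")
```

The same numbers run in reverse for the short seller, which is why the article's $11,670 figure appears on both sides of the trade.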
One large house, for example, will refuse anyone whose net worth is less than $50,000, or who has less than $5,000 to risk in the market. "We also try to judge an individual's temperament," says one broker. "If he is a nervous type, calling his broker every half hour to check prices, we don't want him." It is likely that somebody else will take the account, however (there are no "official" standards), but few will accept accounts of less than $3,000. Having been forewarned, the would-be speculator who still wants to get in should observe the following rules: - Research commodities. Read about the government's changing agricultural policies, follow the price of a particular commodity for several months, play with "paper profits" for a while. Also, learn the language of the commodities market—know what the various orders are—"sell-stop," OCO ("one cancels the other") and "on-close," for example. - Choose an advisory service and broker carefully. The printed services cost roughly $100 to $250 a year, with lower rates for trial subscriptions. Ask to see performance results of past recommendations, and look up back issues of their weekly newsletters. As for choosing a broker, established firms usually have the best. However, chances are the small speculator will get a man on the bottom rung. The only advantage with a large house: They have the backup specialists to consult. - Spread your investment. Says one speculator, "By holding at least three different positions you improve the odds of holding at least one that's profitable." Then, once a market gets active and begins moving up, you can get out of unprofitable positions and begin cautiously "buying up" in profitable ones. --- **TARGET YOUR OPPORTUNITIES IN THE HOME BUILDING MARKET** Order Your Copy of the New Edition, **THE BLUE BOOK OF MAJOR HOMEBUILDERS™** **WHAT IS THE BLUE BOOK?** Over 500 pages of individual reports. A description of every major home builder, home manufacturer, mobile home manufacturer, and new-town community developer. The only complete reference published to the leaders of the home building industry.
**WHAT DOES A "WHO'S WHO" REPORT SHOW?** The number of units built since 1968 and planned for 1974. Data is shown for all types of houses and apartments. Reports also show the names of all principals, type of corporate structure, detailed financial data (if on record), the metropolitan areas of operation, prices and rents, degree of industrialization, and a description of the builder's 1973 outlook. **HOW CAN YOU USE IT?** This complete reference to the housing industry is used by builders as a reference book; by sales staffs of leading manufacturers to target large sales; by lenders to determine trends; by Wall Street firms to identify investment opportunities. --- **THE BLUE BOOK OF MAJOR HOMEBUILDERS • 1559 Eton Way • Crofton, Md 21113** Gentlemen: Please enter _____ orders for the 8th Edition, The Blue Book of Major Homebuilders at the price listed below. I understand you are offering a 10-day, full refund, return privilege with each book. $69.50 Regular Price No. of Books ...... Total Amount $... ☐ Payment Enclosed NAME ____________________________________________________________ ☐ Bill me COMPANY _________________________________________________________________ ☐ Bill Co. STREET _________________________________________________________________ ☐ Check here if you want additional information CITY __________________________ STATE & ZIP __________ All orders plus postage and shipping costs. Save postage and handling charges by enclosing your payment. THE BLUE BOOK IS A COPYRIGHTED PUBLICATION OF CMR ASSOCIATES, INC. --- **Stop paying the price of hand addressing.** Nobody likes to type names and addresses or repetitive information over and over and over again. It's dull. Demeaning. Boring. And, it's not unlikely that one day one of your invoices, statements, notices or checks will arrive at the wrong hand-addressed address. Stop paying the price of hand-addressing.
A Pitney Bowes addressing system can automatically imprint names, addresses, any fixed information neatly, accurately and in a fraction of the time it takes by hand. Every business function has its own particular paperwork problems. And, for just about every problem, Pitney Bowes can suggest a solution. A system that can handle the job better and faster for any size office. Write to Pitney Bowes, 8924 Pacific Street, Stamford, Conn. 06904. --- **Pitney Bowes** Because business travels at the speed of paper CIRCLE 711 ON READER SERVICE CARD Everything moves by truck. And, as the Interstate Highway System nears completion, everything moves more quickly by truck. If you've got it, a truck brought it. Much of it along the Interstate Highway System. More safely, too. We're for that. Eaton Corporation, Axle Division, Cleveland OH 44110, manufacturers of heavy-duty Eaton* truck axles; Transmission Division, Kalamazoo MI 49001, manufacturers of Fuller* Roadranger* transmissions. - Beware of all margin calls (with few exceptions), and discretionary accounts (no exceptions). If the margin is 5% of the sale, and the market moves against you so that, say, 25% of the margin has been lost (this varies with every brokerage house, commodity, and type of contract)—the broker will call and ask for enough cash to restore margin. This is when to sell out and take the loss. Discretionary accounts—giving a broker the right to buy or sell commodity contracts without prior approval—are no way for the new investor to learn anything about the commodities market. - Map out a rigid plan and, almost without exception, do not deviate from it. The basis must be: Minimize losses, maximize profits—and get out of a loser. Says one broker: "You must have a predetermined point at which you will get out of a contract if you are wrong...
Many investors have a tendency to move too quickly—especially if they made money on their first contract. But many just blunder ahead without a plan, and it's really pathetic to watch them." - Use the stop-loss, and do not move your stops (with few exceptions). These can be entered at any time, but should be decided beforehand. "Sometimes it is all right to raise stops when you are riding a major trend," says one broker, "but never lower your stops if you begin to lose money. This usually makes for bad results." - Ignore most non-professional advice, and much of public opinion. Says a Merrill Lynch brochure on speculating: "The rule is: with public opinion, act cautiously; against it, boldly." - Take a percentage of your profits and bank them. - And one final rule: When in doubt—about anything—don't do it. One speculation definitely to avoid currently is the commodity option, on which states are increasingly cracking down. An option is an arrangement whereby a speculator purchases the right to buy or sell a futures contract at a set price and time. It is a graduated form of dealing in commodity futures and is not for the small investor—at least not at this stage. The final advice comes from Stanley W. Angrist, author of *Sensible Speculating in Commodities*: "Trade only with those funds you can afford to lose. That is, commodity trading is not investing for income; there is no assurance that even one of the next three trades you make is going to be profitable... And don't use borrowed money. There is nothing more liable to impair your judgment..." --- This unique book helps you manage your personal affairs with the same skill you bring to your business: **Business Week's Guide to Personal Business** $9.95. "Many a man who should know better—whose business or professional income is, say, $25,000 to $50,000 or more—treats his own personal business like a third cousin who turns up in town in search of a loan. He brushes him off," says Editor Joseph L. Wiltsee.
If that describes your situation in any way, here's help. This unique book, distilled from the popular "Personal Business" section of *Business Week*, offers you a wealth of *new* ideas on planning and managing your personal finances. Real estate, investments, insurance, school costs, taxes, estate planning—even advice on how to choose and evaluate an advisor! The money you can save—or make—from just *one* of the ideas in this 320-page guide should more than pay for its modest cost. Get it now from your favorite book store for $9.95. Or send your order (enclose payment please) to the address below. *Note:* a special deluxe edition, bound in buckram, gold stamped, slipcased, is available at $2.50 more per copy, by mail only. Business Week Guide Book Service Office, 330 Broadway, Marion, Ohio 43302 --- Is your business getting its money's worth out of the mail? Promotional mailings can result in extra sales. Faster, more accurate billing encourages faster payment. Used right, the mails can help your business grow. But, if your employees are folding your mailings and inserting them by hand, an extra session or rush order might be the straw that breaks their backs—to say nothing of their morale. A Pitney Bowes folding and inserting system gets your employees back to their regular work and your mailings ready—with dispatch and nary an error. No skips. No misses. No mutinies in the ranks. Pitney Bowes has many ideas on how to solve your paperwork problems, no matter how small or large—and reap added benefits to boot. Ask any Pitney Bowes representative about them, or write today to Pitney Bowes, 8923 Pacific Street, Stamford, Conn. 06904. Pitney Bowes Because business travels at the speed of paper Aquamatic. The first sprinkler head that resets itself automatically after it extinguishes the fire. It's set to go time after time after time without replacement or adjustment.
You don't have to turn off the main valve for inspection after a fire. It's Factory Mutual approved. It's UL listed. And it's new from Grinnell. Aquamatic is totally interchangeable with other sprinkler heads, too. It can be integrated into any existing system or designed into new construction. It's also the first sprinkler head that uses water with maximum efficiency by sequentially turning itself on and off automatically. It's ideal for areas containing high-value inventories or materials highly sensitive to water. In situations where there's a risk of flash fires or where the water supply is limited. In high-rise buildings and many other locations. Aquamatic Sprinkler.* It's a major breakthrough in sprinkler design. It's made by Grinnell, the world's leading designer, manufacturer, and installer of sprinkler systems. And it's ready now. Write or call us for complete information. We'll help you put the fire out. *Pat. Applied For GRINNELL FIRE PROTECTION SYSTEMS COMPANY, INC. EXECUTIVE OFFICES • 10 DORRANCE STREET • PROVIDENCE, R.I. 02903 • 401 331 3800 Sold throughout Europe by Koppertach GmbH Sprinkler GmbH, Kalletkirchen, Germany CIRCLE 713 ON READER SERVICE CARD The tax mess and other woes THE TAX MESS: The 16th Amendment clearly states: "The Congress shall have the power to lay and collect taxes on incomes, from whatever source derived . . ." There's nothing there about capital gains, tax-exempt bonds, oil depletion and other loopholes that excuse the rich and super-rich from paying some $77-billion in income taxes and let big corporations off the hook for another $10-billion. Yet that's the way it is, according to author Philip M. Stern, and it's why he has entitled his latest book, *The Rape of the Taxpayer* (Random House, $10)—it's the little guy, says Stern, who has to make up the difference. The "loophole-ridden" tax code got a working-over by Stern in 1964 in his *The Great Treasury Raid*, a best-seller in its time.
In *The Rape of the Taxpayer* he sharpens the attack with ridicule, calling the income-tax code a "$77-billion welfare program for the rich." He nails the idea down from every angle (the book runs 483 foot-noted and thoroughly indexed pages), clearly to throw fuel on the sometimes sputtering flame under Congressional efforts toward tax reform. Stern calls for a truly "16th Amendment" income tax code. No deductions beyond those for dependents and costs of earning the income would be allowed. Such a no-loophole code, he claims, could reduce tax rates 43% in the lower and middle ranks. Some buyers of this $10 book, however, may be shocked to learn that Stern ranks them along with Jean Paul Getty as tax escapists; one plan he cites would actually raise the rate of anyone with a $25,000 taxable income or better. LIKE IT IS: In a foreword to *I Hate to See a Manager Cry—Or, how to prevent the litany of management from fouling up your career* (Addison-Wesley Publishing Co., $5.95), Martin R. Smith puts credit where credit is due—to Robert Townsend's *Up the Organization*. "I realized," he notes, "that little had been written for the student, trainee, supervisor and middle-level manager using the Townsend approach. So here we are." Smith, an experienced consultant, takes the "litany," or conventional wisdom, for corporate success as a manager (i.e., "Don't pass the buck") and debunks it, item by item. If Mr. and Mrs. Bill Keller had known their son would choose the Navy after college, they could have saved $12,000. Of course Mr. and Mrs. Keller of Rockville Centre, Long Island, were pleased when their son chose the Navy after college. Especially proud because he received a commission as an officer. But financially, the Kellers—like many parents—missed the boat. They didn't realize the Navy might have paid for their son's college education through the Navy-Marine NROTC program, the Naval Reserve Officers Training Corps.
The NROTC Scholarship pays a student's full college tuition, cost of textbooks and instructional fees, and a monthly subsistence allowance. All this could be worth as much as $12,000 depending on which of the more than 40 colleges and universities with NROTC units he chooses. NROTC students spend their college years in a normal course of study, with the addition of Naval Science courses. They also attend scheduled drills and summer training. And upon graduation, they are commissioned as Navy or Marine Officers on active duty (the minimum service requirement is presently four years). If you think that your son can qualify, talk it over with him and the Navy Recruiter in your area. It just might be that if he's ready for college, the Navy or Marines is ready to send him. Please send complete information about the Navy-Marine College Program: Commander, Navy Recruiting Command NROTC Dept. L 32 BLDG. 157 WASHINGTON NAVY YARD WASHINGTON, D.C. 20390 Your Name ____________________________ Your Son's Name _______________________ Age _________________________________ Address ______________________________ City _________________________________ State ________ Zip _________________ THE NAVY Where's the biggest bottleneck in your office? Where does all your outgoing mail funnel at 5 o'clock? Right into your mailroom. And, if your mailing system is manual or if you've outgrown your present equipment, the mailroom can become the worst paperflow bottleneck in your office. A Pitney Bowes mailing system gives your mail the go-ahead. Envelopes are sealed, postage metered and stacked. Your mail looks like it means business. As you do more business, convenient leasing plans make system updating easy and economical. From your mailroom to your accounts receivable department, no matter what your size, Pitney Bowes can help you ferret out your particular paperwork problems and suggest systems to solve them. 
Give any Pitney Bowes office a call, or write to Pitney Bowes, 8922 Pacific Street, Stamford, Conn. 06904. Pitney Bowes Because business travels at the speed of paper How legal-age 18 shakes up your will, family debts, taxes As a father, you may have to grapple with the effects of new laws in your state saying that a youth or girl of 18 can not only vote but also operate as a full-fledged adult in many other ways as well. Nearly 40 states are updating their laws, and a whole bagful of possible problems arises. . . . Family debts: Your 18-year-old may become more closely responsible for his debts than before, with your responsibility diminished accordingly. This looks favorable—but what of the 18-year-old who legally obligates himself way over his head? . . . Or, your will might need revision; for example, where you've a trust which runs until your child reaches majority. The difference between his receiving the funds at 18 instead of 21 may be something you'd want to change. A custodian account for a child also could require some redoing. At least, check state law to see precisely how the age-18 vs. age-21 rules are to be applied to such accounts (traditionally, the child gets the property outright at 21). . . . "The father's legal duty to support a child could be directly affected, too," notes a leading Washington, D.C., attorney who has followed the various state laws. "This could make custodian account funds more readily available for college education—but it could also cause the father to lose the child as a dependent." . . . A possible side benefit is that many age-18 laws will free up various summer jobs for more college students under 21 (examples: taxi drivers, store or plant guards, some bonded employees). . . . "The whole area is new and unclear," warns the Washington lawyer. "It needs some mulling over before any drastic step, such as revising a will.
But it's a developing situation—and a lot depends, too, on the particular 18-year-old." Tax refund coming? —Better relax If you've a tax refund due—and filed your 1040 near the April deadline—wait a little before you begin glancing in the mailbox. And above all, wait before you write to Internal Revenue about it—give the check six weeks to arrive (at least), and 10 weeks before making an inquiry. . . . If you write too soon, you'll clutter the paperwork process and may even cause more delay. It's like dropping a pencil down into the gears of a Xerox machine. . . . Or, if you've somehow already received the check, note that you're not scot-free. A check-in-hand doesn't rule out a later audit of your tax return—post-refund audits are common. Moral: Keep your 1972 records within arm's reach. . . . Silver lining: If one or two items on your 1040 are examined by IRS and you think the agent is wrong, you can appeal within IRS itself—and if you do, rest assured that over 60% of such appeals are "settled" for less than the agent originally demanded. Even full audits produce refunds about 7% of the time, and 40% end with no change in tax at all. The wines of Rioja and Piemonte Anyone can save money in buying wines simply by picking for low price—but the pouring result can be a harsh, raspy drink. With many French, German and other fine wines at crushing prices, Frederick Wildman, the long-time New York expert, suggests that Spain and Italy are fertile grounds often overlooked. Some very good, sound wines are to be had, says Wildman, at modest prices. . . . In Spain, he points to the northern Rioja region, and particularly to the wines of "Cune" (Compania Vinicola del Norte de Espana). Two "great" reds are Imperial and Clarette that go well with meat and fowl; and Blanco Seco and Monopole, whites, are splendid with seafood. These imports are all in the $2 range, except Imperial, at $4. . . . 
In Italy, he suggests a departure from Chianti: from Guido Giri winery at Cuneo, in Piemonte, try Barbera d' Alba, a delicate red ($3.50); and from Santa Sofia, a winery near Verona, try Valpolicella Classico, a medium red ($3.50). Mixology: Much liquor lore goes into the mint juleps on Derby weekend in Louisville, but a real warm-hearted substitute for such complex chemistry is a mix—Glenmore's Mint Julep Mix (70 proof). Served with a sprig of fresh mint, it can make you think you're on the clubhouse veranda at Churchill Downs. Petroleum and You (A History of the Former) Chapter One: Dawn of an Era Man's first encounter with petroleum harks back to ancient times when cavemen came upon pools of crude oil which had seeped to the surface due to underground pressures. At first this was interpreted to be a sign that the gods were angry, yet when compared with such events as erupting volcanoes it seemed to many that "angry" was perhaps too extreme a term. Further study of the matter eventually led to the philosophy expressed in the thought "we are all sinners in the hand of a slightly grumpy god." Whatever the case, the appearance of these petroleum pools brought about a profound schism among cavemen of the period. One group looked upon the seepage as some sort of mistake and wanted to get it back to wherever it had come from as soon as possible. On the other hand, a second group saw it as an omen, yet whether it boded well or ill they could not say since the whole subject of boding was at that time relatively unexplored. Nonetheless, one turns to this second group for the first touchingly awkward attempts of man to find a use for petroleum. A tribe from the frigid northernmost regions tried to wear it, but with little real success. Then, just when the advocates of petroleum were on the verge of despair, two amazing advances occurred. 
An early tribe settled along the Nile learned that judicious use of pitch, a crude form of petroleum, was helpful in waterproofing their vessels. [Illustration: Vessel treated with pitch on left, untreated vessel on right.] And at almost the same point in time, a tribe from the darkermost regions discovered that a branch dipped in the black viscous substance made, when ignited, a moderately serviceable torch. Consequently, given the migratory nature of the tribes during that era, it was only a short step from these two discoveries to the development of the first leak-proof torch. This is the first chapter in a seven-part series presented as a salute to the industry. In addition we would like you to know that we offer a full line of lube oils, greases, cutting oils, fuels, motor oils, white oils, LP-Gas, and specialty products with a complete network of service facilities. For further information and for a booklet of all seven chapters of the Petroleum and You series write to Mr. Frank Laudonio, Atlantic Richfield Company, P.O. Box 71169, Los Angeles 90071. (You might also indicate any product interest and your business.) ARCO Petroleum Products of Atlantic Richfield Company

IT'S GOOD TO KNOW YOU HAVE THE LEADING SERVICE GOING FOR YOU We do many things to maintain our leadership. And they all work in your favor! We make sure there's a dealer in your area to make it easy for you to buy Lyon products. We provide the largest number of field men to assure you regular personal service. And we maintain four strategically located plants for your convenience. Everything works in your favor when you buy Lyon. We live up to our reputation for precision quality. We offer a broad selection (more than 1600 stock items). Our products are easy to order. Our packaging is the finest. Our delivery is prompt. And we keep giving you new reasons to stay with us. For instance, we're now a single source for both furniture and equipment! Call where you have leadership going for you.
Call your Lyon Dealer today! Lyon Metal Products, Inc. General Offices: 575 Monroe Ave., Aurora, Ill. 60507. Plants in Aurora, Ill., York, Pa., Los Angeles. Dealers and Branches in All Principal Cities YOU'RE IN LYON COUNTRY LYON METAL PRODUCTS For Business, Industry and Institutions LYON METAL PRODUCTS, INC. 575 Monroe Avenue, Aurora, Illinois 60507 Please send me a copy of Catalog No. 100 Name Firm Address City State Zip Look for us in the Yellow Pages under LYON "STEEL SHELVING," "LOCKERS" or "SHOP EQUIPMENT" CIRCLE 715 ON READER SERVICE CARD

Do you face a make or buy decision on power supplies? BUY LAMBDA'S NEW LT SERIES: a 5V, 7A power supply with overvoltage protection for $80. Only 13 components. Line regulation 0.02%; load regulation 0.15%; ripple and noise 1.5 mV RMS; temperature coefficient 0.01%/°C. Lambda's long life voltage regulating ferroresonant transformer. Open construction. Lambda's 100,000 hours MTBF power hybrid voltage regulator. MIL-R-11 composition resistors. Heavy duty barrier strip. MIL-R-26 type wire wound resistors. Convection cooled chassis. Computer grade hermetically sealed 10-year life electrolytic capacitors. Efficiencies up to 55%. Under test for listing in Underwriters' Laboratories recognized components index. 1 day delivery. 5-YEAR GUARANTEE.

LTS-CA SINGLE OUTPUT MODELS (4-9/32" x 4-11/16" x 9-9/16")

| MODEL | FIXED VOLT. RANGE VDC | MAX. AMPS AT 40°C AMBIENT | PRICE |
|-------------|---------|-----|------|
| LTS-CA5-0V* | 5±1% | 7.0 | $80 |
| LTS-CA-6 | 6±1% | 6.6 | 80 |
| LTS-CA-12 | 12±1% | 4.4 | 80 |
| LTS-CA-15 | 15±1% | 4.0 | 80 |
| LTS-CA-20 | 20±1% | 3.1 | 80 |
| LTS-CA-24 | 24±1% | 2.6 | 80 |
| LTS-CA-28 | 28±1% | 2.2 | 80 |

*LTS-CA5-0V includes fixed overvoltage protection at 6.8V±10%

LTD-CA DUAL OUTPUT MODELS (4-9/32" x 4-11/16" x 9-9/16")

| MODEL | FIXED VOLT. RANGE VDC | MAX. AMPS AT 40°C AMBIENT | PRICE |
|------------|--------|-----|------|
| LTD-CA-152 | ±15±1% | 2.0 | $110 |
| LTD-CA-122 | ±12±1% | 2.0 | 110 |

A-C INPUT: 105-132 Vac, 59.7 to 60.3 Hz (STD. Comm'l Line Frequency Spec.); consult factory for operation at other frequencies. Send for 1973 Power Supply Catalog and Application Handbook. LAMBDA ELECTRONICS CORP., MELVILLE, NEW YORK 11746, 515 Broad Hollow Road, Tel. 516-694-4200. ARLINGTON HEIGHTS, ILL. 60005, 2420 East Oakton St., Unit Q, Tel. 312-593-2550. NORTH HOLLYWOOD, CALIF. 91605, 7316 Varna Ave., Tel. 213-875-2744. MONTREAL, QUEBEC H0C 7X0, 100c Hymus Blvd., Pointe-Claire, Quebec 730, Tel. 514-697-6520. PORTSMOUTH, HANTS, ENG., Marshlands Road, Farlington, Tel. Gosham 73221. VERSAILLES, FRANCE 64a, 70 rue des Chantiers 78004, Tel. 950-2224. Circle 901 on reader service card

Good trimmer image! Low rejection rate, low profile and low cost are putting Dale Trimmers on more prints and boards than ever. Dale is putting a lot of things together to make its trimmers a better deal for you. Our low rejection rate of .93% (including items not related to quality) guarantees more productive time for you and your people. Our low profile 3/4" models give you a choice of cermet and wirewound elements to precisely match your functional needs. Both are completely immersion proof and have pin spacings that shrink packaging requirements by interchanging with many larger models. Matter of fact, Dale trimmers offer broad interchangeability with every competitive line. And that's something to keep in mind when you're looking for more depth in your supply situation. Send for free trimmer interchangeability guide today...or call 402-564-3131 for complete information. DALE ELECTRONICS, INC., 1300 28th Avenue, Columbus, Nebraska 68601. A subsidiary of The Lionel Corporation. In Canada: Dale Electronics Canada, Ltd. Circle 902 on reader service card
The race to prevent the extinction of South Asian vultures

DEBORAH J. PAIN, CHRISTOPHER G.R. BOWDEN, ANDREW A. CUNNINGHAM, RICHARD CUTHBERT, DEVOJIT DAS, MARTIN GILBERT, RAM D. JAKATI, YADVENDRADEV JHALA, ALEEM A. KHAN, VINNY NAIDOO, J. LINDSAY OAKS, JEMIMA PARRY-JONES, VIBHU PRAKASH, ASAD RAHMANI, SACHIN P. RANADE, HEM SAGAR BARAL, KALU RAM SENACHA, S. SARAVANAN, NITA SHAH, GERRY SWAN, DEVENDRA SWARUP, MARK A. TAGGART, RICHARD T. WATSON, MUNIR Z. VIRANI, KERRI WOLTER and RHYS E. GREEN

**Summary**

*Gyps* vulture populations across the Indian subcontinent collapsed in the 1990s and continue to decline. Repeated population surveys showed that the rate of decline was so rapid that elevated mortality of adult birds must be a key demographic mechanism. Post-mortem examination showed that the majority of dead vultures had visceral gout, due to kidney damage. The realisation that diclofenac, a non-steroidal anti-inflammatory drug potentially nephrotoxic to birds, had become a widely used veterinary medicine led to the identification of diclofenac poisoning as the cause of the decline. Surveys of diclofenac contamination of domestic ungulate carcasses, combined with vulture population modelling, show that the level of contamination is sufficient for it to be the sole cause of the decline. Testing on vultures of meloxicam, an alternative NSAID for livestock treatment, showed that it did not harm them at concentrations likely to be encountered by wild birds and would be a safe replacement for diclofenac. The manufacture of diclofenac for veterinary use has been banned, but its sale has not. Consequently, it may be some years before diclofenac is removed from the vultures' food supply. In the meantime, captive populations of three vulture species have been established to provide sources of birds for future reintroduction programmes.

**Introduction**

Eight vulture species in the genus *Gyps* are widely distributed across Europe, Asia and Africa.
They are all obligate scavengers, feeding primarily on the carcasses of large ungulates and nesting and roosting, often colonially, on cliffs or in trees. They use energetically economical soaring flight to travel long distances from nests and roosts in search of ungulate carcasses (Houston 1974, Ruxton and Houston 2004). *Gyps* vultures are believed to have evolved in parallel with large herds of migratory ungulates, feeding on the remains of sick, injured and depredated individuals (Mundy *et al.* 1992). These herds have disappeared from most of the world range of *Gyps* vultures, remaining only in some of the larger protected areas. However, the food supply formerly provided by wild ungulates was replaced by traditional farming practices in some areas. For example, in the Spanish Pyrenees, transhumance pastoralism, in which herds of domestic ungulates graze the high mountain pastures in the summer and are shepherded to the lowlands in the winter, provided a food supply for Eurasian Griffon Vultures *Gyps fulvus* from the 18th to mid 20th centuries, although these practices have recently declined dramatically across Europe (Pain and Pienkowski 1997). In spite of these changes in food supply, the Cape Griffon *Gyps coprotheres* of southern Africa was the only member of the genus considered to be in danger of global extinction until the late 1990s. This species is believed to have been affected by multiple threats (BirdLife International 2007). It was then recognised that populations of vultures endemic to South Asia were declining rapidly across the Indian subcontinent for unknown reasons. This led to three species, the Oriental White-backed Vulture *Gyps bengalensis* (OWBV), the Long-billed Vulture *G. indicus* (LBV) and the Slender-billed Vulture *G. tenuirostris* (SBV), being listed by IUCN as ‘Critically Endangered’.
In this paper, we describe recent research to determine the causes of the population declines and to identify ways to prevent the extinction of these species. **Population trends of Gyps vultures outside the Indian subcontinent** In southern Africa, populations of the endemic Cape Griffon have declined slowly, principally because of accidental poisoning, collision and electrocution, food stress and disturbance (BirdLife International 2007). In West Africa, vultures have undergone a large decline over the last 35 years, with national parks being the only areas not showing significant declines (Thiollay 2006). Many factors are thought to have contributed to these relatively recent declines, including habitat destruction or degradation, inadvertent poisoning from baits placed to kill other species, the capture of birds for local medicinal purposes or the wild bird trade. However, the disappearance of wild large ungulates because of exploitation for bush meat and reduced availability of the carcasses of domestic livestock may have been important factors in the declines. Two South Asian Gyps species, OWBV and SBV, were widespread and generally common in Southeast Asia (Cambodia, Vietnam, Laos, Thailand, Malaysia) at the beginning of the 20th century, but by the end of that century only a few small relict populations remained, primarily in Cambodia (Pain et al. 2003). Populations remain in Myanmar, but their numbers and status remain uncertain. Whilst factors like persecution may have played a role in the Southeast Asian declines, their main cause is believed to be food shortage. Overhunting resulted in a collapse in the populations of wild ungulates throughout the region (Srikosamatara and Suteethorn 1995, Duckworth et al. 1999, Hilton-Taylor 2000), and current livestock husbandry practices appear not to provide a sufficiently large food supply to support large populations (Pain et al. 2003). 
**Collapse of Gyps vulture populations across the Indian subcontinent** Although Gyps vulture populations were probably declining slowly in many parts of the world during the 20th century, a very different situation existed in India, Nepal and Pakistan. Here, large populations of OWBV and LBV remained until the 1990s. Large numbers of SBV, which was not distinguished as a separate species from LBV until recently (Rasmussen and Parry 2001), were also found in the northeastern parts of the subcontinent. Indeed, during the 1980s OWBV was thought likely to be the commonest large bird of prey in the world (Houston 1985). In India, Gyps vulture densities were so high in some areas that they were considered a hazard to aircraft (Grubb et al. 1990). This abundance was undoubtedly due to a plentiful food supply, in the form of the carcasses of domesticated ungulates. The keeping of livestock for milk production and as beasts of burden is common in rural areas across the Indian subcontinent and cattle are abundant in many towns and cities. Livestock numbers in India have exceeded 400 million since the 1980s, and reached 500 million in 2005 (ILC 2003, projection based on Animal Husbandry Statistics, Government of India). In large parts of the subcontinent, Hindu beliefs prohibit the slaughter of cows. When feral and domestic cows die a natural death they are left in the open in rural areas or disposed of in regulated carcass dumps around towns and cities. Skinners remove the hides from dead cattle for the leather industry, leaving vultures to scavenge the remaining soft tissue. As vulture populations benefited from the large amounts of food available, Indian society gained environmental health and other benefits from a free carcass disposal service. A flock of vultures can pick a cow carcass clean in a few hours, leaving little more than bones, that then dry rapidly in the sun, and are gathered by bone collectors for the fertilizer, gelatin and glue industries. 
Whilst vultures feed primarily on large ungulates, they were also historically the key scavengers of the dead of the ancient Parsi religion, whose followers lay their dead out in the open in enclosures or specially constructed ‘Towers of Silence’ (Pain et al. 1993). Vultures also have spiritual significance in Hindu mythology: the vulture-king Jatayu died attempting to protect Sita, one of the principal characters of the Hindu epic ‘Ramayana’, from the demon king Ravana, while her husband Prince Rama was away hunting (Griffith 1870–1874). The era of abundant *Gyps* vultures in the Indian subcontinent came to a sudden end in the 1990s. By the mid 1990s, newspapers in north India had started publishing reports of vultures rapidly disappearing from carcass dumps. This was also documented by the Bombay Natural History Society (BNHS) whilst monitoring raptor numbers in Keoladeo National Park, a World Heritage Site at Bharatpur in eastern Rajasthan. In the mid 1980s, foraging vultures were numerous in the park: several hundred pairs of OWBV nested within it, and hundreds of pairs of LBV nested on cliffs at Bayana, not far outside. Between the late 1980s and mid to late 1990s, numbers of these two species found in the park declined dramatically (Prakash 1999). Numbers of OWBV nests declined from 244–353 in the 1980s to none by the 1999/2000 breeding season (Prakash 1999, Prakash et al. 2003). There was also anecdotal evidence of a general decline in vulture numbers throughout much of northern India during the late 1990s. However, as there was little systematic bird monitoring, it was difficult to know whether these reports reflected a truly nationwide decline or isolated local changes. With support from the US Fish and Wildlife Service, BNHS had conducted nationwide raptor surveys in many parts of India between 1991 and 1993 using a repeatable road transect method (Samant et al. 1995). Surveys were carried out in, near, and along the routes travelled between protected areas.
They covered large parts of north, west and eastern India. Unfortunately, not all *Gyps* vultures were counted because they were considered too numerous for this to be practicable. However, the surveyors counted vultures in any groups of five or more birds. BNHS, with support from the RSPB, repeated the road transect surveys in 2000. The results were dramatic. Both OWBV and LBV had almost disappeared from the areas surveyed. The population of OWBV across the surveyed range had declined by 96% between 1991–93 and 2000 and that of LBV by 92% (Prakash et al. 2003, 2005a, 2005b). It should be noted that these were minimum declines because individual vultures and those seen in small groups were counted in 2000, but not in 1991–1993. Subsequent counts on these and additional transects in 2002, 2003 and 2007 showed that OWBV and LBV continued to decline at an average rate of 44% (OWBV) and 16% (LBV) per year between 2000 and 2007 (Prakash et al. 2007). SBV was not distinguished from LBV until the 2002 count, when it was found to comprise less than 2% of the combined total of the LBV and SBV count (Green et al. 2004). Comparison of the 2002, 2003 and 2007 counts indicated that the population of SBV was declining in India about as rapidly as LBV (Prakash et al. 2007). Following the results of the 2000 surveys, BNHS organised an international meeting in September 2000. The meeting, held in New Delhi, was supported by the Ministry of Environment and Forests (MoEF) of the Government of India and the RSPB, and was attended by national and international scientists, conservationists, and Indian government representatives. Among those represented was The Peregrine Fund, who joined forces with Washington State University and the Ornithological Society of Pakistan (OSP) to conduct vulture studies in Pakistan. 
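The total percentage declines and the per-year rates quoted above are related by simple compounding, assuming a constant (log-linear) rate of change. A minimal sketch; the eight-year interval taken between the midpoint of the 1991–93 surveys and the 2000 survey is an illustrative assumption, not a figure from the paper:

```python
def annual_rate(total_decline, years):
    """Average per-year decline implied by a total decline over a period,
    assuming a constant (exponential, i.e. log-linear) rate of change."""
    remaining = 1.0 - total_decline
    return 1.0 - remaining ** (1.0 / years)

def total_decline(annual, years):
    """Total decline accumulated over `years` at a constant per-year rate."""
    return 1.0 - (1.0 - annual) ** years

# A 96% decline over roughly eight years (1991-93 midpoint to 2000,
# an assumed interval) implies an average loss of about 33% per year:
owbv_early = annual_rate(0.96, 8)      # ~0.33

# The 44% per year reported for OWBV over 2000-2007 compounds to a
# further ~98% loss across those seven years:
owbv_late = total_decline(0.44, 7)     # ~0.98
```

The two functions are inverses: feeding a computed annual rate back into `total_decline` over the same period recovers the original total decline.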
Subsequent counts of breeding pairs of OWBV in nesting colonies in Punjab province, Pakistan, revealed a population decline at a rate of 50% per year between 2000 and 2003 (Gilbert et al. 2004, 2006, Green et al. 2004). This decline continued to extinction at several formerly large OWBV colonies in the province (Gilbert et al. 2006). This group also counted nesting LBV in Sind province, Pakistan (Gilbert et al. 2004), where numbers have declined by about two-thirds between 2002 and 2006; an average annual decline rate of 25% per year (AVPP 2007). Hence, both in India and Pakistan, the rates of population decline of LBV, though rapid, are substantially slower (16% and 25% per year respectively) than the catastrophic decline rates for OWBV (44% and 50%). The similarity of the recent average decline rates in the two countries is striking for both species. Backwards extrapolation of log-linear Poisson regression models of counts of vultures across India (for methods see Cuthbert et al. 2006a), and of vulture nests at Keoladeo National Park, suggest that the vulture declines probably started in the early to mid 1990s (Figure 1). Nest counts of OWBV by Bird Conservation Nepal (BCN) in eastern Nepal suggested similar rates of decline there, with 65 active nests found at Koshi in 2000–01 falling to just 14 in 2002–03 (Baral et al. 2004). The work of two main research groups was to prove crucial in the search for the cause of declines and solutions. BNHS led one group, initially comprising the Forest Department of the state government of Haryana, the RSPB, the Zoological Society of London (ZSL) and the National Birds of Prey Trust (NBPT), and later expanding to include a wide range of national and international organisations. The second group comprised The Peregrine Fund (TPF), Washington State University and the Ornithological Society of Pakistan (OSP). 
Whilst the BNHS consortium focussed largely on India, the TPF/OSP group conducted a complementary research programme in Pakistan, and BCN worked in Nepal in collaboration with both groups.

**Figure 1.** Population declines of *Gyps* vultures in India. Points show indices of population size from counts on a logarithmic scale, plotted against calendar year. The index represents the vulture population size as a proportion of the initial level (= 1). Triangles represent the number of active nests of *Gyps bengalensis* in Keoladeo National Park from Prakash et al. (2003), expressed as a proportion of the average in the 1980s. Indices of population size, relative to that in 1992, of *G. bengalensis* (diamonds) and of *G. indicus* and *G. tenuirostris* combined (squares) in northern India were calculated from road transect count data as described by Prakash et al. (2007). Lines represent fitted log-linear regression models (dashed line = *G. bengalensis* at Keoladeo, dotted line = *G. bengalensis* on road transects, solid line = *G. indicus/tenuirostris* on road transects).

**Diagnosing the causes of the declines**

*Identifying demographic mechanisms*

Bird population declines involve changes in breeding success, the proportion of adults breeding, or survival rate, which are in turn brought about by external causes such as changes in nest site availability, food supply, disease or predation. Comparison of demographic rates and external influences on declining populations with those of stable populations of the same species is a frequently used and powerful method for diagnosing the cause of a population decline (Green 1995, 2002). However, this approach was not possible for vultures because, except for a relict population in Cambodia, all populations appeared to be declining rapidly throughout a huge area. Nonetheless, the very rapidity of the declines gave at least some clues about the demographic mechanism.
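The log-linear models fitted in Figure 1, and the backwards extrapolation used to suggest when the declines began, can be sketched with an ordinary least-squares fit of log(count) on year. The counts below are illustrative numbers constructed to fall 44% per year from 2000, not the actual survey data, and the 10,000-bird pre-decline level is likewise an assumption for illustration (the published analyses used Poisson regression on real counts):

```python
import math

def fit_loglinear(years, counts):
    """Least-squares fit of log(count) against year: a simplified stand-in
    for the log-linear Poisson regression models behind Figure 1.
    Returns (slope, intercept); exp(slope) is the annual multiplier."""
    logs = [math.log(c) for c in counts]
    n = len(years)
    mx, my = sum(years) / n, sum(logs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, logs))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

# Illustrative counts falling 44% per year (annual multiplier 0.56):
years = [2000, 2002, 2003, 2007]
counts = [1000 * 0.56 ** (y - 2000) for y in years]
slope, intercept = fit_loglinear(years, counts)
annual_multiplier = math.exp(slope)    # recovers ~0.56

# Backwards extrapolation: the year at which the fitted line reaches an
# assumed pre-decline population of 10,000 birds (~1996 with these numbers):
onset_year = (math.log(10000) - intercept) / slope
```

With real, noisy counts the extrapolated onset carries wide uncertainty, which is why the paper dates the start of the declines only to the early to mid 1990s.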
Like other large scavenging birds, *Gyps* vultures are usually long-lived. One bird was reported to have lived for 37 years in captivity, and annual survival rates of wild large raptors are typically around 95% or higher (Newton 1979). An annual survival rate of 99% was reported for adult Eurasian Griffons, though this was for a reintroduced population receiving supplementary food and protection (Sarrazin et al. 1994). Adult survival rates have not been measured for *Gyps* vultures in the Indian subcontinent, but they too are likely to be high. If this is the case, then the rapid rates of population decline make it evident that the demographic mechanism of the vulture declines in the Indian subcontinent must involve a substantial reduction in adult survival. Imagine that the adult survival rate of OWBV before the decline began was 95%. Even if breeding success and immature survival were reduced so that no recruitment of young adults occurred, the adult population could not decline by more than 5% per year if adult survival remained at its pre-decline level. However, the observed rate of population decline is about 50% per year for this species. Such declines could only occur if there was abnormally high adult mortality. In 1985–86, when > 1,700 OWBV were counted in Keoladeo National Park, only 14 birds (7 adults and 7 juveniles) were found dead. By contrast, in 1997–98, when only a few hundred OWBV remained, 73 adults and 10 juveniles were found dead (Prakash 1999). Prakash (1999) also found that the proportion of nests producing fledged young declined from 82% in 1985–86 (*n* = 244) to none in 1997–98 (*n* = 25). The causes of nest failure were unknown, but could have resulted from factors affecting eggs or chicks directly, from high adult mortality affecting nest success, or from a combination of the two. At the nearby breeding colony of LBV at Bayana, numerous vulture carcasses were found at the base of the cliffs.
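The survival-rate argument above can be checked with a toy projection. Assuming zero recruitment (the most pessimistic case for breeding), the adult population each year is simply the surviving adults; the starting figure of 1,000 birds is arbitrary:

```python
def project_adults(n0, adult_survival, years):
    """Project an adult population forward assuming zero recruitment,
    so each year only the fraction `adult_survival` remains."""
    n = float(n0)
    for _ in range(years):
        n *= adult_survival
    return n

# With the pre-decline survival of ~95%, even total breeding failure
# cannot shrink the adult population faster than 5% per year:
high_survival = project_adults(1000, 0.95, 5)   # ~774 adults remain

# The observed ~50% per year decline in OWBV therefore requires adult
# survival to have fallen to about 50% or less:
low_survival = project_adults(1000, 0.50, 5)    # ~31 adults remain
```

The gap between the two trajectories is the core of the demographic argument: no plausible change in breeding alone can reproduce the observed collapse.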
In Pakistan, the TPF/OSP research group made regular systematic searches for dead OWBV in and near the breeding colonies and roosts in their Punjab province study area (Gilbert et al. 2002). By comparing the number of birds found dead with the number counted at the beginning of a given time period, they were able to calculate minimum annual mortality rates. The minimum proportion of adults dying per year in 2001 was 15%, and the proportion for adults and sub-adults combined was 26%. Mortality may have been considerably higher than this because some vultures probably died away from the areas that were searched, or their carcasses were removed by scavengers. Breeding success did not appear to be unusually low compared with that of other *Gyps* species. Both the high rate of the vulture population declines and the high directly observed death rate of adults and sub-adults indicated that an elevated mortality rate of full-grown vultures must be the main demographic mechanism of the decline. Whether or not reduced breeding success, other than that associated with adult mortality, was also involved was not clear. However, these findings were sufficient to suggest that finding the most frequent cause of death of vultures would be the key to diagnosing the cause of the population declines.

*Causes of death*

Vultures at Keoladeo National Park were observed looking ill, with drooping necks, for uncharacteristically protracted periods (Prakash 1999). In 1999 two OWBV were collected by BNHS and sent for post-mortem examination at the Indian Wildlife Cooperative (North Division) at Hisar Veterinary College. One bird was seen to fall from a tree close to Keoladeo National Park, from where it was recovered alive but died soon thereafter; the second was found dead in the city of Delhi. The only unusual post-mortem finding was visceral gout, an accumulation of uric acid crystals in the tissues.
Extensive renal gout was evident in both birds and was considered to have been the proximate cause of death. In both cases, the renal gout was acute with extensive tissue destruction. There are several possible aetiologies of visceral gout, including abnormally high protein diet, primary renal failure and dehydration, but no causal factor was identified at this stage (Cunningham 2000). Further investigations of causes of vulture deaths in India were hampered because few fresh vulture carcasses were available for examination. This was largely due to the lengthy procedure required before permits to collect dead birds were issued. In 2000, the three *Gyps* species endemic to the Indian subcontinent were listed as ‘Critically Endangered’ by IUCN. Later, they were also placed on Schedule 1 of India’s Wildlife Protection Act (1972), further increasing the difficulty of obtaining permits to collect dead birds. This often resulted in vulture carcasses rotting or being removed by scavengers before they could be collected, and between February 2000 and June 2001, only eight dead vultures were collected from the large numbers of carcasses encountered. Hence, legislation intended to protect vultures and other wildlife inadvertently hindered the process of identifying the cause of declines. Post-mortem examinations were initially conducted at the Poultry Diagnostic and Research Centre (PDRC) in Pune, India, and six of eight birds collected were found to have visceral gout (Cunningham et al. 2003). The BNHS consortium also engaged the Australian Animal Health Laboratory (AAHL), expert in the identification of novel diseases, to help identify the causes of decline, although permits were only given for the export of a very small number of samples, again after a very lengthy application process. Investigations in India were made possible by funding to the BNHS consortium from the UK government’s Darwin Initiative grant scheme. 
As part of this programme, a captive care centre for vultures was set up to aid in diagnostic work and develop the capacity for vulture husbandry should conservation breeding become necessary. The centre was established in 2001 in collaboration with the Forest Department of Haryana at Pinjore and was opened in 2003 by Elliot Morley, then under secretary of State for the Environment in the UK government. Although the centre provided excellent facilities for post mortem examinations, the number of vulture carcasses collected remained low. Despite these small numbers, it remained clear that a high proportion of carcasses showed evidence of visceral gout. Of 13 OWBV from India and Nepal examined by February 2004, 10 (77%) had gout. Of 12 LBV, 8 (67%) had gout (Shultz et al. 2004). The TPF/OSP group collected larger numbers of dead OWBV in Pakistan. Post mortem analyses of an initial sample of 36 birds found that 58% had renal failure as indicated by the presence of visceral gout, but exhaustive analyses failed to find its cause (Oaks et al. 2001). Later studies of much larger samples confirmed this high proportion of visceral gout in OWBV in Pakistan, with the highest prevalence (> 80%) being found in adult and subadult birds (Oaks et al. 2004a, Gilbert et al. 2006). Both the BNHS consortium and the TPF/OSP group looked hard for the causes of renal damage and for other causes of death. The teams identified novel vulture pathogens. A mycoplasma was isolated from an OWBV in Pakistan (Oaks et al. 2004b) and the AAHL isolated a herpes virus from an LBV from India (Cardoso et al. 2005). However, there was nothing to suggest that either of these played any part in the declines. 
Extensive analyses of tissues of dead vultures collected in Pakistan for a wide range of toxic environmental pollutants, including heavy metals, organophosphorus and organochlorine compounds and carbamates, failed to find significant numbers of birds contaminated at concentrations likely to have caused death (Oaks et al. 2001, 2004a). In late 2002, the cause of the vulture declines was still eluding all of the researchers. A likely cause seemed to be an infectious disease (Pain et al. 2003, Cunningham et al. 2003). There was some evidence that declines had spread geographically. They were first noted in Rajasthan, Uttar Pradesh and Delhi, subsequently reported from other parts of India, and later reported from Pakistan and Nepal. This difference in timing may simply have reflected a spread in awareness, and thus in reporting, of the problem. However, good populations of vultures remained in Pakistan in 1999–2000, where they were reported to have started to decline only within the previous two or three years (Khan et al. 2001), whereas vulture populations in most of India were already severely depleted by then (Prakash et al. 2003, 2005a). Numbers of OWBV in Pakistan then declined very rapidly (Gilbert et al. 2002). Work conducted in Pakistan in 2000 also found that the proportion of birds exhibiting neck or head-drooping behaviour, similar to that reported by Prakash (1999) for sick birds at Keoladeo National Park, was highest near the Indian border, as were numbers of dead birds, and this was interpreted as a westward spread in the factors responsible for the decline (Khan et al. 2001). Head or neck-drooping was considered to be a relatively uncommon behaviour in vultures in South Asia (Prakash et al. 2003, Risebrough and Virani, in Khan et al. 2001), and appears to be exhibited when vultures are sick or weak (Prakash et al. 2003; Bahat, in Katzner and Parry-Jones 2001) and during periods of extremely hot weather (Camiña 2001, Gilbert et al. 2007a). 
Whilst some considered this a noteworthy behaviour, potentially indicative of sick birds (Prakash et al. 2003), others suggested that this may actually have low specificity and sensitivity as an indicator of poor health (Gilbert et al. 2007a). Although these observations were consistent with the spread of an infectious disease from India to Pakistan, there were also other possible explanations (e.g. Cunningham et al. 2003). The main reason that it was felt that infectious disease could have been the cause of the vulture declines was that other plausible explanations had been checked and found to be improbable. Although no pathogen had been identified as the cause, it was well known that finding such agents and demonstrating their effects is difficult. Hence, it seemed probable that continued work would uncover the pathogen. In fact, as subsequent events were to show, the same line of reasoning can be applied to novel environmental pollutants. **The diclofenac breakthrough** In 2003 the TPF/OSP team, working in Pakistan, conducted a survey of 74 veterinarians and veterinary pharmaceutical retailers to identify livestock drugs that were known to be toxic to bird and mammal kidneys and capable of being absorbed after ingestion. Non-steroidal anti-inflammatory drugs (NSAIDs) are known to be potentially nephrotoxic in mammals, with toxicity varying among drugs and species, and even between individuals (Hersh et al. 2005; Fletcher et al. 2006; Gooch et al. 2007). Several NSAIDs have been reported to cause renal disease in birds (Nys and Rsaaz 1983, Klein et al. 1994). The only NSAID identified as being in widespread use was diclofenac, which was used to reduce pain, inflammation and fever in livestock. It had been available for veterinary use in Pakistan only since 1998. 
In India, however, diclofenac appears to have been available since *circa* 1990, and 19 of 23 veterinarians interviewed indicated that they had been using the drug since 1993–1994 or earlier (BNHS/RSPB unpublished data). The team analysed kidney samples from 38 OWBV found dead in Pakistan between 2000 and 2002, and found that all of the 25 birds that died with visceral gout had detectable diclofenac residues in the kidney. By contrast, none of the 13 birds that died without visceral gout had detectable diclofenac (Oaks et al. 2004a). Oaks’s team then established the toxicity of diclofenac to OWBV experimentally by administering high and low oral doses of diclofenac to two groups of two captive OWBV, and then by feeding 20 OWBV with meat from ungulates treated shortly before death with a standard veterinary dose of diclofenac. The vultures were affected by the diclofenac in a dose-dependent way. Death occurred rapidly in all of the birds exposed to high doses and in many of those given low doses. In all cases, the dead birds had visceral gout. Histological examination revealed kidney damage similar to that found in the carcasses of wild vultures with gout (Oaks et al. 2004a). It is not known how diclofenac causes renal failure, although a mechanism has been proposed (Meteyer et al. 2005). Oaks and colleagues announced their preliminary findings as soon as they became available, at a conference in Hungary in May 2003, well in advance of publication. This was crucial, as it helped to speed up the process of checking whether the situation revealed so convincingly for OWBV in Punjab province, Pakistan, also held for other vulture species and across the wider area of the Indian subcontinent from which catastrophic vulture declines had been reported. The BNHS team moved rapidly to analyse frozen tissues collected from dead vultures found in India and Nepal. 
As had been found in Pakistan, all of the vultures with visceral gout had detectable diclofenac in the kidneys or liver whilst none of the birds with no sign of gout were contaminated (Shultz et al. 2004). This was the case for both OWBV and LBV and a high proportion of both species exhibited visceral gout. These results, taken together with those of Oaks et al. (2004a) indicated that diclofenac was associated with the rapid vulture declines being observed in all parts of the subcontinent. However, it was not yet clear that there was sufficient diclofenac in the vultures’ food supply to fully account for the catastrophic declines. Many scientists were sceptical and felt it unlikely that diclofenac alone could explain such large effects (Proffitt and Bagla 2004). **The case that diclofenac is the major or sole cause of the vulture declines** There seemed to be good reasons to question whether diclofenac could be the sole cause of the vulture declines and the RSPB, BNHS and colleagues went on to investigate whether there was any foundation to this scepticism. NSAIDs tend to have short residence times in mammalian tissue, including ungulates. In European cattle *Bos taurus* that receive standard veterinary doses of diclofenac, tissue levels decline to undetectable levels after about a week (EMEA 2004, Green et al. 2006). Hence, it seemed that an improbably large number of animals would have to be treated with diclofenac just before they died to pose a serious threat to vultures. A possible explanation might be that diclofenac is metabolised more slowly in Indian cattle *Bos indicus* than in European cattle. However, experiments showed that this was not the case. Tissue concentrations of diclofenac, taken from experiments in which Indian and European cattle were killed at different intervals after dosing (Taggart et al. 
2006, EMEA 2004), were used to calculate diclofenac concentrations averaged across all the edible tissues of a carcass at different times after treatment. A dose-response model of the toxicity of diclofenac to OWBV was derived from the experiments of Oaks et al. (2004a) and used to estimate the proportion of vultures that would be killed by a large meal of mixed tissues from a carcass in relation to the interval between treatment and the death of the cow. The average diclofenac concentration in edible livestock tissues was sufficient to kill more than 10% of the vultures feeding from a carcass only if the animal had died within a day or two of being treated (Green et al. 2006). The rate of decline of tissue concentrations and differences among tissues were similar for European and Indian cattle and there were indications that a similar pattern is found in Water Buffalo *Bubalus bubalis*. In order to establish the proportion of livestock carcasses that would need to contain concentrations of diclofenac lethal to vultures to have caused the observed population crash, Green et al. (2004) developed a simulation model of a vulture population using demographic rates based upon the scientific literature and expert opinion. The model assumed that the population of full-grown vultures was exposed to a risk of death from diclofenac poisoning every time they fed, because a proportion of ungulate carcasses contained a lethal concentration of the drug. These deaths were assumed to elevate mortality rates and to reduce breeding success when parent birds were killed. Using a range of plausible assumptions about normal mortality rates and intervals between meals, it was shown that less than 1% of livestock carcasses (0.13–0.75%, depending upon the vulture species, population and model parameter values) would have to carry lethal concentrations of diclofenac to have caused the observed rates of OWBV and LBV population decline in India and Pakistan between 2000 and 2003. 
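The core of this modelling argument can be illustrated with a deliberately simplified calculation (a minimal sketch with illustrative parameter values, not those actually used by Green et al. 2004): if each meal carries an independent probability p of coming from a lethally contaminated carcass, a vulture feeding n times per year survives diclofenac exposure with probability (1 − p)^n, so the annual population multiplier is scaled by that factor and the value of p needed to produce an observed rate of decline can be solved for directly.

```python
def contaminated_fraction(lambda_obs, lambda_base=1.0, meal_interval_days=3):
    """Fraction p of lethally contaminated carcasses needed to drag the
    annual population multiplier from lambda_base down to lambda_obs,
    assuming each meal is an independent exposure:
        lambda_obs = lambda_base * (1 - p) ** n,  where n = meals per year.
    All parameter values here are illustrative, not those of Green et al. (2004).
    """
    n = 365.0 / meal_interval_days  # meals per year
    return 1.0 - (lambda_obs / lambda_base) ** (1.0 / n)

# An observed OWBV decline of roughly 48% per year corresponds to an
# annual multiplier of about 0.52; assume (illustratively) one meal every 3 days.
p = contaminated_fraction(0.52)
print(f"required contaminated fraction: {p:.2%}")
```

With these rough numbers p comes out at around half of one per cent, in the same sub-1% range as the published estimates (0.13–0.75%). This is the crux of the argument: because vultures feed frequently, even a very rare contaminant in the food supply compounds into catastrophic annual mortality.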
The model was also used to calculate the proportion of dead adult and subadult vultures that would have visceral gout, the characteristic sign of diclofenac poisoning, if the observed declines were caused only by diclofenac. It was found that the proportion of dead vultures observed to have gout in Pakistan and India was consistent with diclofenac being the most important cause of the decline, and perhaps its only cause, in both countries and for both OWBV and LBV. These analyses demonstrated that a sufficiently high proportion of dead vultures showed signs of diclofenac poisoning to account for the declines and that the proportion of contaminated ungulate carcasses need only be low. However, they did not show that sufficient ungulate carcasses really were contaminated with high enough diclofenac concentrations to cause the declines. The only way to do that convincingly was to collect tissue samples from a representative sample of dead domesticated ungulates from many sites across India. The BNHS team, in collaboration with the Wildlife Institute of India (WII), collected 1,848 liver samples from domesticated ungulates at carcass dumps at 67 sites across 12 states in India between May 2004 and June 2005. Results of diclofenac analyses revealed that 10.1% of carcasses had detectable concentrations. Diclofenac was found in cattle, water buffaloes, goats and horses, but not sheep. All states showed evidence of contamination except one, in which only one site was sampled (Taggart et al. 2007). These observed concentrations were then used, in combination with the dose-response toxicity model and the vulture population model described above, to estimate the rate of population decline expected for a population of OWBV with this level of exposure. 
The expected rate of decline was 80–99% per year, depending on model assumptions, which is more than, and not significantly different from, the rate of population decline (48% per year) estimated from road transect surveys carried out a few years before (Green et al. 2007). Hence, there was sufficient diclofenac in ungulate carcasses available to vultures in India to cause their populations to decline at the observed rate, without the need to invoke any other causes. Studies in Pakistan estimated the proportion of diclofenac-contaminated ungulate carcasses encountered by OWBV by identifying clusters in space and time of vultures killed by diclofenac (Gilbert et al. 2006). This research indicated that contamination was sufficient to account for the population decline and that variation among colonies and years in the rapidity of decline was strongly correlated with the mortality rate caused by diclofenac. **The scale of diclofenac use in India** Diclofenac is no longer covered by patent and more than 50 companies in India manufacture veterinary formulations. Across the subcontinent, it appears to have been the welfare drug of choice for veterinarians treating livestock for a range of conditions. It is generally administered as an intramuscular injection, although an ingestible bolus form also exists. It is likely to be useful in a range of situations, including in rural communities, where families frequently keep water buffalo and cattle for working the land and for milking. As a potent anti-inflammatory drug, diclofenac can help to temporarily alleviate the effects of a range of veterinary problems (e.g. muscle inflammation in the limbs, and mastitis) and so potentially render domestic livestock more able to continue to work productively or yield milk. Prakash et al. 
(2005b) estimated that if 10–20% of the estimated 503 million livestock in India die annually and become available to vultures (only a small proportion are eaten by people), then a pharmaceutical industry estimate of 5 million annual diclofenac treatments would result in 5–10% of carcasses being contaminated with detectable concentrations of diclofenac. However, given the short residence time of diclofenac, this would only be the case if all treated animals died within a week of being given diclofenac. The observed 10% diclofenac prevalence in samples from carcasses of domesticated ungulates (Taggart et al. 2007) suggests that considerably more than the estimated 5 million courses of treatment are given annually, and/or that the majority of animals treated are fatally ill. **Finding practical ways to prevent the extinction of South Asian vultures** In January 2004, as soon as the initial case for the importance of diclofenac in causing vulture declines had been assembled and tested, a group of conservation bodies, including both the BNHS and TPF/OSP groups, issued a Manifesto. This called for immediate action from the governments of all *Gyps* vulture range states to prevent the veterinary use of diclofenac. In February 2004, two important international meetings were held to review the scientific evidence, present the emerging consensus to government representatives and initiate the planning of conservation action. The first was a Vulture Summit in Kathmandu, convened by TPF and BCN, and the second was an International South Asian Recovery Plan Workshop convened by the BNHS group (ISARPW 2004). Participants included NGOs, governmental organisations and others from across South Asia and internationally. Two key recommendations emerged from these meetings and were presented in the report of the International South Asian Recovery Plan Workshop. 
These were (1) that government authorities in all range states introduce legislation or regulations to prevent all veterinary uses of diclofenac that pose a risk to vultures, and (2) that captive populations of all three affected *Gyps* species be established immediately in South Asia, for the purposes of conservation breeding and subsequent reintroduction to a diclofenac-free environment. The captive care facility developed in India in 2001 was converted into a conservation breeding facility in 2004. It has since been expanded, and additional facilities have been constructed by BNHS in West Bengal and Assam. These three centres currently (as of 15th April 2008) hold 83 OWBV, 71 LBV and 28 SBV. The aim of the conservation-breeding programme in India is to hold a minimum of 25 pairs of each species at each of a minimum of three sites. In Pakistan, a facility holding 11 OWBV is run by the World Wide Fund for Nature (WWF) Pakistan and the Punjab Wildlife and Parks Department of the Provincial Government, with support from The Hawk Conservancy Trust and the Environment Agency of the United Arab Emirates. In Nepal, a facility holding 14 OWBV is being developed by the National Trust for Nature Conservation, the Department of National Parks and Wildlife Conservation and Bird Conservation Nepal, supported by RSPB and ZSL. Given the widespread use of diclofenac, and the evident importance of veterinary use of this drug across South Asia, it soon became apparent that an alternative NSAID, of low toxicity to vultures and effective for the treatment of livestock, would need to be found to facilitate and expedite a diclofenac ban. As an initial step, a questionnaire was sent to veterinarians at zoos and wildlife rehabilitation centres globally to ask which NSAIDs they had used to treat scavenging birds, and the clinical outcome. Survey results identified the NSAID meloxicam as a potential alternative. 
Meloxicam had been given to 39 *Gyps* vultures of six species, and to at least 700 individuals of 54 other raptor and scavenging bird species, with no ill effects. However, mortality with associated kidney damage (gout and/or renal failure) was reported with the use of several other NSAIDs, including flunixin and carprofen (Cuthbert et al. 2006b). Subsequently, a comprehensive safety-testing programme for meloxicam was initiated in South Africa, as a collaboration between South African (Pretoria University and DeWildt Cheetah and Wildlife Trust), Namibian (Rare and Endangered Species Trust), Indian (BNHS, Indian Veterinary Research Institute) and UK (RSPB, Aberdeen University, Cambridge University) research and conservation groups. The threatened South Asian *Gyps* species could not be used for initial testing because they were essential to the captive breeding programme, and it was therefore necessary to find a surrogate species. An obvious candidate was the African White-backed Vulture (AWBV), *Gyps africanus*, which is not considered to be threatened, being classified in the ‘Least Concern’ category by IUCN (IUCN 2007). Captive, injured or non-releasable AWBV were used to assess whether the toxicity of diclofenac to this species was similar to that of OWBV. Four AWBV were used for an experiment in which two were randomly selected and given 0.8 mg kg\(^{-1}\) of diclofenac by gavage and two were sham-dosed with sterilised water. The dose was selected using the dose-response model previously established for OWBV. If diclofenac were as toxic to AWBV as it is to OWBV, there would be less than a 1% chance that both of the treated birds would survive. The two diclofenac-treated birds died within two days with visceral gout, whilst the untreated controls remained healthy (Swan et al. 2006a). The meloxicam safety testing trial on AWBV was implemented in stages to avoid unnecessary deaths if the drug proved to be toxic. 
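The statistical logic behind using only two treated birds in the diclofenac experiment described above is a simple binomial calculation; the per-bird survival probability used below is an assumed illustrative value standing in for the prediction of the OWBV dose-response model, not a figure from the published study.

```python
def prob_all_survive(q_survive, n_birds=2):
    """Probability that n independently dosed birds all survive, given a
    per-bird survival probability q_survive taken from a dose-response model."""
    return q_survive ** n_birds

# Illustrative assumption for this sketch: at the chosen dose (0.8 mg/kg),
# the OWBV dose-response model predicts per-bird survival of at most ~0.1,
# so the chance that both treated birds survive is at most ~1%.
q = 0.1
print(f"P(both treated birds survive): {prob_all_survive(q):.3f}")
```

Hence, had both treated AWBV survived, the hypothesis that AWBV are as sensitive as OWBV could have been rejected at roughly the 1% level; in the event, both birds died with visceral gout, consistent with comparable sensitivity.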
The maximum likely level of exposure (MLE, 1.5 mg kg\(^{-1}\) body weight) to meloxicam in the wild was first estimated, based upon known concentrations in the tissues of experimentally treated livestock and vulture food intake. At each stage of the experiment, the dose of meloxicam administered by gavage was increased until the MLE was exceeded. Eventually, a dose of 2.0 mg kg\(^{-1}\) body weight was administered to a sample of 40 AWBV. All birds survived these treatments with no obvious ill effects, and serum uric acid concentration, which is greatly elevated in OWBV, AWBV and Eurasian Griffons treated with diclofenac (Oaks et al. 2004a, Swan et al. 2006a), remained within normal limits. Next, an experiment was performed in which captive AWBV were fed tissues from cattle treated just before slaughter with a higher than standard veterinary course of meloxicam. All of the six treated AWBV remained healthy with normal serum uric acid concentrations. Finally, ten individuals from two of the threatened Asian vulture species (OWBV and LBV) were given meloxicam by gavage, five of them at a dosage above the MLE. All survived with no obvious ill effects, as did 21 birds (OWBV and LBV) fed muscle or liver tissue from water buffalo treated with double the standard veterinary dose of meloxicam until eight hours before slaughter (Swan et al. 2006b, Swarup et al. 2007). The results of these studies suggested that meloxicam is of low toxicity to *Gyps* vultures, and that in this respect it would be a suitable substitute for diclofenac. Meloxicam also appears to have very low toxicity to a wide range of other raptors and scavenging birds that may encounter carcasses, with over 700 individuals from 54 species clinically treated with meloxicam and a further five species dosed with meloxicam at dosages above MLE (Cuthbert et al. 2006b, Swarup et al. 2007). 
Like diclofenac, meloxicam is out of patent, licensed for veterinary use in India, already produced for veterinary use in injectable and bolus (ingestible) form, and considered a very effective NSAID (Noble and Balfour 1996, Del Tacca et al. 2002, Deneuche et al. 2004) used to treat a variety of livestock ailments (Friton et al. 2004, Hamman and Friton 2003, Milne et al. 2003). In November 2004, BNHS, with support from RSPB, initiated an advocacy programme in India to promote a ban on the use of diclofenac. Throughout the various phases of meloxicam safety testing, the researchers fed results through the advocacy programme to keep the Indian authorities fully informed of preliminary research findings. A preliminary report was made available to relevant government officials, and on 17 March 2005, Board Members of the National Board for Wildlife recommended a ban on the veterinary use of diclofenac. The Indian Ministry of Environment and Forests adopted a constructive approach and held a two-day international conference early in 2006. This meeting coincided with publication of the first meloxicam safety testing results (Swan et al. 2006b). A series of recommendations were produced during the meeting, of which the first was ‘to strongly recommend to the Governments of the respective countries to take immediate steps to completely phase out veterinary diclofenac’ (MoEF 2006). In May 2006, a directive from the Drug Controller General of India was circulated to relevant officials for withdrawal of manufacturing licences for veterinary diclofenac. The Government of Nepal took similar action in August 2006, shortly followed by the Government of Pakistan. The governments of these countries are to be commended on the rapidity with which this action was taken. 
A two-and-a-half-year interval between identifying veterinary diclofenac as the cause of the declines and banning its production may appear far too long given the annual vulture decline rates, but it is rapid in comparison with many other efforts to resolve environmental problems. The widespread use of DDT from the mid-1940s onwards was identified as the cause of significant mortality, reduction of breeding success and population declines of birds and other non-target species by the early 1960s (Barnett 1950, Mohr et al. 1951, Hickey and Hunt 1960, Wurster et al. 1965, Ratcliffe 1967). However, it was not until 1972 that the majority of uses of DDT were banned in the USA. Whilst the current bans on the manufacture of veterinary diclofenac are essential, much remains to be done to ensure that the affected species do not disappear from South Asia. Retail sale of veterinary diclofenac is still legal in India, and diclofenac is still being sold and used 9 months after the ban (authors’ unpublished information). Awareness campaigns, incentives for meloxicam use and a ban on retail sale and use of veterinary diclofenac are likely to be necessary to bring diclofenac contamination of domestic ungulate carcasses down to the very low levels required for the safety of wild vultures. The use on livestock of diclofenac formulated for human use is also a possible barrier to the full removal of diclofenac from vulture food supplies. Adequate monitoring is essential, both of the availability of veterinary diclofenac and of its use. The latter is best performed through carcass sampling as described by Taggart et al. (2007), with the impact upon vultures of the observed level of contamination being assessed by modelling (Green et al. 2007). The rapidity of vulture declines and the uncertainty about when diclofenac contamination will be removed make the establishment of conservation breeding centres a continuing necessity. 
*In situ* conservation measures in combination with conservation advocacy and awareness programmes may also be necessary to help ensure that at least some of the small remaining vulture populations remain extant. Two *in situ* measures have been proposed to reduce mortality in the wild: the exchange of diclofenac for meloxicam in areas surrounding breeding colonies and, in Nepal, diversionary feeding with diclofenac-free carcasses. The efficacy of these measures will depend upon the availability of alternative food sources, the extent of use of diversionary feeding stations, and bird movements within and outside the breeding season. Little is known of movements in Asian *Gyps* species. Throughout Africa and Europe, *Gyps* species can be sedentary, nomadic, partially migratory or migratory, with movement patterns varying between regions and apparently in relation to seasonal resource availability (Bernis 1983, Mundy et al. 1992). In general, adult birds appear to be more sedentary, and juvenile and immature birds more migratory or dispersive in nature. However, preliminary results from satellite-tagged OWBV in Nepal indicate that even adult birds can pass a high proportion of the non-breeding season distant from breeding sites (R. Cuthbert and H. S. Baral unpublished data). Satellite-tagging studies of five non-breeding adult OWBV in Pakistan found that the maximum distance travelled from the colony varied considerably between individuals, ranging from 35 to 316 km, and home range areas varied from 1,824 to 68,930 km$^2$ (Gilbert et al. 2007b). Food provisioning near a colony of OWBV in Pakistan during the 2003–04 breeding season illustrated that the provision of clean food appeared to be able to reduce, but not eliminate, mortality from diclofenac (Gilbert et al. 2007b). 
There was also considerable seasonal variation in the extent to which vultures used the diversionary food, with the vulture restaurant visited on only 16% of days and by a relatively small number of birds at the end of the breeding season, compared with 74% of days by a far larger number of birds earlier in the season. There were significant declines in mortality when vultures were fed clean food, but no reduction in the rate at which numbers of breeding pairs (active nests) declined at the colony in the year following the diversionary feeding (298 nests in 2002–03, 203 nests in 2003–04 and 118 nests in 2004–05; AVPP 2007). These results show that, whilst food provisioning may be of some benefit, it did not prevent the population from declining. Whilst the impact of year-long food provisioning remains untested, it is likely to have a greater impact on vulture survival in areas where alternative food is scarce, in colonies where a high proportion of birds tend to be sedentary, and where local diclofenac use is minimal or non-existent. The impact on vulture populations of exchanging supplies of meloxicam for those of veterinary diclofenac is also untested, although exchange programmes are underway in Nepal, in combination with year-round provisioning of safe food. Careful monitoring of the effectiveness of these programmes in reducing the rates of decline at colonies will inform future *in situ* activities. **Wider implications of NSAID use** The known toxicity of diclofenac to four *Gyps* species (*G. bengalensis, G. indicus, G. fulvus* and *G. africanus*; Oaks *et al.* 2004a, Shultz *et al.* 2004, Swan *et al.* 2006a), and the phylogenetic positions of these species, each of which forms a sister relationship with one or more of the remaining *Gyps* species (Johnson *et al.* 2006), suggest that all members of the genus *Gyps* are likely to be sensitive to diclofenac. 
However, the NSAID survey that initially identified meloxicam as a potential alternative to diclofenac (Cuthbert *et al.* 2006b) also highlighted two additional issues of concern. First, diclofenac was not the only NSAID to have been associated with gout and/or renal failure in treated birds, and second, *Gyps* vultures were not the only bird species to be affected. Five of 40 birds given carprofen (at doses of 1.0–5.0 mg kg$^{-1}$) and seven of 24 birds administered flunixin (at doses of 0.5–12.0 mg kg$^{-1}$) died with renal failure and/or gout, as did one bird given ibuprofen and one given phenylbutazone, at unknown doses. Both carprofen and flunixin are used to treat livestock in Europe, although not yet in South Asia, and information on residues of these NSAIDs in livestock tissues suggests that livestock dying shortly after treatment could contain sufficient residues to pose a threat to scavenging birds (Cuthbert *et al.* 2006b). Species that died following treatment with these NSAIDs included *Gyps* vultures, a Harris’s Hawk *Parabuteo unicinctus*, a Northern Saw-whet Owl *Aegolius acadicus*, a Red-legged Seriema *Cariama cristata*, a Marabou Stork *Leptoptilos crumeniferus*, a Cinereous Vulture *Aegypius monachus* and a Lappet-faced Vulture *Torgos tracheliotus*. The diversity of species affected suggests that the veterinary use of some NSAIDs may pose a problem for scavenging birds of other species and in other areas. However, preliminary results suggest that some species, including at least one New World vulture, appear particularly insensitive to the effects of diclofenac (B. Rattner *et al.* in press). Whilst the situation in India is unique, in that vast numbers of domestic livestock remain in the open after death, any situation in which recently treated livestock can be scavenged by birds presents a potential problem. More work is urgently needed on the risks that NSAIDs pose to scavenging birds globally. 
Governments should only license NSAIDs for veterinary use if they have first been tested and found to be sufficiently safe for scavenging birds likely to feed on the carcasses of treated animals. It is possible that diclofenac use in India has resulted in mortality in a wider range of species than *Gyps* vultures. A lack of monitoring has made this difficult to investigate, but repeated surveys of Red-headed *Sarcogyps calvus* and Egyptian *Neophron percnopterus* Vultures have shown that these species have declined rapidly across India, though apparently with a later onset than *Gyps* vultures (Cuthbert *et al.* 2006a). As no carcasses of these species have been collected and analysed, it is not possible to determine the role that diclofenac may have played in their declines. **Concluding comments** The investigations described here have some interesting features. Quantification of the scale of the declines and estimation of minimum mortality rates by carcass searching were important in establishing elevated adult mortality as the main demographic mechanism of the declines. However, because of the absence of widespread bird population monitoring, it proved difficult to measure the declines and identify when and where they began. The mobilisation of scientific effort was rapid once the declines had been recognised, but wildlife protection legislation, though essential in other contexts, was an obstacle to identifying the cause of elevated mortality. The engagement of a diversity of researchers from several countries and scientific disciplines and from academic institutions, NGOs and government agencies was vital. Their organisation into separate research groups conducting complementary research, in competition but also in communication with one another, was a stimulus to progress and to the rigorous evaluation of hypotheses. 
Once the cause was identified, the international research and conservation community, and the Indian Ministry of Environment and Forests, closely followed by the authorities in Nepal and Pakistan, pulled together with remarkable rapidity and determination to find a solution to the problem. This international collaborative effort was exceptional, with academics setting aside their research agendas to give priority to this work alongside conservation scientists, advocates, civil servants and politicians. Conservation NGOs played a central role in arguing for action and in funding and designing relevant research. Considerable progress has already been made, but saving Asian vultures remains a daunting challenge, requiring effort and vigilance for decades to come. Establishing viable captive populations, removing diclofenac from the vulture food supply in the Indian subcontinent and preventing its replacement by other toxic NSAIDs are the main short-term priorities. Maintaining and breeding vultures in captivity for reintroduction, restoring wild populations and preventing future adverse impacts of NSAIDs and other veterinary drugs are tasks for the longer term. **Acknowledgements** We wish to express our appreciation of the contribution of the late Bill Burnham, former president of The Peregrine Fund, to efforts to save the vultures of South Asia. We thank Mark Avery, Alistair Gammell, David Houston, Georgina Mace, Ian Newton and Stephen Piper for advice and help, and David Gibbons for comments on the manuscript. We would like to thank the Indian Veterinary Research Institute and the Indian Council of Agricultural Research, the Indian Ministries of Environment and Forests, Agriculture and Health, the Drug Controller General of India, the governments of Haryana, West Bengal and Assam states, and the governments of Nepal and Pakistan for all of their help and support. 
Many thanks to the Poultry Diagnostic and Research Centre, Pune, India, and especially Dr Ghalsasi, for collaborative work and analyses. The RSPB, BNHS and ZSL would like to thank the UK government for funding provided under the Darwin Initiative for the Survival of Species, the British High Commission, New Delhi, the British High Commission Global Opportunities Fund, The Earth Matters Foundation, the Rufford Foundation and many individual donors. Funding for work conducted by The Peregrine Fund was provided by the Gordon and Betty Moore Foundation, The Peregrine Fund, Disney Wildlife Conservation Fund, the UN, Summit, and Ivorybill Foundations, Zoological Society of San Diego, and other important donors. Finally, we thank all of the institutions to which the authors are affiliated for their financial and other support.

References

AVPP (2007) Asian vulture population project, February 2007, www.peregrinefund.org/vulture
Baral, H. S., Giri, J. B. and Virani, M. Z. (2004) On the decline of Oriental White-backed Vultures *Gyps bengalensis* in lowland Nepal. Pp. 215–219 in R. D. Chancellor and B.-U. Meyburg, eds. *Raptors Worldwide. Proceedings of the 6th world conference on birds of prey and owls*. Berlin and Budapest: WWGBP and MME/BirdLife Hungary.
Barnett, D. C. (1950) The effect of some insecticide sprays on wildlife. *Proc. Ann. Conf. West. Assoc. State Game and Fish Comm.* 30: 125.
Bernis, F. (1983) Migration of the common Griffon Vulture in the Western Palearctic. Pp. 185–196 in S. R. Wilbur and J. A. Jackson, eds. *Vulture biology and management*. Berkeley and Los Angeles: University of California Press.
BirdLife International (2007) Species factsheet: *Gyps coprotheres*. Downloaded from http://www.birdlife.org on 21/6/2007.
Blus, L. J. (2003) Organochlorine pesticides. Chapter 13 in D. J. Hoffman, B. A. Rattner, G. Allen Burton Jr and J. Cairns Jr, eds.
Camiña, A. (2001) The "head-drooping" behaviour in Spanish Eurasian griffon vulture populations.
Preliminary results [abstract]. 4th Eurasian Congress on Raptors, Pp. 34–35. Seville, Spain: Estación Biológica Doñana and Raptor Research Foundation.
Cardoso, M., Hyatt, A., Selleck, P., Lowther, S., Prakash, V., Pain, D., Cunningham, A. A. and Boyle, D. (2005) Phylogenetic analysis of the DNA polymerase gene of a novel alphaherpesvirus isolated from an Indian *Gyps* vulture. *Virus Genes* 30: 371–381.
Cunningham, A. A. (2000) Investigation of vulture mortality in India: Report of a visit. Unpublished report to RSPB, Sandy, UK.
Cunningham, A. A., Prakash, V., Pain, D. J., Ghalsasi, G. R., Wells, G. A. H., Kolte, G. N., Nighot, P., Goudar, M. S., Kshirsagar, S. and Rahmani, A. (2003) Indian vultures: victims of an infectious disease epidemic? *Anim. Conserv.* 6: 189–197.
Cuthbert, R., Green, R. E., Ranade, S., Saravanan, S., Pain, D. J., Prakash, V. and Cunningham, A. A. (2006a) Rapid population declines of Egyptian Vulture (*Neophron percnopterus*) and red-headed vulture (*Sarcogyps calvus*) in India. *Anim. Conserv.* 9: 349–354.
Cuthbert, R., Parry-Jones, J., Green, R. E. and Pain, D. J. (2006b) NSAIDs and scavenging birds: potential impacts beyond Asia's critically endangered vultures. *Biol. Lett.* 3: 90–93. doi:10.1098/rsbl.2006.0554.
Del Tacca, M., Colucci, R., Fornai, M. and Blandizzi, C. (2002) Efficacy and tolerability of meloxicam, a COX-2 preferential nonsteroidal anti-inflammatory drug. *Clin. Drug Inv.* 22: 799–818.
Deneuche, A. J., Dufayet, C., Goby, L., Fayolle, P. and Desbois, C. (2004) Analgesic comparison of meloxicam or ketoprofen for orthopaedic surgery in dogs. *Vet. Surgery* 33: 650–660.
Duckworth, J. W., Salter, R. E. and Khounboline, K. (compilers) (1999) *Wildlife in Lao PDR: 1999 status report.* Vientiane: IUCN-The World Conservation Union / Wildlife Conservation Society / Centre for Protected Areas and Watershed Management.
EMEA (2004) Committee for veterinary medicinal products: diclofenac. Summary Report, EMEA/MRL/885/03-FINAL.
Fletcher, J.
T., Graf, N., Scarman, A., Saleh, H. and Alexander, S. I. (2006) Nephrotoxicity with cyclooxygenase 2 inhibitor use in children. *Pediatric Nephrology* 21: 1893–1897.
Friton, G. M., Cajal, C., Romero, R. R. and Kleeman, R. (2004) Clinical efficacy of Meloxicam (Metacam®) and Flunixin (Finadyne®) as adjuncts to antibacterial treatment of respiratory disease in fattening cattle. *Berliner und Münchener Tierärztliche Wochenschrift* 117: 304–309.
Gilbert, M., Virani, M. Z., Watson, R. T., Oaks, J. L., Benson, P. C., Khan, A. A., Ahmed, S., Chaudry, J., Arshad, M., Mahmood, S. and Shah, Q. A. (2002) Breeding and mortality of Oriental White-backed Vulture *Gyps bengalensis* in Punjab Province, Pakistan. *Bird Conserv. Internatn.* 12: 311–326.
Gilbert, M., Oaks, J. L., Virani, M. Z., Watson, R. T., Ahmed, S., Chaudhry, M. J. I., Arshad, M., Mahmood, S., Ali, A., Khattak, R. M. and Khan, A. A. (2004) The status and decline of vultures in the provinces of Punjab and Sind, Pakistan: a 2003 update. Pp. 221–234 in R. D. Chancellor and B.-U. Meyburg, eds. *Raptors Worldwide. Proceedings of the 6th world conference on birds of prey and owls.* Berlin and Budapest: WWGBP and MME/BirdLife Hungary.
Gilbert, M., Watson, R. T., Virani, M. Z., Oaks, J. L., Ahmed, S. *et al.* (2006) Rapid population declines and mortality clusters in three Oriental white-backed vulture *Gyps bengalensis* colonies in Pakistan due to diclofenac poisoning. *Oryx* 40: 388–399.
Gilbert, M., Watson, R. T. and Virani, M. Z. (2007a) Neck-drooping posture in oriental white-backed vultures (*Gyps bengalensis*): an unsuccessful predictor of mortality and its probable role in thermoregulation. *J. Raptor Res.* 41: 35–40.
Gilbert, M., Watson, R. T., Ahmed, S., Asim, M. and Johnson, J. A. (2007b) Vulture restaurants and their role in reducing diclofenac exposure in Asian vultures. *Bird Conserv. Internatn.* 17: 63–77.
Gooch, K., Culleton, B. F., Manns, B. J., Zhang, J.
G., Alfonso, H., Tonelli, M., Frank, C., Klarenbach, S. and Hemmelgarn, B. R. (2007) NSAID use and progression of chronic kidney disease. *Am. J. Med.* 120(3). doi:10.1016/j.amjmed.2006.02.015.
Green, R. E. (1995) Diagnosing causes of bird population declines. *Ibis* 137 (Suppl. 1): S47–S55.
Green, R. E. (2002) Diagnosing causes of population declines and selecting remedial actions. Pp. 139–156 in K. Norris and D. J. Pain, eds. *Conserving bird biodiversity*. Cambridge: Cambridge University Press.
Green, R. E., Newton, I., Shultz, S., Cunningham, A. A., Gilbert, M., Pain, D. J. and Prakash, V. (2004) Diclofenac poisoning as a cause of vulture population declines across the Indian subcontinent. *J. Appl. Ecol.* 41: 793–800.
Green, R. E., Taggart, M. A., Das, D., Pain, D. J., Sashikumar, C., Cunningham, A. A. and Cuthbert, R. (2006) Collapse of Asian vulture populations: risk of mortality from residues of the veterinary drug diclofenac in carcasses of treated cattle. *J. Appl. Ecol.* 43: 949–956.
Green, R. E., Taggart, M. A., Senacha, K. R., Pain, D. J., Jhala, Y. and Cuthbert, R. (2007) Rate of decline of the oriental white-backed vulture *Gyps bengalensis* population in India estimated from measurements of diclofenac in carcasses of domesticated ungulates. *PLoS ONE* 2(8): e686. doi:10.1371/journal.pone.0000686.
Griffith, R. T. H. (translator) (1870–1874) *Rámáyan of Válmiki*. London: Trübner & Co. and Benares: E. J. Lazarus & Co. http://www.sacred-texts.com/hin/rama/ry000.htm
Grubh, R. B., Narayan, G. and Satheesan, S. M. (1990) Conservation of vultures in (developing) India. Pp. 360–363 in J. C. Daniel and J. S. Serrao, eds. *Conservation in developing countries*. Bombay: BNHS/OUP.
Hamann, J. and Friton, G. M. (2003) Clinical efficacy of non-steroidal antiphlogistics in acute mastitis. *Prakt. Tierarzt* 84: 390.
Hersh, E. V., Lally, E. T. and Moore, P. A. (2005) Update on cyclooxygenase inhibitors: has a third COX isoform entered the fray?
*Current Med. Res. and Opinion* 21: 1217–1226.
Hickey, J. J. and Hunt, L. B. (1960) Initial songbird mortality following Dutch elm disease control programme. *J. Wildl. Manage.* 24: 259.
Hilton-Taylor, C. (2000) *2000 IUCN Red List of Threatened Species*. Gland, Switzerland, & Cambridge, UK: IUCN/SSC.
Houston, D. (1974) Food searching behaviour in griffon vultures. *Afr. J. Ecol.* 12: 63–77.
Houston, D. (1985) Indian white-backed vulture *Gyps bengalensis*. Pp. 465–466 in I. Newton and R. D. Chancellor, eds. *Conservation studies on raptors*. Cambridge: International Council for Bird Preservation. Technical Publication No. 5.
ILC (2003) Agricultural Statistics at a Glance 2003 and 17th Indian Livestock Census 2003. New Delhi: Dept. of Animal Husbandry and Dairying, Ministry of Agriculture, Govt. of India.
ISARPW (2004) Report on the international South Asian recovery plan workshop. *Buceros* 9: 1–48.
IUCN (2007) *2007 IUCN Red List of threatened species*. http://www.iucn.org
Johnson, J. A., Lerner, H. R. L., Rasmussen, P. C. and Mindell, D. P. (2006) Systematics within *Gyps* vultures: a clade at risk. *BioMed Central Evol. Biol.* 6: 65. doi:10.1186/1471-2148-6-65.
Katzner, T. and Parry-Jones, J., eds. (2001) Reports from the workshop on Indian *Gyps* vultures. 4th Eurasian Congress on Raptors, 25–29 September 2001, Pp. 4–6. Seville, Spain: Estación Biológica Doñana and Raptor Research Foundation.
Khan, A. A., Virani, M., Oaks, J. L., Benson, P. C., Gilbert, M., Watson, R. T. and Risebrough, R. W. (2001) A survey of the Oriental White-backed Vulture *Gyps bengalensis* in the Punjab Province, Pakistan. *J. Research (Science)* 12: 97–104.
Klein, P. N., Charmatz, K. and Langenberg, J. (1994) The effect of Flunixin meglumine (Banamine®) on the renal function in northern bobwhite (*Colinus virginianus*): an avian model. *Proc. American Assoc. Zoo Vet.* 1994: 128–131.
Meteyer, C. U., Rideout, B. A., Gilbert, M., Shivaprasad, H. L. and Oaks, J. L.
(2005) Pathology and pathophysiology of diclofenac poisoning in free-living and experimentally exposed oriental white-backed vultures (*Gyps bengalensis*). *J. Wildl. Dis.* 41: 707–716.
Milne, M. H., Nolan, A. M., Cripps, P. J. and Fitzpatrick, J. L. (2003) Assessment and alleviation of pain in dairy cows with mastitis. *Cattle Pract.* 11: 289–293.
MoEF (2006) Proceedings of the International Conference on Vulture Conservation. New Delhi: Ministry of Environment and Forests, Government of India.
Mohr, R. W., Telford, H. S., Peterson, E. H. and Walker, K. C. (1951) Toxicity of orchard insecticides to game birds in eastern Washington. *Wash. Agric. Exp. Sta. Circ.* 170: 22.
Mundy, P., Butchart, D., Ledger, J. and Piper, S. (1992) *The vultures of Africa*. London: Academic Press.
Newton, I. (1979) *Population ecology of raptors*. Berkhamsted, UK: Poyser.
Noble, S. and Balfour, J. A. (1996) Meloxicam. *Drugs* 51: 424–430.
Nys, Y. and Rzasa, J. (1983) Increase in uricemia induced by indomethacin in hens or chickens. *C. R. Séances Acad. Sci. III* 296: 401–404.
Oaks, J. L., Rideout, B. A., Gilbert, M., Watson, R., Virani, M. and Khan, A. A. (2001) Summary of diagnostic investigations into vulture mortality: Punjab Province, Pakistan, 2000–2001 [abstract]. 4th Eurasian Congress on Raptors, 25–29 September 2001. Seville, Spain: Estación Biológica Doñana and Raptor Research Foundation.
Oaks, J. L., Gilbert, M., Virani, M. Z., Watson, R. T., Meteyer, C. U., Rideout, B., Shivaprasad, H. L., Ahmed, S., Chaudhry, M. J. I., Arshad, M., Mahmood, S., Ali, A. and Khan, A. A. (2004a) Diclofenac residues as the cause of vulture population decline in Pakistan. *Nature* 427: 630–633.
Oaks, J. L., Donahoe, S. L., Rurangirwa, F. R., Rideout, B. A., Gilbert, M. and Virani, M. Z. (2004b) Identification of a novel mycoplasma species from an Oriental White-backed Vulture (*Gyps bengalensis*). *J. Clinical Microbiol.* 42: 5909–5912.
Pain, D. J. and Pienkowski, M. W., eds.
(1997) *Farming and birds in Europe: the Common Agricultural Policy and its implications for bird conservation*. London: Academic Press.
Pain, D. J., Cunningham, A. A., Donald, P. F., Duckworth, J. W., Houston, D. C., Katzner, T., Parry-Jones, J., Poole, C., Prakash, V., Round, P. and Timmins, R. (2003) *Gyps* vulture declines in Asia: temporospatial trends, causes and impacts. *Conserv. Biol.* 17: 661–671.
Prakash, V. (1999) Status of vultures in Keoladeo National Park, Bharatpur, Rajasthan, with special reference to population crash in *Gyps* species. *J. Bombay Nat. Hist. Soc.* 96: 365–378.
Prakash, V., Pain, D. J., Cunningham, A. A., Donald, P. F., Prakash, N., Verma, A., Gargi, R., Sivakumar, S. and Rahmani, A. R. (2003) Catastrophic collapse of Indian white-backed *Gyps bengalensis* and long-billed *Gyps indicus* vulture populations. *Biol. Conserv.* 109: 381–390.
Prakash, V., Pain, D. J., Cunningham, A. A., Donald, P. F., Prakash, N., Verma, A., Gargi, R., Sivakumar, S. and Rahmani, A. R. (2005a) Corrigendum to "Catastrophic collapse of Indian white-backed *Gyps bengalensis* and long-billed *Gyps indicus* vulture populations" [*Biol. Conserv.* 109 (2003) 381–390]. *Biol. Conserv.* 124: 559.
Prakash, V., Green, R. E., Rahmani, A. R., Pain, D. J., Virani, M. Z., Khan, A. A., Baral, H. S., Jhala, Y. V., Naoroji, R., Shah, N., Bowden, C. G. R., Choudhury, B. C., Narayan, G. and Gautam, P. (2005b) Evidence to support that diclofenac caused catastrophic vulture population decline. *Current Sci.* 88: 2.
Prakash, V., Green, R. E., Pain, D. J., Ranade, S. P., Saravanan, S., Prakash, N., Venkitachalam, R., Cuthbert, R., Rahmani, A. R. and Cunningham, A. A. (2007) Recent changes in populations of resident *Gyps* vultures in India. *J. Bombay Nat. Hist. Soc.* 104: 129–135.
Proffitt, F. and Bagla, P. (2004) Circling in on a vulture killer. *Science* 306: 223.
Rasmussen, P. C. and Parry, S. J. (2001) The taxonomic status of the 'Long-billed' Vulture *Gyps indicus*. *Vulture News* 44: 18–21.
Ratcliffe, D. A. (1967) Decrease in eggshell weight in certain birds of prey. *Nature* 215: 208.
Rattner, B. A., Whitehead, M. A., Gasper, G., Meteyer, C. U., Link, W. A., Taggart, M. A., Meharg, A. A., Pattee, O. H. and Pain, D. J. (in press) Apparent tolerance of turkey vultures (*Cathartes aura*) to the non-steroidal anti-inflammatory drug diclofenac. *Environmental Toxicology and Chemistry*.
Ruxton, G. D. and Houston, D. C. (2004) Obligate vertebrate scavengers must be large soaring fliers. *J. Theoret. Biol.* 228: 431–436.
Samant, J. S., Prakash, V. and Naoroji, R. (1995) Ecology and behaviour of resident raptors with special reference to endangered species. Final Report to the U.S. Fish & Wildlife Service, Grant No. 14-1600009-90-1257. Mumbai: Bombay Natural History Society.
Sarrazin, F., Bagnolini, C., Pinna, J. L., Danchin, E. and Clobert, J. (1994) High survival of griffon vultures (*Gyps fulvus fulvus*) in a reintroduced population. *Auk* 111: 853–862.
Shultz, S., Baral, H. S., Charman, S., Cunningham, A. A., Das, D., Ghalsasi, G. R., Goudar, M. S., Green, R. E., Jones, A., Nighot, P., Pain, D. J. and Prakash, V. (2004) Diclofenac poisoning is widespread in declining vulture populations across the Indian subcontinent. *Proc. Roy. Soc. Lond. B* 271 (Suppl. 6): S458–S460. doi:10.1098/rsbl.2004.0223.
Srikosamatara, S. and Suteethorn, V. (1995) Populations of Gaur and Banteng and their management in Thailand. *Nat. Hist. Bull. Siam Soc.* 43: 55–83.
Swan, G. E., Cuthbert, R., Quevedo, M., Green, R. E., Pain, D. J., Bartels, P., Cunningham, A. A., Duncan, N., Meharg, A. A., Oaks, J. L., Parry-Jones, J., Shultz, S., Taggart, M. A., Verdoorn, G. and Wolter, K. (2006a) Toxicity of diclofenac to *Gyps* vultures. *Biol. Lett.* 2: 279–282. doi:10.1098/rsbl.2005.0425.
Swan, G., Naidoo, V., Cuthbert, R., Green, R. E., Pain, D.
J., Swarup, D., Prakash, V., Taggart, M., Bekker, L., Das, D., Diekmann, J., Diekmann, M., Killian, E., Meharg, A., Patra, R. C., Saini, M. and Wolter, K. (2006b) Removing the threat of diclofenac to Critically Endangered Asian vultures. *PLoS Biology* 4(3): e66.
Swarup, D., Patra, R. C., Prakash, V., Cuthbert, R., Das, D., Avari, P., Pain, D. J., Green, R. E., Sharma, A. K., Saini, M., Das, D. and Taggart, M. (2007) The safety of meloxicam to critically endangered *Gyps* vultures and other scavenging birds in India. *Anim. Conserv.* 10: 192–198.
Taggart, M. A., Cuthbert, R., Das, D., Pain, D. J., Green, R. E., Shultz, S., Cunningham, A. A. and Meharg, A. A. (2006) Diclofenac disposition in Indian cow and goat with reference to *Gyps* vulture population declines. *Environ. Pollut.* 147: 60–65.
Taggart, M. A., Senacha, K., Green, R. E., Jhala, Y. V., Raghavan, B., Rahmani, A. R., Cuthbert, R., Pain, D. J. and Meharg, A. A. (2007) Diclofenac residues in carcasses of domestic ungulates available to vultures in India. *Environ. Internatn.* 33: 759–765. doi:10.1016/j.envint.2007.02.010.
Thiollay, J.-M. (2006) The decline of raptors in West Africa: long-term assessment and the role of protected areas. *Ibis* 148: 240–254.
Wurster, C. F., Wurster, D. H. and Strickland, W. N. (1965) Bird mortality after spraying for Dutch elm disease with DDT. *Science* 148: 90.

ANDREW A. CUNNINGHAM, Institute of Zoology, Zoological Society of London, Regent's Park, London NW1 4RY, U.K.
DEVOJIT DAS, VIBHU PRAKASH, ASAD RAHMANI, SACHIN P. RANADE, KALU RAM SENACHA, S. SARAVANAN, NITA SHAH, Bombay Natural History Society, Hornbill House, Mumbai, 400023, India.
MARTIN GILBERT, RICHARD T. WATSON, MUNIR Z. VIRANI, The Peregrine Fund, 5668 West Flying Hawk Lane, Boise, Idaho 83709, U.S.A.
RAM D. JAKATI, Haryana Forest Department, Van Bhawan, Sector 6, Panchkula, 134109, Haryana, India.
YADVENDRADEV JHALA, Wildlife Institute of India, Post Bag #18, Chandrabani, Dehradun, 248001, Uttaranchal, India.
ALEEM A. KHAN, Ornithological Society of Pakistan, 109/D P.O. Box 73, Dera Ghazi Khan, Pakistan.
VINNY NAIDOO, GERRY SWAN, Department of Paraclinical Sciences, Faculty of Veterinary Science, University of Pretoria, Onderstepoort, South Africa.
J. LINDSAY OAKS, Department of Veterinary Microbiology and Pathology, Washington State University, Pullman, Washington 99164-7040, U.S.A.
JEMIMA PARRY-JONES, International Centre for Birds of Prey, Little Orchard Farm, Eardisland, Herefordshire HR6 9AS, U.K.
HEM SAGAR BARAL, Bird Conservation Nepal, P.O. Box 12465, Lazimpat, Kathmandu, Nepal.
DEVENDRA SWARUP, Indian Veterinary Research Institute, Izatnagar 243122, Uttar Pradesh, India.
MARK A. TAGGART, School of Biological Sciences, Dept of Plant & Soil Science, University of Aberdeen, AB24 3UU, U.K.
KERRI WOLTER, Rhino & Lion Wildlife Conservation NPO, "Vulture Programme", Kromdraai, South Africa.
* Author for correspondence; Director of Conservation, Wildfowl & Wetlands Trust (WWT), Slimbridge, Glos GL2 7BT, U.K.; e-mail: email@example.com
Touchstone
Journal of the Oregon Massage Therapists Association
Good Scents · Colored Light Therapy · Technology Tools

In This Issue...

Massage
- Good Scents, p. 4
- New Hillsboro Area Rep, p. 9
- Colored Light Therapy, p. 5

Business Practices
- It's Your Money, p. 7
- iTunes University, p. 9
- OBMT Updates, p. 8
- Oregon Massage Statistics, p. 3
- Quickbooks Questions, p. 7

OMTA Information
- 2009 Election Results, p. 3
- Executive Committee, p. 10

Other
- Calendar of Events, p. 9
- Classifieds, p. 3

Photo credits: Heather Bennouri

Support Our Advertisers
- Breitenbush Hot Springs, page 8
- Institute for Esogetic Colorpuncture, page 4

Touchstone Publication Information
Touchstone is the journal of the Oregon Massage Therapists Association, published several times a year. We welcome feedback, including letters to the editor. Letters will be printed, but may be edited for length and clarity. We also welcome topic requests for future articles, as well as article submissions. For details on article requirements, advertising, and other questions, please contact Touchstone editor Heather Bennouri at 8827 SW Blake St, Tualatin, OR 97062, email@example.com, or (971) 570-5404. Current advertising sizes and rates are posted at: www.omta.net/touchstone_ad_rates.pdf

Thank You For Your Support!
OMTA would like to recognize the following companies for their generous donations to the 2009 OMTA Annual Conference Raffle and Auction.
- Custom Craftworks
- Oregon School of Massage
- Robert Hunter & Co.

2009 OMTA Election Results
Tallied November 7, 2009; 54 ballots and surveys returned.
- Secretary: Joni Kutner (50-0)
- Membership Coordinator: Heather Bennouri (50-0)
- Change the OMTA Bylaws as Proposed: PASS (47-3)

On November 8, 2009, these officers were inducted, along with the following new appointments and changes of office:
- Kami Manselle was appointed as Treasurer.
- Carol Duncan resigned as State Coordinator and was appointed as Vice President.
- Emden Griffin was appointed as State Coordinator.
- Neva Winter was appointed as Hillsboro Area Representative.

Based on the survey results, the OMTA Executive Committee will be evaluating how to hold the annual conference next year and has begun forming an action plan to address the governor's suggestion to suspend the OBMT. The Executive Committee voted to release the following statement to the Governor, the Legislature, the massage profession, and the public:

The Oregon Massage Therapists Association (OMTA), a professional association of licensed massage therapists in Oregon, supports the Oregon Board of Massage Therapists (OBMT) in its current semi-independent structure. The OBMT has made positive changes that protect both the public and its licensees, and it serves a critical role in keeping massage therapy a safe and respected profession. The State of Oregon has been a leader in eliminating unsafe and unethical practices in the field of massage and has been a role model for legislation, policy, and procedures for other states throughout the country. Please help keep the profession of massage a legitimate and valuable resource for Oregonians. Keeping the OBMT in its self-funded, self-sufficient format ensures quality regulation and job security for the 6000 Licensed Massage Therapists currently practicing.
Oregon Massage Therapy Statistics
Source: OBMT records as of October 2009.
Note: The total license count varies on some of these reports as they were run on different days, reflecting minor changes that occurred in renewals, lapses, and new licenses during that time.

- Active LMTs: 5891 (includes 64 new licensees); Male: 17.28%, Female: 82.72%
- Inactive licenses: 836; Male: 18.30%, Female: 81.70%
- Total: 6720 licenses; Male: 17.41%, Female: 82.59%

Demographics by age

| Age (years) | LMTs |
|---|---|
| <20 | 7 |
| 20-29 | 1295 |
| 30-39 | 1919 |
| 40-49 | 1392 |
| 50-59 | 1451 |
| 60-69 | 599 |
| 70-79 | 56 |
| 80+ | 7 |

Demographics by license duration

| Years | Active | Inactive |
|---|---|---|
| <2 | 1170 | 12 |
| 3 | 591 | 37 |
| 4 | 569 | 60 |
| 5-10 | 1876 | 345 |
| 11-20 | 1301 | 294 |
| 21-30 | 334 | 77 |
| 31-40 | 50 | 8 |
| 41+ | 0 | 5 |

There were 135 modalities listed that LMTs practice, with an average of just over three modalities per LMT. The most common were Swedish (93.8%), Deep Tissue (56.2%), Reiki (12.8%), Myofascial Release (12.5%), Sports Massage (12.0%), Trigger Point (11.6%), and Reflexology (9.2%).

Drifting into Bliss: available at New Renaissance Bookshop on NW 23rd Street in Portland, at the Oregon School of Massage in Portland and Salem, and online at www.driftingintobliss.com

Good Scents: An Essential Introduction
Carol Duncan, LMT, RA

By now we are all familiar with Aromatherapy. It seems everywhere we turn, there are companies looking to make a profit on anything with a fragrance. Just stop skipping over commercials with your DVR and you'll surely see at least one every hour. These range from Plug-Ins to vacuum canisters with an essential oil well for improving the smell of your rooms, to candles and other scented products. As a massage therapist, you probably have at least one or two bottles lying around that add to the experience of your massage. Then there are the heavy-duty users who have thousands of dollars tied up in every conceivable essence imaginable. Did you know there was essential oil of carrot seed?
This is the first in a series of articles designed to help you develop a better understanding of essential oils. From common to less-used essences, different methods of utilizing them will be discussed to help you enhance both your massage practice and your life. First and foremost, essential oils do not follow the rule that if a little is good, then a lot is better. Because essential oils are highly concentrated, they can burn if improperly used. A good rule of thumb is to never use oils "neat" (without diluting them) on the skin unless you are a trained professional. I recommend that you research any oils you use to learn the best way to use them: perhaps in a massage blend, inhaled, or in a potpourri. Some can even be taken internally, but this should only be done under the care of a Licensed Physician, Naturopathic Physician, or Registered Aromatherapist. Diffusion (where the essential oil is allowed to permeate the air so that it can be inhaled) is popular. There are many types of diffusers, ranging from ones that plug into car lighters to expensive electric ones that can diffuse through an entire office building. Typically, diffusion is fine in your home with an oil that you love. Remember, though, that not all who come into contact with diffused oils will appreciate the same essences, and what is loved by some can be offensive to others. To test an essential oil with an individual, a simple way to start is to allow them to choose a scent, or recommend one for a specific reason. Put a small amount on a cotton ball and place it under their nose. A drop of eucalyptus or ravensara on the cotton ball can help keep sinuses open, which can be particularly effective when your client comes out of the face cradle and has that initial "stuffed up" feeling. One of the fascinating things about the sense of smell is the connection to memory.
Many people associate memories with specific smells and can recall events they may not have remembered otherwise when they come in contact with that smell later. Using this to your advantage, you can send the cotton ball home with the client to smell until the aroma has dispersed. This can last several days or even a week, depending on the oil used. An added bonus to using this trick is that each time your client smells the aroma, they will think of you and their last visit. They can even share the smell with friends, and that can generate more interest in your massage business. People love little gifts, and this can be a gift that keeps on giving. Let the client know that to fill a room with the aroma, they can place the cotton ball in a heater vent or in the grate of a fan. The aroma can be changed out very quickly by swapping in a new cotton ball. Throw the old cotton ball in the trash for a fresher-smelling trash bin. I like to put the used cotton ball under the seat in my car. Each time the heat comes on or the sun warms up the inside of the car, the aroma fills the space and is refreshing each time I get in to go somewhere. Be creative and think of other ways to use up the remnants of a good aroma.

Carol Duncan has been an LMT since 1997 and a Registered Aromatherapist since 2004. Trained through the Australasian College of Health Sciences, with continuing education in Provence, France, she does Raindrop Therapy, Aromareflex, and custom blending. Duncan owns and operates Massage Central, a thriving massage practice in Sutherlin, Oregon.

Classifieds

Job Openings/Space for Rent

Treatment Space for Sublease: 2 weekends a month, with an occasional full week during the month. Beautiful space near Washington Square on Hall Blvd. Includes use of hydraulic table, Hydrocollator and Vibracasser. $200.00/month.
Contact Mary Elizabeth Smith: 503-626-1950 #1490, or visit her web site at www.luminnutritionandwellness.com

Services

Affordable Natural Skincare: Back to Nature Facials and Massage, specializing in 100% Oxygen Spray Facials, offers OMTA members a 10% discount off facial and waxing services and skincare products. Close to I-5 and set in a beautiful forested area in Lake Oswego, we provide the highest quality skincare. Facial treatments are corrective for skin conditions yet relaxing and pampering. Call (503) 670-7749 for a consultation: Leslie Martinsen, LE, LMT #6672. See website at www.backtonaturefacials.com.

Volunteer Opportunities

Seeking massage therapists to volunteer internationally. Information at www.ngoabroad.com or via email at firstname.lastname@example.org. NGOabroad is a nonprofit organization that provides frugal, customized international volunteer options and helps people enter international humanitarian work.
- INDIA: rural areas need massage therapists to work on weary villagers who have just dug wells, latrines, or done other hard labor.
- CHILDREN: help untangle the emotional knots of children who have been abused or neglected.
- LIVELIHOODS: teach massage as an employment skill, a ticket out of poverty.

Other/Misc.

Yachats Beach House. This welcoming house is located just steps away from easy access to the eight-mile beach between Yachats and Waldport. It has 2 bedrooms, 1.5 baths, a fully equipped kitchen, a fire-view woodstove, and large windows and decks for ocean viewing. With sleeper sofas in the living room and library, it sleeps eight. Pets are welcome. $120 winter, $155 summer; 7th night free in winter. For info, contact Glenda Jones, (541) 726-9720, or ethelscoastcottage.com.

Colored Light Therapy

New light healing technologies, which harness color and light frequencies, are now used with excellent and fascinating results. The following case study is an example of how colored light can be used to facilitate healing at body, soul, and spirit levels simultaneously.
Ann, a 45-year-old businesswoman, came to me for treatment after a mastectomy and lymph node surgery had left her with a painful condition of edema, or swelling, in her arms and hands. As we talked about her situation, I also learned that Ann was in a frustrating, underpaid job and was struggling in her intimate relationship with her partner. I explained that colored light applied to her body would balance Ann's energy flow and restore proper "biocommunication" between her cells. In particular, a light treatment designed to regulate the lymph system might help resolve her edema problem. I also mentioned that the light would bring up and help release any old unresolved emotional conflicts that might have stressed her immune system and made her vulnerable to cancer in the first place. After the first treatment, Ann called to say that the swelling from her edema had improved considerably. She also noticed a general sense of well-being and relaxation. Over the course of several treatments, Ann continued to improve physically, but what amazed her most were the psychological and emotional shifts she experienced. Ann's dreams began to take on a more vivid quality, often containing powerful symbols and messages. Long buried memories of her childhood began to surface. She had had a very difficult childhood, especially a painful relationship with a cold, rejecting mother. The pain of these memories slowly surfaced and released with the light treatments. Eventually, Ann began to question her current life situation. She realized that her personal relationship was emotionally abusive and decided to end it. Not long after that, she decided to quit her job, which she felt was a dead-end. Instead, she returned to graduate school to pursue her long-held dream of obtaining a graduate degree in art therapy. To date, her original symptom of edema no longer bothers her and her general health is good.
As this story suggests, colored light is a powerful key to unlock the mysterious connections between body, mind and spirit. Light has often been used as a metaphor for the highest potential in human development, “enlightenment” and the ultimate direction for our souls, “into the light.” I suggest that these metaphors are pointing toward a more literal truth. Light can help us to heal emotionally and evolve spiritually, even as it supports our physical healing. **How Do Light Frequencies Affect the Human Body?** Light frequencies enter the body through the eye and the skin. Electromagnetic impulses activated by light travel along the optic nerve of the eye deep into the brain. There, for example, the impulses influence the workings of the hypothalamus (a part of your middle brain), the pituitary (the master endocrine gland of the body) and the pineal gland. These parts of your brain and glandular system are involved with the production of biochemical substances that influence many bodily functions – including mood regulation, the onset of puberty, sexual functioning, aging, the immune system and much more. 
**Primary and Secondary Colors Used in Colorpuncture**

| Color | General Action | Physical Effects | Emotional Effects |
|-------|----------------|------------------|-------------------|
| Red | Hot, greatest power of stimulation | Improves circulation; helps with coughs, asthma, anemia, eczema | Excites, arouses passions, cheers, loosens tongue |
| Green | Neutral, sedating, soothing, relaxing | Helps with inflammation in joints, promotes detoxification, reduces edema | Promotes contentment, tranquility |
| Blue | Cold, relaxing, clears heat | Reduces pain and congestion; helps with insomnia and menopausal problems | Promotes quietness and reserve |
| Orange | Warming, gives energy, raises spirits | Stimulates appetite, helps weight gain, supports the heart, helps exhaustion | Reduces fear, depression, pessimism; promotes joy and happiness |
| Yellow | Warming, sun at zenith, stimulates and strengthens | Promotes digestion, strengthens nerves, stimulates stomach | Promotes learning and intellect, brightens |
| Violet | Calming, brings awareness and consciousness, prepares for meditation | Helps lymphatic system and spleen | Promotes spiritual strength and consciousness |

Light-sensitive, or photoreceptive, cells, once thought to exist only in the retina of the eye, are actually distributed through every tissue of the body. Scientists now understand that light entering the skin also travels deep into the body via the acupuncture meridians and, even more subtly, from cell to cell. A renowned German biophysicist, Dr. Fritz-Albert Popp, has come closest to proving that we are actually beings of light. Popp’s research has demonstrated that human cells constantly emit low levels of light radiation. He calls this radiation “biophoton emission.” Popp believes that cells communicate via biophotons. These findings inspired Popp’s colleague, the German naturopath and light therapy originator Peter Mandel. According to Mandel: “Light is life . . . 
Specifically, light is present in the communication between the cells in the body, and disease occurs when this communication is broken, when the cells can no longer speak the same language. Giving light has a resonance effect, bringing cells into the same language again and healing the body.” In addition, each wavelength of light, perceived by the human eye as a different color, has different effects on both body and mind. The chart on page 4 summarizes how various colors are used in the system of Esoteric Colorpuncture™. These particular guidelines are fairly consistent across most color therapy systems. For example, red light tends to have the greatest power of penetration and is stimulating or even heating. In the body, red light helps increase blood supply and circulation in an injured area, thereby promoting faster healing. Emotionally, red light has a cheering and exciting effect, and can arouse passion. **Lighting Up the Frontiers of the Bodymind** Today’s pioneers in the field of color and light therapy are particularly interested in light’s capacity to reach into our subconscious mind with ease and speed. The fact that light can so profoundly affect our emotions and our spirit, even as it influences the well-being of the body, is inspiring the development of many new light therapy technologies. “Light has a way of bringing up to the surface old, unresolved, unexpressed emotional trauma, which I feel are the roots of the weed we call disease,” says optometrist, light therapist, and author Jacob Liberman. Liberman developed a system to introduce light through the eye for bodymind healing. Psychotherapist and light practitioner Dr. Stephen Vasquez also believes that colored light acts as a catalyst to bring unconscious material to the surface. In his psychotherapy practice, he combines colored light, introduced through the eye, with traditional counseling methods to speed up clients’ healing processes. 
Peter Mandel originated a system of acu-light therapy in which colored light is applied to points on the skin. Mandel maintains that light and color can heal the “background” of illness: long-held emotional conflicts which weaken our bodies and set us up for disease. He believes light can be used to speed up the exchange of information between the conscious, unconscious and super-conscious mind, thereby supporting our individual evolution. Whatever the light therapy method, healing with colored light is a gentle and uniquely respectful process. Light never imposes any particular direction upon the client. Rather, it supports the discovery of your own truth. For each of us, the journey toward healing and self-discovery will follow an individual path. In closing, consider these words by Peter Mandel for your inspiration: “We who are imprisoned in matter have to bring our [inner] ‘I’ out of matter and darkness, and into the light. On the level of the material world, we humans, in our wholeness, are light beings. We must and always will develop toward the absolute light, which we call God. In this process, we are accompanied by the light on the outside and, if we allow it, the light on the inside.” *Manohar Croke, M.A., CCP is the founder and Director of the Institute for Esoteric Colorpuncture, USA, dedicated to sharing the work of Peter Mandel in the United States. She teaches seminars on the Esoteric Colorpuncture acu-light system of Peter Mandel around the country and lectures and writes on the subject of light therapy. 
A psychotherapist with training in trauma resolution and psycho-spiritual process work, Manohar uses colored light in her own private practice to support the healing and personal evolution of her clients.* --- **Upcoming Classes** **Introduction to Esoteric Colorpuncture** Portland, Oregon February 20-21, 2010 **Professional Certification Course** Seattle, Washington Starts April 2010 For more information, contact Manohar Croke, Director Institute for Esoteric Colorpuncture, USA PMB 165 101 W. McKnight Way, Ste B Grass Valley, CA 95949 (530) 362-6908 email@example.com www.colorpuncture.org --- The Breitenbush Healing Arts Team is seeking Oregon LMTs to fill temporary, seasonal, and periodic year-round positions. Call, e-mail, or access our website for information and application. 503-854-3320 ext. 119 firstname.lastname@example.org www.breitenbush.com There are lots of ways to make money. There are even more ways to save money. Here are two tips that could save or make you thousands of dollars this year! **Tip #1: Make Money** The Department of Veterans Affairs (the VA) operates clinics and hospitals throughout Oregon. There is an office that helps businesses get contracts with the VA. The Office of Small and Disadvantaged Business Utilization (OSDBU) advocates for, assists, and supports the interests of small businesses. They are particularly looking for the maximum practical participation of small, disadvantaged, veteran-owned, women-owned and empowerment zone businesses in contracts awarded by the VA. They advise businesses on marketing their products and services to the VA and other federal agencies. You can find them at http://www4.va.gov/osdbu/. Think big. Market your services to people who really need healing: our returning veterans. **Tip #2: Save Money** The State of Oregon has a program where you do not have to pay your real estate taxes! If you own your own home and are elderly or disabled, the state will pay your county the taxes due. 
The taxes, interest at six percent, and a deferral fee will be placed as a lien on your title. To qualify, your household income must be less than $39,000 (the amount changes every year), you must own the property (or be buying it), and you must live on the property. If at least one property owner is disabled (on Social Security disability), you may qualify under the disabled option. If at least one spouse is 62, you may qualify under the elderly option. There are a few minor restrictions, but most homes will qualify. If you sell the home, the liens become due and must be paid. But if you live in the house until you die, you will never have to pay the taxes, though whoever inherits the property will. This program may save you thousands of dollars every year. See http://www.oregon.gov/DOR/SCD/scsa.shtml or your county tax office for all the details. **OMTA Needs New Rules Committee Liaison** For the past four years, OMTA has had a representative on the Rules Committee. This position is now vacant and needs to be filled in order for the organization to be represented as the Rules Committee moves forward. All LMTs are welcome on the committee, but OMTA needs a formal representative for this position who is responsible for reporting the items addressed by the committee and taking concerns of OMTA members to the committee for consideration. If you are interested, please contact OMTA President Robert Bike at (541) 465-9486. **Handling Duplicate Transactions** Jennifer Rodriguez Have you ever run into the situation where you entered a transaction twice in QuickBooks? Don’t worry—these brain cramps happen to all of us periodically. **Delete vs. Void** The question then becomes: how do you get rid of the duplicate transaction? While your first thought might be to delete the transaction, that choice is NOT your best one. There needs to be a trail, and deletion can eliminate that trail entirely. 
Whenever you have a situation where something has been duplicated, the best practice is to VOID the transaction so the trail remains. **Correct the Error Properly** Let’s say you have entered a check twice into your register. If you right-click on the check, or click Edit in the menu bar, you will find the options to either Delete Check or Void Check. (If you use a version of QuickBooks older than 2009, you will not see Delete Check as a right-click option.) Either way, be sure to choose the Void Check option. By choosing the “Void” option, you leave the transaction intact within QuickBooks. The only thing that changes is that the amount of the transaction is reset to $0.00. **Why Voiding Is Better** It keeps your outside/tax accountant sane. If you delete a transaction, it is completely gone from QuickBooks (other than a record created in the audit trail). Your outside accountant normally creates workpapers and reports based on your QuickBooks data. In many cases, the accountant uses those numbers on an ongoing basis. If you delete a transaction that affects one of those numbers, you’ve immediately introduced heart palpitations and created a “Tums Moment,” as the numbers won’t match because the transaction doesn’t exist anymore. Granted, even if you void the transaction, the numbers won’t match, but voiding leaves a trail to follow. It is much easier to deal with something that has been voided than with something that has been deleted. You can include a helpful note on a voided transaction. The fact that QuickBooks retains all the other details of the transaction allows you to add a note for future reference as to why this transaction was voided. Deleting does not allow this option. **Best Practice** Every transaction entry screen (invoice, bill, check, credit card, etc.) in QuickBooks has the option to Void or Delete. Choose Void: your outside accountant will hug you for making this choice! 
Jennifer Rodriguez is a specialist with QuickBooks Pro, Premier, and Enterprise. For more information, contact Jennifer Rodriguez at: www.pdxbookkeeper.com • (503) 995-1929 New clients eligible for a free 30-minute phone consultation OBMT Updates Multiple Discipline Task Force Formed The Board of Massage Therapists has created a Multi-Discipline Task Force in response to questions and concerns raised regarding the various modalities regulated under the practice of massage. The Task Force is charged with: 1. Identifying constituent issues, concerns and questions; 2. Developing a collaborative process to address the issues identified; 3. Researching and gathering pertinent information; 4. Making solution-based recommendations to the OBMT and interested parties; and 5. Assisting the board with the dissemination and/or implementation of recommendations adopted by the Board, which may include meeting with legislators or other key individuals. This task force will be chaired by John Combe, Oregon LMT #7492, and composed of volunteers representing a variety of modalities. These meetings will be open to the public. The meeting needs and schedule will be determined by the committee; the next meeting is planned for mid- to late January. We are looking for volunteers who are interested in contributing to and participating in a collaborative process. Individuals volunteering for this task force should be representative of modalities regulated under the practice of massage, and should be passionate, informed and interested in reaching a knowledge-based outcome. Does this sound like you? 
If so, complete the volunteer application form on the OBMT website at: www.oregon.gov/OBMT/docs/Volunteer_interest_form.pdf You can send in your application via Email: email@example.com Fax: (503) 385-4465 Snail mail: Oregon Board of Massage Therapists 748 Hawthorne Avenue NE Salem OR 97301 New Hillsboro Area Rep Neva Winter Inducted at November Meeting Neva Winter owns and operates Winters Main Massage, a thriving private practice in Hillsboro. She has a degree in business management and graduated from the East West College of Healing Arts. She looks forward to bringing her business background and a fresh perspective to the massage world as insight for the Executive Committee and as a resource for other LMTs. Neva will be a fantastic resource in building OMTA as a large community of learning and support for LMTs. Hillsboro Area Representative Neva Winter #14997 • 4004 E Main St • Hillsboro, OR 97123 • 503-484-7565 • firstname.lastname@example.org If you have a topic you’d like to see covered, if you would like information on meetings, or if you would like to be a presenter, please contact Neva. LMTs are welcome from all areas of the state. For other area reps and locations, see page 10. Add the Power of Color & Light to Your Healing Practice! Esogetic Colorpuncture™ Acu-light Therapy is a wholistic healing system in which colored light of specific frequencies is applied to acu-points on the skin. These treatments create powerful shifts in the emotions and consciousness, while simultaneously supporting the body’s natural healing processes. Upcoming Classes Sponsored by IEC, USA: • Introduction to Esogetic Colorpuncture • Professional Certification Course Portland, OR Feb 20-21, 2010 Seattle, WA Starts April 2010 IEC, USA is approved by the National Certification Board for Therapeutic Massage and Bodywork (NCBTMB) as a continuing education approved provider. RECEIVE A 10% DISCOUNT ON THE INTRODUCTORY CLASS WITH THIS AD! 
For more information, visit: www.colorpuncture.org or call us at 530-362-6908 Free Classes from Schools Around the World Need to brush up on your A&P? Looking for marketing ideas to help carry you through a struggling economy? How about trying to write a successful business plan and pitch it to get a loan? If you’re not sure where to start—and, more importantly, you don’t want to waste money trying out different things that may not work for you—there’s no better deal than FREE. You do need a computer, internet access, and a little time, but the rest is easy. Apple launched iTunes University, which has a library of a wide range of courses from colleges and universities around the U.S. and the world. Some of the courses are essentially video of regular classes, while others are shorter audio clips. Schools range from Stanford and Yale to state schools and international universities abroad. You can search either by subject matter or by school. Each item is called a “track,” with a series of tracks making up different courses. You can download individual tracks or entire courses, and you can subscribe to a course so that when new tracks are released, they are automatically downloaded. These courses can be great refreshers or a good start on learning a new topic. If a course is related to massage, or to how you run your massage business, you can use it for non-contact hours for continuing education with your license renewal in Oregon (more on this in the box at the end of the article). Some prefer the video courses, which have visuals of hands-on instruction techniques and can include diagrams. Others prefer the audio-only courses, as they can download them to their iPod (or burn them to CD) and listen on the go. Some of the audio courses can be challenging, as the instructor sometimes refers to diagrams or objects that cannot be transmitted through audio. However, for the most part, these can still be very beneficial. Calendar of Events Monday, January 11, 2010, 9:00 A.M. 
OBMT meeting Board office, 748 Hawthorne Ave, Salem, OR Agenda available online at http://www.oregon.gov/OBMT/minutes.shtml Sunday, January 24, 2010, 9:00 A.M.–4:30 P.M. Emotional Freedom Techniques Robert Bike, LMT 5473, EFT-ADV Register online at: eft1p.eventbrite.com $125.00 Repetitive Strain Injuries—Upper Extremities Donovan Monroe, LMT 10214 Register online at: rsportland.eventbrite.com $125.00 European Sports Stretching Carol Duncan, LMT 6367, RA Register online at: essp.eventbrite.com $125.00 All classes will be held at Oregon School of Massage 9500 SW Barbur Blvd. Suite 100, Portland, OR Monday, January 25, 2010, 7:00–9:00 P.M. Eugene Area Meeting: Trigger Point Therapy Speaker: Walter Libby, LMT Market of Choice on 28th and Willamette in Eugene Two CE contact hours FREE for OMTA members, $10.00 for nonmembers Saturday-Sunday, February 20-21, 2010 Introduction to Esoteric Colorpuncture Portland, Oregon (530) 362-6908 email@example.com www.colorpuncture.org Accessing iTunes University - Open iTunes. (If you don’t already have iTunes, it is a free download from www.itunes.com.) - Click on the iTunes Store button on the left. (You may need to set up an iTunes account, which requires a credit card but will not bill it if you do not purchase anything.) - At the top right, select the iTunes U button. - About halfway down on the left is a Categories box, with different topics available from iTunes University. You can choose a topic from this area or from a selected provider in one of the boxes below. - Different options will appear in the main part of the iTunes window. Click on one of the courses that interests you. 
- You can then download individual tracks by clicking on “Get” (to the right of that individual track), download an entire series by clicking on “Get Tracks” (near the title and description), and download future additions to the series by clicking on “Subscribe.” - Once your downloads have completed, you can access them from your iTunes Library (at the far left of your iTunes screen) under iTunes U. Tracks will be sorted by course. Click on a track and click the “Play” button to watch/listen. - You can leave them on your computer, delete and redownload in the future, or download to your iPod or iPhone (although you do not need either of these to view your tracks.) Claiming Non-Contact Continuing Education Hours - Review the track(s) related to massage or massage business practices. - Record the time spent reviewing them. Course times are listed in minutes in iTunes. Remember that you need to spend an actual clock hour on the information to receive one CE hour (so if you are watching/listening to 6-minute tracks, you would need to watch 10 tracks to get an hour of CE). - Record the track name(s), dates, university, and topic. - Write a one-page summary for each hour of information. Oregon Massage Therapists Association Executive Committee OMTA holds annual elections to select the Executive Committee (EC). Elections are open to all current OMTA members with voting status. President and Vice President are elected in even years. Secretary, Treasurer, and Membership are elected in odd years. All other positions are appointed by the elected officers. EC positions are volunteer except Conference Registrar and Ad Manager. Elected Officers The positions of Membership, Secretary, and Treasurer are up for election in October 2009. Nominations are now open (see page 8 of this issue for more information). President Robert Bike 1710 Oakhurst Court Eugene, OR 97402 (541) 465-9486 firstname.lastname@example.org Vice President Carol Duncan 1007 W. 
Central Ave Sutherlin, OR 97479 (541) 584-2810 email@example.com Secretary Joni Kutner 1630 Ash Street Lake Oswego, OR (503) 635-7591 firstname.lastname@example.org Treasurer Kami Manselle 4808 SE Ina Ave Milwaukie, OR 97267 (503) 957-9223 email@example.com Membership Heather Bennouri 8827 SW Blake St Tualatin, OR 97062 (971) 570-5404 firstname.lastname@example.org Appointed Positions Advertising Manager Vacant Contact OMTA President for details if you are interested Conference Registrar Vacant See Membership Coordinator for contact information OMTA Library Bruno DeBlock PO Box 306 Bend, OR 97709 (541) 330-1980 email@example.com State Coordinator (for Area Reps) Emden Griffin 5112 SW Garden Home Rd Portland, OR 97219 (541) 350-0723 firstname.lastname@example.org Touchstone Heather Bennouri See Membership Coordinator for contact information Volunteer Coordinator Emden Griffin See State Coordinator for contact information Webmaster Robert Bike See President for contact information Area Representatives Bend Bruno DeBlock See OMTA Library for contact information Eugene Mike Pooler PO Box 2397 Eugene, OR 97402 (541) 556-0970 email@example.com Hillsboro Neva Winter 4004 E Main St Hillsboro, OR 97123 (503) 484-7565 firstname.lastname@example.org Portland Donovan Monroe 1988 SE Ladd Ave Portland, OR 97214 (503) 984-1963 email@example.com Roseburg Carol Duncan See State Coordinator for contact information Tualatin-Sherwood Heather Bennouri See Membership Coordinator for contact information. Albany, Ashland, Coastal Area, Salem, and Eastern Oregon Positions open Please contact the President if you are interested. Touchstone is the journal of the Oregon Massage Therapists Association. Published several times a year for OMTA members, Touchstone features articles relating to the practice of massage, techniques, resources, tools, books, classes, continuing education, legislative information, Oregon Board of Massage Therapists (OBMT) updates, and other related information. 
If you would like information about advertising in Touchstone, more information about OMTA, or to submit an article or letter to the editor, please contact the appropriate source listed below. Touchstone Editor: Heather Bennouri 8827 SW Blake St Tualatin, OR 97062 (971) 570-5404 firstname.lastname@example.org OMTA 1710 Oakhurst Ct Eugene, OR 97402 www.omta.net see specific officers for phone contact information
AUTUMN BUDGET STATEMENT 2024 WHAT IT COULD MEAN FOR YOUR FINANCES NAVIGATING THE COMPLEXITIES OF INHERITANCE Should you consider estate planning and gifting for future generations? THE COST OF EARLY WITHDRAWAL FROM YOUR PENSION How retirees are impacting their financial future by accessing pension pots too soon MASTERING FINANCIAL PLANNING Essential tips for mothers balancing family and finances Welcome to our latest issue. On 30 October, Chancellor of the Exchequer Rachel Reeves will deliver the Autumn Budget Statement 2024. It will be a critical indicator of the government’s approach to managing the economy, aiming to foster an environment conducive to sustainable growth. The outcomes of this Autumn Budget will have far-reaching implications, potentially influencing everything from tax rates and public services to business investment and consumer confidence. As such, it is a pivotal moment that will shape the economic landscape in the months and years ahead. On page 08, we look at what it could mean for your finances. As we age or accumulate more wealth, protecting and preserving our assets for future generations becomes increasingly essential. This process, known as Inheritance Tax planning, estate planning or intergenerational wealth planning, involves strategically managing your estate to minimise tax liabilities and ensure that your wealth is passed down to your loved ones in the most tax-efficient manner possible. On page 06, we explain how understanding these nuances is essential in making informed decisions that will benefit you and your loved ones. More than three-quarters (78%) of retirees have already dipped into their pension pots by the time they retire, according to recent data\(^1\). This trend highlights a significant shift in retirement planning behaviours, where immediate financial needs or desires often outweigh the long-term benefits of leaving pension funds untouched. 
The implications of early withdrawals are multi-faceted and can significantly impact retirees’ financial security. Turn to page 03. Balancing the many responsibilities of motherhood can be overwhelming, often pushing long-term financial planning onto the back burner. However, effective financial planning is essential for everyone, and as a mother, you face unique challenges that require extra attention. On page 05, we consider some key financial planning steps to help you take control and secure your family's future. A complete list of the articles featured in this issue appears opposite. --- **ARE YOU READY TO SECURE YOUR FINANCIAL FUTURE?** Whether planning for retirement, investing your money or protecting your wealth, we can assist with every aspect of your financial planning. Contact us today to discuss your specific needs and start building a brighter, more secure future now. Your financial success begins with a single step. --- **Source data:** [1] The statistics cited were the result of an analysis by Scottish Widows on 232,654 different retirement claim transactions between 2019 and 2023, which has been used from different sources to give a single view. --- INFORMATION IS BASED ON OUR CURRENT UNDERSTANDING OF TAXATION LEGISLATION AND REGULATIONS. ANY LEVELS AND BASES OF, AND RELIEFS FROM, TAXATION ARE SUBJECT TO CHANGE. THE VALUE OF INVESTMENTS MAY GO DOWN AS WELL AS UP, AND YOU MAY GET BACK LESS THAN YOU INVESTED. --- The content of the articles featured in this publication is for your general information and use only and is not intended to address your particular requirements. Articles should not be relied upon in their entirety and shall not be deemed to be, or constitute, advice. Although endeavours have been made to provide accurate and timely information, there can be no guarantee that such information is accurate as of the date it is received or that it will continue to be accurate in the future. 
No individual or company should act upon such information without receiving appropriate professional advice after a thorough examination of their particular situation. We cannot accept responsibility for any loss arising from actions taken, or refrained from, on the basis of any of the content. To the extent permitted by law, we exclude all liability save for that which cannot be excluded. More than three-quarters (78%) of retirees have already dipped into their pension pots by the time they retire, according to recent data\(^1\). Of these, more than half (52%) withdraw funds five years before their Selected Retirement Age (SRA), with 21% opting to start taking out funds nine to ten years before they retire. This trend highlights a significant shift in retirement planning behaviours, where immediate financial needs or desires often outweigh the long-term benefits of leaving pension funds untouched. Factors such as unexpected medical expenses, the desire to pay off debts or the need for additional income to support a particular lifestyle can drive retirees to access their pension savings earlier than planned. **CONSIDER THE TIMING OF PENSION WITHDRAWALS** The implications of early withdrawals are multifaceted and can significantly impact retirees’ financial security. By withdrawing funds early, retirees potentially miss out on the compound growth that could have been achieved if the money had remained invested. This can result in a smaller pension pot during the later years of retirement, when the need for financial stability is often greater. Furthermore, early withdrawals may indicate insufficient financial planning or awareness about the benefits of delaying pension access. As people live longer and retirement periods extend, it becomes increasingly important for individuals to carefully consider the timing of their pension withdrawals to ensure their savings last. 
**FINANCIAL IMPACT OF EARLY WITHDRAWALS** The data revealed that the average amount an individual withdraws by age 65 is £47,000. Financial modelling shows how much that £47,000 could grow if invested for longer. If the money stayed invested from age 55 (when the member would have first been able to take benefits) for an additional five years, they would have £13,925 more on average by the time they reach 60. That figure rises to £24,661 if it were to stay invested for ten years to age 65 – a rise of more than 50%; and to more than £38,000 if invested to age 70. A separate modelling exercise was conducted assuming that individuals claimed the maximum tax-free cash available at age 55, which currently stands at 25%, equivalent to £11,750. **MAXIMISING PENSION BENEFITS** If the same modelling were run with the remaining £35,250 left in individuals’ pots after taking the tax-free cash, savers would, on average, be £10,441 better off after five years and £18,496 after ten years if they decided to stay invested. These figures highlight the significant financial benefits of delaying withdrawals and allowing pension funds to grow. The data further shows that most people withdraw money from their workplace pension before retirement age. While early withdrawals are often unavoidable, draining a pension pot too soon can carry substantial risks, which providers and retirees should be aware of and take steps to guard against where possible. **NAVIGATING A CHANGING PENSIONS LANDSCAPE** The pension landscape is ever-changing. People are living longer, which means pensions must cover longer retirements. Additionally, more individuals are choosing to phase into retirement with part-time work, changing how and when they access their pension funds. Early withdrawals can severely impact the long-term financial stability of retirees. Therefore, individuals must seek professional financial advice to make informed decisions about their pension pots. 
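The quoted gains can be sanity-checked by back-solving the annual growth rate each figure implies. The short Python sketch below is purely illustrative (the underlying modelling assumptions are not stated in the source data); it only assumes a pot P left invested for n years grows to P × (1 + r)^n:

```python
# Back out the annual growth rate implied by each quoted figure,
# assuming a pot P left invested for n years grows to P * (1 + r) ** n.
# (Illustrative only -- the article does not state the modelling assumptions.)

def implied_annual_rate(pot: float, gain: float, years: int) -> float:
    """Solve pot * (1 + r) ** years == pot + gain for r."""
    return ((pot + gain) / pot) ** (1 / years) - 1

# Full £47,000 pot left invested: +£13,925 over 5 years, +£24,661 over 10.
print(f"5-year figure implies  ~{implied_annual_rate(47_000, 13_925, 5):.1%} a year")
print(f"10-year figure implies ~{implied_annual_rate(47_000, 24_661, 10):.1%} a year")

# After 25% tax-free cash (£11,750), 75% of the pot (£35,250) stays
# invested: +£10,441 over 5 years -- the same implied rate as the
# 5-year full-pot figure above.
print(f"post-cash figure implies ~{implied_annual_rate(35_250, 10_441, 5):.1%} a year")
```

Notably, the 5- and 10-year figures imply slightly different annual rates (roughly 5.3% versus 4.3%), which suggests the underlying model is not a simple constant-rate projection.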
**PLANNING FOR A SECURE RETIREMENT** Retirees should also consider other sources of income and investments that can support them during their retirement years. Diversifying income streams can provide a safety net and reduce the need to dip into pension funds prematurely. Proper financial planning ensures that retirees can maintain their desired lifestyle without compromising their financial security. By understanding the implications of early withdrawals and exploring alternatives, retirees can make decisions that will benefit them in the long run. **WANT TO MAKE INFORMED DECISIONS THAT WILL HELP YOU MAXIMISE YOUR PENSION BENEFITS?** If you are approaching retirement or have already started considering your pension options, it’s crucial to understand the impact of early withdrawals on your long-term financial security. Contact us today to explore your options and create a personalised retirement plan that aligns with your goals. Secure your financial future now – don’t wait until it’s too late! Source data: [1] The statistics cited were the result of an analysis by Scottish Widows on 232,654 different retirement claim transactions between 2019 and 2023, which has been used from different sources to give a single view. THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH. TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE. A PENSION IS A LONG-TERM INVESTMENT NOT NORMALLY ACCESSIBLE UNTIL AGE 55 (57 FROM APRIL 2028 UNLESS THE PLAN HAS A PROTECTED PENSION AGE). THE VALUE OF YOUR INVESTMENTS (AND ANY INCOME FROM THEM) CAN GO DOWN AS WELL AS UP, WHICH WOULD HAVE AN IMPACT ON THE LEVEL OF PENSION BENEFITS AVAILABLE. YOUR PENSION INCOME COULD ALSO BE AFFECTED BY THE INTEREST RATES AT THE TIME YOU TAKE YOUR BENEFITS. PENSION SCAMS ON THE RISE PROTECT YOUR SAVINGS! 
7.3 MILLION UK ADULTS ENCOUNTERED AN ATTEMPTED SCAM IN THE PAST YEAR

Around 7.3 million UK adults, or one in seven, encountered an attempted pension scam in the past year. Alarmingly, 14% were targeted through unsolicited calls, texts or emails, according to recent research, illustrating the aggressive tactics employed by scammers. This concerning trend has prompted a closer examination of the vulnerabilities within the pension system, especially as scammers become increasingly sophisticated in their approaches. The study also highlighted that six million individuals with multiple pension pots may be at greater risk, as half of the respondents believe scams are becoming increasingly difficult to identify. The complexity of managing several pension accounts can leave individuals more susceptible to fraudulent schemes, as it becomes challenging to keep track of all the details. Scammers take advantage of this confusion, making it harder for people to discern legitimate communications from deceitful ones. This growing difficulty in identifying scams calls for heightened awareness and stronger protective measures to safeguard pension savings.

RISING THREAT OF PENSION SCAMS

Awareness of how to report a scam, however, is worryingly low: only 32% of people know the proper channels. This figure improves significantly to 55% among those who consult financial advisers. The discrepancy underscores the importance of professional financial advice in mitigating the risk of scams. The research further uncovered a high prevalence of various consumer scams: 42% of respondents reported phishing attempts, 36% encountered scams imitating reputable brands and 24% experienced refund scams.

YOUNGER PEOPLE AT HIGHER RISK

Younger individuals between the ages of 18 and 34 are more susceptible to scams than the general population: the study found that 13% of this age group had been targeted, in contrast to 7% of the wider public.
The evolving tactics of scammers make it increasingly challenging for consumers to avoid falling prey. With the growing number of people managing multiple pension pots, keeping track of their finances has become more difficult.

PROTECTING YOUR PENSION

To safeguard against pension scams, hanging up on unsolicited cold calls is crucial. Recognising unexpected contact as a potential red flag can also help avoid hasty and ill-informed decisions. Additionally, checking firms on the Financial Conduct Authority (FCA) register provides an extra layer of security. Remaining vigilant and informed is essential in this climate of sophisticated scams. Consumers must take proactive steps to protect their hard-earned savings.

DO YOU REQUIRE INFORMATION OR ASSISTANCE IN SAFEGUARDING YOUR PENSION?

If you require further information or assistance in safeguarding your pension, do not hesitate to contact us. Our team of financial experts is here to help you navigate these challenges and protect your future.

Source data: [1] LV= Wealth and Wellbeing Research Programme, quarterly survey of 4,000 UK adults, 12/08/24.

THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH. TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE. A PENSION IS A LONG-TERM INVESTMENT NOT NORMALLY ACCESSIBLE UNTIL AGE 55 (57 FROM APRIL 2028 UNLESS THE PLAN HAS A PROTECTED PENSION AGE). THE VALUE OF YOUR INVESTMENTS (AND ANY INCOME FROM THEM) CAN GO DOWN AS WELL AS UP, WHICH WOULD HAVE AN IMPACT ON THE LEVEL OF PENSION BENEFITS AVAILABLE. YOUR PENSION INCOME COULD ALSO BE AFFECTED BY THE INTEREST RATES AT THE TIME YOU TAKE YOUR BENEFITS.

Balancing the many responsibilities of motherhood can be overwhelming, often pushing long-term financial planning onto the back burner.
However, effective financial planning is essential for everyone, and as a mother, you face unique challenges that require extra attention. Here are some key financial planning steps to help you take control and secure your family’s future.

**SAVE FOR UNFORESEEN EMERGENCIES**

As a mother, you’ve probably realised that emergencies can strike when you least expect them. While an emergency savings pot can’t prevent sick days, uniform mishaps or broken friendships, it can provide a useful financial buffer for more expensive emergencies, such as boiler or car breakdowns. Building up at least six months’ worth of essential expenditure in an easy-access savings account reduces the risk of falling into debt or dipping into savings allocated for long-term goals.

**PROTECTION, PROTECTION, PROTECTION**

An income protection policy should be considered if your family relies on your income to cover bills, childcare, school fees or after-school activities. This type of insurance pays out a portion of your salary if you suffer from a long-term illness and cannot work, helping you maintain financial stability and ensuring your children’s lifestyle isn’t unduly affected. Life insurance is another essential protection, offering a vital financial safety net should the worst happen to you. It provides a lump sum or regular income if you pass away during the policy term, which could help pay off the mortgage and ease the financial burden on your family.

**YOUR PENSION MATTERS**

If you’ve taken time off work to care for your children, finding ways to top up your pension savings is crucial. Many mothers prioritise their children’s futures over their own, but neglecting your pension can have long-term financial repercussions that ultimately affect your entire family. The good news is that there’s still ample time to get your pension back on track. If you qualify for the full amount of the new State Pension, you will receive £221.20 per week, or £11,502.40 a year (2024/25).
You must have paid National Insurance (NI) contributions for 35 years to qualify for the maximum amount. If you’re not working, you’ll receive NI credits automatically as long as you claim Child Benefit and your child is under 12. You may still receive these credits if you’ve claimed Child Benefit but opted out of payments to avoid the High-Income Child Benefit charge.

**TOPPING UP PENSIONS**

Consider topping up your workplace or private pensions. Pensions are a highly cost-effective way of saving for retirement due to the tax relief you receive on personal pension contributions. This means a £100 pension contribution will only cost you £80 if you’re a basic rate taxpayer, £60 if you’re a higher rate taxpayer or £55 if you’re an additional rate taxpayer, as long as the total gross contributions are matched by income taxed in that band. Even if you aren’t working, you can contribute up to £2,880 per year into a pension and still receive 20% tax relief, boosting your contribution to £3,600. If you receive any cash gifts or inherit some money, saving it into a pension can significantly enhance your retirement funds.

**WEALTH CREATION FOR YOUR CHILDREN**

If financially feasible, saving money for your children can profoundly impact their future, potentially helping with university fees or securing a deposit for their first home. To maximise the growth potential of their money, consider investing in the stock market. Although mothers might naturally lean towards being risk-averse, history shows that, over long periods, the stock market generally outperforms cash. A Junior ISA is a good starting point: it offers tax-efficient investment growth and locks away funds until your child’s 18th birthday.

**OBTAIN PROFESSIONAL FINANCIAL ADVICE**

You might not have the time or inclination to sort out your finances independently – and that’s perfectly fine. Financial matters are one area where entrusting the responsibility to a professional can be done guilt-free.
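The pension tax-relief arithmetic above can be sketched in a few lines. This is a simplified gross-up model: the mechanics differ between relief-at-source and net-pay schemes, and higher or additional rate relief is typically reclaimed via self-assessment.

```python
def gross_contribution(net: float, relief_rate: float) -> float:
    """Gross pension contribution funded by a `net` payment at a given
    marginal relief rate (e.g. 0.20 for a basic rate taxpayer)."""
    return net / (1 - relief_rate)

def net_cost(gross: float, relief_rate: float) -> float:
    """What a `gross` contribution actually costs the saver."""
    return gross * (1 - relief_rate)

# £100 in the pension costs £80 / £60 / £55 at 20% / 40% / 45% relief.
for rate in (0.20, 0.40, 0.45):
    print(f"{rate:.0%} relief: a £100 contribution costs £{net_cost(100, rate):.0f}")

# A non-earner's £2,880 payment is grossed up with 20% basic rate relief.
print(f"£{gross_contribution(2_880, 0.20):,.0f}")
```

Both figures match the article: £2,880 ÷ (1 − 0.20) = £3,600.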
Obtaining professional financial advice can instil confidence that you’ve made the right decisions with your money, allowing you to focus on yourself and your family.

---

**WANT TO FIND OUT MORE OR SEE HOW WE CAN HELP WITH PERSONALISED FINANCIAL GUIDANCE?**

Contact us today for expert professional advice and personalised financial guidance. We’re here to help you and your family achieve financial stability and peace of mind. Don’t wait – contact us now, and let’s secure a brighter future together!

---

This article does not constitute tax, legal or financial advice and should not be relied upon as such. Tax treatment depends on the individual circumstances of each client and may be subject to change in the future. For guidance, seek professional advice. A pension is a long-term investment not normally accessible until age 55 (57 from April 2028 unless the plan has a protected pension age). The value of your investments (and any income from them) can go down as well as up, which would have an impact on the level of pension benefits available. Your pension income could also be affected by the interest rates at the time you take your benefits.

NAVIGATING THE COMPLEXITIES OF INHERITANCE

SHOULD YOU CONSIDER ESTATE PLANNING AND GIFTING FOR FUTURE GENERATIONS?

As we age or accumulate more wealth, protecting and preserving our assets for future generations becomes increasingly essential. This process, known as Inheritance Tax (IHT) planning, estate planning or intergenerational wealth planning, involves strategically managing your estate to minimise tax liabilities and ensure that your wealth is passed down to your loved ones in the most tax-efficient manner possible. Effective planning can significantly impact the financial wellbeing of your heirs, making it crucial to consider the various strategies and tools available for safeguarding your estate.
One common question we receive from clients is whether to gift assets during their lifetime or wait until they have passed away. The answer is not straightforward and depends heavily on your personal and financial circumstances and objectives. Gifting can provide immediate support to family members and potentially reduce your estate’s size, lowering the IHT burden. However, careful consideration must be given to the gifts’ timing, amount and recipients to ensure that they align with your long-term goals and comply with tax regulations. Understanding these nuances is essential in making informed decisions that will benefit you and your loved ones.

UNDERSTANDING INHERITANCE TAX

When you pass away, IHT is potentially payable to HM Revenue & Customs (HMRC). The amount due depends on the estate’s value minus any debts and after all available thresholds have been used. These thresholds are the nil rate band (NRB) and the residence nil rate band (RNRB). At a high level, the NRB is £325,000, and the RNRB is £175,000, the latter of which is only available if you leave your home to a direct descendant. The standard rate of IHT due to HMRC on amounts over these thresholds is 40%. This reduces to 36% if at least 10% of your net estate is left to charity.

WHY DO WE GIFT?

We gift for two common reasons. First, we want to help our family and loved ones now, when they need it, and whilst we can see them enjoy it, as opposed to when we have passed away. This is often called a ‘living inheritance’. Second, we may have a large estate and wish to reduce its value so that our beneficiaries pay less or no IHT when we pass away.

HOW MUCH CAN YOU GIFT?

In short, you can gift however much you want, to whomever you like, whenever you like. If these gifts fall within the ‘annual gift allowances’ or are made from your regular surplus income, they automatically fall outside your estate for IHT purposes.
Otherwise, you must survive seven years after making the gift before it is excluded from IHT calculations.

THE IMPACT OF SEQUENCING GIFTS

The sequencing of gifts can significantly impact the wealth you want to pass on. In addition to the seven-year rule, there is the less well-known 14-year rule. Giving a gift outright to an individual and/or an Absolute/Bare Trust in excess of the annual allowances is known as making ‘Potentially Exempt Transfers’ or PETs.

**Potentially Exempt Transfers and their uses**

For example, a common reason for making a PET might be to help a child onto the property ladder. To ensure the gift is outside of your estate for IHT purposes, you need to survive seven years from when the gift is made. If the PET is more than the NRB (£325,000), there is gradual tapering of the tax due on the excess once you have survived for over three years. The longer you survive after making the gift (between three and seven years), the greater the tapering.

**Chargeable Lifetime Transfers**

Should you settle any money into a relevant property trust, such as a Discretionary Trust, these gifts are known as ‘Chargeable Lifetime Transfers’ or CLTs. An example of such a settlement might be grandparents wanting to pass money down to their grandchildren. A common reason for this may be that their children already have a large estate, so if they were to inherit any more, it would be unhelpful for their IHT position.

**Complications in Gift Order**

Complications may arise when an individual has passed away having made both PETs and CLTs. This is because the order of these gifts can result in bringing 14 years’ worth of gifts into the IHT calculation. When considering which gifts are liable to IHT, the gifts are placed in the order they were made, starting with the oldest and moving towards the date of death.

**HMRC Rules on Failed PETs**

HMRC rules are such that any CLTs made in the seven years before any ‘failed PETs’ must also be brought into account.
If an individual makes a PET and dies within seven years (say, after 6 years and 11 months), the PET fails. From the ‘failed PET’ date, HMRC will look back a further seven years and include any CLTs in their calculation to determine the IHT due on the PET.

**Annual Gifting Allowances**

Under current legislation, everyone can gift £3,000 per year free of IHT. This is called your ‘annual exemption’. Any unused allowance can be carried forward to the following tax year; however, it cannot be carried over again. There is also a wedding allowance, which varies with your relationship to the couple: £5,000 to a child, £2,500 to a grandchild and £1,000 to a relative or friend. The gift must be made before the wedding, and the wedding must go ahead. Wedding gifts can be combined in the same year with the annual exemption.

**Small Gifts Allowance**

You can also make gifts of up to £250 to as many different people as you like, as long as the person has not received more than £250 from you that tax year.

---

**Do you require information or personalised advice on gifting and Inheritance Tax planning?**

For those seeking further information or personalised advice on gifting and Inheritance Tax planning, please do not hesitate to contact us for expert guidance tailored to your specific circumstances.

---

This article does not constitute tax, legal or financial advice and should not be relied upon as such. Tax treatment depends on the individual circumstances of each client and may be subject to change in the future. For guidance, seek professional advice. The Financial Conduct Authority doesn’t regulate trust planning and most forms of inheritance tax (IHT) planning. Some IHT planning solutions put your money at risk, and you may get back less than you invested. IHT thresholds depend on individual circumstances and the law. Tax and IHT rules may change in the future.
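The seven-year taper on PETs described in this article can be sketched as below. The band percentages are HMRC’s standard taper relief rates (the 40% death rate reduces to 32%, 24%, 16% and 8% for survival of three to four, four to five, five to six and six to seven years). This is an illustrative simplification that treats a single gift in isolation, ignoring earlier gifts, exemptions and the 14-year sequencing rules above.

```python
def iht_on_failed_pet(gift: float, years_survived: float,
                      nil_rate_band: float = 325_000) -> float:
    """Illustrative IHT on a failed PET: 40% on the excess over the nil
    rate band, reduced by taper relief once three full years have passed.
    Simplification: assumes no earlier gifts have used up the NRB."""
    if years_survived >= 7:
        return 0.0  # survived seven years: the PET is fully exempt
    taxable = max(0.0, gift - nil_rate_band)
    if years_survived < 3:
        rate = 0.40          # no taper in the first three years
    elif years_survived < 4:
        rate = 0.32
    elif years_survived < 5:
        rate = 0.24
    elif years_survived < 6:
        rate = 0.16
    else:
        rate = 0.08          # six to seven years
    return taxable * rate

# A £425,000 gift leaves £100,000 exposed above the £325,000 NRB.
for years in (2, 3.5, 6.5, 8):
    print(years, iht_on_failed_pet(425_000, years))
```

Surviving from two years to six-and-a-half cuts the illustrative bill from £40,000 to £8,000, which is why the timing and sequencing of gifts matters so much.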
On 30 October, Chancellor of the Exchequer Rachel Reeves will deliver the Autumn Budget Statement 2024, accompanied by a comprehensive fiscal statement from the Office for Budget Responsibility (OBR). This significant event comes as the new government, elected to boost economic stability and growth, takes its first important step in addressing the nation’s financial health. The Autumn Budget will outline the government’s economic strategy, providing insights into their taxation, public spending and fiscal policy plans. It will be a critical indicator of the government’s approach to managing the economy, aiming to foster an environment conducive to sustainable growth.

BALANCING THE NATION’S BOOKS

The new government has faced the challenge of assessing the state of public spending and has identified a significant spending gap in the nation’s finances. This gap underscores the complexities of balancing the nation’s books while striving to implement growth-oriented policies. The Autumn Budget will likely address these challenges head-on, proposing measures to stimulate economic activity while ensuring fiscal responsibility. The outcomes of this Autumn Budget will have far-reaching implications, potentially influencing everything from tax rates and public services to business investment and consumer confidence. As such, it is a pivotal moment that will shape the economic landscape in the months and years ahead.

ECONOMIC STABILITY AND GROWTH

Following an ambitious King’s Speech, the new government’s first budget will seek to announce initiatives for growth alongside the activation of plans to balance the books across the spectrum of personal and business taxes and employment policy. But what could the new Labour government mean for your finances? Prime Minister Starmer’s Labour manifesto emphasised wealth creation. The manifesto aimed to grow the economy and ‘keep taxes, inflation and mortgages as low as possible’.
To fulfil those plans, Labour may have to make changes that could affect taxes, allowances, and various investment schemes and rules. Given the pledges made in the manifesto, doing so may prove challenging.

PLEDGES AND CHALLENGES

Although the manifesto is not legally binding, it is the best indication of the Labour government’s plans. Here, we highlight what the pledges could mean for your finances.

PENSIONS

Ahead of launching its manifesto, Labour announced that it would drop plans to reintroduce the lifetime allowance, a cap on how much people can save into their pensions before paying tax. Importantly, Labour committed to upholding the pensions ‘triple lock’, which ensures that the State Pension will continue to increase yearly in line with the highest of three factors: wage growth, inflation or a minimum of 2.5%. This policy is designed to protect the purchasing power of retirees and ensure they can maintain a stable standard of living in retirement. Ahead of the Autumn Budget, there is speculation that the Chancellor could look to change pension tax relief. One option for Reeves is to cut pension tax relief to 20%. This would mean no change for basic rate taxpayers, but a considerable reduction for higher and additional rate taxpayers, who currently receive 40% and 45% relief on some or all of their pension contributions. However, further clarity on the scope of any change has yet to be made available. In the meantime, making the most of all your pension allowances is essential to build your financial resilience in retirement.

INHERITANCE TAX

Although Inheritance Tax has been widely discussed recently, it was noticeably absent from the Labour manifesto, which contained no comments on future Inheritance Tax rates or reliefs (such as Business and Agricultural Relief).
VAT

The Labour manifesto confirmed its intention to introduce VAT on private school fees and to end business rates relief for these schools, with such measures estimated to raise around £1.5bn for the government. The delay until 2025 gives families additional time to consider their options and improve their planning. Families typically have a finite number of financial planning options that can be used to meet additional expenditure: reducing other spending, increasing earnings, targeting higher returns (with the additional risk that comes with this), borrowing, or gifts from relatives.

**INCOME TAX**

Whilst Labour has pledged not to increase taxes on working people (including Income Tax at the basic, higher and additional rates), this does not preclude utilising fiscal drag to increase Income Tax revenues. Fiscal drag occurs when inflation and income growth push taxpayers into higher tax brackets while the thresholds themselves remain frozen, as they will until at least 2028. This results in higher taxes for affected individuals, even though the tax rates themselves have not changed. One area to watch could be taxes on dividend income. These have not been mentioned and may be outside the scope of the pledge as a non-working source of income with its own Income Tax rates. Moreover, Labour has pledged to reform the taxation of carried interest, which is a share of profits from a private equity, venture capital or hedge fund. The manifesto did not specify exactly how Labour would close the carried interest ‘loophole’, but the intent is clear: Labour regards private equity as the only industry where performance-related pay is treated as capital gains, and will look to close this loophole.

**CAPITAL GAINS TAX (CGT)**

The Labour manifesto did not specifically mention CGT rates, and the party’s senior figures have said that they have no plans to reform these rates – with the exception of their proposed policy on carried interest.
That said, future increases have not been ruled out entirely.

**NATIONAL INSURANCE CONTRIBUTIONS**

Labour supported the Conservatives’ cuts to National Insurance in the 2024 Spring Budget, and its manifesto outlined a commitment not to raise current rates. However, Labour may utilise fiscal drag, with tax thresholds frozen until 2028. As the 30 October Autumn Budget approaches, individuals and families should take proactive steps to manage their personal finances. Anticipating potential changes and being prepared can significantly affect one’s financial wellbeing. Remember, proactive planning is key to financial stability and peace of mind. Don’t wait until the last minute – take action now to secure your financial future.

---

**WANT EXPERT ADVICE ON HOW TO PREPARE FOR THE UPCOMING AUTUMN BUDGET?**

To discuss the potential impacts of the upcoming Autumn Budget on your finances, we can provide tailored advice and help you navigate any changes that might affect your tax liabilities, pension contributions or investment strategies. If you need further guidance or personalised advice, please don’t hesitate to contact us.

---

*THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH. TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE.*

If you are in your 40s or 50s, you have likely contributed to a pension for quite some time. Over the years, you may have accumulated multiple employer workplace pensions. However, when did you last thoroughly examine your pension and retirement strategy? Having a documented retirement plan can help you feel more prepared for this stage of your life, ensuring you have a sufficient income when you stop working. Here, we explore several factors to consider when reviewing your savings. If you don’t yet have a plan, this article offers a helpful starting point.
REVISIT YOUR RETIREMENT PLAN

It’s always a good idea to reassess your plan to ensure you’re on track to achieve the retirement income and lifestyle you desire. Priorities and circumstances can change, necessitating adjustments to your plan. Begin by asking yourself these key questions.

HOW WOULD YOU LIKE TO SPEND YOUR RETIREMENT?

Consider what you’d like to do during your retirement to help determine how much money you’ll need. Whether it’s holidaying, investing more time in hobbies or starting a new business venture, it’s crucial to account for everyday expenses such as rent or mortgage payments, household bills and food shopping. Additionally, it’s wise to set aside savings for potential medical needs or home care as you age. When planning your expenses, don’t forget to factor in inflation. Prices tend to increase over time, so having an extra financial cushion can be beneficial.

WHEN WOULD YOU LIKE TO RETIRE, AND FOR HOW LONG?

Is the age you’d like to retire still the same, or has it changed? With life expectancy increasing, you’ll need to consider how much money you’ll need throughout your retirement. Breaking the total figure down into an annual, and then a monthly, income will help you determine whether your savings are sufficient. Consider, too, how you’ll access your retirement income: different options have various terms and conditions that affect your take-home pay.

DEBT REPAYMENTS BEFORE RETIREMENT

If possible, set goals to pay off any debts before you retire. Clearing debts can provide peace of mind, as it’s one less expense to worry about.

CHECK YOUR PENSION CONTRIBUTIONS

Your retirement fund could include workplace pensions, personal pensions, Individual Savings Accounts (ISAs), investments and the State Pension. When reviewing your pension pot, check the amount, track performance and take action if necessary.

CONSIDER THE FOLLOWING WHEN REVIEWING YOUR PENSION POT:

- Review your workplace pension contributions.
Can you afford to increase them, even slightly? Even small annual increases can make a significant difference over time.

- Check your employer’s contributions. Many employers offer benefits such as matching increases in your contributions to your workplace pension.

- Keep track of all your pension pots to avoid forgetting about them. Consider whether you want to keep working part-time or flexible hours, which will give you more time to improve your savings.

- Remember, the value of investments can fall as well as rise, and there are no guarantees. When you start drawing benefits, the value of your pension pot might be less than the total contributions made.

THE STATE PENSION AS AN INCOME SOURCE

The State Pension alone is unlikely to support your retirement. If you’re eligible, the amount you receive will depend on your National Insurance contribution record. You can check your State Pension forecast on the government’s website to see how much you could receive, when you can claim it and whether you can improve it.

UNDERSTAND YOUR RETIREMENT INCOME OPTIONS

From age 55 (57 from April 2028), you can access some or all of your pension benefits. Personal circumstances, lifestyle and health will influence the right income option for you. Some contracts restrict your options, and there are tax implications to consider.

CONTROL OVER YOUR RELATIONSHIP WITH MONEY

Planning for retirement is a step towards improving your financial wellbeing. It’s about how you feel regarding control over your financial future and your relationship with money. Focus on what makes your life enjoyable and meaningful now and in retirement.

WANT TO IMPROVE YOUR FINANCIAL WELLBEING?

Please get in touch with us if you require further information or assistance in planning your retirement. We’re here to help you navigate your financial future with confidence.

THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH.
TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE.

THE MIDDLE-AGED SQUEEZE

JUGGLING CAREERS, FAMILY CARE AND FINANCIAL PRESSURE AMID RISING COSTS AND WEALTH TRANSFERS

Increasing longevity and evolving demographics have left many middle-aged individuals juggling careers with caring for both ageing parents and children. This issue is particularly acute for ambitious professionals who prioritised establishing their careers before starting a family in their thirties or forties. On top of financial constraints, caring for elderly parents and young children places immense pressure on the most precious resource of all for working parents – time. The emotional and physical toll of feeling constantly at the coalface may generate significant stress. Choosing your priorities carefully can mean disapproval from those who think your attention should be elsewhere, often on them.

RISING INFLATION AND INTEREST RATES

Has the squeeze got tighter? Rising inflation and interest rates have created clear winners and losers. Those with mortgages have been among the losers. As fixed rate deals have ended, moving onto a variable or new fixed rate has meant accepting higher payments or extending terms to keep monthly outlays the same. Coupled with inflation, this has reduced real disposable incomes. The winners have been those who are debt-free and those who have savings and investments. Typically, these individuals are retired, and the increased income may be surplus to requirements.

THE COST OF EDUCATION

School fees and care costs have historically risen faster than inflation, and the cost of private education has soared. Fees jumped by an average of 5.1% in 2022 [1]. The average cost per child is now £6,944 a term for day pupils and £12,344 a term for boarders [2]. There are big regional variations, too.
With the rising cost of living, private schools have had little choice but to pass energy and food costs on to parents. Imagine those costs for a family of four. It is no wonder house prices are so high in the catchment areas of state schools with good Ofsted ratings. Many parents or guardians rely on other sources for some or all of the fees, such as loans, inheritances or other payments.

UNIVERSITY AND HOUSING

School may lead to university, with its accompanying student debts. Children may be dependent on their parents for longer and not leave the nest as quickly as one might hope. Rising rents mean the aspiration to get on the property ladder may only be achieved after age 30 and will require some financial assistance.

THE MOUNTING COST OF CARE FEES

If you are paying for care, the average weekly cost of a residential care home in the UK is £1,160, while average fees at a nursing home are £1,410 per week [3]. This means residential care for a whole year (52 weeks) costs an average of £60,320, and nursing home care costs an average of £73,320 annually. Fees will vary depending on the area you live in and the home you choose. The families of those in care homes are unlikely to pay the entire bill but may top it up to ensure a better quality of life, such as an ensuite room, visits from the hairdresser, entertainment and day trips. As parents may live some distance away from other family members, time and practicalities may create the need to move closer, leading to inevitable upheaval and the loss of a friendship network.

ROLE OF INHERITANCE

Our elderly relatives will play a crucial role in the wealth transfer expected over the next 10-15 years. The ‘sandwich generation’ – those caring for their children and ageing parents – are set to inherit significant assets. Figures from HM Revenue and Customs (HMRC) show a record-breaking increase in Inheritance Tax (IHT) receipts, reaching £7.5 billion from March 2023 to April 2024.
This is a jump of £400 million compared to the previous year and continues a trend that’s been rising for the past two decades. With an IHT rate of 40%, nearly £19 billion in assets, beyond various exemptions and reliefs, were taxed [4]. The taxman might become the largest single beneficiary if multiple family members inherit. Given the current higher interest rates, the compounding effect of reinvested income can grow wealth even further. Therefore, financial planning is also about reducing the size of estates and preventing them from growing too large.

FINANCIAL PLANNING AND GIFTING

Using surplus pension and investment income, for example, to help towards grandchildren’s school fees both invests in their future and reduces the growth rate of the estate. The notion of IHT planning may conjure images of esoteric and inaccessible investment schemes, but straightforward gifting can be just as effective. In addition to utilising the various allowances and reliefs available, lump-sum gifts to an individual, known as Potentially Exempt Transfers, will not be subject to IHT if you live for seven years after making the gift. If you die before then, these gifts are initially set against the available nil rate bands, so they may still be tax-free. Lump-sum gifts could be a valuable way to help a grandchild working to be a first-time buyer get a decent deposit together. The average gift for a house deposit is £25,000.

CREATING A FINANCIAL PLAN

However, for many – possibly the majority – the fear of running out of income and capital mentally eclipses the huge benefits of helping younger generations now, providing the enjoyment of seeing the positive impact on their lives. Creating a financial plan will provide the knowledge and reassurance of knowing you are financially secure, whatever the future may hold. This, in turn, will enable you to consider gifting from income and capital.
An inbuilt reluctance to discuss money matters with family members can lead to poorer long-term financial decisions and more money lost to the taxman. A lack of dialogue will also mean less influence over the choices made for you if you lose capacity – simply because your children might not know what you want to happen. Financial openness across generations is the starting point. DO YOU WANT TO DISCUSS YOUR FINANCIAL PLANNING REQUIREMENTS? Please get in touch with us if you require further information or need assistance with your financial planning requirements. We are here to help you secure your and your loved ones’ financial future. Source data: [1] Schoolfeeschecker, accessed April 2024. [2] Schoolguide, accessed April 2024. [3] www.carehome.co.uk/advice [4] https://britishbusinessexcellenceawards.co.uk/from-the-awards/inheritance-tax-receipts-reach-a-record-breaking-7.5 THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL, OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH. TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE. COULD YOU HAVE BEEN UNDERPAID THE STATE PENSION? HMRC ESTIMATES THAT AFFECTED WOMEN COULD BE OWED AN AVERAGE OF £5,000 EACH Thousands of mothers who have missed out on their full State Pension entitlement due to calculation errors have begun receiving letters from HM Revenue & Customs (HMRC) to address this oversight. These letters are being sent to women who have taken time off work to raise children since 1978, following the identification of underpayments in the Department for Work and Pensions (DWP) July 2022 annual report. STATE PENSION UNDERPAYMENTS Affected women may have been underpaid by tens of thousands of pounds over the course of their retirement due to not receiving National Insurance credits towards their State Pension entitlement. 
If you receive a letter from HMRC indicating that you may be one of those affected, it is crucial to check if you are owed a State Pension back payment. HMRC estimates that affected women could be owed an average of £5,000 each. The letters will be sent out over the next 18 months, prioritising those over State Pension age. Additionally, you may be eligible for Home Responsibilities Protection. AVOIDING SCAMS If you are concerned about potential scammers exploiting this issue, you can verify the letter’s authenticity by contacting HMRC on 0300 200 3500. The issue was initially corrected in 2011, resulting in 36,000 women receiving a share of £83m. Nevertheless, the DWP report indicates that thousands more women may still miss out on their rightful State Pension entitlement. HISTORICAL CONTEXT This is not the first instance of women’s pensions being underpaid. This latest issue follows a scandal involving the underpayment of State Pensions to married women and widows who claimed their pension before April 2016. Based on their husbands’ records, these women were entitled to higher rates, with the underpaid amount estimated to be around £1.5 billion. ONGOING CHALLENGES Many pensioners continue to be underpaid due to these errors, and sadly, tens of thousands have passed away without receiving any of the money they were owed. The DWP has pledged to track down and pay the owed amounts to those affected by the end of 2024. REQUIRE FURTHER INFORMATION OR BELIEVE YOU MAY BE AFFECTED? If you require further information or believe you may be affected, please do not hesitate to contact HMRC, visit their official website for guidance or contact us. Ensuring you receive your rightful State Pension is paramount. THIS ARTICLE DOES NOT CONSTITUTE TAX, LEGAL OR FINANCIAL ADVICE AND SHOULD NOT BE RELIED UPON AS SUCH. TAX TREATMENT DEPENDS ON THE INDIVIDUAL CIRCUMSTANCES OF EACH CLIENT AND MAY BE SUBJECT TO CHANGE IN THE FUTURE. FOR GUIDANCE, SEEK PROFESSIONAL ADVICE.
Knowledge Enhanced Event Causality Identification with Mention Masking Generalizations Jian Liu\textsuperscript{1,2}, Yubo Chen\textsuperscript{1,2} and Jun Zhao\textsuperscript{1,2} \textsuperscript{1} National Laboratory of Pattern Recognition, Institute of Automation \\ Chinese Academy of Sciences, Beijing, 100190, China \\ \textsuperscript{2} School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China \\ {jian.liu, yubo.chen, email@example.com Abstract Identifying causal relations of events is a crucial language understanding task. Despite many efforts, existing methods lack the ability to exploit background knowledge, and they typically generalize poorly to new, previously unseen data. In this paper, we present a new method for event causality identification, aiming to address the limitations of previous methods. On the one hand, our model can leverage external knowledge for reasoning, which greatly enriches the representation of events; on the other hand, our model can mine event-agnostic, context-specific patterns via a mechanism called event mention masking generalization, which greatly enhances its ability to handle new, previously unseen cases. In experiments, we evaluate our model on three benchmark datasets and show that it outperforms previous methods by a significant margin. Moreover, we perform 1) cross-topic adaptation, 2) evaluation on unseen predicates, and 3) cross-task adaptation to evaluate the generalization ability of our model. Experimental results show that our model demonstrates a definite advantage over previous methods. 1 Introduction Event causality identification (ECI) aims to identify \textit{causal relations} of events in texts.
For example, in a sentence S1 (shown in Figure 1): “The \textbf{earthquake} generates a \textbf{tsunami} that rose up to 135 feet”, an ECI system should identify that a causal relationship holds between the two mentioned events, i.e., \textbf{earthquake} \texttt{causes} \textbf{tsunami}. ECI supports a wide range of intelligent applications including why-question answering [Girju, 2003; Oh et al., 2016], future event/scenario forecasting [Hashimoto et al., 2014], machine reading comprehension [Berant et al., 2014], and others. To date, various approaches have been proposed for ECI, ranging from early feature-based methods [Do et al., 2011; Hashimoto et al., 2014; Ning et al., 2018; Gao et al., 2019] to recent representation-based methods [Kadowaki et al., 2019]. However, existing methods typically train ECI models solely on human-annotated examples, and they generally lack the ability to leverage background knowledge for reasoning. Moreover, owing to the small size of available training data (for example, the largest ECI corpus contains fewer than 300 documents [Caselli and Vossen, 2017]), existing ECI methods suffer from over-fitting and have difficulty handling new, previously unseen cases. To address the limitations of previous methods, we propose a new approach for ECI, characterized by its ability to: 1) explicitly leverage external (commonsense) knowledge for reasoning, which builds more expressive representations for events; and 2) mine event-agnostic, context-specific patterns for reasoning, which grants our model a decent ability to generalize to new, previously unseen examples. Specifically, one key component of our model is a \textit{knowledge-aware causal reasoner}, which exploits background knowledge in external knowledge bases (KBs) to enhance the reasoning process. We use CONCEPTNET [Speer et al., 2017] as the external KB, which contains abundant semantic knowledge of concepts (represented as words or phrases).
For example, in CONCEPTNET, the encoded knowledge associated with “earthquake” includes “earthquake” \texttt{IsA} “natural disaster”, “earthquake” \texttt{Causes} “a tsunami” and others (as shown in Figure 1 a)). Such knowledge can be used to enrich the representations of events for more accurate event causality inference. For example, the knowledge-aware causal reasoner may directly predict \textbf{earthquake} \texttt{causes} \textbf{tsunami} in S1 based on the semantic knowledge “earthquake” \texttt{Causes} “a tsunami” encoded in CONCEPTNET. This indicates that explicitly introducing external knowledge may benefit the ECI task. Nevertheless, a potential issue with the above method is that a KB is never complete [Min et al., 2013]; in particular, a KB may lack definitions of newly emerging events. To mitigate this problem, we propose a complementary mention masking reasoner, aiming to exploit event-agnostic clues for reasoning. We motivate our approach by noting that causal statements usually contain event-independent patterns, which are helpful for identifying causality between unseen events. For example, we can distill a causality pattern “The [SLOT1] generates [SLOT2] ...” from S1, which can then be used to identify that traffic congestion causes environmental pollution in a new sentence “The traffic congestion generates environmental pollution and economic loss”. To learn such context-specific patterns, we propose a learning mechanism called event mention masking generalization, which explicitly excludes event information during learning. Methodologically, it replaces event mentions with a placeholder symbol [MASK] and forces our model to make predictions based on such mask-containing texts (as shown in Figure 1 b)). This can be seen as placing hard attention on context information, and it thus enhances the ability of our model to handle unseen cases. Lastly, we build an attentive sentinel to allow a trade-off between the aforementioned two components.
This trade-off is crucial because in some cases the text context should override the background knowledge, and in other cases the opposite is true (for example, although the sentence “Both of earthquake and tsunami are natural disasters” contains the event pair earthquake and tsunami, it does not express a causal relation in this context). In experiments, we evaluate our model on three benchmark datasets. We first consider the standard evaluation and show that our model attains state-of-the-art performance. We then estimate the generalization ability of our model by performing i) cross-topic adaptation, ii) evaluation on unseen predicates, and iii) cross-task adaptation. Our model demonstrates definite advantages over previous methods. To summarize, we make the following contributions: - We propose a new approach for ECI, which can leverage external knowledge to enrich representations of events for accurate reasoning. To the best of our knowledge, this is the first work to explicitly introduce external knowledge for this task. - Moreover, we propose a mention masking generalization mechanism to learn event-agnostic, context-specific patterns. This grants our model a decent generalization ability to handle new, previously unseen data. - We conduct extensive experiments and show that our model sets a new state-of-the-art for ECI. Moreover, our approach shows definite advantages over previous ECI methods in generalization evaluation. ## 2 Related Work ### 2.1 Event Causality Identification The task of ECI aims to identify causal relations of events in texts and has attracted considerable interest among researchers.
Earlier methods for ECI are predominantly feature-based, adopting lexical and syntactic features [Hashimoto et al., 2014; Gao et al., 2019], causality cues (such as “because” and “for”) [Riaz and Girju, 2014], event co-occurrence patterns [Beamer and Girju, 2009; Hu et al., 2017], temporal patterns [Mirza, 2014a; Ning et al., 2018], and others. The more recent work of Kadowaki et al. [2019] employs the BERT architecture [Devlin et al., 2019], which learns context-dependent representations for the task and achieves superior performance. Regarding dataset construction, Do et al. [2011] annotated a corpus of 25 documents for evaluation; Mirza [2014a] annotated event causal relations in the TempEval-3 corpus and released a corpus called Causal-TimeBank; Caselli and Vossen [2017] built a corpus called EventStoryLine, which contains 258 documents in total. Hashimoto [2019] exploited a weakly supervised method to construct ECI datasets. However, as noted in the Introduction, previous methods typically train a model on the annotated examples only and disregard a wealth of background knowledge. Moreover, they generally have difficulty handling new, previously unseen data, owing to the limited size of the training data. ### 2.2 Knowledge Enhanced Text Understanding The importance of background knowledge in text understanding has long been recognized [Minsky, 1974]. With the development of knowledge bases (KBs) — ranging from manually annotated networks like WordNet [Miller, 1995] to semi-automatically/automatically constructed knowledge graphs like DBpedia [Lehmann et al., 2014] and ConceptNet [Speer et al., 2017] — large amounts of knowledge have become available. Many studies have investigated leveraging such knowledge to boost text understanding tasks.
To name a few, Rahman and Ng [2011] studied knowledge-enhanced entity co-reference; Yang and Mitchell [2017] took advantage of external KBs to improve recurrent neural networks for entity recognition and event detection; Zhou et al. [2018] studied incorporating commonsense knowledge for conversation generation. But to the best of our knowledge, no work has studied introducing external knowledge for ECI. ## 3 Approach Figure 2 schematically visualizes our approach. Specifically, we formulate ECI as a binary classification problem, following previous works [Mirza, 2014a; Ning et al., 2018; Gao et al., 2019] — for every pair of events in a sentence, we predict whether a causal relation holds. Our approach contains three major components: - Knowledge-aware reasoner, which retrieves background knowledge from CONCEPTNET and then integrates the knowledge with the text for reasoning (§ 3.1). - Mention masking reasoner, which masks event mentions in texts, aiming to learn event-agnostic, context-specific patterns for reasoning (§ 3.2). - Attentive sentinel, which adopts an attention mechanism to balance the above two components for the final prediction (§ 3.3). We describe each component in detail below. 3.1 Knowledge-Aware Reasoner Given a pair of events (denoted as e1 and e2), the knowledge-aware reasoner first retrieves the related knowledge in CONCEPTNET and then encodes the knowledge into the context for reasoning. **Knowledge Retrieving.** CONCEPTNET structures knowledge as a graph, where each node corresponds to a concept and each edge corresponds to a semantic relation. For e1 and e2, we search for their definitions in CONCEPTNET, but we only consider 18 semantic relations that are potentially useful for ECI: CapableOf, IsA, HasProperty, Causes, MannerOf, CausesDesire, UsedFor, HasSubevent, HasPrerequisite, NotDesires, PartOf, HasA, Entails, ReceivesAction, UsedFor, CreatedBy, MadeOf, and Desires.
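The retrieval step above amounts to filtering knowledge-base edges through the relation whitelist. A rough sketch follows; the toy edge list stands in for real CONCEPTNET query results (e.g. from its public API), and the function name is ours, not the paper's.

```python
# Minimal sketch of the knowledge-retrieval step: keep only edges
# whose relation appears in the whitelist named in the text above
# (as a set, the duplicate entry collapses).

ALLOWED_RELATIONS = {
    "CapableOf", "IsA", "HasProperty", "Causes", "MannerOf",
    "CausesDesire", "UsedFor", "HasSubevent", "HasPrerequisite",
    "NotDesires", "PartOf", "HasA", "Entails", "ReceivesAction",
    "CreatedBy", "MadeOf", "Desires",
}

# (head, relation, tail) triples, as they might come back from a KB query.
TOY_EDGES = [
    ("earthquake", "IsA", "natural disaster"),
    ("earthquake", "Causes", "a tsunami"),
    ("earthquake", "AtLocation", "california"),   # filtered out below
]

def retrieve_knowledge(event: str, edges):
    """Return the whitelisted triples whose head matches the event mention."""
    return [(h, r, t) for h, r, t in edges
            if h == event and r in ALLOWED_RELATIONS]

print(retrieve_knowledge("earthquake", TOY_EDGES))
# [('earthquake', 'IsA', 'natural disaster'), ('earthquake', 'Causes', 'a tsunami')]
```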
Part of the knowledge related to earthquake and tsunami in S1 is shown in Figure 2. **Knowledge Encoding.** To encode the knowledge and enrich the representations of e1 and e2, we first conduct knowledge linearization, transferring the discrete knowledge into a structured sequence, motivated by [Fan et al., 2019]. As shown in Figure 2, for each semantic relation a special marker (such as \( \langle IsA \rangle \)) is introduced, followed by the related knowledge, with items separated by a delimiter \( \langle s \rangle \). Then, we adopt a BERT-based encoder to encode the knowledge jointly with the context text. Specifically, we first incorporate the linearized knowledge into the sentence; then we add event markers \( \langle E1 \rangle \), \( \langle /E1 \rangle \) and \( \langle E2 \rangle \), \( \langle /E2 \rangle \) to indicate the boundaries of events (two special tokens [CLS] and [SEP] are added at the beginning/end of the sentence, following BERT). Finally, after using the BERT encoder to compute representations of the entire sequence, we concatenate the representations of [CLS], \( \langle E1 \rangle \), and \( \langle E2 \rangle \) as the final representation for the event pair \( \{e1, e2\} \), namely \[ F_{KG}^{(e1,e2)} = h_{[CLS]} \oplus h_{\langle E1 \rangle} \oplus h_{\langle E2 \rangle} \] (1) where \( \oplus \) indicates the concatenation operator; \( h_{[CLS]} \), \( h_{\langle E1 \rangle} \), and \( h_{\langle E2 \rangle} \) are the representations of [CLS], \( \langle E1 \rangle \), and \( \langle E2 \rangle \), respectively. \( F_{KG}^{(e1,e2)} \) is the knowledge-aware representation used for further computation. 3.2 Mention Masking Reasoner The mention masking reasoner aims to explore event-agnostic, context-specific patterns for reasoning. Specifically, e1 and e2 are first replaced with a special token [MASK] to exclude event information.
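The two input constructions just described (the knowledge-linearized, event-marked sequence for the knowledge-aware reasoner, and the [MASK]-replaced sequence for the mention masking reasoner) can be sketched as plain string manipulation. Marker spellings follow the text; the exact tokens in the authors' implementation may differ.

```python
# Sketch of the two encoder inputs: an event-marked sequence with
# linearized knowledge appended (knowledge-aware reasoner), and a
# [MASK]-replaced sequence (mention masking reasoner).

def linearize(triples):
    """Turn (relation, tail) knowledge into '<Rel> tail <s> ...'."""
    return " ".join(f"<{rel}> {tail} <s>" for rel, tail in triples)

def build_inputs(tokens, e1_idx, e2_idx, knowledge):
    marked, masked = [], []
    for i, tok in enumerate(tokens):
        if i == e1_idx:
            marked += ["<E1>", tok, "</E1>"]
            masked.append("[MASK]")
        elif i == e2_idx:
            marked += ["<E2>", tok, "</E2>"]
            masked.append("[MASK]")
        else:
            marked.append(tok)
            masked.append(tok)
    kg_seq = "[CLS] " + " ".join(marked) + " " + linearize(knowledge) + " [SEP]"
    mask_seq = "[CLS] " + " ".join(masked) + " [SEP]"
    return kg_seq, mask_seq

tokens = "The earthquake generates a tsunami".split()
kg_seq, mask_seq = build_inputs(
    tokens, 1, 4, [("IsA", "natural disaster"), ("Causes", "a tsunami")])
print(kg_seq)
print(mask_seq)   # [CLS] The [MASK] generates a [MASK] [SEP]
```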
Then, another BERT encoder is adopted to encode the mask-containing sentence ([CLS] and [SEP] are also added). As in the knowledge-aware reasoner, we regard \( F_{MASK}^{(e1,e2)} \) as the masked representation of \( \{e1, e2\} \): \[ F_{MASK}^{(e1,e2)} = h_{[CLS]} \oplus h^{1}_{[MASK]} \oplus h^{2}_{[MASK]} \] (2) where \( h^{1}_{[MASK]} \) and \( h^{2}_{[MASK]} \) are the BERT representations at the positions of e1 and e2, which have been replaced by [MASK]. We train the mention masking reasoner with two different objectives: **Discrimination Learning.** Our model is forced to predict whether e1 and e2 form a causal relation based on the masked representation \( F_{MASK}^{(e1,e2)} \). As \( F_{MASK}^{(e1,e2)} \) does not contain any event-specific information, our model has to explore context-specific clues for reasoning, thereby gaining the ability to tackle unseen events. **Distributional Similarity Learning.** In distributional similarity learning, we assume causal statements share similar representations in some ways: we take pairs of mask-containing statements as input and encourage their representations to be similar if both express causal relations. Assume A and B are two pairs of events, and \( F_{MASK}^{A} \) and \( F_{MASK}^{B} \) are their masked representations. We optimize the following loss to achieve distributional similarity: \[ L = - \big[ \delta_{A,B} \log p(l=1|A,B) + (1 - \delta_{A,B}) \log\big(1 - p(l=1|A,B)\big) \big] \] (3) where \( \delta_{A,B} \) is an indicator that takes the value 1 when both A and B express a causal relation and 0 otherwise, and \( p(l=1|A,B) = \frac{1}{1+\exp(-{F_{MASK}^{A}}^{\top} F_{MASK}^{B})} \) defines the distributional similarity score. In practice, we alternate between discrimination learning and distributional similarity learning to train the mention masking reasoner.
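A pure-Python sketch of the distributional similarity objective described above, assuming the standard sigmoid form of the similarity score; the plain lists stand in for the high-dimensional masked BERT representations.

```python
# Binary cross-entropy that pushes masked representations of two
# causal statements together (delta = 1) and apart otherwise
# (delta = 0). Vectors are toy stand-ins for BERT features.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def similarity_loss(f_a, f_b, both_causal: bool) -> float:
    p = sigmoid(sum(a * b for a, b in zip(f_a, f_b)))  # dot-product score
    delta = 1.0 if both_causal else 0.0
    return -(delta * math.log(p) + (1.0 - delta) * math.log(1.0 - p))

# Aligned representations of two causal statements -> low loss;
# the same pair labelled non-causal -> high loss.
print(similarity_loss([1.0, 2.0], [1.0, 2.0], True))
print(similarity_loss([1.0, 2.0], [1.0, 2.0], False))
```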
3.3 The Attentive Sentinel The attentive sentinel aims to learn a trade-off between the knowledge-aware reasoner and the mention masking reasoner by learning an attentive gate as their combination weight, namely: \[ g_{e_1,e_2} = \sigma(W(F^{(e_1,e_2)}_{KG} \oplus F^{(e_1,e_2)}_{MASK}) + b) \] (4) where \( W \) and \( b \) are model parameters and \( \oplus \) denotes the concatenation operator. It then adopts a weighted summation to integrate \( F^{(e_1,e_2)}_{KG} \) and \( F^{(e_1,e_2)}_{MASK} \) as the final feature for (e1, e2), namely: \[ F_{e_1,e_2} = g_{e_1,e_2} * F^{(e_1,e_2)}_{KG} + (1 - g_{e_1,e_2}) * F^{(e_1,e_2)}_{MASK} \] (5) The attentive sentinel thus allows the model to balance the knowledge-aware reasoner and the mention masking reasoner when making the final prediction. 3.4 Model Prediction and Training To make the final prediction, we perform a binary classification by taking \( F_{e_1,e_2} \) as input: \[ o_{e_1,e_2} = \sigma(W_oF_{e_1,e_2} + b_o) \] (6) where \( o_{e_1,e_2} \) denotes the probability of \( e_1 \xrightarrow{\text{cause}} e_2 \); \( W_o \) and \( b_o \) are model parameters. For training, we adopt cross-entropy as the loss function: \[ J(\Theta) = -\sum_s \sum_{e_i,e_j \in E_s} \big[ y_{e_i,e_j} \log(o_{e_i,e_j}) + (1 - y_{e_i,e_j}) \log(1 - o_{e_i,e_j}) \big] \] (7) where \( \Theta \) denotes the parameter set of our model, \( s \) ranges over each sentence in the training set, and \( e_i \) and \( e_j \) range over the events in \( s \). We adopt the Adam [Kingma and Ba, 2015] algorithm to optimize model parameters. 4 Experiments 4.1 Experimental Setups Datasets and Evaluations. Our experiments are conducted on three benchmark datasets: a) EventStoryLine [Caselli and Vossen, 2017], which contains 258 documents in 22 topics and 5,334 events in total, with 1,770 of 7,805 event pairs causally related; b) Causal-TimeBank [Mirza et al., 2014], which contains 184 documents and 6,813 events, with 318 of 7,608 event pairs causally related.
c) EventCausality [Do et al., 2011; Ning et al., 2018], which contains 25 documents and 1,134 events, with 414 of 887 event pairs causally related. For evaluation, we adopt Precision (P), Recall (R) and F1-score (F1) as metrics, following previous methods to ensure comparability. Significance testing is conducted using a paired t-test at a significance level of 0.05. Implementations. In our implementation\(^1\), both the knowledge-aware reasoner and the mention masking reasoner are implemented with the BERT-Large architecture, which has 24 layers, a hidden size of 1,024 and 16 attention heads. We use CONCEPTNET 5.0 as the KB. Regarding hyper-parameters, the batch size is set to 10, and the learning rate is initialized to \( 5 \times 10^{-5} \) with a linear decay. We also adopt a negative sampling rate of 0.5 for training, owing to the sparseness of positive examples. Baseline Systems. We compare against different baseline systems for each dataset. For EventStoryLine, we compare against: 1) OP [Caselli and Vossen, 2017], a dummy model that assigns a causal relation to every event pair; 2) LSTM [Cheng and Miyao, 2017], a dependency-path-based sequential model that models the context between events to identify causality; 3) Seq [Choubey and Huang, 2017], a sequential model that explores complex hand-designed features for the task; and 4) LR+ and 5) LIP [Gao et al., 2019], state-of-the-art ECI systems that exploit document-level structure. For Causal-TimeBank, we compare against 1) RB, a rule-based system; 2) ML, a machine-learning-based model; and 3) HB, a hybrid method combining rules with features. These models were designed for ECI by [Mirza, 2014a; Mirza and Tonelli, 2016]. For EventCausality, we compare against PMI, ECD and CEA [Do et al., 2011], which adopt different co-occurrence patterns for the task. For each dataset, we also add a BERT-based model as a baseline.
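The negative sampling described above (keep every positive pair, keep each negative pair with probability 0.5) can be sketched as follows; the helper name and seed are ours, for illustration only.

```python
# Subsampling negatives to counteract the sparsity of causal pairs:
# positive (label 1) examples always survive, each negative (label 0)
# survives with probability `rate`.
import random

def subsample(examples, rate=0.5, seed=13):
    """examples: list of (pair, label) tuples, label 1 = causal."""
    rng = random.Random(seed)
    return [(pair, y) for pair, y in examples
            if y == 1 or rng.random() < rate]

data = [(("e1", "e2"), 1)] + [((f"a{i}", f"b{i}"), 0) for i in range(1000)]
kept = subsample(data)
print(sum(1 for _, y in kept if y == 1))  # 1: the positive always survives
```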
In our approach, we use \( M_{KG} \) to denote the knowledge-aware reasoner, which adopts \( F^{(e_1,e_2)}_{KG} \) for prediction; we use \( M_{MMR} \) to denote the mention masking reasoner, which adopts \( F^{(e_1,e_2)}_{MASK} \) for prediction; and \( M_{FULL} \) indicates our full model. 4.2 Experimental Results Experimental results on the three benchmark datasets follow. EventStoryLine. Table 1 shows the results on EventStoryLine, where we use the last two topics as the development set and conduct 5-fold cross-validation on the remaining 20 topics, as suggested by [Gao et al., 2019]. From the results, our full model \( M_{FULL} \) outperforms all baseline methods and achieves the best performance (50.1% F1), exceeding the state-of-the-art model LIP by a margin of 5.5%, which justifies its effectiveness. Comparing \( M_{KG} \) with BERT, we note that adding external knowledge improves performance by 3.6% F1. Moreover, the mention masking reasoner (\( M_{MMR} \)) is more effective than the knowledge-aware reasoner (\( M_{KG} \)) (43.9% vs. 41.8%). This may imply that, for a small dataset, generalization knowledge is more important than discrimination knowledge. Causal-TimeBank. Table 2 shows results on Causal-TimeBank, where we adopt two settings: 1) 10-fold cross-validation (CV) as in [Mirza, 2014a], and 2) evaluation on the additional TempEval-3 dataset (TE) as in [Mirza and Tonelli, 2016]. Our models show performance consistent with that on EventStoryLine, achieving the best results (44.1% F1 for CV and 66.7% for TE). Moreover, \( M_{FULL} \) and \( M_{MMR} \) demonstrate high recall, which benefits from their generalization ability. EventCausality. Table 3 shows results on EventCausality. Note that this is an extremely small dataset, and [Do et al., 2011] adopt a weakly supervised method to retrieve additional examples for training. Nevertheless, in our approach, we use
| METHODS | PRE. | REC. | F1 |
|---------|------|------|-----|
| OP [Caselli and Vossen, 2017] | 22.5 | 98.6 | 36.6 |
| LSTM [Cheng and Miyao, 2017] | 34.0 | 41.5 | 37.4 |
| Seq [Choubey and Huang, 2017] | 32.7 | 44.9 | 37.8 |
| LR+ [Gao et al., 2019] | 37.0 | 45.2 | 40.7 |
| LIP [Gao et al., 2019] | 38.8 | 52.4 | 44.6 |
| BERT | 37.9 | 38.5 | 38.2 |

Table 1: Results on EventStoryLine. Pre., Rec. and F1 indicate precision (%), recall (%) and F1-score (%), respectively; bold denotes the best results; * denotes significance at the 0.05 level.

| METHOD | PRE. | REC. | F1 |
|--------|------|------|-----|
| Rule-based [Mirza, 2014b] | 36.8 | 12.3 | 18.4 |
| Data-driven [Mirza, 2014a] | **67.3** | 22.6 | 33.9 |
| CV | | | |
| BERT | 30.3 | 41.1 | 34.9 |
| MKG (Ours) | 38.7 | 44.4 | 41.3 |
| MMMR (Ours) | 31.1 | 51.9 | 38.8 |
| MFULL (Ours) | **36.6** | **55.6** | **44.1** |

Table 2: Results on Causal-TimeBank. CV denotes 10-fold cross-validation; TE denotes evaluation on the TempEval-3 dataset. Pre., Rec. and F1 indicate precision (%), recall (%) and F1-score (%), respectively. Bold denotes the best results; * denotes significance at the 0.05 level.

| METHODS | PRE. | REC. | F1 |
|---------|------|------|-----|
| PMI [Do et al., 2011] | 26.6 | 20.8 | 23.3 |
| ECD [Do et al., 2011] | 40.9 | 23.5 | 29.9 |
| CEA [Do et al., 2011] | **62.2** | 28.0 | 38.6 |
| BERT | 16.8 | 30.7 | 21.7 |
| MKG (Ours) | 17.2 | 68.2 | 27.5 |
| MMMR (Ours) | 20.7 | **77.3** | 32.6 |
| MFULL (Ours) | 34.1 | 68.2 | **45.4** |

Table 3: Results on the EventCausality dataset. Pre., Rec. and F1 indicate precision (%), recall (%) and F1-score (%), respectively. Bold denotes the best results; * denotes significance at the 0.05 level.
| SET. | ST → TT (δ) | LIP | MKG | MMMR | MF |
|------|-------------|-----|-----|------|----|
| Low | T8 → T35 (0.0%) | 2.8 | 17.6 | 29.7 | 44.7 |
| | T13 → T12 (0.0%) | - | 6.0 | 20.4 | 25.1 |
| | T18 → T30 (0.0%) | - | - | 10.3 | 19.5 |
| Med | T8 → T3 (1.7%) | 6.7 | 22.1 | 24.9 | 30.9 |
| | T13 → T41 (0.1%) | 4.5 | 12.1 | 20.7 | 28.6 |
| | T18 → T35 (2.8%) | 17.1 | 40.4 | 38.4 | 44.5 |
| High | T8 → T19 (12.4%) | 19.4 | 42.7 | 42.8 | 45.1 |
| | T13 → T14 (17.1%) | 27.4 | 44.4 | 42.7 | 46.0 |
| | T18 → T33 (27.2%) | 32.2 | 45.3 | 44.1 | 49.0 |

Table 4: Results (F1 score (%)) of cross-topic adaptation. ST → TT (δ) denotes that the source topic is ST, the target topic is TT, and their similarity is δ. MF denotes our full model. We use only 10 documents ([Do et al., 2011] use them as seed documents) for training. From the results, the superior performance of MFULL (45.4% F1) demonstrates the applicability of our approach to small datasets. ## 5 Model Generalization Evaluation Generalization refers to a model’s ability to adapt to new, previously unseen data. We conduct 1) cross-topic adaptation, 2) evaluation on unseen predicates, and 3) cross-task adaptation to estimate the generalization ability of our model. ### 5.1 Cross-Topic Adaptation Different topics usually involve different events. In cross-topic adaptation, we train our model on a source topic but test it on other topics. We use EventStoryLine for these experiments. Specifically, we first randomly select a topic as the source topic (for model training and tuning), and then rank the remaining topics by their similarity to the source topic (the similarity of two topics $t_1$ and $t_2$ is defined as $\frac{|E_{t_1} \cap E_{t_2}|}{|E_{t_1} \cup E_{t_2}|}$, where $E_t$ denotes the event set of topic $t$). Finally, we test how our model performs on the topics with the lowest, medium and highest similarity to the source topic. We re-implement the previous state-of-the-art system LIP to compare with our models.
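Assuming the topic similarity δ is the Jaccard overlap of the two topics' event sets (the form the definition suggests), it can be computed as below; the event sets shown are invented for illustration.

```python
# Jaccard similarity between two topics' event sets:
# |intersection| / |union|, with 0.0 for two empty sets.
def topic_similarity(events_a: set, events_b: set) -> float:
    union = events_a | events_b
    return len(events_a & events_b) / len(union) if union else 0.0

t8 = {"quake", "tsunami", "flood"}       # hypothetical event sets
t35 = {"election", "vote"}
print(topic_similarity(t8, t35))                  # 0.0: a low-similarity pair
print(topic_similarity(t8, {"quake", "flood"}))   # 2 shared of 3 total, ~0.667
```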
From the results in Table 4, the performance of LIP is highly dependent on the similarity between the source and target topics. It achieves relatively good performance when the target and source topics are highly similar, but behaves extremely poorly when their similarity is low. In contrast, our models, especially MMMR and MFULL, are robust under cross-topic adaptation, achieving superior performance even in the low-similarity cases. ### 5.2 Unseen Predicates To further test the generalization ability of our model, we conduct experiments on unseen predicates. For the EventStoryLine corpus, we first randomly select 1/3 of the documents as the training set. Then, we divide the remaining corpus into: 1) a ‘Both Seen’ set, where both events exist in the training data (with a size of 3,464); 2) a ‘One Unseen’ set, where only one of the events exists in the training data (with a size of 4,381); and 3) a ‘Both Unseen’ set, where both events are unobserved during training (with a size of 1,891). From the results in Figure 3: 1) LIP performs relatively well on ‘Both Seen’, but poorly on ‘One Unseen’ and ‘Both Unseen’ (only 11.3% F1); 2) our full model achieves the best performance on all three sets; and 3) MMMR achieves better performance than MKG on ‘One Unseen’ and ‘Both Unseen’. Figure 3: Results (F1 score (%)) of unseen predicates. ‘Both Seen’ indicates that both events exist in the training data; ‘One Unseen’ indicates that only one of the events exists in the training data; ‘Both Unseen’ indicates that both events are unobserved during training. ### 5.3 Cross-Task Adaptation Finally, we investigate cross-task adaptation, where we train our model on ECI datasets but test its performance on other tasks.
| Datasets | Methods | Pre. | Rec. | F1 |
|----------|---------|------|------|------|
| SemEval | LIP [Gao et al., 2019] | 24.6 | 21.1 | 22.8 |
| | MKG (Ours) | **63.5** | 55.2 | 59.1 |
| | MMMR (Ours) | 37.7 | **89.3** | 52.9 |
| | MFULL (Ours) | 59.4 | 75.0 | **66.0** |
| FrameNet | LIP [Gao et al., 2019] | 10.5 | 11.8 | 11.1 |
| | MKG (Ours) | 64.6 | 13.5 | 22.0 |
| | MMMR (Ours) | **85.9** | 57.0 | 68.5 |
| | MFULL (Ours) | 84.4 | **60.3** | **70.3** |

Table 5: Results of cross-task adaptation. The model is trained/tuned on EventStoryLine. Pre., Rec. and F1 indicate precision (%), recall (%) and F1-score (%), respectively. Specifically, we train our model on EventStoryLine, but we test it on identifying causal relations in SemEval-2010 Task 8 (which focuses on causal relations between entities) and FrameNet (which focuses on causal relations between frame elements). From the results in Table 5, LIP performs relatively poorly in cross-task adaptation. The reason might be that the features adopted by LIP do not transfer to entities and frame elements. MKG performs better than MMMR on SemEval but much worse on FrameNet. The reason is that SemEval focuses on relations between entities, which are likely to have entries in a KB, whereas FrameNet focuses on frame elements, which can be any span of the sentence and are unlikely to have entries in a KB. Our full model achieves the best performance among all models in cross-task adaptation. 6 Further Discussion Inductive Bias. To further explore the effectiveness of our model, we investigate the prediction bias of MKG and MMMR by inspecting their outputs. Accordingly, there are 685 causal relations predicted only by MKG, 655 predicted only by MMMR and 382 predicted by both in the experiments shown in Table 1 (for a specific fold). The values change to 102, 132 and 58 in the cross-topic adaptation experiment (T18→T33).
The relatively small overlap in their predictions indicates that MKG and MMMR focus on different aspects of the input when identifying causal relations, and that their effects are complementary. This explains the good performance of our full model.

| Examples | MKG | MMMR | MFULL |
|----------|-----|------|-------|
| a) ... has confessed to **killing** a pregnant mom, who **died** on ... | ✓ | × | ✓ |
| b) his half-brother, ..., is also **on trial** for **murder**. | ✓ | ✓ | ✓ |
| c) A gang member was **convicted** Tuesday for **claiming the life** of a mother of ... | × | ✓ | ✓ |
| d) Horton was **struck** by a stray bullet as Lopez **targeted** gang members ... | × | ✓ | ✓ |
| e) ... Carrasquillo allegedly **ordered** Lopez to **shoot** ... | × | × | ✓ |

Table 6: Results of the case study, where bold marks the two events of each pair, and ✓ and × denote a correct and an incorrect prediction, respectively.

**Case Study.** We conduct a case study to further investigate the effectiveness of our model. To simplify the discussion, we limit the experiments to a specific cross-topic adaptation, i.e., the T18→T33 adaptation. Table 6 shows several cases together with the outputs of MKG and MMMR. In general, MKG is good at finding commonsense causality that is usually context-independent, such as *killing* causing *died* in a) and *murder* causing *on trial* in b), but it cannot handle context-dependent cases such as c), d), and e). MMMR behaves in exactly the opposite way. The full model can take advantage of both MKG and MMMR to make more accurate predictions.

## 7 Conclusion and Future Work

In this paper, we propose a new approach for event causality identification. Our approach can, on the one hand, leverage background knowledge to enhance reasoning and, on the other hand, mine event-agnostic, context-specific patterns for reasoning, which greatly enhances its generalization ability. The effectiveness of our model is verified on three datasets under diverse settings.
In the future, we would like to apply our model to other NLP tasks such as relation classification and event temporal relation extraction.

Acknowledgments

This work is supported by the National Key R&D Program of China (No. 2018YFB1005100), the National Natural Science Foundation of China (No. 61922085, No. U1936207, No. 61976211, No. 61806201) and the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDBS-WW-JSCX002). This work is also supported by the Beijing Academy of Artificial Intelligence (BAAI2019QN0501), the CCF-Tencent Open Research Fund, and an independent research project of the National Laboratory of Pattern Recognition.

References

[Beamer and Girju, 2009] Brandon Beamer and Roxana Girju. Using a bigram event model to predict causal potential. In COLING, pages 430–441, 2009.

[Berant et al., 2014] Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. Modeling biological processes for reading comprehension. In EMNLP, pages 1499–1510, 2014.

[Caselli and Vossen, 2017] Tommaso Caselli and Piek Vossen. The Event StoryLine corpus: A new benchmark for causal and temporal relation extraction. In ACL Workshop, pages 77–86, 2017.

[Cheng and Miyao, 2017] Fei Cheng and Yusuke Miyao. Classifying temporal relations by bidirectional LSTM over dependency paths. In ACL, pages 1–6, 2017.

[Choubey and Huang, 2017] Prafulla Kumar Choubey and Ruihong Huang. A sequential model for classifying temporal relations between intra-sentence events. In EMNLP, pages 1796–1802, 2017.

[Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171–4186, 2019.

[Do et al., 2011] Quang Do, Yee Seng Chan, and Dan Roth. Minimally supervised event causality identification. In EMNLP, pages 294–303, 2011.
[Fan et al., 2019] Angela Fan, Claire Gardent, Chloé Braud, and Antoine Bordes. Using local knowledge graph construction to scale Seq2Seq models to multi-document inputs. In EMNLP, pages 4186–4196, 2019.

[Gao et al., 2019] Lei Gao, Prafulla Kumar Choubey, and Ruihong Huang. Modeling document-level causal structures for event causal relation identification. In NAACL, pages 1808–1817, 2019.

[Girju, 2003] Roxana Girju. Automatic detection of causal relations for question answering. In ACL Workshop, pages 76–83, 2003.

[Hashimoto et al., 2014] Chikara Hashimoto, Kentaro Torisawa, Julien Kloetzer, Motoki Sano, István Varga, Jong-Hoon Oh, and Yutaka Kidawara. Toward future scenario generation: Extracting event causality exploiting semantic relation, context, and association features. In ACL, pages 987–997, 2014.

[Hashimoto, 2019] Chikara Hashimoto. Weakly supervised multilingual causality extraction from Wikipedia. In EMNLP, pages 2988–2999, 2019.

[Hu et al., 2017] Zhichao Hu, Elahe Rahimtoroghi, and Marilyn Walker. Inference of fine-grained event causality from blogs and films. In ACL Workshop, pages 52–58, 2017.

[Kadowaki et al., 2019] Kazuma Kadowaki, Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, and Julien Kloetzer. Event causality recognition exploiting multiple annotators' judgments and background knowledge. In EMNLP, pages 5816–5822, 2019.

[Kingma and Ba, 2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

[Lehmann et al., 2014] Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web Journal, (2):167–195, 2014.

[Miller, 1995] George A. Miller. WordNet: A lexical database for English. Commun. ACM, 1995.

[Min et al., 2013] Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. Distant supervision for relation extraction with an incomplete knowledge base. In NAACL, pages 777–782, 2013.

[Minsky, 1974] Marvin Minsky. A framework for representing knowledge. Technical report, USA, 1974.

[Mirza and Tonelli, 2016] Paramita Mirza and Sara Tonelli. CATENA: CAusal and TEmporal relation extraction from NATural language texts. In COLING, pages 64–75, 2016.

[Mirza et al., 2014] Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. Annotating causality in the TempEval-3 corpus. In EACL Workshop, pages 10–19, 2014.

[Mirza, 2014a] Paramita Mirza. Extracting temporal and causal relations between events. In ACL Workshop, pages 10–17, 2014.

[Mirza, 2014b] Paramita Mirza. FBK-HLT-time: A complete Italian temporal processing system for EVENTI-EVALITA 2014. In EVALITA 2014, pages 44–49, 2014.

[Ning et al., 2018] Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. Joint reasoning for temporal and causal relations. In ACL, pages 2278–2288, 2018.

[Oh et al., 2016] Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Ryu Iida, Masahiro Tanaka, and Julien Kloetzer. A semi-supervised learning approach to why-question answering. In AAAI, pages 3022–3029, 2016.

[Rahman and Ng, 2011] Altaf Rahman and Vincent Ng. Coreference resolution with world knowledge. In ACL, pages 814–824, 2011.

[Riaz and Girju, 2014] Mehwish Riaz and Roxana Girju. In-depth exploitation of noun and verb semantics to identify causation in verb-noun pairs. In SIGDIAL, pages 161–170, 2014.

[Speer et al., 2017] Robyn Speer, Joshua Chin, and Catherine Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444–4451, 2017.

[Yang and Mitchell, 2017] Bishan Yang and Tom Mitchell. Leveraging knowledge bases in LSTMs for improving machine reading. In ACL, pages 1436–1446, 2017.

[Zhou et al., 2018] Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. Commonsense knowledge aware conversation generation with graph attention. In IJCAI-ECAI, pages 4623–4629, 2018.
Customary Measurement Word Problems 5th Grade

Time and Measurement - Scholastic Inc 2013-01-01 Great for parents! Fun and colourful workbooks filled with teacher-approved activities that are perfect for independent practice at home or enrichment programs. Each activity book comes with: fun and colourful pages to make learning exciting; quick assessment tests to ensure the child's mastery of each topic; motivational stickers to encourage and reward the child; and a completion certificate to provide the child with a sense of accomplishment. Also available: online resources for parents to extend their child's learning. Visit www.ScholasticLearningExpress.com. January Monthly Collection, Grade 5 - 2017-12-11 The January Monthly Collection for fifth grade is aligned to current state standards and saves valuable prep time for centers and independent work. The included January calendar is filled with notable events and holidays, and the included blank calendar is editable, allowing the teacher to customize it for their classroom. Student resource pages are available in color and black and white. Additional collection resources include: • Reading comprehension • Text features • Primary sources • Grammar • Fractions • Volume • Martin Luther King, Jr.
• Infographics • STEM • Handwriting practice • Law Enforcement Thank You The January Monthly Collection for fifth grade can be used in or out of the classroom to fit teachers' needs and help students stay engaged. Each Monthly Collection is designed to save teachers time, with grade-appropriate resources and activities that can be used alongside classroom learning, as independent practice, center activities, or homework. Each one includes ELA, Math, and Science resources in a monthly theme, engaging students with timely and interesting content. All Monthly Collections include color and black and white student pages, an answer key, and editable calendars for teachers to customize. SBAC Math Workbook - Michael Smith 2020-08-26 The only prep book you will ever need to ace the SBAC Math Test! SBAC Math Workbook reviews all SBAC Math topics and provides students with the confidence and math skills they need to succeed on the SBAC Math test. It is designed to address the needs of SBAC test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete SBAC tests can help you fully prepare for the SBAC Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the SBAC Math test. SBAC Math Workbook contains many exciting features to help you prepare for the SBAC Math test, including: · Content 100% aligned with the 2019-2020 SBAC test · Provided and tested by SBAC Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all SBAC Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers.
Published By: Math Notion WWW.MathNotion.com Guided Math Lessons in Fifth Grade - Nicki Newton 2022-09-20 Guided Math Lessons in Fifth Grade provides detailed lessons to help you bring guided math groups to life. Based on the bestselling Guided Math in Action, this practical book offers 16 lessons, each taught in a round of three: concrete, pictorial, and abstract. The lessons are based on the priority standards and cover fluency, word problems, fractions, and decimals. Author Dr. Nicki Newton shows you the content, as well as the practices and processes, that should be worked on in the lessons so that students not only learn the content but also learn how to solve problems, reason, communicate their thinking, model, use tools, use precise language, and see structure and patterns. Throughout the book, you'll find tools, templates, and blackline masters so that you can instantly adapt each lesson to your specific needs and use it right away. With the easy-to-follow plans in this book, students can work more effectively in small guided math groups and have loads of fun along the way! Remember that guided math groups are about doing the math. So throughout these lessons, you will see students working with manipulatives to make meaning, and doing mathematical sketches to show what they understand and can make sense of from the abstract numbers. When students are given the opportunity to make sense of the math in hands-on and visual ways, the math begins to make sense to them! Progress in Mathematics - Rose A. McDonnell 2006 Math Problem Solving in Action - Nicki Newton 2017-02-10 In this new book from popular math consultant and bestselling author Dr. Nicki Newton, you'll learn how to help students become more effective and confident problem solvers. Problem solving is a necessary skill for the 21st century but can be overwhelming for both teachers and students. Dr.
Newton shows how to make word problems more engaging and relatable, how to scaffold them and help students with math language, how to implement collaborative groups for problem solving, how to assess student progress, and much more. Topics include: Incorporating problem solving throughout the math block, connecting problems to students’ real lives, and teaching students to persevere; Unpacking word problems across the curriculum and making them more comprehensible to students; Scaffolding word problems so that students can organize all the pieces in doable ways; Helping students navigate the complex language in a word problem; Showing students how to reason about, model, and discuss word problems; Using fun mini-lessons to engage students in the premise of a word problem; Implementing collaborative structures, such as math literature circles, to engage students in problem solving; Getting the whole school involved in a problem-solving challenge to promote schoolwide effort and engagement; and Incorporating assessment to see where students are and help them get to the next level. Each chapter offers examples, charts, and tools that you can use immediately. The book also features an action plan so that you can confidently move forward and implement the book’s ideas in your own classroom. Free accompanying resources are provided on the author’s website, www.drnickinewton.com. Spectrum Word Study and Phonics, Grade 5 - 2014-08-15 Understanding letter sounds and word formation is an essential piece to the reading proficiency puzzle. Spectrum Word Study and Phonics for grade 5 guides children through acronyms, analogies, word families, multiple-meaning words, and more. Filled with engaging exercises in a progressive format, this series provides an effective way to reinforce early language arts skills. Mastering language arts is a long process—start with the basics. Spectrum Word Study and Phonics is here to help children begin a successful journey to reading proficiency. 
With the help of this best-selling series, your child will improve language arts skills through practice and activities that focus on phonics, structural analysis, and dictionary skills. Word Problems, Grade 5 - 2013-12-02 Spectrum(R) Word Problems for grade 5 includes practice for essential math skills, such as real world applications, multi-step word problems, fractions, decimals, metric and customary measurement, graphs and probability, geometry, and preparing for algebra. Spectrum(R) Word Problems supplements classroom work and proficiency test preparation. The series provides examples of how the math skills students learn in school apply to everyday life, with challenging, multi-step word problems. It features practice with word problems that are an essential part of the Common Core State Standards. Word problem practice is provided for essential math skills, such as fractions, decimals, percents, metric and customary measurement, graphs and probability, preparing for algebra, and more. Word Problems, Grade 6 - 2013-12-02 Spectrum(R) Word Problems for grade 6 includes practice for essential math skills, such as real world applications, multi-step word problems, fractions, decimals, metric and customary measurement, graphs and probability, geometry, and preparing for algebra. Spectrum(R) Word Problems supplements classroom work and proficiency test preparation. The series provides examples of how the math skills students learn in school apply to everyday life, with challenging, multi-step word problems. It features practice with word problems that are an essential part of the Common Core State Standards. **STAAR Math Workbook** - Michael Smith The only prep book you will ever need to ace the STAAR Math Test!
STAAR Math Workbook reviews all STAAR Math topics and provides students with the confidence and math skills they need to succeed on the STAAR Math test. It is designed to address the needs of STAAR test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete STAAR tests can help you fully prepare for the STAAR Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the STAAR Math test. STAAR Math Workbook contains many exciting features to help you prepare for the STAAR Math test, including: · Content 100% aligned with the 2019-2020 STAAR test · Provided and tested by STAAR Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all STAAR Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com **Word Problems, Grade 8** - 2013-12-02 Spectrum(R) Word Problems for grade 8 includes practice for essential math skills, such as real world applications, multi-step word problems, variables, ratio and proportion, perimeter, area and volume, percents, statistics, and more. Spectrum(R) Word Problems supplements classroom work and proficiency test preparation. The series provides examples of how the math skills students learn in school apply to everyday life, with challenging, multi-step word problems. It features practice with word problems that are an essential part of the Common Core State Standards.
Word problem practice is provided for essential math skills, such as fractions, decimals, percents, metric and customary measurement, graphs and probability, preparing for algebra, and more. **Primary Grade Challenge Math** - Edward Zaccaro 2003-06-01 Offers a higher level of material that goes beyond calculation skills for children in the primary grades. **PSSA Math Workbook** - Michael Smith The only prep book you will ever need to ace the PSSA Math Test! PSSA Math Workbook reviews all PSSA Math topics and provides students with the confidence and math skills they need to succeed on the PSSA Math test. It is designed to address the needs of PSSA test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete PSSA tests can help you fully prepare for the PSSA Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the PSSA Math test. PSSA Math Workbook contains many exciting features to help you prepare for the PSSA Math test, including: · Content 100% aligned with the 2019-2020 PSSA test · Provided and tested by PSSA Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all PSSA Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers.
Published By: The Math Notion www.mathnotion.com **Math Puzzles** - Dorling Kindersley Publishing 1998-09-01 A fun and interactive way to learn math using these fill-in workbooks, illustrated with dozens of bright stickers, board games, learning puzzles, pictures to color in, and much more. Builds into a series of workbooks for home learning and help with schoolwork. Reinforces key areas of the basic school curriculum in math, as well as science and English, and also gives kids plenty of practice in problem-solving. Answers are given where needed, and clear instructions mean children can work confidently on their own. **Resources in Education** - 1991 **Word Problems, Grade 7** - Spectrum 2013-12-02 Spectrum(R) Word Problems for grade 7 includes focused practice for essential math skills. Skills include: real world applications; multi-step word problems; fractions, decimals, and percents; ratio and proportion; metric and customary measurement; graphs, probability, and statistics; and perimeter, area, and volume. Spectrum(R) Word Problems workbooks supplement classroom work and proficiency test preparation. The workbooks provide examples of how the math skills students learn in school apply to everyday life, with challenging, multi-step word problems. They feature practice with word problems that are an essential part of the Common Core State Standards, making them a perfect supplement at home or school. **Summer Math Workbook Grade 5** - Michael Smith 2020-08-14 Prepare for 5th grade math with a perfect math workbook! Summer Math Workbook Grade 5 is a math workbook designed to prevent summer learning loss. It helps students retain and strengthen their math skills and provides a strong foundation for success. This mathematics book provides a solid foundation and a head start on upcoming math exams. Summer Math Workbook Grade 5 is designed by top math instructors to help students prepare for the math course.
It provides students with an in-depth focus on the math concepts, helping them master the essential math skills that students find the most troublesome. This is a valuable resource for those who need extra practice to succeed on the math exams. Summer Math Workbook Grade 5 contains many exciting and unique features to help your student score higher on the math tests, including: · Over 2,500 standards-aligned 5th Grade Math practice questions with answers · Complete coverage of all math concepts which students will need to ace the math tests · Content 100% aligned with the latest math courses · 2 full-length Grade 5 math practice tests with detailed answers. This comprehensive summer workbook for Grade 5 is a perfect resource for those math test takers who want to review core content areas, brush up on math, discover their strengths and weaknesses, and achieve their best scores on the math test. Published By: The Math Notion www.mathnotion.com **PARCC Math Workbook** - Michael Smith The only prep book you will ever need to ace the PARCC Math Test! PARCC Math Workbook reviews all PARCC Math topics and provides students with the confidence and math skills they need to succeed on the PARCC Math test. It is designed to address the needs of PARCC test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete PARCC tests can help you fully prepare for the PARCC Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the PARCC Math test.
PARCC Math Workbook contains many exciting features to help you prepare for the PARCC Math test, including: · Content 100% aligned with the 2019-2020 PARCC test · Provided and tested by PARCC Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all PARCC Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com **Math Word Problems** - Sullivan Associates Staff 1972 **Singapore Math, Grade 5** - 2015-01-05 Singapore Math creates a deep understanding of each key math concept, includes an introduction explaining the Singapore Math method, is a direct complement to the current textbooks used in Singapore, and includes step-by-step solutions in the answer key. Singapore Math, for students in grades 2 to 5, provides math practice while developing analytical and problem-solving skills. This series is correlated to Singapore Math textbooks and creates a deep understanding of each key math concept. Learning objectives are provided to identify what students should know after completing each unit, and assessments are included to ensure that learners obtain a thorough understanding of mathematical concepts. Perfect as a supplement to classroom work, these workbooks will boost confidence in problem-solving and critical-thinking skills! **Mathematics Framework, Kindergarten-grade 12** - Texas Education Agency 1986 **Math Measurement Word Problems** - Rebecca Wingard-Nelson 2013-09 "Presents a step-by-step guide to understanding word problems with math measurement"--Provided by publisher. Word Problems, Grade 5 - Spectrum 2013-12-02 Spectrum(R) Word Problems for grade 5 includes focused practice for essential math skills.
Skills include: real world applications; multi-step word problems; fractions and decimals; metric and customary measurement; graphs and probability; geometry; and preparing for algebra. Spectrum(R) Word Problems workbooks supplement classroom work and proficiency test preparation. The workbooks provide examples of how the math skills students learn in school apply to everyday life, with challenging, multi-step word problems. They feature practice with word problems that are an essential part of the Common Core State Standards, making them a perfect supplement at home or school. The Common Core Mathematics Companion: The Standards Decoded, Grades 3-5 - Linda M. Gojak 2015-05-28 This book is modeled after Jim Burke's successful Common Core Companion Series. It is the second of two books (K-2, 3-5) in the series. The book includes a clear explanation of the mathematics within each domain, cluster, and standard, along with suggested grade-level-appropriate visual models and representations. It is a book for math teachers who may or may not be math specialists. As teachers plan and develop their curriculum, this book will help them determine the important mathematics in a cluster and how that mathematics connects from one grade to the next, as well as within a grade. FSA Math Workbook - Michael Smith The only prep book you will ever need to ace the FSA Math Test! FSA Math Workbook reviews all FSA Math topics and provides students with the confidence and math skills they need to succeed on the FSA Math test. It is designed to address the needs of FSA test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete FSA tests can help you fully prepare for the FSA Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome.
This is an incredibly useful tool for those who want to review all topics being covered on the FSA Math test. FSA Math Workbook contains many exciting features to help you prepare for the FSA Math test, including: · Content 100% aligned with the 2019-2020 FSA test · Provided and tested by FSA Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all FSA Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com Helping Children Learn Mathematics - National Research Council 2002-07-31 Results from national and international assessments indicate that school children in the United States are not learning mathematics well enough. Many students cannot correctly apply computational algorithms to solve problems. Their understanding and use of decimals and fractions are especially weak. Indeed, helping all children succeed in mathematics is an imperative national goal. However, for our youth to succeed, we need to change how we're teaching this discipline. Helping Children Learn Mathematics provides comprehensive and reliable information that will guide efforts to improve school mathematics from pre-kindergarten through eighth grade. The authors explain the five strands of mathematical proficiency, discuss the major changes that need to be made in mathematics instruction, instructional materials, assessments, teacher education, and the broader educational system, and answer some of the frequently asked questions about mathematics instruction. The book concludes by providing recommended actions for parents and caregivers, teachers, administrators, and policy makers, stressing the importance of everyone working together to ensure a mathematically literate society.
Complete Year, Grade 5 - Thinking Kids 2014-06-02 Complete Year for Grade 5 provides a whole year's worth of practice for essential school skills, including verb tenses, using quotation marks, compound and complex sentences, fractions, working with multi-digit numbers, volume, and more. Thinking Kids(R) Complete Year is a comprehensive at-home learning resource with 36 lessons, one for each week of the school year! Practice activities for multiple subject areas, including reading, writing, language arts, and math, are included in each weekly lesson to ensure mastery of all subject areas for one grade level. Complete Year lessons support the Common Core State Standards now adopted in most US states. Handy organizers help parents monitor and track their child's progress and provide fun bonus learning activities. Complete Year is a complete solution for academic success in the coming school year. Go Math Grade 6 - Juli K. Dixon 2010-04 Georgia Milestones Assessment System Math Workbook - Michael Smith 2020-08-26 The only prep book you will ever need to ace the Georgia Milestones Assessment System Math Test! GMAS Math Workbook reviews all GMAS Math topics and provides students with the confidence and math skills they need to succeed on the GMAS Math test. It is designed to address the needs of GMAS test takers who must have a working knowledge of basic mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete GMAS tests can help you fully prepare for the GMAS Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the GMAS Math test.
GMAS Math Workbook contains many exciting features to help you prepare for the GMAS Math test, including: Content 100% aligned with the 2019-2020 GMAS test Provided and tested by GMAS Math test experts Dynamic design and easy-to-follow activities A fun, interactive and concrete learning process Targeted, skill-building practices Complete coverage of all GMAS Math topics on which you will be tested 2 full-length practice tests (featuring new question types) with detailed answers. Published By: Math Notion WWW.MathNotion.com Standardized Test Practice for 5th Grade - Charles J. Shields 1999-05 Grade-specific exercises and practice tests to prepare students for various standardized tests including the California Achievement Tests, the Iowa Tests of Basic Skills, the Comprehensive Tests of Basic Skills, the Stanford Achievement Tests, the Metropolitan Achievement Tests, and the Texas Assessment of Academic Skills. ACT Aspire Math Workbook - Michael Smith The only prep book you will ever need to ace the ACT Aspire Math Test! ACT Aspire Math Workbook reviews all ACT Aspire Math topics and provides students with the confidence and math skills they need to succeed on the ACT Aspire Math. It is designed to address the needs of ACT Aspire test takers who must have a working knowledge of basic Mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete ACT Aspire tests can help you fully prepare for the ACT Aspire Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the ACT Aspire Math test.
ACT Aspire Math Workbook contains many exciting features to help you prepare for the ACT Aspire Math test, including: Content 100% aligned with the 2019-2020 ACT Aspire test - Provided and tested by ACT Aspire Math test experts - Dynamic design and easy-to-follow activities - A fun, interactive and concrete learning process - Targeted, skill-building practices - Complete coverage of all ACT Aspire Math topics on which you will be tested - 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com PSSA Subject Test Mathematics Grade 5: Student Practice Workbook + Two Full-Length PSSA Math Tests - Michael Smith 2021-01-15 Get the Targeted Practice You Need to Ace the PSSA Math Test! PSSA Subject Test Mathematics Grade 5 includes easy-to-follow instructions, helpful examples, and plenty of math practice problems to assist students to master each concept, brush up on their problem-solving skills, and build confidence. The PSSA math practice book provides numerous opportunities to evaluate basic skills along with abundant remediation and intervention activities. These skills permit you to quickly master intricate information and perform better in less time. Students can boost their test-taking skills by taking the book's two practice PSSA Math exams. All test questions are answered and explained in detail. Important Features of the 5th grade PSSA Math Book: A complete review of PSSA math test topics, Over 2,500 practice problems covering all topics tested, The most important concepts you need to know, Clear and concise, easy-to-follow sections, Well designed for enhanced learning and interest, Hands-on experience with all question types, 2 full-length practice tests with detailed answer explanations, Cost-effective pricing, Powerful math exercises to help you avoid traps and pace yourself to beat the Pennsylvania PSSA test.
Students will gain valuable experience and raise their confidence by taking 5th grade math practice tests, learning about test structure, and gaining a deeper understanding of what is tested on the PSSA math grade 5. If ever there was a book to respond to the pressure to increase students' test scores, this is it. Published By: The Math Notion www.mathnotion.com Word Problems - Robert Smith 2003-03 Word Problems, Grade 5 Homework Booklet will help teach math skills like fractions, money, and mixed numbers using word problems. Students will strengthen their reading skills as they learn basic math operations and critical thinking skills. MCAS Math Workbook - Michael Smith The only prep book you will ever need to ace the MCAS Math Test! MCAS Math Workbook reviews all MCAS Math topics and provides students with the confidence and math skills they need to succeed on the MCAS Math. It is designed to address the needs of MCAS test takers who must have a working knowledge of basic Mathematics. This comprehensive workbook with over 2,500 sample questions and 2 complete MCAS tests can help you fully prepare for the MCAS Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the MCAS Math test. MCAS Math Workbook contains many exciting features to help you prepare for the MCAS Math test, including: · Content 100% aligned with the 2019-2020 MCAS test · Provided and tested by MCAS Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all MCAS Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com Your Mathematics Standards Companion, Grades 3-5 - Linda M. 
Gojak 2017-05-17 Transforming the standards into learning outcomes just got a lot easier In this expansion of the original popular Common Core Mathematics Companions, you can see in an instant how teaching to your state standards should look and sound in the classroom. Under the premise that math is math, the authors provide a Cross-Referencing Index for states implementing their own specific mathematics standards, showing which of your standards are the same as CCSS-M, which differ and how—and which page number to turn to for standards-based teaching ideas. It's all here, page by page: The mathematics embedded in each standard for a deeper understanding of the content Examples of what effective teaching and learning look like in the classroom Connected standards within each domain so teachers can better appreciate how they relate Priorities within clusters so teachers know where to focus their time The three components of rigor: conceptual understanding, procedural skills, and applications Vocabulary and suggested materials for each grade-level band with explicit connections to the standards Common student misconceptions around key mathematical ideas with ways to address them Sample lesson plans and lesson planning templates Cross-referenced index listing the standards in the following states, explaining what is unique to the standards of each state Your Mathematics Standards Companion is your one-stop guide for teaching, planning, assessing, collaborating, and designing powerful mathematics curriculum. Common Core Math Workbook - Michael Smith The only prep book you will ever need to ace the Common Core Math Test! Common Core Math Workbook reviews all Common Core Math topics and provides students with the confidence and math skills they need to succeed on the Common Core Math. It is designed to address the needs of Common Core test takers who must have a working knowledge of basic Mathematics. 
This comprehensive workbook with over 2,500 sample questions and 2 complete Common Core tests can help you fully prepare for the Common Core Math test. It provides you with an in-depth focus on the math portion of the exam, helping you master the math skills that students find the most troublesome. This is an incredibly useful tool for those who want to review all topics being covered on the Common Core Math test. Common Core Math Workbook contains many exciting features to help you prepare for the Common Core Math test, including: · Content 100% aligned with the 2019-2020 Common Core test · Provided and tested by Common Core Math test experts · Dynamic design and easy-to-follow activities · A fun, interactive and concrete learning process · Targeted, skill-building practices · Complete coverage of all Common Core Math topics on which you will be tested · 2 full-length practice tests (featuring new question types) with detailed answers. Published By: The Math Notion www.mathnotion.com Word Problems, Grade 4 - Kumon Publishing 2009 Grade 4 workbook introduces word problems involving multi-digit multiplication and division, some decimals, and tables and graphs. Math, Grade 5 - Thomas Richards 2006-12-11 Test with success using the Spectrum Math workbook! This book helps students in grade 5 apply essential math skills to everyday life. The lessons focus on multiplication and division, fractions, measurements, introductory geometry, and probability, and the activities help extend problem-solving and analytical abilities. The book features easy-to-understand directions, is aligned to national and state standards, and also includes a complete answer key. Today, more than ever, students need to be equipped with the essential skills they need for school achievement and for success on proficiency tests. The Spectrum series has been designed to prepare students with these skills and to enhance student achievement.
Developed by experts in the field of education, each title in the Spectrum workbook series offers grade-appropriate instruction and reinforcement in an effective sequence for learning success. Perfect for use at home or in school, and a favorite of parents, homeschoolers, and teachers worldwide, Spectrum is the learning partner students need for complete achievement. Teaching Student-Centered Mathematics - John A. Van de Walle 2017-01-09 NOTE: Used books, rentals, and purchases made outside of Pearson: If purchasing or renting from companies other than Pearson, the access codes for the Enhanced Pearson eText may not be included, may be incorrect, or may be previously redeemed. Check with the seller before completing your purchase. For courses in Elementary Mathematics Methods and for classroom teachers. Note: This is the bound book only and does not include access to the Enhanced Pearson eText. To order the Enhanced Pearson eText packaged with a bound book, use ISBN 0134090683. A practical, comprehensive, student-centered approach to effective mathematical instruction for grades Pre-K-2. Helping students make connections between mathematics and their worlds, and helping them feel empowered to use math in their lives, is the focus of this widely popular guide. Designed for classroom teachers, the book focuses on specific grade bands and includes information on creating an effective classroom environment, aligning teaching to various standards and practices, such as the Common Core State Standards and NCTM's teaching practices, and engaging families. The first portion of the book addresses how to build a student-centered environment in which children can become mathematically proficient, while the second portion focuses on practical ways to teach important concepts in a student-centered fashion.
The new edition features a corresponding Enhanced Pearson eText version with links to embedded videos, blackline masters, downloadable teacher resource and activity pages, lesson plans, activities correlated to the CCSS, and tables of common errors and misconceptions. This book is part of the Student-Centered Mathematics Series, which is designed with three objectives: to illustrate what it means to teach student-centered, problem-based mathematics, to serve as a reference for the mathematics content and research-based instructional strategies suggested for the specific grade levels, and to present a large collection of high quality tasks and activities that can engage students in the mathematics that is important for them to learn. Improve mastery and retention with the Enhanced Pearson eText* The Enhanced Pearson eText provides a rich, interactive learning environment designed to improve student mastery of content. The Enhanced Pearson eText is: Engaging. The new interactive, multimedia learning features were developed by the authors and other subject-matter experts to deepen and enrich the learning experience. Convenient. Enjoy instant online access from your computer or download the Pearson eText App to read on or offline on your iPad® and Android™ tablet.* Affordable. Experience the advantages of the Enhanced Pearson eText along with all the benefits of print for 40% to 50% less than a print bound book. *The Enhanced eText features are only available in the Pearson eText format. They are not available in third-party eTexts or downloads. *The Pearson eText App is available on Google Play and in the App Store. It requires Android OS 3.1-4, a 7" or 10" tablet, or iPad iOS 5.0 or later. The Planets in Our Solar System - Franklyn M. Branley 1998-04-18 Where is it partly cloudy and 860°F? Venus. Read about the eight planets in our solar system and Earth's special place in it. 
This book also includes instructions for making your own solar system mobile, and on the new "Find Out More" page you can learn how to track the moon and visit the best planet web sites.
Label Distribution for Learning with Noisy Labels Yun-Peng Liu, Ning Xu, Yu Zhang and Xin Geng∗ MOE Key Laboratory of Computer Network and Information Integration, China School of Computer Science and Engineering, Southeast University, Nanjing 210096, China {yunpengliu, nxing, zhang_yu, firstname.lastname@example.org} Abstract The performances of deep neural networks (DNNs) crucially rely on the quality of labeling. In some situations, labels are easily corrupted and therefore become noisy labels. Thus, designing algorithms that deal with noisy labels is of great importance for learning robust DNNs. However, it is difficult to distinguish between noisy labels and clean labels, which becomes the bottleneck of many methods. To address the problem, this paper proposes a novel method named Label Distribution based Confidence Estimation (LDCE). LDCE estimates the confidence of the observed labels based on label distribution. Then, the boundary between clean labels and noisy labels becomes clear according to confidence scores. To verify the effectiveness of the method, LDCE is combined with an existing learning algorithm to train robust DNNs. Experiments on both synthetic and real-world datasets substantiate the superiority of the proposed algorithm against state-of-the-art methods. 1 Introduction Deep neural networks (DNNs) are the preferred choice for many classification tasks. A large number of labeled training instances are essential to training DNNs with high performance. It is convenient to obtain enough instances as well as labels with the assistance of the Internet and web crawlers [Divvala et al., 2014], but noisy labels are inevitable. Training DNNs with noisy labels is challenging since the networks can easily overfit to the corrupted labels [Nettleton et al., 2010]. Many prior works avoid overfitting to the corrupted data by correcting the noisy labels [Yi and Wu, 2019; Hendrycks et al., 2018]. 
Note that the corrupted dataset inherently contains a large number of samples with clean labels. It is inevitable to make wrong corrections to clean labels due to the uncertainty of the boundary between clean labels and noisy labels. Such wrong operations will result in a decline in performance. Moreover, current methods are weak at handling various noise patterns. When the noise pattern is changed, some methods will make unstable corrections. As shown in Fig. 1, we compare the model trained with forward correction loss [Patrini et al., 2017], a classical label correction method, on samples containing noisy labels with the model trained with cross entropy loss on filtered samples with clean labels. Forward correction loss performs well under the asymmetric noise pattern but poorly in symmetric noise cases. In contrast, the model learned with only clean labels has stable performance under both noise patterns. This shows that the samples with clean labels are more important than the correcting operations under specific noise patterns. However, the uncertainty of labels makes it difficult to identify the samples with clean labels. To reduce the uncertainty of labels, a metric named label confidence is proposed in this paper for measuring the reliability of each label, in which clean labels get high confidence scores while noisy labels achieve low confidence scores. Note that Label Distribution (LD) naturally provides such a metric [Geng, 2016]. As shown in Fig. 2, LD assigns the description degree $d^y_x$ to all classes in a distribution format, i.e., $d^y_x \in [0, 1]$ and $\sum_y d^y_x = 1$. The description degree represents the degree to which $y$ describes $x$, which makes it naturally suitable for measuring the label confidence. 
Moreover, compared with directly estimating the confidence score from the feature space, e.g., CleanNet [Lee et al., 2018], the description degree is more reliable because the degree value is restricted by the other classes in the distribution format. Motivated by this, this paper proposes a novel algorithm named Label Distribution based Confidence Estimation (LDCE) to estimate label confidence by generating label distributions. Note that using some trusted samples is an effective approach to improving robustness in noisy label problems, and such a small set can be fetched easily in many real-world applications [Hendrycks et al., 2018]. LDCE estimates the label confidence via a small number of trusted samples, i.e., samples with clean labels. In this case, the training data is divided into two sets, i.e., a set with a few trusted samples and another set with a large number of untrusted samples. Guided by the trusted samples, LDCE generates an LD for each untrusted sample by measuring similarity in feature space, i.e., the embedding space obtained from a feature encoder. Then, the confidence score of the observed label can be obtained from the LD. After obtaining the label confidence, the samples with high confidence scores can be selected from the untrusted set; this subset is termed purified data in this paper. Experiments show that the purified data mainly consists of samples with clean labels. Since the purified data is selected from the untrusted set, the risk of wrong operations on clean labels is mitigated. In this case, we combine the purified data with an existing correction method to train robust DNNs. The empirical results substantiate that the proposed method achieves favorable performance in both synthetic noise cases and real-world noise cases. The contributions of this paper are as follows: - A reliable metric, label confidence, is designed for measuring the reliability of labels based on label distribution. 
- A practical algorithm for estimating the label confidence is proposed. Experimental results verify the efficiency of the estimation algorithm. - A novel learning method using label confidence is designed for training robust DNNs. Experiments on three datasets show the superiority of the proposed method against state-of-the-art methods. 2 Related Work Due to the presence of noisy labels, most learning algorithms based on the supervised learning framework cannot accurately capture the mappings between instances and ground-truth labels. To deal with this problem, existing methods focus on mitigating the adverse effect of noisy labels. One intuitive and easy approach is to remove the samples which are considered wrong-labeled. For instance, [Han et al., 2018; Chen et al., 2019] attempt to filter out unreliable samples during the training phase via a co-teaching framework. However, such methods do not explicitly deal with noisy labels. When the noise is severe, these methods are usually vulnerable. An alternative approach is to correct noisy labels. [Tanaka et al., 2018; Chen et al., 2019] propose to replace the corrupted label with a more robust soft label in a distribution format. By converting a categorical label into a label distribution, the noisy label can be probabilistically corrected. Aside from correcting the labels directly, a loss correction strategy has been proposed to revise the effects of noisy labels with a correction loss function [Patrini et al., 2017]. However, wrong corrections to clean labels will introduce extra noisy information during the learning process. Other than the works mentioned above, some works notice the value of clean labels and focus on strengthening the importance of the samples with clean labels. In [Guo et al., 2018], CurriculumNet designs a learning schedule which starts by learning from an ‘easy’ subset and gradually adds more ‘complex’ subsets. 
In [Ren et al., 2018; Shu et al., 2019], a sample reweighting strategy is used to increase the attention paid to clean labels. Our method belongs to this category. However, different from previous methods, we focus on combining the reweighting idea with correction methods. 3 The Proposed Methods 3.1 Notation Definitions First of all, some notations used in this paper are clarified as follows. The $i$-th instance is denoted by $x_i$. The ground-truth label of the $i$-th instance is denoted by $y_i \in \{0, 1\}^c$ with $1^T y_i = 1$, where $c$ is the number of possible label values and $1$ is a vector of all ones. As the training set is corrupted, the observed label of the $i$-th instance is denoted by $\hat{y}_i \in \{0, 1\}^c$ with $1^T \hat{y}_i = 1$. In a $c$-class classification problem, $D_u = \{(x_i, \hat{y}_i) | 1 \leq i \leq N\}$ is a corrupted dataset, where the observed label $\hat{y}_i$ is considered unreliable. Moreover, a trusted dataset is prepared as $D_t = \{(x_i, y_i) | 1 \leq i \leq t\}$, where $t/(t + N) \ll 1$ is defined as the trusted fraction. To generate the label confidence, we introduce the label distribution $d_i$. The description degree of class $j$ for instance $x_i$ is denoted as $d_{ij} \in [0, 1]$, with $\sum_{j=1}^{c} d_{ij} = 1$. The label confidence of the $i$-th sample is defined as $c_i$. 3.2 Confidence Estimation As mentioned above, the bottleneck of current methods is the uncertainty on the untrusted set. Measuring the reliability of each label is a practical approach to reducing the uncertainty. Guided by this motivation, this paper designs a metric named label confidence based on label distribution [Geng, 2016] and proposes a practical method, LDCE, for estimating this metric. Label Distribution Generation LD offers a numerical metric, the description degree, for each class in the label space. As shown in Fig. 2, a high degree usually denotes more reliable labeling. Thus, the description degree 
on the observed label is naturally suitable to be the label confidence metric. Note that samples sharing similar features tend to have the same label; accordingly, similarity in feature space has been successfully applied in recovering LD [Xu et al., 2018]. In this paper, the feature similarity is calculated with a small batch of trusted samples. Then, LD is generated according to the similarity scores. In detail, the first step is to sample a support set and two query sets from the training data. Sampling subsets to construct meta-tasks is commonly used in few-shot learning algorithms [Wang and Yao, 2019], which is helpful for learning from limited data. Then, the membership degree [Xu et al., 2018] to class $j$ for instance $\mathbf{x}_i$ is calculated by $$m_j^i = \frac{1}{|\mathbf{S}_j|} \sum_{\mathbf{x}_k \in \mathbf{S}_j} s_{ik},$$ \hspace{1cm} (1) where $\mathbf{S}_j$ denotes the samples of class $j$ in the support set, and $s_{ik}$ denotes the similarity score between instances $\mathbf{x}_i$ and $\mathbf{x}_k$. Finally, the membership degrees to the different classes are normalized into a label distribution $\mathbf{d}_i = [d_1^i, d_2^i, \ldots, d_c^i]$ via a softmax layer $$d_j^i = \frac{\exp(m_j^i)}{\sum_{k=1}^{c} \exp(m_k^i)}. $$ \hspace{1cm} (2) After obtaining the label distribution, the label confidence $c_i$ is updated iteratively according to $$c_i^{(t+1)} = \alpha c_i^{(t)} + (1 - \alpha) \mathbf{d}_i^\top \tilde{\mathbf{y}}_i,$$ \hspace{1cm} (3) where $\alpha$ is the step size. **Framework of Estimation Model** In order to obtain accurate label confidence from the corrupted data, it is important to train a reliable feature encoder for similarity calculation. This paper designs a unified learning framework to estimate the label confidence as well as learn a reliable feature encoder. As shown in Fig. 3, the framework is composed of a feature encoder $f_\theta$, a metric module $g$, and a fully connected (FC) layer $f_\phi$. 
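Eq. 1-3 can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a precomputed vector of similarity scores between one untrusted instance and the support samples, and the function and variable names (`label_distribution`, `update_confidence`) are hypothetical.

```python
import numpy as np

def label_distribution(sims, support_labels, n_classes):
    """Eq. 1-2: membership degrees from support-set similarities,
    softmax-normalized into a label distribution.

    sims: similarity score s_ik between the instance and each support sample.
    support_labels: class index of each support sample.
    """
    # Eq. 1: mean similarity to the support samples of each class j
    m = np.array([sims[support_labels == j].mean() for j in range(n_classes)])
    # Eq. 2: normalize membership degrees via softmax (shifted for stability)
    e = np.exp(m - m.max())
    return e / e.sum()

def update_confidence(c_prev, d, y_onehot, alpha=0.6):
    """Eq. 3: moving-average update of the label confidence with step size
    alpha, where d . y is the description degree of the observed label."""
    return alpha * c_prev + (1 - alpha) * float(d @ y_onehot)
```

By construction the distribution sums to one, and a label whose class dominates the similarity scores receives a high description degree, hence a high confidence after a few updates.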
The estimation model is learned with a multi-task strategy [Ruder, 2017], which consists of a metric learning task and a classification task. The output of the feature encoder is denoted as $\mathbf{z}_i = f_\theta(\mathbf{x}_i)$, which refers to the feature embedding of instance $\mathbf{x}_i$. Then, the similarity score between two instances is measured as $s_{ij} = ||\mathbf{z}_i - \mathbf{z}_j||$. After obtaining the label distribution according to Eq. 1-2, the loss function of the metric learning task is calculated by the cross entropy loss on the trusted query set according to Eq. 4, $$\mathcal{L}_{sim} = -\frac{1}{m_1} \sum_{i=1}^{m_1} \sum_{j=1}^{c} y_i^j \log d_j^i,$$ \hspace{1cm} (4) where $m_1$ denotes the number of query data sampled from the trusted set. For the classification task, the linear layer $f_\phi$ is used to predict the label of instance $\mathbf{x}_i$, and the predicted result is denoted as $p(\mathbf{y}_i | \mathbf{x}_i) = f_\phi(\mathbf{z}_i)$. For data sampled from the trusted set, the loss is calculated directly with the cross entropy loss according to Eq. 5, $$\mathcal{L}_c = -\frac{1}{n + m_1} \sum_{i=1}^{n+m_1} \sum_{j=1}^{c} y_i^j \log(p(y_i^j | \mathbf{x}_i)), $$ \hspace{1cm} (5) where $n$ and $m_1$ denote the numbers of support data and query data sampled from the trusted set. For data sampled from the untrusted set, the loss is calculated by Eq. 6 based on the attention mechanism [Vaswani et al., 2017], $$\mathcal{L}_a = -\sum_{i=1}^{m_2} \sum_{j=1}^{c} a_i y_i^j \log(p(y_i^j | \mathbf{x}_i)), $$ \hspace{1cm} (6) where $a_i$ denotes the attention value obtained from the label confidence, and $m_2$ is the number of query data sampled from the untrusted set. Since the encoder is trained from scratch, the estimation result is unstable in the early iterations. 
In this case, we initialize the label confidence to 0 and calculate the attention value with a threshold $\delta$, $$a_i = \begin{cases} c_i & \text{if } c_i \geq \delta, \\ 0 & \text{otherwise}. \end{cases} $$ \hspace{1cm} (7) Then, the loss function for the classification task is designed as follows: $$\mathcal{L}_{cls} = \frac{\mathcal{L}_c + \mathcal{L}_a}{n + m_1 + \sum_{i=1}^{m_2} \mathbb{I}(a_i > 0)}, $$ \hspace{1cm} (8) where $\mathbb{I}(\cdot)$ is the indicator function. Based on the above analysis, Eq. 4 and Eq. 8 are combined to form the loss function for training the estimation model, $$\mathcal{J}_{LDCE} = \mathcal{L}_{sim} + \mathcal{L}_{cls}. $$ \hspace{1cm} (9) Algorithmic details are shown in Algorithm 1. **3.3 Learning with Purified Data** After obtaining the label confidence, the boundary between clean labels and noisy labels becomes clear by assuming that clean labels get higher label confidence. In this case, the samples with high confidence scores are selected using the same threshold $\delta$ as in the estimation model, and the selected samples are named *purified data*. Then, we combine the purified data with the classical correction method GLC [Hendrycks et al., 2018] by proposing a revised correction loss, and the learning method is named *Purified Data based Loss Correction* (PDLC). Algorithm 1 Label Distribution based Confidence Estimation Input: Trusted data \( \mathcal{D}_t \), Untrusted data \( \mathcal{D}_u \), \( f_{\theta} \), \( f_{\phi} \). Parameters: batch sizes \( n \), \( m_1 \), \( m_2 \), max iterations \( T \), step size \( \alpha \), threshold \( \delta \). 1: Initialize model parameters \( \theta \), \( \phi \) and the label confidence. 
2: for \( t = 1 \) to \( T \) do 3: \( \{x^{(s)}, y^{(s)}\} \leftarrow \text{SampleMiniBatch}(\mathcal{D}_t, n) \) 4: \( \{x^{(q_1)}, y^{(q_1)}\} \leftarrow \text{SampleMiniBatch}(\mathcal{D}_t, m_1) \) 5: \( \{x^{(q_2)}, \tilde{y}^{(q_2)}\} \leftarrow \text{SampleMiniBatch}(\mathcal{D}_u, m_2) \) 6: \( z^{(s)} \leftarrow f_{\theta}(x^{(s)}) \), \( z^{(q_1)} \leftarrow f_{\theta}(x^{(q_1)}) \), \( z^{(q_2)} \leftarrow f_{\theta}(x^{(q_2)}) \) 7: Calculate the label distribution \( d_i \) by Eq. 1-2. 8: Formulate the learning objective by Eq. 4-9. 9: Update model parameters \( \theta \), \( \phi \) in the backward pass. 10: Update the label confidence by Eq. 3. 11: end for Revised Forward Correction Loss In [Patrini et al., 2017], forward correction loss is proposed to tackle the noisy label problem. The loss function is \[ \ell_{corr}(p(y_i|x_i), \tilde{y}_i = e^k) = -\log \sum_{j=1}^{c} C_{jk} p(y_i^j|x_i), \hspace{1cm} (10) \] where \( e^k \) denotes the \( k \)-th standard canonical vector, i.e., \( e^k \in \{0, 1\}^c \) and \( 1^\top e^k = 1 \), and \( C \in \mathbb{R}^{c \times c} \) is the noise transition matrix with \( C_{jk} = p(\tilde{y} = e^k|y = e^j) \). \( C \) is obtained in the same way as in [Hendrycks et al., 2018]. Since the purified data contains mainly samples with clean labels, forward correction loss does not perform well on purified data when compared with cross entropy loss. However, simply replacing the correction loss with the cross entropy loss would mitigate the correction effects on the remaining untrusted samples. In this case, we design a revised forward correction loss by combining forward correction loss with cross entropy loss: \[ \ell_{purified} = \lambda \ell_{ce} + (1 - \lambda) \ell_{corr}, \hspace{1cm} (11) \] where \( \lambda \in (0, 1) \) is a hyperparameter balancing the cross entropy loss and the initial correction loss, which is selected according to the model performance on each noise pattern. 
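The forward correction loss of [Patrini et al., 2017] and the revised loss \( \ell_{purified} \) above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' code: `p` is a vector of predicted class probabilities, `C` is a row-stochastic noise transition matrix, and `k` indexes the observed (possibly noisy) label.

```python
import numpy as np

def forward_correction_loss(p, C, k):
    """Forward correction: -log sum_j C[j, k] * p_j,
    where C[j, k] = p(observed label k | true label j)."""
    return -np.log(np.dot(C[:, k], p))

def cross_entropy_loss(p, k):
    """Standard cross entropy against the observed label k."""
    return -np.log(p[k])

def revised_correction_loss(p, C, k, lam=0.5):
    """Revised loss used on the purified data:
    lambda * ce + (1 - lambda) * forward correction."""
    return lam * cross_entropy_loss(p, k) + (1 - lam) * forward_correction_loss(p, C, k)
```

Note that when `C` is the identity (no noise), the forward correction loss reduces exactly to cross entropy, so the revised loss interpolates between two losses that agree on clean data.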
We observe that cases with a high noise ratio favor a small \( \lambda \) value, while cases with a small noise ratio prefer a high \( \lambda \) value. Final Objective After obtaining the purified data, the whole training set is divided into three parts. The first part is the pre-acquired trusted samples \( \mathcal{D}_t \) with ground-truth labels, for which the loss function is the cross entropy loss \( \ell_{ce} \). The second part is the purified data \( \mathcal{D}_p \) with high label confidence, and the loss on \( \mathcal{D}_p \) is the revised forward correction loss \( \ell_{purified} \). The last part is the remaining samples with relatively low label confidence, which is denoted \( \mathcal{D}_u \), and the loss on \( \mathcal{D}_u \) is calculated by the forward correction loss \( \ell_{corr} \): \[ \mathcal{L}_{trusted} = \sum_{i=1}^{|\mathcal{D}_t|} \ell_{ce}(f_{\varphi}(x_i), y_i), \hspace{1cm} (12) \] \[ \mathcal{L}_{purified} = \sum_{i=1}^{|\mathcal{D}_p|} \ell_{purified}(f_{\varphi}(x_i), \tilde{y}_i), \hspace{1cm} (13) \] \[ \mathcal{L}_{untrusted} = \sum_{i=1}^{|\mathcal{D}_u|} \ell_{corr}(f_{\varphi}(x_i), \tilde{y}_i), \hspace{1cm} (14) \] \[ \mathcal{J} = \frac{\mathcal{L}_{trusted} + \mathcal{L}_{purified} + \mathcal{L}_{untrusted}}{|\mathcal{D}_t| + |\mathcal{D}_p| + |\mathcal{D}_u|}. \hspace{1cm} (15) \] The DNN model is denoted as \( f_{\varphi} \). By minimizing Eq. 15, the optimal parameter \( \varphi \) of the DNN model can be obtained. 4 Experiments 4.1 Experimental Setup Datasets The experiments are conducted on CIFAR10 and CIFAR100 [Krizhevsky et al., 2009] with synthetic label noise and on Clothing1M [Xiao et al., 2015] with real-world label noise. CIFAR10 and CIFAR100 are two datasets consisting of 32 × 32 color images. Both datasets contain 50,000 training samples and 10,000 test samples. CIFAR10 assigns the samples to 10 classes, while CIFAR100 assigns the samples to 100 classes. 
Since a trusted set is essential in this learning setting, the training set is split into two parts, with trusted fractions of 5% and 10%. Then, synthetic label noise is added to the untrusted set. Following the previous literature [Chen et al., 2019], experiments are conducted on two representative types of label noise: symmetric noise and asymmetric noise. As illustrated in Fig. 4, in the symmetric case the label noise is uniformly distributed among all other classes. In the asymmetric case, label noise is generated by flipping each label to one specific incorrect class. The noise ratio \( \epsilon \) denotes the proportion of wrong labels. In this paper, we test noise ratios of 20%, 50%, and 80% for both symmetric and asymmetric noise. Clothing1M is a dataset collected with real-world label noise. Its training set consists of 1M images with noisy labels from 14 fashion classes and 47,570 images with manually refined labels. The validation set and test set contain 14,313 and 10,526 images, respectively. The images with manually refined labels in the training set are used as trusted samples.

Figure 4: Examples of the noise transition matrix \( C \): (a) symmetric noise, (b) asymmetric noise (taking 5 classes and noise ratio \( \epsilon = 40\% \) as an example).

Figure 5: (a)-(c) Distribution of label confidence in the interval $[\delta, 1]$ on CIFAR10 with 5% trusted fraction. (d)-(e) Performance comparison for methods learned without purified data and with purified data on CIFAR10 and CIFAR100 with 5% trusted fraction and 50% noise ratio. The striped bars denote the methods learned with purified data.

**Implementation Details** The experiments are implemented with the PyTorch framework. Detailed implementations for each dataset are as follows.

**CIFAR10 & CIFAR100.** For the estimation model, we use a ResNet-32 [He et al., 2016] as the feature encoder. The learning rate is 0.1 with a decay step of 60 and a decay rate of 0.1. The hyper-parameters are $\alpha=0.6$ and $\delta=0.5$.
We observe that these hyper-parameters, selected by experience, are not very sensitive to different noise patterns; they are therefore fixed for all noise patterns. For the classifier model, we adopt a Wide Residual Network [Zagoruyko and Komodakis, 2016] with depth 40 and widening factor 2. The learning rate is 0.1 with multi-step decay at epochs [60, 80, 90] and a decay rate of 0.2. For both the estimation model and the classifier model, we use the SGD optimizer with momentum 0.9 and an $\ell_2$ weight decay of $1 \times 10^{-4}$, and train the models for 100 epochs.

**Clothing1M.** Following previous works [Tanaka et al., 2018; Shu et al., 2019], we use a ResNet-50 [He et al., 2016] pre-trained on ImageNet for both the feature encoder and the classifier model. The hyper-parameters are the same as for CIFAR10 and CIFAR100. The learning rate is 0.01 with a decay step of 5 and a decay rate of 0.1. We use the SGD optimizer with momentum 0.9 and an $\ell_2$ weight decay of $1 \times 10^{-4}$, and train the models for 10 epochs. For preprocessing, we resize each image to $256 \times 256$, crop the central $224 \times 224$ as input, and perform normalization.

**Baselines** We compare our algorithm with **Trusted Only**, which trains DNNs with only the trusted samples, and **Fine-tuning**, which fine-tunes DNNs trained on corrupted data using the trusted samples [Shu et al., 2019]. Beyond these two, classical comparison methods with the same learning settings include **Distillation** [Li et al., 2017], **MentorNet** [Jiang et al., 2018], **L2RW** [Ren et al., 2018], **MW-Net** [Shu et al., 2019], and **GLC** [Hendrycks et al., 2018]. For fair comparison, all methods are evaluated with the same setup. To ensure that the empirical results are reliable, we repeat each experiment on the synthetic noise cases 5 times with different random seeds.
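The two synthetic noise patterns described above can be generated from a noise transition matrix as in Fig. 4. The sketch below is our own illustration (helper names are ours); in particular, the choice of flip target for asymmetric noise (the next class, cyclically) is an assumption, since the paper only shows a 5-class example:

```python
import numpy as np

def symmetric_noise_matrix(c, eps):
    """Symmetric noise (Fig. 4a): eps is spread uniformly over the c - 1 other classes."""
    C = np.full((c, c), eps / (c - 1))
    np.fill_diagonal(C, 1.0 - eps)
    return C

def asymmetric_noise_matrix(c, eps):
    """Asymmetric noise (Fig. 4b): each label flips to one fixed other class.
    Here the target is assumed to be the next class, cyclically."""
    C = np.eye(c) * (1.0 - eps)
    for j in range(c):
        C[j, (j + 1) % c] += eps
    return C

def corrupt_labels(labels, C, rng):
    """Sample a noisy label for each clean label from the corresponding row of C."""
    c = C.shape[0]
    return np.array([rng.choice(c, p=C[y]) for y in labels])
```

Each row of the resulting matrix is a valid conditional distribution \( p(\tilde{y} \mid y) \), so corrupting a label set amounts to one categorical draw per sample.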
### 4.2 Experimental Results

**Results on CIFAR10 & CIFAR100** The label confidence obtained from the estimation model is critical for the learning method PDLC. To investigate the performance of the estimation model, we first illustrate the distribution of label confidence in the interval $[\delta, 1]$ ($\delta = 0.5$) on CIFAR10 with 5% trusted fraction. As can be seen from Fig. 5(a)-(c), for both symmetric and asymmetric noise types, the purified data, i.e., the samples with label confidence in the interval $[\delta, 1]$, consists mainly of samples with clean labels. In other words, only a few wrongly labeled samples remain in the purified data, which verifies the effectiveness of the estimation model and the capability of label confidence to identify clean labels. To further investigate the effectiveness of the purified data, we combine it with the methods **Trusted Only**, **Fine-tuning**, and **Distillation**. Since these three methods are easy to implement, we only need to enrich the trusted samples with the purified data. Fig. 5(d)-(e) summarize the results. All three methods achieve a performance gain when combined with the purified data, which verifies its effectiveness.
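The purification step itself is simply a threshold on the estimated label confidence. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def split_by_confidence(confidence, delta=0.5):
    """Return indices of the purified set D_p (confidence >= delta) and of the
    remaining untrusted set D_u (confidence < delta)."""
    conf = np.asarray(confidence, dtype=float)
    purified = np.flatnonzero(conf >= delta)
    untrusted = np.flatnonzero(conf < delta)
    return purified, untrusted
```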
| dataset (trusted fraction) | noise type | noise ratio | method | accuracy | |---------------------------|------------|-------------|--------------|----------| | CIFAR10 (5%) | symmetric | 20% | Trusted Only | 87.52±0.20 | 84.04±0.40 | 91.26±0.17 | 88.49±0.29 | 91.54±0.40 | 92.06±0.11 | 92.14±0.11 | | | | 50% | Fine-tuning | 67.78±0.58 | 82.82±0.36 | 73.92±2.36 | 85.82±0.27 | 83.39±0.71 | 86.37±0.38 | 87.10±0.31 | 87.36±0.43 | | | | 80% | Distillation | 65.90±1.78 | 66.52±3.33 | 44.48±6.62 | 56.61±1.75 | 64.06±0.65 | 70.42±1.43 | 77.80±2.23 | | CIFAR10 (10%) | asymmetric | 20% | Trusted Only | 88.83±0.17 | 84.04±0.40 | 92.52±0.27 | 89.64±0.27 | 92.73±0.28 | 93.38±0.25 | 93.42±0.18 | | | | 50% | Fine-tuning | 67.78±0.58 | 82.82±0.36 | 73.92±2.36 | 85.82±0.27 | 83.39±0.71 | 86.37±0.38 | 87.10±0.31 | 87.36±0.43 | | | | 80% | Distillation | 88.80±0.27 | 87.20±0.32 | 91.42±0.16 | 87.88±0.13 | 91.22±0.26 | 92.16±0.10 | 92.28±0.15 | | CIFAR10 (10%) | symmetric | 20% | Trusted Only | 85.12±0.37 | 79.18±1.01 | 86.15±0.35 | 81.75±0.40 | 82.36±0.33 | 87.66±0.10 | 88.36±0.22 | | | | 50% | Fine-tuning | 76.01±1.54 | 80.21±2.81 | 79.05±1.81 | 79.88±0.88 | 79.88±0.88 | 80.88±0.88 | 81.88±0.88 | | | | 80% | Distillation | 89.86±0.07 | 86.44±0.64 | 92.20±0.17 | 88.97±0.23 | 92.25±0.41 | 93.52±0.16 | 93.62±0.22 | | CIFAR100 (5%) | asymmetric | 20% | Trusted Only | 79.38±0.67 | 79.06±2.43 | 82.40±1.21 | 87.05±0.32 | 83.87±1.77 | 93.24±0.22 | 93.32±0.23 | | | | 50% | Fine-tuning | 88.88±0.40 | 73.50±3.41 | | | | | 92.26±0.19 | | | | 80% | Distillation | 62.38±0.51 | 54.48±0.19 | 49.28±0.50 | 59.44±0.57 | 51.29±1.87 | 61.44±2.01 | 62.76±0.41 | | CIFAR100 (10%) | symmetric | 20% | Trusted Only | 23.40±0.51 | 54.48±0.19 | 64.66±0.63 | 71.78±0.40 | 61.12±1.80 | 68.11±0.33 | 74.86±0.10 | 74.90±0.20 | | | | 50% | Fine-tuning | 30.18±1.26 | 26.90±1.26 | 15.50±2.08 | 23.51±0.35 | 37.40±4.80 | 34.26±1.08 | 40.46±1.21 | | | | 80% | Distillation | 64.66±0.78 | 68.18±0.63 | 45.10±1.03 | 
43.34±1.37 | 41.05±0.76 | 41.05±0.76 | 74.44±0.43 | | CIFAR100 (10%) | asymmetric | 20% | Trusted Only | 23.40±0.51 | 54.48±0.19 | 64.66±0.63 | 71.78±0.40 | 61.12±1.80 | 68.11±0.33 | 74.86±0.10 | 74.90±0.20 | | | | 50% | Fine-tuning | 62.72±0.55 | 54.00±0.08 | | | | | 74.12±0.31 | | | | 80% | Distillation | 64.28±0.39 | 64.66±0.28 | 69.58±0.67 | 59.60±0.79 | 68.26±0.70 | 71.76±0.22 | 72.06±0.42 | | CIFAR100 (10%) | symmetric | 20% | Trusted Only | 38.70±0.29 | 58.04±0.64 | 52.36±0.66 | 60.70±0.72 | 50.83±2.28 | 60.33±3.26 | 64.82±0.39 | 65.30±0.31 | | | | 50% | Fine-tuning | 41.32±0.40 | 39.11±1.12 | 27.70±0.70 | 35.88±0.87 | 27.85±0.67 | 47.78±0.93 | 52.42±0.46 | | | | 80% | Distillation | 66.22±0.43 | 66.00±0.13 | 50.00±0.13 | 49.00±0.89 | 50.00±0.89 | 50.00±0.89 | 66.22±0.43 | | CIFAR100 (10%) | asymmetric | 20% | Trusted Only | 38.70±0.29 | 65.54±0.23 | 51.16±0.77 | 49.78±1.69 | 55.23±1.35 | 45.71±0.00 | 74.24±0.18 | 74.54±0.27 | | | | 50% | Fine-tuning | 64.48±0.31 | 36.20±1.37 | | | | | 74.14±0.15 | 74.24±0.30 | Table 1: Average test accuracy (%) with standard deviation on CIFAR10 and CIFAR100 under symmetric noise with ratio 20%, 50%, 80%, and asymmetric noise with ratio 20%, 50%, 80%. The best test accuracy is bolded. Next, we evaluate the performance of PDLC by comparing the method with seven contrast methods on different noise patterns and trusted fractions. Different from the above experiments that simply enrich the trusted samples with purified data, PDLC leverages the purified data with a revised correction loss function. Table 1 summarizes the experimental results. It can be observed that PDLC achieves favorable performances among different noise patterns. For symmetric noise cases, PDLC outperforms all the comparison methods in all noise ratios and trusted fractions. When the noise ratio is small (e.g., 20%), most of the comparison methods can achieve high test accuracies. Even in such cases, PDLC still achieves higher test accuracies. 
When the noise ratio is high (e.g., 80%), the performance of the classifier models drops significantly. In this case, PDLC shows strong superiority over the compared methods. For asymmetric noise cases, PDLC also achieves better test accuracies in most cases. Since the forward correction loss used in both GLC and PDLC is well-designed for asymmetric noise, both methods can achieve high test accuracies. In such cases, the purified data used in PDLC plays only a limited role in performance improvement, which explains why PDLC does not rank first in some cases.

**Results on Clothing1M** To verify the effectiveness of the proposed method on real-world data, experiments are conducted on Clothing1M, a dataset with real-world label noise. We compare PDLC with several methods, including Cross Entropy, Forward [Patrini et al., 2017], LCCN [Yao et al., 2019], PENCIL [Yi and Wu, 2019], MW-Net [Shu et al., 2019], and GLC [Hendrycks et al., 2018]. The results are summarized in Table 2. Rows 1-4 and row 6 are quoted from [Shu et al., 2019], and row 5 is quoted from [Yi and Wu, 2019]. Rows 7-8 are obtained by our own implementation. It can be observed that PDLC achieves the best performance among all methods.

| # | method | accuracy | # | method | accuracy |
|---|------------|----------|---|------------|----------|
| 1 | Cross Entropy | 68.96 | 5 | PENCIL | 73.49 |
| 2 | Forward | 69.84 | 6 | MW-Net | 73.72 |
| 3 | LCCN | 73.03 | 7 | GLC | 73.53 |
| 4 | MLNT | 73.47 | 8 | PDLC | 74.15 |

Table 2: Test accuracy (%) on Clothing1M.

**5 Conclusion**

In this paper, a novel method, LDCE, is proposed to estimate label confidence, a metric designed to measure the reliability of labels. LDCE estimates the label confidence by generating label distributions. The samples with high confidence scores are then selected as purified data. To verify the effectiveness of LDCE, we design a learning method, PDLC, that leverages the purified data.
The experiments conducted on both synthetic and real-world datasets substantiate the superiority of the learning method. This paper has shown that estimating the label confidence from the corrupted data is a feasible strategy in the noisy label problem. In the future, we will explore more effective approaches for estimating and utilizing the label confidence. **Acknowledgments** This research was supported by the National Key Research & Development Plan of China (No. 2018AAA0100104), the National Science Foundation of China (61622023), the Collaborative Innovation Center of Novel Software Technology and Industrialization, the Collaborative Innovation Center of Wireless Communications Technology, and the National Natural Science Foundation of China (61702095). References [Chen et al., 2019] Pengfei Chen, Ben Ben Liao, Guangyong Chen, and Shengyu Zhang. Understanding and utilizing deep neural networks trained with noisy labels. In *International Conference on Machine Learning*, pages 1062–1070, 2019. [Divvala et al., 2014] Santosh K Divvala, Ali Farhadi, and Carlos Guestrin. Learning everything about anything: Webly-supervised visual concept learning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 3270–3277, 2014. [Geng, 2016] Xin Geng. Label distribution learning. *IEEE Transactions on Knowledge and Data Engineering*, 28(7):1734–1748, 2016. [Guo et al., 2018] Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. Curriculumnet: Weakly supervised learning from large-scale web images. In *Proceedings of the European Conference on Computer Vision*, pages 135–150, 2018. [Han et al., 2018] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In *Advances in neural information processing systems*, pages 8527–8537, 2018. 
[He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 770–778, 2016. [Hendrycks et al., 2018] Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In *Advances in neural information processing systems*, pages 10456–10465, 2018. [Jiang et al., 2018] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In *International Conference on Machine Learning*, pages 2309–2318, 2018. [Krizhevsky et al., 2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. [Lee et al., 2018] Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. Cleannet: Transfer learning for scalable image classifier training with label noise. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 5447–5456, 2018. [Li et al., 2017] Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. Learning from noisy labels with distillation. In *Proceedings of the IEEE International Conference on Computer Vision*, pages 1910–1918, 2017. [Nettleton et al., 2010] David F Nettleton, Albert Orriols-Puig, and Albert Fornells. A study of the effect of different types of noise on the precision of supervised learning techniques. *Artificial intelligence review*, 33(4):275–306, 2010. [Patrini et al., 2017] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 1944–1952, 2017. 
[Ren et al., 2018] Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In *International Conference on Machine Learning*, pages 4331–4340, 2018. [Ruder, 2017] Sebastian Ruder. An overview of multi-task learning in deep neural networks. *arXiv preprint arXiv:1706.05098*, 2017. [Shu et al., 2019] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In *Advances in Neural Information Processing Systems*, pages 1917–1928, 2019. [Tanaka et al., 2018] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 5552–5560, 2018. [Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pages 5998–6008, 2017. [Wang and Yao, 2019] Yaqing Wang and Quanming Yao. Few-shot learning: A survey. *arXiv preprint arXiv:1904.05046*, 2019. [Xiao et al., 2015] Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 2691–2699, 2015. [Xu et al., 2018] Ning Xu, An Tao, and Xin Geng. Label enhancement for label distribution learning. In *Proceedings of the International Joint Conference on Artificial Intelligence*, pages 2926–2932, Stockholm, Sweden, 2018. [Yao et al., 2019] Jiangchao Yao, Hao Wu, Ya Zhang, Ivor W Tsang, and Jun Sun. Safeguarded dynamic label regression for noisy supervision. In *Proceedings of the AAAI Conference on Artificial Intelligence*, 2019. [Yi and Wu, 2019] Kun Yi and Jianxin Wu.
Probabilistic end-to-end noise correction for learning with noisy labels. *arXiv preprint arXiv:1903.07788*, 2019. [Zagoruyko and Komodakis, 2016] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016.
Bag of Hierarchical Co-occurrence Features for Image Classification

Takumi Kobayashi, Information Technology Research Institute, AIST, 1-1-1 Umezono, Tsukuba, Japan. Email: firstname.lastname@example.org. Nobuyuki Otsu, Fellow, AIST, 1-1-1 Umezono, Tsukuba, Japan. Email: email@example.com

Abstract—We propose a bag-of-hierarchical-co-occurrence-features method incorporating hierarchical structures for image classification. Local co-occurrences of visual words effectively characterize the spatial alignment of objects’ components. The visual words are hierarchically constructed in the feature space, which helps us to extract higher-level words and to avoid quantization error when assigning words to descriptors. Two types of descriptors are employed hierarchically: narrow (local) descriptors such as SIFT [1], and broad descriptors based on co-occurrence features. The proposed method thus captures the co-occurrences of both small and large components. We conduct an image classification experiment on the Caltech 101 dataset and show the favorable performance of the proposed method.

Keywords—image classification, bag-of-features, co-occurrence, hierarchical visual words

I. INTRODUCTION

Image classification has become an important and attractive research area since the number of photo images stored on PCs and across the Internet has been increasing significantly. Over the last decade, the bag-of-features approach [2], [3], which is derived from text mining methods, has shown impressive levels of performance on image classification tasks. In the standard bag-of-features method, an image is represented by an ensemble (bag) of local feature vectors (descriptors) which are quantized into clusters (visual words), and a histogram of visual words is then constructed by counting the occurrences of the words in the image.
The bag-of-features method possesses shift invariance since it does not take into account the spatial locations from which the descriptors are extracted. Recently, spatial information has been explicitly incorporated into this framework, for example, by utilizing spatially partitioned images via kernel methods (spatial pyramid match kernel) [3], in order to improve performance, especially for the Caltech 101 dataset [4] in which target objects are roughly aligned in terms of their positions. Most of those methods, however, lead to a loss of the shift invariance property which is desirable for general object recognition. With respect to spatial characteristics, objects impose constraints on the (relative) spatial positions of their components, e.g., human body parts. In addition, those components should not be regarded equally, as they form hierarchical structures; e.g., a hand belongs to an arm. These relations provide important cues for recognition, and we focus on these characteristics. In this paper, we propose a method to extract local co-occurrences of visual words, utilizing hierarchical structures in images. While global co-occurrences characterize situations, such as context, in the images, local co-occurrences adequately extract relative spatial alignment of components, which enables shift invariant recognition in the proposed method. In addition, we exploit hierarchical structures for both visual words and descriptors. By utilizing the hierarchical structures of the visual words, we can reduce quantization errors in assigning words to descriptors; also we can extract higher-level words, like concepts. Two types of descriptors are constructed in a hierarchical manner: narrow (local) descriptors like SIFT [1] and broad descriptors based on the co-occurrence features of the narrow descriptors. These descriptors are useful for extracting co-occurrences of various kinds of components which form hierarchical structures in the target objects. II. 
BAG OF CO-OCCURRENCE FEATURES The proposed method is based on the co-occurrence histogram of visual words, rather than the simple occurrences used in the standard bag-of-features method. By restricting co-occurrences to local neighborhoods, the resultant histogram features are shift invariant with respect to object positions. In the proposed method, the co-occurrences of visual words are efficiently calculated using local auto-correlation functions as in GLAC [5], which extracts image features based on gradient co-occurrences. In this paper, we assume that descriptors are extracted on (10-pixel-spaced) grids in the image, since such dense descriptors result in better performance in the bag-of-features framework [6]. The proposed method also allows sparse interest points, such as those detected by DoG [1] and Harris-Laplace [7], by slightly modifying the displacement vector $a$ defined in the following. We assign to each descriptor not a symbolic label but a word vector $w_i$ representing a visual word. Suppose we have $N$ visual words (clusters); the word vector then has dimension $N$ ($w \in R^N$), such that there are only a few non-zero elements, associated with the visual words that the descriptor belongs to. In this paper, since we employ soft assignment [8] and hierarchical words as described in the next section, the word vector has a few (more than one) non-zero elements. The first order auto-correlation function for the visual words is then defined as follows: $$H(a) = \sum_{r \in D} w(r) \otimes w(r + a), \quad (1)$$ where $\otimes$ denotes the outer product of vectors, $r$ is a position vector $r = (x, y)$ in the whole image grid $D = X \times Y$, and $a$ is a displacement vector, $a \in \{(\Delta r, 0), (\Delta r, \Delta r), (0, \Delta r), (-\Delta r, \Delta r)\}$, in which shift-equivalent patterns are excluded. The parameter $\Delta r$ indicates the interval for local co-occurrences.
Note that $r$, $a$ and $\Delta r$ are defined in the 2D grid coordinates, not pixel coordinates. The co-occurrence histogram $H(a)$ counts all visual word pairs that co-occur along the displacement vector $a$, as shown in Fig. 1, and it is actually unfolded to a vector $h(a)$. The image feature $h$ is finally constructed by concatenating the co-occurrence histogram vectors $h(a)$ over all displacements. This is an extended formulation of the standard bag-of-features method which can be represented by the zeroth order auto-correlation function: $$H^{(0)} = \sum_{r \in D} w(r). \quad (2)$$ These zeroth order features are not employed in this study. The method in [9] also utilizes similar co-occurrences of visual words. However, displacement vectors are not introduced and soft representations for words are not dealt with. In this paper, we define the local auto-correlation functions in Eq.(1) and provide a more general formulation. It should be noted that the computational cost of Eq.(1) is low since the word vector $w$ is sparse, and it is of the same order $O(n)$ as that of the bag-of-features, where $n$ is the number of pixels. III. HIERARCHICAL STRUCTURES We employ hierarchical representations in the bag-of-co-occurrence-features method described in the previous section. We focus on hierarchical structures for both visual words and descriptors. A. Hierarchical visual words In general, visual words are constructed by clustering descriptor vectors in the feature space. This procedure corresponds to quantization of the feature space and a quantization error is inevitable, depending on the assumed number of visual words (clusters). Therefore, we exploit the hierarchical structure of the feature space as in [10] (Fig. 2). For extracting hierarchical structures, we repeatedly apply $k$-means clustering, decreasing the number $k$ and obtaining the clusters (visual words) at different levels. 
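For concreteness, the co-occurrence histogram of Eq.(1) can be sketched as below. This is a direct, unoptimized translation (all names are ours); in practice, the sparsity of $w$ is exploited as noted above:

```python
import numpy as np

def cooccurrence_histogram(W, dr=3):
    """Eq. (1): first-order auto-correlation of word vectors on the grid.

    W  : word vectors on the grid, shape (X, Y, N) — (sparse) soft assignments.
    dr : interval for local co-occurrences (Delta r, in grid coordinates).

    Returns the concatenation of the unfolded histograms h(a) for the four
    displacements a in {(dr,0), (dr,dr), (0,dr), (-dr,dr)} (shift-equivalent
    patterns excluded).
    """
    X, Y, N = W.shape
    feats = []
    for (dx, dy) in [(dr, 0), (dr, dr), (0, dr), (-dr, dr)]:
        H = np.zeros((N, N))
        for x in range(X):
            for y in range(Y):
                x2, y2 = x + dx, y + dy
                if 0 <= x2 < X and 0 <= y2 < Y:
                    H += np.outer(W[x, y], W[x2, y2])  # w(r) ⊗ w(r + a)
        feats.append(H.ravel())  # unfold H(a) into the vector h(a)
    return np.concatenate(feats)
```

With one-hot word vectors this reduces to counting co-occurring word pairs along each displacement, exactly as described above.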
A vocabulary tree [10] could also be applied, but it causes quantization errors by definitively assigning words to the descriptor at each level in a sequential manner. At each level, visual words are assigned to the descriptor by soft assignment, and the word vectors at all levels are then concatenated into the whole word vector $w$, the dimensionality of which is $N = \sum_i N_i$ ($N_i$ is the number of words at the $i$-th level), as shown in Fig. 2. Since the concatenated word vector $w$ contains visual words at every level, the co-occurrences between visual words at different levels can be naturally extracted by the auto-correlations in Eq.(1). This hierarchical representation with soft assignment reduces quantization error and robustly extracts higher-level words, like concepts. Soft assignment is achieved by determining the weights for the $m$ nearest visual words as follows: $$\omega_i = \frac{d_1/d_i}{\sum_{j=1}^m d_1/d_j} = \frac{\prod_{j \neq i} d_j}{\sum_{k=1}^m \prod_{j \neq k} d_j}, \quad (3)$$ where $d_i$ is the distance to the $i$-th nearest cluster center (word). This is derived from the distance ratio relative to the nearest ($i = 1$) distance. B. Hierarchical descriptors Objects have different localities for their components. For example, the human body has hierarchical structures of fingers, hands, and arms, which are ordered according to their locality. We describe such components by different types of features according to their locality. For narrow (local) descriptors, we employ features such as SIFT [1] and GLAC [5]; for broad descriptors, we directly apply co-occurrence features (Sec.II) to the narrow descriptors, as follows (Fig. 3). The proposed co-occurrence features can be applied to broad descriptors by using visual words of the narrow descriptors and by replacing the whole image region $D$ with a local region $D_\Delta$ ($7 \times 7$ grid points) in Eq.(1).
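The soft-assignment weighting of Eq.(3) above amounts to normalizing the inverse distances to the $m$ nearest words. A minimal sketch (function name ours):

```python
import numpy as np

def soft_assignment(d):
    """Eq. (3): weights for the m nearest visual words from their distances.

    d : distances to the m nearest cluster centers, d[0] being the nearest.
    Returns w with w_i = (d_1 / d_i) / sum_j (d_1 / d_j), so that sum(w) == 1.
    """
    d = np.asarray(d, dtype=float)
    ratios = d[0] / d          # distance ratio relative to the nearest word
    return ratios / ratios.sum()
```

The nearest word always receives the largest weight, and equal distances yield equal weights, matching the distance-ratio interpretation given above.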
The feature is based on combinations (co-occurrences) of the narrow descriptors, and effectively characterizes bigger components, such as an arm composed of an elbow and a hand in the human body. For these two types of descriptor, hierarchical visual words are constructed (Sec.III-A) and then the co-occurrence features for the visual words are extracted (Sec.II), respectively, as shown in Fig. 3. The extracted features are finally concatenated to form the complete image feature vector with a weighting parameter $\alpha$ $(0 \leq \alpha \leq 1)$: $$h_c = \left[ \sqrt{\alpha} \ h^T_{narrow}, \sqrt{1 - \alpha} \ h^T_{broad} \right]^T,$$ where $h_{narrow}, h_{broad}$ are the co-occurrence feature vectors based on the narrow and broad descriptors, respectively. In this study, we assume the feature vectors are normalized ($||h_{narrow}|| = ||h_{broad}|| = 1$); the above weighting is to keep constant the norm of the total feature vector ($||h_c|| = 1$). IV. EXPERIMENT We applied the proposed method to image classification using the Caltech 101 dataset [4]. The dataset contains images in 101 categories with large intra-class variability, as shown in Fig. 4. A. Experimental setting For narrow (local) descriptors, we employed SIFT [1] and GLAC [5] features, and their performances are compared. We used three hierarchical levels for visual words: 250 words for level 1, 50 words for level 2, and 10 words for level 3. The number of words for soft assignment was 3 ($m = 3$ in Eq.(3)). The spatial interval $\Delta r$ was set to 3 grid points for the co-occurrence features of both narrow and broad descriptors. The weight $\alpha$ in Eq.(4) for those two types of co-occurrence features was determined based on cross validations. For classification, we applied multi-class linear SVM [11]. We followed the standard evaluation protocol: The dataset was split randomly into 15 training images per category and 15 images for testing. 
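The norm-preserving weighted concatenation of Eq.(4) used in this setup can be sketched as follows (function name ours):

```python
import numpy as np

def concat_features(h_narrow, h_broad, alpha=0.5):
    """Eq. (4): h_c = [sqrt(alpha) * h_narrow ; sqrt(1 - alpha) * h_broad].

    Both inputs are normalized to unit norm first, so ||h_c|| == 1 holds
    by construction for any alpha in [0, 1].
    """
    h_narrow = h_narrow / np.linalg.norm(h_narrow)
    h_broad = h_broad / np.linalg.norm(h_broad)
    return np.concatenate([np.sqrt(alpha) * h_narrow,
                           np.sqrt(1.0 - alpha) * h_broad])
```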
We calculated the classification rate for each category and then averaged the rates across all 101 categories. The trial was repeated five times and the average performance is reported.

B. Experimental result

First, we show the effectiveness of the proposed method by varying its settings. The baseline results for the standard bag-of-features method are 35.8% for SIFT descriptors and 41.8% for GLAC descriptors (Fig. 5(a)). In this case, the number of visual words was 250, which corresponds to level 1 of the hierarchical words. The bag-of-co-occurrence-features method (Sec.II) with 250 visual words only at level 1 produces 49.17% for SIFT and 52.2% for GLAC (Fig. 5(b)). By employing local co-occurrences of visual words, performance was improved by more than 10% for both types of descriptor. This result shows that local co-occurrences of visual words are effective for classification, capturing the spatial alignment of the words. We then additionally applied hierarchical visual words (Sec.III-A). Performance was further improved by about 4%: 53.4% for SIFT and 56.6% for GLAC (Fig. 5(c)). Finally, the full proposed method, bag-of-co-occurrence-features with hierarchical words and hierarchical descriptors, was applied. The results were 57.2% for SIFT and 59.8% for GLAC (Fig. 5(d)). The proposed method significantly improved performance for both types of descriptor, by about 20% compared to the baseline results.

Figure 5. Performance results. Details are in the text.

Next, we compare the result of the proposed method using GLAC descriptors to the other methods, as shown in Table I. For fair comparison, we consider only methods based on a single type of feature, not those using multiple kernel learning [12].

Table I PERFORMANCE COMPARISON.

| Method | [14] | [3] | [13] | [15] | [16] | Ours |
|--------|------|-----|------|------|------|------|
| Acc.(%)| 49.5 | 56.4| 59.1 | 52.0 | 51 | 59.8 |
State-of-the-art results were obtained by using the exhaustive classifier of [13] and the spatial pyramid match kernel [3], which spatially partitions images and is thus position-specific. The proposed method, which is shift invariant and uses simple linear classification with a low computational cost, produces better results than the previous methods.

V. CONCLUSION

We have proposed a bag-of-hierarchical-co-occurrence-features method incorporating hierarchical structures for image classification. The spatial alignments of objects’ components are effectively characterized by the local auto-correlation function of visual words for local co-occurrences, and the method is shift invariant. Both narrow and broad descriptors are extracted, and the visual words are hierarchically assigned to the descriptors to capture various levels of characteristics. In the image classification experiment on the Caltech 101 dataset, the proposed method exhibited favorable performance compared to state-of-the-art methods.

REFERENCES [1] D. Lowe, “Distinctive image features from scale-invariant keypoints,” *International Journal of Computer Vision*, vol. 60, pp. 91–110, 2004. [2] G. Csurka, C. Bray, C. Dance, and L. Fan, “Visual categorization with bags of keypoints,” in *ECCV Workshop on Statistical Learning in Computer Vision*, 2004. [3] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” in *CVPR*, 2006. [4] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 28, no. 4, pp. 594–611, 2006. [5] T. Kobayashi and N. Otsu, “Image feature extraction using gradient local auto-correlations,” in *ECCV*, 2008. [6] T. Tuytelaars and C. Schmid, “Vector quantizing feature space with a regular lattice,” in *ICCV*, 2007. [7] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. V.
Gool, “A comparison of affine region detectors,” *International Journal of Computer Vision*, vol. 65, pp. 43–72, 2005. [8] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Lost in quantization: Improving particular object retrieval in large scale image databases,” in *CVPR*, 2008. [9] H. Ling and S. Soatto, “Proximity distribution kernels for geometric context in category recognition,” in *ICCV*, 2007. [10] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” in *CVPR*, 2006, pp. 2161–2168. [11] K. Crammer and Y. Singer, “On the algorithmic implementation of multiclass kernel-based vector machines,” *Journal of Machine Learning Research*, vol. 2, pp. 265–292, 2001. [12] M. Varma and D. Ray, “Learning the discriminative power-invariance trade-off,” in *ICCV*, 2007. [13] H. Zhang, A. Berg, M. Maire, and J. Malik, “Svm-knn: Discriminative nearest neighbor classification for visual category recognition,” in *CVPR*, 2006. [14] K. Grauman and T. Darrell, “Discriminative classification with sets of image features,” in *ICCV*, 2005. [15] R. Zhang, C. Wang, and B. Xiao, “A strategy of classification via sparse dictionary learned by non-negative k-svd,” in *ICCV*, 2009. [16] J. Mutch and D. G. Lowe, “Multiclass object recognition with sparse, localized features,” in *CVPR*, 2006.
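As background for the bag-of-features pipeline summarized in the conclusion above, the basic image signature — hard assignment of local descriptors to visual words followed by an occurrence histogram — can be sketched as a toy example. The function name and the tiny two-word codebook are illustrative only and are not taken from the paper:

```python
import numpy as np

def bag_of_words_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    accumulate an L1-normalized occurrence histogram (the image signature)."""
    # squared Euclidean distances between every descriptor and every word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # hard vector quantization
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# toy example: 2 visual words in a 2-D descriptor space
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descriptors = np.array([[0.1, -0.2], [9.8, 10.1], [0.3, 0.0], [10.2, 9.9]])
h = bag_of_words_histogram(descriptors, codebook)  # -> [0.5, 0.5]
```

Real systems learn the codebook by clustering (e.g. k-means) over many training descriptors; the hierarchical assignment and local co-occurrence statistics of the proposed method build on top of this basic quantization step.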
Characteristics of Selected Major Sudden Stratospheric Warming Events and their Links to European Cold Waves in Extended Range Ensemble Forecasts

Master's thesis in meteorology by Selina Kiefer, July 2020
Referee: Prof. Dr. Joaquim Pinto
Co-referee: Prof. Dr. Peter Braesicke

This document is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Licence.

Sudden stratospheric warming (SSW) events, which lead to a reversal of the stratospheric polar night jet in winter, are discussed in the literature as a potential source of increased predictability of European cold waves on the subseasonal to seasonal time-scale. One displacement-type (D-type) and three split-type (S-type) SSW events of the past 20 years are therefore investigated using the ERA-Interim reanalysis data set. The focus of this analysis lies on the characteristics of the SSW events and their potential links to European cold waves. The S-type events, with onset dates of 3 February 2001, 24 January 2009 and 25 January 2010, show a similar evolution in the middle stratosphere. Maximum westerly winds and minimum polar-cap averaged temperatures precede the rapid deceleration of the stratospheric polar night jet and the accompanying temperature increase. These events feature generally stronger and longer-lasting easterly winds in the middle stratosphere than the D-type event. All three S-type events are followed by an equatorward displacement of the tropospheric mid-latitude jet stream over the North Atlantic Ocean and a shift of the North Atlantic Oscillation (NAO) to its negative phase. Both indicate a downward influence of the SSW events on surface weather. Nevertheless, these events cannot be linked directly to European cold waves, at least not with the methods used in this thesis. Only the D-type SSW event, with its onset date on 23 November 2000, is suggested to be linked directly to the European cold wave occurring between 21 and 25 December 2000.
Therefore, this event is analyzed with the European Centre for Medium-Range Weather Forecasts (ECMWF) S2S reforecasts in addition to the ERA-Interim reanalysis. An improvement in the European 2 metre temperature anomaly distribution is found when the geopotential height anomalies in the lower stratosphere, caused by the SSW event, are represented correctly in the S2S reforecast initialized on 25 November 2000. Since the anomalies show non-negligible differences in exact location and magnitude in comparison to the ERA-Interim reanalysis, the prediction of the European and Scandinavian mean temperature is not improved. The same applies to the NAO index. It is important to note that the investigated reforecast is the only reforecast comprising both the European cold wave associated with the SSW event and an initialization with easterly winds in the middle stratosphere. To make a quantitative statement about a possible increase in the predictability of European cold waves after SSW events, further case studies need to be investigated. The large multi-model ensemble forecasts of the S2S database are a good basis for this. This thesis clearly demonstrates the high case-to-case variability of the characteristics and downward impacts of SSW events. Therefore, exploratory case studies are necessary to understand the phenomenon of coupling between SSW events and European cold waves.

Zusammenfassung (Summary)

Sudden stratospheric warmings (SSWs), which lead to a reversal of the stratospheric polar night jet in winter, are discussed in the literature as a potential source of increased predictability of European cold waves on subseasonal to seasonal time-scales. One SSW that displaces the polar vortex towards the equator (D-type) and three SSWs that split the polar vortex (S-type) are therefore investigated with the ERA-Interim reanalysis data set.
The focus of the analysis lies on the characteristics of the SSW events and possible links to European cold waves. The S-type events, whose onsets lie on 3 February 2001, 24 January 2009 and 25 January 2010, show a similar evolution in the middle stratosphere. The strongest westerly winds and lowest polar-cap averaged temperatures are found before the rapid deceleration of the stratospheric polar night jet and the rise in temperature. These SSW events show more extreme and longer-lasting easterly winds in the middle stratosphere than the D-type event. All three S-type events are followed by an equatorward shift of the tropospheric mid-latitude North Atlantic jet stream and a negative phase of the North Atlantic Oscillation (NAO). Both suggest an influence of the SSW events on surface weather. Nevertheless, these events cannot be linked directly to European cold waves, at least not with the methods applied in this thesis. Only the D-type SSW beginning on 23 November 2000 is associated with a European cold wave between 21 and 25 December 2000. This event is therefore analyzed not only with the ERA-Interim reanalysis data set but also with the S2S reforecast data set of the European Centre for Medium-Range Weather Forecasts (ECMWF). An improved forecast of the distribution of European 2-metre temperature anomalies is observed when the geopotential height anomalies in the lower stratosphere caused by the SSW are represented correctly in the S2S reforecast initialized on 25 November 2000. Since the anomalies show non-negligible differences in exact position and strength compared to the ERA-Interim reanalysis, no improvement in the forecast of the mean European and Scandinavian temperature is found.
The same applies to the forecast of the NAO index. The investigated reforecast is the only one that is initialized with easterly winds in the middle stratosphere and at the same time contains the European cold wave associated with the SSW. To make a quantitative statement about a possible increase in the predictability of European cold waves after the occurrence of SSW events, further forecasts must be analyzed. The large multi-model ensemble forecasts of the S2S data set are a good basis for this. This thesis shows the high variability of the characteristics and downward impacts of SSW events. Detailed case studies are therefore necessary to understand the coupling between these events and European cold waves.

## Contents

1 Introduction
2 Theoretical Concepts
  2.1 Atmospheric Circulation
    2.1.1 The Navier-Stokes Equation for Atmospheric Motions
    2.1.2 The Transformed Eulerian Mean Equation for the Zonal-Mean of Atmospheric Circulations
  2.2 Dynamics of Sudden Stratospheric Warming Events
    2.2.1 Theoretical Description by the Model of Matsuno (1971)
    2.2.2 Development from Tropospheric Wave Forcing
    2.2.3 Downward Propagation of Stratospheric Anomalies
    2.2.4 Precursors for Tropospheric Wave Forcing
    2.2.5 Resonant Excitation of the Polar Vortex
  2.3 Characteristics of Sudden Stratospheric Warming Events
  2.4 Downward Impact of Sudden Stratospheric Warming Events
    2.4.1 Blocking in the Middle Troposphere
    2.4.2 The Mid-Latitude North Atlantic Jet Stream in the Lower Troposphere
    2.4.3 The North Atlantic Oscillation at the Surface
    2.4.4 European 2 Metre Temperatures
3 Data and Methods
  3.1 ERA-Interim Reanalysis Data Set
  3.2 Subseasonal To Seasonal Reforecast Data Set
  3.3 Calculation of Climatologies and Standard Deviations
    3.3.1 Comparison of Different ERA-Interim Climatologies
    3.3.2 Calculation of S2S Climatologies
  3.4 Downward Propagation of Standardized Geopotential Height Anomalies
  3.5 Sudden Stratospheric Warming Indices
    3.5.1 Comparison of Sudden Stratospheric Warming Indices for the Winters of 1999/2000 to 2018/2019
  3.6 Blocking Index
  3.7 Position of the Mid-Latitude Jet Stream
  3.8 North Atlantic Oscillation Indices
    3.8.1 Comparison of North Atlantic Oscillation Indices
  3.9 Definition of Cold Waves
  3.10 Selection of Case Studies
    3.10.1 Selection of S2S Reforecasts and Representative Members
4 Winter 2008/2009
  4.1 Troposphere-Stratosphere Coupling
  4.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere
  4.3 Blocking in the Middle Troposphere
  4.4 Position of the Mid-Latitude Jet Stream in the Lower Troposphere
  4.5 North Atlantic Oscillation Index at the Surface
  4.6 European Cold Waves at the Surface
  4.7 Concluding Remarks
5 Winter 2009/2010
  5.1 Troposphere-Stratosphere Coupling
  5.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere
  5.3 Blocking in the Middle Troposphere
  5.4 Position of the Mid-Latitude Jet Stream in the Lower Troposphere
  5.5 North Atlantic Oscillation Index at the Surface
  5.6 European Cold Waves at the Surface
  5.7 Concluding Remarks
6 Winter 2000/2001
  6.1 Troposphere-Stratosphere Coupling
  6.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere
  6.3 Predicted Sudden Stratospheric Warming Signals in the Middle Stratosphere
  6.4 Predicted Shape of the Polar Vortex in the Middle Stratosphere
  6.5 Predicted Sudden Stratospheric Warming Signals in the Lower Stratosphere
  6.6 Blocking in the Middle Troposphere
  6.7 Predicted Blocking in the Middle Troposphere
  6.8 Position of the Mid-Latitude Jet Stream in the Lower Troposphere
  6.9 NAO Index at the Surface
  6.10 Predicted NAO Index at the Surface
  6.11 European Cold Waves at the Surface
  6.12 Predicted European Cold Waves at the Surface
  6.13 Concluding Remarks
7 Comparison of Case Studies and Discussion
  7.1 Characteristics in the Middle Stratosphere
  7.2 Influence on European Cold Waves
8 Summary and Outlook
References

## 1 Introduction

In winter, the polar stratosphere is characterized by strong westerly winds encircling the pole (Butler et al., 2015). These westerly winds are called the "stratospheric polar night jet" or "stratospheric polar vortex". Variability in this stratospheric polar vortex is known to be able to affect tropospheric weather in mid- and high latitudes (Tripathi et al., 2015). In spring, the stratospheric polar night jet reverses from westerly to easterly winds (Butler et al., 2015) due to increased radiative heating by the rising sun. But temporary reversals of the stratospheric polar night jet are also possible in winter. These so-called major "sudden stratospheric warming" (SSW) events were first observed in 1952 by Scherhag (Butler et al., 2015). In 1971, Matsuno developed a simple model to demonstrate the development of SSW events from upward-propagating planetary-scale waves which penetrate into the stratosphere. Wave breaking in the upper stratosphere-lower mesosphere region, which is characterized by prevailing easterly winds, leads to an increase in temperature and the formation of a new layer with easterly winds where the following waves break. This so-called "critical layer interaction" is typical for the top-down development of an SSW event. In the middle stratosphere, a temperature increase of between 30 K and 40 K within a few days is usually observed in combination with SSW events (Butler et al., 2015). The deposition of easterly momentum by breaking waves of tropospheric origin leads to a deceleration of the westerly stratospheric polar night jet (Kidston et al., 2015; Matsuno, 1971).
The weakened polar vortex is then either displaced off the pole or split into two parts (Charlton and Polvani, 2007). In some cases, the large temperature and wind anomalies caused by an SSW event have an influence on surface weather (Charlton-Perez et al., 2018). In particular, the region over the North Atlantic Ocean is sensitive to changes in the stratospheric circulation, but the North Pacific Ocean can also be affected, according to Charlton-Perez et al. (2018) and Afargan-Gerstman and Domeisen (2020). Concerning the region of the North Atlantic Ocean, changes in the stratospheric circulation induced by SSW events result in the negative phase of the North Atlantic Oscillation (NAO) at the surface. Although different studies find different fractions of SSW events which have an influence on the NAO, they agree that the likelihood of a negative NAO phase is increased after SSW events in comparison to climatological or strong stratospheric polar vortex conditions. The negative phase of the NAO is one of the primary drivers of European cold waves in winter (Butler et al., 2015). According to King et al. (2019), Scandinavia in particular experiences more cold extremes in the 2 months after an SSW event than under climatological conditions. A correct prediction of European cold waves on the subseasonal to seasonal time-scale is an important factor for both society and the economy (Cattiaux et al., 2010). According to Cattiaux et al. (2010), cold waves in the currently warming climate strongly affect social protection, the energy supply sector, and both public and industrial transport. Since SSW events affect European surface weather up to 2 months after their occurrence, they are discussed as a potential source of increased predictability of European cold waves on subseasonal to seasonal time-scales (Baldwin et al., 2003; Garfinkel et al., 2017; Vitart et al., 2017).
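The defining signal of a major SSW — the reversal of the zonal-mean zonal wind at 10 hPa and 60°N from westerly to easterly, as used for the central-date definition of Charlton and Polvani (2007) — can be detected with a minimal sketch. The function name and the synthetic wind series are illustrative, not the thesis's implementation:

```python
import numpy as np

def ssw_onset_indices(u_10hpa_60n):
    """Return indices of days on which the zonal-mean zonal wind at
    10 hPa, 60 N first turns easterly (u < 0) after westerly conditions,
    a common central-date criterion for major SSW events."""
    u = np.asarray(u_10hpa_60n, dtype=float)
    onsets = []
    for i in range(1, len(u)):
        if u[i] < 0.0 and u[i - 1] >= 0.0:
            onsets.append(i)
    return onsets

# synthetic daily winds [m/s]: a westerly jet collapsing into easterlies
u = [30.0, 25.0, 10.0, -5.0, -8.0, 2.0, 15.0]
onsets = ssw_onset_indices(u)  # -> [3]
```

A full index (chapter 3 compares several) would additionally restrict the search to winter months and require the vortex to recover before final warming; this sketch only captures the sign-reversal core of the criterion.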
Therefore, the following thesis investigates four major SSW events of the past 20 years concerning their characteristics and potential links to European cold waves. The theoretical concepts needed to characterize SSW events and associate them with European cold waves are described in chapter 2. Chapter 3 comprises the data and methods used in this thesis. Chapters 4 to 6 contain the detailed analysis of the chosen SSW events, sorted by the strength and duration of easterly winds in the middle stratosphere; the event featuring the strongest and longest-lasting easterly winds is described first. In chapter 7, the investigated SSW events are compared to each other and discussed in the context of the literature. Chapter 8 sums up the most important results of this thesis and gives an outlook.

## 2 Theoretical Concepts

### 2.1 Atmospheric Circulation

#### 2.1.1 The Navier-Stokes Equation for Atmospheric Motions

The Navier-Stokes equation essentially expresses Newton's second law and comprises all relevant forces acting on an air parcel, a symbolic volume of air. The picture of air as parcels is used to simplify the formulation of force balances (Holton, 2010). The Navier-Stokes equation can be written as:
\[ F = F_c + F_p + F_{\text{fr}} + F_g, \] (2.1)
where \( F_c \) is the Coriolis force, \( F_p \) the pressure gradient force, \( F_{\text{fr}} \) the friction force and \( F_g \) gravity. The Coriolis force \( F_c \) describes the influence of the Earth's rotation on the air parcel (Holton, 2010). It depends on latitude and on the parcel's velocity:
\[ F_c = -f \cdot (\vec{k} \times \vec{v}), \quad f = 2\Omega \cdot \sin(\Phi). \] (2.2)
The Coriolis parameter \( f \) comprises the Earth's rotation rate \( \Omega = 7.3 \cdot 10^{-5}\,\text{s}^{-1} \) and the sine of the latitude \( \Phi \). The velocity dependence enters through the cross product, in which the vertical unit vector \( \vec{k} = (0, 0, 1)^T \) is crossed with the 3-dimensional velocity \( \vec{v} \) of the air parcel.
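The Coriolis parameter from equation (2.2) is straightforward to evaluate; a minimal sketch using the rotation rate quoted in the text (the function name is illustrative):

```python
import math

OMEGA = 7.3e-5  # Earth's rotation rate [1/s], as given in the text

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

f_pole = coriolis_parameter(90.0)     # maximum at the pole: 1.46e-4 s^-1
f_equator = coriolis_parameter(0.0)   # vanishes at the equator: 0.0
f_60n = coriolis_parameter(60.0)      # typical mid/high-latitude value
```

Note that `f` changes sign in the Southern Hemisphere, reflecting the opposite sense of the deflection there.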
The pressure gradient force \( F_p \) describes the effect of horizontal pressure differences at the same height. Since the circulation in the troposphere always seeks an equilibrium, air is accelerated from higher towards lower pressure, i.e. in the direction of \( -\vec{\nabla} p \):
\[ F_p = -\frac{1}{\rho} \vec{\nabla} p. \] (2.3)
The friction force \( F_{\text{fr}} \) is usually neglected whenever suitable, or parametrized in models when needed. The gravitational force \( F_g \) is only relevant for vertical movements:
\[ F_g = -\vec{k} \cdot g. \] (2.4)
The gravitational constant \( g \) depends on latitude and height, but its dependency on latitude is usually neglected (https://apps.ecmwf.int/codes/grib/param-db/?id=129, last viewed 2 September 2019). The resulting force \( F \) can then be decomposed in its Eulerian form, featuring the local time derivative and an advection term:
\[ F = \frac{\partial \vec{v}}{\partial t} + \left( \vec{v} \cdot \vec{\nabla} \right) \vec{v}. \] (2.5)
The Navier-Stokes equation can now be written in the following form:
\[ \frac{\partial \vec{v}}{\partial t} + \left( \vec{v} \cdot \vec{\nabla} \right) \vec{v} = -f \cdot \left( \vec{k} \times \vec{v} \right) - \frac{1}{\rho} \vec{\nabla} p - \vec{k} \cdot g + \vec{F}_{\text{fr}}. \] (2.6)
In the undisturbed atmosphere, all forces balance each other.

### 2.1.2 The Transformed Eulerian Mean Equation for the Zonal-Mean of Atmospheric Circulations

The stratospheric circulation is only weakly turbulent due to the lack of surface contact and the lower density of air (Baldwin et al., 2003). Therefore, only the zonal component of the Navier-Stokes equation is relevant:
\[ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} + w \frac{\partial u}{\partial z} = f \cdot v - \frac{1}{\rho} \frac{\partial p}{\partial x} + F_{\text{fr}}. \] (2.7)
The three components of the air parcel's velocity are written as \((u, v, w)^T\), the three Cartesian dimensions as \((x, y, z)^T\). Since the zonal-mean zonal flow is the variable of interest when looking at the stratospheric polar night jet, Reynolds averaging is applied:
\[ \frac{\partial (\bar{u} + u')}{\partial t} + (\bar{u} + u') \frac{\partial (\bar{u} + u')}{\partial x} + (\bar{v} + v') \frac{\partial (\bar{u} + u')}{\partial y} + (\bar{w} + w') \frac{\partial (\bar{u} + u')}{\partial z} = f \cdot (\bar{v} + v') - \frac{1}{\rho} \frac{\partial (\bar{p} + p')}{\partial x} + \left( \bar{F}_{\text{fr}} + F'_{\text{fr}} \right), \] (2.8)
where \(\bar{x}\) indicates the zonal mean and \(x'\) the deviation from it. Terms containing deviations from the zonal mean are small and are therefore neglected:
\[ \frac{\partial \bar{u}}{\partial t} + \bar{u} \frac{\partial \bar{u}}{\partial x} + \bar{v} \frac{\partial \bar{u}}{\partial y} + \bar{w} \frac{\partial \bar{u}}{\partial z} = f \cdot \bar{v} - \frac{1}{\rho} \frac{\partial \bar{p}}{\partial x} + \bar{F}_{\text{fr}}. \] (2.9)
The zonal derivative of the zonal-mean flow is zero by definition:
\[ \frac{\partial \bar{u}}{\partial t} + \bar{v} \frac{\partial \bar{u}}{\partial y} + \bar{w} \frac{\partial \bar{u}}{\partial z} = f \cdot \bar{v} - \frac{1}{\rho} \frac{\partial \bar{p}}{\partial x} + \bar{F}_{\text{fr}}. \] (2.10)
Since the Earth is a sphere, it is easier to work in spherical coordinates instead of Cartesian coordinates. Transforming the formula into spherical coordinates leads to:
\[ \frac{\partial \bar{u}}{\partial t} = \bar{v} \left( f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right) - \bar{w} \frac{\partial \bar{u}}{\partial z} - \frac{1}{\rho\, a \cos(\Phi)} \frac{\partial \bar{p}}{\partial \theta} + \bar{F}_{\text{fr}}.
\] (2.11)
The equation now depends on the latitude \(\Phi\), the longitude \(\theta\) and the height \(z\); the Earth's radius \(a\) enters through the spherical geometry. The friction term is still named \(\bar{F}_{\text{fr}}\), as it will not be considered explicitly. To describe the behaviour of the stratospheric polar vortex, different terms of the equation are substituted appropriately. The pressure gradient term is replaced by the divergence of the Eliassen-Palm flux, since disturbances of the stratospheric zonal-mean zonal flow are typically forced by waves (Baldwin et al., 2003). The divergence of the Eliassen-Palm flux \(\vec{\nabla} \cdot \vec{F}\) describes such a wave forcing:
$$-\frac{1}{\rho\, a \cos(\Phi)} \frac{\partial \bar{p}}{\partial \theta} \equiv \frac{1}{\rho_0\, a \cos(\Phi)}\, \vec{\nabla} \cdot \vec{F},$$
with
$$\vec{F} = \rho_0\, a \cos(\Phi) \left( 0,\;\; \frac{\partial \bar{u}}{\partial z} \frac{\overline{v'\Theta'}}{\partial \bar{\Theta}/\partial z} - \overline{u'v'},\;\; \left[ f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right] \frac{\overline{v'\Theta'}}{\partial \bar{\Theta}/\partial z} - \overline{u'w'} \right)^T.$$
Using the Eliassen-Palm flux instead of the pressure gradient force makes the formula depend on the potential temperature $\Theta$ instead of the pressure $p$. The potential temperature is calculated as a function of temperature and pressure:
$$\Theta = T \left( \frac{p_0}{p} \right)^{R_d / c_p}.$$ (2.12)
Here $p_0$ is a reference pressure, $R_d$ is the gas constant of dry air and $c_p$ its specific heat capacity at constant pressure.
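Equation (2.12) can be evaluated directly; a minimal sketch assuming standard values for the dry-air constants (R_d = 287 J kg⁻¹ K⁻¹, c_p = 1004 J kg⁻¹ K⁻¹, not quoted in the text) and a 1000 hPa reference pressure:

```python
R_D = 287.0   # gas constant of dry air [J kg^-1 K^-1] (assumed standard value)
C_P = 1004.0  # specific heat of dry air at constant pressure [J kg^-1 K^-1]

def potential_temperature(T, p, p_ref=1000.0):
    """Potential temperature Theta = T * (p_ref / p)**(R_d / c_p),
    with pressures in hPa and the reference level at 1000 hPa."""
    return T * (p_ref / p) ** (R_D / C_P)

# air at 10 hPa (middle stratosphere) and 220 K has a very high Theta,
# reflecting the strong static stability of the stratosphere
theta_strat = potential_temperature(220.0, 10.0)   # roughly 820 K
theta_sfc = potential_temperature(300.0, 1000.0)   # equals T at the reference level
```

Because Θ increases monotonically with height in a stably stratified atmosphere, it can serve as the vertical coordinate of the isentropic formulation used later in section 2.2.1.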
Substituting the pressure gradient force with the Eliassen-Palm flux leads to:
$$\frac{\partial \bar{u}}{\partial t} = \bar{v} \left( f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right) - \bar{w} \frac{\partial \bar{u}}{\partial z} + \frac{1}{\rho_0\, a \cos(\Phi)} \vec{\nabla} \cdot \vec{F} + \bar{F}_{\text{fr}}.$$ (2.13)
In the next step, the mean meridional circulation is replaced by the residual mean meridional circulation, which reintroduces the contribution of the eddy heat flux:
$$\bar{v} \equiv \bar{v}^*,$$ (2.14)
$$\bar{v}^* = \bar{v} - \frac{1}{\rho_0} \frac{\partial}{\partial z} \left( \rho_0 \frac{R}{H} \frac{\overline{v'T'}}{N^2} \right), \quad N^2 = \frac{g}{\Theta} \frac{d\Theta}{dz}.$$
The equation now depends on the squared Brunt-Väisälä frequency $N^2$, which itself depends on the potential temperature, the gravitational constant and height; it is a measure of the stability of the air mass (Holton, 2010). Other quantities in the formula are the gas constant $R$, the scale height $H$, the density of the air at the surface $\rho_0$ and the temperature $T$. Using this alternative expression for the mean meridional circulation, the equation becomes:
$$\frac{\partial \bar{u}}{\partial t} = \bar{v}^* \left( f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right) - \bar{w} \frac{\partial \bar{u}}{\partial z} + \frac{1}{\rho_0\, a \cos(\Phi)} \vec{\nabla} \cdot \vec{F} + \bar{F}_{\text{fr}}.$$ (2.15)
The equation is now of transformed Eulerian mean (TEM) form.
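The static stability N² = (g/Θ) dΘ/dz that enters the residual circulation can be estimated from two vertical levels by a finite difference; a minimal sketch with illustrative numbers (the function name and the layer values are not from the thesis):

```python
G = 9.81  # gravitational acceleration [m s^-2]

def brunt_vaisala_squared(theta_low, theta_high, dz):
    """Static stability N^2 = (g / Theta) * dTheta/dz, estimated with a
    layer-mean Theta and a finite difference over the layer depth dz [m]."""
    theta_mid = 0.5 * (theta_low + theta_high)
    return (G / theta_mid) * (theta_high - theta_low) / dz

# stably stratified tropospheric layer: Theta rises from 300 K to 303 K over 1 km
n2 = brunt_vaisala_squared(300.0, 303.0, 1000.0)   # ~1e-4 s^-2, a typical value
```

A positive N² indicates stable stratification (oscillating displaced parcels); the stratosphere has N² several times larger than the troposphere, which is why Θ increases so steeply with height there.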
Analogously, the mean vertical circulation is replaced by the residual mean vertical circulation:
$$\bar{w} \equiv \bar{w}^*,$$ (2.16)
$$\bar{w}^* = \bar{w} + \frac{\partial}{\partial y} \left( \frac{R}{H} \frac{\overline{v'T'}}{N^2} \right).$$
This leads to:
$$\frac{\partial \bar{u}}{\partial t} = \bar{v}^* \left( f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right) - \bar{w}^* \frac{\partial \bar{u}}{\partial z} + \frac{1}{\rho_0\, a \cos(\Phi)} \vec{\nabla} \cdot \vec{F} + \bar{F}_{\text{fr}}.$$ (2.17)
The last change to the equation is only a different notation for the small-scale processes, which are usually not resolved by models:
\[ \bar{F}_{\text{fr}} \equiv \bar{X}. \] (2.18)
The TEM equation can now finally be written as:
\[ \frac{\partial \bar{u}}{\partial t} = \bar{v}^* \left( f - \frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi} \right) - \bar{w}^* \frac{\partial \bar{u}}{\partial z} + \frac{1}{\rho_0\, a \cos(\Phi)} \vec{\nabla} \cdot \vec{F} + \bar{X}. \] (2.19)
This is the formulation used by Kidston et al. (2015). They divide the TEM equation into seven parts to show the different influences on the zonal-mean stratospheric circulation:
\[ \underbrace{\frac{\partial \bar{u}}{\partial t}}_{1} = \underbrace{\bar{v}^*}_{2} \left( \underbrace{f}_{3} - \underbrace{\frac{1}{a \cos(\Phi)} \frac{\partial (\bar{u} \cos(\Phi))}{\partial \Phi}}_{4} \right) - \underbrace{\bar{w}^* \frac{\partial \bar{u}}{\partial z}}_{5} + \underbrace{\frac{1}{\rho_0\, a \cos(\Phi)} \vec{\nabla} \cdot \vec{F}}_{6} + \underbrace{\bar{X}}_{7}. \] (2.20)
Term 1 describes the change of the zonal-mean zonal flow; when looking at the polar cap, this is equivalent to changes of the polar vortex. Term 2 describes the residual mean meridional circulation, which depends on the eddy heat flux and the stability.
The influence of the Coriolis force on the zonal-mean zonal flow is described by term 3. Term 4 describes the meridional transport of zonal-mean momentum across latitudes. Zonal-mean momentum can also be transported vertically, described by term 5; this transport depends on the heat flux and stability. Disturbances of the zonal-mean zonal flow are usually forced by anomalously strong upward-propagating waves. This forcing is described by term 6, the divergence of the Eliassen-Palm flux, which depends on density, the eddy fluxes and stability. Processes which are typically not resolved by models, such as friction and small-scale gravity waves, are described by term 7.

### 2.2 Dynamics of Sudden Stratospheric Warming Events

#### 2.2.1 Theoretical Description by the Model of Matsuno (1971)

Matsuno (1971) proposes a simple numerical model to show the development of SSW events originating from tropospheric wave forcing. It is based on the adiabatic, geostrophic potential vorticity equation, which can be derived from the Navier-Stokes equation, equation (2.6). In a first step, the so-called vorticity equation is obtained by taking its curl. For the interaction between the troposphere and stratosphere, only the vertical component is relevant:
\[ \left( \frac{\partial}{\partial t} + u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} + w \frac{\partial}{\partial z} \right) \zeta_z = -f \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) + \frac{1}{\rho^2} \left( \frac{\partial \rho}{\partial x} \frac{\partial p}{\partial y} - \frac{\partial \rho}{\partial y} \frac{\partial p}{\partial x} \right) + F_{\text{fr}}. \] (2.21)
The vorticity \( \vec{\zeta} \), with its vertical component \( \zeta_z \), is also called the relative vorticity of an air parcel and measures its local rotation.
It is defined as:
\[ \vec{\zeta} = \vec{\nabla} \times \vec{u}, \quad \zeta_z = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}. \] (2.22)
In a next step, adiabatic conditions are assumed, meaning that neither heat nor mass is exchanged between the air parcel and its environment (Holton, 2010). This leads to the neglect of friction, since friction causes an exchange of mass and heat between the air parcel and its surroundings. Besides the relative vorticity, the potential vorticity can also be used to express the rotation of an air parcel. It combines the Earth's planetary vorticity and the local vorticity of the air parcel, relative to the potential temperature. The potential temperature can thereby be used as a measure of height, and the potential vorticity can be expressed in isentropic coordinates:
\[ P = \frac{1}{\rho} \left( 2\vec{\Omega} + \vec{\nabla} \times \vec{u} \right) \cdot \vec{\nabla} \Theta. \] (2.23)
In spherical, isentropic coordinates, the potential vorticity is written as:
\[ P \equiv q = \frac{1}{\rho} \left[ -\frac{\partial v}{\partial z} \frac{1}{a \cos(\theta)} \frac{\partial \Theta}{\partial \lambda} + \frac{\partial u}{\partial z} \frac{1}{a} \frac{\partial \Theta}{\partial \theta} + \left( 2\Omega \sin(\theta) + \frac{1}{a \cos(\theta)} \frac{\partial v}{\partial \lambda} - \frac{1}{a \cos(\theta)} \frac{\partial [u \cos(\theta)]}{\partial \theta} \right) \frac{\partial \Theta}{\partial z} \right], \] (2.24)
with terms containing the vertical velocity and the Coriolis force. In this notation, \(\theta\) denotes the latitude and \(\lambda\) the longitude. Small terms proportional to the cosine of latitude are neglected, and the distance to the Earth's centre is set constant to the Earth's radius \(a\). The vorticity equation, equation (2.21), is then expressed in spherical coordinates with the potential vorticity used instead of the relative vorticity. This finally results in the adiabatic, geostrophic potential vorticity equation.
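The vertical component of the relative vorticity from equation (2.22) can be computed on a regular grid with finite differences; a minimal sketch (illustrative function name), verified against solid-body rotation, for which ζ_z = 2ω holds exactly:

```python
import numpy as np

def relative_vorticity(u, v, dx, dy):
    """Vertical component of relative vorticity, zeta_z = dv/dx - du/dy,
    via centred finite differences on a regular (y, x) grid."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# solid-body rotation u = -omega*y, v = omega*x has zeta_z = 2*omega everywhere
omega = 1e-4
y, x = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing="ij")
zeta = relative_vorticity(-omega * y, omega * x, dx=1.0, dy=1.0)
```

Because the test field is linear in x and y, the centred differences recover the analytic value up to rounding error; on real wind fields the grid spacing would be the (latitude-dependent) physical distance between grid points.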
Matsuno (1971) uses this equation in a simple, compact form in his model:
\[ \left( \frac{\partial}{\partial t} + \bar{\omega} \frac{\partial}{\partial \lambda} \right) \mathcal{L}(\phi) + \frac{1}{a} \frac{\partial \bar{q}}{\partial \theta}\, v = 0, \quad \bar{\omega} = \frac{\bar{u}}{a \cos(\theta)}. \] (2.25)
The zonal-mean angular velocity of the air parcel is expressed by \(\bar{\omega}\). It is important to note that \(v\) is the disturbance velocity in the latitudinal direction in Matsuno's notation, and not, as before, the velocity in the y-direction. To describe the generation of potential vorticity from the disturbance heights of isobaric surfaces \(\phi\), Matsuno uses an operator \(\mathcal{L}\). This operator depends on the latitude \(\theta\), the longitude \(\lambda\), the height \(z\), the pressure \(p\), the Earth's angular speed of rotation \(\Omega\) and radius \(a\), as well as the Brunt-Väisälä frequency \(N\). The potential vorticity in isentropic coordinates is given as \(\bar{q}\). Using an alternative expression for \(v\) leads to:
\[ \left( \frac{\partial}{\partial t} + \bar{\omega} \frac{\partial}{\partial \lambda} \right) \mathcal{L}(\phi) + \frac{\partial \bar{q}}{\partial \theta} \frac{1}{\sin^2(\theta) \cos(\theta)} \frac{\partial \phi}{\partial \lambda} = 0. \] (2.26)
This differential equation is solved numerically in Matsuno's model. The wave solution of equation (2.26) uses the following ansatz, introducing a new variable \(\psi\):
\[ \phi(\lambda, \theta, z, t) = e^{im\lambda}\, e^{z/2H}\, \psi(\theta, z, t), \] (2.27)
\[ \bar{\phi}(\theta, z, t) = e^{z/2H}\, \bar{\psi}(\theta, z, t). \] (2.28)
The advantage of this new variable is that it depends on only three variables (latitude, height and time) instead of four. The longitudinal dependence is expressed by the exponential function containing the longitudinal wavenumber \(m\), and the growth of the wave amplitude with height by the factor \(e^{z/2H}\) with the scale height \(H\). Without disturbances, \(\psi(\theta, z, t) = 0\).
Matsuno (1971) investigates the development of SSW events from tropospheric wave disturbances that start at the surface. Therefore, the lower boundary conditions of his model are expressed as: \[ \psi(\theta, z = 0, t) = F(\theta, t), \quad \frac{\partial \psi}{\partial t} = 0. \] (2.29) A perturbation of \( \psi \) is thereby equivalent to a perturbation of \( \bar{u} \) or \( \bar{T} \). The forcing is described as a sine peaking at a specific latitude, here \( 60^\circ \text{N} \), in combination with a function \( f(t) \) describing the temporal evolution of the disturbance in the model: \[ F(\theta, t) = \sin \left[ \pi (\theta - 30^\circ)/60^\circ \right] \phi_{\text{max}} f(t), \quad 30^\circ \leq \theta \leq 90^\circ, \] (2.30) \[ F(\theta, t) = 0, \text{ otherwise}. \] (2.31) At the top of Matsuno’s model at 110 km, the disturbance is 0: \[ \psi(\theta, z = 110 \text{ km}, t) = 0, \quad \frac{\partial \psi}{\partial t} = 0. \] (2.32) ### 2.2.2 Development from Tropospheric Wave Forcing In the following, the development of an SSW event in a typical wintertime stratospheric circulation from a wavenumber-2 tropospheric wave disturbance is shown according to Matsuno (1971). The typical wintertime circulation in the middle stratosphere is marked by the westerly polar night jet encircling the pole. In Matsuno’s notation, this case is called „C2“ and comprises a sphere with a model wall at the equator, thus modelling only the flow in the northern hemisphere. The typical wintertime wind distribution in the troposphere and stratosphere contains continuous westerly wind conditions between \( 20^\circ \text{N} \) and the pole, with maximum westerly winds at 10 km and 65 km height (Figure 2.1 left). This circulation can be disturbed by mechanically, thermally or turbulently excited waves propagating from the troposphere into the stratosphere (Schneidereit et al., 2017).
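The lower-boundary forcing of equations (2.30) and (2.31) can be sketched numerically. The amplitude `phi_max` and the linear switch-on ramp for \(f(t)\) below are illustrative assumptions, since the exact form of \(f(t)\) is not specified above:

```python
import numpy as np

# Sketch of the lower-boundary forcing of Eqs. (2.30)/(2.31):
# a sine bump peaking at 60 deg N, zero outside 30..90 deg N.
# phi_max and the linear time ramp are illustrative assumptions.

def forcing(theta_deg, t, phi_max=300.0, t_ramp=5.0):
    """F(theta, t) at latitude theta_deg [deg N] and time t [days]."""
    f_t = min(t / t_ramp, 1.0)   # assumed linear switch-on over t_ramp days
    if 30.0 <= theta_deg <= 90.0:
        return np.sin(np.pi * (theta_deg - 30.0) / 60.0) * phi_max * f_t
    return 0.0                    # Eq. (2.31): zero outside the band

# The forcing peaks at 60 deg N once fully switched on:
print(forcing(60.0, 10.0))   # 300.0, since sin(pi/2) = 1
print(forcing(20.0, 10.0))   # 0.0, outside the forced latitude band
```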
The modelled wavenumber-2 disturbance fits roughly to the wavenumber-2 disturbance observed before the sudden stratospheric warming event in 1963 (Figure 2.1 right). Thus, the simple model by Matsuno (1971) serves the purpose of demonstrating the qualitative development of sudden stratospheric warming events. Tropospheric planetary-scale waves, such as Rossby waves, can only propagate in westerly wind fields (Baldwin and Dunkerton, 1999). This is shown by the Charney-Drazin criterion for wave propagation (Holton, 2010): \[ 0 < \bar{u} - c < u_c, \quad u_c = \beta \left( k^2 + l^2 + \frac{f_0^2}{4N^2H^2} \right)^{-1}, \] (2.33) with the meridional gradient of the Coriolis parameter \(\beta\), the Coriolis parameter \(f_0\), the scale height \(H\) and the zonal and meridional wavenumbers \(k\) and \(l\), where \(k\) is proportional to the zonal wavenumber \(m\). This criterion essentially states that the difference between the mean zonal background flow \( \bar{u} \) and the wave’s phase speed \( c \) has to be greater than 0 but also smaller than the critical velocity \( u_c \), which decreases with increasing wavenumber. For waves with wavenumber 1, the critical velocity is higher than for waves with wavenumber 2 and higher. Waves with wavenumber 1 can therefore propagate into stronger mean background zonal flows than waves with wavenumber 2 with the same phase speed. For stationary waves, the phase speed is 0 and the penetration of upward propagating waves depends solely on the mean background zonal flow. In the troposphere and stratosphere, the mean background zonal flow is westerly, making $\bar{u}$ positive (Figure 2.1 left). In the polar stratosphere, this flow basically corresponds to the polar vortex, which is then perturbed by upward propagating planetary-scale waves. The easterly acceleration exerted by these waves increases with height, as the amplitude of the planetary-scale waves increases with decreasing air density (Matsuno, 1971; Kidston et al., 2015). According to Charlton and Polvani (2007), SSW events are preceded by strong polar vortex conditions, meaning that the mean background flow velocity is largely positive.
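The Charney-Drazin criterion can be turned into a small propagation check. The parameter values and the standard quasi-geostrophic form of the critical velocity (Holton, 2010) used below are assumptions for illustration:

```python
import numpy as np

# Sketch of the Charney-Drazin criterion: a wave with phase speed c
# propagates vertically only if 0 < ubar - c < u_c.  The standard
# quasi-geostrophic form of u_c (Holton, 2010) is assumed here, with
# illustrative mid-latitude parameter values.

BETA = 1.6e-11    # meridional gradient of f at 60 deg N [1/(m s)]
F0   = 1.26e-4    # Coriolis parameter at 60 deg N [1/s]
N_BV = 2.0e-2     # Brunt-Vaisala frequency [1/s]
H    = 7.0e3      # scale height [m]

def critical_velocity(k, l=0.0):
    """u_c = beta / (k^2 + l^2 + f0^2 / (4 N^2 H^2))  [m/s]."""
    return BETA / (k**2 + l**2 + F0**2 / (4 * N_BV**2 * H**2))

def can_propagate(ubar, c, k, l=0.0):
    """Both conditions of the Charney-Drazin criterion."""
    return 0.0 < (ubar - c) < critical_velocity(k, l)

# Zonal wavenumber m at 60 deg N corresponds to k = m / (a cos(60))
a = 6.371e6
k1 = 1 / (a * 0.5)     # wavenumber 1
k2 = 2 / (a * 0.5)     # wavenumber 2

# Wavenumber 1 tolerates a stronger westerly flow than wavenumber 2:
print(critical_velocity(k1), critical_velocity(k2))
print(can_propagate(ubar=25.0, c=0.0, k=k1))
```

With these numbers the critical velocity for wavenumber 1 is roughly twice that for wavenumber 2, matching the statement above that wavenumber-1 waves penetrate stronger westerlies.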
Following Matsuno (1971), eastward propagating waves with a positive phase speed can propagate into this strong mean background zonal flow; westward propagating waves with a negative phase speed cannot. In regions with easterly mean background flow, for example in the region of the upper stratosphere and lower mesosphere, $\bar{u}$ turns negative. When eastward propagating waves enter these regions, the lower condition of the Charney-Drazin criterion can no longer be met. The amplitude of these waves is damped rapidly (Figure 2.2 left). The waves break and decelerate the westerly jet by depositing easterly angular momentum (Matsuno, 1971; Kidston et al., 2015). Kidston et al. (2015) explain the deceleration of the westerly jet as an effect of the conservation of angular momentum and mass. When breaking waves deposit angular momentum in the stratosphere, the circumpolar jet slows down and momentum is transported to the pole to conserve angular momentum. The stratospheric circulation becomes less symmetric and the polar vortex can be displaced off the pole and in some cases eventually break up (Limpasuvan et al., 2004; Butler et al., 2015). This can be detected in the geopotential heights on a fixed pressure level, where the vortex core is located at the lowest geopotential height values (Figure 2.3). The geopotential height is defined as: $$Z = \frac{\Phi}{g_{\text{sfc}}}, \quad \Phi = \int_0^h g \, dz$$ \hspace{1cm} (2.34) with the geopotential $\Phi$ and the gravitational acceleration at the surface $g_{\text{sfc}}$. Using the hydrostatic balance and the ideal gas law, the geopotential on a pressure level can be expressed as: $$\Phi = R_l \int_{p(z)}^{p_{\text{sfc}}} T(p') \, \frac{dp'}{p'}.$$ \hspace{1cm} (2.35) This means that the geopotential height on a given pressure level $p(z)$ increases when the temperature increases (Limpasuvan et al., 2004).
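The link between temperature and geopotential height described above can be sketched with the hypsometric integral \(\Phi = R_l \int T \, d\ln p\), which follows from hydrostatic balance and the ideal gas law. The isothermal-layer treatment and the numbers below are illustrative assumptions:

```python
import numpy as np

# Sketch of Z = Phi / g_sfc with the geopotential from the hypsometric
# integral Phi = R_l * integral T dln(p).  Layer-mean (isothermal)
# temperatures are an illustrative assumption.

R_L   = 287.0    # specific gas constant of dry air [J/(kg K)]
G_SFC = 9.81     # gravitational acceleration at the surface [m/s^2]

def geopotential_height(p_levels, T_mean, p_sfc=1000.0):
    """Z [m] at the last pressure level, from layer-mean temperatures."""
    p = np.concatenate(([p_sfc], p_levels))
    dlnp = np.diff(np.log(p))            # negative when going upward
    phi = -R_L * np.sum(np.asarray(T_mean) * dlnp)
    return phi / G_SFC

# Warmer layers raise the geopotential height of a fixed pressure level:
z_cold = geopotential_height([500.0], [250.0])
z_warm = geopotential_height([500.0], [260.0])
print(z_cold, z_warm)   # the warm column places 500 hPa higher
```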
To regain the geostrophic balance in the stratosphere after the disturbance of the stratospheric polar vortex, a residual mean circulation is induced which moves mass into the stratosphere over the polar cap (Coy and Pawson, 2015; Kidston et al., 2015). Due to continuity, the additional mass over the pole is transported downward, while in the lower latitudes mass is transported upward (Matsuno, 1971; Kidston et al., 2015; Limpasuvan et al., 2004). This leads to adiabatic warming and a high pressure system at the surface at the pole, and to adiabatic cooling and a surface low pressure system in lower latitudes. Over the pole, the sinking of air masses leads to warm temperature anomalies below the region of the sinking motion and to cold anomalies above it (Figure 2.2 right). When this process happens in the course of a few days, the warming over the pole is called a „sudden stratospheric warming“. ### 2.2.3 Downward Propagation of Stratospheric Anomalies Matsuno (1971) calls the interaction between the breaking waves and the easterly background flow “critical layer interaction”, where the critical layer is the region with easterly winds. The breaking of waves leads to the formation of a new critical layer in the region of breaking, thus leading to a downward shift of the positive temperature and easterly wind anomalies. Above the critical layer, the polar jet recovers slowly, driven by radiative cooling due to the absence of wave activity (Baldwin and Dunkerton, 1999; Tripathi et al., 2015). The recovery of the polar vortex takes on average 10 days in the middle stratosphere and 40 days in the lower stratosphere due to different radiative damping time scales (Charlton and Polvani, 2007). The recovery is only possible during polar night, as the increased radiative heating by the rising sun in spring prevents the reformation of the polar vortex.
The critical layer itself, respectively the stratospheric anomalies, can descend into the troposphere and change surface weather patterns (Baldwin et al., 2003). A particularly sensitive area for the coupling between the troposphere and stratosphere is the lower stratosphere (Charlton and Polvani, 2007). Stratospheric anomalies persist there for several weeks up to two months, as the Coriolis force prevents their frictional dissipation (Limpasuvan et al., 2004). In the lower stratosphere, the anomalies induce non-local dynamical effects that influence the tropospheric circulation and sometimes even propagate down to the surface (Karpechko et al., 2018; Baldwin et al., 2003; Hinssen et al., 2011). This then results in one of the strongest dynamical couplings between troposphere and stratosphere (Charlton and Polvani, 2007). Usually, though, hydrostatic, geostrophic and thermal wind balances at the tropopause hinder stratospheric anomalies from entering the troposphere. The hydrostatic balance can be expressed with the hydrostatic approximation, which results from the balance between the pressure gradient force and gravity: \[ \frac{dp}{dz} = -\rho \cdot g. \] (2.36) The geostrophic balance describes the balance between the Coriolis force and the pressure gradient force, and the thermal wind balance is given with the thermal wind equation: \[ -\frac{\partial \vec{v}_g}{\partial p} = \frac{R_l}{f p} \cdot \vec{k} \times \vec{\nabla}_p T. \] (2.37) This equation describes the vertical change of the geostrophic wind $\vec{v}_g$ in hydrostatic approximation. These balances can be disturbed when the momentum forcing in the stratosphere is strong and persistent enough to penetrate through the tropopause to the surface (Limpasuvan et al., 2004). This so-called “downward control” principle is controversially discussed in the literature. According to Limpasuvan et al.
(2004) the main reason for discussion is the small mass of the stratosphere in comparison to the troposphere. In winter, the stratosphere contains less than 25% of the atmospheric mass of the extratropics, which leads to a larger momentum forcing in the troposphere than in the stratosphere. The principle of downward control therefore probably only works when the stratospheric anomalies project well onto the modes of tropospheric variability, such as the NAO pattern. If this is the case, stratospheric anomalies can descend into the troposphere and influence surface weather. The wind and temperature anomalies caused by an SSW event are related via the thermal wind equation. Integrating this equation between two pressure levels leads to the thermal wind $\vec{v}_T$: $$\vec{v}_T = \frac{R_l}{f} \ln \left( \frac{p_0}{p_1} \right) \vec{k} \times \vec{\nabla}_p \bar{T}$$ \hspace{1cm} (2.38) which depends on the mean temperature $\bar{T}$ of the atmospheric layer between the pressure levels $p_0$ and $p_1$. When looking at the lower pressure level, the warm temperature anomalies precede the easterly wind anomalies (Limpasuvan et al., 2004). According to Limpasuvan et al. (2004), this time lag between warming temperatures and easterly winds is around 10 days at the 10 hPa level. When the stratospheric anomalies propagate down into the troposphere, the mass, and therefore the pressure, is slowly redistributed. The change in pressure is thereby proportional to a change in temperature according to the ideal gas equation: $$p = \rho R_l T.$$ \hspace{1cm} (2.39) This redistribution of mass leads to a decrease of the tropopause height through up- and down-welling processes, an increase of pressure over the polar cap and a reduction of it over the mid-latitudes. This is the manifestation of the negative phase of the Arctic Oscillation (Baldwin et al., 2003; Wang et al., 2010).
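The thermal wind relation of equation (2.38) can be evaluated directly. The layer-mean temperature gradient below is a synthetic, illustrative value:

```python
import numpy as np

# Sketch of the thermal wind of Eq. (2.38):
# v_T = (R_l / f) * ln(p0/p1) * k x grad_p(Tbar).
# The layer-mean temperature gradient is an illustrative assumption.

R_L = 287.0                                      # gas constant of dry air
f60 = 2 * 7.292e-5 * np.sin(np.deg2rad(60.0))    # Coriolis parameter, 60 deg N

def thermal_wind(dT_dx, dT_dy, p0=1000.0, p1=100.0, f=f60):
    """Components (u_T, v_T) [m/s] from the layer-mean T gradient [K/m]."""
    factor = (R_L / f) * np.log(p0 / p1)
    # k x grad(T) = (-dT/dy, dT/dx)
    return -factor * dT_dy, factor * dT_dx

# A poleward temperature decrease of 1 K per 100 km between 1000 and
# 100 hPa gives a westerly (positive u) thermal wind:
u_T, v_T = thermal_wind(dT_dx=0.0, dT_dy=-1.0e-5)
print(u_T, v_T)
```

The sign convention reproduces the familiar result that westerlies increase with height when temperature decreases poleward.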
A balancing movement of air masses transports the cold polar surface air from the pole into the mid-latitudes and further equatorwards, which leads to cold waves in the northern hemisphere mid-latitudes. Temperature changes below the tropopause are though much smaller than above it due to the lower lapse rate of the troposphere (Kidston et al., 2015). In the literature, numerous time scales for the downward propagation of stratospheric anomalies are discussed, ranging from days to two months (Baldwin et al., 2003; Tripathi et al., 2015; Manney et al., 2009). In general, the downward propagation is faster at the beginning of winter than at its end (Baldwin and Dunkerton, 1999). According to Charlton-Perez et al. (2018), the response time of the troposphere to changes in the stratospheric circulation is on average 20 days. An open research question according to Kidston et al. (2015) is the exact role of tropospheric eddy feedbacks, which are an important factor in the troposphere-stratosphere coupling. They are initiated when the mass between the troposphere and stratosphere is redistributed. These eddy feedbacks are associated with the altering of tropospheric weather systems and the propagation of tropospheric waves, which influence the strength and position of the tropospheric jets in the high- and mid-latitudes. ### 2.2.4 Precursors for Tropospheric Wave Forcing Especially before the development of SSW events featuring a breakup of the polar vortex in the stratosphere, the polar vortex is reduced in size and strength by the breaking of upward propagating planetary-scale waves at the vortex edges (Baldwin and Dunkerton, 1999; Limpasuvan et al., 2004; Charlton and Polvani, 2007). This is called „preconditioning“ of the polar vortex. It leads to a smaller moment of inertia of the polar vortex due to its reduced size (Limpasuvan et al., 2004).
Therefore, poleward and upward propagating planetary-scale waves can enter the stratosphere more easily and lead to SSW events (Manney et al., 2009; Limpasuvan et al., 2004). According to Charlton and Polvani (2007), a typical situation for preconditioning is a largely positive zonal wind anomaly in the troposphere and stratosphere, centered at 70°N. This can be linked to positive mean sea level pressure anomalies over the North Atlantic ocean, Alaska and the Urals, which are often observed before SSW events (Karpechko et al., 2018). According to Lee et al. (2019), the high pressure system over the Urals is amplified by Rossby wave-breaking and co-occurs with the so-called „Scandinavia-Greenland dipole“, which is characterized by an anomalously strong mean sea level pressure gradient between Scandinavia and Greenland. In 35% of the cases, this dipole is observed in the 15 days prior to the central date of the SSW event. The Scandinavia-Greenland dipole requires the poleward shift of the North Atlantic storm track, which is associated with the positive phase of the NAO. This NAO+ phase itself is often associated with a strengthened polar vortex. This is a typical situation observed prior to an SSW event (Lee et al., 2019; Charlton and Polvani, 2007). Another situation often observed before SSW events is blocking, which is frequently mentioned in the literature as a typical precursor of SSW events (Tripathi et al., 2015). Blocking patterns over the Atlantic ocean often precede SSW events which lead to a displacement of the polar vortex off the pole (Yu et al., 2018; Martius et al., 2009). According to Martius et al. (2009), their frequency maxima are located east of Greenland and over Scandinavia. If these SSW events take place in early winter, usually a strong Aleutian high is observed prior to the event (Baldwin and Dunkerton, 1999). Both situations lead to a wavenumber-1 flow in the troposphere according to Martius et al. (2009).
The corresponding upward propagating Rossby wave is tilted westward with height and shows a baroclinic structure. Blocking patterns over either the Pacific ocean or both the Atlantic and the Pacific ocean, with frequency maxima over the eastern Pacific, Alaska and west of Greenland, often precede SSW events which lead to a split of the polar vortex. A wavenumber-2 tropospheric circulation develops. The upward propagating Rossby waves are also tilted westward with height but show a more barotropic structure. This modulation of upward propagating planetary-scale tropospheric waves, such as Rossby waves, is confirmed by Woollings et al. (2018). These waves can penetrate into the stratosphere more easily and interfere there with climatological planetary waves when the tropospheric block is favourably located. Especially in the case of SSW events leading to a split of the polar vortex, the location of the tropospheric blocks relative to each other matters. When the upward propagating Rossby waves, tilted westward with height, are favourably located, the tropospheric wave with wavenumber 2 interferes constructively with the climatological stratospheric wave with wavenumber 2. In the upper stratosphere, this can eventually lead to the split of the polar vortex. It is also possible that first an anomalously strong upward propagation of Rossby waves with wavenumber 1 is observed, followed by an anomalous upward propagation of Rossby waves with wavenumber 2 (Tripathi et al., 2016). According to Martius et al. (2009), in the case of SSW events which lead to a displacement of the polar vortex off the pole, a favourable alignment of the tropospheric wave with wavenumber 1 with the climatological stationary wave pattern is important. The climatological stationary wave pattern is enhanced, which can lead to the development of an SSW event when the upward propagating Rossby wave, induced by the block, penetrates into the stratosphere. According to Tripathi et al.
(2015) these upward propagating Rossby waves have to be anomalously strong for longer than a week in order to trigger an SSW event. Nevertheless, the blocking duration is not the dominant factor of the linkage between blocks and SSW events. Furthermore, blocking might be necessary for the development of an SSW event, but it is not found to be sufficient by Manney et al. (2009). This is confirmed by the fact that some models produce the observed blocking patterns but not the following SSW (Tripathi et al., 2016). Controversially discussed as a precursor of SSW events is the snow cover extent in early winter, which may enhance the upward propagation of planetary-scale waves (Tripathi et al., 2015). Another open research question is the linkage between SSWs and certain phases of the Madden-Julian Oscillation (MJO) (Tripathi et al., 2015). When MJO and La Niña conditions are phased beneficially, they enhance a wavenumber-2 tropospheric wave-forcing (Schneidereit et al., 2017). According to Schneidereit et al. (2017), La Niña conditions also favor blocking anticyclones over the Pacific ocean and north of Scandinavia. The MJO, especially in phases 7 and 8, forms a low pressure anomaly centered over the central North Pacific and a high pressure anomaly centered over Canada. This leads to a quasi-stationary pattern of troughs and ridges, enhancing a wavenumber-2 flow pattern in the troposphere. When MJO phases 7 and 8 co-occur with La Niña conditions, this quasi-stationary pattern is strengthened, leading to an amplification of upward propagating Rossby waves. The linkage between the different states of the El Niño-Southern Oscillation (ENSO) and SSW events is also discussed controversially in the literature. According to Tripathi et al. (2015), some studies find that SSW events are twice as likely during El Niño as during La Niña, but other studies do not find a difference between the two phases.
One reason for this discrepancy might be the difficulty to separate the influence of ENSO and the Quasi-Biennial Oscillation (QBO) (Lehtonen and Karpechko, 2016). During the QBO east phase, weak vortex events including SSWs are twice as likely as during the QBO west phase (Baldwin and Dunkerton, 2001; Tripathi et al., 2015). Changes in the Brewer-Dobson Circulation (BDC) are another discussed phenomenon which possibly influences the occurrence of SSW events (Zhang and Tian, 2019). According to Zhang and Tian (2019), the BDC influences not only the state of the polar vortex but also the tropospheric jet streams and surface temperatures. When the BDC is enhanced, the meridional transport of air masses to the polar stratosphere is increased. The temperature over the polar cap rises, leading to a weaker polar vortex and possibly SSW events. Other discussed influences on the state of the polar vortex are volcanic eruptions, anthropogenic changes, including increased greenhouse gas concentrations, and the solar cycle (Butler et al., 2015). ### 2.2.5 Resonant Excitation of the Polar Vortex In some cases, SSW events are observed without the typical tropospheric precursors or vortex preconditioning (Tripathi et al., 2015). According to Tripathi et al. (2015), in these cases the stratospheric polar vortex is excited by small planetary-scale waves in such a way that resonance is created. Although an enhanced upward propagation of planetary-scale waves from the troposphere into the stratosphere does not take place, the polar vortex is deformed or even disrupted. This mechanism is called the „resonant excitation“ of the polar vortex. According to Tripathi et al. (2015), a resonant excitation of a polar vortex with a baroclinic structure leads to the displacement of the vortex off the pole, while a resonant excitation of a polar vortex with a barotropic structure leads to its breakup.
Especially in the latter case, very small changes in the tropospheric wave forcing or the stratospheric circulation can eventually lead to very large changes of the polar vortex state. ### 2.3 Characteristics of Sudden Stratospheric Warming Events SSW events are characterized in the stratosphere, at heights of 30 to 50 km, by a temperature increase of 30 to 40 K within a few days (Butler et al., 2015). As the land-sea contrast in the southern hemisphere is not favorable for the formation of the strong planetary-scale waves needed for the occurrence of SSW events, SSWs occur almost exclusively in the northern hemisphere (Butler et al., 2015). In extreme cases, also called major SSWs, the anomalously strong upward propagation of tropospheric planetary-scale waves leads to a reversal of the stratospheric polar night jet from westerlies to easterlies (Butler et al., 2015; Charlton and Polvani, 2007; Karpechko et al., 2018). The polar vortex is then either displaced off the pole or split into two parts of comparable size (Figure 2.4; Butler et al., 2015; Charlton and Polvani, 2007). SSW events which lead to a displacement of the polar vortex off the pole are called „displacement-type“ (D-type) events (Charlton and Polvani, 2007). During its displacement, the vortex is distorted into a „comma-shape“ (Figure 2.4 top). The circulation in the stratosphere is thereby still characterized by a wavenumber-1 structure (Charlton and Polvani, 2007). The occurrence of D-type SSW events is equally likely during the whole winter, but due to the increased likelihood of split events in mid-winter, SSW events in early winter are usually D-type events (Baldwin and Dunkerton, 1999; Charlton and Polvani, 2007). In the literature, the D-type events occurring in early winter are sometimes referred to as „Canadian Warmings“ (Butler et al., 2015).
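The wind-reversal characterization of major SSW events can be sketched as a simple detector on a zonal-mean zonal wind series, in the spirit of the 10 hPa, 60°N definition used by Charlton and Polvani (2007). The wind series below is synthetic, and the seasonal restriction of the full definition is omitted:

```python
# Minimal sketch of the wind-reversal criterion for major SSW events:
# flag the days on which the zonal-mean zonal wind at 10 hPa and 60 deg N
# turns from westerly (>= 0) to easterly (< 0).  The series is synthetic
# and the calendar restriction of the full definition is omitted.

def detect_ssw(u_10hpa_60n):
    """Return indices (days) where westerlies reverse to easterlies."""
    events = []
    for day in range(1, len(u_10hpa_60n)):
        if u_10hpa_60n[day - 1] >= 0.0 > u_10hpa_60n[day]:
            events.append(day)
    return events

# Synthetic winter: strong vortex, rapid deceleration, reversal, recovery
u = [40, 38, 35, 25, 10, -5, -12, -8, 2, 15]
print(detect_ssw(u))   # [5]: the single reversal day
```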
SSW events which lead to a breakup of the polar vortex into two parts of comparable size are called „split-type“ (S-type) events (Figure 2.4 bottom). Nearly half of all SSW events belong to this type (Charlton and Polvani, 2007). During these events, the circulation in the stratosphere resembles a wavenumber-2 structure (Charlton and Polvani, 2007). In general, S-type events are characterized by a more sudden and deeper-reaching wind reversal than D-type events (Butler et al., 2015). Middle-stratospheric temperatures are increased slightly more strongly and remain influenced for up to 20 days longer (Charlton and Polvani, 2007). S-type SSW events show a clear seasonality with the highest probability of occurrence in January and February (Charlton and Polvani, 2007). ### 2.4 Downward Impact of Sudden Stratospheric Warming Events ### 2.4.1 Blocking in the Middle Troposphere The term atmospheric „blocking“ is not uniquely defined. In general, a large-scale meridional, horizontal tropospheric circulation which leads to changes in the prevailing zonal flow and storm tracks is referred to as blocking (Liu, 1994; Woollings et al., 2018). Other common characteristics found in the literature are persistence and quasi-stationarity (Woollings et al., 2018). According to Liu (1994), a persistence criterion is usually applied to exclude minor variability in the troposphere not related to the blocking pattern. Depending on the study, time scales from a single day to a week are given as the minimal duration of blocking (Tibaldi and Molteni, 1990; Tripathi et al., 2015). Although blocking patterns are often described as being especially persistent, it has to be kept in mind that they are generally not more persistent than the zonal flow regime (Liu, 1994). Typical blocking situations are stationary ridges embedded in a large-amplitude Rossby wave pattern with a phase speed near zero (Figure 2.5a; Woollings et al., 2018).
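The gradient-reversal idea behind blocking detection, going back to Tibaldi and Molteni (1990), can be sketched as follows. The three reference latitudes, the threshold and the geopotential height values below are illustrative assumptions:

```python
# Sketch of a blocking detector in the spirit of the Tibaldi-Molteni
# (1990) index: a longitude is flagged as blocked when the 500 hPa
# geopotential height gradient between mid and high latitudes reverses.
# Reference latitudes, threshold and height values are illustrative.

def blocked(z500):
    """z500: mapping latitude [deg N] -> 500 hPa geopotential height [m]."""
    phi_n, phi_0, phi_s = 80.0, 60.0, 40.0
    # southern gradient: positive (reversed) in a high-over-low situation
    ghgs = (z500[phi_0] - z500[phi_s]) / (phi_0 - phi_s)
    # northern gradient: strongly negative poleward of the blocking ridge
    ghgn = (z500[phi_n] - z500[phi_0]) / (phi_n - phi_0)
    return ghgs > 0.0 and ghgn < -10.0

# Westerly zonal flow: height decreases monotonically poleward -> no block
zonal = {40.0: 5700.0, 60.0: 5400.0, 80.0: 5100.0}
# High-over-low pattern: ridge at 60 deg N with a sharp drop poleward
block = {40.0: 5500.0, 60.0: 5650.0, 80.0: 5200.0}
print(blocked(zonal), blocked(block))   # False True
```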
The so-called „Ω“ block is not a stationary ridge but is also associated with a stationary Rossby wave pattern (Figure 2.5b; Woollings et al., 2018). According to Buehler et al. (2011), it is one of the two dominant blocking patterns over the North Atlantic region. In comparison with stationary ridges, its amplitude is generally larger and it features closed geopotential height isolines inside the high pressure area (Woollings et al., 2018). This high pressure area is usually called the „blocking anticyclone“ (Buehler et al., 2011). Low pressure systems are located upstream and downstream of it, forming an $\Omega$ which is visible in the 500 hPa geopotential height isolines (Buehler et al., 2011; e.g. Figure 6.11 top). The second dominant blocking pattern over the North Atlantic ocean is a „high over low“ pattern at the same longitude (Figure 2.5e; Buehler et al., 2011). This kind of blocking pattern develops when large-scale Rossby waves break anticyclonically (Figure 2.5d; Woollings et al., 2018). In the literature, it is also called a „Dipole-“ or „Rex-“ block, referring to the first studies of atmospheric blocking done by Rex in 1950 (Woollings et al., 2018; Liu, 1994). Blocking frequencies are generally higher in winter than in summer, with the blocking anticyclones usually being located over the oceans in winter and over the continents in summer (Woollings et al., 2018). In winter, blocking is often observed before SSW events and therefore seen as a typical tropospheric precursor of these events (Martius et al., 2009). According to Domeisen et al. (2020), blocking furthermore plays an important role in determining the tropospheric response to SSW events. Especially the so-called European blocking is important for the formation of European cold waves after SSW events.
When the European blocking, a ridge over the British Isles, is present at the time of the formation of the SSW, colder than usual 2 metre temperatures are found over central and northern Europe after the event. The lowest anomalies are observed 20 days and 40 days after the SSW, when a so-called Greenland blocking situation is simultaneously present. This blocking situation is characterized by an enhanced ridge over Greenland, leading to lower temperature anomalies than usual over Europe. However, this link between blocking and the tropospheric response to SSW events is not confirmed by Garfinkel et al. (2017), who state that the tropospheric response to an SSW event is independent of the tropospheric state. Controversially discussed in the literature is the question whether SSW events have an influence on the occurrence of blocking. Charlton-Perez et al. (2018) state that blocking itself is not influenced by the state of the polar vortex, as it usually occurs during neutral polar vortex states. This is in contrast to Woollings et al. (2018), who mention a significant increase of blocking in the high-latitudes and a longer duration of these blocks after SSW events, especially in the region of the Atlantic ocean. According to Woollings et al. (2018), independent of the possible coupling to SSW events, blocking situations can influence European surface weather by changing the prevailing zonal flow which transports relatively warm oceanic air into Europe. During a blocking situation, relatively cold polar air is transported downstream into Europe, possibly leading to cold waves. Only a minor effect in winter is the unusual cooling of the region below the blocking anticyclone due to the reduced cloud cover. Besides the influence on European temperatures, blocking patterns over the North Atlantic ocean influence precipitation (Woollings et al., 2018).
The actual influence of the blocking system on temperatures and precipitation depends strongly on its geographical location and type (Woollings et al., 2018). Blocking patterns near the British Isles, for example, influence surface temperatures and precipitation over Europe (Buehler et al., 2011). According to Woollings et al. (2018), precipitation below the blocking anticyclone is drastically reduced, down to zero, while precipitation below the low pressure systems up- and downstream of it is strongly increased. This is due to the forced pathway of storms around the anticyclone. The stagnant air masses below the anticyclone furthermore lead to an accumulation of pollutants which affects air quality. Unlike its possible influence on the stratospheric circulation, the severity of the temperature and precipitation changes caused by a blocking situation depends largely on its persistence, which can lead to several weeks of anomalous surface weather. This is confirmed by Buehler et al. (2011), who found that the number of days with cold spells increases with the duration of the blocking situation. ### 2.4.2 The Mid-Latitude North Atlantic Jet Stream in the Lower Troposphere Tropospheric zonal-mean winds are decreased by up to $5 \text{ ms}^{-1}$ when stratospheric anomalies, caused by an SSW event, are present in the lower stratosphere (Lehtonen and Karpechko, 2016). When these anomalies influence the troposphere, usually the mid-latitude tropospheric jet stream over the North Atlantic ocean is displaced from its climatological position (Afargan-Gerstman and Domeisen, 2020). According to Afargan-Gerstman and Domeisen (2020), the shift of the jet stream starts on average 10 days after the occurrence of the SSW event and the jet persists in its new position for up to one month. A fraction of 2/3 of the SSW events shows a zonally symmetric tropospheric response, leading to an equatorward displacement of the mid-latitude jet stream over the North Atlantic ocean.
This co-occurs with the negative phase of the NAO and a weaker than usual storm track over Europe. The remaining 1/3 of the SSW events are followed by a zonally asymmetric tropospheric response, leading to a poleward displacement of the mid-latitude jet stream over the North Atlantic ocean. The North Atlantic storm track is then stronger than usual. Besides stratospheric variability, internal tropospheric variability, such as blocking patterns, can also influence the position of the mid-latitude jet stream. Woollings et al. (2018) describe the onset of a block by a poleward displacement of subtropical air within 1-3 days. This creates an extended ridge which penetrates into the mid-latitude jet stream. The jet stream can then be displaced southward or split, with its remnants located up- and downstream of the blocking pattern (Manney et al., 2009; Martius et al., 2009). ### 2.4.3 The North Atlantic Oscillation at the Surface One of the most important factors that determine wintertime surface temperatures in the northern hemisphere is the NAO (Wang and Chen, 2010). It describes the oscillation of atmospheric air masses in the North Atlantic ocean (Hurrell et al., 2003). This oscillation of air masses leads to a varying strength of the climatological high, located in the area of the Azores, and the climatological low, located in the region of Iceland. For the development of the two phases of the NAO, called NAO+ and NAO-, Rossby wave breaking plays a substantial role (Benedict et al., 2004). According to Benedict et al. (2004), synoptic-scale wave disturbances travelling from west to east are deflected into a north-south direction, forming the typical NAO pressure dipole between the areas around Iceland and the Azores. The positive phase of the NAO is preceded by two anticyclonic Rossby wave breaking events, one over the west coast of North America and the other over the subtropical North Atlantic ocean.
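The station-based view of the NAO as an Azores-Iceland pressure dipole can be sketched as a normalized pressure-difference index. The pressure series below are synthetic, and real indices use standardized anomalies relative to a long climatology:

```python
import statistics

# Sketch of a station-based NAO index: the normalized sea level pressure
# difference between the Azores high and the Icelandic low.  The series
# below are synthetic placeholders for seasonal-mean pressures [hPa].

def nao_index(p_azores, p_iceland):
    """Standardized Azores-minus-Iceland pressure difference per winter."""
    diff = [a - i for a, i in zip(p_azores, p_iceland)]
    mean = statistics.mean(diff)
    std = statistics.pstdev(diff)
    return [(d - mean) / std for d in diff]

# Synthetic winters: strong dipoles (NAO+-like) and one weak dipole
azores  = [1025.0, 1026.0, 1018.0, 1024.0]
iceland = [ 995.0,  993.0, 1005.0,  996.0]
idx = nao_index(azores, iceland)
print(idx)   # the third winter (weak dipole) gets a clearly negative index
```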
The NAO+ phase is therefore characterized by a stronger than usual low pressure system in the region of Iceland and a stronger than usual high pressure system over the region of the Azores (Figure 2.6 left, top plot; Leckebusch et al., 2008). The pressure over the polar cap is lower than usual, and an enhanced frequency of high pressure systems, associated with blocking, is observed over Europe (Baldwin et al., 2003; Blessing et al., 2005). The negative phase of the NAO is preceded by a cyclonic Rossby wave breaking event over the North Atlantic ocean (Benedict et al., 2004). This NAO phase is characterized by a weaker than usual low pressure system in the region of Iceland and a weaker than usual high pressure system over the region of the Azores (Figure 2.6 left, bottom plot; Leckebusch et al., 2008). The mid-latitude jet stream over the North Atlantic ocean is displaced southward, with blocking patterns mostly located over the western North Atlantic ocean and south of Greenland (Butler et al., 2015; Blessing et al., 2005). The NAO is part of the AO, which spans the northern part of the northern hemisphere and includes the polar vortex in the stratosphere (Baldwin and Dunkerton, 1999). Therefore the AO, and consequently the NAO, are strongly influenced by the state of the polar vortex (Baldwin and Dunkerton, 1999; Baldwin et al., 2003; Blessing et al., 2005). In the stratosphere, a strong polar vortex corresponds to an AO+ signature, a weak and disorganized polar vortex to an AO- signature (Baldwin and Dunkerton, 2001; Lehtonen and Karpechko, 2016). This AO signature propagates downward from 10 hPa to the tropopause in approximately 10 days and can possibly reach the surface (Figure 2.6 right; Baldwin and Dunkerton, 2001). According to Charlton-Perez et al. (2018) and Afargan-Gerstman and Domeisen (2020), the troposphere over the northern Atlantic ocean is more sensitive to the stratospheric state than that over the northern Pacific. According to Charlton-Perez et al.
(2018), it is especially the negative phase of the NAO that is sensitive to stratospheric variability, occurring in 1/3 of all cases after a weak polar stratospheric vortex but in only 5% of cases after a strong polar stratospheric vortex. This sensitivity of the NAO- phase to the occurrence of SSW events is confirmed by Domeisen (2019), who additionally states that 2/3 of the SSW events are followed by either a persistent NAO- phase or a change from NAO+ to NAO-, and 1/4 of the SSW events are followed by both. Although a non-negligible number of SSW events is followed by the negative phase of the NAO, it has to be kept in mind that less than 1/4 of the NAO- phases observed in winter are preceded by an SSW event (Domeisen, 2019). Besides a possible influence of SSW events, the negative phase of the NAO is also prone to the influence of the MJO, showing a significantly higher chance of an NAO- phase 10 days after the MJO is in phase 6 (Vitart et al., 2017). Lee et al. (2019) explain this teleconnection with an enhanced vertical heat flux and upward propagation of Rossby waves over the region of the MJO, which leads to a warmer stratosphere and a weaker stratospheric polar vortex in winter. This teleconnection from the MJO, in their study the MJO phase 7, to the NAO- phase via the stratosphere is especially likely during La Niña conditions. The NAO- regime can furthermore be directly influenced by a teleconnection between ENSO and the North-Atlantic-European region, which is strongest for moderate El Niño conditions.

### 2.4.4 European 2 Metre Temperatures

Especially over Europe and the North Atlantic ocean, a significant influence of SSW events on surface weather is observed (Domeisen et al., 2020).
European 2 metre temperatures depend on the phase of the NAO, which can be influenced by SSWs (Wang and Chen, 2010; Charlton-Perez et al., 2018). During the negative phase of the NAO in winter, North America, northern Eurasia and Siberia experience colder than usual 2 metre temperatures (Butler et al., 2015). The mid-latitude jet stream is shifted southward, leading to a movement of cold polar air into the mid-latitudes, a so-called „cold air outbreak“ (CAO) (Butler et al., 2015). Since the likelihood of occurrence of NAO- phases is increased after SSW events, an influence of SSW events on European surface temperatures, especially on wintertime cold waves, is suggested (Charlton-Perez et al., 2018). According to King et al. (2019), especially the cold extremes over Scandinavia are stronger in the 2 months after an SSW, while colder than usual mean 2 metre temperatures are found to be present also before the SSW event (Figure 2.7). They propose the hypothesis that the mean changes in weather patterns are small in the time around an SSW event but the likelihood of cold snaps is increased in the 2 months after the SSW event. Garfinkel et al. (2017) confirm the latter part of this hypothesis. They state that more cold snaps are observed when the polar vortex is in a weak state, for example during an SSW event, than in a strong state. In addition, the cold snaps observed during weak vortex states last up to 6 weeks longer than those observed during strong vortex states. The first part of the hypothesis by King et al. (2019) is confirmed by Lehtonen and Karpechko (2016). In their study they find that, especially for D-type SSW events, the mean 2 metre temperature anomalies over northern Eurasia are lower before the SSW event than in the month after it. They link the lower 2 metre temperature anomalies in the time before an SSW event to atmospheric blocking situations which modulate the upward propagating Rossby waves, possibly leading to the SSW event.
Blocking itself can also lead to cold waves in winter without an SSW event when the pattern is persistent for longer than a week (Woollings et al., 2018). Then, temperature and moisture anomalies can develop, leading to an increased number of days with colder and drier than usual conditions in the region of the blocking anticyclone (Woollings et al., 2018). According to Lehtonen and Karpechko (2016), this influence of blocking on the 2 metre temperatures is by far stronger than the influence of SSW events. It also has to be kept in mind that the NAO and the eventual European surface temperatures are strongly influenced by internal tropospheric variability, which can suppress the stratospheric influence (Tripathi et al., 2015; Domeisen et al., 2020).

Figure 2.7: **Comparison of Mean and Extreme European Daily Minimum Temperatures around SSW Events.** Figure 3 of King et al. (2019) (top row): „Monthly average daily minimum temperature before and after central dates of SSW events. Stippling indicates at least 75% of grid box anomalies across individual SSW events are of the same sign. Anomalies are calculated from a daily climatological average for 1979–2016 to remove the influence of the seasonal cycle.“ Figure 5 of King et al. (2019) (bottom row): „Average anomalous intensity of the coldest minimum temperatures before and after central dates of SSW events. Stippling indicates at least 75% of grid box anomalies across individual SSW events are of the same sign. Anomalies are calculated from a daily climatological average for 1979–2016 to remove the influence of the seasonal cycle.“

## 3 Data and Methods

### 3.1 ERA-Interim Reanalysis Data Set

For the description of the atmospheric state, the reanalysis data set of the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim, is used in this thesis. According to King et al. (2019), this data set is suitable for the description of the mean surface response to SSW events. Numerous previous studies, e.g.
by Afargan-Gerstman and Domeisen (2020), Charlton-Perez et al. (2018) or Karpechko et al. (2018), underline the suitability of the ERA-Interim reanalysis data set for the investigation of SSW events and their influence on tropospheric weather. Using the ERA-Interim reanalysis data set constrains the analysis to the period between 1 January 1979 and 31 August 2019. The winter 2019/2020 is therefore not investigated in this thesis. The horizontal resolution of the data used is $1.5^\circ \times 1.5^\circ$. In the vertical, 37 pressure levels are available between 1 and 1000 hPa. Except for the geopotential height, all required variables are directly available in the ERA-Interim data set. The geopotential height is calculated from the geopotential by dividing it by the gravitational acceleration. For this, the ECMWF recommends neglecting the latitudinal dependence of the gravitational acceleration and using the fixed value of $9.80665 \text{ ms}^{-2}$ for all latitudes (https://apps.ecmwf.int/codes/grib/param-db/?id=129, last viewed 2 September 2019). This is done in this thesis. For every variable, the daily mean of all available times is used unless stated otherwise.

### 3.2 Subseasonal To Seasonal Reforecast Data Set

The Subseasonal To Seasonal (S2S) data set is the extended-range ensemble forecast data set used in this thesis. According to Vitart et al. (2017), the S2S data set is suitable for investigating SSW events concerning their impact on the predictability of surface weather on the subseasonal to seasonal time-scale. Studies, e.g. by Kautz et al. (2020) or Karpechko et al. (2018), demonstrate this. The S2S data set consists of both near-realtime forecasts and reforecasts with lead times of up to 60 days from 11 operational forecast centers worldwide (Vitart et al., 2017; Vitart et al., 2012). In this thesis, only the data from the ECMWF is used.
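The geopotential-to-height conversion for ERA-Interim described in Section 3.1 can be sketched as follows (a minimal sketch; the fixed gravity value follows the ECMWF recommendation cited above):

```python
# Geopotential height (gpm) from ERA-Interim geopotential (m**2 s**-2).
# The latitudinal dependence of gravity is neglected, as recommended by
# the ECMWF for parameter 129.
G0 = 9.80665  # m s**-2

def geopotential_height(phi):
    """Convert geopotential phi to geopotential height."""
    return phi / G0
```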
Since near-realtime forecasts are only available from 2015 onwards, reforecasts are used in this thesis to increase the number of winters available for analysis. These reforecasts are initialized twice weekly with the ERA-Interim reanalysis as initial conditions and are computed for the same date of initialization for the last 20 years (https://confluence.ecmwf.int/display/S2S/ECMWF+Model+Description+CY46R1; last viewed 25 May 2020). ECMWF reforecasts are produced „on-the-fly“, always using the latest version of the Integrated Forecasting System (IFS) of the ECMWF for the computation (https://confluence.ecmwf.int/display/S2S/ECMWF+Model+Description+CY46R1; last viewed 25 May 2020). To take advantage of this, only model versions of the years 2019 and 2020 are used in this thesis. Therefore, the earliest winter investigated in this thesis is the winter 1999/2000. The reforecast verification is done with the ERA-Interim reanalysis data set (Kautz et al., 2020; Karpechko et al., 2018). For a better comparison between the S2S reforecasts and the ERA-Interim reanalysis, the same horizontal resolution as for the ERA-Interim reanalysis data is used for the S2S reforecasts, but in the vertical, only 10 pressure levels between 10 and 1000 hPa are available. All variables needed in this thesis, including the geopotential height, are directly available in the S2S data set, but with only one value per day. In contrast to the reanalysis data, the reforecast data on subseasonal to seasonal time scales can be affected by a non-negligible model error (Vitart et al., 2012). This is especially important when looking at extremes.

### 3.3 Calculation of Climatologies and Standard Deviations

Climatologies are needed to calculate anomalies from the mean state of the atmosphere. However, no unique recommendation can be found in the literature on how to calculate a climatology best suited for the given application of this thesis.
Therefore, three different types of climatologies are computed and compared with each other. For every climatology, all available data from ERA-Interim, reaching from 1 January 1979 to 31 August 2019, are used for the calculation. The simplest approach to calculating a climatology is to use a multi-year daily mean, called „daily climatology“ hereafter. Another possibility is the use of a running mean over a certain number of days. This ensures that sporadic and small-scale events do not have a strong influence on the climatology, which is especially important for short time series. According to Baldwin and Dunkerton (1999), a 10-day running mean is suitable to exclude synoptic-scale variations from the data. Therefore, the length of the running mean is set to 10 days. To save computing power, the daily climatology is computed first and the running mean is applied afterwards. To preserve the length of the time series, the necessary number of days from the daily climatology is pasted to the end and the beginning of the time series before applying the running mean. In the case of the S2S data, which have a maximum length of 46 days per reforecast, the missing values at the beginning and end of the time series are replaced manually by the nearest valid value. To minimize the missing values due to the running mean on the one hand, while still excluding small-scale variations on the other hand, a shorter window of 7 days for the running mean is tested as well. As a third possibility for calculating climatologies, blocks of 10 and 7 days are used. In this case, the mean over a fixed number of days is taken, so that every day within a block has the same value. Sporadic events and synoptic-scale variations are completely removed in this approach. The missing values are substituted in the same way as for the running-mean climatologies. For the calculation of standard deviations with ERA-Interim, only the time-step for 00 UTC is used to reduce the download volume.
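The running-mean climatology described above can be sketched as follows (a minimal sketch assuming a daily climatology stored as a NumPy array; the pasting of days to both ends of the series is interpreted here as periodic, wrap-around padding):

```python
import numpy as np

def smoothed_climatology(daily_clim: np.ndarray, window: int = 7) -> np.ndarray:
    """Centered running mean over a multi-year daily climatology.

    daily_clim -- multi-year daily mean, shape (365,) or (366,)
    window     -- running-mean length in days (7 is used in this thesis;
                  10 was tested as well)

    The necessary number of days is pasted to the beginning and the end of
    the series before averaging, so the smoothed climatology keeps its
    original length.
    """
    pad_l = window // 2
    pad_r = (window - 1) // 2
    padded = np.concatenate([daily_clim[-pad_l:], daily_clim, daily_clim[:pad_r]])
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")
```

For the S2S reforecasts, the padding values would instead be taken from the nearest valid value, as described in the text.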
The standard deviations are then calculated as a multi-year daily standard deviation over all available dates and averaged over the analyzed time period unless stated otherwise. For the calculation of the standard deviations with the S2S data, all perturbed ensemble members from the available reforecasts with the same initialization date are used. The daily standard deviation is calculated in a first step, and then the multi-year daily mean of the daily standard deviations is computed. Unless stated otherwise, the resulting standard deviations are averaged over the investigated time period in a last step.

### 3.3.1 Comparison of Different ERA-Interim Climatologies

To compare the different climatologies with each other, Pearson's correlation coefficient over the Euro-Atlantic sector, $30^\circ$N to $80^\circ$N and $80^\circ$W to $60^\circ$E, is computed for the 2 metre temperature and the mean sea level pressure as examples. The Euro-Atlantic sector is chosen as the reference area because it is an important region for the downward influence of SSW events on surface weather (Charlton-Perez et al., 2018). In all cases, the correlation between the climatologies is higher for the 2 metre temperature climatologies than for the mean sea level pressure climatologies. It is lowest for the comparison between the daily climatology and blocks of either 10 or 7 days (not shown). The differences between the daily climatology and the running mean climatologies of either 10 or 7 days are marginal, and smallest for the 10-day running mean climatology (Figure 3.1). The correlation coefficient exceeds 0.91 for both the 2 metre temperature climatologies and the mean sea level pressure climatologies. Comparing the running mean climatologies of 10 and 7 days, the correlation coefficients show values above 0.99 (not shown).
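The sector-wise comparison of two climatologies via Pearson's correlation coefficient can be sketched as follows (a minimal sketch; the field shapes and coordinate conventions are assumptions):

```python
import numpy as np

def pearson_over_sector(field_a, field_b, lats, lons,
                        lat_range=(30.0, 80.0), lon_range=(-80.0, 60.0)):
    """Pearson correlation of two 2-D fields over the Euro-Atlantic sector.

    field_a, field_b -- arrays of shape (nlat, nlon), e.g. two climatologies
                        of the 2 metre temperature on the same day
    lats, lons       -- 1-D coordinates in degrees; longitudes in [-180, 180]
    The default sector is 30-80N and 80W-60E, as used in the thesis.
    """
    in_lat = (lats >= lat_range[0]) & (lats <= lat_range[1])
    in_lon = (lons >= lon_range[0]) & (lons <= lon_range[1])
    a = field_a[np.ix_(in_lat, in_lon)].ravel()
    b = field_b[np.ix_(in_lat, in_lon)].ravel()
    return np.corrcoef(a, b)[0, 1]
```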
Keeping in mind that the 7-day running mean climatology has fewer missing values, which is especially important for the S2S reforecasts, the 7-day running mean climatology is used in this thesis. For the calculation of horizontal ERA-Interim climatologies, the daily mean over the four available times per day is used. Only for the climatology needed for the vertical profile of the normalized geopotential height anomalies is one time per day used, to reduce the download volume. The difference between a climatology based on four times per day and a climatology based on one time per day is calculated for 15 February as an example. With differences ranging between 1.5 gpm and 3.25 gpm, the use of only one time per day to calculate this specific climatology is justified.

### 3.3.2 Calculation of S2S Climatologies

In the case of the S2S data set, a separate climatology needs to be calculated for each initialization date. The climatology is computed from all perturbed reforecasts with the same initialization date of every available year. Due to the limited available data, the year which is investigated is excluded from the climatology. At first, the ensemble mean of every year is calculated. Then, the multi-year daily mean is computed and afterwards a 7-day running mean is applied to guarantee comparability with the ERA-Interim climatologies. As the reforecasts have a maximum length of 46 days, the missing values due to the running mean are substituted manually by pasting the first and last available values to the beginning and end of the climatology, respectively. This has to be kept in mind when analyzing the first and last three values of climatologies or anomalies.

### 3.4 Downward Propagation of Standardized Geopotential Height Anomalies

Standardized geopotential height anomalies are frequently used to show the downward propagation of stratospheric signals to the surface (e.g. Karpechko et al., 2018).
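The standardization used in this section can be sketched as follows; the area-weighted polar-cap mean is a common convention assumed here and not spelled out in the text:

```python
import numpy as np

def standardized_anomaly(field, clim, std):
    """Standardized anomaly: departure from the climatology in units of
    standard deviations, making events of different absolute magnitude
    directly comparable."""
    return (field - clim) / std

def polar_cap_mean(field, lats, lat_min=60.0):
    """Area-weighted mean over the polar cap (latitudes >= lat_min).

    field -- array with latitude as the last axis
    lats  -- 1-D latitudes in degrees
    Weighting by cos(latitude) accounts for the converging meridians.
    """
    cap = lats >= lat_min
    w = np.cos(np.deg2rad(lats[cap]))
    return (field[..., cap] * w).sum(axis=-1) / w.sum()
```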
Anomalies in wind and temperature are also visible in the geopotential height, making it a useful tool to show not only the downward propagation of signals but also their influence on a fixed pressure level. Usually, the geopotential height anomalies with respect to the temporal climatology are used to detect positive anomalies induced by SSW events (e.g. Figure 5.1). The standardization of these anomalies allows an easy comparison of the strength of different events. Wave structures can also be seen in the geopotential height and in its anomalies when using the anomalies from the zonal-mean geopotential height (e.g. Figure 5.2; Lim and Wallace, 1991). It has to be kept in mind that waves with different wavenumbers can be superposed, making a clear statement concerning upward or downward propagation difficult. In general, structures tilted westward with height indicate upward propagating baroclinic waves (Lim and Wallace, 1991). Downward propagating baroclinic waves accordingly feature structures tilted eastward with height. When the structures are not tilted with height, they show barotropic features (Lim and Wallace, 1991). For both barotropic and baroclinic wave structures, coupling between the troposphere and stratosphere is possible (Attard and Lang, 2019). The structures can also be identified when looking at the geopotential height together with, for example, the temperature (e.g. Figure 5.4). When the geopotential height isolines and the temperature isolines intersect each other, a baroclinic structure is present (Holton, 2010). Otherwise, the structure is barotropic. When looking at several pressure levels, vertical changes in baroclinic or barotropic structures can be determined, as well as the vertical tilt and twist of structures.

### 3.5 Sudden Stratospheric Warming Indices

SSW indices are defined in various ways, usually for the months November to March (Butler et al., 2015).
SSW events occurring during this period are sometimes also called „midwinter warmings“ in the literature (Butler et al., 2015). Depending on the definition used, the number of detected SSW events per year changes significantly (Butler et al., 2015). One of the most widely used definitions is based on the reversal of the zonal-mean 10 hPa zonal wind at 60°N (Butler et al., 2015; Charlton and Polvani, 2007). It has been argued, though, that it would be better to use a reference latitude of 65°N rather than 60°N, because the latter is located in the so-called surf zone, where local reversals of the zonal-mean zonal wind can occur due to wave breaking. Those wind reversals are not associated with the dynamics of the polar vortex. Butler et al. (2015) found that using 65°N instead of 60°N in the SSW index definition gives about 10% fewer events in the period from 1958 to 2015. Another wind-based definition uses the zonal-mean zonal wind averaged over the polar cap, 60°N to 90°N. With this SSW index, 30% more events are detected in the period from 1958 to 2015. All wind-based SSW index definitions call the first day on which the wind speed reaches 0 ms$^{-1}$ or becomes negative the central date of a major SSW (Karpechko et al., 2018; Charlton and Polvani, 2007). This implicitly defines which weak polar vortex events are classified as major SSW events. For the separation of two events, the zonal-mean zonal wind has to turn westerly again for at least 20 consecutive days (Butler et al., 2015; Charlton and Polvani, 2007). These 20 days correspond to two radiative damping time-scales at 10 hPa, leaving enough time for the polar vortex to recover through radiative processes (Charlton and Polvani, 2007). Wind reversals at the end of winter are not classified as SSW events and are called „final warmings“ instead (Butler et al., 2015; Charlton and Polvani, 2007).
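The wind-reversal criterion with the 20-day separation rule described above can be sketched as follows (a minimal sketch; the final-warming check near the end of winter is omitted):

```python
def ssw_central_dates(u10_60n, min_westerly_days=20):
    """Central dates of major SSW events from daily wind data.

    u10_60n -- sequence of daily zonal-mean zonal wind at 10 hPa and 60N
               (m/s) for one extended winter
    Returns the indices of the central dates: the first day on which the
    wind reaches 0 m/s or becomes easterly. A further event is only counted
    after the wind has returned to westerly for at least
    `min_westerly_days` consecutive days (Charlton and Polvani, 2007).
    """
    dates = []
    westerly_run = min_westerly_days  # allow a detection right at the start
    for day, wind in enumerate(u10_60n):
        if wind <= 0.0:
            if westerly_run >= min_westerly_days:
                dates.append(day)
            westerly_run = 0
        else:
            westerly_run += 1
    return dates
```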
A wind reversal is classified as a final warming when the zonal-mean zonal wind at 10 hPa does not return to westerly for 10 consecutive days before 30 April (Butler et al., 2015; Charlton and Polvani, 2007). Final warmings occur at the end of every winter due to the seasonal reversal of the zonal wind, caused by the increasing radiative heating of the rising sun (Butler et al., 2015). In addition to the purely wind-based indices, a combination of the meridional temperature gradient, averaged over the polar cap, and the reversal of the zonal-mean 10 hPa zonal wind at 60°N is calculated in this thesis (Butler et al., 2015; Yu et al., 2018). A major SSW event is detected when the wind reverses to easterlies and the meridional temperature gradient turns negative within 10 consecutive days as well. According to Charlton and Polvani (2007), the inclusion of the meridional temperature gradient in the definition of the SSW index makes little difference in the number of detected SSWs. The fifth index computed in this thesis is a temperature-based SSW index. It detects a major SSW event when the temperature between 100 hPa and 10 hPa at any grid point northward of 60°N increases by more than 40 K within one week. This index does not differentiate between SSWs and final warmings (Butler et al., 2015).

### 3.5.1 Comparison of Sudden Stratospheric Warming Indices for the Winters of 1999/2000 to 2018/2019

The SSW index by Charlton and Polvani (2007) (CP07) detects 17 SSW events in 14 out of the 20 winters investigated in this thesis (Table 3.1). Twelve of those SSWs are also detected by the combined wind- and temperature-based SSW index (U&T). The modification of the CP07 index which uses $65^\circ$N instead of $60^\circ$N as reference latitude (U65) detects 20 events in the same 14 winters. The central dates of the SSWs detected by the two wind-based indices vary by up to 8 days.
This increase in the number of SSW events when using $65^\circ$N as reference latitude instead of $60^\circ$N is contrary to the findings of Butler et al. (2015). A possible explanation for this discrepancy is the different time period considered. The third wind-based SSW index, which uses the meridional mean between $60^\circ$N and $90^\circ$N as reference (U6090), detects 26 SSW events in 18 of the 20 available winters (Table 3.1). The increase in SSW events detected by this index in comparison to CP07 is consistent with the findings of Butler et al. (2015). The purely temperature-based SSW index (TMP) does not detect any SSWs in the winters of 1999/2000 to 2018/2019 (Table 3.1). Therefore, this index is not used to detect SSW events in this thesis. The U&T index is also excluded because it is more computing-intensive than the purely wind-based indices and does not provide necessary additional information. Concerning the wind-based indices, all three are taken into consideration, with U65 used for the detailed analysis (e.g. Figure 5.3). It is calculated for the ERA-Interim data using all available times per day to include important daily fluctuations in the 10 hPa zonal-mean zonal wind. The classification of SSW events is done analogously to Charlton and Polvani (2007), but taking the period from 13 days before to 18 days after the central date of the SSW obtained by U65 into consideration. This is done to account for the maximum variation of the central dates obtained by CP07 and U65.

Table 3.1: Detection of SSW Events by Different SSW Indices for the Winters 1999/2000 to 2018/2019. The SSW index by Charlton and Polvani (2007) ("CP07") is compared to its modified versions regarding the reference latitude ("U65" and "U6090") and its combination with a meridional temperature gradient ("U&T"). Furthermore, a purely temperature-based index ("TMP") is used. If an SSW event is detected, the central date of this event is given; otherwise there is a dash.
| Winter | CP07 | U&T | U65 | U6090 | TMP |
|------------|------------|------------|------------|------------|------------|
| 1999/2000 | 20 Mar | - | 20 Mar | 20 Mar | - |
| 2000/2001 | 11 Feb | √ | 23 Nov, 3 Feb | 22 Nov, 2 Feb | - |
| 2001/2002 | 30 Dec, 17 Feb | √, - | 29 Dec, 16 Feb | 29 Dec, 16 Feb | - |
| 2002/2003 | 18 Jan | √ | 17 Jan, 17 Feb | 16 Jan, 17 Feb | - |
| 2003/2004 | 4 Jan | √ | 3 Jan | 29 Dec | - |
| 2004/2005 | - | - | - | 12 Mar | - |
| 2005/2006 | 20 Jan | √ | 21 Jan | 12 Jan | - |
| 2006/2007 | 24 Feb | √ | 23 Feb | 22 Feb | - |
| 2007/2008 | 22 Feb | √ | 22 Feb | 22 Feb | - |
| 2008/2009 | 24 Jan | √ | 24 Jan | 24 Jan | - |
| 2009/2010 | 28 Jan, 23 Mar | √, √ | 25 Jan | 23 Jan, 20 Mar | - |
| 2010/2011 | - | - | - | - | - |
| 2011/2012 | - | - | - | 14 Jan, 14 Feb | - |
| 2012/2013 | 6 Jan | - | 6 Jan | 6 Jan | - |
| 2013/2014 | - | - | - | 4 Feb | - |
| 2014/2015 | - | - | - | 3 Jan, 2 Feb | - |
| 2015/2016 | - | - | - | - | - |
| 2016/2017 | 2 Feb | √ | 24 Nov, 1 Feb, 24 Feb | 24 Nov, 31 Jan, 24 Feb | - |
| 2017/2018 | 11 Feb, 19 Mar | √, - | 11 Feb, 22 Mar | 11 Feb | - |
| 2018/2019 | 1 Jan | - | 31 Dec | 29 Dec | - |

### 3.6 Blocking Index

In the literature, many approaches to defining blocking indices are found. Rex (1950) proposes a subjective criterion based on 500 hPa geopotential height anomaly maxima, and Dole (1978) extends this criterion by adding a mandatory persistence of the pattern. Okland and Lejenäs (1987) also focus on blocking persistence and define a climatological probability that a specific blocking pattern lasts for at least a certain number of days. These subjective blocking indices are compared by Liu (1994). An objective blocking index based on the meridional gradient of the 500 hPa geopotential height field is developed by Tibaldi and Molteni (1990).
They calculate a northern meridional gradient, GHGN, between 60°N and 80°N and a southern gradient, GHGS, between 60°N and 40°N: \[ GHGN = \frac{Z(\Phi_n) - Z(\Phi_0)}{\Phi_n - \Phi_0}, \] \[ GHGS = \frac{Z(\Phi_0) - Z(\Phi_s)}{\Phi_0 - \Phi_s}, \] where \(Z\) is the 500 hPa geopotential height at a specific latitude \(\Phi\). The latitude \(\Phi\) is varied as follows: \[ \Phi_n = 80^\circ N + \Delta, \quad \Phi_0 = 60^\circ N + \Delta, \quad \Phi_s = 40^\circ N + \Delta, \quad \Delta = -4^\circ, \ 0^\circ \text{ or } 4^\circ. \] Here, \(\Phi_0\) is called the „central latitude“. For the occurrence of blocking, GHGN needs to be smaller than $-10$ m per degree latitude and GHGS needs to be greater than 0. For the detection of blocking, it is sufficient if GHGS and GHGN fulfill their criteria for one central latitude \(\Phi_0\). Therefore, the maximal values of GHGS (which needs to be $> 0$) and the minimal values of GHGN (which needs to be $< -10$ m per degree latitude) are computed for each longitude. If blocking is detected, GHGS, which is a proxy for the blocking strength, is shown for the ERA-Interim reanalysis in a Hovmöller plot with the 500 hPa geopotential height field in the background (e.g. Figure 5.6). A region is considered blocked if 3 adjacent grid points are blocked. A 2-dimensional extension of the index by Tibaldi and Molteni (1990) is developed by Scherrer et al. (2006). This index is computed in a similar way with GHGS and GHGN, but the central latitude varies from 35°N to 75°N. In this thesis, the central latitude is varied between 34.5°N and 75°N with a latitude step of 1.5° due to the model resolution. This is also done by Quinting and Vitart (2019). \(\Delta\) is held constant at $0^\circ$: \[ \Phi_n = \Phi_0 + 15^\circ \] \[ \Phi_s = \Phi_0 - 15^\circ \] \[ \Phi_0 = 35^\circ N \text{ to } 75^\circ N \text{ in } 1.5^\circ \text{ steps}, \ \Delta = 0^\circ.
\] In this approach, a field of the blocking index is calculated, and not only a single value per latitude as done by Tibaldi and Molteni (1990). If both GHGS and GHGN fulfill their criteria, blocking is detected and GHGS is plotted. For the ERA-Interim reanalysis data set, these plots also show the 500 hPa geopotential height field in the background to illustrate ridges and troughs and therefore the type of blocking (e.g. Figure 5.7). Scherrer et al. (2006) additionally include a 5-day persistence criterion, which is not used in this thesis because the persistence of the blocking pattern can easily be detected in the Hovmöller plot showing the blocking index by Tibaldi and Molteni (1990) and then applied to the 2-dimensional extension by Scherrer et al. (2006). For the S2S data, the 500 hPa geopotential height anomalies, averaged between 70°W and 30°E as well as 40-80°N, are shown instead of the Hovmöller diagrams (e.g. Figure 6.12). This is done because Hovmöller diagrams are disadvantageous when the ensemble spread is large, such as at the end of the reforecasts' lead time. Then, the overlap of the different GHGS values of the ensemble members makes it difficult to determine single blocking patterns. For the same reason, in the 2-dimensional 500 hPa geopotential height field only the 5600 gpm isoline is drawn for the ensemble members and the ERA-Interim reanalysis (e.g. Figure 6.14). Other blocking indices found in the literature are based on the potential temperature. These indices include wave breaking in their criteria, which is an important factor of blocking (Pelly and Hoskins, 2003). As the potential temperature is only available in the ERA-Interim data set but not in the S2S data set, indices based on potential temperature are not used in this thesis.

### 3.7 Position of the Mid-Latitude Jet Stream

For the position of the jet stream, the 850 hPa zonal-mean zonal wind is displayed in a Hovmöller diagram with latitude and time on its axes (e.g. Figure 5.8).
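The gradient criteria of the Tibaldi and Molteni (1990) index from Section 3.6 can be sketched as follows (a minimal sketch; snapping the reference latitudes to the nearest grid point is an implementation choice not specified in the text):

```python
import numpy as np

def tibaldi_molteni_blocking(z500, lats, lons):
    """Instantaneous blocking detection after Tibaldi and Molteni (1990).

    z500 -- 500 hPa geopotential height in gpm, shape (nlat, nlon)
    lats, lons -- 1-D coordinate arrays in degrees
    Returns a boolean array (blocked per longitude) and the maximal GHGS,
    a proxy for the blocking strength. Blocking is detected when, for any
    central latitude 60N + delta (delta = -4, 0 or +4 deg), GHGS > 0 and
    GHGN < -10 m per degree latitude.
    """
    def z_at(lat):
        # nearest-grid-point lookup of a latitude circle
        return z500[np.argmin(np.abs(lats - lat)), :]

    blocked = np.zeros(lons.size, dtype=bool)
    ghgs_max = np.full(lons.size, -np.inf)
    for delta in (-4.0, 0.0, 4.0):
        phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
        ghgn = (z_at(phi_n) - z_at(phi_0)) / (phi_n - phi_0)
        ghgs = (z_at(phi_0) - z_at(phi_s)) / (phi_0 - phi_s)
        blocked |= (ghgs > 0.0) & (ghgn < -10.0)
        ghgs_max = np.maximum(ghgs_max, ghgs)
    return blocked, ghgs_max
```

A region would then be considered blocked only if 3 adjacent longitudes are blocked, as stated in Section 3.6.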
The wind maxima, which indicate the position of the jet stream, are additionally marked with lines. To filter out synoptic-scale events such as cyclones, a Lanczos filter with a 61-day moving window and a cutoff frequency of 1/(10 days) is applied (Woollings et al., 2010; Duchon, 1979). The truncated weight function of the filter can be written as: $$w(k) = \sigma_1 \cdot \sigma, \quad w(k) = w(-k), \quad w(0) = 2 \cdot f_c, \quad k \in [-n, n],$$ where $w(k)$ is the truncated weight function with $k$ being the step, $f_c$ the cutoff frequency and $\sigma$ the so-called $\sigma$-factor developed by Lanczos (Duchon, 1979). The weight function is symmetric around $k = 0$. The total number of steps $n$ is computed depending on the length of the moving window: $$n = \frac{\text{length of moving window} - 1}{2} + 1.$$ It is half the length of the moving window, creating a symmetric weight function which is essentially a centered running mean multiplied by the factor $\sigma$. This $\sigma$-factor depends on the step and the total number of steps: $$\sigma = \frac{\sin \left( \frac{\pi k}{n} \right) \cdot n}{\pi k}.$$ It is a sinc function, showing values $\neq 0$ only inside the window between the negative and positive cutoff frequency. The first factor $\sigma_1$ is given by: $$\sigma_1 = \frac{\sin (2\pi k \cdot f_c)}{\pi k}.$$ This Lanczos filter is basically a 10-day lowpass filter combined with a 61-day running mean. This leads to 31 days at the beginning and end of the used time series which are affected by boundary effects and are therefore uncertain. The same filtering method is applied to the 850 hPa zonal-mean zonal wind climatology to determine the position of the jet stream relative to its climatological mean.

### 3.8 North Atlantic Oscillation Indices

There is no unique definition of the NAO index in the literature, but there is a sign convention.
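The Lanczos filter weights from Section 3.7 can be sketched as follows (a minimal sketch; normalizing the weights to sum to one, so that the filter preserves the mean, is an implementation choice not stated in the text):

```python
import numpy as np

def lanczos_lowpass_weights(window: int, cutoff: float) -> np.ndarray:
    """Symmetric Lanczos low-pass filter weights (Duchon, 1979).

    window -- total length of the moving window in days (odd, e.g. 61)
    cutoff -- cutoff frequency in 1/days (e.g. 1/10 for a 10-day low pass)
    """
    n = (window - 1) // 2                 # half-width: weights at k = -n..n
    k = np.arange(-n, n + 1, dtype=float)
    w = np.empty(window)
    w[n] = 2.0 * cutoff                   # central weight w(0)
    kk = k[k != 0.0]
    sigma1 = np.sin(2.0 * np.pi * cutoff * kk) / (np.pi * kk)  # ideal low pass
    sigma = np.sin(np.pi * kk / n) / (np.pi * kk / n)          # sigma factor
    w[k != 0.0] = sigma1 * sigma
    return w / w.sum()                    # normalize to preserve the mean

weights = lanczos_lowpass_weights(61, 1.0 / 10.0)
```

The filtered wind would then be the convolution of the daily 850 hPa zonal-mean zonal wind with these weights; the outermost `window // 2` days remain affected by boundary effects.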
A positive index corresponds to a strengthened pressure difference between the Icelandic low and the Azores high, a negative index to a weakened one (Blessing et al., 2005). According to Leckebusch et al. (2008), the traditional way to calculate the NAO index is to use the difference of standardized mean sea level pressure anomalies between Lisbon and a station on Iceland, typically Stykkishólmur or Reykjavik. This is also called the Lisbon-Iceland Index. To take the movement of the NAO centers into consideration, the EU Index uses two latitude circles, one at $35^\circ$N and one at $65^\circ$N, which are averaged over the sector between $20^\circ$W and $40^\circ$E. The difference of the standardized mean sea level pressure anomalies of the southern and northern circle is then called the EU Index. One possible issue with this approach is the choice of the longitude sector, which lies to a great extent over continental Europe, whereas the NAO itself is defined over the northern Atlantic ocean basin. The Zonal Index is defined in a region closer to the North Atlantic ocean by using a longitudinal sector between $0^\circ$W and $40^\circ$W. It also uses latitudinal sectors, creating two regions in which the mean sea level pressure anomalies are averaged separately and standardized. The southern area is located between $35^\circ$N and $50^\circ$N, the northern area between $55^\circ$N and $70^\circ$N. The difference between the southern and the northern area yields the Zonal Index. Instead of standardized mean sea level pressure anomalies, standardized geopotential height anomalies can also be used to calculate the NAO index at different heights (Jung et al., 2011). This approach is tested for the standardized 500 hPa geopotential height anomalies and the regions defined by the Zonal Index.
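The Zonal Index construction described above can be sketched as follows. This is a minimal sketch under stated assumptions: longitudes are given as negative degrees east for 40°W-0°, the area averages ignore cos-latitude weighting, and the multi-year standard deviations used for the standardization are assumed to be precomputed; function and variable names are illustrative.

```python
import numpy as np

def zonal_nao_index(mslp_anom, lats, lons, std_south, std_north):
    """Zonal NAO Index sketch: difference of standardized mean sea level
    pressure anomalies between a southern (35-50 N) and a northern
    (55-70 N) box over 40 W-0.

    mslp_anom : (time, lat, lon) anomaly array [hPa]
    lats, lons: 1-D coordinate arrays [deg N, deg E]
    std_south, std_north: multi-year standard deviations of the box means."""
    lon_mask = (lons >= -40.0) & (lons <= 0.0)
    south_mask = (lats >= 35.0) & (lats <= 50.0)
    north_mask = (lats >= 55.0) & (lats <= 70.0)
    # box means per time step (unweighted for brevity)
    south = mslp_anom[:, south_mask][:, :, lon_mask].mean(axis=(1, 2))
    north = mslp_anom[:, north_mask][:, :, lon_mask].mean(axis=(1, 2))
    # positive index: anomalously high pressure in the south, low in the north
    return south / std_south - north / std_north
```

The same routine applies unchanged to 500 hPa geopotential height anomalies, which yields the height-based variant of the Zonal Index tested above.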
Besides the gridpoint-based indices, indices based on empirical orthogonal functions are used frequently (Jia et al., 2007). According to Jia et al. (2007), however, gridpoint-based indices represent the difference between the two NAO phases better than indices based on empirical orthogonal functions. Therefore, only the former are used in this thesis.

### 3.8.1 Comparison of North Atlantic Oscillation Indices

The comparison of the different definitions of the NAO index is carried out for the winter 2017/2018 as an example. The differences between the various NAO indices are found to be non-negligible and sometimes even differ in sign (Figure 3.2 left). As the Lisbon-Iceland Index does not consider movements of the NAO centers, the other indices are preferred (https://climatedataguide.ucar.edu/climate-data/hurrell-north-atlantic-oscillation-nao-index-station-based, last viewed 26 August 2019). For a similar reason, the Zonal Index is preferred over the EU Index, as the latter only considers longitudinal movements of the NAO centers. Except for the magnitude, the surface-based Zonal Index and the 500 hPa geopotential height based Zonal Index show a similar behaviour (Figure 3.2 left and right). To include the analysis of dynamics at the surface, the mean sea level pressure based Zonal Index is used in this thesis. The standardization of the index is done with the multi-year temporal standard deviation of the mean sea level pressure. As daily values of the NAO index are computed to show the influence of the SSW events on the pattern, the results are sensitive to the climatology used. The difference between the Zonal Index calculated with a daily climatology and with a 7-day running mean climatology varies within ±0.25 standard deviations from November to May (not shown). For consistency, the 7-day running mean climatology is used throughout this thesis to calculate the Zonal Index.
To exclude synoptic-scale events of the considered winter, a 7-day running mean of the Zonal Index is calculated afterwards and plotted together with the daily NAO index (e.g. Figure 5.9; e.g. Baldwin and Dunkerton, 1999).

**Figure 3.2:** **Comparison of Different NAO Indices.** Comparison between the mean sea level pressure based Lisbon-Iceland, EU and Zonal Index computed with a 7-day running mean climatology (left) and the Zonal Index in 500 hPa geopotential height, also computed with a 7-day running mean climatology (right), for the winter 2017/2018. The green and the black dashed line, respectively, show the central date of the first major SSW in this winter; the gray dashed line shows the date of the second major SSW.

### 3.9 Definition of Cold Waves

In the literature, there is no unique definition of cold waves. Garfinkel et al. (2017) define a cold snap as days on which the 2 metre temperature anomaly lies more than 1 K below the climatological value. Although there is no persistence criterion for the cold wave itself, events are clearly separated by demanding at least 3 consecutive days between two events. This definition is adapted for the ERA-Interim reanalysis and the S2S reforecast data and calculated for the European mean as well as for regional means of different European regions (Figure 3.3). In place of a duration criterion, a 7-day running mean of the daily anomalies is calculated. The days which fulfill the criterion for cold waves are marked (e.g. Figure 4.11). In this thesis, the term "European cold wave" describes a period of consecutive days which fulfill the criterion for cold waves in the European mean. This description is adapted for the different European regions. In addition to this definition, the definition of cold waves by Smid et al.
(2019) is used for the ERA-Interim reanalysis. This definition is based on the 2 metre daily minimum temperature and defines a cold snap as those days which are colder than the 10th percentile of the climatology. The climatology is calculated for the period 1999-2019 with a 31-day moving window. The minimum duration of a cold wave is 3 consecutive days. The definition by Smid et al. (2019) is also calculated for the European mean and the different European regions (Figure 3.3). The term "European cold wave" is used in the same way as for the definition of cold waves based on the 7-day running mean of the 2 metre temperature anomalies.

Figure 3.3: **Regions for the Detection of Cold Waves.** The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N. The anomalies are averaged for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N.

### 3.10 Selection of Case Studies

The selection of case studies is done subjectively, with the aim of showing the high case-to-case variability of the SSW events themselves and their possible influence on European cold waves on the sub-seasonal to seasonal time scale. As a first case study, the winter 2008/2009 is selected. This winter features the strongest and longest-lasting SSW event of the past 20 years (Table 3.2). Easterly winds reach values up to $-36\,\text{ms}^{-1}$ and a total duration of 34 days in the middle stratosphere.
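The two cold-wave criteria of Section 3.9 can be sketched as follows. This is a minimal sketch, not the thesis code: the first function implements the running-mean variant of the Garfinkel et al. (2017)-style criterion, the second the minimum-duration grouping used by Smid et al. (2019); function names and the synthetic input in the usage are illustrative.

```python
import numpy as np

def cold_wave_days(t2m_anom, thresh=-1.0, window=7):
    """Flag days whose running-mean 2 m temperature anomaly lies more
    than 1 K below climatology (the 7-day running mean replaces an
    explicit duration criterion)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(t2m_anom, kernel, mode="same")
    return smoothed < thresh

def cold_waves_min_duration(is_cold, min_days=3):
    """Group flagged days into events lasting at least 'min_days'
    consecutive days; returns (start, end) index pairs."""
    events, start = [], None
    for i, cold in enumerate(is_cold):
        if cold and start is None:
            start = i                        # event begins
        elif not cold and start is not None:
            if i - start >= min_days:        # keep only long enough events
                events.append((start, i - 1))
            start = None
    if start is not None and len(is_cold) - start >= min_days:
        events.append((start, len(is_cold) - 1))
    return events
```

Applied to an area-mean anomaly series of one of the regions in Figure 3.3, the first function reproduces the marked cold-wave days, while the second returns the discrete events with the 3-day minimum duration.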
According to Afargan-Gerstman and Domeisen (2020), this SSW does not influence surface weather over Europe. Thus, the SSW event should not have an effect on the predictability of European cold waves, and the winter 2008/2009 is only analyzed with the ERA-Interim reanalysis. The winter 2009/2010 is selected as a second case study. This winter is appealing because, besides a strong and long-lasting SSW event in January 2010, it features another reversal of the 10 hPa zonal-mean zonal wind at the end of March 2010. The U65 index already classifies this wind reversal as the final warming, whereas the CP07 and U6090 indices detect a second SSW here (Table 3.1). The SSW detected by all indices features maximum easterly winds of up to $-20\,\text{ms}^{-1}$ which last 32 days in the middle stratosphere. According to Jung et al. (2011) and Santos et al. (2013), the SSW plays only a minor role in the maintenance of the subsequent surface weather pattern. Therefore, this winter is also only analyzed with the ERA-Interim reanalysis. As a third case study, the winter 2000/2001 is selected. It features two very different SSW events (Table 3.2). The first SSW is a rather weak and short-lasting D-type warming at the end of November. Easterly winds with a maximum amplitude of $-4\,\text{ms}^{-1}$ are present for 4 days in the middle stratosphere. The second SSW is an S-type warming at the beginning of February. It features maximum easterly winds of $-16\,\text{ms}^{-1}$ and a duration of easterly winds of 20 days in the middle stratosphere. To the knowledge of the author, the first SSW of the winter 2000/2001 has not been analyzed in studies concerning its impact on surface weather. A reason for this might be that it is not detected by the often-used SSW index by Charlton and Polvani (2007), which uses the reversal of the 10 hPa zonal-mean zonal wind at 60°N as a measure of major SSWs.
This makes the first SSW event of the winter 2000/2001 suitable for a detailed analysis with the S2S reforecasts in addition to the ERA-Interim reanalysis.

Table 3.2: Features of the Major SSW Events of the Winters 1999/2000 to 2018/2019. The reversal of the 10 hPa zonal-mean zonal wind at 65°N is used for the detection of events (Butler et al., 2015). The case studies selected for further analysis are printed in bold.

| Winter | Central Date SSW | Type | Max. Easterly Wind Speed | Duration Easterlies |
|------------|------------------|------|--------------------------|--------------------|
| 1999/2000 | 20 Mar | D | -11 ms$^{-1}$ | 2 d |
| **2000/2001** | 23 Nov | D | -3 ms$^{-1}$ | 4 d |
| **2000/2001** | 3 Feb | S | -16 ms$^{-1}$ | 20 d |
| 2001/2002 | 29 Dec | D | -5 ms$^{-1}$ | 18 d |
| 2001/2002 | 16 Feb | D | -5 ms$^{-1}$ | 4 d |
| 2002/2003 | 17 Jan | S | -3 ms$^{-1}$ | 2 d |
| 2002/2003 | 17 Feb | S | -5 ms$^{-1}$ | 20 d |
| 2003/2004 | 3 Jan | D | -12 ms$^{-1}$ | 33 d |
| 2004/2005 | - | - | - | - |
| 2005/2006 | 21 Jan | D | -28 ms$^{-1}$ | 27 d |
| 2006/2007 | 23 Feb | D | -18 ms$^{-1}$ | 5 d |
| 2007/2008 | 22 Feb | D | -21 ms$^{-1}$ | 33 d |
| **2008/2009** | 24 Jan | S | -36 ms$^{-1}$ | 34 d |
| **2009/2010** | 25 Jan | S | -20 ms$^{-1}$ | 32 d |
| 2010/2011 | - | - | - | - |
| 2011/2012 | - | - | - | - |
| 2012/2013 | 6 Jan | S | -21 ms$^{-1}$ | 23 d |
| 2013/2014 | - | - | - | - |
| 2014/2015 | - | - | - | - |
| 2015/2016 | - | - | - | - |
| 2016/2017 | 24 Nov | D | -2 ms$^{-1}$ | 1 d |
| 2016/2017 | 1 Feb | D | -5 ms$^{-1}$ | 1 d |
| 2016/2017 | 24 Feb | D | -5 ms$^{-1}$ | 19 d |
| **2017/2018** | 11 Feb | S | -34 ms$^{-1}$ | 16 d |
| 2017/2018 | 22 Mar | D | -6 ms$^{-1}$ | 8 d |
| 2018/2019 | 31 Dec | S | -15 ms$^{-1}$ | 26 d |

### 3.10.1 Selection of S2S Reforecasts and Representative Members

The selection of reforecasts for the analysis of the first SSW event of the winter 2000/2001 is based on the SSW index, which is only defined between November and March.
Therefore, the earliest useful initialization date of the S2S reforecasts is 31 October 2000. At this date, 1 ensemble member predicts the central date of the first SSW of the winter 2000/2001 correctly, 1 member predicts it too early and 9 members show only westerly winds (Table 3.3). The SSW event is considered to be predicted correctly when the ensemble member predicts the reversal of the 10 hPa zonal-mean zonal wind within a range of 3 days around the central date of the SSW event obtained from the ERA-Interim reanalysis (Karpechko et al., 2018). At this early initialization time, the correct prediction of the SSW event could be a coincidence, but the reforecast initialized 4 days later also features 1 member which predicts the SSW's central date correctly. This might be an indicator of an early predictability of the SSW. The reforecast initialized on 31 October 2000 is therefore subject to further analysis. So is the reforecast initialized on 7 November 2000, which features 4 members predicting the SSW correctly and 5 members which do not predict easterly winds at all (Table 3.3). This case looks very promising for detecting differences between reforecasts with and without SSWs. The reforecasts initialized after 7 November 2000 show an increasing number of ensemble members which predict the SSW correctly. For the first time on 18 November 2000, 5 days prior to the central date of the SSW, all ensemble members predict the SSW correctly. The closest initialization date after the central date of the SSW is 25 November 2000. This reforecast is not only the closest to the central date of the SSW, but also the last one in which all members are initialized with easterly winds (Table 3.3). Thus, this reforecast is also investigated further. For the selected reforecasts which are initialized prior to the central date of the SSW, the representative members are chosen based on the SSW index. From all members which predict the SSW correctly, the ensemble mean is calculated.
This is also done for all members which do not show easterlies at all during the reforecast period. For every cluster, the representative member is the one which shows the smallest root mean square error with respect to the ensemble mean. Ensemble members which predict the SSW too early or too late are excluded. This guarantees a clear distinction between members being influenced by the SSW event and those which are not. For the selected reforecast initialized after the central date of the SSW, the choice of the representative members is based on the 100 hPa standardized geopotential height anomalies. The ensemble member closest to the ensemble mean of all members showing only values >0.5 standard deviations from mid-December onwards is selected as the representative member with prevailing standardized geopotential height anomalies >0.5 standard deviations. As it follows the ERA-Interim reanalysis in sign, it is also called the member with the correct prediction of the atmospheric state. It has to be kept in mind, though, that this applies only to the sign and not to the magnitude of the anomalies. The ensemble member closest to the ensemble mean of all members showing only values <0.5 standard deviations from the beginning of December onwards is selected as the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviations.

Table 3.3: Selection of Reforecasts for the Analysis of the Winter 2000/2001. The table lists the number of ensemble members which predict the central date correctly ("Central Date"), too early or too late ("Earlier or Later Easterlies") or not at all ("Only Westerlies"). The reforecasts which are analyzed further are printed in bold and marked by a tick.
| Initialization | Selected | Central Date | Earlier or Later Easterlies | Only Westerlies |
|----------------------|----------|--------------|-----------------------------|-----------------|
| **31 Oct 2000** | ✓ | 1 | 1 | 9 |
| 4 Nov 2000 | - | 1 | 1 | 9 |
| **7 Nov 2000** | ✓ | 4 | 2 | 5 |
| 11 Nov 2000 | - | 8 | 2 | 1 |
| 14 Nov 2000 | - | 10 | 0 | 1 |
| 18 Nov 2000 | - | 11 | 0 | 0 |
| 21 Nov 2000 | - | 11 | 0 | 0 |
| **25 Nov 2000** | ✓ | 11 | 0 | 0 |
| 28 Nov 2000 | - | 0 | 0 | 11 |

## 4 Winter 2008/2009

### 4.1 Troposphere-Stratosphere Coupling

The winter 2008/2009 shows a strongly positive normalized geopotential height anomaly of up to 3.0 standard deviations at the stratopause between mid-January and mid-February 2009 (Figure 4.1). This is the only time when normalized geopotential height anomalies >1.0 standard deviation are present at the stratopause. Especially striking is the sharp gradient of geopotential height anomalies at the beginning of this structure, indicating a sudden change of the stratospheric circulation at that time (Figure 4.1). The positive geopotential height anomalies show values >1.0 standard deviation in the stratosphere during the whole time, but only between 11 and 16 February 2009 are positive anomalies found continuously from the stratopause to the surface (Figure 4.1). When looking at the deviation of the geopotential height from the zonal mean at the same time, a structure of continuously positive geopotential height anomalies in the troposphere and stratosphere is found which is strongly tilted westward with height in the stratosphere (Figure 4.2 bottom). This indicates the upward propagation of tropospheric baroclinic waves into the stratosphere (Lim and Wallace, 1991). In the troposphere, the structure is not tilted with height, indicating a barotropic state.
Besides this prominent structure of positive geopotential height anomalies over the North Atlantic-European sector, another structure is visible in the troposphere over North America (Figure 4.2 bottom). This structure, also tilted westward with height, is likewise associated with an upward propagation of tropospheric baroclinic waves (Lim and Wallace, 1991). This leads to a wavenumber 2 flow in the troposphere, whereas a wavenumber 1 circulation is present in the stratosphere. The same circulation pattern is present between 5 and 6 November 2008 (Figure 4.2 top). The wavenumber 2 flow in the troposphere is characterized by a barotropic structure with positive geopotential height anomalies over the North Atlantic-European sector and another barotropic structure over North America. In the stratosphere, only the latter structure is visible, showing a slight westward tilt with height. This indicates the upward propagation of baroclinic tropospheric waves (Lim and Wallace, 1991). Polar-cap averaged positive normalized geopotential height anomalies >1.0 standard deviation are visible up to heights of 2.5 hPa during this time (Figure 4.1).

Figure 4.1: **Vertical Profile of the Polar-Cap Averaged Normalized Geopotential Height Anomalies during the Winter 2008/2009 based on ERA-Interim.** The green structure starting at 1 hPa is an indicator of a possible SSW event.

Figure 4.2: **Normalized Geopotential Height Deviations from the Zonal Mean in the Winter 2008/2009 based on ERA-Interim.** The top plot is averaged over the period of positive polar-cap averaged normalized geopotential height anomalies at the surface, the bottom plot around the time of the largest positive normalized geopotential height anomalies at the surface associated with the SSW event.
### 4.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere

As the presence of normalized geopotential height anomalies >1.0 standard deviation at the stratopause already suggests, there is only one SSW event occurring in the winter 2008/2009 (Figure 4.1 and 4.3). This event is detected by all three wind-based SSW indices with its central date on 24 January 2009 (Table 3.1). Until roughly 1 month before the SSW event, the 10 hPa zonal-mean zonal wind varies between approximately $20\ \text{ms}^{-1}$ and $40\ \text{ms}^{-1}$ while the polar-cap averaged 10 hPa temperature is steadily below 215 K (Figure 4.3). The polar vortex is stable during this time, with the exception of an elongation on 8 December 2008 (Figure 4.4 left column, top). This may be caused by an anomalously strong wavenumber 1 tropospheric wave forcing observed at the same time (Manney et al., 2009). At 10 hPa, the strong polar vortex is slightly elongated and shows a split of the 28750 gpm isoline. At 30 hPa and 50 hPa, only less pronounced separations of the geopotential height contours are seen (Figure 4.4 left column, middle and bottom). The temperatures are down to 190 K at all three heights and the polar vortex is in a slightly baroclinic state. Interestingly, polar-cap averaged normalized geopotential height anomalies >1.0 standard deviation are not found in the stratosphere at that time (Figure 4.1). From mid-December 2008 onwards, the stratospheric polar night jet accelerates until 11 January 2009, when it reaches its winter maximum of 68 ms$^{-1}$ (Figure 4.3). The 10 hPa polar-cap averaged temperature is below 205 K at that time, just a few degrees above its wintertime minimum of early January 2009. At all three displayed heights, the polar vortex is centered over the pole (Figure 4.5 left column).
At 10 hPa, it has an oval shape with minimum geopotential height values around 28000 gpm, showing a slight baroclinicity, while at lower heights the baroclinicity increases and the shape of the polar vortex is more concentric. From 11 January 2009 onwards, the 10 hPa zonal-mean zonal wind decelerates rapidly (Figure 4.3). On 19 January 2009, the polar vortex is clearly elongated at 10 hPa, 30 hPa and 50 hPa, reaching far south into North America and Asia (Figure 4.4 middle column). At 10 hPa, the polar vortex is clearly in a baroclinic state with temperatures locally already reaching up to 260 K (Figure 4.4 middle column, top). At lower heights, the polar vortex is in a more barotropic state with temperatures staying below 240 K (Figure 4.4 middle column, middle and bottom). At the central date of the SSW, the 10 hPa polar-cap averaged temperature reaches its wintertime maximum of 252 K (Figure 4.3). Until this date, the temperature increased by approximately 50 K in roughly 2 weeks. Locally, temperatures up to 290 K are found over Greenland at 10 hPa, making this one of the strongest SSW events (Figure 4.4 right column, top; Schneidereit et al., 2017). At the central date of the SSW, the polar vortex is clearly split into two parts with minimum geopotential height values around 29500 gpm and 29250 gpm at 10 hPa, where it features strong baroclinic characteristics. At 30 hPa, the split of the polar vortex is also visible, as is the baroclinic structure (Figure 4.4 right column, middle). Here, and at 50 hPa, temperatures stay below 240 K. At 50 hPa, the polar vortex is only elongated but not split and is in a less baroclinic state than at the upper levels (Figure 4.4 right column, bottom). On 29 January 2009, the SSW index reaches its winter minimum of -36 ms$^{-1}$ (Figure 4.3).
Within the previous 3 weeks, the stratospheric polar night jet changed its wind speed by 104 ms$^{-1}$, again characterizing the SSW of the winter 2008/2009 as an especially strong one. The polar vortex is now clearly split at all three displayed heights and is in a baroclinic state (Figure 4.5 middle column, top, middle and bottom). Temperatures are still high at 10 hPa, locally reaching values up to 270 K (Figure 4.5 middle column, top). At the lower heights, temperatures do not reach values above 250 K (Figure 4.5 middle column, middle and bottom). From 29 January 2009 onwards, the 10 hPa zonal-mean zonal wind accelerates again and turns westerly on 25 February 2009 after 34 days of easterly winds (Figure 4.3). Concurrently with the acceleration of the stratospheric polar night jet, the 10 hPa polar-cap averaged temperature decreases again. It reaches values slightly below 210 K on 9 March 2009, the time when the polar vortex at 10 hPa is restored (Figure 4.3). It is in a nearly barotropic state again, showing a rather concentric shape and centering a little south of the pole (Figure 4.5 right column, top). According to Manney et al. (2009), this recovery of the polar vortex is not significant at lower heights, showing again the large impact of the SSW event on the circulation in the stratosphere. At these lower heights, the polar vortex is in a more baroclinic state with a less concentric shape, centering further south with decreasing height (Figure 4.5 right column, middle and bottom). It is the only time in the winter 2008/2009 when a clear tilt of the polar vortex with height is seen.

**Figure 4.3:** SSW Index and Polar-Cap Averaged 10 hPa Temperature for the Winter 2008/2009 based on ERA-Interim. The blue dots show the modified SSW index by Charlton and Polvani (2007) with 65°N instead of 60°N as reference latitude. The red line shows the polar-cap averaged 10 hPa temperature.
Days with distinctive shapes of the polar vortex in the 10 hPa geopotential height field are marked.

Figure 4.4: **Geopotential Height and Temperature in the Middle Stratosphere on 8 December 2008, 19 January 2009 and 24 January 2009 based on ERA-Interim.** The geopotential height and temperature are shown at 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 8 December 2008 (left column), 19 January 2009 (middle column) and 24 January 2009 (right column). All plots have the same color scale except the plot on the top right (the color scale for this plot is given next to it).

Figure 4.5: **Geopotential Height and Temperature in the Middle Stratosphere on 11 January 2009, 29 January 2009 and 9 March 2009 based on ERA-Interim.** The geopotential height and temperature are shown at 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 11 January 2009 (left column), 29 January 2009 (middle column) and 9 March 2009 (right column).

### 4.3 Blocking in the Middle Troposphere

The winter 2008/2009 is characterized by a wavy zonal flow in the middle troposphere, shown for the time between the beginning of December and mid-February (Figure 4.6). Already on 5 November 2008, four ridges are found in the northern hemisphere, partly with embedded blocking (Figure 4.7 top). At the ridges over the Euro-Atlantic sector, North America and eastern Asia, an upward propagation of tropospheric waves is detected (Figure 4.2 top). The normalized geopotential height anomalies >1.0 standard deviation of this upward propagation reach up to 3 hPa and lead to a small temperature increase and a deceleration of the stratospheric polar night jet at 10 hPa (Figure 4.1 and 4.3). The elongation of the polar vortex on 8 December 2008 also coincides with the occurrence of pronounced ridges over the Euro-Atlantic sector and western North America, but at that time, large blocking patterns are absent (Figure 4.6). According to Manney et al.
(2009), an anomalously strong wavenumber 1 tropospheric wave forcing is present in the stratosphere around this time. This may be caused by the upward propagation of planetary-scale waves at one of the detected ridges. From 8 December 2008 until 11 January 2009, four pronounced blocking patterns are detected over the Euro-Atlantic sector and the North Pacific (Figure 4.6). These lead, according to Schneidereit et al. (2017), to an anomalously strong upward propagation of wavenumber 2 tropospheric waves into the stratosphere (Figure 4.6). Its causes may be found in the presence of moderate La Niña conditions, which lead to a stronger than usual anticyclonic circulation over Alaska and Scandinavia (Schneidereit et al., 2017). According to Schneidereit et al. (2017), the MJO also plays a role in maintaining the Alaskan ridges. This beneficial phasing of ENSO and MJO conditions in the winter 2008/2009 may dominate over the hindering influence of the QBO west phase and the present number of sunspots on the development of an SSW event. Nevertheless, this is not the only driver of the SSW event. During this time of enhanced upward propagation of tropospheric waves, the stratospheric polar night jet accelerates to its winter maximum, the polar-cap averaged 10 hPa temperature drops below 205 K and the polar vortex stabilizes until 11 January 2009 (Figure 4.3). From this date onwards, the polar night jet rapidly decelerates and the polar-cap averaged 10 hPa temperature increases (Figure 4.3). At the same time, a pronounced Scandinavian ridge develops and only small blocking patterns are observed over the northern hemisphere (Figure 4.6). The large and long-lasting blocking pattern before 11 January 2009 might be a precursor of the following SSW event; however, only normalized geopotential height anomalies <1.0 standard deviation are present in the stratosphere at that time (Figure 4.1).
Additionally, the largely negative geopotential height anomalies at the stratopause in the beginning of January 2009 support the theory of a resonant excitation of the polar vortex in mid-January rather than that of a vortex disturbance by anomalously strong upward-propagating tropospheric waves caused by blocking patterns (Figure 4.1; Albers and Birner, 2014). Positive normalized geopotential height anomalies >1.0 standard deviation at both the stratopause and the surface are found for the first time between 2 and 5 February 2009 (Figure 4.1). During this time, blocking is detected over the eastern North Atlantic ocean and parts of Europe (Figure 4.6). The resulting strong meridional flow component features four pronounced ridges (Figure 4.7 middle). The only time when positive normalized geopotential height anomalies are present continuously from the stratopause to the surface is between 11 and 16 February 2009 (Figure 4.1). At this time, only small blocking patterns are observed over North America and the Euro-Atlantic sector, which coincide with the upward propagation of tropospheric waves at these locations (Figure 4.6 and 4.2 bottom). Especially over western Asia and the eastern North Atlantic, two pronounced ridges are found where tropospheric waves propagate upward into the stratosphere (Figure 4.7 bottom and 4.2 bottom). The time after 16 February 2009 is not investigated further since only normalized geopotential height anomalies <1.0 standard deviation are present at the surface, making an influence of the SSW event less likely than in the time between 11 and 16 February 2009 (Figure 4.1). The only exceptions are a few days in mid-March which show normalized geopotential height anomalies >1.0 standard deviation from the surface up to roughly 800 hPa.

**Figure 4.6:** **Blocking Situation between December 2008 and February 2009 based on ERA-Interim.** The Hovmöller diagram shows the 500 hPa geopotential height, averaged between 40°N and 80°N, as grey shading.
The GHGS component of the blocking index by Tibaldi and Molteni (1990) is shown in red. The horizontal black dashed lines mark the central date of the SSW event. The area between the solid blue lines refers to the Euro-Atlantic sector, 70°W to 30°E.

### 4.4 Position of the Mid-Latitude Jet Stream in the Lower Troposphere

At the beginning of the winter 2008/2009, the mid-latitude jet stream is located at its climatological position, thus not indicating the upward propagation of tropospheric baroclinic waves occurring at the same time (Figure 4.8 and 4.2 top). Here it is important to note that the wind data in November are prone to boundary effects of the filtering and that the strongest upward propagation of baroclinic waves occurs over the North Pacific and not over the North Atlantic ocean. Coinciding with the elongation of the polar vortex on 8 December 2008, the mid-latitude jet stream over the North Atlantic ocean is displaced poleward from its climatological position (Figure 4.8). This may be caused by the ridges occurring in the North Atlantic region (Figure 4.6). The following long-lasting and strong blocking situation in the Euro-Atlantic sector may cause either a split of the mid-latitude jet stream or a rapid shift from high latitudes to $35^\circ$N at the end of December 2008 (Figure 4.6 and 4.8; Martius et al., 2009). From mid-January onwards, coinciding with the deceleration of the 10 hPa zonal-mean zonal wind, the position of the jet stream varies between its climatological position and 35°N (Figure 4.8 and 4.3). Between 11 and 16 February 2009, when positive normalized geopotential height anomalies are present continuously from the stratopause to the surface, the mid-latitude jet stream is displaced poleward and possibly split for a short time period (Figure 4.1 and 4.8).
Since large blocking patterns are absent over the North Atlantic ocean at that time, a split of the jet stream is rather unlikely but cannot be excluded with the methods used in this thesis (Figure 4.6). According to Afargan-Gerstman and Domeisen (2020), the poleward shift of the North Atlantic mid-latitude jet stream indicates a surface influence of the SSW event over the Pacific ocean. This is not confirmed by the deviation of the geopotential height from the zonal-mean, which shows an upward propagation of tropospheric waves over the Pacific ocean, at least at 65°N (Figure 4.2 bottom). The SSW event of the winter 2008/2009 is therefore not associated with the poleward displacement of the mid-latitude jet stream in mid-February 2009. Until April 2009, the jet stream is constantly located between its climatological position and roughly 10° latitude poleward of it (Figure 4.8). The equatorward displacement in the beginning of April occurs more than 60 days after the central date of the SSW and is therefore not associated with it (Baldwin et al., 2003). ![Hovmöller diagram showing the zonal wind speed anomalies in 850 hPa during the winter 2008/2009](image) **Figure 4.8:** **Zonal Wind Speed Anomalies during the Winter 2008/2009 based on ERA-Interim.** The zonal-wind anomalies in 850 hPa, averaged over 60°W to 0°E, are shown as shading in the Hovmöller diagram. The anomalies are filtered using a Lanczos filter with a moving window of 61 days and a cutoff-frequency of 1/10 days. Data at the edges of the timeseries are prone to boundary effects due to the filtering and are therefore shown paler than the unaffected data. The wind maxima are shown as a black solid line. The white dashed line shows the climatological position of the mid-latitude jet stream. The central date of the SSW is marked with the vertical black dashed line. 4.5 North Atlantic Oscillation Index at the Surface The winter 2008/2009 shows frequent changes of the NAO phase (Figure 4.9). 
Two of the five NAO- phases in this winter reach values below -3.5 standard deviations in the 7-day running mean of the standardized NAO index. The first strongly negative NAO phase occurs between 23 December 2008 and 8 January 2009 (Figure 4.9). It co-occurs with an equatorward shift of the mid-latitude jet stream and a long-lasting strong blocking pattern over the North Atlantic-European sector (Figure 4.8 and 4.6). Although the polar vortex is elongated on 8 December 2008, a downward influence of the weak polar vortex is not detected by the methods used in this thesis (Figure 4.4 left column, top). On 9 January 2009, about 2 weeks prior to the central date of the SSW, a positive phase of the NAO establishes itself (Figure 4.9). Only on 3 consecutive days during this time, from 13 to 15 January 2009, does the daily NAO index turn negative, coinciding with negative polar-cap averaged geopotential height anomalies at the surface (Figure 4.9 and 4.1). This is not reflected in the 7-day running mean of the NAO index. On 28 January 2009, coinciding with positive polar-cap averaged geopotential height anomalies at the surface, the NAO index turns negative and remains so until 17 February 2009 (Figure 4.9 and 4.1). This comprises the time when positive geopotential height anomalies >1.0 standard deviation associated with the SSW are present at the surface (Figure 4.1). Nevertheless, the NAO- phase is not associated with a downward influence of the SSW event since during this time only an upward propagation of tropospheric waves is present over the North Atlantic ocean, at least at 65°N (Figure 4.2 bottom). A possible trigger of this NAO- phase is a long-lasting blocking pattern over the North Atlantic-European sector which is detected at the same time as the NAO- phase starts (Figure 4.6; Santos et al., 2013). Two short blocking situations in mid-February may maintain it (Figure 4.6 and 4.9). 
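The blocking detection underlying Figures 4.6 and 5.6 uses the GHGS component of the Tibaldi and Molteni (1990) index. A minimal sketch of the full criterion is given below; since the thesis shows only the GHGS component, the northern gradient GHGN, the -10 m per degree latitude threshold and the nearest-neighbour grid lookup are assumptions taken from the original publication:

```python
import numpy as np

def ghgs_blocked(z500, lats, delta_degs=(-4.0, 0.0, 4.0)):
    """Tibaldi-Molteni (1990) blocking test for one longitude.

    z500 : 500 hPa geopotential height (m) on the latitude grid `lats` (degN).
    Returns True if, for at least one latitude offset, the southern gradient
    GHGS is positive and the northern gradient GHGN is below -10 m/deg lat.
    """
    def z_at(lat):  # nearest-neighbour lookup on the grid (an assumption)
        return z500[np.argmin(np.abs(lats - lat))]

    for d in delta_degs:
        phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
        ghgs = (z_at(phi_0) - z_at(phi_s)) / (phi_0 - phi_s)
        ghgn = (z_at(phi_n) - z_at(phi_0)) / (phi_n - phi_0)
        if ghgs > 0.0 and ghgn < -10.0:
            return True
    return False
```

Applied at every longitude and day of a Hovmöller section, the GHGS > 0 regions correspond to the red shading in the blocking figures.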
It is noteworthy that the NAO- pattern is shifted eastward over Europe instead of being centered over the North Atlantic ocean between the end of January and mid-February 2009 (Figure 4.10 top row). The following NAO- phase in March 2009 could theoretically be influenced by a downward propagation of stratospheric anomalies caused by the SSW as it occurs less than 2 months after the central date of the SSW event (Figure 4.9; Baldwin et al., 2003). It does coincide with positive normalized geopotential height anomalies >1.0 standard deviation at the surface, but the roughly climatological position of the mid-latitude jet stream at that time does not indicate a downward influence of the SSW event on the troposphere (Figure 4.1 and 4.8). A downward influence of the SSW event during this time is therefore not suggested. The following two NAO- phases occur more than 2 months after the central date of the SSW and are therefore not associated with it (Figure 4.9; Baldwin et al., 2003). Figure 4.9: **NAO Index during the Winter 2008/2009 based on ERA-Interim.** Shown is the Zonal Index, which is calculated as the standardized mean sea level pressure anomaly difference between a southern box, averaged over $40^\circ$W to $0^\circ$E and $35^\circ$N to $50^\circ$N, and a northern box, averaged over $40^\circ$W to $0^\circ$E and $55^\circ$N to $70^\circ$N (Leckebusch et al., 2008). The black dashed line marks the central date of the SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW which are present continuously from the stratosphere to the surface is shaded in dark grey. Figure 4.10: **Mean Sea Level Pressure Anomalies and 2 Metre Temperature Anomalies for Two European Cold Waves based on ERA-Interim.** Shown are the European cold wave between the end of December 2008 and the beginning of January 2009 (top row) and the European cold wave at the end of March 2009 (bottom row). 
The dashed contours show negative mean sea level pressure anomalies, the solid contours show positive mean sea level pressure anomalies. The 2 metre temperature anomalies are plotted as shading. 4.6 European Cold Waves at the Surface When looking at the 2 metre temperature anomalies of the winter 2008/2009, only two European cold waves are detected and only one of them is confirmed by the approach of Smid et al. (2019) (Figure 4.11 and 4.12). The cold wave detected by both approaches coincides roughly with the NAO- phase occurring between 23 December 2008 and 9 January 2009 (Figure 4.9). The coldest temperatures are found over eastern Europe, reaching mean values up to 6 K and locally up to 12 K below the climatology (Figure 4.11 and 4.10 top row). All European regions, except south-western Europe according to the approach by Smid et al. (2019), experience colder than usual 2 metre temperatures. Since this cold wave happens before the SSW event, it cannot be associated with it. During the time when positive normalized geopotential height anomalies associated with the SSW event are found continuously from the stratopause to the surface, a mean European cold wave is not detected although the Mediterranean, central and northern Europe experience unusually cold temperatures (Figure 4.11 and 4.12). This is surprising as the co-occurring NAO- phase often leads to colder than usual temperatures in large parts of Europe (Butler et al., 2015). Since an upward propagation of tropospheric waves over the Euro-Atlantic sector is observed during this time, an influence of the SSW on European temperatures is not suggested (Figure 4.2 bottom). The cold wave occurring between 22 and 27 March 2009 is only detected in the 7-day running mean of the 2 metre temperature anomalies (Figure 4.11). It is strongest over northern Europe, with temperature anomalies up to 4 K below the climatology (Figure 4.10 bottom row). 
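The two cold-wave detection approaches compared here can be sketched as follows. The function names, the 7-day window for the anomaly approach and the way qualifying runs are marked are illustrative assumptions; only the -1 K threshold (Garfinkel et al., 2017) and the 3-consecutive-day 10th-percentile criterion (Smid et al., 2019) come from the text:

```python
import numpy as np

def cold_wave_anom(t2m_anom, thresh=-1.0, window=7):
    """Approach (a): days where the `window`-day running mean of the 2 m
    temperature anomaly falls below `thresh` K (cf. Garfinkel et al., 2017)."""
    kernel = np.ones(window) / window
    running = np.convolve(t2m_anom, kernel, mode="same")
    return running < thresh

def cold_wave_percentile(t2m_min, clim_p10, min_days=3):
    """Approach (b): at least `min_days` consecutive days with the daily
    minimum temperature below the climatological 10th percentile
    (cf. Smid et al., 2019). Returns a boolean mask of cold-wave days."""
    below = t2m_min < clim_p10
    mask = np.zeros_like(below)
    run = 0
    for i, b in enumerate(below):
        run = run + 1 if b else 0
        if run >= min_days:            # mark the whole qualifying run
            mask[i - run + 1 : i + 1] = True
    return mask
```

The mismatch discussed above arises naturally from the two definitions: a moderate but persistent anomaly can satisfy the running-mean threshold while never dipping below the climatological 10th percentile of daily minima, and vice versa.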
Since the SSW event occurs almost exactly 2 months before the cold wave, it may have triggered but not maintained it (Figure 4.3). The poleward shift of the mid-latitude jet stream and the change between NAO- and NAO+ at that time also do not support the idea of the SSW triggering the cold wave (Figure 4.8 and 4.9). Therefore, it is not associated with the SSW. Figure 4.11: **2 Metre Temperature Anomalies during the Winter 2008/2009 based on ERA-Interim.** Periods of cold waves are defined using 1 K below the climatological mean as the temperature threshold for cold waves (Garfinkel et al., 2017). The days with cold waves are marked as shading in the respective color. The vertical black dashed line marks the central date of the SSW in the winter 2008/2009. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW which are present continuously from the stratosphere to the surface is shaded in dark grey. The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N. The anomalies are calculated for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N. Figure 4.12: **2 Metre Daily Minimum Temperature during the Winter 2008/2009 based on ERA-Interim.** Periods of cold waves are defined as at least 3 consecutive days with daily minimum temperatures below the $10^{th}$ percentile of the climatological daily minimum temperature (Smid et al., 2019). 
The climatology is calculated for the period between 1999 and 2019 with a 31-day running mean. The days with cold waves are marked as shading in the respective color. The vertical black dashed line marks the central date of the SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW which are present continuously from the stratosphere to the surface is shaded in dark grey. The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N. The anomalies are calculated for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N. 4.7 Concluding Remarks The winter 2008/2009 features the longest-lasting SSW event with the strongest easterly winds in the middle stratosphere of the past 20 years (Table 3.2). In roughly 2 weeks, the 10 hPa polar-cap averaged temperature increases by approximately 50 K while the 10 hPa zonal-mean zonal wind decreases by $104 \text{ ms}^{-1}$ in total. However, this S-type event is not associated with a downward influence on European surface weather. Positive normalized geopotential height anomalies are present continuously from the stratosphere to the surface between 11 and 16 February 2009 and are associated with an upward propagation of tropospheric baroclinic waves over the Euro-Atlantic sector (Figure 4.1 and 4.2 bottom). 
Although the NAO is in its negative phase and the mid-latitude jet stream is displaced equatorward, unusually low 2 metre temperatures are found only in central and northern Europe but not in the European mean (Figure 4.9, 4.8, 4.11 and 4.12). A possible explanation for this is the occurrence of a pronounced ridge over the eastern North Atlantic ocean with embedded blocking (Figure 4.7 bottom). This may lead to a meridional transport of polar air on its eastern flank, which is located over central Europe. The only European mean cold wave after the central date of the SSW occurs between 22 and 27 March 2009 (Figure 4.11). Interestingly, this cold wave is only detected by the 7-day running mean of the 2 metre temperature anomalies but not when using the lowest 10th percentiles of the 2 metre minimum temperatures (Figure 4.12). With the latter approach, only a small cold wave over northern Europe is detected. The late start of the European cold wave, roughly 2 months after the central date of the SSW, is one reason why it is unlikely that the cold temperature anomalies are linked to the SSW event (Baldwin et al., 2003). The other reasons are the poleward displacement of the mid-latitude jet stream and the shift from the negative to the positive phase of the NAO during this time (Figure 4.8 and 4.9). 5 Winter 2009/2010 5.1 Troposphere-Stratosphere Coupling During the winter 2009/2010, three structures with positive normalized geopotential height anomalies >1.0 standard deviation are visible in the stratopause region around 1 hPa (Figure 5.1). Two of these structures show continuous normalized geopotential height anomalies >1.0 standard deviation from the stratopause to the surface. In the case of the first structure, these anomalies are present continuously at the surface between 9 December 2009 and 11 January 2010. 
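The polar-cap averaged normalized geopotential height anomalies used throughout (e.g. Figure 5.1) can be sketched as follows; the 65°N cap edge, the zonal-mean input shape and the function name are assumptions, while the cosine-latitude area weighting is standard practice:

```python
import numpy as np

def polar_cap_normalized_anomaly(z, lats, clim_mean, clim_std, cap_lat=65.0):
    """Polar-cap averaged, normalized geopotential height anomaly.

    z         : geopotential height, shape (time, lat) (zonal mean, an assumption)
    clim_mean : climatological mean for each day, same shape as z
    clim_std  : climatological standard deviation, same shape as z
    cap_lat   : southern edge of the polar cap (65 degN is an assumption)

    The anomaly is expressed in units of the climatological standard
    deviation, so values >1.0 correspond to the '>1.0 standard deviation'
    threshold used for the coupling structures. Latitudes are weighted by
    cos(latitude) to account for the converging meridians.
    """
    cap = lats >= cap_lat
    w = np.cos(np.deg2rad(lats[cap]))
    anom = (z[:, cap] - clim_mean[:, cap]) / clim_std[:, cap]
    return (anom * w).sum(axis=1) / w.sum()
```

Evaluated at every pressure level, this yields the time-height sections from which the continuous >1.0 standard deviation structures are identified.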
The coupling of the troposphere with the stratosphere is nearly instantaneous at the beginning of December 2009, but around 2 weeks later positive normalized geopotential height anomalies >1.0 standard deviation are only detected below the lower stratopause region (Figure 5.1). A coupling between the troposphere and stratosphere can be excluded in early January as negative geopotential height anomalies are present in the upper and middle stratosphere above the positive anomalies detected in the troposphere. When looking at the deviation of the normalized geopotential height anomalies from the zonal-mean, the structure with continuous positive geopotential height anomalies >1.0 standard deviation in the troposphere and stratosphere shows a westward tilt with height (Figure 5.2 top). This indicates an upward propagation of baroclinic waves (Lim and Wallace, 1991). At the same time, an upward propagation of baroclinic waves into the lower stratosphere is present over Scandinavia. This upward propagation of tropospheric waves continues as long as positive geopotential height anomalies are still present at the surface (Figure 5.1). A possible reason for the halt of the troposphere-stratosphere coupling in mid-December 2009 might be the establishment of strong westerly winds in the tropopause region, caused for example by tropospheric internal variability. As only positive geopotential height anomalies can be associated with SSW events (Karpechko et al., 2018), negative geopotential height anomalies are not investigated further in this thesis. Positive normalized geopotential height anomalies >1.0 standard deviation belonging to the second structure with continuous anomalies from the stratosphere to the troposphere are found at the surface between 30 January 2010 and 27 February 2010 (Figure 5.1). The coupling of the troposphere and the stratosphere is again nearly instantaneous. 
In the deviation of normalized geopotential height from the zonal-mean, the structure with positive anomalies over the Pacific is not tilted with height from the surface up to 3 hPa and is slightly tilted eastward with height above (Figure 5.2 bottom). This mostly barotropic feature indicates a downward propagation of stratospheric signals to the surface, although the positive geopotential height deviation near the surface shows only positive values <1.0 standard deviation (Figure 5.2 bottom; Lim and Wallace, 1991). Another strongly positive geopotential height structure at the surface over Scandinavia shows a westward tilt with height, indicating the upward propagation of tropospheric waves over that region (Lim and Wallace, 1991). Figure 5.1: **Vertical Profile of the Polar-Cap Averaged Normalized Geopotential Height Anomalies during the Winter 2009/2010 based on ERA-Interim.** The green structures starting at 1 hPa are an indicator of possible SSW events. Figure 5.2: **Normalized Geopotential Height Deviations from the Zonal-Mean after the SSW Events of the Winter 2009/2010 based on ERA-Interim.** The averaging period is centered on the largest positive normalized geopotential height anomalies at the surface associated with positive geopotential height anomalies >1.0 standard deviation at the stratopause (top) and starts at the beginning of the positive normalized geopotential height anomalies >1.0 standard deviation at the surface associated with the second structure of positive anomalies >1.0 standard deviation at the stratopause (bottom). 5.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere The three structures with positive normalized geopotential height anomalies >1.0 standard deviation at the stratopause suggest that three SSW events occur in the winter 2009/2010 (Figure 5.1). 
This is not confirmed by the SSW index, which only detects one SSW event in this winter (Figure 5.3). The detected event, with its central date on 25 January 2010, is associated with the third structure with positive normalized geopotential height anomalies >1.0 standard deviation at the stratopause. The structure with positive normalized geopotential height anomalies >1.0 standard deviation at the beginning of the winter coincides with a deceleration of the 10 hPa zonal-mean zonal wind at 65°N (Figure 5.1 and 5.3). This clearly shows a weakened polar vortex, but the event is not classified as an SSW because the zonal-mean zonal wind is still westerly. At the end of November 2009, the polar jet strengthens again, reaching values slightly below 40 ms$^{-1}$ in the beginning of December. Until this time, the polar-cap averaged 10 hPa temperature is below 210 K, but it rises by about 5 K in the days before 12 December 2009 (Figure 5.3). At this date, positive normalized geopotential height anomalies >1.0 standard deviation are present at the stratopause for the second time in this winter (Figure 5.1). This structure shows positive anomalies continuously from the stratosphere to the surface and is associated with an upward propagation of baroclinic waves over the Pacific ocean (Figure 5.2 top). The upward propagation of tropospheric signals is also visible in the shape of the polar vortex (Figure 5.4 left column). At 50 hPa, the polar vortex is clearly split in two parts of comparable size. Temperatures are below 230 K over the whole northern hemisphere (Figure 5.4 left column bottom). At 30 hPa, the polar vortex is also split, but at this height temperatures between 230 K and 240 K are visible over the Sea of Okhotsk and parts of south-eastern Russia (Figure 5.4 left column middle). 
The warm temperatures occur in the same region where the upward propagating baroclinic waves are detected, which indicates the breaking of these waves in that region (Figure 5.2 top; Matsuno, 1971). At 10 hPa, the regions with temperatures above 230 K likewise coincide with the regions where the upward propagating baroclinic waves are detected (Figure 5.4 left column top and 5.2 top). This supports the idea of wave breaking in this region. The polar vortex itself is elongated at 10 hPa, with two smaller vortex parts beginning to form inside the elongated vortex filament. The baroclinicity seen in the geopotential height deviation from the zonal-mean is also apparent across the different heights (Figure 5.2 top and 5.4 left column). Although the polar vortex is split at 30 hPa and 50 hPa, none of the three wind-based SSW indices detects an SSW event (Table 3.1). The U65 index shows minimal westerly wind speeds of 17 ms$^{-1}$ while the 10 hPa polar-cap averaged temperatures are around 210 K (Figure 5.3). After this split of the polar vortex, the two vortex parts reunite and form a stable polar vortex with 28000 gpm at its core and maximum westerly wind-speeds of 59 ms$^{-1}$ on 10 January 2010 (Figure 5.3). The 10 hPa polar-cap averaged temperature, at roughly 205 K, is at its lowest of the winter 2009/2010. The polar vortex has a concentric shape centered over the pole in all three displayed heights and features a barotropic structure (Figure 5.5 left column). After this strong polar vortex state, the 10 hPa zonal-mean zonal wind at 65°N decelerates rapidly and reverses on 25 January 2010 (Figure 5.3). This is a typical behaviour observed before SSW events (Charlton and Polvani, 2007). The polar-cap averaged temperature at 10 hPa increases up to 238 K until 1 February 2010. 
On 27 January 2010, 2 days after the central date of the major SSW, the polar vortex is displaced from the pole and clearly elongated in all three displayed heights (Figure 5.5 middle column). The vortex split is observed on 4 February 2010, coinciding with large positive normalized geopotential height anomalies at the surface belonging to the third structure with positive normalized geopotential height anomalies >1.0 standard deviation at the stratopause (Figure 5.3 and 5.1). Although the SSW index is positive at that time, the polar vortex is clearly split, with its remnants located at the same position in all three displayed heights (Figure 5.4 middle column and 5.3). The stronger part with 29000 gpm is located over the North Atlantic ocean and the weaker part with 30250 gpm over the North Pacific ocean. At 10 hPa, temperatures up to 260 K are found over eastern Scandinavia and locally over central Asia (Figure 5.4 middle column top). At this height the structure of the polar vortex is baroclinic, whereas at 30 hPa it is barotropic (Figure 5.4 middle column top and middle). This is also seen in the geopotential height deviation from the zonal-mean at 65°N (Figure 5.2 bottom). At 30 hPa, the warmest temperatures, up to 250 K, are located over north-western Russia (Figure 5.4 middle column middle). At 50 hPa, the warmest temperatures do not reach 240 K and are located further southward (Figure 5.4 middle column middle and bottom). The polar vortex is again in a barotropic state (Figure 5.4 bottom). Around 10 February 2010, the SSW index reaches its winter minimum of -20 ms\(^{-1}\), roughly a week after the temperature maximum of the winter is reached (Figure 5.3). On 19 February 2010, the 10 hPa level is still clearly warmed over eastern Europe and western Asia, with temperatures up to 257 K (Figure 5.4 right column top). 
Meanwhile, three remnants of the polar vortex show a baroclinic structure, with the strongest remnant located over Hudson Bay. The weaker remnants are located over central Europe and western Asia. At 30 hPa as well as at 50 hPa, the vortex remnants located over Hudson Bay and western Asia stay in the same position (Figure 5.4 right column middle and bottom). At these lower heights, the northern hemispheric temperatures are cooler, with maxima below 250 K at 30 hPa and below 240 K at 50 hPa. In addition to the colder temperatures at 50 hPa, the two weaker remnants of the polar vortex are not split and the filament is located completely over Asia (Figure 5.4 right column bottom). In the following two and a half weeks, the polar vortex strengthens again, featuring westerly winds from 26 February onwards, 2 days before only normalized geopotential height anomalies <1.0 standard deviation are found at the surface (Figure 5.3 and 5.1). This ends the 32-day-long SSW event. At the end of March 2010, the polar night jet turns to easterlies again. This marks the onset of the final warming of the winter 2009/2010 according to the U65 index (Figure 5.3). The other two wind-based SSW indices, CP07 and U6090, classify this wind-reversal as a second SSW of the winter 2009/2010 (Table 3.1). The polar vortex is displaced off the pole and shows barotropic features in all three displayed heights (Figure 5.5 right column). At 10 hPa it shows the typical "comma" shape of a D-type event (Figure 5.5 right column top; Charlton and Polvani, 2007). Temperatures up to 250 K are found over southern central Asia. At 30 hPa and 50 hPa, the polar vortex has a less distinct "comma" shape (Figure 5.5 right column middle and bottom). The maximum temperatures are below 220 K over the whole northern hemisphere at both pressure levels (Figure 5.5 right column middle and bottom). 
After this phase of easterly winds, the polar night jet accelerates again but does not reach westerly wind-speeds >5 ms\(^{-1}\) until the final warming of the winter 2009/2010, which changes the stratospheric zonal-mean zonal winds to easterlies again (not shown). The standardized geopotential height anomalies at the stratopause are constantly <1.0 standard deviation during that time (Figure 5.1). This indicates a gradual rather than abrupt change of the stratospheric circulation. In addition, the polar vortex does not recover to a stable, concentric shape centered over the pole. Thus, this wind-reversal event is considered to be part of the final warming of the winter 2009/2010, as classified by the U65 index, and is therefore not investigated further in this thesis. ![SSW Index and Polar-Cap Averaged 10 hPa Temperature for the Winter 2009/2010](image) **Figure 5.3:** **SSW Index and Polar-Cap Averaged 10 hPa Temperature for the Winter 2009/2010 based on ERA-Interim.** The blue dots show the modified SSW index by Charlton and Polvani (2007) with 65°N instead of 60°N as reference latitude. The red line shows the polar-cap averaged 10 hPa temperature. Days with distinctive shapes of the polar vortex in the 10 hPa geopotential height field are marked. Figure 5.4: **Geopotential Height and Temperature in the Middle Stratosphere on 12 December 2009, 4 February 2010 and 19 February 2010 based on ERA-Interim.** The geopotential height and temperature are shown at 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 12 December 2009 (left column), 4 February 2010 (middle column) and 19 February 2010 (right column). All plots have the same color-scale except the plot on the top right (the color-scale for this plot is given next to it). 
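The wind-reversal criterion of the modified Charlton and Polvani (2007) index shown in Figure 5.3 can be sketched as follows. The 20-day separation of restored westerlies between consecutive events follows Charlton and Polvani (2007); omitting their final-warming check (westerlies must return for 10 consecutive days before 30 April) is a simplification of this sketch:

```python
def ssw_central_dates(u10_65, min_sep=20):
    """Central dates of major SSWs from the 10 hPa zonal-mean zonal wind
    at 65 N (modified Charlton and Polvani, 2007, index).

    u10_65  : daily wind time series (m/s), winter months only
    min_sep : consecutive days of restored westerlies required before a
              new event can be counted (20 days, following CP07)

    Returns the indices of days where the wind first turns easterly.
    """
    dates, westerly_run = [], min_sep     # start eligible for detection
    for i, u in enumerate(u10_65):
        if u < 0.0:
            if westerly_run >= min_sep:   # first easterly day of a new event
                dates.append(i)
            westerly_run = 0
        else:
            westerly_run += 1
    return dates
```

Run on the winter 2009/2010 series, only the reversal on 25 January 2010 would survive this sketch's criteria, while the December vortex split (westerlies never reversed) would not register.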
Figure 5.5: **Geopotential Height and Temperature in the Middle Stratosphere on 10 January 2010, 27 January 2010 and 24 March 2010 based on ERA-Interim.** The geopotential height and temperature are shown at 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 10 January 2010 (left column), 27 January 2010 (middle column) and 24 March 2010 (right column). 5.3 Blocking in the Middle Troposphere During the time of possible upward propagation of tropospheric signals with positive geopotential height anomalies into the stratosphere, 9 December 2009 to 11 January 2010, four long-lasting blocking patterns are detected over the Pacific and Atlantic sectors (Figure 5.6 top). On 9 December 2009, a strong "Ω" blocking over Alaska occurs concurrently with a pronounced ridge over Scandinavia (Figure 5.7 top). This is a typical situation of a wavenumber 2 tropospheric wave forcing, which may penetrate into the stratosphere (Tripathi et al., 2015). The blocking event, lasting roughly 1.5 weeks, is followed by a 2-week blocking event over the North Atlantic ocean (Figure 5.7 top). According to Tripathi et al. (2016), a sequence of North Pacific blocking followed by North Atlantic blocking is typical for upward propagating planetary waves which lead to a break-up of the polar vortex. Although an SSW event is not detected at that time, the polar night jet is disrupted and splits on 12 December 2009 (Figure 5.3, 5.2 top and 5.4 left column). From this date onwards, the 10 hPa polar night jet accelerates again, which indicates the absence of planetary waves at that height (Figure 5.3). As the wave forcing induced by the frequent blocking situations is constantly high, a plausible explanation for the halt of the troposphere-stratosphere coupling is the formation of a layer with easterly winds in the tropopause region (Figure 5.6 top; Matsuno, 1971). In the days before the central date of the SSW event, Alaskan blocking is detected (Figure 5.6 bottom). 
Since nearly instantaneous coupling between the troposphere and stratosphere is observed, this blocking situation may be a precursor block of the SSW (Figure 5.1). During the time of positive standardized geopotential height anomalies at the surface associated with the SSW, a long-lasting blocking pattern over the Atlantic ocean is observed alongside several small blocking patterns (Figure 5.6 bottom). It is a complex structure, featuring a "high-over-low" blocking type over the western Atlantic, an "Ω" block over the eastern Atlantic and an amplified ridge over western Asia, leading to a highly wavy and meridional circulation in the middle troposphere (Figure 5.7 bottom). Figure 5.6: **Blocking Situation between December 2009 and March 2010 based on ERA-Interim.** The Hovmöller diagrams show the 500 hPa geopotential height between December and mid-January (top) as well as between mid-January and mid-March (bottom). It is averaged between 40°N and 80°N and shown as grey shading. The GHGS component of the blocking index by Tibaldi and Molteni (1990) is shown in red. The horizontal black dashed lines mark the central date of the SSW event. The area between the solid blue lines refers to the Euro-Atlantic sector, 70°W to 30°E. 5.4 Position of the Mid-Latitude Jet Stream in the Lower Troposphere From mid-December 2009 until mid-January 2010, the 850 hPa jet stream is located up to 20° south of its climatological position (Figure 5.8). This is a typical feature observed after SSW events (Charlton-Perez et al., 2018; Domeisen, 2019). Although the equatorward displacement of the mid-latitude jet stream coincides with the split of the polar vortex on 12 December 2009, it is not associated with it. The reason for this is the upward instead of downward propagation of positive geopotential height anomalies associated with frequent and long-lasting blocking situations over the North Pacific and North Atlantic ocean (Figure 5.2 top and 5.6 top). 
These blocking patterns themselves may lead to the southward displacement of the mid-latitude jet. In the days before the central date of the SSW, the jet stream moves poleward again to roughly 60°N, which coincides with negative standardized geopotential height anomalies in the lower troposphere (Figure 5.8 and 5.1). Shortly after 25 January 2010, the central date of the SSW, the mid-latitude jet stream is displaced southward again and positive normalized geopotential height anomalies are observed in the whole stratosphere and troposphere (Figure 5.8 and 5.1). Already in the beginning of February 2010, it is located about 20° south of its climatological position. Since a downward propagation of positive geopotential height anomalies caused by the SSW is present only over the Pacific ocean, the equatorward shift of the jet stream over the North Atlantic ocean is not associated directly with the SSW (Figure 5.2 bottom). It is caused either by internal tropospheric variability or possibly by a teleconnection between the Pacific/North American (PNA) pattern and the North Atlantic storm track, which is not analyzed further in this thesis (Pinto et al., 2011; Afargan-Gerstman and Domeisen, 2020). The persistent southward displacement of the mid-latitude jet stream leads to a weak zonal flow and frequent blocking patterns caused by cyclonic Rossby-wave breaking (Santos et al., 2013). These blocking patterns can lead to a split of the jet stream (Martius et al., 2009). The jump in the jet stream latitude in the beginning of March 2010 may show such a split of the mid-latitude jet stream, perhaps caused by the Atlantic blocking happening at the same time (Figure 5.8 and 5.6). Another possible split of the jet stream is visible in mid-April 2010 after a poleward displacement (Figure 5.8). 
As the positive standardized geopotential height anomalies associated with the SSW event only last until 17 March 2010 in the lower troposphere, this development of the mid-latitude jet stream is not analyzed further (Figure 5.1). Additionally, it has to be kept in mind that the wind data in April is prone to boundary effects due to the applied filtering. ![Hovmöller diagram showing the position of the jet stream in 850 hPa during winter 2009/2010](image) **Figure 5.8:** **Zonal Wind Speed Anomalies during the Winter 2009/2010 based on ERA-Interim.** The zonal-wind anomalies in 850 hPa, averaged over 60°W to 0°E, are shown as shading in the Hovmöller diagram. The anomalies are filtered using a Lanczos filter with a moving window of 61 days and a cutoff frequency of 1/(10 days). Data at the edges of the timeseries are prone to boundary effects due to the filtering and are therefore shown paler than the unaffected data. The wind maxima are shown as a black solid line. The white dashed line shows the climatological position of the mid-latitude jet stream. The central date of the SSW is marked with the vertical black dashed line. 5.5 North Atlantic Oscillation Index at the Surface It is striking that most of the winter 2009/2010 is characterized by a strongly negative NAO phase, lasting from 7 December 2009 until 19 March 2010 when considering the 7-day running mean of the NAO index (Figure 5.9). One possible explanation of this long-lasting NAO phase is the presence of moderate to strong El Niño conditions in the winter 2009/2010, which favor the teleconnection between ENSO and the negative phase of the NAO index (https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/detrend.nino34.ascii.txt, last viewed 2 June 2020; Lee et al., 2019).
Other factors which may contribute to the long-lasting NAO phase are the QBO east phase and the anomalously large snow cover extent over the northern hemisphere (https://www.geo.fu-berlin.de/en/met/ag/strat/produkte/northpole/index.html, last viewed 4 June 2020; Jung et al., 2011). According to Jung et al. (2011) and Santos et al. (2013), though, these external forcings are neither the dominant cause nor the maintainer of the NAO phase. On 9 December 2009, the daily NAO index turns negative (Figure 5.9). This coincides with the start of positive geopotential height anomalies at the surface which are associated with an upward propagation of tropospheric baroclinic waves (Figure 5.1 and 5.2 top). These waves are excited by a blocking pattern over the North Pacific ocean, lasting from 3 to 15 December 2009 (Figure 5.6). At the same time, upward propagation of tropospheric waves over Scandinavia is observed (Figure 5.2 top). According to Jung et al. (2011), the internal atmospheric variability, which includes blocking patterns, is probably the cause of the NAO phase. With the exception of 1 week at the end of December 2009, the North Atlantic ocean constantly features blocking patterns until 14 January 2010 (Figure 5.6). Santos et al. (2013) state that the cyclonic wave breaking causing the blocking patterns over the North Atlantic-European sector is relevant for the formation and maintenance of the negative phase of the NAO. Between 14 and 25 January, blocking is detected over western Europe but not over the North Atlantic ocean (Figure 5.6). The NAO index increases to values > -1 during that time but stays negative in the 7-day running mean (Figure 5.9). The fact that it decreases in the following 3 days, when blocking occurs again over the North Atlantic ocean, supports the idea that the North Atlantic blocking patterns maintain this NAO phase (Figure 5.9 and 5.6). From 30 January until 8 February 2010, blocking patterns are again absent over the North Atlantic ocean (Figure 5.6).
This time, though, the NAO index continues to decrease (Figure 5.9). At the same time, positive geopotential height anomalies at the surface associated with the SSW event on 25 January 2010 are detected (Figure 5.1). Due to the absence of North Atlantic blocking, the decrease of the NAO index during this time can be explained by the downward influence of the SSW event, for example if there is a teleconnection between the PNA and the NAO, keeping in mind that a varying strength of external forcings, such as the possible teleconnection with ENSO, may play a role as well. During the time of positive geopotential height anomalies at the surface associated with the SSW event, the NAO index stays negative, reaching values around -4 in the 7-day running mean (Figure 5.9). Besides the influence of the SSW on the NAO during this time, a North Atlantic blocking pattern lasting 2 weeks in February may help to maintain this strongly negative NAO phase (Figure 5.6). While in February a typical NAO pattern with a strong pressure gradient between Iceland and the Azores is observed, in March the NAO pattern is shifted southwards, with the positive mean sea level pressure anomaly displaced eastward and the negative mean sea level pressure anomaly displaced westward (Figure 5.10 right column top and bottom). This results in a less negative NAO index in the beginning of March 2010, which is perhaps maintained by a blocking pattern lasting roughly 1 week in March (Figure 5.9 and 5.6). Though normalized positive geopotential height anomalies >1.0 standard deviation associated with the SSW event are not present at the surface, a possible influence of the SSW on the NAO cannot be excluded for the times with less extreme positive geopotential height anomalies at the surface (Figure 5.1). Considering the daily NAO index, it turns positive on 19 March 2010 and stays positive until 24 March 2010 (Figure 5.9).
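The NAO index used throughout this section is the Zonal Index of Figure 5.9: a standardized mean sea level pressure anomaly difference between a southern box (40°W to 0°E, 35°N to 50°N) and a northern box (40°W to 0°E, 55°N to 70°N) following Leckebusch et al. (2008). A minimal sketch, assuming daily anomalies on a regular grid; standardizing the box difference over the full period is an illustrative choice and may differ in detail from Leckebusch et al. (2008):

```python
import numpy as np

def zonal_nao_index(mslp_anom, lats, lons):
    """NAO (zonal) index: standardized MSLP-anomaly difference, south minus north.

    mslp_anom : array (time, lat, lon) of daily MSLP anomalies
    Boxes follow Figure 5.9: 40W-0E with 35-50N (south) and 55-70N (north).
    """
    def box_mean(lat_lo, lat_hi):
        la = (lats >= lat_lo) & (lats <= lat_hi)
        lo = (lons >= -40.0) & (lons <= 0.0)
        w = np.cos(np.deg2rad(lats[la]))               # simple area weighting
        sub = mslp_anom[:, la][:, :, lo].mean(axis=2)  # zonal mean over the box
        return (sub * w).sum(axis=1) / w.sum()

    diff = box_mean(35.0, 50.0) - box_mean(55.0, 70.0)
    return (diff - diff.mean()) / diff.std()           # standardize the difference
```

Anomalously high pressure in the south and low pressure in the north (positive index) corresponds to the positive NAO phase; the strongly negative winter 2009/2010 values reflect the reversed pressure pattern.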
According to Baldwin et al. (2003), SSW events influence surface weather up to 60 days after their central date, which in the case of the winter 2009/2010 corresponds to 25 March 2010. Although the SSW event cannot be excluded as being part of the triggering mechanism of this NAO phase, it is suggested to play a negligible role. The last NAO phase of the winter 2009/2010 occurs more than 2 months after the central date of the SSW event and is therefore not associated with it (Baldwin et al., 2003; Tripathi et al., 2015). ![NAO Index for the Winter 2009/2010](image) **Figure 5.9:** **NAO Index during the Winter 2009/2010 based on ERA-Interim.** Shown is the Zonal Index, which is calculated as the standardized mean sea level pressure anomaly difference between a southern box, averaged over $40^\circ$W to $0^\circ$E and $35^\circ$N to $50^\circ$N, and a northern box, averaged over $40^\circ$W to $0^\circ$E and $55^\circ$N to $70^\circ$N (Leckebusch et al., 2008). The black dashed line marks the central date of the SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW at the surface is shaded in dark grey. 5.6 European Cold Waves at the Surface During the time of the negative NAO phase, several cold waves occur in Europe (Figure 5.11). According to the index by Smid et al. (2019), the cold waves in Europe happen exclusively during the time when the NAO index shows negative values. During this time, every region experiences between 2 and 5 periods with unusually low 2 metre minimum temperatures (Figure 5.12). This is also true for the 7-day running mean of the 2 metre temperature anomalies, except for 2 short cold waves over eastern Europe in the beginning and end of the winter 2009/2010 (Figure 5.11). The first two European mean cold waves occur during the time when positive geopotential height anomalies >1.0 standard deviation are present at the surface (Figure 5.11 and 5.2).
Since these anomalies are associated with an upward propagation of tropospheric waves, a stratospheric influence on European 2 metre temperatures is not suggested in this case (Figure 5.2 top). A possible trigger of the European cold waves are the frequent blocking situations occurring over the North Atlantic-European sector at the same time (Figure 5.6). According to Woollings et al. (2018), the frequent blocking patterns are an important reason for the cold European weather in the winter 2009/2010. The European cold wave comprising the central date of the SSW event coincides with blocking over central Europe (Figure 5.11 and 5.6). This might be the reason why eastern Europe is especially strongly affected by this cold wave, featuring temperatures up to 11 K below the climatology in the 7-day running mean 2 metre temperature anomalies (Figure 5.11). During the time when positive geopotential height anomalies associated with the SSW are present at the surface, one European cold wave is detected in the 7-day running mean 2 metre temperature anomalies (Figure 5.11). The cold wave happening between 5 and 10 February 2010, with European mean temperature anomalies around 2 K below the climatology, is detected by both approaches to classify cold waves (Figure 5.11 and 5.12). Northern Europe is affected the longest during this cold wave, but the most extreme temperatures are found in eastern, central and south-western Europe, with 2 metre temperatures around 4 K below the climatological mean (Figure 5.10 top left and 5.11). The only unaffected area when looking at the 2 metre temperature anomalies is the Mediterranean. This is not confirmed when looking at the lowest 10th percentile of the daily minimum temperatures. When using this approach, all European regions experience unusually cold temperatures, with the strongest cold waves detected in south-western and north-western Europe (Figure 5.12).
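The percentile-based detection of Smid et al. (2019) flags runs of at least 3 consecutive days with daily minimum temperatures below the 10th percentile of the climatological daily minimum temperature. A minimal sketch of the run-length part of that test, assuming a precomputed (possibly day-of-year-dependent) threshold series; the function name `cold_wave_days` is illustrative:

```python
import numpy as np

def cold_wave_days(tmin, threshold, min_len=3):
    """Flag cold-wave days: runs of >= min_len consecutive days with
    daily minimum temperature below a (e.g. 10th-percentile) threshold.

    tmin, threshold : 1-D arrays of equal length (threshold may vary by day)
    Returns a boolean array marking days belonging to a cold wave.
    """
    below = tmin < threshold
    out = np.zeros_like(below)
    run_start = None
    for i, b in enumerate(np.append(below, False)):  # sentinel closes a final run
        if b and run_start is None:
            run_start = i
        elif not b and run_start is not None:
            if i - run_start >= min_len:             # keep only long enough runs
                out[run_start:i] = True
            run_start = None
    return out
```

The threshold itself would be the 10th percentile of the 1999 to 2019 climatology, computed per calendar day over a 31-day running window, which is omitted here.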
As the NAO phase coinciding with the European cold wave may be influenced indirectly by the SSW, the European cold wave may be, too. An indication of the downward propagation of stratospheric anomalies caused by the SSW is the strong positive temperature anomaly over western Greenland and northern North America, locally reaching values up to 20 K above the climatological mean (Figure 5.10 top right; Hinssen et al., 2011). At the time of the second European cold wave, happening between 3 and 15 March 2010, strongly positive anomalies over southern Greenland and Canada are again an indication of the downward propagation of stratospheric signals (Figure 5.10 right bottom; Hinssen et al., 2011). Since negative geopotential height anomalies are present at the surface during this time, the European cold wave is not associated with the SSW (Figure 5.1). In the mean, this cold wave affects central Europe the strongest, with 2 metre temperatures up to 8 K below the climatology (Figure 5.10 left bottom). All European regions except the Mediterranean are affected by this cold wave as detected by the 7-day running mean of the 2 metre temperature anomalies (Figure 5.11). The approach by Smid et al. (2019) does not detect the European mean cold wave (Figure 5.12). Only eastern, north-western and south-western Europe show cold waves in this approach, with the strongest one detected over south-western Europe. Figure 5.11: **2 Metre Temperature Anomalies during the Winter 2009/2010 based on ERA-Interim.** Periods of cold waves are defined using 1 K below the climatological mean as the upper temperature threshold for cold waves (Garfinkel et al., 2017). The days with cold waves are marked as shading in the respective color. The vertical black dashed line marks the central date of the SSW in the winter 2009/2010. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW at the surface is shaded in grey.
The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N. The anomalies are averaged for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N. Figure 5.12: **2 Metre Daily Minimum Temperature during the Winter 2009/2010 based on ERA-Interim.** Periods of cold waves are defined as at least 3 consecutive days with daily minimum temperatures below the $10^{th}$ percentile of the climatological daily minimum temperature (Smid et al., 2019). The climatology is calculated for the period between 1999 and 2019 with a 31 day running mean. The days with cold waves are marked as shading in the respective color. The vertical black dashed line marks the central date of the SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSW of the winter 2009/2010 at the surface is shaded in grey. The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N.
The anomalies are averaged for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N. 5.7 Concluding Remarks The most striking feature of the winter 2009/2010 is the long-lasting negative NAO phase between December 2009 and April 2010 (Figure 5.9). It can be divided into 2 parts with a strongly negative NAO index reaching values around -4, separated by a short time period when the NAO index shows values around -1. Between 30 January and 27 February 2010, coinciding with the second part of the strongly negative NAO index and an equatorward shift of the mid-latitude jet stream, positive normalized geopotential height anomalies are found at the surface (Figure 5.1). These anomalies are associated with a downward propagation of stratospheric signals caused by the S-type SSW event with its central date on 25 January 2010 (Figure 5.2 bottom). Usually, an equatorward shift of the mid-latitude jet stream over the North Atlantic ocean and the negative phase of the NAO are indicators of a downward influence of the SSW at the surface (Afargan-Gerstman and Domeisen, 2020; Charlton-Perez et al., 2018). However, the downward propagation of the geopotential height anomalies caused by the SSW is located over the North Pacific ocean and not, as expected from the behaviour of the jet stream and the NAO, over the North Atlantic ocean (Figure 5.2 bottom).
This leads to the suggestion that the shift of the jet stream and the NAO phase may arise from internal tropospheric variability, such as blocking situations, or possibly from teleconnections. Due to the absence of blocking patterns over the North Atlantic-European sector at the end of January 2010, the decrease of the NAO index and the equatorward shift of the mid-latitude jet stream may therefore be linked to some kind of teleconnection (Figure 5.6). An example of a possible teleconnection is the influence of the PNA on the North Atlantic storm track and therefore on the NAO (Pinto et al., 2011; Afargan-Gerstman and Domeisen, 2020). Since the downward propagation of stratospheric anomalies caused by the SSW occurs over the North Pacific ocean, an influence of the SSW on the PNA is likely. Assuming that there is a teleconnection between the PNA and NAO in the winter 2009/2010, the equatorward shift of the mid-latitude jet stream, the NAO phase and the concurrent European cold wave may be linked to the SSW. It has to be kept in mind, though, that according to Jung et al. (2011) and Santos et al. (2013), external forcings, such as the SSW event, are not the primary cause and maintainer of the NAO phase. Under the assumption of a nearly instantaneous coupling between the North Pacific and North Atlantic ocean, the first European cold wave occurring during this second strongly negative NAO phase of the winter 2009/2010 can be associated with the SSW event (Figure 5.11). Another factor which contributes to the European cold wave is blocking over the North Atlantic-European sector, which also coincides with the anomalously low 2 metre temperatures over Europe (Figure 5.6; Woollings et al., 2018). The cold waves occurring in the first strongly negative NAO phase cannot be associated with a stratospheric influence.
Although the polar vortex is split in the middle stratosphere on 12 December 2009, stratospheric signals seem not to propagate downward (Figure 5.4 left column, 5.1 and 5.2 top). The steady upward propagation of tropospheric waves is associated with frequent blocking situations over the Pacific and North Atlantic ocean (Figure 5.6). The latter, together with the strongly negative NAO phase, are likely the reason for the European cold waves occurring before 30 January 2010. All European cold waves, except for two short ones over eastern Europe at the beginning and end of the winter 2009/2010, coincide with the long-lasting negative NAO phase. The short eastern European cold wave in the beginning of the winter 2009/2010 occurs simultaneously with positive geopotential height anomalies at the stratopause, but at least the anomalies >1.0 standard deviation do not enter the troposphere (Figure 5.11 and 5.1). These anomalies indicate a weak stratospheric polar vortex state starting before November 2009. Since the SSW index is only defined from November onwards, this weak stratospheric polar vortex state is not detected as an SSW. The stratospheric temperatures stay below 215 K and the 10 hPa zonal-mean zonal wind at 65°N does not reverse (Figure 5.3). At the end of April, another eastern European cold wave occurs, roughly 1 month after a reversal of the 10 hPa zonal-mean zonal wind at 65°N. While the U65 index classifies this wind-reversal already as the final warming, the CP07 and U6090 indices detect a second SSW in the winter 2009/2010 (Table 3.1). The polar vortex shows the typical features of a D-type SSW event, but only in 10 hPa height (Figure 5.5 right column). As standardized geopotential height anomalies >1.0 standard deviation are not found in the stratosphere at that time, the wind-reversal is associated with the gradual final warming of the winter 2009/2010 (Figure 5.1).
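The distinction drawn above between an SSW and the final warming rests on whether the 10 hPa zonal-mean zonal wind returns to westerlies for long enough after a reversal. A minimal sketch in the spirit of Charlton and Polvani (2007); the thresholds are the commonly used ones, but the exact rules (November to March search window, 30 April cutoff) are simplified here, and the function name is illustrative:

```python
import numpy as np

def central_dates(u10, min_westerly_return=10, min_gap=20):
    """Detect SSW central dates from daily zonal-mean zonal wind (ms-1)
    at 10 hPa (e.g. at 65N for the U65 variant).

    A central date is the first day of an easterly period. The event only
    counts as an SSW (not the final warming) if the wind returns to
    westerlies for at least `min_westerly_return` consecutive days
    afterwards; reversals closer than `min_gap` days to the previous
    central date are treated as the same event.
    """
    dates = []
    i, n = 0, len(u10)
    while i < n:
        if u10[i] < 0.0 and (i == 0 or u10[i - 1] >= 0.0):
            j = i                                   # candidate reversal
            while j < n and u10[j] < 0.0:
                j += 1                              # end of easterly period
            k = j
            while k < n and u10[k] >= 0.0:
                k += 1                              # length of westerly return
            if k - j >= min_westerly_return and (not dates or i - dates[-1] >= min_gap):
                dates.append(i)
            i = j
        else:
            i += 1
    return dates
```

A reversal from which the westerlies never recover, as at the end of a winter, is rejected by the westerly-return test and thus classified as the final warming.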
6 Winter 2000/2001 6.1 Troposphere-Stratosphere Coupling The winter 2000/2001 shows 4 structures with positive normalized geopotential height anomalies >1.0 standard deviation in the troposphere and stratosphere (Figure 6.1). Two of these anomalies are present at 1 hPa. This indicates that these normalized geopotential height anomalies are induced by SSW events. The other two structures show a bottom-up development and are therefore not investigated further. The first possible SSW event is characterized by a large normalized geopotential height anomaly of up to 4.0 standard deviations and an almost instantaneous downward propagation of the anomaly. Positive normalized geopotential height anomalies are present at the surface between 10 December 2000 and 3 January 2001, containing a few days with slightly less positive values (Figure 6.1). During the time of the largest positive normalized geopotential height anomalies, a structure of the positive normalized geopotential height deviation from the zonal-mean that is tilted eastward with height is observed (Figure 6.2 top). This is an indicator of a downward propagation of the stratospheric anomalies induced by the SSW to the surface, with the largest geopotential height deviations located over the North Atlantic ocean (Figure 6.2 top; Lim and Wallace, 1991). The second possible SSW event is indicated by positive normalized geopotential height anomalies smaller than 2.5 standard deviations (Figure 6.1). Especially at the stratopause level around 1 hPa, the normalized geopotential height anomalies are less extreme than those of the first possible SSW event of the winter 2000/2001. The downward propagation of the anomalies also differs from the first possible SSW, showing a clear time-lag of about 3 weeks between positive normalized geopotential height anomalies >1.0 standard deviation at 1 hPa and the tropopause region (Figure 6.1).
At the surface, positive normalized geopotential height anomalies >1.0 standard deviation are present between 22 February and 6 March 2001, interrupted by a few days with slightly less positive normalized geopotential height anomalies. The normalized geopotential height deviations from the zonal-mean at 65°N show two structures of positive deviations, slightly tilted westward with height, over the North Pacific sector and the North Atlantic ocean (Figure 6.2 bottom). This leads to the suggestion that both structures develop at the surface and propagate upward. Figure 6.1: **Vertical Profile of the Polar-Cap Averaged Normalized Geopotential Height Anomalies during the Winter 2000/2001 based on ERA-Interim.** The green structures starting at 1 hPa are associated with SSW events. Figure 6.2: **Normalized Geopotential Height Deviations from the Zonal-Mean after the SSW Events of the Winter 2000/2001 based on ERA-Interim.** The time for averaging is the time around the largest positive standard deviations of the normalized geopotential height anomalies associated with the preceding SSW at the surface. 6.2 Sudden Stratospheric Warming Signals in the Middle Stratosphere The U65 index confirms the two SSW events suggested by the positive normalized geopotential height anomalies >1.0 standard deviation at the stratopause in the winter 2000/2001 (Figure 6.1 and 6.3). So does the U6090 index, but not the CP07 index, which detects only one SSW event with its central date on 11 February 2001 (Table 3.1). According to the U65 index, the central date of the first SSW is on 23 November 2000 and the central date of the second event on 3 February 2001 (Figure 6.3). According to Manney et al. (2001), the first SSW of the winter 2000/2001 is triggered by a wavenumber-1 amplification. The polar vortex in the middle atmosphere shows the typical "comma" shape, classifying the warming as a D-type event (Figure 6.4 left column; Charlton and Polvani, 2007).
The polar vortex is at a nearly constant location in the whole middle atmosphere. The coldest temperatures, down to 190 K, are found inside the polar vortex in 10 hPa, 30 hPa and 50 hPa height. This barotropic structure in the stratosphere is an indicator of a downward propagation of stratospheric signals into the troposphere (Attard and Lang, 2019). The polar-cap averaged temperature increases by about 10 K during this warming event in 10 hPa height, and by 15 K in the lower stratosphere (Figure 6.3; Manney et al., 2001). The maximum easterly wind speed in 10 hPa of -3 ms$^{-1}$ is reached on 26 November 2000 (Figure 6.3). Easterly winds persist there for 4 days (Figure 6.3). According to Manney et al. (2001), this SSW has, despite its relatively small temperature increase and the little weakening of the 10 hPa polar vortex, a substantial impact on the further development of the polar vortex. From the beginning of December 2000 to mid-January 2001, the polar vortex strengthens again, with an intermediate weakening at the end of December. At the same time, the polar-cap averaged 10 hPa temperature curve shows a local maximum with temperatures around 222 K. According to Manney et al. (2001), the 10 hPa polar vortex is stronger than average and the lower stratospheric polar vortex weaker than average at this time. In mid-January 2001, the 10 hPa wind reaches its winter maximum of 43 ms$^{-1}$. At the same time, the 10 hPa polar-cap averaged temperature reaches its winter minimum around 200 K (Figure 6.3). The polar vortex stabilizes until 23 January 2001, reaching values below 28250 gpm and a nearly concentric shape, centered north-east of Greenland (Figure 6.5 left column). The coldest temperatures, down to 190 K, are found inside the polar vortex, while warm temperatures up to 260 K are found over northern Asia in all three displayed heights, showing again a barotropic structure of the middle stratosphere.
The polar vortex is located further northward at the lower heights, leading to a slightly twisted vortex structure with height. In the following days, the polar vortex is displaced southward and starts to elongate as the polar-cap averaged 10 hPa temperature rapidly rises (Figure 6.3). On 31 January 2001, the geopotential height values of the polar vortex are up to 1500 gpm higher than on 23 January 2001, showing a clear weakening of the stratospheric polar-night jet (Figure 6.5 middle column). The lowest 10 hPa temperatures of about 200 K are found on the western flank of the elongated jet, while the warmest temperatures, up to 260 K, are found on its north-eastern flank. In 30 hPa and 50 hPa height, the coldest temperatures are found inside the polar vortex and the warmest are centered north-west of it. In all three levels, baroclinicity is observed and the polar vortex is slightly twisted with height. On 9 February 2001, 6 days after the central date of the SSW, the polar vortex is clearly elongated and displaced off the pole, covering northern UK, Scandinavia and parts of northern Russia (Figure 6.4 middle column). The lowest temperatures are again found inside the polar vortex filament, reaching values down to 210 K. The warmest temperatures, of about 250 K, are seen on the north-eastern flank of the elongated polar vortex. In all three levels, baroclinicity is again observed. The polar vortex is located at roughly the same position in all three levels, showing only a small northward displacement at lower heights. Maximum polar-cap averaged 10 hPa temperatures of 235 K are reached a few days earlier, ending the rapid temperature increase of more than 35 K in less than 1 week (Figure 6.3). Maximum easterly winds of -16 ms$^{-1}$ are reached on 18 February 2001. The polar vortex is already split a day earlier, featuring a stronger western than eastern part (Figure 6.3 and Figure 6.4 right column).
Both parts are clearly weakened, showing geopotential height values greater than 30000 gpm. The SSW event is therefore classified as an S-type. Warm temperatures higher than 230 K are found almost everywhere over Europe and Asia in 10 hPa height, with maximum temperatures up to 268 K over eastern Europe (Figure 6.4 right column). The middle stratosphere shows a baroclinic structure with a strongly twisted polar vortex. Easterly winds prevail in 10 hPa height for 20 days, until 23 February 2001 (Figure 6.3). On 28 March 2001, the polar vortex in the middle atmosphere is restored again but profoundly weaker than before the SSW (Figure 6.5 left and right column). The 10 hPa polar-cap averaged temperature is, at 215 K, more than 15 K higher than before the warming, showing minimum values down to 200 K in the middle of the polar vortex (Figure 6.3 and 6.5 left and right column). In 30 hPa and 50 hPa, low temperatures and a barotropic structure are likewise present again. ![SSW Index and Polar-Cap Averaged 10 hPa Temperature for the Winter 2000/2001](image) **Figure 6.3:** *SSW Index and Polar-Cap Averaged 10 hPa Temperature for the Winter 2000/2001 based on ERA-Interim.* The blue dots show the modified SSW index by Charlton and Polvani (2007) with 65°N instead of 60°N as reference latitude. The red line shows the polar-cap averaged 10 hPa temperature. Days with distinctive shapes of the polar vortex in the 10 hPa geopotential height field are marked. Figure 6.4: **Geopotential Height and Temperature in the Middle Stratosphere on 26 November 2000, 9 February 2001 and 18 February 2001 based on ERA-Interim.** The geopotential height and temperature are shown in 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 26 November 2000 (left column), 9 February 2001 (middle column) and 18 February 2001 (right column). All plots have the same color-scale except the plot on the top right (the color-scale for this plot is given next to it).
Figure 6.5: **Geopotential Height and Temperature in the Middle Stratosphere on 23 January 2001, 31 January 2001 and 28 March 2001 based on ERA-Interim.** The geopotential height and temperature are shown in 10 hPa (top row), 30 hPa (middle row) and 50 hPa (bottom row) for 23 January 2001 (left column), 31 January 2001 (middle column) and 28 March 2001 (right column). 6.3 Predicted Sudden Stratospheric Warming Signals in the Middle Stratosphere The ensemble members of the reforecast initialized on 31 October 2000 show a similar behaviour of the SSW index to the ERA-Interim reanalysis, but with a slight shift to later times in the first 10 days (Figure 6.6 top). Between 15 November and 4 December 2000, the shape of the curve of the S2S ensemble member with the correct prediction of the SSW is similar to the curve of the ERA-Interim data but shifted to earlier times by about 3 days. After this date, the two curves differ remarkably. Considering the early initialization 23 days before the central date of the SSW, the ensemble member which predicts the SSW correctly forecasts the state of the atmosphere well up to a lead time of 34 days. The representative ensemble member which does not show easterlies in this reforecast does not capture the state of the atmosphere well beyond 17 November 2000. It is important to note that the spread of the ensemble is remarkably small until the beginning of December 2000, considering the early initialization time. This leads to the suggestion that this SSW event could have a high predictability. The two representative members of the reforecast initialized on 7 November 2000, 16 days prior to the central date of the SSW, show a much more similar behaviour than the representative members of the reforecast initialized on 31 October 2000 (Figure 6.6 bottom). The ensemble member which predicts the SSW correctly captures the form of the ERA-Interim reanalysis curve well until 8 December 2000 but is shifted about 4 days to earlier times.
The ensemble member which shows only westerly winds follows the ERA-Interim reanalysis well from 22 November to 3 December 2000 but features zonal wind speeds about $10 \text{ ms}^{-1}$ higher, especially on the central date of the SSW (Figure 6.6). The ensemble spread is rather small until the beginning of December 2000 and then increases. The fact that the ensemble spread for the initialization on 31 October 2000 also increases in the beginning of December 2000 leads to the suggestion that around this time a rather unpredictable weather situation occurs. A possible explanation could be an enhanced wave activity with at least two realizations that are equally probable for the model. At this point, it has to be kept in mind that the model simulates waves with wavenumber 2 less well than waves with wavenumber 1, and thus, waves with wavenumber 1 are more probable from the model's point of view (Tripathi et al., 2016). Manney et al. (2001) state that after 8 December 2000 a wavenumber-2 amplification is observed. This supports the suggestion that in the beginning of December 2000 the enhanced wave activity leads to the large spread among the ensemble members. This is also true for the reforecast initialized on 25 November 2000, which likewise indicates that in the beginning of December 2000 a rather unpredictable weather situation occurs (Figure 6.7 top). Until mid-December, both representative members follow the ERA-Interim reanalysis closely. Then the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation roughly follows the shape of the ERA-Interim reanalysis curve, but mostly at higher values. The representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation shows an additional, artificial SSW event on 20 December 2000.
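The ensemble-spread argument used above can be made concrete with a few array operations. A minimal, illustrative sketch only: the thesis selects its representative members from the polar-cap averaged 100 hPa normalized geopotential height anomalies, whereas this example merely shows per-day spread, the fraction of members with easterlies, and an RMSE-based pick against the reanalysis; all names are assumptions:

```python
import numpy as np

def ensemble_diagnostics(u_members, u_reanalysis):
    """Simple reforecast diagnostics for an ensemble of SSW-index series.

    u_members : array (member, time) of 10 hPa zonal-mean zonal wind (ms-1)
    Returns per-day ensemble spread, the per-day fraction of members with
    easterlies, and the index of the member closest (RMSE) to reanalysis.
    """
    spread = u_members.std(axis=0)                    # ensemble spread per day
    frac_easterly = (u_members < 0.0).mean(axis=0)    # crude SSW "probability"
    rmse = np.sqrt(((u_members - u_reanalysis) ** 2).mean(axis=1))
    return spread, frac_easterly, int(rmse.argmin())
```

A small spread combined with a high easterly fraction is the situation read here as a highly predictable SSW; a growing spread, as in early December 2000, marks the less predictable regime.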
Figure 6.6: **SSW Index of S2S Reforecasts Initialized prior to the Central Date of the SSW.** The shaded area shows ±3 days around the central date of the SSW event obtained from the ERA-Interim reanalysis. The ERA-Interim SSW index is shown by the red dashed line. For the reforecast initialized on 31 October 2000, the SSW index is shown from 1 November 2000 onwards (top). The reforecast initialized on 7 November 2000 is shown in the bottom plot. Figure 6.7: **SSW Index of the S2S Reforecast Initialized after the Central Date of the SSW and Normalized Geopotential Height Anomalies in 100 hPa.** The shaded area shows ±3 days around the central date of the SSW event obtained from the ERA-Interim reanalysis. The ERA-Interim reanalysis is shown by the red dashed line (top). The representative members are obtained from the polar-cap averaged 100 hPa normalized geopotential height anomalies (bottom). The data of the first and last 3 days of the 100 hPa standardized geopotential height anomalies reforecast are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. 6.4 Predicted Shape of the Polar Vortex in the Middle Stratosphere Already in the first initialization on 31 October 2000, the representative member with the correct central date captures the "comma"-shape of the polar vortex in 10 hPa well (Figure 6.8 left column). The vortex in the S2S reforecast is only slightly smaller and more concentric, but the magnitude of the geopotential height values is the same. The representative member without easterlies shows smaller geopotential height values than the ERA-Interim reanalysis, but the core of the vortex is at the same position. The shape of the vortex is more of an oval than a "comma" (Figure 6.8). The differences between the two representative members are small in the region of the polar vortex, suggesting a highly predictable vortex state.
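The standardized anomalies used throughout this chapter, and the boundary effects of the 7-day running-mean climatology noted in the figure captions, can be sketched as follows. This is a simplified illustration under the assumption that the climatology is the multi-year daily mean smoothed with a centred 7-day running mean and that anomalies are standardized by the interannual standard deviation; the edge padding is a hypothetical choice and is exactly where the boundary effects arise:

```python
import numpy as np

def standardized_anomalies(field, window=7):
    """Standardize daily values against a smoothed climatology.

    field  : 2-D array (years, days), e.g. polar-cap averaged
             100 hPa geopotential height.
    window : length of the centred running mean (7 days here).

    The first and last window//2 days are affected by boundary
    effects of the running mean (edge padding is used here).
    """
    clim = field.mean(axis=0)                        # daily climatology
    padded = np.pad(clim, window // 2, mode="edge")  # hypothetical edge handling
    kernel = np.ones(window) / window
    clim_smooth = np.convolve(padded, kernel, mode="valid")
    std = field.std(axis=0)                          # interannual spread
    return (field - clim_smooth) / np.where(std == 0.0, 1.0, std)

# Synthetic check: 5 identical "years" of a linear ramp -> anomalies
# vanish in the interior; only the padded edges deviate
field = np.tile(np.linspace(0.0, 10.0, 30), (5, 1))
anoms = standardized_anomalies(field)
```

The 3-day edge zones of this sketch correspond to the "first and last 3 days prone to boundary effects" repeatedly mentioned in the captions.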
Interestingly, the prediction of the representative members of the reforecast initialized on 7 November 2000 is worse (Figure 6.8 left and middle column). This could either mean that the good representation of the polar vortex in the earlier reforecast is a coincidence, or that the later initialized reforecast takes a less predictable phenomenon into account. The representative member with the correct central date of this reforecast shows a larger core of the polar vortex than the ERA-Interim reanalysis but centered on the same location (Figure 6.8). Also, the "comma"-shape of the vortex is not as distinct as it is in the ERA-Interim reanalysis. The representative member without easterlies shows a vortex core with the same magnitude as in the ERA-Interim reanalysis but shifted to the west. Also, the shape of the polar vortex does not resemble a classical "comma"-shape (Figure 6.8). The largest differences between the two representative members are found at the eastern end of the vortex core and east of it. The reforecast initialized on 25 November 2000 shows almost no differences between its two representative members (Figure 6.8). This is most likely due to the very short lead time of 1 day. Both representative members show the southward displaced, "comma"-shaped polar vortex in the same location and with the same magnitude as the ERA-Interim reanalysis. Figure 6.8: Shape of the Polar Vortex in 10 hPa Geopotential Height on 26 November 2000 in the Selected S2S Reforecasts. Comparison of the representative member with the correct prediction of the atmospheric state (top row) and the representative member without the correct prediction of the atmospheric state (middle row). The difference between both members is shown in the bottom row.
The left column shows the representative members of the reforecast initialized on 31 October 2000, the middle column the representative members of the reforecast initialized on 7 November 2000 and the right column the representative members of the reforecast initialized on 25 November 2000. 6.5 Predicted Sudden Stratospheric Warming Signals in the Lower Stratosphere In contrast to the reforecast initialized on 25 November 2000, the representative members of the other two selected reforecasts show less distinct differences in their prediction of the 100 hPa normalized geopotential height anomalies (Figure 6.9). Concerning the reforecast initialized on 31 October 2000, the representative member with the correct central date follows the ERA-Interim reanalysis roughly until the beginning of December but then turns to largely negative geopotential height values while the ERA-Interim reanalysis stays positive. The representative member without easterlies already turns to negative standardized geopotential height values in mid-November 2000. Regarding this reforecast, the member with the correct central date is closer to the ERA-Interim reanalysis. This finding is in contrast to the reforecasts initialized on 25 November and 7 November 2000 (Figure 6.7 bottom and 6.9 bottom). Therefore, it cannot be assumed that a good representation of the SSW index by an ensemble member leads to a good representation of the polar-cap averaged, standardized 100 hPa geopotential height anomalies. In the case of the reforecast initialized on 7 November 2000, both representative members show a quite similar behaviour (Figure 6.9 bottom). The representative member with the correct central date is closer to the ERA-Interim reanalysis than the representative member without easterlies only until the end of November.
This raises the question of whether the prediction of a potential surface impact of an SSW benefits more from a good representation of the SSW index by the S2S reforecast or from the correct representation of the anomalies induced by the SSW in the lower stratosphere, for example the polar-cap averaged 100 hPa standardized geopotential height anomalies. Figure 6.9: Polar-Cap Averaged Normalized Geopotential Height Anomalies in 100 hPa in the S2S Reforecasts Initialized Before the Central Date of the SSW. Shown are the reforecast initialized on 31 October 2000 from 1 November 2000 onwards (top) and the reforecast initialized on 7 November 2000 (bottom). The ERA-Interim reanalysis is marked by the red dashed line. The data of the first and last 3 days of the reforecasts are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. 6.6 Blocking in the Middle Troposphere During the time when the positive standardized geopotential height anomalies associated with the first SSW of the winter 2000/2001 are present at the surface, a long-lasting, strong blocking pattern over the Euro-Atlantic sector is detected (Figure 6.1 and 6.10 top). This pattern, occurring between 20 and 30 December 2000, is clearly visible in the deviation of the geopotential height from the zonal-mean at 65°N (Figure 6.2 top). The striking blocking pattern over the Euro-Atlantic sector shows a tilted Ω-like shape and is centered over central Europe (Figure 6.11 top). A similar blocking situation is detected between 24 February and 7 March 2001, in the period when positive standardized geopotential height anomalies associated with the second SSW of the winter 2000/2001 are present at the surface (Figure 6.1 and 6.10 bottom). The again Ω-shaped blocking pattern is less tilted at that time and centered over the western North Atlantic ocean (Figure 6.11 bottom).
While the blocking pattern between 24 February and 7 March 2001 co-occurs with an upward propagation of tropospheric waves, the blocking pattern between 20 and 30 December 2000 co-occurs with a downward propagation of stratospheric signals to the surface. Therefore, an association with the preceding SSW event might be suggested. However, this is not in agreement with the literature. According to Charlton-Perez et al. (2018), blocking patterns do not show a significant sensitivity to changes in the stratospheric circulation. On the other hand, the stratospheric circulation shows significant changes after tropospheric blocking situations (Woollings et al., 2018; Martius et al., 2009). The frequent occurrence of long-lasting, simultaneous Scandinavian and Alaskan ridges during the winter 2000/2001 can therefore lead to an enhanced upward propagation of tropospheric waves, which may disturb the stratospheric polar vortex (Figure 6.10 top and bottom; Schneidereit et al., 2017). According to Manney et al. (2001), this is the case for the first SSW of the winter 2000/2001, although it is associated with a stronger than average Aleutian high and a wavenumber-1 amplification. The strong Aleutian high is visible in the 500 hPa geopotential height field, developing before the beginning of November and persisting until 14 December 2000, 3 weeks after the central date of the first SSW of the winter 2000/2001 (Figure 6.10 top and 6.11 top). It is accompanied by a strong Scandinavian ridge (Figure 6.10 top and 6.11 top). At the central date of the second SSW in this winter, the two ridges are also present, indicating a possible wavenumber-2 perturbation of the polar vortex (Figure 6.10 bottom; Schneidereit et al., 2017). Furthermore, blocking over the pole at the same time is stated to be a precursor of the second SSW event of the winter 2000/2001 (Martius et al., 2009).
Since weak La Niña conditions are present during the whole winter, it must also be kept in mind that these may favor the development of blocking anticyclones (https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/detrend.nino34.ascii.txt, last viewed 5 November 2019; Schneidereit et al., 2017). Figure 6.10: **Blocking Situation between November 2000 and March 2001 based on ERA-Interim.** The Hovmöller diagrams show the 500 hPa geopotential height between November and mid-January (top) as well as between the end of January and the beginning of March (bottom). It is averaged between $40^\circ$N and $80^\circ$N and shown as grey shading. The GHGS component of the blocking index by Tibaldi and Molteni (1990) is shown in red. The horizontal black dashed lines mark the central dates of the SSW events. The area between the solid blue lines refers to the Euro-Atlantic sector, $70^\circ$W to $30^\circ$E. 6.7 Predicted Blocking in the Middle Troposphere For the detection of blocking patterns in the S2S reforecasts, the 500 hPa geopotential height anomalies, averaged over $40^\circ$N to $80^\circ$N and $70^\circ$W to $30^\circ$E, are used. Concerning the reforecast initialized on 31 October 2000, the representative member with the correct central date follows the curve of the ERA-Interim reanalysis rather closely with maximum deviations around 50 gpm (Figure 6.12 top). The representative member without easterlies follows the ERA-Interim reanalysis only until 15 November 2000 and then differs most of the time in magnitude and sign. It seems as if there is added value in the prediction of blocking situations when the SSW event is represented correctly. This might be a coincidence, since the occurrence of blocking patterns is not sensitive to changes in the stratospheric state (Charlton-Perez et al., 2018).
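The GHGS component plotted in Figure 6.10 follows Tibaldi and Molteni (1990). The criterion can be sketched as follows, using the standard reference latitudes 40°N, 60°N and 80°N; note that the thesis plots only the GHGS component, whereas a full blocking detection also evaluates the GHGN condition included here for completeness:

```python
def tibaldi_molteni(z500, lats, delta=0.0):
    """GHGS and GHGN components of the Tibaldi and Molteni (1990)
    blocking index at one longitude.

    z500 : 500 hPa geopotential height (gpm) at the latitudes in lats
    delta: optional shift of the reference latitudes (degrees)
    """
    phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
    z = {lat: z500[list(lats).index(lat)] for lat in (phi_s, phi_0, phi_n)}
    ghgs = (z[phi_0] - z[phi_s]) / (phi_0 - phi_s)  # southern gradient
    ghgn = (z[phi_n] - z[phi_0]) / (phi_n - phi_0)  # northern gradient
    # blocked if the southern gradient reverses (easterlies to the
    # south) and the northern gradient is strongly negative
    return ghgs, ghgn, (ghgs > 0.0) and (ghgn < -10.0)

# Illustrative profile: ridge between 40N and 60N, trough poleward
lats = [40.0, 60.0, 80.0]
z500 = [5600.0, 5650.0, 5400.0]
ghgs, ghgn, blocked = tibaldi_molteni(z500, lats)
```

A positive GHGS, as shown in red in Figure 6.10, thus indicates a reversed meridional height gradient south of 60°N, the signature of a blocking ridge.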
The most striking feature of the reforecast initialized on 7 November 2000 is the strong increase of the ensemble spread around 20 November 2000 (Figure 6.12 bottom). According to Manney et al. (2001), this is the time of a wavenumber-1 amplification in the troposphere. This suggests different possible realizations of the type and strength of atmospheric waves at that time. The representative member without easterlies shows deviations of up to 75 gpm from the reanalysis in the following 2 weeks, while the deviations of the representative member with the correct central date stay around 50 gpm at most. Thus, there is again a slightly better representation of the atmospheric state by the representative member with the correct central date. After 25 December 2000, both representative members show a similar curve in shape and magnitude. Concerning the reforecast initialized on 25 November 2000, both representative members show a rather similar behaviour, not following the shape of the ERA-Interim reanalysis well (Figure 6.13). The correct representation of the standardized geopotential height anomalies in 100 hPa does not seem to add value to the prediction of the geopotential height anomalies in 500 hPa in this case. This supports the findings of Charlton-Perez et al. (2018), who state that tropospheric blocking patterns are insensitive to the stratospheric conditions. The $\Omega$-blocking pattern observed after the first SSW of the winter 2000/2001 is not visible in the ERA-Interim 5600 gpm isoline, which exhibits a rather zonal flow in the North Atlantic-European sector (Figure 6.14 top and bottom). The representative member without easterlies of the reforecast initialized on 7 November 2000 shows a distinct $\Omega$-shape, however, too far east and at the wrong geopotential height level (Figure 6.14 top).
Regarding the representative member with the correct central date of the same reforecast, the 5600 gpm isoline is predicted too far south over the continents. The largest difference between the two representative members is found over northern Asia in the region of the $\Omega$-block predicted by the member without easterlies. Over the North Atlantic ocean, the difference and the ensemble spread are smallest. This also applies to the reforecast initialized on 25 November 2000, but the largest differences between the two representative members of this reforecast are found over Scandinavia, north of the $\Omega$-block predicted by the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation in 100 hPa (Figure 6.14 bottom). The blocking pattern is predicted as less pronounced than in the previous reforecast and in the right location, but still too strong in the 5600 gpm isoline. The representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation in 100 hPa follows the ERA-Interim reanalysis more closely but shows an additional trough south of the Iberian Peninsula. Figure 6.12: **Geopotential Height Anomalies in 500 hPa of the S2S Reforecasts Initialized before the Central Date of the SSW.** The 500 hPa geopotential height anomalies are averaged between $40^\circ N$ and $80^\circ N$ as well as $70^\circ W$ and $30^\circ E$. The red dashed line shows the ERA-Interim reanalysis. The reforecast initialized on 31 October 2000 is shown from 1 November 2000 onwards (top plot). The reforecast initialized on 7 November 2000 is shown in the bottom plot. The data of the first and last 3 days of the reforecasts are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology.
Figure 6.13: **Geopotential Height Anomalies in 500 hPa of the S2S Reforecast Initialized after the Central Date of the SSW.** The 500 hPa geopotential height anomalies are averaged between 40°N and 80°N as well as 70°W and 30°E. The red dashed line shows the ERA-Interim reanalysis. The data of the first and last 3 days of the reforecast are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. Figure 6.14: **Blocking Pattern in the Middle Troposphere on 22 December 2000 in the S2S Reforecasts.** The top plot shows the ensemble members of the reforecast initialized on 7 November 2000, the bottom plot the ensemble members of the reforecast initialized on 25 November 2000. Shown are the 5600 gpm geopotential height isolines, in white for the ensemble, in brown for the representative members with the correct prediction of the atmospheric state and in blue for the representative members without the correct prediction of the atmospheric state. The red dashed line shows the ERA-Interim reanalysis. In the background, the difference between the two representative members is shown as grey shading for the depicted reforecast. 6.8 Position of the Mid-Latitude Jet Stream in the Lower Troposphere Compared to its climatological mean position, the mid-latitude jet stream is located further south most of the time until 11 February 2001 (Figure 6.15). This is a typical behaviour, observed after two thirds of all SSW events (Afargan-Gerstman and Domeisen, 2020). Then, the jet is displaced northwards up to $65^\circ$N, which is observed after one third of all SSW events (Afargan-Gerstman and Domeisen, 2020). This indicates that the SSWs of the winter 2000/2001 show different influences on the surface. After the poleward shift of the mid-latitude jet stream, the maximum wind speeds are found around $30^\circ$N and are more likely associated with the subtropical jet than with the mid-latitude jet stream.
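The low-pass filtering applied to the zonal-wind anomalies in Figure 6.15 (61-day window, 1/10 day$^{-1}$ cutoff) can be sketched with the standard Lanczos weight formula of Duchon (1979); the final normalization of the weights is an assumption of this sketch, and the thesis's exact implementation may differ:

```python
import numpy as np

def lanczos_lowpass_weights(window, cutoff):
    """Low-pass Lanczos filter weights (Duchon, 1979).

    window : total number of weights (odd), e.g. 61 days
    cutoff : cutoff frequency in inverse time steps, e.g. 1/10 per day
    """
    n = (window - 1) // 2
    k = np.arange(-n, n + 1)
    w = np.empty(window)
    w[n] = 2.0 * cutoff                # central weight
    kk = k[k != 0].astype(float)
    sigma = np.sinc(kk / (n + 1))      # Lanczos sigma smoothing factor
    w[k != 0] = np.sin(2.0 * np.pi * cutoff * kk) / (np.pi * kk) * sigma
    return w / w.sum()                 # preserve a constant series

# 61-day window and 10-day cutoff period, as in Figure 6.15
weights = lanczos_lowpass_weights(61, 1.0 / 10.0)
```

Convolving the daily 850 hPa wind anomalies with these weights retains variability on timescales longer than about 10 days; the 30 days lost at each end of the series correspond to the pale, boundary-affected edges of the Hovmöller diagram.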
It is suggested that the mid-latitude jet stream is weakened from here on and restrengthens in mid-March, when it is again located equatorward of its climatological mean position (Figure 6.15). At this time though, its position cannot be associated with the second SSW of the winter 2000/2001. In April, the mid-latitude jet stream is located at its climatological latitude again or a little further northward (Figure 6.15). ![Figure 6.15: Zonal Wind Speed Anomalies during the Winter 2000/2001 based on ERA-Interim. The zonal wind anomalies in 850 hPa, averaged over 60°W to 0°E, are shown as shading in the Hovmöller diagram. The anomalies are filtered using a Lanczos filter with a moving window of 61 days and a cutoff frequency of 1/10 day$^{-1}$. Data at the edges of the timeseries are prone to boundary effects due to the filtering and are therefore shown paler than the unaffected data. The wind maxima are shown as a black solid line. The white dashed line shows the climatological position of the mid-latitude jet stream. The central date of the first SSW in the winter 2000/2001 is marked with the vertical black dashed line, the central date of the second SSW with the vertical grey dashed line.](image) 6.9 NAO Index at the Surface Roughly 1 week before the SSW event occurring on 23 November 2000, the NAO is in its positive phase (Figure 6.16). Exactly on the central date of the SSW, the 7-day running mean of the NAO index turns negative, but the daily NAO index turns negative already 3 days prior to the central date. Therefore, this following NAO-phase is rather unlikely to be triggered by the SSW and hence not directly associated with it. This is supported by the fact that the positive geopotential height anomalies >1.0 standard deviation associated with the SSW reach the surface on 10 December 2000 at the earliest (Figure 6.1).
The long-lasting NAO-phase between 18 December 2000 and 23 January 2001 coincides with positive geopotential height anomalies >1.0 standard deviation at the surface and is therefore likely triggered and maintained by the first SSW of the winter 2000/2001 (Figure 6.16 and 6.1). This is supported by the finding that positive geopotential height deviations from the zonal-mean, associated with a downward propagation of stratospheric signals, are found over the North Atlantic ocean at that time (Figure 6.2 top). The pressure systems over the North Atlantic ocean show a large pressure gradient between roughly 50°N and 60°N and are located over the area which is used for the calculation of the NAO index (Figure 6.17 top row). From 4 January 2001 onwards, less positive normalized geopotential height anomalies prevail at the surface, making an influence of the SSW on surface weather less likely (Figure 6.1). The NAO at that time is still in its negative phase and stays there until 22 January 2001 (Figure 6.16). A possible maintainer of this NAO-phase is a long-lasting blocking pattern over the Euro-Atlantic sector occurring in mid-January (Figure 6.10 top). After a short period with a positive phase of the NAO, the 7-day running mean of the NAO index turns negative again on 31 January 2001. The daily values of the index turn negative on 3 February 2001, the day of the central date of the SSW (Figure 6.16). This indicates that this NAO-phase could be triggered by the SSW (Lee et al., 2019; Domeisen, 2019). The fact that at this time normalized positive geopotential height anomalies >1.0 standard deviation have not yet reached the lower stratosphere does not support this indication. On 10 February 2001, the NAO becomes positive again for 11 days (Figure 6.16). The following NAO-phase coincides with positive normalized geopotential height anomalies at the surface for the following 2 weeks (Figure 6.1).
But since the deviation of the normalized geopotential height from the zonal-mean shows an upward propagation at this time, the NAO-phase is not associated with the SSW event (Figure 6.2 bottom). With the exception of four positive values of the daily NAO index in the beginning of March, the NAO index stays negative until 27 March 2001 (Figure 6.16). Possible maintainers of this long-lasting NAO-phase are three blocking patterns occurring over the Euro-Atlantic sector during that time (Figure 6.10 bottom). Interestingly, the pressure distribution does not resemble the "classical" NAO pressure distribution (Figure 6.17 bottom row). The low pressure system over Iceland extends further south and is not clearly distinguishable. The situation of the displayed week shows a rather meridional than zonal flow over the North Atlantic ocean. Figure 6.16: **NAO Index during the Winter 2000/2001 based on ERA-Interim.** Shown is the zonal index, which is calculated as the standardized mean sea level pressure anomaly difference between a southern box, averaged over $40^\circ$W to $0^\circ$E and $35^\circ$N to $50^\circ$N, and a northern box, averaged over $40^\circ$W to $0^\circ$E and $55^\circ$N to $70^\circ$N (Leckebusch et al., 2008). The black dashed line marks the central date of the first SSW, the grey dashed line the central date of the second SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSWs at the surface is shaded in dark grey for the first SSW of the winter 2000/2001 and in light grey for the second SSW of this winter. Figure 6.17: **Mean Sea Level Pressure Anomalies and 2 Metre Temperature Anomalies for Two European Cold Waves based on ERA-Interim.** Shown are the European cold wave associated with the first SSW of the winter 2000/2001 (top row) and the European cold wave occurring after the second SSW of this winter (bottom row).
The dashed contours show negative mean sea level pressure anomalies, the solid contours positive mean sea level pressure anomalies. The 2 metre temperature anomalies are plotted as shading. 6.10 Predicted NAO Index at the Surface It is striking that all three selected reforecasts show rather large differences to the ERA-Interim NAO index (Figure 6.18 top and bottom and 6.19). This may be due to the fact that even slight shifts of the pressure systems responsible for the typical NAO pattern can result in large changes of the NAO index, as the area over which it is computed is fixed (Leckebusch et al., 2008). Concerning the reforecast initialized on 31 October 2000, both representative members of the reforecast show a similar behaviour until 5 December 2000 (Figure 6.18 top). However, neither of them follows the daily or the 7-day running mean values of the ERA-Interim NAO index closely. The large ensemble spread shows that the NAO index is not predicted well by this reforecast, independently of whether the SSW is represented correctly in the reforecast or not. The reforecast initialized on 7 November 2000 also shows a large ensemble spread (Figure 6.18 bottom). The representative members do not capture the ERA-Interim NAO index well either, but their behaviour differs more clearly this time. From the beginning of November onwards, they differ mostly in sign but show a similar magnitude. Until 10 December 2000, the curve of the representative member without easterlies is closer to the curve of the ERA-Interim reanalysis. After this date, the representative member with the correct central date shows a more similar behaviour to the ERA-Interim reanalysis. Only for the reforecast initialized on 25 November 2000 is the representative member with the correct representation of the stratospheric state closer to the ERA-Interim NAO index than the representative member without the correct representation of the stratospheric state (Figure 6.19).
The representative members for this reforecast are chosen based on the normalized geopotential height anomalies in 100 hPa. Although the representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation is generally closer to the ERA-Interim reanalysis, it mostly differs from it in sign. So does the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation, but with a larger amplitude. The strong negative NAO-phase starting in mid-December 2000 is not captured by the two representative members at all. At that time, the representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation predicts a high pressure system over most of the North Atlantic ocean, while the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation predicts an NAO+ structure, shifted from the North Atlantic ocean to the European continent (Figure 6.20 top and middle row). Figure 6.18: NAO Index of S2S Reforecasts Initialized Before the Central Date of the SSW. The top plot shows the reforecast initialized on 31 October 2000 from 1 November 2000 onwards, the bottom plot the reforecast initialized on 7 November 2000. Shown is the zonal index, which is the standardized mean sea level pressure anomaly difference between a southern box, averaged over $40^\circ$W to $0^\circ$E and $35^\circ$N to $50^\circ$N, and a northern box, averaged over $40^\circ$W to $0^\circ$E and $55^\circ$N to $70^\circ$N (Leckebusch et al., 2008). The red shading shows the daily NAO index and the red dashed line its 7-day running mean, calculated with the ERA-Interim reanalysis data set. The vertical black dashed line marks the central date of the SSW. The data of the first and last 3 days of the reforecasts are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology.
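The zonal NAO index defined in the caption above (Leckebusch et al., 2008) reduces to a difference of two box averages; a minimal sketch with synthetic data follows. Whether the standardization is applied to each box separately or to the difference, and against which climatology, is an assumption of this sketch (here the difference is standardized over the sample itself):

```python
import numpy as np

def zonal_nao_index(mslp_anom, lats, lons):
    """Zonal NAO index after Leckebusch et al. (2008): mean sea level
    pressure anomaly averaged over a southern box (35-50N, 40W-0E)
    minus a northern box (55-70N, 40W-0E), standardized here over
    the time series itself (an assumption of this sketch).

    mslp_anom : array (time, lat, lon) of MSLP anomalies
    """
    def box_mean(lat_min, lat_max):
        la = (lats >= lat_min) & (lats <= lat_max)
        lo = (lons >= -40.0) & (lons <= 0.0)
        return mslp_anom[:, la, :][:, :, lo].mean(axis=(1, 2))

    diff = box_mean(35.0, 50.0) - box_mean(55.0, 70.0)
    return (diff - diff.mean()) / diff.std()

# Synthetic example: anomalously high pressure in the southern box
# on day 0 only -> strongly positive (NAO+) index on day 0
lats = np.arange(30.0, 75.0, 5.0)
lons = np.arange(-50.0, 11.0, 10.0)
field = np.zeros((4, lats.size, lons.size))
field[0, lats <= 50.0, :] = 3.0
nao = zonal_nao_index(field, lats, lons)
```

Because the boxes are fixed in space, even a small shift of the Icelandic low or Azores high in a reforecast moves pressure anomalies across a box edge and can change the index strongly, which is consistent with the large reforecast-to-reanalysis differences discussed above.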
Figure 6.19: **NAO Index of the S2S Reforecast Initialized After the Central Date of the SSW.** Shown is the zonal index, which is the standardized mean sea level pressure anomaly difference between a southern box, averaged over $40^\circ$W to $0^\circ$E and $35^\circ$N to $50^\circ$N, and a northern box, averaged over $40^\circ$W to $0^\circ$E and $55^\circ$N to $70^\circ$N (Leckebusch et al., 2008). The red shading shows the daily NAO index and the red dashed line the 7-day running mean of the daily NAO index, calculated with the ERA-Interim reanalysis data set. The data of the first and last 3 days of the reforecast are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. Figure 6.20: **2 Metre Temperature Anomalies and Mean Sea Level Pressure Anomalies between 22 and 25 December 2000 in the S2S Reforecast Initialized on 25 November 2000.** Comparison of the representative member with prevailing standardized anomalies >0.5 standard deviation (top row) with the representative member with prevailing standardized anomalies <0.5 standard deviation (middle row) of the reforecast initialized on 25 November 2000. The difference between the two representative members is shown in the bottom row. All plots in the top and middle rows share the same color scale, shown on the top right, except the rightmost plot in the middle row, whose color scale is given next to it. 6.11 European Cold Waves at the Surface Three European cold waves can be detected during the winter 2000/2001 using the 7-day running mean of the 2 metre temperature anomalies (Figure 6.21). Following the cold wave definition by Smid et al. (2019), only the latter two are identified as cold waves (Figure 6.22). The first European cold wave is not detected by the approach of Smid et al.
(2019) but coincides with positive normalized geopotential height anomalies at the surface, associated with the SSW event on 23 November 2000 (Figure 6.21 and 6.1). It shows values about 1.2 K below the climatological mean in the period between 22 and 25 December 2000 (Figure 6.21). The strongest manifestation is seen in northern Europe with values around 5 K below average. Eastern, north-western and central Europe also experience unusually cold temperatures during that time. The coinciding NAO-phase, the equatorward shift of the tropospheric jet and the detected downward propagation of positive normalized geopotential height anomalies caused by the first SSW of the winter 2000/2001 indicate that the European cold wave occurring between 22 and 25 December 2000 is linked to the SSW event with its central date on 23 November 2000 (Figure 6.21, 6.16, 6.15 and 6.2 top). This is supported by the large positive temperature anomalies of up to 20 K above average found over Greenland and the Bering Strait at that time (Figure 6.17 top row). The second European cold wave in the winter 2000/2001 occurs around the central date of the second SSW (Figure 6.21). Since positive geopotential height anomalies induced by this SSW event only reach the surface in mid-February, this cold wave is not associated with it (Figure 6.1). Between 21 February and 1 March 2001, the third and last European cold wave of the winter 2000/2001 occurs (Figure 6.21). During that time, positive standardized geopotential height anomalies, caused by the SSW with its central date on 3 February 2001, are observed at the surface (Figure 6.1). But because the deviation of the geopotential height from the zonal-mean at 65°N does not show a downward propagation of stratospheric signals, this cold wave is not associated with the SSW event either (Figure 6.2 bottom). This European cold wave is characterized by temperatures more than 1 K below average (Figure 6.21).
The coldest temperatures are more than 7 K below average in northern Europe and around 3 K below average in the European mean. Except for the Mediterranean, all European regions experience unusually cold temperatures (Figure 6.17 bottom row). This is detected by both definitions of cold waves. The largest positive temperature anomalies are found south-west of Greenland, again indicating downward propagating, stratospheric positive temperature anomalies, which is however not supported by the deviation of the geopotential height from the zonal-mean at 65°N (Figure 6.17 bottom row and 6.2 bottom). Figure 6.21: **2 Metre Temperature Anomalies during the Winter 2000/2001 based on ERA-Interim.** Periods of cold waves are defined using 1 K below the climatological mean as the temperature threshold for cold waves (Garfinkel et al., 2017). The days with cold waves are marked as shading in the respective color. The vertical black dashed line marks the central date of the first SSW in the winter 2000/2001, the vertical light grey line the central date of the second SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSWs at the surface is shaded in dark grey for the first SSW of the winter 2000/2001 and in light grey for the second SSW of this winter. The European mean is calculated by averaging between 10°W to 42°E and 35°N to 72°N. The anomalies are averaged for north-western Europe between 10°W to 3°E and 45°N to 60°N, for south-western Europe between 10°W to 3°E and 35°N to 45°N, for eastern Europe between 20°E to 42°E and 45°N to 60°N, for northern Europe between 3°E to 42°E and 60°N to 72°N, for central Europe between 3°W to 20°E and 45°N to 60°N and for the Mediterranean between 3°E to 42°E and 35°N to 45°N.
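Both cold-wave definitions used in this chapter amount to threshold tests; the run-length criterion of Smid et al. (2019), at least 3 consecutive days below the climatological 10th-percentile daily minimum temperature, can be sketched as follows. The day-dependent percentile threshold is assumed to be precomputed and is replaced here by a constant for illustration:

```python
import numpy as np

def cold_wave_days(tmin, threshold, min_run=3):
    """Flag days belonging to a cold wave: at least min_run consecutive
    days with daily minimum temperature below the climatological
    threshold (Smid et al., 2019, use the 10th percentile; a constant
    stands in for the day-dependent threshold in this sketch)."""
    below = np.asarray(tmin) < np.asarray(threshold)
    flags = np.zeros(below.shape, dtype=bool)
    run = 0
    for i, b in enumerate(below):
        run = run + 1 if b else 0
        if run >= min_run:
            # mark the whole run once it reaches the minimum length
            flags[i - run + 1 : i + 1] = True
    return flags

# Six days of minimum temperature (degC) against a -4 degC threshold:
# the first three days qualify, the final two-day cold spell is too short
tmin = [-5.0, -6.0, -7.0, -1.0, -8.0, -9.0]
flags = cold_wave_days(tmin, -4.0)
```

The simpler definition of Figure 6.21 (Garfinkel et al., 2017) corresponds to dropping the run-length requirement and testing the 7-day running mean of the temperature anomaly against a fixed 1 K deficit.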
Figure 6.22: **2 Metre Daily Minimum Temperature during the Winter 2000/2001 based on ERA-Interim.** Periods of cold waves are defined as at least 3 consecutive days with daily minimum temperatures below the $10^{th}$ percentile of the climatological daily minimum temperature (Smid et al., 2019). The climatology is calculated for the period between 1999 and 2019 with a 31-day running mean. The days with cold waves are marked as shading in the respective color. The vertical light grey dashed line marks the central date of the first SSW in the winter 2000/2001, the vertical black line the central date of the second SSW. The period with normalized geopotential height anomalies >1.0 standard deviation associated with the SSWs at the surface is shaded in dark grey for the first SSW of the winter 2000/2001 and in light grey for the second SSW of this winter. The European mean is calculated by averaging between $10^\circ$W to $42^\circ$E and $35^\circ$N to $72^\circ$N. The anomalies are averaged for north-western Europe between $10^\circ$W to $3^\circ$E and $45^\circ$N to $60^\circ$N, for south-western Europe between $10^\circ$W to $3^\circ$E and $35^\circ$N to $45^\circ$N, for eastern Europe between $20^\circ$E to $42^\circ$E and $45^\circ$N to $60^\circ$N, for northern Europe between $3^\circ$E to $42^\circ$E and $60^\circ$N to $72^\circ$N, for central Europe between $3^\circ$W to $20^\circ$E and $45^\circ$N to $60^\circ$N and for the Mediterranean between $3^\circ$E to $42^\circ$E and $35^\circ$N to $45^\circ$N. 6.12 Predicted European Cold Waves at the Surface Considering all selected reforecasts, it is striking that the ensemble members tend to be colder than the ERA-Interim reanalysis. The reforecast initialized on 31 October 2000 features one ensemble member which closely follows the ERA-Interim reanalysis until 8 December 2000 (Figure 6.23 top).
In contrast to this member, the representative members of the reforecast follow the ERA-Interim reanalysis rather closely only until mid-November 2000. Both predict a European cold wave in early December, when the ERA-Interim reanalysis shows only positive temperature anomalies. The representative member without easterlies deviates less from the ERA-Interim reanalysis than the representative member with the correct central date (Figure 6.23 top). This is not true for the reforecast initialized on 7 November 2000. Here, the representative member with the correct central date follows the ERA-Interim reanalysis more closely than the representative member without easterlies but still deviates by up to 4 K from it (Figure 6.23 bottom). It shows the same shape of curve as the ERA-Interim reanalysis until the beginning of December, but with an offset of about 1 K towards lower temperature anomalies. Until this time, the correct prediction of the SSW in the reforecast seems to add value to the European 2 metre temperature prediction. Nevertheless, it is important to note that for the reforecast initialized on 7 November 2000, none of the ensemble members follows the ERA-Interim reanalysis well, and even the member closest to the ERA-Interim reanalysis temporarily shows deviations of more than 2 K (Figure 6.23 bottom). This implies that either the good representation of the ERA-Interim reanalysis by one member of the reforecast initialized on 31 October 2000 is a coincidence, or that the reforecast initialized on 7 November 2000 captures other, or perhaps additional, atmospheric phenomena which decrease its predictive skill. At the very end of the lead time of the reforecast initialized on 7 November 2000, the European cold wave starts. While the representative member with the correct central date does not predict a European cold wave at all, the representative member without easterlies predicts a cold wave, but too early, at the beginning of December 2000 (Figure 6.23 bottom).
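The percentile-based cold wave definition of Smid et al. (2019) used in Figure 6.22 (at least 3 consecutive days with the daily minimum temperature below the 10th percentile of a 31-day running climatology) reduces to a run-length check once the climatological threshold is available. A minimal sketch; the function and variable names are illustrative and the threshold is assumed precomputed:

```python
import numpy as np

def cold_wave_days(tmin, threshold, min_len=3):
    """Flag days belonging to a cold wave: at least `min_len` consecutive
    days with daily minimum temperature below the day-wise threshold
    (e.g. the 10th percentile of a 31-day running climatology).
    Returns a boolean array of the same length as `tmin`."""
    below = tmin < threshold
    flags = np.zeros(tmin.size, dtype=bool)
    start = None
    for i, b in enumerate(np.append(below, False)):  # sentinel closes the last run
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_len:
                flags[start:i] = True
            start = None
    return flags

# Toy series: a 2-day dip (too short) and a 4-day dip (a cold wave).
tmin = np.array([0, -5, -5, 0, -6, -6, -6, -6, 0], dtype=float)
thr = np.full(tmin.size, -4.0)  # stand-in for the percentile climatology
flags = cold_wave_days(tmin, thr)
# Only the 4-day dip (days 4 to 7) is flagged as a cold wave.
```

In practice, `threshold` would be the day-of-year 10th percentile of the 1999-2019 daily minimum temperatures, smoothed with the 31-day running window described in the caption.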
The reforecast initialized on 25 November 2000 comprises the European cold wave between 22 and 25 December 2000. Since this cold wave is especially strong in Scandinavia, the Scandinavian 2 metre temperature anomalies are shown as well (Figure 6.17 top row and 6.24 bottom). Concerning the European mean temperatures, both representative members show a similar behaviour until 10 December 2000, following the ERA-Interim reanalysis quite well (Figure 6.24 top). The European cold wave between 22 and 25 December 2000 is captured at its beginning by the representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation. However, this member underestimates its amplitude and duration. The reason for this is the prediction of too warm temperatures over the Iberian Peninsula and western France as well as over parts of eastern Scandinavia and eastern Europe (Figure 6.17 top row and 6.20 top row). The representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation predicts positive temperature anomalies more than 2 K above the climatological value at that time. This can be clearly seen in the positive temperature anomalies predicted over almost every European region except parts of the Iberian Peninsula and Scandinavia (Figure 6.17 top row and 6.20 middle row). The difference between the two representative members therefore locally reaches up to +18 K and down to -24 K (Figure 6.20 bottom row). The ensemble member which predicts the European cold wave best is generally too cold and predicts two additional cold waves, which are not observed in the ERA-Interim reanalysis. Concerning the Scandinavian mean temperature anomalies, both representative members perform better (Figure 6.24 bottom). Although they deviate by up to 5 K from the ERA-Interim reanalysis, they capture the form of its curve well and predict the Scandinavian cold wave at the end of December 2000.
The representative member with prevailing standardized geopotential height anomalies >0.5 standard deviation predicts only a short cold wave, with temperature anomalies reaching only slightly more than 1 K below the climatological value. This is due to the fact that in eastern Scandinavia warmer than normal temperatures are predicted (Figure 6.17 top row and 6.20 middle row). The representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation predicts a cold wave roughly twice as long as the ERA-Interim reanalysis shows, starting on 16 December 2000. The predicted magnitude of the cold wave is underestimated by about 2.5 K. Although cold temperature anomalies are predicted for most of Scandinavia, the too warm predicted temperatures over southern Scandinavia lead to the underestimated magnitude of the Scandinavian cold wave in comparison with the ERA-Interim reanalysis (Figure 6.17 top row and 6.20 middle row). In addition, the representative member with prevailing standardized geopotential height anomalies <0.5 standard deviation predicts a cold wave at the beginning of December 2000, when the ERA-Interim reanalysis shows positive temperature anomalies of about 2.5 K above climatology. The ensemble member closest to the ERA-Interim reanalysis predicts two additional cold waves at the beginning and in the middle of December 2000. The cold wave seen in the ERA-Interim reanalysis is predicted too, but with an overestimated duration and an underestimated magnitude. Figure 6.23: **2 Metre Temperature Anomalies of S2S Reforecasts Initialized Before the Central Date of the SSW.** In the top plot, the reforecast initialized on 31 October 2000 is shown from 1 November 2000 onwards; in the bottom plot, the reforecast initialized on 7 November 2000 is shown. The red dashed line shows the ERA-Interim reanalysis. The European mean is calculated by averaging between $10^\circ W$ to $42^\circ E$ and $35^\circ N$ to $72^\circ N$.
The data of the first and last 3 days of the reforecasts are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. Figure 6.24: **2 Metre Temperature Anomalies of S2S Reforecasts Initialized After the Central Date of the SSW.** The European mean temperature (top) and the Scandinavian mean temperature (bottom) of the reforecast initialized on 25 November 2000 are shown. The European mean is calculated by averaging between $10^\circ W$ to $42^\circ E$ and $35^\circ N$ to $72^\circ N$, the Scandinavian mean by averaging between $3^\circ E$ to $42^\circ E$ and $60^\circ N$ to $72^\circ N$. The data of the first and last 3 days of the reforecast are prone to boundary effects due to the use of a 7-day running mean for the calculation of the climatology. 6.13 Concluding Remarks The stratospheric circulation in the winter 2000/2001 is dominated by two SSW events with very different features. The first SSW event is an example of the sensitivity of the wind-based SSW indices to the chosen reference latitude (Table 3.1). Only the SSW indices using $65^\circ$N or the meridional mean between $60^\circ$N and $90^\circ$N detect the event with its central date on 23 November 2000. The often used SSW index by Charlton and Polvani (2007), with $60^\circ$N as reference latitude, does not. Although the D-type event features only a small increase of about 10 K in the 10 hPa polar-cap averaged temperature, it still shows downward propagating normalized geopotential height deviations from the zonal-mean at $65^\circ$N (Figure 6.4 left column, top, 6.3 and 6.2 top). The polar vortex shows a barotropic structure in the middle stratosphere, with almost no change of its position at different heights. In the time between 10 December 2000 and 3 January 2001, the positive normalized geopotential height anomalies associated with this SSW are present at the surface (Figure 6.1).
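The dependence of the detected central date on the reference latitude, noted above for the SSW indices in Table 3.1, can be illustrated with a toy detection routine: the central date is taken as the first day on which the 10 hPa zonal-mean zonal wind at the chosen latitude reverses to easterly. Additional criteria of the full indices (preceding westerlies, exclusion of final warmings) are omitted; all names and numbers are illustrative:

```python
import numpy as np

def central_date(u_ref, dates):
    """First day on which the 10 hPa zonal-mean zonal wind at the chosen
    reference latitude reverses from westerly to easterly (u < 0).
    Returns None if no reversal occurs."""
    easterly = u_ref < 0.0
    if not easterly.any():
        return None
    return dates[int(np.argmax(easterly))]  # argmax finds the first True

# Toy winter: the wind at 65N reverses three days before the wind at 60N,
# so the detected central date depends on the reference latitude.
dates = list(range(10))  # day numbers as stand-ins for calendar dates
u65 = np.array([20, 15, 10, 5, -2, -5, -4, 1, 3, 5], dtype=float)
u60 = np.array([25, 20, 16, 12, 8, 4, 2, -1, 2, 4], dtype=float)
# central_date(u65, dates) -> day 4; central_date(u60, dates) -> day 7
```

In the toy example, both latitudes eventually reverse; in the real first SSW of the winter 2000/2001, the wind at $60^\circ$N never turns easterly, so the Charlton and Polvani (2007) index detects no event at all.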
In this period, between 10 December 2000 and 3 January 2001, a strong NAO- phase and an equatorward displaced mid-latitude jet stream in 850 hPa are observed (Figure 6.16 and 6.15), both indicators of an influence of the SSW on the surface and therefore possibly on European cold waves (Charlton-Perez et al., 2018; Afargan-Gerstman and Domeisen, 2020). The European cold wave between 22 and 25 December 2000, detected in the 7-day running mean of the 2 metre temperature anomalies, is therefore associated with the first SSW of the winter 2000/2001 (Figure 6.21). It is strongest over northern Europe, with anomalies down to 5 K below climatology. Besides the SSW, a large blocking pattern over the Euro-Atlantic sector at this time is another candidate for triggering and maintaining the NAO- phase and the cold wave (Figure 6.10). The second SSW event in the winter 2000/2001, with its central date on 3 February 2001, is detected by all three wind-based SSW indices used in this thesis (Table 3.1). It is an S-type warming event, featuring a baroclinic vortex structure in the middle atmosphere and a temperature increase of about 35 K in roughly 1 week (Figure 6.4 right column and 6.3). Although positive normalized geopotential height anomalies are present at the surface between 22 February and 6 March 2001, the normalized geopotential height deviation from the zonal-mean at $65^\circ$N shows only upward propagating signals (Figure 6.1 and 6.2 bottom). This is one indication that the SSW does not have an influence on surface weather. The concurrent NAO- phase seems to contradict this, but it can also be triggered and maintained by the frequent blocking situations occurring over the Euro-Atlantic sector (Figure 6.16 and 6.10). Additionally, the mean sea level pressure anomalies show a rather meridional flow over the North Atlantic ocean, featuring strong cyclonic anomalies over the pole and most of the North Atlantic ocean (Figure 6.17 bottom row).
The mid-latitude jet stream in 850 hPa is displaced poleward at the beginning of the NAO- phase and then weakened to the extent that the subtropical jet stream is stronger (Figure 6.15). This leads to the suggestion that the European cold wave occurring between the end of February and early March 2001 is not influenced by the second SSW of the winter 2000/2001 (Figure 6.21). It is unclear whether the correct representation of the SSW or of the normalized geopotential height anomalies in the lower stratosphere has a beneficial influence on the prediction of European cold waves on subseasonal to seasonal time scales for the winter 2000/2001. The shape of the polar vortex and the 10 hPa temperature are predicted well by all representative members of all selected reforecasts, regardless of whether the SSW index or the normalized 100 hPa geopotential height anomalies are represented correctly (Figure 6.8 and 6.7). Concerning blocking situations and the NAO index, the representative ensemble members generally do not perform well, showing no clear benefit when the atmospheric conditions in the stratosphere are represented well (Figure 6.13, 6.14 bottom and 6.19). Especially the strong NAO- index, during which the European cold wave between 22 and 25 December 2000 occurs, is not captured by the representative members of the reforecast covering that time period. This reforecast is initialized 2 days after the central date of the SSW, and its representative members are therefore chosen based on the normalized geopotential height anomalies in 100 hPa. When looking directly at the 2 metre temperature anomalies, the representative members do not predict the European or Scandinavian cold wave well in the mean, but the representative member with the correct atmospheric state predicts the 2 metre temperature anomaly field better than the representative member without the correct atmospheric state (Figure 6.24 and 6.20).
In this case, there thus seems to be no substantial increase in predictability when the stratospheric state is represented correctly in these S2S reforecasts. 7 Comparison of Case Studies and Discussion 7.1 Characteristics in the Middle Stratosphere Four representative SSW events are investigated in this thesis. The analyzed S-type events occur in the second half of the respective winter, with their central dates on 3 February 2001, 24 January 2009 and 25 January 2010. This is consistent with Charlton and Polvani (2007), who find the highest occurrence probability of S-type events in January and February. The analyzed D-type event occurs on 23 November 2000, which is also consistent with the literature. Since S-type events cluster in mid-winter while D-type events do not show a seasonality, SSW events in early winter are generally D-type events (Butler et al., 2015). The consistency of the chosen SSW events with the literature confirms the representativeness of these events. In January and February, the polar vortex is radiatively strongest (Charlton and Polvani, 2007). Therefore, all three S-type events are preceded by the strongest westerly winds and coldest polar-cap averaged temperatures of the respective winter. Nevertheless, they show a high variability in the magnitude and speed of the deceleration of the stratospheric polar night jet and of the temperature increase over the pole. During the SSW event in the winter 2008/2009, the 10 hPa polar-cap averaged temperature increases by about 50 K in 2 weeks and the 10 hPa zonal-mean zonal wind decelerates by about 104 ms$^{-1}$ in 3 weeks (Figure 4.3). In comparison to this event, the event in the winter 2009/2010 is less extreme. The 10 hPa zonal-mean zonal wind weakens during this SSW event by about 80 ms$^{-1}$ in roughly 6 weeks and the polar-cap averaged 10 hPa temperature increases by 36 K in 1 month (Figure 5.3). The weakest of the three analyzed S-type events is the SSW occurring in February 2001.
It features a deceleration of the 10 hPa zonal-mean zonal wind of around 60 ms$^{-1}$ in roughly 4 weeks and an increase of the 10 hPa polar-cap averaged temperature of approximately 35 K in 2 weeks (Figure 6.3). In comparison to the S-type events, the analyzed D-type event is far less extreme. The 10 hPa zonal-mean zonal wind decelerates by about 30 ms$^{-1}$ in roughly 2 weeks and the polar-cap averaged 10 hPa temperature rises by around 20 K in 3 weeks (Figure 6.3). The weaker characteristics of the D-type event are consistent with the literature (Charlton and Polvani, 2007). This is especially true for D-type events occurring at the beginning of the winter, like the analyzed one (Butler et al., 2015). Furthermore, it is important to note that the D-type event develops from a neutral polar vortex state with average westerly wind speeds and polar-cap averaged temperatures. The different vortex states before the SSW events already show the variability between S- and D-type events. When looking at the deceleration of the zonal-mean zonal wind and the increase in temperatures, the variability in strength among the S-type events themselves is evident as well. A difference of roughly 40 ms$^{-1}$ in the deceleration of the stratospheric polar night jet is observed, over time spans of between 3 weeks and 6 weeks after the central date of the SSW event. Concerning the temperature change, a difference of 30 K between events is seen over time ranges of between 2 weeks and 3 weeks. This already shows that the evolution of single events in the middle stratosphere differs greatly from composites of SSW events of the same type. The large differences in the characteristics of SSW events are also seen in the absolute values of the zonal-mean zonal wind and the polar-cap averaged temperature in the middle stratosphere. The SSW event occurring in the winter 2008/2009 features the most extreme and longest-lasting easterly winds in the middle stratosphere of the past 20 winters (Table 3.2).
Easterly winds prevail at 10 hPa height for 34 days, reaching maximum values of $-36 \text{ ms}^{-1}$. The polar-cap averaged temperature at the same height reaches maximum values of about 252 K (Figure 4.3). The SSW event in the winter 2009/2010 reaches a similar duration of the easterly winds in the middle stratosphere with 32 days (Table 3.2). It is important to point out, though, that the stratospheric polar night jet accelerates again after the central date of the SSW event and reaches westerly wind speeds for approximately 1 week before turning to an easterly wind direction again (Figure 5.3). The maximum easterly wind speed of $-20 \text{ ms}^{-1}$ and the maximum polar-cap averaged temperature of 237 K are profoundly weaker than during the SSW event in the winter 2008/2009. The weakest of the three S-type events is again the event in the winter 2000/2001 (Table 3.2). It shows maximum easterly wind speeds of $-16 \text{ ms}^{-1}$ and a duration of easterly winds in the middle stratosphere of 20 days. Similar to the SSW event of the winter 2009/2010, an intermittent phase of westerly winds is observed. The maximum polar-cap averaged temperature reaches 232 K at 10 hPa (Figure 6.3). The D-type event is the weakest of all analyzed events. Easterly winds reach a maximum speed of only $-3 \text{ ms}^{-1}$ at 10 hPa height and last there for 4 days (Table 3.2). This is consistent with Charlton and Polvani (2007), who find a longer duration of easterly winds in the middle stratosphere of up to 20 days for S-type events. The maximum polar-cap averaged temperature reaches values around 226 K (Figure 6.3). Again, the differences among events are non-negligible. Concerning the maximum easterly wind speed, a difference of $16 \text{ ms}^{-1}$ is observed between the S-type events and of $33 \text{ ms}^{-1}$ between all four SSW events.
The duration of easterly winds in the stratosphere ranges from 4 days to 36 days, with two events showing an intermittent phase of westerly winds. Also the absolute polar-cap averaged temperature shows a non-negligible difference of 26 K between events. Concerning the time of the vortex displacement or split, the location of the vortex remnants and the temperature distribution in the middle stratosphere, a high case-to-case variability between events is again seen. On the central date of the SSW event in the winter 2008/2009, the polar vortex is clearly split into two parts (Figure 4.4 right column, top). These are centered over the Hudson Bay and central Asia. Maximum temperatures reach values up to 290 K locally over Greenland, east of one of the vortex remnants. Ten days after the central date of the SSW in the winter 2009/2010, the polar vortex is split at 10 hPa height (Figure 5.4 right column, top). The stronger remaining vortex part is centered over Iceland, the weaker part over eastern central Asia. At both centers, temperatures up to 260 K are found locally. The vortex split during the S-type SSW event of the winter 2000/2001 also happens after the central date of the event, in this case 15 days later (Figure 6.4 right column, top). The two remaining polar vortex parts, nearly equal in size, are centered over the eastern North Atlantic ocean and eastern central Asia. Over eastern Europe, maximum temperatures of locally up to 270 K at 10 hPa height are found. Three days after the central date of the D-type SSW event in the winter 2000/2001, the polar vortex is clearly displaced off the pole in the middle stratosphere (Figure 5.5 left column, top). Its center is located over northern Siberia. The maximum 10 hPa temperature is still below 240 K and observed over western Alaska.
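The easterly-wind durations discussed above, including the intermittent westerly phases of two events, can be diagnosed by comparing the total number of easterly days with the longest uninterrupted easterly spell. A minimal sketch with illustrative names and a toy wind series:

```python
import numpy as np

def easterly_stats(u):
    """Total number of easterly days (u < 0) and the length of the longest
    uninterrupted easterly spell. A total exceeding the longest spell
    reveals an intermittent westerly phase."""
    easterly = u < 0.0
    total = int(easterly.sum())
    longest = run = 0
    for b in easterly:
        run = run + 1 if b else 0
        longest = max(longest, run)
    return total, longest

# Toy series: easterlies interrupted by a 2-day westerly phase.
u = np.array([-3, -5, -4, 2, 1, -2, -6, -1, 3], dtype=float)
total, longest = easterly_stats(u)
# total = 6 easterly days, longest spell = 3 days -> an intermittent
# westerly phase interrupted the easterlies.
```

A total larger than the longest spell indicates an intermittent westerly phase, as observed for the events in the winters 2009/2010 and 2000/2001.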
The 2-dimensional fields of geopotential height and temperature show most clearly that composite analyses are not sufficient to describe the characteristics of SSW events in the middle stratosphere completely. Therefore, the analysis of case studies is of great importance. 7.2 Influence on European Cold Waves Concerning the influence of SSW events on European cold waves, the characteristics of the events in the middle stratosphere seem not to be the dominant factor, at least not in the case of the four analyzed SSW events. According to Afargan-Gerstman and Domeisen (2020), SSW events lead to an equatorward displacement of the tropospheric mid-latitude jet stream and the negative phase of the NAO when influencing European surface weather. The relevant time-range comprises the 2 months after the central date of the SSW event (Baldwin et al., 2003). After the SSW event in the winter 2008/2009, two NAO- phases are observed (Figure 4.9). During the first NAO- phase, an equatorward displacement of the mid-latitude jet stream is also found (Figure 4.8). Regarding the SSW event in the winter 2009/2010, the whole 2 months after the central date are characterized by the negative phase of the NAO and, most of the time, also by an equatorward displacement of the mid-latitude jet stream (Figure 5.9 and 5.8). After the second SSW event of the winter 2000/2001, two NAO- phases co-occur with an equatorward displacement of the mid-latitude jet stream (Figure 6.16 and 6.15). Concerning the first SSW event of the same winter, the mid-latitude jet stream is also displaced southward during the two NAO- phases occurring in the 2 months after the central date of the SSW. When looking only at these two tropospheric phenomena, as done e.g. by Afargan-Gerstman and Domeisen (2020), a downward impact of all four analyzed SSW events on European surface weather is suggested.
To verify this suggestion, the positive geopotential height anomalies caused by the SSW events are analyzed in the stratosphere and troposphere. After the SSW event in the winter 2008/2009, positive geopotential height anomalies are present continuously from the stratosphere to the surface for only 6 days (Figure 4.1). During this time, an upward propagation of tropospheric waves is observed over the Euro-Atlantic sector (Figure 4.2 bottom). Positive geopotential height anomalies associated with the SSW event of the winter 2009/2010 are present continuously from the stratosphere to the surface for roughly 1 month. However, a downward propagation of stratospheric signals is only observed over the North Pacific ocean (Figure 5.1 and 5.2 bottom). After the S-type SSW event in the winter 2000/2001, positive geopotential height anomalies are present continuously from the stratosphere to the surface for approximately 2 weeks (Figure 6.1). During the time of the largest anomalies at the surface, only upward propagating signals are observed (Figure 6.2 bottom). Positive geopotential height anomalies are observed continuously from the stratosphere to the surface for roughly 1 month after the D-type SSW event of the same winter (Figure 6.1). During this time, a downward propagation of stratospheric signals is observed over the North Atlantic ocean (Figure 6.2 top). Although all SSW events show the typical indications of a downward influence of SSW events on European surface weather, only the D-type event also shows a downward propagation of stratospheric signals to the surface over the North Atlantic ocean. It has to be kept in mind, though, that an overlap of upward and downward propagating waves, especially in barotropic tropospheric structures, cannot be excluded.
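The criterion applied here, that positive normalized geopotential height anomalies are present continuously from the stratosphere to the surface, can be checked by requiring the anomaly to exceed a threshold at all pressure levels simultaneously. A minimal sketch; the function name and toy data are illustrative:

```python
import numpy as np

def coupled_days(z_anom, threshold=1.0):
    """Count days on which normalized geopotential height anomalies exceed
    `threshold` standard deviations at every pressure level simultaneously,
    i.e. the anomaly is present continuously from stratosphere to surface.
    `z_anom` has shape (levels, days)."""
    present = (z_anom > threshold).all(axis=0)
    return int(present.sum())

# Toy example: 3 levels x 5 days; only on days 1 and 2 does the anomaly
# exceed 1.0 standard deviation at all levels at once.
z = np.array([[0.2, 1.5, 1.8, 1.2, 0.4],
              [0.1, 1.3, 1.6, 0.9, 0.2],
              [0.0, 1.1, 1.4, 0.3, 0.1]])
# coupled_days(z) -> 2
```

Applied to a full level-by-time section such as Figure 6.1, this yields the durations of continuous stratosphere-to-surface coupling quoted in the text.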
Additionally, the occurrence probability of an NAO- phase is higher after weak vortex states than after strong vortex states, but still less than one quarter of the wintertime NAO- phases are preceded by an SSW event (Charlton-Perez et al., 2018; Domeisen, 2019). This highlights the importance of analyzing the downward propagation of stratospheric anomalies instead of only focussing on the tropospheric state after the SSW events. The D-type SSW event is the only SSW event analyzed in this thesis which is suggested to influence European surface weather directly. During the time of the downward propagation of stratospheric signals, a European cold wave is observed. This cold wave occurs between 21 and 25 December 2000 and shows European mean 2 metre temperature anomalies around 1.5 K below the climatology and northern European mean 2 metre temperature anomalies down to 5 K below average (Figure 6.21). Locally, temperature anomalies down to 16 K below the climatology are observed over northern and central Europe (Figure 6.17 top row). The fact that the most extreme cold temperatures are observed over northern Europe is consistent with the findings of King et al. (2019). Besides the influence of the SSW on European surface weather, a large blocking pattern located over the Euro-Atlantic sector at the same time might also have an influence on this European cold wave (Figure 6.10; Buehler et al., 2011). Under the assumption that the PNA, which is likely influenced by the downward propagating stratospheric signals over the North Pacific ocean, is coupled to the NAO via a teleconnection, the SSW event of the winter 2009/2010 could also influence European surface weather. According to Pinto et al. (2011) and Afargan-Gerstman and Domeisen (2020), a link between the PNA and NAO is possible. However, a link between the SSW and the European cold wave occurring between 7 and 22 February 2010 is not clear, since Jung et al. (2011) and Santos et al.
(2013) exclude external forcings, such as the SSW event, as the primary cause and maintainer of the NAO- phase and therefore also of the European cold wave. This cold wave features European mean 2 metre temperatures roughly 2 K lower than usual and northern European mean temperatures 9 K lower than usual. Locally, temperature anomalies down to 12 K below average are observed over Scandinavia (Figure 5.11). During the cold wave, blocking over the North Atlantic-European sector is observed, which may maintain the strongly negative NAO- phase and the European cold wave (Figure 5.6; Buehler et al., 2011). Even when a downward propagation of stratospheric signals over the North Atlantic ocean and a simultaneously occurring NAO- phase are observed, an association of a co-occurring European cold wave with the preceding SSW event is still not easy to make. The influence of blocking on the 2 metre temperatures is, for example, far stronger than the influence of SSW events (Lehtonen and Karpechko, 2016). Generally, the NAO and European cold waves are strongly influenced by the internal tropospheric variability, which is able to suppress a stratospheric influence (Tripathi et al., 2015; Domeisen et al., 2020). 8 Summary and Outlook SSW events are able to influence mid-latitude surface weather in the 2 months after their central date (Baldwin et al., 2003; Tripathi et al., 2015). Two thirds of the events are followed by the negative phase of the NAO and an equatorward displacement of the tropospheric mid-latitude jet stream over the North Atlantic ocean (Afargan-Gerstman and Domeisen, 2020). Therefore, SSW events possibly influence European cold waves and their predictability on the subseasonal to seasonal time-scale (Vitart et al., 2017; Garfinkel et al., 2017). In this thesis, an overview of useful techniques to analyze SSW events, their potential impact on European surface weather and their possible use in tropospheric weather forecasts with lead times of up to one and a half months is given.
The thesis is embedded in the Waves-to-Weather (W2W) C8 project, which deals with the stratospheric influence on the predictability of persistent weather patterns. This thesis focuses on the characteristics of SSW events in the stratosphere, their possible downward coupling via geopotential height anomalies and the dominating tropospheric drivers of European surface weather, such as blocking and the NAO. It underlines the high case-to-case variability among the characteristics of four representative SSW events and their downward impacts on European cold waves. Furthermore, it is demonstrated that a coupling between the stratosphere and the troposphere cannot be determined by solely looking at the tropospheric state after the SSW event. Instead, the analysis of vertical profiles, e.g. of geopotential height anomalies, is necessary. One D-type and three S-type SSWs are selected based on the reversal of the 10 hPa zonal-mean zonal wind at 65°N. These representative events of the past 20 years are analyzed with the ERA-Interim reanalysis data set regarding their characteristics and possible surface impacts, especially focussing on European cold waves. The D-type SSW event is additionally analyzed with S2S reforecasts. This is done to determine the influence of the correct representation of the SSW event and its subsequent anomalies on the predictability of European cold waves. The analyzed S-type SSW events, with their central dates on 3 February 2001, 24 January 2009 and 25 January 2010, show a similar behaviour in their development in the middle stratosphere. The strongest westerly winds and coldest polar-cap averaged temperatures are observed right before the rapid decrease of the wind speed of the stratospheric polar night jet and the increase in temperature. Nevertheless, the change in wind speed and temperature differs remarkably in time and magnitude.
The same applies to the maximum easterly wind speed and polar-cap averaged temperature as well as to the duration of easterly winds in the middle stratosphere. Two of the S-type events show an intermittent phase of westerly winds. The D-type SSW event, with its central date on 23 November 2000, develops from a neutral polar vortex state with average westerly wind speeds and polar-cap averaged temperatures. The strongest of the four analyzed SSW events is the S-type event in the winter 2008/2009, the weakest the D-type event in the winter 2000/2001. The deceleration of the 10 hPa zonal-mean zonal wind is small, at about $30 \text{ ms}^{-1}$, reaching a maximum easterly wind speed of only $-3 \text{ ms}^{-1}$. In comparison, the S-type events lead to a deceleration of the 10 hPa zonal-mean zonal wind between $60 \text{ ms}^{-1}$ and $104 \text{ ms}^{-1}$, reaching maximum easterly wind speeds between $-16 \text{ ms}^{-1}$ and $-36 \text{ ms}^{-1}$. The temperature increase of roughly $15 \text{ K}$, to $226 \text{ K}$, is also smaller than for the S-type events, which feature a temperature increase between $35 \text{ K}$ and $50 \text{ K}$ to a maximum polar-cap averaged temperature between $232 \text{ K}$ and $252 \text{ K}$. The duration of easterly winds is also shortest after the D-type event, with only 4 days. During the S-type events, easterly wind conditions in the middle stratosphere last between 20 days and 36 days. Independently of the large differences among the analyzed events in the middle stratosphere, the typical indications of a downward influence on surface weather are observed in the 2 months after every event. Nevertheless, a downward influence on European surface weather is not suggested for all analyzed SSW events. Concerning the S-type events in the winters 2008/2009 and 2000/2001, only upward propagating signals are found in the deviation of the geopotential height from the zonal-mean at $65^\circ \text{N}$.
Therefore, these events are not associated with European cold waves. The S-type event in the winter 2009/2010 shows a downward propagation over the North Pacific ocean. An association with the European cold wave occurring between 7 and 22 February 2010 can therefore only be made under the assumption of a nearly instantaneous teleconnection between the PNA and NAO. After the D-type SSW event in the winter 2000/2001, a downward propagation of stratospheric signals over the North Atlantic ocean is detected. Therefore, this SSW event is associated directly with the European cold wave observed between 21 and 25 December 2000. Since a downward propagation of stratospheric anomalies caused by the D-type SSW event is detected over the North Atlantic ocean, this event is analyzed further regarding its influence on the predictability of the subsequent European cold wave. For this purpose, the ECMWF S2S reforecasts are used. The only reforecast comprising the European cold wave and an initialization with easterly winds in the middle stratosphere is initialized on 25 November 2000, 2 days after the central date of the SSW event. Therefore, the selection of the representative members from this ensemble reforecast is based on the 100 hPa geopotential height anomalies. The representative members predict clearly different fields of the 2 metre temperature and mean sea level pressure anomalies during the European cold wave. The representative member with the correct atmospheric state reproduces the ERA-Interim 2 metre temperature anomaly distribution better. However, the differences in the exact location and magnitude of the cold anomalies in comparison to the ERA-Interim reanalysis are non-negligible. This is also the reason why an improvement of the European or Scandinavian mean temperature prediction is not found for the representative member with the correct atmospheric state.
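The member selection described above can be sketched under the assumption that ensemble members are classified by whether their normalized 100 hPa geopotential height anomaly prevails above 0.5 standard deviations on the majority of days; the function name and the toy two-member ensemble are illustrative, not the actual selection code of the thesis:

```python
import numpy as np

def pick_representatives(z100, clim_mean, clim_std, thresh=0.5):
    """Split ensemble members by their prevailing normalized 100 hPa
    geopotential height anomaly: members whose anomaly exceeds `thresh`
    standard deviations on most days are taken to represent the post-SSW
    state, the others not. `z100` has shape (members, days)."""
    norm = (z100 - clim_mean) / clim_std
    prevailing = (norm > thresh).mean(axis=1) > 0.5  # True on >50% of days
    with_ssw = np.where(prevailing)[0]
    without_ssw = np.where(~prevailing)[0]
    return with_ssw, without_ssw

# Toy ensemble: member 0 stays well above, member 1 below the threshold.
clim_mean, clim_std = 0.0, 1.0
z100 = np.vstack([np.full(20, 1.2), np.full(20, -0.3)])
with_ssw, without_ssw = pick_representatives(z100, clim_mean, clim_std)
# with_ssw -> [0], without_ssw -> [1]
```

The threshold of 0.5 standard deviations matches the one used to distinguish the representative members in Section 6.12.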
Concerning the distribution of the mean sea level pressure anomalies, neither representative member performs well, in either location or magnitude. Consequently, no improvement in the prediction of the NAO index is seen either when the correct atmospheric state is represented in the ensemble member. At this point it is important to keep in mind that the European cold wave occurs at a lead time of roughly 1 month in the reforecast. In this investigated case, representing the SSW event correctly in the ECMWF reforecast does not yield a substantial increase in the predictability of European cold waves. To obtain a statistically robust statement on a possible increase in the predictability of European surface weather after SSW events, further case studies need to be investigated. Since SSW events exhibit a high case-to-case variability in their characteristics and downward influence, as demonstrated in this thesis, case studies add value compared to composite studies. Additionally, a larger ensemble of forecasts on the subseasonal-to-seasonal time scale is necessary to perform a statistical analysis. This is provided by the S2S database, which consists in total of 270 ensemble members in forecasts and 93 ensemble members in hindcasts (Vitart et al., 2017). Multi-model studies with this large ensemble of the S2S database are planned within the W2W C8 project. To link SSW events to European cold waves causally, further case studies are needed as well. The coupling between the stratosphere and the troposphere is not yet fully understood and, as shown in this thesis, highly variable. An important goal of the W2W C8 project is therefore to gain a better understanding of the influence of SSW events on surface weather. This is of high importance for exploiting the full potential of SSW events as a possible source of increased predictability of European cold waves and other extremes on the subseasonal-to-seasonal time scale. 
References

Afargan-Gerstman, H. & Domeisen, D. I. V. (2020). Pacific Modulation of the North Atlantic Storm Track Response to Sudden Stratospheric Warming Events. *Geophysical Research Letters*, 47(2). doi:10.1029/2019GL085007
Albers, J. R. & Birner, T. (2014). Vortex Preconditioning due to Planetary and Gravity Waves prior to Sudden Stratospheric Warmings. *Journal of the Atmospheric Sciences*, 71(11), 4028–4054. doi:10.1175/JAS-D-14-0026.1
Attard, H. E. & Lang, A. L. (2019). Troposphere–Stratosphere Coupling Following Tropospheric Blocking and Extratropical Cyclones. *Monthly Weather Review*, 147(5), 1781–1804. doi:10.1175/MWR-D-18-0335.1
Baldwin, M. P. & Dunkerton, T. J. (1999). Propagation of the Arctic Oscillation from the stratosphere to the troposphere. *Journal of Geophysical Research: Atmospheres*, 104(D24), 30937–30946. doi:10.1029/1999JD900445
Baldwin, M. P. & Dunkerton, T. J. (2001). Stratospheric Harbingers of Anomalous Weather Regimes. *Science*, 294(5542), 581–584. doi:10.1126/science.1063315
Baldwin, M. P., Thompson, D. W. J., Shuckburgh, E., Norton, W. A., & Gillett, N. P. (2003). Weather from the stratosphere? *Science*, 301, 317. doi:10.1126/science.1085688
Benedict, J. J., Lee, S., & Feldstein, S. B. (2004). Synoptic View of the North Atlantic Oscillation. *Journal of the Atmospheric Sciences*, 61(2), 121–144. doi:10.1175/1520-0469(2004)061<0121:SVOTNA>2.0.CO;2
Blessing, S., Fraedrich, K., Junge, M., Kunz, T., & Lunkeit, F. (2005). Daily North-Atlantic Oscillation (NAO) index: Statistics and its stratospheric polar vortex dependence. *Meteorologische Zeitschrift*, 14(6), 763–769. doi:10.1127/0941-2948/2005/0085
Buehler, T., Raible, C. C., & Stocker, T. F. (2011). The relationship of winter season North Atlantic blocking frequencies to extreme cold or dry spells in the ERA-40. *Tellus A*, 63(2), 212–222. doi:10.1111/j.1600-0870.2010.00492.x
Butler, A. H., Seidel, D. J., Hardiman, S. C., Butchart, N., Birner, T., & Match, A. (2015). 
Defining Sudden Stratospheric Warmings. *Bulletin of the American Meteorological Society*, 96(11), 1913–1928. doi:10.1175/BAMS-D-13-00173.1
Cattiaux, J., Vautard, R., Cassou, C., Yiou, P., Masson-Delmotte, V., & Codron, F. (2010). Winter 2010 in Europe: A cold extreme in a warming climate. *Geophysical Research Letters*, 37(20). doi:10.1029/2010GL044613
Charlton-Perez, A. J., Ferranti, L., & Lee, R. W. (2018). The influence of the stratospheric state on North Atlantic weather regimes. *Quarterly Journal of the Royal Meteorological Society*, 144(713), 1140–1151. doi:10.1002/qj.3280
Charlton, A. J. & Polvani, L. M. (2007). A New Look at Stratospheric Sudden Warmings. Part I: Climatology and Modeling Benchmarks. *Journal of Climate*, 20(3), 449–469. doi:10.1175/JCLI3996.1
Coy, L. & Pawson, S. (2015). The Major Stratospheric Sudden Warming of January 2013: Analyses and Forecasts in the GEOS-5 Data Assimilation System. *Monthly Weather Review*, 143(2), 491–510. doi:10.1175/MWR-D-14-00023.1
Domeisen, D. I. V., Grams, C. M., & Papritz, L. (2020). The role of North Atlantic–European weather regimes in the surface impact of sudden stratospheric warming events. *Weather and Climate Dynamics Discussions*, 2020, 1–24. doi:10.5194/wcd-2019-16
Domeisen, D. I. V. (2019). Estimating the Frequency of Sudden Stratospheric Warming Events From Surface Observations of the North Atlantic Oscillation. *Journal of Geophysical Research: Atmospheres*, 124(6), 3180–3194. doi:10.1029/2018JD030077
Duchon, C. E. (1979). Lanczos Filtering in One and Two Dimensions. *Journal of Applied Meteorology*, 18(8), 1016–1022. doi:10.1175/1520-0450(1979)018<1016:LFIOAT>2.0.CO;2
Garfinkel, C. I., Son, S.-W., Song, K., Aquila, V., & Oman, L. D. (2017). Stratospheric variability contributed to and sustained the recent hiatus in Eurasian winter warming. *Geophysical Research Letters*, 44(1), 374–382. doi:10.1002/2016GL072035
Hinssen, Y., van Delden, A., & Opsteegh, T. (2011). 
Influence of sudden stratospheric warmings on tropospheric winds. *Meteorologische Zeitschrift*, 20(3), 259–266. doi:10.1127/0941-2948/2011/0503
Holton, J. R. (2010). *An Introduction to Dynamic Meteorology*. International Geophysics Series. New York: Academic Press.
Hurrell, J., Kushnir, Y., Ottersen, G., & Visbeck, M. (2003). The North Atlantic Oscillation. *Geophys. Monogr. Ser.*, 134, 603–605. doi:10.1029/134GM01
Jia, X. J., Derome, J., & Lin, H. (2007). Comparison of the Life Cycles of the NAO Using Different Definitions. *Journal of Climate*, 20(24), 5992–6011. doi:10.1175/2007JCLI1408.1
Jung, T., Vitart, F., Ferranti, L., & Morcrette, J.-J. (2011). Origin and predictability of the extreme negative NAO winter of 2009/10. *Geophysical Research Letters*, 38(7). doi:10.1029/2011GL046786
Karpechko, A. Y., Charlton-Perez, A., Balmaseda, M., Tyrrell, N., & Vitart, F. (2018). Predicting Sudden Stratospheric Warming 2018 and Its Climate Impacts With a Multimodel Ensemble. *Geophysical Research Letters*, 45(24), 13,538–13,546. doi:10.1029/2018GL081091
Kautz, L.-A., Polichtchouk, I., Birner, T., Garny, H., & Pinto, J. G. (2020). Enhanced extended-range predictability of the 2018 late-winter Eurasian cold spell due to the stratosphere. *Quarterly Journal of the Royal Meteorological Society*, 146(727), 1040–1055. doi:10.1002/qj.3724
Kidston, J., Scaife, A., Hardiman, S., Mitchell, D., Butchart, N., Baldwin, M., & Gray, L. (2015). Stratospheric influence on tropospheric jet streams, storm tracks and surface weather. *Nature Geoscience*, 8, 433–440. doi:10.1038/ngeo2424
Leckebusch, G. C., Kapala, A., Maechel, H., Pinto, J. G., & Reyers, M. (2008). Indizes der Nordatlantischen und Arktischen Oszillation. *promet*, 34(3/4), 95–100.
Lee, S. H., Charlton-Perez, A. J., Furtado, J. C., & Woolnough, S. J. (2019). Abrupt Stratospheric Vortex Weakening Associated With North Atlantic Anticyclonic Wave Breaking. 
*Journal of Geophysical Research: Atmospheres*, 124(15), 8563–8575. doi:10.1029/2019JD030940
Lehtonen, I. & Karpechko, A. Y. (2016). Observed and modeled tropospheric cold anomalies associated with sudden stratospheric warmings. *Journal of Geophysical Research: Atmospheres*, 121(4), 1591–1610. doi:10.1002/2015JD023860
Lim, G. H. & Wallace, J. M. (1991). Structure and Evolution of Baroclinic Waves as Inferred from Regression Analysis. *Journal of the Atmospheric Sciences*, 48(15), 1718–1732. doi:10.1175/1520-0469(1991)048<1718:SAEOBW>2.0.CO;2
Limpasuvan, V., Thompson, D. W. J., & Hartmann, D. L. (2004). The Life Cycle of the Northern Hemisphere Sudden Stratospheric Warmings. *Journal of Climate*, 17(13), 2584–2596. doi:10.1175/1520-0442(2004)017<2584:TLCOTN>2.0.CO;2
Liu, Q. (1994). On the definition and persistence of blocking. *Tellus A*, 46(3), 286–298. doi:10.1034/j.1600-0870.1994.t01-2-00004.x
Manney, G. L., Sabutis, J. L., & Swinbank, R. (2001). A unique stratospheric warming event in November 2000. *Geophysical Research Letters*, 28(13), 2629–2632. doi:10.1029/2001GL012973
Manney, G., Schwartz, M., Krüger, K., Santee, M., Pawson, S., Lee, J., … Livesey, N. (2009). Aura Microwave Limb Sounder observations of dynamics and transport during the record-breaking 2009 Arctic stratospheric major warming. *Geophysical Research Letters*, 36. doi:10.1029/2009GL038586
Martius, O., Polvani, L. M., & Davies, H. C. (2009). Blocking precursors to stratospheric sudden warming events. *Geophysical Research Letters*, 36(14). doi:10.1029/2009GL038776
Matsuno, T. (1971). A Dynamical Model of the Stratospheric Sudden Warming. *Journal of the Atmospheric Sciences*, 28(8), 1479–1494. doi:10.1175/1520-0469(1971)028<1479:ADMOTS>2.0.CO;2
Pelly, J. L. & Hoskins, B. J. (2003). A New Perspective on Blocking. *Journal of the Atmospheric Sciences*, 60(5), 743–755. doi:10.1175/1520-0469(2003)060<0743:ANPOB>2.0.CO;2
Pinto, J. G., Reyers, M., & Ulbrich, U. (2011). 
The variable link between PNA and NAO in observations and in multi-century CGCM simulations. *Climate Dynamics*, 36(1), 337–354. doi:10.1007/s00382-010-0770-x
Santos, J. A., Woollings, T., & Pinto, J. G. (2013). Are the Winters 2010 and 2012 Archetypes Exhibiting Extreme Opposite Behavior of the North Atlantic Jet Stream? *Monthly Weather Review*, 141(10), 3626–3640. doi:10.1175/MWR-D-13-00024.1
Schneidereit, A., Peters, D. H. W., Grams, C. M., Quinting, J. F., Keller, J. H., Wolf, G., … Martius, O. (2017). Enhanced Tropospheric Wave Forcing of Two Anticyclones in the Prephase of the January 2009 Major Stratospheric Sudden Warming Event. *Monthly Weather Review*, 145(5), 1797–1815. doi:10.1175/MWR-D-16-0242.1
Smid, M., Russo, S., Costa, A., Granell, C., & Pebesma, E. (2019). Ranking European capitals by exposure to heat waves and cold waves. *Urban Climate*, 27, 388–402. doi:10.1016/j.uclim.2018.12.010
Tibaldi, S. & Molteni, F. (1990). On the operational predictability of blocking. *Tellus A*, 42(3), 343–365. doi:10.1034/j.1600-0870.1990.t01-2-00003.x
Tripathi, O. P., Baldwin, M., Charlton-Perez, A., Charron, M., Cheung, J. C. H., Eckermann, S. D., … Stockdale, T. (2016). Examining the Predictability of the Stratospheric Sudden Warming of January 2013 Using Multiple NWP Systems. *Monthly Weather Review*, 144(5), 1935–1960. doi:10.1175/MWR-D-15-0010.1
Tripathi, O. P., Baldwin, M., Charlton-Perez, A., Charron, M., Eckermann, S. D., Gerber, E., … Son, S.-W. (2015). The predictability of the extratropical stratosphere on monthly timescales and its impact on the skill of tropospheric forecasts. *Quarterly Journal of the Royal Meteorological Society*, 141(689), 987–1003. doi:10.1002/qj.2432
Vitart, F., Ardilouze, C., Bonet, A., Brookshaw, A., Chen, M., Codorean, C., … Zhang, L. (2017). The Subseasonal to Seasonal (S2S) Prediction Project Database. *Bulletin of the American Meteorological Society*, 98(1), 163–173. 
doi:10.1175/BAMS-D-16-0017.1
Vitart, F., Robertson, A. W., & Anderson, D. L. T. (2012). Subseasonal to Seasonal Prediction Project: Bridging the gap between weather and climate. *WMO Bulletin*, 61.
Wang, C., Liu, H., & Lee, S.-K. (2010). The record-breaking cold temperatures during the winter of 2009/2010 in the Northern Hemisphere. *Atmospheric Science Letters*, 11(3), 161–168. doi:10.1002/asl.278
Wang, L. & Chen, W. (2010). Downward Arctic Oscillation signal associated with moderate weak stratospheric polar vortex and the cold December 2009. *Geophysical Research Letters*, 37(9). doi:10.1029/2010GL042659
Woollings, T., Barriopedro, D., Methven, J., Son, S.-W., Martius, O., Harvey, B., … Seneviratne, S. (2018). Blocking and its Response to Climate Change. *Current Climate Change Reports*, 4(3), 287–300. doi:10.1007/s40641-018-0108-z
Woollings, T., Hannachi, A., & Hoskins, B. (2010). Variability of the North Atlantic eddy-driven jet stream. *Quarterly Journal of the Royal Meteorological Society*, 136(649), 856–868. doi:10.1002/qj.625
Yu, Y., Cai, M., Shi, C., & Ren, R. (2018). On the Linkage among Strong Stratospheric Mass Circulation, Stratospheric Sudden Warming, and Cold Weather Events. *Monthly Weather Review*, 146(9), 2717–2739. doi:10.1175/MWR-D-18-0110.1
Zhang, S. & Tian, W. (2019). The effects of stratospheric meridional circulation on surface pressure and tropospheric meridional circulation. *Climate Dynamics*, 53(11), 6961–6977. doi:10.1007/s00382-019-04968-x

Acknowledgement

First of all, I thank Prof. Dr. Joaquim Pinto for advising me during the time of my master's thesis and for trusting me to work from home during this spring and summer. I always found an open door when questions arose and felt very welcome to take advantage of it. Prof. Dr. Peter Braesicke receives my thanks for being my co-advisor and a great help in the interpretation of my results. With both Prof. Pinto and Prof. Braesicke, I was able to gain a comprehensive view of the stratospheric and tropospheric processes and their possible interaction, and I developed a good way to analyze and represent them in my thesis, never forgetting to scrutinize my results. The master's thesis is one of the things I enjoyed most during my studies. To a large extent this is due to the great supervision by Dr. Lisa-Ann Kautz. She has always been there, no matter what kind of issue I was facing, and eager to find a solution quickly. Thank you so much for being there for me, in person and online! The collaboration with the W2W C8 team provided great support in the second half of my thesis. Especially Prof. Dr. Thomas Birner and Jonas Späth receive my thanks for helping me with detailed scientific discussions and technical issues. In this regard I would also like to thank Xiaoyang Chen for his scientific advice. The working group "Regional Climate and Weather Hazards" made me feel very welcome, and its members introduced me to a broad range of subjects in which to improve my knowledge. I really enjoyed being part of it. In this context I also want to thank Florian Becker for his expertise on cold wave indices, which provided me with the necessary information to choose a suitable index for the analysis in my thesis. Last but not least I would like to thank Gabi Klinck for the technical support. Although she had a lot of other work to do, she always found time to help with technical issues.

I truthfully declare that I have prepared this thesis independently, that I have fully and accurately listed all aids used, and that I have marked everything taken from the works of others, whether unchanged or with modifications.

Karlsruhe, 16.07.2020
Selina Kiefer
IN THE SUPREME COURT OF BRITISH COLUMBIA IN THE MATTER OF THE COMPANIES' CREDITORS ARRANGEMENT ACT, R.S.C. 1985, c. C-36, AS AMENDED AND IN THE MATTER OF THE BUSINESS CORPORATIONS ACT, S.B.C. 2002, c. 57, AS AMENDED AND IN THE MATTER OF THE CANADA BUSINESS CORPORATIONS ACT, R.S.C. 1985, c. C-44, AS AMENDED AND IN THE MATTER OF A PLAN OF COMPROMISE AND ARRANGEMENT OF ALL CANADIAN INVESTMENT CORPORATION APPLICATION RESPONSE Application response of: Those preferred shareholders of All Canadian Investment Corporation (the "Company") who did not request redemption of their shares in the Company (the "application respondents" or the "Non-Redeeming Shareholders") THIS IS A RESPONSE TO the Notice of Application of the Petitioner filed 25/01/2019. Part 1: ORDERS CONSENTED TO The application respondents consent to the granting of the orders set out in the following paragraphs of Part 1 of the notice of application on the following terms: Paragraphs 1(a), (b), and (c). Part 2: ORDERS OPPOSED The application respondents oppose the granting of the orders set out in paragraphs NIL of Part 1 of the notice of application. Part 3: ORDERS ON WHICH NO POSITION IS TAKEN The application respondents take no position on the granting of the orders set out in paragraphs NIL of Part 1 of the notice of application. Part 4: FACTUAL BASIS 1. The application respondents agree with the facts set out in Part 2: Factual Basis of the notice of application. 2. The application respondents take note of the additional facts contained in the Application Response of the Redeeming Shareholders filed April 10, 2019 (the "Redeeming Shareholders' Response") and will refer to such facts as required. 3. Terms that are not defined herein shall otherwise have the meaning as set out in the notice of application. 4. The application respondents wish to emphasize some of the facts contained in the Factual Bases of the notice of application and the Redeeming Shareholders' Response. 
Terms of Redemption of Preferred Shares 5. All preferred shareholders bought preferred shares from the Company following receipt of an Offering Memorandum setting out terms, including terms of redemption, of the preferred share offerings. 6. The various amended and restated Offering Memoranda (collectively, the "Offering Memoranda") state the terms and conditions of the process of redemption of preferred shares. All Offering Memoranda refer to restrictions on redemption, including the restriction of insolvency, the requirement to maintain a certain level of cash reserves, the limit on the amount of preferred shares the Company may redeem in any fiscal year and the maintenance requirements of its asset portfolio. Affidavit #10 of Donald Bergman made on January 24, 2019, Exhibits F to Y. 7. The Offering Memoranda state that a redemption request is subject to the exercise of the directors' discretion. They also advise, in bold print, that the adoption of the Company's redemption policy does not fetter the discretion of the directors to amend or cancel such policy in whole or in part, to adopt an alternative policy regarding redemption of shares or to refuse to consent to a redemption. 8. The Offering Memoranda also refer to a Policy (the "Policy") adopted by the Company regulating the redemption of preferred shares. They state that a copy of the Policy is available from the Company on request. The Policy sets out restrictions on redemption of preferred shares including that the Company's director must consent to redemption pursuant to terms and conditions set by the director in his sole discretion. 9. All preferred shares were issued by the Company pursuant to its Articles. Section 27.4 sets out the terms of redemption of preferred shares, including that the directors in their sole discretion must consent to redemption. Part 5: LEGAL BASIS (A) Common Law Presumption of Equality 1. There is a presumption at common law (referred to by L.C.B. 
Gower as the "initial presumption") that there is equality among shareholders, analogous to the presumption of equality among partners in a partnership. 2. The equality among shareholders arises from equality among shares. Rights related to a share attach to the share, not the shareholder. *Bowater Canadian Ltd. v. R. L. Crain Inc.* (Ont. C.A.), 62 O.R. (2d) 752 at p. 3 (B) **Statutory Treatment of Share Equality** 3. The common law presumption of share equality is now included in compulsory legislation governing British Columbia corporations. 4. The Company was incorporated under the *Business Corporations Act*, S.B.C. 2002, c. 57 (the "BCA"). Pursuant to ss. 11(h) and 58 of the BCA, all special rights and restrictions attached to each class of shares must be set out in the notice of articles of a company. 5. The articles of a company "must... set out, for each class... of shares, all of the special rights or restrictions that are attached to the shares of that class". *BCA*, s. 12(2)(b). 6. S. 54(3) of the BCA requires a change in the share structure of a company to be made by altering the company's notice of articles to effect the change, or that the change be made by a form of resolution specified by the articles or by a special resolution. 7. S. 58(2) of the BCA states that special rights or restrictions attached to a share are not varied or deleted until the articles have been altered to reflect the variation or deletion. 8. The presumption of equality among shares is reflected in ss. 59(3) and (4) of the BCA: "(3) Every share must be equal to every other share, subject to special rights or restrictions attached to any such share under the memorandum or articles. (4) Subject to subsection (6), each share of a class of shares must have attached to it the same special rights or restrictions as are attached to every other share of that class of shares." 9. S. 61 of the BCA states as follows: "61. 
A right or special right attached to issued shares must not be prejudiced or interfered with under this Act or under the memorandum, notice of articles or articles unless the shareholders holding shares of the class or series of shares to which the right or special right is attached consent by a special separate resolution of those shareholders." (C) **CCAA on Equity Claims** 10. S. 2(1) of the *Companies' Creditors Arrangement Act*, R.S.C. 1985, c. C-36 (the "CCAA"), defines "equity claim" as follows: "equity claim means a claim that is in respect of an equity interest, including a claim for, among others, (a) a dividend or similar payment, (b) a return of capital, (c) a redemption or retraction obligation, (d) a monetary loss resulting from the ownership, purchase or sale of an equity interest or from the rescission, or, in Quebec, the annulment, of a purchase or sale of an equity interest, or (e) contribution or indemnity in respect of a claim referred to in any of paragraphs (a) to (d); " 11. S. 22.1 of the CCAA states the following with respect to priority among equity claims: "Despite subsection 22(1), creditors having equity claims are to be in the same class of creditors in relation to those claims unless the court orders otherwise and may not, as members of that class, vote at any meeting unless the court orders otherwise." 12. In most instances of companies filing for protection under the CCAA, there are insufficient assets with which to pay debt claims. Section 6(8) of the CCAA codifies the common law rule that in insolvency situations, debt claims must be paid in full before equity claims: "Payment-equity claims (8) No compromise or arrangement that provides for the payment of an equity claim is to be sanctioned by the court unless it provides that all claims that are not equity claims are to be paid in full before the equity claim is to be paid." (D) Characterization of Debt and Equity Claims during Insolvency Proceedings 13. 
There appear to be insufficient assets of the Company to pay its creditors in full if shareholders are considered to be debt claimants rather than equity claimants. 14. Putting preferred shareholders in the same class as creditors would also leave other preferred shareholders with a lower return, or perhaps none, than if all preferred shareholders remained equity claimants. 15. Claims made in a CCAA or other insolvency proceeding by a shareholder, or a former shareholder, are scrutinized to determine whether in substance the claims are in debt or in equity. To do this, the courts try to find the "true nature of the transaction" by looking at the words chosen by the parties to reflect their intentions. If the words prove inadequate to support a conclusion, then the admissible surrounding circumstances are reviewed for assistance. *Canada Deposit Insurance Corp. v. Canadian Commercial Bank*, [1992] 3 SCR 558 at paras 46 and 51 ("Canadian Commercial"). 16. In *Royal Bank of Canada v. Central Capital Corp.*, [1996] O.J. No. 359 (Ont. C.A.) ("Central Capital"), the majority stated at para. 126: "Although the relationship between each appellant and the company has characteristics of debt and equity, in substance both... are shareholders, not creditors of Central Capital. Neither the existence of their retraction rights nor the exercise of those rights converts them into creditors." 17. Central Capital was followed in *Nelson Financial Group Ltd.*, 2010 ONSC 6229 ("Nelson Financial"). 18. In *Earthfirst Canada Inc. (Re)*, [2009] A.J. No. 749 ("Earthfirst"), a case where shareholders in a CCAA proceeding advanced debt claims, the court held that the claims were in equity, saying at para. 5: "Counsel for the appellant stresses the express indemnity covenant here, but in our view, it is ancillary to the underlying right, as found by the Chambers Judge. Characterization flows from the underlying right, not from the mechanism for its enforcement, nor from its non-performance." 19. 
In *JED Oil Inc. (Re)*, [2010] A.J. No. 512, the court characterized claims of preferred shareholders as equity claims, stating at para. 11 that a "corporation cannot issue shares that in effect make the shareholders creditors". 20. A claim of indemnity advanced by a shareholder for the recovery of a share purchase price on account of alleged breach of contract and fraud inducing a share purchase is an equity claim, not a claim in debt. The legal tools used are not the important thing. It is the fact they are being used to recover an equity investment that is important. *Return On Innovation v. Gandi Innovations*, 2011 ONSC 5018, at para 59 ("Return On Innovation"). 21. The definition of "equity claim" added to the CCAA in 2009 should be broadly interpreted to include instances that might not otherwise be within its plain meaning. An "equity claim" is not confined to a claim advanced by the holder of an equity interest; the definition is sufficiently clear to alter the pre-existing common-law by bringing into play a more expansive approach to what an equity claim is. *Sino-Forest Corp. (Re)*, 2012 ONCA 816. 22. The expansive definition of equity claim in the CCAA appears intended to preserve the original status of claimants who started their relationship with a company as shareholders, as opposed to allowing them to transform into debt claimants as easily as they might outside of the CCAA context. 23. Claims of shareholders who issue notice of redemption prior to bankruptcy or a CCAA filing, even having obtained default judgement for the unpaid redemption amount, are claims in equity, not in debt. *Bul River Mineral Corporation (Re)*, 2014 BCSC 1732 at para 109 ("Bul River"); *Dexior Financial Inc. (Re)*, 2011 BCSC 348 at para 12. (E) Altering the Contract 24. 
The agreement, understanding or contract (defined for present purposes as the "Contract") created between the Company and the preferred shareholders consists of the Articles of Incorporation (the "Articles") of the Company, the Offering Memoranda and, if received by a shareholder, the Policy. All of these refer to the element of directors' discretion in the process of the redemption of shares. 25. The rigidly formal and regulated process of creating detailed rights and restrictions on redemption, including the role of directors' discretion, in the Articles is incompatible with generally worded advertising statements of the Company that mention the redemption of shares having the effect of amending or even "clarifying" elements of the Contract. 26. Allowing advertisements to override the wording of the Contract, in particular the Articles, raises public policy concerns of undermining the certainty created by specific, accessible and generally known corporate legislation and its required procedures to create and modify rights and restrictions attached to shares. To do so would permit different "contracts" between a company and its shareholders of the same class. This would in turn encourage litigation based on alleged differences created among shares of the same class. 27. The Articles state that redemption of shares is subject to the exercise of directors' discretion. The directors of the Company are allowed to clarify the rights and restrictions pursuant to Article 27.6. However, the ability to informally clarify restrictions attached to shares set out in the Articles, such as the exercise of directors' discretion, cannot be presumed to include the ability to eliminate that restriction. 28. Advertising material is generally understood by its nature to be less deliberate and formal in its wording (often referred to as 'puff') than language used in the documents comprising the Contract. 
All purchasers of preferred shares were required to confirm, pursuant to the Offering Memoranda, that they were sophisticated purchasers. 29. To form a collateral contract alongside the Contract, formal elements of contract creation must exist, including certainty of terms and an intention to enter into a binding agreement. G.H.L. Fridman, *The Law of Contract*, (4th ed.) (Thomson Canada Limited 1999) at pp. 535-6. 30. If the management of the Company misrepresented the process of redemption in advertisements to certain shareholders, damages may be claimed for those misrepresentations, but such damages would still be "equity claims". In any event, other shareholders and the Company's creditors should not be thereby prejudiced. (F) Directors' Discretion 31. There is more, not less, need in properly managing a corporate business for the exercise of directors' discretion in the processing of redemption requests in a company that is undergoing difficult financial circumstances. 32. The Offering Memoranda, from 2003 until 2015, state in bold print the following: "Redemption of Preferred Shares: The adoption of its policy regarding the redemption of Preferred Shares does not fetter the discretion of the Directors of the Company from time to time to amend or cancel such policy in whole or in part or to adopt an alternative policy with respect to the redemption of Preferred Shares, or to refuse to consent to a Requesting Shareholders request to have their Preferred Shares redeemed by the Company." (p.10) (G) Fairness 33. Fairness is a fundamental objective of CCAA proceedings. The CCAA seeks to recognize legitimate expectations to the extent possible and not to allow those expectations to be unexpectedly subverted. 
The preferred shareholders started their relationship with the Company with the legitimate expectation of being treated as preferred shareholders on a winding up; creditors dealing with the Company held the legitimate expectation that their financial recovery would not be diluted on a winding up by shareholders transforming into creditors. Similar claims should be treated in a similar fashion. Bul River, at paras 55 and 109. 34. Shareholders are entitled to assume that the articles of a company will prevail and that their priority position established by the articles will not be altered except through formally established procedures. 35. Presenting advertising statements to some present or future shareholders that have the effect of clarifying the element of directors' discretion in the redemption process so as to eliminate the role of directors' discretion is unfairly prejudicial to other shareholders who relied on the Contract for the safeguards allowed by the exercise of discretion in considering redemption requests. Part 6: MATERIAL TO BE RELIED ON 1. Materials that have been filed by the Petitioner and the Redeeming Shareholders in this proceeding. The application respondents estimate that the application will take two days. ☐ The application respondent has filed in this proceeding a document that contains the application respondent's address for service. The application respondent has not filed in this proceeding a document that contains an address for service. The application respondent's ADDRESS FOR SERVICE is: c/o 700 - 401 West Georgia Street, Vancouver, BC, V6B 5A1. Date: 22 May 2019 Signature of Mark Davies, lawyer for the application respondents THIS RESPONSE is filed by Mark Davies, of the firm of Richards Buell Sutton LLP, whose place of business and address for service is 700 - 401 West Georgia Street, Vancouver, BC V6B 5A1, Telephone 604.682.3664.
Belarusian case study of P2P lending market digitalization: state-of-the-art, needs and perspectives

Joanna Koczar\textsuperscript{1}, Yury Karaleu\textsuperscript{2,*}, and Aliaksandr Dudkin\textsuperscript{2,3}

\textsuperscript{1}Wroclaw University of Economics and Business, Komandorska Str. 118/120, 53-345 Wroclaw, Poland
\textsuperscript{2}School of Business of Belarusian State University, Oboinaya Str. 7, 220004 Minsk, Belarus
\textsuperscript{3}JSC BSB Bank, Pobediteley Ave. 23/4, 220004 Minsk, Belarus

Abstract. The paper investigates the present state and development prospects of the Belarusian mutual lending market, the business model and operational results of the KUBYSHKA crowdlending Internet platform, the possibilities for cooperation between Belarusian banks and crowdlending platforms, and the obstacles to the development of the mutual lending market.

1 Introduction

Since the early 2000s, with the development of information technology, the increasing level of globalisation and the anonymous nature of money flows, a new innovative element of the money market (in the non-banking sector) has emerged – the market for mutual lending, or crowdlending. The uniqueness of crowdlending, or marketplace lending (in contrast to other segments of the non-banking sector of the financial market), is the absence of a traditional financial intermediary (primarily a bank) in the process of transferring money from an individual (lender or investor) to the borrower. The latter could be:
- a legal entity (mostly an SME) – so-called person-to-business (P2B) lending, i.e. lending to a business by a private person;
- another individual – so-called peer-to-peer (P2P) lending, i.e. lending by one private person to another [1, 2].
The world crowdlending market consists mainly of the Asian market (an 84% share, with a turnover of USD 420 billion in 2018), the American market (a 12% share, USD 60 billion) and the European market (a 3% share, USD 15 billion); the remaining 1% is split between Africa and Australia [3]. The P2P lending market, which is the object of this survey, is not fundamentally new, but only recently has it become relatively cheap and easy to find enough investors to finance the amount requested by a borrower through virtual crowdlending platforms. After a loan agreement is concluded between the parties (often via the platform), the crowdlending platform transfers the invested loan amount from the investor to the borrower. In return, the borrower has to repay the loan amount to the lender over a predefined period, plus interest. Credit checks, rating, brokering, processing, operation, etc. are handled by the crowdlending platform. Even payment delays or a borrower's inability to pay are usually handled by the platform, without the investors themselves having to intervene, in exchange for a fee (mostly paid by the borrower) [3, 4]. Crowdlending platforms thus "epitomise the idea of disintermediated 'middle-man-free' access to finance, especially insofar as platforms are conceived as technological means that allow borrowers to directly access available funding on the one hand, and lenders to invest in specific loans on the other" [5]. The first crowdlending platform in the world, ZOPA (Zone of Possible Agreement), was founded in the UK in 2005 and remains one of the largest and most authoritative. Over the past 15 years, half a million borrowers have received loans through ZOPA totalling over GBP 5 billion, and the site's investors (lenders) have earned more than GBP 250 million in interest [6].
In 2006, the similar crowdlending platforms PROSPER and LENDING CLUB appeared in the United States. Since then, the practice of P2P lending has spread throughout the world, both in high-income economies such as the UK, USA, Sweden, Australia, Canada and China, and in emerging economies such as India and Brazil. Since 2012, similar services have also appeared in Russia; ZAYMIGO, LOANBERRY and Вдолг.ру are the most prominent today [7, 8]. Summing up, we consider remote, online mutual lending between individuals over the Internet to be an innovative digital technology in the field of finance (a fintech project) that fully corresponds to the digital transformation trend across all sectors of the global economy.

2 Belarusian market of mutual lending

The Belarusian financial market includes the banking system (the main and largest market participant) as well as the non-banking sector (leasing companies, forex companies, insurance organizations, professional participants in the securities market, microfinance organizations, etc.). The National Bank of the Republic of Belarus (National Bank) and the Belarusian Government strongly encourage the development of, and competition between, the various participants in the financial market. The intensification of competition, and even the preferential development of the non-banking sector, were declared strategic objectives in the Strategy for the Development of the Financial Market of the Republic of Belarus until 2020, approved by a joint resolution of the Council of Ministers and the National Bank of March 28, 2017, No. 229/6. Despite the declared goals, the Belarusian market for mutual lending is still in its infancy. Until now, the granting of loans between individuals has not been properly regulated in Belarusian legislation [9].
The work on preparing the draft Decree regulating the activities of such services in the Republic of Belarus, initiated by the National Bank back in 2017, has effectively been suspended. At the same time, the report of the First Deputy Chairman of the Board of the National Bank, Sergei Kalechits, proposes paying special attention to the development of online borrowing services [9].

3 Business model and operational results of KUBYSHKA crowdlending Internet platform

Today, the Belarusian market of mutual lending is represented by a single player – the professional intermediary KUBYSHKA crowdlending Internet platform, founded by Financial and Analytical Bureau LLC (the Holding Company). The KUBYSHKA platform was launched in December 2016. The National Bank, as the main regulator of the national financial market, was promptly informed by the founder about the start of activities and about the business model of the platform, based on the online Internet resource www.kubyshka.by [10]. Today, 1,300 individuals (borrowers and lenders) are registered as users of the platform. Since the beginning of its activity, more than 3,000 loans totalling USD 900 thousand (in equivalent) have been issued through the Internet site. As shown in Fig. 1, the current portfolio of 900 loans, by reference to the principal outstanding, is equal to USD 183 thousand.

Fig. 1. The dynamic of the loan portfolio and total loan amount, USD (in equivalent).

According to the business model of the KUBYSHKA platform, borrowers are registered after they have signed, in the presence of a representative of the company, a loan adhesion agreement and a user agreement, and after the individual's consent to the processing of his/her personal data from the Credit Register has been obtained. Registration of lenders is carried out remotely.
Despite the fact that the Holding Company is not a party to the agreement and is not responsible for any financial risks of non-repayment of debts, it has assumed responsibility for assessing the solvency of borrowers and guarantors (if any), assigning a credit rating (basic, from 0.0 to 5.0, or advanced, from 0.0 to 10.0) and setting a personal credit limit. As practice has shown, most borrowers undergo only the initial solvency assessment, which requires neither supporting documents and certificates nor guarantors, and which therefore results in a basic credit rating. The average ratings of borrowers entering into loan deals in local (BYN) and foreign currency are 3.2 and 3.4, respectively (Fig. 2). A specific feature of the KUBYSHKA platform is that the borrower sets the loan terms and conditions (the size of the loan, the loan term, the interest rate and the loan currency) when submitting a loan application to the platform. If a lender accepts the submitted loan application, he transfers the money to the borrower; by doing so, the lender joins the loan adhesion agreement, and from that moment the loan deal is considered concluded. The schedule for the repayment of the loan debt and accrued interest is established automatically at the conclusion of the deal using the annuity method. Settlements between the parties for issuing and repaying loans and paying interest are carried out remotely via instant card-to-card money transfers using the "virtual terminal" of the partner bank (OJSC Belgazprombank), which is integrated into the platform.

4 Preconditions for the development of the market for mutual lending

The political and socio-economic uncertainty in Belarus, heightened since the second half of 2020, has become the main reason for the large-scale outflow of deposits of individuals from the banking system.
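The annuity method mentioned above fixes a constant periodic payment and splits each payment into an interest part and a principal part. A minimal sketch of such a schedule, assuming monthly payments and a nominal annual rate; the function names and parameters are illustrative, not the platform's actual implementation:

```python
def annuity_payment(principal, annual_rate, n_months):
    """Constant monthly payment under the annuity method."""
    r = annual_rate / 12  # assumption: nominal rate with monthly compounding
    if r == 0:
        return principal / n_months
    return principal * r / (1 - (1 + r) ** -n_months)

def repayment_schedule(principal, annual_rate, n_months):
    """Month-by-month split of each payment into interest and principal."""
    payment = annuity_payment(principal, annual_rate, n_months)
    r = annual_rate / 12
    balance, rows = principal, []
    for month in range(1, n_months + 1):
        interest = balance * r          # interest accrued on the remaining balance
        redemption = payment - interest  # the rest of the payment repays principal
        balance -= redemption
        rows.append((month, round(payment, 2), round(interest, 2), round(redemption, 2)))
    return rows
```

For example, a loan of 1,000 over 12 months at a 12% nominal annual rate yields a fixed payment of about 88.85 per month, with the interest share declining and the principal share growing over the life of the loan.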
As shown in the Analytical Review of the National Bank "Survey on the implementation of the banking principle 'Know your customer' (KYC approach)" in relation to attracting term deposits of individuals, in 2020 deposits in foreign currency decreased from USD 6.3 billion to USD 4.7 billion, or by more than 25%, and deposits in BYN from BYN 5.2 billion to BYN 4.6 billion, or by 11.6% [11] (Fig. 3). This, in turn, has led to unstable capital markets, a significant decrease in the liquidity of the banking sector and a reduction in the borrowing available to individuals (or even a total suspension of lending), accompanied by a rise in the cost of borrowing [12] (Fig. 4). Such a large-scale decrease in the volume of bank lending to individuals pushes them to seek alternative sources of financing for their current needs. Only an insignificant part of banking customers became clients of the KUBYSHKA platform; a significant part of them apparently turned to participants in the shadow banking system, whose qualitative and quantitative evaluation is extremely difficult in Belarus due to the lack of information required for monitoring. At the same time, banking customers with limited access to bank resources who became users of the KUBYSHKA platform, as well as previously registered borrowers, demonstrated an urgent need for credit resources, which led to rising interest rates for new loan applications. Fig. 5 shows the dynamic of interest rates for new loan applications over the past two years. The weighted average interest rates reached a maximum by January 2021: 243% per annum for transactions in BYN and 97% per annum for transactions in foreign currency.

Fig. 5. The dynamic of interest rates for new loan applications over the past two years.
The described situation would not have been possible in a balanced market environment providing individuals with higher credit ratings free access to credit resources through crowdlending Internet platforms, with reasonable personal credit limits and sensible interest rates. On the other hand, such a rise in interest rates in the absence of obvious factors of a sharp deterioration in borrowers' solvency (as evidenced by the rational level of the resulting risk-reward ratio) is quite attractive for the other group of KUBYSHKA platform users – lenders. The profitability of lenders' investments is much higher than the interest rates on bank deposits and investments in securities on the Belarusian stock market. At the same time, the risk of lending through crowdlending platforms is lower than that of operations in the Forex market and, most importantly, it is controlled and managed by the lender himself. The attractiveness of such a model of personal investment is illustrated by the statistics of overdue and toxic debts. As of February 1, 2021, according to KUBYSHKA platform data, more than 81% of the funds invested by creditors had been repaid in full, including interest payments and, in some cases, penalties and fines. The remainder of the current portfolio is classified as standard loans (7.8%), loans with insignificant (often technical) arrears (1.8%) and toxic loans more than 90 days overdue (9.4%) [10]. In order to reduce the risk of non-repayment of issued loans, the Holding Company has assumed responsibility for assisting creditors in the event of default or delay. In particular, disputes between borrowers and lenders are heard in the Arbitration Court established at Financial and Analytical Bureau LLC and registered in the manner established by the legislation.
Despite the attractiveness of the mutual lending model for both borrowers and lenders, as described earlier, a significant part of personal funds is withdrawn from financial turnover and transferred to the unregulated shadow borrowing market, with the attendant risk of criminalization.

5 Needs and perspectives of the development of the Belarusian P2P lending market

The KUBYSHKA platform's extensive operating experience in the real Belarusian economic environment has revealed a number of problems that demand immediate solutions if P2P technology is to develop further. For example, cooperation between banks and crowdlending platforms – in addition to the infancy of the mutual lending market and the absence of a large number of participants – is seriously hampered by banks' fears related to providing joint cross-services. Some experts have raised concerns about an ambiguous interpretation by the National Bank of the Republic of Belarus of the possibility and legality of banks providing cross-services together with crowdlending platforms. In our opinion, such fears are not justified because, under a similar regime, Belarusian banks have been selling the services of insurance companies at their points of sale for several years. This kind of cross-selling raises no doubts because it is an established practice that has never been questioned. In addition, international experience demonstrates only positive effects of the proposed collaboration.
Among the other urgent problems in the development of P2P technologies, the following can be distinguished:
- organization of the monitoring and control of crowdlending platforms as professional financial intermediaries;
- legal regulation of lending deals between individuals;
- introduction of personal bankruptcy;
- legal regulation of the activity of collection agencies;
- inclusion of the history of borrowings on crowdlending platforms in the credit histories of borrowers;
- development of insurance against the risk of non-recovery of funds, etc. [1].
For example, to ensure enforcement in the event of a borrower's default, it seems appropriate to amend the Law On Economic Insolvency (Bankruptcy) dated July 13, 2012, No. 415-3, to introduce the institution of personal bankruptcy. A similar institution exists in many countries of the world (the USA, Germany, England, Sweden, Denmark and Spain), and not long ago it appeared in the Russian Federation [13]. To determine when the granting of loans between individuals qualifies as entrepreneurial activity, it is necessary to clarify the legality of an individual providing no more than two loans without registering as an entrepreneur, which, in our opinion, can serve in the situation under consideration as a working definition of the concept of 'on a regular basis' [9]. Each of the highlighted problems entails a range of measures, activities and new legal mechanisms that deserve separate consideration and are beyond the scope of this article.

6 Conclusions

According to crowdfunding-platforms.com, the European crowdlending market grew by an average of 70% per year in 2013–2018.
Despite the expected slowdown in its growth over the next few years, owing to the tightening of monetary and credit conditions in response to COVID-19 and increased inflationary pressures, specialists estimate that the European crowdlending market will reach USD 38 billion in 2023, representing a forecast average yearly growth of 16% [3]. Taking into account such prospects for the development of the P2P crowdlending market, an incentive for its development at the national level could be the drafting and enactment of a new special legal act regulating the activities of crowdlending platforms (services) as professional intermediaries, as well as amendments to the Law On Economic Insolvency (Bankruptcy) and a number of other regulatory legal acts. The creation of a full-fledged regulatory legal framework would fully legalize mutual lending, ensure proper state control over this market, and increase budget revenues by including the corresponding income of individuals in the taxable base. Integration of the Belarusian banking system with crowdlending Internet platforms, and collaboration between them, would allow banks to maintain and expand their client base and to make more effective use of the intellectual potential of their staff and of the existing territorial network of points of sale. All this would ensure high quality, speed and convenience in managing the personal finances of individuals and would significantly increase the availability of modern financial services for them.

Acknowledgements

The article has been prepared as a part of the project financed by the Ministry of Science and Higher Education in Poland under the programme "Regional Initiative of Excellence" 2019–2022, project number 015/RID/2018/19.

References
1. Y. Y. Karaleu, A. B. Dudkin, *Finance architecture: new solutions in the digital economy*, 272 (2019)
2. V. A. Kuznetsov, *Money and credit*, 1, 65 (2017)
3. Crowdlending Guide: What is it and how to invest?, https://crowdfunding-platforms.com/
4. A. B. Dudkin, *Business. Innovation. Economy*, 1, 52 (2017)
5. V. Bavoso, *J Bank Regul*, 21, 395 (2020)
6. ZOPA, https://www.zopa.com
7. K. Treskova, P2P lending: what is it, https://brobank.ru/
8. A. B. Dudkin, *Legal world*, 6, 78 (2017)
9. D. L. Kalechits, Ensuring financial stability in 2020 and targets for 2021, https://www.nbrb.by/
10. KUBYSHKA, https://kubyshka.by/
11. Analytical review of the National Bank of the Republic of Belarus "Survey on the implementation of the banking principle 'Know your customer' (in relation to attracting time deposits (deposits) of individuals)", https://www.nbrb.by/
12. Statistical Bulletin of the National Bank of the Republic of Belarus, https://www.nbrb.by/
13. Y. Y. Karaleu, *Innovative development of the economy: entrepreneurship, education, science* (2017)
Phase-Field Modeling of Nonlinear Material Behavior

Y.-P. Pellegrini, C. Denoual and L. Truskinovsky

Y.-P. Pellegrini · C. Denoual: CEA, DAM, DIF, F-91297 Arpajon, France; e-mail: firstname.lastname@example.org, email@example.com
L. Truskinovsky: Laboratoire de Mécanique des Solides, CNRS UMR-7649, École Polytechnique, Route de Saclay, F-91128 Palaiseau Cedex, France; e-mail: firstname.lastname@example.org

Abstract Materials that undergo internal transformations are usually described in solid mechanics by multi-well energy functions that account for both elastic and transformational behavior. In order to separate the two effects, physicists instead use phase-field-type theories, where the conventional linear elastic strain is quadratically coupled to an additional field that describes the evolution of the reference state and solely accounts for the nonlinearity. In this paper we propose a systematic method allowing one to split the nonconvex energy into harmonic and nonharmonic parts and to convert a nonconvex mechanical problem into a partially linearized phase-field problem. The main ideas are illustrated using the simplest framework of the Peierls–Nabarro dislocation model.

1 Introduction

Nonconvex energy potentials are used in solid mechanics for the modeling of martensitic transformations [9], plasticity [1] and fracture [25]. Parts of the resulting energy landscapes correspond to sufficiently smooth deformations preserving the locally affine structure of the lattice environment of each atom. Other parts represent highly distorted atomic arrangements associated with either the loss or the reacquisition of nearest neighbors. While deformations of the first type can (often) be described by the conventional strain tensor of (linear) elasticity theory, a representation of the deformations of the second type requires introducing additional internal variables accounting for deviations from the local affinity of the stressed atomic configurations. In particular, these supplementary variables describe the evolution of the local reference state (LRS) from which the elastic deformations are measured [3, 11, 26]. The main difference between the elastic strains and these supplementary internal variables is that the dynamics of the former is typically inertial, while that of the latter is usually overdamped. Sometimes the nonelastic variables can be minimized out, as in the case of deformational plasticity (e.g., [2]). In this paper we deal instead with situations where the internal variables have to be revealed rather than hidden. We assume that the coarse-grained nonconvex energy density $f(\varepsilon)$ is known, either from extrapolations of experimental measurements or from \textit{ab-initio} calculations involving atomic homogeneity constraints. We suppose that the argument $\varepsilon$ of this function, which represents a coarse-grained strain, is small and can be additively split into a linear elastic part $e$ and a phase-field part $\eta$ that accounts for the nonelastic evolution of the LRS. Our next assumption is that $f$ can be represented as a sum of two terms: the elastic energy $f_e$, which depends on $e = \varepsilon - \eta$, and the phase-field energy $g$, which depends on $\eta$. We interpret $f(\varepsilon)$ as the outcome of an adiabatic elimination of the variable $\eta$ and consider the inverse problem of recovering the phase-field energy $g(\eta)$ from the function $f(\varepsilon)$ under the assumption that the function $f_e(e)$ is quadratic. The problem of the identification of $g(\eta)$ reduces to a problem of optimization, and the relation between the 'optimally' related functions $f(\varepsilon)$ and $g(\eta)$ is studied in some prototypical cases. If, in contrast, the function $g(\eta)$ is chosen independently, the corresponding function $f(\varepsilon)$ is typically non-smooth and not single-valued, see e.g. [6].
To motivate the need for the phase-field variables we consider in full detail a specific physical example. It deals with the mixed, discrete-continuum representation of a dislocation core [12, 16]. More specifically, we develop a modified version of the classical Peierls–Nabarro (PN) model that accounts for the finite thickness of the slip region. In this problem the coarse-grained description of the slip zone is provided by the so-called $\gamma$-potential [5, 27]. The phase field represents an 'atomically sharp' slip, and the part of the interaction potential related to $g$ gives rise to the slip-related pull-back force [7, 16, 23]. Our general method of recovering the expression for this force represents an extension of Rice's transform, which was first introduced in the context of a dislocation nucleation problem [19]. In this paper only the simplest scalar problem in a one-dimensional setting is considered. The slightly more general question of extracting from the coarse-grained energy a convex (instead of quadratic) component will be examined elsewhere [17].

2 Surface Problem

We begin with the special case when the phase field is localized on a surface. In problems involving fracture or slip it often proves convenient to represent the energy of a body as the sum of a bulk term depending on strain gradients and a surface term penalizing displacement discontinuities. The bulk term is usually modeled by linear elasticity. The modeling of the surface energy is less straightforward [6, 25]. For instance, the models will differ depending on whether the location of the discontinuities is known a priori or not. In a 1D setting with a *known* fracture set the equilibrium problem reduces to minimizing the following energy functional \[ W[u] = \int_0^1 dx \, f_e(u_x) + \sum_{\Gamma_a} f_a(\delta(x)). \] (1) Here \( f_e(e) = (E/2)e^2 \), where \( E > 0 \) is the elastic modulus, and \( a \) is a coarse-graining length scale that typically exceeds several atomic sizes.
The set \( \Gamma_a \) in (1) represents discontinuity points resolved at scale \( a \), and \( \delta(x) = [u]_a(x) \) is the corresponding displacement discontinuity. The surface energy \( f_a(\delta) \) is then an effective interaction over the distance \( a \); in particular, the shear-related component of \( f_a(\delta) \) coincides with the \( \gamma \)-potential mentioned in the Introduction. In the case when the fracture set is *unknown* the surface energy has to be chosen differently. The reason is that in this model the displacement discontinuity at scale \( a \) does not represent the microscopic slip between neighboring atomic planes, and therefore the difference between elastic deformation and inelastic slip has yet to be resolved at this scale [19]. More precisely, linear elasticity, which has nothing to do with slip and which is already accounted for in the bulk term, has not been excluded from \( f_a(\delta) \). The identification of the surface energy in (1) with \( f_a(\delta) \), which is quadratic at the origin, leads in a free discontinuity problem to a degenerate solution with infinitely many infinitely small discontinuities [6]. To remove linear elasticity from the surface term, one should replace the coarse-grained discontinuity \([u]_a\) by the atomically sharp slip \( \eta(x) = [u](x) \) that does not depend on \( a \). The energy (1) is then rewritten as \[ W[u] = \int_0^1 dx \, f_e(u_x) + \sum_{\Gamma} g(\eta), \] (2) where now \( \Gamma \) is the set of discontinuity points corresponding to \( a = 0 \). The problem is to find the relation between the function \( f_a(\delta) \), representing an empirical input, and the unknown function \( g(\eta) \). To define \( g(\eta) \) we divide the total slip \( \delta \) into an elastic part, \( ae \), where \( e \) is an equivalent elastic strain, and an inelastic part \( \eta \).
The function \( g(\eta) \) is defined by the condition that \( f_a(\delta) \) is a relaxation of the energy \( af_e(e) + g(\eta) \) under the condition that \( ae + \eta = \delta \), namely: \[ f_a(\delta) = \inf_{\eta} \left[ a \frac{E}{2} \left( \frac{\delta - \eta}{a} \right)^2 + g(\eta) \right]. \] (3) If the energy \( f_a(\delta) \) is a single-well function and the infimum is unique, the function \( g \) is completely defined. If \( f_a(\delta) \) is periodic, as in the case of dislocations, then in order to have a uniquely-defined \( g(\eta) \) we need to replace the global minimization in definition (3) by a properly-defined local minimization, denoted hereafter by 'inf\textsubscript{loc}' (minimization over $\eta$ starting from the minimum of $f_a$ closest to $\delta$). In what follows, our task will be to reverse definition (3) and to recover the nonequilibrium energy $g(\eta)$ from $f_a(\delta)$. What allows us to proceed is the specific (harmonic) structure of the elastic part of the energy. We observe that the function $g$ must satisfy the following necessary condition $$(E/a)(\delta - \eta) = g'(\eta). \tag{4}$$ Moreover, differentiation of (3) with respect to $\delta$ gives $$f'_a(\delta) = (E/a)(\delta - \eta). \tag{5}$$ These two equations allow one to represent $g'(\eta)$ in the following parametric form [7, 8, 19] $$(\eta, g'(\eta)) = \left( \delta - \frac{a}{E} f'_a(\delta), f'_a(\delta) \right). \tag{6}$$ The parametric representation for $g(\eta)$ then reads $$(\eta, g(\eta)) = \left( \delta - \frac{a}{E} f'_a(\delta), f_a(\delta) - \frac{a}{2E}[f'_a(\delta)]^2 \right). \tag{7}$$ Since for nonconvex $f_a(\delta)$ this representation may lead to a multivalued function $g(\eta)$, formula (7) must be supplemented by an additional branch-selection procedure.
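The parametric formulas (6) and (7) are straightforward to evaluate numerically. The sketch below uses an illustrative Lennard–Jones-type potential with $a = 1$ and $E$ matched to $f''_a$ at the minimum $\delta_0 = 2^{1/6}$ (all choices are for illustration only); it traces the branch $\delta \ge \delta_0$ and exhibits the $(\eta - \delta_0)^{3/2}$ behavior of $g$ near its minimum:

```python
import numpy as np

# Illustrative Lennard-Jones-type surface energy (a = 1), with its single
# minimum at delta0 = 2**(1/6) where f(delta0) = -1.
def f(delta):
    return 4.0 * (delta ** -12 - delta ** -6)

def fprime(delta):
    return 4.0 * (-12.0 * delta ** -13 + 6.0 * delta ** -7)

def fsecond(delta):
    return 4.0 * (156.0 * delta ** -14 - 42.0 * delta ** -8)

a = 1.0
delta0 = 2.0 ** (1.0 / 6.0)   # location of the minimum of f
E = fsecond(delta0)           # harmonic modulus matched at the minimum

# Parametric branch (6)-(7) for delta >= delta0:
delta = np.linspace(delta0, 3.0, 400)
eta = delta - (a / E) * fprime(delta)                  # first component of (6)-(7)
g = f(delta) - (a / (2.0 * E)) * fprime(delta) ** 2    # second component of (7)
```

Along this branch $\eta(\delta)$ is monotone, $g(\delta_0) = f(\delta_0)$, and near the minimum the computed points follow $g(\eta) - g(\delta_0) \propto (\eta - \delta_0)^{3/2}$, i.e. $g'$ has infinite slope there, in line with the generic $n = 3$ case.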
To illustrate the mapping $f_a(\delta) \rightarrow g(\eta)$ given by (7) and the selection of a physical branch, we consider a Lennard–Jones potential $f_a$ with $a = 1$ and assume that $E = f''_a(\delta_0)$, where $\delta_0$ is the only minimum of $f_a$ (see Figure 1). Notice that the resulting function $g'(\eta)$ has an infinite slope at $\eta = \delta_0$ and that for $\eta \gtrsim \delta_0$ we must have $g(\eta) \propto (\eta - \delta_0)^{3/2}$. The removal of the linear elastic part of the energy becomes important in PN-type modeling of dislocations. Consider, for instance, a straight screw dislocation in an isotropic linear-elastic body and assume that the sharp discontinuity plane, \( y = 0 \), lies between the two effective gliding surfaces located at \( y = \pm a/2 \). To account for the finite thickness \( a \) of the core region we need to modify the classical PN model [12]. According to our interpretation, the linear elastic stress outside the slip region \((-a/2, a/2)\) must be balanced by the coarse-grained pull-back stress resolved at the spatial scale \( a \). We therefore interpret the pull-back stress at this scale as \( f'_a(\delta(x)) \), where \( f_a \) is the \( \gamma \)-potential, a periodic function with period \( b \) and with \( f'_a(0) = 0 \). The expression for the linear stress outside the slip region is derived in the Appendix. With these considerations in mind we obtain for the unknown function \( \eta(x) \), representing a mathematical slip at \( y = 0 \), the following system of equations \[ -\frac{\mu}{\pi a} \int_{-\infty}^{+\infty} dx' \, \eta'(x') \arctan \frac{a}{2(x - x')} + \overline{\sigma}_a(x) = f'_a(\delta(x)), \] (8) \[ \delta(x) = (a/\mu) f'_a(\delta(x)) + \eta(x), \] (9) where \( \overline{\sigma}_a \) is the resolved applied stress at scale \( a \). If we match the linear elastic behavior at \( \eta = 0 \) with that in the bulk regions, we obtain \( \mu = af''_a(0) \).
Using in this relation the physical shear modulus and the value of \( f''_a(0) \) from the \( \gamma \)-potential provides a rough estimate for \( a \), the effective interaction range. We notice that the parameter \( a \) enters both equations (8, 9), which makes this system different from the one studied in [16, 19]. The ideas behind our nonlocal extension of the PN model are also different from those of Miller et al. [13], where a nonlocal kernel was introduced empirically as part of the pull-back stress and the usual \( 1/(x - x') \) kernel was used for the bulk stress. To bring the system (8, 9) into the framework of phase-field models, we identify the effective pull-back force \( f'_a(\delta(\eta)) \) with \( g'(\eta) \) and rewrite Eq. (8) as \[ \frac{\mu}{2} \int_{-\infty}^{+\infty} dx' \, K_a(x - x') \eta'(x') + \overline{\sigma}_a(x) = g'(\eta), \] (10) where \( K_a(x) = -(2/\pi a) \arctan(a/2x) \). It is now easy to see that \( g'(\eta) \) enjoys the parametric representation \[ (\eta, g'(\eta)) = \left( \delta - \frac{a}{\mu} f'_a(\delta), f'_a(\delta) \right), \] (11) where we recognize the mapping (6) (see also [16, 19, 23]). To make the link with the classical PN model one needs to consider the limit \( a \to 0 \). By computing \( \eta \) in terms of \( \delta \) and expanding (10) in powers of \( a \), we obtain to order \( O(a) \) the following 'gradient' extension of the PN model \[ -\frac{\mu}{2\pi} \int_{-\infty}^{+\infty} dx' \, \frac{\delta'(x')}{x - x'} + \lambda \delta''(x) + \widetilde{\sigma}_a(x) = f'_a(\delta(x)), \] (12) where \( \lambda = a\mu/4 \). For different weakly or strongly nonlocal generalizations of the PN model see [13, 21]. Equation (12) features an effective applied stress that differs from $\sigma_a(x, 0)$, defined in the Appendix, by an $O(a)$ correction, namely $\tilde{\sigma}_a(x) \equiv \sigma_a(x, 0) + (a/2)[(1/2)\partial_y\sigma_a(x, 0) - K_0 \star \partial_x\sigma_a(x, 0)]$. The classical PN model is retrieved by letting $a = 0$.
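The $a \to 0$ limit just invoked can be checked directly on the kernel: for fixed $x \neq 0$, $K_a(x) = -(2/\pi a)\arctan(a/2x)$ tends to $-1/(\pi x)$, the Cauchy kernel of the classical PN equation. A minimal numerical sketch (the grid and the sequence of $a$ values are illustrative):

```python
import numpy as np

def K(x, a):
    """Finite-thickness kernel K_a(x) = -(2/(pi*a)) * arctan(a/(2*x))."""
    return -(2.0 / (np.pi * a)) * np.arctan(a / (2.0 * x))

x = np.linspace(0.5, 5.0, 10)   # sample points away from the origin
pn = -1.0 / (np.pi * x)         # classical PN (Cauchy) kernel
# Maximum relative deviation from the PN kernel as the core thickness a shrinks:
errors = [float(np.max(np.abs((K(x, a) - pn) / pn))) for a in (1.0, 0.1, 0.01)]
```

Since $\arctan u = u - u^3/3 + \dots$ with $u = a/2x$, the relative deviation shrinks like $a^2/(12x^2)$, so halving the core thickness reduces the error roughly fourfold.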
3 Bulk Problem Now let us place the problem in a more general framework. The task is to approximate locally the empirical potential $f(\varepsilon)$ by a quadratic function with an optimally chosen reference state $\eta$, and to associate with this state a reference energy $g(\eta)$. Behind such a construction is the assumption that all the nonlinearity of the problem is related to the evolution of the reference state. The simplest setting in which to pose the problem formally is the one-dimensional geometrically linearized theory of nonlinear elastic bars. According to our interpretation the empirical energy is represented as $$f(\varepsilon) = \inf_{\eta, \text{loc}} \left[ \frac{E}{2} (\varepsilon - \eta)^2 + g(\eta) \right]$$ and the problem is to find the intrinsic phase-field function $g(\eta)$. Following the previous section, we write the parametric representation for $g'(\eta)$ in the form $$(\eta, g'(\eta)) = \left( \varepsilon - \frac{f'(\varepsilon)}{E}, f'(\varepsilon) \right).$$ The function $g(\eta)$ is then given by the mapping $$(\eta, g(\eta)) = \left( \varepsilon - \frac{f'(\varepsilon)}{E}, f(\varepsilon) - \frac{f'(\varepsilon)^2}{2E} \right).$$ The consistency of this procedure requires the parameter $E$ and the function $f(\varepsilon)$ to be related. If we expand the parametric definition of $g(\eta)$ near a reference state $\varepsilon_0$ where $f'(\varepsilon_0) = 0$, we obtain $g(\varepsilon_0) = f(\varepsilon_0)$, $g'(\varepsilon_0) = 0$ and $g''(\varepsilon_0) = f''(\varepsilon_0)/[1 - f''(\varepsilon_0)/E]$. The natural choice $E = f''(\varepsilon_0)$ makes $g''(\varepsilon_0)$ infinite. The behavior of the higher derivatives of $g(\eta)$ near $\eta = \varepsilon_0$ is found by assuming (without loss of generality) that the derivatives $f^{(k)}(\varepsilon_0)$ vanish for $k = 3, \ldots, n - 1$. 
The order of the asymptotics depends on $n > 2$, the first integer such that $f^{(n)}(\varepsilon_0) \neq 0$: $$(\eta, g(\eta)) \sim \left( \varepsilon_0 - \frac{f^{(n)}(\varepsilon_0)}{(n-1)!E} \delta\varepsilon^{n-1}, f(\varepsilon_0) - \frac{(n-1)f^{(n)}(\varepsilon_0)}{n!} \delta\varepsilon^n \right).$$ Hence $g$ behaves near its minimum as $|g(\eta) - g(\varepsilon_0)| \sim |\eta - \varepsilon_0|^{n/(n-1)}$. The generic case is $n = 3$; the case $n = 4$ corresponds to the periodic potentials relevant for dislocations; for $f$ locally harmonic, $n = \infty$. Observe now that the function $g$ computed from (14, 15) can also be viewed as a solution of the following optimization problem: $$g(\eta) = \sup_{\varepsilon,\text{loc}} \left[ f(\varepsilon) - \frac{E}{2}(\varepsilon - \eta)^2 \right], \tag{17}$$ which is a natural inverse of (13) (see also [18, 20]). Since the equation $\eta = \varepsilon - f'(\varepsilon)/E$ may have several solutions $\varepsilon(\eta)$, the representation (17) removes the ambiguity by always selecting the upper branch. The working of Eqs. (14-17) with $E = f''(\varepsilon_0)$ is illustrated in Figure 2. In the domain to the left of $\varepsilon_0$, where $f$ grows faster than harmonically, the desired tangency point does not exist. In this case the difference $f(\varepsilon) - \frac{E}{2}(\varepsilon - \eta)^2$ is maximized at $\varepsilon = -\infty$. This situation takes place in the Lennard–Jones example of Section 2, where we have to use $g(\delta) = +\infty$ for $\delta < \delta_0$ (hatched area of Figure 1a). To handle general multi-well energies, we first introduce the Stillinger–Weber mapping $\varepsilon_0(\varepsilon)$ that associates with any state $\varepsilon$ the local minimum $\varepsilon_0$ of $f(\varepsilon)$ that would be reached from this state by steepest descent [22]. 
Next, we modify equations (13) and (17) as: $$f(\varepsilon) = \inf_{\eta,\text{loc}} \left[ \frac{1}{2} f''(\varepsilon_0(\varepsilon))(\varepsilon - \eta)^2 + g(\eta) \right], \tag{18}$$ $$g(\eta) = \sup_{\varepsilon,\text{loc}} \left[ f(\varepsilon) - \frac{1}{2} f''(\varepsilon_0(\eta))(\varepsilon - \eta)^2 \right]. \tag{19}$$ Whereas (19) defines $g$, equation (18) states that knowing $f$ is equivalent to knowing $g$ plus the linear-elastic behavior of $f$ near its local minima. The precise meaning of the ‘loc’ in Eqs. (18, 19) is as follows. Operationally, the minimization in the definition of $f$ is carried out over $\eta$, starting from $\varepsilon_0(\varepsilon)$, the local minimum nearest to $\varepsilon$ determined by the SW mapping; the corresponding elastic modulus is also determined by the starting point. The maximization in the definition of $g(\eta)$ proceeds along similar lines, except that now the relevant elastic modulus is determined by the local minimum closest to $\eta$ and is fixed during the maximization. The optimization is carried out starting from $\varepsilon = \eta$. Figure 3 illustrates the case of a double-well potential with unequal curvatures of the wells. Notice that, in contrast to what we saw in Figure 1, the function $\eta(\varepsilon)$ is now bounded. Another interesting case is the periodic potential used in the description of reconstructive phase transitions (e.g., [4]). Consider, for instance, the piecewise-harmonic *periodic* case shown in Figure 4, which is often used in analytical studies [13, 21]. The parametric representation (14) of $g$ is useless here and the definition (19) must be used instead. In this extreme case, *all* elasticity has been removed from $g$ and the resulting $g(\eta)$ is cone-shaped at its minima (Figure 4a), as predicted by Eq. (16) for $n \to \infty$. 
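The cusp scaling in Eq. (16) can also be checked numerically. The sketch below is only an illustration, not part of the paper: it assumes a quartic single-well potential $f(\varepsilon) = \varepsilon^2/2 + \varepsilon^4/4$, for which $\varepsilon_0 = 0$, $E = f''(0) = 1$ and $n = 4$, builds $g$ parametrically from Eqs. (14, 15), and recovers the predicted exponent $n/(n-1) = 4/3$:

```python
import numpy as np

# Assumed illustrative potential: quartic single well (eps_0 = 0, n = 4)
f  = lambda e: e**2 / 2 + e**4 / 4
fp = lambda e: e + e**3        # f'(eps)
E  = 1.0                       # natural choice E = f''(eps_0)

eps = np.logspace(-3, -2, 50)             # strains close to the reference state
eta = eps - fp(eps) / E                   # Eq. (14): here eta(eps) = -eps**3
g   = f(eps) - fp(eps)**2 / (2 * E)       # Eq. (15): g along the local branch

# |g(eta) - g(eps_0)| should scale as |eta - eps_0|^(n/(n-1)) = |eta|^(4/3)
slope = np.polyfit(np.log(np.abs(eta)), np.log(np.abs(g)), 1)[0]
print(round(slope, 3))    # ≈ 1.333
```

For this potential $\eta = -\varepsilon^3$ and $g = -3\varepsilon^4/4 - \varepsilon^6/2$ exactly, so the fitted log–log slope approaches $4/3$ as the sampling window shrinks toward the minimum.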
The force $g'(\eta)$ is discontinuous (Figure 4b) and its extreme values provide thresholds for the evolution of $\eta$, whose stepwise character is an artifact due to the absence of smooth spinodal regions in $f$. It is also instructive to consider for comparison the case of an unbounded harmonic potential $f(\varepsilon) = (E_f/2)(\varepsilon - \varepsilon_0)^2$. From (17) with $E = E_f$, one deduces that $g(\eta) = +\infty$ if $\eta \neq \varepsilon_0$ and $g(\eta = \varepsilon_0) = 0$. This trivial example indicates that in a purely linear-elastic model, the reference state does not have to evolve. 4 Concluding Remarks The goal of this paper was to reveal in the simplest setting the variational nature of the generalized Rice transform. The problem consists in splitting a coarse-grained lattice potential $f$, describing the overall deformation of a sufficiently large number of atoms, into a (quasi) convex elastic potential and an inelastic potential $g$ dealing with structural rearrangements. Here the potential $f$ is assumed to be measurable by molecular statics along a prescribed deformation path relevant to the material transformation in question. For simplicity, the elastic potential is assumed in this paper to be a standard quadratic function of the macroscopic strain. The inelastic potential $g$ must be a function of the phase-field variable $\eta$, whose identification represents an important part of the problem. While our precise construction solving the above problem is presented in the static setting (see Eq. (19)), the motivation for the splitting concerns, first of all, dynamical applications (e.g., [7]). Thus we assume that the material displacement $u$ associated with the strain $\varepsilon = \partial_x u$ evolves inertially almost without damping (standard elastodynamics), while the dynamics of the phase-field variable is overdamped. 
More precisely, we assume that the relaxation of the variable $\eta$ follows the time-dependent Ginzburg–Landau (TDGL) equation. By means of an empirical ‘viscosity’ parameter $\nu$ we can write the evolution equation in the form $$\dot{\eta} = -\frac{1}{\nu} \frac{\partial}{\partial \eta} \left[ \frac{1}{2} f''(\varepsilon_0(\varepsilon)) (\varepsilon - \eta)^2 + g(\eta) \right],$$ where we have omitted for simplicity the conventional gradient-penalizing terms (e.g., [7, 24]). In the static setting the above equation reduces to our basic Eq. (18). The definitive advantage of separating the wave motion from an overdamped TDGL relaxation is the possibility of attributing effective damping only to large atomic displacements. Our preliminary investigations [17] indicate that extending the variational set-up presented in this paper to higher dimensions, and generalizing it in the direction of extracting (quasi) convex, rather than merely quadratic, elastic components, is feasible. These issues will be addressed systematically in a separate publication. Appendix The following computations are largely based on Eshelby’s arguments presented in [10]. Consider a Volterra screw dislocation with a zero-width core and with Burgers vector $b$. The displacement $u_z(x, y)$ has the form $$u_z(x, y) = \frac{b}{2\pi} \text{Arg}(x + iy) = \frac{b}{2\pi} \arctan \frac{y}{x} + \frac{b}{2} \text{sign}(y)\theta(-x), \tag{20}$$ where $\theta$ is the Heaviside function, and where the indeterminacy in the discontinuity of $u_z$ is resolved by specifying the glide plane ($y = 0$). The distributional part in the r.h.s. of Eq. (20), usually omitted in the literature (e.g. [12]), is crucial to the present derivation because it represents the irreversible atomic displacements on the plane $y = 0$. We introduce the eigendistortion, $\beta^*_{ij}$, as the part of the dislocation-induced distortion $\beta_{ij} = u_{i,j}$ that is not linear-elastic. 
For our dislocation, its only non-zero component is $\beta^*_{yz}(x, y) = b \theta(-x) \delta_D(y)$, where $\delta_D$ is the Dirac distribution [14]. The linear-elastic distortion, $\beta^e_{ij}$, is defined through the additive decomposition of the total distortion $\beta_{ij}$, namely $\beta^e_{ij} \equiv \beta_{ij} - \beta^*_{ij}$ [14]. The elastic strains are $e_{ij} = \text{sym}\, \beta^e_{ij}$. By using the identity $[\arctan(1/x)]' = \pi \delta_D(x) - 1/(1 + x^2)$, we obtain that the distributional parts in $\beta_{ij}$ and $\beta^*_{ij}$ mutually cancel out, giving the standard result [12] $$e_{xz}(x, y) = -\frac{b}{4\pi} \frac{y}{x^2 + y^2}, \quad e_{yz}(x, y) = \frac{b}{4\pi} \frac{x}{x^2 + y^2}. \tag{21}$$ The stress induced by the eigendistortion is then $\sigma^*_{iz}(x, y) = 2\mu e_{iz}(x, y)$, where $i = x, y$. In the presence of an applied shear stress [15] $\sigma_a \equiv \sigma_{a\,yz}$, Eq. (20) becomes $$u_z(x, y) = \frac{1}{\mu} \int_0^y dy' \sigma_a(x, y') + \frac{b}{2\pi} \arctan \frac{y}{x} + \frac{b}{2} \text{sign}(y)\theta(-x). \tag{22}$$ The total stress is then $\sigma = \sigma^* + \sigma_a$ and $e = \sigma/2\mu$. Now, the key step consists in averaging the stress over the layer of width $a$ containing the glide plane. Introduce: $\overline{\sigma}_{ij}(x) \equiv \frac{1}{a} \int_{-a/2}^{+a/2} dy\, \sigma_{ij}(x, y)$. From (22), the total relative atomic lattice displacement between the atomic planes at $y = \pm a/2$ reads: $$\delta(x) \equiv u_z(x, +a/2) - u_z(x, -a/2) = \frac{a}{\mu} \left[ \overline{\sigma}_a(x) + \frac{\mu b}{\pi a} \arctan \left( \frac{a}{2x} \right) \right] + b \theta(-x). \tag{23}$$ Furthermore, on account of (21), the average shear stress $\overline{\sigma}_{yz}(x)$ in the layer is: $$\overline{\sigma}_{yz}(x) = \overline{\sigma}_a(x) + \frac{\mu b}{2\pi} \frac{1}{a} \int_{-a/2}^{a/2} dy \frac{x}{x^2 + y^2} = \overline{\sigma}_a(x) + \frac{\mu b}{\pi a} \arctan \left( \frac{a}{2x} \right). 
\tag{24}$$ Comparison of (24) and (23) shows that: $$\delta(x) = (a/\mu) \overline{\sigma}_{yz}(x) + b \theta(-x). \tag{25}$$ Consider next an Eshelby screw dislocation with an extended core described by a continuous function $\eta(x)$. The distortion $\beta^*$ becomes: $$\beta^*_{yz}(x, y) = \delta_D(y) \eta(x) = -\delta_D(y) \int_x^{+\infty} dx' \eta'(x');$$ the Volterra dislocation corresponds to the limiting case $\eta(x) = b \theta(-x)$. Displacements, strains and stresses are obtained by convolution using $d\beta^*_{yz}(x, y) \equiv -\delta_D(y) \eta'(x) dx$ [10] as elementary distortions. The analogs of Eqs. (24, 25) are: \[ \overline{\sigma}_{yz}(x) = \overline{\sigma}_a(x) - \frac{\mu}{\pi a} \int_{-\infty}^{+\infty} dx' \arctan \left( \frac{a}{2(x - x')} \right) \eta'(x'), \] \[ \delta(x) = (a/\mu)\overline{\sigma}_{yz}(x) + \eta(x). \] Equation (27), which we use in the paper, shows the relation between the coarse-grained displacement $\delta$ and the discontinuity $\eta$. **References** 1. Carpio, A. and Bonilla, L.L.: Discrete models for dislocations and their motion in cubic crystals. *Phys. Rev. B* **12**, 2005, 1087–1097. 2. Carstensen, C., Hackl, K. and Mielke, A.: Nonconvex potentials and microstructures in finite-strain plasticity. *Proc. R. Soc. London A* **458**, 2002, 299–317. 3. Choksi, R., Del Piero, G., Fonseca, I. and Owen, D.R.: Structural deformations as energy minimizers in models of fracture and hysteresis. *Math. Mech. Solids* **4**, 1999, 321–356. 4. Conti, S. and Zanzotto, G.: A variational model for reconstructive phase transformations, and their relation to dislocations and plasticity. *Arch. Rational Mech. Anal.* **173**, 2004, 69–88. 5. Christian, J.W. and Vitek, V.: Dislocations and stacking faults. *Rep. Prog. Phys.* **33**, 1970, 307–411. 6. Del Piero, G. and Truskinovsky, L.: Macro and micro-cracking in 1D elasticity. *Int. J. Solids Struct.* **38**, 2001, 1135–1148. 7. 
Denoual, C.: Dynamic dislocation modeling by combining Peierls–Nabarro and Galerkin methods. *Phys. Rev. B* **70**, 2004, 024106. 8. Denoual, C.: Modeling dislocations by coupling Peierls–Nabarro and element-free Galerkin methods. *Comput. Meth. Appl. Mech. Engrg.* **196**, 2007, 1915–1923. 9. Ericksen, J.: Equilibrium of bars. *J. Elast.* **5**, 1975, 191–202. 10. Eshelby, J.D.: Uniformly moving dislocations. *Proc. Phys. Soc. London A* **62**, 1949, 307–314. 11. Hakim, V. and Karma, A.: Crack path prediction in anisotropic brittle materials. *Phys. Rev. Lett.* **95**, 2005, 235501. 12. Hirth, J.P. and Lothe, J.: *Theory of Dislocations*, 2nd edn. Wiley & Sons, New York, 1982. 13. Miller, R., Phillips, R., Beltz, G. and Ortiz, M.: A non-local formulation of the Peierls dislocation model. *J. Mech. Phys. Solids* **46**, 1998, 1845–1867. 14. Mura, T.: *Micromechanics of Defects in Solids*, 2nd edn. Martinus Nijhoff, Dordrecht, 1987. 15. Nabarro, F.R.N.: Dislocations in a simple cubic lattice. *Proc. Phys. Soc.* **59**, 1947, 256–272. 16. Ortiz, M. and Phillips, R.: Nanomechanics of defects in solids. *Adv. Appl. Mech.* **36**, 1999, 1–79. 17. Pellegrini, Y.-P., Denoual, C. and Truskinovsky, L., in preparation. 18. Ponte Castañeda, P. and Suquet, P.: Nonlinear composites. *Adv. Appl. Mech.* **34**, 2002, 171–302. 19. Rice, J.R.: Dislocation nucleation from a crack tip: An analysis based on the Peierls concept. *J. Mech. Phys. Solids* **40**, 1992, 239–271. 20. Rockafellar, R.T.: *Convex Analysis*. Princeton University Press, Princeton, 1997. 21. Rosakis, P.: Supersonic dislocation from an augmented Peierls model. *Phys. Rev. Lett.* **86**, 2001, 95–98. 22. Stillinger, F.H. and Weber, T.A.: Packing structures and transitions in liquids and solids. *Science* **225**, 1984, 983–989. 23. Sun, Y., Beltz, G.E. and Rice, J.R.: Estimates from atomic models of tension-shear coupling in dislocation nucleation from a crack tip. *Mat. Sci. Eng. A* **170**, 1993, 69–85. 24. 
Truskinovsky, L.: Kinks versus shocks. In: Fosdick, R., Dunn, E. and Slemrod, M. (Eds.), *Shock Induced Transitions and Phase Structures in General Media*, IMA, Vol. 52, Springer-Verlag, 1993, pp. 185–229. 25. Truskinovsky, L.: Fracture as a phase transformation. In: Batra, R. and Beatty, M. (Eds.), *Contemporary Research in Mechanics and Mathematics of Materials*, CIMNE, Barcelona, 1996, pp. 322–332. 26. Wang, Y. and Khachaturyan, A.G.: Three-dimensional field model and computer modeling of martensitic transformations. *Acta Mater.* **45**, 1997, 759–773. 27. Woodward, C.: First-principles simulations of dislocation cores, *Mat. Sci. Engrg. A* **400–401**, 2005, 59–67.
Practical Public PUF Enabled by Solving Max-Flow Problem on Chip Meng Li\textsuperscript{1}, Jin Miao\textsuperscript{2}, Kai Zhong\textsuperscript{3}, David Z. Pan\textsuperscript{1} \textsuperscript{1}Electrical and Computer Engineering Department, University of Texas at Austin, Austin, TX, USA 78712 \textsuperscript{2}Cadence Design Systems Inc., San Jose, CA, USA 95134 \textsuperscript{3}Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX, USA 78712 ABSTRACT The execution-simulation gap (ESG) is a fundamental property of a public physical unclonable function (PPUF), which exploits the time gap between direct IC execution and computer simulation. The ESG must account for both advanced computing schemes, including parallel and approximate computing, and the IC physical realization. In this paper, we propose a novel PPUF design whose execution is equivalent to solving the hard-to-parallelize and hard-to-approximate max-flow problem in a complete graph on chip. Thus, the max-flow problem can be used as the simulation model to bound the ESG rigorously. To enable an efficient physical realization, we propose a crossbar structure and adopt the source degeneration technique to map the graph topology on chip. The difference in asymptotic scaling between execution delay and simulation time is examined in the experimental results. The measurability of the output difference is also verified to prove the physical practicality. 1. INTRODUCTION Many electronic systems require solutions for security, unique identification and authentication. The physical unclonable function (PUF) has been proposed as a promising solution [1–3]. A PUF is a pseudo-random function that exploits the randomness inherent in scaled CMOS technologies to generate a unique output response for a given input challenge. 
A public PUF (PPUF) is a PUF that is created so that its simulation model is publicly available, but a large discrepancy exists between the execution time and the simulation time [4–6]. A PPUF relies on the time gap between execution and simulation to derive its security, which is promising because no secret information needs to be kept, and the enrollment phase before using a PUF (during which a large number of responses need to be characterized and stored) is also eliminated [5]. Therefore, PPUFs are able to underlie multiple public-key protocols and potentially have many more applications than traditional PUFs [6]. For a PPUF to be an effective security primitive, the execution-simulation gap (ESG) acts as a fundamental property and needs to be justified in terms of theoretical soundness and physical practicality. Theoretical soundness requires the ESG to be bounded rigorously, especially considering advanced parallel and approximate computing schemes. Physical practicality further requires that the ESG can be realized effectively with existing fabrication techniques and that the generated output be measurable. Although a number of PPUF designs have been proposed in the literature over the years [5–9], most of them do not justify the proposed ESG in these two respects. The first PPUF was proposed in [6] and relies on exclusive-or networks to convert delay variation into small voltage glitches. Because the number of glitches ideally increases exponentially with circuit depth, simulation needs to keep track of all the glitches, which requires exponential computation. Although the idea is innovative, the PPUF is hard to realize because the generated glitches usually have very small pulse widths and are very likely to be attenuated during propagation to the output due to the electrical properties of the logic gates [10]. Therefore, the actual time gap is much smaller than the ideal expectation. 
Another security primitive, termed the SIMplest Possible, but Unpredictable (SIMPUF) system, leverages the time gap between real optical interference and solving the differential equations underlying the optical system [5]. However, its security against attacks relies on the nonlinearity of the optical medium, which is still an open problem. In [8], a nano-PPUF design based on memristors is proposed. The authors justify the ESG by the complexity of the matrix multiplication operation used in SPICE simulation. However, they ignore that matrix multiplication can be effectively parallelized to reduce the simulation time significantly. While previous designs are more conceptual, we introduce a practical PPUF whose execution is equivalent to solving the max-flow problem in a complete graph. The equivalence enables us to use the max-flow problem [11] as the simulation model and bound the ESG rigorously. To enable an efficient physical realization of our design, we propose a crossbar structure to map the graph topology to silicon. The basic building block is designed with MOS transistors operating in the saturation region and is enhanced by the source degeneration (SD) technique [12] to instantiate flow constraints on chip. The ESG is examined in the experimental results by verifying the difference between the asymptotic scaling of execution delay and that of simulation time. We summarize our contributions as follows: - A new PPUF design is proposed with a rigorous ESG achieved by solving the max-flow problem in a complete graph on chip. - A crossbar structure is proposed and the SD technique is adopted to map the graph topology and flow constraints on chip and enable an efficient physical realization. The rest of the paper is organized as follows. Section 2 describes preliminaries on the max-flow problem and discusses the algorithms that aim to solve it. Section 3 introduces our PPUF topology and basic building blocks, which map the max-flow problem on chip. The ESG is also analyzed in Section 3. 
Section 4 describes the physical realization of the PPUF and also discusses the PPUF challenge-response pairs (CRPs). We evaluate the performance of the PPUF in Section 5 and conclude the paper in Section 6. 2. MAX-FLOW PROBLEM IN DIRECTED GRAPH: PRELIMINARIES Let \( G = (V, E) \) represent a directed graph with \( |V| = n \) vertices and \( |E| = m \) directed edges. If \( \forall v_i, v_j \in V, \exists (v_i, v_j) \in E \), \( G \) is called complete, with \( m = n(n - 1) \). In the directed graph \( G \), we distinguish a set of source vertices \( S \subset V \) and sink vertices \( T \subset V \) and assign a non-negative capacity \( c(v_i, v_j) \) to each edge \( (v_i, v_j) \in E \). An instance of the max-flow problem consists of the directed graph \( G \) and the set of capacities. Given an instance of the max-flow problem, a function \( f : E \to R^+ \) is called a flow function if it satisfies the following conservation and capacity constraints: \[ \sum_{(v_i, v_j) \in E} f(v_i, v_j) = \sum_{(v_j, v_k) \in E} f(v_j, v_k) \quad \forall v_j \in V - S - T \\ 0 \leq f(v_i, v_j) \leq c(v_i, v_j) \quad \forall (v_i, v_j) \in E \] The value of a flow \( f \) is defined as the net flow out of the source nodes. The max-flow problem is to find a maximum-value flow function on a given instance of a flow problem. The max-flow problem has been shown to be computationally demanding and difficult to parallelize [11]. Traditional methods include the augmenting path algorithm [13], the push-relabel algorithm [14], the blocking flow method [13] and so on. All these methods have at least \( O(n^3) \) complexity for a complete graph. Recent efforts on computing the solution of the max-flow problem can be classified into parallel and approximate methods. The best known parallel method, presented in [15], leverages the blocking flow algorithm and achieves a parallel runtime of \( O(n^2 \log(n)/p) \), where \( p \) is the number of processors. 
Therefore, the best achieved complexity of parallel algorithms is lower bounded by \( O(n^2 \log(n)) \). The best known approximate algorithm targeting the max-flow problem is proposed in [16]. To get an \( \epsilon \)-approximate solution, the complexity of the proposed algorithm is \( O(m^{1 + o(1)} \epsilon^{-2}) \). In our case, for the complete graph, the complexity is \( O(n^{2 + o(1)} \epsilon^{-2}) \). Therefore, considering the parallel and approximate algorithms, the complexity of solving the max-flow problem is lower bounded by \( O(n^2) \) with respect to the number of nodes in the graph. While solving the max-flow problem is computationally intensive, it is much easier to check the optimality of a flow \( f \). Define the residual capacity \( r_f(v_i, v_j) \) of an edge \( (v_i, v_j) \) to be \( c(v_i, v_j) - f(v_i, v_j) \). The residual graph \( G_f = (V, E_f) \) for a flow \( f \) is the directed graph whose vertex set is \( V \) and whose edge set \( E_f \) is the set of edges with positive residual capacity. \( f \) is optimal iff \( \forall s \in S, t \in T \), \( t \) is not reachable from \( s \) in the residual graph. To check optimality, we just need to create the residual graph and do a breadth-first search from source to sink, which is highly parallelizable and can be finished with \( O(n^2/p) \) complexity for a complete graph [17]. 3. PPUF TOPOLOGY AND ESG ANALYSIS In this section, we introduce our PPUF design and rigorously prove the ESG. Our main intuition is to build a PPUF circuit that is equivalent to solving the max-flow problem but requires asymptotically less time than the best known algorithms. The main difficulty comes from mapping the constraints and the objective function onto the chip. As we will show, our PPUF topology together with the basic building block guarantees the equivalence and thus enables a rigorous ESG. 3.1 PPUF Topology and Basic Building Block The proposed PPUF topology is shown in Figure 1. 
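The asymmetry between computing a maximum flow and checking one can be made concrete with a short sketch. The code below is purely illustrative (it is not the simulation model used in the paper): it computes the max flow on a small complete directed graph with unit capacities by the classical augmenting-path (Edmonds–Karp) method, then runs the cheap residual-graph reachability check described above:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    res = [row[:] for row in cap]              # residual capacities r_f
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                    # sink unreachable: flow is maximal
            return flow, res
        aug, v = float('inf'), t               # bottleneck capacity along the path
        while v != s:
            aug = min(aug, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                          # push the augmenting flow
            res[parent[v]][v] -= aug
            res[v][parent[v]] += aug
            v = parent[v]
        flow += aug

def is_optimal(res, s, t):
    """Verifier's check: BFS on the residual graph; optimal iff t is unreachable."""
    n, seen, q = len(res), {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and res[u][v] > 0:
                seen.add(v)
                q.append(v)
    return t not in seen

# Complete directed graph on 4 vertices, unit capacity on every edge
cap = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
value, res = max_flow(cap, 0, 3)
print(value, is_optimal(res, 0, 3))    # 3 True
```

On this graph the maximum flow from vertex 0 to vertex 3 is 3 (the direct edge plus two two-hop paths), and the single BFS inside `is_optimal` is exactly the cheap, parallelizable check that the verifier relies on.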
The PPUF consists of a pair of nominally identical networks that differ only because of process variation. The circuit nodes correspond to the vertices in the graph, while each building block, as shown in Figure 2 (d), instantiates one directed edge. Inputs to the PPUF are used to select the source and sink nodes and to control the current capacity of each edge. The selected source and sink nodes are connected to \( V(s) \) and ground, respectively. The output is generated by comparing the currents flowing into the source node. Figure 1: Topology of the proposed PPUF design. To explain our design methodology for the basic building block, we list our requirements below and describe the proposed circuit block step by step to ensure all the requirements are satisfied. **Requirement 1** The maximum current of the basic building block must be controllable. This is because the basic block is used to instantiate the capacity constraint on each edge. To satisfy this requirement, we operate the MOS transistors in the saturation region and set the gate-to-source bias (\( V_{gs} \)) to control the saturation current, as shown in Figure 2 (a). The diodes are used to enforce the direction of the current. However, due to channel length modulation and other short channel effects (SCE), the saturation current still changes with the drain-to-source bias for a fixed \( V_{gs} \), which is undesired. Figure 2: Evolution of the basic building block design to satisfy all the requirements. To reduce the change of the saturation current, we adopt the SD technique from analog design. The SD technique can help stabilize the current and mitigate the impact of SCE by creating negative feedback with resistors or MOS transistors. In Figure 2 (b), \( R_1 \) acts as the degeneration circuit for \( M_2 \). After \( M_2 \) enters the saturation region, the change of current caused by an increase of the drain-to-source bias can be compensated by the increased voltage drop on \( R_1 \). 
The degeneration circuit can be nested to further suppress the change of the saturation current. As in Figure 2 (c), two levels of source degeneration are nested; \( R_1 \) works as the degeneration circuit for \( M_2 \), while \( M_2 \) and \( R_1 \) together work as the degeneration circuit for \( M_1 \). The additional voltage source (\( V_b \)) is used to ensure both \( M_1 \) and \( M_2 \) are working in the saturation region. Figure 3 (a) shows the I-V relation of the three circuits in Figure 2 (a)-(c). As we can see, the SD technique can mitigate the impact of SCEs. Better control over the saturation current can be gained with more levels of the SD technique, but this also leads to larger design overhead. To decide the sufficiency of the SD technique, we propose the following requirement. **Requirement 2** The impact of process variation on the saturation current needs to be much larger than the inaccuracy induced by SCEs. This requirement ensures that the inaccuracy will not lead to a response calculated from simulation that differs from the PPUF execution. Experimental results from SPICE-based Monte Carlo simulation indicate that with the two-level SD technique, the amplitude of the saturation current variation of the basic block is around 130X larger than the current change induced by SCE, which indicates the sufficiency of the two-level SD technique. **Requirement 3** The boundary between PPUF 0-response and 1-response needs to be nonlinear. This requirement aims to ensure good resilience to model-building attacks. A model-building attack aims to model the challenge-response behavior of the PPUF with machine learning techniques [18]. To ensure the resilience, a nonlinear boundary between PPUF 0-output and 1-output is needed. To satisfy this requirement, we connect basic building blocks in series as in Figure 2 (d). 
We also limit the sum of $V_{in}$ and $V_{out}$ to a constant ($V_c$) and choose the control voltages for inputs 0 and 1 such that their nominal saturation currents are the same, as shown in Figure 3 (b). Because the current of the basic building block is limited by different MOS transistors for inputs 0 and 1, given the current information for input 0, the current information for input 1 remains unknown without further information. Meanwhile, because the currents of all the edges connected to each internal node sum to zero, the current flowing through one edge is not only determined by the voltage across that edge, but is also impacted by all the other edges connected to the node. Therefore, all the inputs are closely correlated, which achieves a nonlinear boundary between 0-output and 1-output. This requirement is also verified in the experimental results with both parametric and non-parametric model-building techniques. Besides satisfying all the requirements above, another intriguing property of the building block is its incremental passivity [19]. A memoryless component is incrementally passive if its current increases monotonically with increasing voltage. The proposed building block satisfies this condition. As we will show, the incremental passivity of the basic block helps ensure that the steady-state current of the PPUF circuit is the optimal solution to the corresponding max-flow problem. ### 3.2 Lower Bound of PPUF Simulation In this section, we prove the equivalence between the execution of the PPUF and the calculation of the max-flow in a complete graph to derive the lower bound of the simulation time. First let us consider the capacity constraints for each edge in the graph. As we have discussed in Section 3.1, the diodes on the two sides of the basic block limit the direction of the current such that it is always unidirectional. Meanwhile, once the control voltage is given for the building block, its current is limited by the saturation current $I_{sat}$. 
Therefore, for each basic block, we have $$0 \leq I \leq I_{sat}$$ The flow conservation constraint is realized naturally. Based on Kirchhoff’s current law, for each internal node $v_j$, we have $$\sum_{(v_i,v_j) \in E} I(v_i, v_j) = \sum_{(v_j,v_k) \in E} I(v_j, v_k)$$ The objective function of the max-flow problem corresponds to the current flowing into the source node in the circuit. Based on Kirchhoff’s current law for the source node, we have $$I(s) = \sum_{(s,v) \in E} I(s, v)$$ where $I(s)$ is the current flowing into the source node. Because the PPUF is composed only of basic building blocks that are incrementally passive, the circuit is also incrementally passive [19]. Such incremental passivity guarantees: - As $V(s)$ increases, the current flowing into the PPUF circuit increases monotonically. - For any voltage input and initial condition, the circuit converges to a unique steady-state solution. Therefore, increasing $V(s)$ will always maximize the current flowing into the circuit under the edge capacity constraints and conservation constraints. Now, we are able to show the mapping between the constraints and objective function of the max-flow problem and the PPUF circuit. Theorem 1 shows that the PPUF execution is equivalent to solving a max-flow problem in a directed graph. Specifically, since each node in the PPUF is designed to be connected with all the other nodes, the directed graph is complete. The equivalence enables us to use the max-flow problem as the simulation model. More importantly, because the max-flow problem is hard to parallelize and approximate, we are able to rigorously derive the lower bound for the simulation time: with the best known algorithms, the simulation time scales at least as $O(n^2)$ with the number of PPUF nodes. Though hard to simulate, it is much easier to verify the optimality of a solution, as described in Section 2. 
The verifier can exploit this asymmetry between verification and calculation of the max-flow problem by asking the PPUF holder or attacker for the residual edges. From this information, the verifier builds the residual graph and decides whether the sink is reachable from the source, which determines optimality. As described in Section 2, the verification process can be finished efficiently in $O(n^3/p)$ time.

### 3.3 Upper Bound of PPUF Execution

Rigorous analysis of the ESG requires an accurate upper bound on the PPUF execution time. For the proposed design, the execution delay is the time required for the current from the source node to become stable, which can be upper bounded by the time required for the voltages of all circuit nodes to become stable. In this section, we derive an upper bound on the execution time by considering the charging delay of each node. Note that, unlike in a traditional RC tree structure, the driving and loading networks of a vertex in the complete graph are not explicit. We modify the method proposed in [20] to create the delay relation among all the circuit nodes, from which a rigorous upper bound on the charging delay can be derived. Consider a vertex $v_i \in V$, and let $R(v_i, v_j)$ denote the resistance of the edge $(v_i, v_j)$, as shown in Figure 4 (a). [20] proves that we can decompose the capacitance of $v_i$, denoted $C(v_i)$, into $n - 1$ parts and redistribute each part to the edges pointing to $v_i$, denoted $C(v_j, v_i)$, as shown in Figure 4 (b), such that $$T(v_i) = T(v_j) + R(v_j, v_i)C(v_j, v_i) \quad \forall v_i \in V - \{v_j\},$$ $$\sum_{(v_j, v_i) \in E} C(v_j, v_i) = C(v_i).$$ Here, $T(v_i)$ denotes the delay from the source node to $v_i$. $C(v_j, v_i)$ can be either positive or negative depending on the relation between $T(v_j)$ and $T(v_i)$, while $R(v_j, v_i)$ is always positive. Consider the node with the largest delay in the PPUF circuit, denoted $u$.
Then we have $T(u) \geq T(v), \forall v \in V - \{u\}$, and for the redistributed capacitance $$0 \leq C(v, u) \leq C(u) \quad \forall v \in V - \{u\}.$$ Since in the complete graph $u$ is connected with the source $s$ directly, we have $$T(u) = R(s, u)C(s, u) + T(s) = R(s, u)C(s, u) \leq R(s, u)C(u),$$ since $T(s) = 0$. Here $R(s, u)$ is the resistance of the edge connecting $s$ and $u$, which remains unchanged as the node number increases. $C(u)$ is the capacitance of node $u$, which grows linearly because the number of edges incident on $u$ grows linearly. Therefore, the delay of the PPUF scales at most as $O(n)$ with the number of circuit nodes. The analysis above establishes a rigorous ESG even under parallel and approximate computing schemes. The ESG can be further amplified by deploying the feedback-loop technique proposed in [5]. Instead of computing the response to one challenge, the verifier presents the PPUF with a challenge $C_1$ and forces the PPUF holder or attacker to determine a sequence of challenge-response pairs $(C_1, R_1), \ldots, (C_k, R_k)$, with $R_k$ being the final response. Each later challenge $C_i$ is determined by the earlier response $R_{i-1}$, where $2 \leq i \leq k$. In this way, the lower bound of the simulation time becomes $O(n^2k^2)$ and the upper bound of the execution delay becomes $O(kn)$; thus, the ESG can be amplified by a factor of $k$.

## 4. PPUF PHYSICAL REALIZATION

Although the ESG is proved rigorously in Section 3, realizing a complete graph and the basic building block on chip is non-trivial. In this section, we describe our strategies for a practical and efficient on-chip realization of the proposed PPUF design.

### 4.1 Complete Crossbar Structure

The completeness of the graph requires each circuit node to be connected with all the other nodes. To realize the complete connection, we propose an $n \times n$ crossbar structure, as shown in Figure 5 (b).
In the crossbar structure, the numbers of horizontal and vertical bars both equal the node number. The $i$th horizontal bar and the $i$th vertical bar are connected directly through a wire; together, these two bars represent one node of the graph. At the intersection of the $i$th vertical bar and the $j$th horizontal bar ($i \neq j$) there is a basic building block. The direction of each building block always points from the vertical bar to the horizontal bar. In this way, each node is connected with all the other nodes through the basic building blocks, which realizes the complete connection of the vertices of the graph. Figure 5 (b) shows an example of the crossbar structure for the graph in Figure 5 (a). To ensure sufficient ESG, the circuit size of the PPUF can be large. Therefore, systematic variation across the die must be taken into consideration. To mitigate its impact, we place the transistors occupying the same positions in the two different networks side by side. In this way, transistors in the same positions can be assumed to experience the same systematic variation. Combined with the differential structure of the proposed PPUF, the impact of systematic variation can be suppressed.

### 4.2 Grid Partition for Control Signal

As mentioned above, the capacity of each edge is controlled by an input signal. Though using one input signal per basic block provides a very large challenge-response space, the number of individual control signals grows quadratically with the node number, which leads to high cost for large designs. To reduce the number of voltage sources, we partition the crossbar structure into $l \times l$ grids, where $l \leq n$, and use one input signal to control all the building blocks within a grid. Note that since we use a relative voltage source in our basic block design, we cannot use the control signal directly as that relative voltage source.
Instead, we can leverage the input signal to control the charging and discharging of multiple capacitors, which serve as the voltage bias for the basic blocks within one grid. In this way, the need for individual independent voltage sources is eliminated. We now analyze the CRPs of the PPUF. For a PPUF to be a strong PUF, one important requirement is a large challenge-response space. In the PPUF circuit, we identify two types of control inputs, denoted type-A inputs and type-B inputs. Type-A inputs are used to choose the source and sink nodes of the network. The chosen source and sink nodes are connected to $V(s)$ and ground, respectively, while all the other nodes are left floating. Therefore, the size of the type-A input space is $n(n - 1)$. Type-B inputs are used to control the maximum current of the circuit units. Since we partition each network into $l \times l$ grids, with one input controlling all the edges within a grid, the size of the type-B challenge space is $2^{l^2}$. However, we argue that not all the CRPs can be used. This is because, to ensure good unpredictability, when a single input bit is flipped the ideal probability for an output bit to flip is 0.5. We approximate this effect by imposing a requirement on the minimum Hamming distance (HD) between different challenges. To be more specific, we select a subset of the whole challenge space such that the minimum HD between any two challenges in the subset is at least $d$. We demonstrate the importance of $d$ experimentally in Section 5. Obtaining the number of challenges that satisfy this requirement is equivalent to constructing binary codes of length $l^2$ and minimum HD $d$. Following [21], the size of the usable type-B challenge space is larger than $2^{l^2}/(\sum_{i=0}^{d-1} \binom{l^2}{i})$. The total number of CRPs ($N_{CRP}$) then satisfies \[ N_{CRP} \geq n(n - 1) \times \frac{2^{l^2}}{\sum_{i=0}^{d-1} \binom{l^2}{i}}. \] Consider a PPUF with \( n = 200 \) circuit nodes.
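The counting argument above can be evaluated numerically; the following is our own sketch of the bound (type-A choices of source/sink times a code-counting lower bound on length-$l^2$ binary codewords with pairwise HD at least $d$; the exact constant depends on which coding bound is applied).

```python
from math import comb

def crp_lower_bound(n, l, d):
    """Lower-bound the CRP count: n(n-1) type-A (source, sink) choices times a
    Gilbert-Varshamov-style lower bound on the number of length-l^2 binary
    codewords whose pairwise Hamming distance is at least d."""
    type_a = n * (n - 1)
    length = l * l
    # Number of codewords is at least 2^(l^2) / sum_{i<d} C(l^2, i).
    type_b = 2 ** length // sum(comb(length, i) for i in range(d))
    return type_a * type_b

bound = crp_lower_bound(n=200, l=15, d=30)  # astronomically large
```

Increasing the required minimum distance $d$ shrinks the usable challenge subset, so there is a trade-off between unpredictability and challenge-space size.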
Assume \( l = 15 \) and \( d = 2l \); then \( N_{CRP} \geq 6.53 \times 10^{43} \). Such a large challenge-response space makes it impossible for an adversary to enumerate all the CRPs exhaustively.

## 5. EXPERIMENTAL RESULTS

In this section, we examine the security properties of the proposed PPUF design. The experiments fall into the following categories: accuracy of the simulation model, asymptotic scaling of the ESG, PPUF output measurability and power consumption, statistical evaluation of PUF metrics, and model-building attack resilience. The current output and execution delay of the PPUF circuit are acquired using SPICE simulation with the 32 nm predictive technology model [22]. The 32 nm technology node is chosen because we want good control over short-channel effects (SCEs) while ensuring sufficient impact of process variation. We assume the threshold voltage variation follows a normal distribution with a standard deviation of 35 mV, a value consistent with the ITRS [23]. Concerning the voltage settings, \( V(s) \) is set to 0.9 V and the offset voltage drop is set to \( V_0 = 0.1 \text{ V} \)–\( 0.12 \text{ V} \). If the input is 0, \( V_{\text{out},0} \), shown in Figure 2 (d), is set to 0.5 V, while if the input is 1, \( V_{\text{out},0} \) is set to 0.67 V. The simulation model is implemented in C++. Because the best known sequential and parallel algorithms are more conceptual, with no packages available, we instead choose the widely used push-relabel and augmenting-path algorithms from the Boost library [24]. Meanwhile, because the statistical evaluation of PPUFs with a large number of nodes is too time consuming, we run most of the tests on relatively small PPUFs and use interpolation to estimate the performance of large PPUFs. The simulation is run on an Intel Xeon 2.93 GHz workstation with 74 GB of memory. We first demonstrate the accuracy of using the max-flow problem as the simulation model. We compare the max-flow results from execution and simulation for PPUFs with different numbers of nodes.
We define the inaccuracy as \( |I_{\text{max-exec}} - I_{\text{max-sim}}| / I_{\text{max-exec}} \). For each PPUF, we run the comparison 100 times and show the average inaccuracy in Figure 6. As we can see, the average inaccuracy is less than 1%. By comparison, the average variation of the maximum current flow is around 9.27% for a 100-node PPUF. This comparison confirms that we can obtain accurate responses from the simulation model. Next, we demonstrate the ESG by comparing the PPUF execution and simulation times. Note that, though it is possible to reduce the simulation time by running on a better machine or using a more efficient algorithm, the lower bound of the simulation time still holds, and a justified ESG can be guaranteed, as we have proved. The scaling of the execution delay and simulation time is shown in Figure 7 (a). The ESG can then be calculated as the difference between the execution delay and the simulation time. We show the ESG with and without the feedback-loop technique in Figure 7 (b). For the feedback-loop technique, we set the loop number equal to the node number of the PPUF. The required ESG is 1 s, which is shown to be a reasonable requirement in [4]. Without the feedback loop, 900 nodes are needed for our PPUF design, while with the feedback-loop technique the required number of nodes drops to 190. Another aspect that we investigate is the measurability of the PPUF output, which serves as a measure of the PPUF's practicality. We measure the average current from the two crossbar structures and their difference, because these impose requirements on the input range and resolution of the comparator. We use extrapolation to infer these two parameters for large designs based on Figure 8. For a 900-node PPUF, the average current is 33.6 \( \mu A \), while the current difference is 2.89 \( \mu A \). These requirements are easy to meet with designs reported in existing papers [25, 26], which demonstrates the practicality of the proposed design.
We also estimate the power consumption of the 900-node PPUF. The power of the two crossbar structures is around 134.4 \( \mu W \). For the current comparator, we use the figure from [25], which is 153 \( \mu W \). Based on Figure 7 (a), the execution delay of a 900-node PPUF is estimated to be 1.0 \( \mu s \). Therefore, the total power consumption during an evaluation is around 287.4 \( \mu W \). We further examine the PPUF performance on several commonly used metrics that quantify the quality of a PUF design: inter-class HD, intra-class HD, randomness, and uniformity [27]. In our experiments, the intra-class HD accounts for a supply voltage variation of 10% and a temperature variation ranging from \(-20^\circ C\) to \(80^\circ C\). We evaluate these metrics for a 40-node and a 100-node PPUF. As Table 1 shows, the average performance of both PPUFs is close to the ideal values. We also evaluate the relation between the output flip probability and the minimum HD (\( d \)) of the PPUF challenges. Changing \( d \) inputs, we check the probability for the output bit to flip. Here we experiment on 100 40-node PPUF circuits with grid size \( l = 8 \). For each PPUF and each minimum HD \( d \), we randomly sample 1000 input vectors. The output flip probability as a function of \( d \) is shown in Figure 9. As we can see, when \( d = 16 \) the average output flip probability approaches 0.5. To evaluate the model-building attack resilience, we leverage both parametric and non-parametric machine learning algorithms: support vector machines (SVMs) [28] and k-nearest neighbors (KNN) [29]. We employ a nonlinear radial basis function (RBF) kernel for the SVM, while for KNN we run a series of empirical tests with \( K = 1, 3, \ldots, 21 \). The final prediction inaccuracy is the minimum over the SVM and KNN tests. The prediction error for the 40-node and 100-node PPUFs is shown in Figure 10.
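The model-building methodology can be sketched in a few lines. This is a toy stand-in for the real attack: a hand-rolled KNN over Hamming distance on a hypothetical challenge-response set (the target function below is invented for the demo), not the authors' SVM/KNN pipeline over measured CRPs.

```python
import random

def knn_error(train, test, k):
    """Fraction of test responses mispredicted by k-nearest-neighbor majority
    voting under Hamming distance between challenge bit-vectors."""
    errors = 0
    for c, r in test:
        ranked = sorted(train, key=lambda tr: sum(a != b for a, b in zip(tr[0], c)))
        votes = sum(resp for _, resp in ranked[:k])
        prediction = 1 if 2 * votes > k else 0
        if prediction != r:
            errors += 1
    return errors / len(test)

# Hypothetical stand-in "PUF": a fixed but (to the attacker) unknown boolean
# function of a 16-bit challenge; a threshold function is used only so the
# protocol is visible on a learnable target.
rng = random.Random(0)
def toy_puf(c):
    return 1 if sum(c) > 8 else 0

challenges = [tuple(rng.randint(0, 1) for _ in range(16)) for _ in range(300)]
data = [(c, toy_puf(c)) for c in challenges]
train, test = data[:200], data[200:]

# Mimic the evaluation protocol: report the minimum error over K = 1, 3, ..., 21.
err = min(knn_error(train, test, k) for k in range(1, 22, 2))
```

The attack trains on observed CRPs and reports the best learner's held-out error; a resilient PUF keeps this error near the 0.5 coin-flip level even as the training set grows.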
Compared with an arbiter PUF of the same input length, our PPUF achieves more than an order of magnitude higher prediction error, which indicates much better model-building attack resilience.

## 6. CONCLUSION

In this paper, we propose a PPUF with a practical ESG in terms of both theoretical soundness and physical practicality. The execution of the PPUF is proved to be equivalent to calculating max-flow in a complete graph, which enables us to use the max-flow problem as the simulation model and to rigorously bound the simulation time. The execution time of the proposed design is also bounded. Therefore, a rigorous ESG follows from the difference in asymptotic scaling. To enable efficient utilization of the PPUF, we develop a PPUF architecture and adopt the SD technique to build the PPUF basic building blocks, mapping the complete graph onto the chip. Our PPUF exhibits good performance, as shown in the experimental results.

Figure 6: Inaccuracy of the simulation model compared with PPUF execution.
Figure 7: Comparison between execution and simulation time: (a) scaling of execution and simulation time and polynomial fitting; (b) scaling of ESG with/without the feedback-loop technique.
Figure 8: Scaling of output current average and difference.
Figure 9: Output bit flip probability with respect to minimum distance of input challenges.
Figure 10: Comparison of prediction error for a 40-node and a 100-node PPUF with an arbiter PUF.

## 7. REFERENCES

[1] B. Gassend, D. Clarke, M. Van Dijk, and S. Devadas, “Silicon physical random functions,” in *CCS*, 2002. [2] J. W. Lee, D. Lim, B. Gassend, G. E. Suh, M. Van Dijk, and S. Devadas, “A technique to build a secret key in integrated circuits for identification and authentication applications,” in *VLSI Circuits*, 2004. [3] M. Gao, K. Liu, and G. Qu, “A highly flexible ring oscillator puf,” in *DAC*, pp. 1–6, 2010. [4] M. Potkonjak and V. Goudar, “Public physical unclonable functions,” *Proceedings of the IEEE*, vol. 102, no. 8, pp.
1142–1156, 2014. [5] U. Rührmair, “Simpl systems: On a public key variant of physical unclonable functions,” *IACR Cryptology ePrint Archive*, vol. 2009, p. 256, 2009. [6] N. Beckmann and M. Potkonjak, “Hardware-based public-key cryptography with public physically unclonable functions,” in *Information Hiding*, pp. 206–220, 2009. [7] M. Potkonjak, S. Meguerdichian, A. Nahajetian, and S. Wei, “Differential public physically unclonable functions: architecture and application,” *Circuits*, pp. 242–247, 2011. [8] J. Rajendran, G. S. Rose, R. Karri, and M. Potkonjak, “Nano-ppuf: A monolithic nano-scale primitive,” in *ISVLSI*, pp. 101–106, 2012. [9] M. M. Azodi and F. Koushanfar, “Hardware authenticated encryption of ipsec,” *Information Forensics and Security, IEEE Transactions on*, vol. 6, no. 3, pp. 1123–1134, 2011. [10] T. Kukliansky and D. Blumenthal, “Characterization of soft errors caused by single event upsets in cmos processes,” *Dependable and Secure Computing, IEEE Transactions on*, vol. 1, no. 2, pp. 128–143, 2004. [11] L. M. Goldschlager, R. A. Shaw, and J. Staples, “The maximum flow problem is log space complete for P,” *Theoretical Computer Science*, vol. 21, no. 1, pp. 105–111, 1982. [12] I. Mehra and R. Weller, “A new continuous-time mac-filter for multi road channel applications at 150 mb/s and beyond,” *Solid-State Circuits, IEEE Journal of*, vol. 32, no. 4, pp. 499–513, 1997. [13] E. Dinic, “Algorithm for solution of a problem of maximum flow in networks with power estimation,” *Doklady Akademii Nauk SSSR*, vol. 194, no. 4, p. 754, 1970. [14] A. V. Goldberg and R. E. Tarjan, “A new approach to the maximum-flow problem,” *Journal of the ACM (JACM)*, vol. 35, no. 4, pp. 921–940, 1988. [15] Y. Shiloach and U. Vishkin, “An O(n² log n) parallel max-flow algorithm,” *Journal of Algorithms*, vol. 3, no. 2, pp. 128–146, 1982. [16] J. A. Kelner, Y. T. Lee, L. Orecchia, and A.
Sidford, “An almost-linear time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations,” in *Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms*, pp. 217–226, SIAM, 2014. [17] A. Yoo, B. Chen, C. Hennessy, W. McLendon, B. Hendrickson, and U. Vahidyan, “A scalable distributed parallel breadth-first search algorithm on bluegene/l,” in *SC*, pp. 25–28, 2005. [18] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber, “Modeling attacks on physical unclonable functions,” in *CCS*, 2010. [19] C. Mead and M. Ismail, *Analog VLSI implementation of neural systems*. Springer Science & Business Media, 2012. [20] T.-M. Lin, C. Mead, et al., “Signal delay in general rc networks,” *TCAD*, vol. 3, no. 4, pp. 331–349, 1984. [21] M. Plotkin, “Binary codes with specified minimum distance,” *Information Theory, IEEE Transactions on*, vol. 6, no. 4, pp. 445–450, 1960. [22] W. Zhao and Y. Cao, “New generation of predictive technology model for sub-45 nm early design exploration,” *TED*, vol. 53, no. 11, pp. 2816–2823, 2006. [23] “International technology roadmap for semiconductors.” http://public.itrs.net. [24] B. Schaling, *The boost C++ libraries*. Boris Schaling, 2011. [25] Y. Sun, Y. Swami, and F. Koushanfar, “Low-cost high speed switched current comparators,” in *MIXDES*, pp. 303–308, 2007. [26] N. K. Chauhan, “A very high speed high resolution current comparator design,” *International Journal of Electric, Electronics Science and Engineering*, vol. 7, no. 11, 2013. [27] A. Maiti, V. Gunreddy, and P. Schaumont, “A systematic method to evaluate and compare the performance of physical unclonable functions,” in *Embedded Systems Design with FPGAs*, pp. 245–267, 2013. [28] J. A. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” *Neural Processing Letters*, vol. 9, no. 3, pp. 293–300, 1999. [29] P. Cunningham and S. J.
Delany, “k-nearest neighbour classifiers,” *Multiple Classifier Systems*, pp. 1–17, 2007.
Generation of galactic disc warps due to intergalactic accretion flows onto the disc

M. López-Corredoira$^{1,2}$, J. Betancort-Rijo$^{2,3}$, and J. E. Beckman$^{2,4}$

$^1$ Astronomisches Institut der Universität Basel, Venusstrasse 7, 4102 Binningen, Switzerland
$^2$ Instituto de Astrofísica de Canarias, 38200 La Laguna, Tenerife, Spain
$^3$ Departamento de Astrofísica, Universidad de La Laguna, Tenerife, Spain
$^4$ Consejo Superior de Investigaciones Científicas (CSIC), Spain

Received 29 March 2001 / Accepted 5 February 2002

Abstract. A new method is developed to calculate the amplitude of galactic warps generated by a torque due to external forces. It takes into account that the warp is produced as a reorientation of the different rings which constitute the disc, so as to compensate the differential precession generated by the external force, yielding a uniform asymptotic precession for all rings. Application of this method to the gravitational tidal forces exerted on the Milky Way by the Magellanic Clouds leads to a very low amplitude for the warp, as has been inferred in previous studies; so tidal forces are unlikely to generate warps, at least in the Milky Way. If the force were due to an extragalactic magnetic field, its intensity would have to be very high, greater than $1\ \mu G$, to generate the observed warps. An alternative hypothesis is explored: the action of the intergalactic medium on the disc. A cup-shaped distortion is expected, due to the transmission of linear momentum; but this effect is small, and the predominant effect turns out to be the transmission of angular momentum, i.e., a torque giving an integral-sign-shaped warp. The torque produced by a flow of velocity $\sim 100 \text{ km s}^{-1}$ and baryon density $\sim 10^{-23} \text{ kg/m}^3$ is enough to generate the observed warps, and this mechanism offers quite a plausible explanation.
First, because this order of accretion rate is inferred from other processes observed in the Galaxy, notably its chemical evolution: the rate of infall of matter onto the Galactic disc that this theory predicts, $\sim 1\ M_\odot/\text{yr}$, agrees with the quantitative requirements of chemical evolution models, resolving key issues, notably the G-dwarf problem. Second, the required density of the intergalactic medium is within the range of values compatible with observation. By this mechanism, we can explain the warp phenomenon in terms of intergalactic accretion flows onto the disc of the galaxy.

Key words. galaxies: structure – Galaxy: structure – galaxies: interactions – galaxies: kinematics and dynamics – galaxies: magnetic fields

1. Introduction

Many spiral galaxies present warps, distortions of a flat disc into an integral-sign shape. The Milky Way is an example (Burton 1988, 1992). Indeed, most of the spiral galaxies for which we have relevant information on their structure (because they are edge-on and nearby) present warps. Sánchez-Saavedra et al. (1990) and Reshetnikov & Combes (1998) show that nearly half of the spiral galaxies in selected samples are warped, and many of the rest might also be warped, since warps in galaxies with low inclination are difficult to detect. The intergalactic magnetic field has been suggested as the cause of galactic warps (Battaner et al. 1990; Battaner et al. 1991; Battaner & Jiménez-Vicente 1998). This is in our opinion a serious proposal (an opinion not held, however, by Binney 2000) which could explain many of the observations, although observational support is still controversial. The postulated alignment of the warps of different galaxies (Battaner et al. 1991) and the differences between the gaseous and stellar warps (Porcel et al. 1997) can have alternative explanations, as we shall see in the present paper.
Gravitational tidal effects on the Milky Way due to the Magellanic Clouds are not enough to account for the observed amplitude of the warp. Hunter & Toomre (1969) calculated that the Clouds, with mass $M_{\text{Mag}} = 10^{10} M_\odot$ and distance $d = 55 \text{ kpc}$, would generate a warp of amplitude less than 117 pc at a radius of 16 kpc in the most favourable case, instead of the observed 2 or 3 kpc. The Magellanic Clouds are near the pericentres of their orbits around the Milky Way (Murai & Fujimoto 1980; Lin & Lynden-Bell 1982), so it is not expected that this amplitude could have been greater due to a closer approach of the Magellanic Clouds in the recent past. Weinberg (1998) proposed a mechanism to amplify the tidal effects due to a satellite by means of an intermediate massive halo around the galactic disc, but García-Ruiz et al. (2000) found that the orientation of the warp is not compatible with its generation by this mechanism if the satellites are the Magellanic Clouds. Quantitatively, a better prospect would be the Sagittarius dwarf galaxy (Ibata & Razoumov 1998): since tidal effects are proportional to $\frac{M_{\text{sat}}}{d_{\text{sat}}^3}$, the galactocentric distance of this dwarf galaxy is only 16 kpc, and its mass is $\sim 10^9 \ M_\odot$ (see Sect. 2.2). Binney’s (1992) review concludes that halos dominate the dynamics of warps, although he also points out that “warps will in the end prove to be valuable probes of cosmic infall and galaxy formation”. In a subsequent paper (Jiang & Binney 1999), cosmic infall is used to explain the reorientation of a massive Galactic halo (8 degrees per Gyr), which produces a warp in the disc. This model requires a halo ten times more massive than the disc and an extremely high accretion rate (3 disc masses in 0.9 Gyr); moreover, in this scenario, after a sufficiently long time the angular momentum of the Galaxy would become parallel to the direction of the infalling matter, causing the warp to decay.
This last problem might be solved by including a prolate halo (Ideta et al. 2000). The general case of warps produced by dynamical friction between a misaligned rotating halo and the disc was also studied by Debattista & Sellwood (1999). Other proposals which invoke a massive halo also have serious flaws or rest on not very plausible assumptions (Nelson & Tremaine 1995; Binney et al. 1998). In spite of the importance which is given to the halo in the dynamics of galaxies, it is quite possible that halos may not even play a major role in the formation of warps. The mass of the halo of the Galaxy is not especially well determined, and the mass fraction in galaxy halos might be small (see, for instance: Nelson 1988; Battaner et al. 1992; Evans 2001 and references therein). We should emphasize here that the presence or absence of a very massive halo will not modify the arguments presented below qualitatively, and would imply only quantitative changes within the same order of magnitude. We will argue in this paper that warps can be generated without massive halos or magnetic fields, although our results are perfectly compatible with the existence of both. We propose here an alternative to the solutions hitherto hypothesized. The mechanism of warp generation explained in this paper avoids the previous difficulties and does not require implausible assumptions. It is even simpler than the hypotheses previously proposed: it requires only the infall of a very low density intergalactic medium onto the disc, without the dynamical intervention of an intermediate halo. It is a very simple idea, but it works well, as will be shown below. An analytical calculation is performed to reduce the problem to a differential equation and some integrals, which are subsequently solved by means of numerical algorithms. A new method is developed in Sect.
2 to calculate the warp parameters from an induced external torque, taking into account the interaction between all the rings of the galactic disc. The external torque induced by the accretion is calculated in Sect. 3, which allows us to estimate the required density of the inflow in Sect. 4.

2. Warp induced by an external torque

In this section, we explain the mechanism of warp generation in a galactic disc due to a net external torque. This general method is applicable to any kind of torque acting on the disc and will be used to derive the properties of the warp induced by an intergalactic accretion flow. In order to describe a warp in a galactic disc, we use the usual model of tilted rings (Rogstad et al. 1974): the disc is taken to be a set of concentric rings, each with radius between $R_i$ and $R_i + dR$, having inclination angle $\alpha_i$ with respect to the central disc and intersecting its plane at two nodes, perpendicular to the points on the ring where the elevation is maximum and minimum. The line joining the nodes (the “line of nodes”) will be taken as common to all the rings, i.e., the nodes of all rings are aligned, as observed in our Galaxy; this is the equilibrium state in our model. Therefore, the only parameters which define the warp are the direction of the line of nodes and a function, $\alpha(R)$, the maximum angular elevation of the ring of radius $R$ with respect to the plane defined by the central disc. A torque applied to a rotating rigid body (in our case, the rigid body is a ring) produces a precession in it, as in the case of the precession of the equinoxes of the Earth. The importance of precession in galaxies was indeed first recognized by Lynden-Bell (1965). This section describes how the differential precession of successive rings generates a warp through their reorientation. The general formalism included here is applicable to any kind of torque induced by an external force on a set of nearly coplanar rings.
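As a toy numerical illustration of this differential precession (ours, in arbitrary units, with made-up precession rates, not the paper's calculation), one can let the normal vectors of two rings precess about a common axis at different rates and watch the ring planes drift apart:

```python
import math

def rotate_about_z(v, angle):
    """Rotate a 3-vector about the z axis (here the precession axis)."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def angle_between(a, b):
    """Angle (rad) between two 3-vectors."""
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(p * p for p in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Two rings, both initially tilted 10 degrees from the reference axis,
# with coincident lines of nodes (the aligned equilibrium state).
tilt = math.radians(10.0)
k_inner = (math.sin(tilt), 0.0, math.cos(tilt))
k_outer = (math.sin(tilt), 0.0, math.cos(tilt))

# Made-up rates: torque/|J| differs between rings, so the nodes precess at
# different speeds (differential precession), in rad per arbitrary time unit.
omega_inner, omega_outer = 0.30, 0.12
t = 5.0
k_inner_t = rotate_about_z(k_inner, omega_inner * t)
k_outer_t = rotate_about_z(k_outer, omega_outer * t)

# The angle between the ring planes grows from zero, i.e. the disc warps,
# until inter-ring torques enforce a common asymptotic precession rate.
misalignment = angle_between(k_inner_t, k_outer_t)
```

In the full treatment the inter-ring gravitational torques oppose this drift, and the warp shape is the configuration in which all rings again precess uniformly.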
In the following section, the calculation of the torque for the specific case of intergalactic accretion flows is derived. Other analytical approaches to the problem have been used previously, for instance the treatment of the warp as a product of the equilibrium of vertical force components, such that $F_{\text{ext},z} = F_{\text{grav},z}$, where $F_{\text{grav}}$ is the gravitational force due to an axisymmetric potential (used in Kahn & Woltjer 1959; Binney 1991; Binney 1992, his Sect. 2: “Naive theory”; Battaner & Jiménez-Vicente 1998). Since the orbit within a ring is tilted, the centrifugal forces cancel both the radial and the vertical components of the gravitational forces, so this approach is not valid. Our approach has some advantages over, and involves finer calculations than, these papers. A mass at the centre of the galaxy cannot exert any torque on the rings. However, the axisymmetric potential of the disc is distorted by the warp (Binney 1992), and these non-axisymmetries are responsible for the warp itself: it is the differential orientation of the successive rings, rather than a point mass placed at the centre, that produces a torque. The approach used by Hunter & Toomre (1969) is much better, although their analysis is very different from the one presented here.

### 2.1. Equations

We consider the disc to be made of material on circular orbits with angular velocity $\omega_{\text{rot}}(R)$. If initially all the orbits are in the same plane (the plane of the disc), perpendicular to a unit vector $k$, then under the influence of an external torque with a non-vanishing component perpendicular to $k$ the orbits will precess. If the precession is not equal for all orbits, they will not remain in the same plane, so that the vector perpendicular to the plane of a given orbit, $k(R,t)$, will be a function of $R$ and of time $t$. In the general case, the component of the external torque along the vector $k(R,t)$ will induce changes in $\omega_{\text{rot}}(R)$.
In addition, other moments of the external force may produce changes in the shape of the orbits. However in the applications relevant to the present problem these changes are negligible and we will take the orbits to remain circular, and with essentially constant $\omega_{\text{rot}}(R)$. In this case, the dynamics of the disc under an external torque may be reduced to an equation in partial derivatives for $k(R,t)$, or in other words a set of an infinite number of ordinary differential equations, (one for each value of $R$). The unit vector $k(R,t)$ is defined by two parameters which are in fact angles. The equations for the evolving system, (throughout this work we use equations valid for an inertial, non-rotating frame, and at no stage do we use a rotating frame) averaged over times longer than the orbital periods are: $$\frac{\text{d}J}{\text{dt}} = \tau(R,t) = \tau_{\text{ext}}(R,t) + \tau_{\text{int}}(R,t),$$ where $J(R,t)$ is the angular momentum for the ring with radius between $R$ and $R+dR$, and $\tau_{\text{ext}}$, $\tau_{\text{int}}$ represent the average torque in the ring due to the external force, and to the gravitational interaction with the rest of the disc respectively. #### 2.1.1. External torque The external torque is: $$\tau_{\text{ext}}(R,t) = \tau_{\text{ext}}[k(R,t),u,R]i_0$$ $$i_0 \equiv j_0 \times k_0,$$ where $u$ is a unit vector in the direction associated with the cause of the external torque, which may be e.g., the direction of the infalling gas flow or the position of a perturbing galaxy, $j_0$ is a unit vector parallel to the projection of $u$ onto the initial plane (perpendicular to $k_0$), $\tau_{\text{ext}}(x,R)$ is a function of the cosine of the angle between $u$ and $k(R,t)$, which characterizes the specific mechanism giving rise to the torque. #### 2.1.2. 
Internal torque

The internal torque is:
$$\tau_{\text{int}}(R,t) = \int_{\text{all rings}} \text{d}\tau_S(R,t)\,\text{d}R\,\text{d}S,$$
where $\text{d}\tau_S(R,t)\,\text{d}R\,\text{d}S$ is the gravitational torque that the ring of radius between $S$ and $S+\text{d}S$ produces on the ring of radius between $R$ and $R+\text{d}R$, which is:
$$\text{d}\tau_S(R,t)\,\text{d}R\,\text{d}S = G\sigma(R)\sigma(S)\,\text{d}R\,\text{d}S$$
$$\times\, \tau_{\text{int}}(k(R,t)\cdot k(S,t),R,S)\; k(R,t) \times k(S,t),$$
where $\tau_{\text{int}}(\cos \alpha_{R,S}, R, S)$ is a well determined function (the same for any mechanism) of the cosine of the angle $\alpha_{R,S}$ between the planes of the two orbits. Explicitly:
$$\text{d}\tau_S(R,t)\,\text{d}R\,\text{d}S = G\sigma(R)\sigma(S)\,\text{d}R\,\text{d}S\, \frac{S^2}{R} \int_0^{2\pi} \text{d}\phi_1 \int_0^{2\pi} \text{d}\phi_2$$
$$\times \left[1+(S/R)^2 - 2(S/R)(\sin \phi_1 \sin \phi_2 + \cos \phi_1 \cos \phi_2 \cos \alpha_{R,S})\right]^{-3/2}$$
$$\times\, [\sin \phi_1 \sin \alpha_{R,S} \cos \phi_2\, i(R,S,t) - \cos \phi_1 \sin \alpha_{R,S} \cos \phi_2\, j(R,S,t)$$
$$+\,(\cos \phi_1 \sin \phi_2 - \cos \phi_2 \sin \phi_1 \cos \alpha_{R,S})\, k(R,t)],$$
$$\alpha_{R,S} \equiv \alpha(S,t) - \alpha(R,t); \quad \alpha(0,t) \equiv 0;$$
$$\cos \alpha_{R,S} \equiv k(R,t) \cdot k(S,t);$$
$$i(R,S,t) \equiv j(R,S,t) \times k(R,t); \quad j(R,S,t) \equiv \frac{k(R,t) \times k(S,t)}{|k(R,t) \times k(S,t)|}.$$
The torque between two rings, $\text{d}\tau_S(R,t)\,\text{d}R\,\text{d}S$, is proportional to $-j(R,S,t)$, and the external torque must be parallel to this. The $z$-component of the external torque, were it present, would produce an acceleration of the rotation of the disc rather than a warp. Other components of the galaxy make a negligible contribution to the torque.
The bulge in practice contributes negligibly to the torque: firstly, because it is more nearly spherical than the rings, and a spherical mass distribution produces no torque; and secondly, because the distance from the bulge to the outer rings is large enough for its effects there to be negligible (the torque is proportional to $S^2/R$ for small $S$, where $S$ is the radius of the structure). Numerical experiments were carried out which confirmed this point. A massive halo, if it exists, would produce an extra internal torque if it were non-spherical. In practice, a massive non-spherical halo would change the amplitude of the warp quantitatively, but qualitatively the mechanism would be the same; the warp amplitude would be reduced, never increased, because a massive halo keeps the rings more tightly bound. No precise calculations including non-spherical massive halos are given here, but their net effect would be to increase the accretion rates inferred below. Only the mass of the halo interior to the ellipsoid with semimajor axis equal to $R$ produces a net torque. Assuming a halo of constant ellipticity, its quadrupole is $\sim \frac{e^2}{15}\,\frac{v_{\text{circ}}^2(R)}{G}\,R\, f_h(R)$, where $v_{\text{circ}}(R)$ is the rotation velocity of the Galaxy, $e$ is the eccentricity of the ellipsoids, the halo mass distribution is derived from a hypothetical flat rotation curve due to a dark halo, and $f_h(R)$ is the fraction of the mass $M(R)$ embedded in the halo. The disc quadrupole component is $\sim 12\pi \sigma(R_0)\, e^{R_0/h_R}\, h_R^3$ for a model such as (15). The disc dominates at $R < 22$ kpc for $e = 0.2$ and $f_h = 0.5$ (Kuijken & Dubinski 1995), and the contribution of the halo for $R < 16$ kpc is less than 40% of the disc contribution. Hence the order of magnitude of the torque is not affected by the inclusion of a massive halo, which would change it by a factor of at most $\sim 1.4$ at $R < 16$ kpc.
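The small-angle behaviour of the inter-ring torque can be checked by direct numerical quadrature of the double azimuthal integral of Sect. 2.1.2. The sketch below is our own illustration (the function name, grid size and dimensionless normalization, with the prefactor $G\sigma(R)\sigma(S)\,\text{d}R\,\text{d}S\,S^2/R$ stripped off, are arbitrary choices); it evaluates the $j$-component for two tilted rings and shows the linear growth with the tilt angle:

```python
import numpy as np

# Numerical check of the inter-ring torque of Sect. 2.1.2: the j-component
# of the double azimuthal integral for two rings of radii R and S tilted by
# alpha_RS.  Function name, grid size and the dimensionless normalization
# (G*sigma*sigma*dR*dS*S^2/R stripped off) are our own illustrative choices.

def torque_j_coefficient(S_over_R, alpha, n=400):
    """Dimensionless j-component of the ring-ring torque double integral."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    P1, P2 = np.meshgrid(phi, phi, indexing="ij")
    q = S_over_R
    # inverse-cube distance kernel between points on the two tilted rings
    kernel = (1.0 + q**2 - 2.0 * q * (np.sin(P1) * np.sin(P2)
              + np.cos(P1) * np.cos(P2) * np.cos(alpha))) ** (-1.5)
    integrand = -np.cos(P1) * np.sin(alpha) * np.cos(P2) * kernel
    return integrand.sum() * (2.0 * np.pi / n) ** 2   # Riemann sum

# The small-angle limit: the torque should be linear in alpha_RS
t1 = torque_j_coefficient(0.5, 0.01)
t2 = torque_j_coefficient(0.5, 0.02)
# t2/t1 is close to 2, and both values are negative (torque along -j)
```

Doubling the tilt doubles the torque, and the sign is that of $-j(R,S,t)$, consistent with the linear limit quoted for $\alpha_{R,S} \to 0$.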
For small angles, in a linear approximation, the resulting proportionality is:
$$\lim_{\alpha_{R,S} \to 0} \text{d}\tau_S(R)\,\text{d}R\,\text{d}S \propto -\alpha_{R,S}\, j(R,S,t).$$

#### 2.1.3. Dynamics and evolution of the warp

The precession velocity of $k(R,t)$ is much smaller than $\omega_{\text{rot}}(R,t)$, so we have
$$J(R,t) \approx I \omega_{\text{rot}}(R,t)\, k(R,t),$$
$$I = 2\pi R^3 \sigma(R)\, \text{d}R,$$
where $I$ is the moment of inertia of the ring and $\sigma(R)$ the surface density of the disc,
$$\sigma(R) = \int_{-\infty}^{\infty} \text{d}z\, \rho_{\text{disc}}(R,z),$$
where $\rho_{\text{disc}}$ is its spatial density, independent of $\phi$ in the assumed axisymmetric case. This approximation is typical, for instance, in planetary precession. Using this expression in (1), and calling $\tau_\parallel$, $\tau_\perp$ the components of $\tau$ along and perpendicular to $k(R,t)$ respectively, we obtain
$$|I \dot{\omega}_{\text{rot}}(R,t)| = |\tau_\parallel| = |k(R,t) \cdot \tau(R,t)|;$$
$$\frac{\text{d} k}{\text{d} t}(R,t) = \frac{\tau_\perp}{I \omega_{\text{rot}}(R,t)}$$
$$= \frac{\tau(R,t) - [k(R,t) \cdot \tau(R,t)]\, k(R,t)}{I \omega_{\text{rot}}(R,t)}.$$
With the set of initial conditions $k(R,t_0) = k_0 \equiv (0,0,1)\ \forall R$, Eq. (10) can be integrated to give the configuration of the orbits, and hence the shape of the disc, at any time. The method described here is not restricted to linear perturbations, but is valid as long as the deformed disc is describable by a set of tilted rings. For very large deformations the orbits could not remain flat and the approximation would fail; this is not the case, however, in the examples given in the present discussion. It must be noted that Eq. (10) for $\frac{\text{d} k}{\text{d} t}$ is a first-order differential equation, which implies that it is the angular velocities, and not their time derivatives, that are determined by the torque. So no matter how large $\left|\frac{\text{d} k}{\text{d} t}\right|$ may be, if the torque vanishes then so, instantly, does $\frac{\text{d} k}{\text{d} t}$.
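The integration of Eq. (10) over a discrete set of rings can be sketched as follows. This is purely illustrative code of our own: the ring inertias, time step and constant external torque along $i_0$ are toy values, not the Galactic model adopted later.

```python
import numpy as np

# Forward-Euler sketch of Eq. (10): each ring normal k(R,t) changes only
# through the torque component perpendicular to it, divided by I*omega_rot.
# Ring inertias, time step and the constant external torque along i0 are
# toy values of our own choosing, purely to illustrate the scheme.

def evolve_rings(k, torque, I_omega, dt, n_steps):
    """k: (n_rings, 3) unit normals; torque(k): (n_rings, 3) array."""
    for _ in range(n_steps):
        tau = torque(k)
        # keep only tau_perp: the parallel part changes omega_rot, not k
        tau_perp = tau - np.sum(tau * k, axis=1, keepdims=True) * k
        k = k + dt * tau_perp / I_omega[:, None]
        k /= np.linalg.norm(k, axis=1, keepdims=True)  # stay a unit vector
    return k

n_rings = 5
k0 = np.tile([0.0, 0.0, 1.0], (n_rings, 1))          # flat initial disc
I_omega = np.linspace(1.0, 5.0, n_rings)             # I(R)*omega_rot(R), toy
tau_ext = lambda k: np.tile([1.0e-3, 0.0, 0.0], (n_rings, 1))
k_final = evolve_rings(k0, tau_ext, I_omega, dt=1.0, n_steps=100)
# rings with smaller I*omega_rot have tilted furthest from k0 = (0,0,1)
```

Because the equation is first order, setting the torque to zero freezes the normals immediately, as noted in the text.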
This is because the energy associated with $\frac{\text{d} k}{\text{d} t}$ is much smaller than the energy corresponding to $\omega_{\text{rot}}$; the full equations are, obviously, second order.

#### 2.1.4. Equilibrium configuration

Here we are interested in the stationary, or equilibrium, configuration that we assume exists, rather than in the evolution. It is clear that in this situation all orbits (or rings) must precess around $u$ at the same constant speed, $\omega_p$, keeping $k(R,t)\cdot u$ constant in time, though different for different orbits. For $\alpha(R)$ and the precession velocity, $\omega_p(R)$, to be independent of time, $\tau(R,t)$ must be perpendicular to $k(R,t)$ and $u$ for any $R$. Since this holds for $\tau_{\text{ext}}(R,t)$, it must also hold for $\tau_{\text{int}}(R,t)$. However, examining the expression for this last vector, for this condition to hold for all orbits they must all intersect along the same straight line, parallel to the unit vector $i$. In this case the position of the orbits is simply given by the function $\alpha(R) \equiv \cos^{-1}[k(R,t) \cdot k_0]$. If the vector $k(R,t)$ precesses around $u$ with angular velocity $\omega_p(R)$, we then have:
$$\omega_p(R)\, u \times k(R,t) = \frac{\text{d} k}{\text{d} t},$$
from which we obtain $\omega_p(R)$ as a function of $\alpha(R)$. Now, since one of the conditions for a stationary configuration is that $\omega_p(R)$ be independent of $R$, setting the derivative of this function with respect to $R$ equal to zero provides the functional equation which we must solve for $\alpha(R)$ to give the shape of the distorted disc,
$$\frac{\text{d} \omega_p(R)}{\text{d} R}[\alpha(R)] = 0.$$
The functional Eq. (12) can be solved numerically using the method explained in Appendix A. From this we can obtain $\alpha(R)$, i.e. the amplitude of the warp as a function of the distance from the Galactic centre, for a given external torque $\tau_{\text{ext}}$.

#### 2.1.5.
A few remarks on the transient regime towards the equilibrium configuration

As explained above, we are not concerned here with the transient regime. However, it is interesting to comment on its qualitative aspects, since they indicate that some of the mechanisms considered here may well be of interest. For a mechanism to be acceptable it should lead to a stationary result in reasonable agreement with the observations, which are assumed to correspond to stationary systems. However, the mechanism is not plausible if the formation of the warp takes too long.

Fig. 1. Spherical triangle which relates the different parameters in the differential precession of two rings.

Translating this to the real case, it implies that the time for the formation of the warp must be of the order of one quarter of the final global precession period. This time could be quite long, and a formation procedure which leads to a shorter time scale would be more plausible. This is the case, for example, when $\tau_{\text{ext}}$ is absorbed by the gas. To simulate the evolution in this case we must consider Eq. (10) for two sets of rings: gas rings, which are affected by both $\tau_{\text{ext}}$ and $\tau_{\text{int}}$, and stellar rings, which are affected only by $\tau_{\text{int}}$. Qualitatively, the gas rings will move initially as indicated above, giving rise to a $\tau_{\text{int}}$ which is perpendicular to $\tau_{\text{ext}}$. The stellar rings, which initially remain in the same plane, start to move under the influence of $\tau_{\text{int}}$, with $\frac{\text{d}k(R,t)}{\text{d}t}$ parallel to the projection of $u$ onto the plane of the ring, so that the resulting nodes are already in their final position. For very large radii $R$, the rings would precess in a direction perpendicular to that of the inner rings, because the torque of the inner stellar rings, which are already warped, dominates that due to the gas. Thus, the line of nodes of these orbits will form some angle (going asymptotically to $\pi/2$) with that of the inner orbits.
The nodes of the outer orbits trace a leading spiral, which seems to agree qualitatively with some observations (Briggs 1990). The gas rings then move rapidly under the much stronger $\tau_{\text{int}}$ generated by the stellar rings, and align generally with them, although the final warp will be slightly different for the gas rings and the stellar rings. The $\tau_{\text{ext}}$ experienced by the gas is thus transferred gravitationally almost entirely (but not quite) to the stellar ring at the same radius; the relative displacement between the two is probably smaller than the thickness of the disc, and will not be considered in the present paper. As we pointed out earlier, Eq. (10) is first order, so if, when the orbits arrive at their stationary positions, the torque takes its stationary value (which happens when all orbits reach their stationary positions simultaneously), the line of nodes of every orbit will stop precessing in the mean plane defined by all the other orbits, and stationarity is achieved. This does not seem likely, however, and a more probable outcome is that the line of nodes oscillates around the equilibrium position. Friction could, perhaps, damp the amplitude of these oscillations. There is no obstacle in principle to these oscillations being eliminated without dissipation (since they contain no energy), but this seems unlikely because of the degree of conspiracy between the orbits that it would require. Thus our stationary solution will correspond to the statistical mean, and at any time minor wiggles (in $\alpha(R)$) will be superimposed on it. The comments on warp formation presented here are a qualitative anticipation of future detailed work and do not affect any of the conclusions of the present paper.

### 2.2. Example of application: gravitational torque

Once we have a general formalism in which a warp is generated when a torque is applied to a galactic disc, we can analyze possible mechanisms for producing that torque.
A conventional example is the torque generated by gravitational tidal effects. In Fig. 2 we show the Galactic disc centred at $O$ and a point mass at $P$ such that $\overrightarrow{OP} = d\, \mathbf{e}_P$. In the case of the Milky Way disc, the Sun is situated in the disc at $R = R_0$, $\phi = 180^\circ$. The unit vector can be expressed as
$$\mathbf{e}_P = \cos \phi_P \cos \theta_P\, \hat{\mathbf{i}} + \sin \phi_P \cos \theta_P\, \hat{\mathbf{j}} + \sin \theta_P\, \hat{\mathbf{k}}.$$ (13)
It is well known that the torque produced by a point mass $m_P$ on any axisymmetric body is, considering only the quadrupolar term,
$$\tau_{\text{grav}} \approx \frac{3G}{2d^3} m_P (I_3 - I_1) \cos \theta_P \sin \theta_P$$ (14)
$$\times\, (\sin \phi_P\, \hat{\mathbf{i}} - \cos \phi_P\, \hat{\mathbf{j}}),$$
where $I_1$ and $I_3$ are the components of the inertia tensor (in this case, of the disc). The solution of (12) with $\tau_{\text{ext}} = \tau_{\text{grav}}$, by means of the numerical method explained in Appendix A, gives $\alpha(R)$ for the Galactic warp. It depends on the adopted Galactic model. For the Milky Way, a suitable model is the following. The surface density of the disc, (9), is taken as
$$\sigma(R) = \begin{cases} \sigma(R_0)e^{-\frac{R-R_0}{h_R}}, & R \leq 3R_0 \\ 0, & R > 3R_0 \end{cases};$$ (15)
where the local surface density is $\sigma(R_0) = 48\ M_\odot\,\text{pc}^{-2}$ (Kuijken & Gilmore 1989), the distance to the Galactic centre is $R_0 = 7.9\ \text{kpc}$ (López-Corredoira et al. 2000), and the scale length is $h_R = 3.5\ \text{kpc}$ (Bahcall & Soneira 1980). We truncate the exponential disc at $3R_0$. The Galactic disc undoubtedly extends to larger radii, but the effects of those outer rings can be considered negligible in the present dynamical context.
The rotation velocity is taken as
$$v_{\text{rot}}(R) = R\,\omega_{\text{rot}}(R)$$ (16)
$$= \begin{cases} 200\ \text{km s}^{-1}, & R \leq 15\ \text{kpc} \\ 200\sqrt{15\ \text{kpc}/R}\ \text{km s}^{-1}, & R > 15\ \text{kpc} \end{cases}$$
(Honma & Sofue 1996). Outside the stellar truncation radius ($\sim 15\ \text{kpc}$), a Keplerian law ($v_{\text{rot}} \propto R^{-1/2}$) is followed, which means that we neglect dark matter contributions at larger radii (according to Honma & Sofue 1996, there is essentially no dark matter beyond 15 kpc). For the Large Magellanic Cloud, we adopt $\theta_P = -33^\circ$, $m_P = 10^{10}\ M_\odot$ and $d = 55\ \text{kpc}$, the same values that Hunter & Toomre (1969) adopted. The precession we obtain is $\omega_p = 2.7 \times 10^{-19}\ \text{rad s}^{-1}$, and the amplitude of the warp is shown in Fig. 3, where $\alpha(R) \approx |z|/R$. The functional shape of $\alpha(R)$ resembles that of the observational data, but the amplitude is very different: a factor of 20 or 30 separates the two curves, in agreement with Hunter & Toomre (1969). This means that the mass of the Magellanic Clouds would have to be around 2 or $3 \times 10^{11}\ M_\odot$ for the Milky Way warp to be a product of the gravitational interaction with the Clouds. The calculation of the amplitude for the Sagittarius dwarf galaxy ($m_P = 10^9\ M_\odot$, $d = 16\ \text{kpc}$; Ibata & Razoumov 1998) gives an amplitude 4 or 5 times larger, which is still not enough to produce the warp; nor is the predicted direction of the warp in agreement with the observations.

### 2.3. Example of application: magnetic torque

Another example can be calculated: the case of magnetic forces (Battaner et al.
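The order of magnitude of the tidal torque in Eq. (14) can be evaluated directly from the adopted disc model (15) and the LMC values above. The short sketch below is our own cross-check: it assumes a razor-thin disc, for which $I_1 = I_3/2$, and works in $M_\odot$, pc and km s$^{-1}$ units; the warp amplitude itself still requires solving the functional Eq. (12).

```python
import numpy as np

# Order-of-magnitude check of the tidal torque, Eq. (14), for the LMC
# values quoted in the text and the disc model of Eq. (15).  The razor-thin
# assumption I1 = I3/2 and the unit system (Msun, pc, km/s) are our own.

G = 4.301e-3                         # G in pc (km/s)^2 / Msun
R0, hR, sigma0 = 7.9e3, 3.5e3, 48.0  # pc, pc, Msun/pc^2 (Sect. 2.2 model)

R = np.linspace(0.0, 3.0 * R0, 20001)
f = 2.0 * np.pi * sigma0 * np.exp(-(R - R0) / hR) * R**3
I3 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(R))   # Msun pc^2 (trapezoid)
I1 = I3 / 2.0                                      # razor-thin disc

mP, d, thetaP = 1.0e10, 55.0e3, np.radians(-33.0)  # LMC values from the text
tau = abs(3.0 * G / (2.0 * d**3) * mP * (I3 - I1)
          * np.cos(thetaP) * np.sin(thetaP))       # Msun pc (km/s)^2
print(f"I3 ~ {I3:.2e} Msun pc^2, |tau_grav| ~ {tau:.2e} Msun pc (km/s)^2")
```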
1990; Battaner & Jiménez-Vicente 1998), in which the force per unit volume producing the torque, for a field inclined at $\theta_P = 45^\circ$, is:
$$F = \frac{B^2 \sin(2(\theta_P - \alpha(R)))}{16\pi L} \sin(\phi - \phi_P)\, k,$$
where $L = 1$ kpc is the adopted value of the characteristic length over which galactic regions dominated by the galactic magnetic field become dominated by the extragalactic magnetic field. Hence, the torque on a ring between $R$ and $R + \text{d}R$ is:
$$\tau\, \text{d}R = 2h_z R\, \text{d}R \int_0^{2\pi} \text{d}\phi\ r \times F$$
$$= \frac{B^2 R^2 h_z \sin(2(\theta_P - \alpha(R)))\, \text{d}R}{8L} (\sin \phi_P\, i - \cos \phi_P\, j),$$
where $h_z = 0.1$ kpc is the scale height of the disc. With the same Galactic model as in the previous subsection, we obtain that the amplitude of the magnetic field must be
$$B \sim 1.4\ \mu\text{G}$$
to match the observational data of Burton (1988). The precession angular velocity is $\omega_p = 5.1 \times 10^{-18}$ rad s$^{-1}$. The agreement between the curves in Fig. 4 is good, as was noted by Battaner & Jiménez-Vicente (1998). The intensity of the field required to produce the warp is perhaps somewhat high, but we will not discuss here the possibility of the existence of such a field. Kronberg (1994), for instance, argues that the value of the intergalactic magnetic field can be as high as this. We remark only that the possibility of warps generated by extragalactic magnetic fields should not be taken lightly, although the high value of the required field is perhaps questionable. This value is in agreement with that of Battaner & Jiménez-Vicente (1998), although their calculation method is much simpler. Binney (1991) obtained a required value for $B$ higher by an order of magnitude, in part due to his adoption of an expression for the magnetic force different from expression (17), equivalent to a much lower value of $L$.
He also adopted an insufficiently precise approximation: an axisymmetric potential not distorted by the warp.

## 3. Generation of warps by an intergalactic flow

### 3.1. Accretion of matter onto the galactic disc and warps

The idea we are suggesting here is not totally new. It is based on ideas about the infall of intergalactic matter previously explored by other authors (Kahn & Woltjer 1959; Binney & May 1986; Ostriker & Binney 1989; Jiang & Binney 1999). Kahn & Woltjer (1959) first suggested that an intergalactic matter flow could bring about the warp; however, their rough calculations in fact derive a pressure gradient transmitted to the disc by means of a hypothetical halo compressed by a subsonic massive wind, and do not specify the mechanism generating the warp. The representation they use is very simple, and quite different from that presented here. The dominant response to the intergalactic-medium ram pressure of a disc moving through it would be axisymmetric, taking the form of a rim rather than a warp (Binney 1992), although Kahn & Woltjer's estimates of the amplitude in fact agree with our values, and their work was a first indication of a possible mechanism to explain warps. Ostriker & Binney's (1989) suggestion of how cosmic infall can bring about warped galactic discs is rather qualitative, and they did not proceed to a quantitative analysis. Subsequent papers (Binney et al. 1998; Jiang & Binney 1999) use a model based on infall onto the Galactic halo and do not consider direct infall onto the disc. The latter is the novelty here: we take the idea of cosmic infall and explore the torque it creates when infalling matter collides with the disc, transmitting its angular momentum, but we consider the interaction with the disc directly. We do not invoke the idea of a massive halo modulating the dynamical effects on the disc. The idea of cosmic infall has been considered in the context of CDM theory.
Ryden & Gunn (1987) and Ryden (1988) have shown that half of the total angular momentum of any galaxy was contributed by material that fell in over the last third of a Hubble time. Galaxy formation theories require this infall. Furthermore, many observations imply that there must be an infall of material onto galaxies (Binney 2000): Local Group members approach each other, the high velocity clouds around the Galaxy have on average a net negative velocity (Blitz et al. 1999; Braun & Burton 1999), and others. There are good reasons to believe that the infalling baryonic matter is accreted directly by the disc rather than by the halo (non-baryonic matter can escape completely or be captured by the halo). The principal supporting arguments are those based on the observed chemical evolution (Ostriker & Binney 1989; López-Corredoira et al. 1999). Significant accretion of metal-poor gas is necessary to explain the observations concerning star formation and the metallicity distribution in the Galactic disc, often termed the G-dwarf problem (Tinsley 1980), and the details of the time-dependent evolution of individual metals (Casuso & Beckman 1997, 2000). Recent results implying that this accretion has been constant, or has even increased, during the disc lifetime are found in Rocha-Pinto et al. (2000). Moreover, it is clear that a halo should trap accreted matter with low efficiency, since its mean baryonic density is very low. In a more general context, it should not be thought that the general secular infall of matter is the only factor. Any cloud in the intergalactic medium whose orbit intersects the galaxy and which is accreted by the disc provides a torque through the interchange of its angular momentum. For instance, the exchange of matter between two galaxies can supply accretable intergalactic matter: an intergalactic flow is produced between the two galaxies. The HVCs (High Velocity Clouds) have been suggested (Blitz et al. 1999; López-Corredoira et al. 1999; Wakker et al.
1999a; Binney 2000) to be observable evidence of the material which is continuously falling onto the Galactic disc.

### 3.2. Description of the flow

An intergalactic flow can be described as a beam of particles coming from an infinite distance towards the galactic disc with velocity $v_0$. Each particle of the beam follows a trajectory which is not a straight line, owing to the gravitational attraction of the galaxy, until it reaches the galactic plane ($z = 0$). As it intersects the plane, it collides with the gas of the galaxy and remains trapped in the disc. A torque results from the angular momentum contributed by this particle to the disc. The net torque over a ring of the disc with radius between $R$ and $R + \text{d}R$ will be the sum of the torques produced by all the particles of the beam which collide with the galactic disc at distances between $R$ and $R + \text{d}R$ from its centre. The total angular momentum with respect to the centre of the galaxy transported by a cylindrical beam of these characteristics, with axis crossing the centre of the galaxy, is zero. Is the total angular momentum deposited in each ring also zero? The answer to this question is “no”, and this is the key to warp generation by an intergalactic flow. The net angular momentum transferred to each ring is non-zero because, in the general case where the net flow is neither perpendicular to the plane of the galaxy nor isotropic, the particles which fall onto a given ring do not come from a single cylindrical shell of the flow. The particles are redistributed, and so is their angular momentum, by the gravitational interaction with the galaxy. Therefore, the impact parameter is not the same for all the particles which cut a given ring (if it were the same, the net contribution of the angular momentum would indeed be zero); this will be expressed analytically in the equations given below in the present section.
In particular, the variation of the impact parameter for a ring as a function of azimuthal angle is expressed in Eq. (29). The calculations in the subsections below are, perhaps, a little complicated to follow. However, the qualitative description of the physical system whose variables we calculate is not difficult to understand, and Fig. 5 gives a pictorial description which should be helpful. There is a net torque because the set of particles which fall into the ring between radii $R$ and $R + \text{d}R$ comes from a non-circular ring (which is not in fact elliptical). The transformation of this non-circular ring into the circular ring in the disc plane is effected by gravity. The velocity of the impacting particles varies with azimuth in the galactic-plane ring. We wish to calculate the total angular momentum transported by the matter in the non-circular ring which falls into the circular ring. Conservation of angular momentum implies that the angular momentum transmitted to the circular ring must be the same as that in the non-circular ring. Nevertheless, the calculation of this angular momentum is not easy, because the geometrical shape of the non-circular ring is not easy to describe analytically; this gives rise to the rather tedious calculations in the subsections below. Once a torque is produced over each ring, a warp is generated to compensate the differential precession of the rings, as explained in Sect. 2 for any kind of torque due to external forces applied over the disc. Now, in this section, we carry out the calculation of the torque due to the intergalactic flow.

### 3.3. Torque due to collision with a particle

In Fig. 6, we give a graphical representation of the galactic disc centred at $O$, and the trajectory of a particle which intersects the disc.
A particle comes from an infinite distance with velocity $v_0$ in the normalized direction $e_0$ given by the angles $\phi_0$, $\theta_0$ in spherical coordinates (in Fig. 6, $\theta_0 < 0$), where
$$\mathbf{v}_0 = v_0 (\cos \phi_0 \cos \theta_0\, \hat{\mathbf{i}} + \sin \phi_0 \cos \theta_0\, \hat{\mathbf{j}} + \sin \theta_0\, \hat{\mathbf{k}}),$$
and it follows a trajectory which crosses the Galactic disc at some point $Q$ whose distance from the centre is $R$ and whose angle with respect to the $x$-axis (defined as the line “Galactic centre-Sun”, with $x$ negative towards the Sun) is $\phi$, i.e.
$$\overrightarrow{OQ} = R \cos \phi\ \hat{\mathbf{i}} + R \sin \phi\ \hat{\mathbf{j}}.$$
A minor correction for small warp amplitudes, which we also take into account, is the variation of the angle of the flow in the warped rings. We must bear in mind that
$$\theta_0(R) = \theta_0(R=0) - \alpha(R).$$
The same trajectory is represented in Fig. 7 in the plane of the orbit $x'y'$. It is assumed that the trajectory is a hyperbola typical of a two-body gravitating system in which the heavier body is the galaxy, of mass $M_{\text{gal}}$, concentrated at the point $O$. Some minor effects due to the dispersion of the mass throughout the disc are expected, but they are negligible if $R$ is larger than several disc scale lengths (i.e. greater than $R \approx 10$ kpc). The orbit is a hyperbola because the energy of the system is positive, since the velocity at infinite distance, $|v_0|$, is greater than zero. Therefore, the plane of the orbit is determined by the independent vectors $\mathbf{r}$ and $\mathbf{e}_0$, and the equation of the orbit in the $x'y'$ system is ($r$ and $\beta$ are the polar coordinates in this system; see Fig.
7):
$$\frac{\epsilon}{A\, r} = 1 + \epsilon \cos \beta,$$
where $\epsilon$ is the eccentricity of the orbit,
$$\epsilon = \sqrt{1 + \left( \frac{b\, v_0^2}{G\, M_{\text{gal}}} \right)^2},$$
and
$$A = \sqrt{\left( \frac{G\, M_{\text{gal}}}{v_0^2 b^2} \right)^2 + \frac{1}{b^2}} = \frac{v_0^2}{G M_{\text{gal}}}\, \frac{\epsilon}{\epsilon^2 - 1}.$$
The impact parameter is $b$, and the net asymptotic angular deviation $\gamma$ (see Fig. 7) is given by
$$\tan \frac{\gamma}{2} = \frac{1}{\sqrt{\epsilon^2 - 1}} = \frac{G M_{\text{gal}}}{b v_0^2}.$$
The determination of the point of intersection of the orbit with the disc of the galaxy is a simple trigonometric problem. From the triangle shown in Fig. 7 we can derive:
$$\beta_Q = \frac{\pi}{2} + \frac{\gamma}{2} - \cos^{-1}(e_{0Q}),$$
$$e_{0Q} = \cos(\mathbf{v}_0, \mathbf{r}_Q) = \frac{\mathbf{v}_0 \cdot \mathbf{r}_Q}{v_0\, r_Q} = \cos(\theta_0) \cos(\phi_0 - \phi).$$
From these expressions, together with (23), (24), (25) and (26), the radial galactocentric distance $R$ of the point of intersection of the orbit with the Galactic plane is:
$$R = r_Q = \frac{b^2 v_0^2}{b\, v_0^2 \sqrt{1 - e_{0Q}^2} + G\, M_{\text{gal}}(1 - e_{0Q})}.$$
The angular momentum of a particle with mass $\text{d}m$ is constant along its trajectory,
$$J = \text{d}m\ v_0 b\, \frac{\mathbf{r}_Q \times \mathbf{v}_0}{r_Q\, v_0\, |\sin(\mathbf{v}_0, \mathbf{r}_Q)|},$$ (30)
and the torque produced by a particle which transmits its angular momentum to the disc is:
$$\tau = \frac{\text{d}J}{\text{d}t} = \frac{\text{d}m}{\text{d}t}\, v_0 b\, \frac{\mathbf{r}_Q \times \mathbf{v}_0}{r_Q\, v_0 \sqrt{1 - e_{0Q}^2}} = \frac{\text{d}m}{\text{d}t} v_0 b\, (1 - e_{0Q}^2)^{-1/2}$$
$$\times\, [\sin \phi \sin \theta_0\, i - \cos \phi \sin \theta_0\, j + \cos \theta_0 \sin(\phi_0 - \phi)\, k].$$ (31)
The new material is stopped by friction with the disc and its angular momentum is added to that of the ring. Note that although the ring is not really a rigid body, it behaves like one.
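Equations (24)-(29) are straightforward to evaluate. The helper below (our own code; the unit system $M_\odot$, pc, km s$^{-1}$ and the example values are purely illustrative) returns the eccentricity, the deflection angle and the crossing radius for a given impact parameter and asymptotic velocity:

```python
import numpy as np

# Helper for the two-body quantities of Eqs. (24)-(29): eccentricity,
# asymptotic deflection and galactocentric crossing radius of an infalling
# particle.  Units (Msun, pc, km/s), function name and the example values
# are our own choices, purely illustrative.

G = 4.301e-3   # pc (km/s)^2 / Msun

def crossing_radius(b, v0, M_gal, e0Q):
    """b: impact parameter [pc]; v0: asymptotic speed [km/s];
    e0Q = cos(theta0) * cos(phi0 - phi), the direction cosine of Eq. (28)."""
    eps = np.sqrt(1.0 + (b * v0**2 / (G * M_gal))**2)        # Eq. (24)
    gamma = 2.0 * np.arctan(G * M_gal / (b * v0**2))         # Eq. (26)
    R = b**2 * v0**2 / (b * v0**2 * np.sqrt(1.0 - e0Q**2)
                        + G * M_gal * (1.0 - e0Q))           # Eq. (29)
    return eps, gamma, R

# Example: b = 20 kpc, v0 = 100 km/s, M_gal = 2e11 Msun, e0Q = 0
eps, gamma, R = crossing_radius(2.0e4, 100.0, 2.0e11, 0.0)
# strong deflection: the particle crosses the plane well inside its
# impact parameter
```

As a consistency check, in the limit $M_{\text{gal}} \to 0$ the trajectory is a straight line and Eq. (29) reduces to $R = b/\sqrt{1 - e_{0Q}^2}$.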
A single particle orbiting the Galactic centre can indeed mimic the dynamics of a rigid body: it carries the increase in angular momentum, and its orbit is distorted according to the applied torque. The flow is stopped by friction with the gas, so the angular momentum is, at first, transmitted to the gas. This does not mean that the gas disc warps while the stellar disc does not: the stars in a ring feel the gravitational torque due to the gas rings and are dragged by them. There may be some difference between the stellar warp and the gas warp due to this lag in the dynamics, but the difference is likely to be small. If the stellar disc were demonstrated to be less warped than the gas disc, it would be evidence in favour of this theory, of the theory of the intergalactic magnetic field as the generator of the warp (Porcel et al. 1997), or indeed of any theory in which the external torque directly affects the gas disc rather than the stellar disc.

### 3.4. Total torque due to collision with a particle beam

Expression (31) gives the torque produced by a particle which falls onto the disc with an impact parameter $b$ and intersects the disc at $r_Q$. If we want to know the total torque produced by all the particles which come in with any $b$ and intersect the disc at a distance between $R$ and $R + \text{d}R$, with any azimuth $\phi$, we have to integrate over all the particles of the beam which fall in this ring. The whole beam is then represented by varying, in the plane perpendicular to $v_0$, the initial (at infinite distance) position of the infalling particle, whose polar coordinates in that plane are $b$ and $\phi_b$ (see Fig. 8).
Thus, the total torque exerted on the ring with radius between $R$ and $R + \text{d}R$ is
$$\tau(R)\,\text{d}R = \int_0^{2\pi} \text{d}\phi_b \int_{0;\ R < r_Q < R + \text{d}R}^\infty \text{d}b\, b$$
$$\times\, \frac{\text{d}m}{\text{d}t}\, v_0 b\, \left(1 - e_{0Q}^2(\phi)\right)^{-1/2} [\sin \phi \sin \theta_0\, i$$
$$-\, \cos \phi \sin \theta_0\, j + \cos \theta_0 \sin(\phi_0 - \phi)\, k],$$ (32)
$$\text{d}m = \rho_b\ v_0\, \text{d}t,$$ (33)
where $\rho_b$ is the density of baryonic matter in the particle beam, assumed to be independent of $b$ and $\phi_b$. Any non-baryonic matter in the inflow would not be trapped in the disc, so it should not be counted in the total mass of the flow for the purpose of computing the torque. In this notation, $\tau$ stands for the torque per unit galactocentric radial length. Note that $\phi_b$ is the polar angle in the plane perpendicular to $v_0$ and is different from the polar angle $\phi$ in the Galactic disc. The relationship between the two is:
$$\cot(\phi_b - \phi_{bn}) = \frac{\cot(\phi - \phi_n)}{|\sin \theta_0|},$$ (34)
according to the general formula relating the angles in the spherical triangle of Fig. 8. Here $\phi_n$ and $\phi_{bn}$ are the polar angles, in the galactic disc and in the plane perpendicular to $v_0$ respectively, corresponding to the node where the two planes intersect.
We can choose for convenience the origin of the angles $\phi_b$ such that
$$\phi_{bn} = 0,$$ (35)
and the angle of the line of nodes in the galactic disc is then
$$\phi_n = \phi_0 \pm \pi/2.$$ (36)
We change the variables of integration in expression (32) to $R$ and $\phi$ (the Jacobian of the transformation is $\frac{\partial \phi_b}{\partial \phi} \frac{\partial b(R, \phi)}{\partial R}$; we neglect the term in $\frac{\partial \phi_b}{\partial R}$ arising from the dependence $\theta_0(R)$) and obtain
$$\tau(R) = \frac{\rho_b v_0^2}{|\sin \theta_0|} \int_0^{2\pi} \frac{\text{d}\phi\, \left(1 - e_{0Q}^2(\phi)\right)^{-1/2} b(R,\phi)^2\, \frac{\partial b(R,\phi)}{\partial R}}{1 + \sin^2(\phi_0 - \phi)(\sin^{-2}\theta_0 - 1)}$$
$$\times\, [\sin \phi \sin \theta_0\, i - \cos \phi \sin \theta_0\, j + \cos \theta_0 \sin(\phi_0 - \phi)\, k],$$ (37)
where $b(R, \phi)$ is derived from (29):
$$b(R, \phi) = \frac{1}{2} R \sqrt{1 - e_{0Q}^2(\phi)}$$
$$+\, \sqrt{\frac{1}{4} R^2 \left(1 - e_{0Q}^2(\phi)\right) + R\, G\, M_{\text{gal}}\, v_0^{-2} \left(1 - e_{0Q}(\phi)\right)}.$$ (38)
If we make a further change of variable in the integral, substituting $x = e_{0Q}(\phi)$ for $\phi$ according to (28), and simplify the expression (some terms cancel because of the antisymmetry between the intervals $(\phi - \phi_0) \in (0, \pi)$ and $(\phi - \phi_0) \in (\pi, 2\pi)$), we obtain
$$\tau(R) = \frac{2 \rho_b v_0^2 \sin \theta_0}{|\sin \theta_0| \cos^3 \theta_0} (\sin \phi_0\, i - \cos \phi_0\, j)$$
$$\times \int_{-\cos \theta_0}^{\cos \theta_0} \text{d}x\ x^2\, \frac{\partial b(R, x)}{\partial R}\, b(R, x)^2$$
$$\times\, \frac{1}{\sqrt{1 - x^2 / \cos^2 \theta_0}\, [1 + (1 - x^2 / \cos^2 \theta_0)(\sin^{-2} \theta_0 - 1)]}.$$ (39)
The integral is positive. The direction of the torque is $\pm (\sin \phi_0\, i - \cos \phi_0\, j)$: the sign is “+” when $\theta_0 < 0$ and “$-$” when $\theta_0 > 0$. That is, the torque lies in the disc and is perpendicular to $\mathbf{e}_0$.
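The torque amplitude of Eq. (39) can be evaluated by direct quadrature of $b(R, x)$ from Eq. (38). The sketch below is our own illustration: the parameter values are arbitrary, the radial derivative is taken by finite differences, and the integrable endpoint singularity at $x = \pm\cos\theta_0$ is handled simply by trimming the endpoints.

```python
import numpy as np

# Direct quadrature of the torque amplitude of Eq. (39), with b(R, x) from
# Eq. (38).  Parameter values are arbitrary illustrations; the radial
# derivative is a finite difference and the integrable endpoint singularity
# at x = +/- cos(theta0) is handled by trimming the endpoints.

G = 4.301e-3                         # pc (km/s)^2 / Msun

def b_of(R, x, v0, M_gal):           # Eq. (38)
    s2 = 1.0 - x**2
    return (0.5 * R * np.sqrt(s2)
            + np.sqrt(0.25 * R**2 * s2 + R * G * M_gal / v0**2 * (1.0 - x)))

def torque_amplitude(R, v0, M_gal, theta0, rho_b, n=2000):
    c = np.cos(theta0)
    x = np.linspace(-c, c, n + 2)[1:-1]              # open interval
    b = b_of(R, x, v0, M_gal)
    dbdR = (b_of(1.001 * R, x, v0, M_gal) - b) / (0.001 * R)
    u2 = 1.0 - x**2 / c**2
    weight = 1.0 / (np.sqrt(u2) * (1.0 + u2 * (1.0 / np.sin(theta0)**2 - 1.0)))
    f = x**2 * dbdR * b**2 * weight
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid
    return abs(2.0 * rho_b * v0**2 / c**3 * integral)

tau_100 = torque_amplitude(2.0e4, 100.0, 2.0e11, np.radians(-30.0), 1.0)
tau_50 = torque_amplitude(2.0e4, 50.0, 2.0e11, np.radians(-30.0), 1.0)
# the slower beam exerts the larger torque, as in the limit of Eq. (40)
```

Evaluating the amplitude at two beam velocities illustrates the trend discussed below: a slower beam exerts a stronger torque.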
The direction of the torque produced by the infall of a particle flow is the same as that of the gravitational torque when \( \mathbf{e}_0 = \mathbf{e}_P \), although the amplitude is rather different. Note, for instance, that the amplitude in (39) does not depend on the disc density. Therefore, the effect produced by the infall of the particle beam is similar to that due to gravitational effects. The torques between rings do depend on the disc densities (expression (5) is proportional to \( \sigma(R) \sigma(S) \)). This explains why the disc warps significantly only where its surface density is low (Ostriker & Binney 1989) which, in practice, means at its outer edge. For the inner disc, \( \sigma \) is high enough to provide strong torques between two rings with a small separation angle \( \alpha_{R,S} \), which can compensate the differences of precession with respect to the average; outer rings, however, must separate further to compensate these differences. Note that there is no \( z \)-component of the torque. Taking into account that the total mass of the ring increases, the angular velocity of rotation should decrease; therefore, the radius of the orbit will be reduced (the increasing mass of the inner galaxy will also tend to reduce the radius) and the galaxy will concentrate further material in the inner regions. We will not study these aspects further in the present paper. The limit of low initial velocity \( \left( \frac{R v_0^2 (1 + e_{0Q})}{4 G\, M_{\text{gal}}} \ll 1 \right) \) gives the proportionality \[ \lim_{\left( \frac{R v_0^2 (1 + e_{0Q})}{4 G\, M_{\text{gal}}} \right) \to 0} \tau(R) \propto \rho_b\, (G\, M_{\text{gal}})^{3/2}\, v_0^{-1}\, R^{1/2}\, (\sin \phi_0\, i - \cos \phi_0\, j). \] (40) This means that a low-velocity beam provides a stronger torque. The effect arises from the increased curvature of the hyperbolic trajectories at low velocities.
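The low-velocity behaviour can be checked numerically. The sketch below (a minimal Python illustration, not part of the paper's calculation: it assumes the form of $b(R,x)$ given above, $M_{\text{gal}} = 2 \times 10^{11}\, M_\odot$ and $\theta_0 = 45^\circ$ as in Sect. 4, and an arbitrary $\rho_b$, which only sets the overall scale) evaluates the integral of (39) by a midpoint rule after substituting $x = \cos\theta_0 \sin u$, which absorbs the integrable endpoint factor $1/\sqrt{1 - x^2/\cos^2\theta_0}$. In the deep low-velocity regime, halving $v_0$ doubles the torque:

```python
import math

# Numerical check of the torque integral (39) and its low-velocity
# behaviour.  M_gal = 2e11 Msun and theta_0 = 45 deg are illustrative
# values (as used in Sect. 4); rho_b only sets the overall scale.
G = 6.674e-11            # m^3 kg^-1 s^-2
MSUN = 1.989e30          # kg
KPC = 3.086e19           # m
GM = G * 2.0e11 * MSUN   # G M_gal

def b_impact(R, x, v0):
    """Impact parameter b(R, x) of the hyperbola crossing the disc at
    radius R, with x = e_0Q."""
    return (0.5 * R * math.sqrt(1.0 - x * x)
            + math.sqrt(0.25 * R * R * (1.0 - x * x)
                        + R * GM / v0**2 * (1.0 - x)))

def torque(R, v0, theta0, rho_b=1.0e-25, n=4000):
    """|tau(R)| from Eq. (39).  The substitution x = cos(theta0) sin(u)
    absorbs the integrable endpoint singularity."""
    c, s = math.cos(theta0), math.sin(theta0)
    total = 0.0
    for i in range(n):
        u = -0.5 * math.pi + (i + 0.5) * math.pi / n
        x = c * math.sin(u)
        # derivative db/dR by central differences
        dbdR = (b_impact(1.001 * R, x, v0)
                - b_impact(0.999 * R, x, v0)) / (0.002 * R)
        w = 1.0 + math.cos(u) ** 2 * (1.0 / s**2 - 1.0)  # denominator of (39)
        total += x * x * dbdR * b_impact(R, x, v0) ** 2 / w
    return 2.0 * rho_b * v0**2 / (abs(s) * c**3) * total * math.pi / n * c

# Deep low-velocity regime (R v0^2 / (G M_gal) << 1): halving v0
# doubles the torque, i.e. tau scales inversely with v0.
t1 = torque(10 * KPC, 1.0e3, math.radians(45.0))
t2 = torque(10 * KPC, 2.0e3, math.radians(45.0))
print(t1 / t2)   # close to 2
```

The computed integral is positive for all $x$, consistent with the sign discussion after (39).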
A ring will then accrete flow particles over a wider range of impact parameters if the velocity of the beam is lower. In the extreme case of \( v_0 = 0 \) all the particles fall to the centre, so \( R = 0 \) for all cases with a finite impact parameter and there is no divergence \( (R \propto v_0^2 \) from (29)). Although we have integrated the impact parameter \( b \) from zero to infinity, in a real case \( b \) is limited to a finite value. Expression (40) means that at lower flow velocities the disc accretes particles from a greater fraction of the total stream of particles, and this is why extra angular momentum is deposited. Again, as emphasized above, we must note that the angular momentum deposited within a given annulus is due to accretion of particles from a non-axisymmetric volume of space, whose integrated angular momentum is thus non-zero. ### 3.5. Mass accreted by the galaxy The total accretion rate due to this infall is \[ \frac{\mathrm{d}M}{\mathrm{d}t} = \int_0^{2\pi} \mathrm{d}\phi_b \int_0^\infty \mathrm{d}b\ b\, \frac{\mathrm{d}m}{\mathrm{d}t} = \rho_b v_0 \int_0^{2\pi} \mathrm{d}\phi\ \frac{\mathrm{d}\phi_b}{\mathrm{d}\phi} \left[ \int_0^{R_{\text{max}}} \mathrm{d}R\ b(R, \phi) \frac{\partial b(R, \phi)}{\partial R} \right] = \frac{-\rho_b v_0}{|\sin \theta_0| \cos \theta_0} \int_{-\cos \theta_0}^{\cos \theta_0} \mathrm{d}x\ b(R_{\text{max}}, x)^2 \times \frac{1}{\sqrt{1 - x^2 / \cos^2 \theta_0}\, [1 + (1 - x^2 / \cos^2 \theta_0) (\sin^{-2} \theta_0 - 1)]}. \] (41) This rate is positive since the integral is negative \( (b(R, -x) > b(R, x)) \). ### 3.6. Transfer of linear momentum It is important at this stage to consider the transfer of linear momentum. While the transfer of angular momentum produces an integral-sign shape (the \( m = 1 \) component of the galactic warp), as shown above, the transfer of linear momentum produces a cup-shaped deformation of the disc (the \( m = 0 \) component of the galactic warp). The numbers in Sect.
4 will show that this effect is indeed quite small, and the \( m = 1 \) component is predominant. The points of equilibrium between the vertical gravitational force and the vertical force due to the transfer of linear momentum give the deformation of the disc. The vertical linear momentum due to the accretion of a particle of mass \( \mathrm{d}m \) is: \[ p_z = \mathrm{d}m\ v\, \sin(\mathbf{R}_Q, \mathbf{v}) \sqrt{1 - \cos^2 \theta_0 \sin^2 (\phi_0 - \phi)}. \] (42) The vector \( \mathbf{v} \) is the velocity at the impact point of the disc \( (\mathbf{R}_Q) \). The last two factors account for the projection of the velocity onto the vertical axis. It is not easy to visualize the origin of these factors, but they can be understood by reference to Figs. 6 and 7. The radial and azimuthal linear momentum would produce some distortion of the orbits within the disc, which is not the subject of the present paper. From the conservation of angular momentum, we have: \[ |J| = \mathrm{d}m\ v_0\, b = \mathrm{d}m\ v\ R_Q \sin(\mathbf{R}_Q, \mathbf{v}). \] (43) From expressions (42) and (43), the vertical force due to a particle of mass \( \mathrm{d}m \) is: \[ F_z = \frac{\mathrm{d}p_z}{\mathrm{d}t} = \frac{\mathrm{d}m}{\mathrm{d}t}\, \frac{v_0 b}{R_Q} \sqrt{1 - \cos^2 \theta_0 \sin^2(\phi_0 - \phi)}. \] (44) The total vertical force on all the particles of the beam which fall into the ring of radii between \( R \) and \( R + \mathrm{d}R \) is (from now on, \( F_z \) stands for the vertical force per unit galactocentric radial length): \[ F_z(R)\, \mathrm{d}R = \int_0^{2\pi} \mathrm{d}\phi_b \int_{0; \ R < R_Q < R + \mathrm{d}R}^\infty \mathrm{d}b\ b\, \frac{\mathrm{d}m}{\mathrm{d}t}\, \frac{v_0 b}{R_Q} \sqrt{1 - \cos^2 \theta_0 \sin^2(\phi_0 - \phi)} = \frac{2\rho_b v_0^2\, \mathrm{d}R}{|\sin \theta_0| \cos \theta_0} \int_{-\cos \theta_0}^{\cos \theta_0} \mathrm{d}x\, \sqrt{1 + x^2 - \cos^2 \theta_0}\, \frac{\partial b(R, x)}{\partial R} \frac{b(R, x)^2}{R} \times \frac{1}{\sqrt{1 - x^2 / \cos^2 \theta_0}[1 + (1 - x^2 / \cos^2 \theta_0)(\sin^{-2} \theta_0 - 1)]}.
\] (45) This acceleration is compensated by the vertical gravitational acceleration, so: \[ \frac{F_z(R)\, \mathrm{d}R}{2\pi \sigma(R) R\, \mathrm{d}R} \approx \frac{G M_{\text{gal}}(R)}{R^2} \frac{z}{R}, \] (46) which implies, for a mass derived from the rotation curve, that \[ z \approx \frac{F_z(R)\, R}{2\pi v_{\text{rot}}(R)^2 \sigma(R)}. \] (47) This monopolar approximation would be exact if the distribution of mass were spherically symmetric. For the disc, whose contribution to the counter-torque is dominant, the gravitational force is different: \[ \frac{F_z(R)}{2\pi \sigma(R) R} = Gz \int_0^{R_{\text{max}}} \mathrm{d}s\ s\ \sigma(s) \times \int_0^{2\pi} \frac{\mathrm{d}\phi}{[s^2 + R^2 - 2sR \cos \phi + z^2]^{3/2}} \approx Gz \int_0^{R_{\text{max}}} \mathrm{d}s\ s\ \sigma(s) \int_0^{2\pi} \frac{\mathrm{d}\phi}{[s^2 + R^2 - 2sR \cos \phi]^{3/2}}. \] (48) The last approximation holds for \( z \) small compared to \( (R-s) \), which is a good approximation for large \( R \). It is also an approximation in another sense: it does not take into account the distortion of the disc produced by the \( m = 0 \) and \( m = 1 \) components of the warp. In any case, most of the mass is in the inner rings, which are not distorted, so this effect is small. Hence, \( z \) is inversely proportional to the attraction of the disc even in a non-monopolar approximation. The disc contribution is somewhat larger than its monopolar contribution, so \( z \) is lower than (47). This means that expression (47) gives, approximately, an upper limit for the distortion of the disc, i.e. \[ z \leq \frac{F_z(R)\, R}{2\pi v_{\text{rot}}(R)^2 \sigma(R)}. \] (49) This approximation is good enough when the effects of the transfer of linear momentum are nearly independent of those due to the transfer of angular momentum. A more accurate model would refine these approximations and calculate both the \( m = 0 \) and \( m = 1 \) distortions simultaneously.
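As an order-of-magnitude illustration of the vertical force and of the upper limit on the cup distortion, the following Python sketch evaluates $F_z(R)$ numerically and applies (49) at $R = 16$ kpc. The disc parameters (an exponential surface density with $\sigma(R_0 = 8\,\mathrm{kpc}) = 50\, M_\odot\,\mathrm{pc}^{-2}$ and a 3 kpc scale length, flat rotation at 220 km s$^{-1}$) are illustrative stand-ins for the disc model of (15)-(16), which is not reproduced here; the flow values are those used in Sect. 4:

```python
import math

# Order-of-magnitude sketch of the cup-shaped (m = 0) distortion via
# Eqs. (45) and (49).  Disc parameters below are illustrative stand-ins
# for Eqs. (15)-(16); flow values are those adopted in Sect. 4.
G = 6.674e-11
MSUN = 1.989e30
KPC = 3.086e19
PC = 3.086e16
GM = G * 2.0e11 * MSUN   # G M_gal for M_gal = 2e11 Msun

def b_impact(R, x, v0):
    # Impact parameter b(R, x) of the hyperbolic orbit, x = e_0Q
    return (0.5 * R * math.sqrt(1.0 - x * x)
            + math.sqrt(0.25 * R * R * (1.0 - x * x)
                        + R * GM / v0**2 * (1.0 - x)))

def f_vertical(R, v0, theta0, rho_b, n=4000):
    # F_z(R) per unit radial length, Eq. (45); the substitution
    # x = cos(theta0) sin(u) regularizes the endpoint singularity
    c, s = math.cos(theta0), math.sin(theta0)
    total = 0.0
    for i in range(n):
        u = -0.5 * math.pi + (i + 0.5) * math.pi / n
        x = c * math.sin(u)
        dbdR = (b_impact(1.001 * R, x, v0)
                - b_impact(0.999 * R, x, v0)) / (0.002 * R)
        w = 1.0 + math.cos(u) ** 2 * (1.0 / s**2 - 1.0)
        total += (math.sqrt(1.0 + x * x - c * c)
                  * dbdR * b_impact(R, x, v0) ** 2 / (R * w))
    return 2.0 * rho_b * v0**2 / (abs(s) * c) * total * math.pi / n * c

R = 16 * KPC
sigma = 50.0 * MSUN / PC**2 * math.exp(-(R - 8 * KPC) / (3 * KPC))
v_rot = 2.2e5   # flat rotation curve, 220 km/s (illustrative)
Fz = f_vertical(R, 1.0e5, math.radians(45.0), 1.1e-25)
z_max_kpc = Fz * R / (2.0 * math.pi * v_rot**2 * sigma) / KPC  # Eq. (49)
print(z_max_kpc)   # a fraction of a kpc
```

Under these assumptions the cup amplitude comes out as a fraction of a kiloparsec, well below the $\sim 2$ kpc scale of the $m = 1$ warp.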
We will see later that the effect of the transfer of linear momentum is small compared with that of the transfer of angular momentum; therefore, we do not need to make these more refined calculations for this effect, but only for the integral-sign warp, which is dominant. Is it counterintuitive to have an \( m = 1 \) deformation larger than the \( m = 0 \) deformation? We can argue that it is not. Imagine a galaxy in which practically all the mass is in the centre, so that the disc has negligible mass. How large is the counter-torque of the galaxy that compensates the external torque? The answer is: zero (or nearly zero), because only the dipolar and higher order components of the gravitational attraction produce a counter-torque; the monopolar component produces none. In this case, what will be the equilibrium condition in which the torque is compensated by the counter-torques of the disc? It is an "\( m = 1 \) deformation of the disc" (i.e. a warp) whose amplitude tends to infinity as the mass of the disc tends to zero. In fact the limiting case will not be at infinity, since we are talking about an angular amplitude: the ring will take the angle of incidence of the wind as its maximum limit. So what happens to the "\( m = 0 \) deformation"? This deformation is nothing like as strongly dependent on the distribution of the mass, and there is a net finite counter-force of the galaxy to compensate the external force; the amplitude of the \( m = 0 \) cup-shaped distortion is thus finite. Therefore, we have a very high amplitude for \( m = 1 \) while the amplitude of \( m = 0 \) is much smaller. If the density of the wind tends to zero, the amplitude of the \( m = 0 \) distortion tends to zero, but the amplitude of the \( m = 1 \) distortion still tends to take up the incidence angle of the wind.
We conclude that this is a very clear case in which the \( m = 1 \) distortion amplitude is much bigger than the \( m = 0 \) amplitude; it is therefore possible to have an \( m = 1 \) deformation larger than the \( m = 0 \) deformation. ### 3.7. Direction of the warp One relevant aspect of the present hypothesis is that it allows us to relate the direction of the warp to the direction of the inflow, \( v_0 \), with respect to the centre of the galaxy. The torque between two rings, \( \mathrm{d}\tau_{2}(R) \, \mathrm{d}R \, \mathrm{d}S \), is proportional to \( (\sin \phi_{0}\, i - \cos \phi_{0}\, j) \). This can be demonstrated by use of Eq. (5), or by a simple analogy with the gravitational torque of one particle (Sect. 2.2); the second ring is equivalent to an integration of the particle position over the second ring's azimuthal angle. Likewise, the external torque is proportional to \( (\sin \phi_{0}\, i - \cos \phi_{0}\, j) \). This means that the warp direction is the same as that of the projection of \( v_{0} \) onto the disc, and the precession is around the axis parallel to \( v_{0} \). The amplitude of the collisional torque, (39), is larger for larger \( R \), since \( b^{2} \frac{\partial b}{\partial R} \) increases with \( R \), so the outer rings have an excess angular velocity of precession compared to the average. However, the outer rings receive a torque from the inner rings, which is proportional to the collisional torque when \( \phi_{P} = \phi_{0} \), where \( \phi_{P} \) is the azimuth of the maximum height above the plane of both inner and outer rings (i.e. the direction of the warp), but must act in the opposite direction to counteract the above excess. Hence, the outer rings should be oriented towards positive \( z \) if \( \theta_{0} \) is positive, or towards negative \( z \) if \( \theta_{0} \) is negative. From the above considerations, we deduce that the warp is oriented in the direction parallel to the projection of \( \mathbf{e}_{0} \) onto the disc.
The azimuthal angles of the maximum and minimum heights (\( z \)) of the warp are \( \phi_{0} \) and \( \phi_{0} + \pi \) respectively if \( \theta_{0} > 0 \). If \( \theta_{0} < 0 \), the maximum height is at azimuth \( \phi_{0} + \pi \) while the minimum is at \( \phi_{0} \). Figure 9 shows a graphical representation of these orientations in the plane which contains the vector \( v_{0} \) and is perpendicular to the disc. Note that the orientation of the warp does not change if the velocity is \( -v_{0} \) instead of \( v_{0} \). 4. Warp in a typical spiral galaxy due to intergalactic accretion flows The proposed hypothesis for the formation of warps can be compared with our observational knowledge of galaxies and the intergalactic medium. We must bear in mind that the calculations developed in this paper are based on fairly crude approximations: we have considered the particle trajectories as hyperbolic, neglecting the gravitational effects of the extended disc; we have assumed a collimated infinite beam, a constant density of the intergalactic flow, etc. Therefore, we should not expect the theory to reproduce exactly all the fine details of a warp. However, we will show that our model reproduces the observed warps quite well using a standard model of the disc of a typical spiral galaxy (for instance, the Milky Way) and entirely reasonable parameters for the intergalactic flow. Our restricted goal here is to offer a general model for warps (a warp amplitude \( \alpha(R) \) which is close to zero out to some specific \( R \), but rises rapidly at larger \( R \)), and to test it using the parameters of a typical spiral galaxy such as the Milky Way. 4.1. Amplitude of the warp The solution of (12) by means of the numerical method explained in Appendix A gives \( \alpha(R) \) for the galactic warp. It will depend on the adopted galactic model.
Our purpose is to demonstrate that the order of magnitude of the amplitude of the warp matches the observed amplitude, so we use only one model. Varying the parameters of the disc will, of course, lead to different results, but these would not be qualitatively very different. The surface density of the disc we adopt is given by (15) and the rotation velocity by (16), both corresponding to the Milky Way. In the linear regime, for a low amplitude of the warp and low velocity \( v_{0} \), in the limits (40) and (6), the proportionality followed is \( \alpha(R) \propto \omega_{p} \propto \frac{\rho_{b} M_{\text{gal}}^{3/2}}{v_{0}} \), in which the dependence of \( \alpha(R) \) on \( R \) is not specified. Specifically, for the adopted disc model and \( |\theta_{0}| = 45^\circ \), we obtain in this linear regime that the angle of the warp is \( (\Omega_{b} = 0.02 \, h^{-2}; \) Schramm & Turner 1998) \[ \alpha(2R_{0}) \approx 5.2 \times 10^{-34} \frac{\rho_{b}\, (\text{kg/m}^3)}{v_{0}\, (\text{m/s})} \left( \frac{M_{\text{gal}}}{10^{11} \, M_{\odot}} \right)^{3/2} \text{rad} = 1.7 \times 10^{-4} \frac{\rho_{b}}{\Omega_{b} \rho_{\text{crit}}} \left( \frac{M_{\text{gal}}}{10^{11} \, M_{\odot}} \right)^{3/2} \frac{100 \, \text{km s}^{-1}}{v_{0}} \text{rad}, \] (50) and the precession angular velocity is \[ \omega_{p} \approx 7.7 \times 10^{-21} \frac{\rho_{b}}{\Omega_{b} \rho_{\text{crit}}} \left( \frac{M_{\text{gal}}}{10^{11} \, M_{\odot}} \right)^{3/2} \frac{100 \, \text{km s}^{-1}}{v_{0}} \text{rad/s}.
\] (51) Burton (1988) gives an observational value of \( \alpha(2R_{0}) \approx 0.14 \) rad in our Galaxy which, if we take the value at \( 2R_{0} \) to normalize the amplitude of the warp, leads to \[ \rho_{b} \approx 820 \, \Omega_{b} \rho_{\text{crit}} \left( \frac{v_{0}}{100 \, \text{km s}^{-1}} \right) \left( \frac{M_{\text{gal}}}{10^{11} \, M_{\odot}} \right)^{-3/2}, \] (52) \[ \omega_{p} \approx 6.2 \times 10^{-18}\, \text{rad/s} = 1 \text{ cycle in 32 Gyr}. \] (53) The precession is far too slow to be significant, and much slower than the rotation of the Galaxy. We adopt a total mass of the Galaxy of $M_{\text{gal}} = 2 \times 10^{11} M_\odot$ (Honma & Sofue 1996) to calculate the curvature of the hyperbolic trajectories. This estimate includes the bulge, disc and spheroidal component masses. Hence, $$\rho_b \approx 290\, \Omega_b\, \rho_{\text{crit}} \left( \frac{v_0}{100 \text{ km s}^{-1}} \right).$$ (54) The only free parameters with observational uncertainties are the mean density of the intergalactic inflow and its velocity, which are related by the last equation. Assuming a typical velocity, in galactocentric coordinates, of $v_0 \sim 100 \text{ km s}^{-1}$ (the limit of low velocity adopted in (40) is valid if $R$ is much less than 300 or 400 kpc, which is certainly the case since we take $R \leq 16$ kpc), we find: $$\rho_b \sim 290\, \Omega_b\, \rho_{\text{crit}} = 1.1 \times 10^{-25} \text{ kg/m}^3,$$ (55) which is equivalent to $$n_{\text{HI}} \sim 6 \times 10^{-5} f \text{ cm}^{-3},$$ (56) where $f$ is the fraction of HI in the total baryonic mass of the flow. This value coincides with the most probable value estimated by Kahn & Woltjer (1959) for the mean density of intergalactic matter yielding dynamical stability for the Local Group of galaxies. López-Corredoira et al.
(1999) also calculated a total intergalactic mass in the Local Group of around $2 \times 10^{12} M_\odot$, which in a volume of $\sim 1$ Mpc$^3$ gives a mean density of around $10^{-25} \text{ kg/m}^3$. The shape of $\alpha(R) \approx |z|/R$ for these values would be that given in Fig. 10. The prediction of this model agrees with many of the features of the observed warp (Burton 1988, 1992): it predicts a flat disc which is not significantly warped for values of $R$ less than $\sim 1.3 R_0 = 10$ kpc, and a warp whose amplitude increases rapidly at larger radii. The shape of the warp depends on the detailed radial variation of $\sigma(R)$, and we have used a rough exponential estimate of this dependence, which in reality is more complex than a simple exponential function with a constant scale length. In any case, it is not at this stage worth using a more realistic model, since our approach is only an approximation, given the simplifications we have employed in the analytical calculation of the torque. The functional shape of $\alpha(R)$ resembles that of the observational data for values of $R$ less than $\sim 16$ kpc from the centre. Nonlinear effects have a significant influence at larger radii, as do departures from the simple ring model of the disc. Nevertheless, we should note that the dependence of $\tau_{\text{ext}}$ on $R$ does not greatly affect the form of $\alpha(R)$, and other external torques may also produce similar shapes, as we have seen in Figs. 3 and 4. The maximum distortion of the disc due to the transfer of linear momentum (Sect. 3.6) is calculated using expression (49). The results are also plotted in Fig. 10. In this case, $z$ is always positive or always negative: a cup shape rather than an integral-sign shape. From the numbers plotted in this figure, it can be concluded that this distortion is small compared to that produced by the torque, i.e. the integral-sign shape will be predominant in the galactic disc.
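The unit bookkeeping behind (52)-(56) can be verified in a few lines. In this sketch (Python; $\rho_{\text{crit}}$ is computed for $H_0 = 100\, h$ km s$^{-1}$ Mpc$^{-1}$, and the $h$-dependence cancels against $\Omega_b = 0.02\, h^{-2}$), the quoted values $\rho_b \approx 1.1 \times 10^{-25}$ kg m$^{-3}$, $n_{\rm HI} \sim 6 \times 10^{-5} f$ cm$^{-3}$ and the 32 Gyr precession period are recovered:

```python
import math

# Arithmetic behind Eqs. (52)-(56): the baryon density needed to warp
# the Milky Way, its equivalent hydrogen number density, and the
# precession period implied by Eq. (53).
G = 6.674e-11
H0 = 100.0e3 / 3.086e22                        # 100 h km/s/Mpc in s^-1 (h = 1)
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # ~1.9e-26 h^2 kg/m^3

# Eq. (52) evaluated at M_gal = 2e11 Msun: 820 / 2**1.5 ~ 290 (Eq. 54)
omega_b = 0.02                                 # Omega_b = 0.02 h^-2; h cancels
rho_b = (820.0 / 2.0**1.5) * omega_b * rho_crit   # Eq. (55)

m_H = 1.67e-27                                 # hydrogen mass, kg
n_HI = rho_b / m_H * 1.0e-6                    # cm^-3, to be multiplied by f

period_gyr = 2.0 * math.pi / 6.2e-18 / (3.156e7 * 1.0e9)   # Eq. (53)
print(rho_b, n_HI, period_gyr)
# rho_b ~ 1.1e-25 kg/m^3, n_HI ~ 6e-5 cm^-3, period ~ 32 Gyr
```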
We think the predominance of torque effects is due to the smaller mass concentrated in the disc in comparison with the total mass of the Galaxy: the disc is mostly responsible for the counter-torques which reduce the height of the $m = 1$ warp, while the whole mass of the Galaxy produces the counter-forces which reduce the $m = 0$ distortion. In some galaxies, a mixture of cup shape and integral-sign shape could be produced by this mechanism. This would be a possible explanation for the asymmetries between the two sides of the warp, even for our own Galaxy. It will depend, among other factors, on the angle $\theta_0$. For $\theta_0 = 90^\circ$, the transfer of vertical linear momentum would be maximum while the torque would be zero. Therefore, there is a probability, although small, of finding cup-shaped distortions of galactic discs rather than integral-sign warps. In Fig. 11, we show that the cup-shaped distortion is predominant for $|\theta_0| > \sim 85^\circ$, which translates into a probability that a cup shape is dominant of $< \sim 4 \times 10^{-3}$ for spiral galaxies like the Milky Way. It would be of interest to make a statistical search over a large sample of warped galaxies to see how many present an appreciable cup-shaped distortion. The study of the fraction of galaxies which present irregularities within the integral-sign warp would also give some clue about the validity of this hypothesis. Near the limit $\theta_0 = 0$ the amplitude of the warp goes to zero quickly. Although Fig. 11 apparently shows a vertical tangent in the limit $\theta_0 = 0$, that is not the case: there is a horizontal tangent, since the limit of (39) when $\theta_0$ is very small is proportional to $\theta_0^2$. However, this limit manifests itself only at very small angles, and this is the reason for the apparent vertical tangent in Fig. 11.
Fig. 11. Milky Way warp maximum height as a function of the inclination of the incoming flow with respect to the galactic plane ($\theta_0$) in the hypothesis of continuous accretion of intergalactic matter: $m = 1$ (integral-sign warp due to a torque), solid line; $m = 0$ (cup-shaped distortion due to a force), dashed line.

Indeed, the values are: $|z|(\theta_0 = 10^\circ) = 2.45$ kpc, $|z|(\theta_0 = 5^\circ) = 2.24$ kpc, $|z|(\theta_0 = 2^\circ) = 1.79$ kpc, $|z|(\theta_0 = 1^\circ) = 1.24$ kpc, $|z|(\theta_0 = 0.5^\circ) = 0.59$ kpc, $|z|(\theta_0 = 0.25^\circ) = 0.19$ kpc, $|z|(\theta_0 = 0.1^\circ) = 0.034$ kpc, ..., $|z|(\theta_0 = 0) = 0$ kpc. We can add some further comments about the stellar warp. In the Milky Way, the OB stars follow the gas (Porcel & Battaner 1995). But it is not just the young population, recently formed from the warped gas, which shows a stellar warp, but the whole population of stars (Porcel et al. 1997; Dehnen 1998). The whole population of stars of the Milky Way projected onto the sky appears less deviated from the plane than the projected gas, but this does not necessarily represent a difference between the amplitudes of the gaseous and stellar warps. Rather, it may well be due to the cut-off of the stellar population at a smaller Galactocentric radius than the gas, around 15 kpc (Porcel et al. 1997). The stellar disc is clearly warped, perhaps somewhat less than the gaseous disc, although not obviously so. As pointed out in Sect. 3.3, if the stellar disc were demonstrated to be less warped than the gaseous disc, it would be evidence supporting either this theory or the theory of the intergalactic magnetic field as the generator of the warp (Porcel et al. 1997). In fact, this would be predicted by any theory in which the external torque directly affects the gas disc and only indirectly the stellar disc. 4.2. Possible scenarios Possible scenarios for the intergalactic flow postulated here are: 1.
The galaxy is passing through a continuous intergalactic medium, i.e. the velocity of the flow is due to the relative motion of the galaxy with respect to the rest frame of the intergalactic medium. In this case, the flow may be well approximated by a beam of infinite extent. This may be a representative scenario for most galaxies, and could explain why most spiral galaxies are warped. Asymmetries in the warp could also be explained in terms of a transfer of both linear and angular momentum, although the cases where cup-shaped distortions dominate have low probability. We think that this first scenario is the most plausible explanation. 2. The galaxy has an interacting companion and some exchange of material is produced by tidal effects. The stream of material goes from the companion to the main galaxy, and this flow could also generate a warp. The direction of the wind would not be constant, because the companion is orbiting around the main galaxy, so the warp would not be steady. This scenario would give asymmetries in the warp, apart from those coming from the $m = 0$ component, which would be due to the non-symmetric form of the infalling material, with finite impact parameter, or clouds which intersect the centre of the galaxy in a non-axisymmetric way. Reshetnikov & Combes (1998) find some correlation between the frequency of warps and the interaction with other galaxies. An accretion inflow due to the exchange of material with a companion might appear to be undermined by observations of warped galaxies which are apparently isolated. However, this objection is itself being challenged by the discovery of companions to galaxies which had been considered classical examples of isolated galaxies (for instance, NGC 5907; Shang et al. 1998). While the present paper was in the refereeing process, a work by García-Ruiz (2001) was published: an interesting study which analyzes in detail 26 edge-on galaxies at radio and optical wavelengths.
García-Ruiz (2001) finds that 20 galaxies are warped, two of which present a U-warp, and seven a warp on only one side. These asymmetric cases can be explained by the present theory as a combination of the $m = 0$ and $m = 1$ distortions, and it is interesting to note that up to now there is no alternative explanation for the U-warps or the asymmetric warps. He also found that the frequency of warps and their amplitude depend on environment. It is even more interesting to note that the most isolated galaxies are more frequently warped (although with smaller amplitude). "While this suggests that tidal interaction plays a role in warping, it seems likely that there are other effects at work that cause even quite isolated galaxies to warp." (García-Ruiz 2001). It seems clear that warping is due to something related to the environment rather than to the intrinsic properties of the galaxies, and to something which is not related to the proximity of other galaxies. The accretion of intergalactic matter onto the disc seems a good candidate to explain these observational facts. The density of the intergalactic medium is very low on average, making it difficult to detect. However, it could well be that the High Velocity Clouds (HVCs; see the reviews in Wakker & van Woerden 1997; Wakker et al. 1999b) are part or all of this material falling towards the Galactic disc from all directions. If these do produce a net torque, it must be because there is a net galactocentric average velocity of the complete set of HVCs, including the Magellanic Stream, the complexes, and the HVCs associated with more distant clouds infalling towards the Local Group barycentre (Blitz et al. 1999; Braun & Burton 1999; López-Corredoira et al. 1999). The tidal stream from the Sagittarius dwarf galaxy (Ibata et al. 2001) might also take part.
An average hydrogen density for the flow of \( n_{\text{HI}} \sim 6 \times 10^{-5} f \, \text{cm}^{-3} \) is in good agreement with the average density of an HVC, \( \langle n_{\text{HI}} \rangle \sim 10^{-4} - 10^{-1} \, \text{cm}^{-3} \) (Blitz et al. 1999; Wakker et al. 1999a), including the value of \( f \), which may be as low as 0.02 (Braun & Burton 2000) if the matter of the flow is baryonic. The mean density in the intergalactic clouds may be enough to create the Galactic warp. If the density of infalling matter were equal to that of an individual HVC, we would have a density higher than required by several orders of magnitude. However, we know that the intergalactic medium is not filled by HVCs; these represent a low-volume, high-density fraction, and averaging over the complete medium with a plausible filling factor can yield a net hydrogen density of around \( n_{\text{HI}} \sim 6 \times 10^{-5} f \, \text{cm}^{-3} \). The degree of clumpiness of the intergalactic medium is not well known. However, it is not important whether the flow is continuous or discretized in clouds: the warp will be produced by the average infall. Short-term fluctuations of the infalling density do not appreciably affect the warp, since the forces responsible for it have a very low amplitude and require a long time to produce or distort the warp. Using expression (41), we can derive a total accretion rate onto the Galactic disc out to \( R_{\text{max}} = 15 \, \text{kpc} \) of \( \sim 1 \, M_\odot/\text{yr} \) for this density. This turns out to be of the order of the accretion rate required to resolve the G-dwarf problem in our Galaxy, as well as to explain a number of phenomena of chemical evolution which require the long-term infall of low-metallicity gas (López-Corredoira et al. 1999; Wakker et al. 1999a).
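The $\sim 1\, M_\odot/\mathrm{yr}$ figure can be reproduced by a direct numerical evaluation of (41). The sketch below (Python; $\theta_0 = 45^\circ$ is an illustrative inclination, and $b(R, x)$ is the expression derived from (29)) uses the Sect. 4 values $\rho_b = 1.1 \times 10^{-25}$ kg m$^{-3}$, $v_0 = 100$ km s$^{-1}$ and $R_{\text{max}} = 15$ kpc:

```python
import math

# Numerical evaluation of the accretion rate, Eq. (41), with the
# Sect. 4 values: rho_b = 1.1e-25 kg/m^3, v0 = 100 km/s, R_max = 15 kpc,
# M_gal = 2e11 Msun.  theta_0 = 45 deg is an illustrative inclination.
G = 6.674e-11
MSUN = 1.989e30
KPC = 3.086e19
YR = 3.156e7
GM = G * 2.0e11 * MSUN

def b_impact(R, x, v0):
    # Impact parameter b(R, x) of the hyperbolic orbit, x = e_0Q
    return (0.5 * R * math.sqrt(1.0 - x * x)
            + math.sqrt(0.25 * R * R * (1.0 - x * x)
                        + R * GM / v0**2 * (1.0 - x)))

def accretion_rate(R_max, v0, theta0, rho_b, n=4000):
    # |dM/dt| from Eq. (41); the substitution x = cos(theta0) sin(u)
    # removes the endpoint singularity of the integrand
    c, s = math.cos(theta0), math.sin(theta0)
    total = 0.0
    for i in range(n):
        u = -0.5 * math.pi + (i + 0.5) * math.pi / n
        x = c * math.sin(u)
        w = 1.0 + math.cos(u) ** 2 * (1.0 / s**2 - 1.0)
        total += b_impact(R_max, x, v0) ** 2 / w
    return rho_b * v0 / (abs(s) * c) * total * math.pi / n * c

rate = accretion_rate(15 * KPC, 1.0e5, math.radians(45.0), 1.1e-25)
print(rate * YR / MSUN)   # of order 1 Msun/yr
```

The asymmetry of the integrand ($b(R, -x) > b(R, x)$) is also visible numerically: most of the accreted mass arrives on one side of the node line.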
The infall of one solar mass per year is enough to produce the warp because: 1) the external disc has a very low density, so small forces produce considerable accelerations; 2) the acceleration may be very small in amplitude, but the period of time available to produce the warp is long enough (of the order of Gyr) to distort the galactic disc (in 1 Gyr, \( 10^9 \) solar masses are accreted, which is a significant quantity of accreted mass). This hypothesis could, in a general way, explain the possible alignments of the warps of neighbouring galaxies (Battaner et al. 1991) if the flow velocity is similar around such galaxies within the same zone of the intergalactic medium. Whatever the structure and composition of the intergalactic medium, it is clear that intergalactic space is by no means empty, and the accretion of this material by galaxies is likely to have been continuous during their lifetimes. The effects of this accretion can be detected in their chemical evolution as well as in their structure, as pointed out above. ### 5. The effects of a very massive halo Although we have already discussed the effects of the halo, we clarify here how the halo is used, as well as the effects which a very massive halo could produce. We may infer from the present calculations that we have found a mechanism which could explain both qualitatively and quantitatively the generation of warps in normal spiral galaxies: the interaction between the disc and the infalling matter, together with the interaction of the disc with itself (rings interacting with other rings). The dark halo is included, although it is not explicit in some calculations of this paper, but its effect is not very important for a rough calculation in which we are interested in the order of magnitude. In fact, our calculations are not rough but almost exact; the roughness of the numbers obtained is due to the uncertainties in the different parameters.
Briefly, these are the effects which are treated here, and others which have not been treated but require treatment in future papers: - The dark halo is present in the total mass of the Galaxy, \( M_{\text{gal}} \). The assumed value of the Milky Way mass of \( 2 \times 10^{11} \, M_\odot \) within a radius \( R \sim 20 \, \text{kpc} \) from the centre includes the halo, some of whose matter is not visible. The effects of the halo are also implicitly included in the calculation of \( M_{\text{gal}}(R) \) as a function of \( v_{\text{rot}}(R) \) from the rotation curves. - The interaction of the halo with the warp is not considered, but we have made estimates which show that in this case the effect is small compared to the counter-torque of the disc. Therefore, we feel that the exclusion of halo effects from the counter-torques is a fair approximation if we are interested only in finding the order of magnitude of the warp amplitude. The counter-torques of the halo are less important than those of the disc because the halo is much closer to sphericity. We showed in Sect. 2 that the contribution of the halo counter-torque for \( R < 16 \, \text{kpc} \) is less than \( \sim 40\% \) of the disc contribution. The uncertainties in the halo mass distribution lead to an uncertainty similar to this value, and we did not feel that much would be gained by adding the extra complexity. We know that the warp would be reduced by this effect, but this reduction is not the dominant term. - A very massive and very extended halo would introduce some variations in the numbers we derived, although the mechanism would work qualitatively in a similar way. First, if it is very massive, \( M_{\text{gal}} \) would be larger and the amplitude of the warp would be correspondingly larger. We have shown that the amplitude of the warp is proportional to \( M_{\text{gal}}^{3/2} \). This, however, was on the assumption that most of the mass is concentrated within a radius of less than $\sim 20$ kpc.
If the halo is very extended, this approximation is not appropriate. We would then need to consider the mass distribution of the halo, instead of assuming a point mass, to calculate the infall velocity of the clouds. The problem would be much more complex because the gravitational potential would differ from $\phi \propto 1/r$ and the trajectories would no longer be hyperbolae. The mechanism we propose to form warps works with any potential, but the calculations are of course much easier for a $\phi \propto 1/r$ law. Since we are interested in proposing a new mechanism and showing how it works, we think that these complexities should be left for a future paper. At present, we can say that a very massive halo would increase the amplitude of the warp, not proportionally to $M_{\text{gal}}^{3/2}$ but to some other power with exponent less than $3/2$. This is because the infall velocity of the clouds is increased, as is the curvature of the trajectories. It is important to note that a very extended halo would not differ from a halo confined within $R < 20$ kpc in the counter-torques produced on the warp, because only the mass in the ellipsoids interior to the radius of the warp produces a gravitational torque, assuming a halo of constant ellipticity. To summarize, with a very massive halo our mechanism does operate, in fact even more effectively, because a lower intergalactic density would suffice to produce the same warp amplitude.
- If the halo axis were displaced with respect to the disc axis, the disc would be pinched by the halo. This is the case studied by Ostriker & Binney (1989), Binney et al. (1998) and Jiang & Binney (1999), and is indeed a mechanism which can give rise to warps too. The scenario presented by these authors is different from the one proposed here: they assume the infall of intergalactic matter onto the halo rather than onto the disc, which produces motion of the halo with respect to the disc.
We do not seek to challenge this in the present paper, but rather to present a possible alternative, which might be complementary. The compatibility of the two mechanisms certainly merits further study. There is no doubt that accretion onto a halo whose rotation axis is offset from the disc axis will give rise to a warp; if one accepts the possibility of a low density halo accreting intergalactic matter, one should recognize the possibility of warps arising by this mechanism. We have shown here how accretion directly onto the disc can also yield a warp, with parameters in the observed range. At the present stage of understanding of the problem, we think that either or both mechanisms can act to produce warps, if one accepts that accretion of matter by the disc and by the halo are equally plausible. Our opinion is that our mechanism is preferable, since accretion of matter by the disc is more plausible than accretion by the halo, but this point is open for discussion.

### 6. Conclusions

We propose that galactic warps are produced by the reorientation of the galactic disc structure to compensate for the differential precession due to a torque generated by an external force. The external force might be the gravitational interaction with a satellite but, in the case of the Milky Way with the Magellanic Clouds as the satellites, there is not enough mass close enough to provide the observed amplitude of the warp. Magnetic forces could also produce the warp, but the intergalactic field would then have to be of the order of $\mu$G. A simple model of an intergalactic accretion flow which intersects a galactic disc (or, equivalently, of the galaxy moving through the intergalactic medium) can explain the existence of warps in the galaxy if the mean density of baryonic matter in the medium is around $10^{-25}$ kg/m$^3$ and the infall velocity at large distance is $\sim 100$ km s$^{-1}$.
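The quoted density and infall velocity can be checked against the $\sim 1 \, M_\odot$/yr accretion rate by a simple mass-flux estimate. The sketch below is illustrative and not taken from the paper; it counts only geometric capture over the disc cross-section, whereas the paper's hyperbolic trajectories add gravitational focusing on top of this:

```python
# Illustrative order-of-magnitude check (not the paper's own calculation):
# mass flux of an intergalactic flow of density rho ~ 1e-25 kg/m^3 crossing
# a disc of radius ~20 kpc at v ~ 100 km/s.  Only geometric capture is
# counted; gravitational focusing would raise the rate further.
import math

M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m
YEAR = 3.156e7     # s

rho = 1e-25        # kg/m^3, assumed mean density of the flow
v = 100e3          # m/s, assumed infall velocity at large distance
R_disc = 20 * KPC  # disc radius used as the capture cross-section

area = math.pi * R_disc**2          # geometric cross-section, m^2
mdot = rho * v * area               # accretion rate, kg/s
mdot_msun_yr = mdot * YEAR / M_SUN  # in solar masses per year

print(f"geometric accretion rate ~ {mdot_msun_yr:.2f} M_sun/yr")
```

With these numbers the geometric rate alone is already a few tenths of a solar mass per year, so only modest gravitational focusing is needed to reach the quoted $\sim 1 \, M_\odot$/yr.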
This hypothetical low density flow is a very reasonable physical assumption and would explain why most spiral galaxies are warped. Accretion due to such a flow is in good accord with observations of the chemical evolution of the Milky Way, contributing $\sim 1 \, M_\odot$/yr of low metallicity gas to the disc. The High Velocity Clouds, which are presumably the accretable material in the Local Group galaxies (Blitz et al. 1999; Braun & Burton 1999; López-Corredoira et al. 1999; Wakker et al. 1999a), are candidates for a significant fraction of the material which fills intergalactic space and is accreted. Neither a massive halo nor high values of the magnetic field are necessary, although the presence of these elements would not qualitatively modify the present conclusions. No calculations in the framework of accretion onto the disc are given for a very massive halo but, as explained in Sect. 5, such a halo would not qualitatively affect the present mechanism. Models with a very massive halo could be built, and different numerical results would be obtained depending on the halo parameters, but no major qualitative changes are expected relative to the model presented here, since the formation of warps is dominated by the interaction with the disc. Only quantitative changes would follow from the increased velocity and trajectory curvature of the infalling material, which would increase the amplitude of the warp; i.e., the same warp amplitude would be obtained with a density even lower than $10^{-25}$ kg/m$^3$. Several mechanisms can generate warps but, among them, accretion of an intergalactic flow seems to offer a very plausible scenario: it is quantitatively consistent with many observations and works independently of other ingredients of galaxies and their structure.

Acknowledgements. We particularly appreciate the comments of the referees, E. Battaner and the anonymous referee, and of J. J. Binney and I.
Shlosman, whose detailed questions have helped us to explain more clearly some important technical points. This work has been supported by grant PB97-0219 of the Spanish DGES.

Appendix A: Numerical calculation of $\alpha(R)$

The solution of $\frac{d\omega_p(R)}{dR}[\alpha(R)] = 0$, from (12), can be obtained numerically with $N$ discrete values of $R$, i.e.
$$H(\alpha) = 0,$$
$$\alpha = (\alpha_1, ..., \alpha_N); \quad H = (H_1, ..., H_N),$$
$$\alpha_i \equiv \alpha(R_i); \quad H_i \equiv \frac{d\omega_p}{dR}(R_i).$$
The Newton-Raphson iterative method is appropriate for this kind of numerical calculation. Iteration $k + 1$ is given by
$$\alpha^{k+1} = \alpha^k - W^{-1}(\alpha^k)H(\alpha^k),$$
$$W = \begin{pmatrix} W_{11} & \cdots & W_{1N} \\ \vdots & \ddots & \vdots \\ W_{N1} & \cdots & W_{NN} \end{pmatrix}; \quad W_{ij} = \frac{\partial H_i}{\partial \alpha_j}.$$
By means of this scheme, we can calculate $\alpha(R)$. The first iteration is taken with $\alpha(R) = |\theta_0|$.

References

Bahcall, J. N., & Soneira, R. M. 1980, ApJS, 44, 73
Battaner, E., Florido, E., & Sánchez-Saavedra, M. L. 1990, A&A, 236, 1
Battaner, E., Garrido, J. L., Membrado, M., & Florido, E. 1992, Nature, 360, 652
Battaner, E., Garrido, J. L., Sánchez-Saavedra, M. L., & Florido, E. 1991, A&A, 251, 402
Battaner, E., & Jiménez-Vicente, J. 1998, A&A, 332, 809
Binney, J. J. 1991, in Dynamics of Disc Galaxies, ed. B. Sundelius (Göteborg: Chalmers Univ., Dep. Astronomy), 297
Binney, J. J. 1992, ARA&A, 30, 51
Binney, J. J. 2000, in Dynamics of Galaxies: from the Early Universe to the Present, ASP Conf. Ser., 197, ed. F. Combes, G. A. Mamon, & V. Charmandaris (San Francisco: ASP), 107
Binney, J. J., Jiang, I.-G., & Dutta, S. N. 1998, MNRAS, 297, 1237
Binney, J. J., & May, A. 1986, MNRAS, 218, 743
Blitz, L., Spergel, D., Teuben, P., Hartmann, D., & Burton, W. B. 1999, ApJ, 514, 818
Braun, R., & Burton, W. B. 1999, A&A, 341, 437
Braun, R., & Burton, W. B. 2000, A&A, 354, 853
Briggs, F.
H. 1990, ApJ, 352, 15
Burton, W. B. 1988, in Galactic and Extragalactic Radio Astronomy, ed. K. I. Kellermann, & G. L. Verschuur (Berlin: Springer-Verlag), 295
Burton, W. B. 1992, in The Galactic Interstellar Medium, ed. D. Pfenniger, & P. Bartholdi (Berlin: Springer-Verlag), 126
Casuso, E., & Beckman, J. E. 1997, ApJ, 475, 155
Casuso, E., & Beckman, J. E. 2000, PASP, 112, 942
Debattista, V., & Sellwood, J. 1999, ApJ, 513, L107
Dehnen, W. 1998, AJ, 115, 2384
Evans, N. W. 2001, in IDM 2000: Third International Workshop on the Identification of Dark Matter, ed. N. Spooner (Singapore: World Scientific), in press [astro-ph/0102082]
García-Ruiz, I., Kuijken, K., & Dubinski, J. 2000, MNRAS, submitted [astro-ph/0002057]
García-Ruiz, I. 2001, Ph.D. Thesis, Univ. Groningen (The Netherlands)
Honma, M., & Sofue, Y. 1996, PASJ, 48, L103
Hunter, C., & Toomre, A. 1969, ApJ, 155, 747
Ibata, R. A., Irwin, M., Lewis, G., & Stolte, A. 2001, ApJ, 547, L133
Ibata, R. A., & Razoumov, A. O. 1998, A&A, 336, 130
Ideta, M., Hozumi, S., Tsuchiya, T., & Takizawa, M. 2000, MNRAS, 311, 733
Jiang, I.-G., & Binney, J. 1999, MNRAS, 303, L7
Kahn, F. D., & Woltjer, L. 1959, ApJ, 130, 705
Kronberg, P. P. 1994, Rep. Prog. Phys., 57, 325
Kuijken, K., & Dubinski, J. 1995, MNRAS, 277, 1341
Kuijken, K., & Gilmore, G. 1989, MNRAS, 239, 605
Lin, D. N. C., & Lynden-Bell, D. 1982, MNRAS, 198, 707
López-Corredoira, M., Beckman, J. E., & Casuso, E. 1999, A&A, 351, 920
López-Corredoira, M., Hammersley, P. L., Garzón, F., Simonneau, E., & Mahoney, T. J. 2000, MNRAS, 313, 392
Lynden-Bell, D. 1965, MNRAS, 129, 299
Murai, T., & Fujimoto, M. 1980, PASJ, 32, 581
Nelson, R. W. 1998, MNRAS, 293, 117
Nelson, R. W., & Tremaine, S. 1995, MNRAS, 275, 897
Ostriker, E. C., & Binney, J. J. 1989, MNRAS, 237, 785
Porcel, C., & Battaner, E. 1995, MNRAS, 274, 1153
Porcel, C., Battaner, E., & Jiménez-Vicente, J. 1997, A&A, 322, 103
Reshetnikov, V., & Combes, F. 1998, A&A, 337, 9
Rocha-Pinto, H. J., Maciel, W.
J., Scalo, J., & Flynn, C. 2000, A&A, 358, 850
Rogstad, D. H., Lockhart, I. A., & Wright, M. C. H. 1974, ApJ, 193, 309
Ryden, B. S. 1988, ApJ, 329, 589
Ryden, B. S., & Gunn, J. E. 1987, ApJ, 318, 15
Sánchez-Saavedra, M. L., Battaner, E., & Florido, E. 1990, MNRAS, 246, 458
Schramm, D. N., & Turner, M. S. 1998, Rev. Mod. Phys., 70, 303
Shang, Z., Brinks, E., Zheng, Z., et al. 1998, ApJ, 504, L23
Tinsley, B. M. 1980, Fund. Cosmic Phys., 5, 287
Wakker, B. P., & van Woerden, H. 1997, ARA&A, 35, 217
Wakker, B. P., Howk, J. C., Savage, B. D., et al. 1999a, Nature, 402, 308
Wakker, B. P., van Woerden, H., & Gibson, B. K. 1999b, in Stromlo Workshop on High-Velocity Clouds, ed. B. K. Gibson, & M. E. Putman, ASP Conf. Ser., 166, 311
Weinberg, M. D. 1998, MNRAS, 299, 499
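The Newton-Raphson iteration of Appendix A, $\alpha^{k+1} = \alpha^k - W^{-1}(\alpha^k)H(\alpha^k)$, can be sketched as follows. This is an illustrative implementation, not the authors' code: the Jacobian $W$ is approximated by finite differences, and `H_toy` is a hypothetical stand-in for the residual $H_i = \frac{d\omega_p}{dR}(R_i)$ evaluated on the grid of radii:

```python
# Sketch of a multivariate Newton-Raphson solver of the kind used in
# Appendix A.  The residual H_toy below is a toy stand-in; the paper's
# actual residual is H_i = d(omega_p)/dR at the grid radii R_i.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def newton_raphson(H, alpha0, tol=1e-10, h=1e-7, max_iter=50):
    """Iterate alpha_{k+1} = alpha_k - W^{-1}(alpha_k) H(alpha_k)."""
    alpha = list(alpha0)
    n = len(alpha)
    for _ in range(max_iter):
        Ha = H(alpha)
        if max(abs(v) for v in Ha) < tol:
            break
        # Finite-difference Jacobian W_ij = dH_i / d(alpha_j)
        W = [[0.0] * n for _ in range(n)]
        for j in range(n):
            pert = alpha[:]
            pert[j] += h
            Hp = H(pert)
            for i in range(n):
                W[i][j] = (Hp[i] - Ha[i]) / h
        step = solve(W, Ha)                       # step = W^{-1} H
        alpha = [a - s for a, s in zip(alpha, step)]
    return alpha

# Toy coupled residual with a root at (1, 1):
def H_toy(a):
    return [a[0]**2 + a[1] - 2.0, a[0] + a[1]**2 - 2.0]

# First guess plays the role of alpha(R) = |theta_0| in Appendix A.
root = newton_raphson(H_toy, [1.5, 1.5])
```

The iteration converges quadratically near the root; in the paper's application each call to `H` would evaluate $d\omega_p/dR$ at every grid radius for the trial warp angles.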
Names and profiles of independent directors proposed to be a proxy Name - Surname Dato’ Shaarani Bin Ibrahim Position Independent Director Age 67 years Nationality Malaysian Appointed on 20 January 2009 Years in Director Position 8 years 1 month Current Positions - Independent Director - Member of the Audit Committee - Member of the Nomination, Remuneration and Corporate Governance Committee Education - B.A.(Hons) (International Relations), Universiti Malaya Director Training Program - September 2015: World Capital Markets Symposium, Malaysia - June 2015: Affin Hwang Asset Management Investment Forum 2015, Malaysia - June 2015: IDFR (Institute of Diplomacy and Foreign Relations) Lecture Series 3/2015 themed “China’s One Belt, One Road Initiative: Strategic Implications, Regional Responses,” Malaysia - September 2014: ASEAN Game Changer Forum, Singapore - June 2014: CIMB Group on the 6th Regional Compliance, Audit & Risk (CAR) Summit, Malaysia - June 2014: 28th Asia-Pacific Roundtable (APR), Malaysia - June 2013: CIMB Group on the 5th Regional Compliance, Audit & Risk (CAR) Conference, Indonesia - April 2011: Director Certification Programme (DCP) Class 145/2011 English Programme, Thai Institute of Directors - April 2010: Director Accreditation Programme (DAP) Class 83/2010 English Programme, Thai Institute of Directors - August 2009: Non-Executive Director Development Series - August 2009: “Corporate Governance” by PriceWaterhouseCoopers, Malaysia **Positions in Other Listed Companies** - None **Positions in Non-listed Companies** - Director, CIMB Bank PCL, Vietnam - Chairman of the Board, Chairman of Risk Committee, Member of Audit Committee, CIMB Bank PCL, Cambodia - Independent Director, Chairman of Remuneration Committee, Member of Audit Committee, Member of Nomination Committee, Dragon Group International Limited (DGI), Singapore **Work Experience within 5 years** - Board Member, Chairman of the Audit Committee, Member of the Investment Committee, Universiti 
Putra Malaysia (UPM) - Ambassador of Malaysia, The Kingdom of Thailand **Position in Rival Companies / Other Banking-related Companies** - None **Shareholding in CIMBThai** - None **Legal Dispute** - None **Meeting Attendance in 2016*** - Board of Directors 12/12 times (100.00%) - Audit Committee 15/15 times (100.00%) - Nomination, Remuneration and Corporate Governance Committee 11/11 times (100.00%) (* Details of attendance as presented in Annual Report 2016) **Conflict of Interest in This Meeting** - None Additional qualifications for independent director: | Type of Relationship with the Bank | Yes | No | |---------------------------------------------------------------------------------------------------|-----|----| | 1. Being a close relative of management or major shareholders of the Bank or its subsidiary companies. | - | ✓ | | 2. Having the following relationship with the Bank, parent company, subsidiary companies, associated companies or any juristic persons who may have a conflict of interest at present or during the past two years: | | | | 2.1. Taking part in the management or being an employee, staff member, or advisor who receives a regular salary. | - | ✓ | | 2.2. Being a professional service provider, e.g. auditor or legal advisor. | - | ✓ | | 2.3. Having a business relation that is material and could be a barrier to independent judgment. | - | ✓ | Remark: Information as of 28 February 2017 Names and profiles of independent directors proposed to be a proxy Name - Surname Mrs. Watanan Petersik Position Independent Director Age 56 years Nationality Thai Appointed on 25 April 2007 Years in Director Position 9 years 10 months Current Positions - Independent Director* - Chairperson of the Nomination, Remuneration and Corporate Governance Committee (*The Board of Directors’ meeting no. 
8/2009, held on 30 July 2009, resolved to appoint her to be Independent Director) Education - AB Bryn Mawr College, PA, USA Director Training Program - Bursatra Sdn Bhd: Mandatory Accreditation Programme (MAP) for Directors of Public Listed Companies (17-18 March 2010) - Director Accreditation Programme (DAP) Class 83/2010 English Programme, Thai Institute of Directors (27 April 2010) - Certificate, Singapore Institute of Directors Course: Role of Directors Positions in Other Listed Companies - Independent Director, PTT Global Chemical PCL Positions in Non-listed Companies - Director, TPG Star SF Pte Ltd - Director, TPG Growth SF Pte Ltd - Director, TPG Growth III Asia Internet Holdings Pte Ltd - Director, TE Asia Healthcare Advisory Pte Ltd - Director, TE Asia Healthcare Partners Pte Ltd - Independent Director and Non-Executive Director, CIMB Group Holdings Bhd - Independent Director and Non-Executive Director, CIMB Group Sdn Bhd - Director, Lien Centre for Social Innovation, Singapore Management University - Director, Asia Capital Advisory Pte Ltd - Senior Adviser/Consultant, TPG Capital Asia Work Experience within 5 years - None Position in Rival Companies / Other Banking-related Companies - None Shareholding in CIMB Thai - None Legal Dispute - None Meeting Attendance in 2016* - Board of Directors 10/12 times (83.33%) - Nomination, Remuneration and Corporate Governance Committee 10/11 times (90.90%) - Audit Committee 2/15 times (13.33%) (The Board meeting, held on 28 April 2016, acknowledged Mrs. Watanan Petersik’s resignation from the Audit Committee effective from 1 May 2016.) (* Details of attendance as presented in Annual Report 2016) Conflict of Interest in This Meeting - Agenda item 7 Additional qualifications for independent director: | Type of Relationship with the Bank | Yes | No | |---------------------------------------------------------------------------------------------------|-----|----| | 1.
Being a close relative of management or major shareholders of the Bank or its subsidiary companies. | - | ✓ | | 2. Having the following relationship with the Bank, parent company, subsidiary companies, associated companies or any juristic persons who may have a conflict of interest at present or during the past two years: | | | | 2.1. Taking part in the management or being an employee, staff member, or advisor who receives a regular salary. | - | ✓ | | 2.2. Being a professional service provider, e.g. auditor or legal advisor. | - | ✓ | | 2.3. Having a business relation that is material and could be a barrier to independent judgment. | - | ✓ | Remark: Information as of 28 February 2017 Names and profiles of independent directors proposed to be a proxy Name - Surname Mr. Pravej Ongartsittigul Proposed Position Independent Director Age 61 years Nationality Thai Appointed on 19 April 2016 Years in Director Position 10 months Current Positions - Independent Director - Member of the Audit Committee Education - Master of Business Administration (Finance), New Hampshire College, U.S.A. - Master of Business Administration (Decision Support Systems), New Hampshire College, U.S.A. - Bachelor of Accounting, Faculty of Commerce and Accountancy, Chulalongkorn University Director Training Program - 2016: Corporate Governance for Capital Market Intermediaries (CGI), Class 17/2016, Thai Institute of Directors - 2009: Advanced Senior Executive Program, Northwestern University (Kellogg) - 2007: Director Certification Program (DCP), Class 86/2007, Thai Institute of Directors - 2007: Strategic Leadership Program, Capital Market Academy Class 1/2007, Stock Exchange of Thailand - 2007: Public-Private Partnership Program, Class 1/2007, Royal Thai Police - 1990: Chartered Bank EDP Auditor, Designation 898/1990, Bank Administration Institute, U.S.A. - 1987: Chartered Bank Auditor, Designation 3167/1987, Bank Administration Institute, U.S.A.
Positions in Other Listed Companies - None Positions in Non-listed Companies - Independent Director, Advance Medical Co., Ltd. - Chairman and Independent Director, AIRA Securities PCL - Director (Investment Advisory), Thai Red Cross Society Work Experience within 5 years - Secretary General, Office of Insurance Commission - Senior Assistant Secretary General, Securities and Exchange Commission - Member of Committee for the Protection of Credit Information, Bank of Thailand - Member of Financial Institutions Policy Committee, Bank of Thailand - Director, Anti-Money Laundering Office Position in Rival Companies / Other Banking-related Companies - None Shareholding in CIMB Thai - None Legal Dispute - None Meeting Attendance in 2016* - Board of Directors 8/12 times (66.66%) - Audit Committee 11/15 times (73.33%) (The Annual General Shareholders’ Meeting no. 22, held on 19 April 2016, resolved to appoint Mr. Pravej Ongartsittigul as Director of the Board. The Board of Directors’ meeting, held on 28 April 2016, resolved to appoint him as Independent Director and member of the Audit Committee effective from 1 May 2016.) (* Details of attendance as presented in Annual Report 2016) Conflict of Interest in This Meeting - None Additional qualifications for independent director: | Type of Relationship with the Bank | Yes | No | |---------------------------------------------------------------------------------------------------|-----|----| | 1. Being a close relative of management or major shareholders of the Bank or its subsidiary companies. | - | ✓ | | 2. Having the following relationship with the Bank, parent company, subsidiary companies, associated companies or any juristic persons who may have a conflict of interest at present or during the past two years: | | | | 2.1. Taking part in the management or being an employee, staff member, or advisor who receives a regular salary. | - | ✓ | | 2.2.
Being a professional service provider, e.g. auditor or legal advisor. | - | ✓ | | 2.3. Having a business relation that is material and could be a barrier to independent judgment. | - | ✓ | Remark: Information as of 28 February 2017
Daytime cover, diet and space-use of golden jackals (*Canis aureus*) in agro-ecosystems of Bangladesh

Jaeger, Michael M.; Haque, Emdadul; Sultana, Parvin; and Bruggers, Richard L., "Daytime cover, diet and space-use of golden jackals (*Canis aureus*) in agro-ecosystems of Bangladesh" (2007). *USDA National Wildlife Research Center - Staff Publications*. 701. [https://digitalcommons.unl.edu/icwdm_usdanwrc/701](https://digitalcommons.unl.edu/icwdm_usdanwrc/701)

Michael M. Jaeger\textsuperscript{1,*}, Emdadul Haque\textsuperscript{2}, Parvin Sultana\textsuperscript{2} and Richard L.
Bruggers\textsuperscript{3} \textsuperscript{1} US Department of Agriculture, Animal and Plant Health Inspection Service, Wildlife Services, National Wildlife Research Center, Utah State University, Logan, UT 84322, USA, e-mail: email@example.com \textsuperscript{2} Bangladesh Agriculture Research Institute (BARI), Vertebrate Pest Section, Joydebpur, Bangladesh \textsuperscript{3} US Department of Agriculture, Animal and Plant Health Inspection Service, Wildlife Services, National Wildlife Research Center, Fort Collins, CO 80521, USA *Corresponding author Abstract Golden jackals are locally common in Bangladesh despite intensive cultivation and high human densities. We studied the relative importance of seasonal flooding, rodent prey-base, and daytime cover on the occurrence of golden jackals in the two major agro-ecosystems in Bangladesh, one with annual monsoon flooding and the other without. Jackals were less common throughout the year where floodwaters occurred that would have excluded them for 1–3 months during their pup-rearing season. Diets of jackals were similar in the two agro-ecosystems. Rodents were the most common food type in scats throughout the year. The occurrence of burrowing rats in scats peaked seasonally when these rats were most concentrated in ripening cereals, suggesting that jackals are beneficial for rat control. Radiotelemetry of seven jackals in the non-flooded agro-ecosystem over an 11-month period indicated that sugarcane was the preferred type of daytime cover, despite representing only 2–4% of the area. There was a day-to-day return rate of 67% to the same 1-ha patch of cover. Evidently, sugarcane provides daytime cover for avoiding humans and for feeding on roof rats (\textit{Rattus rattus}), which concentrate in this crop. Evidence suggests that breeding pairs of jackals were annual residents that defended cover (average of 37.3 ha) but not foraging areas beyond. 
Keywords: annual territories; daytime cover; rodent prey-base; seasonal flooding. Introduction The range of the golden jackal is extensive and includes contiguous areas of Africa, Asia, and Europe (Macdonald and Sillero-Zubiri 2004). This implies that the golden jackal is a habitat generalist, similar to the coyote (\textit{Canis latrans}) in North America (Bekoff and Gese 2003). Both species are generalist predators with adaptable social systems (e.g., Macdonald 1979) that are able to exist in close proximity to humans and exploit agro-ecosystems. However, in some parts of their range, golden jackals have either disappeared or their numbers are shrinking due to anthropogenic causes (Jhala and Moehlman 2004). Surveys in Greece indicate that jackals have a fragmented distribution associated with coastal wetlands and that local populations are disappearing coincident with the destruction of the remaining patches of this habitat (Giannatos et al. 2005). Dense vegetation usually associated with wetlands may provide cover for avoiding humans during the daytime and be an important limiting factor for the existence of golden jackals in close proximity to humans. In contrast, golden jackals are reported to be expanding their range in Bulgaria (Krystufek et al. 1997). Conservation of this species depends on developing a better understanding of the resources that are necessary to sustain populations in agro-ecosystems. Golden jackals occur in intensively cultivated areas of Bangladesh (Poché et al. 1987) despite an agrarian population density averaging more than 900 humans/km$^2$ (USAID 2006). In addition, extensive flooding occurs during the annual monsoon rains that would force jackals to vacate submerged areas for up to 3 months. 
Three resources seem necessary for jackals to become established under these conditions: (1) daytime cover, (2) a diet that is predominately of rodents and does not seriously compete with humans for food (e.g., poultry and livestock), and (3) access to areas that do not flood annually for establishing territories and breeding. These resources are not mutually exclusive. For example, sugarcane has been suggested as an important source of both cover and food for jackals in cultivated areas of Pakistan (Khan and Beg 1986). In Bangladesh, jackals are relatively common in sugarcane-growing areas (Poché et al. 1987) but sugarcane is grown only where monsoon flooding is uncommon, suggesting that flooding per se, or its effect on the distribution of food, may be the more important determinant of jackal distribution. Flooding likely forces jackals and other carnivores to leave until after the waters recede. Breeding adults may not occur where they cannot be territorial throughout the year, as seems to be the case for the coyote (Shivik et al. 1996, Gantz and Knowlton 2005). Alternatively, flooding may impact the post-flood density of burrowing rodents, particularly the lesser bandicoot rat (\textit{Bandicota bengalensis}), which is common throughout south Asia. Golden jackals are reported to prey primarily upon rodents (Lanszki and Heltai 2002), including bandicoot rats (Khan and Beg 1986). Rodents are also an important prey for side-striped jackals (\textit{C. adustus}; Atkinson et al. 2002). The importance of rodents in the annual diet is unknown for the agro-ecosystems of Bangladesh, where it is reported that jackals are primarily scavengers of human refuse (Poché et al. 1987). They also consume a variety of fruits and vegetables, together with poultry and livestock (Sarker and Ameen 1990). Farmers often respond to depredations by culling jackals. 
The broad objective of this study was to determine the relative importance of annual access to territories, rodents in the diet, and availability of daytime cover to the occurrence of golden jackals in intensively cultivated areas of Bangladesh. Three specific questions were addressed: (1) do the two major agro-ecosystems (seasonally flooded versus not flooded) differ in relative abundance of jackals; (2) are rodents an important component of the annual diet; and (3) is daytime cover an important determinant of the distribution and local abundance of jackals. Materials and methods Study sites Bangladesh is situated in the delta formed by the confluence of the Brahmaputra, Ganges, and Meghna Rivers. Extensive flooding occurs annually from June to October, coincident with the monsoon rains in south Asia. Flooding usually peaks by mid-August, when water covers one-third to half of the country. Most of this is due to run-off into the river systems from upstream in the Himalayas and not to the 2300 mm of average annual rainfall in Bangladesh itself (Ali 2002). Most rainfall (81%) occurs during the monsoon season, with the remainder during the pre-monsoon summer season (March to May). The mean annual temperature in Bangladesh is 26°C, it is coolest (7–31°C) during the dry winter season (November to February) and warmest during summer (30–40°C). There are two major agro-ecosystems based on the type of rice (*Oryza sativa*) grown in flooded (deep-water) versus non-flooded (paddy) areas. In both systems, rice is grown throughout the year in three seasons and wheat (*Triticum aestivum*) is grown in a single season (December to March). In addition, pulses (e.g., grass pea, *Lathyrus sativus*) and mustard (*Brassica campestris*) are common winter crops in both systems. Jute (*Corchorus spp.*) is grown in the seasonally flooded agro-ecosystem and sugarcane (*Saccharum officinarum*) in the non-flooded system. 
The study was conducted at two field sites: Mirzapur (24°15' N, 89°55' E) representing the seasonally flooded (July–September) and Ishurdi (24°08' N, 89°04' E) the non-flooded agro-ecosystem. The reader is referred to Sultana and Jaeger (1992) for a description of each agro-ecosystem, including the different crop types at each site, area cultivated with each, field sizes, and harvest periods. According to the 1991 census, Tangail District, which includes Mirzapur, had an overall population density of 910/km² and Pabna District, which includes Ishurdi, an overall density of 850/km² (www.citypopulation.de/Bangladesh.html). At Mirzapur, housing clusters become seasonal islands during the annual flood, and are surrounded by water of up to 6 m in depth and are accessible only by boat. A single all-weather road serves this area, whereas at Ishurdi there is an intersection of two all-weather roads along which the study was conducted. Jackals and jungle cats (*Felis chaus*) were observed at both sites, as were Bengal fox (*Vulpes bengalensis*), large Indian civets (*Viverra zibetha*), and mongoose (*Herpestes spp.*). In addition, a fishing cat (*Prionailurus viverrinus*) was captured at Ishurdi. A variety of rodents were common at both sites. Burrowing rats were concentrated in ripening cereals (Sultana and Jaeger 1992). The most common was the lesser bandicoot rat. The distribution of the greater bandicoot rat (*B. indica*) was more restricted to seasonally flooded areas at the Mirzapur site. The short-tailed mole rat (*Nesokia indica*) occurred only at the Ishurdi site (Poche et al. 1982). The roof rat (*Rattus rattus*) was common throughout the country but restricted to patches of dense vegetation (e.g., clusters of trees, vines, or sugarcane) in which it can climb and seek cover (Chandrasekar-Rao and Sunquist 1998). Domestic dogs occurred around clusters of houses, but were not encountered away from them. 
Sampling Field sites were sampled monthly from November 1986 to January 1988 for jackal scats to determine the relative density of jackals and the types of food they consumed. Vegetation type and field size were also recorded. In addition, the number of active burrow systems of bandicoot rats was sampled to determine whether the spatial or temporal distribution of this potentially important prey was likely to affect the occurrence or distribution of jackals. At each study site we established sampling areas by defining a 24-km transect along the main roadway through the area and identifying 48 1-km² blocks, 24 along each side of the roadway. This approach was used because these general areas were inaccessible other than by foot. Each transect was partitioned into two equal strata of 12 km in length. For each of the first 6 months of the study, we randomly selected two 1-km² blocks in each stratum and within each block we randomly selected and sampled four 1-ha plots. In May 1987, sampling was changed to the use of fixed blocks. This approach was used on the recommendation of a statistician for reasons to do with ANOVA and unrelated to the present study. Subsequently, we randomly selected four 1-km² blocks per stratum and then randomly selected two 1-ha plots from these same blocks each month. Under each sampling regime, we sampled 16 1-ha plots per site per month. Within each 1-ha plot, jackal scats were collected and their number recorded. The total number of scats collected per site per month was unlikely to be affected by the change in sampling procedure. The numbers of scats collected by this method were few; thus, for the purpose of diet analysis only, additional scats were collected opportunistically throughout the year at both sites. We did not search through the vegetation within fields for scats so as to avoid damaging the crop. Instead, searches were along the bunds bordering each field. 
Fields were small at both Mirzapur and Ishurdi, averaging 0.12 and 0.19 ha, respectively (Sultana and Jaeger 1992). This produced a dense network of bunds and therefore allowed a thorough search of each 1-ha plot. **Diet analysis** Food types consumed by jackals were determined by identifying undigested remains in their scats. A single scat can consist of more than one piece from the same dropping. Jackal scats were distinguished from those of other carnivores, including domestic dogs, foxes, civets, and jungle cats, by their characteristic size, shape, and contents. The reader is referred to Chame (2003) for a description of the scats of different carnivores. This process was carried out in the same way in which coyote scats are distinguished from those of bobcats (*Lynx rufus*), gray foxes (*Urocyon cinereoargenteus*), raccoons (*Procyon lotor*), striped skunks (*Mephitis mephitis*), and domestic dogs (Neale et al. 1998). Confusion was most likely to occur in distinguishing the scats of jackals from those of jungle cats, owing to their similar dimensions and the tendency of both to be straight in shape. Scats of these two species were distinguished from one another by the well-defined segments characteristic of felids and by the presence of fruits, seeds, insects, plant tissue, etc. characteristic of the more omnivorous canids. In addition, scats of captive jackals and jungle cats that had been fed rats as part of another study to determine the number of rats consumed by an individual per day (R. Pandit, unpublished data, BARI) were used as reference material. Nevertheless, some errors were likely in correctly assigning species of origin, but we assumed that this error was small. Jackals were observed more often than jungle cats, but the two species were captured in nearly equal numbers at the Ishurdi site where trapping occurred. Domestic dogs were uncommon and never encountered away from clusters of houses. Feral dogs and cats are not tolerated by Bangladeshi farmers.
In the laboratory, scats were boiled briefly in water and teased apart. The presence of hair, bone, feathers, exoskeleton, fish scales, plant material, etc. was recorded and identified when possible. Invertebrates found on the surface of scats were discarded. Rodent teeth and jaw parts were retained to identify species from the unique cusp patterns of the molars (Peláez-Campomanes and Martin 2005) and to determine the minimum number of rodents per scat. **Radiotelemetry** We captured 10 adult jackals at the non-flooded study site at Ishurdi between November 30, 1986 and March 15, 1987 using number 3 coil-spring, padded-jaw foothold traps. Traps were set after dark each evening, checked the next morning before dawn, and removed. Captured jackals were sedated with an intramuscular injection of ketamine hydrochloride (10 mg/kg body mass) for approximately 20 min while their sex, age, and weight were determined and they were fitted with a radio-collar. Eyes were lubricated to protect against desiccation. The absence of tooth wear (Gier 1968) and body size were used to distinguish first-year from older jackals (i.e., juveniles from adults). Reproductive condition (e.g., lactation, descended testes) was also checked. Individuals were released at their trap site after regaining coordination, generally within 2 h of capture. There were no apparent injuries to any of the jackals from trapping or handling. Radio tracking was carried out during 4–5 successive days each month. However, not all jackals could be located every day. Attempts were made to locate each jackal on at least 2 days/month. Tracking was done predominantly between dawn and dusk to obtain daytime locations in cover. Jackals were located with a hand-held Yagi antenna. At the onset of a radio-tracking session, it was determined whether the jackal was moving (fluctuating signal strength) or stationary (constant signal strength).
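Counting tooth positions is one common way to turn recovered teeth into a minimum number of individuals per scat; the sketch below is illustrative, and the tooth labels are hypothetical rather than the authors' recording scheme.

```python
def min_rodents(tooth_counts):
    """Minimum number of rodents represented in one scat.

    `tooth_counts` maps a tooth position (e.g. 'lower-left incisor')
    to how many teeth of that position were recovered. Each rodent
    carries exactly one tooth at each position, so the most frequent
    position sets the minimum number of individuals present.
    """
    return max(tooth_counts.values(), default=0)

# Two lower-left incisors cannot come from a single rodent:
assert min_rodents({'lower-left incisor': 2, 'upper-right incisor': 1}) == 2
```

This is a minimum because two rodents can each contribute a different tooth position and still be counted as one.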
The location and cover type of a stationary animal could be determined without triangulation using a single-antenna receiver. This was done by following the general direction of the signal and, when close to the animal, circling to verify its position (1-ha plot). Triangulation of nighttime locations, when jackals were moving, could not be carried out with confidence. Interference from electrical power lines made it difficult to obtain the accurate bearings (estimated by increasing and decreasing the gain on the receiver to hear subtle differences in signal strength) needed to triangulate positions. Nevertheless, the general direction and distance of the animal from the tracking position, and whether or not it was moving, could be determined. **Results** **Relative density of jackals** A comparison of the number of jackal scats collected monthly at each study site is shown in Figure 1. Overall, more scats were found per month at Ishurdi than at Mirzapur (Mann-Whitney test, $U=64$, $0.01<p<0.02$). There was an annual peak in the number of scats during March at both sites. This peak was greater at Ishurdi (Mann-Whitney test, $U=181$, $0.005<p<0.01$) and represents the greatest monthly difference between sites in the number of scats found (6.4:1). The peak likely reflects a seasonal change in the use of cultivated fields by jackals rather than a change in their density. ![Figure 1](image.png) **Figure 1** Monthly number of scats of golden jackals collected at each of the two study sites in Bangladesh during 1987. Dashed line represents the site at Ishurdi in the non-flooded agro-ecosystem and the solid line the site at Mirzapur in the seasonally flooded agro-ecosystem. The Mirzapur site was covered by floodwaters from July through August. The Mirzapur site could not be sampled from July through September due to flooding.
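The between-site comparison above uses the Mann-Whitney U statistic, which can be computed directly from the two monthly scat-count samples. A minimal pure-Python sketch; the counts below are invented for illustration and are not the study's data.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b: the number of
    (a_i, b_j) pairs with a_i > b_j, counting ties as one half.
    The smaller of U and len(a)*len(b) - U is then compared
    against a tabled critical value."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented monthly scat counts for two sites (not the study's data):
ishurdi = [12, 9, 40, 11, 8, 7]
mirzapur = [5, 6, 9, 3, 2, 4]
u = mann_whitney_u(ishurdi, mirzapur)
# U and its mirror always sum to n1 * n2:
assert u + mann_whitney_u(mirzapur, ishurdi) == len(ishurdi) * len(mirzapur)
```

A rank-based test like this is appropriate here because monthly scat counts are small and skewed, so no normality assumption is needed.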
The entire area was under water (>1 m in depth) except for the man-made islands where humans were concentrated and the embankments of the major road through the area. Neither of these was sampled for scats. **Jackal diet** The trend in occurrence of different food items in scats was the same at both sites (Figure 2A). Rodents were the principal food type as measured by incidence of occurrence. Their bones and teeth were found in 62% of the 502 scats from Ishurdi and 56% of the 155 scats from Mirzapur. Birds were the second most common food type, occurring in 31% of the scats from both sites. Wild birds were not distinguished from domestic ducks and fowl. Plant material was found in 17% of the scats from Ishurdi and 12% from Mirzapur. This was most often sugarcane stem or panicles of rice or wheat. Invertebrates, mostly insects, occurred in 9% and 10% of scats from Ishurdi and Mirzapur, respectively. Bones of livestock occurred in approximately 10% of the scats from each site and fish scales in 2%. The most common rodent remains were those of *R. rattus*, *B. bengalensis*, and *Mus* spp., but the relative incidence of each varied between sites (Figure 2B). The incisors of two or more rodents were found in 30% of all scats, indicating a minimum average of 1.1 rodents/scat (Figure 2C). All the scats from both sites, including those opportunistically collected (*n*=606), were pooled to determine if there was an overall annual pattern in jackal predation on different rodent species. Pooling was carried out because there was no difference between sites in the relative incidence of different food types in scats and because no scats were collected from the Mirzapur site during monsoon flooding (July–September) and few were collected from October to December, making comparison between sites of little value. The proportion of burrowing rats (i.e., *Bandicota* spp. and *N.
indica*) in scats decreased through the course of the year, being greatest from January through March and least from October through December (Figure 3). The proportion of scats with *R. rattus* and *Mus* spp. remained similar in all four quarters of the year. Independence between incidence of burrowing rats in scats (versus no burrowing rats) and season (January–June versus July–December) was rejected in a two-way analysis (G-test, $\chi^2_{(1)}=19.78$, $p<0.001$), whereas it was accepted for both roof rats ($\chi^2_{(1)}=0.06$, $0.9>p>0.5$) and mice ($\chi^2_{(1)}=0.62$, $0.5>p>0.1$). The decrease in incidence of burrowing rats was reflected in an increase in the incidence of refuse ($\chi^2_{(1)}=20.64$, $p<0.001$), a decrease in the incidence of birds ($\chi^2_{(1)}=5.58$, $0.025>p>0.01$), and no change for either plant material ($\chi^2_{(1)}=2.68$, $0.5>p>0.1$) or invertebrates ($\chi^2_{(1)}=0.02$, $0.9>p>0.5$). ![Figure 2](image) **Figure 2** Importance of rodents in the diets of jackals from Ishurdi and Mirzapur during 1987: (A) incidence of different food types in scats; (B) incidence of rodent species in scats; and (C) minimum number of rodents/scat. ![Figure 3](image) **Figure 3** Seasonal trend in the occurrence of burrowing rats, roof rats, and mice in the scats of golden jackals in Bangladesh during 1987–1988. Scats from the two study sites are pooled. Sample sizes are as follows: $n=212$ scats for January–March, $n=89$ for April–June, $n=107$ for July–September, and $n=198$ for October–December. **Daytime cover** Radio tracking of seven jackals over periods ranging from 5 to 11 months indicated that they used cover throughout the hours of daylight. Between 07:00 and 17:00 h, 97% of locations ($n=86$) were in cover, compared with 72% of locations ($n=18$) from 05:00 to 07:00 h and 73% of locations ($n=49$) from 17:00 to 19:00 h. Sugarcane was the preferred type of cover. All jackals used it when it was available.
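The G-tests above refer the statistic $G = 2\sum O\,\ln(O/E)$ to the $\chi^2$ distribution, with 1 d.f. for a 2×2 table. A minimal sketch; the 2×2 scat counts below are invented for illustration and are not the study's data.

```python
import math

def g_statistic(table):
    """G = 2 * sum(O * ln(O/E)) for an r x c contingency table,
    referred to the chi-square distribution with (r-1)(c-1) d.f."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                expected = row_totals[i] * col_totals[j] / grand
                g += obs * math.log(obs / expected)
    return 2.0 * g

# Rows: scats with / without burrowing rats; columns: Jan-Jun / Jul-Dec.
table = [[180, 95], [121, 210]]  # invented counts
assert g_statistic(table) > 3.84  # chi-square critical value, alpha=0.05, 1 d.f.
assert abs(g_statistic([[10, 10], [10, 10]])) < 1e-9  # perfect independence
```

Under independence the observed counts equal the expected counts, every log term is zero, and G is zero, as the second assertion shows.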
The 44 locations during December–February were all in sugarcane, compared to 38 of 51 locations during March–May (G-test, d.f.=2, p<0.005). This difference may have been due to the staggered harvest among sugarcane fields, which began in December and was completed by April. This resulted in a staggered re-growth of the ratoon sugarcane. From December to January, 85% of cover used was mature sugarcane, with the remainder being the emerging ratoon crop. By April–May, only 11% of the cover used was mature sugarcane and 56% was emerging sugarcane. The availability of sugarcane at Ishurdi varied seasonally between 2% and 4% of the area sampled. Use of alternative cover (n=13 locations) was equally divided among bamboo, jute, and ripening rice and wheat. The likelihood that daytime cover was limited in availability is suggested by the high incidence of reuse of the same 1-ha patch of cover on successive days by the same jackal (67% return rate). Reuse of the same 1-ha patches in successive months occurred 34% of the time. The Ishurdi site was intensively cultivated throughout the year, with a monthly average of 74.8% of the area sampled covered by crops, 15.8% plowed or fallow, and 9.4% uncultivated. The spatial and temporal patterns of stationary daytime locations of seven jackals monitored over 11 months (December–October) are shown in Figures 4 and 5. Daytime cover used by five of the animals (♂1, ♂3, ♀5, ♀6, and ♀8) was clumped in relatively small areas that tended not to overlap among individuals. This indicates that these jackals were resident in the same exclusive (i.e., defended) areas throughout the year. There were two exceptions. First, ♂2 moved from his original area of daytime cover to an area of sugarcane approximately 3 km away (Figure 5A,B) after his mate (♀10) and their pups were killed by farmers. This occurred in April following discovery of their den in a wheat field during harvest. This jackal (♂2) was itself killed by farmers in August.
Prior to moving, this jackal's use of daytime cover partially overlapped that of ♂1. On two occasions, these males were located traveling together at night, suggesting that ♂2 may have been a beta male residing in his natal territory. Second, ♂7 was a 1–2-year-old non-resident transient (i.e., dispersed from his natal territory) who was located only once after March (Figure 5E). Evidently this jackal was forced to disperse following harvest of the sugarcane it had been sharing with ♀6 and at least one other unidentified jackal. The size of each territory as determined by daytime locations in cover was affected not only by the presence of conspecifics in contiguous territories, but also by the spatial and temporal distribution of sugarcane or other suitable cover. The area of daytime cover used by each jackal was calculated (as illustrated in Figure 5F) by totaling the minimum number of contiguous 1-ha plots that encompassed all plots used by an individual for cover. These areas averaged 57.3 (±43.3, ±1 SD) ha for the six resident jackals (the area used for ♂2 was that of the 5-month period after moving). Female 6 also moved her area of daytime cover following the harvest of sugarcane (Figure 5G). If only the area used after this move is considered (May–October), the average area for the six jackals becomes 37.3 (±23.4) ha. Telemetry indicated that jackals departed from cover at dusk and returned again at dawn, and that their movements extended outside of the territories defined by cover. Precise locations of moving jackals could not be determined. **Discussion** **Jackal density** The scat index indicates an overall difference in jackal density between the two major agro-ecosystems. One or more of the following factors may account for this difference: seasonal flooding, prey base, and daytime cover. The spike in the number of scats found in cultivated fields in March coincided with the time of year when lesser bandicoot rats were most concentrated.
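The territory-size summary above (57.3 ± 43.3 ha, ±1 SD, n = 6) is a mean with sample standard deviation over the per-jackal cover areas. A minimal sketch; the six per-jackal areas below are invented to reproduce the reported mean and are not the animals' actual values.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Invented per-jackal daytime-cover areas (ha) for six residents:
areas = [22, 35, 18, 120, 90, 58.8]
assert round(mean(areas), 1) == 57.3
```

A standard deviation nearly as large as the mean, as reported, indicates highly variable territory sizes among the six residents.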
We found an average of 57 rats/ha in ripening wheat at Mirzapur, double that at Ishurdi (Sultana and Jaeger 1992). This difference may have been due to greater predation by jackals (and jungle cats) at Ishurdi. At both sites, over 90% of rat burrows were in wheat fields, which represented approximately 11% of the area cultivated in each agro-ecosystem (Sultana and Jaeger 1992). This indicates that jackals focused their foraging where these rats were concentrated. This is supported by the finding that the incidence of bandicoot rats in scats was highest at this time of year (Figure 5). **Figure 5** (A) Male 2a; (B) Male 2b; (C) Male 1; (D) Male 3; (E) Male 7; (F) Female 5. The extent of the difference in jackal density between agro-ecosystems is probably best reflected by the numbers of scats found during March. At other times of year, relatively few scats were found in cultivated fields at either site. This may be because jackals did not deposit scats randomly, but used them for advertising their claim to a limited resource such as territory space (Macdonald 1979) or a wheat field and its rats. This is elaborated on in the section dealing with diet. **Seasonal flooding** The scat index indicates that jackals were less abundant where seasonal flooding occurred. This is not surprising during the monsoon season, but the difference persisted throughout the year. Flooding was extensive in the deep-water rice agro-ecosystem represented by Mirzapur. Most of this study area was flooded to a depth of 1–3 m from July through September. Deep-water rice, the crop type that distinguishes this agro-ecosystem, is grown during the floods. It seems reasonable that extensive flooding adversely affects the abundance of jackals by excluding them from areas with deep water. It is unknown whether some jackals remain on small patches of high ground throughout the floods or whether they migrate back and forth into these areas with the rising and receding flood waters.
However, it seems likely that breeding jackals, similar to breeding (i.e., alpha) coyotes, do not establish territories where they cannot maintain them year round (Shivik et al. 1996, Gantz and Knowlton 2005), particularly during pup rearing. Whelping of pups peaks in March, and the young are unlikely to become independent before August–September, when floodwaters are present. Consequently, it is the transient non-breeding jackals that seem more likely to use flooded areas on a seasonal basis. **Prey base** Bandicoot rats, roof rats, and mice were the most common prey in the scats of jackals from both agro-ecosystems. The importance of rodents in the diets of jackals is consistent with findings from elsewhere on the sub-continent (Khan and Beg 1986, Mukherjee et al. 2004). Bandicoot rats are the most common rat in cultivated cereals and pulses throughout Bangladesh. Through the use of underground burrow systems (Poché et al. 1982), they are able to exist in cultivated areas where the vegetative cover provided by crops is insufficient protection from predators. Each adult establishes a separate burrow system in which it caches ripening panicles, allowing it to remain underground for extended periods (Sultana and Jaeger 1992). Results from a concurrent study (Sultana and Jaeger 1992) found a similar annual cycle in the overall density of active burrows (i.e., adults) at both study sites, with the peak (15–20 burrows/ha) in November and December coincident with the main (aman) rice harvest and a low (<1 burrow/ha) from July to September during the monsoon. Secondary and tertiary peaks coincided with the wheat (March) and boro rice (May–June) harvests, respectively. Bandicoot burrows were most concentrated in wheat and boro rice owing to the relatively small areas cultivated. Interestingly, it was at these times that the incidence of bandicoot rats in scats was greatest (Figure 3).
Incidence was lowest during October–December, when rats were dispersed in aman rice that covered approximately 60% of the cultivated land in both agro-ecosystems. At Ishurdi (non-flooded), bandicoot rats remained important in the diet of jackals during the monsoon, despite their overall low density. Evidently, jackals found concentrations of burrows on well-drained road embankments or in and around structures with stored grain (M.Y. Mian, unpublished data, BARI). There was no seasonal change in the numbers of roof rats or mice in scats. Roof rats, like lesser bandicoot rats, were common at both sites (Mian et al. 1987). Unlike lesser bandicoot rats, roof rats are restricted to structures or vegetation on which they can climb and build nests above ground; sugarcane fields provide suitable habitat in which roof rats concentrate. At Ishurdi, jackals used sugarcane for daytime cover and evidently fed on the roof rats that were common in these fields. Sugarcane was present through the monsoon and in the months that followed, when the consumption of bandicoot rats was relatively low. Sugarcane therefore provided both cover and a daytime food source for jackals. The lower incidence of roof rats in scats at Mirzapur (Figure 2B) may be due in part to the absence of sugarcane, but also to a sampling bias at this site, in that more scats were collected at the time of year when the incidence of bandicoot rats in scats was highest (i.e., February–March). Evidence indicates that jackals did scavenge anthropogenic refuse at both sites, but that this was not their primary food source in either agro-ecosystem. However, the incidence of refuse in scats increased seasonally when that of burrowing rats decreased (July–December), suggesting that refuse was used to supplement the loss of these rats. Refuse may be an even more important food source in other situations within Bangladesh where the abundance of rodents is less (Poché et al.
1987), as has been reported in Israel (Macdonald 1979, Yom-Tov et al. 1995). The implication is that rats are the preferred food source where they are available. Similarly, urban coyotes have been reported to prey primarily on small mammals (Morey et al. 2007). A high incidence of bird remains was found in scats from both sites. The proportion corresponding to domestic fowl was not determined. However, the decrease in the incidence of birds in scats during the monsoon suggests that jackals had been feeding primarily upon wild granivorous birds that would have departed at this time of year when grain was scarce. The most likely source of wild birds would have been from their night roost sites. Flocks of wild birds commonly roosted in sugarcane and those that died of natural causes during the night may have been scavenged by jackals. A similar reduction in the incidence of birds in scats during the monsoon was reported by Sarker and Ameen (1990). They reported that most bird remains from an urban study site in Bangladesh were those of domestic chickens and ducks. It is unclear from our data whether jackals killed livestock to provision pups as coyotes do (Till and Knowlton 1983). Depredation seems most likely during the monsoon because of increased energetic needs, together with the reduced availability of bandicoot rats. Black Bengal goats (a dwarf variety) were the most likely type of livestock to have been preyed upon by jackals owing to their relative abundance and small size. However, there was no evidence of goat hair in scats. Jackals do kill these goats (personal observation) and the remains of goats have been found in scats from Bangladesh (Sarker and Ameen 1990), but this is not reported to be an important constraint to goat production. This may be because flock sizes were small and as a consequence the goats are easy to pen at night near humans. 
**Daytime cover** Radio-collared jackals at Ishurdi used cover during daylight hours and were rarely located in the open during the day. This finding is consistent with the behavior of golden jackals in Ethiopia (Admasu et al. 2004) and Greece (Giannatos et al. 2005), and with that of coyotes in areas where they are exploited by humans, particularly in connection with livestock depredation (Sacks 1996, Kitchen et al. 2000). On several occasions jackals were observed being pursued by a group of children after having been flushed from daytime cover. Three of the 11 jackals captured at Ishurdi were known to have been killed by farmers. For jackals to exist where human density is high requires effective avoidance of detection during daylight and the use of the cover of darkness for activity in the open. Red foxes (*Vulpes vulpes*) in Europe (Lucherini et al. 1995) and coyotes in North America (Grinder and Krausman 2001) are also able to live in close association with humans, including in urban areas. Where golden jackals are not threatened by humans, they are more diurnal in their activity (Fuller et al. 1989). Sugarcane was the favored type of daytime cover at Ishurdi, despite there being relatively little of it. Sugarcane is typically grown in small holdings (<1 ha). In some areas of Bangladesh, sugarcane fields are much more numerous, and it is in these areas that jackal densities are relatively high (Jaeger et al. 1996). The importance of dense cover is also supported by surveys in Greece that showed golden jackals to be restricted to patches of dense marsh along the coast (Giannatos et al. 2005). High stalk density and extensive lodging of stalks make a mature sugarcane field virtually impenetrable to humans. In addition, the availability of roof rats, roosting birds, and sugarcane stalks (which jackals are fond of chewing on) provided food that jackals utilized during the day.
Other species of carnivores were trapped on the edges of sugarcane fields at Ishurdi, suggesting that they also use sugarcane for daytime cover. These included the jungle cat, fishing cat, mongoose, and small Indian civet. Jungle cats were relatively common at Ishurdi; nine were captured in the process of trapping 11 jackals. **Annual territoriality** We have suggested that the ability to maintain a territory throughout the year, a prey base for which humans do not compete (i.e., rodents), and suitable daytime cover are all necessary conditions for jackals to maintain stable populations in densely populated agro-ecosystems. A distribution of suitable cover that is patchy in both space and time, such as was found with sugarcane at Ishurdi, can potentially disrupt year-round territoriality. Evidence from Ishurdi (including Poché et al. 1987) suggests most jackals were resident throughout the year and that their nighttime home ranges probably overlapped, while areas used for daytime cover were more exclusive (e.g., males 1 and 3, Figures 4 and 5C,D). This implies that breeding pairs of jackals defended areas of cover (with roof rats), but not the foraging areas beyond (i.e., fields with bandicoot rats). This is a departure from the more common situation in canids such as the coyote, where the breeding pair defends a territory that includes all necessary resources (i.e., home range and territory are the same; e.g., Andelt 1985). Macdonald (1979) showed that the expression of territoriality in golden jackals can be flexible and dependent on the distribution of resources. The average size of putative territories at Ishurdi (as determined by the spatial pattern of daytime cover) was only 37 ha. This compares with approximately 200 ha for a resident pair of golden jackals in a Kenya study (Fuller et al. 1989) and home ranges of 39 ha and 173 ha for two resident females reported from Algeria (Khidas 1990).
Home range sizes ranging from 1.1 to 20.0 km$^2$ are reported for this species by Macdonald and Sillero-Zubiri (2004). The overall area used for daytime cover by individual jackals at Ishurdi is a cumulative area over months and does not reflect the day-to-day use of cover. Resident jackals often returned to the same 1-ha patch of sugarcane on successive nights and even across successive months (Figure 5D), which would be expected when pups were in dens. **Implications for conservation and management** The results of this study indicate that both year-round access to an area and suitable cover are important factors influencing the local abundance of golden jackals in the major agro-ecosystems of Bangladesh. The potential exists to manipulate cover (e.g., sugarcane) for the conservation and management of golden jackals, jungle cats, and other medium and small carnivores. A scattered distribution of sugarcane is likely to increase opportunities to establish territories (maximum density of approx. 1 territory/km$^2$), while its staggered harvest in the same area (i.e., $\leq 1$ km$^2$) is likely to provide year-round cover within the same territory. Clustering territories so that they are contiguous may be important to success (e.g., groups of 2–4 territories). Conservation of jackals assumes the cooperation of farmers. This may be possible where jackals are not dependent on anthropogenic sources of food, particularly poultry and livestock, and where their prey base has pest status. In agricultural areas of Bangladesh, as elsewhere on the Indian sub-continent, jackals prey primarily upon rodents and as a consequence are probably beneficial for rat control, particularly in pre-harvest cereals and sugarcane. This is supported by the greater density of bandicoot burrows in mature wheat (March) at the study site with fewer jackals: $53.8 \pm 10.7$ ($\pm$SE) burrows/ha at Mirzapur (seasonal flooding) versus $21.9 \pm 4.4$ burrows/ha at Ishurdi (no flooding).
The relative area cultivated in wheat was similar at the two sites: 11.5% and 10.9%, respectively. **Acknowledgements** We thank F. Knowlton for comments on the manuscript and assistance with the figures. Two anonymous reviewers made substantial improvements to the manuscript. Funding was provided by USAID under the project “Agricultural Research II Vertebrate Pest Management Component, PASA ID-0051-P-IF-2252-05”. Use of animals in this research complies with the current laws of Bangladesh. **References** Admasu, E., S.J. Thirgood, A. Bekele and M.K. Laurenson. 2004. Spatial ecology of golden jackal in farmland in the Ethiopian Highlands. Afr. J. Ecol. 42: 144–152. Ali, M.L. 2002. An integrated approach for improvement of flood control and drainage schemes in the coastal belt of Bangladesh. Taylor Francis, London, pp. 24–25. Andelt, W.F. 1985. Behavioral ecology of coyotes in south Texas. Wildl. Monogr. 94: 1–45. Atkinson, R.P.D., D.W. Macdonald and R. Kamizola. 2002. Dietary opportunism in side-striped jackals *Canis adustus* Sundevall. J. Zool. Lond. 257: 129–139. Bekoff, M. and E.M. Gese. 2003. Coyote (*Canis latrans*). In: (G.A. Feldhamer, B.C. Thompson and J.A. Chapman, eds.) Wild mammals of North America: biology, management, and conservation. Johns Hopkins University Press, Baltimore, MD, pp. 467–481. Chame, M. 2003. Terrestrial mammal feces: a morphometric summary and description. Mem. Inst. Oswaldo Cruz, Rio de Janeiro 98 (Suppl. 1): 71–94. Chandrasekar-Rao, A. and M.E. Sunquist. 1996. Ecology of small mammals in tropical forest habitats of southern India. J. Trop. Ecol. 12: 607–624. Fuller, T.K., A.R. Biknevicius, P.W. Kat, B. Van Valkenburgh and R.K. Wayne. 1989. The ecology of three sympatric jackal species in the Rift Valley of Kenya. Afr. J. Ecol. 27: 313–323. Gantz, G.F. and F.F. Knowlton. 2005. Seasonal activity areas of coyotes in the Bear River Mountains of Utah and Idaho. J. Wildl. Manage. 69: 1652–1659. Giannatos, G., Y. Marinos, P.
Maragou and G. Catsadorakis. 2005. The status of the golden jackal (*Canis aureus* L.) in Greece. Belg. J. Zool. 135: 145–149. Gier, H.T. 1968. Coyotes in Kansas (revised). Kans. State Coll. Agric. Exp. Sta. Bull. 393: 1–118. Grinder, M.I. and P.R. Krausman. 2001. Home range, habitat use, and nocturnal activity of coyotes in an urban environment. J. Wildl. Manage. 65: 887–898. Jaeger, M.M., R.K. Pandit and E. Haque. 1996. Seasonal differences in territorial behavior by golden jackals in Bangladesh: howling versus confrontation. J. Mammal. 77: 768–775. Jhala, Y.V. and P.D. Moehlman. 2004. Golden jackal (*Canis aureus*). In: (C. Sillero-Zubiri, M. Hoffmann and D.W. Macdonald, eds.) Canids: foxes, wolves, jackals and dogs. Status survey and conservation action plan. IUCN/SSC Canid Specialist Group, Gland, Switzerland. Khan, A.A. and M.A. Beg. 1986. Food of some mammalian predators in the cultivated areas of Punjab. Pak. J. Zool. 18: 71–79. Khidas, K. 1990. Contribution à la connaissance du chacal doré. Facteurs modulant l’organisation sociale et territoriale de la sous-espèce algérienne (*Canis aureus algeriensis* Wagner, 1941). Mammalia 54: 363–375. Kitchen, A.M., E.M. Gese and E.R. Schauster. 2000. Changes in coyote activity patterns due to reduced exposure to human persecution. Can. J. Zool. 78: 855–857. Kryštufek, B., D. Murariu and C. Kurtonur. 1997. Present distribution of the golden jackal *Canis aureus* in the Balkans and adjacent regions. Mamm. Rev. 27: 109–114. Lanszki, J. and M. Heltai. 2002. Feeding habits of golden jackal and red fox in south-western Hungary during winter and spring. Mamm. Biol. 67: 129–136. Lucherini, M., S. Lovari and G. Crema. 1995. Habitat use and ranging behaviour of the red fox (*Vulpes vulpes*) in a Mediterranean rural area: is shelter availability a key factor? J. Zool. Lond. 237: 577–591. Macdonald, D.W. 1979. The flexible social system of the golden jackal, *Canis aureus*. Behav. Ecol. Sociobiol. 5: 17–38. Macdonald, D.W. and C. Sillero-Zubiri. 2004. Dramatis personae. In: (D.W.
Macdonald and C. Sillero-Zubiri, eds.) Biology and conservation of wild canids. Oxford University Press, Oxford, pp. 3–30. Mian, M.Y., M.S. Ahmed and J.E. Brooks. 1987. Small mammals and stored food losses in farm households in Bangladesh. Crop Prot. 6: 200–203. Morey, P.S., E.M. Gese and S. Gehrt. 2007. Spatial and temporal variation in the diet of coyotes in the Chicago Metropolitan Area. Am. Midl. Nat., in press. Mukherjee, S., S.P. Goyal, A.J.T. Johnsingh and M.R.P. Leite Pitman. 2004. The importance of rodents in the diet of jungle cat (*Felis chaus*), caracal (*Caracal caracal*) and golden jackal (*Canis aureus*) in Sariska Tiger Reserve, Rajasthan, India. J. Zool. Lond. 262: 405–411. Neale, J.C.C., B.N. Sacks, M.M. Jaeger and D.R. McCullough. 1998. A comparison of bobcat and coyote predation on lambs in north-coastal California. J. Wildl. Manage. 62: 700–706. Peláez-Campomanes, P. and R.A. Martin. 2005. The Pliocene and Pleistocene history of cotton rats in the Meade Basin of southwestern Kansas. J. Mammal. 86: 475–494. Poché, R.M., M.Y. Mian, M.E. Haque and P. Sultana. 1982. Rodent damage and burrowing characteristics in Bangladesh wheat fields. J. Wildl. Manage. 46: 139–147. Poché, R.M., S.J. Evans, P. Sultana, M.E. Haque, R. Sterner and M.A. Siddique. 1987. Notes on the golden jackal (*Canis aureus*) in Bangladesh. Mammalia 51: 259–270. Sacks, B.N. 1996. Ecology and behavior of coyotes on a California sheep ranch in relation to depredation and control. MS thesis, University of California, Berkeley. Sarker, N.J. and M.N. Ameen. 1990. Food habits of jackals (*Canis aureus*). Bangladesh J. Zool. 18: 189–202. Shivik, J.A., M.M. Jaeger and R.H. Barrett. 1996. Coyote movements in relation to the spatial distribution of sheep. J. Wildl. Manage. 60: 422–428. Sultana, P. and M.M. Jaeger. 1992. Control strategies to reduce rat damage in Bangladesh. In: (J.E. Borrecco and R.E. Marsh, eds.) Proceedings of the 15th Vertebrate Pest Conference, University of California, Davis, pp.
261–267. Till, J.A. and F.F. Knowlton. 1983. Efficacy of denning in alleviating coyote depredations upon domestic sheep. J. Wildl. Manage. 47: 1018–1025. USAID. 2006. http://www.usaid.gov/bd/popo.html Yom-Tov, Y., S. Ashkenazi and O. Viner. 1995. Cattle predation by the golden jackal Canis aureus in the Golan Heights, Israel. Biol. Cons. 73: 19–22.
Spatial memory of Paridae: comparison of a storing and a non-storing species, the coal tit, *Parus ater*, and the great tit, *P. major* JOHN R. KREBS*, SUSAN D. HEALY* & SARA I. SHETTLEWORTH† *Edward Grey Institute of Field Ornithology, South Parks Road, Oxford OX1 3PS, U.K. †Department of Psychology, University of Toronto, Toronto M5S 1A1, Canada Abstract. The performances of the coal tit (a food storer) and the great tit (a non-storer) were compared in two experiments in which the bird was rewarded in phase 2 of a trial for returning to sites where it had seen food in phase 1 ('window-shopping'). In both experiments the differences in performance between the species were small. In the first experiment the storing species was better than the non-storer at returning to the site where it had seen a seed, and in the second experiment the storing species was better at discriminating in phase 2 between sites visited in phase 1 that contained a seed and sites visited in phase 1 that were empty. A third experiment demonstrated that the motivational variables of reward size and deprivation were unlikely to account for the difference between the species. The results might indicate that there are differences in the spatial memory of the storing and non-storing species, but a difference in perceptual discrimination cannot be excluded. Two approaches to comparing learning and memory of different species can be distinguished. In the traditional approach (Thorndike 1911; Bitterman 1975) the performances of very different species such as goldfish, *Carassius auratus*, and pigeons, *Columba livia*, are compared in arbitrary laboratory tasks like discrimination reversal and probability learning. Macphail (1982) claimed that this approach has so far provided no conclusive evidence for qualitative or quantitative differences between species once 'contextual' variables like sensory, motor and motivation differences are taken into account. Recently some authors (e.g. 
Domjan & Galef 1983; Kamil 1987; Rozin & Schull 1988) have advocated what Kamil (1987) termed the synthetic approach, combining the method and the theory of psychology with the insights of ethologists and behavioural ecologists into how animals use learning and memory in nature. According to this view, closely related species should be compared with regard to abilities that they naturally need to use to different degrees. Proponents of the synthetic approach predict that quantitative or qualitative specializations of learning and memory will be found that correspond to differences in ecology (Domjan & Galef 1983; Sherry & Schacter 1987). However, little systematic research using this approach has been done. Food-storing birds provide an excellent opportunity to test the notion that memory differs between species in an adaptive way. Some parids (titmice) and corvids (crows) store food in the wild and rely on memory to find it again. They may store items in hundreds of different places for periods of hours or days (reviews in Shettleworth 1985; Balda et al. 1987; Sherry 1987). Other species, living in similar areas and with similar feeding habits, store very little or not at all. The importance of good spatial memory to a food-storing way of life suggests that food-storing species may have evolved a particularly large-capacity, long-lasting and accurate spatial memory, which might be qualitatively or quantitatively different from the spatial memory used in normal foraging. This speculation is encouraged by recent neuroanatomical studies showing that the hippocampus in food-storing birds is significantly larger relative to brain and body size than in non-storing birds (Krebs et al. 1989; Sherry et al. 1989). The hippocampus is the area of the brain implicated in some aspects of spatial and other memory in mammals (Rawlins 1985) and birds (Bingman et al. 1984; Sherry & Vaccarino 1989). 
Moreover, hippocampal damage reduces performance in recovery of stored food in black-capped chickadees, *Parus atricapillus* (Sherry & Vaccarino 1989). Regarding behavioural evidence for specialized memory, Balda & Kamil (1989) have described differences in cache recovery among three corvid species that could reflect differences in memory. In the experiments reported here we compare the performance of a storing and a non-storing species of *Parus* using a test of memory called 'window-shopping' (Shettleworth & Krebs 1986). This test was designed to capture the essential features of food-storing memory without requiring the birds to store food, thereby allowing the performance of storing and non-storing species to be compared. The initial hypothesis is that the performance of a storing species on this task should be superior to that of a non-storer and that the difference should reflect differences in memory. A window-shopping trial consists of two phases: the first phase corresponds to storing food except that instead of putting food into each site itself, the bird sees food behind a small window while foraging. The windows are covered by small cloth flaps which the birds are trained to lift to inspect the contents of the hole behind the window. Thus, as in storing food, the bird visits a number of places and learns that food is in some of them. In the second phase of each trial, after a retention interval, the bird is allowed to return and eat the food it saw behind the window in the earlier phase. The food is still hidden behind cloth flaps in the sites where it was seen in phase 1, but now the windows are open so that when the flap is lifted the food can be taken out of the hole and eaten. Food-storing black-capped chickadees can perform the window-shopping task at above chance level when the interval between the two phases is about 2 h (Shettleworth & Krebs 1986). 
In the present study we compared the performance of coal tits with that of the non-storing great tit. We first compared the species in a simple version of the task in which a single seed was hidden on each trial in one of seven sites; the retention interval was 30 min. In the second experiment we compared performance in a more complicated version of the task in which seeds were seen in seven out of 60 sites in phase 1 and the retention interval was 120 min. The reason for using a simple version of the task in the first experiment was that we wanted to start from a situation in which we expected both species to perform well and then increase the difficulty of the task to see if a species difference might emerge. In the final experiment we tested whether or not two motivational variables, reward size and deprivation level, influence performance in the complex window-shopping task.

**EXPERIMENT 1**

Our aim in this experiment was to compare the performances of coal tits and great tits in a simple version of the window-shopping task referred to above, with one out of seven sites rewarded and a retention interval of 30 min.

**Methods**

**Subjects**

The subjects were four great tits and four coal tits caught in deciduous woodland near Stanton St John, Oxfordshire. Both species were fed a mixture of dried insects, grated apple and carrot, and hard-boiled egg, together with peanuts, sunflower seeds and mealworms. The birds were kept indoors in standard holding cages measuring 0·44 m wide × 0·77 m long × 0·44 m high, each connected by a trap-door to the experimental room. Both the living and experimental rooms were maintained on a 10:14 h light:dark cycle.

**Experimental environment**

The birds were tested in a room 3·75 × 3·9 × 2·4 m high; along the 3·9-m wall were eight trap-doors connecting the birds' living cages with the room, and the 3·75-m wall had a door with a smoked one-way Plexiglas window through which the experimenter could view the birds.
In the experimental room were seven wooden blocks (9·0 cm wide × 15·0 cm high × 4·0 cm thick; see Shettleworth & Krebs 1986). Each had a small hole (0·5 cm diameter) in its front face covered by a piece of transparent plastic measuring 2·5 × 1·3 × 0·1 cm. Through this piece of plastic, or 'window', the contents of the hole were visible but not accessible; the window could be moved so that in the recovery phase of the experiment the bird could gain access to the hole's contents. The window had a small hole (3 mm diameter) in the lower right-hand corner (see below). A piece of black cloth (3 × 3·5 cm) was stapled to the block above the hole so as to cover both the hole and most of the window. It was held in place by a piece of Velcro attached to the block below the hole. This piece of cloth could be lifted easily by both species. Each block had a perch (0·9 cm diameter × 5·5 cm long) 4 cm below the central hole. The blocks were arranged on seven of the nine artificial 'trees' randomly placed around the room. The trees were cut saplings set into concrete bases or plastic umbrella stands. Arranged at various heights on the trees were perches, 10 cm long, onto which the blocks could be positioned.

Table I. Summary of data (X̄ ± s) collected from experiments 1 and 2

| | Coal tits (Exp. 1) | Great tits (Exp. 1) | Coal tits (Exp. 2) | Great tits (Exp. 2) |
|------------------------|----------------|----------------|------------------|------------------|
| No. of trials | 15·00 ± 0·00 | 15·00 ± 0·00 | 18·17 ± 3·98 | 17·71 ± 3·69 |
| No. of holes visited in phase 1 | 4·50 ± 0·50 | 3·75 ± 0·63 | 14·61 ± 3·98 | 14·68 ± 3·80 |
| Duration of phase 1 (min) | 2·28 ± 2·37 | 1·88 ± 1·95 | 3·55 ± 3·98 | 4·69 ± 3·53 |
| No. of holes visited in phase 2 | 2·25 ± 0·25 | 3·25 ± 0·25 | 21·81 ± 7·85 | 20·79 ± 8·31 |
| Duration of phase 2 (min) | 2·02 ± 1·63 | 1·27 ± 1·01 | 5·80 ± 3·34 | 6·71 ± 4·27 |
| No. of seeds seen in phase 2 | 1·00 ± 0·00 | 1·00 ± 0·00 | 5·05 ± 1·50 | 4·52 ± 1·35 |
| Retention interval (min) | 34·40 ± 7·42 | 34·13 ± 8·09 | 130·44 ± 16·46 | 132·97 ± 18·46 |

**Training**

For the duration of the experiment, food bowls were removed from the birds' living cages at dusk or shortly afterwards (1700–1800 hours, GMT). Following testing the birds were given a fresh portion of the maintenance diet. As dawn was at about 0700 hours and the birds were tested about 3 h later, this meant that after their overnight fast the birds had only the peanuts they ate in the experiment during the first 3 or 4 h of daylight. Under this feeding regime the birds were trained to come into the room and lift the pieces of cloth on the blocks in search of pieces of peanut placed in the large (0·5 cm) holes. During initial training all seven sites contained pieces of peanut, but later only one site was rewarded. The birds required between 15 and 18 days of training; the criterion for proceeding to the next stage was that the bird visit the block containing a piece of peanut within 5 min.

**Window-shopping trials**

Window-shopping trials were conducted on the same schedule as training except that the trials were divided into two phases approximately 30 min apart. One trial was conducted each day. In the first phase only one block contained a piece of peanut (average weight = 62 mg). Each hole was covered by a plastic window through which the peanut was visible but not accessible. The hole containing the peanut also had a smaller piece of peanut (average weight = 7·8 mg) pressed into the small hole in the window, where the bird could pull it out. When a bird lifted the cloth it would remove and eat this piece of peanut. The occurrence of 'window-shopping' was recorded when the bird visited a site and lifted the cloth. Each block contained the peanut once during the first 7 days, in random order; this order was maintained for subsequent trials.
In the first phase of the trial the bird was allowed into the room to window-shop until it located the single site containing a peanut. All of the sites inspected and the length of time taken for the bird to locate the site with a piece of peanut were recorded. The birds were given no food between the two phases of the trial. The bird was allowed back into the room 30 min later for the second, recovery, phase of the trial. This phase lasted until the bird visited the rewarded site or until 5 min had elapsed, whichever came sooner. The birds almost always visited the rewarded site within 5 min. All of the windows were now open but the black cloths were again pressed down, leaving the holes accessible but not visible. The site where the bird had found a piece of peanut in the first phase was the only site to contain a peanut in this phase. As we had shown in previous work, using the same experimental design, that the birds did not locate the seed in phase 2 by responding to direct cues (e.g. sight, smell; Shettleworth & Krebs 1986), we did not run these controls in the present experiments. A summary of the data collected from this experiment is shown in Table I.

**Results and Discussion**

**Number of visits to find the seed**

The simplest comparison of performance comes from examining the number of visits to find the peanut in phase 1 and phase 2 of the experiment. In phase 1 the birds are searching for a piece of peanut placed randomly in one of seven sites, so they should need on average to visit 4·0 sites (assuming no revisiting) to find the peanut. In phase 2 the birds have the possibility of using their experience in phase 1 to direct their search, and might therefore be expected to visit fewer than 4·0 sites before finding the seed. Therefore one way to examine the performance of each species is to see if it is significantly above chance.
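The chance expectation of 4·0 visits follows because, with the seed equally likely to be in any of the seven sites and no revisiting, the number of visits needed is uniform on 1–7, with mean (7 + 1)/2 = 4. A quick simulation (our illustration, not part of the original analysis) confirms this:

```python
import random

def visits_to_find_seed(n_sites=7, trials=100_000, rng=random.Random(42)):
    """Simulate random search without revisiting: the number of visits
    needed to find the single seeded site is uniform on 1..n_sites,
    so its mean is (n_sites + 1) / 2."""
    total = 0
    for _ in range(trials):
        order = list(range(n_sites))
        rng.shuffle(order)               # random visiting order, no revisits
        seed = rng.randrange(n_sites)    # seed placed in a random site
        total += order.index(seed) + 1   # visits until the seed is found
    return total / trials

print(visits_to_find_seed())  # close to 4.0 for seven sites
```

Any bird that reliably needs fewer visits than this in phase 2 must be using information from phase 1.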
A more direct comparison of the two species is to examine the number of different sites visited by each species to find the seed in phase 2. Both coal tits and great tits performed significantly above the chance level of 4·0 in phase 2 (using individual bird medians as data points: for coal tits, $t = 7.0$, $df = 3$, $P < 0.01$, two-tailed, one-sample $t$-test; for great tits all four birds had a median of 3·0). Examining the performance of individual birds based on the 15 trials revealed that three coal tits and one great tit were above chance (based on the median ± 95% confidence intervals of 15 trials per bird). In phase 1 there was no significant difference between coal tits and great tits in the number of visits needed to find the seed ($t = 0.775$, $df = 6$, $P = 0.468$), but in phase 2 the coal tits visited significantly fewer different sites before they found the seed ($t = 3.0$, $df = 6$, $P = 0.024$; Fig. 1). The performance in phase 2 is further analysed in Fig. 2. In phase 1 a site could fall into one of three exclusive and exhaustive categories: found to contain a peanut (s = seeded); visited and found to contain no peanut (u = unseeded); and not visited (n = non-visited). $P(v|s)$ is the probability of visiting a seeded site on the first look of phase 2 (averaged across all trials); $P(v|u)$ and $P(v|n)$ are the probabilities of visiting unseeded and non-visited sites, respectively. These probabilities were estimated from the data by calculating the proportion of times each event occurred for each bird. For example, if the site with the seed was visited on the first look on 10 out of the 15 trials, $P(v|s)$ would be estimated as $10/15 = 0.67$. The data summarized in Fig. 2 were angular transformed and analysed with a split-plot ANOVA with species as the plots and experience in phase 1 as a fixed effect within plots.
This showed a significant effect of experience in phase 1 ($F_{2,12} = 13.494$, $P < 0.001$) and of species ($F_{1,6} = 43.007$, $P < 0.001$). (The species $\times$ experience interaction was not significant: $F_{2,12} = 2.84$, $P < 0.10$.) Inspection of Fig. 2 suggests that the difference between species arose mainly because coal tits are more likely to return to seeded sites than are great tits. Multiple comparisons of $P(v|s)$, $P(v|u)$ and $P(v|n)$ within each species were made using Tukey's significant difference test with alpha = 0.05. These showed that for coal tits $P(v|s)$ was greater than $P(v|n)$, whilst for great tits there were no significant differences between classes.

**Effect of experience**

One possible explanation of the results reported above (Fig. 1) is that coal tits learned the task more rapidly than great tits. However, over the course of the experiment neither coal tits nor great tits showed any evidence of improvement in performance, at least as measured by the number of looks to find the seed in phase 2 (Fig. 3a).

**Effect of location of seed on previous trials**

In phase 1, coal tits were significantly more likely to go to sites that had been rewarded in the preceding three trials than to sites used in earlier trials (paired $t$-test, $P < 0.02$), while great tits showed no such trend (paired $t$-test, $P > 0.05$; Fig. 4). This indicates that there may also be a difference between the species in the extent to which experience in preceding trials affects performance in later trials (i.e. in long-term retention of memory from one trial to the next, producing so-called proactive interference; Wright et al. 1986).

**Other differences between the species**

So far we have interpreted the differences in performance between species in terms of differences in memory, but other factors might also account for the results. In experiment 3 we consider the possible effects of reward size and deprivation on performance.
Within the present experiment there were no significant differences between the species in the number of holes visited in phase 1 (Fig. 1), in the time taken to visit the holes, or in the retention interval (Table I). Another factor that could in principle influence performance is site preference: if, for example, great tits restricted their visits to a few preferred sites, this might decrease performance in phase 2 on trials when these sites were not the rewarded one. Preference was analysed by calculating the number of holes that were visited on 0, 1, ..., n trials during phase 1 (thus a bird with strong preferences would visit some holes on a high proportion of trials and others not at all). This analysis revealed that although each individual showed significant preferences (chi-squared test), there were no significant differences between the means for coal tits and great tits ($\chi^2 = 3.8$, $df = 6$, NS). To summarize, in phase 2 of experiment 1 coal tits performed slightly but significantly better than great tits (Fig. 1). It still remains to be shown, however, that this difference between the species is not caused by differences in factors other than memory (see experiment 3 and Discussion).

**EXPERIMENT 2**

In experiment 2 our aim was to test whether the difference observed in experiment 1 would be exaggerated when we increased the difficulty of the task. Instead of searching for a single seed hidden in one of seven sites, the birds were allowed to search for seven hidden seeds behind the windows in a total of 60 sites. The experiment was run with eight coal tits and seven great tits. The overall design of experiment 2 was similar to that of experiment 1: in phase 1 of a trial the bird was allowed to search for hidden seeds behind windows, and in phase 2 it was allowed back into the room to search for the seeds seen in phase 1, these now being accessible but hidden behind the cloth flaps.
**Methods**

**Subjects**

The subjects were seven great tits and eight coal tits (two of which had previously been used in other experiments, but none was used in experiment 1) caught in deciduous woodland near Stanton St John, Oxfordshire. The birds lived under natural daylength in individual outdoor living cages measuring 1·5 m wide × 6 m long × 2·5 m high. Diet and general maintenance were as described in experiment 1.

**Experimental environment**

The birds were tested in a room 3·75 × 3·9 × 2·4 m high which was connected to each of the outdoor living cages by a trapdoor. The 3·75-m wall with the trapdoors had a door with a smoked one-way Plexiglas window through which the birds could be observed in the experimental room. In the experimental room were 60 wooden blocks of the type used in experiment 1. The blocks were arranged around the room on two of the walls and on nine artificial trees randomly placed around the room. The blocks were placed pointing in different directions to make it more difficult for the birds to move up or down to all the blocks of one tree before moving on to the next tree. The blocks placed on the walls were also positioned so as to minimize systematic search patterns. In the centre of the room there was a bowl of water and a wooden perch 30 cm high.

**Training**

The birds were trained in the same way as described in experiment 1 except that there were 60 blocks in the experimental room and during training all 60 sites contained pieces of peanut. The birds required between 7 and 11 days of training. The criterion for going on to the next stage was that, within a training session, the bird visited seven blocks within 10 min.

**Window-shopping trials**

Window-shopping trials were run with half of the 60 holes blocked. The purpose of this procedure was to attempt to eliminate the tendency of the birds to visit the same sites every day. Thirty holes were selected at random and were covered by a white 1-cm sticker (these were not removed for phase 2).
The remaining 30 holes contained pieces of peanut. The holes were then covered by the plastic windows, through which the sticker or peanut was visible but not accessible. The first phase was the same as in experiment 1 except that the bird was allowed into the room to window-shop until it had located seven sites containing a piece of peanut or until 10 min had elapsed, whichever came sooner. All inspected sites were recorded. When the bird had returned to its home cage it was given three or four small pieces of peanut. Two hours later the bird was readmitted to the experimental room for the recovery phase of the trial. In this phase all of the windows were open, the cloths again pressed down onto the Velcro, and only the seven sites where the bird had seen a peanut during window-shopping in phase 1 contained a piece of peanut. This phase of the experiment lasted until the bird had visited all seven seeded sites or until 10 min had elapsed, whichever came sooner. The numbers of trials and their durations are given in Table I.

**Results and Discussion**

The analyses of this experiment were based on the first 15 holes visited in phase 2. This number of visits was chosen because it gave an equal amount of data per trial whilst including the trials in which phase 2 sessions contained the fewest visits.

**Number of seeds found in the first 15 visits**

There was no significant difference between the two species in the number of seeds found after 15 visits (coal tits: $\bar{X} \pm \text{SE} = 3·86 \pm 1·35$; great tits: $3·7 \pm 1·47$). As in experiment 1, there was no strong evidence of an improvement in performance with time (Fig. 3b); coal tits appear to show a slight increase, but this is not significant (rank correlation, $P > 0.10$).

**Probability of visiting sites containing seeds**

As in experiment 1, we compared the transformed probabilities of visiting three classes of site in phase 2 ($P(v|s)$, $P(v|n)$ and $P(v|u)$).
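The per-bird bookkeeping behind these probabilities, as described for experiment 1, can be sketched as follows (a minimal illustration using made-up trial counts, not the experimental data):

```python
import math

def estimate_p(visits, trials):
    """Estimate P(v|s), P(v|u) or P(v|n) as the proportion of trials
    on which a site of that class was visited in phase 2."""
    return visits / trials

def angular(p):
    """Arcsine-square-root (angular) transform, the standard
    variance-stabilizing transform applied to proportions before ANOVA."""
    return math.asin(math.sqrt(p))

# Hypothetical bird: seeded site visited first on 10 of 15 trials,
# mirroring the worked example given for experiment 1.
p_vs = estimate_p(10, 15)
print(round(angular(p_vs), 3))
```

The transform maps proportions from [0, 1] onto [0, π/2] radians, making their variance roughly independent of the mean, which is why it is applied before the split-plot ANOVAs here.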
Before calculating the values of $P(v|s)$ etc., the sites were divided into ‘high’ and ‘low’ preference categories on the basis of the number of trials on which each site was visited during phase 1 of each trial (Shettleworth & Krebs 1986). High preference sites were those visited on 30% of trials or more and low preference sites were visited on fewer than 30% of trials. This division was chosen so that approximately equal numbers of sites fell into the two categories. The classes of site were less well discriminated than in experiment 1 (as might be expected from the greater complexity of the task; Fig. 5). The data in Fig. 5 were subjected to a split plot ANOVA with species as plots and experience in phase 1 as a fixed effect. The data for high and low preference were included as repeated measures. There was a significant effect of species ($F_{1,13} = 14.409$, $P < 0.01$), of experience in phase 1 ($F_{5,65} = 44.691$, $P < 0.001$) and of an interaction between species and experience in phase 1 ($F_{5,65} = 12.845$, $P < 0.001$). When the data for high and low preference holes were analysed separately in split plot ANOVAs, the interaction term was significant for high preference holes ($F_{2,26} = 55.353$, $P < 0.001$) but not for low preference holes ($F_{2,26} = 0.641$, $P = 0.535$). Thus coal tits and great tits differ in their discrimination of the three classes of site, at least in high preference holes. Multiple comparisons using Tukey’s significant difference test showed that for high preference holes coal tits discriminated between all three classes of site, visiting seeded sites more than unseeded or non-visited sites and non-visited sites more than unseeded sites, but great tits did not discriminate between seeded and unseeded, visiting both more than non-visited sites. Great tits showed a similar pattern for low preference holes (seeded the same as unseeded, both greater than non-visited). 
Coal tits discriminated between seeded and non-visited sites but not between seeded and unseeded sites or between unseeded and non-visited sites.

**Other differences between the species**

The two species did not differ in the mean number of sites visited in phase 1, the time taken to visit sites in phase 1 or the retention interval (Table I). In conclusion, comparison of the probabilities of visiting sites that were seen in phase 1 to contain a seed, to contain no seed, or were not visited at all showed that coal tits appear to be better than great tits at discriminating between visited sites that contained a seed and those that did not. We had predicted that the difference between species observed in experiment 1 would be emphasized in experiment 2. This was not true for overall success in finding seeds as measured by the number of seeds found in 15 looks. However, this is a relatively weak measure of performance in experiment 2, where the possibility of finding seeds by visiting preferred sites is greater than in experiment 1 because more sites are available. The other measure of performance, discrimination of different classes of site, showed a significant difference between species in experiment 2 but not in experiment 1.

*Figure 5.* Experiment 2. The probability of visiting a site in phase 2 in relation to experience of that site in phase 1: (a) low preference holes; (b) high preference holes. Details as for Fig. 2a.

Table II. Summary of experiment 3 (X̄ from three birds ± SE)

| | Low reward/high deprivation | Low reward/low deprivation | High reward/high deprivation | High reward/low deprivation |
|------------------------|-----------------------------|----------------------------|------------------------------|-----------------------------|
| No. of trials | 15 | 15 | 15 | 15 |
| No. of holes visited in phase 1 | 15.5 ± 4.0 | 14.8 ± 4.3 | 15.9 ± 2.7 | 15.4 ± 3.4 |
| No. of seeds seen in phase 1 | 6.9 ± 0.8 | 6.7 ± 1.0 | 7.1 ± 0.6 | 7.1 ± 0.4 |
| Duration of phase 1 (min) | 3.9 ± 2.8 | 4.1 ± 2.6 | 3.2 ± 1.5 | 2.9 ± 1.7 |
| No. of holes visited in phase 2 | 21.1 ± 6.6 | 17.9 ± 3.2 | 19.3 ± 4.6 | 16.7 ± 2.4 |
| No. of seeds seen in phase 2 | 4.5 ± 1.3 | 3.3 ± 1.2 | 3.9 ± 1.3 | 3.9 ± 1.1 |
| Duration of phase 2 (min) | 3.8 ± 2.4 | 4.4 ± 2.0 | 6.1 ± 2.3 | 7.6 ± 3.4 |
| Retention interval (min) | 127.6 ± 16.5 | 124.4 ± 13.7 | 127.9 ± 21.4 | 121.8 ± 16.5 |

**EXPERIMENT 3**

Differences between species in performance in experiments 1 and 2 could be due to factors other than differences in memory (Bitterman 1975; Macphail 1982). We have already considered the effect of differences in the number of sites visited and time taken to visit sites in phase 1. In this experiment we examine the effect of two factors that might be expected to influence motivation to search for food: reward size and level of deprivation. In experiments 1 and 2 we used the same reward size and deprivation for the two species. However, the weight of a great tit is almost twice that of a coal tit (19 g versus 10 g), so if the effects of reward size and deprivation level scale with body size it could be argued that the species experienced different levels of these factors in experiments 1 and 2. Metabolic rate of passerines increases approximately with $W^{0.73}$, where $W =$ body mass (Schmidt-Nielsen 1984). Relative to its requirements, the reward of about 60 mg used in experiments 1 and 2 was therefore smaller for a great tit than for a coal tit. Similarly, because of its higher absolute requirement, a deprivation period of fixed length would result in a greater deficit for the larger species, although here it could be argued that the picture is complicated by the amount of reserves carried as fat in relation to metabolic rate. In short, great tits may have performed less well because of interspecific differences in motivation.
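The allometric argument can be made concrete with a rough calculation (body masses and the $W^{0.73}$ exponent are from the text; the function name and the 'relative value' scale are ours): dividing the fixed reward by a proxy for each species' metabolic requirement suggests the same peanut is worth roughly 1·6 times more, in relative terms, to a coal tit than to a great tit.

```python
# Rough sketch of the allometric argument: metabolic requirement scales
# approximately with W**0.73 (Schmidt-Nielsen 1984), so a fixed reward
# is worth less, relatively, to the heavier great tit.
REWARD_MG = 60.0  # approximate peanut reward used in experiments 1 and 2

def relative_reward(body_mass_g, reward_mg=REWARD_MG):
    """Reward size divided by a proxy for metabolic requirement."""
    return reward_mg / body_mass_g ** 0.73

coal = relative_reward(10.0)   # coal tit, about 10 g
great = relative_reward(19.0)  # great tit, about 19 g
print(round(coal / great, 2))  # ratio of relative reward values, ~1.6
```

The absolute units cancel in the ratio, so only the mass difference between the species matters for this comparison.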
Therefore we investigated whether or not the performance of great tits in the same set-up as experiment 2 would be influenced by reward size or level of deprivation. Although it would have been ideal to compare the two species across a range of values of these two factors, we analysed their effect only on the great tit in the complex window-shopping task. The logic of this is that if (as it turns out) reward size and deprivation within the range tested have little effect on the performance of one species, they are less likely to account for the differences between species.

**Methods**

**Subjects**

The subjects were four of the great tits used in experiment 2. This experiment was run after these four birds had completed experiment 2.

**Procedure**

The birds were maintained as described in experiment 1. This experiment was run to examine the effects of reward and deprivation on the birds' performance in the recovery phase of a trial. The layout of the experimental room was the same as in experiment 2 but the size of the pieces of peanut behind the windows was varied between treatments. The piece of peanut behind the window could be 'large' (average weight = 142 mg) or 'small' (average weight = 18 mg). Following phase 1 the birds were given either no peanut or four pieces of peanut (average weight = 38 mg each); these two conditions correspond to 'high' and 'low' deprivation, respectively. The experiment was designed as a factorial experiment with 5 days of each treatment, to allow a test of the main effects of bird, deprivation and reward size as well as the interaction between them; however, the birds began to moult part way through the experiment and so the design was not completed. The analysis was therefore done for main effects only. Four birds were each tested with three of the four conditions. A summary of the data collected from this experiment is shown in Table II.
**Results and Discussion**

The angular transformed values of $P(v|s)$, $P(v|u)$ and $P(v|n)$ were calculated for the first 15 visits of each trial (in this case high and low preference sites were combined; Fig. 6). Paired $t$-tests were used to compare the performance of each bird in $P(v|s)$, $P(v|u)$ and $P(v|n)$ across the experimental conditions. There was no significant difference in either $P(v|s)$ or $P(v|u)$ across the four conditions. In one comparison, that of low deprivation/small reward and high deprivation/large reward, $P(v|n)$ differed significantly ($P < 0.05$): the value was higher in the former than in the latter treatment. The explanation of this effect is not clear, and it may be a chance result in a large number of comparisons. Thus, under the range of conditions studied here, two motivational variables, seed size and deprivation level, did not have a significant effect on the probability of returning to sites where a seed had been seen. This helps to support the view that the differences between coal tits and great tits observed in experiments 1 and 2 did not arise as a result of differences in motivation.

**GENERAL DISCUSSION**

The main results of our experiments are as follows. (1) Coal tits (a storing species) performed slightly better than great tits (a non-storer) in a spatial memory task in which the bird was required to return in phase 2 of a trial to the site where it had seen a seed 30 min earlier in phase 1 (experiment 1). (2) In a more complex task, in which there were seven sites with seeds out of a total of 60 sites and the retention interval was 2 h, there was a difference between the species in their discrimination of sites visited in phase 1 that did and did not contain a seed. (3) Motivational variables (reward size and deprivation) did not appear to have an effect on returning to seeded sites.
Relationship Between Window-shopping and Storing In contrast to many previous comparative studies of memory and learning (see Introduction), in this study we have attempted to identify, on the basis of the ecology of two species, a particular difference in memory that could be predicted a priori. The experimental task was designed to incorporate some of the features of food-storing memory: a site is visited once and food is placed in it, some aspect of the site is memorized during this single visit, and the bird returns to the site once some time later to collect its hoard. The main difference between storing and window-shopping is that in the latter case the bird does not place the food in the site itself; the similarities include the fact that memory is based on a single trial and that it is in some sense spatial. However, the differences we have observed between a storing and a non-storing species are small. This could be because there are in fact only small or subtle differences, or because the task we studied did not reveal the differences. In this context it is worth noting that although window-shopping resembles food-storing in the ways outlined above, as previously shown (Shettleworth & Krebs 1986), food-storing tits do not perform as well in remembering the locations of window-shopped food as they do with stored food. This difference appears to be attributable to the presence of the window rather than to the fact that the seeds are encountered and not stored, since without the window, performance does not differ between storing and encountering seeds, at least at retention intervals of 2 h (Shettleworth & Krebs 1986; Shettleworth et al., in press). This suggests that the current experiments do not reveal the full spatial memory capacity of the food-storer. Therefore it could be that the very small differences we have observed would be amplified if the storers were tested in a task that reveals their full memory capacity. 
**Alternative Interpretations** In experiment 3 and in the comparisons summarized in Table I, we considered whether factors other than differences in memory might account for the differences in performance in experiments 1 and 2. Here we consider other possible interpretations. One possibility is that great tits are more thwarted than coal tits by observing a seed behind a window. If differences in tendency to visit a site with a seed in phase 2 reflect differences in motivation rather than differences in memory, then the balance of positive reinforcement (the seed) and negative reinforcement (the window) could be different for the two species. Whilst this might account for a lower value of $P(v|s)$ in great tits, it does not readily account for the difference between the species in their discrimination of unseeded and non-visited sites in experiment 2 (great tits discriminate less well than coal tits). A second possibility is that the difference between the species is not a result of memory but of perceptual discrimination. Coal tits might inspect sites they visit during phase 1 more thoroughly than do great tits (a tendency that might normally be deployed during food storing) and therefore have more cues with which to discriminate seeded and unseeded sites in phase 2. The data from the present experiments cannot distinguish this possibility from the interpretation that the difference is one of memory. **Comparisons Based on Other Memory Tasks** Hilton (summarized in Krebs, in press; Krebs et al., in press) compared the spatial memory of storing and non-storing tits in two tasks. In an analogue of the radial maze developed by Olton & Samuelson (1976), non-storing blue tits, *P. caeruleus*, and great tits performed with greater accuracy than storers (coal tits and marsh tits, *P. 
palustris*) at a retention interval of 30 s between forced and free choice, but the former showed a steeper decline in performance with increasing retention interval, suggesting that in this task storers have a more persistent memory. In another comparison of the same four species on a simple spatial discrimination (the birds were trained over a series of trials to go to six out of 64 sites), the storing species tended to do worse than the non-storers. Thus there is no consistent pattern of storing species performing better than non-storers across a range of memory tasks. Further work is necessary to define the conditions under which storers do better (Balda & Kamil 1989; Krebs, in press; Krebs et al., in press). It should also be apparent that although we have referred to the differences observed in experiments 1 and 2 as being associated with food-storing, this conclusion is based on only one pair of species, a storer and a non-storer. Therefore, before concluding that the small differences we observed here are related to storing, we need to collect data on other storing and non-storing species. **ACKNOWLEDGMENTS** SDH and JRK were supported by a grant from the SERC, and SJS by grants from NSERC. Collaboration between Oxford and Toronto was supported by a NATO grant. We thank Andrew Bennett, Marian Dawkins, Susan Hilton, David Sherry and two anonymous referees for their comments on an earlier draft of the manuscript. **REFERENCES** Balda, R. P., Bunch, K. G., Kamil, A. C., Sherry, D. F. & Tomback, D. F. 1987. Cache site memory in birds. In: *Foraging Behavior* (Ed. by A. C. Kamil, J. R. Krebs & H. R. Pulliam), pp. 645–663. New York: Plenum Press. Balda, R. P. & Kamil, A. C. 1989. A comparative study of cache recovery in three corvid species. *Anim. Behav.*, **38**, 486–495. Bingman, V. P., Bagnoli, P., Ioalè, P. & Casini, G. 1984. Homing behaviour of pigeons after telencephalic ablations. *Brain Behav. Evol.*, **24**, 94–108. Bitterman, M. E. 1975. 
The comparative analysis of learning. *Science*, **188**, 699–709. Domjan, M. & Galef, B. G., Jr. 1983. Biological constraints on instrumental and classical conditioning: retrospect and prospect. *Anim. Learn. Behav.*, **11**, 151–161. Kamil, A. C. 1987. A synthetic approach to the study of animal intelligence. *Neb. Symp. Motiv.*, **1987**, 257–308. Krebs, J. R. In press. Food storing birds: adaptive specialisation in brain and behaviour? *Phil. Trans. R. Soc. B.* Krebs, J. R., Sherry, D. F., Healy, S. D., Perry, V. H. & Vaccarino, A. L. 1989. Hippocampal specialization of food-storing birds. *Proc. natn. Acad. Sci. U.S.A.*, **86**, 1388–1392. Krebs, J. R., Hilton, S. C. & Healy, S. D. In press. Spatial memory in food-storing birds: adaptive specialisation in brain and behavior? In: *Signal and Sense: Local and Global Order in Perceptual Maps* (Ed. by G. Edelman, W. E. Gall & M. W. Cowan). New York: Neuroscience Institute. Macphail, E. 1982. *Brain and Intelligence in Vertebrates*. Oxford: Clarendon Press. Olton, D. S. & Samuelson, R. J. 1976. Remembrance of places past: spatial memory in rats. *J. exp. Psychol. Anim. Behav. Proc.*, **2**, 97–116. Rawlins, J. N. P. 1985. Associations across time: the hippocampus as a temporary memory store. *Behav. Brain Sci.*, **8**, 479–496. Rozin, P. & Schull, J. 1988. The adaptive-evolutionary point of view in experimental psychology. In: *Handbook of Experimental Psychology* (Ed. by R. C. Atkinson, R. J. Herrnstein, G. Lindzey & R. D. Luce), pp. 503–546. New York: Wiley-Interscience. Schmidt-Nielsen, K. 1984. *Scaling: Why is Animal Size so Important?* Cambridge: Cambridge University Press. Sherry, D. F. 1987. Foraging for stored food. In: *Quantitative Analyses of Behavior, Vol. VI: Foraging* (Ed. by M. L. Commons, A. Kacelnik & S. J. Shettleworth), pp. 209–227. Hillsdale, New Jersey: Lawrence Erlbaum. Sherry, D. F. & Schacter, D. L. 1987. The evolution of multiple memory systems. *Psychol. Rev.*, **94**, 439–454. Sherry, D. 
F. & Vaccarino, A. L. 1989. Hippocampal aspiration disrupts cache recovery in black-capped chickadees. *Behav. Neurosci.*, **103**, 306–318. Sherry, D. F., Vaccarino, A. L., Buckenham, K. & Herz, R. 1989. The hippocampal complex of food-storing birds. *Brain Behav. Evol.*, **34**, 308–317. Shettleworth, S. J. 1985. Food storing by birds: implications for comparative studies of memory. In: *Memory Systems of the Brain: Animal and Human Cognitive Processes* (Ed. by N. H. Weinberger, J. L. McGaugh & G. Lynch), pp. 231–250. New York: Guilford. Shettleworth, S. J. & Krebs, J. R. 1986. Stored and encountered seeds: a comparison of two spatial memory tasks in marsh tits and chickadees. *J. exp. Psychol. Anim. Behav. Proc.*, **12**, 248–256. Shettleworth, S. J., Krebs, J. R., Healy, S. D. & Thomas, C. M. In press. Spatial memory of food-storing tits (*Parus ater* and *P. atricapillus*): comparison of storing and non-storing tasks. *J. comp. Psychol.* Thorndike, E. L. 1911. *Animal Intelligence*. New York: Macmillan. Wright, A. A., Urcuioli, P. J. & Sands, S. F. 1986. Proactive interference in animal memory. In: *Theories of Animal Memory* (Ed. by D. F. Kendrick, M. E. Rilling & M. R. Denny), pp. 101–125. Hillsdale, New Jersey: Lawrence Erlbaum. (Received 2 November 1988; initial acceptance 8 February 1989; final acceptance 16 July 1989; MS. number: 3314)
Book Review Recommended Citation: Book Review, 83 J. Crim. L. & Criminology 684 (1992-1993). BOOK REVIEW STATE CONSTITUTIONS AND CRIMINAL JUSTICE KURT VON S. KYNELL* STATE CONSTITUTIONS AND CRIMINAL JUSTICE. By Barry Latzer. Greenwood Press 1991. Pp. 218. A welcome contribution to the literature of state constitutional law, this rather slender volume explores what the author describes as the "New Federalism," a concerted effort by state appellate courts to counter the alleged growing conservatism of the Supreme Court in criminal case review.\(^1\) Much of the text compares the Warren Court's liberalism with the Burger-Rehnquist Court's tilt toward conservatism. By liberalism, the author means a distinct emphasis on defendants' rights, using every conceivable constitutional means; by conservatism, he refers to a reverse judicial penchant apparently favoring the police and prosecution. Latzer correctly points out that the Warren Court (1953-1969) interpreted the "four key criminal justice Bill of Rights provisions . . . more favorably to the accused,"\(^2\) and also subsumed them more quickly into the Due Process Clause of the Fourteenth Amendment than previous high courts.\(^3\) Conversely, he claims that the Burger-Rehnquist Court (1970-present) began to narrowly interpret defendants' rights in criminal cases, precipitating state court reactions. --- * Professor, Department of Justice Studies, Northern Michigan University. \(^1\) BARRY LATZER, STATE CONSTITUTIONS AND CRIMINAL JUSTICE (1991). "This state constitutional law renaissance is known variously as the new (judicial) federalism, the state law movement or, more extravagantly, the state constitutional revolution." Id. at 1. Justice William Brennan also described this concept: "[r]ediscovery by state supreme courts of the broader protections afforded their own citizens by their state constitutions . . . is probably the most important development in constitutional jurisprudence of our times." The Fourteenth Amendment, Address to the Section on Individual Rights and Responsibilities of the American Bar Association (Aug. 8, 1986), in Nat'l L.J., Sept. 29, 1986, (Special Supplement) at S-1. \(^2\) LATZER, supra note 1, at 3. Specifically, the Fourth, Fifth, Sixth, and Eighth Amendments all deal with a defendant's rights under the criminal justice system. \(^3\) Id. The states realized at this time that "although they could not reduce the rights mandated by due process, the state courts were free to expand as a matter of state law."4 This, then, is the essence of the "New Federalism" expounded by Latzer: while the Supreme Court has unquestioned jurisdiction over federal constitutional interpretations, it does not have jurisdiction over the interpretations of state constitutions.5 Former Justice Brennan is cited as the champion of criminal defendants' rights, as well as a progenitor and early defender of the "New Federalism." This poses some interesting comparisons in legal and constitutional syntax. 
Traditionally, scholars view federalism as the balance between state and federal authority as originally envisaged in Articles I and III of the Constitution,6 certainly within the Tenth Amendment7 and also within the Supremacy Clause, which mandates state court acceptance of federal constitutional rulings.8 But the interesting double helix is that this "New Federalism," rather than manifesting the old states' rights opposition to centralized judicial authority, actually surpasses Justices Brennan, Marshall, and Stevens, and their Fourteenth Amendment crusade to grant more latitude to defendants in criminal trials. Latzer uses two measuring stratagems to test his thesis that, in order to protect criminal defendants, state courts interpret their own constitutions to expand rights beyond the Supreme Court's Fourteenth Amendment floor. The initial strategy uses a case-by-case comparison of legal issues, such as search and seizure, Miranda rights and self-incrimination, right to counsel, adverse witness confrontation, cruel and unusual punishment, and double jeopardy. --- 4 Id. This, however, can be considered moot. If the states are bound to heed the Supreme Court decisions only on federal law, the U.S. Constitution and treaties, but not when interpreting their own state law, what prevents the Court from interpreting state statutes as violations of federal law? Judicial decisions follow application and interpretation, as well as fact. Supreme Court negative review can include state statutes. 5 Id. at 5. The Supreme Court would have jurisdiction over state constitutions only if they were to interpret them in violation of the federal constitution. 6 Article I grants Congress the "necessary and proper" authority for exercising enumerated powers, but it was the Supreme Court that spelled this out in the implied powers doctrine in McCulloch v. Maryland, 17 U.S. (4 Wheat.) 316 (1819); Dartmouth College v. Woodward, 17 U.S. (4 Wheat.) 518 (1819) et al. 
Article III did not expressly endow the Supreme Court with judicial review powers, but it was so interpreted by Chief Justice John Marshall in Marbury v. Madison, 5 U.S. (1 Cranch) 137 (1803). 7 "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people." U.S. Const. amend. X. 8 "...Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding." U.S. Const. art. VI, § 2. The second strategy utilizes a quantitative comparison of all fifty state supreme courts with the Supreme Court in criminal case jurisprudence.\textsuperscript{9} Looking initially at case law comparisons, the author has compiled post-briefs on a large number of cases reflecting these federal constitutional issues. Starting with the now venerable \textit{Mapp v. Ohio},\textsuperscript{10} in which the Warren Court imposed the exclusionary rule regarding illegally obtained evidence on state criminal cases, Latzer demonstrates how the Burger-Rehnquist Court has consistently narrowed the \textit{Mapp} protection, through such cases as \textit{United States v. Calandra}.\textsuperscript{11} In this case, the Supreme Court held the \textit{Mapp} protection inapplicable to proceedings other than the trial and restricted Fourth Amendment habeas corpus review. In other words, the Court downgraded the philosophical intent of the exclusionary rule to a mere “judicially created remedy.”\textsuperscript{12} But if the United States Supreme Court was attempting to assist police and prosecutors by gradually desiccating the exclusionary rule without actually overturning it, Latzer cites ample evidence that many state supreme courts simply were not buying it. Hence, the states created a “New Federalism” in criminal law, which was also championed consistently by United States Supreme Court Justices Brennan and Marshall. 
The Oklahoma Supreme Court, for example, in \textit{Turner v. City of Lawton},\textsuperscript{13} rejected the Burger Court's reasoning in \textit{United States v. Janis}\textsuperscript{14} and held that, under the exclusionary rule, drugs found in an unlawful search were inadmissible in a disciplinary administrative personnel hearing. The Oklahoma court ruled that exclusion is a fundamental right under the Oklahoma Constitution, and not just a rule of judicial procedure. In doing so, the court followed the lead of the Oregon Supreme Court, which strongly asserted the right of the defendant to invoke the exclusionary rule under the Fourth Amendment.\textsuperscript{15} Latzer correctly points out that Oregon followed \textit{Weeks v. United States},\textsuperscript{16} which was the first case to mandate that evidence illegally obtained in a criminal case may not be used against the defendant by virtue of the Fourth Amendment exclusionary rule.\textsuperscript{17} But the dictum in that case pertained only to the person, not to a warrantless search of property.\textsuperscript{18} Other states, including Massachusetts and Connecticut, have joined this “New Federalism” extension of the exclusionary rule to prevent \textit{Mapp} rights from being reduced. The Connecticut Supreme Court even invoked the increasingly rare legal concept of \textit{desuetude} (loss of precedent through disuse) to support exclusionary protection.\textsuperscript{19} Other states are more conservative, however, preferring the so-called \textit{inclusionary} rule, which allows state judges to admit certain kinds of illegally obtained evidence. California and Florida use inclusionary rules, and Michigan goes so far as to refuse to exclude \textit{any} evidence in criminal cases involving bombs, drugs, or weapons, no matter how obtained.\textsuperscript{20} Apart from the multitudinous \textit{Mapp} spin-offs, especially the poetic legalism known as “the fruit of the poisoned tree,”\textsuperscript{21} as extensions of the exclusionary rule, this small but heavily compact volume is well-researched in other areas of state constitutionalism and criminal justice. These include private searches (normally subject neither to warrant requirements nor to \textit{Miranda}), and the concept of “automatic standing” for evidence suppression.\textsuperscript{22} Search and seizure problems are further illuminated in such diverse examples as electronic surveillance, bank and phone records, overflights, bodily intrusions, automobile searches, and various specialized warrant problems.\textsuperscript{23} --- \textsuperscript{9} \textsc{Latzer}, \textit{supra} note 1, at ch. 8. \textsuperscript{10} \textit{Id.} at 3, 33. \textit{See also} 367 U.S. 643 (1961). An excellent brief discussion of the gradual application of the exclusionary rule to the states is found in \textsc{Martin Shapiro} & \textsc{Rocco J. Tresolini}, \textsc{American Constitutional Law} 631-33 (6th ed. 1983). \textsuperscript{11} 414 U.S. 338 (1974). \textsc{Latzer}, \textit{supra} note 1, at 33-34. \textsuperscript{12} \textsc{Latzer}, \textit{supra} note 1, at 34. The exclusionary rule derives from the Fourth Amendment prohibition against “unreasonable searches and seizures,” which the Warren Court expanded in such cases as \textit{Chimel v. California}, 395 U.S. 752 (1969) (ruling against warrantless searches of a home), and \textit{Trupiano v. United States}, 334 U.S. 699 (1948) (ruling against warrantless searches of other property, in this case, an alcohol still). Dissenting eloquently from the Court’s limiting of the exclusionary rule in \textit{United States v. Rabinowitz}, 339 U.S. 56 (1950), Justice Frankfurter said, “the test of reason which makes searches reasonable . . . [is] underlying and expressed by the Fourth Amendment.” \textit{Id.} at 83. \textsuperscript{13} 733 P.2d 375 (Okla. 1987). \textsc{Latzer}, \textit{supra} note 1, at 46 n.19. \textsuperscript{14} 428 U.S. 433 (1976). \textsc{Latzer}, \textit{supra} note 1, at 35. \textsuperscript{15} \textit{Latzer, supra} note 1, at 35. \textsuperscript{16} 232 U.S. 383 (1914). \textsuperscript{17} \textit{Latzer, supra} note 1, at 33, 46 n.16, 17. \textsuperscript{18} See \textit{Chimel v. California}, 395 U.S. 752, 755 (1969) (quoting \textit{Weeks}, 232 U.S. at 392). \textsuperscript{19} \textit{Latzer, supra} note 1, at 37. See also State v. Dukes, 547 A.2d 10 (Conn. 1988), and Alexander M. Bickel, \textit{The Least Dangerous Branch: The Supreme Court at the Bar of Politics} 148-56 (2d ed. 1986), for an excellent historical analysis of \textit{desuetude} in British and American law. \textsuperscript{20} “California’s Proposition 8, adopted June 8, 1982, placed in that state’s constitution an inclusionary rule broader than Michigan’s . . . prohibiting exclusion of any ‘relevant evidence,’ even if seized in violation of the state charter.” \textit{Latzer, supra} note 1, at 37. See also \textit{People v. Moore}, 216 N.W.2d 770 (Mich. 1974). \textsuperscript{21} The “poison tree” doctrine dates back to 1920, but with such exceptions as information from sources independent of the illegal search. The Burger-Rehnquist Court expanded the exceptions. \textit{Latzer, supra} note 1, at 38-39. “The colorfully named fruit of the poisonous tree doctrine is an extension of the exclusionary rule to evidence derived from other, illegally obtained evidence.” \textit{Id}. Latzer criticized the Burger-Rehnquist Court's \textit{New York v. 
Belton}\textsuperscript{24} decision as another conservative trend, and "hardly a ringing endorsement of the Burger Court's efforts,"\textsuperscript{25} since no fewer than eleven states reject the idea.\textsuperscript{26} He states further that the \textit{Belton} case has caused the most significant rift among state courts.\textsuperscript{27} Continuing what he observes as a widening gap between the state and federal courts over exclusionary rule interpretations, Latzer also makes heavy use of \textit{Miranda} as a landmark in defendants' rights, which the current Supreme Court has weakened.\textsuperscript{28} Three select cases illustrate his claim: \textit{Harris v. New York},\textsuperscript{29} in which statements taken in violation of \textit{Miranda} were admitted to impeach a defendant;\textsuperscript{30} \textit{New York v. Quarles},\textsuperscript{31} in which the Court placed public safety over the Fifth Amendment self-incrimination protection;\textsuperscript{32} and \textit{Oregon v. Elstad},\textsuperscript{33} in which a failure to administer \textit{Miranda} warnings before a suspect's initial statement did not prejudice subsequent warned statements, \textit{provided} the first statement was voluntary.\textsuperscript{34} Latzer has documented a potentially serious problem here in demonstrating the extent to which the Burger-Rehnquist Court has experimented with "poison tree" exceptions in its persistent whittling down of defendants' rights in criminal cases.\textsuperscript{35} \textsuperscript{22} \textsc{Latzer}, \textit{supra} note 1, at 40. Automatic standing is granted to "persons legitimately on the premises." Jones v. United States, 362 U.S. 257 (1960). \textsuperscript{23} \textsc{Latzer}, \textit{supra} note 1, at ch. 3. \textsuperscript{24} 453 U.S. 454 (1981). In this case, the Court allowed police to search the passenger compartment of an automobile, but not the trunk. \textsuperscript{25} \textsc{Latzer}, \textit{supra} note 1, at 71. 
\textsuperscript{26} \textit{Id.} \textsuperscript{27} \textit{Id.} Furthermore, the author declares that the "Supreme Court's Fourth Amendment rulings have been so numerous and so controversial that they have served as virtual lightning rods for \textit{state court divergence} . . . the Burger-Rehnquist Court has made significant incursions into defendants' Fourth Amendment rights—especially in such matters as the exclusion of evidence, standing, automobile searches, the validity of warrants, and searches incident to arrest." \textit{Id.} at 73 (emphasis added). \textsuperscript{28} Compare \textsc{Latzer}, \textit{supra} note 1, at 89 ("The Burger Court weakened but did not overturn \textit{Miranda}.") with \textsc{Shapiro} & \textsc{Tresolini}, \textit{supra} note 10, at 649 ("The Burger Court has very seriously reduced the scope of \textit{Miranda}.") (emphasis added). \textsuperscript{29} 401 U.S. 222 (1971). \textsuperscript{30} \textsc{Latzer}, \textit{supra} note 1, at 89. \textsuperscript{31} 467 U.S. 649 (1984). \textsuperscript{32} \textsc{Latzer}, \textit{supra} note 1, at 90. \textsuperscript{33} 470 U.S. 298 (1985). \textsuperscript{34} \textsc{Latzer}, \textit{supra} note 1, at 90. \textsuperscript{35} "Fruit" metaphors abound in the law. \textit{See Black's Law Dictionary} 669-70 (6th In gathering his artillery against the Burger-Rehnquist Court, the author summons up heavy reserves of legal ammunition from various state supreme courts on the *Miranda* issue. For example, when the current Court ruled in *Moran v. Burbine* that the police are not required to inform a suspect in custody that a *third party* has retained counsel on his behalf, six states promptly opposed it, specifically California, Connecticut, Florida, Louisiana, New York, and Oklahoma. In *People v. 
Houston*, the California court ruled that a defendant must be informed if an attorney retained by his friends arrives at the police station; the other five states ruled in similar, if slightly different, scenarios. Latzer also has compiled concise but highly informative chapters on the Eighth Amendment *vis-a-vis* the death penalty, and the Fifth Amendment's limitations on double jeopardy. A legal as well as a semantic conundrum for scholars has always been the question of "cruel *and* unusual" versus "cruel *or* unusual." A conjunction or correlative can make a big difference in the criminal law. What of the normal meaning of words? Obviously the Eighth Amendment forbids cruel *and* unusual punishment, but does this allow executions that are cruel but not unusual? The late Justice Clark was fond of saying that the death penalty would be clearly unconstitutional had there not been that ubiquitous conjunction. The Burger-Rehnquist Court has upheld the death penalty, however, and most state supreme courts have, in the author's words, "brushed aside arguments relying on phraseology." The California Supreme Court has found its own state death penalty provision using "or" to be highly significant,\textsuperscript{45} but if there really exists a "New Federalism," the sobering fact remains that most states have not vociferously opposed federal doctrine on this issue.\textsuperscript{46} --- 41 LATZER, supra note 1, at ch. 6. 42 Id. at ch. 7. 43 Caminetti v. United States, 242 U.S. 470 (1917). In Justice Day's opinion in this case, "Statutory words are uniformly presumed, unless the contrary appears, to be used in their *ordinary and usual sense*." Id. at 485 (emphasis added). 44 LATZER, supra note 1, at 135. Where the U.S. Constitution uses "and," 21 state constitutions use the word "or." Id. at 139 n.28, 205-06. The question of whether a truly strong case can be made for the "New Federalism" in criminal law remains open because jurisprudential interpretations between the states and the federal government remain mixed. In some areas, such as the exclusionary rule, the states seem to be more liberal than the Supreme Court, and in others, such as the death penalty, the states are more conservative; but the thesis remains interesting and deserves more attention from scholars in constitutional law. Latzer has summed up the impact of "New Federalism," not only through exhaustive case law review, but also by means of a quantitative measuring tool he devised.\textsuperscript{47} To assess the impact of state-federal disagreement, he tabulated rejections and adoptions of U.S. Supreme Court reasoning in criminal procedure cases by state supreme courts on grounds of state constitutional law. 
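Latzer's tabulation, counting each state supreme court's adoptions and rejections of U.S. Supreme Court criminal-procedure reasoning and classifying states by his seventy-five percent agreement threshold, can be sketched in code. The state tallies below are invented for illustration; only the threshold itself comes from the text.

```python
# Illustrative sketch (with invented tallies) of Latzer's tabulation:
# classify each state's posture toward Supreme Court criminal-procedure
# rulings by the share of cases in which its supreme court adopted the
# Court's reasoning, using a 75% agreement/disagreement cutoff.

def classify(adoptions, rejections, threshold=0.75):
    """Label a state's posture toward Supreme Court criminal procedure rulings."""
    total = adoptions + rejections
    if total == 0:
        return "no data"
    agreement = adoptions / total
    if agreement >= threshold:
        return "conformity"
    if agreement <= 1 - threshold:
        return "opposition"
    return "mixed"

# Hypothetical tallies, not Latzer's actual data
states = {"Oregon": (2, 14), "Virginia": (9, 1), "Ohio": (5, 4)}
for name, (adopt, reject) in states.items():
    print(f"{name}: {classify(adopt, reject)}")
```

On this scheme a state falls into "conformity" or "opposition" only when its record is lopsided; everything in between counts as mixed, which matches the review's observation that most states' jurisprudence is neither uniformly liberal nor uniformly conservative.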
His database assumed that a seventy-five percent agreement or disagreement warranted a conclusion of conformity with, or opposition to, the Supreme Court rulings in criminal case procedures.\textsuperscript{48} Following those guidelines, the ten most active, hence liberal, states, which relied on their own state constitutions as opposed to Supreme Court rulings, were: California, New Hampshire, Oregon, Florida, Pennsylvania, Montana, West Virginia, Connecticut, Alaska, and New Jersey.\textsuperscript{49} The ten least active, hence conservative, states within this “New Federalism” movement were: South Carolina, Arkansas, Nevada, Alabama, Indiana, Minnesota, New Mexico, Virginia, Georgia, and North Dakota.\textsuperscript{50} There is also a table based upon the broader continuum of all fifty states, and the degree to which they rejected or adopted given Supreme Court procedural maxims by percentages.\textsuperscript{51} \textsuperscript{45} People v. Anderson, 493 P.2d 880 (Cal. 1972). It is significant because the death penalty is \textit{both} cruel and unusual; the former because of painful death, the latter because it is rarely carried out. \textsuperscript{46} \textsc{Latzer}, \textit{supra} note 1, at 137. \textsuperscript{47} \textit{Id.} at 160-64. \textsuperscript{48} \textit{Id.} at ch. 8. \textsuperscript{49} \textit{Id.} at 162, tbl. 2. \textsuperscript{50} \textit{Id.} at 162, tbl. 3. \textsuperscript{51} \textit{Id.} at 164, tbl. 4. The legal contests between the several states and the national government are as old as the Union itself, whether in criminal or civil law. One of the best known is Gibbons v. Ogden, 22 U.S. (9 Wheat.) 1 (1824). Others include Pensacola Telegraph v. Western Union Telegraph, 96 U.S. 1 (1878), Hammer v. Dagenhart, 247 U.S. 251 (1918), and NLRB v. Jones and Laughlin Steel Corp., 301 U.S. 1 (1937). 
In criminal law, the \textit{actus reus}, or prohibited conduct, can and does vary in degree and interpretation. The "New Federalism" can be an ambiguous political-legal phenomenon, and Latzer admits this, but his book is important for all judges, attorneys and constitutional scholars, not only for its message, but for its implications. For much of our national history, for example, there has existed what might be termed a "Jeffersonian apprehension"\textsuperscript{52} over a national Supreme Court, which has at times extended judicial activism to the perceived usurpation of legislative prerogatives, both state and federal. The ancient maxim, "Jus Dicere et non Jus Dare,"\textsuperscript{53} comes to mind. The author sees new movement in the increasingly ambiguous criminal law jurisprudence between state and federal courts, but there is a positive legal and constitutional antidote. He correctly points out that there exist both vertical and horizontal legal threads in the national web of criminal law, which tend to modify each other. The vertical pressure is exerted by the Supreme Court on state courts to adopt the federal positions,\textsuperscript{54} and the horizontal counterpoint comes from the sheer volume of state cases. In a given year, especially during the current decade, the fifty "state supreme courts render thousands of decisions while the one U.S. Supreme Court resolves fewer than three hundred . . . . Thus, vertical federalism's most potent weapon—Supreme Court review—is limited by the vastness of the state court output."55 The criminal justice system thereby remains viable, even as the thrust of the "New Federalism" is developing. The liaisons between state and federal courts continue, including ongoing debates over constitutional issues, an elaboration which this volume provides. It is a balanced analysis that offers hope for the individual states within the possible increasing centralism of federal jurisprudence. --- \textsuperscript{52} One of the fundamental arguments of the Constitutional Convention was the extent of federal judicial power. Delegates such as John Rutledge and Luther Martin specifically opposed judicial review, and while the Framers discussed this concept, they did not include it in Article III. The power was assumed in \textit{Marbury v. Madison}, 5 U.S. (1 Cranch) 137 (1803), by the brilliant reverse logic of Chief Justice John Marshall during Thomas Jefferson's first administration. Jefferson's response was immediate and vociferous in his insistence that negative review would, in effect, give the Supreme Court veto power over the Congress, fatally weakening Article I and the legislative prerogatives. A stunned Jefferson called Marshall that "crafty chief judge" and claimed that with the \textit{Marbury} decision, the Supreme Court had made of the Constitution " . . . a thing of wax," which could be shaped to the Court's will. See SAUL PADOVER, \textsc{Jefferson: A Great American's Life and Ideas} (1952) and ROSCOE POUND, \textsc{The Formative Era of American Law} (1960). Dean Pound wrote, "The American Colonial Republic was hostile to all things English . . . the whole idea of professions like law, was repugnant to the mass egalitarianism of the Jeffersonian era." \textsc{Pound, supra} at 7. The Jeffersonian distaste for national judges making policy also stemmed from the fight against British Crown Judges vetoing legislation by the Virginia House of Burgesses prior to the Revolution. The Tenth Amendment also later reflected the insistence upon states' rights against intrusive federal authority. This amendment may indeed be resurfacing in Latzer's "New Federalism" thesis if his premise is correct, but hopefully in a positive, rational, and democratic manner. \textsuperscript{53} "To declare the law, not to make it." \textsc{Black's Law Dictionary} 859 (6th ed. 1990). \textsuperscript{54} \textsc{Latzer, supra} note 1, at 167. 
Perhaps a corollary study is also needed of the relationships between state courts and the federal district courts. Latzer points out that few books exist on "New Federalism"; the majority of work on this issue is published in law journals. As such, this study is a definitive step forward and constitutes an excellent reference for graduate research in both political science and law. The six appendices and specialized bibliography are also especially helpful for a condensed overview of sources on "New Federalism." 55 Id. at 168 (emphasis added).
MANAGERS' VIEWS ON TEACHING WORK CONDITIONS IN THE INTEGRAL EDUCATION PROGRAM O OLHAR DOS GESTORES SOBRE AS CONDIÇÕES DE TRABALHO DOCENTE NO PROGRAMA ENSINO INTEGRAL LA MIRADA DE LOS GESTORES SOBRE LAS CONDICIONES DE TRABAJO DOCENTE EN EL PROGRAMA DE ENSEÑANZA INTEGRAL Renata Portela RINALDI¹ e-mail: firstname.lastname@example.org Renan Moreira ULLOFFO² e-mail: email@example.com How to reference this article: RINALDI, R. P.; ULLOFFO, R. M. Managers' views on teaching work conditions in the Integral Education Program. Revista Ibero-Americana de Estudos em Educação, Araraquara, v. 18, n. 00, e023052, 2023. e-ISSN: 1982-5587. DOI: https://doi.org/10.21723/riaaee.v18i00.16163 Submitted: 24/01/2022 Revisions required: 02/02/2023 Approved: 10/04/2023 Published: 14/08/2023 Editor: Prof. Dr. José Luís Bizelli Deputy Executive Editor: Prof. Dr. José Anderson Santos Cruz ¹ São Paulo State University (UNESP), Presidente Prudente – SP – Brazil. Professor at the Department of Education. Coordinator of the Graduate Program in Education. Full Professor in Didactics and Technologies Applied to Education (UNESP). ² São Paulo State University (UNESP), Presidente Prudente – SP – Brazil. Master’s student in Education. ABSTRACT: This qualitative research aims to analyze the conditions of teaching work in the Integral Education Program from the perspective of the school management team. Data collection took place through a questionnaire and a semi-structured interview. Through a descriptive-interpretative analysis, we identified positive aspects of the program, such as the exclusive dedication made possible by the RDPI and the salary increase provided by the GDPI, as well as weaknesses: a possible technicist character, pressure for good performance through evaluations, the absence of substitute professionals, an increase in the number of classes, a reduction of the teaching staff, work overload, and a lack of continuing education actions.
Therefore, although the PEI has aspects that can contribute to good working conditions and, consequently, the promotion of "good quality education", the program is the result of the nuances of neoliberal reforms that worsen and make working conditions precarious over the years. KEYWORDS: Integral Education Program. School management. Teaching work. RESUMO: A pesquisa, de caráter qualitativo, pretende analisar as condições do trabalho docente no Programa Ensino Integral (PEI), a partir da ótica da equipe de gestão escolar. A coleta de dados ocorreu com questionário e entrevista semiestruturada. A análise descritivo-interpretativa evidenciou aspectos positivos e fragilidades no programa, a saber: dedicação exclusiva possibilitada pelo RDPI e o acréscimo salarial por meio do GDPI; quanto às fragilidades: um possível caráter tecnicista, pressão pelo bom desempenho por meio das avaliações, ausência de um quadro de profissionais substitutos, aumento do número de aulas, diminuição do quadro docente, sobrecarga de trabalho e falta de ações de formação continuada. Apesar do PEI possuir aspectos que podem apontar para boas condições de trabalho e a promoção de uma “educação de boa qualidade”, o programa é fruto dos matizes das reformas neoliberalistas que se agravam e precarizam as condições de trabalho no passar dos anos. PALAVRAS-CHAVE: Programa Ensino Integral. Gestão escolar. Trabalho docente. RESUMEN: La investigación, de carácter cualitativo, tiene como propósito analizar las condiciones del trabajo docente en el Programa de Enseñanza Integral (PEI), a partir de la perspectiva del equipo de gestión escolar. La recolección de datos se efectuó por medio del cuestionario y la entrevista semiestructurada. A partir de un análisis descriptivo e interpretativo, identificamos aspectos positivos y falencias en el programa, tales como la dedicación exclusiva por parte del RDPI y el aumento salarial por medio del GDPI. 
En lo que respecta a las fragilidades: un notorio carácter tecnicista, presión laboral atendiendo al buen desempeño por medio de evaluaciones, ausencia de un equipo de profesionales sustitutos, aumento de la carga horaria, disminución del equipo docente, sobrecarga de trabajo y omisión de una serie de acciones de formación continua. Aunque el PEI posee aspectos que pueden contribuir con la mejora de las condiciones laborales y, consecuentemente, con el fomento de una “educación de calidad”, el programa es fruto de los matices de las reformas neoliberalistas que se intensifican y precarizan tales condiciones a medida que pasa el tiempo. PALABRAS CLAVE: Programa de Enseñanza Integral. Gestión escolar. Trabajo docente. Introduction In the current educational scenario, the advance of globalization and access to information through the mass media increasingly demand that schools adopt, in their educational practices and in the teaching and learning processes, educational innovations, changes and/or, in certain cases, educational reforms. It is true that “[…] this process is complex in nature and involves, among other factors, conflict of interests, need for legitimation and control and purposes of domination involving numerous factors” (RINALDI; BROCANELLI; MILITÃO, 2012, p. 91, our translation). In addition, population growth, the expansion of inequalities and the emergence of social, cultural, ethical, political and economic issues in contemporary society, aggravated by the Covid-19 pandemic, pose new challenges to educational institutions, especially to teachers, the main actors in the teaching process, who are responsible for mobilizing a set of specific knowledge of different natures in their work, since they make educational choices that significantly shape the actual curriculum and the formation of students’ identities.
Although teachers' performance does not, in isolation, have the potential to define the quality of school education, their decisions are very relevant in the fight for a quality public school for all. Contradictorily, we are perplexed by the logic of the regulatory State, which institutes and guides the implementation of the National Common Curricular Base (BNCC), to be compulsorily followed across all stages and modalities of basic education, and promotes the use of textbooks as resources said to ensure equal opportunities, student success, and an education focused on “the student’s right to learn”, with emphasis on reading, writing and mathematics skills and abilities, according to the BNCC. However, these are resources that constrain the teacher's work through prescriptions for activities that disregard cultural and socioeconomic diversity, as well as regional, community, school and everyday-life characteristics. We argue that valuing cognitive performance alone is not enough for a full student education, as school education is not and should not be a fully controlled process. Furthermore, as problematized by Gouveia (2010, p. 2-3, emphasis added, our translation): […] the objective of Brazilian education is “the full development of the person” (BRASIL, 1988). However, such a definition is not indisputable. What does it mean to achieve full development in a context marked by economic inequalities that are reflected in great inequalities in access to the most varied social and cultural goods? Ensuring the school's subjective effectiveness requires that educational practices incorporate cultural diversity, while building conditions for overcoming socioeconomic inequalities. This requires a conception of inclusion that allows all students to perceive that the school recognizes their aspirations, their desires, their demands and, in addition to recognizing them, organizes itself in such a way as to build an affirmative subjective experience of fulfilling these demands.
It is essential, in the teaching and learning process, that the teacher can mobilize knowledge that allows them to critically analyze the demands that come to the school, in order to recognize what is consistent with the function socially defined for it and with the consolidation of conditions so that everyone can find, in this institution, the support needed to develop the learning considered relevant in a given socio-historical context. The pursuit of school success: […] implies a set of aspects of sociability development. The expression concerns the school, which has a differentiated function in today's societies: the school concretely realizes the human right to instruct oneself, at the same time educating oneself and forming oneself to participate in a civilization and in the preservation of life. Success in the school path is success in apprehending knowledge in its relation to the modes of existence in the various human societal forms. Learning is an inalienable right for all and sensitivity to the signs of learning – learning contents, behaviors, attitudes, values and learning to share, respect for others – in diversity, is what can lead teachers and students to tread interactive and dialogical paths towards the successful expansion of their knowledge, in its social, cultural, scientific, personal and ethical meanings (GATTI, 2010a, p. 1, our translation). In this perspective, success involves, among other aspects, teaching working conditions. As Oliveira (2010, p. 1, emphasis added, our translation) explains: […] working conditions designates the set of resources that make it possible to carry out the work, involving the physical installations, the materials and inputs available, the equipment and means of carrying out the activities and other types of support needed, depending on the nature of the production [...]
We understand that there is a consolidated body of work on teaching work, and we chose to draw on the following authors: Duarte (2010), Oliveira (2010, 2004), Pereira Junior (2017), Gatti (2010b), among others. We chose as the focus of analysis the conditions of teaching work in the Integral Teaching Program (PEI) in the state of São Paulo, from the perspective of the school management team – Principal, Vice-Principal and General Coordinating Teacher. This text is part of research developed in a scientific initiation project, in which the configuration and possible innovations of the Integral Teaching Program (PEI) were analyzed from the perspective of managers, taking the particular case of state schools in the municipality of Presidente Prudente, SP, which joined the program. In that study, we used a questionnaire and a semi-structured interview as data collection instruments, applied to the members of the school management team, namely: school principal, school vice-principal and general coordinating teacher, working in three schools, totaling eight participants. Data analysis was based on a descriptive-interpretative perspective. The study indicated that the program has brought many changes to schools since its implementation, for example: improvement in infrastructure, teaching materials and didactic-pedagogical equipment; new meanings and resignifications in students' learning; dedication and commitment of professionals to the school; partnership with family and community; among others of equal educational relevance.
Innovations were also observed with the PEI, notably with regard to: the structure of the school, with the insertion of digital technologies, laboratories and other material resources; an interdisciplinary curriculum; the education of professionals with regard to teaching; the full-time work regime (RDPI); career valuation and remuneration; protagonism in the students' actions; and relational commitment among school staff (management, staff, teachers, students, and parents/community). We now propose to expand the discussion with the objective of analyzing the conditions of teaching work in the Integral Teaching Program, from the perspective of the school management team (principal, vice-principal, and general coordinating teacher). To this end, we present, initially, some theoretical and conceptual aspects and, in sequence, part of the results obtained through the research. **Notes on teaching work** We understand teaching work in a broader way: no longer restricted to the classroom space and the teaching and learning process alone, but as a work activity that involves, among other things, planning activities and participation in the construction of the political-pedagogical project and in school management activities. It therefore involves all school actors (school principal, school vice-principal, general coordinating teacher, teachers, employees and parents), that is, all those who can put the school's actions and functions into practice in order to create conditions for the formation and academic success of the students. Duarte (2010, p. 105, our translation) warns that teaching work “[...] encompasses both subjects in their complex definition, experience and identity, as well as the conditions under which activities are carried out in the school context.
It comprises, therefore, the activities, responsibilities and relationships that take place in the school, beyond the class management”. In line with the author’s statements, Pereira Junior (2017, p. 104, our translation), in his doctoral thesis, states that “[…] teaching work encompasses both pedagogical attributions and activities related to the management or routine of schools” and adds, considering what Oliveira (2010) defends, as previously mentioned, that the conditions of teaching work: […] constitute the objective and subjective aspects found or experienced […] in school routine that enable the development of teaching work and are associated with factors related to physical and psychological aspects, feelings, perceptions and actions carried out by teachers as a result of school daily life (PEREIRA JUNIOR, 2017, p. 103, our translation). When examining the issues of teaching work, it is clear that the educational reforms emerging in recent decades in Brazil and other Latin American countries have caused significant changes in the working conditions of education professionals, since these are neoliberal reforms whose implications extend not only to the school level but to the entire educational system, causing changes in the structure of school work. With the reforms of the post-1990 Brazilian context, education professionals began to be assigned new and varied functions that go beyond their training. In other words, teachers are obliged to comply with the prescriptions that come to them through textbooks and school handouts, and they must also perform the attributions/functions of public agents, psychologists, nurses, social workers, among others, which can produce a feeling of deprofessionalization, in which planning and teaching are no longer their main role. Libâneo (2012) draws attention to the charitable character of the public school, which mischaracterizes the teaching-learning process and deepens social inequalities.
This situation causes professionals to lose their autonomy. For Oliveira (2004, p. 1134, our translation), “[…] the worker who loses control over the work process loses the notion of integrity of the process, starting to execute only one part, alienating himself from the conception”. In addition, the professional gives up the vigilant stance against all dehumanizing practices defended by Paulo Freire. This idea stems from narratives that establish a status quo about what it means, and under what conditions it is possible, to be a teacher, narratives shaped by social, economic and technical pressures that materialize in public policies disregarding the real working conditions experienced by basic education teachers in Brazil. We know that neoliberal ideology, supported by contemporary and more progressive regulations, incorporates the category of “autonomy” into its ideological discourse. It is necessary to be attentive to the force of this discourse and to the inversions it can work on everyday pedagogical thought and practice by stimulating individualism, competitiveness, and know-how dissociated from pedagogical knowledge and from ethical, cultural and political education. It is necessary to implement public educational policies that defend, stimulate and enhance the teaching career, with the aim of overcoming its numerous challenges, including entry into the career through public competition, wage and social security losses, the low attractiveness of the career in the country, training guided by the paradigm of technical rationality, and the devaluation of teaching courses, so that choosing teaching is a conscious choice, not a lack of options or a second option within a professional project; nor, as Gatti (2010b) warns, should teaching serve as a kind of unemployment insurance, that is, an alternative work option when it proves impossible to exercise the profession chosen in the first place.
**Considerations about teaching work in the Integral Teaching Program of the state of São Paulo** Researchers Oliveira (2010) and Pereira Júnior (2017) indicate the elements that make up the conditions of teaching work. Oliveira (2010, p. 1, our translation) points out that “[...] it is possible to define the teaching work as every act of achievement in the educational process, [since] it comprises the activities and relationships present in educational institutions, extrapolating the class regency”. The Integral Teaching Program, created in 2012 through Complementary Law n. 1,164, of January 4, 2012, later amended by Complementary Law n. 1,191, of December 28 of the same year, aims to improve educational results through excellent teaching in the integral formation of the young people contemplated by this policy (SÃO PAULO, 2012). To this end, professionals work under the Full-time Dedication Scheme (RDPI), which guarantees their permanence for 40 hours a week in the same institution, and receive a Full-time Dedication Bonus (GDPI), a 75% increase over their base salary. These elements are identified in Oliveira's argument (2010, p. 1, our translation), which clearly points out that working conditions involve “[...] employment conditions (forms of hiring, remuneration, career and stability)”. State schools in the Integral Teaching Program have their own organization and operating structure, with an exclusive teaching staff, regardless of the staffing module. The work of the school units that adhere to and implement the PEI is guided by four axes: Mission, Vision of the Future, Values and Premises. It is interesting to recall Libâneo (2018), who states that school organization would not be a totally objective and functional thing, a neutral element to be observed, but a social construction carried out by teachers, students, parents and members of the nearby community.
Furthermore, it would not be characterized by its role in the market, but by the public interest. Considering the focus of this article, based on the guiding principles of the PEI, we want to highlight two of them. The values comprise “[...] Offering quality public education; valuing educators; democratic and responsible school management; team spirit and cooperation; the mobilization, engagement, commitment of the Network [...]” (SÃO PAULO, 2014a, p. 09, emphasis added, our translation). The premises unfold into five, of which we emphasize Continuing Education, in which the educator is in a “[...] permanent process of professional improvement and committed to his/her self-development in his/her career” (SÃO PAULO, 2012, p. 37, our translation), and Excellence in Management, which consists in the pursuit of the objectives and goals outlined in the school’s Action Plan, as well as in that of the São Paulo State Department of Education (SEDUC/SP). The working conditions advocated in the Integral Teaching Program are in line with goal 6.1 of the National Education Plan (2014-2024), which provides for “[...] the progressive expansion of teachers' working hours in a single school” (BRASIL, 2014, p. 99, our translation). **Methodological design** This research is qualitative, that is, [...] qualitative research works with the universe of meanings, motives, aspirations, beliefs, values and attitudes, which corresponds to a deeper space of relationships, processes and phenomena that cannot be reduced to the operationalization of variables. The qualitative approach goes deeper into the world of the meanings of human actions and relationships, a side that is not perceptible and cannot be captured in equations, averages and statistics (MINAYO, 2002, p. 21-22, our translation). Data were collected in three state schools in the city of Presidente Prudente that participate in the Integral Teaching Program.
The selection of the municipality was due to its regional expressiveness, that is, because it is the largest city in the western region of São Paulo state and has schools that have participated in the PEI since its implementation in the state network. This is a propitious scenario for important reflections on and analyses of the process that involves the complexity of the work trajectories of managers in the PEI. In order to meet the ethical precepts of research with human beings, the schools are identified by the letter 'E' followed by a number, that is, E1, E2 and E3. As for the participants, eight professionals who make up the PEI school management team in each school (principal, vice-principal and general coordinating teacher) agreed to collaborate with the research. These participants are identified by random letters (Chart 1) in the subsequent section, where the research results are presented and discussed. These procedures seek to preserve the identity of the schools and employees, as provided for in the Free and Informed Consent Form (TCLE). Through this methodological path, we anchored data collection in the questionnaire and in the semi-structured interview, which “[...] starts from certain basic questions, supported by theories and hypotheses, which are of interest to the research, and which then offer a wide range of questions, the result of new hypotheses that arise as the informant's answers are received” (TRIVIÑOS, 1987, p. 146, our translation). For recording purposes, the interviews were all recorded in audio and stored in a repository of the research group Teacher Education and Teaching Practices in Basic and Higher Education (FPPEEBS), at the Faculty of Science and Technology, under the domain of Unesp. They were then transcribed in full and analyzed with the objective of identifying the conditions of teaching work in the Integral Teaching Program (PEI), from the perspective of the school management team.
The analysis and interpretation of the data occurred through the analytical process of the descriptive-interpretative perspective, supported by the methodological triangulation of Tuzzo and Braga (2016). After the analysis, seven axes were identified, which are presented below. **Results and discussions** **Profile of participants** The research participants were members of the school management team working in three schools that joined the PEI in the municipality of Presidente Prudente. The data obtained with the questionnaire allowed us to delineate the profile of the collaborators in terms of gender, age group, professional relationship, position/function and length of experience (Chart 1). We verified that, of the total of eight participants in the research, 37.5% are male, aged between 40 and 53 years old, all beginners in the function they occupy in the PEI. According to Rinaldi (2009, p. 128), “[...] as with many teachers at the beginning of their careers, [the beginning trainer] goes through an induction period and needs 2 to 3 years of work to form his professional identity, regardless of his previous success in his career as a teacher”. With the exception of two professionals, M and E, all the others can also be classified as beginners in the role they occupy in the school management of the PEI of the schools in which they work. The mean age among female participants is approximately 46 years. All employees hold effective positions in the state network of São Paulo: 25% hold the role of PCG, 37.5% the role of vice-principal and 37.5% the role of school principal. Regarding the age group, we noticed a phenomenon of aging among managers. This reality has already been verified in previous research regarding teachers (FERNANDES; SILVA, 2012; BRASIL, 2018).
**Teaching working conditions at PEI** With the aim of analyzing the teaching working conditions in the Integral Teaching Program in the state of São Paulo, we resumed some data from the research carried out by Ulloffo and Rinaldi (2021). When asked about 'being' and 'acting' as a manager in the PEI, the school principal of E1 narrates that: I think that in regular schools, we don't have this possibility of spending more time with the students and being able to promote this interdimensional education and this pedagogy of presence. In the program, you have this opportunity, because you spend more time with them [...] this opportunity you have here to talk to the student (SCHOOL PRINCIPAL, S, E1, Interview, our translation). We observe in the speech of school principal S that the Regime of Full and Integral Dedication (RDPI) contributes significantly to the promotion of educational practices (interdimensional education and the pedagogy of presence) designed by the program, as well as enabling a greater approximation and interaction with students. Concomitant with the statements by S from E1, the PCG from E3 tells us that this full-time stay in the same school collaborates so that he can carry out his activities with greater commitment, which does not happen in regular schools, so, here I ask my teacher, look, I want the agenda for fifteen days from the beginning of the two-month period, one week it is there, in a week it is delivered, the date I asked for is delivered. There on the regular, wow! don't even ask, don't even think about it, because he runs from school to school, he doesn't have time, he teaches at three schools. In PEI, no! at PEI I have the teacher in here just for me (PCG, A, E3, Interview, our translation). 
What caught our attention in the speeches of S and A is the closer contact that exists between the actors involved in the educational process in the program, whether principal-student, teacher-student, student-student, coordinator-teacher, teacher-teacher, among others. This situation is rarely seen in regular schools, where, for the most part, professionals, faced with factors such as salary devaluation, have come to work double or even triple shifts in the same network or in different teaching networks. With this salary factor in mind, M, PCG of E1, tells us that, of the working conditions offered by the PEI, what he likes most is the issue of remuneration, it is the salary difference, but it is not specific to management, all professionals here have a salary difference (PCG, M, E1, Interview, our translation). This aspect was also mentioned in previous studies, which point out that the quality of education involves improving working conditions and, consequently, salary and career conditions (GATTI; BARRETO, 2009). In view of these assertions, it seems that the program has been providing some satisfactory working conditions, as proposed by the LDB/96 in its article 67, which provides respectively for a “professional salary floor” and “adequate working conditions” (BRASIL, 1996, p. 44, our translation). Another relevant aspect identified in the research, from the perspective of vice-principal C of E3, is the organization of the program's policy, [...] I believe it is much more organized, it makes the work of a vice-principal much easier, right? he knows what he really needs to do, he has an agenda, he has a schedule, he has his weekly alignments, so, he has the support of the principal and the coordinator (Vice-principal, C, E3, Interview, our translation).
C's report from E3 takes up the organization and structuring of the program's operation as one of the aspects that help professionals to be clear about what they need to do. We identified in the report elements of a model that defends the appreciation of education professionals and the improvement of their working conditions, yet contradictorily adopts a regulatory perspective on educational work, allegedly neutral and technical. The regulatory aspect of the program also includes the promotion of methodologies involving specific curricular components, such as: Life Project, Tutoring, Welcome, Leveling, Individual Improvement and Education Plan (PIAF), Action Plans and Program, Agenda (School and Bimonthly), Class Leaders, Youth Clubs, among others. With regard to teaching work, we draw attention to the Program, the Action Plans and the Agenda. The first two consist of documents guiding management actions, with goals and strategies outlined by the professionals and a description of “by whom” and “what” each one will develop (SÃO PAULO, 2014b). The agenda (bimonthly and school) indicates when the activities should be carried out and how they will happen. This proposal of the program, in which each professional must execute an Action Program with goals and objectives to be achieved, together with the rigidity in the fulfillment of agendas and schedules, places these professionals in a position anchored in what Schön (1997) calls technical rationality, in which the applicability of teaching in an instrumentalized way predominates, without creating conditions for the professional to carry out the critical, reflective and creative work defended by Ghedin (2003). Based on this idea, regarding the fulfillment of objectives and goals, we recall the statements made by school principal E of E3 when narrating how teaching work in the program takes place, [...]
in the PEI we go through an evaluation, we evaluate ourselves, I evaluate my teachers, my teachers evaluate me, students evaluate me and the teachers, you know? So, everyone has to walk straight, in the PEI no one can fail (Principal, E, E3, Interview, our translation). This evaluation proposed by the program intends to identify the gaps presented by the professionals so that, based on the results, it can be decided whether or not they remain in the PEI. However, in view of the principal's statement, what worries us about this evaluation mechanism present in the policy is the pressure and demands that fall on these subjects and the valuing of performance, which runs contrary to the proposal of a full and emancipatory formation of the subject. In this regard, we highlight an excerpt from E's statement: "in the PEI, no one can fail". This idea of failure or insufficiency makes professionals feel insecure about their work and professionally unstable, as they depend on good performance to remain in the program. This situation can lead to the search for other jobs or even excessive working hours, generating physical, emotional, personal and mental health strain. This situation experienced by PEI professionals was analyzed by Dias (2018) and Ball (2006), who portray the scenario of neo-managerialism that has fallen on education workers, subjecting them to "performativity" and creating control over teaching activities, provoking competitiveness among them. Still regarding the school principal's statement that "no one can fail", we can clearly glimpse the market and business presence in school spaces through a capitalist and neoliberal policy that dominates public schools.
Supported by the studies of Rios (2001) on the competences of the teaching professional, it is possible to identify when a professional's performance serves a social demand, enabling the construction of knowledge by subjects who are critical in society, and when it serves a market demand, in which individuals act mechanically, directed by targets, as mere technicians of a ready-made teaching, without the possibility of reflecting on their actions or working with the diversity and subjectivity of the school context. This, in turn, leaves no room for "making a mistake" in their practices. This reality can be glimpsed in the performance evaluations of the Integral Teaching Program, which increase competitiveness between institutions and their professionals, creating a "positive" image for those deemed qualified and a punitive one for those deemed unqualified for the function. From this perspective on teaching work in the program, R, who is vice-principal of E2, tells us that what he likes most is the fact that everyone knows each other, [...] in regular education you usually don't know, today it's a teacher, tomorrow it's another and if you mess up the next day it's still another one, right, so, you don't know your team fully and there aren't so many teachers who are effective in these regular schools, right? in integral education, no! everyone is effective and you manage to do a better job there, right, it gets easier (vice-principal, R, E2, Interview, our translation). This excerpt from vice-principal R of E2, about the PEI not having the 'rotating system' of substitution present in the education network, is explained by the fact, already presented, that the program has its own teaching staff; that is, as M from E1 says, [...] in schools that are not in the PEI, when the teacher takes a leave of absence, a substitute is placed, although not always, right?
you can get a substitute, but then the students have a vacant class, not here! here the other teachers are directed to be replaced, so you end up and they end up not participating in the pedagogical meetings (PCG, M, E1, Interview, our translation). It seems to us a good idea that the program does not depend on occasional substitutes external to the policy. However, it is necessary to think of a strategy for a staff of substitute professionals, so that in exceptional situations of absence and/or leave there is someone to step in, since those who carry out the current compulsory replacements “[…] do not receive any additional for the surplus work” (DIAS, 2018, p. 13, our translation). This leads us to question whether the wage bonus offered is really so attractive and sufficient. In the same vein as M from E1, school principal E from E3 reports that the staffing module needs to be revised, since: [...] today the government put it in my school, for example, because it's by module, right? so a school with eleven classes like mine has X teachers, it has seventeen teachers. But I don't just have the curriculum, I have the whole diversified part, so this year it increased by eleven technology classes, and I don't have a teacher (Principal, E, E3, Interview, our translation). This module changes with the increase in classes, and the lack of professionals, as well as the substitutions that must be arranged to cover for an absent professional, directly affect the teaching work of the other subjects in the program, since each individual has countless attributions to carry out, which gives us indications of a work overload. This overload, expressed in the managers' statements, leads us to ask: what could the consequences be for these professionals?
It is with this in mind that we turn to the report of P, principal of E2, when he tells us what is missing: having the collaboration of colleagues, teachers, in looking for solutions and not just bringing the problems, because what happened a lot was that, the teachers did not bring a proposal (Principal, P, E2, Interview, our translation). We believe that the lack of collaboration on the part of his colleagues may result both from this overload and from the competitiveness issues, already explored, that are present in the policy. We are not saying that within the PEI there is no dialogue or personal relationship; on the contrary, at other times it has been reported that the policy has the characteristics of a horizontal and democratic management. Rather, this idea of collaboration among all may be weakened, or cease to be present, giving way to a greater concern with fulfilling goals and objectives and achieving high educational results, to the detriment of creating conditions for reflection on teaching work. Still reflecting on the possible implications of this intensification of teaching work, we consider the education of these professionals within the PEI, since one of the premises proposed by the program concerns “Continuing Education”. On this point, the same school principal P from E2 says that, […] one of the things that makes me most anxious is to see that there is still a long way to go for us to understand this, in relation to what a child's cognitive is, when and how the child learns. I think that would be it (Principal, P, E2, Interview, our translation). This statement by the manager of E2, about feeling distressed by the need to understand the child's cognition and how the child learns, leads us to believe that his initial education may have been effective, but not efficient.
Thus, this gap could be reduced through a process of continuing education. In this sense, we ask the following questions: how can such education be carried out in a scenario of professional intensification? Has the increase in classes evidenced in one of the statements above taken away these professionals' training time? The reported situations of compulsory substitutions lead to non-participation in pedagogical meetings; does the same happen with continuing education? This is where the report of E3's school principal E fits, […] it's true, because I fought because they destroyed it, because my teachers have thirty-one classes and another thirty-two (Principal, E, E3, Interview, our translation). This situation presented by the school principal of E3 shows that, over the years, new classes have been assigned to teachers and, consequently, new demands placed on management, which tells us that little time has been allocated for the policy's professionals to carry out their education. In other words, the program's Continuing Education premise may not actually be happening. Linked to this situation, we highlight another excerpt from school principal E, stating that this lack directly impacts her working conditions, as she tells us: [...] it's challenging to work in an era like this with this advance in technology and your school is still in its infancy, right? (Principal, E, E3, Interview, our translation). Little or no training for these professionals can directly affect their practices, as pointed out by the principal, who, however experienced she is in her position/function, faces current challenges. Hence the need for the program to rethink its training so as to address contemporary social needs, such as the use of technology and adaptation to virtual classes, among others.
Because it is a neoliberal policy in which new attributions are assigned to the subjects who work in it, a challenging scenario is created for professionals who have long been in the educational field, requiring changes and adaptations on their part, or, as Oliveira (2006, p. 215, our translation) says, “[...] teaching workers feel forced to master new practices, new knowledge and mastery of certain skills in the exercise of their functions”. Given the complexity of the PEI proposal, it was possible to infer, from the perspective of the school management team, the real working conditions experienced by professionals in implementing the proposal in the school routine: at times they point to positive characteristics for the development of teaching; at times they denounce limitations, lags, precariousness and challenges faced by professionals in their teaching work. Final remarks This article aimed to analyze the working conditions in the PEI in the state of São Paulo from the perspective of the school management team. It allowed us to glimpse the reality that these professionals face daily to provide students with a good quality education, the construction of knowledge, and the formation of a critical subject in society. We know that, from initial education, through continuing education, to teaching practice itself, it is necessary to observe and analyze whether there are conditions to act with formative, social, political and emotional competence. Thus, given its professional complexity, the interest in studying and promoting constant teaching professionalization emerges.
In this sense, the article's objective was defined around analyzing the conditions of teaching work in the program from the perspective of managers because, despite the weaknesses and controversies, we see in the policy the possibility of providing students with conditions to build themselves as subjects of initiative, commitment and freedom, since they are mostly young people in situations of social vulnerability. According to the analyses, it was possible to identify positive aspects of the implementation of the PEI for professionals: exclusive dedication through the RDPI, avoiding the condition, experienced by many regular school teachers, of working in more than one school; a wage bonus resulting from dedication to a single school; and improvement of infrastructure and material conditions. However, we also identified weaknesses that directly impact teaching work and, consequently, students' academic success. It is concluded that, in order to form students committed to their social role, it is necessary to value the teaching profession, ensuring professionals good working conditions, that is, good wages, adequate workload, efficient and effective teacher education (initial and continuing), professional stability, among other characteristics of good professionalism, together with its political and social appreciation. REFERENCES BALL, S. Sociologia das políticas educacionais e pesquisa crítico-social: Uma revisão pessoal das políticas educacionais e da pesquisa em política educacional. *Currículo sem Fronteiras*, v. 6, n. 2, p. 10-32, 2006. Available at: https://www.curriculosemfronteiras.org/vol6iss2articles/ball.pdf. Access: 10 Aug. 2022. BRASIL. Constituição (1988). *Constituição da República Federativa do Brasil*. Brasília, DF: Senado Federal, 1988. BRASIL. Lei n. 9.394 de 20 de dezembro de 1996. Estabelece as diretrizes e bases da educação nacional. Brasília, DF: MEC, 1996. Available at: https://www.planalto.gov.br/ccivil_03/leis/19394.htm.
Access: 07 Feb. 2023. BRASIL. Lei n. 13.005 de 25 de junho de 2014. Aprova o plano nacional de educação - PNE e dá outras providências. Brasília, DF: Presidência da República, 2014. Available at: https://www.planalto.gov.br/ccivil_03/_ato2011-2014/2014/lei/l13005.htm. Access: 07 Feb. 2023. BRASIL. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira. *Relatório SAEB (ANEB e ANREC) 2005 – 2015*: Panorama da década. Brasília, DF: Inep, 2018. Available at: https://download.inep.gov.br/educacao_basica/saeb/2018/documentos/livro_saeb_2005_2015_completo.pdf. Access: 10 Aug. 2022. DIAS, V. C. Programa de Ensino Integral Paulista: Problematizações sobre o trabalho docente. *Educação Pesquisa*, v. 44, e180303, 2018. Available at: https://www.scielo.br/j/ep/a/FGFSKCC83RqZnLJpk6YwC9y/abstract/?lang=pt#. Access: 07 Feb. 2023. DUARTE, A. Produção acadêmica sobre trabalho docente na educação básica no Brasil: 1987-2007. *Educar em Revista*, n. esp. 1, p. 101-117, 2010. Available at: https://www.scielo.br/j/er/a/PzrnWtybJwnKgcVvthGXTJ/abstract/?lang=pt. Access: 02 Feb. 2023. FERNANDES, D. C.; SILVA, C. A. S. Perfil do docente da educação básica no Brasil: Uma análise a partir dos dados da PNAD. *In*: OLIVEIRA, D. A.; VIEIRA, L. M. F. (org.). *Trabalho na educação básica*: A condição docente em sete estados brasileiros. Belo Horizonte: Fino Traço, 2012. GATTI, B. A. Sucesso escolar. *In*: OLIVEIRA, D. A.; DUARTE, A. M. C.; VIEIRA, L. M. F. *Dicionário*: Trabalho, profissão e condição docente. Belo Horizonte: UFMG/Faculdade de Educação, 2010a. GATTI, B. A. Formação de professores no Brasil: características e problemas. *Educação e Sociedade*, v. 31, n. 113, p. 1355-1379, 2010b. Available at: https://www.scielo.br/j/es/a/R5VNX8SpKjNmKPxxp4QMt9M/. Access: 02 Feb. 2023. GATTI, B. A.; BARRETO, E. S. *Professores do Brasil: Impasses e desafios*. Brasília, DF: UNESCO, 2009. GHEDIN, E. Professor reflexivo: Da alienação da técnica à autonomia da crítica. In: PIMENTA, S. 
G.; GHEDIN, E. (org.). *Professor reflexivo no Brasil: Gêneses e crítica de um conceito*. São Paulo: Cortez, 2003. GOUVEIA, A. B. Efetividade escolar. In: OLIVEIRA, D. A.; DUARTE, A. M. C.; VIEIRA, L. M. F. *Dicionário*: Trabalho, profissão e condição docente. Belo Horizonte: UFMG/Faculdade de Educação, 2010. LIBÂNEO, J. C. O dualismo perverso da escola pública brasileira: A escola do conhecimento para os ricos, escola do acolhimento para os pobres. *Educação e Pesquisa*, v. 38, n. 1, p. 13-28, 2012. Available at: https://www.scielo.br/j/ep/a/YkhJTPw545x8jwpGFsXT3Ct/abstract/?lang=pt. Access: 02 Feb. 2023. LIBÂNEO, J. C. O sistema de organização e gestão da escola. In: LIBÂNEO, J. C. *Organização e Gestão da Escola*: Teoria e prática. 10. ed. Goiânia: Alternativa, 2018. MINAYO, M. C. S. Ciência, técnica e arte: Desafio da Pesquisa Social. In: MINAYO, M. C. S. (org.). *Pesquisa Social*: Teoria método e criatividade. Petrópolis, RJ: Vozes, 2002. OLIVEIRA, D. A. A reestruturação do trabalho docente: Precarização e flexibilização. *Educação e Sociedade*. v. 25, n. 89, p. 1127-1144, 2004. Available at: https://www.scielo.br/j/es/a/NM7Gfq9ZpjpVcJnsSFdrM3F/abstract/?lang=pt. Access: 02 Feb. 2023. OLIVEIRA, D. A. Regulação educativa na América Latina: Repercussões sobre a identidade dos trabalhadores docentes. *Educação em Revista*, v. 44, p. 209-227, 2006. Available at: https://www.scielo.br/j/edur/a/PBxVTPKfBjQgNKH6GVn34ym/?lang=pt. Access: 08 Feb. 2023. OLIVEIRA, D. A. Condições de trabalho docente. In: OLIVEIRA, D. A.; DUARTE, A. M. C.; VIEIRA, L. M. F. *Dicionário*: trabalho, profissão e condição docente. Belo Horizonte: UFMG/Faculdade de Educação, 2010. PEREIRA JUNIOR, E. A. *Condições de trabalho docente nas escolas de Educação Básica no Brasil: Uma análise quantitativa*. 2017. 230 f. Tese (Doutorado em Educação) – Universidade Federal de Minas Gerais, Belo Horizonte, 2017. Available at: http://hdl.handle.net/1843/BUOS-AQQPSG. Access: 07 Feb. 2023. RINALDI, R. P. 
*Desenvolvimento profissional de formadores em exercício*: Contribuições de um programa online. 2009. Tese (Doutorado em Educação) – Universidade Federal de São Carlos, São Carlos, 2009. Available at: https://repositorio.ufscar.br/bitstream/handle/ufscar/2225/2701.pdf?sequence=1&isAllowed=y. Access: 07 Feb. 2023. RINALDI, R. P.; BROCANELLI, C. R.; MILITÃO, S. C. Política educacional brasileira: Implicações para o projeto educativo escolar. In: DOS SANTOS FILHO, J. C. (org.). Projeto Educativo Escolar. 1. ed. Petrópolis, RJ: Vozes, 2012. RIOS, T. A. Compreender e ensinar: Por uma docência de melhor qualidade. São Paulo: Cortez, 2001. SÃO PAULO. Diretrizes do programa de ensino integral. São Paulo: SEE, 2012. SÃO PAULO. Caderno do Gestor: Ensino Integral. Modelo de gestão do Programa Ensino Integral. 1. ed. São Paulo: Secretaria de Educação, 2014a. SÃO PAULO. Caderno do Gestor: Ensino Integral. Diretrizes do programa Ensino integral. 1. ed. São Paulo: Secretaria de Educação, 2014b. SCHÖN, A. S. Formar professores como profissionais reflexivos. In: NÓVOA, A. (org.). Os professores e sua formação. Lisboa: Dom Quixote, 1997. TRIVIÑOS, A. N. S. Introdução à pesquisa em ciências sociais: A pesquisa qualitativa em educação. São Paulo: Atlas, 1987. TUZZO, S. A.; BRAGA, C. F. O processo de triangulação da pesquisa qualitativa: A meta fenômeno como gênese. Rev. Pesquisa Qualitativa, v. 4, n. 5, p. 140-158, 2016. Available at: https://editora.sepq.org.br/rpq/article/view/38. Access: 02 Feb. 2023. ULLOFFO, R. M.; RINALDI, R. P. Programa Ensino Integral: Percepção de gestores das escolas estaduais de Presidente Prudente - SP. Relatório de Pesquisa 3 (Licenciatura em Pedagogia) - Universidade Estadual Paulista, Fundação de Amparo à Pesquisa no estado de São Paulo (FAPESP), 2021. 
CRediT Author Statement Acknowledgments: To the State of São Paulo Research Foundation (FAPESP) for promoting research, to the Board of Education – Presidente Prudente Region and to the study participants for their partnership and collaboration. Financing: FAPESP Proc. 2019/14946-0 and Proc. 2021/10020-5 (Renata Portela RINALDI and Renan Moreira ULLOFFO). Conflicts of interest: There are no conflicts of interest. Ethical approval: Approval by the Ethics Committee for Research with Human Beings of the Science and Technology Faculty of São Paulo State University, Presidente Prudente campus, under registration CAAE 24632919.8.0000.5402. Availability of data and material: The data underlying the study are reported in the article. Authors' contributions: Both authors participated equally in the design of the study, conducting the literature review and writing the text. Processing and editing: Editora Ibero-Americana de Educação. Proofreading, formatting, normalization and translation.
STATE OF LOUISIANA
COURT OF APPEAL, FIRST CIRCUIT
NO. 2004 CE 1844
CHARLES N. BRANTON VERSUS BRYAN D. HAGGERTY, VINCENT LOBELLO, AND MALISE PRIETO, CLERK OF COURT, PARISH OF ST. TAMMANY
Judgment Rendered: __________.
* * * * *
On Appeal from the 22nd Judicial District Court, in and for the Parish of St. Tammany, State of Louisiana. Trial Court No. 2004-13906E. Honorable William J. Burris, Judge Presiding.
* * * * *
Charles N. Branton, Slidell, LA, Plaintiff/Appellant, In Proper Person. Raymond G. Hoffman, Jr., Metairie, LA, Counsel for Defendant/Appellee, Bryan D. Haggerty. Patrick J. Berrigan, Slidell, LA, Counsel for Defendant/Appellee, Vincent J. Lobello. Jeanne M. Roques, Covington, LA, Counsel for Defendant/Appellee, Malise Prieto, Clerk of Court.
* * * * *
BEFORE: PARRO, KUHN, PETTIGREW, DOWNING, AND GAIDRY, JJ.
GAIDRY, J., concurs. KUHN, J., concurs and assigns reasons. PARRO, J., dissents and assigns reasons. PETTIGREW, J., dissents for the reasons assigned by Judge Parro.
Plaintiff, Charles N. Branton, filed suit challenging Bryan D. Haggerty’s and Vincent J. Lobello’s qualifications as candidates for the office of Judge of Slidell City Court. Plaintiff appeals the trial court’s dismissal of his petition. For the following reasons, we affirm. **DISCUSSION** Haggerty and Lobello sought to qualify in the special election called to fill the unexpired term of the Honorable Gary J. Dragon, Judge, Slidell City Court. Both men were admitted to the state bar on October 15, 1999. The primary election is scheduled for September 18, 2004, and the general election is set for November 2, 2004. Plaintiff filed suit challenging Haggerty’s and Lobello’s qualifications as candidates, as neither man had been admitted to the practice of law for a period of five years at the time he qualified. Plaintiff further alleged that should either candidate win the primary election of September 18, 2004, he would not be qualified to assume the office.
An action objecting to the candidacy of a person who qualified as a candidate in a primary election may be based on the ground that the candidate “does not meet the qualifications for the office he seeks in the primary election.” LSA-R.S. 18:492A(3). Louisiana Revised Statute 18:451 states in pertinent part, “Except as otherwise provided by law, a candidate shall possess the qualifications for the office he seeks at the time he qualifies for that office.” (emphasis supplied) With respect to the City Court of Slidell, LSA-R.S. 13:2487.2 specifies in pertinent part that the “city judge must be licensed to practice law in the State of Louisiana for at least five years previous to his election.” The trial court concluded that the provisions of LSA-R.S. 13:2487.2 fall into the category of the “except as otherwise provided by law” language of LSA-R.S. 18:451 and held the determinative date of whether a candidate possesses the qualifications for the office he seeks is the date of the general election and not the date of qualification. We basically adopt the insightful written reasons of the trial judge. Other circuits have held that the similar phrase “prior to his election” refers to the general election and not the primary election.\(^1\) *Cook v. Campbell*, 360 So.2d 1193, 1197 (La. App. 2 Cir.), *writ denied*,\(^2\) 362 So.2d 573 (La. 1978); *Aiple v. Naccari*, 454 So.2d 894, 894 (La. App. 5 Cir.), *writ denied*, 456 So.2d 151 (La. 1984); *see also* *Soileau v. Board of Sup’rs, St. Martin Parish*, 361 So.2d 319, 323 (La. App. 3 Cir. 1978). Under the facts of this case, we find the phrase “previous to his election” also refers to the date of the general election and not the primary election. As observed by the second circuit: The determinative date of a candidate’s qualifications should be fixed, certain and ascertainable at the time of qualification for candidacy. The date of the general election is certain and ascertainable in both regular and special elections. 
The interpretation of “election” urged by plaintiff would leave the determinative date uncertain and dependent on how many candidates qualify and, where there are more than two candidates, on whether one candidate receives a majority of the votes cast in the first primary, matters which cannot be determined until after expiration of the qualifying period or after the primary election. *Cook*, 360 So.2d at 1197. Moreover, in an election contest, the person objecting to the candidacy bears the burden of proving the candidate is not qualified. *Russell v. Goldsby*, 00-2595 (La. 9/22/00), 780 So.2d 1048, 1051. The plaintiff has not carried his burden of establishing that the defendants will be elected prior to the fifth year of their admissions to the Louisiana State Bar Association. *See Soileau*, 361 So.2d at 323. The laws governing the conduct of elections must be liberally interpreted so as to promote rather than defeat candidacy. *Russell*, 780 So.2d at 1051. Any doubt as to the qualifications of a candidate should be resolved in favor of permitting the candidate to run for public office. *Russell*, 780 So.2d at 1051. --- \(^1\) “A judge of the supreme court, a court of appeal, district court, family court, parish court, or court having solely juvenile jurisdiction shall have been admitted to the practice of law in this state for at least five years **prior to his election**, and shall have been domiciled in the respective district, circuit, or parish for the two years preceding election. He shall not practice law.” La. Const. art. V, §24. (emphasis supplied) \(^2\) The Louisiana Supreme Court stated: “Writ denied. We find no error of law in the reasons stated by the Court of Appeal.” Considering the foregoing, the judgment appealed from is affirmed. Costs of this appeal are assessed to the plaintiff/appellant, Charles N. Branton. AFFIRMED. Kuhn, J., concurring. 
I concur to amplify a proposition established in the jurisprudence, i.e., the applicable legislation should be liberally interpreted to favor and promote candidacy. The legal positions assumed by the parties in this appeal allow both sides to rely on statutes addressing the subject of qualifications for, or challenges to qualifications for, the office of Slidell City Court Judge. While the Election Code provides for challenging the qualifications of a candidate, it does not specifically address the qualifications of a judge of the Slidell City Court. Conversely, La. R.S. 13:2487.2, in establishing the office, states in pertinent part that "[t]he city judge must be licensed to practice law in [the state] for at least five years previous to his election . . . ." While one may argue, with some persuasion, that one statute or another should be used to determine the issue in this case, well-established jurisprudence resolves the ultimate issue of whether the candidate possesses the relevant qualification for office. In *Pattan v. Fields*, 95-2375 (La. 9/28/95), 661 So.2d 1320, the Supreme Court reinstated the candidacy of a candidate for election to the state senate and set forth the following pertinent principles of law: The laws governing the conduct of elections must be liberally interpreted so as to promote rather than defeat candidacy. Any doubt as to the qualifications of a candidate should be resolved in favor of permitting the candidate to run for public office. These legal principles are sufficient to decide this case. The candidacy of these two office seekers should be allowed. PARRO, J., dissenting. As the majority opinion correctly notes, LSA-R.S. 18:451 requires that a candidate possess the qualifications for the office he seeks at the time he qualifies for that office, "except as otherwise provided by law." Such an exception is provided by LSA-R.S.
13:2487.2, which states that a judge for the City Court of Slidell must be licensed to practice law in the State of Louisiana for at least five years "previous to his election." Thus, the precise question before the court is whether the "election" date as of which the candidates in this case must possess the qualifications for the office is the date of the primary election on September 18, 2004, or the general election on November 2, 2004. Louisiana Revised Statute 18:451 does not answer this question, as it refers to both elections when it states that a person who meets the qualifications for the office he seeks may become a candidate and be voted on in a primary or general election if he qualifies as a candidate in "the election." However, I believe the answer to this question can be found in another provision of the election code. According to LSA-R.S. 18:492(A), an action objecting to the candidacy of a person must be based on certain stated grounds, one of which is set out in subparagraph (3) and states that the defendant "does not meet the qualifications for the office he seeks in the primary election." (Emphasis added.) In fact, subparagraphs (1) and (2) of that statute also indicate that the election upon which a challenge to candidacy must be based is the primary election, either because the candidate did not qualify "for the primary election" in the manner prescribed by law or within the time prescribed by law. Given this clear and unambiguous statement, which has not been interpreted in any jurisprudence,\(^1\) I would find the plaintiff carried his burden of proof that neither of the challenged candidates meets the qualifications for the office he seeks "in the primary election," as neither will have been licensed to practice law in the State of Louisiana for at least five years "previous to his election," as that term is defined in the statute governing objections to candidacy. For that reason, I respectfully dissent. 
\(^1\) The majority opinion relies primarily on the Cook case for the conclusion that the "election" means the general election. However, the Cook case involved a candidate for district court judge, which is a court established by the Louisiana Constitution, and the case therefore interpreted constitutional provisions, some of which were applicable only to district court judges. In contrast, our case involves the City Court of Slidell, which was established by statute. Therefore, the principles of law applied by the court in Cook differ from those in this case.
An EM Approach to Non-autoregressive Conditional Sequence Generation
Zhiqing Sun, Yiming Yang
Abstract
Autoregressive (AR) models have been the dominating approach to conditional sequence generation, but suffer from high inference latency. Non-autoregressive (NAR) models have recently been proposed to reduce the latency by generating all output tokens in parallel, but achieve inferior accuracy compared to their autoregressive counterparts, primarily due to a difficulty in dealing with the multi-modality in sequence generation. This paper proposes a new approach that jointly optimizes both AR and NAR models in a unified Expectation-Maximization (EM) framework. In the E-step, an AR model learns to approximate the regularized posterior of the NAR model. In the M-step, the NAR model is updated on the new posterior and selects the training examples for the next AR model. This iterative process can effectively guide the system to remove the multi-modality in the output sequences. To our knowledge, this is the first EM approach to NAR sequence generation. We evaluate our method on the task of machine translation. Experimental results on benchmark data sets show that the proposed approach achieves competitive, if not better, performance compared with existing NAR models and significantly reduces the inference latency.
1. Introduction
State-of-the-art conditional sequence generation models (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) typically rely on an AutoRegressive (AR) factorization scheme to produce the output sequences. Denoting by \( x = (x_1, \ldots, x_T) \) an input sequence of length \( T \), and by \( y = (y_1, \ldots, y_{T'}) \) a target sequence of length \( T' \), the conditional probability of \( y \) given \( x \) is factorized as: \[ p^{AR}(y|x) = \prod_{i=1}^{T'} p(y_i|x, y_1, y_2, \ldots, y_{i-1}).
\] (1) As such a sequential factorization cannot take full advantage of parallel computing, it incurs high inference latency. Recently, Non-AutoRegressive (NAR) sequence models (Gu et al., 2017; Lee et al., 2018) have been proposed to tackle the problem of inference latency by removing the sequential dependencies among the output tokens: \[ p^{NAR}(y|x) = p(T'|x) \prod_{i=1}^{T'} p(y_i|x, T'). \] (2) This formulation allows each token to be decoded in parallel and hence brings a significant reduction in inference latency. However, NAR models suffer from the conditional independence assumption among the output tokens, and usually do not perform as well as their AR counterparts. The performance gap is particularly evident when the output distribution exhibits a multi-modality phenomenon (Gu et al., 2017), meaning that the input sequence can be mapped to multiple correct output sequences. Such a multi-modal output distribution cannot be represented as the product of conditionally independent distributions for each position in NAR models (see Section 3.2 for a detailed discussion). How to overcome the multi-modality issue has been a central focus of recent efforts to improve NAR models. A standard approach is sequence-level knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016), which replaces the target part \( y \) of each training instance \((x, y)\) with the system-predicted \( \hat{y} \) from a pre-trained AR model (a.k.a. the “teacher model”). Such a replacement strategy removes the one-to-many mappings from the original dataset. The justification for doing so is that in practice we do not really need sequence generation models to mimic a diverse output distribution for tasks such as machine translation\(^1\) and text summarization.
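The independence assumption in Eq. 2 can be made concrete with a toy example (a sketch, not from the paper): fitting per-position marginals to a two-mode target distribution leaks probability onto outputs that never occur in the data.

```python
# Toy illustration of why a product of independent per-position
# distributions (Eq. 2) cannot represent a multi-modal target
# distribution. Suppose an input maps to two equally likely outputs:
# ("A", "B") and ("B", "A").
from collections import Counter

targets = [("A", "B"), ("B", "A")]

def position_marginals(targets):
    """Fit each position's marginal independently, as NAR training does."""
    n_pos = len(targets[0])
    marginals = []
    for i in range(n_pos):
        counts = Counter(y[i] for y in targets)
        total = sum(counts.values())
        marginals.append({tok: c / total for tok, c in counts.items()})
    return marginals

def nar_prob(y, marginals):
    """Probability of y under the factorized (conditionally independent) model."""
    p = 1.0
    for i, tok in enumerate(y):
        p *= marginals[i].get(tok, 0.0)
    return p

marginals = position_marginals(targets)
# Each of the four combinations gets probability 0.25, so the factorized
# model assigns half its mass to ("A", "A") and ("B", "B"), which never
# occur in the data.
for y in [("A", "B"), ("B", "A"), ("A", "A"), ("B", "B")]:
    print(y, nar_prob(y, marginals))
```

Knowledge distillation sidesteps this by collapsing the data to a single mode per input, after which a factorized model can represent the (now deterministic) mapping.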
Such a knowledge distillation strategy has been shown to be effective for improving the performance of NAR models. \(^1\)For example, Google Translate only provides one translation for the input text. 2. Related Work Related work can be divided into two groups: non-autoregressive methods for conditional sequence generation, and the various approaches to knowledge distillation in non-autoregressive models. Recent work on non-autoregressive sequence generation has developed ways to address the multi-modality problem. Several works try to design better training objectives (Shao et al., 2019; Wei et al., 2019) or regularization terms (Li et al., 2019; Wang et al., 2019; Guo et al., 2018). Other methods focus on direct modeling of multi-modal target distributions via hidden variables (Gu et al., 2017; Kaiser et al., 2018; Ran et al., 2019; Ma et al., 2019) or sophisticated output structures (Libovický & Helcl, 2018; Sun et al., 2019). There are also a few recent works (Lee et al., 2018; Stern et al., 2019; Ghazvininejad et al., 2019; Gu et al., 2019) focusing on a multiple-pass iterative-refinement process to generate the final outputs, where the first pass produces an initial output sequence and the following passes refine the sequence iteratively in the inference phase. As for knowledge distillation in NAR models, Gu et al. (2017) made the first effort to use knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016). Recently, Zhou et al. (2019) analyzed why knowledge distillation reduces the complexity of datasets and hence helps the training of NAR models. They also used Born-Again networks (BANs) (Furlanello et al., 2018) to produce simplified training data for NAR models. All the above methods take a pre-trained AR model as the teacher model for knowledge distillation; none of them iteratively updates the teacher model based on the feedback from (or the measured performance of) the NAR model.
This is the fundamental difference between existing work and our proposed EM approach in this paper. 3. Problem Definition 3.1. Conditional Sequence Generation Let us describe the problem of conditional sequence generation in the context of machine translation and use the terms “sequence” and “sentence”, “source” and “input”, “target” and “output” interchangeably. We use $x$ and $y$ to denote the source and target sentences, $x_i$ to indicate the $i$-th token in $x$, and $\mathcal{X} = \{x^1, x^2, \ldots, x^N\}$ and $\mathcal{Y} = \{y^1, y^2, \ldots, y^N\}$ to be a parallel dataset of $N$ sentence pairs in the source and target languages, respectively. The training of both AutoRegressive (AR) and Non-AutoRegressive (NAR) sequence generation models is performed via likelihood maximization over the parallel data \((\mathcal{X}, \mathcal{Y})\) as: \[ \phi^* = \arg \max_{\phi} \mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \log p^{AR}(y|x; \phi), \tag{3} \] \[ \theta^* = \arg \max_{\theta} \mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \log p^{NAR}(y|x; \theta), \tag{4} \] where \(p^{AR}\) and \(p^{NAR}\) are defined in Eq. 1 and Eq. 2; \(\phi\) and \(\theta\) are the parameters of the AR and NAR models, respectively. ### 3.2. Instance-level & Corpus-level Multi-modality There are two different but not mutually exclusive definitions for the concept of multi-modality (a.k.a. translation uncertainty in machine translation terminology). Gu et al. (2017) define the multi-modality problem as the existence of one-to-many mappings in the parallel data, which we refer to as *instance-level* multi-modality. Formally, given the parallel data \((\mathcal{X}, \mathcal{Y})\), if there exist indices \(i\) and \(j\) such that \(x^i = x^j\) but \(y^i \neq y^j\), we say that \((\mathcal{X}, \mathcal{Y})\) contains instance-level multi-modality. In contrast, Zhou et al.
(2019) use the conditional entropy to quantify the translation uncertainty, which we refer to as *corpus-level* multi-modality. Calculating this quantity requires an additional alignment model. In this paper, we want to avoid the requirement for external alignment tools. Thus, we directly use the training-set likelihood of the NAR model as a measure of corpus-level multi-modality. Formally, the Corpus-level Multi-modality (CM) of parallel data \((\mathcal{X}, \mathcal{Y})\) is defined as: \[ CM_{\mathcal{X}}(\mathcal{Y}) = \mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \left[ -\log p^{NAR}(y|x; \theta^*) \right], \tag{5} \] \[ \theta^* = \arg \max_{\theta} \mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \log p^{NAR}(y|x; \theta). \tag{6} \] To make this metric comparable across different datasets, we further define the Normalized Corpus-level Multi-modality (NCM) as: \[ NCM_{\mathcal{X}}(\mathcal{Y}) = \frac{\mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \left[ -\log p^{NAR}(y|x; \theta^*) \right]}{\mathbb{E}_{y \sim \mathcal{Y}} [ |y| ]} \tag{7} \] where \(|y|\) denotes the length of output sequence \(y\). ### 4. Properties of NAR Models How powerful are NAR models? Are they as expressive as AR models? Our answer is both yes and no. On one hand, it is easy to see that NAR models can only capture distributions that factorize into conditionally independent parts. On the other hand, we will show next that if the instance-level multi-modality can be removed from the training data (e.g., via sequence-level knowledge distillation), then NAR models can be just as powerful as AR models. #### 4.1. Theoretical expressiveness of NAR models Let us focus on the expressive power of NAR models when the instance-level multi-modality is removed, that is, when there are only one-to-one and many-to-one mappings in the training examples.
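Both notions from Section 3.2 can be sketched on a toy corpus (hypothetical data and NLL values, not the paper's code): instance-level multi-modality is a simple grouping check over sources, and NCM (Eq. 7) is an average per-sentence NLL normalized by average target length.

```python
# Sketches of the two multi-modality notions on a toy parallel corpus.
from collections import defaultdict

pairs = [
    ("danke schön", "thank you"),
    ("danke schön", "thanks a lot"),   # same source, different target
    ("guten tag", "good day"),
]

def has_instance_level_multimodality(pairs):
    """True if some source sentence maps to more than one distinct target."""
    targets = defaultdict(set)
    for x, y in pairs:
        targets[x].add(y)
    return any(len(ys) > 1 for ys in targets.values())

print(has_instance_level_multimodality(pairs))  # True

def ncm(nlls, target_lengths):
    """Eq. 7: average NAR training NLL divided by average target length.
    The per-sentence NLLs stand in for -log p_NAR(y|x; theta*)."""
    return (sum(nlls) / len(nlls)) / (sum(target_lengths) / len(target_lengths))

nlls = [4.2, 6.0, 3.0]                        # hypothetical NLL values
lengths = [len(y.split()) for _, y in pairs]  # 2, 3, 2
print(round(ncm(nlls, lengths), 3))
```

In practice the NLLs would come from a converged NAR model (Eq. 6); the normalization simply makes the number comparable across corpora with different sentence lengths.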
More specifically, we consider the ability of the vanilla Non-Autoregressive Transformer (NAT) models (Gu et al., 2017; Vaswani et al., 2017) to approximate arbitrary continuous \(\mathbb{R}^{d \times n} \rightarrow \mathbb{R}^{d \times m}\) single-valued sequence-to-sequence functions, where \(n\) and \(m\) are the input and output sequence lengths, and \(d\) is the model dimension. Define the distance between two functions \(f_1, f_2 : \mathbb{R}^{d \times n} \rightarrow \mathbb{R}^{d \times m}\) as: \[ d_p(f_1, f_2) = \left( \int \| f_1(X) - f_2(X) \|_p^p dX \right)^{1/p}, \tag{8} \] where \(p \in [1, \infty)\). We can then make the following statement: **Theorem 4.1.** Let \(1 \leq p < \infty\) and \(\epsilon > 0\); then for any given continuous sequence-to-sequence function \(f : \mathbb{R}^{d \times n} \rightarrow \mathbb{R}^{d \times m}\), there exists a non-autoregressive Transformer network \(g\) such that \(d_p(f, g) \leq \epsilon\). This theorem is a corollary of Theorem 2 in Yun et al. (2020). For completeness, we provide the formal theorem with proof in the appendix. #### 4.2. What limits the success of NAR models in practice? Theorem 4.1 shows that for any sequence-to-sequence dataset containing no instance-level multi-modality, we can always find a good NAT model to fit it. In reality, however, it remains a big challenge for NAT models to fit the distilled deterministic training data well. The gap between theory and practice is due to the fact that in theory we may use as many Transformer layers as needed, whereas in reality there are only a few layers (e.g., 6 layers) in the Transformer model, which greatly restricts the capacity of real NAR models. To further understand the limitation, let us examine the following two hypotheses: - The NAT model intrinsically cannot accurately produce very long output sequences when it has only a few Transformer layers.
- The corpus-level multi-modality in data makes it hard for NAT models to deal with (i.e., to memorize the “mode” in the output for each input).

These hypotheses focus on two different reasons that might cause the poor performance of NAR models. In order to verify which one is true, we design two types of synthetic data and experiments. In Experiment I, a synthetic translation dataset is constructed as follows: the source and target sides share the same vocabulary of \{1, 2, 3, 4, 5\}, and 1, 2, 3, 4, and 5 are translated into 1, 2 2, 3 3 3, 4 4 4 4, and 5 5 5 5 5, respectively. The translation is deterministic, i.e., there is no multi-modality in the resulting parallel dataset. In Experiment II, we randomly insert three 0s at the front or the back of each target sentence in the first dataset. In other words, the source-to-target translation in the second dataset is non-deterministic and hence exhibits corpus-level multi-modality. In addition, we filter the source data in Experiment II to make sure that there is no instance-level multi-modality in this dataset. Toy examples illustrating the two types of datasets can be found in Tab. 1. Following these rules, we randomly generated 2,000,000 sentences for training and 1000 sentences for testing; both the training and testing source sentences have a length of 30.

Table 1. Toy examples illustrating the two types of synthetic experiments.

| | source | target |
|----------------|--------|--------|
| Experiment I | 2 1 4 3 | 2 2 1 4 4 4 4 3 3 3 |
| | 2 2 3 | 2 2 2 2 3 3 3 |
| | 2 1 5 | 2 2 1 5 5 5 |
| Experiment II | 2 1 4 3 | 0 2 2 1 4 4 4 4 3 3 3 0 0 |
| | 2 2 3 | 0 0 0 2 2 2 2 3 3 3 |
| | 2 1 5 | 2 2 1 5 5 5 5 0 0 0 |

Table 2. The accuracy in whole-sentence matching of the AR and NAR models over 1000 synthetic examples.

| Models | Experiment I | | Experiment II | |
|--------|------|------|------|------|
| | AR | NAR | AR | NAR |
| Accuracy (%) | 99.9 | 95.7 | 99.8 | 00.0 |
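Under the rules above, a data generator can be sketched as follows (a sketch, not the paper's code; the random front/back split of the padded 0s is an assumption read off the rows of Tab. 1):

```python
# Sketch of the synthetic data for Experiments I and II: token k in {1..5}
# translates to k copies of itself; Experiment II additionally pads zeros,
# randomly split between front and back (assumed from the worked rows of
# Tab. 1).
import random

def translate(source):
    """Experiment I: deterministic translation, token k -> k copies of k."""
    return [tok for tok in source for _ in range(tok)]

def translate_noisy(source, n_zeros=3, rng=random):
    """Experiment II: pad n_zeros 0s around the deterministic target."""
    target = translate(source)
    front = rng.randint(0, n_zeros)  # how many zeros go in front
    return [0] * front + target + [0] * (n_zeros - front)

print(translate([2, 1, 4, 3]))  # [2, 2, 1, 4, 4, 4, 4, 3, 3, 3]
print(translate_noisy([2, 2, 3]))
```

The first function is a single-valued mapping (no multi-modality at all); the second keeps the source-to-target mapping one-to-many at the corpus level while each source still appears with several valid targets, which is exactly the property the NAT model fails on in Tab. 2.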
We trained both the AR Transformer and the NAR Transformer on these synthetic datasets; each model consists of a 3-layer encoder and a 3-layer decoder. The detailed model settings can be found in the appendix. In our evaluation we used the ground-truth lengths for the decoding of both the AR and NAR Transformers; the whole-sentence matching accuracies of these models are listed in Tab. 2. The results in Experiment I show that both the autoregressive Transformer and the non-autoregressive Transformer can achieve high accuracies of 99.9% and 95.7%, respectively, when the training data do not have the multi-modality. In contrast, the results of Experiment II show that the non-autoregressive Transformer failed completely on the synthetic dataset with corpus-level multi-modality. The sharp contrast in these synthetic experiments indicates that the real problem with NAR models is indeed the corpus-level multi-modality issue. 5. Proposed Method Let us formally introduce our EM approach to addressing the multi-modality issue in NAR models, followed by a principled decoding module for effective removal of word duplication in the predicted output. 5.1. The EM Framework With the definition of the corpus-level multi-modality (i.e., CM in Eq. 5), we consider how to reduce this quantity for the better training of NAR models. Formally, given source data $\mathcal{X}$, we want to find target data $\mathcal{Y}^*$ that satisfies the following property: $$\mathcal{Y}^* = \arg \min_{\mathcal{Y}} \text{CM}_{\mathcal{X}}(\mathcal{Y})$$ $$= \arg \min_{\mathcal{Y}} \mathbb{E}_{(x, y) \sim (\mathcal{X}, \mathcal{Y})} \left[ -\log p^{NAR}(y|x; \theta^*) \right].$$ However, there can be many trivial solutions for $\mathcal{Y}^*$. For example, we can simply construct a dataset with no variation in the output to achieve zero corpus-level multi-modality. To avoid triviality, we further apply a constraint to $\mathcal{Y}^*$.
This leads us to the posterior regularization framework. Posterior Regularization (Ganchev et al., 2010) is a probabilistic framework for structured, weakly supervised learning. In this framework, we can re-write our objective as follows: $$\mathcal{L}_Q(\theta) = \min_{q \in Q} \mathbb{KL}\left(q(\mathcal{Y}) \,\|\, p^{NAR}(\mathcal{Y}|\mathcal{X}; \theta)\right),$$ where $q$ is the posterior distribution of $\mathcal{Y}$ and $Q$ is a constrained posterior set that controls the quality of the parallel data, given by: $$Q = \{ q(\mathcal{Y}) : \mathbb{E}_{\mathcal{Y} \sim q} [Q_{\mathcal{X}}(\mathcal{Y})] \geq b \},$$ where $Q_{\mathcal{X}}(\cdot)$ is a quality metric mapping $(\mathcal{X}, \mathcal{Y})$ to $\mathbb{R}^N$ on the training set and $b$ is a bound vector. For sequence generation tasks, there are many corpus-level quality metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin & Hovy, 2003). However, they are known to be inaccurate for measuring the quality of single sentence pairs. Thus, we use the likelihood score of a pre-trained AR model as a more reliable quality metric: $$[Q_{\mathcal{X}}(\mathcal{Y})]_i = Q_{x^i}(y^i) = \log p^{AR}(y^i|x^i; \phi^1),$$ where $\phi^1$ denotes the parameters of the AR model trained on the original ground-truth dataset. Given the posterior regularization likelihood $\mathcal{L}_Q(\theta)$, we use the EM algorithm (McLachlan & Krishnan, 2007; Ganchev et al., 2010) to optimize it. In the E-step (a.k.a. the inference procedure), the goal is to fix $p^{NAR}$ and update the posterior distribution: $$q^{t+1} = \arg\min_{q \in Q} \mathbb{KL}(q(\mathcal{Y}) \,\|\, p^{NAR}(\mathcal{Y} | \mathcal{X}; \theta^t)).$$ In the M-step (a.k.a. the learning procedure), we fix $q(\mathcal{Y})$ and update $\theta$ to maximize the expected log-likelihood: $$\theta^{t+1} = \arg\max_{\theta} \mathbb{E}_{q^{t+1}} [\log p^{NAR}(\mathcal{Y} | \mathcal{X}; \theta)].$$ Next, we introduce the details of the E-step and the M-step in our framework.
**Inference Procedure** The E-step aims to compute the posterior distribution $q(\mathcal{Y})$ that minimizes the KL divergence between $q(\mathcal{Y})$ and $p^{NAR}(\mathcal{Y} | \mathcal{X}; \theta^t)$. Ganchev et al. (2010) show that for graphical models, $q(\mathcal{Y})$ can be efficiently solved in its dual form. Specifically, the primal solution $q^*$ is given in terms of the dual solution $\lambda^*$ by: $$q^*(\mathcal{Y}) \propto p^{NAR}(\mathcal{Y} | \mathcal{X}; \theta^t) \exp \{ \lambda^* \cdot Q_{\mathcal{X}}(\mathcal{Y}) \}$$ $$\propto \prod_{i=1}^{N} \left[ p^{NAR}(y^i | x^i; \theta^t) \left( p^{AR}(y^i | x^i; \phi^1) \right)^{\lambda^*_i} \right].$$ However, a problem here, as pointed out by Zhang et al. (2018), is that it is hard to specify the hyper-parameter $b$ to effectively bound the expectation of the features for neural models. Besides, even when $b$ is given, calculating $\lambda^*$ is still intractable for neural models. Therefore, in this paper, we introduce another way to compute $q(\mathcal{Y})$. We first factorize $q(\mathcal{Y})$ as the product of $\{ q(y^i) \}$, and then follow the idea of amortized inference (Gershman & Goodman, 2014) to parameterize $q(y^i)$ with an AR sequence generation model: $$q(\mathcal{Y}) = \prod_{i=1}^{N} p^{AR}(y^i | x^i; \phi).$$ The E-step can thus be re-written as follows: $$\phi^{t+1} = \arg\min_{\phi \in Q'} \mathbb{E}_{x \sim \mathcal{X}} \mathbb{KL}(p^{AR}(y | x; \phi) \,\|\, p^{NAR}(y | x; \theta^t)),$$ where the new constrained posterior set $Q'$ is defined as $$\{ \phi : \mathbb{E}_{p^{AR}(\mathcal{Y} | \mathcal{X}; \phi)} [Q_{\mathcal{X}}(\mathcal{Y})] \geq b \}.$$ We further apply the REINFORCE algorithm (Williams, 1992) to estimate the gradient of $\mathcal{L}_Q$ w.r.t.
$\phi \in Q'$: $$\nabla_\phi \mathcal{L}_Q = \mathbb{E}_{x \sim \mathcal{X}} \mathbb{E}_{y \sim p^{AR}(y | x; \phi)} \left( - \log \frac{p^{NAR}(y | x; \theta^t)}{p^{AR}(y | x; \phi)} \nabla_\phi \log p^{AR}(y | x; \phi) \right).$$ This can be intuitively viewed as constructing a weighted pseudo training dataset $(\mathcal{X}^{t+1}, \mathcal{Y}^{t+1})$, where the training examples are sampled at random from $p^{AR}(y | x; \phi)$ and weighted by $\log \frac{p^{NAR}(y | x; \theta^t)}{p^{AR}(y | x; \phi)}$. In practice, we find two problems in implementing this algorithm: one is that sampling from $p^{AR}(y | x; \phi)$ is very inefficient; the other is that the constraint $\phi \in Q'$ cannot be guaranteed. Therefore, we instead use a heuristic when constructing the pseudo training dataset $(\mathcal{X}^{t+1}, \mathcal{Y}^{t+1})$: we first follow Wu et al. (2018) to replace the inefficient sampling process with beam search (Sutskever et al., 2014) on $p^{AR}(y | x; \phi^t)$, and then filter out the candidates that do not satisfy the following condition: $$Q_x(y) \geq \hat{b}^t,$$ where $\hat{b}^t$ is a newly introduced pseudo bound that can be empirically set by early stopping. In this way, we control the quality of $p^{AR}(y | x; \phi^{t+1})$ by manipulating the quality of its training data. Finally, we choose the candidates with the highest $p^{AR}(y | x; \phi^t) \log \frac{p^{NAR}(y | x; \theta^t)}{p^{AR}(y | x; \phi^t)}$ score as the training examples in $\mathcal{Y}^{t+1}$, and $\mathcal{X}^{t+1}$ is merely a copy of $\mathcal{X}$. In each E-step, in principle, we should let $\phi^{t+1}$ converge under the current NAR model. Although this can be achieved by constructing the pseudo datasets and training the AR model multiple times, it is practically prohibitive due to the expensive training cost. We therefore use only a single update iteration of the AR model in the inner loop of each E-step.
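The heuristic E-step above — beam-search candidates, quality filtering against the pseudo bound, then selection by the weighted score — can be sketched as follows (a sketch with hypothetical score values, not the paper's implementation):

```python
# Sketch of the heuristic E-step candidate selection: filter beam-search
# candidates by the quality bound, then keep the one maximizing
# p_AR(y|x) * log(p_NAR(y|x) / p_AR(y|x)).
import math

def select_target(candidates, b_hat):
    """candidates: list of (y, logp_ar, logp_nar, quality) tuples,
    where `quality` stands in for Q_x(y). Returns the chosen target,
    or None if no candidate satisfies the pseudo bound."""
    kept = [c for c in candidates if c[3] >= b_hat]
    if not kept:
        return None
    def score(c):
        _, logp_ar, logp_nar, _ = c
        # AR probability times the log-ratio of NAR to AR likelihoods.
        return math.exp(logp_ar) * (logp_nar - logp_ar)
    return max(kept, key=score)[0]

beam = [  # hypothetical beam-search output for one source sentence
    ("translation a", -2.0, -1.5, -2.0),
    ("translation b", -1.0, -3.5, -1.0),
    ("translation c", -1.2, -1.1, -9.0),  # liked by the NAR model, low quality
]
print(select_target(beam, b_hat=-3.0))  # translation a
```

Note how the bound excludes "translation c" even though the NAR model assigns it the highest likelihood: the quality constraint is what keeps the E-step from drifting toward degenerate targets.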
**Learning Procedure** In the M-step, we seek to learn the parameters $\theta^{t+1}$ with the parameterized posterior distribution $p^{AR}(\mathcal{Y} | \mathcal{X}; \phi^{t+1})$. However, directly sampling training examples from the AR model would cause the instance-level multi-modality problem. Therefore, we apply sequence-level knowledge distillation (Kim & Rush, 2016) to solve this problem; that is, we only use the targets with maximum likelihood under the AR model to train the NAR model: $$\theta^{t+1} = \arg\max_{\theta} \mathbb{E}_{(x, y) \sim (\mathcal{X}, \hat{\mathcal{Y}}^{t+1})} \log p^{NAR}(y | x; \theta),$$ where $\hat{\mathcal{Y}}^{t+1}$ denotes the training examples produced by the AR model $p^{AR}(\mathcal{Y} | \mathcal{X}; \phi^{t+1})$. **Joint Optimization** We first pre-train an AR teacher model on the ground-truth parallel data as $p^{AR}(y | x; \phi^1)$. Then we alternately optimize $p^{NAR}$ and $p^{AR}$ until convergence. We summarize the optimization algorithm in Alg. 1. In our EM method, the AR and NAR models are jointly optimized to reduce the corpus-level multi-modality.

Algorithm 1 An EM approach to NAR models
Input: parallel training dataset \((\mathcal{X}, \mathcal{Y})\)
\(t = 0\); pre-train \(p^{AR}(y|x; \phi^1)\) on \((\mathcal{X}, \mathcal{Y})\)
while not converged do
  \(t = t + 1\)
  M-Step (Learning Procedure): construct the distillation dataset \(\hat{\mathcal{Y}}^t\) with \(p^{AR}(y|x; \phi^t)\); train \(p^{NAR}(y|x; \theta^t)\) on \((\mathcal{X}, \hat{\mathcal{Y}}^t)\)
  E-Step (Inference Procedure): construct the pseudo dataset \((\mathcal{X}^{t+1}, \mathcal{Y}^{t+1})\); train \(p^{AR}(y|x; \phi^{t+1})\) on \((\mathcal{X}^{t+1}, \mathcal{Y}^{t+1})\)
end while
Output: a NAR model \(p^{NAR}(y|x; \theta^t)\)

5.2. The Optimal De-duplicated Decoding Module Word duplication is a well-known problem in NAR models caused by the multi-modality issue.
To improve the performance of NAR models, some previous works (Lee et al., 2018; Li et al., 2019) remove any duplication in the model prediction by collapsing multiple consecutive occurrences of a token. Such an empirical approach is not technically sound: after collapsing, the length of the target sequence changes, which causes a discrepancy between the predicted target length and the actual sequence length and thus makes the final output sub-optimal. We aim to solve the word duplication problem in NAR models while preserving the original sequence length. Similar to Sun et al. (2019), we use a Conditional Random Fields (CRF) model (Lafferty et al., 2001) for the decoding of NAR models. The CRF model is manually constructed as follows: it treats the tokens to be decoded as the predicted labels; the unary scores of the labels in each position are set to the NAR model's output distribution, and the transition matrix is set to \(-\infty \cdot \mathbf{I}\), where \(\mathbf{I}\) is an identity matrix. Our model is able to find the optimal decoding while considering only the top-3 candidates w.r.t. the unary scores in each position: Proposition 5.1. In a CRF with a transition matrix of \(-\infty \cdot \mathbf{I}\), only the top-3 most likely labels for each position can appear in the optimal (most likely) label sequence. We can thus crop the transition matrix accordingly by only keeping a \(3 \times 3\) transition sub-matrix between each pair of adjacent positions. The Viterbi algorithm (Lafferty et al., 2001) is then applied on the top-3 likely labels and the \(3 \times 3\) transition sub-matrices to find the optimal decoding with a linear time complexity of \(O(|y|)\). The proposed decoding module is a lightweight plug-and-play module that can be used with any NAR model. Since this principled decoding method is guaranteed to find the optimal prediction that has no word duplication, we refer to it as the optimal de-duplicated (ODD) decoding method\(^2\).
6. Experiments 6.1. Experimental Settings We use several benchmark tasks to evaluate the effectiveness of the proposed method, including IWSLT14\(^3\) German-to-English translation (IWSLT14 De-En) and WMT14\(^4\) English-to-German/German-to-English translation (WMT14 En-De/De-En). For the WMT14 dataset, we use Newstest2014 as test data and Newstest2013 as validation data. For the IWSLT14/WMT14 datasets, we split words into BPE tokens (Sennrich et al., 2015), forming a 10k/32k vocabulary shared by the source and target languages. We use the Transformer (Vaswani et al., 2017) model as the AR teacher, and the vanilla Non-Autoregressive Transformer (NAT) (Gu et al., 2017) model with sequence-level knowledge distillation (Kim & Rush, 2016) as the NAR baseline. For both AR and NAR models, we use the original base setting for the WMT14 dataset, and a small setting for the IWSLT14 dataset. To investigate the influence of the model size on our method, we also evaluate large/base NAT models on the WMT14/IWSLT14 datasets as a larger model setting. These larger NAT models are not used in the EM iterations; they are merely trained with the final AR teacher from the EM iterations of the original (base/small for WMT14/IWSLT14) models. The detailed settings of the model architectures can be found in the appendix. We use the Adam optimizer (Kingma & Ba, 2014) and employ label smoothing (Szegedy et al., 2016) of 0.1 in all experiments. The base and large models are trained for 125k steps on 8 TPU v3 nodes in each iteration, while the small models are trained for 20k steps. We use a beam size of 20/5 for the AR model in the M/E-step of our EM training algorithm. The pseudo bounds \(\{\hat{b}^t\}\) are set by early stopping based on the accuracy on the validation set. 6.2. Inference During decoding, the target length \(l = |y|\) is predicted by an additional classifier conditioned on the source sentence: \(l = \arg \max_{T'} p(T'|x)\).
\(^2\)Although this method does not allow any word duplication in the output sequence, it is still able to produce any sequence. To handle multiple consecutive occurrences of a token, which our decoding method cannot emit directly, we can introduce a special “(concat)” symbol; for example, “very very good” can be represented by “very (concat) very good”. \(^3\)https://wit3.fbk.eu/ \(^4\)http://statmt.org/wmt14/translation-task.html

Table 3. Performance in BLEU score on WMT14 En-De/De-En and IWSLT14 De-En tasks for single-pass NAR models. ‘/’ denotes that the results are not reported in the original paper. Transformer (Vaswani et al., 2017) results are based on our own reproduction.

| Models | WMT14 En-De | WMT14 De-En | IWSLT14 De-En | Latency | Speedup |
|--------|-------------|-------------|---------------|---------|---------|
| *Autoregressive teacher model* | | | | | |
| Transformer w/ beam size 5 | 27.84 | 32.14 | 34.69 | 393ms | 1.00× |
| *Non-autoregressive models* | | | | | |
| NAT-FT | 17.69 | 21.47 | / | 39ms | 15.6× |
| LT | 19.80 | / | / | 105ms | / |
| ENAT | 20.65 | 23.02 | 24.13 | 24ms | 25.3× |
| NAT-BAN | 21.47 | / | / | 22ms | / |
| NAT-REG | 20.65 | 24.77 | 23.89 | 22ms | 27.6× |
| NAT-HINT | 21.11 | 25.24 | 25.55 | 26ms | 30.2× |
| NAT-CTC | 17.68 | 19.80 | / | 350ms | 34.2× |
| FlowSeq-base | 21.45 | 26.16 | 27.55 | / | / |
| ReorderNAT | 22.79 | 27.28 | / | / | 16.1× |
| NAT-CRF | 23.44 | 27.22 | 27.44 | 37ms | 10.4× |
| *Ours* | | | | | |
| NAT baseline | 19.55 | 23.44 | 22.35 | 22ms | 17.9× |
| + EM training | 23.27 | 26.73 | 29.38 | 22ms | 17.9× |
| + ODD decoding | **24.54** | **27.93** | **30.69** | **24ms** | **16.4×** |

We can also try different target lengths ranging from \((l - b)\) to \((l + b)\), where \(b\) is the half-width, and obtain multiple translations with different lengths; we then use the AR model \(p^{AR}(y|x; \phi^1)\) as the rescorer to select the best translation.
Such a decoding and rescoring process can be conducted in parallel and is referred to as parallel length decoding. To make a fair comparison with previous work, we set \(b\) to 4 and use 9 candidate translations for each sentence. For each dataset, we evaluate the model performance with the BLEU (Papineni et al., 2002) score. We evaluate the average per-sentence decoding latency\(^6\) on the WMT14 En-De test set with batch size 1 on a single NVIDIA GeForce RTX 2080 Ti GPU by averaging 5 runs. ### 6.3. Main Results We compare our model with the Transformer (Vaswani et al., 2017) teacher model and several NAR baselines, including NAT-FT (Gu et al., 2017), LT (Kaiser et al., 2018), ENAT (Guo et al., 2018), NAT-BAN (Zhou et al., 2019), NAT-REG (Wang et al., 2019), NAT-HINT (Li et al., 2018), NAT-CTC (Libovický & Helcl, 2018), FlowSeq (Ma et al., 2019), ReorderNAT (Ran et al., 2019), NAT-CRF (Sun et al., 2019), NAT-IR (Lee et al., 2018), CMLM (Ghazvininejad et al., 2019), and LevT (Gu et al., 2019). Tab. 3 provides the performance of our method with the maximum-likelihood target length \(l\), together with other NAR baselines that generate output sequences in a single pass. From the table, we can see that the EM training contributes most to the improvement in performance. The optimal de-duplicated (ODD) decoding also significantly improves the model performance. Compared with other models, our method significantly outperforms all of them, with nearly no additional overhead compared with the vanilla NAT. Tab. 4 illustrates the performance of our method equipped with rescoring and other baselines equipped with rescoring or iterative refinement. Since our method has nearly no additional overhead compared with the vanilla NAT, to make a fair comparison with previous work (Kaiser et al., 2018; Lee et al., 2018; Ma et al., 2019; Sun et al., 2019; Gu et al., 2019), we also show the results of our method with a larger model setting.
From the table, we can still find that the EM training significantly improves the performance of the vanilla NAT model, but the effect of the ODD decoding is not as significant as in the single-pass setting. This shows that the rescoring process can mitigate the word duplication problem to some extent. Surprisingly, we also find that using the larger model does not bring much gain. A potential explanation is that since our EM algorithm significantly simplifies the training dataset and the NAT model can be over-parameterized, there is not much gain in further increasing the model size. Compared with other baselines, our method significantly outperforms these rescored single-pass NAR methods and achieves performance competitive with iterative-refinement models at a much better speedup. Note that these iterative-refinement models (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019) still rely on sequence-level knowledge distillation in training, which indicates that it is still a hard problem for these approaches to capture multi-modality in the real data. Our EM algorithm may further improve their performance; we leave combining the two techniques for future work. \(^6\)We follow common practice in previous works to make a fair comparison. Specifically, we use tokenized case-sensitive BLEU for the WMT datasets and case-insensitive BLEU for the IWSLT datasets. \(^7\)The latency and the speedup in Tab. 3 and Tab. 4 may be affected by hardware settings and are thus not suitable for direct comparison. 6.4. Analysis of the Convergence We analyze the convergence of the EM algorithm. We show the dynamics of the performance of the NAR model (test BLEU), the performance of the AR model (test BLEU), and the Normalized Corpus-level Multi-modality (NCM, defined by Eq. 7) on the WMT14 En-De dataset. The results are shown in Tab. 5.
We describe the detailed optimization process here to clarify how our EM algorithm works with early stopping in this example. In the first 5 iterations, as we have no principled way to set $\{\hat{b}^t\}$ precisely, we simply set them to zero. After the $5^{th}$ iteration, however, we find an accuracy drop on the validation set, so we re-use the quality metrics of the $4^{th}$ iteration to set $\{\hat{b}^t\}$ and continue the EM algorithm until convergence. We can see that our EM algorithm takes only a few iterations to converge, which is very efficient. As the EM algorithm continues, the NCM metric, which can be regarded as the optimization objective, decreases monotonically. The performance of the NAR model and the performance of the AR model also converge after 5 iterations. Table 5. Analysis of the convergence of the EM algorithm on WMT14 En-De test BLEU. NCM represents the Normalized Corpus-level Multi-modality. All the models are evaluated without ODD decoding and rescoring. Iterations $t = 1, 2, 3, 4, 5^*$ are performed without quality constraints. Iterations $t = 5, 6$ are re-started from $t = 4$ with quality constraints. | $t$ | NAR Model | AR Model | NCM | |-----|-----------|----------|-----| | 1 | 19.55 | **27.84**| 2.88| | 2 | 22.27 | 27.50 | 2.33| | 3 | 22.85 | 27.12 | 2.24| | 4 | 23.27 | 26.78 | 2.16| | 5* | 22.86 | 26.11 | **2.04**| | 5 | 23.18 | 26.72 | 2.12| | 6 | 23.16 | 26.75 | 2.11| 6.5. Analysis of the Amortized Inference In our EM method, we employ amortized inference and parameterize the posterior distribution of the target sequences by an AR model. In this section, we investigate the importance of amortized inference. Specifically, we try to directly train the NAR model on $(\mathcal{X}^{t+1}, \mathcal{Y}^{t+1})$ in the M-step. The results are shown in Tab. 6. We can see that parameterizing the posterior distribution by a unified AR model always improves the performance of the NAR model. Table 6.
Analysis of the amortized inference for iterations $t = 2, 3, 4$ on WMT En-De test BLEU. All the models are evaluated without ODD decoding and rescoring. We show the results of the NAR models using different training data in the M-step.

| $t$ | amortized | non-amortized |
|-----|-----------|---------------|
| 2 | **22.27** | 21.78 |
| 3 | **22.85** | 22.44 |
| 4 | **23.27** | 22.98 |

6.6. Analysis of the Optimal De-duplicated Decoding

Finally, we analyze the effect of the proposed optimal de-duplicated (ODD) decoding approach. We compare it with another plug-and-play de-duplicated decoding approach, namely “removing any repetition by collapsing multiple consecutive occurrences of a token” (Lee et al., 2018), which we refer to as post-de-duplication. The results are shown in Tab. 7. The proposed ODD decoding method consistently outperforms this empirical method, which shows that our proposed method overcomes the sub-optimality of post-de-duplication.

Table 7. Analysis of the ODD decoding on WMT En-De test BLEU. All the models are trained with our EM algorithm.

| | WMT En-De | WMT De-En | IWSLT |
|------------------------|-----------|-----------|-------|
| post-de-duplication | 23.67 | 26.93 | 24.96 |
| + rescoring\(^9\) | 25.56 | 28.92 | 32.03 |
| ODD decoding | **24.54** | **27.93** | **30.69** |
| + rescoring\(^9\) | **25.75** | **29.29** | **32.66** |

7. Conclusion

This paper proposes a novel EM approach to non-autoregressive conditional sequence generation, which effectively addresses the multi-modality issue in NAR training by iteratively optimizing both the teacher AR model and the student NAR model. We also developed a principled plug-and-play decoding method for efficiently removing word duplication in the model’s output. Experimental results on three tasks demonstrate the effectiveness of our approach. For future work, we plan to examine the effectiveness of our method in a broader range of applications, such as text summarization.
Acknowledgements

We would like to thank Meng Qu and anonymous reviewers for valuable comments and suggestions. This work is supported in part by the National Science Foundation (NSF) under grant IIS-1546329.

References

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014.

Furlanello, T., Lipton, Z. C., Tschannen, M., Itti, L., and Anandkumar, A. Born again neural networks. *arXiv preprint arXiv:1805.04770*, 2018.

Ganchev, K., Gillenwater, J., Taskar, B., et al. Posterior regularization for structured latent variable models. *Journal of Machine Learning Research*, 11(Jul):2001–2049, 2010.

Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. Convolutional sequence to sequence learning. In *Proceedings of the 34th International Conference on Machine Learning, Volume 70*, pp. 1243–1252. JMLR.org, 2017.

Gershman, S. and Goodman, N. Amortized inference in probabilistic reasoning. In *Proceedings of the Annual Meeting of the Cognitive Science Society*, volume 36, 2014.

Ghazvininejad, M., Levy, O., Liu, Y., and Zettlemoyer, L. Mask-predict: Parallel decoding of conditional masked language models. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pp. 6114–6123, 2019.

Gu, J., Bradbury, J., Xiong, C., Li, V. O., and Socher, R. Non-autoregressive neural machine translation. *arXiv preprint arXiv:1711.02281*, 2017.

Gu, J., Wang, C., and Zhao, J. Levenshtein transformer. *arXiv preprint arXiv:1905.11006*, 2019.

Guo, J., Tan, X., He, D., Qin, T., Xu, L., and Liu, T.-Y. Non-autoregressive neural machine translation with enhanced decoder input. *arXiv preprint arXiv:1812.09664*, 2018.

Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.
Kaiser, L., Roy, A., Vaswani, A., Parmar, N., Bengio, S., Uszkoreit, J., and Shazeer, N. Fast decoding in sequence models using discrete latent variables. *arXiv preprint arXiv:1803.03382*, 2018. Kim, Y. and Rush, A. M. Sequence-level knowledge distillation. *arXiv preprint arXiv:1606.07947*, 2016. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Lafferty, J., McCallum, A., and Pereira, F. C. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 2001. Lee, J., Mansimov, E., and Cho, K. Deterministic non-autoregressive neural sequence modeling by iterative refinement. *arXiv preprint arXiv:1802.06901*, 2018. Li, Z., He, D., Tian, F., Qin, T., Wang, L., and Liu, T.-Y. Hint-based training for non-autoregressive translation. 2018. Li, Z., Lin, Z., He, D., Tian, F., Qin, T., Wang, L., and Liu, T.-Y. Hint-based training for non-autoregressive machine translation. *arXiv preprint arXiv:1909.06708*, 2019. Libovický, J. and Helcl, J. End-to-end non-autoregressive neural machine translation with connectionist temporal classification. *arXiv preprint arXiv:1811.04719*, 2018. Lin, C.-Y. and Hovy, E. Automatic evaluation of summaries using n-gram co-occurrence statistics. In *Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics*, pp. 150–157, 2003. Ma, X., Zhou, C., Li, X., Neubig, G., and Hovy, E. Flowseq: Non-autoregressive conditional sequence generation with generative flow. *arXiv preprint arXiv:1909.02480*, 2019. McLachlan, G. J. and Krishnan, T. *The EM algorithm and extensions*, volume 382. John Wiley & Sons, 2007. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th annual meeting on association for computational linguistics*, pp. 311–318. Association for Computational Linguistics, 2002. 
Ran, Q., Lin, Y., Li, P., and Zhou, J. Guiding non-autoregressive neural machine translation decoding with reordering information. *arXiv preprint arXiv:1911.02215*, 2019. Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. *arXiv preprint arXiv:1508.07909*, 2015. Shao, C., Zhang, J., Feng, Y., Meng, F., and Zhou, J. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. *arXiv preprint arXiv:1911.09320*, 2019. Stern, M., Chan, W., Kiros, J., and Uszkoreit, J. Insertion transformer: Flexible sequence generation via insertion operations. *arXiv preprint arXiv:1902.03249*, 2019. Sun, Z., Li, Z., Wang, H., He, D., Lin, Z., and Deng, Z. Fast structured decoding for sequence models. In *Advances in Neural Information Processing Systems*, pp. 3011–3020, 2019. Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In *Advances in neural information processing systems*, pp. 3104–3112, 2014. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2818–2826, 2016. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017. Wang, Y., Tian, F., He, D., Qin, T., Zhai, C., and Liu, T.-Y. Non-autoregressive machine translation with auxiliary regularization. *arXiv preprint arXiv:1902.10245*, 2019. Wei, B., Wang, M., Zhou, H., Lin, J., and Sun, X. Imitation learning for non-autoregressive neural machine translation. *arXiv preprint arXiv:1906.02041*, 2019. Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Machine learning*, 8(3-4):229–256, 1992. Wu, L., Tian, F., Qin, T., Lai, J., and Liu, T.-Y. 
A study of reinforcement learning for neural machine translation. *arXiv preprint arXiv:1808.08866*, 2018. Yun, C., Bhojanapalli, S., Rawat, A. S., Reddi, S., and Kumar, S. Are transformers universal approximators of sequence-to-sequence functions? In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=ByxRM0Ntvr. Zhang, J., Liu, Y., Luan, H., Xu, J., and Sun, M. Prior knowledge integration for neural machine translation using posterior regularization. *arXiv preprint arXiv:1811.01100*, 2018. Zhou, C., Neubig, G., and Gu, J. Understanding knowledge distillation in non-autoregressive machine translation. *arXiv preprint arXiv:1911.02727*, 2019.
TOWN OF SAN ANSELMO PLANNING COMMISSION MINUTES FOR JANUARY 21, 1997

The special meeting of the San Anselmo Planning Commission was convened at 7:30 p.m. by Chair Israel. Staff present were Planning Director Ann Chaney and Assistant Planner Chip Griffin.

CALL TO ORDER
Commissioners present: Harle, Sargent, Wittenkeller, Duys, Israel
Commissioners absent: Mihaly, Cronk

OPEN TIME FOR PUBLIC DISCUSSION

CONSENT AGENDA
1. MINUTES: December 16, 1996 and January 6, 1997
2. Lenny Lerner, Attention to Detail, 1535 Sir Francis Drake Boulevard, A/P 5-153-01, status report on uses on property located within the C-1 Zoning District.
3. SR-9701 - Wells Fargo Bank at Andronico's Market, 100 Center Boulevard, A/P 6-101-04, size variance to permit a 17.75 square foot sign (maximum sign area already exceeded by Andronico's signage) on property located within the C-3 zoning district.
4. PDP-9701/DR-9702, Stuart Jacobson and Andrea Sandvig, 500 Oak Avenue, A/P 7-191-07, Precise Development Plan and Design Review to construct a 684 square foot second floor addition with a small first floor addition. The addition is primarily to add a second floor master bedroom and bath to an existing single story house; however, an additional 476 square feet of clearstory (area open to room below) will add wall and roof area, but not floor area, and therefore was not calculated into the total square footage, on property located within the R-1-H zoning district.

M/s Harle/Wittenkeller, and passed, to approve Consent Item C.1 - Minutes. Ayes: Wittenkeller, Harle, Duys, Sargent, Israel Absent: Mihaly, Cronk
M/s Harle/Duys, and passed, to continue Consent Item C2 to the meeting of 2/3/97. Ayes: Wittenkeller, Harle, Duys, Sargent, Israel Absent: Mihaly, Cronk
M/s Harle/Wittenkeller, and passed, to remove Consent Items C3 and C4 from Consent and open them for discussion. Ayes: Wittenkeller, Harle, Duys, Sargent, Israel Absent: Mihaly, Cronk

CONTINUED ITEMS
1.
Environmental Review/GPA-9601/Z-9601/U-9608 - Russ Johnson, 12 Loma Robles and 750 Sir Francis Drake Boulevard, A/P 6-091-41, 770 and 760 Sir Francis Drake Boulevard, A/P 6-091-38, 764 Sir Francis Drake Boulevard, A/P 6-091-39, and 700 Sir Francis Drake Boulevard, A/P 6-091-40: environmental review; General Plan amendment to amend the land use designation from Limited Commercial to General Commercial; Zoning Ordinance amendment to amend the zoning from C-L (Limited Commercial) to C-3 (General Commercial) or to revise the list of allowed uses (Table 3A) in the C-L zone to permit a mini-mart food store. This request is being initiated by the Chevron Service Station owner in order to permit a mini-mart at that service station. CONTINUED TO 2/3/97

PUBLIC HEARINGS
3. SR-9701 - Wells Fargo Bank at Andronico's Market, 100 Center Boulevard, A/P 6-101-04, size variance to permit a 17.75 square foot sign (maximum sign area already exceeded by Andronico's signage) on property located within the C-3 zoning district. (Taken from Consent)

Chair Israel asked whether a parking analysis was done with regard to the number of spaces required to support the market, and whether there is any change in parking requirements due to the ATM banking facility. He was also concerned about circulation in the lot, which is already stressed. In addition, a sign variance was previously granted for the market, and the applicant is now asking for an additional amount for the ATM facility. There is pressure to require the adjacent parcel to be purchased and used for parking, and he wondered what bearing that may have on the proposal. Ms. Chaney stated that some of the parking spaces were reconfigured and the lot is in compliance with the parking standards for the market. Because parking is based on square footage, the parking requirements would not technically change. Another argument could be made that the ATM is a secondary use to the market. No studies have been done to determine whether people go to the market just to use the ATM.
Chair Israel wanted to know what impact this facility would have on the Wells Fargo Bank on Tunstead, noting that he would not want to lose that bank. Commissioner Sargent stated that if the ATM only serves those people who use the market, no special signage should be necessary outside the market. Because the applicant was not present to respond to the Commission, the application was continued. M/s Duys/Harle, and passed, to continue this application to 2/3/97. Ayes: Wittenkeller, Harle, Duys, Sargent, Israel Absent: Mihaly, Cronk

4. PDP-9701/DR-9702, Stuart Jacobson and Andrea Sandvig, 500 Oak Avenue, A/P 7-191-07, Precise Development Plan and Design Review to construct a 684 square foot second floor addition with a small first floor addition. The addition is primarily to add a second floor master bedroom and bath to an existing single story house; however, an additional 475 square feet of clearstory (area open to room below) will add wall and roof area but not floor area, and therefore was not calculated into the total square footage, on property located within the R-1-H zoning district. (Taken from Consent)

Ms. Chaney stated that the Commission should be aware that there is a policy in the Bald Hill Plan that encourages property owners to pay a proportionate share of the cost of the roadway improvements on Oak Avenue. That condition was placed on this application, and staff wanted the Commission's comments. Commissioner Wittenkeller wondered if there was a pro rata amount that could be applied to this application, noting that he was in favor of applying it to expansions. Commissioner Harle stated that he was not sure the Commission can require an amount without getting direction from the Town Council. Ms. Chaney suggested that she bring this to the Council for direction, and if they concur, it can be made a condition of the building permit issuance for this project.
Stuart Jacobson stated that he was part of the Bald Hill Committee and that the intent was to impose a fee when traffic will be increased due to a project. Commissioner Duys stated that it seemed fair to impose a charge for adding a bedroom, but she would like to clarify the language, both for this application and for future applications, and suggested getting guidance from the Council. Commissioner Sargent agreed that it would be fair to have owners who live on Oak Avenue and add additional living space take part in the pro rata share. Chair Israel would like to make sure the Council is in agreement and then assess a charge calculated based on the percentage of living area, noting that the generation of a new room could add additional traffic. He suggested that Staff prepare a recommendation to the Town Council for a fee share.

M/s Wittenkeller/Duys, and passed, to approve the application with the condition that Staff prepare a recommendation to the Town Council for a fee share; should such a policy be adopted, a condition should be placed on this application that the fee be imposed prior to issuance of the building permit. Ayes: Harle, Sargent, Wittenkeller, Duys, Israel Absent: Cronk, Mihaly. The audience was advised of the ten day appeal period.

1. V-9644 - Kevin McGee, 5 Jordan Avenue, A/P 6-166-04, Variance to reduce the parking space length within a garage from 19' (required) to 16'6" to accommodate a new interior addition, on property located within the R-1 Zoning District.

Ms. Chaney presented the staff report, noting that the applicant is proposing an alternative plan with reduced parking. Staff is still unable to support the compact parking space. John Hood, Architect representing the applicant, said the parcel is one of the smallest lots in the neighborhood. Their research with car dealers in Marin County indicates that approximately 75% of cars are under 16'6" in length.
They have designed a space that will fit a decent size car and feel they are making a compromise. Ms. Chaney clarified for the Commission that there are other lots in the immediate neighborhood that share the same lot size. Commissioner Harle stated that although he was sympathetic to the needs of the applicant, he was unable to make the findings for approval. Commissioner Duys said she was also sympathetic but did not find the small lot size to be a special circumstance justifying approval of reduced parking. Commissioner Sargent felt the request was reasonable because many jurisdictions are allowing compact parking; there has to be a way to accommodate the needs of families who want to add on, and he will therefore support the application. Commissioner Wittenkeller thought the request was a reasonable approach, with one full size space and one compact space; the trend in automobiles is not to get larger, and this will accommodate the use of the property. Chair Israel stated that although the request might be reasonable, he is opposed to it. The house is already over the allowable lot coverage and car sizes are going back up again. People tend to use their garages for storage, and he visualizes this car will not be in the garage. If the Town wants to adjust the ordinance he would be happy to discuss it, but given the fact that the ordinance is in place, he cannot approve this. If the garage door has to be left open, a car will be left in the driveway.

M/s Sargent/Wittenkeller, and passed, to approve the application on the grounds that the special circumstances are that a standard lot would require standard size parking, but standard parking spaces would look out of scale on this substandard lot because the 40'x40' garage would look out of scale fronting it; that this lot can accommodate a compact car; and that there is no increase to the footprint of the house.
Ayes: Wittenkeller, Harle, Sargent Noes: Duys, Israel Absent: Mihaly, Cronk The audience was advised of the ten day appeal period.

2. V-9708 - Mr. & Mrs. Charles Monte, 35 Suffield Avenue, A/P 5-129-10. Variance to construct a 277 square foot addition to the master bedroom within 11' from the front property line (20' required), and a Variance to increase coverage on subject property to 37% (35% is maximum allowable), on property located within the R-1 Zoning District.

Mr. Griffin presented the staff report and stated that bay windows are counted in floor area if they have a floor. Jim McDonald, Architect representing the applicant, stated that they have explored many alternatives and ruled out a second story because of the applicants' age as well as the negative impact on the neighbors. The proposal is to add one bathroom and increase the closet, but the house will still have only two bedrooms. There is a driveway that is shared with the westerly neighbor. Many neighbors do not support a second story, and this proposal will have no negative impact on the neighborhood. If the bay windows are removed and a second story proposed, they would be very close to the 35% coverage. He felt, based on all the alternatives, the current proposal was very reasonable.

Margaret Honey, 106 Hawthorne, and the owner of 30 Suffield, stated that she was opposed to a second story because it would obstruct her view. Ron Hink, 45 Suffield, stated he was in support of the project as proposed.

Commissioner Duys stated that she liked the way the addition was handled, and given the neighborhood, a one story addition would be much more beneficial. She also liked the way the garage is tucked to the side of the house, and the addition is aesthetically pleasing. However, she would have problems making the findings for the excessive lot coverage.
Commissioner Sargent agreed with the neighbors that a second story would be a detriment and suggested a continuance for redesign to set the addition back and get back within the allowed 35% lot coverage. Commissioner Wittenkeller agreed that a second story would have a serious impact on the neighbors. The project was architecturally pleasing, and he did not have as much of a concern with the lot coverage if the design is good, particularly because it is not much over the allowable 35%. Commissioner Harle supported Commissioner Sargent's comments. In response to Chair Israel, Mr. McDonald stated that he cannot remember if the bay windows were counted in the calculations because the measurements were done a while ago. Chair Israel felt that the design was beautiful; the character of the street has been maintained and it is sensitive to the site and the neighbors, but he concurred with the staff recommendation because of the lot coverage overage. If the revised calculations still show an overage of lot coverage, he would like a reduction to meet the lot coverage requirements.

A straw poll was taken on the front yard setback:
- Commissioner Wittenkeller thought it was acceptable.
- Commissioner Harle thought it was acceptable if someone could make the findings.
- Commissioner Duys thought it was acceptable but was unable to make finding number one, and did not feel the applicant's finding was acceptable.
- Commissioner Sargent abstained.
- Chair Israel would be able to support a front yard variance if it was 20' back from the curb and would allow the intrusion of a bay window.

M/s Wittenkeller/Harle, and passed, to continue for potential redesign and recalculation of lot coverage. Continued to 2/3/97. Ayes: Wittenkeller, Duys, Harle, Sargent, Israel Absent: Mihaly, Cronk

3. V-9704 - Cathleen Dorinson, 295 Butterfield Road, A/P 5-055-02, Variance to reconstruct an existing 108 square foot room within 3.5' from the northerly side property line (8' required).
The remodel includes modification (raising) of the roof line to match the original house, on property located within the R-1 Zoning District.

Mr. Griffin presented the staff report. Commissioner Wittenkeller stated he would like documentation from a professional that the 15' height is integral to making the roof watertight; otherwise he would not be able to support the height. Commissioner Duys asked about the skylight, noting that it is on the plans but not on the elevations; if it is being proposed it will increase the roof height. Cathleen Dorinson, applicant, explained her proposal and advised the Commission that she was not averse to going with Staff's recommendation for a roof height of 13'. The adjacent neighbor is in support of the project and there are many trees blocking the addition.

The consensus of the Commission was to approve the application with a height maximum of 13', with staff to review the final plan, making sure that the new roof integrates with the rest of the roof. M/s Sargent/Wittenkeller, and passed, to approve the height variance to 13', with the architectural details to be reviewed by staff, and Conditions 1 and 2 left to staff's discretion. The roof should integrate with the rest of the roof. Ayes: Wittenkeller, Duys, Harle, Sargent, Israel Absent: Mihaly, Cronk

GENERAL DISCUSSION
Election of Chair and Vice Chair of Planning Commission for 1997: M/s Israel/Duys, and passed, to continue until 2/3/97, so that all the members of the Commission are present.

The Commission directed Staff to include the following discussion items on the future joint Planning Commission and Town Council meeting:
- Does the Town want to consider reducing the parking standards?
- Intrusions into the setbacks: Would it be better to grant approvals, rather than going up to a second story; if so, what would be the mechanism?

REPORT OF UPCOMING APPEALS TO TOWN COUNCIL

ADJOURNMENT TO FEBRUARY 3, 1997. The Planning Commission was adjourned at 10:15 p.m.
to the next meeting on February 3, 1997. BARBARA CHAMBERS
Issues in Controlling Oxygen Depletion in the San Joaquin River Deep Water Ship Channel: Developing an NPS Nutrient Control Program
G. Fred Lee, PhD, PE, DEE, & Anne Jones-Lee, PhD; G. Fred Lee & Associates, El Macero, CA 95618; website: www.gfredlee.com; email: email@example.com

- Characteristics of Low-DO Problem in San Joaquin River (SJR) Deep Water Ship Channel (DWSC)
- Role of NPS Nutrient Sources as Cause of Low DO in DWSC
- Approaches Being Followed to Control Low DO Problem

Upstream SJR Diversions for Southern CA Water Supplies and Central Valley Agriculture Adversely Impact Oxygen Demand Assimilative Capacity

SJR DWSC Watershed: Area: 7,300 mi²; Intense Agriculture: Fruits/Nuts, Row Crops, Dairies, Feedlots, Ducks; 2 Million People, Increasing 2%/yr; SJR Flow Highly Regulated

SJR DWSC Reach of Concern Is the First 15 Miles below Port of Stockton

[Figure 5: Location of Navigation Lights on the San Joaquin River in the Vicinity of Stockton; Characteristics of the Deep Water Ship Channel (DWSC), from Jones & Stokes (1998)]

Problem
- At Times, Dissolved Oxygen in San Joaquin River Deep Water Ship Channel Violates Water Quality Objective/Standard
- SJR DWSC Placed on 303(d) List of “Impaired” Waterbodies
- Requires TMDL to Control Oxygen Depletion below Water Quality Objective by June 2003

DWSC DO Data – Summer/Fall 1999 (Adapted from DWR – Lehman, 2000)
[Figures: Dissolved oxygen concentrations (mg/L), surface and bottom, vs. date in 1999 at DWSC Lights 48, 41, 34, and 18, each plotted against the water quality objective]
Conceptual Model of SJR DWSC Oxygen Demand Situation: Components
- DWSC Watershed - Nutrients, Algae, Non-Algal $O_2$ Demand Sources - Algae Develop in Channels
- DWSC - Oxygen Demand Causes and Reactions

Factors Affecting Dissolved Oxygen in DWSC (adapted from COE, 1988)
- **Sun** - A long-term cloudy period can cause D.O. depressions by retarding photosynthetic activity of phytoplankton.
- **Cloud** - Sunlight attenuation in turbid water due to phytoplankton & inorganic suspended solids.
- **Deep Water Ship Channel** - CO₂ + N + P → algae. Phytoplankton produce O₂ through photosynthesis; they also respire (use O₂) in light & dark. Vertical circulation; phytoplankton respire in the dark.
- **Lighted Water** - Atmospheric reaeration (gas transfer) adds O₂. Ship traffic stirs up sediments.
- **Dark Water** - Bacteria consume organics (BOD, org N) & NH₃ and respire, unaffected by sunlight.
- **Sedimentation Basin** - Heavier inorganic particles settle out. Some particulate organic load and phytoplankton settle out.
- **San Joaquin River** - San Joaquin River Flow - River Oxygen Demand Loads - Phytoplankton, BOD organics, org N & NH₃, algal nutrients (N&P) from domestic & industrial wastewaters, stormwater runoff, farms, dairies, etc.
- **SJR** - San Joaquin River
- **DWSC** - SJR Deep Water Ship Channel
- **Residence time determined by SJR net flow into DWSC**
- **Jet Aeration**

Algae & Organic Detritus as Sources of Oxygen Demand
\[ \text{CO}_2 + \text{N} + \text{P} \xrightarrow{\text{light (hv)}} \text{algae} + \text{O}_2 + \text{org N} \quad (\text{produces O}_2) \]
\[ \text{algae} \xrightarrow{\text{respiration}} \text{CO}_2 + \text{H}_2\text{O} + \text{ammonia} \quad (\text{uses O}_2) \]
\[ \text{organic detritus} \xrightarrow{\text{bacteria} + \text{O}_2} \text{CO}_2 + \text{H}_2\text{O} \quad (\text{uses O}_2) \]
\[ \text{DOC} + \text{O}_2 \xrightarrow{} \text{CO}_2 + \text{H}_2\text{O} \quad (\text{uses O}_2) \]
\[ \text{Org N} + \text{O}_2 \xrightarrow{\text{mineralization}} \text{NH}_3 \quad (\text{uses O}_2) \]
\[ \text{NH}_3 + \text{O}_2 \xrightarrow{} \text{NO}_3^- \quad (\text{uses O}_2) \]
SOD (sediment oxygen demand):
\[ \text{particulate organics} \xrightarrow{\text{O}_2 + \text{bacteria}} \text{CO}_2 \quad (\text{uses O}_2) \]
\[ \text{sulfate:} \quad \text{SO}_4^{2-} \xrightarrow{\text{no O}_2} \text{S}^{2-} \ (\text{sulfide}); \quad \text{S}^{2-} + \text{O}_2 \xrightarrow{\text{abiotic}} \text{SO}_4^{2-} \quad (\text{rapid reaction; uses O}_2) \]
\[ \text{iron:} \quad \text{Fe}^{3+} \xrightarrow{\text{no O}_2} \text{Fe}^{2+} \ (\text{ferrous iron}); \quad \text{Fe}^{2+} + \text{O}_2 \xrightarrow{\text{abiotic}} \text{Fe}^{3+} \quad (\text{rapid reaction; uses O}_2) \]

Box Model of Estimated DO Sources/Sinks in SJR DWSC, August 1999
Oxygen Demand Sources
- Oxygen demand in SJR above Vernalis that reaches DWSC (BOD, NH₃, org N, chl BOD, oxygen deficit) - 61,000 lbs/day
- City of Stockton wastewater discharges (CBOD, NH₃, org N, chl BOD, oxygen deficit) - 5,600 lbs/day
- Minor local BOD sources - French Camp Slough - Manteca - ag & city drains? - others?
- Algae that develop in DWSC that exert oxygen demand in DWSC - ?
- Sediment Oxygen Demand (partially incorporated into USV) - 8,000 lbs/day

DO Sources/Oxygen Demand Export
- DO added by natural reaeration - 5,500 lbs/day
- Mechanical aeration - 2,000 lbs/day
- Oxygen Demand Export - Export of BOD, algae, NH₃ and oxygen deficit from downstream DWSC at Turner Cut - 69,000 lbs/day
- 27,000 lbs/day O₂ added by algae

Summer Oxygen Demand Loads Control DO Depletion
- Short Hydraulic Residence Time of DWSC: 5 to 20 days for SJR Flows of 2,000 cfs to 100 cfs
- Only Summer Oxygen Demand Loads Important to Summer/Fall DO Depletion - High Winter-Spring Flows/Loads Flush through the DWSC
- Stormwater Runoff Not Normally Important Source of Oxygen Demand - May Be Important in Late Fall

| Source | August | September | October |
|------------------------------|---------|-----------|---------|
| SJR DWSC Net Flow (cfs): | ~900 | ~900 | 150 / 400 / 1,000 |
| Upstream of Vernalis | 61,000 | 70,000 | 6,300 |
| City of Stockton | 5,600 | 9,300 | 12,200 |
| Local DWSC | ? | ? | 1,750 |
| SOD | 6,000 | 6,000 | 6,000 |
| Aeration (Natural) | 5,500 | 5,500 | ? |
| Aeration (Mech.) | 2,000 | 2,000 | ? |
| DWSC Algae | ? | ? | ? |
| Export from DWSC | 27,000 | 27,000 | ? |

[Figure: Retention Time (days) in SJR Deep Water Ship Channel (to Turner Cut, including Turning Basin) as a Function of Flow (cfs)]

Role of Sacramento River in DO Depletion in SJR DWSC
- Export Pumping of Delta Water via State & Federal Projects to Central & Southern California Limits Downstream Extent of DO Depletion to Columbia Cut
- Brings Sacramento River Water Across the SJR DWSC - Mixing & Dilution & Advective Transport to Central Delta
- Does This Create a Low DO Problem in Central Delta?
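The residence times summarized above (5 to 20 days over the 2,000 to 100 cfs flow range) follow, to first order, from a plug-flow estimate t = V/Q. The sketch below is illustrative only: the effective channel volume is our assumption, calibrated so that 2,000 cfs gives roughly the 5-day figure, and tidal mixing is ignored.

```python
SECONDS_PER_DAY = 86_400


def residence_time_days(volume_ft3, net_flow_cfs):
    """Plug-flow hydraulic residence time t = V / Q for the critical reach."""
    return volume_ft3 / (net_flow_cfs * SECONDS_PER_DAY)


# Assumed effective volume of the critical ~15-mile reach (our calibration,
# not a value from the source): chosen so 2,000 cfs -> ~5 days.
VOLUME_FT3 = 2_000 * SECONDS_PER_DAY * 5  # 8.64e8 ft^3

for q_cfs in (2_000, 1_000, 200, 100):
    t = residence_time_days(VOLUME_FT3, q_cfs)
    print(f"{q_cfs:>5} cfs -> {t:6.1f} days")
```

The estimate reproduces the inverse dependence of residence time on net flow, but at very low flows it overstates the residence time relative to the 20 to 25 days cited in the text, a reminder that the mixing and transport processes the sketch ignores matter most when net flow is small.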
SJR Flow through DWSC
- Important in Determining DO Depletion in DWSC
- Flow Impacts Hydraulic Residence Time of Water & Oxygen Demand Constituents in Critical Reach of DWSC (first 15 miles): 2,000 cfs, about 5 days; 200 cfs, 20 - 25 days
- At Low Flow, Much Greater Part of Oxygen Demand Added to DWSC Is Exerted before Dilution by Sacramento River Cross-SJR Channel Flow en route to Export Pumps
- SJR Flow through DWSC > 2,000 cfs - Few DO Problems
- SJR Flow through DWSC Controlled by: SJR Flow at Vernalis; Diversion of SJR Flow down Old River; Operation of South Delta Channel Barriers; State & Federal Export Pumping of Delta Water to Central & Southern California

Responsibility for SJR DWSC DO Depletion below Water Quality Objective: Sources of Oxygen Demand
- Non-Point Runoff/Discharge of Oxygen Demand: Agricultural Lands, Irrigation Drainage, Stormwater Runoff; Non-NPDES Permitted Urban Stormwater Runoff; Riparian Lands
- Pollution of Groundwater That Leads to Nitrate Discharge to Surface Waters: Agriculture; Dairies & Other Animal Husbandry Activities; Land Disposal of Municipal Wastewaters; Urban Areas

Oxygen Demand Sources
- Algae, Other Oxygen Demand Constituents & Nutrients (N&P) from Upstream Tributary Sources Transported Downstream
- Algae Developed in Channels, Creeks, Sloughs & River - Doubling Time about 1 to 2 Days in Summer
- Algae & Water Diverted through Abstraction of Irrigation Water
- Water, Nutrients, Algae Added to SJR & Tributaries
- Algae Growth Rate Limited by Light Penetration - Surplus Available N & P
- Grazing of Phytoplankton by Zooplankton & Macroinvertebrates
- Non-Algal Oxygen Demand, Detritus, NH₃, Organic N - Exert Oxygen Demand in River
- Photosynthesis Produces O₂ in River - Excess O₂ in River Lost to Atmosphere

Sources/Sinks of Oxygen Demand in SJR-DWSC Watershed
Oxygen Demand Components
- organic BOD - wastewater, land runoff
- ammonia, organic N
- algae/algal nutrients
Principal Reactions: BOD + O₂ → low DO; CO₂ + N +
P → sunlight → algae Algae die → BOD San Joaquin River Delta SJR Deep Water Ship Channel Turning Basin wastewater stormwater Cities Industry groundwater stormwater irrigation drains irrigation diversion Farms groundwater Groundwater nitrate Dairies, Duck Farms, Feed Lots & Other Commercial Animal Facilities discharges water Riparian Lands Reactions Governing Oxygen Demand Dynamics in DWSC Watershed **Growth:** \[ \text{Algae} + \text{N} + \text{P} \Rightarrow \text{More Algae} + \text{O}_2 \] **Decay:** Non-Algal Oxygen Demand \( \Rightarrow \) BOD Exerted in "River" + N + P **Grazing:** \[ \text{Algae} + \text{"Zooplankton"} \Rightarrow \text{Dead Algae} + \text{Oxygen Demand} + \text{N} + \text{P} \] **SJR Diversion:** \[ \text{Algae (Abstraction of Water + Associated Algae)} + \text{Other Oxygen Demand} \Rightarrow \text{Reduced Oxygen Demand Load in River} \] **Tributary Input:** Water Input (Tributary, River, Creeks + Ag Drains) \( \Rightarrow \) Add Algae + Water, Nutrients + Turbidity \( \Rightarrow \) Increased Algae Concentration/Load or Dilution of Upstream Oxygen Demand/Algae **SJR Scour:** Elevated SJR Flow \( \Rightarrow \) Suspended Sediment-Associated Oxygen Demand, Nutrients + Suspended Solids (Turbidity) **Nitrification:** \[ \text{NH}_3 + \text{OrgN} + \text{O}_2 \Rightarrow \text{NO}_3^- \quad \text{N-BOD Exerted} \] **Denitrification:** \[ \text{NO}_3^- \text{ (Low DO near Sediments)} \Rightarrow \text{N}_2 \quad \text{N Removal} \] Responsibility for SJR DWSC DO Depletion below Water Quality Objective Sources of Oxygen Demand • NPDES Permittees – Municipal and Industrial Wastewater Discharges and Stormwater Runoff – City of Stockton & Other Municipalities – Dairies and Other Animal Husbandry Operations, Including Feedlots, Hogs, Horses, Chickens Responsibility for SJR DWSC DO Depletion below Water Quality Objective DWSC Geometry - Port of Stockton & Those Who Benefit from Commercial Shipping to Port - Channel Depth Impacts Oxygen Demand 
Assimilative Capacity
- Ship Traffic That Stirs Sediments into the Water Column Increases SOD

Responsibility for SJR DWSC DO Depletion below Water Quality Objective: SJR DWSC Flow
- All Entities That Divert Water from the SJR above the DWSC, as Well as Those That Alter the SJR Flow Pattern through the Delta
- Municipal and Agricultural Diversions

Oxygen Demand Constituents
C-BOD: Carbonaceous Biochemical Oxygen Demand
\[ \text{Organic} + \text{O}_2 \xrightarrow{\text{Bacteria}} \text{CO}_2 + \text{H}_2\text{O} \quad \text{(Respiration)} \]
N-BOD: Nitrogenous Biochemical Oxygen Demand
\[ \text{NH}_3 + \text{O}_2 \xrightarrow{\text{Bacteria}} \text{NO}_3^- \quad \text{(Nitrification)} \]
\[ \text{Organic N} \xrightarrow{\text{Bacteria}} \text{NH}_3 \quad \text{(Ammonification)} \]
\[ \text{Algae (Death)} + \text{O}_2 \longrightarrow \text{CO}_2 + \text{H}_2\text{O} \quad \text{(Respiration)} \]
SOD: Sediment Oxygen Demand
\[ \text{Inorganic} + \text{O}_2 \xrightarrow{\text{Abiotic}} \text{Fe}^{3+} + \text{SO}_4^{2-} \]
\[ \text{Organic} + \text{O}_2 \xrightarrow{\text{Biotic}} \text{CO}_2 + \text{H}_2\text{O} \]

Evaluation of Need for Nutrient Control in SJR Watershed
- Algal Nutrients (N & P) Discharged to SJR & Its Tributaries Are Important Precursors for Algae-Related Oxygen Demand
- Not All Nutrient Discharges in the SJR Watershed Are of Equal Weight in Contributing to the Algal Growth That Leads to Oxygen Demand in DWSC

Algae as Source of Oxygen Demand
\[(\text{CH}_2\text{O})_{106}(\text{NH}_3)_{16}\text{H}_3\text{PO}_4 = 106\,\text{CH}_2\text{O} + 16\,\text{NH}_3 + \text{H}_3\text{PO}_4\]
Reactions Added:
\[(\text{CH}_2\text{O})_{106}(\text{NH}_3)_{16}\text{H}_3\text{PO}_4 + 138\,\text{O}_2 = 106\,\text{CO}_2 + 122\,\text{H}_2\text{O} + 16\,\text{HNO}_3 + \text{H}_3\text{PO}_4\]

Algae as a Source of BOD
[Scatter plot: SJR at Vernalis, BOD₁₀ vs. chlorophyll a; regression y = 9.5186x - 23.203, r = 0.8628, n = 71]

Focused Algal Control
- The Key Issue in Controlling SJR DWSC Watershed Algae-Caused Oxygen Demand Is: Where in the SJR Watershed Tributaries Do the Algae That Are the Primary Seed for Algae-Caused Oxygen Demand at the Tributary Mouth Develop?
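The quantitative threads above (flow-dependent residence time, the Redfield-type algal stoichiometry, and the Vernalis BOD/chlorophyll regression) can be sketched in a short script. The first-order BOD decay rate `k_per_day` is an assumed textbook value (0.1-0.3/day is typical), not a number from these slides, and the regression units are those of the original scatter plot:

```python
import math

# --- BOD exertion vs. residence time in the critical reach ---------------
# First-order decay; k_per_day is an ASSUMED typical deoxygenation rate,
# not a value given in the presentation.
def fraction_bod_exerted(residence_days, k_per_day=0.15):
    return 1.0 - math.exp(-k_per_day * residence_days)

# Residence times from the slide: ~5 days at 2,000 cfs, 20-25 days at 200 cfs
for flow_cfs, t_days in [(2000, 5), (200, 22)]:
    print(f"{flow_cfs:>5} cfs (~{t_days} d): "
          f"{100 * fraction_bod_exerted(t_days):.0f}% of BODu exerted")

# --- Oxygen demand of algal biomass from the Redfield-type reaction ------
# (CH2O)106(NH3)16(H3PO4) + 138 O2 -> 106 CO2 + 122 H2O + 16 HNO3 + H3PO4
MW = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "P": 30.974}
algae_mw = 106 * MW["C"] + 263 * MW["H"] + 110 * MW["O"] + 16 * MW["N"] + MW["P"]
o2_g = 138 * 2 * MW["O"]                     # g O2 consumed per mole of "algae"

print(f"g O2 / g algal dry matter: {o2_g / algae_mw:.2f}")       # ~1.24
print(f"g O2 / g algal carbon:     {o2_g / (106 * MW['C']):.2f}")  # ~3.47

# --- Reported Vernalis regression (r = 0.8628, n = 71) -------------------
# Plug-in of the published coefficients; units are those of the original
# BOD10-vs-chl-a plot, which the slide does not restate.
def bod10_from_chl(chl):
    return 9.5186 * chl - 23.203
```

At the assumed k, roughly half of the ultimate BOD is exerted in the critical reach at 2,000 cfs versus nearly all of it at 200 cfs, which is the low-flow point the slide makes.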
- Understand Whether Control of Nutrients (N or P) at That Location(s) Could Limit Algal Biomass at the Tributary Mouth
- Are There Areas in a Tributary Where Focused Nutrient Control Could Reduce Algal Biomass at the Tributary Mouth?

Importance of Nutrient Discharge Depends on:
- Rate/Amount of Discharge
- When Discharge Occurs (Summer/Fall; Winter/Spring)
- Distance/Travel Time for Algal Growth from Point of Discharge to DWSC
- Fate (Loss to Atmosphere) of Oxygen Produced with Algal Growth in DWSC & Upstream
- Amount of SJR Water/Algae Diverted to Ag Fields & Down Old River
- Amount of Water/Algae Added to SJR & Its Tributaries That Enters DWSC
- Grazing of Phytoplankton by Zooplankton & "Clams"
- Turbidity (Light Penetration)
- Amount of "Surplus" N & P in DWSC
- Amount of Soluble Ortho-P & Total P
  - Soluble Ortho-P Is the Primary Algal Nutrient

Importance of Nutrient Discharge
- Relationship among These Factors Poorly Understood
  - Need Focused, Detailed Studies of Algal Nutrient Dynamics in SJR Watershed & DWSC
- Essential for Technically Valid Allocation of Oxygen Demand Responsibility
- How Will Regulation Proceed?
  - Will Information Be Developed?
  - Will an Arbitrary Allocation of TMDL Control and Responsibility Be Assigned?

Significance of Algal Growth in DWSC to Its Oxygen Resources
- P. Lehman has shown, based on algal growth in laboratory incubations, that there is appreciable algal development in the DWSC.
- Growth in the DWSC could, at times, be equal to upstream algal chlorophyll loads.
- Issue: Is growth of algae in the DWSC a significant additional source of oxygen demand that leads to increased DO depletion in the DWSC? No: algal growth occurs with oxygen production, and since the DWSC is under-saturated in DO, all DO produced is available to satisfy the oxygen demand of the algae developed in the DWSC.

What Is Known about Sources of Oxygen Demand?
- In Summer 2000, 50 to 70% of the Oxygen Demand in SJR at Vernalis Originated in the Watersheds of Mud & Salt Sloughs and SJR at Lander Ave.
- In Fall 1999, Fall 2000, and Summer 2001, City of Stockton Wastewater Discharges to SJR Were a Significant Source of Ammonia, Which Contributed to Oxygen Demand in DWSC.

[Figure: Schematic Representation of Algal Growth in San Joaquin River]

Algae Control in Mud & Salt Sloughs & SJR above Lander Ave
- High Concentrations (Loads) of Algae in Mud & Salt Sloughs & SJR above Lander Ave (Hwy 165) Lead to These Areas' Being Significant Sources of Algae-Caused Oxygen Demand in SJR at Vernalis.
- Management Issue: Summer Conditions Typically Provide Residence Time of a Week or More for Nutrient-Rich Water (Sol Ortho-P > 100 µg P/L and NO₃⁻ + NH₃ > 1 mg N/L) to Develop the Algal Concentrations Found in Mud & Salt Sloughs and SJR at Lander Ave.
- Are There Areas in Mud & Salt Sloughs & SJR above Lander Ave Where the Water Could Be Treated with Alum to Remove Sol Ortho-P and Algae, to Significantly Reduce the Summer Algal Concentrations in the Discharges from Mud & Salt Sloughs and SJR at Lander Ave to the San Joaquin River?
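For scale, a stoichiometric alum dose for waters at the >100 µg P/L soluble ortho-P level mentioned above can be estimated. The 2:1 Al:P molar ratio is an assumed design value (not from the slides), and field doses are typically several times stoichiometric because Al(OH)₃ side reactions consume aluminum; this is only an order-of-magnitude sketch:

```python
MW_P = 30.974
MW_ALUM = 594.4            # Al2(SO4)3 . 14 H2O, two Al per formula unit

def alum_dose_mg_per_L(ortho_p_ugP_per_L, al_to_p_molar=2.0):
    """Stoichiometric alum dose for ortho-P precipitation.

    al_to_p_molar = 2 is an ASSUMED design ratio; practical doses run
    several-fold higher than stoichiometric.
    """
    p_umol = ortho_p_ugP_per_L / MW_P            # umol P per litre
    alum_umol = p_umol * al_to_p_molar / 2.0     # two Al per alum formula unit
    return alum_umol * MW_ALUM / 1000.0          # mg alum per litre

print(f"{alum_dose_mg_per_L(100):.1f} mg/L alum")   # ~1.9 mg/L at 100 ug P/L
```

Even with a safety factor of several, the dose is modest; the harder problems, as the next slide notes, are hydraulics, sludge handling, and selenium in the removed solids.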
Algae Control in Mud & Salt Sloughs & SJR above Lander Ave
- Need to Determine if There Are Locations at Which Alum Could Be Added, or Biological P Removal Could Be Practiced, to Significantly Reduce Summer Algal Concentrations in Discharges from Mud & Salt Sloughs and SJR above Lander Ave to the San Joaquin River
- Need to Evaluate Nutrient Concentrations & Algae Sources in Mud & Salt Sloughs and SJR above Lander Ave
- Need to Define Hydrology (Flow & Residence Times)
- Need to Be Able to Add Alum & Periodically Remove Alum/Algae Sludge
- Must Consider Selenium Concentrations in Sludge & Its Appropriate Management

Management Approach
- CVRWQCB Organized Stakeholder Process to Develop TMDL for Oxygen Demand Substances and Allocation of Loads among Municipal Wastewater/Stormwater Dischargers, Agricultural Runoff/Tail Water, Dairies, Feedlots, Riparian Wetlands Runoff/Releases
- If the Stakeholders Do Not Develop a Consensus Allocation of Responsibility by December 2002, CVRWQCB Will Assign Allocation of Load Reduction
- CALFED Provided $866,000 for Studies in 2000 and $2.5 Million/yr for 1 yr to Conduct Studies Needed to Develop TMDL and Allocate Responsibility for Control of Low DO in DWSC
- Total 3-yr Study Effort in Excess of $6 Million

TMDL Process
- Define Water Quality Problem
- Define Pollutant Sources
- Define TMDL Goal
- Linkage between Sources/Loads of Pollutants & Water Quality Impacts
- [Flowchart: Technical TMDL → Allocation of Responsibility → Implementation of Control Programs → Monitoring; Phase I → Phase II TMDL]

Regulatory Issues
- Clean Water Act (CWA) Includes TMDL to Ensure Compliance with Water Quality Standards
- While Conceptually Appropriate, in Practice Requires Adequate Time & Funding to Provide the Technical Information Base to Reliably Implement the Approach
- Thus Far, US Congress, State Legislators, & Stakeholders Unwilling to Fund Proper CWA Implementation
- SJR Technical Support Funding, While Substantial Owing to CALFED Support, Is Inadequate for the Time Allowed
- Funding for TMDL Development & Allocation of Oxygen Demand among Sources Not Adequate to Allow Development of the Technically Valid Information Needed to Meet the June 2003 Deadline
- Current SJR Steering Committee Stakeholder-Based Approach Not Adequate to Meet Deadlines

Control of Oxygen Depletion
- Increased SJR Flow through DWSC
  - Shortens Hydraulic Residence Time
- Aeration of Channel
  - Funding?
- Reduce Oxygen Demand Load to DWSC
  - Control C-BOD, N-BOD, Algae
- Allocation of Oxygen Demand Load
  - Based on % Allowed Oxygen Demand Load to DWSC from Each Tributary
  - Will Need to Allocate Oxygen Demand Load within Each Tributary

Channel Aeration
- Likely Need Selective Aeration of DWSC to Eliminate All Low-DO Problems
  - Sidestream with Air or 100% O₂
- Funding: Who Will Pay for It?
  - All Responsible Parties?

Issues That Will Need to Be Addressed
- Export/Loss of BODu, CBOD, NBOD, Algae, N and P between Source (Land Runoff/Discharges) and DWSC
- Assess Additional Oxygen Demand and Nutrient Loads to SJR, & Losses, between Vernalis and Channel Point in DWSC
- Impact of SJR Flow at Vernalis and in DWSC on DWSC DO Depletion
- Understanding the Factors Controlling the Impacts of SJR Flow through DWSC on DO Depletion below WQOs

Issues That Will Need to Be Addressed
- Understanding the Significance of DWSC DO Excursions below 5 mg/L for a Few Hours to a Few Days on the Growth Rates of Fish in DWSC
- Assessing the Significance of DO Depletion below 6 mg/L in Inhibiting Upstream Chinook Salmon Migration
- Cost of Controlling N, P, NBOD, and CBOD from Wastewater, Stormwater Runoff, and Irrigation Return (Tail) Water
- Can a Reliable Oxygen-Demand-Load/DO-Depletion-below-WQO Model for a Given SJR DWSC Flow Be Developed That Can Be Used to Establish a Reliable Oxygen Demand TMDL?
- How to Best Manage the Increasing Urbanization (approx. 2%/yr) of the SJR DWSC Watershed with Its Potentially Increased Oxygen Demand Load

Responsibility for SJR DWSC DO Depletion below Water Quality Objective: Future Urban Development in Watershed
- How Will Future Development in the SJR DWSC Watershed Be Controlled So That the Increased Oxygen Demand and Nutrients Associated with Urban Development Will Not Cause Future Low DO Problems in the DWSC?

Conclusions
- The San Joaquin River Deep Water Ship Channel Low DO Problem Is Primarily Due to the Discharge/Release of Aquatic Plant Nutrients That Develop into Algae That Die and Consume Oxygen in the Deep Water Ship Channel
- NH₃ Discharged by Stockton Is an Important Source of Oxygen Demand
- The Oxygen Demand Assimilative Capacity of the San Joaquin River Has Been Greatly Reduced by Construction of the Deep Water Ship Channel
- Upstream Diversions of SJR Flow Exacerbate the DO Depletion Problem

Conclusions
- Nutrient Control from Agricultural, Wetland, and Other Rural Sources Will Not Likely Eliminate the Algal-Related Oxygen Demand So That Violations of the DO Water Quality Objectives Do Not Occur.
- A Combination of In-stream Aeration and Nutrient and Oxygen Demand Control Will Be Needed to Control Low DO Problems.
- Will It Be Possible to Obtain Financial Support from Water Diverters and Those Who Benefit from the Existence of the Channel to Help Pay for Nutrient Control and Aeration?

Issues Report
- Discusses the Issues That Will Need to Be Addressed to Control the Low DO Problem
- "Issues in Developing the San Joaquin River Deep Water Ship Channel DO TMDL," Report to the San Joaquin River Dissolved Oxygen Total Maximum Daily Load Steering Committee and the Central Valley Regional Water Quality Control Board, Sacramento, CA. Submitted by G. Fred Lee, PhD, PE, DEE and Anne Jones-Lee, PhD, G. Fred Lee & Associates, El Macero, California (firstname.lastname@example.org, www.gfredlee.com), August 17, 2000.

Further Information
- Consult the Website of Drs. G. Fred Lee and Anne Jones-Lee: http://www.gfredlee.com
Unveiling radio halos in galaxy clusters in the LOFAR era R. Cassano$^1$, G. Brunetti$^1$, H. J. A. Röttgering$^2$, and M. Brüggen$^3$ $^1$ INAF – Istituto di Radioastronomia, via P. Gobetti 101, I-40129 Bologna, Italy e-mail: email@example.com $^2$ Leiden Observatory, Leiden University, Oort Gebouw, PO Box 9513, 2300 RA Leiden, The Netherlands $^3$ Jacobs University Bremen, PO Box 750 651, 28725 Bremen, Germany Received 4 August 2009 / Accepted 7 October 2009 ABSTRACT **Aims.** Giant radio halos are megaparsec-scale synchrotron sources detected in a fraction of massive and merging galaxy clusters. Radio halos provide one of the most important pieces of evidence of non-thermal components in large-scale structure. Statistics of their properties can be used to discriminate among various models for their origin. Therefore, theoretical predictions of the occurrence of radio halos are important as several new radio telescopes are about to begin to survey the sky at low frequencies with unprecedented sensitivity. **Methods.** We carry out Monte Carlo simulations to model the formation and evolution of radio halos in a cosmological framework. In the context of the turbulent re-acceleration model, we extend previous work on the statistical properties of radio halos. **Results.** We first compute the fraction of galaxy clusters that show radio halos and derive the luminosity function of the radio halos. We then derive differential and integrated number count distributions of radio halos at low radio frequencies to explore the potential of the upcoming LOFAR surveys. By restricting ourselves to clusters at redshifts $<0.6$, we find that the planned LOFAR all-sky survey at 120 MHz is expected to detect about 350 giant radio halos. About half of these halos have spectral indices greater than 1.9 and brighten substantially at lower frequencies. If detected, they will enable us to confirm that turbulence accelerates the emitting particles.
We also propose that commissioning surveys, such as MS$^3$, have the potential to detect about 60 radio halos in clusters of the ROSAT brightest cluster sample and its extension (eBCS). These surveys will allow us to constrain how the rate of formation of radio halos in these clusters depends on cluster mass. **Key words.** radiation mechanisms: non-thermal – galaxies: clusters: general – radio continuum: general – X-rays: general 1. Introduction Radio halos are diffuse Mpc-scale radio sources observed at the center of $\sim 30\%$ of massive galaxy clusters (e.g., Feretti 2005; Ferrari et al. 2008, for reviews). These sources emit synchrotron radiation produced by GeV electrons diffusing through $\mu$G magnetic fields and provide the most important evidence of non-thermal components in the intra-cluster medium (ICM). Clusters hosting radio halos always display evidence of very recent or ongoing merger events (e.g., Buote 2001; Schuecker et al. 2001; Govoni et al. 2004; Venturi et al. 2008). This suggests a connection between the gravitational process of cluster formation and the origin of these halos. Cluster mergers are expected to be the most important sources of non-thermal components in galaxy clusters. A fraction of the energy dissipated during these mergers could be channelled into amplification of the magnetic fields (e.g., Dolag et al. 2002; Brüggen et al. 2005; Subramanian et al. 2006; Ryu et al. 2008) and into the acceleration of high energy particles by shocks and turbulence (e.g., Enßlin et al. 1998; Sarazin 1999; Blasi 2001; Brunetti et al. 2001, 2004; Petrosian 2001; Miniati et al. 2001; Fujita et al. 2003; Ryu et al. 2003; Hoeft & Brüggen 2007; Brunetti & Lazarian 2007; Pfrommer et al. 2008; Brunetti et al. 2009).
A promising scenario proposed to explain the origin of the synchrotron emitting electrons in radio halos assumes that electrons are re-accelerated by the interaction with MHD turbulence injected into the ICM in connection with cluster mergers (turbulent re-acceleration model, e.g., Brunetti et al. 2001; Petrosian 2001). An alternative possibility is that the emitting electrons are continuously injected by pp collisions in the ICM (secondary models; e.g., Dennison 1980; Blasi & Colafrancesco 1999). In the picture of the turbulent re-acceleration scenario, the formation and evolution of radio halos are tightly connected with the dynamics and evolution of the hosting clusters. Indeed, the occurrence of radio halos at any redshift depends on the rate of cluster-cluster mergers and on the fraction of the merger energy channelled into MHD turbulence and re-acceleration of high energy particles. In the past few years, this has been modeled by Monte Carlo procedures (Cassano & Brunetti 2005; Cassano et al. 2006a) that provide predictions verifiable by future instruments. In this scenario radio halos have a relatively short lifetime ($\approx 1$ Gyr), and the fraction of galaxy clusters in which radio halos are generated is expected to increase with cluster mass (or X-ray luminosity), since the energy of the turbulence generated during cluster mergers is expected to scale with the cluster thermal energy (which scales roughly as $\sim M^{0.75}$; e.g., Cassano & Brunetti 2005). It has been shown that the predicted occurrence of radio halos as a function of the cluster mass (or X-ray luminosity) is in line with results obtained from a large observational project, the “GMRT radio halo survey” (Venturi et al. 2007, 2008), and its combination with studies of nearby halos based on the NVSS survey (e.g., Cassano et al. 2008). 
The steep spectrum of radio halos makes these sources ideal targets for observations at low radio frequencies, suggesting that present radio telescopes can only detect the tip of the iceberg of their population (Enßlin & Röttgering 2002; Cassano et al. 2006a; Hoeft et al. 2008). The discovery of the giant and ultra-steep spectrum radio halo in Abell 521 at low radio frequencies (Brunetti et al. 2008) allows a first confirmation of this conjecture and provides a glimpse of what future low frequency radio telescopes, such as the Low Frequency Array (LOFAR)\footnote{\url{http://www.lofar.org}} and the Long Wavelength Array (LWA, e.g., Ellingson et al. 2009), might find in upcoming years. LOFAR promises an impressive gain of two orders of magnitude in sensitivity and angular resolution over present instruments in the frequency range 15–240 MHz, and as such will open up a new observational window to the Universe. In particular, LOFAR is expected to contribute significantly to the understanding of the origin and evolution of the relativistic matter and magnetic fields in galaxy clusters. The main focus of the present paper is to provide a theoretical framework for the interpretation of future LOFAR data by quantifying expectations for the properties and occurrence of giant radio halos in the context of the turbulent re-acceleration scenario. In particular, in Sect. 2 we summarize the main ingredients used in the model calculations and provide an extension of the results of previous papers on the occurrence of radio halos in clusters (Sect. 2.1) and on the expected radio halo luminosity functions (Sect. 2.2). In Sect. 3, we derive the expected number counts of radio halos at 120 MHz and explore the potential of LOFAR surveys. Our conclusions are given in Sect. 4. A $\Lambda$CDM ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$) cosmology is adopted throughout the paper. ## 2.
Statistical modelling of giant radio halos in galaxy clusters Turbulence generated during cluster mergers may accelerate relativistic particles and produce diffuse synchrotron emission from Mpc regions in galaxy clusters (e.g., Brunetti et al. 2008). Diffuse radio emission in the form of giant radio halos should be generated in connection with massive mergers and fade away as soon as turbulence is dissipated and the emitting electrons cool due to radiative losses. It is likely that the generation of turbulence and the acceleration of particles persist for a few crossing times of the cluster-core regions, implying a lifetime of about 1 Gyr. Since the physics of the proposed scenario is rather uncertain, we choose to model the properties of the halos and their cosmic evolution using a simple statistical approach. By means of Monte Carlo calculations, we take into account the main processes that play a role in this scenario. These include the rate of cluster-cluster mergers in the Universe and their mass ratios, and the fraction of the energy dissipated during these mergers that is channelled into MHD turbulence and acceleration of high energy particles (Cassano & Brunetti 2005; Cassano et al. 2006a). We refer the reader to these papers for details; here we briefly report the essential steps that enter into the calculations: i) The formation and evolution of galaxy clusters are computed by the extended Press & Schechter approach (1974, hereafter PS; Lacey & Cole 1993), which is based on the hierarchical theory of cluster formation. The PS mass function shows good agreement with that derived from $N$-body simulations, at least for relatively low redshifts and masses $\sim 10^{14} - 10^{15}$ h$^{-1}$ $M_\odot$ (e.g., Springel et al. 2005), although it has the tendency to underestimate the number density of systems with mass $\geq 10^{15}$ h$^{-1}$ $M_\odot$ (e.g., Governato et al. 1999; Bode et al. 2001; Jenkins et al. 2001).
Given the present-day mass and temperature of the parent clusters, the cluster merger history (\textit{merger trees}) is obtained by using Monte Carlo simulations. We simulate the formation history of $\sim 1000$ galaxy clusters with present-day masses in the range $2 \times 10^{14} - 6 \times 10^{15}$ $M_\odot$. This allows a statistical description of the cosmological evolution of galaxy clusters and of the merger events with reasonable accuracy. ii) The generation of the turbulence in the ICM is estimated for each merger identified in the \textit{merger trees}. The resulting turbulence is assumed to be generated and then dissipated within a timescale of the order of the cluster-cluster crossing time in that merger\footnote{The cascading timescale of large-scale turbulence is expected to be of the same order as the cluster-cluster crossing time (e.g., Cassano & Brunetti 2005; Brunetti & Lazarian 2007).}. Furthermore, it is assumed that turbulence is generated in the volume swept by the subcluster infalling into the main cluster and that a fraction, $\eta_t$, of the \textit{PdV} work done by this subcluster goes into the excitation of fast \textit{magneto-acoustic waves}. The \textit{PdV} work is estimated to be $\rho \pi r_s^2 v_t^2 R_s$, where $\rho$ is the ICM density of the main cluster averaged over the swept cylinder, $v_t$ is the impact velocity of the two subclusters, $r_s$ is the stripping radius (see also Sect. 2.1), and $R_s$ is the virial radius of the main cluster (see Cassano & Brunetti 2005, for details). iii) The resulting spectrum of MHD turbulence generated by the chain of mergers in any synthetic cluster and its evolution with cosmic time is computed by taking into account the injection of waves and their damping in a collisionless plasma.
Acceleration of particles by this turbulence and their evolution is computed in connection with the evolution of synthetic clusters by solving Fokker-Planck equations and including the relevant energy losses. iv) This procedure allows for the exploration of the statistical properties of radio halos. Following Cassano et al. (2006a), we consider homogeneous models, i.e. without spatial variation in the turbulent energy, acceleration rate, and magnetic field in the halo volume. We assume a value of the magnetic field, averaged over a region of radius $R_{\text{H}} = 500$ h$_{70}^{-1}$ kpc, which scales with the virial mass of clusters, $M_v$, as $$\langle B \rangle = B_{\langle M \rangle} \left( \frac{M_v}{\langle M \rangle} \right)^b, \qquad (1)$$ where $b > 0$ is a parameter that enters into the model calculations. Equation (1) is motivated by numerical cosmological (MHD) simulations that found a scaling of the magnetic field with the temperature or mass of the simulated clusters (e.g., Dolag et al. 2002)\footnote{Dolag et al. (2002) found a scaling $B \propto T^{2}$, which would imply that $B \propto M^{4/3}$ if the virial scaling $T \propto M^{2/3}$ is assumed.}. In this model the synchrotron spectrum of a radio halo steepens at high frequencies, since turbulent re-acceleration can maintain the emitting electrons only up to a maximum energy. This steepening makes it difficult to detect these sources at frequencies higher than the frequency, \( \nu_s \), at which the steepening becomes severe (see Fig. 1), where \( \nu_s \) is expected to be a few times higher than the break frequency, \( \nu_b \), and \( \nu_b \) depends on the acceleration efficiency in the ICM, \( \chi \), being defined by (e.g., Cassano et al. 2006a) \[ \nu_b \propto \langle B \rangle \gamma_{\text{max}}^2 \propto \frac{\langle B \rangle \chi^2}{(\langle B \rangle^2 + B_{\text{cmb}}^2)^2}.
\] (2) The transit time damping (TTD) is the most important collisionless resonance between the magnetosonic waves and particles, and is produced by the interaction of the compressible component of the magnetic field of these waves with the particles (e.g., Schlickeiser & Miller 1998; Cassano & Brunetti 2005; Brunetti & Lazarian 2007). In this case \( \chi \approx 4D_{pp}/p^2 \), where \( p \) is the momentum of the electrons and \( D_{pp} \) is the electron diffusion coefficient in momentum space due to the coupling with turbulent waves. In the case of a single merger between a cluster with mass \( M_c \) and a subcluster of mass \( \Delta M \), Cassano & Brunetti (2005) derived that \( \chi \) can be approximated by \[ \chi \propto \frac{\eta_t}{R_H^3} \left( \frac{M_c + \Delta M}{R_c} \right)^{3/2} \frac{r_s^{3/2}}{\sqrt{k_B T}} \times \begin{cases} 1 & \text{if } r_s \leq R_H \\ (R_H/r_s)^2 & \text{if } r_s > R_H , \end{cases} \] (3) where \( r_s \) is the stripping radius of the subcluster crossing the main cluster, i.e., the distance from the center of the subcluster where the static pressure equals the ram pressure (see Cassano & Brunetti 2005 for details), \( R_H \) is the size of the radio halo, and \( R_c \) and \( T \) are the virial radius and temperature of the main cluster, respectively. Combined with Eq. (2), this implies that higher values of \( \nu_b \) are expected in the more massive clusters, \( \nu_b \propto (M_c/R_c)^{3/2}/T \propto M_c^{3/2} \) (here considering for simplicity a fixed value of \( B \); see Cassano et al. 2006a for a more general discussion), and in connection with major merger events, \( \nu_b \propto (1 + \Delta M/M)^{3/2} \) (\( r_s \) in Eq. (3) also increases with \( \Delta M/M \)). Monte Carlo simulations can now be used to follow cluster mergers and to explore how different mergers contribute to the acceleration (efficiency) of relativistic particles in the ICM.
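The scalings in Eqs. (1)-(2) can be sketched numerically. The normalization mass `M_ref` and the unit acceleration efficiency `chi` below are hypothetical placeholders (the text does not fix them here), so only ratios of the returned values are meaningful; the inverse-Compton equivalent field $B_{\rm cmb} = 3.25(1+z)^2\,\mu$G is the standard expression:

```python
# Sketch of Eqs. (1)-(2); nu_b is only a proportionality, so the returned
# values are in arbitrary units (ratios are meaningful, absolute values not).
B_M, b = 1.9, 1.5          # reference parameters from the text (muG, -)
M_ref = 1.6e15             # ASSUMED normalization mass <M> in Msun

def mean_B(M_v):
    """Eq. (1): <B> = B_<M> (M_v / <M>)^b, in muG."""
    return B_M * (M_v / M_ref) ** b

def nu_b_arbitrary(M_v, z, chi=1.0):
    """Eq. (2): nu_b ~ <B> chi^2 / (<B>^2 + B_cmb^2)^2, arbitrary units."""
    B = mean_B(M_v)
    B_cmb = 3.25 * (1 + z) ** 2          # IC-equivalent CMB field, muG
    return B * chi**2 / (B**2 + B_cmb**2) ** 2

# Inverse-Compton losses grow as (1+z)^4, suppressing nu_b at high redshift
# for a fixed mass and acceleration efficiency:
low_z = nu_b_arbitrary(1e15, 0.05)
high_z = nu_b_arbitrary(1e15, 0.55)
print(f"nu_b(z=0.55)/nu_b(z=0.05) = {high_z/low_z:.2f}")
```

This reproduces, qualitatively, the point made later in the section: at fixed mass, halos at $z > 0.5$ have markedly lower break frequencies than nearby ones.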
Consequently, this allows a statistical modeling of \( \nu_b \) to be performed within a synthetic cluster sample and the derivation of its statistical dependence on cosmic time and cluster mass. Surveys cannot detect radio halos that have \( \nu_s \) lower than the observing frequency, since the spectrum of these halos should be very steep and their emission should fall below the survey detection limit (Fig. 1). To investigate the statistical behavior of the population of radio halos at different frequencies, we only consider halos to be observable when \( \nu_s \geq \nu_0 \). Figure 2 shows the ratio \( \nu_s/\nu_b \) calculated for homogeneous models of radio halos, defining \( \nu_s \) as the frequency at which the synchrotron spectral index of these halos is \( \alpha = 1.9 \) (\( \alpha \) being calculated between \( \nu_s/2.5 \) and \( \nu_s \) to mimic 600–1400 MHz spectra); since \( \nu_s/\nu_b \) is only mildly dependent on the magnetic field strength and the assumed fraction of turbulent energy injected, we adopt \( \nu_s \sim 7\nu_b \). A statistical modeling of \( \nu_b \) provides a statistical evaluation of \( \nu_s \) in the synthetic cluster sample. In the context of the turbulent acceleration model for giant radio halos, energetics arguments imply that halos with \( \nu_s \geq 1 \) GHz must be generated in connection with the most energetic merger events in the Universe. Only these mergers can produce the efficient acceleration necessary to have relativistic electrons emitting at these frequencies (Cassano & Brunetti 2005). Present surveys carried out at \( \nu_0 \sim 1 \) GHz detect radio halos only in the most massive and merging clusters (e.g., Buote 2001; Venturi et al. 2008), and their occurrence has been used to constrain the value of \( \eta_t \approx 0.1\text{–}0.3 \) in the models (Cassano & Brunetti 2005).
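The observability criterion adopted in the text ($\nu_s \approx 7\nu_b$, with a halo counted only when $\nu_s \geq \nu_0$) reduces to a one-line check; the 50 MHz break frequency below is a hypothetical example value:

```python
def observable(nu_b_MHz, nu_obs_MHz):
    """A halo is counted as observable when nu_s = 7 nu_b >= nu_obs,
    using the nu_s ~ 7 nu_b factor adopted in the text."""
    return 7.0 * nu_b_MHz >= nu_obs_MHz

# A halo with nu_b = 50 MHz steepens near nu_s ~ 350 MHz: it would enter
# a 120 MHz survey sample but be missed by a 1.4 GHz survey.
print(observable(50, 120), observable(50, 1400))   # True False
```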
Similar energetics arguments can be used to claim that radio halos with lower values of \( \nu_s \) must be more common, since they can be generated in connection with less energetic phenomena, e.g., major mergers between less massive systems or minor mergers in massive systems (e.g., Eqs. (2)–(3)), that are more common in the Universe (e.g., Cassano et al. 2008). In Fig. 3, we plot the fraction of clusters that host radio halos with \( \nu_s \geq \nu_0 \) as a function of the cluster mass and by considering two redshift ranges: 0–0.1 (left panel) and 0.4–0.5 (right panel); this is obtained by assuming a reference set of model parameters, namely \( B_{\langle M \rangle} = 1.9\,\mu\text{G} \), \( b = 1.5 \), \( \eta_t = 0.2 \) (see also Cassano et al. 2006a). As expected, the fraction of clusters with halos increases at lower values of \( \nu_0 \), and the size of this increment depends on the considered mass and redshift of the parent clusters, being greater at lower cluster masses and at higher redshifts.

Fig. 3. Fraction of clusters with radio halos, with $\nu_s \geq \nu_0$, as a function of the cluster mass in the redshift range $0–0.1$ (left panel) and $0.4–0.5$ (right panel). Calculations assume $\nu_0 = 1.4$ GHz, 240 MHz, 150 MHz, 120 MHz, and 74 MHz (from bottom to top).

Fig. 4. Fraction of clusters with radio halos with $\nu_s \geq 120$ MHz (black, upper, solid lines) as a function of the cluster mass in the redshift range $0–0.1$ (left panel) and $0.5–0.6$ (right panel). The fractions of clusters with radio halos with $\nu_s$ in different frequency ranges are also shown: $\nu_s \geq 1400$ MHz, $600 < \nu_s < 1400$ MHz, $240 < \nu_s < 600$ MHz, and $120 < \nu_s < 240$ MHz (from top to bottom).

In Fig. 4, we plot the fraction of radio halos with $\nu_s \geq 120$ MHz (black upper line) and the differential contribution to this fraction from radio halos with $\nu_s$ in four frequency ranges (see figure caption for details). For nearby systems (Fig.
4, left panel), a significant fraction of massive clusters, $M_v > 10^{15} M_\odot$, is expected to host radio halos with $\nu_s \geq 120$ MHz; a sizeable fraction of them with $\nu_s > 600$ MHz (blue and magenta lines). On the other hand, the majority of radio halos in clusters with mass $M_v \lesssim 10^{15} M_\odot$ would have very steep spectra if observed at GHz frequencies, $\nu_s < 600$ MHz (red line and black dot-dashed line). Our calculations suggest that a similar situation is expected for clusters at higher redshift (Fig. 4, right panel). Radio halos with higher values of $\nu_s$ become much rarer with increasing redshift, mainly because the unavoidable inverse Compton losses at these redshifts limit the maximum energy of the accelerated electrons in these systems. At $z > 0.5$, only merging clusters with mass $M_v \gtrsim 2 \times 10^{15} M_\odot$ have a sizeable chance of hosting giant radio halos with $\nu_s \geq 1.4$ GHz, and an increasing contribution to the percentage of radio halos at higher redshift comes from halos with lower $\nu_s$. 2.2. The radio halo luminosity function The luminosity functions of radio halos (RHLFs), i.e., the number of halos per comoving volume and radio power, with $\nu_s \geq 1.4$ GHz were derived by Cassano et al. (2006a) to be $$\frac{dN_{H}(z)}{dV \, dP(1.4)} = \frac{dN_{H}(z)}{dM \, dV} \left| \frac{dP(1.4)}{dM} \right|^{-1},$$ (4) where \( \frac{dN_H(z)}{dM \, dV} \) is the theoretical mass function of radio halos with \( \nu_s \geq 1.4 \) GHz, which is obtained by combining Monte Carlo calculations of the fraction of clusters with halos and the PS mass function of clusters (e.g., Eq. (18) in Cassano et al. 2006a). The quantity \( \frac{dP(1.4)}{dM} \) can be estimated from the correlation between the 1.4 GHz radio power, \( P(1.4) \), and the mass of the parent clusters that is observed for radio halos (e.g., Giovannini et al. 2001; Cassano et al. 2006a). Cassano et al.
(2006a) discussed the \( P(1.4)-M \) correlation in the context of the turbulent acceleration model and demonstrated that the slope is consistent with the observed value (\( \alpha_M = 2.9 \pm 0.4 \)) for a well constrained region of the parameter space (\( B_{\langle M \rangle} \), \( b \), and \( \eta_t \)). As shown in Fig. 7 of Cassano et al. (2006a), the model parameters adopted in the present paper, i.e., \( \langle B \rangle = 1.9\,\mu\text{G} \), \( b = 1.5 \), and \( \eta_t = 0.2 \), fall in this region. In particular, the value of the derivative \( \frac{dP(1.4)}{dM} \) in Eq. (4) depends on the set of parameters (\( B_{\langle M \rangle} \), \( b \)) that, in the case of the reference model we use in this paper, sets \( \alpha_M = 3.3 \). To derive the RHLF at a frequency \( \nu_0 \), the contribution of all radio halos with \( \nu_s \geq \nu_0 \) should be taken into account. We first obtain the RHLF for halos with \( \nu_s \) in a given frequency interval, \( \Delta \nu_s \), and then combine the contributions from the different intervals:
\[ \frac{dN_H(z)}{dV\,dP(\nu_0)} = \sum_i \left( \frac{dN_H(z)}{dV\,dM} \right)_{\Delta \nu_{s,i}} \left( \frac{dP(\nu_0)}{dM} \right)^{-1}_{\Delta \nu_{s,i}}. \] (5)
To derive the contribution to the RHLF from radio halos with \( \nu_s \geq 1.4 \) GHz, we must calculate \( \frac{dP(\nu_0)}{dM} \) for these halos. This can be estimated from the \( P(1.4)-M \) correlation by assuming a monochromatic radio power of these halos at \( \nu_0 \) given by
\[ P_{\nu_0}(\nu_0, M_v) = P_{1.4}(1.4, M_v) \left( \frac{1400 \text{ MHz}}{\nu_0} \right)^\alpha, \] (6)
where \( P_{1.4}(1.4, M_v) \) is the monochromatic radio power at 1.4 GHz from the \( P(1.4)-M \) correlation, and \( \alpha \sim 1.3 \) is the typical spectral index of these halos, \( P(\nu) \propto \nu^{-\alpha} \) (e.g., Ferrari et al. 2008). We now consider the case of halos with \( \nu_s < 1.4 \) GHz.
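The rescaling of Eq. (6), together with the \( P(1.4)-M \) slope \( \alpha_M = 3.3 \) quoted above, can be sketched numerically. This is a minimal illustration: the normalization of the \( P(1.4)-M \) correlation (the factor \( 10^{24.5} \)) is a placeholder, not a fitted value.

```python
# Sketch of Eqs. (4)-(6): rescale the 1.4 GHz power of a halo with
# nu_s >= 1.4 GHz down to an observing frequency nu_0, assuming a
# pure power-law spectrum P(nu) ~ nu^(-alpha).
ALPHA = 1.3        # typical halo spectral index (Sect. 2.2)
ALPHA_M = 3.3      # slope of the P(1.4)-M correlation, reference model

def p14_from_mass(mass_1e15):
    """P(1.4 GHz) in W/Hz from cluster mass (units of 1e15 Msun).
    The normalization 10**24.5 is a placeholder, NOT a fitted value."""
    return 10**24.5 * mass_1e15**ALPHA_M

def rescale_power(p14, nu0_mhz, alpha=ALPHA):
    """Eq. (6): monochromatic power at nu0 for a halo with nu_s >= 1.4 GHz."""
    return p14 * (1400.0 / nu0_mhz)**alpha

p14 = p14_from_mass(1.0)
p120 = rescale_power(p14, 120.0)
# (1400/120)^1.3 ~ 24: such halos are ~24 times more luminous at 120 MHz
print(p120 / p14)
```

With \( \alpha = 1.3 \), the factor \( (1400/120)^{1.3} \approx 24 \) is what makes halos intrinsically much brighter at 120 MHz than at 1.4 GHz.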
The bolometric synchrotron power of a radio halo is expected to scale with \( \nu_b \) and \( B \) (e.g., Cassano et al. 2006a) such that
\[ P_{\text{syn}} \approx P(\nu_b)\,\nu_b \propto B\,\nu_b \;\Longrightarrow\; P(\nu_b) \propto B. \] (7)
From Eqs. (2)–(3), it is clear that clusters of the same mass \( M_v \) (and magnetic field \( B \)) at redshift \( z \) could have different values of \( \nu_b \), depending on the merger event responsible for the generation of the radio halo. Yet, for a fixed cluster mass (and consequently a fixed value of the magnetic field), Eq. (7) implies that the synchrotron power emitted at the break frequency, \( P(\nu_b) \), is constant. In addition, homogeneous models, which consider average values of \( B \) and of the maximum energy of the emitting electrons in the halo volume, also imply that \( P(\nu_b)\,\nu_b \propto P(\nu_s)\,\nu_s \) (Fig. 2). From Eq. (7), we can then derive the monochromatic radio power at \( \nu_0 \) of halos with a given \( \nu_s \) to be
\[ P_{\nu_0}(\nu_0, M_v) = P_{\nu_s}(\nu_s, M_v) \left( \frac{\nu_s}{\nu_0} \right)^\alpha = P_{1.4}(1.4, M_v) \left( \frac{\nu_s}{\nu_0} \right)^\alpha. \] (8)
This allows the evaluation of \( \left( \frac{dP(\nu_0)}{dM} \right)_{\Delta \nu_s} \) starting from \( \frac{dP(1.4)}{dM} \), and thus the derivation of Eq. (5). We also note that, from Eqs. (6) and (8), one has
\[ P_{\nu_0}(\nu_0, M_v) = P_{1.4}(\nu_0, M_v) \left( \frac{\nu_s}{1400 \text{ MHz}} \right)^\alpha, \] (9)
i.e., radio halos whose synchrotron spectra steepen at lower frequencies also have monochromatic radio powers at \( \nu_0 \) that are lower than those of radio halos with higher \( \nu_s \). As a relevant example, in Fig. 5 we report the expected RHLF at 120 MHz (black lines) for \( z = 0–0.1 \) (solid thick lines) and \( z = 0.5–0.6 \) (dashed thick lines), where we also show the relative contributions of halos with \( \nu_s < 600 \) MHz (red lines) and \( \nu_s > 600 \) MHz (blue lines).
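Eqs. (8)–(9) imply that, at fixed cluster mass, halos with lower \( \nu_s \) are fainter at the observing frequency. A short numerical sketch (the 1.4 GHz power below is a placeholder value, not taken from the correlation):

```python
# Eqs. (8)-(9): at nu_0 = 120 MHz, a halo with steepening frequency nu_s
# is fainter than a nu_s = 1.4 GHz halo of equal mass by (nu_s/1400)^alpha.
ALPHA = 1.3

def power_at_nu0(p14, nu_s_mhz, nu0_mhz=120.0, alpha=ALPHA):
    """Eq. (8): P_{nu0} for a halo with break frequency nu_s (MHz),
    anchored to the P(1.4)-M correlation value p14 (W/Hz)."""
    return p14 * (nu_s_mhz / nu0_mhz)**alpha

p14 = 1e25                            # placeholder 1.4 GHz power, W/Hz
bright = power_at_nu0(p14, 1400.0)    # halo observable up to 1.4 GHz
steep = power_at_nu0(p14, 240.0)      # ultra-steep spectrum halo
# Eq. (9): the ratio equals (240/1400)^1.3 ~ 0.10, i.e. the steep-spectrum
# halo is roughly ten times fainter at 120 MHz.
print(steep / bright)
```

A halo with \( \nu_s = 240 \) MHz is thus about ten times fainter at 120 MHz than a \( \nu_s = 1.4 \) GHz halo of the same mass, which is why steep-spectrum halos populate the faint end of the RHLF.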
As already discussed in Cassano et al. (2006a), the shape of the RHLF flattens at lower radio powers because of the decrease in the efficiency of particle acceleration in less massive clusters. We note that halos with \( \nu_s > 600 \) MHz (blue lines, Fig. 5) do not contribute to the RHLF at lower radio powers. This is because higher-frequency halos are generated in very energetic merger events, which must be extremely rare in smaller systems; consequently, their monochromatic radio power is greater than that of halos with \( \nu_s < 600 \) MHz (red lines, Fig. 5). Finally, we note that with increasing redshift the RHLFs decrease, due to the evolution of both the cluster mass function and the fraction of galaxy clusters with radio halos (Fig. 3; see also Cassano et al. 2006a). The evolution of the RHLF with \( z \) is stronger at higher radio powers, where the dominant contribution to the RHLF comes from halos with higher \( \nu_s \), and the fraction of clusters hosting these halos decreases more rapidly with redshift (e.g., Fig. 4).

### 3. Number counts of radio halos and LOFAR surveys at 120 MHz

It has been shown that model expectations for the occurrence of radio halos observed at \( \nu_0 = 1.4 \) GHz are consistent with the observed increase of the fraction of clusters with radio halos with cluster mass (Cassano et al. 2008) and with the number counts of nearby radio halos (Cassano et al. 2006a). As already discussed, in this paper we adopt a reference model with parameters \( \langle B \rangle = 1.9\,\mu\text{G} \), \( b = 1.5 \), \( \eta_t = 0.2 \). In Fig. 6, we report the number counts of giant radio halos expected with these parameters, compared with radio halo counts from the NVSS survey at low redshift, \( 0.044 \leq z \leq 0.2 \) (Giovannini et al. 1999), and from the GMRT radio halo survey at intermediate redshift, \( z = 0.2–0.32 \) (Venturi et al. 2007, 2008).

Fig. 6. All-sky integrated RHNCs for: (1) \( z = 0.044–0.2 \), obtained by considering a minimum mass of clusters constrained at any \( z \) by the XBACS X-ray flux limit (Ebeling et al. 1996) (dashed black line), and by combining the above mass constraint with that implied by the NVSS sensitivity (following Cassano et al. 2008, see their Fig. 3) (solid upper black line); (2) \( z = 0.2–0.32 \), obtained by considering the X-ray luminosity range of the GMRT cluster sample (Venturi et al. 2007, 2008; red lower line). Black filled points are the observed RHNC of giant radio halos from NVSS-selected clusters in the redshift range 0.044–0.2, re-normalized to account for the NVSS and XBACS sky coverage (and XBACS completeness). Red open points are the observed RHNC of giant radio halos in the GMRT cluster sample (with redshift \( z = 0.2–0.32 \)), re-normalized to account for the sky coverage of the GMRT cluster sample.

The latter is a pointed survey, down to 70 \( \mu \)Jy/beam at 610 MHz, of a sample of \( \sim 50 \) galaxy clusters extracted from the REFLEX (Böhringer et al. 2004) and eBCS (Ebeling et al. 1998, 2000) cluster catalogs. The clusters have redshifts \( z = 0.2–0.4 \) and X-ray luminosities \( L_X(0.1–2.4\ \text{keV}) \geq 5 \times 10^{44} \) erg s\(^{-1}\) (the X-ray sample is complete for \( z < 0.32 \); see Cassano et al. 2008). All halos in the survey have 1.4 GHz follow-up data. Besides the fair agreement between expectations and observations (see caption), we note that the GMRT radio halo survey is sufficiently sensitive to detect relatively faint halos and to constrain the flattening of the distribution of number counts of radio halos (RHNC) at lower fluxes. Encouraged by these results, in this section we derive the expected RHNC at 120 MHz and explore the potential of upcoming LOFAR surveys. Because in our simplified procedure the radio power of halos scales with a spectral slope \( \alpha = 1.3 \) (Eqs.
(6)–(9)) and the vast majority of halos is at \( z \sim 0.2–0.4 \), in the following we neglect the \( K \)-correction\footnote{For simplicity, we also consider as observable those halos with \( \nu_s \approx \nu_0 \) regardless of their redshift. This would slightly affect only the number counts of halos with \( \nu_s \lesssim \nu_0(1 + z) \), which represent a minimal fraction of our halo population.}.

### 3.1. LOFAR surveys

LOFAR will carry out surveys between 15 MHz and 210 MHz with unprecedented sensitivity and spatial resolution (e.g., Röttgering et al. 2006). The dense \( (u,v) \) coverage of LOFAR on short baselines also maximizes the capability of the instrument to detect extended sources of low surface brightness, such as radio halos. These surveys will constrain models of diffuse radio emission in galaxy clusters. In this paper, we assume an observing frequency \( \nu_0 = 120 \) MHz, at which LOFAR will carry out the deepest large-area radio surveys (e.g., Röttgering et al. 2006). The crucial step in our analysis is the estimate of the minimum diffuse flux from giant radio halos (integrated over a scale of \( \sim 1 \) Mpc) that is detectable by these surveys as a function of redshift. This depends on the brightness profiles of radio halos, which are known to decrease smoothly with distance from the cluster center (e.g., Govoni et al. 2001). Consequently, the outermost, low-brightness regions of halos will be difficult to detect. What matters, however, is the capability to detect the central, brightest regions of radio halos in the survey images. Following Brunetti et al. (2007), we adopt a shape of the radial profile of radio halos that is obtained from the analysis of well-studied halos. We assume a circular observing beam of \( 25 \times 25 \) arcsec and follow two complementary approaches: i) Since radio halos emit about half of their total radio flux within their half radius (Brunetti et al.
2007), we estimate the minimum flux of a detectable halo, \( f_{\text{min}}(z) \), by requiring that the mean brightness within \( R_H/2 \), \( \frac{2\pi\int_0^{R_H/2} I(r)\,r\,dr}{\pi(R_H/2)^2} \), is \( \xi \) times the rms, \( F \), of the survey, i.e.,
$$f_{\text{min}}(z) \simeq 10^{-3}\left(\frac{\xi F}{0.5\,\text{mJy/beam}}\right)\theta_H(z)^2\ \ [\text{mJy}],$$ (10)
where \( \theta_H(z) \) is the angular size of radio halos, in arcseconds, at a given redshift; this condition allows for the detection of the diffuse halo emission in the images produced by the survey. Injection of fake radio halos in the \( (u,v) \) plane of interferometric data from NVSS observations shows that radio halos at \( z \lesssim 0.3 \) become visible in the images as soon as their flux approaches that obtained by Eq. (10) with \( \xi = 1–2 \) (Cassano et al. 2008). ii) Following a second approach, we estimate the minimum flux of a detectable halo by requiring that the average brightness within 5 observing beams is \( 3\xi \) times the rms, \( F \), of the survey. The minimum flux is obtained from the condition
$$2\pi \int_0^{b_c} I(b)\,b\,db = 5\,S_{\text{beam}}(3\xi F),$$ (11)
where \( I(b) \) is the typical radial profile of halos (Brunetti et al. 2007), \( S_{\text{beam}} \) is the beam area, and \( b_c = (5S_{\text{beam}}/\pi)^{1/2} \). The aim of this second approach is to avoid any bias related to the redshift of the halos since, in the first approach, the sensitivity limit is reached across a fairly large area (many beams) for nearby radio halos, but only within an area of a few beams in the case of halos at \( z = 0.5–0.6 \). Figure 7 shows \( f_{\text{min}} \) of radio halos as a function of redshift (left panel), and the corresponding minimum radio power (right panel), obtained following the two approaches and assuming \( \xi F = 0.1, 0.25, 0.5, 1, 1.5 \) mJy/beam (see figure caption).
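Approach i) can be sketched as follows. The Gaussian beam area and the illustrative angular size \( \theta_H \approx 300 \) arcsec (roughly a 1 Mpc halo at \( z \approx 0.2 \)) are assumptions of this sketch, not survey specifications:

```python
import math

# Approach i) behind Eq. (10): a halo emits about half of its flux within
# half of its radius; require the mean brightness inside R_H/2 to reach
# xi times the survey rms F (in mJy/beam).
BEAM_FWHM = 25.0                                          # arcsec (tapered beam)
OMEGA_BEAM = math.pi / (4 * math.log(2)) * BEAM_FWHM**2   # Gaussian beam area, arcsec^2

def f_min_mjy(theta_h_arcsec, xi_f_mjy_beam):
    """Minimum total halo flux (mJy) for angular size theta_H (arcsec)."""
    half_area = math.pi * (theta_h_arcsec / 2.0)**2       # area within R_H/2
    # half the flux spread over half_area must average xi*F per beam:
    return 2.0 * xi_f_mjy_beam * half_area / OMEGA_BEAM

# A ~1 Mpc halo subtends ~300 arcsec at z ~ 0.2 (illustrative value only):
print(f_min_mjy(300.0, 0.25))   # ~ 50 mJy
```

The quadratic dependence on \( \theta_H \) is why nearby, angularly large halos need a high total flux to be detectable despite their proximity.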
\footnote{The 120 MHz LOFAR survey will have a full resolution of 5–6 arcsec; thus we are considering the case of tapered images that increase the sensitivity to extended emission without significantly changing the point-source sensitivity (due to the large number of inner LOFAR stations).} Given the RHLF, \( dN_H(z)/dP(\nu_0)\,dV \), the number of radio halos with \( f \geq f_{\text{min}}(z) \) in a redshift interval \( \Delta z = z_2 - z_1 \) is given by
\[ N_H^{\Delta z}(> f_{\text{min}}(z)) = \int_{z_1}^{z_2} dz' \left( \frac{dV}{dz'} \right) \int_{P_{\text{min}}(f_{\text{min}}, z')}^{\infty} \frac{dN_H(P(\nu_0), z')}{dP(\nu_0)\,dV}\,dP(\nu_0). \] (12)
In Fig. 8, we show the all-sky number of radio halos with \( \nu_s \geq 120 \) MHz in different redshift intervals detectable by typical LOFAR surveys of different sensitivities (0.1 … 1.5 mJy/beam, see figure caption), following the approaches i) (left panel) and ii) (right panel) described above. The LOFAR all-sky survey (e.g., Röttgering 2009, priv. comm.) is expected to reach an rms of 0.1 mJy/beam at 120 MHz. Considering case i) (Fig. 8, left panel) with \( \xi \sim 2–3 \), we predict that this survey will detect more than 350 radio halos at redshift \( z \leq 0.6 \) in the northern hemisphere (\( \delta \geq 0° \)) and at high Galactic latitudes (\( |b| \geq 20° \)). This will increase the statistics of radio halos by about a factor of 20 with respect to that produced by the NVSS. The LOFAR commissioning MS\(^3\) survey is expected to reach sensitivities of \( \approx 0.5 \) mJy/beam at 150 MHz. Based on our results, \( \approx 100 \) radio halos are expected to be discovered by this survey within a one-year timescale. The spectral properties of the population of radio halos visible in future low-frequency radio surveys are expected to change with the increasing sensitivity of these surveys. In Fig.
8, we show the total number of halos with \( \nu_s \geq 120 \) MHz (solid lines) and the number of halos with a spectral steepening at low frequencies, \( 120 \leq \nu_s \leq 600 \) MHz. The latter class of radio halos has a synchrotron spectral index \( \alpha > 1.9 \) in the range 250–600 MHz and would become visible only at low frequencies, \( \nu_0 < 600 \) MHz. We find that about 55% of the radio halos in the LOFAR all-sky survey at 120 MHz are expected to belong to this class of ultra-steep spectrum radio halos, while radio halos of higher \( \nu_s \) are expected to dominate the population in shallower surveys. This is simply because, for the reasons explained in Sect. 2.2, low-frequency radio halos are expected to populate the low-power end of the RHLF (e.g., Fig. 5). Complementary information is given in Fig. 9, which shows the expected distribution of halo spectral indices, with reference to the number distributions in Fig. 8, and its evolution with the sensitivity of the radio observations; the spectra in Fig. 9 have been calculated for the range 120–300 MHz assuming homogeneous models. Ultra-steep spectrum halos are a unique prediction of the turbulent re-acceleration model (e.g., Brunetti et al. 2008), and our expectations demonstrate the potential of LOFAR to constrain present models for the origin of radio halos.

### 3.2. Application to X-ray selected cluster samples

Although unbiased surveys of radio halos provide an important way to measure the occurrence of these sources (Sect. 3.1), a potential problem with these approaches is the identification of both the radio halos and their hosting clusters. This is because radio halos constitute only a very small fraction of the entire radio source population and need to be distinguished from confused regions produced by the superposition of radio AGNs and starburst galaxies. Alternatively, an efficient approach is to exploit deep LOFAR surveys of X-ray selected samples of galaxy clusters.
Here we derive the number of radio halos, and their flux and redshift distributions, that should be detected by LOFAR observations of X-ray selected clusters. There are several catalogs of X-ray selected clusters in the northern hemisphere that contain clusters extracted from the ROSAT All-Sky Survey (RASS, Trümper 1993). At redshift \( z \lesssim 0.3 \), the ROSAT Brightest Cluster Sample and its extension to lower X-ray fluxes (eBCS, Ebeling et al. 1998, 2000) and the Northern ROSAT All-Sky (NORAS) Cluster Survey (Böhringer et al. 2000) provide cluster catalogs with X-ray flux \( f_X(0.1–2.4\ \text{keV}) \gtrsim 3 \times 10^{-12} \) erg s\(^{-1}\) cm\(^{-2}\); the eBCS is 75% complete down to this flux limit. The extension of these catalogs to higher redshifts led to the Massive Cluster Survey (MACS, Ebeling et al. 2001), which contains clusters with \( f_X(0.1–2.4\ \text{keV}) \gtrsim 1 \times 10^{-12} \) erg s\(^{-1}\) cm\(^{-2}\) at \( z \approx 0.3–0.6 \). All these surveys have optical follow-ups and provide a useful starting point for detecting radio halos in LOFAR surveys. A well-known correlation exists between the synchrotron power of giant radio halos and the X-ray luminosity of the hosting clusters, \( P(1.4) \propto L_X^{\alpha_X} \), with \( \alpha_X \approx 2 \) (e.g., Liang et al. 2000; Bacchi et al. 2003; Enßlin & Röttgering 2002; Cassano et al. 2006a; Brunetti et al. 2009). This implies that the X-ray flux limit of a survey, \( f_X \), translates into a limit on the radio flux of halos. The minimum flux of radio halos that can be detected at redshift \( z \) is thus the maximum of the minimum radio flux set by the sensitivity of the radio surveys (Sect. 3.1) and of that implied by \( f_X \) via the radio–X-ray correlation. To address this issue at \( \nu_0 = 120 \) MHz in the case of radio halos with \( \nu_s \geq 1.4 \) GHz, we assume a correlation between the monochromatic radio luminosity at 120 MHz and \( L_X \) rescaled from that at 1.4 GHz by means of Eq. (6).
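Inverting the \( P(1.4) \propto L_X^{\alpha_X} \) correlation links a radio detection limit to a minimum X-ray luminosity. A minimal sketch, in which the anchor point of the correlation is a placeholder, not a fitted normalization:

```python
# Sketch of the radio/X-ray selection: with P(1.4) ~ L_X^2, a minimum
# detectable radio power maps onto a minimum X-ray luminosity.  The
# anchor point (P_REF, LX_REF) is a placeholder, not a fitted value.
ALPHA_X = 2.0                 # approximate slope of the P(1.4)-L_X correlation
P_REF, LX_REF = 1e24, 1e44    # placeholder anchor point (W/Hz, erg/s)

def lx_min(p_min_w_hz):
    """Minimum X-ray luminosity whose halos reach p_min, inverting
    P(1.4) = P_REF * (L_X / LX_REF)**ALPHA_X."""
    return LX_REF * (p_min_w_hz / P_REF)**(1.0 / ALPHA_X)

# A 10x deeper radio survey lowers the accessible L_X only by sqrt(10):
print(lx_min(1e24) / lx_min(1e23))
```

Because \( \alpha_X \approx 2 \), improving the radio sensitivity (or the halo power) by a factor of 10 shifts the accessible \( L_X \) only by \( \sqrt{10} \approx 3 \); this is the same arithmetic behind the factor \( \sim 3 \) in \( L_X \) that separates steep-spectrum halos from \( \nu_s \geq 1.4 \) GHz halos in the selection discussed here.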
For halos with lower \( \nu_s \) (yet \( \nu_s > 120 \) MHz), the correlation between the radio luminosity at 120 MHz and \( L_X \) is obtained from Eq. (8), which accounts for the lower radio power expected for halos with steeper spectra (Sect. 2.2). In this section, we model the sensitivity of LOFAR at 120 MHz following approach i) described in Sect. 3 (Eq. (10)). More specifically, to detect radio halos, we consider a 120 MHz LOFAR follow-up of a cluster catalog obtained by combining the eBCS (at \( z \lesssim 0.3 \)) and MACS (\( 0.3 < z \lesssim 0.6 \)) samples, and assume reference sensitivities of the radio observations of \( \xi F = 0.25 \) and 1 mJy/beam. The minimum \( L_X \) of clusters for which these radio observations are expected to detect giant radio halos is evaluated by combining the above radio sensitivity with the minimum \( L_X \) in the cluster catalogs at redshift \( z \), and is shown in Fig. 10 for different \( \nu_s \) (see figure caption for details). One may note that at intermediate and higher redshifts the minimum \( L_X \) is driven by the X-ray flux limits of the eBCS and MACS catalogs, respectively. On the other hand, we expect that in the redshift range where the minimum \( L_X \) is constrained by the radio sensitivity, radio halos with \( \nu_s \) in the range 120–240 MHz (Fig. 10, black lines) can be detected in clusters of X-ray luminosity about 3 times higher than that of clusters hosting \( \nu_s \geq 1.4 \) GHz halos (Fig. 10, magenta lines). In Fig. 11, we show the cumulative and differential number counts of radio halos expected from the LOFAR follow-up of eBCS and MACS clusters at 120 MHz. This is obtained from Eq. (12), taking into account both the selection criteria illustrated in Fig. 10 and the sky coverage of the eBCS and MACS surveys. The inflection in the number counts at \( z = 0.3 \) is caused by the change in the X-ray selection criteria (see Fig. 10) between the eBCS (\( z \lesssim 0.3 \)) and the MACS (\( z \geq 0.3 \)) cluster samples.
We expect that the LOFAR all-sky survey, with a planned sensitivity in line with the case \( \xi F = 0.25 \) mJy/beam (Fig. 11, upper panels), will discover about 130 radio halos out of the \( \sim 400 \) clusters in the eBCS and MACS catalogs. Remarkably, about 40% of these radio halos are expected to have \( \nu_s \lesssim 600 \) MHz, and thus to be halos with extremely steep spectra at GHz frequencies. The majority of radio halos in eBCS and MACS clusters is expected to be found at \( z = 0.2–0.4 \), while the small number of clusters at \( z \geq 0.5 \) with X-ray flux above the flux limit of the MACS catalog does not allow statistically solid expectations, although we may expect a couple of radio halos hosted in MACS clusters at this redshift. At this redshift, we expect that only major mergers in massive clusters (\( M_v \geq 2 \times 10^{15}\,M_\odot \)) can generate radio halos with \( \nu_s \geq 1.4 \) GHz (Fig. 4, right panel). The powerful radio halo discovered in the cluster MACS J0717.5+3745 (e.g., Bonafede et al. 2009; van Weeren et al. 2009) is consistent with these expectations. Figure 11 (lower panels) shows the expected number counts of radio halos in the more conservative case \( \xi F = 1 \) mJy/beam, which is suitable for exploring the potential of the LOFAR MS\(^3\) commissioning survey. In this case, about 80 radio halos are expected to be found in eBCS and MACS clusters, and about 20 of these halos are expected to have \( \nu_s < 600 \) MHz. We note that the number of radio halos expected to be detected in follow-up observations of eBCS and MACS clusters increases by less than a factor of 2 despite the substantial improvement in radio sensitivity from \( \xi F = 1 \) to 0.25 mJy/beam. This is not surprising, as the majority of the radio halos that are expected to be discovered by deep radio observations should be found in galaxy clusters of X-ray luminosity below the luminosity threshold of the eBCS and MACS catalogs (e.g., Fig. 10).
The eBCS cluster sample contains 300 galaxy clusters at \( z < 0.3 \) and covers the northern hemisphere. The redshift and X-ray luminosity distributions of the eBCS clusters are public (Ebeling et al. 1998, 2000), and thus we can provide a more quantitative expectation based on, e.g., the more conservative, MS\(^3\)-like case that assumes \( \xi F = 1 \) mJy/beam at 120 MHz (in this case, the selection function of clusters in the \( L_X \)–\( z \) plane is reported in Fig. 10, solid lines at \( z < 0.3 \)). In Fig. 12, we show the distribution of the expected radio halos in the eBCS clusters in two redshift intervals: 0–0.2 and 0.2–0.3 (left and right panels, respectively). We find that radio observations at 120 MHz are expected to discover radio halos in about 60 clusters, i.e., in about 20% of the eBCS clusters. In addition, about 12 of these halos are expected to have very steep radio spectra, \( \nu_s < 600 \) MHz (magenta, shadowed region in Fig. 12). Finally, the percentage of clusters with radio halos is expected to increase with the X-ray luminosity of the hosting clusters. This is particularly relevant in the redshift interval \( z = 0–0.2 \) when comparing with expectations calculated under the assumption that the fraction of clusters hosting radio halos is constant with cluster mass (Fig. 12, dashed lines; see caption). Consequently, LOFAR will be able to readily test this unique expectation of the turbulent re-acceleration model.

### 4. Summary and conclusions

We have performed Monte Carlo simulations to model the formation and evolution of giant radio halos in the framework of the merger-induced particle acceleration scenario (see Sect. 2). Following Cassano et al.
(2006a), we have used homogeneous models that assume a) an average value of the magnetic field strength in the intracluster volume, \( B \), that scales with the cluster mass as \( B \propto M_v^{b} \); and b) that a fraction, \( \eta_t \), of the \( P\,dV \) work done by subclusters crossing the main clusters during mergers goes into magneto-acoustic turbulence. Although simple, these models reproduce the presently observed fraction of galaxy clusters with radio halos and the scalings between the monochromatic radio power of halos at 1.4 GHz and the mass and X-ray luminosity of the host clusters (e.g., Cassano et al. 2006a, 2008; Venturi et al. 2008), provided that the model parameters (\( B_{\langle M \rangle} \), \( b \), \( \eta_t \)) lie within a fairly constrained range of values (Fig. 7 in Cassano et al. 2006a); in the present paper, we have adopted a reference set of parameters, i.e., \( \langle B \rangle = 1.9\,\mu\text{G} \), \( b = 1.5 \), \( \eta_t = 0.2 \), that falls in that range. In Fig. 6, we show that the expected number counts of giant radio halos at \( \nu_0 = 1.4 \) GHz obtained with this set of parameters are in good agreement with both the data at low redshift (NVSS–XBACS selected radio halos, Giovannini et al. 1999) and at intermediate redshift (clusters in the “GMRT radio halo survey”, Venturi et al. 2007, 2008). The most important expectation of the turbulent re-acceleration scenario is that the synchrotron spectrum of radio halos should become gradually steeper above a frequency, \( \nu_s \), that is determined by the energetics of the merger events that generate the halos and by the electron radiative losses (e.g., Fujita et al. 2003; Cassano & Brunetti 2005). Consequently, the population of radio halos is expected to consist of a mixture of halos with different spectra, steep-spectrum halos being more common in the Universe than those with flatter spectra (e.g., Cassano et al. 2006a). The discovery of these very steep-spectrum halos will allow us to test these theoretical conjectures. In Sect.
2, we have derived the expected radio halo luminosity functions (RHLFs) at a frequency \( \nu_0 \), accounting for the contributions of the different populations of radio halos with \( \nu_s \geq \nu_0 \). The RHLFs are obtained by combining the theoretical mass function of radio halos (of different \( \nu_s \geq \nu_0 \)) with the radio power–cluster mass correlation (Eq. (4)). The expected monochromatic radio power at \( \nu_0 \) of halos hosted by clusters with mass \( M_v \) is extrapolated from the observed \( P(1.4)–M_v \) correlation by assuming simple scaling relations, appropriate for homogeneous models, that account for the dependence of the emitted synchrotron power on \( \nu_s \) (Eqs. (8), (9)). As a relevant case, we calculate the expected RHLF at \( \nu_0 = 120 \) MHz (Fig. 5). The shape of the RHLF can be approximated by a power law over more than two orders of magnitude in radio power.

Fig. 11. Integrated (left) and differential (right) number counts of radio halos from the radio follow-up of eBCS and MACS clusters (see text). Calculations are shown for \( \xi F = 0.25 \) (upper panels) and 1.0 mJy/beam (lower panels) at 120 MHz. Thick (black) solid lines give the case \( \nu_s \geq 120 \) MHz, while differential contributions are shown with different colors: \( \nu_s \geq 1.4 \) GHz (magenta lines), \( 600 < \nu_s < 1400 \) MHz (blue lines), \( 240 < \nu_s < 600 \) MHz (red lines), and \( 120 < \nu_s < 240 \) MHz (black thin lines).

Homogeneous models imply that the scalings between \( \nu_s \), the cluster mass, and the radio luminosity at \( \nu_0 \), \( P_{\nu_0}(\nu_s) \), are given by
$$\nu_s \propto \frac{M_v^{4/3}\,(1 + \Delta M/M)^3\,\langle B \rangle}{(\langle B \rangle^2 + B^2_{\text{cmb}})^{2}},$$ (13)
and, from Eq. (9) and the \( P(1.4)–M_v \) correlation,
$$P_{\nu_0}(\nu_s) \propto M_v^{\alpha_M}\,\nu_s^{\alpha},$$ (14)
i.e., radio halos with higher \( \nu_s \) are typically generated in massive clusters that undergo major mergers and contribute to the RHLF at higher powers.
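The scaling in Eq. (14) follows in one line from Eq. (9) combined with the observed correlation \( P(1.4) \propto M_v^{\alpha_M} \); a sketch under the homogeneous-model assumptions of Sect. 2.2:

```latex
P_{\nu_0}(\nu_s, M_v)
  = P_{1.4}(\nu_0, M_v)\left(\frac{\nu_s}{1400\,\mathrm{MHz}}\right)^{\alpha}
  \propto M_v^{\alpha_M}\,\left(\frac{\nu_s}{1400\,\mathrm{MHz}}\right)^{\alpha},
```

since, at fixed \( \nu_0 \), \( P_{1.4}(\nu_0, M_v) \propto P_{1.4}(1.4\,\mathrm{GHz}, M_v) \propto M_v^{\alpha_M} \) by Eq. (6).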
On the other hand, halos with lower \( \nu_s \) are typically generated in less massive systems and contribute to the RHLF at fainter powers. Radio halos with \( \nu_s \geq 120 \) MHz, however, become increasingly rare in clusters of mass \( \leq 5 \times 10^{14}\,M_\odot \), explaining the drop in the RHLF at lower radio powers in Fig. 5. At the same time, halos with monochromatic radio power at 120 MHz \( > 10^{26} \) W Hz\(^{-1}\) would be generated by very energetic merging events in very massive clusters, which are extremely rare; this explains the RHLF cut-off at higher synchrotron powers in Fig. 5. In Sect. 3, we discussed the expected number counts of radio halos at 120 MHz, which allow us to explore most effectively the potential of upcoming LOFAR surveys in constraining present models. A crucial step in this analysis is the estimate of the minimum diffuse flux from giant radio halos that is detectable by these surveys. Because the LOFAR capabilities will become clearer during the upcoming commissioning phase, we exploited two complementary approaches: i) we required that at least half of the radio halo emission is above a fixed brightness threshold, \( \xi F \) (\( F \) being the rms of the LOFAR surveys); ii) we required that the signal from the radio halo is \( \geq 3\xi F \) in at least 5 beam areas of the LOFAR observations. In both cases, we assume that the radial profile of radio halos has a fixed shape calibrated by means of several well-studied halos at 1.4 GHz, which introduces a potential source of uncertainty. Despite the uncertainties caused by the unavoidable simplifications in our calculations, the expected number counts of radio halos highlight the potential of future LOFAR surveys. By assuming the expected sensitivity of the LOFAR all-sky survey (e.g., Röttgering 2009, priv. comm.), rms = 0.1 mJy/beam, and \( \xi \sim 2–3 \), we predict that about 350 giant radio halos (\( \sim 200 \) considering case ii)) can be detected at redshift \( z \leq 0.6 \).
This means that LOFAR will increase the statistics of these sources by a factor of \( \sim 20 \) with respect to present-day surveys. About 55% of these halos are predicted to have a synchrotron spectral index \( \alpha > 1.9 \) in the range 250–600 MHz, and would brighten only at lower frequencies, which are inaccessible to present observations. Most importantly, the spectral properties of the population of radio halos are expected to change with the increasing sensitivity of the surveys, as steep-spectrum radio halos are expected to populate the low-power end of the RHLF. A large fraction of radio halos with spectra steeper than \( \alpha \approx 1.5 \) (e.g., Fig. 9) would allow a robust discrimination between different models of radio halos; for instance, in this case simple energetic arguments would exclude a secondary origin of the emitting electrons (e.g., Brunetti 2004; Brunetti et al. 2008). Because of the large number of expected radio halos, a potential problem with these surveys is the identification of the halos and of their hosting clusters. As a matter of fact, we expect that LOFAR surveys will detect radio halos in galaxy clusters with masses \( \gtrsim 6 \times 10^{14}\,M_\odot \) at intermediate redshift. On the other hand, statistical samples of X-ray selected clusters, which are unique tools for identifying the hosting clusters, typically select more massive clusters at intermediate \( z \). Consequently, we explored the potential of the first LOFAR surveys as deep follow-ups of available X-ray selected samples of galaxy clusters. We calculated the radio halo number counts expected from the follow-up of clusters in the eBCS and MACS samples, which contain \( \sim 400 \) galaxy clusters in the redshift range 0–0.6. We expect that the LOFAR all-sky survey, with a planned sensitivity in line with \( \xi F = 0.25 \) mJy/beam, will discover about 130 radio halos in eBCS and MACS clusters and that about 40% of these radio halos will have a very steep spectrum, with \( \nu_s \leq 600 \) MHz.
The majority of radio halos in eBCS and MACS clusters are expected to be at \( z = 0.2–0.4 \), while the small number of clusters at \( z \geq 0.5 \) in the MACS catalog does not allow us to form statistically solid expectations, although we expect a couple of radio halos to be hosted by MACS clusters at this redshift. The MS\(^3\) survey will be carried out in 2010, covering the northern hemisphere, and is expected to reach a noise level of about 0.5 mJy/beam at 150 MHz, implying a sensitivity to diffuse emission from galaxy clusters about one order of magnitude (assuming \( \alpha \approx 1.3 \)) better than that of present surveys (e.g., NVSS, Condon et al. 1998; VLSS, Cohen et al. 2007; WENSS, Rengelink et al. 1997). We considered MS\(^3\) pointings towards the fields of the about 300 galaxy clusters at \( z \leq 0.3 \) in the eBCS catalogs. We found that about 60 radio halos are expected to be detected by MS\(^3\) observations of these clusters, 25% of them (10–15 halos) with \( \nu_s \leq 600 \) MHz. Fairly sensitive GMRT observations of eBCS clusters at redshift 0.2–0.3 are already available (Venturi et al. 2007, 2008), and in a few cases we expect that radio halos would be detectable in the MS\(^3\) images where no diffuse radio emission is detected at 610 MHz. We also find that MS\(^3\) observations of eBCS clusters at \( z = 0–0.2 \) can be used to test the increase in the fraction of clusters with radio halos with the X-ray luminosity of the host clusters, which is a unique prediction of our model (Fig. 12). The most important simplification in our calculations is the use of homogeneous models. Non-homogeneous approaches, which model the spatial dependence of the acceleration efficiency and of the magnetic field in the halo volume (e.g., Brunetti et al. 2004), possibly combined with future numerical simulations, will provide an additional step in interpreting LOFAR data. The use of the extended PS theory is also expected to introduce some biases.
For instance, it is well known that the PS mass function underpredicts the expected number of massive clusters ($M > 10^{15} \, M_\odot$) at higher redshift, $z \sim 0.4$–0.5, by a factor of $\sim$2 with respect to that found in $N$-body simulations (e.g., Governato et al. 1999; Bode et al. 2001; Jenkins et al. 2001). Since in our model the vast majority of halos at these redshifts is associated with massive clusters, the use of the PS mass function implies that the RHNC at $z > 0.4$–0.5 could be underestimated by a similar factor. A refinement of the approach proposed in the present paper could be achieved by using galaxy cluster merger trees extracted from $N$-body simulations. These would also allow a more realistic description of the merger events (e.g., spatially resolved, multiple mergers). In the present paper, we focus on a reference set of model parameters. Cassano et al. (2006a) discussed the dependence of model expectations at 1.4 GHz on these parameters. Based on their analysis, we expect that all the general results given in the present paper are independent of the adopted parameter values. The expected number counts of halos should change by a factor of $\sim$2–2.5 considering sets of model parameters within the region $(B_b, b, \eta_3)$ that allow us to reproduce the observed $P_{1.4}$–$M_0$ correlation. In this case, the number of halos that we expect decreases going from super-linear sets of parameters ($b > 1$ and $B_b \geq 1.5\,\mu$G) to sub-linear cases ($b < 1$ and $B_b \leq 1.5\,\mu$G) (see also Fig. 4 in Cassano et al. 2006b); a more detailed study will be presented in a future paper.

Acknowledgements. We thank the anonymous referee for useful comments. This work is partially supported by ASI and INAF under grants PRIN-INAF 2007, PRIN-INAF 2008 and ASI-INAF I/088/06/0.

References

Bacchi, M., Feretti, L., Giovannini, G., et al. 2003, A&A, 400, 465 Blasi, P. 2001, Ap&SS, 15, 253 Blasi, P., & Colafrancesco, S. 1999, Astropart.
Phys., 12, 169 Bode, P., Bahcall, N. A., Ford, E. B., et al. 2001, ApJ, 551, 15 Bonafede, A., Feretti, L., Giovannini, G., et al. 2009, A&A, 503, 707 Böhringer, H., Voges, W., Huchra, J. P., et al. 2000, ApJS, 129, 435 Böhringer, H., Schuecker, P., Guzzo, L., et al. 2004, A&A, 425, 367 Brunetti, G. 2004, JKAS, 37, 493 Brunetti, G., & Lazarian, A. 2007, MNRAS, 378, 245 Brunetti, G., Setti, G., Feretti, L., et al. 2001, MNRAS, 320, 365 Brunetti, G., Blasi, P., Cassano, R., & Gabici, S. 2004, MNRAS, 350, 1174 Brunetti, G., Venturi, T., Dallacasa, D., et al. 2007, ApJ, 670, L5 Brunetti, G., Giacintucci, S., Cassano, R., et al. 2008, Nature, 455, 944 Brunetti, G., Cassano, R., Dolag, K., & Setti, G. 2009, A&A, 507, 661 Brüggen, M., Ruszkowski, M., Simionescu, A., et al. 2005, ApJ, 631, L21 Buote, D. A. 2001, ApJ, 553, L15 Cassano, R., & Brunetti, G. 2005, MNRAS, 357, 1313 Cassano, R., Brunetti, G., & Setti, G. 2006a, MNRAS, 369, 1577 Cassano, R., Brunetti, G., & Setti, G. 2006b, ApJ, 642, 557 Cassano, R., Brunetti, G., Venturi, T., et al. 2008, A&A, 480, 687 Cohen, A. S., Lane, W. M., Cotton, W. D., et al. 2007, AJ, 134, 1245 Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, AJ, 115, 1693 Dallacasa, D., Bonafede, A., Giacintucci, S., et al. 2009, ApJ, 699, 1288 Dolag, K., Bartelmann, M., & Lesch, H. 2002, A&A, 387, 383 Ebeling, H., Voges, W., Böhringer, H., et al. 1996, MNRAS, 281, 799 Ebeling, H., Edge, A. C., Böhringer, H., et al. 1998, MNRAS, 301, 881 Ebeling, H., Edge, A. C., Allen, S. W., et al. 2000, MNRAS, 318, 333 Ebeling, H., Edge, A. C., & Henry, J. P. 2001, ApJ, 553, 668 Ellingson, S. W., Clarke, T. E., Cohen, A., et al. 2009, Proc. IEEE, 97, 1421 Enßlin, T. A., Biermann, P. L., Klein, U., et al. 1998, A&A, 332, 395 Enßlin, T. A., & Röttgering, H. 2002, A&A, 396, 83 Feretti, L. 2005, in X-Ray and Radio Connections, published electronically by NRAO, ed. L. O. Sjouwerman & K. K. Dyer Ferrari, C., Govoni, F., Schindler, S., Bykov, A.
M., & Rephaeli, Y. 2008, Space Sci. Rev., 134, 93 Fujita, Y., Takizawa, M., & Sarazin, C. L. 2003, ApJ, 584, 190 Governato, F., Babul, A., Quinn, T., et al. 1999, MNRAS, 307, 949 Govoni, F., Feretti, L., Giovannini, G., et al. 2001, A&A, 376, 803 Govoni, F., Markevitch, M., Vikhlinin, A., et al. 2004, ApJ, 605, 695 Hoeft, M., & Brüggen, M. 2007, MNRAS, 375, 77 Hoeft, M., Brüggen, M., Yepes, G., Gottlöber, S., & Schwope, A. 2008, MNRAS, 391, 1511 Hwang, C.-Y. 2004, JKAS, 37, 461 Jenkins, A., Frenk, C. S., White, S. D. M., et al. 2001, MNRAS, 321, 372 Lacey, C., & Cole, S. 1993, MNRAS, 262, 627 Liang, H., Hunstead, R. W., Birkinshaw, M., & Andreani, P. 2000, ApJ, 544, 686 Miniati, F., Jones, T. W., Kang, H., et al. 2001, ApJ, 562, 233 Petrosian, V. 2001, ApJ, 557, 560 Pfrommer, C., Enßlin, T. A., & Springel, V. 2008, MNRAS, 385, 1211 Press, W. H., & Schechter, P. 1974, ApJ, 187, 425 Rengelink, R. B., Tang, Y., de Bruyn, A. G., et al. 1997, A&AS, 124, 259 Röttgering, H. J. A., Braun, R., Barthel, P. D., et al. 2006, in Cosmology, Galaxy Formation and Astroparticle Physics on the Pathway to the SKA, Oxford, 10–12 [arXiv:astro-ph/0610961] Ryu, D., Kang, H., Hallman, E., et al. 2003, ApJ, 593, 599 Ryu, D., Kang, H., Cho, J., et al. 2008, Science, 320, 909 Sarazin, C. L. 1999, ApJ, 520, 529 Schlickeiser, R., Sievers, A., & Thiemann, H. 1987, A&A, 182, 21 Schlickeiser, R., & Miller, J. A. 1998, ApJ, 492, 352 Schuecker, P., Böhringer, H., Reiprich, T. H., et al. 2001, A&A, 378, 408 Springel, V., White, S. D. M., Jenkins, A., et al. 2005, Nature, 435, 629 Subramanian, K., Shukurov, A., & Haugen, N. E. L. 2006, MNRAS, 366, 1437 Thierbach, M., Klein, U., & Wielebinski, R. 2003, A&A, 397, 53 Trümper, J. 1993, Science, 260, 1769 Venturi, T., Giacintucci, S., Brunetti, G., et al. 2007, A&A, 463, 937 Venturi, T., Giacintucci, S., Dallacasa, D., et al. 2008, A&A, 484, 327 van Weeren, R. J., Röttgering, H. J. A., Brüggen, M., et al. 2009, A&A, 505, 991
Deaf Professionals’ Views on the Importance of Features of Simultaneous Communication

Michael Stinson, William Newell, Diane Castle, Dominique Mallery-Ruganis, and Barbara Ray Holcomb
National Technical Institute for the Deaf, Rochester Institute of Technology

Focus-group discussions were conducted with 36 professionals who are deaf/hard of hearing and “consumers” of Simultaneous Communication (SC), resulting in 42 categories of comments related to SC, further classified into seven major domains. Participants’ rankings of the most important features of SC indicated recognition of the bimodal and complex nature of SC. The results provided some insight into participants’ perceptions of the relationship between the oral/aural and sign components of SC. In addition, other domains related to “attitude, sensitivity, and culture” as well as general “communication strategies” were suggested as important considerations for effective SC.

Simultaneous Communication (SC), which involves simultaneous use of speech, signs, and fingerspelling, is currently widely used in educational programs for deaf students at all levels (Akamatsu, Stewart, & Bonkowski, 1988; Caccamise & Newell, 1984). In practice, SC occurs in two forms: (a) a more-or-less literal mapping of manual signs for English morphemes (e.g., in SEE-like systems; Luetke-Stahlman & Moeller, 1989) or (b) a “conceptually accurate” or semantically based use of signs representing the meanings of, and co-occurring with, the spoken message. This study involved SC of the second variety. One reason for its widespread use may be that it can accommodate the wide range of communication preferences and skills of deaf and hard-of-hearing students (Mallery-Ruganis & Fischer, 1991; VanBinsbergen, 1990). Simultaneous Communication can be more or less successful with respect to its comprehensibility and aesthetic qualities.
Previous research and discussion have suggested that at least six factors influence the effectiveness of SC. 1. **Grammaticality of sign production.** Some studies, such as those by Kluwin (1981), Marmor and Petitto (1979), Swisher and Thompson (1985), and Strong and Charlson (1987), have focused on the adequacy with which the visual/gestural component of SC represents English. These studies have demonstrated a tendency to omit from the sign channel certain elements that were spoken. Many people find it difficult to sign exactly everything that is said (Maxwell, 1990). One reason persons may not sign everything is that they have not undergone sufficient training or practice. Luetke-Stahlman (1988) reported that teachers with extensive training and practice in manually coded English can learn to simultaneously provide a manual code that corresponds to their speech. She found that all seven teachers in her study were able to simultaneously sign the elements of their spoken sentences with above 90% accuracy. A second reason certain elements in the spoken message are often not signed may be that a visually effective production of the corresponding signs does not necessarily include all the spoken elements. In using SC, people tend to mix features of both English and American Sign Language (ASL) (Lucas & Valli, 1989; Maxwell, 1990). Stewart (1989) suggests that in using SC, teachers include features of ASL, such as negative incorporation (e.g., DON'T-WANT), and such signing results in a sequence of elements somewhat different from the spoken one. Other features of ASL that may be incorporated into SC have also been noted (Stewart, 1989; Winston, 1989). 2. **Semantic congruity of the spoken and sign components** of SC. Maxwell and Bernstein (1985) examined the adequacy of the semantic representation of signing in relation to speech.
They concluded that the two channels are overwhelmingly equivalent in meaning in the hands of skilled communicators. Although there is generally not a one-to-one match between the spoken and signed elements of the utterance, the message can still be coherent and comprehensible (Maxwell, 1990; Maxwell & Bernstein, 1985). One important aspect of the correspondence between the meanings of the spoken and signed messages appears to be the conceptual accuracy of the sign, which should indicate the meaning of the word (Winston, 1989). For example, the word "get" would be signed differently for the sentences "I got the book" (GET) and "I got sick" (BECAME) (Winston, 1989). 3. **Effective use of speech and mouth movement.** Simultaneous Communication requires speaking or mouthing the message as well as signing it. Winston (1989) discusses mouthing of English words during transliteration by interpreters, and her analysis seems applicable to SC. The spoken (or mouthed) channel provides a more complete, linguistic representation of the literal English message, while the signed channel provides an efficient representation of its meaning. For example, for the phrase "groups of people," the interpreter might mouth all of the words but sign GROUP PEOPLE (Winston, 1989, p. 161). In addition, the speech or mouthing can reduce ambiguity when the sign has multiple meanings. Simultaneous Communication may be both speech-driven and sign-driven. Akamatsu et al. (1988) reported that the SC of teachers was basically speech driven in the sense that sign production was modified to fit with a relatively unmodified speech channel. However, the choice of words spoken and the way they are pronounced are influenced by the sign channel. Davis (1989) has noted that interpreters often alter the mouthed words to be congruent with the message produced on the hands, and it seems likely that people using SC select words and expressions that are compatible with their signing.
The addition of signing may also alter features of speech, such as rate, phrasing, rhythm, stress, and intonation (Maxwell, 1990; Whitehead & Whitehead, 1988). 4. **Body language and facial expression.** Body language and facial expression are important for conveying information that is carried by stress and intonation in speech (Kluwin & Kluwin, 1983; Winston, 1989). These non-manual behaviors may convey emotional qualities, such as happiness, sadness, and so forth. In addition, body language and facial expression can serve to modify the meaning of the sign (Winston, 1989). Body language and facial expression are also used to convey grammatical information, such as question form (Baker & Padden, 1978; Baker-Shenk, 1985). 5. **Communication strategies.** Successful communication involves strategies to increase the effectiveness of communication for particular situations (Foster, Barefoot, & DeCaro, 1989; Roth & Spekman, 1984). One strategy is to use fingerspelling to support signs, such as to indicate a particular English word when the sign may have multiple translations (Akamatsu et al., 1988; Davis, 1989). Another strategy is to switch from simultaneous speech and sign, with English-like word order, to sign alone, with greater emphasis on ASL features, for a limited segment of a passage (Akamatsu et al., 1988; Lucas & Valli, 1989; Stewart, 1989). A strategy that appears to be important in an extended discussion is to clearly indicate the transition from one subtopic to another by using pauses (Kluwin & Kluwin, 1983). 6. **Affect.** Effective communication also involves transmission of appropriate affect (Robinson, 1972; Woolfolk & Brooks, 1983). Cues such as distance, posture, facial expression, and hesitation contribute to the aesthetics and comfort of the communication (Woolfolk & Brooks, 1983). Showing respect for and knowledge of Deaf culture also increases the likelihood that deaf people will respond positively (Kannapell, 1989).
**Purpose**

One way to assess the importance of the various features of SC would be to ask knowledgeable consumers (i.e., deaf people who have experience communicating with SC signers) for their perceptions. The approach of asking deaf consumers, however, has rarely been used. One of the few exceptions was Kautzky-Bowden and Gonzales’ (1987) survey of deaf adults’ preferences regarding sign systems used in the classroom with deaf students. This approach is in contrast to what has typically been done: researchers have analyzed samples of SC and then, on the basis of a theoretical framework (e.g., linguistic theory), drawn conclusions regarding its effectiveness, such as adequacy in conveying an English message (e.g., Marmor & Petitto, 1979). What features of SC would deaf persons identify as important for effective communication? The present study obtained information on categories of descriptors of SC provided by deaf professionals, and rankings of these categories in terms of importance. For purposes of this work, effective communication was defined as that which is “easy to understand” and “comfortable and enjoyable to watch.” A focus-group interview methodology was employed to generate categories of SC, and a ranking procedure was used to obtain opinions regarding the relative importance of different categories (Calder, 1977).

**METHOD**

**Characteristics of Participants** Deaf and hard-of-hearing faculty and staff at the National Technical Institute for the Deaf participated in the focus groups. (Hereafter, we will use the term “deaf” to refer to all participants.) Each person was asked to fill out a questionnaire about his or her hearing loss, communication preference, and educational background. Ten of the 36 participants described their unaided hearing level as moderate to severe (40–95 dB), while 26 reported a profound hearing level (95 dB or greater).
Seventeen reported that they wear a hearing aid “all the time”; 2, “sometimes”; 14, “never”; and 3 did not respond. Signing was learned by 8 respondents before age 5, by 5 between ages 6 and 15, and by 23 at age 16 or older. When asked how they most like to communicate with other deaf people, many of the participants selected more than one response: 14 selected ASL, 20 selected sign English without voice, 16 selected sign English with voice, and 1 selected speech only. This information was used to help sort people into six different focus groups based on degree of preference for and background in signing. Those assigned to Groups 1, 2, and 3 had stronger preferences for signing than did those assigned to Groups 4, 5, and 6. To assist further in the selection of participants for the focus groups, respondents were asked about the schools attended (residential school or public schools – either in special classes or mainstreamed), years of enrollment, and availability of support services if enrolled in a school with hearing students. Those assigned to Groups 1 and 2 ($n = 5$ and 4, respectively), except for one person, had most of their educational experience in schools where there was a stronger communication preference for signing. The average proportion of educational time spent in residential schools was 79% in Group 1 and 87% in Group 2. Those in Groups 3, 4, 5, and 6 ($n = 5$, 6, 7, and 8, respectively) had most of their education in schools where there was a stronger preference for oral communication. The average amount of time in public schools was 96% for Group 3, 100% for Groups 4 and 5, and 64% for Group 6. (Four of the 8 participants in Group 6 spent a significant portion of time in oral day or residential schools.)

**Procedure** Each of the six focus groups held a discussion of approximately 2 hours comprising three parts (Bogdan & Biklen, 1982; Calder, 1977). The first part was a general brainstorming session, which lasted approximately 1 hour.
Participants were asked to think about persons who express themselves very well using sign and speech together and to "picture" these persons communicating. With this context, participants were asked the following questions: 1. What is the person doing that makes his/her communication clear and easy to understand? 2. What is the person doing to make his/her communication enjoyable to watch, "read," and perhaps, hear? In the second part, which lasted about 50 minutes, participants watched three videotapes, each 3 minutes in length, of signers using SC. Participants sat approximately 2 to 3 m from the 19 in. (48 cm) monitor. The playback showed the upper body of the signer, with the hands, fingers, and mouth movements clearly visible. The first signer was deaf and had been signing for approximately 55 years. The second signer was a hearing child of deaf parents who had been signing for approximately 50 years, and the third was a hearing professional who had been signing for approximately 20 years. After each videotape, participants were asked to further discuss ideas regarding the strengths and weaknesses of each signer as related to the two questions posed in the first part. Participants viewed one of the first two videotapes with sound and the other without sound, in counterbalanced order. Participants also viewed half of the third videotape with sound and half without it. These variations were included to elicit further discussion of features of SC. The third part of each focus group discussion, which lasted about 10 minutes, consisted of asking the participants to rank in order of importance, on a blank sheet of paper, the three most important aspects of SC that had been suggested during discussions in parts one and two. The moderator of the focus groups was a deaf professional.
The information generated during the focus groups was recorded in three ways: (a) a notetaker standing in front of the focus group took notes on newsprint that was taped to the walls for easy review, (b) the discussion was voice interpreted and recorded on audiotape, and (c) a videotape recording was made with two cameras. Typed transcripts were made from the audiotapes and were compared to the videotapes and revised as necessary. An analysis of these typed transcripts is presented in a separate report (Newell, Stinson, Castle, Mallery-Ruganis, & Holcomb, 1990).

**RESULTS**

**Categories of Comments about SC** The approximately 400 statements written by the notetaker on the newsprint for all six focus group discussions were reviewed by two members of the project team to develop an initial list of categories to which participants' comments regarding SC could be assigned. This original list of categories was then revised after two other project team members independently read the typed transcripts of the six focus group discussions with the aim of refining the categories. The revised list comprised 42 comment categories, which were grouped into the following seven domains: 1. **Expressive sign production features.** Clear sign production; position of hands and use of signing space; pausing; smoothness of signing; overall signing skill, overall intelligibility of signal; correct choice of sign vocabulary to represent meaning; grammatical features of visual/gestural modality; use of space; directionality; use of other ASL principles such as sign inflection; clear fingerspelling production. 2. **Aural/oral features.** Clear lip movement; use of voice. 3. **Simultaneous production features.** Match between fingerspelling, signs, facial expression, and voice intonation; match between fingerspelling and mouth movements; simultaneity of speech and sign; pace. 4.
**Non-manual features.** Maintain eye contact, face one another while signing, cultural expectations regarding eye contact; body language, body movement; body shifts; facial expression. 5. **Relationship of English and ASL.** ASL mouth movements versus speech mouth movements; relationship of English syntax to ASL syntax; degree of English representation in SC; sign systems; definitions of simultaneous communication. 6. **Communication strategies.** Distance between communicators as it affects sign production; ability to use both ASL and English signing, code switching; providing visual breaks when making a presentation; organization and presentation of thoughts and ideas; setting as it affects signing – classroom presentation versus one-on-one; strategies for reception of speech and sign and of ASL and English; use of fingerspelling to specify English words for emphasis and reinforcement. 7. **Affective domain: Attitude, sensitivity, and culture.** Internalization of deaf culture, cultural expectations; confidence when signing, relaxation level, comfort level; signing clearly communicates mood and attitude; sensitivity and respect for audience, is approachable; sensitivity toward communication level and intelligence; sensitivity toward audience, awareness of importance of visual feedback; style of expression – interesting, enthusiastic, attractive, dull, or exaggerated; personal appearance/visual distractors – clothing, jewelry, mannerisms, mustache, lipstick; inappropriate moving around when signing; use of sophisticated, appropriate vocabulary.

**Rank Scores**

The following analysis procedure was used to determine which features of SC the deaf participants ranked as most important. Each of the statements written in ranking the three most important aspects of SC was assigned by a coder to 1 of the 42 comment categories. The category-coded statements were then given values of 3 (most important), 2 (second in importance), or 1 (third in importance).
Participants sometimes wrote more than one statement for a given rank. Consequently, a weighting procedure was employed. The value for each statement, that is, category, was divided by the number of categories listed for the three rankings. If only one category was listed at each of the levels, the category given the highest rank received a weighted value of 1; that is, 3 (value of highest rank) divided by 3 (number of categories listed for the three ranks). If a participant wrote in two categories instead of one for the highest rank, for example, "clear signs" and "fingerspelling," and one item for each of the other two ranks, there would be four categories altogether for the three levels. In this case, the two categories assigned to the highest rank would each receive a value of .75, that is, \( \frac{3}{4} \). The one category assigned to the second highest rank would receive a value of .5, that is, \( \frac{2}{4} \), and the one assigned to the third rank would receive a value of .25, that is, \( \frac{1}{4} \). The scores were analyzed in two ways. First, these scores were analyzed for the highest-ranking categories listed by all individuals. Table 1 presents the sums of the weighted ranks for each category of SC for all participants. The higher the sum score, the more frequently and highly the item was ranked. The three items that were most frequently and highly ranked were (a) "clear lip movement," (b) "facial expression," and (c) "body language, body movement." "Clear lip movement" referred to mouth movements that were natural and easy to "read," and not exaggerated. Participants tended to describe body language and facial expression together and indicated these two features were important for signing that was "animated, dramatic, and expressive." They also said that certain body postures and body shifts were important for indicating grammatical features. A second way the ranking data were analyzed was for the individual focus groups.
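The weighted scoring procedure described above can be sketched in code (a minimal illustration, not code from the original study; the category names below are hypothetical examples):

```python
def weighted_rank_scores(ranked_categories):
    """Compute weighted values for the categories a participant ranked.

    ranked_categories: three lists, one per rank level (rank 1 = most
    important), each holding the category names written at that level.
    Each statement's rank value (3, 2, or 1) is divided by the total
    number of categories the participant listed across all three levels.
    """
    total = sum(len(cats) for cats in ranked_categories)
    scores = {}
    for level, cats in enumerate(ranked_categories):
        value = (3 - level) / total  # rank values 3, 2, 1
        for cat in cats:
            scores[cat] = scores.get(cat, 0.0) + value
    return scores

# One category per level: the highest rank gets 3/3 = 1.0
print(weighted_rank_scores([["clear signs"], ["pace"], ["eye contact"]]))

# Two categories at the highest rank (four categories in all):
# each top category gets 3/4 = .75, the second rank 2/4 = .5,
# and the third rank 1/4 = .25
print(weighted_rank_scores(
    [["clear signs", "fingerspelling"], ["pace"], ["eye contact"]]))
```

Summing these per-participant values across all participants yields the category sums reported in Table 1.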
Since Groups 1, 2, and 3 had a relatively stronger preference for sign communication and Groups 4, 5, and 6 had a stronger preference for oral communication, it was possible that the first three groups might emphasize sign features of SC, whereas the latter three groups might emphasize oral features. In this analysis, the five most highly ranked categories of comments about SC within each focus group were examined. For example, for Group 1 the five highest categories in rank order were "maintain eye contact," "clear sign production," "clear fingerspelling," "confidence when signing," and "clear lip movement." These highly ranked categories were then compared across groups to determine whether any features were highly ranked by all groups and whether certain features were highly ranked only by those with a certain communication preference. Table 2 summarizes the major results of this comparison: it shows categories that were highly ranked by different focus groups regardless of communication preference, or that were highly ranked by groups with a particular communication preference.

Table 1
Sums of Weighted Rank Scores for Categories of SC

| Items | Sum of Rank Scores |
|-------|--------------------|
| 1. Clear lip movement | 21.3 |
| 2. Facial expression | 18.3 |
| 3. Body language; body movement | 16.1 |
| 4. Grammatical features of visual/gestural modality | 14.9 |
| 5. Clear sign production | 14.3 |
| 6. Correct choice of sign vocabulary to represent meaning | 14.1 |
| 7. Pace | 12.8 |
| 8. Clear fingerspelling production | 9.0 |
| 9. Maintain eye contact; face one another when signing; cultural expectations | 8.6 |
| 10. Signing clearly communicates mood and attitude | 7.8 |
| 11. Match between fingerspelling, signs, and facial expressions and voice intonation | 7.2 |
| 12. Confidence when signing | |
| 13. Overall signing skill; overall intelligibility of signal | |
| 14. Use of fingerspelling to specify English words for emphasis | |
| 15. Use of voice | |
| 16. Position of hands and use of signing space | |
| 17. Match between fingerspelling and mouth movement | |
| 18. Internalization of deaf culture | |
| 19. Use of space | |
| 20. Organization and presentation of thoughts and ideas | |
| 21. Simultaneity of speech and sign | |
| 22. Personal appearance/visual distractors | |
| 23. Sensitivity and respect; approachable | |
| 24. Body shifts | 1.2 |
| 25. Pausing | 1.2 |
| 26. Distance between communicators as it affects sign production | 1.1 |
| 27. ASL mouth movements vs. speech mouth movements | 1.0 |

**Items Not Ranked by Any Participants**

| Items |
|-------|
| 32. Sensitivity toward communication level and intelligence |
| 33. Sensitivity toward audience; awareness of visual feedback |
| 34. Ability to use both ASL and English signing |
| 35. Smoothness of signing |
| 36. Style—exaggerated, interesting, etc. |
| 37. Relationship of English syntax to ASL syntax |
| 38. Inappropriate moving around when signing |
| 39. Use of sophisticated/appropriate vocabulary |
| 40. Directionality |
| 41. Sign systems |
| 42. Definitions of simultaneous communication |

Table 2
Categories That Were Highly Ranked by the Different Focus Groups Regardless of Communication Preference or That Were Highly Ranked by Groups With a Particular Communication Preference

| Group | Clear Lip Movement | Correct Choice of Sign | Grammatical Features | Pace | Facial Expression |
|-------|--------------------|------------------------|----------------------|------|-------------------|
| *Relative preference for signing* | | | | | |
| #1 | X | | | | |
| #2 | | X | X | | |
| #3 | X | X | X | | |
| *Relative preference for oral communication* | | | | | |
| #4 | | | | X | X |
| #5 | X | X | | X | X |
| #6 | X | X | | X | X |

*Note.* A category is listed either because it was among the five most highly ranked by at least two of the three groups with a relative preference for signing and by at least two groups with a relative preference for oral communication, or because it was highly ranked by at least two groups with a preference for one form of communication but by none of the groups who preferred the other form. Numbers refer to focus group number (see text); Groups 1, 2, and 3 had a relative preference for signing, and Groups 4, 5, and 6 for oral communication. An X means that the category was one of the five most highly ranked for the particular focus group.

Two categories, "clear lip movement" and "correct choice of sign," were highly ranked regardless of the group's communication preference. As noted, "clear lip movement" pertained to mouth movements that were natural and easy to follow. "Correct choice of sign" referred to choosing signs that communicate the meaning of the message while at the same time speaking or mouthing English. Other categories appeared to be favored only by groups with a particular communication preference and background. The category "grammatical features" was highly ranked by two of the groups with a preference for sign communication.
“Grammatical features” included comments related to features such as use of space to establish referents and use of directionality of movement. In addition, analyses with a Mann-Whitney U nonparametric test (Siegel, 1956) revealed that the mean rank for “grammatical features” for the three groups with a preference for sign communication was significantly higher than that for the three groups with a preference for oral communication ($U = 72.5$, $z = 2.20$, $p = .02$). The categories “pace” and “facial expression” were highly ranked by groups with a stronger preference for oral communication. By “pace,” participants generally meant an unhurried, “comfortable” rate that provided for a good match in the timing of mouth movements and signs. “Facial expression” is important for speechreading (Lesner, 1988) as well as for showing emotions, modifying the meaning of signs, and conveying grammatical features. In addition, the mean rank for “pace” for the three groups with a preference for oral communication was significantly higher than that for the groups with a preference for sign communication ($U = 60.5$, $z = 2.57$, $p = .01$); also, the mean rank for “facial expression” for the groups with a preference for oral communication was significantly higher ($U = 65.0$, $z = 2.42$, $p = .02$). Thus, groups with different communication preferences ranked different features of SC as important.

**DISCUSSION**

Participants in the focus groups appeared to recognize the complex bimodal nature of SC, as well as the multi-faceted nature of the communicative act in general. One feature of SC that was viewed as highly important was “clear lip movement.” This feature was ranked among the five most important categories of SC by four of the six focus groups and received the highest overall ranking. It should be noted that lip movement is by definition an essential feature of SC.
Attention to this feature is consistent with conclusions of other investigators of SC who have stated that people are not expected to understand SC without the speech channel (Akamatsu et al., 1988; Maxwell & Bernstein, 1985). While “clear lip movement” received the highest ranking, “use of voice” received a substantially lower ranking, fifteenth overall. It would appear, in regard to the oral/aural component of SC, that these focus group participants generally considered the visual signal more important than the auditory. This conclusion is supported by the analyses of transcripts of the focus group discussions described earlier in the study and reported by Newell et al. (1990). In the Newell et al. (1990) analyses, few participants indicated that listening to the voice of the individual enhanced comprehension of SC. Those participants who benefitted from listening indicated that they used the speech sounds to supplement comprehension of signs and mouth movement. While “clear lip movement” and “use of voice” are the only two categories in the overall ranks that appear to be exclusively related to the oral/aural component of SC, many categories are important to both the oral/aural and sign components. Examples of categories related to both components are “facial expressions, body language, body movement”; “pace”; and “match between fingerspelling and mouth movement.” Numerous other categories are related to the sign component of SC. Examples of such categories are “grammatical features of the visual/gestural modality” and “clear sign production.” In addition there are general comments related to “sensitivity to deaf culture” and “communication strategies.” With respect to the sign component of SC, analyses of the transcripts indicated that participants viewed this component as complex and multifaceted (Newell et al., 1990). Participants provided numerous concerns and details regarding what must occur in the visual/gestural modality in effective SC. 
The category “match between fingerspelling, signs, and facial expressions and voice intonation” was one of the more highly ranked (11th highest); this result, however, does not adequately convey the importance of different components of SC carrying a consistent message. For example, in the transcripts comments were made regarding the importance of the message produced on the mouth matching that conveyed by the signs (Newell et al., 1990). As one participant indicated: “I really depend on watching both the lip movement and the signs and sometimes if they’re not congruent between the two, if the signs don’t match the lip movement, I really get confused and then communication breaks down.” The importance of consistency among different components of SC has been noted by others (Maxwell & Bernstein, 1985). Although most of the 15 more highly ranked categories of SC pertained to competent production of the signal, two of them pertained to affective and cultural issues. According to the rankings and the transcripts (Newell et al., 1990), communicators who appeared confident while signing demonstrated qualities such as friendliness, acting “naturally and pleasantly,” using body language, and being dramatic when necessary. Furthermore, the highly ranked “eye contact” category referred to the extent of awareness of deaf culture, as well as sensitivity to the visually oriented communication of deaf individuals.

**CONCLUSIONS AND IMPLICATIONS**

It is clearly important to address the multiple factors that constitute effective SC. Effective users are able to convey a sense of the voiced message. Semantically appropriate signs are necessary, as is the inclusion of accurate fingerspelled support for signs for which there are several English synonyms. This is critical in the case of technical terms, whether they are specialized terms or ordinary words used with specialized meaning.
In addition, effective users of SC incorporate facial expression, eye contact, and other non-manual behaviors (Mallery-Ruganis & Fischer, 1991). One of the authors, an experienced sign language instructor, has suggested that the skills for SC must be taught to adult learners. One strategy she uses is to video-record students signing and speaking at the same time and then to have the students view the recording of themselves without sound to see what they can (and often cannot) understand (Mallery-Ruganis & Fischer, 1991). Groups with a relatively greater preference for oral communication gave greater emphasis to pace and facial expression. This is not surprising given the benefit of pace and facial expression to lipreading (Castle, 1987, 1988; Lesner, 1988). The variation in the communication preferences of the participants and the relation of these preferences to the importance of particular features of SC seems consistent with the work of Kannapell (1989). She found that there is a wide spectrum of linguistic/communication repertoires among deaf college students, and her work suggests that this variation is related to individual preferences regarding the extent that communicators use features of ASL, pidgin sign English, and spoken English. Presumably, these preferences apply when a person is using SC. It should be noted that participants in this study were primarily college educated and professional employees of a postsecondary program for deaf students with primarily “oral” and mainstream educational backgrounds. As a group, however, they overwhelmingly preferred that signing with or without speech, as compared to speech alone, be used when communicating with them. They generally preferred sign English, with or without voice, rather than ASL. Deaf persons (adults and students) with different communication, educational, occupational, and social backgrounds might provide different perspectives on SC than the participants in this study. 
**ACKNOWLEDGEMENTS**

We thank the 36 deaf faculty and staff at NTID who generously took time to participate in discussion groups and to share their insights about simultaneous communication. We also thank Judy Braege, Jill Baylow, and Yufang Liu for their assistance in summarizing the data. Our sincerest thanks also go to Charlie Johnstone and Peter Reeb for their technical advice and assistance, and to Cindy Sinsebox for transcription of the audio recordings of the focus group sessions and for word processing assistance with this manuscript. This research was conducted in the course of an agreement with the U.S. Office of Education. Requests for reprints should be sent to Michael Stinson, Department of Educational Research and Development, NTID, Rochester Institute of Technology, P.O. Box 9887, Rochester, New York, 14623.

**REFERENCES**

Akamatsu, C.T., Stewart, D.A., & Bonkowski, N. (1988, April). *Constraining factors in the production of simultaneous communication by teachers*. Paper presented at the convention of the American Educational Research Association, New Orleans, LA.
Baker, C., & Padden, C. (1978). Focusing on the non-manual components of American Sign Language. In P. Siple (Ed.), *Understanding language through sign language research* (pp. 27-57). New York: Academic Press.
Baker-Shenk, C. (1985). The facial behavior of deaf signers: Evidence of a complex language. *American Annals of the Deaf, 130*, 297-304.
Bogdan, R., & Biklen, S. (1982). *Qualitative research for education*. Boston: Allyn and Bacon.
Caccamise, F., & Newell, W. (1984). A review of current terminology used in deaf education and signing. *Journal of the Academy of Rehabilitative Audiology, 17*, 106-129.
Calder, B.J. (1977). Focus groups and the nature of qualitative marketing research. *Journal of Marketing Research, 14*, 353-364.
Castle, D. (1987). Effective oral interpreters: An analysis. In W.H. Northcott (Ed.), *Oral interpreting: Principles and practices* (pp. 169-186). Baltimore, MD: University Park Press.
Castle, D. (Ed.). (1988). *Oral interpreting: Selections from papers by Kirsten Gonzalez*. Washington, DC: Alexander Graham Bell Association for the Deaf.
Davis, J. (1989). Distinguishing language contact phenomena in ASL interpretation. In C. Lucas (Ed.), *Sociolinguistics of the deaf community* (pp. 85-102). San Diego, CA: Academic Press.
Foster, S., Barefoot, S., & DeCaro, P. (1989). The meaning of communication to deaf college students: A multidimensional definition. *Journal of Speech and Hearing Disorders, 54*, 558-569.
Kannapell, B. (1989). An examination of deaf college students' attitudes toward ASL and English. In C. Lucas (Ed.), *Sociolinguistics of the deaf community* (pp. 191-210). San Diego, CA: Academic Press.
Kautzky-Bowden, S.M., & Gonzales, B.R. (1987). Attitudes of deaf adults regarding preferred sign language systems used in the classroom with deaf students. *American Annals of the Deaf, 132*, 251-255.
Kluwin, T. (1981). The grammaticality of manual representations of English in classroom settings. *American Annals of the Deaf, 126*, 417-421.
Kluwin, T., & Kluwin, B. (1983). Microteaching as a tool for improving simultaneous communication in classrooms for hearing-impaired students. *American Annals of the Deaf, 128*, 820-825.
Lesner, S.A. (1988). The talker. *Volta Review, 90*, 89-95.
Lucas, C., & Valli, C. (1989). Language contact in the American deaf community. In C. Lucas (Ed.), *Sociolinguistics of the deaf community* (pp. 11-40). San Diego, CA: Academic Press.
Luetke-Stahlman, B. (1988). SEE-2 in the classroom: How well is English represented? In G. Gustason (Ed.), *Signing Exact English in total communication: Exact or not exact?* Los Alamitos, CA: Modern Sign Press.
Luetke-Stahlman, B., & Moeller, M.P. (1989). Enhancing parents' use of SEE-2: Progress and retention. *American Annals of the Deaf, 135*, 371-378.
Mallery-Ruganis, D., & Fischer, S. (1991). Characteristics that contribute to effective simultaneous communication. *American Annals of the Deaf, 136*, 401-408.
Marmor, G.S., & Petitto, L. (1979). Simultaneous communication in the classroom: How well is English grammar represented? *Sign Language Studies, 23*, 99-136.
Maxwell, M. (1990). Simultaneous communication: The state of the art and proposals for change. *Sign Language Studies, 69*, 333-390.
Maxwell, M., & Bernstein, M. (1985). The synergy of sign and speech in simultaneous communication. *Applied Psycholinguistics, 6*, 63-82.
Newell, W., Stinson, M., Castle, D., Mallery-Ruganis, D., & Holcomb, B.R. (1990). Simultaneous communication: A description by deaf professionals working in an educational setting. *Sign Language Studies, 69*, 391-414.
Robinson, W. (1972). *Language and social behavior*. London: Penguin Books.
Roth, F., & Spekman, N. (1984). Assessing the pragmatic abilities of children: Part 1. Organizational framework and assessment parameters. *Journal of Speech and Hearing Disorders, 49*, 2-11.
Siegel, S. (1956). *Non-parametric statistics for the behavioral sciences*. New York: McGraw-Hill.
Stewart, D. (1989). Rationale and strategies for American Sign Language intervention. *American Annals of the Deaf, 135*, 205-210.
Strong, M., & Charlson, E.S. (1987). Simultaneous communication: Are teachers attempting an impossible task? *American Annals of the Deaf, 132*, 376-382.
Swisher, M., & Thompson, M. (1985). Mothers learning simultaneous communication: The dimensions of the task. *American Annals of the Deaf, 130*, 212-217.
VanBinsbergen, D. (1990). One teacher's response to "Unlocking the curriculum." *Sign Language Studies, 69*, 327-331.
Whitehead, R., & Whitehead, B. (1988, November). *Vowel duration characteristics during simultaneous communication*. Paper presented at the Joint Meeting of the Acoustical Society of America and the Japan Acoustical Society, Honolulu, HI.
Winston, E.A. (1989). Transliteration: What's the message? In C. Lucas (Ed.), *Sociolinguistics of the deaf community* (pp. 147-164). San Diego, CA: Academic Press.
Woolfolk, A., & Brooks, D. (1983). Nonverbal communication in teaching. In E. Gordon (Ed.), *Review of research in education* (pp. 103-150). Washington, DC: American Educational Research Association.
SUBTRACTIVE SCHOOLING
U.S.-Mexican Youth and the Politics of Caring
ANGELA VALENZUELA

Published by State University of New York Press, Albany. © 1999 State University of New York. All rights reserved. Production by Susan Geraghty. Marketing by Nancy Farrell. Cover photo by Emilio Zamora. Printed in the United States of America. No part of this book may be used or reproduced in any manner whatsoever without written permission. No part of this book may be stored in a retrieval system or transmitted in any form or by any means including electronic, electrostatic, magnetic tape, mechanical, photocopying, recording, or otherwise without the prior permission in writing of the publisher. For information, address State University of New York Press, State University Plaza, Albany, N.Y., 12246.

Library of Congress Cataloging-in-Publication Data: Valenzuela, Angela. Subtractive schooling: U.S.-Mexican youth and the politics of caring / Angela Valenzuela. p. cm. — (SUNY series, the social context of education). Includes bibliographical references (p. ) and index. ISBN 0-7914-4321-3 (hc : alk. paper). — ISBN 0-7914-4322-1 (pb : alk. paper). 1. Mexican Americans—Education (Secondary)—Texas—Case studies. 2. Children of immigrants—Education (Secondary)—Texas—Case studies. 3. Mexican American youth—Social conditions—Texas—Case studies. I. Title. II. Series: SUNY series, social context of education. LC2683.4.V35 1999 371.829'6872073—dc21

It almost seemed, maybe I'm wrong, like the teachers didn't want to know us, or too much about us. I try to be fair. Maybe it was like the more they knew us, the more they'd be responsible and their problems were so big, big! What would it mean in that situation to genuinely care for us? It would mean caring for big problems.
And, not to let anybody off the hook, but who of all of them was ready or willing to take on a cause for raza [the Mexican American people]? (Junior female who walked out and eventually graduated from another high school) The walkout was about caring. We cared for our education though the teachers and administration didn't care for us. Even if they said they cared, talk is cheap. If it wasn't their fault the school was in such trouble—and they'll tell you that, clean their hands—it was their responsibility no matter what. Todos, toditos [All, all], they were all to blame. (Freshman male student who walked out and eventually dropped out of school, took his G.E.D. and enrolled in a community college) CHAPTER 3 Teacher-Student Relations and the Politics of Caring This chapter examines competing definitions of caring at Seguín. The predominantly non-Latino teaching staff sees students as not sufficiently caring about school, while students see teachers as not sufficiently caring for them. Teachers expect students to demonstrate caring about schooling with an abstract, or aesthetic commitment to ideas or practices that purportedly lead to achievement. Immigrant and U.S.-born youth, on the other hand, are committed to an authentic form of caring that emphasizes relations of reciprocity between teachers and students. Complicating most teachers' demands that students care about school is their displeasure with students' self-representations, on the one hand, and the debilitating institutional barriers they face on a daily basis that impede their abilities to connect effectively with youths' social world, on the other. From these adults' perspective, the way youth dress, talk, and generally deport themselves "proves" that they do not care about school. For their part, students argue that they should be assessed, valued, and engaged as whole people, not as automatons in baggy pants. They articulate a vision of education that parallels the Mexican concept of educación. 
That is, they prefer a model of schooling premised on respectful, caring relations. As discussed in chapter 1, educación closely resembles Noddings' (1988) concept of authentic caring which views sustained reciprocal relationships between teachers and students as the basis for all learning. Noddings (1984, 1992) argues that teachers' ultimate goal of apprehending their students' subjective reality is best achieved through engrossment in their students' welfare and emotional displacement. That is, authentically caring teachers are seized by their students and energy flows toward their projects and needs. The benefit of such profound relatedness for the student is the development of a sense of competence and mastery over worldly tasks. In the absence of such connectedness, students are not only reduced to the level of objects, they may also be diverted from learning the skills necessary for mastering their academic and social environment. Thus, the difference in the way students and teachers perceive school-based relationships can bear directly on students' potential to achieve. The landscape of caring orientations among teachers and immigrant and U.S.-born students at Seguín is presented in the following pages. A mutual sense of alienation evolves when teachers and students hold different understandings about school. Because teachers and administrators are better positioned than students to impose their perspective, aesthetic caring comes to shape and sustain a subtractive logic. That is, the demand that students embrace their teachers' view of caring is tantamount to requiring their active participation in a process of cultural and linguistic eradication (Bartolomé 1994) since the curriculum they are asked to value and support is one that dismisses or derogates their language, culture, and community. (See chapter 5 for an elaboration of the culturally subtractive elements of schooling.) 
Rather than building on students' cultural, linguistic, and community-based knowledge, schools like Seguín typically subtract these resources. Psychic and emotional withdrawal from schooling are symptomatic of students' rejection of subtractive schooling and a curriculum they perceive as uninteresting, irrelevant, and test-driven. Immigrant youth resemble their disaffected, U.S.-born counterparts when they, too, become "uncaring" after having acculturated and become "Americanized" too rapidly. However, because the "uncaring" student prototype is overwhelmingly U.S.-born, they are the primary focus here. With their experiences of psychic and emotional withdrawal within the regular track, these teenagers demand with their voices and bodies, even more strongly than do their immigrant peers, a more humane vision of schooling. Since their critique of the aesthetic-caring status quo is sometimes lodged in acts of resistance—not to education, but to schooling—school officials typically misinterpret the meaning of these challenges. A look at the consequences for youth when their teachers do or do not initiate relationships reveals how a sense of connectedness can have a direct impact on success at school. After a closing discussion of the limitations of both aesthetic and authentic caring as currently conceptualized in the literature, a peek at Seguín's Social Studies Department provides insights into the relation between caring and pedagogy. The chapter concludes with an account of Seguín's highly successful band teacher. This teacher's embodiment of authentic caring, including his apprehension of Seguín students' cultural world and structural position, demonstrates the enormous benefits that accrue when schooling is transformed into education—or more appropriately, educación. TEACHER CARING The view that students do not care about school stems from several sources, including social and cultural distance in student-adult relationships and the school culture itself. 
Most of the school's staff neither live nor participate in their students' predominantly Mexican community. The non-Latino teachers who constitute the majority (81 percent) are doubtful and even defensive about the suggestion that more Latino teachers would make a difference in school climate. Seguín's high attrition rate—particularly among the newer staff (see chapter 2)—further exacerbates social distance and increases the difficulty of developing an explicit ethic of caring. Some schools have consciously articulated an ethic of authentic caring (e.g., see Danin's [1994] ethnography of one such school), but no such effort has ever been deliberately undertaken at Seguín. Except for a minority of teachers for whom aesthetic and authentic caring are not mutually exclusive, a more general pattern of aesthetic caring prevails among those who teach the "middle majority" of regular-track youth. In my many conversations with teachers, only a few indicated that they knew many of their students in a personal way, and very few students said that they thought that their teachers knew them or that they would be willing to go to their teachers for help with a personal problem. This is not surprising. Despite perceiving themselves as caring, many teachers unconsciously communicate a different message—to their colleagues as well as to their students. Committed teachers who invest their time in students are chided for their efforts, with the reminder that working hard is not worth the effort "since these kids aren't going anywhere anyway." The subtext is more damning still: Seguín students don't "go anywhere" because they don't, can't, or won't "try." Teachers sometimes make this view explicit. Consider the case of Mr. Johnson, English teacher and self-proclaimed student advocate. Mr. Johnson is openly critical of the counselors and the administration for their sustained incompetence in handling students' course schedules. No doubt, Mr.
Johnson does rescue some students from bureaucratic harm, but his good deeds are nullified by his abrasive and overbearing behavior in the classroom. As the following description of his teaching style shows, this teacher's apparent need to feel and be powerful cuts him off from the very individuals he seems to believe he is helping—or trying to help. One sunny day in April when I am observing in Mr. Johnson's ninth-grade English classroom, I hear him say to his class—yet somehow I know his comments are for my benefit—in a loud, deep, Southern drawl, "The main problem with these kids is their attitude. They're immature and they challenge authority. Look at them, they're not going anywhere. I can tell you right now, a full quarter of these students will drop out of school come May." One of the girls sitting right in front of Mr. Johnson smiles awkwardly and rolls her eyes in apparent disgust. Most students simply pretend not to hear him, though a few glance at me and chuckle nervously in embarrassment. The teacher sounds like he is joking but the students do not find him funny. "See what I mean?" Mr. Johnson says. "They think they can get by in life without having to take orders from anyone." A student slumped in his chair with his chin and arms on his desk peers up, then lifts his head, responding in a mumble, "Aw, Mr. Johnson, you don't . . . you're just . . ." Mr. Johnson interrupts, "Joel, stop thinking, you know it might hurt you, cause you some damage upstairs." Joel smiles wryly and sinks back into his chair. As extreme as Mr. Johnson's behavior may seem, teachers at Seguín often engage in such verbal abuse. He communicates—perhaps more vividly than most—a sentiment shared by teachers and other school personnel, namely that Mexican students are immature, unambitious, and defiant of authority, and that teachers have no power to change the situation since it is the students' fault. 
The school's obvious systemic problems, most evident in its astronomical dropout rate, are brushed aside and the burden of responsibility and the struggle for change is understood as rightfully residing first with the students, their families, and the community. A lack of urgency about the school's academic crisis itself is a sign of dangerously low expectations on the part of Seguín teachers and administrators. Mr. Johnson articulated this belief that students' academic performance is primarily a matter of individual initiative and motivation when he introduced me to his class. Much to my chagrin, he patronizingly informed his students that I was a "doctor" from Rice University and then added, "Something y'all could be if you just stopped your foolishness and grew up." I could feel myself staring back at the students with the same disappointed and humiliated look that they were giving me. During this entire interaction, students were passively sitting in their seats instead of working on the *Romeo and Juliet* writing assignment scribbled boldly on the chalkboard. So Mr. Johnson was accurate in one respect: they were challenging his ability to make them learn under abusive conditions. However, Mr. Johnson and other teachers conveniently overlook the fact that they do have sway in the classroom. In this case, for instance, no student showed outright anger, despite the tension in the air. Students were clearly deferring to his authority, thus demonstrating, ironically, the fallacy in the teacher's view. More importantly, they exhibited extraordinary self-control, hardly what one would expect from youth who are inherently "immature" and "defiant." That the students were, in fact, restraining themselves was made dramatically clear to me later when I spoke with Joel outside the classroom. Summing up his feelings toward his English teacher, Joel exploded, "Johnson's full of shit! . . . he's always got an attitude." 
The bias most mainstream teachers have toward the majority of Seguín students arises from many sources. Mainly white and middle-class, these adults' more privileged backgrounds inevitably set them up for disappointment in youth whose life circumstances differ so radically from their own. Students' failure to meet their teachers' expectations is further complicated by a generational divide. Like most adults, teachers misremember the past as a golden era; they recall a time when everyone was "honest," when old and young alike "worked hard," when school was "important," and students were "respectful." Some days, the teachers' lounge could easily be confused with the set of a daytime TV episode, as teachers exchange comments like, "My father was poor and he worked hard for everything he earned"; "When I was young, things were different"; "Where I grew up, if you raised your voice"; and "I never even thought once that I shouldn't go to class." Without exception, the school's most dedicated teachers avoid the lounge altogether, fearing the disabling potential of their colleagues' negativity. Contemporary students, in failing to conform to this misty, mythical image of their historical counterparts, seem deficient, so teachers find it hard to see them in an appreciative, culture-affirming way. Moreover, teachers see the differences in culture and language between themselves and their students from a culturally chauvinistic perspective that permits them to dismiss the possibility of a more culturally relevant approach in dealing with this population. For instance, teachers and counselors more often lament their students' linguistic limitations than they do their own. 
An affirming stance toward Mexican culture is deemed unnecessary since, as one teacher on Seguín's Shared Decision-Making (SDM) Committee explained to me, "the school is already 'all-Mexican.'" The interrelationship between the tendency to objectify students and the rejection of a nurturing view of education is clear in everyday classroom experiences at Seguín. An algebra teacher who appears to have little success in maintaining an orderly atmosphere in her class perceives rowdiness as evidence that many youth are not in school to learn. She complained to me one day, "I'm not here to baby-sit and I'm certainly not their parent. . . . I finally told them, 'Listen, you don't have to be here if you don't want to be here. No one's forcing you.'" Teachers often give students the option of remaining in or leaving the classroom. Typically they justify their actions by saying that they are trying to inculcate a sense of adult responsibility in these teenage boys and girls. At issue here is the means by which youth acquire a sense of adult responsibility. When uttered in the absence of authentic caring, such language objectifies students as dispensable, nonessential parts of the school machinery. Another dismissive expression that has prompted repeated complaints from PTA members involves teachers unilaterally rejecting students who have been assigned to already overcrowded classrooms at the beginning of each semester. As addressed in the previous chapter, chaos always characterizes the first several weeks of each new year. The school's ten to twelve counselors have the demonstrably impossible task of processing over a thousand new entrants, emanating from the feeder middle schools, from other area high schools, and from outside the state or country. 
If the sheer size of this incoming tide were not enough to ensure the counselors' failure, the additional fact that they do not begin processing any students' fall schedules until the week before school opens would settle the matter. With so little time to process so many students, the counselors resort to simply overassigning them to classes. The rationale for this deliberate misscheduling is, predictably, purely bureaucratic: this is the easiest way to get students "into the system" so that they may be counted as enrolled. Interestingly, there is no district policy that states that youth must be enrolled by any particular day. In a "good" year, counselors "level off" these classes by the third week of school when most students' schedules are finally "fixed"—that is, when students are assigned to the classes they should have been enrolled in from the first day of school. As might be expected, the first few weeks are extremely stressful. Teachers face huge classes composed of a random mix of students, only some of whom belong where they are. Even larger than the actual classes are the rosters of students who are supposedly present in their classrooms. Massively long class rosters, teachers' and students' conflictual relations with counselors, extraordinarily large class sizes despite absent and disappearing bodies, and insufficient numbers of desks, books, teaching materials, and space combine with students' displeasure over schooling to make for a state of high tension and intense normlessness. Regarding counselors, teachers see them as incompetent and overly bureaucratic, while students begin each semester with the sense that "the system," including counselors, exhibits precious little concern for them.
In fact, in the fall 1995 semester, several Latino and white teachers grew so disgusted with the counselors that they appropriated a sense of leadership that they did not see operating within the school's administration by usurping the student assignment process from the counselors, superseding the principal's authority. Their actions created even greater havoc. The assignment process turned out not to be as simple as it seemed, and relations between teachers and counselors were polarized for a while. Fortunately, a cadre of Seguín parents and community activists mobilized to make Seguín accountable for the chaos that had developed. With community members participating, working groups formed, and by the seventh week of the semester, a modicum of equanimity evolved. Among the handful of teacher leaders in this revolt, what became apparent was how markedly their own sense of authentic caring contrasted with what they saw as the counselors' penchant for aesthetic caring. Accordingly, one teacher leader said to me, "Yes, things got confused, but we wanted to do what was right for our kids. We're the ones who have to experience the effects of their [counselors'] actions." These teachers' moral authority came from their status as effective classroom teachers as well as from their personal involvement on the school's central committees. Not surprisingly, one was also the social studies teacher who empowered her students with the skills and understandings they needed to carry out the October 1989 protest in a peaceful, non-violent manner. Hence, despite the confusion their actions created, the constructive dialogue and decisions that resulted would probably not have occurred had matters not indeed grown worse. Personnel changes in Seguín's administration have made it difficult for principals and assistant principals to make any sustainable progress in improving the efficiency with which the school is run. Nor have they been able to alter the school's culture.
Assistant Principal Ana Luera, who by her third year at Seguín had become significantly involved in working toward changing the school's culture, maintains that changing counselors' and teachers' practices is a long process that requires both patience and perseverance. Most importantly, she notes that no change can occur in the absence of mutual respect and trust: You can't do anything with them [teachers and counselors] your first couple of years because you have to gain their trust. They're just like kids. You have to show you love them. . . . Now, by the third year . . . you don't know how many teachers I called in to tell them to show more respect to the students, to not do certain things. Now that I got their trust, I can tell them. Sometimes they deny what they do or they admit it and say that they won't do it again. I respect them and I give them due process. You have to do that. . . . This year, we're going to do some cultural sensitivity training. . . . Students' schedules were also fixed this time at the end of the school year . . . you just can't do anything as a new principal the first couple of years. Luera reveals the need for teachers to feel cared for. As Noblit (1994) similarly found in his case study of a caring principal in a school, principals can assert their leadership by authentically caring for teachers and also by promoting honest dialogue on how to authentically care for students. The brief tenure of principals is a widespread problem in urban schools throughout the state of Texas. In addition to "burnout," the district loses principals by adhering to an accountability scheme that makes the tenure of a principal's assignment contingent on raising students' test scores on a statewide exam within a three-year time period. One unintended consequence of this "revolving door" approach to posting principals is that it reinforces counselors' and teachers' sense of autonomy and increases their power.
In a system where they are the "old hands," they must be continually "won over" by top administrators whose jobs may be hostage to their subordinates' willingness to cooperate. The intransigence of teacher and counselor culture at Seguín has other consequences besides potentially undermining the efforts of a new principal. Parents, PTA members, and community advocates whose appeals to Seguín staff are routinely dismissed without serious consideration frequently resort to bypassing the school and carrying their concerns directly to the district superintendent or the school board. According to one PTA leader, the highly predictable surplus of students enrolling each semester relative to spaces available is tolerated because school staff know "that the students will drop out anyway by the fifth or sixth week of classes." Enrollments of between 3,000 and 3,400 each semester in a physical facility capable of housing no more than 2,600 students lend credence to this claim. And not surprisingly, the numbers do substantially trim down in a five- to six-week time frame. A small, nearby alternative high school serving approximately 150 students annually—itself a remnant of the 1970 school boycott—rejects an average of 7 students per day who are attempting to re-enroll in school after having “dropped out.” Unfortunately, Seguín does not keep records on such students’ whereabouts. Teachers occupy an uncomfortable middle ground. They are both victims of and collaborators with a system that structurally neglects Latino youth. Armed with limited classroom materials and often outdated equipment and resources, and facing large classes overflowing with overage, at-risk, and underachieving youth, teachers frequently opt for efficiency and the “hard line” over a more humanistic approach. 
The district’s emphasis on quantitative measures and “accountability” to evaluate students’ commitment to school streamlines some aspects of teaching, but at the same time alienates scores of marginalized students. As the distance between teachers and their students widens, any possibility of an alliance between the two evaporates. Isolated from and unhappy with one another, neither party finds much to call rewarding about a typical day at Seguín High School. Students who say and act like they do not care about school mystify teachers; the latter profess great difficulty understanding such attitudes. The possibility that an uncaring attitude might be a coping strategy or a simple facade has little currency among Seguín teachers. My interactions and conversations with students, on the other hand, suggest that youth who maintain that they don’t care about school may often really mean something else. For example, there are many students like Susana, a young woman with a fragile academic self-concept who takes comfort in the thought that she does not really care about learning in school. She protects herself from the pain of possibly failing to do well by choosing to do poorly. My investigation of Susana’s withdrawn attitude (described below) supports, albeit negatively, the caring literature’s hypothesized relationship between the teacher’s apprehension of the student and the sense of academic competence and mastery that should ensue. Mrs. Hutchins, a ninth-grade English teacher, asked me to talk to Susana to find out why she refused to answer when called upon in the classroom. I can only guess that Mrs. Hutchins enlisted my assistance because she perceived my ethnicity as a possible route into Susana’s world. “She always makes faces when I call on her,” Mrs. Hutchins said, explaining her request. Then, she offered a theory about the reasons for Susana’s behavior. “She doesn’t want to be in my class. She may even resent me somehow.” Mrs.
Hutchins had introduced problem-solving techniques into her teaching, but she said that certain students still seemed beyond reach. When she first started teaching at Seguín, her fellow teachers cautioned her that there were many such students. After two years of teaching, she felt she had to get to the bottom of the problem of mentally absent students. I was able to approach Susana as she was settling into her desk just before the bell sounded on the following day. I complimented her on the length and beauty of her jet-black, braided hair and told her I was a researcher studying what students think about school. Susana briefly let down her guard. We exchanged a few words about what researchers do and she told me that when she had seen me the day before she couldn’t tell whether I was a teacher or a student. She became interested enough in our conversation to upbraid a young man who was trying to get her attention. She told him to “Shut up!” because she was busy right then. I told her that I noticed many students who did not participate in classroom discussions when teachers asked them to, and I wanted to know what she thought about that. She took a deep breath and said, seriously, “You kinda’ have to seem like you don’t care because if you say something, and it comes out sounding stupid, then everybody will say you’re dumb. And even the teacher will think you’re dumb, when they didn’t think that before.” While Susana may sound unusually protective of her ego, her thinking is quite logical, inverting the relation of authentic caring and academic competence: a dearth of authentic relations with teachers subtracts, or minimizes, opportunities youth have to develop and enjoy a sense of competence and mastery of the curriculum. My discussion with Susana further revealed that her comportment toward her English teacher was a generalized response to schooling based on several past negative experiences with teachers.
"I've had some bad things happen to me with teachers," she confided. "Like what?" I asked, just as the bell rang. "Oh, lots of things," she said, sneering and pulling backwards as if not wanting to elaborate. Feeling that I was losing her and that our conversation was about to end, I took a chance and asked, "Has anyone ever made you feel like what you said in class was dumb?" "Oh yeah, but not anymore. Na-ah, not me." Susana's withdrawn, defensive posture was most fully revealed in the following statement, which ended our conversation: Once this bad science teacher asked me in front of everybody to stop raising my hand so much in class. And all the students laughed at me. I was trying to learn and he was a new teacher... hard to understand. I felt so stupid... so yeah, that and other things.... Teachers say that they want to talk to you, but I notice that they really don't. I used to get mad about it, but now it's like "What's the use?" Not gonna change nuthin'. If I can just make it through the day without no problems.... So now if something bad happens, I know that I didn't cause it cuz I'm just here mindin' my business. Teachers' repeated threats to Susana's academic self-concept have made her lower her expectations about the likelihood of forming productive relationships with teachers. Because she was open with me, my guess is that Susana is not yet entirely lost; she hasn't quite given up. Later, when I shared what I had learned about Susana with Mrs. Hutchins, the teacher expressed a mixture of frustration, annoyance, and grief over the thought of having to deal with the consequences of Susana's previous teachers' mistakes and insensitivities. "As if teaching were not enough to preoccupy myself with," she sighed, and then continued in a more defensive tone, "It's overwhelming to think that this is the level we're dealing at, and frankly, neither was I trained nor am I paid to be a social worker."
"Well, at least you know more of what you're up against in this situation," I offered. "Yeah, I suspected this would be the case and it's uncomfortable for me to deal with someone who is hard set with the idea that teachers are the enemy." Clearly, in this case both student and teacher resist a caring relationship. The effects of this mutual resistance are not equally balanced, however. Mrs. Hutchins may have to continue to put up with the distraction of funny faces rather than the positive classroom participation she would like, but Susana's adjustment will be much more costly. As her sense of alienation gets reinforced, her willingness to remain even marginally mentally engaged will steadily erode. The individual histories that students and teachers bring to their classroom encounters necessarily influence the chances for successful relationship building. Still, in most cases, there is likely to be some room to maneuver—that is, if the situation is approached literally "with care." However unintended, the story of Mrs. Hutchins and Susana captures a teacher in the very process of closing the door to relationship by privileging the technical over the expressive. Notwithstanding her expressed desire to get at the root of Susana's problem, Mrs. Hutchins' rather self-absorbed, emotional response reveals the limitations of her aesthetic framework. In a contradictory fashion, she is angry about Susana's previous teachers' mistakes at the same time that she resists pursuing a possible solution through the alternative route implied within Susana's schooling experiences—that is, a more relational and compassionate pedagogy. Fine (1991) provides reasons for the technical, aesthetic focus of schools that resonate with this study, in general, and with this teacher's response, in particular.
Fine's investigation of dropouts, undertaken in a comprehensive, inner-city school similar to Seguín, leads her to conclude that teachers are committed to an institutional "fetish" that views academics as the exclusive domain of the school. This fetish supports the status quo by preserving the existing boundaries between the ostensibly "public" school and the "private" matters of family and community. Though Susana's problems appear related to the schooling process itself, Mrs. Hutchins offers her observation that she was not trained to be a social worker as an implicit justification for her refusal to pursue Susana’s situation any further. Such reasoning is persuasive only if one first accepts as real—and right—the hypothesized public-private dichotomy in the realm of education. When real-life concerns are thrust into the classroom, many teachers find themselves in uncomfortable and disorienting positions. They may be called on not only to impart their expert knowledge, but also to deal with barriers to students’ learning of which they may not be fully aware or trained to recognize. If and when they do become aware of these contingencies, time and skill constraints remain. When teaching effectiveness gets reduced to methodological considerations and when no explicit culture of caring is in place, teachers lose the capacity to respond to their students as whole human beings and schools become uncaring places (Kozol 1991; Bartolomé 1994; Prillaman and Eaker 1994). These are conditions under which teachers and administrators may turn resolutely to face-saving explanations for school-based problems. Rather than address the enormity of the issues before them, they take solace in blanket judgments about ethnicity and underachievement or “deficit” cultures that are allegedly too impoverished to value education. These kinds of explanations are often embedded in a larger framework that co-identifies underachievement and students’ dress, demeanor, and friendship choices.
The tendency to place the onus of students’ underachievement on the students themselves has been amply observed in other ethnographic research among youth in urban schools (Peshkin 1991; Fine 1991; Orenstein 1994; Yeo 1997; Olsen 1997; McQuillan 1998). Collective problems are regularly cast in individual terms, as if asymmetrical relations of power were irrelevant. Not weighed against individual students’ proclivities are the larger structural features of schooling that subtract resources from youth (see chapter 5), preempting a fair rendering of the parameters of low educational mobility. This absence of a self-critical discourse unwittingly promotes condescending views toward students, as the following incident reveals. On an overcast winter afternoon a counselor named Mr. Ross and I stand guard by a steel exit door. The final afternoon bell has rung and students begin pouring out of the building. A seemingly endless river of brown faces and bodies pressing against each other spews forth out of several narrow exit doors into the school’s muddied and rapidly vanishing front lawn. A group of three boys tumble by us, jostling one another and calling each other “putos” (whores) and “bitches.” I catch a glimpse of the elastic top band of Fruit of the Loom underwear as one boy tries to knock another down. Mr. Ross shakes his head in disapproval as the boys scurry off with mischievous grins on their faces. The counselor turns to me and confesses that he just cannot understand why Latino youth “do not take school seriously”: I’m just amazed all the time at how much these kids skip and mess around instead of doing their school work. It’s different in the black community. It’s like you grow up expecting to graduate from high school. It’s never a question of whether you’re going to go or not. You just go. . . . I try to help these Hispanic kids. 
I tell them, “Hey, this is the only time anything in your life is going to be free, so take advantage of it.” But, you can only lead a horse to water . . . if they don’t want to be here, what can you do? Mr. Ross’ analysis fails to consider the disempowering nature of the school’s curriculum. Questions of equity persist: entitlement to a “free” public education does not automatically translate into just schooling conditions for all, particularly for poor, minority youth (Kozol 1991). The following section examines how students’ self-representations make them vulnerable to school authorities whose caring for students is oftentimes more centered on what they wear than on who they are. THE “UNCARING STUDENT” PROTOTYPE U.S.-born, Seguín ninth-graders are especially preoccupied with looking and acting in ways that make them seem cool. Males tend to be more involved than females in countercultural styles, but many females share these same preoccupations. Boys wear tennis shoes, long T-shirts, and baggy pants with crotches that hang anywhere between mid-thigh to the knees. Also popular are pecheras (overalls) with the top flap folded over the stomach, dickies, khaki pants, earrings, and, sometimes, tattoos (many of which are self-inflicted) on their hands and arms. Boys, and some girls, may also shave their heads partially or fully. Gold-colored chains, crucifixes, and name pendants often dangle from students’ necks. The tastes of these urban teens closely resemble those of Latino Angelino youth (see Patthey-Chavez’s [1993] ethnography of a Los Angeles high school). The mainstream values of the high school and its school-sponsored organizations tend to assure that high achievers and students involved in school activities will be underrepresented in the ranks of the “uncaring-student” prototype. Average- and low-achieving ninth-graders concentrated in the school’s regular track, on the other hand, are likely to fit the type.
This alignment between student type and student attire leads teachers and administrators, consciously and unconsciously, to read students’ garb as a signal. Although the majority of Seguín students do not belong to gangs, school personnel readily associate certain clothing with gang apparel. Most Seguín parents, by contrast, staunchly maintain that the way their children dress has much more to do with their adolescent need to “fit in” than their proclivity for trouble or their membership in any particular gang. Though the school disapproves of urban hip-hop styles, and views the more exaggerated manifestations as a “problem” that needs to be “fixed,” the school itself cultivates this taste in attire through its Channel 1 television programming—which is accessible in virtually every school space where students congregate. The sight of students huddled around rap exhibitions on TV in the cafeteria or in a homeroom classroom is a familiar one. Not all youth, of course, prefer rap and hip-hop, but the vast majority of U.S.-born youth appreciate it. It is not hard to pick out Seguín’s “hip” urban youth. They strut about campus in a stiff-legged but rhythmic, slightly forward-bouncing fashion and act like they do not care much about anything. This posturing helps mark group boundaries and communicates solidarity. Exaggerated posturing is evident in certain situations, such as before a fight, or when students get into trouble with school authorities, either as a face-saving strategy or to communicate righteous indignation. I witnessed this in a fight that was quickly broken up by the two district security officers on regular duty at the school. The dispute was over a young woman. One boy’s girlfriend was being courted by a male outside of the group.
My field notes reveal how the boys’ posturing demarcated group boundaries and signaled to others that a fight was about to occur: No sooner had I entered the cafeteria than I noticed a student signal to another with an abrupt shake of his head. His friend lunged his head and body backwards as he plowed his hands into his pockets, which hung very low on his hips. His thin frame, erect body, and quick, rigid movements reminded me of those wooden roadrunner toys which simulate drinking when perched at a right-angled tilt. I thought he was reaching for a weapon, but no instrument was drawn or shown. His movements nevertheless grabbed everyone’s attention . . . a display of toughness or righteousness for what was about to take place. A third friend then popped onto the scene from out of nowhere. All three approached a smaller guy, who withdrew into a row of students lined up near the nachos food stand about fifteen feet away. A large crowd quickly formed as a couple of punches were thrown, leaving the solitary student on the ground, scrambling to get himself up. Two school cops bustled through the crowd, yelling, “Break it up, boys! Break it up!” The growing crowd started booing and the boys stopped fighting. All four were hauled off in a matter of minutes. A few scrapes and bruises. No one was seriously hurt. Students who are marginal to the mainstream values of the school overwhelmingly conform to the “uncaring student” prototype. They engage in such deviant behaviors as skipping class and hanging out (lounging in the cafeteria through all three lunch periods is a favorite pastime). Although immigrant youth are typically appalled by the glaring indifference to schooling displayed by U.S.-born youth, whom the immigrant teens view as having become too americanizados (or Americanized), a small but noticeable segment within their ranks is seduced into this style of self-representation. 
Most at risk are youth who have a strong need for acceptance from their acculturated peers. Teachers in the ESL department, in particular, express a great deal of concern for these students who, in the words of one beginning ESL teacher, “wish to assimilate so quickly and so completely that many go too far.” This woman is a very caring Anglo, Spanish-speaking teacher with a clear grasp of her students' political reality and a vivid awareness of the strengths they possess as immigrants entering U.S. schools. She tries to drill in her students' heads the idea that *as immigrants* they are uniquely positioned to succeed. There's no rush [to assimilate and become American]. You're the ones in this school who really and truly possess the capacity to excel. You're the ones who have it all. In such a short time, you will be bilingual. With your intelligence and your skills, you, more than the others, can really make something of your lives. Except for the handful of wayward immigrant youth, a visit to any of the four assistant principals' offices on any day of the week reveals how homogeneous a group the "uncaring," "trouble-making" students are. Although they tend to be mainly ninth-grade males, girls are increasingly well represented. According to one school police officer, whose opinion is widely shared by the staff, "More and more . . . the girls are no different from the guys." My observations during my many visits to the assistant principals' offices reveal a ratio of one girl processed for every three boys. Despite increasing similarities between males and females with respect to overtly deviant acts, the extreme levels of alienation among many U.S.-born females are still most likely to manifest themselves as passivity and quietness in classroom situations. As was manifest in Susana's and even Mr.
Johnson's classroom, females deviate less visibly because they respond to the same stimuli within an uncaring environment in a gender-appropriate, and therefore less physically threatening, manner. The overrepresentation of ninth-graders in this "uncaring student" category is due to three factors. First, many of these students have not yet shed their middle school personae. They are still carrying on with tough, gangster-type attitudes and a clothing style to match. The social pressure to continue in this mode is abetted by the school's high dropout and failure rates, which leave freshmen to make up more than half of the school's total population. Academic failure is so common that in any given year, a full quarter of the students have to repeat the ninth grade for at least a second time. School officials refer to many of these students as "career ninth-graders." Second, because many of the ninth-graders were members of middle school "gangs," loosely defined, they are subjected to intense scrutiny by an aggressive, discipline-focused, "zero-tolerance" administration that tends to approach disciplinary problems in a reactive and punitive fashion. "Withdrawing students for inattendance," for example, is a customary way of handling students like these with high absentee rates. In this environment, even the appearance of gang membership often results in students receiving unwelcome attention from school authorities. A self-fulfilling prophecy develops when youth react negatively against school authorities who breathe heavily on them. Third, upperclassmen tone down their appearance. Tenth- but especially eleventh- and twelfth-grade students make a point of distinguishing themselves from freshmen by dressing differently. Whereas the upperclassmen may still wear baggy jeans or khakis low around their hips, their pants may be pressed and only somewhat baggy. One student I interviewed reminisced about having been a "punk" himself when he was a freshman.
Now that he was a football player and working part-time, he decided that he had to "grow up." Students' informal discussions of their orientations toward schooling and achievement make their teachers' judgments difficult to endorse. As the stories below reveal, Seguín students' definition of education is markedly different from that of school personnel. To varying degrees, the students advance a view that is in line with the meaning of *educación* and conforms to the ideas of caring theorists like Gilligan (1982) and Noddings (1984). Whereas teachers demand caring about school in the absence of relation, students view caring, or reciprocal relations, as the basis for all learning. Their precondition to caring about school is that they be engaged in a caring relationship with an adult at school. Laura's encounter with an assistant principal illustrates the trouble youth get into when a school official does not like the way they dress. Laura had come to school that day wearing a long T-shirt emblazoned with the message, "Give Peace a Chance," against a black background streaked with color. She had coupled the shirt with baggy pants that stopped above her ankles, displaying white socks and shiny, black leather combat boots. She exploded when the assistant principal told her that she had to go home and change her clothes. The following excerpt is from the field notes I wrote that day: As I sat waiting to speak to the assistant principal, a young woman with white makeup walks in screaming, "What! Are you crazy? What does what I wear have to do with anything? I live alone. I work for my money. And not even my parents tell me what to do or wear. And you're telling me that what I've got on isn't good enough? I don't bother anyone when I go to class. I go to class to learn! School should be about me learning and not about what I wear! This is bullshit!" The assistant principal smiled condescendingly, telling her "Now, now, Laura . . ."
and coaxed her into her office where her tirade could not be witnessed by others, including myself. She entered her office, where she continued screaming. She then threw the door open and stomped out of the office all red in the face. Her second outburst, the assistant principal later informs me, earned her a one-day, on-campus suspension from school. I met up with Laura two weeks later at her job at a convenience store several blocks from school. She recognized me and immediately divulged that she was still getting "hassled by the school." Although she needed to work in order to support herself, the school counselors were continuing to refuse to enroll her in Cooperative Education, or in the component of the school's Career and Technology Education (CTE) vocational program that enables youth to work for credit off campus for half a day. They based their denials on the fact that Laura had not taken certain prerequisite courses. She was "in violation of the rules." "So what happens?" Laura asked, rhetorically. "I'm being counted absent every day from three classes to set me up so I'll flunk this semester. They don't even have to say, 'Laura, you're worthless. You should flunk.' All they have to say is, 'We have rules.'" I recommended that she talk to Ms. Trujillo, Seguín's vocational counselor. I knew that in cases where no other options were available, Ms. Trujillo was willing to use her position as the official CTE counselor on campus to prevent students from dropping out of school by giving them jobs through the Cooperative Education component of CTE. The catch here, for which she and CTE have drawn fire, is that slipping students in based on their need rather than their academic qualifications weakens the status of the program, which tries to groom and place students into entry-level, corporate-sector jobs. CTE faculty compete against other high schools inside and outside the district for good jobs for their students.
Allowing "less-qualified" students to enter the program disrupts the highly selective admissions process, which in turn jeopardizes the corporate relationship, since employers begin questioning the shared understanding of guaranteed student quality. When I talked to Ms. Trujillo later and asked about Laura, I discovered that the counselor had indeed performed her magic: "Students come first for me and letting a few squeak through the program is a small price to pay if we can keep them from dropping out. We must attend to our students' needs. This young girl has to work to feed, clothe, and support herself." Laura's conflict with school staff shows the existence of competing definitions of caring. It also makes clear her enormous frustration over being powerless to insert her definition of education into the schooling process. More positively, Laura's story demonstrates the power of a caring counselor who is willing to intervene on a student's behalf, even when that means breaking school rules. The inflexibility of bureaucracies often places caregivers in the problematic position of having to break rules in order to be caring (Fisher and Tronto 1990). Conflicts between surface and substance are a daily occurrence at Seguín, where great attention is paid to what students wear and it is assumed that style and learning are necessarily connected. In 1994, one of the two parent representatives on the school's Shared Decision-Making Committee suggested that the school require students to wear uniforms as one way to defuse this conflict. In an ensuing meeting to address the issue, the principal joked, "Their pants are so baggy and lay so far down on the hips that it's no wonder they don't make it to class on time." In this same meeting, a student leader, speaking on behalf of other students, aggressively challenged the recommendation of uniforms, arguing that teenagers' manner of dress is an important aspect of their individuality.
In the end, the committee decided to enforce the dress code by outlawing baggy pants for the coming school year. When classes began in the fall, however, the school was so overrun with baggy-pants wearers that enforcement of the new provision of the dress code was impossible. This outcome also revealed the school's lack of connectedness to the parents and community who could have helped inform and educate students about the new dress code. Another indication of the fragile nature of teacher caring at the high school is apparent in the case of Carla, a tenth-grader. When she changed her style of dress and choice of friends, she quickly became the object of extraordinary scrutiny from her coaches—despite a seemingly close relationship with them. Carla lives in a one-bedroom house with her sixty-year-old grandmother and her thirteen-year-old brother in an East End area that is also home to many gang members. Abandoned by her mother, who did not want to raise her or her younger brother, Carla has experienced her fair share of suffering. Family life is stressful. She lives under the constant threat of losing her grandmother to a chronic, upper-respiratory illness and her brother to middle-school gangs. The family is on welfare and they barely manage to survive from one month to the next. As Carla speaks, her lower jaw stiffens and her large, brown eyes squint, exposing teeth-gritting strength, the embittering effects of poverty and abandonment, and an intense sense of responsibility for her loved ones. In her own mind, her future is clear. She tells me, with a mixture of determination and confidence, "I plan to get an athletic scholarship and go to college." Although Carla's background makes her an unlikely candidate for school success, she is well connected to the school, both through her participation in the athletic program and her placement in honors classes. Her precarious life in the barrio, however, places her at great risk.
Her relationship to track team members is a key source of continuity and support. The team is a small, tightly knit group that includes the coaching staff as well as the student-athletes. The track coaches treat the girls like family, providing various kinds of help—including money, rides home, and a sympathetic ear when someone wants to talk over a problem. The coaches fear that Carla's recent friendships with "gangster-looking" types at school and her shift toward ganglike attire may jeopardize her dreams of success. In response to a question I posed about why she dresses as she does, Carla states flatly that she has to be able to "fit in" in her neighborhood. She explains that, far from trying to make a statement, she is doing her best not to stand out in her neighborhood. And she sees her friendships quite differently than do her coaches. Carla says that she is merely spending more time with people whom she has known all of her life. Carla's choices vividly convey the relationship between "fitting in" and survival, a connection that other research has documented among high-achieving, low-income African American youth (Fordham and Ogbu 1986). The irony of Carla's survival strategy is that it only works in one sphere: in her neighborhood, she blends perfectly into the scenery; at school, she calls unwanted attention to herself, even though her clothing actually strikes a middle ground. Although her pants are baggy, they are not falling off her body and they are neatly pressed. She does not smoke, nor does she display any tattoos. Unfortunately, these compromises seem to be going unnoticed—or unacknowledged—by her coaches. There is a clear risk in Carla's efforts to negotiate two conflicting identities. If the adults at school view her as separating herself from the academic identity they would prefer that she sustain, Carla may not get the guidance she needs at the point she most needs it.
A breakdown in the process of authentic caring could have extremely damaging effects. Carla's coaches care enough about her to notice apparent changes in her clothes and friends, but they fail to go beyond superficial assessments. Instead of empathizing with Carla's need to be an insider in her own community—as the authentic caring model that Noddings (1984) outlines would have the coaches do—they fall prey to aesthetic caring, emphasizing form over content. They interpret the changes they see in Carla as evidence of her failure to reciprocate their caring. They view her as oppositional, when in fact, she continues to care deeply about her future in the very terms they value. Carla probably should reconsider the decisions she is making about friends and attire, but the "just-say-no" mentality that informs her teachers' judgments is not only unrealistic, but unappealing. Rather than encouraging dialogue and exploration into the complexity of students' lives, it encourages youth like Carla to square off in a defensive posture. Since her trust is not easily secured, a rush to judgment is experienced as heavy-handedness. In the absence of complete information, teachers must rely on students’ self-representations—including changes in their public identities—for signals about their deeper emotional and intellectual states. At the same time, it is important to remember that in some contexts meaning may be severed from representation. What may come across as youthful rebelliousness may be nothing more than youth exploring and finding ways to negotiate their lived experience as ethnic, bicultural human beings (Darder 1991). In an ironic twist of fate, this group’s whole-hearted embrace of American urban youth culture—their grandly successful “assimilation”—is what assures their teachers’ propensity to negatively label them. 
“AMERICANIZED” IMMIGRANT YOUTH

In a schooling context that privileges a North American or English-speaking identity over a Mexican or Spanish-speaking one, there is strong pressure to assimilate subtractively. Because peer models favor the hip-hop attire and comportment that currently characterize urban, dispossessed youth, “American-ness” itself assumes a countercultural connotation. Thus, immigrant youth necessarily emulate a marginal peer group culture when fulfilling their desire to “fit in.” The following situations provide some insights that help explain the finding of “accelerated subtractive assimilation” among some immigrant youth. These youth share many of the same problems as U.S.-born youth. Outside Seguín’s attendance office in spring 1994, I spoke at length with an immigrant mother whose daughter had not attended any classes during the previous six-week grading period. Had a family emergency not brought the mother to campus that day, she might never have discovered that her daughter had been “withdrawn for inattendance.” Until that day, she thought that her daughter had been attending school daily. The mother had approached an attendance officer to find out which class her daughter was in that period, only to discover that her daughter’s name was not listed on any class roster. Because this woman was visibly distressed and the attendance officers were obviously busy with other students and parents, I approached her and offered my assistance. With her arms wrapped around her waist, Mrs. Treviño doubled over and wept softly as I tried to guide her to the nearby steps in the center of the hall where she could sit down. “Ha fallecido mi papá y tenemos que irnos a México, ¿y vengo a la escuela a descubrir esto?” she cried. (“My father has died and we have to go to Mexico, and I come to school to discover this?”) I told her that I was sorry and I suggested that perhaps her daughter could make up the work in summer school.
With an incredulous tone in her voice, Mrs. Treviño vented her anger with the school for failing to notify her of her daughter’s lack of attendance: Uno deja a sus hijos esperando que las escuelas los estén cuidando y no nos informan que nuestros hijos no han estado asistiendo. O esperan que nos digan nuestros hijos. ¿Cómo nos van a decir ellos si ellos mismos son los que están quebrando las reglas? ¿Y qué si le hubiera pasado algo a mi’jita . . . ? ¿Entonces qué? (One leaves one’s children trusting that the schools are taking care of them, and they do not inform us that our children are not attending. Or they expect our children to tell us. How are they going to tell us if they’re the ones breaking the rules? And what if something had happened to my daughter . . . ? Then what?) Mrs. Treviño’s daughter rounded the corner carrying textbooks in her arms. She was in the process of withdrawing from school and was returning her books to the registrar’s office. I realized that I recognized her from a lunchtime discussion during the previous fall semester. I was struck by the incongruity of this young woman being Mrs. Treviño’s daughter. She wore extraordinarily baggy khaki pants—which somehow at the same time clung to her narrow hips—and her head was partially shaved in broad strokes around her ears, exposing olive-brown skin. She sported a tiny, golden nose earring. “Yes, I know you. You’re Elvia, right? Do you remember me?” I asked. She acknowledged me with a slight nod. I also realized for the first time that she spoke Spanish. In a soft voice with her head lowered, the daughter said, “Amá, tengo que entregar estos libros. Ahorita vuelvo por usted.” (“Mom, I have to turn in these books. I’ll come back for you in a minute.”) I also noted that she spoke formally to her mother, using the formal pronoun “usted,” instead of the more familiar form, “tú.” Her mother’s voice trembled as she told her daughter that now she would have to contend with her father.
Looking humiliated, Elvia glanced at me and walked away. *Me da vergüenza cómo se mira y cómo se viste. ¡Verdaderamente, me da vergüenza! ¿Y cómo la puedo llevar a México vestida así? ¿Y con ese aretito? ¿Imagínate? ¡Ni parece mexicana!* Mrs. Treviño lamented. (“The way she looks and dresses embarrasses me. It really embarrasses me! And how can I take her to Mexico like that? And with that little earring? Can you imagine? She doesn’t even look Mexican!”) She then assured me that her daughter did not learn to be this way in their home. The mother added that she had two older sons and an older daughter, none of whom ever caused her any serious problems. *“¡Pero ésta, la más chiquita, es fuerte de carácter!”* (“But this one, the youngest one, is strong-willed!”) The family situation had been very unstable. Mr. Treviño was a migrant laborer who spent part of the year in Michigan harvesting beets and other vegetables. For the previous two years, he had also spent months at a time in Mexico helping take care of his father, who was dying slowly from cancer. The family’s story gushed out of Mrs. Treviño’s mouth as she repeatedly wiped tears from her face. I embraced her and told her not to feel obligated to tell me anything. *“Al contrario, no quiero ser una molestia para usted,”* she said. (“On the contrary, I do not wish to be a bother to you.”) “No es ninguna molestia,” I assured her. (“It’s no trouble whatsoever.”) So she continued, explaining that she worked evenings as a waitress while her older daughter, who held a daytime job, stayed at home with Elvia during the evenings. Elvia continually challenged her sister’s authority and also had her friends over on a regular basis. They spent most of their time talking, although the mother also suspected that Elvia was taking drugs. She noted dryly that Elvia always underestimated her ability to detect the smell of marijuana or to recognize when she and her friends were high. Mrs.
Treviño thought that perhaps she had made a mistake by allowing Elvia to have her current set of friends. But she also admitted that her daughter had shown troubling tendencies since middle school. *“Es cuando empezó a vestirse como un Chicano. Siempre ha sido importante para ella ser aceptada por sus amigas y la influyen mucho,”* she mused. (“It’s when she began to dress like a Chicano. It has always been important for her to be accepted by her friends, and they influence her a great deal.”) By the time Elvia entered high school, Mrs. Treviño had decided not to make an issue of her daughter’s attire. The haircut and the earring were recent additions. They had appeared this year when Elvia began hanging out with friends who spoke to one another either in English or in “Spanglish,” a dialect that uses both languages. She lamented that although she speaks to Elvia in Spanish, Elvia responds primarily in English: *“Y sí puede hablar el español pero parece que no le gusta.”* (“And she can speak Spanish but it seems as if she does not like to.”) She found it strange that the daughter she had attempted to spare from the kind of hardships the family had endured earlier was the same one that she had now “lost.” I asked her to elaborate. How had she “lost” Elvia? The mother prefaced her explanation by saying that she was convinced that the success she had with her other children (all of whom had graduated from Seguín) was related to their prior schooling in Mexico. She left her children with their grandparents until they completed *primaria* (grade school). She remarked that leaving the children in Mexico while she and her husband lived and worked in Houston had been very hard—but it was impossible with her youngest. Then Mrs. Treviño sighed. Smiling faintly, she confessed with rueful affection, *“Me la traje conmigo. Yo no pude dejar a mi bebita, mi Elvita.”* (“I brought her with me. I could not leave my little baby, my Elvita.”) A few moments later, Elvia returned. I asked her if everything was okay.
“Yeah, I just checked out of school,” she replied, her eyes glistening, as if she might burst into tears at any moment. “Don’t worry, Mom,” she said, “I’ll make it up. I’ll take summer school. I promise.” With a look of disappointment on her face, her mother shook her head and rolled her eyes in disbelief. When her mother stepped away to use the restroom, I was able to talk to Elvia alone.

AV. It's pretty bad, huh?

ET. Yeah, I can't stand it. . . . I wish she was mad at me instead.

AV. So what's the problem? Why haven't you been going to classes?

ET. I just don't like school and I used to like it. I just can't get into my classes this year. They're all so boring and no one seems to care if I show up. And then they talk down to you when you do show up.

AV. What do you mean?

ET. It's like all of our teachers have given up and they don't want to teach us no more. In one class, I had a sub [substitute teacher] for all the time I was there, for four weeks! And he can't teach us nothing because he don't know math. The dude tried but that wasn't good enough, man! God, it just kills me to give that man even just a little bit of my time! If the school doesn't care about my learning, why should I care? Answer me that. Just answer me that! A friend of mine dropped out of high school, took her GED, and went on to college. I tell my Mom that's what I want to do, but it's like she don't get it.

AV. So what was your brothers' and sister's experience here at Seguín?

ET. They just took all the crap you get here. It's like, "You're Mexican; take crap." Well, man, I got some pride and self-respect. "Sorry to disappoint you, but this Mexican don't take crap." Mexicans who do, embarrass the hell out of me. I just want to tell them, "Lay off the humble trip, man. You some damn Indio (Indian) or something?"

Elvia's anger with and alienation from schooling are unmistakable.
Her questionable choice of friends combined with insufficient parental monitoring to influence her disaffection from schooling. She was in need of much more concerted attention than an older sister could provide. Whereas certain "shortcuts" or compromises taken within families may be expedient and perhaps unavoidable at the time—especially for poor families struggling to make ends meet—the end result can be disastrous. Further complicating matters at school was a lack of authentic caring, as well as, in one of her classes, a lack of even aesthetic caring that stretched Elvia beyond her limits. Her story made me wonder whether, if she and I traded places, I would be able to tolerate such a bad situation for very long. Elvia's case also brings to the foreground a schooling strategy that is increasingly common among youth in HISD schools: they drop out of high school, secure a General Equivalency Diploma, and enroll in community college.[4] Elvia's dramatic departure from her siblings' educational experiences is partly evident in her unflattering portrayal of Mexican immigrants. She sees them as spineless individuals, lacking in "pride and self-respect." She further attributes this weakness to cultural factors—that is, to their "Indian-ness." Although expressed off-handedly, Elvia's dismissal of immigrants reveals the complexities of a colonized mestiza (Spanish and Indian mixed-blood) undergoing a personal decolonization process. Even as Elvia asserts her Mexican identity in a U.S. context, she negates her indigenous ancestry. "De-Indianization" is a manifestation of the subtractive assimilation processes that operate at a transnational level wherever indigenous communities are viewed with contempt.[5] I never saw Elvia again, but I noted with relief and pleasure that her name was included on the school roster in the attendance office the following year.
Rapid cultural assimilation, marked by a strong orientation either to the peer group or to the culture of the peer group, characterized every immigrant youth I observed who conformed to the "uncaring student" prototype. The contrast in language, clothing, demeanor, and other cultural markers between these young people and their parents is stark. Whatever its source, a need for acceptance by the more Americanized peer group appears to contribute to youths' accelerated effort to assimilate. In the handful of cases of rapidly culturally assimilated students I observed, the most vulnerable youth within the immigrant generation were those who had been born in Mexico or Latin America but who had lived most of their lives and had been schooled in the United States. These teens, of whom Elvia is a striking example, more closely resemble their U.S.-born peers than their immigrant counterparts and are referred to in the literature as "1.5 generation" youth (Vigil 1997). Still, even the recently arrived are sometimes drawn into accelerated cultural assimilation, as a story told to me by an immigrant mother, who was also a custodial worker at the high school, illustrates. Mrs. Galvez, a single mother with three sons, had been living in the United States for approximately six years. She told me that her oldest son, Ignacio, was her biggest worry. In fact, he was the reason that she had taken a daytime job at Seguín as a custodial worker over a higher-paying evening job at a shipping company. She worked the same hours that her sons attended high school so that she could also be at home with them at night. Notwithstanding his mother's efforts, Ignacio dropped out of school at the beginning of his tenth-grade year "porque no le gustó" ("because he didn't like it"). Mrs. Galvez believed that her son's decision was partly a consequence of her divorce from her husband, which had been finalized nine months earlier, during the fall semester when Ignacio dropped out.
When his father returned to Mexico shortly after the divorce, Ignacio became withdrawn and depressed. Mrs. Galvez was currently trying to get her son to return home; he was living with a young white woman whom he had met at a rock concert. He had moved in with this woman after he and Mrs. Galvez had argued over a one hundred dollar bill that had gone missing from his mother's purse. After Mrs. Galvez had tricked Ignacio into admitting that he had taken the money, she told him that if he wasn't going to attend school, he would have to get a full-time job to help support the family. The argument escalated and then ended abruptly when Ignacio bolted from the house. About one-and-a-half months had passed since the argument. Mrs. Galvez still had not seen her son, but she had spoken to him the day before. "Creo que ahora quiere regresarse a la casa," she told me ("I believe he wants to return home now"). She felt that Ignacio was punishing her for the divorce. From her perspective, the divorce had been a matter of self-respect. Her husband had been unfaithful and had taken several queridas (lovers) over the course of their marriage. "También le gustaba la tomada," she added ("He also liked to drink.") Her ex-husband's philandering had made home life stressful. His most recent affair proved to be more than Mrs. Galvez was willing to bear. She ordered him out. Sadly, in ridding herself of an abusive husband, she also rid her children of their father. They lost contact with him after he returned to Mexico. Plainly, Mrs. Galvez summed up the situation: "Ni una sola llamada. Esto es lo que mas ha afectado a mis hijos, especialmente al mayor." ("Not even a single call. This is what has most affected my sons, especially the oldest.") Ignacio may have been angry with his father for not keeping in touch with the family, "Pero también lo extraña mucho," Mrs. Galvez observed. 
("But he also misses [his father] a lot.") She was not completely sure why of all her children it was her oldest son who seemed the most affected by the father's absence, but she thought that it probably had something to do with the fact that they were very close when Ignacio was just a child, before the extramarital affairs began. Not having any relatives living nearby may have been another contributing factor. I asked her to elaborate on her son's changes. She said that he had undergone a fairly rapid transformation. "No lo podía creer!" ("I couldn't believe it!") she exclaimed. Several months prior to the divorce, but when his parents were already separated, Ignacio stopped wearing belts and began wearing baggy pants and black T-shirts adorned with what looked to Mrs. Galvez like diabolical designs and messages. He began listening to heavy metal music for hours, stretched out on his bed, with his tape recorder headset on. He would come out of his bedroom long enough for a quiet dinner and then retreat to his room once again. After he dropped out of Seguín, Ignacio cultivated the habit of sleeping through the day and staying up through the night. This schedule left his mother uncertain about exactly what he was up to. Though he had taken money from her, she suspected that he was getting money from somewhere besides her purse, because he could afford to attend concerts. He told her that he earned money doing odd jobs as a day laborer; she could never confirm this because she was always at work. Several tattoos appeared on his upper arms and he had his head partially shaved. Ignacio had transformed his bedroom wall into a giant mural, filled with pictures that he tore out of rock music and car magazines. What most worried his mother was that, except for the girlfriend, Ignacio did not seem to have any friends, "Y rehusa hablar conmigo" ("And he refuses to talk to me"). I suggested the possibility of either individual or family counseling. Mrs. 
Galvez said that she had already gotten a referral for her son to see a professional counselor from Communities in Schools (CIS), but Ignacio had failed to show up for his scheduled appointment. That Ignacio had requested the appointment himself suggested that he knew he was in need of help. Mrs. Galvez commented on how hard it is to raise children in this society. In Spanish, she said, “There’s so much confusion. In trying to be someone else, Ignacio forgot who he was.” She wished that the school could help her son, and others like him, with the kind of counseling they need to make wiser decisions. As a high school custodial worker, she sees how many youth are tempted to go off in the wrong direction. Her conversation with her son earlier that day had left her hopeful. “Espero que regrese pronto a la casa. Creo que ya se le ha acabado el dinero,” she confided. (“I hope he returns home soon. I think he has already run out of money.”) “Sería bueno,” I agreed. (“That would be good.”) Growing nervous about the time we had spent talking, Mrs. Galvez shook my hand, and then quickly made her way to the nearby teachers’ lounge, clutching her dust-mop. Both Elvia and Ignacio express a need to belong or to “fit in” to the peer group of the dominant culture. The more worrisome of the two teenagers is the solitary and despondent Ignacio, who recognizes that he has a serious problem and needs help, but can’t quite make it happen. Since schooling factors distinguished Elvia from her siblings, I made a point of inquiring further about Ignacio’s schooling experiences the next time I ran into Mrs. Galvez. Mrs. Galvez said that Ignacio had been a diligent student in Mexico but that the primaria (elementary school) he had attended through the fifth grade had not been particularly good. Ignacio had attended a rural school near the family’s home in the countryside outside of San Luis Potosí, Mexico.
Low teacher salaries and poor working conditions, especially very large class sizes, resulted in a high attrition rate among the school’s students and staff. The lack of staff had resulted in the school being closed for half a year when Ignacio was to have entered the first grade. Though she had not given it much thought because Ignacio never talked about school, Mrs. Galvez said that Ignacio’s prior learning experiences in Mexico may have contributed to his apparent unhappiness with school in the United States. Ignacio serves as an important reminder that not all immigrant youth are able to translate their schooling experiences in Mexico into a positive schooling experience in the United States, as the following chapter finds is generally the case. For this translation to occur, schools in the homeland must have been accessible and able to provide youth with continuous learning experiences. In the absence of such a foundation, immigrant youth are at great risk of dropping out of school. If, in addition, they strive for rapid cultural assimilation, the result may be acute maladjustment. In their rush to claim a new identity, these young people become marginal not only with respect to the academic mainstream, but also in relation to their family’s social identity. This dynamic is clear in both students’ cases; their mothers agonize over the “loss” of their children. In each case, what had been lost was the child’s Mexican cultural identity. Though revealing the importance of prior schooling experiences, the preceding discussion also highlights the interplay between subtractive cultural assimilation and student disaffection. With her wish for a school-based “cultural therapy,” Ignacio’s mother conveys her recognition of the destabilizing consequences of rapid cultural change, as well as her belief in schools’ potential for playing a productive role in helping youth negotiate their emergent cultural identities.
Her son’s psychological well-being could have been better protected had the school mediated a discussion of the potential pitfalls that exist in the dominant culture, as well as the dangers attendant upon the attempt to assimilate very quickly. From a critical perspective on biculturalism (Darder 1991, 1995), students’ “choices” in identity, however constrained, are optimally premised on an affirmation of the new identity that effectively expands one’s cultural and linguistic repertoire. A “choice” based on a disaffirmation of self—that is, of one’s original identity—is hardly a choice at all, since this set of options pits one culture against the other. Expressed differently, the two cases reveal the alienating consequences of schools’ failure to be additive, that is, to confirm the language, history, and experiences of the cultural “other.” If some immigrant youth are susceptible to the messages that demean their worth, how much more vulnerable are U.S.-born youth—whose Mexican identities are often less firm—to such messages? The following section examines some of the ways in which students resist these messages.

“NOT CARING” AS STUDENT RESISTANCE

What looks to teachers and administrators like opposition and lack of caring feels to students like powerlessness and alienation. Some students’ clear perception of the weakness of their position politicizes them into deliberately conveying an uncaring attitude as a form of resistance not to education, but to the irrelevant, uncaring, and controlling aspects of schooling (Callahan 1962; LeCompte and Dworkin 1991). Take Frank, for example. Frank is an unusually reflective ninth-grader. As a C-student, he achieves far below his potential. One of Frank’s teachers, Mr. Murray, tells him that if he would only apply himself more, he could prepare himself well for college. Instead, Frank exerts himself only when a classroom assignment happens to interest him. Mr.
Murray, who correctly noted and followed up on Frank’s interest in science, has become the boy’s mentor and sounding board. Whereas Mr. Murray sees Frank as he truly is—as a “thinker”—his other teachers generally perceive him as passive and indifferent. In a very thoughtful, intense discussion with me, Frank explained his approach to schooling:

FRANK. I don’t get with the program because then it’s doing what they [teachers] want for my life. I see Mexicanos who follow the program so they can go to college, get rich, move out of the barrio, and never return to give back to their gente [people]. Is that what this is all about? If I get with the program, I’m saying that’s what it’s all about and that teachers are right when they’re not. Except for Mr. Murray, I don’t care what teachers think because then they can control me.

AV. Does Mr. Murray control you?

FRANK (smiling). He does make me think about college but I still ask myself for what. I could go to college if I wished, but for what?

For Frank, not caring constitutes resistance to teachers, school, and a curriculum that he views as meaningless because it is not helping him to become a “better” person, that is, a socially minded individual who cares about his community. Moreover, teachers’ definition of caring—which involves a commitment to a predetermined set of ideas—is, in his eyes, equivalent to cultural genocide. Success in school means consenting to the school’s project of cultural disparagement and de-identification. Frank is not unwilling to become a productive member of society; he is simply at odds with a definition of productivity that is divorced from the social and economic interests of the broader Mexican community. With his indifference, Frank deliberately challenges schooling’s implicit demand that he derogate his culture and community.
Frank’s critique of schooling approximates that of Tisa, another astute U.S.-born female student whom I came across in the course of my group interviews (“Friends from the ’Hood” group). When I asked Tisa whether a college education was necessary in order to have a nice house and car, and to live in a nice neighborhood, she provided the following response:

You can make good money dealing drugs, but all the dealers—even if they drive great cars—they still spend their lives in the ’hood. Not to knock the ’hood at all... If only us raza [the Mexican American people] could find a way to have all three, money... clean money, education, and the ’hood.

In a very diplomatic way, she took issue with the way I framed the question. Rather than seeing success and remaining in one’s home community as mutually compatible options, Tisa interpreted my question in either/or terms that in her mind unfairly juxtaposed success to living in the ’hood. That I myself failed to anticipate the question’s potentially subtractive logic—at least according to one legitimate interpretation—caused me to reflect on the power of the dominant narrative of mobility in U.S. society—an “out-of-the-barrio” motif, as it were (Chavez 1991; but also see Suro 1998). These findings bring to mind the ethos that Ladson-Billings (1994) identifies as central to culturally relevant pedagogy for African American youth. Specifically, effective teachers of African American children see their role as one of “giving back to the community.” Returning to Frank, his relationship with Mr. Murray inspires hope. His teacher reminds him that he does really care about education. Because his other teachers do not distinguish between schooling and education, they are unlikely to notice and nurture Frank’s interests the way Mr. Murray has. I asked Frank if he ever expresses his very thoughtful and important opinions in class. He says no, explaining that he’s sure that he’d never get any “backup” from other students.
“Mexicans are too damned polite, taking whatever it is the teacher tells them. It’s like you say something and it’s like you never said anything when no one says, ‘Yeah, Frank, what you said was right.’” “Why don’t Mexicans speak up?” I question. “Because they’re afraid of what the teacher will say, or they think other students will laugh at them, or maybe it’s like no one ever does, so what’s the use?” Aggravated, Frank asserts, “It doesn’t matter to speak up anyway. For what? What’s the point? So I never open my mouth.” As critical as Frank is about the subtractive nature of the curriculum, his relationship with Mr. Murray illustrates that, at least in the short term, there is a possibility of salvaging disaffected youth through a caring relationship. Mr. Murray demonstrates genuine interest in Frank as a person. Most of the time the two spend talking, they focus on topics of interest to Frank; sometimes these include science, sometimes not. The mainstream curriculum is thus demonstrably accessible through a route responsive to students’ definition of caring, that is, caring as relation. Rodrigo, a senior male, provides an even clearer example of how some students use “not caring” as a strategy of resistance. Though capable of excelling in honors classes, he chooses to remain in the regular curriculum to which he had automatically been assigned after transferring to Seguín from a magnet school in another area of the city. Besides being an avid reader, Rodrigo has been writing poetry and prose for much of his young life. Wellsprings of inner strength emanate, in great part, from his role in his family’s protracted struggle with his mother’s long-standing comatose condition. “The last time I saw my mother was in kindergarten,” he reminisces, referring to the last time he saw her as a whole, healthy person. After seeing Rodrigo off to school one day, she went to the hospital for a routine hysterectomy.
During the operation, human error resulted in oxygen loss to her brain, causing extensive brain damage. Despite a decent monetary settlement and the passage of more than a decade, neither Rodrigo’s father nor his two older half-sisters and half-brother have fully recovered from this catastrophe. Rodrigo’s breadth of knowledge of Chicana and Chicano literature easily rivals that of any college graduate specializing in this field. Not only does he have detailed knowledge of poetry and fiction, poets and authors, but he also knows which publishers are the most progressive on questions of multiculturalism. He has an expansive portfolio of written works, parts of which he takes to high schools and community gatherings where he has been invited to read. Gifts of books from publishers, professors, and other donors stand on shelves alongside those he purchases, filling a large space that he refers to as his “library” in his backyard garage. Rodrigo laces his conversation with lines of poetry from various works, including his own. A memorable verse from one of his poems, titled “Woman,” brought tears to my eyes as it flowed sweetly from his mouth: “I have touched Mexican women, but not as much as they have touched me.” Personal tragedy, coupled with his literary expeditions, has made Rodrigo the feminist he is today. When he and I first met, Rodrigo was involved in preparations to teach a multicultural literature class after school to at least ten fellow students who had expressed interest. Although he secured the principal’s permission to teach the class, in the end, Rodrigo’s plans came to nothing. The principal was unable to come up with the funds needed to cover the cost of the text Rodrigo wanted to use. The process of preparing the class was an education in itself. According to Rodrigo, when he came into contact with teachers at the high school who had not met him before, they wondered where this remarkable young man had come from.
Some wondered, as well, whether he might be half white because of the lightness of his complexion. Rodrigo was insulted by the implication that a dark-skinned Mexican could not be either as gifted or as accomplished as he. One of the aims of his course was to combat just that kind of stereotyping, as well as other negative images teachers held toward his fellow students in the regular curriculum track: They have this image of kids, that we are just messed up in the head. That’s not really true because many students here—I think their intellectual ability is just too high for them to be in regular classes, but they don’t enter honors classes. There are people out there who just think that we are into sex and drugs. That’s not true. I can’t say that I’m just one exception because there are many exceptions. At this school, there are many students, but some teachers at this school... I'll start saying this because it's true. Certain teachers say, "No, let's not read this. This is too hard for these kids. No, let's not read John Keats. No, Shakespeare's Hamlet. Let's show the movie or let's not learn about Excalibur. Let's not read it, but let's watch the film." That's something that I see, always some other kind of source that they turn to that is some kind of a secondary source, something that is not on level, but a little bit more basic. Rodrigo's decision to remain in the regular track at Seguín was influenced by his disappointment with the magnet program at the high school in which he had been enrolled before transferring to Seguín. "There they paid more attention to the grades rather than to your thinking ability," he said. One result of this narrow focus, Rodrigo observed, was that although "kids have good arguments... they have absolutely no argument skills. The only argument they have is probably to curse. Say the F-word and that's it." 
He added that if it were not for his commitment to self-education, he would never have realized how wrong-headedly schools approach their mandate to educate. He further speculated that it was his independent-mindedness that made school tolerable and kept him from dropping out. He blamed widespread academic failure on the administration and teachers, not on the students. Schooling was thus an obstacle to Rodrigo's education and his devaluation of scholastic achievement represented his silent rebellion against uninspiring curricula, misplaced priorities, and teachers' lowered expectations. Health was the class he valued the most at Seguín. In a pragmatic tone, he remarked, "Health is important to keep your body maintained." Rejected from Rice University and the University of Houston because his high school grades and SAT scores were low, Rodrigo enrolled as an undergraduate student at Kenyon College, a prestigious liberal arts college in the Midwest. He found out about the college from an information brochure he plucked out of the wastebasket in a Seguín counselor's office. The school looked beyond the "objective" data of grades and raw scores and admitted Rodrigo on the basis of his vast and creative intellect. The earlier rejections still rankle, however: U of H told me that I needed to apply through special admissions. I told them, "No! Look at my portfolio. This is who I am and what I can do. If I didn't do well in school, it's because I didn't care about school. It wasn't challenging. Accept me for who I am, not for some number or letter on a piece of paper." Rodrigo's words and experiences summarize students' experiences, generally, of profound alienation from, and hostility toward, uncaring bureaucracies. 
Universities' and colleges' insistence on evidence of student conformity to the high school curriculum, regardless of whether that curriculum is challenging and supportive or degrading and meaningless, closes off an important avenue of advancement for many potentially productive youth. There is little reason to bother aspiring to higher education if the price of admission must be prepaid in yearly installments of humiliation and alienation. Making schools and schooling affirmative, truly educational experiences for all students requires implementing changes that reach deep into the structure of the educational system. Using daily life at Seguín as a guide, the first and arguably the most important step is to introduce a culture of authentic caring that incorporates all members of the school community as valued and respected partners in education. The next section explores some of the positive effects that emerge when teachers and teachers, as well as teachers and students truly connect with one another. CARING AND PEDAGOGY The art of initiating a relationship is well expressed through the words of one of Seguín's most beloved social studies teachers, Ms. Aranda. In my interview with her, she conveys her philosophy of teaching as caring: Kids have to know the line so that they know not to cross it and so they know that they've crossed it. Whenever students are acting up, I take them out of the classroom and ask them, "What have I done that would cause you to act that way?" This question always disarms them because usually they can't imagine that me, a teacher, would suggest that I had done something wrong. And then after they say either yes, that I was the problem because they thought I was picking on them in class or no. I ask them what it is that's causing them to act in the way that they do? I always try to work things out with them individually. Sometimes, kids have certain problems that make me work out a personal arrangement with them. 
Like if they work a lot at night, I may tell them that they don't need to take a test but that they could be evaluated by pursuing another kind of project. What's important is that they need to know that I am fair, that I will listen to them, that they can come to me and talk and deal with a problem. The need for a culturally sensitive curriculum is not lost on Ms. Aranda, who works at structuring her classes so that all students feel included: ESL kids are the most shy and they benefit a lot from group activities. I provide opportunities for them by giving them the chance to bring something of interest from their country for show and tell. This gets them talking. I also provide opportunities for them by allowing them to work on assignments bilingually—like a bilingual newspaper. Or I allow them to write a story about an event that goes on along the border. So the paper might deal with Piedras Negras or something like that. Since Ms. Aranda is also the chair of the Social Studies Department, her leadership is key. The department's collectivist, team-building approach makes it one of the stronger academic departments on campus. Consider further Ms. Aranda's winning strategy: Collaborative planning with teachers is essential. Teachers need to share with other teachers, exchange information and ideas, and they need to feel supported for their efforts. Teachers who have less time don't necessarily have to be creative. They just have to be able to copy. So we meet a lot, which is something that other teachers don't do. And so while it might seem like an extra demand that's placed on them, it gets passed off as support because we all happen to get along. The productive power of healthy professional relationships rings clear in Ms. Aranda's account. She exemplifies the desirable qualities Assistant Principal Ana Luera is attempting to develop in other teachers. Interestingly, one advantageous factor was mentioned by faculty both inside and outside the Social Studies Department. Students' own words attest to Ms. Aranda's caring: . . . flunked her but Ms.
Aranda helped her catch up. If something like that came up with me, I know I could go to her with it. (Second-generation, ninth-grade female student) Like I like the way Ms. Aranda is nice to the ESL students. It’s like they just got here and they need special help. They got to do some stuff [assignments] in Spanish and we all learned. It’s nice to see your language be part of your learning. It’s like, wow! That’s me, my culture, my language. . . . She’s gente [good people]! (Third-generation, ninth-grade male student) Some of the most compelling evidence that students do care about education despite their rejection of schooling is found among the great number of students who skip most classes chronically, but who regularly attend one class that is meaningful to them. Terry is a good example of this group. Although his overall attendance is erratic, he never misses his mechanics class. Auto mechanics, taught by Mr. Lundgren, is the only class where he feels he really learns something. Mr. Lundgren confirms that he sees many boys like Terry. He tells me that these boys find most of their classes irrelevant and thus consider them unimportant: “Mechanics is more closely connected to their sense of the future than their academic classes.” Mr. Lundgren is certain that were it not for the CTE vocational courses, many more students would find school meaningless and drop out. His sentiments are shared unanimously by Seguín’s other CTE teachers. My extensive observations of the CTE program lead me to conclude that the acquisition of work skills is compatible with the acquisition of both academic knowledge and an aspiration for postsecondary education. Most CTE teachers make a point of positively reinforcing the academic curriculum. They feel misunderstood by their colleagues in mainstream academic fields, who tend to dismiss the CTE program on the mistaken grounds that it is insufficiently intellectually rigorous. 
Several CTE teachers told me that they suspected that part of the reason for the disdainful treatment they often receive from other teachers and administrators was simple envy: CTE staff earn higher salaries, teach smaller classes, and have final say over which students may enroll in the higher-level courses they teach. Mr. Lundgren provides a good model of a positive interface between the academic curriculum and the CTE program. He pays close attention to his students’ writing. When he assigns a descriptive paper on internal combustion, for example, he knows that the majority will find the subject interesting and thus he expects—and requires—that his students produce well-written papers. In addition, after he grades the papers, he gives every student a chance to rewrite the assignment if they want to try for a higher grade. Because Mr. Lundgren provides a detailed evaluation on each paper he hands back, most students take advantage of the opportunity to rewrite. Few settle for a poor grade on a written assignment. In some cases, Mr. Lundgren gives his Spanish-dominant students the opportunity to do the assignment in Spanish. He mentioned a female student whose poor English-language skills would have made the paper assignment overwhelming. “She struggles a little bit but she does read a little bit in English.” For the most part, language is not a barrier for Mr. Lundgren, partly because he understands some Spanish, but also because he makes use of other students in his class. “The ones who don’t understand [English], I know who they are and they’re sitting next to a friend of theirs who translates to them and tells them what I expect,” he says. While I found his capacity and willingness to reach out to students extraordinary, Mr. Lundgren could not have been more unassuming about his approach: “My goal is to get them to write and what language they write in makes no difference to me.” Mr. 
Lundgren regularly counsels students, advising all—and convincing a few—that to be good mechanics, they need math and that to be able to run their own auto shops, they need to be able to read and write well. Mr. Lundgren indicated to me that what Terry (and others like him) needs is someone to care enough to take the time to help him see the connections between what he learns in school and what he wants to do with his life. The virtues of a standardized curriculum that middle-class youth take for granted are difficult for the Terrys of the world to appreciate. Terry’s behavior is his critique of schooling, namely, that it is meaningless, unrewarding, and irrelevant to his life. Terry did change his behavior the following semester, largely because of Mr. Lundgren’s advice, encouragement, and gentle prodding. Whereas Terry skipped constantly before, he now religiously attends all of his classes. He now desires to work toward the goal of owning his own auto mechanics shop someday. Like the scores of youth who skip every single class except the one where a caring teacher may be found, Terry's renewed interest in school is directly attributable to Mr. Lundgren's connectedness to him. Though I never pursued the issue, Mr. Lundgren made me contemplate the effects of an inclusive pedagogy that respects all youth regardless of their linguistic abilities. While the immigrant youth he mentioned directly benefited, it is easy to imagine that his capacity to work with youths' differences has contributed to the authority he commands in the classroom. Since relationships with teachers like Mr. Lundgren are often either short-lived or nonexistent, however, Seguín would do well to heed Noddings' (1992) call for continuity (in place, people, and curriculum). Such continuity permits the development of trusting relationships and keeps students from turning exclusively to peers and to strategies for academic survival that often increase their marginalization.
WHEN TEACHERS DO NOT INITIATE RELATION Students' desire for reciprocal relationships with adults at school is tempered by their experience, which teaches them not to expect such relationships. As Noddings (1984) has noted, students' weak power position relative to school personnel makes it incumbent that the adults be the initiators of social relationships. Mark, an academically average ninth-grade student, explains why he is content to achieve far below his potential: Mark. It's cool to look like you don't care 'bout nuthin' 'cause then you're bad. Maybe some students act that way to get at the teachers, I don't know. I do it just to be cool, I guess, though I don't really think about it. AV. But underneath, you really care about school, huh? Mark (pausing). Yeah, I guess so. AV. You had to think about that. Mark. I know like school is good for me, but there's lots of things I don't like about it. AV. Like what? Mark. I don't know, I can't explain. AV. Like your classes? Mark. The teachers . . . they're not bad. It's just that they're not good. Further discussion elicited the basis for Mark's assessment. He had attended a Catholic private school during the eighth grade because his parents were concerned about his declining grades and the rowdy set of boys he had befriended. He told me that he had accepted his parents' decision because he was not learning much in his middle school anyway. With each addition to his story, Mark's thin layers of aloofness and defensiveness dissolved, exposing an impish personality. I began to anticipate a "punch line." He said that he had really enjoyed his one-year stay at the school, and he would have continued, except that his parents could not afford the tuition after his father had lost his job as the manager of a small business. Mark recalled how the interest that one of the nuns, Sister Mary Agnes, took in him helped him to discover that he had an instinctive talent for world geography. 
"I can name you the capital of almost any country in the world," he boasted. "What's the capital of Ireland?" I quizzed. "Dublin." "Zaire?" "Kinshasa." "Honduras?" "Tegucigalpa." "Excellent!" I exclaimed, simultaneously realizing that it was this unusual talent for geography that was his punch line. "I don't know why, it just comes to me," he said, snapping his fingers as the ends of his lips turned downward, with pride. "I know all the states and capitals in the U.S. and Mexico, too." The pleasure apparent in his now-radiant face contrasted sharply with the studied nonchalance he had displayed at the beginning of our conversation. "She took me just like I was, you know, like I don't want to be pushed to do things, like I need time to think about it," he continued, explaining his relationship with Sister Mary Agnes. Most importantly, she let him use her computer with the world atlas software on it. "I liked it so much! It'd be just me 'n her after school sometimes," he reminisced. Stimulated by his year with Sister Mary Agnes, Mark has become an avid map collector. During his family's summer trip to and from Mexico, he applied his newly developed talent by assuming primary responsibility for navigating. To encourage his interest, Mark's parents promised to buy him a world atlas for his next birthday. He regretted losing touch with his former teacher, paying her homage with his description: She was "really, really cool," with all her students. "No one here is like the Sister," he added, softly. "She liked you no matter how you were or how you looked." I asked Mark whether he had a map for his life. He said that he would like to do something connected with maps or travel. "The Sister said that I could be a plane pilot and I liked that," he said, smiling. "So you'll need to go to college first," I suggested. "Yeah, she talked to me about that, too." I hoped that Mark would really do as I asked when we parted—keep reaching for the sky. 
Sister Mary Agnes' capacity to accept her students unconditionally had a profound impact on Mark's life. This aura of acceptance lured him into her sphere; but it was the nun's quick apprehension that Mark needed a chance to work alone and at his own pace that brought out the very best in him. Mark learned much more than world geography from Sister Mary Agnes. Her authentically caring attitude set him free to discover some important things about himself. Not only was he an unusually talented geographer, he was also a special person, capable and worthy of the friendship of the "really, really cool" Sister Mary Agnes. It remains to be seen whether Mark will experience any similarly affirming relationships during his years at Seguín. The thinness of his aloofness and the strength of his newfound talent provide some hope that another perceptive teacher will continue where Sister Mary Agnes left off. Until this happens, Mark's peer group will be his most prominent source of school-based connectedness. However understandable, even justifiable, students' "uncaring" attitudes can make them not merely vulnerable, but virtually invisible, as Mark's and now Ronny's cases demonstrate. I met Ronny, a tall, heavy-set, wannabe gangster with a short-cropped crew cut, during a visit I made to his ninth-grade English class. He denies being in a gang, but his two best friends are known to be involved in gang activity. Ronny has been a good reader since elementary school, but he fails to complete half of his homework assignments because they bore him. At home, he reads mystery novels. At school, he shares the stories' plotlines with his friends, who think he's smart. The English teacher tells me that Ronny never speaks a word in class, though he attends daily. Holding stacks of papers to grade, the teacher sighs, "He just sits there in the corner, and I figure I'll leave him alone if he leaves me alone."
Ronny's tough appearance makes him seem unapproachable, even to other students; his teachers never call on him. Ronny prefers the status quo. When I see him later, during his lunch hour, we converse and he is surprisingly friendly. I ask him why he even goes to class if he doesn't participate. He said that he had always "gotten by" with just going to class. "For all my teachers, it has always been enough—and it's funny how they never, never call on me." "Maybe because you look scary," I think to myself. "But you're a smart guy," I insist, "why don't you give school greater importance?" "Well, my friends think I'm smart, but I'm not so sure." "Don't you like to learn?" I ask. "It's not that I don't want to learn, it's what I learn that matters. Maybe I'm lazy, but teachers could also make school more fun. And besides, I'm doing what I have to do to not flunk and I never do flunk." "Since you know how to pass and beat the system, why don't you think about going to college?" I ask him. "I don't think I could do it. My cousin went. He even had a scholarship and he dropped out after the first semester . . . said it was too hard. He graduated from here, too, and he's smarter than me so I don't think I could handle it." I spent a few more minutes trying to get him to reconsider his decision about college. He told me that he had not really decided against college. He simply did not know enough about it to make an informed decision. I was the first person who had ever talked to him seriously about this possibility. Ronny's teachers are well positioned to advise him about college but his demeanor and his attire reduce the chances that such a discussion might ever take place. Students like Ronny, those who are subdued and do not cause trouble, are among the easiest to overlook, regardless of their potential. Of further significance is Ronny's disconnectedness from his English class, despite his continued interest in reading.
Because schools fail to create environments that nurture the kinds of meaningful experiences that would allow learning to follow naturally, important opportunities for growth are missed (McNeil 1988; Smith 1995). As schooling is currently structured at Seguín, alienation and tension between students and school personnel is ongoing and unavoidable. This corrosive daily atmosphere negates the possibility of creating the collective contexts that facilitate the transmission of knowledge, skills, and resources. CONTRIBUTIONS AND LIMITATIONS OF THE CARING AND EDUCATION LITERATURE The literature on caring is properly premised on the notion that individuals need to be recognized and addressed as whole beings. All people share a basic need to be understood, appreciated, and respected. Among many acculturated, U.S.-born, Mexican American youth at Seguín, however, these basic needs go unmet during the hours at which they are in school. These students’ culturally assimilated status only exacerbates the problems inherent in an institutional relationship that defines them as in need of continuing socialization (DeVillar 1994). My findings show that American urban youth culture, filtered through a Mexican American ethnic minority experience, is at odds with adults’ tastes and preferences in dress and self-representation. This generational divide combines with a subtractive schooling experience to heighten students’ sense of disconnectedness from school and also to remind them of their lack of power. Rodrigo conveys teens’ sense of powerlessness at school in his observation that “Kids have good arguments, but they have absolutely no argument skills.” Unable to articulate their frustration and alienation effectively, and inexperienced with even the idea of collective action, most regular-track students settle for individual-level resistance. 
They engage in random acts of rebellion, posture and pose, mentally absent themselves, physically absent themselves, or attend and participate in only those classes that interest them. The few students who are adept articulators, like Rodrigo, condemn schooling, not education. The maladaptive consequences of subtractive schooling are magnified among immigrant youth who try to acculturate very rapidly. The suggestion by one parent that the school should help youth sort out their cultural issues as they undergo change is echoed by Spindler and Spindler (1994), who contend that schools should engage explicitly in cultural therapy. They suggest that culturally appropriate training might allow teachers to help students better understand themselves and thus make it possible for youth to learn "with less rancor and resistance" (p. xiv). By examining misunderstandings of caring, a fundamental source of students' alienation and resistance becomes apparent. Schools like Seguín not only fail to validate their students' culture, they also subtract resources from them, first, by impeding the development of authentic caring, and second, by obliging youth to participate in a non-neutral, power-draining type of aesthetic caring. To make schools truly caring institutions for members of historically oppressed subordinate groups like Mexican Americans, authentic caring, as currently described in the literature, is necessary but not sufficient. Students' cultural world and their structural position must also be fully apprehended, with school-based adults deliberately bringing issues of race, difference, and power into central focus. This approach necessitates abandoning the notion of a color-blind curriculum and a neutral assimilation process. The practice of individualizing collective problems must also be relinquished.
A more profound and involved understanding of the socioeconomic, linguistic, sociocultural, and structural barriers that obstruct the mobility of Mexican youth needs to inform all caring relationships (Delgado-Gaitan and Trueba 1991; Phelan et al. 1993; Stanton-Salazar 1996). Authentic caring cannot exist unless it is imbued with and motivated by such political clarity (Bartolomé 1994). The finding that students oppose schooling rather than education expands current explanations for oppositional or reactive subcultures that characterize many urban, U.S.-born youth in inner-city schools. Rather than signifying an anti-achievement ethos, oppositional elements constitute a response to a Eurocentric, middle-class “culture of power” (see Delpit 1994 for a similar argument with respect to African American underachievement). This culture individualizes the problem of underachievement through its adherence to a power-neutral or power-blind conception of the world (Frankenberg 1993; Twine 1995; McIntyre 1997). So deeply rooted and poorly apprehended is this culture of power that a 50–75 percent dropout rate at Seguín is systematically rationalized—year after year—as an individual-level problem. Such explanations preserve current institutional arrangements and asymmetries of power. Noddings (1992) rightly argues that the current crisis of meaning, direction, and purpose among youth in public schools derives from a poor ordering of priorities. The current emphases on achievement and on standard academic subjects may lead youth to conclude that adults do not care for them. Noddings further acknowledges that her call for a re-ordering of priorities to promote dedication to full human growth necessarily means that not all youth will be given exactly the same kind of education. Indeed, as the logic of authentic caring dictates, a complete apprehension of the “other” means that the material, physical, psychological, and spiritual needs of youth will guide the educational process.
One final story, that of Mr. Sosa, Seguín’s band director from 1991 to 1994, illustrates how authentic caring can be infused with political clarity, and thus serves as a fitting conclusion to this chapter. To meet the particular needs of his students, Mr. Sosa dissolved the conventional boundary that exists between “public” school and “private” home and community matters. Rather than construing a collective matter (poor nutrition) as an individual problem, Mr. Sosa adjusted his pedagogy in a humane and culturally sensitive way to meet all of his students’ needs. The marching band’s successes are a testimony to the effectiveness of meaningful relationships in promoting competence and mastery of worldly tasks. **LOVE IS ONE TAQUITO AWAY** During a late-morning visit in early fall 1992, Mr. Sosa told me that when he first arrived at Seguín (two years earlier), the students did not respect him. They were unmanageable. He said that they “just didn’t know,” meaning that they had to learn what his expectations were. He explained to me that in order for this kind of learning to take place, he first had to earn his students’ respect and confidence. He emphasized that this happened “slowly.” He recalled a series of three football games during which three different girls fainted while participating in the marching band’s half-time show. At the football stadium where the football players play, there is a lot of dust in the air. It just comes up and it happens that the kids start breathing it. So, there are kids that are malnourished. They don’t eat any breakfast, lunch, and then they don’t have supper. Then they go to participate. They are weak already, and that dust doesn’t help any. These students who fainted were taken by EMS [Emergency Medical Service] for treatment at the hospital and hospitalized. . . . Some kids are still being billed for that. . . . These kids don’t have insurance.
They take them to the hospital, and they’re administered treatment, and the parents don’t have any money to pay. Yet, if they don’t have any money, they are not going to be administered. Some that are administered are billed without the parents having any money. So, I try to get insurance for them, but it’s only accidental insurance through the school. It’s cheap, but I can’t find any insurance that will take care of their hospital stay. He pointed to a large, bright-blue, vinyl bag that he brings to school every day. It is packed with bean- and meat-filled, flour tortilla tacos wrapped in foil. He gives this food to his students. “At first, I would come to school with a little bag. Now, I bring this one because I can feed many more students with it. I used to begin handing them out during the lunch hour. Now I begin earlier than that. They come here to eat breakfast," he says, with a smile. I remark that he must spend a lot of time preparing these meals. Nodding his head, he responds, "I spend one-and-a-half to two hours every night making these." He then pulls out a *taquito* (small taco) and offers it to me. I'm dying to taste Mr. Sosa's *taquitos* and so I accept his hospitality. He gives me one of his prized bean-and-meat versions, which I savor slowly as we talk. Mr. Sosa tells me how his gift of food helped create a strong bond between him and his students: I usually finish by ten-thirty or eleven. A big part of the trust that I have been able to build has been because of this. At first, they were overly defensive with me. If you tell them something they don't like, they are ready to hit back. Now, I can go ahead and tell them to do things which they don't understand, but they will do them anyway. That's what I'm up to with them, but it has taken almost two years. "So feeding your students has really made a difference in your relationship with them?" I probe. "It all happened by accident," he responds.
You see, the food thing, I don't bring it just to win them over. It was because they don't eat. They don't have any money. They don't even have breakfast . . . don't have money for dinner. And then we practice 'til five or six after school. So, consequently their physical endurance is spent. I really got after the kids, to try to get them to eat something. I then would do my part by bringing them food, and then I would have them talk in here while they are eating. I would give them advice. Some kids come in and sit down and talk to me about personal things. Just last week, I pulled a kid out of jail. This changed everything around for me because when I first came in and tried to tell them things that are not exactly the way they've been told by other students and by other teachers, they resented me. "So, to reach these kids, what is your advice to other teachers?" I ask him. Characteristically, he answers my question by telling me another story: When I first got here in 1990, this is what actually happened. I came and was interviewed by the principal. The principal was outside and he called some of the band students that were there. They were practicing there by themselves because they didn't have a band director. He had left. So, he called the kids around to where we both were and he introduced me as their possible new teacher. So, one of the girls put her arms around me. *Me abrazó*. [She hugged me.] And she assumed that I was going to be their teacher and director. She told me in front of everybody, "Sir, just one thing. Don't lie to us." So, it kind of hit me. These kids want the truth. They want sincerity. For the teacher, it's one thing to say you care and it's another to show it. You can show your sincerity, your honesty, when you talk to them or you can demonstrate that you are sincere and that you care. Recently, some kid told me when I offered him some food, he said, "I don't take handouts." So, I told the boy, "This is not a handout. 
It took a lot of love. It took not only my own money, but my own time." I'll spend an hour or two making, preparing this food, plus buying the materials I need every day. So, it's not a matter of being a handout. It's a matter of love. They are like my children to me. It's not a handout. It's like giving something without expecting something in return. I don't expect something in return. To complete the story, Mr. Sosa led his band to the city championship title for three consecutive years. They also competed well at the state level and the band had the privilege of participating in the "16th of September" parade in Mexico City for two consecutive years. Mr. Sosa's story, the example he set as a caring human being, would be moving under any circumstances at any high school. At Seguín, where the importance of personal worth is often overlooked, where the links between academic achievement, cultural integrity, and mutual respect are so fragile, and where helpfulness and hopefulness are often in short supply, Mr. Sosa reminds us that a different, more affirming and positive world may be only a *taquito* away—that is, if it is one made with sincerity and love.
Association Between Hospital Recognition for Nursing Excellence and Outcomes of Very Low-Birth-Weight Infants Eileen T. Lake, PhD, RN Douglas Staiger, PhD Jeffrey Horbar, MD Robyn Cheung, PhD, RN Michael J. Kenny, MS Thelma Patrick, PhD, RN Jeannette A. Rogowski, PhD Context Infants born at very low birth weight (VLBW) require high levels of nursing intensity. The role of nursing in outcomes for these infants in the United States is not known. Objective To examine the relationships between hospital recognition for nursing excellence (RNE) and VLBW infant outcomes. Design, Setting, and Patients Cohort study of 72,235 inborn VLBW infants weighing 501 to 1500 g born in 558 Vermont Oxford Network hospital neonatal intensive care units between January 1, 2007, and December 31, 2008. Hospital RNE was determined from the American Nurses Credentialing Center. The RNE designation is awarded when nursing care achieves exemplary practice or leadership in 5 areas. Main Outcome Measures Seven-day, 28-day, and hospital stay mortality; nosocomial infection, defined as an infection in blood or cerebrospinal fluid culture occurring more than 3 days after birth; and severe (grade 3 or 4) intraventricular hemorrhage. Results Overall, the outcome rates were as follows: for 7-day mortality, 7.3% (5258/71955); 28-day mortality, 10.4% (7450/71953); hospital stay mortality, 12.9% (9278/71936); severe intraventricular hemorrhage, 7.6% (4842/63528); and infection, 17.9% (11915/66496). The 7-day mortality was 7.0% in RNE hospitals and 7.4% in non-RNE hospitals (adjusted odds ratio [OR], 0.87; 95% CI, 0.76–0.99; P = .04). The 28-day mortality was 10.0% in RNE hospitals and 10.5% in non-RNE hospitals (adjusted OR, 0.90; 95% CI, 0.80–1.01; P = .08). Hospital stay mortality was 12.4% in RNE hospitals and 13.1% in non-RNE hospitals (adjusted OR, 0.90; 95% CI, 0.81–1.01; P = .06). 
Severe intraventricular hemorrhage was 7.2% in RNE hospitals and 7.8% in non-RNE hospitals (adjusted OR, 0.88; 95% CI, 0.77–1.00; P = .045). Infection was 16.7% in RNE hospitals and 18.3% in non-RNE hospitals (adjusted OR, 0.86; 95% CI, 0.75–0.99; P = .04). Compared with non-RNE hospitals, the adjusted absolute decrease in risk of outcomes in RNE hospitals ranged from 0.9% to 2.1%. All 5 outcomes were jointly significant (P < .001). The mean effect across all 5 outcomes was OR, 0.88 (95% CI, 0.83–0.94; P < .001). In a subgroup of 68,293 infants with gestational age of 24 weeks or older, the ORs for RNE for all 3 mortality outcomes and infection were statistically significant. Conclusion Among VLBW infants born in RNE hospitals compared with non-RNE hospitals, there was a significantly lower risk-adjusted rate of 7-day mortality, nosocomial infection, and severe intraventricular hemorrhage but not of 28-day mortality or hospital stay mortality. JAMA. 2012;307(16):1709-1716 
velopment and behavior, lower levels of morbidity, and shorter hospitalization.\textsuperscript{12} Nurse handling of an infant and recognition and response to subtle cues that an infant is distressed may support infant hemodynamic stability and reduce the likelihood of intraventricular hemorrhage.\textsuperscript{13} Aseptic technique and scrupulous hand hygiene by nurses during infant care, especially in the maintenance of central lines, decrease the risk of infants acquiring a nosocomial infection.\textsuperscript{11,12} The American Nurses Credentialing Center developed the Magnet Recognition Program to recognize health care organizations for quality patient care, nursing excellence, and innovations in professional nursing practice.\textsuperscript{14} Organizations are evaluated for evidence of achieving 5 program elements: transformational leadership; structural empowerment; exemplary professional practice; new knowledge, innovations, and improvements; and empirical outcomes. Exemplary professional practice is achieved when “nurses have significant [professional] control . . . and work in collaboration with interdisciplinary partners to achieve high-quality patient outcomes.”\textsuperscript{14(p28)} The other 4 elements support and maintain nursing excellence. For instance, structural empowerment means “the flow of information and decision-making is bi-directional and horizontal . . . among professional nurses at the bedside, the leadership team, and the chief nursing officer (CNO).”\textsuperscript{14(p44)} New knowledge includes “establishing new ways of achieving high-quality, effective, and efficient care.”\textsuperscript{14(p32)} Transformational leadership requires that “the CNO in a Magnet organization . . . develops a strong vision and well-articulated philosophy, professional practice model, and strategic and quality plans in leading nursing services.”\textsuperscript{14(p42)} Empirical outcomes document achievement in all of these areas. 
These criteria are expected to assist health care organizations in achieving high-quality nursing care for all patients. The route to recognition is an extensive and rigorous process that generally takes 2 years. Recognition is at the hospital level but all units must meet criteria. The hospital pays a sliding-scale application fee, conducts an extensive self-evaluation followed by an analysis to identify the gaps in achieving standards, works with a consultant to implement organizational changes to fulfill numerous recognition of nursing excellence (RNE) standards, and is evaluated by outside appraisers through a site visit of several days.\textsuperscript{15} Hospitals are required to undergo a redesignation process every 4 years. Interim reporting is also required. Recognition for nursing excellence is uncommon. Only 7% of US hospitals achieve this. Very few lose it (<10 since the program’s inception in 1994); however, approximately 20% of hospitals with a NICU have this recognition (authors’ tabulations of American Hospital Association Annual Survey data and American Nurses Credentialing Center public listing). Patient outcomes in RNE hospitals have been understudied.\textsuperscript{16,17} The objective of this study was to examine the association of hospital RNE status with VLBW infant outcomes. We analyzed mortality, severe intraventricular hemorrhage (sIVH), and nosocomial infection because we hypothesized these outcomes would be influenced by nursing care and prior research has indicated that they may be affected.\textsuperscript{18-21} In addition to hospital stay mortality, 2 other mortality time frames were predefined: within the critical first week of life and within 28 days of birth. Death in the first week of life accounts for the majority of neonatal (71%) and in-hospital (57%) mortality in VLBW infants. Death within 28 days, or neonatal mortality, is a commonly reported statistic. 
**METHODS** **Sites and Patient Sample** The Vermont Oxford Network (VON) is a voluntary collaborative network of hospitals with a NICU dedicated to improving the quality and safety of medical care for newborn infants and their families. VON hospitals are located in 47 states, Washington, DC, and 22 foreign countries. The VON database contains detailed uniform clinical and treatment information on all VLBW infants cared for by network hospitals. By 2008, the US VON database comprised 578 hospitals, which included approximately 65% of NICUs and 80% of all VLBW infants born in the United States. This cross-sectional study included 558 VON hospitals with inborn infants in 2007 and 2008. The remaining 20 were children’s hospitals that had only outborn infants. The study population consisted of 72 235 inborn infants who weighed between 501 and 1500 g. Infants who died in the delivery room or elsewhere in the hospital were included even if they were not admitted to the NICU. Infants who weighed 500 g or less were excluded for consistency with prior studies. Infants with incomplete data on infant characteristics (n=599) were excluded to yield a consistent sample for multivariable models. In analyses of mortality, an additional 299 infants were excluded for missing data on death. Institutional review board approval was obtained from the University of Medicine and Dentistry of New Jersey and the University of Vermont, including a waiver of informed consent. The University of Pennsylvania institutional review board judged the project exempt. **Variables** All patient- and NICU-level measures were obtained or derived from the VON database. VON data are collected using standardized definitions. The data are subjected to extensive range, logic, and consistency checks when submitted and are reviewed and verified annually. Infant characteristics were measured at birth. 
The key outcome measures were death (within 7 days, 28 days, and the hospital stay), nosocomial infection, and sIVH. Nosocomial infection was defined as an infection in blood or cerebrospinal fluid culture occurring more than 3 days after birth. The database includes information on 3 culture-proven infections: coagulase-negative *Staphylococcus*, the most common bacterial infection in the NICU; other bacterial infections; and fungal infections. Severe intraventricular hemorrhage was defined as the presence of grade 3 or 4 intraventricular hemorrhage on a cranial ultrasound performed within the first 28 days.\textsuperscript{22} Grades 3 and 4 hemorrhages are the most severe and are more likely to be associated with long-term neurodevelopmental sequelae. Of the sample, 14.6% of the infants were transferred and 3.7% were readmitted to the birth hospital. The final disposition (discharge alive or dead) is tracked for all infants and attributed to the birth hospital regardless of transfer status. If an infant was readmitted to the birth hospital after a transfer, sIVH and infection were collected for the entire stay, including at the transfer hospital, and attributed to the birth hospital. These data were not collected on infants transferred out and not readmitted. However, since sIVH occurs principally in the first few days of life, the 23-day median age of transfer implies that sIVH is unlikely to occur in a transfer hospital. In 2009, VON data were collected on infection location and indicated that among readmitted infants, 4% of infections were contracted at the transfer hospital; in this analysis, those would be attributed to the hospital of birth. 
The independent variable, hospital RNE designation in 2008, was obtained from a public website listing designated hospitals’ original and most recent year of redesignation.\textsuperscript{23} Patient risk adjusters consisted of infant characteristics that were developed for the VON risk-adjustment model.\textsuperscript{23} These covariates included gestational age in weeks (and its square); small for gestational age; 1-minute Apgar score; race and ethnicity (non-Hispanic black, non-Hispanic white, or other [including Hispanic]); sex; multiple birth; presence of a major birth defect; vaginal delivery; and whether the mother received prenatal care. Race and ethnicity were classified into standard VON options based on maternal race and ethnicity as recorded in the birth certificate or medical record. Maternal socioeconomic status was not available in the VON database and could not be geocoded. Previous research did not find an effect of maternal socioeconomic status on mortality using earlier years of the VON database.\textsuperscript{24} The risk-adjustment model had area under the receiver operating characteristic curves of 0.88 for mortality, 0.82 for SIVH, and 0.75 for infection. Two NICU-level variables were included consistent with prior research.\textsuperscript{24-26} Volume was measured as the mean number of VLBW infants admitted to the hospital in 2007 and 2008. Due to the presence of high-volume NICUs, the data were transformed to the natural log of volume for a more normally distributed measure. NICU level was obtained from the VON’s annual survey. The VON classifies NICUs into levels A (restriction on ventilation; no surgery), B (major surgery), and C (cardiac surgery), corresponding to high level II and level III units in the American Academy of Pediatrics NICU classification. The universe of US NICUs was identified from the American Hospital Association survey\textsuperscript{27} by nonzero values for neonatal intensive care beds. 
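The discrimination of the risk-adjustment model is summarized above by the area under the receiver operating characteristic curve (0.88 for mortality). As an illustrative sketch only (the infant risk scores below are invented, not VON data), the AUC can be computed via its rank-statistic interpretation: the probability that a randomly chosen infant with the outcome received a higher predicted risk than one without.

```python
from itertools import product

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of (case, non-case) pairs in which the case has
    the higher predicted risk (ties count one-half)."""
    wins = 0.0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented predicted mortality risks, for illustration only.
died     = [0.90, 0.80, 0.40]        # infants with the outcome
survived = [0.70, 0.30, 0.20, 0.10]  # infants without the outcome

print(round(auc(died, survived), 3))  # 11 of 12 pairs ranked correctly -> 0.917
```

An AUC of 0.88 for mortality, as reported, means the model assigns a higher predicted risk to the infant who died in about 88% of such death–survival pairs.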
Two hospital characteristics, hospital ownership (not-for-profit, for-profit, or public) and teaching status (membership in the Council of Teaching Hospitals), were also obtained from the American Hospital Association survey. **Data Analysis** Our focus in this study was on hospital RNE and VLBW infant outcomes. We first examined the bivariate relationship between RNE and each outcome. Tests of bivariate comparisons adjusted for infant clustering within hospitals. We then estimated 3 logistic regressions for each outcome. The first included only RNE status as the independent variable. The second added patient risk adjusters. The third added NICU- and hospital-level covariates. All models controlled for birth year to account for a secular trend. We estimated random-effects models by the maximum likelihood method. This method includes an unobserved hospital-level component (the random effect) that captures any omitted hospital-level factors that systematically increase or decrease the likelihood of each outcome for all infants in that hospital. Inclusion of this random effect corrects the standard errors for the resulting within-hospital correlation (ie, clustering) in patient outcomes. When there are multiple outcomes and all are hypothesized to be important, a joint significance test computes the average effect to summarize the overall pattern. The joint F test accounts for correlation between the 3 mortality measures. To determine whether RNE status was significantly related to all 5 outcomes, we tested the hypothesis that all 5 odds ratios (ORs) were jointly equal to 1 and also tested whether the mean OR across all 5 outcomes was equal to 1. 
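The "mean OR across all 5 outcomes" summary can be sketched with the fully adjusted odds ratios reported in the Results (0.87, 0.90, 0.90, 0.86, 0.88). The paper does not spell out its averaging method here, so this is an assumption: averaging on the log-odds scale (a geometric mean) is one standard choice, and it reproduces the reported summary value of 0.88, though not the bootstrap confidence interval.

```python
import math

# Fully adjusted odds ratios for the 5 outcomes (7-day, 28-day, and
# hospital stay mortality; infection; sIVH), as reported in the Results.
adjusted_ors = [0.87, 0.90, 0.90, 0.86, 0.88]

# Average on the log scale, then exponentiate (geometric mean).
mean_or = math.exp(sum(math.log(o) for o in adjusted_ors) / len(adjusted_ors))
print(round(mean_or, 2))  # 0.88, matching the reported mean effect
```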
Confidence intervals and P values for these tests were based on the bootstrap method to account for correlation between the estimates.\textsuperscript{28} To explore the possibility that RNE may have a different association with outcomes for VLBW infant subgroups, such as those above a viability threshold, we repeated our regression analyses in subgroups stratified by gestational age of 24 weeks or older vs younger than 24 weeks and birth weight of 1000 g or more vs less than 1000 g (extremely low birth weight). The analyses were conducted using Stata software, version 10.1.\textsuperscript{29} The a priori significance level was $P < .05$ for a 2-sided significance test. **RESULTS** Of the sample, 21% of hospitals had RNE status compared with 19% of US hospitals with a NICU. Sixteen percent of sample hospitals provided the highest level of care (level C). Compared with the universe of hospitals with a NICU, our sample contains somewhat more teaching hospitals (33% vs 27%) and larger units (a mean of 28 beds vs 22 beds). Compared with non-RNE hospitals, RNE hospitals with a NICU are mostly not-for-profit (87% vs 71%), have more registered nurse hours (10.5 vs 9.3 hours per patient-day at the hospital level), and are twice as likely to be teaching hospitals (55% vs 27%) (TABLE 1). Few RNE hospitals are for-profit compared with non-RNE hospitals (3% vs 13%). The RNE hospitals care for a larger volume of VLBW infants than non-RNE hospitals (93 vs 74 VLBW infants, respectively). Also, RNE hospital NICUs are disproportionately level C (32% vs 12%) rather than level A (23% vs 33%) compared with non-RNEs. These RNE/non-RNE differences mirrored those of US NICUs (eTable 1; available at http://www.jama.com). Sample infants had a mean birth weight of 1056 g and a gestational age of 28.2 weeks (Table 1). 
The racial and ethnic composition of the entire sample was 47% non-Hispanic white, 29% non-Hispanic black, and 24% other, while the composition of infants in RNE hospitals was disproportionately non-Hispanic white (54%) (P < .001). The risk profile of RNE hospitals was higher than for non-RNE hospitals based on the characteristics of VLBW infants born in those hospitals. The RNE hospitals had disproportionately more infants with higher-risk characteristics such as lower Apgar score, multiple birth, and white race. It is well known in this literature that black infants have a survival advantage, which differs from most other populations. The mean predicted probability of death was 13.0% in RNE hospitals and 12.6% in non-RNE hospitals controlling for infant factors. The percentage of eligible infants with each outcome was as follows: 7-day mortality, 7.3% (5258/71 955); 28-day mortality, 10.4% (7450/71 953); hospital stay mortality, 12.9% (9278/71 936); sIVH, 7.6% (4842/63 525); and infection, 17.9% (11 915/66 496) (TABLE 2).

### Table 1. Hospital, NICU, and Infant Characteristics

| Characteristics | Total (N = 558) | RNE Hospitals (n = 119) | Non-RNE Hospitals (n = 439) | P Value<sup>a</sup> |
|-----------------|-----------------|--------------------------|------------------------------|---------------------|
| **Hospital characteristics** | | | | |
| Hospital ownership | | | | .001 |
| &nbsp;&nbsp;Public | 85 (15) | 13 (11) | 72 (16) | |
| &nbsp;&nbsp;For-profit | 60 (11) | 3 (3) | 57 (13) | |
| &nbsp;&nbsp;Not-for-profit | 413 (74) | 103 (87) | 310 (71) | |
| Member, Council of Teaching Hospitals | 185 (33) | 66 (55) | 119 (27) | <.001 |
| **Hospital nursing characteristics** | | | | |
| RNE hospital | 119 (21) | 119 (100) | 0 | |
| Registered nurse hours per adjusted patient-day, mean (SD)<sup>b</sup> | 9.6 (3.0) | 10.5 (2.9) | 9.3 (2.9) | <.001 |
| **NICU characteristics** | | | | |
| NICU level | | | | <.001 |
| &nbsp;&nbsp;A | 171 (31) | 27 (23) | 144 (33) | |
| &nbsp;&nbsp;B | 296 (53) | 54 (45) | 242 (55) | |
| &nbsp;&nbsp;C | 91 (16) | 38 (32) | 53 (12) | |
| Annual volume of very low-birth-weight admissions, mean (SD) | 78 (60.4) | 93 (58.9) | 74 (60.3) | <.001 |
| **Infant characteristics** | n = 72 235 | n = 17 455 | n = 54 780 | |
| Birth weight, mean (SD), g | 1056 (287) | 1056 (286) | 1056 (287) | .89 |
| Gestational age, mean (SD), wk | 28.2 (2.9) | 28.2 (2.9) | 28.2 (2.9) | .96 |
| 1-Minute Apgar score, mean (SD) | 5.4 (2.5) | 5.3 (2.5) | 5.5 (2.5) | <.001 |
| Small for gestational age | 13916/72216 (19) | 3345/17449 (19) | 10571/54767 (19) | .70 |
| Multiple birth | 20616/72224 (29) | 5284/17454 (30) | 15332/54770 (28) | <.001 |
| Congenital malformation | 3439/72184 (5) | 840/17449 (5) | 2599/54745 (5) | .72 |
| Vaginal delivery | 19972/72230 (28) | 4817/17452 (28) | 15155/54778 (28) | .87 |
| Had prenatal care | 69124/72025 (96) | 16817/17421 (97) | 52307/54604 (96) | <.001 |
| Male | 36341/72211 (50) | 8869/17451 (51) | 27472/54760 (50) | .13 |
| Race/ethnicity | n = 72 040 | n = 17 410 | n = 54 630 | <.001 |
| &nbsp;&nbsp;Non-Hispanic white | 33541 (47) | 9426 (54) | 24115 (44) | |
| &nbsp;&nbsp;Non-Hispanic black | 21164 (29) | 4588 (26) | 16576 (30) | |
| &nbsp;&nbsp;Other<sup>c</sup> | 17335 (24) | 3396 (20) | 13939 (26) | |
| Year of birth 2008 | 37116/72235 (51) | 9132/17455 (52) | 27984/54780 (51) | <.001 |

Abbreviations: NICU, neonatal intensive care unit; RNE, recognition for nursing excellence. Data are expressed as No. (%) of participants unless otherwise indicated. <sup>a</sup>The χ² test was used for comparison of categorical variables and the unpaired 2-tailed t test for continuous variables. <sup>b</sup>Calculated by the authors from the 2008 American Hospital Association Annual Hospital Survey. <sup>c</sup>All other races/ethnicities, including Hispanic.

### Table 2. Very Low-Birth-Weight Infant Outcomes, 2007-2008

| Outcomes | All Hospitals (N = 555) | RNE Hospitals (n = 119) | Non-RNE Hospitals (n = 436) |
|----------|--------------------------|--------------------------|------------------------------|
| No. of infants | 72 235 | 17 455 | 54 780 |
| Death within 7 d | 5258/71 955 (7.3) | 1215/17 415 (7.0) | 4043/54 540 (7.4) |
| Death within 28 d | 7450/71 953 (10.4) | 1740/17 415 (10.0) | 5710/54 538 (10.5) |
| Death before discharge home | 9278/71 936 (12.9) | 2150/17 414 (12.4) | 7128/54 522 (13.1) |
| Nosocomial infection | 11 915/66 496 (17.9) | 2706/16 221 (16.7) | 9209/50 275 (18.3) |
| Severe intraventricular hemorrhage | 4842/63 525 (7.6) | 1109/15 482 (7.2) | 3733/48 043 (7.8) |

Abbreviation: RNE, recognition for nursing excellence. Data are expressed as No./total (%).

The 7-day mortality was 7.0% in RNE hospitals vs 7.4% in non-RNE hospitals (difference, 0.4%); 28-day mortality was 10.0% in RNE hospitals vs 10.5% in non-RNE hospitals (difference, 0.5%); and hospital stay mortality was 12.4% in RNE hospitals vs 13.1% in non-RNE hospitals (difference, 0.7%). The incidence of sIVH was 7.2% in RNE hospitals and 7.8% in non-RNE hospitals (difference, 0.6%). Infection occurred in 16.7% of VLBW infants in RNE hospitals and 18.3% in non-RNE hospitals (difference, 1.6%). Table 3 shows the relationships between RNE status and infant outcomes in logistic regression models. The lower rates of adverse outcomes in RNE hospitals observed in Table 2 understate the differences between these hospital types. From the unadjusted OR to the OR adjusted for infant risk, the ORs associated with RNE status decreased on average by 0.07 (range, 0-0.12). This is because somewhat higher-risk infants are born in RNE hospitals, so unadjusted models confound RNE status with patient risk. Adjusting for patient risk, RNE hospitals had statistically significant ORs of 0.84 to 0.87 for mortality and sIVH, but the OR of 0.88 (95% CI, 0.76-1.00) for infection was not statistically significant. 
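The unadjusted comparisons can be checked by hand from the Table 2 counts. A minimal sketch for 7-day mortality, computing the crude odds ratio and a 95% Wald confidence interval (note that the paper's unadjusted ORs come from random-effects models, so they differ slightly from this raw 2×2 calculation):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and 95% Wald CI from a 2x2 table:
    a/b = events/non-events in the RNE group,
    c/d = events/non-events in the non-RNE group."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# 7-day mortality counts from Table 2: deaths and group sizes.
deaths_rne, n_rne = 1215, 17415
deaths_non, n_non = 4043, 54540

or_, lo, hi = odds_ratio_ci(deaths_rne, n_rne - deaths_rne,
                            deaths_non, n_non - deaths_non)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR 0.94 (95% CI, 0.88-1.00)
```

This crude estimate ignores the within-hospital clustering that the paper's random-effects models account for; it is shown only to make the odds-ratio arithmetic concrete.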
Three infant outcomes exhibited statistically significant associations with RNE status in models that also controlled for NICU and hospital variables: 7-day mortality, infection, and sIVH. Birth in an RNE hospital was associated with an OR of 0.87 (95% CI, 0.76-0.99) for death in the first week of life, an OR of 0.86 (95% CI, 0.75-0.99) for infection, and an OR of 0.88 (95% CI, 0.78-1.00) for sIVH. The 28-day and in-hospital mortality had similar ORs (0.90) but were not statistically significant. Compared with non-RNE hospitals, the adjusted absolute decrease in risk of outcomes in RNE hospitals ranged from 0.9% to 2.1%. All 5 outcomes were jointly significant (P < .001). The mean effect across all 5 outcomes was an OR of 0.88 (95% CI, 0.83-0.94; P < .001). Infants cared for in level A NICUs had an OR for infection of 0.74 (95% CI, 0.60-0.92; P = .005) relative to level C NICUs. Infants born in for-profit hospitals had an OR for infection of 1.24 (95% CI, 1.02-1.49; P = .03) relative to not-for-profit hospitals. The OR for the log volume of VLBW infants for 7-day mortality was 0.90 (95% CI, 0.76-0.99; P = .02). The 2 gestational age subgroups exhibited marked differences in the ORs for the mortality variables but not for infection and sIVH. In the older gestational age subgroup (≥24 weeks), the ORs for all 3 mortality outcomes were smaller than in the full cohort, ranging from 0.83 to 0.87, and were statistically significant (TABLE 4). In the younger gestational age subgroup (<24 weeks), the ORs for all 3 mortality outcomes were weaker (ie, closer to or exceeding 1.00), with P > .60 (eTable 2). The results of analyses in birth-weight subgroups mirrored the overall findings (eTable 3 and eTable 4). **COMMENT** Hospital RNE status was found to be associated with significantly lower rates of 7-day mortality, nosocomial infection, and sIVH in VLBW infants. Rates of 7-day mortality (7%), sIVH (8%), and nosocomial infection (18%) were high in these patients. 
There was a 12% to 14% difference in the odds of these outcomes between RNE and non-RNE hospitals, with 95% confidence limits close to 1, which translates to relatively small adjusted absolute risk differences of 0.9% to 2.1%. For neonatal and in-hospital mortality, the findings were not significant. Although the significant mortality difference between the 2 hospital groups disappeared by 28 days of life, it remained significant in older-gestational-age infants. These morbidities have serious consequences. Development of an infection more than doubles the mortality rate among VLBW infants. In our sample, among infants who survived 3 days, 13.8% of those with nosocomial infection died compared with 5.5% without infection. Even more striking are the implications of sIVH for mortality. In our sample, 36.4% of infants with sIVH died compared with 5.9% without sIVH. There are important long-term consequences of sIVH for brain development, including neurocognitive impairment, cerebral palsy, and developmental delays. Among VLBW infants born at 24 weeks of gestational age or more, the ORs for all 3 mortality measures were stronger (0.83 to 0.87) and statistically significant. The exclusion of the extremely premature subgroup (<24 weeks) sharpened the RNE association with mortality in the remaining infants. Infants born before 24 weeks are at the lower limit of viability. Some families and physicians of these infants will choose not to use assisted ventilation and instead provide comfort care. Thus, RNE status was more strongly associated with survival for infants in the gestational age range in which intensive care is usually applied. Our study identified larger differences in the odds of outcomes than did the few studies that have identified similar associations between hospital RNE and adult outcomes. 
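The step from an odds ratio to an absolute risk difference can be made concrete. Applying each adjusted OR to the non-RNE baseline rate on the odds scale gives the implied RNE rate and hence the absolute decrease; this sketch (an approximation that takes the unadjusted non-RNE rates in Table 2 as the reference risks, whereas the paper's figures come from its adjusted models) recovers both ends of the reported 0.9% to 2.1% range:

```python
def absolute_risk_reduction(baseline_risk, odds_ratio):
    """Convert an odds ratio into an absolute risk difference,
    given the comparison group's baseline risk."""
    odds = baseline_risk / (1 - baseline_risk)  # baseline odds
    new_odds = odds * odds_ratio                # odds implied by the OR
    new_risk = new_odds / (1 + new_odds)        # back to a probability
    return baseline_risk - new_risk

# Non-RNE baseline rates (Table 2) paired with fully adjusted ORs (Table 3).
outcomes = {
    "7-day mortality":      (0.074, 0.87),
    "nosocomial infection": (0.183, 0.86),
}
for name, (risk, or_) in outcomes.items():
    arr = absolute_risk_reduction(risk, or_)
    print(f"{name}: {100 * arr:.1f}%")  # prints 0.9% and 2.1%
```

This also illustrates why an OR of roughly 0.87 can correspond to quite different absolute differences: the higher the baseline rate (18.3% for infection vs 7.4% for 7-day mortality), the larger the absolute change implied by the same relative effect.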
The earliest study documented a 5% lower Medicare mortality rate in 1988 in 39 hospitals identified by reputation as a good place to practice nursing and for a record of recruiting and retaining professional nurses in a competitive market compared with a matched sample of hospitals. Another study of 2004 data found a 5% lower patient fall rate in RNE vs non-RNE hospitals. In the decade since Crossing the Quality Chasm, there have been numerous calls to improve the quality of the health care system. The Quality Health Outcomes Model links system-level factors to patient outcomes. Recognition of nursing excellence status is a system-level factor encompassing professional control, interdisciplinary collaboration, decision making shared from the bedside to the highest management level, and developing new knowledge about how to achieve high-quality, effective, and efficient care. Improving the quality of care for vulnerable infants was emphasized in the Institute of Medicine report on preterm birth, which pointed to nursing as a promising avenue for developing NICU quality measures, and the focus on infants was reinforced by a March of Dimes report. One way to increase the number of infants that receive high-quality care would be to increase the number of hospitals with RNE. Our results suggest benefit for the VLBW infant population, but other hospitalized patients may also benefit, as suggested by the limited empirical evidence. ### Table 3. 
Odds Ratios Estimating the Association of Hospital RNE Status and NICU and Hospital Variables With Very Low-Birth-Weight Infant Outcomes | Outcomes | Unadjusted | P Value | Adjusted for Patient Characteristics | P Value | Adjusted for Patient, NICU, and Hospital Characteristics | P Value | |---------------------------------|------------|---------|--------------------------------------|---------|----------------------------------------------------------|---------| | Mortality | | | | | | | | Within 7 d | 0.96 (0.86–1.08) | .41 | 0.84 (0.74–0.96) | .01 | 0.87 (0.76–0.99) | .04 | | Within 28 d | 0.96 (0.87–1.05) | .35 | 0.87 (0.77–0.98) | .02 | 0.90 (0.80–1.01) | .08 | | Before discharge | 0.95 (0.87–1.03) | .21 | 0.87 (0.78–0.97) | .01 | 0.90 (0.81–1.01) | .06 | | Morbidity | | | | | | | | Nosocomial infection | 0.88 (0.78–1.01) | .06 | 0.88 (0.76–1.00) | .06 | 0.86 (0.75–0.99) | .04 | | Severe intraventricular hemorrhage | 0.90 (0.80–1.00) | .05 | 0.84 (0.75–0.95) | .01 | 0.88 (0.78–1.00) | .045 | Abbreviations: NICU, neonatal intensive care unit; RNE, recognition for nursing excellence. *Odds ratios and 95% CIs were derived from random-effects logistic regression models. All models control for year of birth; Infant risk adjusters were gestational age, gestational age squared, 1-min Apgar score, small for gestational age, multiple birth, congenital malformation, vaginal delivery, prenatal care, race/ethnicity, and sex. NICU characteristics were adjusted for the natural log of volume of very low-birth-weight infants and level of care. Hospital characteristics were adjusted for hospital ownership and membership in the Council of Teaching Hospitals.* ### Table 4. 
Odds Ratios Estimating the Association of Hospital RNE Status and NICU and Hospital Variables With Very Low-Birth-Weight Infant Outcomes Among Infants With Gestational Age of 24 Weeks or More at Birth | Outcomes | Unadjusted | P Value | Adjusted for Patient Characteristics | P Value | Adjusted for Patient, NICU, and Hospital Characteristics | P Value | |---------------------------------|------------|---------|--------------------------------------|---------|----------------------------------------------------------|---------| | Mortality (n = 67 497–67 517) | | | | | | | | Within 7 d | 0.91 (0.81–1.02) | .10 | 0.81 (0.70–0.93) | .004 | 0.83 (0.72–0.96) | .01 | | Within 28 d | 0.92 (0.83–1.02) | .11 | 0.85 (0.75–0.95) | .01 | 0.87 (0.77–0.99) | .03 | | Before discharge | 0.91 (0.83–1.00) | .06 | 0.85 (0.76–0.96) | .01 | 0.87 (0.78–0.98) | .02 | | Morbidity | | | | | | | | Nosocomial infection (n = 64 201)| 0.87 (0.77–1.0) | .04 | 0.87 (0.75–0.99) | .04 | 0.86 (0.74–0.99) | .03 | | Severe intraventricular hemorrhage (n = 61 030) | 0.89 (0.80–1.00) | .06 | 0.84 (0.74–0.96) | .01 | 0.88 (0.77–1.00) | .05 | Abbreviations: NICU, neonatal intensive care unit; RNE, recognition for nursing excellence. *Odds ratios and 95% CIs were derived from random-effects logistic regression models. All models control for year of birth. Infant risk adjusters were gestational age, gestational age squared, 1-min Apgar score, small for gestational age, multiple birth, congenital malformation, vaginal delivery, prenatal care, race/ethnicity, and sex. NICU characteristics were adjusted for volume of very low-birth-weight infants and level of care. Hospital characteristics were adjusted for hospital ownership and membership in the Council of Teaching Hospitals.* The better outcomes observed in VLBW infants in RNE hospitals may reflect higher-quality NICU and obstetric care. 
Perhaps RNE hospitals have a broad, long-standing commitment to quality care that is reflected in other aspects of care, such as excellent physician care, respiratory care, or infection control, that are not directly related to RNE but that may independently contribute to better outcomes for VLBW infants. Thus, RNE status may serve as a marker for an institution-wide commitment to optimizing outcomes. Recognition for nursing excellence status has been included as a criterion for a high-quality institution by the national groups US News & World Report Best Hospitals (since 2004) and Leapfrog (since 2011). The practical importance of our findings is influenced by the accessibility of existing RNE hospitals to mothers at high risk of preterm birth. Currently, access is limited because only 1 in 5 hospitals with a NICU has RNE. This is a particular source of concern for racial and ethnic minorities because disproportionately few minority infants are born in hospitals with RNE. Our study has limitations. The VON is not fully representative of US hospitals with a NICU. Our results may underestimate the “true” RNE associations. The comparison hospitals in this sample participate in a network dedicated to improving the quality and safety of neonatal care; therefore, they most likely give greater attention to quality improvements and monitoring. In addition, the VON disproportionately lacks the smallest NICUs, where prior research shows that outcomes are the worst. In addition, we excluded 20 network hospitals without inborn infants. Outborn infants may acquire morbidities before admission, thus confounding the role of RNE status in these outcomes. By restricting to inborn infants, we excluded some freestanding children’s hospitals. Infection and SIVH were not recorded for some infants who were transferred out. However, transfer rates were low and did not differ substantially by hospital type (12% for RNE and 15% for non-RNE). 
Also, the cross-sectional research design prevents causal inferences. There may be unobserved quality-related characteristics of RNE hospitals that are differentially associated with outcomes. Future research should focus on NICU nursing care, including the roles of specific factors (eg, nurse staffing and experience), as well as physicians and other health care professionals. Our study focused on hospitals that met criteria for organizational excellence in nursing through comprehensive standards that are documented and continuously monitored. Meeting these criteria was associated with better outcomes for high-risk infants. Author Affiliations: Center for Health Outcomes and Policy Research, School of Nursing, Department of Sociology, and Institute of Health Economics, University of Pennsylvania, Philadelphia (Dr Lake); Department of Economics, Dartmouth College, Hanover, New Hampshire (Dr Staiger); Harvard School of Public Health, Cambridge, Massachusetts (Drs Staiger and Rogowski); Departments of Pediatrics (Dr Horbar) and Medical Biostatistics (Mr Kenny), University of Vermont, and Vermont Children’s Hospital (Dr Cheung), Burlington; Health Care Enterprise, Lexington, Kentucky (Dr Cheung); College of Nursing, Ohio State University, Columbus (Dr Patrick); Health Systems and Policy, School of Public Health, University of Medicine and Dentistry of New Jersey, Piscataway (Dr Rogowski). Author Contributions: Dr Lake had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Lake, Staiger, Horbar, Patrick, Rogowski. Acquisition of data: Lake, Staiger, Horbar, Cheung, Kenny, Patrick, Rogowski. Analysis and interpretation of data: Lake, Staiger, Horbar, Cheung, Kenny, Patrick, Rogowski. Drafting of the manuscript: Lake, Staiger, Horbar, Cheung, Kenny, Patrick, Rogowski. Critical revision of the manuscript for important intellectual content: Lake, Staiger, Horbar, Cheung, Kenny, Patrick, Rogowski. Statistical expertise: Staiger, Kenny. Obtained funding: Lake, Staiger, Horbar, Patrick, Rogowski. Administrative, technical, or material support: Lake, Cheung. Study supervision: Lake. Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Lake received an honorarium for plenary remarks at the 2010 American Nurses Credentialing Center’s Annual Symposium. Dr Staiger holds an equity interest in ArborMetrix Inc, a company that sells efficiency measurement systems and consulting services to insurers and hospitals. Dr Horbar is chief executive and scientific officer of the Vermont Oxford Network. No other disclosures were reported. Funding/Support: This study was supported by a Robert Wood Johnson Foundation Interdisciplinary Nursing Quality Research Initiative grant (to Dr Lake) and grant R01HD030537 (to Dr Rogowski) from the National Institute of Nursing Research, National Institutes of Health. Role of the Sponsors: The funding organizations had no role in the design and conduct of the study, in the collection, analysis, or interpretation of the data, or in the preparation, review, or approval of the manuscript. Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Nursing Research or the National Institutes of Health. Online-Only Material: eTables 1 through 4 and the Author Video Interview are available at http://www.jama.com.
Additional Contributions: We thank the 978 institutions that participated in the Vermont Oxford Network database, whose work made this research possible. REFERENCES 1. Mathews TJ, Miniño AM, Osterman MJK, Strobino DM, Guyer B. Annual summary of vital statistics: 2008. Pediatrics. 2011;127(1):146-157. 2. Eichenwald EC, Stark AR. Management and outcomes of very low birth weight. N Engl J Med. 2008;358(16):1700-1711. 3. Boardman JD, Powers DA, Padilla YC, Hummer RA. Low birth weight, social factors, and developmental outcomes among children in the United States. Demography. 2002;39(2):353-368. 4. Hack M, Flannery DJ, Schluchter M, Cartar L, Borawski E, Klein N. Outcomes in young adulthood of very-low-birth-weight infants. N Engl J Med. 2002;346(3):149-157. 5. Maternal and Child Health Bureau, Health Resources and Services Administration, US Department of Health and Human Services. Child Health USA 2010. http://www.mchb.hrsa.gov/chusa10/html/hsp/pages/2010vbx.html. Accessed March 29, 2012. 6. American College of Obstetricians and Gynecologists. Guidelines for Perinatal Care. 6th ed. Elk Grove Village, IL: American College of Obstetricians and Gynecologists; 2007: chap 2. 7. Association of Women’s Health Obstetric and Neonatal Nurses. Guidelines for Professional Registered Nurse Staffing for Perinatal Units. Washington, DC: Association of Women’s Health Obstetric and Neonatal Nurses; 2008. 8. Boxwell G, ed. Neonatal Intensive Care Nursing. 2nd ed. New York, NY: Routledge; 2010. 9. Becker PT, Grunwald PC, Moorman J, Stuhr S. Outcomes of developmentally supportive nursing care for very low birth weight infants. Nurs Res. 1991;40(3):150-155. 10. Volpe J. Intracranial hemorrhage. In: Neurology of the Newborn. 5th ed. Philadelphia, PA: Saunders; 2001. 11. Kilbride HW, Wirtschafter DD, Powers RJ, Sheehan MB. Implementation of evidence-based potentially better practices to decrease nosocomial infections. Pediatrics. 2003;111(4 pt 2):e519-e533. 12. McCourt M.
At risk for infection: the very-low-birthweight infant. J Perinat Neonatal Nurs. 1994;7(4):52-64. 13. American Nurses Credentialing Center. Magnet recognition program overview. http://www.nursingworld.org/MainMenuCategories/AMAGNET/ProgramOverview.aspx. Accessed March 29, 2012. 14. American Nurses Credentialing Center. Application Manual: Magnet Recognition Program. Silver Spring, MD: American Nurses Credentialing Center; 2011. 15. American Nurses Credentialing Center. Journey to Magnet excellence. http://www.nursecredentialing.org/Magnetjourney.aspx. Accessed July 30, 2011. 16. Lake ET, Shang J, Klaus S, Dunton NE. Patient falls: association with hospital Magnet status and nursing unit staffing. Res Nurs Health. 2010;33(5):413-425. 17. Hickey P, Gauvreau K, Connor J, Sporing E, Jenkins K. The relationship of nurse staffing, skill mix, and Magnet recognition to institutional volume and mortality for congenital heart surgery. J Nurs Adm. 2010;40(3):219-223. 18. Hamilton KE, Redshaw ME, Tarnow-Mordi W. Nurse staffing in relation to risk-adjusted mortality in neonatal care. Arch Dis Child Fetal Neonatal Ed. 2007;92(2):F99-F103. 19. Grandi C, González A, Meritano J; Grupo Colaborativo Neocosur. Patient volume, medical and nursing staffing and their association with clinical outcomes of VLBW infants in 15 Neocosur network NICUs. Arch Argent Pediatr. 2010;108(6):499-510. 20. Cimiotti JP, Haas J, Saiman L, Larson EL. Impact of staffing on bloodstream infections in the neonatal intensive care unit. Arch Pediatr Adolesc Med. 2006;160(8):822-830. 21. Pollack MM, Koch MA; NIH-District of Columbia Neonatal Network. Association of outcomes with organizational characteristics of neonatal intensive care units. Crit Care Med. 2003;31(6):1620-1626. 22. Papile LA, Burstein J, Burstein R, Koffler H. Incidence and evolution of subependymal and intraventricular hemorrhage: a study of infants with birth weights less than 1,500 gm. J Pediatr. 1978;92(4):529-534. 23.
American Nurses Credentialing Center. Health care organizations with Magnet-recognized nursing services. http://www.nursecredentialing.org/MagnetFindAMagnetFacility.aspx. Accessed March 29, 2012. 24. Rogowski JA, Horbar JD, Staiger DO, Kenny M, Carpenter J, Geppert J. Indirect vs direct hospital quality indicators for very low-birth-weight infants. JAMA. 2004;291(2):202-209. 25. Phibbs CS, Baker LC, Caughey AB, Danielsen B, Schmitt SK, Phibbs RH. Level and volume of neonatal intensive care and mortality in very-low-birth-weight infants. N Engl J Med. 2007;356(21):2165-2175. 26. Chung J, Phibbs C, Boscardin W, et al. Examining the effect of hospital-level factors on mortality of very low-birth-weight infants: multilevel modeling. J Perinatol. 2011;31(12):770-775. 27. American Hospital Association. AHA Annual Survey Database. 2009 ed. Chicago, IL: American Hospital Association; 2009. 28. Efron B, Gong G. A leisurely look at the bootstrap, the jackknife, and cross-validation. Am Stat. 1983;37:36-48. 29. StataCorp. Stata Statistical Software: Release 10.1 [computer program]. College Station, TX: StataCorp; 2007. 30. Medlock S, Ravelli ACJ, Tamminga P, Mol BWM, Abu-Hanna A. Prediction of mortality in very premature infants: a systematic review of prediction models. PLoS One. 2011;6(9):e23441. 31. Stoll BJ, Hansen N, Fanaroff AA, et al. Late-onset sepsis in very low birth weight neonates: the experience of the NICHD Neonatal Research Network. Pediatrics. 2002;110(2 pt 1):285-291. 32. Stoll BJ, Hansen NI, Adams-Chapman I, et al; National Institute of Child Health and Human Development Neonatal Research Network. Neurodevelopmental and growth impairment among extremely low-birth-weight infants with neonatal infection. JAMA. 2004;292(19):2357-2365. 33. Vohr BR, Wright LL, Poole WK, McDonald SA; NICHD Neonatal Research Network Follow-up Study. Neurodevelopmental outcomes of extremely low birth weight infants <32 weeks’ gestation between 1993 and 1998. Pediatrics. 2005;116(3):635-643. 34.
Aiken LH, Smith HL, Lake ET. Lower Medicare mortality among a set of hospitals known for good nursing care. Med Care. 1994;32(8):771-787. 35. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. 36. Mitchell PH, Ferketich S, Jennings BM; American Academy of Nursing Expert Panel on Quality Health Care. Quality health outcomes model. Image J Nurs Sch. 1998;30(1):43-46. 37. Institute of Medicine. Preterm Birth: Causes, Consequences, and Prevention. Washington, DC: National Academies Press; 2006. 38. March of Dimes. Towards Improving the Outcomes of Preterm Birth III: Enhancing Perinatal Health Through Quality, Science, and Policy Initiatives. White Plains, NY: March of Dimes; 2010. 39. Murphy J, Geisen E, Olmsted MG, et al. Methodology: US News & World Report Best Hospitals 2011-12. http://statisticbrain.com/documents/health/best-hospitals-methodology.pdf?_ga=1.16304044.16304044.1337000000. Accessed August 1, 2012. 40. Leapfrog Group. The Leapfrog Group will publicly report on nursing excellence: 2011 hospital ratings will include Magnet recognition for the first time [press release]. July 12, 2011. http://www.leapfroggroup.org/news/Leapfrog_news/48019277. Accessed March 29, 2012.
The Motors Powering A-Motility in *Myxococcus xanthus* Are Distributed along the Cell Body Oleksii Sliusarenko, David R. Zusman, and George Oster Departments of Physics and Molecular and Cell Biology, University of California, Berkeley, California 94720 Received 12 June 2007/Accepted 9 August 2007 Two models have been proposed to explain the adventurous gliding motility of *Myxococcus xanthus*: (i) polar secretion of slime and (ii) an unknown motor that uses cell surface adhesion complexes that form periodic attachments along the cell length. Gliding movements of the leading poles of cephalexin-treated filamentous cells were observed, but not equivalent movements of the lagging poles. This demonstrates that the adventurous-motility motors are not confined to the rear of the cell. The gram-negative bacterium *Myxococcus xanthus* glides on surfaces using two independent propulsive engines: (i) S (social)-motility, which is similar to twitching motility in *Pseudomonas aeruginosa* (4), is driven by extension, adhesion, and retraction of polar type IV pili (3); and (ii) A (adventurous)-motility, which is driven by an uncharacterized engine hypothesized to be associated with slime secretion (8). In 1924, Jahn proposed that the A-motility motor was powered by extrusion and hydration of slime (2). Recently, Wolgemuth et al. showed that a slime extrusion engine could theoretically produce enough force to drive a bacterium at the observed speed (7). This model was consistent with the observation that in a *Phormidium* sp., a gliding cyanobacterium, the rates of slime secretion and cell movements were similar (1). Furthermore, putative nozzles for slime secretion were observed clustered at the cell poles (8), suggesting that if slime extrusion powered A-motility, the A-motility engines should be located at the cell posterior, pushing cells forward.
Recently, an A-motility protein labeled with yellow fluorescent protein, AglZ-YFP, was used to track protein complexes in living cells as cells moved forward and in reverse (5). In moving cells, AglZ-YFP was found to be associated with transient adhesion complexes that remained at fixed positions relative to the substratum as cells moved forward. Interestingly, the periodic spacing of the AglZ clusters was similar to the helical period of MreB in *Escherichia coli* and *Bacillus subtilis*, which suggests that these clusters may be associated with the bacterial cytoskeleton. On the basis of these observations, Mignot et al. (5) proposed that an uncharacterized protein motor attaches to bacterial “focal adhesion complexes” to propel the cell. An important aspect of this model is that the propulsion forces are distributed periodically along the bacterial axis and are not focused primarily at the posterior of the cells as proposed in the slime extrusion hypothesis. In this study, we sought to distinguish between distributed and posterior A-motility motors, since the location of the motors should discriminate between the two motility models. The slime extrusion mechanism is unlikely to utilize motors distributed along the cell length, because slime secretion is localized mostly at the cell poles (8). Moreover, if slime propulsion motors were distributed, the nozzles along the cell body would have to be tilted, and the cell would have to either change their direction of tilt at the moment of reversal or switch between two populations of oppositely tilted nozzles. Either would require an extremely complicated mechanism. In contrast, propulsion using the observed substrate-fixed focal adhesions requires adhesion points approximately equally distributed along the cell body, that is, a distributed engine. Sun et al.
(6) addressed the issue of rear- versus distributed-force generation by measuring the velocity of cells as cells became elongated (filamentous) following treatment with nonlethal concentrations of the antibiotic cephalexin. $A^+$ $S^-$ mutant cells (i.e., cells with only the A-motility motor) moved at a constant speed regardless of cell length. In contrast, the $A^-$ $S^+$ cells (i.e., cells with only the S-motility motor) slowed dramatically as they became longer. This finding is consistent with the A-motility motor being distributed and the S-motility motor being polar. However, it is still possible that when they become filamentous, $A^+$ $S^-$ mutant cells acquire stronger engines, for example, because they secrete more slime. Additionally, speed may depend nonlinearly on the motor force, as observed for other molecular motors. For example, if the A-motility engine is very strong, it may operate in the regime where the cell speed is nearly constant regardless of the cell length. In this case, the speed is not limited by the friction force but instead is limited by the processivity of the motor itself, for example, by the slime secretion rate. Using filamentous cells treated as described by Sun et al. (6), we sought additional evidence regarding the distribution of the A-motility motor. For our studies, we used strains derived from wild-type strain DZ2: the DZ2 $\Delta pilA$ and DZ2 $\Delta pilA$ AglZ-YFP mutants (5). The cells were grown to mid-exponential phase in rich medium, plated on hard agar containing 1/2-diluted CTT medium (1.5% agar, 0.5% Casitone, 10 mM Tris, 8 mM MgSO$_4$, 1 mM KPO$_4$), and covered with a coverslip. The cells were then treated with cephalexin at a concentration of 100 $\mu$M starting approximately 6 hours before the imaging was done and continuing during the imaging, as described previously (6). The cells were imaged by fluorescence microscopy as described previously (5).
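The rear- versus distributed-motor logic behind the filamentation experiment can be captured in a toy force-balance calculation; the linear drag scaling and all numbers below are illustrative assumptions, not measurements from this study:

```python
# Toy comparison (illustrative assumptions, not data from the paper): a
# pole-localized motor exerts a fixed total force, while a distributed motor
# contributes force in proportion to cell length. With viscous drag
# proportional to length (gamma = GAMMA0 * L), speed is v = F / gamma.

GAMMA0 = 1.0    # drag coefficient per unit length (arbitrary units)
F_POLAR = 10.0  # total force of a hypothetical pole-localized motor
F_PER_UM = 2.0  # force per micrometer of a hypothetical distributed motor

def speed_polar(length_um: float) -> float:
    return F_POLAR / (GAMMA0 * length_um)

def speed_distributed(length_um: float) -> float:
    return (F_PER_UM * length_um) / (GAMMA0 * length_um)

lengths = [5.0, 10.0, 20.0]  # a normal-length cell vs. filamentous cells
print([speed_polar(L) for L in lengths])        # speed falls as cells elongate
print([speed_distributed(L) for L in lengths])  # speed independent of length
```

A pole-localized motor of fixed force predicts speed falling inversely with length, whereas a motor whose total force grows with length predicts the length-independent speed observed for $A^+$ $S^-$ cells (6).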
To confirm that our cephalexin-treated cells did not have septa, we stained cells with a membrane dye, FM4-64 (Invitrogen), which can clearly stain septa in nontreated cells; one such cell is shown in Fig. 1a. The vast majority of the cephalexin-treated cells did not have septa, although there were occasional exceptions (no more than one septum per 100 cells). To further confirm the continuity of the cytoplasm in the filamentous cells, we monitored the localization of AglZ-YFP. Previous studies showed that AglZ-YFP is localized initially to the front of a cell; as the cell reverses, AglZ-YFP relocates to the opposite pole (5). Similar results were found with motile AglZ-YFP-containing filamentous cells (Fig. 1c and d). This result demonstrates the continuity of the cytoplasm and that the filaments do not contain barriers to the movement of AglZ complexes or nodes that may function like cell poles. Figure 2 shows some typical results of our cell motility observations. These images are frames from time-lapse movies available at our website (http://mcb.berkeley.edu/faculty/BMB/zusmand.html). They show filamentous cells stained with the membrane dye FM4-64 visualized by fluorescence microscopy. We observed that in these cells, the anterior portions of cells moved forward using their A-motility motors and that the posterior portions lagged behind or did not move. Since elastic energy stored in sharp folds can potentially affect motility, we selected cells in which the curvature and the number of folds of the cell body remained constant; for these cells, elastic forces could not affect the motion. The cells also did not change their lengths: the first cell measured $20.6 \pm 0.1$ $\mu$m (Fig. 2a) and the second one measured $13.15 \pm 0.1$ $\mu$m (Fig. 2b) throughout the duration of the experiment. These movies show that since the A-motility engine provides the only driving force, it must be localized not at the cell posterior but distributed along the cell body.
This behavior was common for filamentous cells; we recorded movies of 27 unfolding cells, at least 14 of which unambiguously showed distributed force production. Our observations clearly show that the A-motility engine is distributed (but not necessarily uniformly) along the cell body of filamentous cells with no signs of intermediate septa, rather than being localized at the rear pole. The data do not eliminate the possibility that slime extrusion may contribute to the propulsive force. However, we consider it unlikely because the putative slime extrusion pores appear clustered at the cell poles. The data are consistent with the “focal adhesions” model but do not provide proof, for they do not address the nature of the propulsive engine. Thus, the A-motility motor awaits better characterization of its exact mechanism. This work was supported by NSF grant DMS 0414039 (to G.O.) and NIH grant GM20509 (to D.Z.). REFERENCES 1. Hoiczyk, E., and W. Baumeister. 1998. The junctional pore complex, a prokaryotic secretion organelle, is the molecular motor underlying gliding motility in cyanobacteria. Curr. Biol. 8:1161–1168. 2. Jahn, E. 1924. Beitrage zur botanischen Protistologie. I. Die Polyangiden. Gebruder Borntraeger, Leipzig, Germany. 3. Li, Y., H. Sun, X. Ma, A. Lu, R. Lux, D. Zusman, and W. Shi. 2003. Extracellular polysaccharides mediate pilus retraction during social motility of *Myxococcus xanthus*. Proc. Natl. Acad. Sci. USA 100:5443–5448. 4. McBride, M. 2001. Bacterial gliding motility: multiple mechanisms for cell movement over surfaces. Annu. Rev. Microbiol. 55:49–75. 5. Mignot, T., J. W. Shaevitz, P. L. Hartzell, and D. R. Zusman. 2007. Evidence that focal adhesion complexes power bacterial gliding motility. Science 315:853–856. 6. Sun, H., Z. Yang, and W. Shi. 1999. Effect of cellular filamentation on adventurous and social gliding motility of *Myxococcus xanthus*. Proc. Natl. Acad. Sci. USA 96:15178–15183. 7. Wolgemuth, C., E. Hoiczyk, D. Kaiser, and G. 
Oster. 2002. How myxobacteria glide. Curr. Biol. 12:369–377. 8. Yu, R., and D. Kaiser. 2007. Gliding motility and polarized slime secretion. Mol. Microbiol. 63:454–467.
Does manufacturer advertising suppress or stimulate retail price promotions? Analytical model and empirical analysis Raj Sethuraman*, Gerard Tellis Cox School of Business, Southern Methodist University, P. O. Box 75033, Dallas, Texas 75275-0333, USA Marshall School of Business, University of Southern California, California, USA Submitted June 27, 2000; revised May 3, 2001; accepted January 12, 2002 Abstract Does manufacturer advertising for a brand stimulate or suppress retail price promotions? This study addresses this controversial issue. The authors develop an analytical model that shows that the relationship between manufacturer advertising and retail price promotion depends on the role of advertising. If advertising differentiates brands and suppresses consumer response to retail promotion, then the relationship is negative. But, if advertising is informative enough to increase consumer response to retail promotions, then the relationship is positive. A follow-up empirical analysis shows a strong positive relationship between category advertising expenditure and size of retail price discount, and between advertising and discount frequency. The finding supports the informative role of advertising in the context of retail price promotions. The implications of these findings and directions for future research are discussed. © 2002 by New York University. All rights reserved. Keywords: Price Promotion; Advertising; Retailing; Price Elasticity; Advertising-Promotion Tradeoff Introduction Does manufacturer advertising for a brand stimulate or suppress retail price promotions? This is an important issue in the current competitive environment characterized by substantial increases in sales promotions and steady declines in manufacturer advertising over the last two decades. For example, about 400 billion dollars worth of grocery products were sold in the year 2000. 
Of this, about 25% or nearly 100 billion dollars worth of goods were sold on deal to consumers (source: Information Resources, Inc. 2001). Retail promotions to consumers primarily involve price discounts, but also include displays, features, and special promotions. Retail price discounts are often triggered by manufacturers’ trade deals. While sales promotions have increased, the proportion of manufacturers’ total promotional budget spent on advertising declined sharply in the 1980s, and has continued a steady decline in the 1990s (Hoyt 1997; PROMO News 1998; Scott 1992). Proponents of sales promotion interpret this change as the result of the increasing awareness of the power of price promotions. Supporters of advertising interpret it as the cause of the decline of national brands and the growth of price promotions. Their argument is that the decline in advertising and the increase in sales promotions result in weaker brand loyalty, lower manufacturer prices, and greater retailer power. Over the last two decades this debate has turned into a major controversy with implications for marketing strategy and practice (see, for example, Blattberg and Neslin, 1990; Jones, 1995; Mela, Gupta, and Lehmann, 1997; and Sethuraman and Tellis, 1991, for discussion and research related to these issues). The controversy revolves around the issue of whether advertising and sales promotions are substitutes or complements, and whether the use of one negatively influences the use of the other. This advertising versus sales promotion controversy parallels a much older one in the economics literature about advertising and prices. Many economists believe that advertising is a means for firms to build market power. Firms do so by differentiating their brands, creating brand loyalty, and making consumers insensitive to price differences (Comanor & Wilson, 1974).
Thus advertising reduces price sensitivity, and advertised brands can increase their prices, leading to a positive relationship between advertising and prices. Other economists assume that advertising is information (Nelson, 1970; Nelson, 1974). As such, advertising increases consumers’ information about their choices, allowing consumers to comparison shop. Consumers become more price-sensitive and are better able to choose low-priced brands. As a result, firms compete on price and end up serving consumers with lower prices. It follows that advertising and prices are negatively related. The controversy regarding the relationship between advertising and price sensitivity has spawned many empirical studies in marketing. Kaul and Wittink (1995) and Shankar and Krishnamurthi (1996) provide a detailed discussion of these studies. Both these reviews state that it is difficult to draw general conclusions from prior studies because (i) some studies support the differentiation theory while others support the informative role or information theory of advertising, and (ii) there are significant differences in the nature of the studies. There are relatively fewer studies dealing with the effect of advertising on prices. Benham (1972) studied the impact of advertising on the price of eyeglasses and concluded that the presence of advertising is associated with lower prices. Cady (1976) and Kwoka (1984) also reached the same conclusion. Relatedly, Steiner (1973, 1993) argues that advertising may increase the salience of brands sufficiently that retailers compete with each other to promote these brands in order to draw consumers into their stores and increase sales of these and other brands. As a result, heavily advertised brands tend to have lower retail margins and, possibly, lower retail prices.
Farris and Albion (1980) summarize the advertising-price literature and observe (Table 5) that higher advertising tends to be associated with higher factory prices but possibly lower retail prices. None of the above papers, however, explicitly considers the relationship between manufacturer advertising and retail price promotion. Indeed, even though there is an extensive literature on promotion, only a few studies address the linkage between advertising and price promotion. Sethuraman and Tellis (1991) analyze a monopoly model at the manufacturer level (with no retailers) and show that the decision to invest in advertising or price promotion depends on the ratio of price elasticity to advertising elasticity. Neslin, Powell, and Stone (1995) develop a dynamic optimization model to understand the tradeoff between advertising and trade promotion. They use simulations and obtain useful results about the effects of several factors, such as promotion sensitivity and purchase acceleration, on a manufacturer’s promotion and advertising plan. However, they focus on the manufacturer side only and consider a single manufacturer selling to an average retailer. Agrawal (1996) examines the issue of balancing media advertising and trade promotion using a game-theoretic model with two manufacturers who distribute their brands to consumers through a common retailer, and derives several interesting results about the effect of brand loyalty on advertising and trade promotion. Shankar and Bolton (1999) empirically analyze promotion data from six product categories and find that advertising leads to better price/promotion coordination at the retail level. The objective of our study is to contribute to the literature on price promotions by investigating the implications of the information and differentiation theories of advertising for retail price promotion decisions. In particular, we investigate the following two questions using an analytical model and an empirical study. 1.
Is the relationship between manufacturer advertising and depth of retail price discount positive or negative? That is, does a higher level of advertising lead to a larger or a smaller retail price discount? 2. Is the relationship between manufacturer advertising and frequency of retail price discount positive or negative? Insights into these relationships can help retailers decide which brands and categories to promote, and whether to offer deep or shallow discounts. These insights could also provide guidance to manufacturers regarding decisions concerning the incidence of price promotions. In particular, we develop a formal model to show that the relationship between advertising and retail price promotion is mediated by the role that advertising plays. If advertising provides information and increases consumer response to price promotions, as theorized by Nelson (1970), then advertising and retail promotion will be positively related. On the other hand, if advertising intensifies brand loyalty by differentiation and decreases consumer response to retail promotions, as stated by Comanor and Wilson (1974), then advertising and retail promotion will be negatively related. So the actual relationship is an empirical issue. We test this relationship through an empirical analysis using a cross section of 82 grocery products. The paper is organized as follows. The next section describes the analytical model and results. The third section presents an empirical test of the relationship between advertising and retail price promotion. The final section concludes by summarizing the implications and discussing the limitations and future research directions. **Analytical model and results** We analyze a parsimonious game-theoretic model that explicates the relationship between advertising and retail price promotion by capturing the spirit of the differentiation (Comanor & Wilson, 1974) and information (Nelson, 1974) theories of advertising.
In this section, we present the key elements of the model organized as follows: (i) model assumptions, (ii) equilibrium solutions, (iii) effect of price sensitivity, (iv) relationship between manufacturer advertising and depth of retail price discount, (v) relationship between manufacturer advertising and frequency of retail price discount, and (vi) summary of analytical results. **Model assumptions** We make five assumptions in our model structure. 1. We consider a market for a product category comprising two manufacturers, each selling one brand of the product category through a retailer who sells both brands. Clearly, there are likely to be multiple brands and multiple retailers in the market. However, the differentiation and information theories of advertising relate predominantly to price competition (cross-price sensitivity) across brands within a store. Furthermore, at least in the grocery products market, which is the main focus of our study, brand switching within stores accounts for over 80% of the total sales impact of price promotions (Gupta, 1988; Bell, Chiang & Padmanabhan, 1999). Store switching is a relatively less important factor. 2. We assume that the two brands are "symmetric" in that they have the same costs and the same response to marketing variables. Symmetry is a common assumption in game-theoretic models that study price competition in the context of a manufacturer-retailer channel structure (e.g., McGuire & Staelin, 1983; Choi, 1996). Furthermore, our attempt here is to capture the spirit of the differentiation and information theories of advertising. These theories relate simply to price competition between brands, not to asymmetries between brands. Therefore, we use a symmetric model to gain initial insights into price promotion decisions. Incorporating asymmetry makes the model more cumbersome and may confound the effect of asymmetry with the effect of advertising. 3.
Our focus is on assessing the impact of advertising on the price promotion decision in the spirit of the information and differentiation theories. Accordingly, we assume that the regular retail price $p_i$ ($i = 1, 2$), manufacturer wholesale price ($w_i$), and advertising outlay ($A_i$) are fixed when price promotion decisions are made. These assumptions also appear reasonable since regular prices and advertising budgets are often decided before price cut decisions. (Later, under *Manufacturer advertising decision*, we discuss the situation where the manufacturer decides on advertising in conjunction with price discount decisions.) The resulting quantity sold at regular prices is denoted $q_i$ ($i = 1, 2$). By assumption (2) of symmetry across brands: $$p_1 = p_2 = p_r \text{ (say); } w_1 = w_2 = w_r; \quad q_1 = q_2 = q_r;$$ gross retail margin at regular price, $g_r = p_r - w_r$; manufacturer margin at regular price, $m_r = w_r - c$, where $c$ is the manufacturer's variable cost. 4. During the promotion period, each manufacturer $i$ first determines the size of its trade deal ($t_i$), that is, the discount from the regular wholesale price offered to the retailer. Given these trade deals, the retailer decides on the discounts ($d_i$) to be passed on to consumers that maximize the retailer's total category profits. The manufacturers know the retailer's decision rule and incorporate it into their decision making. In game-theoretic terms, each manufacturer acts as a Stackelberg leader (McGuire & Staelin, 1983; Coughlan, 1985). 5. We assume that the demand ($q_{di}$) for brand $i$ ($i = 1, 2$) is linear in the own discount ($d_i$) and the competitive retail discount ($d_j$), given the regular price. In particular, we assume the demand function $$q_{di} = q_r + d_i + \theta(d_i - d_j), \quad i, j = 1, 2; \quad i \neq j, \quad (1)$$ where $\theta \in (0, 1)$ is a measure of the degree of cross-promotion sensitivity (or price competition) and $q_r$ is the demand at regular price.
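To make the demand specification concrete, the following sketch evaluates the demand function in assumption 5 at hypothetical parameter values (the numbers are purely illustrative and are not drawn from the paper):

```python
# Hypothetical illustration of the demand function in assumption 5:
# q_di = q_r + d_i + theta * (d_i - d_j). All parameter values are made up.

def demand(q_r, d_own, d_comp, theta):
    """Demand for brand i given own discount d_own and rival discount d_comp."""
    return q_r + d_own + theta * (d_own - d_comp)

q_r, theta = 100.0, 0.5   # regular-price demand and cross-promotion sensitivity

# No discounts: demand collapses to the regular-price demand q_r.
print(demand(q_r, 0.0, 0.0, theta))    # 100.0

# An own discount raises demand by more when the rival does not match it.
print(demand(q_r, 10.0, 0.0, theta))   # 115.0
print(demand(q_r, 10.0, 10.0, theta))  # 110.0
```

The last two calls show that when both brands discount equally, the competitive term $\theta(d_i - d_j)$ drops out, leaving only the own-discount effect.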
A demand function that contains a term for own price (discount) and another term capturing the effect of the difference between the own price (discount) and the competitor's price (discount) is consistent with individual utility-maximizing behavior (Shubik & Levitan, 1980) and is used in Raju, Sethuraman, and Dhar (1995). Note that when there is no discount, that is, $d_i = d_j = 0$, demand ($q_{di}$) equals the regular-price demand, $q_r$. **Equilibrium solutions** The retailer sets $d_1$ and $d_2$ to maximize the following profit function, given regular prices and manufacturer trade deals $t_1$ and $t_2$: $$\max_{d_1, d_2} \sum_{i=1}^{2} (g_r + t_i - d_i)q_{di} \quad (2)$$ Solving this problem gives the retail discounts $\hat{d}_1$ and $\hat{d}_2$ as functions of the trade deals $t_1$ and $t_2$ and of $\theta$ and $q_r$. Substituting these expressions in (1), we obtain $\hat{q}_{di}$ as functions of $t_1$ and $t_2$. Manufacturer $i$'s problem is to select $t_i$ so as to maximize its own profits: $$\max_{t_i} (m_r - t_i) \hat{q}_{di} \quad (3)$$ The solution to problem (3) gives the equilibrium trade deals ($t_1^*$ and $t_2^*$).
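The two-stage derivation above can be reproduced symbolically. The sketch below is our own illustration (using sympy, with symbol names matching the text): it solves the retailer's first-order conditions for the problem in Eq. (2), substitutes the retailer's rule into the manufacturer's problem in Eq. (3), and imposes symmetry to recover the equilibrium trade deal:

```python
# Symbolic derivation sketch of the Stackelberg equilibrium described in the
# text (our reconstruction, not the authors' code).
import sympy as sp

d1, d2, t1, t2, th, qr, gr, mr = sp.symbols('d1 d2 t1 t2 theta q_r g_r m_r')

q = lambda di, dj: qr + di + th * (di - dj)          # demand, Eq. (1)
profit_r = (gr + t1 - d1) * q(d1, d2) + (gr + t2 - d2) * q(d2, d1)

# Retailer's first-order conditions give discounts as functions of trade deals.
sol = sp.solve([sp.diff(profit_r, d1), sp.diff(profit_r, d2)], [d1, d2])
d1_hat = sp.simplify(sol[d1])        # works out to (g_r + t1 - q_r)/2

# Manufacturer 1 picks t1 anticipating the retailer's rule.
profit_m1 = (mr - t1) * q(d1_hat, sol[d2])
t_best = sp.solve(sp.diff(profit_m1, t1), t1)[0]     # best response to t2

# Impose symmetry t1 = t2 to obtain the equilibrium trade deal.
t_eq = sp.simplify(sp.solve(sp.Eq(t1, t_best.subs(t2, t1)), t1)[0])
print(t_eq)   # algebraically equal to (m_r*(1+theta) - g_r - q_r)/(2+theta)
```

A useful byproduct is the retailer's pass-through rule: in this model the optimal discount works out to $\hat{d}_i = (g_r + t_i - q_r)/2$, so half of any incremental trade deal is passed on to consumers.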
Substituting the equilibrium trade deal in $\hat{d}_1$ and $\hat{d}_2$, we obtain the symmetric equilibrium retail price discount and consumer demand. These equilibrium solutions are given in Table 1.

Table 1 Equilibrium solutions

| Variable | Notation | Expression |
|---------------------------------|----------|------------|
| Retail price discount | $d^*$ | $\frac{(g_r + m_r)(1 + \theta) - q_r(3 + \theta)}{2(2 + \theta)}$ |
| Manufacturer trade deal | $t^*$ | $\frac{m_r(1 + \theta) - (g_r + q_r)}{2 + \theta}$ |
| Demand at discounted price | $q_d^*$ | $\frac{(g_r + m_r + q_r)(1 + \theta)}{2(2 + \theta)}$ |
| Retail margin after discount | $g_d^*$ | $\frac{(g_r + m_r + q_r)(1 + \theta)}{2(2 + \theta)}$ |
| Manufacturer margin after discount | $m_d^*$ | $\frac{g_r + m_r + q_r}{2 + \theta}$ |
| Retailer profits after discount | $\Pi_r^*$ | $2 g_d^* \cdot q_d^*$ |
| Manufacturer profits after discount | $\Pi_m^*$ | $m_d^* \cdot q_d^*$ |

Note:
- $g_r$ = retailer's gross margin at regular price
- $m_r$ = manufacturer's gross margin at regular price
- $q_r$ = demand at regular price
- $\theta$ = cross-promotion sensitivity (measure of price competition)

The equilibrium obtained is the unique Stackelberg equilibrium. As in McGuire and Staelin (1983) and Raju, Sethuraman, and Dhar (1995), we restrict our analysis to situations with non-negative discounts ($d^* \geq 0$ and $t^* \geq 0$). From Table 1, the equilibrium retail discount ($d^*$) and manufacturer trade deal ($t^*$) both increase with their respective gross margins. These results are intuitive: if the manufacturer (or retailer) expects high margins on unit sales, he/she has an incentive to offer a deeper discount and increase brand sales, other things equal. Furthermore, both discounts decrease with the regular-price demand ($q_r$). This term represents the loss from existing regular-price consumers taking advantage of the discount.
The greater this potential loss, the less the incentive to offer big discounts. The key equilibrium result relates to the effect of price competition ($\theta$), which we discuss next. **Effect of price sensitivity ($\theta$)** From the expressions in Table 1, it can be shown that the equilibrium retail price discount is higher for higher values of $\theta$, that is, $\frac{\partial d^*}{\partial \theta} > 0$. This result is intuitive. As brand price competition increases, retailers offer deeper discounts in equilibrium, other things equal. The deeper discount results in higher demand. A higher $\theta$ also leads to a higher retail margin, resulting in higher profits for the retailer. In other words, from a discounting perspective, the retailer benefits when price competition between brands within a store is higher. We state these results formally as: **Lemma 1:** When the cross-price sensitivity ($\theta$) between brands in a store is higher, (a) the size of the retail price discount is higher, (b) the retailer's margin is higher, and (c) the retailer's profits from discounting are higher. From Table 1, the manufacturer's trade deal ($t^*$) is also higher for higher values of $\theta$. However, by differentiating the relevant expressions with respect to $\theta$, it can be shown that the manufacturer's margin ($m_d^*$) and profits ($\Pi_m^*$) decrease with $\theta$. We state these results formally as: **Lemma 2:** When the cross-price sensitivity ($\theta$) between brands in a store is higher, (a) the size of the manufacturer's trade deal is higher, (b) the manufacturer's margin is lower, and (c) the manufacturer's profits from discounting are lower. These lemmas help us infer the relationship between advertising and retail promotion. **Relationship between manufacturer advertising and size of retail price discount** The informative role of advertising (Nelson, 1970, 1974) suggests that advertising increases consumers' information about their choices.
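The comparative statics in Lemmas 1 and 2 can be checked numerically by evaluating the equilibrium quantities at two values of $\theta$. The sketch below uses the Table 1 expressions for $d^*$ and $t^*$ together with the margin identities $g_d = g_r + t - d$ and $m_d = m_r - t$ from the model setup; all parameter values are hypothetical:

```python
# Numerical check of Lemmas 1 and 2 (illustrative parameter values only;
# chosen so that d* and t* are non-negative, as the analysis requires).

def equilibrium(g_r, m_r, q_r, theta):
    d = ((g_r + m_r) * (1 + theta) - q_r * (3 + theta)) / (2 * (2 + theta))
    t = (m_r * (1 + theta) - g_r - q_r) / (2 + theta)
    q_d = (g_r + m_r + q_r) * (1 + theta) / (2 * (2 + theta))
    g_d = g_r + t - d                 # retail margin after discount
    m_d = m_r - t                     # manufacturer margin after discount
    return {'d': d, 't': t, 'g_d': g_d, 'm_d': m_d,
            'pi_r': 2 * g_d * q_d, 'pi_m': m_d * q_d}

low = equilibrium(g_r=2.0, m_r=6.0, q_r=1.0, theta=0.2)
high = equilibrium(g_r=2.0, m_r=6.0, q_r=1.0, theta=0.8)

# Lemma 1: discount, retail margin, and retailer profits rise with theta.
assert high['d'] > low['d'] and high['g_d'] > low['g_d'] and high['pi_r'] > low['pi_r']
# Lemma 2: trade deal rises; manufacturer margin and profits fall.
assert high['t'] > low['t'] and high['m_d'] < low['m_d'] and high['pi_m'] < low['pi_m']
```

The same inequalities are what the lemmas assert for any admissible parameter values with $d^*, t^* \geq 0$; this script merely illustrates them at one point.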
Armed with this information, consumers are motivated to do more comparison shopping, thereby increasing their sensitivity to retail price promotions (higher $\theta$). In contrast, the advertising-equals-market-power argument (Comanor & Wilson, 1974) asserts that advertising differentiates brands, creating brand loyalty and making consumers less sensitive to price promotions (lower $\theta$). Integrating these theories with Lemmas 1 and 2, we develop the relationship between advertising and discount depth (i) within a category and (ii) across categories. *Relationship within category* The relationship between advertising and retail discount depends on whether advertising increases or decreases price sensitivity, as described in the following result. **Proposition 1:** (a) If advertising equals information, an increase in advertising of brands within a category will result in a larger retail discount in that category, other things equal. (b) If advertising equals market power, an increase in advertising of brands within a category will result in a smaller retail discount in that category, other things equal. *Relationship across categories* From the equilibrium solutions in Table 1, we can also infer the relationship between manufacturer advertising and retail price cuts across categories. Let $C$ be the set of all relevant product categories. For one category $c \in C$, we can write the retail discount as (the subscript $c$ denotes the particular category): \[ d_c^* = \frac{(g_{rc} + m_{rc})(1 + \theta_c) - q_{rc}(3 + \theta_c)}{2(2 + \theta_c)} \quad (4) \] For any two categories $c, c' \in C$, the information theory suggests that advertising \( A_c > A_{c'} \Rightarrow \theta_c > \theta_{c'} \), other things equal. Extending Lemma 1, \( \theta_c > \theta_{c'} \Rightarrow d_c^* > d_{c'}^* \). Combining, when advertising equals information, \( A_c > A_{c'} \Rightarrow \theta_c > \theta_{c'} \Rightarrow d_c^* > d_{c'}^* \).
When advertising equals market power, \( A_c > A_{c'} \Rightarrow \theta_c < \theta_{c'} \Rightarrow d_c^* < d_{c'}^* \). Thus, we have the following results: **Proposition 2:** (a) If advertising equals information, other things equal, categories with higher advertising levels would have larger discounts than categories with lower advertising levels. (b) If advertising equals market power, other things equal, categories with higher advertising levels would have smaller discounts than categories with lower advertising levels. **Relationship between manufacturer advertising and frequency of retail price discount** We do not directly incorporate the frequency of discounts in our analytical model. However, we can infer it from the profitability of price promotions. Following Raju, Sethuraman, and Dhar (1995), we assume that the likelihood (or probability) of taking a particular action is proportional to the profitability of that action; that is, the greater the profits, the more likely the action will be taken.\(^5\) In our promotion context, the retailer's profits from discounting increase as \( \theta \) increases. Thus, if advertising equals information and increases \( \theta \), then the retailer's profits will increase with advertising and s/he will be more inclined to promote. Conversely, if advertising equals differentiation and decreases \( \theta \), then the retailer's profits will decrease with advertising and s/he will be less inclined to promote. The situation reverses for the manufacturer because its profits decrease as \( \theta \) increases. Thus, if advertising equals information and increases \( \theta \), then the manufacturer's profits will decrease with advertising and s/he will be less inclined to promote. Conversely, if advertising equals differentiation and decreases \( \theta \), then the manufacturer's profits will increase with advertising and s/he will be more inclined to promote. The net effect of advertising on the frequency of price cuts is ambiguous.
However, it is likely that competition between manufacturers will force them to promote according to the retailer's incentive. If one manufacturer does not offer a trade deal, the other manufacturer will, and will take substantial sales away from the nondealing manufacturer. Thus, based on the retailer's incentive to promote, we have the following tentative results: **Proposition 3:** (a) If advertising equals information, an increase in advertising of brands within a category will result in more frequent retail price cuts in that category, other things equal. (b) If advertising equals market power, an increase in advertising of brands within a category will result in less frequent retail price cuts in that category, other things equal. **Proposition 4:** (a) If advertising equals information, other things equal, categories with higher advertising levels would be discounted more frequently than categories with lower advertising levels. (b) If advertising equals market power, other things equal, categories with higher advertising levels would be discounted less frequently than categories with lower advertising levels. **Manufacturer advertising decision** At this point, a pertinent question is what implications the information and differentiation theories have for the manufacturers' advertising decisions. If advertising increases or decreases price competition (\( \theta \)), what should the optimal advertising level be? Isolating this effect, we can write the manufacturer's advertising decision problem (which takes the price promotion decisions into account) as \[ \max_A \; \Pi_m^*(A) - A, \] where \( \Pi_m^* \) is as given in Table 1. The optimal \( A^* \) solves the first-order condition \[ \frac{d\Pi_m^*(A)}{dA} - 1 = \frac{\partial \Pi_m^*}{\partial \theta} \cdot \frac{d\theta}{dA} - 1 = 0. \quad (5) \] Note that if advertising equals information, \( \frac{d\theta}{dA} > 0 \). From Lemma 2, \( \frac{\partial \Pi_m^*}{\partial \theta} < 0 \).
Hence, the left-hand side of Eq. (5) is strictly negative and never equals zero. In other words, if advertising increases brand price competition, the manufacturer's optimal action in this context is not to advertise. If advertising equals differentiation, then \( \frac{d\theta}{dA} < 0 \) and the optimal advertising level is the \( A^* \) that solves Eq. (5). However, there are at least three effects of advertising on demand: (a) a direct effect on primary demand (increasing category sales); (b) a direct effect on selective demand (increased market share through sales taken from competitors); and (c) an indirect effect on demand through changes in price sensitivity. The net profits arising from the combination of these three demand effects determine the optimal advertising level. **Summary of analytical results** The above analysis shows that the relationship between advertising and retail price promotion cannot be asserted a priori; it is an empirical issue that depends critically on the role of advertising. If advertising differentiates brands and suppresses consumer response to retail promotion, then the relationship is negative. But if advertising is informative enough to increase consumer response to retail promotions, then the relationship is positive.\(^3\) Thus, empirical analysis is needed to throw further light on the problem. **Empirical analysis** This section assesses the relationship between advertising, price sensitivity, and retail price promotion across categories (Propositions 2 and 4). We do not have the within-category data needed to investigate Propositions 1 and 3. **Empirical model** Our theory (Proposition 2) states that the relationship between advertising and discount size is mediated through promotional price sensitivity (consumer response to price promotions). In addition, we need to account for covariates that might influence the relationship. Our theory (Eq.
(4)) says that, besides price sensitivity (\( \theta_c \)), retail gross margin (\( g_{rc} \)), manufacturer margin (\( m_{rc} \)), and average regular-price brand sales (\( q_{rc} \)) affect the size of the discount. We use retail margin and brand sales as covariates in the discount size model. (We do not have data on manufacturer margin.) The covariates in the promotional price sensitivity model are the same as in Narasimhan, Neslin, and Sen (1996): (i) category penetration (percentage of households purchasing the product), (ii) purchase cycle (interpurchase time), (iii) average purchase price, (iv) number of brands, (v) propensity to purchase on impulse, and (vi) ability to stockpile. Narasimhan, Neslin, and Sen (1996) state that promotional price elasticity is likely to be higher in categories (i) that are purchased by a large number of households (high penetration), (ii) that are purchased more frequently, (iii) where the average purchase price is high, (iv) where the purchase is based on impulse, and (v) where stockpiling is easier. It is also possible that price competition is greater in categories with a larger number of brands. Fig. 1 describes the empirical model we use to test Proposition 2. The empirical model for testing Proposition 4 is the same as in Fig. 1 except that the dependent variable is discount frequency instead of discount size. **Data** Table 2 lists the variables used in the empirical analysis, their sources, and descriptive statistics. Below, we describe the key variables. *Retail Promotion (Discount Size, DISCSIZE, and Discount Frequency, DISCFREQ).* Category-level measures of retail price promotion are obtained from the Infoscan Report on Trade Promotions™ prepared by Information Resources, Incorporated (IRI).
The report measures the sales response to price and promotional activities for several product categories by analyzing over 45 million weekly promotional sales observations from over 2,400 Infoscan stores in 49 metropolitan markets during the year 1988. The report also records the average percentage cut from regular prices (DISCSIZE) and the total number of weekly price discounts (DISCFREQ) for each category. *Advertising Expenditure (ADEXP).* Following Hoch and Banerji (1993) and Sethuraman (1992), the category advertising data (in total dollars) were obtained from information compiled by Leading National Advertisers (LNA) and presented in the BAR/LNA Report 1988. *Price Sensitivity (PRICELAS).* As in Raju, Sethuraman, and Dhar (1995), category-level price sensitivity is obtained from the average promotional price elasticity reported in the Infoscan Report on Trade Promotions (1988). *Retail Gross Margin (MARGIN).* We use the same gross margin data as those employed in Hoch and Banerji (1993) and Sethuraman (1992). Gross retail margins (expressed as a percentage of price) for 1988 are obtained from the Supermarket Business Annual Expenditure Survey, published in *Supermarket Business*, September 1989. *Category Sales (CATSALE).* Category sales data were obtained from the Infoscan Supermarket Review (1988) and provided by Raju, Sethuraman, and Dhar (1995). *Number of Brands (NBRAND).* The number of distinct brands in a product category was obtained from the Infoscan Supermarket Review (1988) and provided by Raju, Sethuraman, and Dhar (1995). *Average Brand Sales (BRSALE).* The average brand sales in a category are obtained by dividing category sales by the number of brands in the category. *Consumer Purchase Variables.* The variables household penetration (PENETRATION), purchase cycle (PCYCLE), and average purchase price (PRICE) are obtained from the Marketing Factbook (1988) and are the same as those used in Narasimhan, Neslin, and Sen (1996).
The percentage of households purchasing an item in the category and the average number of days between purchases (purchase cycle) are obtained directly. The average price per purchase in a product category is computed by multiplying the price per unit volume by the number of units per purchase. *Impulse Purchase (IMPULSE) and Stockpiling (STOCK).* The category-level measure of the propensity to purchase on impulse is obtained from a consumer survey reported in Narasimhan, Neslin, and Sen (1996). They measure impulse based on consumer response to two items: "I often buy this product on a whim" and "I typically like to buy this product when the urge strikes me." They use principal components analysis and combine the two items through the factor score to obtain an aggregate measure of the propensity to purchase on impulse. The measure of the ability to stockpile is obtained in the same manner. In summary, category-level data on retail price promotion were obtained from the Infoscan Report on Trade Promotions. All other data were obtained from the same sources used in previously published research. Combining the various data sources, we have information on all model variables for 82 grocery products for the year 1988.\(^4\)

Table 2 Variables used in empirical analysis—sources and means

| Variable | Acronym | Source | Previous Research Where Used | Mean (Std. Dev.) |
|---|---|---|---|---|
| Discount Size (Percent) | DISCSIZE | Infoscan Report on Trade Promotions | — | 12.9 (2.6) |
| Discount Frequency ('000) | DISCFREQ | Infoscan Report on Trade Promotions | — | 509 (589) |
| Advertising Expenditure ($ Million) | ADEXP | BAR/LNA Report | Hoch and Banerji (1993); Sethuraman (1992) | 73.9 (82.0) |
| Promotional Price Elasticity (Absolute Value) | PRICELAS | Infoscan Report on Trade Promotions | Raju, Sethuraman and Dhar (1995) | 2.64 (.56) |
| Retail Gross Margin (Percent) | MARGIN | Supermarket Business | Hoch and Banerji (1993) | 22.1 (4.6) |
| Category Retail Sales ($ Million) | CATSALE | Infoscan Supermarket Review | Raju, Sethuraman and Dhar (1995) | 837 (743) |
| Number of Brands | NBRAND | Infoscan Supermarket Review | Raju, Sethuraman and Dhar (1995) | 40.4 (43.0) |
| Household Penetration (Percent) | PENETRATION | Marketing Factbook | Narasimhan, Neslin and Sen (1996) | 66.0 (27.0) |
| Purchase Cycle (Days) | PCYCLE | Marketing Factbook | Narasimhan, Neslin and Sen (1996) | 64.9 (27.1) |
| Purchase Price ($) | PRICE | Marketing Factbook | Narasimhan, Neslin and Sen (1996) | 1.98 (1.03) |
| Impulse Purchase | IMPULSE | Consumer Survey | Narasimhan, Neslin and Sen (1996) | −.05 (.43) |
| Ability to Stockpile | STOCK | Consumer Survey | Narasimhan, Neslin and Sen (1996) | −.01 (.36) |

**Estimation and results on discount size** The correlation between retail discount size and advertising expenditure is 0.31. This positive relationship suggests that advertising is likely to act as information, leading to greater price sensitivity. To assess this further, we estimate the empirical model represented in Fig. 1.
\[ \text{DISCSIZE} = a_0 + a_1 (\text{PRICELAS}) + a_2 (\text{MARGIN}) + a_3 (\text{BRSALE}) + \text{Error} \quad (6A) \] \[ \text{PRICELAS} = b_0 + b_1 (\text{ADEXP}) + b_2 (\text{PENETRATION}) + b_3 (\text{PCYCLE}) + b_4 (\text{PRICE}) + b_5 (\text{NBRAND}) + b_6 (\text{IMPULSE}) + b_7 (\text{STOCK}) + \text{Error} \quad (6B) \] The equations are jointly estimated using two-stage least squares. The results are in Table 3. The \( R^2 \) for the discount size model is 0.19. As predicted by the theoretical analysis, the coefficients \( a_1 \) and \( a_2 \) are positive and statistically significant (\( p < .05 \)). That is, discount depth is greater in categories with higher promotional price elasticity and higher retail margins. The coefficient for brand sales is negative (as predicted) but not significant. The \( R^2 \) for the price elasticity model is 0.26. Promotional price elasticity is significantly higher in categories with higher advertising expenditures, consistent with the informative role of advertising. In addition, price elasticity is higher in categories purchased by a larger number of households and in products that can be stockpiled. Multicollinearity does not appear to be a problem in identifying the effect of advertising on discount size: the (absolute) correlations among the independent variables in the discount size model (6A) are all less than 0.2.
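To make the estimation procedure transparent, here is a minimal sketch of the two-stage least squares logic behind Equations (6A) and (6B), run on synthetic data. The variable names follow the text, but all values and coefficients below are simulated assumptions, not the paper's data:

```python
# Two-stage least squares sketch (our illustration, not the authors' code):
# PRICELAS is the endogenous regressor in Eq. 6A, instrumented by Eq. 6B
# variables. All data are simulated with known coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 82  # matches the number of categories in the paper's sample

# Simulated instruments (a subset of the Eq. 6B covariates) and the
# exogenous covariates of Eq. 6A.
adexp, penetration, stock = rng.normal(size=(3, n))
margin, brsale = rng.normal(size=(2, n))

# Simulated endogenous elasticity and outcome.
pricelas = 0.4 * adexp + 0.4 * penetration + 0.3 * stock + rng.normal(scale=0.3, size=n)
discsize = 0.7 * pricelas + 0.2 * margin - 0.1 * brsale + rng.normal(scale=0.3, size=n)

def ols(y, X):
    """Least-squares coefficients of y on design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress PRICELAS on the instruments plus exogenous covariates.
Z = np.column_stack([np.ones(n), adexp, penetration, stock, margin, brsale])
pricelas_hat = Z @ ols(pricelas, Z)

# Stage 2: regress DISCSIZE on fitted PRICELAS and the covariates.
X = np.column_stack([np.ones(n), pricelas_hat, margin, brsale])
a = ols(discsize, X)
print(a[1])  # estimate of a1: should be positive, near the simulated 0.7
```

With real data one would also report standard errors and standardized coefficients; packages such as statsmodels or linearmodels automate this, but the two-stage structure is the same.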
Table 3 Regression results

Table 3A: Equations 6A/7A (standardized estimates)

| Independent Variables | Discount Size (Equation 6A) | Discount Frequency (Equation 7A) |
|---|---|---|
| Price Elasticity (PRICELAS) | .67*** | .61*** |
| Retail Margin (MARGIN) | .23*** | .07 |
| Brand Sale (BRSALE) | −.11 | .16* |
| # of Brands (NBRAND) | N/A | .67*** |
| \( R^2 \) (adjusted \( R^2 \)) | .19 (.16) | .54 (.52) |

Table 3B: Equations 6B/7B (standardized estimates)

| Independent Variables | Dependent Variable: Price Elasticity |
|---|---|
| Advertising Expenditure (ADEXP) | .27** |
| Household Penetration (PENETRATION) | .31** |
| Purchase Cycle (PCYCLE) | .20 |
| Purchase Price (PRICE) | .02 |
| # of Brands (NBRAND) | .14* |
| Impulse Purchase (IMPULSE) | .16 |
| Ability to Stockpile (STOCK) | .22** |
| \( R^2 \) (adjusted \( R^2 \)) | .26 (.19) |

*** \( p < .01 \). ** \( p < .05 \). * \( p < .10 \). Number of observations = 82.

Although our model specification is based on the analytical model and past literature (Narasimhan, Neslin & Sen, 1996), other more complicated models are possible. In particular, because our focus in this paper is on assessing the impact of advertising on consumer promotion, we treated advertising as an exogenous variable within our modeling context. One could posit that advertising is endogenous and is itself determined by price elasticity. If this were the case, however, we would find a negative (not a positive) relationship between advertising and price elasticity: according to the Dorfman-Steiner theorem, the advertising (to sales) ratio is inversely related to price elasticity (see Farris & Albion, 1980, p. 21 for a similar argument). It may also be posited that category sales influence both discount size (through price elasticity) and advertising (through advertising budget determination).
We observe a significant positive relationship between household penetration and price elasticity (Table 3). Categories with higher penetration generally have higher category sales (the correlation between the two variables is 0.56). Thus, large-sales categories would have higher price elasticity, leading to deeper discounts. Many firms set advertising budgets as a percentage of sales, so category sales and advertising expenditure could be positively related (in our data, the correlation between the two variables is 0.33). Thus, the observed positive relationship between advertising and discount size may be due to category sales, which positively affect both advertising and price elasticity. To account for this possibility, we replace dollar advertising expenditure (ADEXP) with the advertising-to-sales ratio (ASRATIO) as the measure of advertising in Equation (6B). The category advertising-to-sales ratio is computed as category advertising expenditure divided by category dollar sales. Dividing by category sales normalizes advertising expenditures with respect to changes in category sales. The standardized estimate of ASRATIO in Equation (6B) is 0.26 and is also significant at $p < .05$. Thus, the basic results do not change. **Estimation and results on discount frequency** The correlation between discount frequency and advertising expenditure is 0.69, which is much higher than that between discount size and advertising expenditure (0.31). The correlation between discount size and discount frequency across categories is 0.46.
The empirical model for testing the relationship between advertising and discount frequency is estimated using the following equations: \[ \text{DISCFREQ} = c_0 + c_1 (\text{PRICELAS}) + c_2 (\text{MARGIN}) + c_3 (\text{BRSALE}) + c_4 (\text{NBRAND}) + \text{Error} \quad (7A) \] \[ \text{PRICELAS} = d_0 + d_1 (\text{ADEXP}) + d_2 (\text{PENETRATION}) + d_3 (\text{PCYCLE}) + d_4 (\text{PRICE}) + d_5 (\text{NBRAND}) + d_6 (\text{IMPULSE}) + d_7 (\text{STOCK}) + \text{Error} \quad (7B) \] The only difference from the discount size model (6A) is that the number of brands is used as an additional covariate in Equation (7A). Since category-level discount frequency is the number of times brands in a category are discounted, the larger the number of brands, the greater the total number of deals. The equations are jointly estimated using two-stage least squares. The model results are in Table 3. The $R^2$ for the discount frequency model is 0.54. Price elasticity is significantly positively related to discount frequency ($p < .05$). In addition, the number of brands is strongly positively related to discount frequency, as expected. The $R^2$ for the price elasticity model is 0.26, and the results are the same as for Equation (6B): promotional price elasticity is significantly higher in categories with higher advertising expenditures, consistent with the informative role of advertising. **Conclusion** Is advertising positively related to retail price promotion? Are nationally advertised categories also heavily price promoted at the retail level? We provide insights into these questions through theoretical and empirical analysis. Our research is motivated by the continuing controversy in the literature about whether advertising stimulates or suppresses retail price promotions.
It parallels a much older debate about whether advertising provides useful information to consumers (Nelson, 1970, 1974) or creates brand loyalty through brand differentiation (Comanor & Wilson, 1974). To gain insight into these issues, we develop a symmetric duopoly model that analyzes the relationship between advertising, trade promotion, and retail promotion. The analytical model shows that the relationship between advertising and retail price promotion depends on the role of advertising. If advertising differentiates brands and suppresses consumer response to retail price promotion, then the relationship is negative: a higher level of advertising is associated with a smaller price discount and, possibly, less frequent price cuts. But if advertising is informative enough to increase consumer response to retail promotions, then the relationship is positive: a higher level of advertising is associated with a larger price discount and, possibly, more frequent price cuts. A follow-up empirical analysis shows a strong positive relationship between category advertising expenditure and the size of the retail price discount, and between advertising and discount frequency. These relationships are partly due to higher advertising being associated with higher promotional price elasticity. Thus, our findings support the informative role of advertising in the context of retail price promotions. It is also interesting to note that the informative theory appears to be the "majority" view in the literature. Of the 18 studies listed in Table 1 of Kaul and Wittink (1995), 9 support the informative theory, 7 support the market power theory, and 2 support both. Of the 11 studies listed in Table 1 of Shankar and Krishnamurthi (1996), 5 support the informative theory, 3 support the market power theory, and 3 support both.
The implication for retailers is that they should, in general, be more willing to pass through the trade deals offered by manufacturers and increase their frequency and depth of promotion for brands in highly advertised categories. If competition is predominantly across brands within a store, then, because advertising plays the informational role and increases price competition, manufacturers in highly advertised categories may need to offer greater trade deals in equilibrium. Several limitations in our paper provide avenues for future research. On the analytical side, we have provided a parsimonious model for understanding the equilibrium relationship between advertising and retail promotion. The model can be extended in a number of ways to gain further insights. For example, we can relax the assumption of symmetry across competing brands, include store competition, and study situations with more than two brands. Incorporating interstore competition is a particularly useful topic for future research. (Steiner 1973, Steiner 1993) argues that advertising may increase the salience of brands, and consumers will be attracted to stores that offer lower prices on these advertised brands. Therefore, stores compete on the basis of price and may promote advertised brands heavily, even if the manufacturers do not offer adequate trade deals. An extension of this logic is the notion of loss-leader pricing: popular brands are offered by retailers at lower prices and used as loss leaders to build store traffic. Though past research indicates that brand switching within a store accounts for the bulk of promotional sales, future analytical and empirical research can study the influence of store competition on the relationship among manufacturer advertising, trade promotion, and retail price promotion. In our empirical model, we attempted to account for and eliminate the potential loss-leader effect in the following way. 
It is reasonable to expect that price promotions intended to attract shoppers from other stores would be feature advertised. The Infoscan data set identified, for each category, the proportion of total price cuts that were featured and the average price cut during the featured periods. (About 20% of the price cuts are feature advertised.) From these data, we were able to calculate the discount size and discount frequency of unadvertised price cuts. If store competition were the dominant reason for the observed positive relationship between advertising and price discount, then the positive relationship would not be observed once we exclude featured price cuts and consider only unadvertised price cuts. The correlation between manufacturer advertising and retailers’ (unadvertised) discount size is 0.24, which is significant \((p < .05)\), though slightly lower than the 0.31 observed with all price cuts. The correlation between manufacturer advertising and retailers’ (unadvertised) discount frequency is 0.57, which is also lower than the 0.69 observed with all price cuts, but statistically significant \((p < .05)\). In summary, by eliminating featured price cuts, we partially account for and eliminate the store competition effect. Even after this adjustment, the relationship between advertising and discount size/discount frequency is positive, though the strength of the relationship is lower. On the empirical side, our analysis is at the category level and based on data from one year. Cross-category studies may be associated with potential endogeneity problems. We have attempted to address the endogeneity problem in several ways, as described earlier. Nevertheless, some endogeneity problems may still remain unaddressed. 
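The adjustment described above — recovering unadvertised discount measures from the reported aggregates — amounts to a weighted-average decomposition. A minimal sketch (the numbers are illustrative, not Infoscan values, and the formulas are one plausible reading of the calculation):

```python
# Decompose category-level discounts into featured and unadvertised parts.
# All numbers below are illustrative, not from the Infoscan data.

def unadvertised_discount(total_cut, featured_share, featured_cut):
    """Average unadvertised price cut, assuming the overall average cut is a
    weighted average of featured and unadvertised cuts:
        total = share * featured + (1 - share) * unadvertised
    """
    return (total_cut - featured_share * featured_cut) / (1.0 - featured_share)

def unadvertised_frequency(total_freq, featured_share):
    """Number of unadvertised deals, if featured_share of all deals are featured."""
    return total_freq * (1.0 - featured_share)

# Example: 15% average cut overall, with 20% of cuts featured at an average 25% depth.
print(unadvertised_discount(0.15, 0.20, 0.25))   # ~ 0.125
print(unadvertised_frequency(50, 0.20))          # -> 40.0
```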
Future researchers can test the robustness of the positive relationship between advertising and retail price promotion by analyzing brand-level data and by investigating how changes in advertising within a brand increase its promotional price sensitivity, deal depth, and deal frequency. We recognize that the data we use for empirical testing are somewhat dated, though we do not believe the relationship we explore is time-dependent. We were unable to obtain a more recent Infoscan Report on Trade Promotions, or a similar data set that provides information on the key variables of interest – discount depth, discount frequency, and promotional price elasticity at the category level measured in the same year for the same (national) market. Testing the relationships between advertising and retail promotion with a more recent data set would be a useful avenue for future research. Finally, we are not able to test the results on trade deals (Lemma 2) due to lack of data. Empirical analysis of the relationship between manufacturer advertising and trade deals can provide useful insights into the manufacturer’s advertising-price promotion tradeoff. Notes 1. The demand function used by (Raju, Sethuraman & Dhar 1995) can be written as \(q_i = 1 - p_i + \theta (p_j - p_i)\). Substituting \(p_i = p_r - d_i\) and \(p_j = p_r - d_j\), and noting that \(p_r\) and \(\theta\) are constant, we can rewrite the demand function as \(q_i = q_0 + d_i + \theta (d_i - d_j)\), where \(q_0 = 1 - p_r\). 2. The logic behind this assumption is that managers will take action if the expected profits exceed some threshold value (to cover investment costs or expected returns). The larger the profits, the more likely it is that the realized profit will exceed the threshold value. 3. It can also be shown that these results hold even if only one of the manufacturers promotes in a given period instead of both manufacturers promoting in the same period. 4. Data sets are combined based on product nomenclature. 
In several cases, one-to-one matches were obtained. In some cases, where there was no clear match, we used our judgment in matching the categories by inspecting the brand names. Where there was some uncertainty about the match, those observations were deleted. Acknowledgement The authors thank Roger Kerin and two anonymous reviewers for their comments on an earlier version of this manuscript, and, especially, Louis Bucklin for his valuable guidance throughout the review process. References Agrawal, D. (1996). Effect of brand loyalty on advertising and trade promotions: a game theoretic analysis with empirical evidence. *Marketing Science, 15* (1), 86–108. Bell, D. R., Chiang, C. & Padmanabhan, V. (1999). The decomposition of promotional responses: an empirical generalization. *Marketing Science, 18* (4), 504–526. Benham, L. (1972). The effect of advertising on the price of eyeglasses. *Journal of Law and Economics, 15* (October), 337–352. Blattberg, R. C. & Neslin, S. A. (1990). *Sales Promotion: Concepts, Methods, and Strategies*. Englewood Cliffs, NJ: Prentice Hall. Cady, J. (1976). Advertising restrictions and retail prices. *Journal of Advertising Research, 16* (October), 27–30. Choi, C. (1996). Price competition in a duopoly common retailer channel. *Journal of Retailing, 72* (2), 117–134. Comanor, W. S. & Wilson, T. A. (1974). *Advertising and Market Power*. Cambridge, MA: Harvard University Press. Coughlan, A. (1985). Competition and cooperation in marketing channel choice: theory and application. *Marketing Science, 4* (Spring), 110–129. Farris, P. & Albion, M. (1980). The impact of advertising on the price of consumer products. *Journal of Marketing, 44* (Summer), 17–35. Gupta, S. (1988). Impact of sales promotions on when, what, and how much to buy. *Journal of Marketing Research, 25* (November), 342–355. Hoch, S. & Banerji, S. (1993). When do private labels succeed? *Sloan Management Review*, (Summer), 57–67. Hoyt, C. W. (1997). You cheated, you lied. 
*PROMO Magazine*, July 1997. Information Resources, Inc. (2001). Sales data summary, available online. Jones, J. P. (1995). Single-source research begins to fulfill its promise. *Journal of Advertising Research, 35* (3) (May/June), 9–16. Kaul, A. & Wittink, D. R. (1995). Empirical generalizations on the impact of advertising on price sensitivity and price. *Marketing Science, 14* (3), G151–G160. Kwoka, J. E. (1984). Advertising and the price and quality of optometric services. *American Economic Review, 74* (1), 211–216. McGuire, T. & Staelin, R. (1983). An industry equilibrium analysis of downstream vertical integration. *Marketing Science, 2* (Spring), 161–192. Mela, C. F., Gupta, S. & Lehmann, D. R. (1997). The long-term impact of promotions and advertising on consumer brand choice. *Journal of Marketing Research, 34* (May), 248–261. Narasimhan, C., Neslin, S. & Sen, S. (1996). Promotional elasticities and category characteristics. *Journal of Marketing, 60* (April), 17–30. Nelson, P. (1970). Information and consumer behavior. *Journal of Political Economy, 78* (March/April), 222–239. Nelson, P. (1974). Advertising and information. *Journal of Political Economy, 82* (July/August), 729–754. Neslin, S. A., Powell, S. G. & Stone, L. S. (1995). The effects of retailer and consumer response on optimal manufacturer advertising and trade promotion strategies. *Management Science, 41* (May), 749–766. PROMO News (1998). Trade spending up. *PROMO Magazine*, January 12, 1998. Raju, J. S., Sethuraman, R. & Dhar, S. K. (1995). The introduction and performance of store brands. *Management Science, 41* (June), 957–978. Scott, H. (1992). Trade promotion $ share dips in 1992. *Advertising Age*, April 5, 3. Sethuraman, R. (1992). *Understanding cross-category differences in private label shares of grocery products*. Working Paper, Report No. 92–128. Cambridge, MA: Marketing Science Institute. Sethuraman, R. & Tellis, G. (1991). 
An analysis of the tradeoff between advertising and price discounting. *Journal of Marketing Research, 28* (May), 160–174. Shankar, V. & Krishnamurthi, L. (1996). Relating price sensitivity to retailer promotional variables and pricing policy. *Journal of Retailing, 72* (3), 249–273. Shankar, V. & Tellis, R. (1999). *Dimensions of retailer pricing strategy and tactics*. Working Paper, Report No. 99–101. Cambridge, MA: Marketing Science Institute. Shubik, M. & Levitan, R. (1980). *Market Structure and Behavior*. Cambridge, MA: Harvard University Press. Steiner, R. L. (1973). Does advertising lower consumer prices? *Journal of Marketing, 37* (October), 19–26. Steiner, R. L. (1993). The inverse association between the margins of manufacturers and retailers. *Review of Industrial Organization, 8* (3), 717–740.
Mixed Duopoly in Education with Vouchers* Franceska Tomori† Advisors: Joan Calzada‡ and Ester Manna§ July 17, 2017 Document de treball n. 5 - 2018. Departament d’Economia / CREIP, Universitat Rovira i Virgili, Facultat d’Economia i Empresa, Reus. ISSN (print edition): 1576-3382; ISSN (electronic edition): 1988-0820. Abstract In a mixed duopoly environment, I study the conditions under which the introduction of a voucher system for private schools can increase competition and, as a result, social welfare. My model considers a market in which schools compete in qualities to attract students. Specifically, I consider two settings: one with two private profit-maximizing schools and one with a mixed duopoly, in which one of the schools maximizes social welfare. In both settings, the quality level offered by the schools plays a crucial role in the students’ enrollment decision. I find that in both the private and the mixed duopoly, the voucher reduces the tuition fee and the quality of the high-quality school. It also increases that school’s profits and decreases those of its competitor. Thus, the voucher reduces the incentives of the high-quality school to invest in its quality, and this weakens competition in the market. In the mixed duopoly scenario, in particular, the social planner needs to implement a low voucher for consumer surplus and social welfare to increase; the contrary holds in the private duopoly. 
Finally, a low-voucher policy can be successful, as a high voucher is costly. Keywords: Mixed Duopoly, Voucher Programs, Educational System, Vertical and Horizontal Differentiation. JEL classifications: D21, H52, I2, L13. *Acknowledgements: I would first and foremost like to thank my thesis advisors: Joan Calzada and Ester Manna. The door of Prof. Joan’s and Prof. Ester’s office was always open whenever I ran into a trouble spot or had questions about my research. I have been extremely lucky to have such advisors, who cared so much about my work and who responded to my questions and queries so promptly. They consistently allowed this paper to be my own work, but steered me in the right direction whenever they thought I needed it. I would like to thank my friends and relatives for accepting nothing less than excellence from me. Completing this work would have been all the more difficult were it not for the support and friendship provided by the staff of the UB School of Economics, especially by Jordi Roca. I am indebted to them for their help. Finally, I must express my very profound gratitude to my parents, my brother and my partner, Ditmar, for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. †Universitat de Barcelona. ‡Universitat de Barcelona. §Universitat de Barcelona. 1 Introduction In the last decades, voucher programs that allow students to attend private schools have become increasingly common (see Epple et al., 2015, and Nechyba, 2000). Indeed, vouchers are already implemented in many countries, such as the United States, Colombia, Chile, New Zealand, Denmark, Sweden, the Netherlands, India and Pakistan. 
The introduction of vouchers in the educational system and its impact on the quality provided by public and private schools have been widely discussed. One argument in favor of voucher programs is that public schools will improve their quality because they must compete fiercely to attract students. However, it is also argued that voucher programs in private schools may have a negative impact on the quality provided by public schools. This is because they may draw away the most promising students from the public schools, leading to a reduction in the public schools’ incentives to increase quality. The impact of voucher programs is therefore controversial, and it is important to provide a comprehensive framework able to convey useful insights regarding the vouchers’ role in a mixed duopoly environment. To achieve this objective, I develop a model where schools compete in terms of quality and tuition fees. This model builds on the vertical product differentiation paper of Cremer et al. (1997), which focuses on service quality and competition in the postal sector. I adapt this model to consider a market in which schools compete in qualities to attract students. Specifically, I consider two settings: one with two private profit-maximizing schools and one with a mixed duopoly, in which one of the schools maximizes social welfare (the public school). In both settings, the quality level offered by the schools plays a crucial role in the students’ enrollment decision. In this context, my analysis focuses on the impact of voucher programs on quality levels, tuition fees and the enrollment of both private and public schools. Quality and tuition fees are not the only variables that students consider when they decide which school to attend. Schools offer different educational services, and this is especially true if one considers the educational services provided by public and private schools. 
For example, in several countries, most private schools have a strong religious connotation, while public schools are typically secular. I consider that each school offers a service of different quality, and students are heterogeneous regarding their willingness to pay. If all students had the same willingness to pay, all of them would go to the private high-quality school. But since this is not the case, students with a lower willingness to pay enroll in the public low-quality school. This article develops a model that studies the interaction between public and private schools. The private school maximizes its own profits, while the public school maximizes social welfare, which is a weighted function of its profits and the utilities of the other participants in the market.\footnote{A mixed oligopoly is defined as a market in which two or more firms with different objective functions co-exist (see Fraja and Delbono, 1990, and Nett, 1993, for a survey).} The private school offers a higher quality to students who choose to go there and, consequently, charges a higher tuition fee than the public school. Regarding the development of the model, I first analyze the education market without the voucher. Then, once it is introduced, I show how the equilibrium changes in both settings: when both schools maximize profits and in a mixed duopoly setting. The voucher is introduced in the model as a reduction in the tuition fee of the private school, the amount of which is decided by the government. I am interested in its impact on the model’s variables, such as the schools’ qualities, tuition fees and profits, as well as consumer surplus and social welfare. The reason I apply the voucher policy in my model is to see whether it proves successful. The main criterion for success is satisfying the students with a low willingness to pay, who cannot afford an expensive private school with a high tuition fee. 
The results of the model show that, in the context of a private duopoly, the introduction of the voucher to reduce the tuition fee of the high-quality school reduces its quality. Interestingly enough, the voucher also reduces the quality of the low-quality school by the same amount. This suggests that the voucher reduces the incentives of the high-quality school to invest in its quality; this weakens competition in the market and also leads the other school to reduce its quality. On the other hand, the voucher increases enrollment in the high-quality school, which increases its profits and decreases the profits of the low-quality school. Taking this into account, I analyze which voucher policy can increase consumer surplus. I show that the social planner faces a trade-off when deciding the value of the voucher. On the one hand, the voucher increases the number of students who enroll in the high-quality school, which has a positive impact on consumer surplus. On the other hand, the voucher leads to a reduction in the quality of both schools. As a consequence, students in the low-quality school are necessarily made worse off. The optimal voucher policy takes this trade-off into account and depends on the marginal cost of quality. In the mixed duopoly context, however, the results differ from the previous case. The qualities of both schools decline with the implementation of a low-valued voucher. The high-quality private school decreases its own quality because investing in it is expensive. The low-quality public school then decreases its quality because of the weaker competition, as in the previous case. Consequently, by implementing a low voucher, the tuition fee of the high-quality school decreases. On the other side, the tuition fee of the low-quality school stays constant, and only increases for higher values of the voucher. 
As investing in quality is expensive, schools compete more intensely through their tuition fees, so competition in this education market becomes strong. Moreover, the profits of the private school increase when the voucher is low, while the profits of the public school decrease; this result is driven by the large number of students who enroll in the private school. Consumer surplus decreases with the voucher, whereas social welfare increases due to the competition between schools, which satisfies both students and the other agents. The social planner may therefore find this a good policy: by keeping the voucher very low, he can maximize consumer surplus and social welfare, and all agents in the education market may end up content with this policy. Since the main results of the paper depend on the individuals’ preferences for school quality and on the government’s implementation decision, the analysis calls for additional empirical studies. Indeed, there is some empirical evidence on the impact of vouchers on the outcomes of poor children. It has been shown that vouchers raise the competition between schools and increase their qualities and poor students’ test scores (see Neilson, 2013, for more details). Taking students’ preferences into account, Gazmuri (2015) shows that after the SES\(^2\) reform in Chile was applied to private schools, low-SES students cared more about a school’s quality. This result explains why students migrate from public to private schools, thereby affecting the competition between schools. These results are, however, in contradiction with my findings. I find that the voucher does not raise competition between schools by much in my model. Moreover, it decreases the qualities of both schools in the two education markets. 
Similarly to Gazmuri (2015), the students in my model care a lot about the school’s quality. This is the reason why students choose to go to the private high-quality school when the voucher is implemented. The paper is organized as follows. In the next section, the related literature is reviewed; in Section 3 the model setup is presented; in Section 4 the optimal allocation and the equilibrium are characterized when both schools are private; Section 5 introduces the mixed equilibrium, when there is competition between a public and a private school; in Section 6 the voucher is introduced, applied to both previous settings, and its impact is studied; concluding remarks and a discussion of the results are given in Section 7. \(^2\)Subvención Escolar Preferencial, a subsidy reform for poor families. 2 Related Literature This article is related to the literature on mixed duopoly and to the one on vouchers in education. First, when the analysis focuses on education, it is important to take into account that in many countries educational services are provided by both public and private institutions. Moreover, voucher programs have been increasingly used over the last decades in many different countries. **Literature on mixed oligopoly.** Since in numerous markets for-profit firms compete with cooperative firms and non-profit organizations, there is a growing literature on mixed oligopoly (see for example Casadesus-Masanell and Ghemawat, 2006, and Marini et al., 2015). This literature focuses on the competition among firms with different objective functions (see Cremer et al., 1991, Grilo, 1994, and Delbono et al., 1996).\(^3\) In this literature, some articles study the issue of competition in education markets when education providers can be public and private (see Cellini and Goldin, 2014, Deming et al., 2012, and Cremer and Maldonado, 2013). More specifically, Cellini and Goldin (2014) and Deming et al. 
(2012) empirically show that US education markets are effectively mixed, while Cremer and Maldonado (2013) study a mixed duopoly model in which the quality of education depends on “peer group” effects. Ishibashi and Kaneko (2008) use a Hotelling model to argue that in a duopoly, the public firm provides lower quality than the private firm. Based on this claim, I assume that the quality of the private school is higher than that of the public school. Thus, in my model the education system is treated as a real market in which schools compete. On the other hand, Fraja and Delbono (1990) and Delbono et al. (1996) use a duopoly model to show the existence of an equilibrium in qualities, in which the public firm chooses the lower quality while the private ones choose the higher quality. Following these authors’ assumptions, I let the public school have the lower quality. Matsumura and Matsushima (2003) use the sequential choice of location in a mixed duopoly, where a welfare-maximizing public firm stands as a competitor of a profit-maximizing private firm. Additionally, they introduce the effect of price regulation. Related to this, I introduce the voucher as a reduction of the private school’s price and I study its impact on quality, profits, consumer surplus and social welfare. However, the literature on mixed duopoly in education is scarce. \(^3\)Cremer et al. (1991) study price competition in a market represented by a Hotelling (1929) line in which private and public firms choose locations first and then prices. Grilo (1994) analyzes a mixed competition model in which products are vertically differentiated and firms non-cooperatively choose qualities first and then prices. Finally, Delbono et al. (1996), using a model similar to Grilo (1994), introduce the possibility that the market might be uncovered. 
Romero and Rey (2004) examine the education market as a mixed duopoly through a sequential-choice analysis and competition over optimal quality, prices and exams. Similarly to that paper, I study a mixed duopoly in education with a sequential choice in two scenarios: first, when both the public and the private school maximize their profits; second, when the public school maximizes social welfare and the private one maximizes its profits. **Literature on vouchers.** This article is also related to the literature on vouchers in the education system as a public policy. My paper is particularly related to Epple and Romano (1998). The authors deal with private and public school competition when students vary in ability and household income. Moreover, school quality increases in the peers’ ability. Their model introduces a universal voucher, where an increasing voucher amount makes the average peer quality in the public schools decline. This happens because private entrants ‘cream-skim’ the higher-income and higher-ability students from public schools. In contrast to that paper, the students in my model vary in their preferences for schools and their willingness to pay. Furthermore, I use a universal voucher for those students who choose to attend a private school. In my model, though, the quality of the public school increases after the voucher is applied to the private school’s price. Epple and Romano (2008) investigate the implications of voucher design for cream skimming. They show that a voucher constrained by ability and tuition preserves the benefits from competition. When cream skimming is eliminated, benefits are uniform across the distribution of student income and ability types. Hence, I use a voucher constrained by the students’ preferences for schools. Once the voucher is applied, there are benefits for both schools and society. Nechyba (1999, 2000) considers the effects of voucher programs in multi-district local economies. 
Tuition varies across private schools and, unlike in Epple and Romano (1998, 2008), price discrimination is not allowed. More specifically, these papers study three voucher programs: (1) a general voucher applicable to any child in a private school, (2) a voucher targeted only to low-income households, and (3) a voucher targeted to poor districts. The results change depending on the voucher program considered. Manski (1992) was a pioneer in using a theoretical and computational model to capture the features of the education environment. Students choose between public and private schools, differing by household income and motivation, with demand for education quality and a positive peer effect. The public sector is rent-seeking, while the private one earns zero profits, setting tuition to maximize enrollment; the voucher is spent on educational inputs. Even in the best case, a choice system would not equalize educational opportunity across income groups. In my model, the contrary occurs. Specifically, the introduction of the voucher allows students to attend private schools even if their willingness to pay is low. Other articles try to better explain how different types of vouchers work in the education system (see Epple et al., 2015, and Brunner and Imazeki, 2008).\footnote{These articles discuss Tiebout choice for universal vouchers.} Neal (2002) gives a theoretical discussion of how vouchers can change the education market, giving examples from the U.S. This is also the aim of my paper: to assess the impact of vouchers on the education system. Cellini and Goldin (2014) provide evidence that federal student aid raises tuition fees in the case of for-profit colleges. In my model, by contrast, the voucher decreases tuition fees while at the same time decreasing school quality. These two effects offset each other, and social welfare increases. 
This is the case in the mixed duopoly education market, where all agents remain satisfied at the end. 3 Model Setup I use a Mussa and Rosen (1978) vertical differentiation model similarly to Cremer et al. (1997), extending and adapting it to the education sector. I consider a duopoly model with a private school ($i = 1$) and a public school ($i = 2$). The private school maximizes its profits, while the public school may care not only about its own profits, but also about social welfare. Social welfare is measured by total surplus, which is equal to the sum of producer and consumer surplus. Consumer surplus represents the utility that students obtain by choosing the school they prefer. Schools decide the tuition fees and qualities they offer. As in Cremer et al. (1997), without loss of generality, I assume that the private school has a higher quality and a higher tuition fee than the public school, i.e. $x_1 > x_2$ and $p_1 > p_2$. Students make their choice depending on their willingness to pay for schools and their preferences for the quality of their school. The parameter $c$ represents the cost of quality that schools must pay when they increase their quality. Increasing quality is costly; however, schools need to invest in order to satisfy the students’ preferences and, at the same time, increase their own profits. There is a continuum of students, uniformly distributed, whose types are identified by $\theta \in [\underline{\theta}, \overline{\theta}]$, where $\theta$ represents the marginal willingness to pay for quality. The quality of a school $i$, $x_i > 0$, reflects the preferences of individuals and their utility when choosing between public and private schools. The surplus of a student of type $\theta$ who attends a school with quality $x_i$ and tuition fee $p_i$ is given by $\theta x_i - p_i$. 
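Given this surplus specification, the school-choice rule can be illustrated with a small numeric sketch (all parameter values below are invented for illustration; the indifferent-type formula follows from equating the two surpluses):

```python
# Minimal numeric sketch of the school-choice rule implied by the surplus
# theta*x_i - p_i. All parameter values are illustrative assumptions.

def surplus(theta, x, p):
    """Utility of a type-theta student at a school with quality x and fee p."""
    return theta * x - p

def chooses_private(theta, x1, p1, x2, p2):
    """A student picks the private school iff it yields higher surplus."""
    return surplus(theta, x1, p1) > surplus(theta, x2, p2)

# With x1 > x2 and p1 > p2, the indifferent type is (p1 - p2)/(x1 - x2):
x1, p1 = 2.0, 3.0   # private: high quality, high fee
x2, p2 = 1.0, 1.0   # public: low quality, low fee
theta_hat = (p1 - p2) / (x1 - x2)   # = 2.0

print(chooses_private(theta_hat + 0.1, x1, p1, x2, p2))  # True: high types go private
print(chooses_private(theta_hat - 0.1, x1, p1, x2, p2))  # False: low types go public
```

Students thus sort themselves by type around the threshold, which is the self-selection used throughout the analysis.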
All students have the same total demand, which is perfectly inelastic and normalized to one.\footnote{Demand is inelastic when people buy the same amount whether the price increases or decreases. In this case, it means that if the tuition fee drops, the quantity of education demanded by students will not change.} The timing of the game is as follows. In Stage 0, the voucher is selected by the government; in Stage 1, both schools decide their qualities; in Stage 2, they choose the tuition fees that will apply; finally, in Stage 3, students decide which school to attend. Since there is perfect information, I solve the model by backward induction. The following assumption guarantees an interior solution. **Assumption 1.** $\overline{\theta} > \underline{\theta} > \frac{\overline{\theta}}{3}$. In this paper, I consider two specific scenarios: first, I analyze the case in which both schools only care about their own profits (private duopoly); second, I consider the case where only the private school is a profit-maximizer, while the public school also takes into account the students’ utility and the private school’s profits (mixed duopoly). In general terms, the public school maximizes the following: $$W = \alpha(\pi_1 + CS) + \pi_2, \quad (1)$$ where $\alpha$ represents the weight given to the utility of the other participants in the market. In the first scenario, the public school is a profit-maximizer and $\alpha$ takes the value of 0. In contrast, in the other extreme scenario, where the public school maximizes social welfare, $\alpha$ is equal to 1. In these two alternative settings, I will study whether the government finds it beneficial for society to introduce a voucher that reduces the tuition fees students pay in the private school. If this is the case, I will study how the voucher impacts the quality provided by both schools, their profits, and social welfare in the two settings. 
Finally, I will discuss the policy implications of this analysis in the conclusions. All the mathematical computations and proofs of the results are in the appendix. ## 4 Private duopoly In this section, I consider the case in which both schools maximize their profits, i.e. $\alpha = 0$. The student surplus is given by the following expression: \[ CS = \int_{\hat{\theta}}^{\bar{\theta}} (\hat{\theta}x_1 - p_1) d\hat{\theta} + \int_{\underline{\theta}}^{\hat{\theta}} (\hat{\theta}x_2 - p_2) d\hat{\theta}. \] (2) Here, $\hat{\theta} = \frac{p_1 - p_2}{x_1 - x_2}$ represents the marginal student. Students with a low willingness to pay ($\theta < \hat{\theta}$) go to the cheaper, lower-quality school, i.e. the public school. In contrast, students with a high willingness to pay ($\theta > \hat{\theta}$) choose the higher-quality, more expensive school, i.e. the private school. Given prices and qualities, each student chooses the school that maximizes his own utility; hence, a student chooses the private, high-quality school only if $\theta x_1 - p_1 > \theta x_2 - p_2$. Let us determine the efficient index of the marginal consumer given the qualities: maximizing total surplus (net of quality costs) with respect to $\hat{\theta}$ yields \[ \hat{\theta} = \frac{c(x_1 + x_2)}{2}. \] (3) Proposition 1 illustrates the results under private duopoly. **Proposition 1.** The optimal quality levels of the two profit-maximizing private schools are unique and equal to: \[ x_1^\circ = \frac{5\bar{\theta} - \underline{\theta}}{4c}. \] (4) \[ x_2^\circ = \frac{5\underline{\theta} - \bar{\theta}}{4c}. \] (5) The equilibrium prices are: \[ p_1^\circ = \frac{25\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 49\bar{\theta}^2}{32c}. \] (6) \[ p_2^\circ = \frac{49\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 25\bar{\theta}^2}{32c}.
\] (7) After substituting the equilibrium prices and qualities into each profit function, the equilibrium profits are: \[ \pi_1^\circ = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c}. \] (8) \[ \pi_2^\circ = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c}. \] (9) Note that both schools obtain the same profits in equilibrium. The marginal consumer in equilibrium is: \[ \hat{\theta}^\circ = \frac{\underline{\theta}}{2} + \frac{\bar{\theta}}{2}. \] (10) Comparing the quality levels, it follows that \(x_1^\circ > x_2^\circ\) whenever Assumption 1 is satisfied. Moreover, the tuition fee of the private school is higher than the tuition fee paid to attend the public school, i.e. \(p_1^\circ > p_2^\circ\). Finally, both schools obtain the same profits. ## 5 Mixed duopoly Here, I consider a mixed duopoly environment in which a public and a private school compete in terms of quality and tuition fees. The private school continues to maximize profits, while the public school cares only about social welfare (the case in which \(\alpha = 1\) in (1)). The game is solved by backward induction, as in the previous section. The following proposition illustrates the results under mixed duopoly: **Proposition 2.** The equilibrium qualities for a mixed duopoly market, in which the private school maximizes profits and the public one maximizes social welfare, are unique and given by: \[ x_1^m = \frac{\underline{\theta} + 3\bar{\theta}}{4c}. \] (11) \[ x_2^m = \frac{3\underline{\theta} + \bar{\theta}}{4c}. \] (12) The equilibrium prices are: \[ p_1^m = \frac{9\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 17\bar{\theta}^2}{32c}. \] (13) \[ p_2^m = \frac{17\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 9\bar{\theta}^2}{32c}. \] (14) The indifferent student is the same as in the private duopoly: \[ \hat{\theta}^m = \frac{\underline{\theta}}{2} + \frac{\bar{\theta}}{2}.
\] (15) After substituting these equilibrium prices and qualities into each profit function, the equilibrium profits are: \[ \pi_1^m = \frac{(\bar{\theta} - \underline{\theta})^3}{8c}. \] (16) \[ \pi_2^m = \frac{(\bar{\theta} - \underline{\theta})^3}{8c}. \] (17) Both schools thus have the same profits in equilibrium. Here, the welfare function differs from the private duopoly case and takes a higher value: \[ W^m = \frac{(\bar{\theta} - \underline{\theta})(5\underline{\theta}^2 + 6\underline{\theta}\bar{\theta} + 5\bar{\theta}^2)}{32c}. \] (18) Comparing the quality levels, which depend on $\bar{\theta}$, $\underline{\theta}$ and $c$, it is clear that $x_1^m > x_2^m$ if Assumption 1 is satisfied. The tuition fee paid to attend the private school is again higher than the public school's, i.e. $p_1^m > p_2^m$. Interestingly, profits continue to be the same for the private and the public school, but they are lower here than in the private duopoly market. As in the previous case, irrespective of the value of $\theta$ and of the schools' objective functions, the two schools obtain the same profits.\footnote{A similar result has been obtained by Cremer et al. (1997) using a horizontal (Hotelling-type) differentiation model.} To provide an intuition for these results, it is important to understand the public school's incentives in choosing an efficient quality level. Recall that the high quality chosen by a private school is the result of two opposing effects. On the one hand, the private school wants to move closer to its competitor and cover the largest possible market share. On the other hand, it prefers to choose a quality far from its competitor's in order to reduce the intensity of tuition-fee competition. If the competitor is also private, the second effect tends to dominate and schools over-differentiate in equilibrium.
Moreover, if the competitor is public, price competition has a different nature. The public school is more interested in allocating students efficiently; hence, it behaves less aggressively in undercutting its competitor. The public school also no longer has an incentive to choose a very low quality. This explains why the public school (the low-quality school) finds it optimal to increase its own quality: in this way, it moves closer to the social optimum. In my education markets, I first consider two private schools and then the alternative case of one public and one private school, assuming that the private school has higher quality than the public one, and I observe what happens to the schools' qualities in equilibrium.\footnote{A crucial assumption of the paper is that demand is inelastic. This explains why the public school can always achieve an efficient allocation of students, even when doing so involves setting a "high" tuition fee. If demand were elastic, these results would not hold, as the public school might face quality and quantity distortions; it might then be unable to set a high price while achieving an efficient segmentation of the market.} Finally, I compare the results in the private duopoly with those obtained under mixed duopoly. The quality chosen by the private school in the private duopoly is higher than in the mixed duopoly, while the quality of the public school in the private duopoly is lower than in the mixed duopoly. Hence, the private school has higher quality than the public one in the mixed duopoly market, though lower than the private school's quality in the private duopoly. These results are represented in Figure 1. To obtain them, I use a numerical simulation, assigning values to the parameters: the cost is $c = 0.5$, the lowest willingness to pay is $\underline{\theta} = 1$, and $\bar{\theta} = \underline{\theta} + 1 = 2$.
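These comparisons can be reproduced directly from the closed forms in Propositions 1 and 2. A minimal check at the simulation values just mentioned ($c = 0.5$, $\underline{\theta} = 1$, $\bar{\theta} = 2$):

```python
# Evaluate the closed-form equilibria of Proposition 1 (private duopoly)
# and Proposition 2 (mixed duopoly) at the Figure 1 simulation values.
c, tl, th = 0.5, 1.0, 2.0   # c, theta_lo, theta_hi

# Private duopoly (Proposition 1)
x1_o = (5*th - tl) / (4*c)        # 4.5
x2_o = (5*tl - th) / (4*c)        # 1.5
pi_o = -3*(tl - th)**3 / (8*c)    # 0.75, identical for both schools

# Mixed duopoly (Proposition 2)
x1_m = (tl + 3*th) / (4*c)        # 3.5
x2_m = (3*tl + th) / (4*c)        # 2.5
pi_m = (th - tl)**3 / (8*c)       # 0.25, identical for both schools

# Marginal student, the same in both regimes
theta_hat = (tl + th) / 2         # 1.5

# Quality ranking discussed in the text: x1_o > x1_m > x2_m > x2_o,
# and profits are lower (but still equal across schools) under mixed duopoly.
```

The numbers confirm the ordering described above: the mixed duopoly pulls the two qualities toward each other, and both schools earn less than under the private duopoly.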
As for the tuition fees, they are lower in the mixed duopoly case for both schools. Furthermore, the public school has the lower tuition fee in both education markets. Considering the profits of each school, they are equal within each market. Thus, I compare the profits of the private school in the private duopoly with those of the private school in the mixed duopoly: the private school's profits are higher in the private duopoly market than in the mixed duopoly one, and the same holds for the public school. However, social welfare and consumer surplus are higher in the mixed duopoly. Hence, in the mixed duopoly education market all agents are satisfied. ## 6 Vouchers In general, a voucher is a government-supplied coupon used to offset tuition at an eligible private school. Programs that distribute such vouchers vary along several dimensions, including who is eligible to receive them, their source of funding, and the criteria for private school participation. In this section, I introduce a voucher \((v)\) as a reduction in the private school's price. Similarly to Epple and Romano (1998) and Nechyba (1999), I use a general voucher applicable to any child who decides to go to a private school once the policy is implemented. The equilibrium is derived in the same way as in the previous sections. I first introduce the voucher in an education market where both schools are profit-maximizers; I then introduce it in a mixed duopoly setting where the public school maximizes social welfare. The aim of this analysis is to study the impact of the voucher on quality, school profits, and social welfare. ### 6.1 Private duopoly with voucher The objective function I consider for the public school in this case is the following: \[ W_r = \alpha(\pi_1 + CS_r) + \pi_2, \quad \text{with} \quad 0 \leq \alpha \leq 1.
\] (19) where \(\pi_1 = (p_1 - \frac{c x_1^2}{2})(\bar{\theta} - \hat{\theta})\) is the profit function of the private school and \(\pi_2 = (p_2 - \frac{c x_2^2}{2})(\hat{\theta} - \underline{\theta})\) is the profit function of the public school. In this subsection, \(\alpha = 0\); hence \(W_r = \pi_2\) and the public school maximizes its profits, behaving like a private school. Differently from the previous cases, the indifferent student, taking the voucher into account, is given by \(\hat{\theta} = \frac{-p_1 + p_2 + \underline{\theta} v}{v - x_1 + x_2}\). When comparing the functions obtained here with those of Cremer et al. (1997), I assume \(\bar{\theta} = \underline{\theta} + 1\); substituting \(v = 0\) recovers the results of the benchmark model without vouchers. I also maintain that the quality of the private school 1 is higher than the quality of the public school 2. The student surplus function is equal to: \[ CS_r = \int_{\hat{\theta}}^{\bar{\theta}} (\hat{\theta}x_1 - p_1 + v(\hat{\theta} - \underline{\theta}))d\hat{\theta} + \int_{\underline{\theta}}^{\hat{\theta}} (\hat{\theta}x_2 - p_2)d\hat{\theta}. \] (20) The equilibrium for the case of \(\alpha = 0\) is obtained by backward induction.\footnote{All the steps and mathematical computations are in the appendix.} The results are provided in Proposition 3. **Proposition 3.** Under Assumption 1, together with the condition \(\bar{\theta} = \underline{\theta} + 1\), for the case of \(\alpha = 0\) (the public school only maximizes profits), the equilibrium quality levels under private duopoly are: \[ x_1^r = \frac{5\bar{\theta} - \underline{\theta}}{4c} - \frac{v}{3 + 4cv}. \] (21) \[ x_2^r = \frac{5\underline{\theta} - \bar{\theta}}{4c} - \frac{v}{3 + 4cv}.
\] (22) The tuition fees students pay to attend the private or the public school are given by: \[ p_1^r = \frac{25\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 49\bar{\theta}^2}{32c} + \frac{-63v - 36\bar{\theta}v + 138cv^2 + 48\bar{\theta}cv^2 - 64c^2v^3}{12(-3 + 4cv)^2}. \] (23) \[ p_2^r = \frac{49\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 25\bar{\theta}^2}{32c} + \frac{-81v - 36\bar{\theta}v + 210cv^2 + 48\bar{\theta}cv^2 - 128c^2v^3}{12(-3 + 4cv)^2}. \] (24) The marginal student, after the voucher is applied to the private school's price, is: \[ \hat{\theta}^r = \frac{\underline{\theta} + \bar{\theta}}{2} + \frac{2cv}{3(-3 + 4cv)}. \] (25) Finally, the schools' profits are: \[ \pi_1^r = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c} + \frac{27v + 24cv^2 - 64c^2v^3}{36(-3 + 4cv)^2}. \] (26) \[ \pi_2^r = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c} + \frac{-189v + 456cv^2 - 256c^2v^3}{36(-3 + 4cv)^2}. \] (27) I use the superscript 'r' to label this equilibrium. The voucher intensifies the competition between the schools in tuition fees and quality. Comparing the equilibrium results, it is easy to show that the quality, the tuition fee, and the profits of the private school are higher than those of the public school: \( x_1^r > x_2^r \), \( p_1^r > p_2^r \), and \( \pi_1^r > \pi_2^r \). Differently from the results of Cremer et al. (1997), the voucher reduces the qualities and the tuition fees of both schools. It also reduces the profits of the public school; the contrary happens with the profits of the private school, which increase with the voucher. Differentiating the consumer surplus function (\( CS \)) with respect to the voucher (\( v \)) and setting the first-order condition to zero, I find the optimal voucher (\( v_{optimal} \)) in this setting. Proposition 4 illustrates this result.
**Proposition 4.** The optimal voucher is equal to: \[ v_{optimal}^r = \frac{66c^4 + (33(-396 + \sqrt[3]{160941}))^\frac{1}{3}c^2(c^6)^\frac{1}{3} - (33(396 + \sqrt[3]{160941}))^\frac{1}{3}(c^6)^\frac{2}{3}}{88c^5}. \] (28) Intuitively, the voucher increases the number of students going to the private school. The optimal voucher decreases with the quality cost (\(c\)): the higher the quality cost, the lower the voucher should be. Moreover, the indifferent student is better off when the voucher increases. Introducing the voucher to reduce the tuition fee of the high-quality school lowers that school's quality; at the same time, the voucher lowers the quality of the low-quality school by the same amount. As a consequence, students in the low-quality school are necessarily worse off. ### 6.2 Mixed duopoly with unconstrained equilibrium As in the previous subsection, the marginal student differs from the simple case because of the voucher. In this section, I consider the case in which \( \alpha = 1 \): the public school maximizes social welfare, which is equal to: \[ W_v = CS_v + \pi_1 + \pi_2. \] (29) The game is again solved by backward induction: Stage 2 determines the price equilibrium for any given pair of qualities, and Stage 1 analyzes the quality choices. The consumer surplus function is the same as in the previous case (\(CS_r = CS_v\)). Thus, I state the following proposition: **Proposition 5.** Under Assumption 1, and with \( \bar{\theta} = \underline{\theta} + 1 \), the quality equilibrium obtained in Stage 1 of the game for the unconstrained mixed duopoly education market with the voucher, in which the private school maximizes profits and the public school maximizes social welfare, is unique and given by: \[ x_1^v = \frac{\underline{\theta} + 3\bar{\theta}}{4c} - \frac{v}{1 + 4cv}.
\] (30) \[ x_2^v = \frac{3\underline{\theta} + \bar{\theta}}{4c} - \frac{v}{1 + 4cv}. \] (31) The prices that students pay in equilibrium, depending on the school they choose, are the following: \[ p_1^v = \frac{9\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 17\bar{\theta}^2}{32c} + \frac{-v - 4\underline{\theta}v - 6cv^2 + 16\underline{\theta}cv^2}{4(-1 + 4cv)^2}. \] (32) \[ p_2^v = \frac{17\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 9\bar{\theta}^2}{32c} + \frac{v - 4\bar{\theta}v - 2cv^2 + 16\bar{\theta}cv^2}{4(-1 + 4cv)^2}. \] (33) The marginal student, after the voucher is applied to the private school's price, is: \[ \hat{\theta}^v = \frac{\underline{\theta} + \bar{\theta}}{2} + \frac{2cv}{-1 + 4cv}. \] (34) Substituting these results into the profit functions yields: \[ \pi_1^v = \frac{(\bar{\theta} - \underline{\theta})^3}{8c} + \frac{3v - 8cv^2}{4(-1 + 4cv)^2}. \] (35) \[ \pi_2^v = \frac{(\bar{\theta} - \underline{\theta})^3}{8c} + \frac{-v}{4(-1 + 4cv)^2}. \] (36) Now, let us see what happens to the social welfare function in equilibrium: \[ W_v = \frac{(\bar{\theta} - \underline{\theta})(5\underline{\theta}^2 + 6\underline{\theta}\bar{\theta} + 5\bar{\theta}^2)}{32c} + \frac{-v}{8(-1 + 4cv)^2}. \] (37) Here, I label the variables with a 'v' to indicate the equilibrium values of the mixed duopoly with the voucher. Comparing the equilibrium results, the quality of the private school is clearly higher than that of the public school: \(x_1^v > x_2^v\). The tuition fee and the profits of the private school are also higher than those of the public one: \(p_1^v > p_2^v\) and \(\pi_1^v > \pi_2^v\). Since part of the private school's tuition fee is covered by the voucher, students effectively pay less for its higher quality.
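As a sanity check on Proposition 5, the profit expressions (35) and (36) can be evaluated numerically: at $v = 0$ they should collapse to the Proposition 2 benchmark, and for a small positive voucher the private school's profit rises while the public school's falls. The sketch below transcribes the formulas as printed and uses the paper's simulation values ($c = 0.5$, $\underline{\theta} = 1$, $\bar{\theta} = 2$):

```python
# Mixed-duopoly profits with a voucher v, expressions (35)-(36),
# evaluated at the paper's simulation values.
c, tl, th = 0.5, 1.0, 2.0   # c, theta_lo, theta_hi

def pi1_v(v):
    """Private school's profit in the mixed duopoly with voucher v."""
    return (th - tl)**3 / (8*c) + (3*v - 8*c*v**2) / (4*(-1 + 4*c*v)**2)

def pi2_v(v):
    """Public school's profit in the mixed duopoly with voucher v."""
    return (th - tl)**3 / (8*c) - v / (4*(-1 + 4*c*v)**2)

# At v = 0 both collapse to the Proposition 2 benchmark of 0.25 each;
# at a small positive v the private profit rises and the public one falls.
benchmark = (pi1_v(0.0), pi2_v(0.0))
small_v = (pi1_v(0.1), pi2_v(0.1))
```

This also confirms the inequality $\pi_1^v > \pi_2^v$ for small vouchers at these parameter values.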
Even though the public school's aim is not to maximize its own profits, it still generates positive profits in equilibrium. Next, the qualities of both schools are lower with the voucher than in the model without it; the public school's profits also fall, while the private school's profits rise for low values of the voucher. The tuition fees decrease with a low voucher due to the competition between schools, and the indifferent student is better off. Nevertheless, social welfare is higher with the voucher. Next, I turn to the optimal voucher, which I find by differentiating the consumer surplus function ($CS$) with respect to the voucher ($v$) and setting the result to zero. The sign of the derivative is negative, meaning that consumer surplus decreases with the voucher; its value turns from positive to negative at low levels of the voucher, and it is constant and positive at high levels. This translates into a higher voucher being required, compared to the private duopoly case, for consumer surplus to be positive. Finally, I compare the equilibrium results across the two education markets. The quality of the private school in the private duopoly case is lower than in the mixed duopoly market (i.e. $x_1^r < x_1^v$). A similar result is obtained for the quality provided by the public school, which is higher in the mixed duopoly market (i.e. $x_2^r < x_2^v$). These results are illustrated in Figure 2, which represents the qualities of the schools in the private and mixed duopoly with the voucher. The results are generated by simulations, assigning values to the parameters: the cost is $c = 0.5$, the lowest willingness to pay is $\underline{\theta} = 1$, $\bar{\theta} = \underline{\theta} + 1$, and the voucher is very low. ![Figure 2: Private and Mixed Duopoly Qualities](image) The tuition fees of both the private and the public school in the private duopoly education market are higher than in the mixed duopoly (i.e.
$p_1^r > p_1^v$ and $p_2^r > p_2^v$). The profits of the private school in the private duopoly case are lower than in the mixed duopoly one (i.e. $\pi_1^r < \pi_1^v$), while for the public school the contrary happens ($\pi_2^r > \pi_2^v$). Furthermore, social welfare is higher in the mixed duopoly education market (i.e. $W_v > W_r$). Finally, consumer surplus is higher in the private duopoly market (i.e. $CS_r > CS_v$); in the mixed duopoly it is positive and constant only when the voucher is high. ### 6.3 Discussion of the voucher impact In this subsection, I provide some intuition and comment on the impact of the voucher on the different variables in both settings considered. I start with the private duopoly. In this case, the quality of both the private and the public school decreases with the voucher, because investing in quality is expensive. It is interesting to note that both schools' tuition fees decrease with the voucher as well. The voucher is introduced as a reduction in the tuition fee of the private school; because of the weak competition among schools, the tuition fee of the public school decreases too. Then, the profits of the public school (which here behaves as a private one), together with social welfare, decrease when the voucher increases. In the simulations, I restrict the voucher to at most 0.8, and for consumer surplus in particular I consider a very high voucher. This makes sense: by lowering both quality and the tuition fee, profits are likely to decrease. In the model, the profits of the public school are equal to social welfare ($W_r = \pi_2$), which is why welfare also decreases when the voucher increases. The contrary happens with consumer surplus, which increases only at high values of the voucher. The social planner has to take this problem into account: its aim is to maximize consumer surplus, but this turns out to be nearly impossible and very expensive here. This may be a reason for not implementing the voucher in such an education market.
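The first-order effects just described can be checked against the profit expressions (26) and (27) of Proposition 3. A small numerical sketch, with the formulas transcribed as printed and the paper's simulation values ($c = 0.5$, $\underline{\theta} = 1$, $\bar{\theta} = 2$):

```python
# Private-duopoly profits with a voucher v, expressions (26)-(27).
c, tl, th = 0.5, 1.0, 2.0   # c, theta_lo, theta_hi

def pi1_r(v):
    """Private school's profit in the private duopoly with voucher v."""
    return (-3*(tl - th)**3 / (8*c)
            + (27*v + 24*c*v**2 - 64*c**2*v**3) / (36*(-3 + 4*c*v)**2))

def pi2_r(v):
    """Public (profit-maximizing) school's profit with voucher v."""
    return (-3*(tl - th)**3 / (8*c)
            + (-189*v + 456*c*v**2 - 256*c**2*v**3) / (36*(-3 + 4*c*v)**2))

# v = 0 recovers the Proposition 1 benchmark of 0.75 for each school; a small
# voucher raises the private school's profit and lowers the public school's
# (and hence welfare, since W_r = pi_2 in this regime).
benchmark = (pi1_r(0.0), pi2_r(0.0))
small_v = (pi1_r(0.1), pi2_r(0.1))
```

The check matches the discussion above: within the restricted voucher range, the private school gains while the public school, and therefore welfare, loses.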
To illustrate the impact of the voucher in this education market, I also present it graphically, which makes the evolution of each variable easier to follow: ![Graph showing the impact of the voucher](image) **Figure 3:** Voucher's Impact in Private Duopoly Now, let us turn to the mixed duopoly. The qualities of the high- and low-quality schools decrease with the voucher, but only for low values of it. In the simulation, I restrict the voucher to at most 0.35, since for higher values some variables make no economic sense. With a low voucher, the tuition fee of the high-quality school decreases, while the tuition fee of the low-quality school stays constant and increases only for higher values of the voucher. As investing in quality is expensive, schools strengthen the competition in tuition fees; in this education market, competition is therefore strong. Moreover, the profits of the private school increase when the voucher is low, while the profits of the public school decrease; the private school gains a lot because of the large number of students enrolling there. Consumer surplus decreases with the voucher, while the contrary happens with social welfare, which increases due to the competition between schools. This makes students happy, and the social planner may find this a good policy: he may intend to maximize both consumer surplus and social welfare by keeping the voucher low. Finally, all agents in the education market may turn out to be content with this policy. To illustrate the impact of the voucher in the mixed duopoly education market, I present the following graphs: ## 7 Concluding Remarks In this paper, I have modeled a mixed duopoly in the education market when the public policy of vouchers is implemented.
The economics literature has stressed how schools benefit from vouchers, but I have shown several issues beyond that. Let me summarize the steps I followed and then the results. If students receive a voucher as a reduction in the private school tuition fee, they will likely go to a private school. This increases the number of students attending the private school even when their willingness to pay is low, and it creates a competitive environment for schools. On the other hand, this policy has its own impact on the quality and tuition fee of both schools. For this reason, I take into account two different scenarios of the education market. First, I consider a private duopoly market where both schools behave like private ones and maximize their profits. Second, there is the mixed duopoly case, in which the private school maximizes profits and the public one maximizes social welfare. The final results are not the same in both cases, so I deal with them separately. The model is followed by a discussion of the voucher policy's impact on the tuition fees, qualities, and profits of each school. Finally, I comment on its effect on consumer surplus and social welfare, and find the optimal voucher. Regarding the private duopoly case, the results seem quite unexpected from the students' point of view: the voucher, implemented as a reduction in the tuition fee of the private school, causes a decrease in that school's own quality. Such a result may disappoint students who expected to receive higher quality from the private school. The public school, competing with the private one, decreases its quality too; note that quality is expensive to invest in. Then, the tuition fees of both schools decrease when the voucher increases. This latter impact may be positive from the students' point of view and negative for the education market side. Students may be satisfied if they pay less.
But for schools the contrary may hold, as they earn less per student. Consequently, the profits of the private school increase within the restricted range of the voucher. The contrary happens with the profits of the public school, which decrease because its quality decreases. Note that $W = \pi_2$ in this case; thus social welfare decreases because of the weak competition between schools. Further, consumer surplus tends to increase only at high values of the voucher, making it expensive for the social planner to implement such a high-voucher policy; if he chooses a low-voucher policy, he may not fulfil his aim of maximizing consumer surplus. Hence, such a policy may not be appropriate in this case. Now, let us turn to the most important case, the mixed duopoly. Here, the results are similar to the previous case. The qualities of both schools decline with the implementation of a low voucher: the high-quality school decreases its quality because investing in it is expensive, and the low-quality school does so because of the competition and for the same reason. Consequently, with a low voucher the tuition fee of the high-quality school decreases, while the tuition fee of the low-quality school stays constant, increasing only for higher values of the voucher. As investing in quality is expensive, schools strengthen the competition by competing in tuition fees; in this education market, competition is strong. Moreover, the profits of the private school increase when the voucher is low, while the profits of the public school decrease; the private school gains a lot because of the large number of students enrolling there. Moreover, consumer surplus decreases with the voucher but social welfare increases, due to the competition between schools.
This satisfies both students and other agents. At the same time, the social planner may find this a good policy: he may intend to maximize both consumer surplus and social welfare by keeping the voucher very low. Finally, all agents in the education market may turn out to be content with this policy. Next, I state several limitations of the model. First, it is ultimately the government of the country that decides whether to implement a voucher policy, and there is always a limit on the value of the voucher. The government may intend to maximize consumer surplus by lowering the voucher's value, so that students attend the private school, where quality is higher; but this may not be possible for all of them, since the private school faces capacity restrictions. Second, in my model I restrict the cost that each school has to bear in implementing the voucher, whereas the government may not do so. Finally, there are several externalities that I do not consider: the shift of more students toward the private school creates an externality, and since the private school invests less in quality, a segregation effect arises in which the public school remains the poorest because of the reduction in its own quality. Overall, it is important to understand the impact of the voucher as a public policy in the education market. For further research, the paper can be extended with different models or social welfare functions. Moreover, to be fully confident in the results, I would suggest complementing them with empirical work that attempts to estimate a value for the optimal voucher. ## A Appendix ### A.1 Private duopoly. Proof of Proposition 1. I start by considering the case in which there are two private schools in the education market. In order to guarantee strictly positive equilibrium qualities for both schools, I use Assumption 1.
The equilibrium of the sequential game is determined by backward induction. A student choosing a school in the private duopoly obtains utility $\theta x_i - p_i$. With two private schools, a student chooses the high-quality one when it gives him more utility: $\theta x_1 - p_1 > \theta x_2 - p_2$. The profit function of the private school is: $$\pi_1 = (p_1 - \frac{cx_1^2}{2})(\bar{\theta} - \frac{p_1 - p_2}{x_1 - x_2}). \tag{38}$$ The public school's profit function is: $$\pi_2 = (p_2 - \frac{cx_2^2}{2})(\frac{p_1 - p_2}{x_1 - x_2} - \underline{\theta}). \tag{39}$$ In this case, both schools maximize profits. In stage 2, differentiating each profit function with respect to its own price yields: $$\frac{\partial \pi_1}{\partial p_1} = 0 \Leftrightarrow \frac{-4p_1 + 2p_2 + 2\bar{\theta}x_1 + cx_1^2 - 2\bar{\theta}x_2}{2(x_1 - x_2)} = 0. \tag{40}$$ $$\frac{\partial \pi_2}{\partial p_2} = 0 \Leftrightarrow \frac{2p_1 - 4p_2 - 2\underline{\theta} x_1 + 2\underline{\theta} x_2 + cx_2^2}{2x_1 - 2x_2} = 0. \tag{41}$$ Solving these equations for $p_1$ and $p_2$ yields, respectively: $$p_1 = \frac{1}{6}[4\bar{\theta}(x_1 - x_2) + 2\underline{\theta}(-x_1 + x_2) + c(2x_1^2 + x_2^2)]. \tag{42}$$ $$p_2 = \frac{1}{6}[2\bar{\theta}(x_1 - x_2) + 4\underline{\theta}(-x_1 + x_2) + c(x_1^2 + 2x_2^2)]. \tag{43}$$ In stage 1, I differentiate each profit function with respect to its own quality. The choice is made sequentially, and solving the game by backward induction yields: $$\frac{\partial \pi_1}{\partial x_1} = 0 \Leftrightarrow \frac{1}{36}[4\underline{\theta}^2 + 16\bar{\theta}^2 - 16\bar{\theta}cx_1 + \underline{\theta}(-16\bar{\theta} + 8cx_1) + c^2(3x_1^2 + 2x_1x_2 - x_2^2)] = 0. \tag{44}$$ \[ \frac{\partial \pi_2}{\partial x_2} = 0 \Leftrightarrow \frac{1}{36}[-16\underline{\theta}^2 - 4\bar{\theta}^2 - 8\bar{\theta}cx_2 + 16\underline{\theta}(\bar{\theta} + cx_2) + c^2(x_1^2 - 2x_1x_2 - 3x_2^2)] = 0.
\tag{45} \] Solving the obtained equations for \(x_1\) and \(x_2\), respectively, gives a final expression for each of them: \[ x_1^\circ = \frac{5\bar{\theta} - \underline{\theta}}{4c}. \tag{46} \] \[ x_2^\circ = \frac{5\underline{\theta} - \bar{\theta}}{4c}. \tag{47} \] As a final step, I substitute these quality expressions into the price and profit functions. The resulting expressions depend only on the cost (\(c\)) and the willingness-to-pay parameters (the \(\theta\)'s), as do the quality functions: \[ p_1^\circ = \frac{25\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 49\bar{\theta}^2}{32c}. \tag{48} \] \[ p_2^\circ = \frac{49\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 25\bar{\theta}^2}{32c}. \tag{49} \] Hence, the equilibrium profits are: \[ \pi_1^\circ = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c}. \tag{50} \] \[ \pi_2^\circ = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c}. \tag{51} \] ### A.2 Mixed duopoly: simple case with unconstrained equilibrium. Proof of Proposition 2. Let us see what happens when there is one private and one public school in the education market. This is the case when \(\alpha = 1\): the private school is a profit maximizer, while the public school aims to maximize social welfare. To guarantee strictly positive equilibrium qualities for both schools, I again use Assumption 1. The equilibrium of the sequential game is determined by backward induction. The utility obtained by a student of type \(\theta\) choosing school \(i\) is \(\theta x_i - p_i\); with two schools (public and private), students choose the school that gives them more utility. Here, I consider the consumer surplus (\(CS\)) and the social welfare function (\(W\)).
Their functions are represented by the following equations: \[ CS_m = \int_{\hat{\theta}}^{\bar{\theta}} (\tilde{\theta}x_1 - p_1)\, d\tilde{\theta} + \int_{\underline{\theta}}^{\hat{\theta}} (\tilde{\theta}x_2 - p_2)\, d\tilde{\theta}, \] (52) \[ W_m = \alpha(CS_m + \pi_1) + \pi_2, \] (53) where $\hat{\theta}$ denotes the marginal (indifferent) student. The profit function of the private school is: \[ \pi_1 = (p_1 - \frac{cx_1^2}{2})(\bar{\theta} - \frac{p_1 - p_2}{x_1 - x_2}). \] (54) Then, the public school’s profit function is: \[ \pi_2 = (p_2 - \frac{cx_2^2}{2})(\frac{p_1 - p_2}{x_1 - x_2} - \underline{\theta}). \] (55) In stage 2, I differentiate the profit function of the private school and the social welfare function of the public school with respect to the respective prices; hence, the first-order conditions for \( p_1 \) and \( p_2 \) are: \[ \frac{\partial \pi_1}{\partial p_1} = 0 \Leftrightarrow \frac{-4p_1 + 2p_2 + 2\bar{\theta}x_1 + cx_1^2 - 2\bar{\theta}x_2}{2(x_1 - x_2)} = 0. \] (56) \[ \frac{\partial W_m}{\partial p_2} = 0 \Leftrightarrow \frac{2p_1 - 2p_2 - cx_1^2 + cx_2^2}{2(x_1 - x_2)} = 0. \] (57) Solving these equations for \( p_1 \) and \( p_2 \) gives: \[ p_1 = \bar{\theta}(x_1 - x_2) + \frac{cx_2^2}{2}. \] (58) \[ p_2 = \bar{\theta}(x_1 - x_2) + c(\frac{-x_1^2}{2} + x_2^2). \] (59) In stage 1, the same steps are applied as in the private duopoly case, but now for the quality functions. As the choice is made sequentially and the game is solved by backward induction, this yields: \[ \frac{\partial \pi_1}{\partial x_1} = 0 \Leftrightarrow \frac{1}{4}[4\bar{\theta}^2 - 8\bar{\theta}cx_1 + c^2(3x_1^2 + 2x_1x_2 - x_2^2)] = 0. \] (60) \[ \frac{\partial W_m}{\partial x_2} = 0 \Leftrightarrow \frac{1}{4}[-4\underline{\theta}^2 + 8\underline{\theta}cx_2 + c^2(x_1^2 - 2x_1x_2 - 3x_2^2)] = 0. \] (61) Solving the obtained equations for \( x_1 \) and \( x_2 \) gives the equilibrium qualities: \[ x_1^m = \frac{\underline{\theta} + 3\bar{\theta}}{4c}. \] (62) \[ x_2^m = \frac{3\underline{\theta} + \bar{\theta}}{4c}. \] (63) As a final step, we substitute these quality expressions into the price, profit, consumer surplus and social welfare functions; the resulting expressions depend only on the cost parameter and the $\theta$'s: \[ p_1^m = \frac{9\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 17\bar{\theta}^2}{32c}. \] (64) \[ p_2^m = \frac{17\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 9\bar{\theta}^2}{32c}. \] (65) Hence, each school’s profit results to be: \[ \pi_1^m = \frac{(\bar{\theta} - \underline{\theta})^3}{8c}. \] (66) \[ \pi_2^m = \frac{(\bar{\theta} - \underline{\theta})^3}{8c}. \] (67) The indifferent student is the same as in the private duopoly: \[ \hat{\theta}^m = \frac{\underline{\theta} + \bar{\theta}}{2}. \] (68) Then, the consumer surplus and social welfare functions are represented by the following expressions: \[ CS^m = \frac{(\underline{\theta} - \bar{\theta})(3\underline{\theta}^2 - 22\underline{\theta}\bar{\theta} + 3\bar{\theta}^2)}{32c}. \] (69) \[ W^m = \frac{(-\underline{\theta} + \bar{\theta})(5\underline{\theta}^2 + 6\underline{\theta}\bar{\theta} + 5\bar{\theta}^2)}{32c}. \] (70) ### A.3 Private duopoly behaviour with binding constraints (Vouchers). Proof of Proposition 3. I start by considering the case in which there are vouchers in the education market. To guarantee strictly positive equilibrium qualities for both schools, Assumption 1 has to be fulfilled. The equilibrium of the sequential game is determined by backward induction. A student choosing a school obtains utility $\theta x_i - p_i$. This is the case when $\alpha = 0$: we have a mixed duopoly in which the public school behaves like a private school, so in effect both schools aim to maximize their own profits. In stage 0, the voucher is applied as a reduction in the price of the private school ($p_1$).
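The closed-form solutions derived so far can be verified mechanically. The following sketch (an illustration added here, not part of the original proofs) re-solves the two-stage private duopoly of A.1 with sympy and checks the quality, price and profit expressions of equations (46)-(51):

```python
import sympy as sp

# theta_l / theta_h stand for underline/bar theta (lowest and highest
# willingness to pay); c is the quality cost parameter.
tl, th, c = sp.symbols('theta_l theta_h c', positive=True)
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')

# Stage 2 (equations (38)-(41)): price competition for given qualities.
theta_hat = (p1 - p2) / (x1 - x2)                 # marginal student
pi1 = (p1 - c*x1**2/2) * (th - theta_hat)
pi2 = (p2 - c*x2**2/2) * (theta_hat - tl)
prices = sp.solve([sp.diff(pi1, p1), sp.diff(pi2, p2)], [p1, p2], dict=True)[0]

# Stage 1: substitute the stage-2 prices and check that the candidate
# qualities (46)-(47) satisfy both quality first-order conditions.
pi1_x, pi2_x = pi1.subs(prices), pi2.subs(prices)
x1_star, x2_star = (5*th - tl)/(4*c), (5*tl - th)/(4*c)
at_eq = {x1: x1_star, x2: x2_star}
assert sp.simplify(sp.diff(pi1_x, x1).subs(at_eq)) == 0
assert sp.simplify(sp.diff(pi2_x, x2).subs(at_eq)) == 0

# Equilibrium prices (48)-(49) and profits (50)-(51).
assert sp.simplify(prices[p1].subs(at_eq) - (25*tl**2 - 58*tl*th + 49*th**2)/(32*c)) == 0
assert sp.simplify(prices[p2].subs(at_eq) - (49*tl**2 - 58*tl*th + 25*th**2)/(32*c)) == 0
assert sp.simplify(pi1_x.subs(at_eq) - 3*(th - tl)**3/(8*c)) == 0
assert sp.simplify(pi2_x.subs(at_eq) - 3*(th - tl)**3/(8*c)) == 0
```

The same pattern verifies the mixed duopoly of A.2 after replacing the public school's objective by the social welfare function $W_m$.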
The social welfare and consumer surplus I consider for this case are: $$W_r = \alpha(\pi_1 + CS_r) + \pi_2, \quad \text{with} \quad 0 \leq \alpha \leq 1.$$ (71) $$CS_r = \int_{\hat{\theta}}^{\bar{\theta}} (\tilde{\theta}x_1 - p_1 + v(\bar{\theta} - \tilde{\theta}))d\tilde{\theta} + \int_{\underline{\theta}}^{\hat{\theta}} (\tilde{\theta}x_2 - p_2)d\tilde{\theta}.$$ (72) Hence, if $\alpha = 0$, we get $W_r = \pi_2$: the public school maximizes its own profits, behaving like a private school. The profit functions of the two schools (private and public) are: $$\pi_1^r = (p_1 - \frac{cx_1^2}{2})(\bar{\theta} - \hat{\theta}).$$ (73) $$\pi_2^r = (p_2 - \frac{cx_2^2}{2})(\hat{\theta} - \underline{\theta}).$$ (74) To derive the equilibrium, I follow the same steps as in the simple cases without a voucher; the only difference is that the equations are now more complex. In stage 2, I differentiate each school's objective with respect to its own price, which gives: $$\frac{\partial \pi_1}{\partial p_1} = 0 \Leftrightarrow -\frac{-4p_1 + 2p_2 + 2\bar{\theta}x_1 + cx_1^2 - 2\bar{\theta}x_2}{2(v - x_1 + x_2)} = 0.$$ (75) $$\frac{\partial W_r}{\partial p_2} = 0 \Leftrightarrow -\frac{2p_1 - 4p_2 - 2\bar{\theta}v + 2\underline{\theta}v - 2\underline{\theta}x_1 + 2\underline{\theta}x_2 + cx_2^2}{2(v - x_1 + x_2)} = 0.$$ (76) Solving these equations for $p_1$ and $p_2$ gives: $$p_1 = \frac{1}{6}[2\underline{\theta}(v - x_1 + x_2) - 2\bar{\theta}(v - 2x_1 + 2x_2) + c(2x_1^2 + x_2^2)].$$ (77) $$p_2 = \frac{1}{6}[4\underline{\theta}(v - x_1 + x_2) - 2\bar{\theta}(2v - x_1 + x_2) + c(x_1^2 + 2x_2^2)].$$ (78) Similarly, the same steps are applied as in the simple mixed duopoly case, but now for the quality functions.
As the choice is made sequentially and the game is solved by backward induction, stage 1 yields: \[ \frac{\partial \pi_1}{\partial x_1} = 0 \Leftrightarrow \frac{[\bar{\theta}(-6v+4x_1-4x_2)+2\underline{\theta}(v-x_1+x_2)+c(4vx_1-3x_1^2+4x_1x_2-x_2^2)][2\underline{\theta}(v-x_1+x_2)-2\bar{\theta}(v-2x_1+2x_2)+c(-x_1^2+x_2^2)]}{36(v-x_1+x_2)^2} = 0. \tag{79} \] \[ \frac{\partial W_r}{\partial x_2} = 0 \Leftrightarrow \frac{[4\underline{\theta}(v-x_1+x_2)-2\bar{\theta}(2v-x_1+x_2)+c(x_1^2-x_2^2)][2\bar{\theta}(-x_1+x_2)-4\underline{\theta}(v-x_1+x_2)+c(x_1^2+4vx_2-4x_1x_2+3x_2^2)]}{36(v-x_1+x_2)^2} = 0. \tag{80} \] Let’s solve the obtained equations for \(x_1\) and \(x_2\) respectively. Using Assumption 1 and the condition \(\bar{\theta} = \underline{\theta} + 1\), we get a final expression for each of them: \[ x_1^r = \frac{5\bar{\theta} - \underline{\theta}}{4c} + \frac{v}{-3 + 4cv}. \tag{81} \] \[ x_2^r = \frac{5\underline{\theta} - \bar{\theta}}{4c} + \frac{v}{-3 + 4cv}. \tag{82} \] Substituting these quality expressions into the price, profit, consumer surplus and social welfare functions, the resulting expressions depend only on the cost parameter, the \(\theta\)'s, and the voucher: \[ p_1^r = \frac{25\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 49\bar{\theta}^2}{32c} + \frac{-63v - 36\underline{\theta}v + 138cv^2 + 48\underline{\theta}cv^2 - 64c^2v^3}{12(-3 + 4cv)^2}. \tag{83} \] \[ p_2^r = \frac{49\underline{\theta}^2 - 58\underline{\theta}\bar{\theta} + 25\bar{\theta}^2}{32c} + \frac{-81v - 36\underline{\theta}v + 210cv^2 + 48\underline{\theta}cv^2 - 128c^2v^3}{12(-3 + 4cv)^2}. \tag{84} \] The profit functions result to be: \[ \pi_1^r = \frac{-3(\underline{\theta} - \bar{\theta})^3}{8c} + \frac{27v + 24cv^2 - 64c^2v^3}{36(-3 + 4cv)^2}, \tag{85} \] \[ \pi_2^r = -\frac{3(\underline{\theta} - \bar{\theta})^3}{8c} + \frac{-189v + 456cv^2 - 256c^2v^3}{36(-3 + 4cv)^2}. \tag{86} \] Hence, the marginal student after the voucher has been implemented in the private school’s price is: \[ \hat{\theta}^r = \frac{\bar{\theta} + \underline{\theta}}{2} + \frac{2cv}{3(-3 + 4cv)}. \] (87) Then, consumer surplus and the social welfare function are: \[ CS_r = \frac{11v}{18} + \frac{5}{32(3c - 4c^2v)} - \frac{3}{4c} + \frac{\underline{\theta} + \underline{\theta}^2}{2c} - \frac{3}{16c(3 - 4cv)^2}. \] (88) \[ W_r = \pi_2^r = -\frac{3(\underline{\theta} - \bar{\theta})^3}{8c} + \frac{-189v + 456cv^2 - 256c^2v^3}{36(-3 + 4cv)^2}. \] (89) ### A.4 Private duopoly (Optimal Voucher). Proof of Proposition 4. To find the optimal voucher, it suffices to take the first-order condition of consumer surplus ($CS_r$) with respect to the voucher ($v$): \[ \frac{\partial CS_r}{\partial v} = 0 \Leftrightarrow \frac{11}{18} + \frac{5}{8(3 - 4cv)^2} + \frac{3}{2(-3 + 4cv)^3} = 0. \] (90) Finally, solving this equation gives the optimal voucher: \[ v_{optimal}^r = \frac{66c^4 + (33(-396 + \sqrt[3]{160941}))^\frac{1}{3}c^2(c^5)^\frac{1}{3} - (33(396 + \sqrt[3]{160941}))^\frac{1}{3}(c^6)^\frac{2}{3}}{88c^5}. \] (91) ### A.5 Mixed duopoly with unconstrained equilibrium (Vouchers). Proof of Proposition 5. Let’s consider the case in which there is one private and one public school in the education market. This is the case when $\alpha = 1$: the private school is a profit maximizer, while the public school aims at maximizing social welfare. To guarantee strictly positive equilibrium qualities for both schools, I use Assumption 1 together with the condition $\bar{\theta} = \underline{\theta} + 1$. The equilibrium of the sequential game is determined by backward induction. A student choosing school $i$ obtains utility $\theta x_i - p_i$; there are two schools (public and private), and each student chooses the school that gives him more utility. Moreover, the voucher is added as a reduction in the price of the private school.
Thus, we expect results that differ from the mixed duopoly case without vouchers. Here, I consider the consumer surplus ($CS_v$) and the social welfare function ($W_v$), represented by the following equations: \[ CS_v = \int_{\hat{\theta}}^{\bar{\theta}} (\tilde{\theta}x_1 - p_1 + v(\bar{\theta} - \tilde{\theta}))d\tilde{\theta} + \int_{\underline{\theta}}^{\hat{\theta}} (\tilde{\theta}x_2 - p_2)d\tilde{\theta}. \] (92) \[ W_v = \alpha(\pi_1 + CS_v) + \pi_2, \quad \text{with} \quad 0 \leq \alpha \leq 1. \] (93) The profit function of the private school is: \[ \pi_1^v = (p_1 - \frac{cx_1^2}{2})(\bar{\theta} - \frac{-p_1 + p_2 + \bar{\theta}v}{v - x_1 + x_2}). \] (94) Then, the public school’s profit function is: \[ \pi_2^v = (p_2 - \frac{cx_2^2}{2})(\frac{-p_1 + p_2 + \bar{\theta}v}{v - x_1 + x_2} - \underline{\theta}). \] (95) In stage 2, I differentiate the private school's profit function and the social welfare function (with $\alpha = 1$) with respect to the respective prices; the first-order conditions are: \[ \frac{\partial \pi_1^v}{\partial p_1} = 0 \Leftrightarrow -\frac{-4p_1 + 2p_2 + 2\bar{\theta}x_1 + cx_1^2 - 2\bar{\theta}x_2}{2(v - x_1 + x_2)} = 0. \] (96) \[ \frac{\partial W_v}{\partial p_2} = 0 \Leftrightarrow -\frac{2p_1 - 2p_2 - cx_1^2 + cx_2^2}{2(v - x_1 + x_2)} = 0. \] (97) Now, in stage 1, the same steps are applied as in the previous cases, but for the quality functions. Since the choice is made sequentially and the game is solved by backward induction, this yields: \[ \frac{\partial \pi_1^v}{\partial x_1} = 0 \Leftrightarrow \] \[ \frac{(x_1 - x_2)(-2\bar{\theta} + c(x_1 + x_2))(2\bar{\theta}(2v - x_1 + x_2) + c(-4vx_1 + 3x_1^2 - 4x_1x_2 + x_2^2))}{4(v - x_1 + x_2)^2} = 0. \]
\[ \frac{\partial W_v}{\partial x_2} = 0 \Leftrightarrow \] \[ \frac{(2\bar{\theta}v - 2\underline{\theta}(v - x_1 + x_2) + c(-x_1^2 + x_2^2))(2\bar{\theta}v + 2\underline{\theta}(v - x_1 + x_2) - c(x_1^2 + 4vx_2 - 4x_1x_2 + 3x_2^2))}{8(v - x_1 + x_2)^2} = 0 \] Let’s solve the obtained equations for $x_1$ and $x_2$ respectively, in order to get a final expression for each of them: \begin{equation} x_1^v = \frac{\underline{\theta} + 3\bar{\theta}}{4c} + \frac{v}{-1 + 4cv}. \end{equation} \begin{equation} x_2^v = \frac{3\underline{\theta} + \bar{\theta}}{4c} + \frac{v}{-1 + 4cv}. \end{equation} As a final step, I substitute each of the above expressions into the price, profit, consumer surplus and social welfare functions. The resulting expressions depend only on the cost parameter, the $\theta$'s, and the voucher: \begin{equation} p_1^v = \frac{9\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 17\bar{\theta}^2}{32c} + \frac{-v - 4\underline{\theta}v + 6cv^2 + 16\underline{\theta}cv^2}{4(-1 + 4cv)^2}. \end{equation} \begin{equation} p_2^v = \frac{17\underline{\theta}^2 - 10\underline{\theta}\bar{\theta} + 9\bar{\theta}^2}{32c} + \frac{v - 4\underline{\theta}v - 2cv^2 + 16\underline{\theta}cv^2}{4(-1 + 4cv)^2}. \end{equation} Then, the marginal consumer is: \begin{equation} \hat{\theta}^v = \frac{\underline{\theta} + \bar{\theta}}{2} + \frac{2cv}{-1 + 4cv}. \end{equation} Each school’s profit results to be: \begin{equation} \pi_1^v = \frac{(-\underline{\theta} + \bar{\theta})^3}{8c} + \frac{3v - 8cv^2}{4(-1 + 4cv)^2}. \end{equation} \begin{equation} \pi_2^v = \frac{(-\underline{\theta} + \bar{\theta})^3}{8c} + \frac{-v}{4(-1 + 4cv)^2}. \end{equation} Then, the consumer surplus and the social welfare function are: \begin{equation} CS_v = \frac{(\underline{\theta} - \bar{\theta})(3\underline{\theta}^2 - 22\underline{\theta}\bar{\theta} + 3\bar{\theta}^2)}{32c} + \frac{3v}{8(-1 + 4cv)}. \end{equation} \begin{equation} W_v = \frac{(-\underline{\theta} + \bar{\theta})(5\underline{\theta}^2 + 6\underline{\theta}\bar{\theta} + 5\bar{\theta}^2)}{32c} + \frac{-v}{8(-1 + 4cv)}. \end{equation} To find the optimal voucher, it suffices to take the first-order condition of consumer surplus ($CS_v$) with respect to the voucher ($v$): \begin{equation} \frac{\partial CS_v}{\partial v} = 0 \Leftrightarrow -\frac{3}{8(1 - 4cv)^2} = 0. \end{equation} This derivative is strictly negative, so the first-order condition has no solution and no interior optimal voucher exists: in the mixed duopoly, consumer surplus decreases monotonically as the voucher increases.
Formalization and Preliminary Evaluation of a Pipeline for Text Extraction from Infographics Falk Böschen\textsuperscript{1} and Ansgar Scherp\textsuperscript{1,2} \textsuperscript{1} Kiel University, Kiel, Germany \textsuperscript{2} ZBW - Leibniz Information Centre for Economics, Kiel, Germany \{fboe,asc\}@informatik.uni-kiel.de Abstract. We propose a pipeline for text extraction from infographics that makes use of a novel combination of data mining and computer vision techniques. The pipeline defines a sequence of steps to identify characters, cluster them into text lines, determine their rotation angle, and apply state-of-the-art OCR to recognize the text. In this paper, we formally define the pipeline and present its current implementation. In addition, we have conducted preliminary evaluations over a data corpus of 121 manually annotated infographics from a broad range of illustration types such as bar charts, pie charts, line charts, maps, and others. We assess the results of our text extraction pipeline by comparing it with two baselines. Finally, we sketch an outline for future work and possibilities for improving the pipeline. Keywords: infographics · OCR · multi-oriented text extraction · formalization 1 Introduction Information graphics (short: \textit{infographics}) are widely used to visualize core information like statistics, survey data or research results of scientific publications in a comprehensible manner. They contain information that is \textit{frequently not present in the surrounding text} [3]. Current (web) retrieval systems do not consider this additional text information encoded in infographics. One reason might be the varying properties of text elements in infographics, which make it difficult to apply automated extraction techniques. First, information graphics contain text elements at various orientations.
Second, text in infographics varies in font, size and emphasis, and it comes in a wide range of colors on varying background colors. Therefore, we propose a novel infographic processing pipeline that makes use of an improved combination of methods from data mining and computer vision to find and recognize text in information graphics. We evaluate our approach on 121 infographics extracted from an open access corpus of scientific publications to demonstrate its effectiveness. It significantly outperforms two baselines based on the open source OCR engine Tesseract\textsuperscript{3}. Subsequently, we discuss the related work in Section 2. Section 3 presents our pipeline for text extraction, and Section 4 specifies the experiment set-up and dataset used. The results regarding our OCR accuracy are presented in Section 5 and discussed in Section 6. 2 Related Work Research on analyzing infographics commonly addresses classifying the information graphics into their diagram type [27] or separating the text from graphical elements [1], [6], [21]. Information graphics vary greatly in appearance, which makes such classifications challenging. Thus, many researchers focus on specific types of infographics, e.g., extracting text and graphics from 2D plots using layout information [14]. Other works intend to extract the conveyed message (category) of an infographic [16]. Many research works focus on bar charts, pie charts and line charts when extracting text and graphical symbols [5], reengineering the original data [7], [22], or determining the infographic’s core message [4] in order to render it in a different modality or make it accessible to visually impaired users. In any case, clean and accurate OCR results are required for such complex processing steps as determining a message. Therefore, these works use manually entered text.
A different approach [13], [15] to make infographics available to sight-impaired users is to translate infographics into Braille, the tactile language, which requires text extraction and layout analysis. This research is similar to our approach, but it relies on a semi-automatic procedure which requires several minutes of human interaction per infographic. Furthermore, their approach is challenged by image noise, and their supervised character detection algorithm works under the assumption that the text has a unified style, i.e., font, size, and others. Another, more specialized approach for mathematical figures [25] describes a pipeline for (mathematical) text and graphic separation, but it covers only line graphs, its evaluation corpus is very small, and no OCR is conducted to verify the results. The assumption that today’s tools can automatically generate high-quality OCR results on infographics is certainly far-fetched. 3 TX Processing Pipeline Our Text eXtraction from infographics (short: TX) pipeline consists of five steps plus a final evaluation step as shown in Figure 1. It combines certain ideas from related research [11], [13], [24] to build an automated pipeline which takes an infographic as input and returns all contained text. An initial version of our pipeline was briefly presented in [2]. Here we elaborate in detail on the steps of the pipeline, formalize it, and extend our evaluation. Given the heterogeneous research field, a formalization is required to map the related work for a thorough comparison and assessment. (\textsuperscript{3} \url{https://github.com/tesseract-ocr}, last access: Sep 07, 2015) In our pipeline, an information graphic $I$ is defined as a set of pixels $P$ with $p = (x, y) \in P \land x \in \{1..width(I)\} \land y \in \{1..height(I)\}$, where the latter two are integer ranges. The color information of each pixel $p$ is defined by a function $\Psi : P \rightarrow S$, where $S$ is a color space.
We use this information implicitly during our pipeline and use multiple $\Psi$ functions to map to certain color spaces (e.g., RGB or grey scale). A set of text elements $T$ is generated from $P$ by applying the text extraction function $\Upsilon$: $$\Upsilon : P, \Psi \rightarrow T$$ Each text element $\tau \in T$ is a sequence of words $\omega_i$, specified as $\tau = <\omega_1, ..., \omega_n>$ and separated by blank space characters, where each word $\omega$ matches a regular expression over a character class covering the alphanumeric characters \texttt{A-Za-z0-9} as well as common punctuation and special symbols. In the following, we break down the formalization of $\Upsilon$ into five sub-functions $v_j$, one function for each step in our pipeline. We define $\Upsilon$ as a composition: $$\Upsilon := v_5 \circ v_4 \circ v_3 \circ v_2 \circ v_1$$ An overview of the notation used in this paper can be found in Table 1. | Notation | Description | |----------|-------------| | $\Upsilon$ | Text extraction function | | $v_j$ | Sub-function | | $P$, $p$ | Set of pixels and individual pixel | | $R$, $r$ | Set of regions and individual region | | $C$, $c$ | Clustering and individual cluster | | $C'$, $c'$ | Set of text lines and individual text line | | $\Omega$, $\omega$ | Set of words and individual word | | $A$, $\alpha$ | Set of text line orientations and individual orientation | | $T$, $\tau$ | Set of text elements and individual text element | (1) **Region extraction**: The first step is to compute a set of disjoint regions $R$ from the infographic’s pixel set $P$ using adaptive binarization and Connected Component Labeling [20]. This step is formally defined as: $$v_1 : P \rightarrow R, R := \{ r | r \subset P \land r \neq \emptyset \land \forall i, j, i \neq j : r_i \cap r_j = \emptyset \}$$ Each region $r \in R$ is a set of pixels forming a connected space, i.e. each region has a single outer boundary, but may contain multiple inner boundaries (holes). Furthermore, the constraints in equation 3 ensure that all regions are non-empty and disjoint.
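The region-extraction step $v_1$ can be sketched in a simplified form as follows; note that the hierarchical, tile-based adaptive binarization described below is replaced here by a single global Otsu threshold, so this is an illustrative approximation of the pipeline step rather than its actual implementation:

```python
import numpy as np
from scipy import ndimage

def extract_regions(gray):
    """v1 sketch: binarize a grayscale image and return disjoint regions.

    Simplification: one global Otsu threshold stands in for the
    hierarchical, tile-based adaptive binarization of the pipeline.
    """
    # Otsu's method: pick the threshold maximizing between-class variance.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    binary = gray < best_t  # dark foreground on light background

    # Connected Component Labeling with 8-connectivity.
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    # Each region is an array of (row, col) coordinates; regions are disjoint.
    return [np.argwhere(labels == i) for i in range(1, n + 1)]
```

Applied to a light image containing two separate dark blobs, the function returns two disjoint pixel regions, matching the constraints on $R$ above.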
First, we perform a newly-developed hierarchical, adaptive binarization that splits the infographic into tiles. The novelty of this approach is that it computes individual local thresholds to preserve the contours of all elements. This is based on the assumption that the relevant elements of an infographic are distinguishable through their edges. We start with a subdivision of the original image into four tiles by halving its height and width. For each tile, we apply the popular Sobel operator [24] to determine the edges. We compute the Hausdorff distance [9] over the edges of the current tiles and their parent tile. We further subdivide a tile, by halving its height and width, if an empirically determined threshold on this distance is not reached. A threshold for each tile is computed with Otsu’s method [18], and the final threshold per pixel is the average of all tile thresholds covering that pixel. This procedure appeared to be more noise tolerant and outperformed the usual methods, e.g., fixed threshold or histogram, during preliminary tests. The resulting binary image is labeled using the Connected Component Labeling method. This method iterates over a binary image and computes regions based on the pixel neighborhood, giving each region a unique label. From the binary image, we compute for each region $r$ the relevant image moments [10] $m_{pq}$ as defined by: $$m_{pq} = \sum_x \sum_y x^p y^q \Psi(x, y) \quad \text{with} \quad p, q = 0, 1, 2, \ldots$$ Please note that $p, q$ hereby denote the $p, q^{th}$ moment and must not be confused with the notation used in the remaining paper. For binary images, $\Psi$ takes the values 0 or 1, and therefore only pixels contained in a region are considered for the computation of the moments. Using the first-order moments, we can compute each region’s center of mass. Afterwards, we apply simple heuristics to perform an initial filtering.
We discard all regions that fulfill any of the following constraints: (a) the width or height of the region’s bounding box is above the average width/height plus 3 times the standard deviation (e.g., axes), (b) the bounding box is smaller than 0.001% of the infographic’s size (noise), or (c) the region occupies more than 80% of its bounding box (e.g., legend symbols). The function $v_1$ generates a set of regions $R$, which can be categorized into “text elements” and “graphic symbols”, the two types of elements in an infographic. Thus, in a next step we need to separate good candidates for text elements from other graphical symbols. (2) Grouping regions to text elements: The second step computes a clustering $C$ from the set of regions $R$ by using DBSCAN [26] on the regions’ features: $$v_2 : R \rightarrow C, \quad C := \{ c \subseteq R | c \neq \emptyset \land \forall i, j, \ i \neq j : c_i \cap c_j = \emptyset \}$$ Each cluster $c \in C$ is a subset of the regions $R$, and all clusters are disjoint. For each region, the calculated feature vector comprises the x/y-coordinates of the region’s center of mass, the width and height of its bounding box, and its mass-to-area ratio. Due to the huge variety of infographics, we apply the density-based hard clustering algorithm DBSCAN to categorize regions into text elements or noise (graphic symbols and others). This step outputs a clustering $C$ where each cluster is a set of regions representing a candidate text element. We assume that these clusters contain only text while all graphical symbols are classified as noise. (3) Computing of text lines: In this step, we generate a set of text lines $C'$ from the clustering $C$ by further subdividing each cluster $c \in C$. A text line $c'$ is a set of regions that forms a single line, i.e., the OCR output for these regions is a single line of text.
Each cluster $c$, in contrast, may produce multiple lines of text when processed by an OCR engine and therefore may implicitly contain further white space characters. To this end, we apply a second clustering based on a Minimum Spanning Tree (MST) [26] on top of the DBSCAN results, since clusters created by DBSCAN do not necessarily represent text lines. We compute a forest of Minimum Spanning Trees, one MST for each DBSCAN cluster. By splitting up the MSTs, a set of text lines is built for each cluster. The rationale is that regions belonging to the same text line (a) tend to be closer together than other regions and (b) are connected by edges of similar orientation. This is defined as: $$v_3 : C \rightarrow C', \quad C' := \{c' \subseteq c | c \in C \land c' \neq \emptyset \land \forall i, j, \ i \neq j : c'_i \cap c'_j = \emptyset\} \quad (6)$$ Each text line $c' \in C'$ contains a subset of the regions of a specific cluster $c \in C$. Again, all text lines are non-empty and disjoint. For each cluster, the MST is built using the regions’ center of mass coordinates, which are the first two elements of the feature vectors computed in Step 2. We compute a histogram over the angles of the edges in the tree and discard those edges that differ from the main orientation. The orientation outliers are estimated from the angle histogram by finding the most frequent orientation and defining an empirically estimated range of $\pm 60$ degrees around it, where every edge outside this range is an outlier. (4) Estimating the orientation of text lines: In Step 4, we compute an orientation $\alpha \in A$ for each text line $c' \in C'$ so that we can rotate each line into horizontal orientation for OCR. This can be formalized as: $$v_4 : C' \rightarrow C' \times A, \quad A := \mathbb{Z} \cap [-90, 90] \quad (7)$$ Every orientation angle $\alpha \in A$ for a text line $c'$ can take an integer value from -90 to 90 degrees.
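The MST-based subdivision of a DBSCAN cluster into text lines (Step 3) can be sketched as follows; the scipy-based implementation and the 18-bin angle histogram are illustrative assumptions, and wrap-around of angles at $\pm 90^\circ$ is ignored for brevity:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def split_cluster_into_lines(centers, tol_deg=60.0):
    """v3 sketch: split one DBSCAN cluster into text lines via its MST.

    centers: (n, 2) array of region centers of mass. Edges whose angle
    deviates more than tol_deg from the dominant edge orientation are cut;
    the remaining connected components are the candidate text lines.
    """
    n = len(centers)
    mst = minimum_spanning_tree(squareform(pdist(centers))).tocoo()

    # Undirected edge angles folded into [-90, 90) degrees.
    angles = np.degrees(np.arctan2(centers[mst.col, 1] - centers[mst.row, 1],
                                   centers[mst.col, 0] - centers[mst.row, 0]))
    angles = (angles + 90.0) % 180.0 - 90.0

    # Dominant orientation from an angle histogram (the paper's heuristic).
    hist, edges = np.histogram(angles, bins=18, range=(-90, 90))
    dominant = (edges[hist.argmax()] + edges[hist.argmax() + 1]) / 2

    # Keep only edges close to the dominant orientation, then re-label.
    keep = np.abs(angles - dominant) <= tol_deg
    adj = np.zeros((n, n))
    adj[mst.row[keep], mst.col[keep]] = 1
    n_lines, labels = connected_components(adj, directed=False)
    return [np.flatnonzero(labels == k) for k in range(n_lines)]
```

For two horizontal runs of region centers stacked vertically, the single near-vertical MST edge linking them is cut, and the function returns two text lines.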
While the MST used in the previous step can produce reasonable candidates for text lines, it is not well suited for estimating the orientation of text lines, as it is constructed on the center of mass coordinates, which differ from region to region. Thus, we apply a standard Hough line transformation [12] to estimate the actual text orientation. During the Hough transformation, the coordinates of the center of mass of each element are transformed into a line in Hough space, which is defined by angle and distance to origin, creating a maximal intersection at the line’s orientation. This computation is robust with regard to a small number of outliers that are not part of the main orientation. (5) Rotate regions and apply OCR: The final step rotates the text lines by an angle of $-\alpha$ in order to apply a standard OCR tool. It is defined as: $$v_5 : C' \times A \rightarrow T \quad (8)$$ We cut sub-images from the original graphic using the text lines $C'$ from $v_3$, rotate them based on their orientation $A$ from $v_4$, and finally apply OCR. Step 6, the evaluation of the results, is described in detail below. 4 Evaluation Setup We assess the results of our pipeline TX by comparing it with two baselines based on Tesseract, a state-of-the-art OCR engine. In our evaluation, we compute the performance over 1-, 2- and 3-grams as well as words. During the evaluation, we match the results of TX and the baselines against the gold standard. Both the position of the text elements and their orientation are considered in this process. We use different evaluation metrics as described in Section 4.4. 4.1 Dataset and Gold Standard Our initial corpus for evaluating our pipeline consists of 121 infographics, which were manually labeled to create our gold standard. Those 121 infographics were randomly retrieved from an open access corpus of 288,000 economics publications. 200,000 candidates for infographics were extracted from these publications.
All selected candidates have a width and height between 500 and 2000 pixels, since images below 500 pixels most likely do not contain text of sufficient size and images above 2000 pixels appear to be full page scans in many cases. From the candidate set, we randomly picked images, one at a time, and presented them to a human viewer to confirm that each is an infographic. We developed a labeling tool to manually define text elements in infographics for the generation of our gold standard. For each text element we recorded its position, dimension, rotation and its alpha-numeric content. Please note that we considered using existing datasets like the 880 infographics from the University of Delaware\footnote{\url{http://ir.cis.udel.edu/~moraes/udgraphs/}, last access: Sep 07, 2015}, but they were incomplete or of poor quality. 4.2 Baselines Today's tools are incapable of extracting text from arbitrary infographics. Even approaches from recent research works, as presented in Section 2, are too restrictive to be applicable to information graphics in general. This also holds for specialized research like rotation-invariant OCR [17], [19]. Since no specialized tools exist that could be used as a baseline, we rely on Tesseract, the state-of-the-art OCR engine, as our initial baseline (BL-1). It is reasonable to use this baseline, since Tesseract supports a rotation margin of $\pm 15^\circ$ [23] and is capable of detecting text rotated at $\pm 90^\circ$ due to its integrated layout analysis. Since infographics often contain text at specific orientations ($0^\circ, \pm 45^\circ, \pm 90^\circ$), we also apply a second baseline (BL-2), which consists of multiple runs of Tesseract with the infographic rotated at the above specified angles. We combine the five results from the different orientations by merging the result sets; in case of overlaps, we take the element with the greatest width.
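The merge rule of the second baseline (keep the widest element when detections from different rotations overlap) can be sketched independently of the OCR engine. The box format `(x, y, w, h, text)` and the function name are assumptions of this sketch; boxes are expected to be already mapped back into the original image's coordinate frame:

```python
def merge_rotation_results(results_by_angle):
    """Combine OCR detections from multiple rotations (BL-2 sketch).

    results_by_angle: dict mapping rotation angle -> list of (x, y, w, h, text)
    boxes in the original image's coordinate frame. Overlap conflicts are
    resolved by keeping the widest box, as in the paper's second baseline.
    """
    def overlaps(a, b):
        ax, ay, aw, ah, _ = a
        bx, by, bw, bh, _ = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    merged = []
    # Process widest boxes first so they survive overlap conflicts.
    candidates = sorted((box for boxes in results_by_angle.values() for box in boxes),
                        key=lambda b: b[2], reverse=True)
    for box in candidates:
        if not any(overlaps(box, kept) for kept in merged):
            merged.append(box)
    return merged
```

Sorting by width before the greedy pass guarantees that whenever two detections of the same text overlap, only the wider one is kept.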
4.3 Mapping to Gold Standard The most accurate approach to compare OCR results with the gold standard would be to evaluate the results on the level of individual characters. However, our pipeline, the baselines and the gold standard generate their output on varying levels. Only our pipeline supports the output of individual character regions; Tesseract supports only words, as specified in the hOCR standard\textsuperscript{5}, on the lowest level. Thus, we transform the gold standard and pipeline output to word level under the assumption of equal line height and character width. Each text element is defined by its position, i.e. the x/y coordinates of the upper left corner of the bounding box, its dimensions determined by width and height of the bounding box, and its orientation in terms of a rotation angle around its center. We subdivide each text element $\tau$ into words by splitting at blank spaces and carriage returns. The new position and dimensions for each word $\omega \in \Omega$ are computed while retaining the text element's orientation. This is defined by: $$\Phi : \quad T \times C' \times A \rightarrow \Omega \times C'' \times A$$ \hspace{1cm} (9) $$\Omega := \{\omega \in \tau | \tau \in T\}$$ \hspace{1cm} (10) $$C'' := \{c'' \subseteq c' | c' \in C' \land c'' \neq \emptyset \land \forall i, j, \ i \neq j : c_i'' \cap c_j'' = \emptyset\}$$ \hspace{1cm} (11) The bounding boxes of the individual words are matched between TX and gold standard as well as between the baselines and gold standard for evaluation. For each word $\omega \in \Omega$ we compute the contained n-grams for further evaluation. 4.4 Evaluation Metrics As previously mentioned, we evaluate our pipeline over n-grams and words. Since infographics often contain sparse and short text as well as short numbers, we only use 1-, 2-, and 3-grams.
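The subdivision $\Phi$ of a text element into word boxes, under the stated equal-character-width assumption, can be sketched as follows. The helper name and the single-line assumption are illustrative; for rotated elements the word origin is advanced along the element's baseline direction:

```python
import math

def split_into_words(text, x, y, width, height, angle_deg):
    """Subdivide a text element into word boxes (sketch of Eqs. 9-11).

    Assumes uniform character width (width / len(text)) and one text line,
    mirroring the paper's equal line-height / character-width assumption.
    Returns (word, x, y, w, h, angle) tuples; orientation is retained.
    """
    if not text:
        return []
    char_w = width / len(text)
    rad = math.radians(angle_deg)
    dx, dy = math.cos(rad), math.sin(rad)   # unit step along the baseline
    words, pos = [], 0
    for token in text.split():
        start = text.index(token, pos)      # character offset of this word
        pos = start + len(token)
        wx = x + start * char_w * dx        # advance origin along baseline
        wy = y + start * char_w * dy
        words.append((token, wx, wy, len(token) * char_w, height, angle_deg))
    return words
```

Splitting at whitespace while tracking the character offset keeps duplicate words at their correct positions, so the resulting boxes can be matched against the gold standard.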
We use the standard metrics precision ($PR$), recall ($RE$), and F$_1$-measure ($F_1$) for our n-gram evaluation as defined by: $$PR = \frac{|Extr \cap Rel|}{|Extr|}, \ RE = \frac{|Extr \cap Rel|}{|Rel|}, \ F_1 = \frac{2 \cdot PR \cdot RE}{PR + RE}.$$ \hspace{1cm} (12) Here, $Extr$ refers to the n-grams as they are computed from text elements that are extracted from an infographic by TX and the baselines, respectively. $Rel$ refers to the relevant n-grams from the gold standard. For comparing individual words (i.e. sequences of alpha-numeric characters separated by blank or carriage return), we use the standard Levenshtein distance. The same n-gram can appear multiple times in the extraction results from TX and the baselines as well as in the gold standard. Thus, we have to deal with multisets when computing our evaluation metrics. In order to accommodate this, we slightly modify the standard definitions of $PR$ and $RE$, respectively. To properly account for the number of times an n-gram can appear in $Extr$ or $Rel$, we define the counter function \( C_M(x) \) over a multiset \( M \) (as an extension of a set indicator function), where \( C_M(x) \) denotes the multiplicity of \( x \) in \( M \). For an intersection of multisets \( M \) and \( N \), the counter function is formally defined by: \[ C_{M \cap N}(x) := \min\{C_M(x), C_N(x)\} \] (13) Based on \( C_{M \cap N}(x) \), we define \( PR \) and \( RE \) for multisets: \[ PR = \frac{\sum_{x \in Extr \cup Rel} C_{Extr \cap Rel}(x)}{\sum_{x \in Extr} C_{Extr}(x)} \] (14) \[ RE = \frac{\sum_{x \in Extr \cup Rel} C_{Extr \cap Rel}(x)}{\sum_{x \in Rel} C_{Rel}(x)} \] (15) Special cases arise when either one of the sets \( Extr \) or \( Rel \) is empty. One case is that our pipeline TX or the baselines do not extract text where they should, i.e., \( Extr = \emptyset \) and \( Rel \neq \emptyset \). \textsuperscript{5} The hOCR Embedded OCR Workflow and Output Format: \url{http://tinyurl.com/hOCRFormat}, last access: Sep 07, 2015
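The multiset definitions of $PR$ and $RE$, together with the empty-set special cases defined in the text, map directly onto Python's `collections.Counter`, whose `&` operator is exactly the per-element minimum of Eq. 13. The behaviour when both multisets are empty is not specified in the text and is an assumption of this sketch:

```python
from collections import Counter

def multiset_pr_re(extracted, relevant):
    """Precision and recall over n-gram multisets (Eqs. 13-15 plus special cases).

    extracted / relevant: iterables of n-grams, duplicates allowed.
    """
    extr, rel = Counter(extracted), Counter(relevant)
    if not extr and not rel:
        return 1.0, 1.0            # nothing to find, nothing found (assumption)
    if not extr:                   # false negative case: PR := 0, RE := 0
        return 0.0, 0.0
    if not rel:                    # false positive case: PR := 0, RE := 1
        return 0.0, 1.0
    # Counter '&' takes the minimum count per n-gram: the multiset intersection.
    overlap = sum((extr & rel).values())
    return overlap / sum(extr.values()), overlap / sum(rel.values())
```

For example, extracting `ab` twice when the gold standard contains it once contributes only a single match to the numerator, so duplicated extractions are penalised in precision.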
When such a false negative happens, we define \( PR := 0 \) and \( RE := 0 \), following Groot et al. [8]. For the second situation, when the approaches we compare find something where they shouldn't (false positives), i.e., \( Extr \neq \emptyset \) and \( Rel = \emptyset \), we define \( PR := 0 \) and \( RE := 1 \). ## 5 Results This section presents the results of our initial evaluation to assess the quality of the OCR results using our pipeline. We start with descriptive statistics of the gold standard and the extraction results over the infographics. Subsequently, we present the evaluation results in terms of precision, recall and \( F_1 \)-measure for infographic- and word-level evaluation of TX and the two baselines, as well as the Levenshtein distances computed between the extracted text and the gold standard. **Data Characteristics:** Table 2 presents the average numbers and standard deviations (in brackets) with regard to n-grams, words and word length for our extraction pipeline (TX), both baselines (BL-1/-2), and the gold standard (GS). Table 2 clearly shows that our novel pipeline detects at least 1.5 times as many n-grams and words as BL-1, and still more than BL-2. Compared with the gold standard, TX extracts more n-grams and words. In addition, TX and the baselines extract shorter words than the gold standard. Overall, we observe high standard deviations in the gold standard and the extraction results. **Evaluation results on word-level n-grams:** The average precision (\( PR \)), recall (\( RE \)) and \( F_1 \)-measures for n-grams in Table 3 (standard deviation in brackets) show a relative improvement (Diff.) of TX over BL-1 of about 30% on average. The differences are computed by relating the pipeline results to the baselines. We verified the improvement using significance tests, i.e., we tested whether the two distributions obtained from TX and BL-1/2 differ significantly. We checked whether the data follows a normal distribution and has equal variances.
Subsequently, we applied Student's t-tests or the non-parametric Wilcoxon signed-rank test.

Table 2: Average number of n-grams and words of the 121 infographics and average word length for GS/TX/BL-1/BL-2

| | 1-grams | 2-grams | 3-grams | Words | Length |
|-------|-----------|-----------|-----------|----------|----------|
| GS | 150.65 (122.28) | 115.93 (103.09) | 84.95 (85.61) | 35.46 (22.24) | 4.22 (1.48) |
| TX | 177.21 (128.21) | 127.34 (100.51) | 89.34 (79.35) | 50.07 (31.95) | 3.63 (2.69) |
| BL-1 | 106.30 (87.71) | 80.17 (69.12) | 60.79 (54.54) | 25.21 (22.12) | 4.15 (2.25) |
| BL-2 | 135.08 (125.56) | 100.20 (98.20) | 75.08 (78.10) | 35.25 (33.94) | 4.08 (1.95) |

For all statistical tests, we apply a standard significance level of $\alpha = 5\%$. All TX/BL-1 comparison results are significant with $p < .01$ except for the recall over trigrams, which has $p < .046$. The test statistics for the t-tests are between $-7.5$ and $-3.1$ and for the Wilcoxon tests between 1808 and 2619. The second part of Table 3 reports the comparison between TX and BL-2. The results are similar to the previous comparison, but for recall over unigrams and $F_1$-measure over trigrams the improvement is smaller. Here, all differences are significant with $p < .01$ except for the recall and $F_1$-measure over trigrams with $p < .049$ and $p < .027$, respectively. The test statistics are between $-6.8$ and $-3.1$ for the t-tests and between 1652 and 2626 for the non-parametric tests. Finally, we observe a smaller performance increase when comparing the results from 1-grams to 3-grams, as well as overall high standard deviations.
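The paired t-statistic used in these comparisons can be sketched with the standard library alone. The function name is illustrative, and the paper's fallback to the Wilcoxon signed-rank test (when the normality or equal-variance checks fail) is omitted here for brevity:

```python
import math
import statistics

def paired_t(tx, bl):
    """Paired t statistic for TX vs. baseline scores (illustrative sketch).

    tx, bl: equally long lists of per-infographic scores.
    Returns (t, degrees of freedom). Assumes the per-pair differences are
    approximately normal; otherwise a signed-rank test should be used instead.
    """
    diffs = [a - b for a, b in zip(tx, bl)]
    n = len(diffs)
    mean = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)           # sample standard deviation
    t = mean / (sd / math.sqrt(n))         # t = mean_diff / standard error
    return t, n - 1
```

Pairing by infographic removes the large between-graphic variance noted in the text, which is why a paired test (rather than an unpaired one) is the appropriate choice here.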
Table 3: Average $PR$, $RE$, $F_1$ measures for TX and BL-1/BL-2 (left block: word level; right block: infographic level)

| | n-gram | $PR$ | $RE$ | $F_1$ | $PR$ | $RE$ | $F_1$ |
|-------|---|------------|------------|------------|------------|------------|------------|
| TX | 1 | .50 (0.41) | .68 (0.36) | .47 (0.39) | .67 (0.23) | .79 (0.20) | .71 (0.21) |
| | 2 | .58 (0.39) | .54 (0.38) | .54 (0.34) | .60 (0.27) | .67 (0.25) | .62 (0.25) |
| | 3 | .52 (0.39) | .48 (0.37) | .49 (0.37) | .57 (0.29) | .60 (0.29) | .57 (0.28) |
| BL-1 | 1 | .37 (0.36) | .48 (0.36) | .36 (0.35) | .67 (0.29) | .54 (0.31) | .58 (0.30) |
| | 2 | .42 (0.33) | .42 (0.34) | .42 (0.33) | .60 (0.33) | .50 (0.33) | .53 (0.32) |
| | 3 | .42 (0.31) | .42 (0.31) | .36 (0.33) | .55 (0.35) | .48 (0.34) | .49 (0.34) |
| Diff. | 1 | 35.14% | 41.67% | 30.06% | 0.00% | 46.30% | 22.41% |
| | 2 | 38.10% | 28.57% | 28.57% | 0.00% | 34.00% | 16.98% |
| | 3 | 23.81% | 14.29% | 36.11% | 3.64% | 25.00% | 16.33% |
| BL-2 | 1 | .37 (0.37) | .51 (0.38) | .36 (0.36) | .65 (0.25) | .59 (0.29) | .60 (0.26) |
| | 2 | .42 (0.34) | .42 (0.35) | .42 (0.34) | .57 (0.31) | .52 (0.31) | .53 (0.30) |
| | 3 | .42 (0.32) | .42 (0.32) | .42 (0.32) | .51 (0.33) | .50 (0.34) | .49 (0.32) |
| Diff. | 1 | 35.14% | 33.33% | 30.06% | 3.08% | 33.90% | 18.33% |
| | 2 | 38.10% | 28.57% | 28.57% | 5.26% | 28.85% | 16.98% |
| | 3 | 23.81% | 14.29% | 16.67% | 11.76% | 20.00% | 16.33% |

**Evaluation results on infographic-level n-grams:** We conducted another evaluation on infographic level, where we did not consider the location mapping constraint between words and instead compared the n-grams for the whole infographic. The results are shown in Table 3 for both baselines BL-1 and BL-2. While the values are on average higher for all metrics in both comparisons, the relative improvement for precision, recall, and $F_1$-measure decreases in most cases compared with the word-level evaluation. The results are only significant for recall and $F_1$-measure, but not for precision.
For recall and $F_1$-measure we have $p < .04$, and the test statistics are between $-9.2$ and $-2.4$ for the t-tests. **Evaluation on words (Levenshtein):** For TX, the Levenshtein distance is on average 2.23 (SD=1.29). Hence, for an exact match one has to alter about two characters. The average Levenshtein distance for BL-1 is 2.53 (SD=1.59), and we verified that the two differ significantly ($t(120) = 2.10, p < .04$). The difference in Levenshtein distance from BL-2 to TX, with an average distance of 2.54 (SD=1.51), is significant as well ($V(120) = 4713, p < .01$). **Special case evaluations:** The numbers of special cases for TX are on average 12.94 (SD=17.88) false negatives and 49.87 (SD=31.52) false positives. For BL-1, we instead report 17.01 (SD=17.40) false negatives and 5.67 (SD=9.42) false positives on average. BL-2 generates on average 9.03 (SD=15.61) false negatives and 17.01 (SD=17.40) false positives. Comparing the TX pipeline with BL-1 shows that TX produces significantly fewer false negatives ($V(120) = 4503.5, p < .01$), but simultaneously generates significantly more false positives ($t(120) = -16.6, p < .001$). The second baseline is on average better than TX with regard to false negatives and false positives. ## 6 Discussion Our novel pipeline shows promising results for the extraction of multi-oriented text from information graphics. The difference between word- and infographic-level evaluation can be explained by the constraints induced by the matching procedure on word level. The main reason for the performance improvement is the increased recall, which is a result of finding text at non-horizontal angles. We define as non-horizontal all elements with an orientation outside of Tesseract's tolerance range of $\pm 15$ degrees. On average, about 20% of the words in an infographic are at non-horizontal orientation, as specified by the gold standard.
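The word-level comparison above relies on the standard Levenshtein distance, which can be sketched as the classic dynamic program over two rows:

```python
def levenshtein(a, b):
    """Standard Levenshtein edit distance between two strings.

    O(len(a) * len(b)) time, O(min(len(a), len(b))) space: only the previous
    row of the DP table is kept.
    """
    if len(a) < len(b):
        a, b = b, a                 # ensure b is the shorter string
    prev = list(range(len(b) + 1))  # distance from empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

An average distance of about 2.2, as reported for TX, thus means roughly two single-character edits separate an extracted word from its gold-standard counterpart.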
Our pipeline's output consists of 37% non-horizontal words, while extracting 41% more words on average than are actually present in the gold standard. The first baseline, on the other hand, extracts only about 77% as many words as are actually contained, all of horizontal orientation. The second baseline is closest to the gold standard with respect to the number of extracted words and contains on average 31% non-horizontal words. In addition, TX improves precision and therefore yields an overall performance increase, as reflected in the $F_1$-measure. The standard deviation is quite high in all cases, which can be explained by the variance in the gold standard. Consequently, these are dataset characteristics and not issues of TX or the baselines. The lower number of 3-grams, which are on average only half as many as 1-grams, is a potential negative influence on the results. As reported in Table 2, there is a high standard deviation in the number of n-grams in the gold standard. Thus, some graphics might not even contain 3-grams. However, in most cases there are on average 85 3-grams per infographic, as denoted by the gold standard statistics in Table 2, which is enough for reasonable results. Furthermore, TX produces fewer false negatives, i.e., it extracts more text elements from the gold standard than BL-1. But it still makes more mistakes with regard to extracting text elements where there are none in the gold standard. This is reflected in Table 2, where TX extracts on average more text elements than are actually present in the gold standard. These false positives often consist of special characters such as colons, semicolons, dots, hyphens, and others. Removing them will be a future extension of our work. 7 Conclusion We have presented our novel pipeline for multi-oriented text extraction from information graphics and demonstrated its feasibility on a set of 121 infographics.
Our text extraction shows a significant increase in $F_1$-measure over two baselines, which is explained by detecting text elements at non-horizontal angles. In our future work, we plan to add a merge step after the MST clustering to reduce the Levenshtein distance and to perform entity detection over the text extraction results. In addition, we want to apply our pipeline to a larger set of infographics for a more thorough evaluation. We will create the required gold standard using crowd-sourcing in the near future. Finally, we plan to include alternative OCR engines like Ocropus to find the best solution for our needs. References [1] P. Agrawal and R. Varma. Text extraction from images. *IJCSET*, 2(4):1083–1087, 2012. [2] F. Böschen and A. Scherp. Multi-oriented text extraction from information graphics. In *ACM DocEng*, 2015. [3] S. Carberry, S. Elzer, and S. Demir. Information graphics: an untapped resource for digital libraries. In *SIGIR*, pages 581–588. ACM, 2006. [4] S. Carberry, S. E. Schwartz, K. F. McCoy, S. Demir, P. Wu, C. Greenbacker, D. Chester, E. Schwartz, D. Oliver, and P. Moraes. Access to Multimodal Articles for Individuals with Sight Impairments. *TiiS*, 2(4):21:1–21:49, 2013. [5] D. Chester and S. Elzer. Getting Computers to See Information Graphics So Users Do Not Have To. In *Foundations of Intelligent Systems*, volume 3488 of *LNCS*, pages 660–668. Springer, 2005. [6] S. R. Choudhury and C. L. Giles. An architecture for information extraction from figures in digital libraries. In *WWW*, pages 667–672, 2015. [7] J. Gao, Y. Zhou, and K. E. Barner. VIEW: Visual information extraction widget for improving chart images accessibility. In *ICIP*, pages 2865–2868. IEEE, 2012. [8] P. Groot, F. van Harmelen, and A. ten Teije. Torture tests: A quantitative analysis for the robustness of knowledge-based systems. In *EKAW*, pages 403–418, 2000. [9] F. Hausdorff. *Grundzüge der Mengenlehre*. AMS Chelsea Publishing Series. Chelsea Publishing Company, 1949.
[10] M. Hu. Visual pattern recognition by moment invariants. *IRE Transactions on Information Theory*, 8(2):179–187, 1962. [11] W. Huang and C. L. Tan. A system for understanding imaged infographics and its applications. In *Proc. DocEng*, pages 9–18, 2007. [12] J. Illingworth and J. Kittler. A survey of the hough transform. *Computer Vision, Graphics, and Image Processing*, 44(1):87–116, 1988. [13] C. Jayant, M. Renzelmann, D. Wen, S. Krisnandi, R. E. Ladner, and D. Comden. Automated tactile graphics translation: in the field. In *ASSETS*, pages 75–82, 2007. [14] S. Kataria, W. Browner, P. Mitra, and C. L. Giles. Automatic extraction of data points and text blocks from 2-dimensional plots in digital documents. In *Advancement of Artificial Intelligence*, pages 1169–1174. AAAI, 2008. [15] R. E. Ladner, M. Y. Ivory, R. Rao, S. Burgstahler, D. Comden, S. Hahn, M. Renzelmann, S. Krisnandi, M. Ramasamy, B. Slabosky, A. Martin, A. Lacenski, S. Olsen, and D. Groce. Automating tactile graphics translation. In *ASSETS*, pages 150–157, 2005. [16] Z. Li, M. Stagitis, S. Carberry, and K. F. McCoy. Towards retrieving relevant information graphics. In *SIGIR*, pages 789–792. ACM, 2013. [17] R. Mariani, M. P. Deseilligny, J. Labiche, and R. Mullot. Algorithms for the hydrographic network names association on geographic maps. In *ICDAR*. IEEE, 1997. [18] N. Otsu. A threshold selection method from gray-level histograms. *TSMC*, 9(1):62–66, 1979. [19] P. M. Patil and T. R. Sontakke. Rotation, scale and translation invariant handwritten devanagari numeral character recognition using general fuzzy neural network. *Pattern Recogn.*, 40(7):2110–2117, 2007. [20] H. Samet and M. Tamminen. Efficient component labeling of images of arbitrary dimension represented by linear bintrees. *IEEE TPAMI*, 10(4):579–586, 1988. [21] J. Sas and A. Zolnierek. Three-Stage Method of Text Region Extraction from Diagram Raster Images. In *CORES*, pages 527–538, 2013. [22] M. Savva, N. Kong, A. Chhajta, L. 
Fei-Fei, M. Agrawala, and J. Heer. ReVision: Automated Classification, Analysis and Redesign of Chart Images. In *UIST*, pages 393–402. ACM, 2011. [23] R. Smith. A simple and efficient skew detection algorithm via text row accumulation. In *ICDAR*, volume 2, pages 1145–1148, 1995. [24] I. Sobel. History and definition of the so-called "Sobel operator", more appropriately named the Sobel-Feldman operator (originally presented as "A 3x3 Isotropic Gradient Operator for Image Processing" at the Stanford Artificial Intelligence Project (SAIL) in 1968). 2015. [25] N. Takagi. Mathematical figure recognition for automating production of tactile graphics. In *ICSMC*, pages 4651–4656, 2009. [26] P.-N. Tan, M. Steinbach, and V. Kumar. *Introduction to Data Mining (First Edition)*. Addison-Wesley Longman Publishing Co., Inc., 2005. [27] F. Wang and M.-Y. Kan. NPIC: Hierarchical synthetic image classification using image search and generic features. In *CIVR*, volume 4071 of *LNCS*, pages 473–482. Springer, 2006.
Type of dance: ABC dance. A: 32 counts/nightclub. B: 16 counts/rumba. C: 32 counts/funky. Intro: Start after 8 counts (app. 8 secs into track). NOTE that your count-in should be slow. Start with weight on L. **2 Restarts: 1st) During 3rd A, after 8 counts, facing 12:00. 2nd) During 5th C, after 16 counts, facing 12:00. See Detailed Restart description at bottom of page Sequence: ABCC, ABCC, A*, ABC*C. A – 32 counts/Nightclub/1 wall (The A part always starts facing 12:00) A[1 – 9] Side R, back rock, fwd L & full spiral, run run rock, back sweeps X 3, ¼ R sways, ¼ L 1 – 2& Step R to R side (1), rock back on L (2), recover fwd onto R (&) 12:00 3 Step L fwd turning a full spiral turn R on L (3) 12:00 4&5 Run R fwd (4), run L fwd (&), rock R fwd (5) 12:00 6&7 Recover L back sweeping R (6), step R back sweeping L (&), step L back sweeping R (7) 12:00 &8&1 Turn ¼ R stepping R to R side swaying body R (&), sway L (8), sway R (&), turn ¼ L onto L dragging R next to L (1) … * restart: when doing your 3rd A change counts &8&1 to: rock back on R (8), recover onto L (&). 
Remember: Don’t turn the ¼ R but stay facing 12:00 when doing this rock step 12:00 A[10 – 16] Weave, ¼ L, step turn turn, R arm up, R&L arm down & out, to chest, shoulders LR 2&3& Cross R over L (2), step L to L side (&), cross R behind L (3), turn ¼ L stepping L fwd (&) 9:00 4&5 – 6 Step R fwd (4), turn ¼ L onto L (&), turn ¼ L on L stepping R to R side starting to reach R arm fwd with palm opened up (5), R arm ends stretched forwards and slightly up (6) 9:00 7&8 Bring R arm down alongside R leg with R hand fisted (7), do the same with L arm (&), bring both arms up to chest crossing R arm over L (8) 9:00 &a Twist upper-body slightly L (&), twist upper-body slightly R (a) – weight on R 9:00 A[17 – 24] Sweep R, cross ¼ R, R side rock, full turn with jump/kick, ¼ R, ¼ R, together, weave 1 – 2&3 Recover onto L sweeping R fwd (1), cross R over L (2), turn ¼ R stepping L back (&), rock R to R side (3) 12:00 4&5 Recover onto L (4), turn ¼ R stepping R fwd (&), turn ½ R stepping back on L kicking R leg up but continuing to turn ¼ R on L (5) Styling for count 5: Jump slightly off R foot to show the lyrics ‘jump into the deep end’ … ⊙ 12:00 6&7 Turn ¼ R stepping R fwd (6), turn ¼ R stepping L to L side (&), step R next to L (7) 6:00 &8& Cross L over R (&), step R to R side (8), close L behind R (&) 6:00 A[25 – 32] R basic, side rock cross, ¼ L, R arm up, R&L arm down & out, to chest, shoulders LR 1 – 2& Step R a big step to R side (1), step L behind R (2), cross R over L (&) 6:00 3&4& Rock L to L side (3), recover onto R (&), cross L over R (4), turn ¼ L stepping back on R (&) 3:00 5 – 6 Turn ¼ L stepping L to L side starting to reach R arm fwd with palm opened up (5), R arm ends stretched forwards and slightly up (6) 12:00 7&8 Bring R arm down alongside R leg with R hand fisted (7), do the same with L arm (&), bring both up to chest crossing R arm over L (8) 12:00 &a Twist upper-body slightly L (&), twist upper-body slightly R (a) – weight on R 12:00 B – 16 counts/Rumba/1 
wall (The B part always starts facing 12:00 – NOTE: use them hips!) B[1 – 8] Sweep R diagonally L, R rocks, L side rock cross, ¼ L X 2, R rocks with body rolls 1 – 2&3 Recover onto L sweeping R fwd into L diagonal (1), rock R fwd (2), recover back on L (&), recover fwd to R (3) 10:30 4&5 Turn 1/8 R rocking L to L side (4), recover onto R (&), cross L over R (5) 12:00 6& Turn ¼ L stepping back on R (6), turn ¼ L stepping L to L side (&) 6:00 7&8& Cross rock R slightly over L (7), recover on L (&), recover fwd to R (8), recover back on L (&) … Styling: roll body from chest and down during your two rock steps 6:00 B[9 – 16] Sweep L diagonally R, L rocks, R side rock cross, ¼ R X 2, L rocks with body rolls 1 – 2&3 Recover onto R sweeping L fwd into R diagonal (1), rock L fwd (2), recover back on R (&), recover fwd to L (3) 7:30 4&5 Turn 1/8 L rocking R to R side (4), recover onto L (&), cross R over L (5) 6:00 6& Turn ¼ R stepping back on L (6), turn ¼ R stepping R to R side (&) 12:00 7&8 Cross rock L slightly over R (7), recover on R (&), recover fwd to L (8) … Styling: roll body from chest and down during your two rock steps 12:00 C – 32 counts/Funky/2 walls (The C part always starts facing 12:00 and always comes twice) C[1 – 8] Out RL, centre, fwd L, R swivel up, return, bounce side/back/side, fwd R & open body 1&2& Step R out to R (1), step L out to L (&), step R to centre (2), step L fwd (&) 12:00 3 – 4 Step R fwd swivelling both heels R and going up on ball of both feet at the same time (3), swivel heels back again recovering back on L (4) 12:00 5 – 8 Rock R to R side (5), recover on L rocking R back (6), recover on L rocking R to R side (7), recover onto L stepping R fwd (8) Styling for count 8: open body to R side, thereby slightly crossing R over L when stepping R fwd AND look over R shoulder. 
- Note: During all 4 rocks try to bounce bending in both knees when taking your steps 12:00 C[9 – 16] Walk LRL fwd, together with R, walk LR back, ball back rock 1 – 2 Walk L fwd (1), walk R fwd (2) … Styling: bring both arms in front of body crossing R arm over L (1), bring arms out to both sides and snap fingers (2) 12:00 3 – 4 Step L fwd (3), step R next to L (4) … Styling: push arms and hands fwd and up to face level/palms open towards face (3), flip hands around so that both palms are facing fwd/fingers pointing up (4) 12:00 5 – 6 Walk back L (5), walk back R (6) … Styling: drop arms down on count 5 12:00 &7 – 8 Step L a small step back (&), rock back on R (7), recover fwd to L (8) … * Restart: when doing your 5th C the music changes, then restart here, after 16 counts, facing 12:00 12:00 C[17 – 24] Step R fwd & Hand claps, push L to L side with drag, chug ¾ L 1&2 Step R fwd slapping thigh with R hand and placing L hand over R thigh with palm facing down (1), slap L hand’s palm with back of R hand (&), slap R thigh with R hand again (2) 12:00 3 – 4 Drop arms stepping L a big step to L side and pushing R hand/arm to R side (3), drag R towards L (4) 12:00 5 – 8 Drop R arm starting to turn ¾ L rocking R to R side (5), continue turning and finish the ¾ turn over the next 3 counts ending with the weight on L (8) 3:00 C[25 – 32] Heel grind ¼ R, L side rock, cross shuffle, vine R with big step R, slide together 1 – 2& Touch R heel fwd (1), grind ¼ R on R rocking L to L side (2), recover onto R (&) 6:00 3&4 Cross L over R (3), step R a small step to R side (&), cross L over R (4) 6:00 5 – 6 Step R to R side (5), cross L behind R (6) … Styling: touch L shoulder with R hand and R shoulder with L hand (5), touch L shoulder with L hand and R shoulder with R hand (6) 6:00 7 – 8 Step R a big step to R side (7), step L next to R (8) … Styling: push hands/arms down (7), push hands/arms out to sides (8) … then drop arms again ☺ 6:00 START AGAIN! 
Ending: When doing your last C do up to count 31 (you’re facing 6:00). Rather than stepping L next to R, you touch L behind R (count 32), then unwind ¾ L to face 12:00 stepping L to L side 12:00 Contacts: Dee Musk: firstname.lastname@example.org Fred Whitehouse: email@example.com Guyton Mundy: firstname.lastname@example.org Niels Poulsen: email@example.com
After A.P. Davis failed to replace D.D. Palmer's principles of chiropractic with a medical/osteopathic theory of nerve stimulation, a new Palmer detractor appeared. Solon Massey Langworthy attempted to amalgamate osteopathy and naturopathy into chiropractic, along with medical orthopedics, including the use of mechanical traction and stimulation devices. This activity made a mockery of D.D. Palmer's term "chiropractic," which means "done by hand." Solon M. Langworthy enrolled as a student at the Palmer School and Cure, in Davenport, Iowa, on July 1, 1901, and graduated in early September 1901. Mrs. Solon Langworthy was adjusted by D.D. Palmer on January 10th and 19th, 1901, for insanity, for which her husband paid $15.00. On September 7, 1901, on his stationery titled the "Cedar Rapids Chiropractic Cure and School," Dr. Langworthy assures D.D. Palmer that he "never solicited business for myself from any of your patients" while a student. In a letter to B.J. Palmer dated January 19, 1902, Langworthy stated he had thirty-three regular patients and declared: "I use chiropractic and osteopathy on them, and it is work." D.D. Palmer defined the word "mixer" as one who used other healing art methods with chiropractic and portrayed it as chiropractic. Langworthy's Cedar Rapids Chiropractic Cure and School drew D.D.'s attention, as well as some osteopaths! The letter continues, "Say Bart, would you and your father sell me an interest in the Davenport plan at rock bottom, you (B.J.) to take the active management and I to give you some of my time each week? My idea would be to run the infirmary for all it is worth and then incorporate the Western School of Chiropractic securing as stock holders D.D. Palmer, Thomas Story, Oakley Smith, B.J. Palmer, Dr. Sutton, Dr. Jones, Dr. Stouder, Miss Olcutt, and Solon Langworthy, each to own an equal amount of stock and each (except D.D. Palmer) to send all students to the school, all sharing alike in the profits of the school." A P.S. 
was added: "[I (Langworthy) neglected to tell you that the photos were received okay. Please let me know how much I owe on them and I will remit. Don't forget to send me a copy of those lectures]." The reference here was to a Palmer "gathering," the first week in January, 1902, held in Davenport. Lectures were given by D.D. and photos taken. Included in the photos was the photo on page 882 of D.D.'s 1910 book, *The Adjuster*. Recent investigation indicates this picture was taken in January, 1902, at the chiropractic gathering and should state that B.J. was the 15th graduate up to 1902. B.J.'s diploma from D.D. is dated January 16, 1902. Langworthy, in a letter to D.D. on April 10, 1902, envisioned the course of study at the new Western School to be broadened to include "Hydropathy and the other good things in Nature Cure." Nature cure was synonymous with Naturopathy, Natural Cure, Natural Healing, Physiological Therapeutics, Drugless Healing, etc. B.J., on advice of his father, met with Langworthy in Cedar Rapids in April, 1902, to discuss the "school proposition." Evidently, B.J. brought some books from Langworthy on "Nature Cure" to D.D. in Davenport. On May 4, 1902, D.D. wrote to B.J. as follows: *I have no use for those books on "natural cure," as I have been over the whole field and have outgrown them. It is a positive fact that after we Chiropractics have done the right thing, that we should not undo what we have done. I.e. for e.g. Mama hau me to treat her Chiro. than I must treat her Magnetic and undo what I have done. By Chiro. I free the nerves and set them in action, by magnetic I soothe them and quiet them, give them ease. Miss Jenko did not do much good here until she quit the magnetic. A chiro. relieves a nerva or nerves and the drug Dr. steps in and gives a deadening dose, the Chiro. sets the nerves in action, the drug deadens action. A Chiro. fixes a wrong and a C. Science says there was nothing wrong. 
So we might go through the list and find two opposites. If Chiro. is right, then its opposite cannot be right. This I am daily being much more thoroughly convinced of. The less our patients use of treatments, the better they succeed with us. This is true* of all "Natural Methods," whether they be "Hypnotism, Homopathy, X-rays, Ozone, Electricity, Oil, Variations of Diet, Gymnastics, Massage, Magnatium, Bacteria Remedies, Baths, Medical Herbs, or Kneipp's Water Cure (diseases are in the blood and must be washed out) etc." If Blis had known of one touch of Chiro, he would have had a page or two of Chiro. in his book. Every physician, no matter of what school, says and believes that his means are the "Natural Methods." Chiro... is not benefited by mixing it with any other method, if there is a positive excuse for mixing, it is to fool the patient, belittling Chiro., deceiving the patient and losing confidence in ourselves. The school proposition was evidently negatively received and the lines drawn between the naturopathic chiropractor (MIXER) and the pure chiropractor (later called straight). Langworthy had included osteopathy and naturopathy (ALL drugless methods) into chiropractic, and was in rapport with Benedict Lust, the "father" of naturopathy. Solon Massey also attempted to involve Dr. Stouder of Des Moines and Dr. Storey of Minneapolis, early graduates of D.D. Palmer, in the formation of a new school, but did not succeed. The first declared naturopathic chiropractor continued correspondence with B.J. Palmer, then practicing in Lake City, Iowa, and obtained from him drawings and dimensions of B.J.'s adjusting table, which he described as a "beauty." Langworthy very shortly thereafter invented and patented a "chiropractic treating table" and described tables such as B.J.'s as an "unpadded washbench on which to put the poor patient." This did not exactly improve the relations between the Palmers and Langworthy. "Poxy Grandpa" as D.D. 
called himself, was wary and stated in March 1902: "I think that I can see a wood check in the wood pile. Dr. Stouder and among them (Langworthy and Storey) think of starting a school and they don't know how to get rid of us." The Palmers were soon to find out! THE AMERICAN SCHOOL OF CHIROPRACTIC AND NATURE CURE The scion of the Langworthy family continued to operate the Cedar Rapids School of Chiropractic and Nature Cure for the remainder of 1902. In early 1903, the name was changed to the American School of Chiropractic and Nature Cure. A full-page advertisement appeared in the April 1903 issue of Medical Talk, a liberal medical home journal. The ad showed Dr. S.M. Langworthy holding a spinal column and stated, "This Spine Needs Fixing and So Does Yours." At the bottom, the following statement appears: "Our school is not a 'Diploma Mill' with a cheap mail course. We teach men and women to cure disease, charge a reasonable sum for doing so, and require their personal attendance at the school during the last term." *Emphasis added. In other words, the first terms were taught by mail, and sometime during the last term, the student had to appear. Langworthy's advertising experience as a mercantile executive was utilized. A June 1904 announcement stated that The American School of Chiropractic and Nature Cure, Inc., had been recently reorganized and would reopen September 6, 1904. The length of the course was to be 2 years: 4 terms of 5 months each. An editorial in the June 1904 issue of an osteopathic journal (The Cosmopolitan Osteopath) stated: A Mr. S.M. Langworthy, formerly of Dubuque, for several years an insurance agent for the Penn Mutual, subsequently became a boot and shoe traveling salesman. He had and has excellent ideas of advertising and grafting. . . Now he runs a school and has just issued a circular enclosing a facsimile of a $500 check which "Murphy" has paid him for a "mail course."
Reference is made to a Langworthy $100 chiropractic mail course in a June 19, 1905, letter from a Dr. Herman Dickel of Oak Lane, Pennsylvania: I received from Cedar Rapids, Morris Anatomy, Manning's Physiology, an abridged Gould's Dictionary, Bates and Tabor's Chart, lesson papers and quizzes from time to time on Anatomy, Physiology, Symptomatology, Hygiene and Dietetics. I received nothing on chiropractic principles or practice, except what was contained in "Chiropractic Facts." This was sent to me because I asked for it, not because it was part of the tuition.[11] LANGWORTHY'S CHIROPRACTIC FACTS This 15-page publication by Langworthy shows a standing, pointing Dr. S.M. Langworthy on the cover, which states: "Chiropractic Facts—A Book Full of New Ideas" and "Chiropractic Adjustment Makes It Possible For Nature To Cure All Disease." This appeared in 1904 as the 7th edition.[12] The appearance and contents were very similar to B.J. Palmer's "Chiropractic Proofs," published in 1903, while B.J. was in charge of the Palmer School at Davenport, although D.D. was still president of the school.[13] "Chiropractic Facts" defines Chiropractic as: *Chiropractic is from two Greek words: cheir, hand; prakton, to be done. Chiropractic is a drugless system founded upon the principle that luxations of osseous or other compact structures, by interfering with the normal action of nerves and vessels are the CAUSE of disease, and that adjustment of these displaced parts to their normal position, by giving freedom of action to all nerves and vessels results in the CURE of disease.* AND *Chiropractic is a method of hand adjustment by which the cause of disease is removed. In not the slightest particular does it resemble Massage, Magnetic, or Hygienic treatment and must not be confounded with any of them. Neither is it Osteopathy, nor does a Chiropractic adjustment resemble an Osteopathic treatment in the least. Chiropractic reaches many diseases upon which Osteopathy has failed.* Chiropractic is defined as "to be done by hand" and chiropractic is not osteopathy, and chiropractic adjustments and osteopathic treatments are distinguished.[14] These statements are difficult to reconcile with the statement: "I use Chiropractic and Osteopathy" in his letter to B.J. in January, 1902.[15] The Human Being is referred to as "A Human Machine," an analogy used in 1899 and earlier by D.D.[16] The chiropractor is referred to as "A Master Mechanic of the Human Machine" by B.J. on the cover of his "Chiropractic Proofs."[17] According to Russell Gibbons of the National Association for Chiropractic History, Dr. Langworthy received a diploma in 1902 from an institution called the American College of Manual Therapeutics in Kansas City, Missouri.[18] This diploma and/or course must have included osteopathy, as Dr. Langworthy states in a "Chiropractic Facts" article which illustrated the difference between osteopathy and chiropractic: "I make this statement from positive knowledge for I know both systems..."[19] THE SCENARIO Dr. Langworthy had incorporated osteopathy into chiropractic from the beginning of his practice, ignoring the differences between the two. Osteopathy and all other medical healing arts claimed "supremacy of the blood," as opposed to the chiropractic claim of "supremacy of the nerves." The osteopathic manipulation, requiring as long as two hours, was not distinguished from the chiropractic adjustment, requiring less than a minute. D.D. said: *There are many who claim to practice chiropractic who know little or nothing of it... It is therefore the purpose of the chiropractor and the parent school to teach this modern science unmixed. Those who desire to practice it with other methods have a right to do so, but if they call the mixture chiropractic, we will call them down.*[20] The labeling of osteopathy as chiropractic raised the ire of the osteopathic profession and contributed to a political war between chiropractors and osteopaths. Langworthy did not merely keep a flame; he started a conflagration. Osteopaths became eligible for licensure in Iowa in 1902. BACKBONE The chiropractic journal, "Backbone," appeared in October, 1903, published by Solon Langworthy. In the very first issue, the impression was conveyed that Chiropractic was originated and developed at the American School of Chiropractic and Nature Cure at Cedar Rapids, Iowa by Solon Langworthy. Cyrus Lerner, an investigative attorney hired in the early 1950's by a New York Chiropractic group, commented: For reasons which you will see, Langworthy selected a unique and descriptive title for the magazine. The word "Chiropractic" was not included in the name. The magazine was called "Backbone" -- and on the outside cover there appeared a drawing of the human spine. Volume 1, Number 1, of "Backbone" was published in October 1903. In the inside pages of "Backbone" the reader is introduced to the subject of "Chiropractic." Let me quote for you part of this introduction: "Chiropractic--the science of 'hand-fixing'--is an original Iowa idea--and in the American School of Chiropractic and Nature Cure at Cedar Rapids, Iowa, U.S.A., the science of Chiropractic has been developed until the skilled practitioner knows he can find the immediate cause of disease, and with almost never an exception he can remove it and see his patient restored to health..." Reading this introduction, the ordinary person could not learn who it was that had founded the new science of Chiropractic.
By careful wording of the introduction the reader is left with the impression that the science of Chiropractic was naturally a product of "The American School of Chiropractic and Nature Cure." But even more significant than the omission of the name of the founder is the title of the magazine which Langworthy chose. In selecting this title I will show you how Langworthy intended to narrow down in scope the "Science of Chiropractic" and confine it to limitations not intended in the original concept of Palmer. Neither Volume 1, Number 1 of "Backbone" nor the American School 1903-04 announcement made any mention of the Palmer School of Chiropractic in Davenport or of D.D. Palmer. D.D. was conducting a Chiropractic school in Santa Barbara, California when these publications appeared, and B.J. was in charge at the Davenport facility, although "rolling around the country" practicing chiropractic in various localities. A legal distinction existed that allowed a person to teach chiropractic but not practice chiropractic. B.J. taught chiropractic along with other assistants he hired in Davenport, but practiced in other locations. The distinction was drawn when D.D. was prosecuted in April of 1906 for practicing chiropractic and B.J. was not prosecuted for teaching chiropractic. D.D. conducted his branch schools of chiropractic by the authority granted by the Iowa corporate charter of the Palmer School of Magnetic Cure in June, 1896 and by the corporate laws of Iowa as The Chiropractic School and Infirmary on July 10, 1896. The Palmers were not publishing a chiropractic journal at this time and did not until December, 1904, when D.D. and B.J. joined forces in Davenport. The Palmer journal, named "The Chiropractor," was edited by D.D. and managed by B.J. The first issue of "Backbone" also contained the following article: ARE YOU TOO FAT? If you are too fat, send us fifty cents for a year's subscription to "Backbone" and Dr.
Langworthy will send you infallible instructions for reducing your weight; no medicine, no hard work and eat all you want while doing it. Dr. Langworthy and his "Nature Cure" were in the "fatty" business in 1903. Benedict Lust, Naturopath, also had an ad addressed to progressive men and women in regard to his NATURAL CURE TREATMENT, which was guaranteed to CURE ALL DISEASES. Chiropractic was not listed. Three articles on "THE NEW THOUGHT" by Arba Joseph were also included. The believers in New Thought declared that BY THE MIND DISEASE CAN BE CONTROLLED, poverty cured and ideals realized. This accounts for Langworthy's claim that "Backbone" was a book on brain building--a nature cure. This was known as "THE MIND CURE" in 1903. The naturopath's natural cure treatment, the mind cure and the fat control programs were part of Langworthy's "NATURE CURE," amalgamated with the American School of Chiropractic. Benedict Lust, Naturopath, wrote the following letter to the Palmer School of Chiropractic on March 31, 1905 (continued in next issue): CHIROPRACTIC PARALLAX—PART I BIBLIOGRAPHY 1. Zarbuck, Mervyn V., D.C., "Chiropractic Parallax, Part I," IPSCA Journal of Chiropractic, Vol. 9, No. 1, p. 3. 2. Palmer, D.D., The Records, 1981, Palmer Archives, Davenport, Iowa. 3. Palmer letter, Solon Langworthy to D.D. Palmer, Sept. 7, 1901—Palmer Archives. 4. Palmer letter, Solon Langworthy to D.D. Palmer, Jan. 19, 1902—Palmer Archives. 5. Palmer letter, Solon Langworthy to D.D. Palmer, April 10, 1902—Palmer Archives. 6. Langworthy, S.M. to D.D. Palmer, May 4, 1902—Palmer Archives. 7. Langworthy, S.M., "Chiropractic Facts," 1905. 8. Langworthy, S.M., "Chiropractic Facts," 1905. 9. Palmer, D.D., "Chiropractic Facts," 1905. 10. Palmer, D.D., "Chiropractic Facts," 1905. 11. Ecker Report, Vol. 2, p. 264, "The Connecticut Osteopath," June 1912. 12. The Chiropractor Journal, June 1912, Vol. 1, No. 8, p. 29. 13.
Langworthy, S.M., "Chiropractic Facts," 1905 Edition, 1905. 14. Palmer, D.D., "Chiropractic Facts," 1905. 15. Ibid., p. 1. 16. Palmer letter, S.M. Langworthy to D.D. Palmer, Jan. 19, 1902—Palmer Archives. 17. Palmer, D.D., "Chiropractic Parallax," p. 3—Palmer Archives. 18. Palmer, D.D., "Chiropractic Parallax," 1905. 19. Gibbons, Russell W., "Solon Langworthy: Keeper of the Flame," Chiropractic History: The Archives and Journal of the Association for the History of Chiropractic, Vol. 1, No. 1, p. 1. 20. Langworthy, S.M., "Chiropractic Facts," 1905. 21. The Chiropractor, May 1905, Vol. 1, No. 6, p. 9. 22. The Chiropractor, Vol. 5, p. 218. 23. Backbone, Vol. 1, p. 199. 24. Palmer School of Magnetic Cure, Corporate Charter—1896, Palmer Archives. 25. Chiropractic Proofs, 1907, p. 3. Copyrighted, Mervyn V. Zarbuck, D.C., July 1988. Reproduction in any part or form only by express written permission of the author. LEGISLATIVE SESSION ENDS Robert Brinkmeier, IPSCA Lobbyist The Illinois General Assembly adjourned on July 2, 1988, without taking any action on the major issues confronting them, i.e., a tax increase for education and human services. The General Assembly did not pass the controversial tax increase which many people believed to be essential to the state's future social and economic health. State support for both higher and lower education has been declining as a percentage of state spending for the past ten years. Many other states have been pouring money into education and have enacted a series of education improvement plans. Illinois has enacted school reforms but has not followed up with the necessary financial support. Now that the Illinois General Assembly has adjourned, it is time for our members to analyze the performance of their respective legislators and take appropriate action during the Fall political campaign.
Your Legislative Committee has already started to analyze the voting records of all representatives and senators and will soon have a report ready for the Board of Directors. Homecoming The annual Homecoming banquet on Saturday evening will feature stand-up comedian Stan White as after-dinner entertainment. Delta Sigma Chi, the oldest chiropractic fraternity in the world, will mark its seventy-fifth anniversary on July 18. The actual celebration of the seventy-fifth anniversary for the brothers of all chapters will be at Homecoming. An estimated 3,000 Palmer College alumni, spouses, and guests are expected to attend Homecoming 1988. CHRISTIAN CHIROPRACTORS ASSOCIATION AWARDS MRS. MARGUERITE SMITH THE "CHRISTIAN CHIROPRACTOR SPOUSE OF THE YEAR" AWARD The Christian Chiropractors Association awards the "Christian Chiropractic Spouse of the Year" plaque each year to a spouse who has shown himself or herself to be a faithful servant of the Lord Jesus Christ. This year the past recipient, Mrs. Lois Kalsbeek of Castro Valley, California, made the presentation at the annual convention banquet in Union, Washington. Mrs. Marguerite Smith was unable to attend due to illness. Mrs. Smith has been an Auxiliary member of CCA since 1970. She is the wife of Willard Smith, D.C., who is in private practice in Rock Island, Illinois, and also an instructor at Palmer College of Chiropractic, Davenport, Iowa. Mrs. Smith has been the Office Manager of Doctor's Clinic. Dr. Smith is fortunate to have a life partner who is his business consultant, advisor, companion, and friend. The CCA recognizes Mrs. Smith's faithful financial support, but more importantly her faithfulness as a prayer warrior. She regularly prays for the CCA Home Office and the CCA ministries. The Christian Chiropractors Association welcomed the opportunity to recognize Mrs. Marguerite Smith for her faithfulness to her Lord through this Association.
The vote was unanimous that she truly is the "Christian Chiropractic Spouse of the Year." 8-year-old Trans World X-ray 300/125; $13,000 deluxe package, electronic brakes, 12 to 1 full spine bucky, call (618) 662-4100 July 1988/IPSCA Journal of Chiropractic/19
The Board of Directors (the “Board”) of Harris County Municipal Utility District No. 132 (the “District”) met in regular session, open to the public, at the Atascocita Country Club, 2014 Pinehurst, Humble, Texas, 77346, on July 16, 2009 at 6:00 p.m.; whereupon the roll was called of the Board, to-wit:

Ray Hughes, President
Tim Stine, Vice President
Bobby Haney, Secretary
Don House, Assistant Secretary
Jerrel Holder, Assistant Secretary

All members of the Board were present except Director Hughes. Also attending all or parts of the meeting were Mr. Michael Keefe of Bob Leared Interests, tax assessor and collector for the District; Ms. Freida Conley of Myrtle Cruz, Inc., bookkeeper for the District; Mr. Leroy Mensik of Severn Trent Environmental Services, Inc. (“ST”), operator of the District’s facilities; Ms. Amy Zapletal of Brown & Gay Engineers, Inc. (“Brown & Gay”), engineer for the District; Ms. Jana Cogburn and Ms. Carla Christensen of Fulbright & Jaworski L.L.P. (“F&J”), attorneys for the District; and numerous members of the public. A sign-in sheet is attached hereto as Exhibit “A.” **Call to Order.** The Vice President called the meeting to order in accordance with notice posted pursuant to law, copies of certificates of posting of which are attached hereto as Exhibit “B”, and the following business was transacted: 1. **Minutes.** Proposed minutes of the meeting of June 25, 2009, previously distributed to the Board, were presented for approval. Upon motion by Director Haney, seconded by Director Holder, after full discussion and the question being put to the Board, the Board voted unanimously to approve the minutes of the meeting of June 25, 2009, as presented. 2. **Receive comments from the public.** There were no comments from the public at this time. 3. **Discuss and take necessary action regarding the Atascocita Country Club and golf course property.**
Director Stine reported that the District has not received any requests from the new owners of the Atascocita Country Club. Director House noted that the District has an agreement with Atascocita Country Club dated July 29, 1975 to provide utility service. Director House stated that, in accordance with the agreement, the Atascocita Country Club purchases water and sanitary sewer services from the District to serve the Country Club. Ms. Cogburn noted that the term of the agreement was 20 years, with a provision that automatically renews every year until terminated by either party 90 days prior to the end of the one-year term. In response to a question from a resident, Director Stine noted that it is premature to discuss the District’s options regarding the possibility of the District acquiring the new country club property. 4. **Discuss and take action in connection with security contract with ADT and payment of same.** Mr. Mensik reported that all the security cameras have been installed at the District’s facilities. Mr. Mensik reported that ADT has already uploaded the necessary security software to ST’s computer system and has provided training on the new security system to ST personnel. In response to a question from Mr. Mensik, the Board members stated that they do not want the new security software downloaded on their computers. Discussion ensued regarding payment to ADT. In response to a question, Mr. Mensik stated that he will determine the date that the security system was completed and provide such date to the District’s bookkeeper and attorney. Upon motion by Director Haney, seconded by Director House, after full discussion and the question being put to the Board, the Board voted unanimously to authorize payment to ADT for the installation costs in connection with the new security system.
It was the consensus of the Board to authorize payment of monthly security service fees (as of the date of completion) at the next Board meeting, and that F&J prepare and forward a letter to ADT requesting that ADT remove all monthly billing charges prior to the completion date, provide a revised invoice, and reimburse the District for all of the District’s operator’s “response to alarm” calls made prior to completion of the security equipment. 5. **Review Bookkeeper’s Report and Investment Report.** The Vice President recognized Ms. Conley, who presented to and reviewed with the Board the Bookkeeper’s Report for the period ending July 15, 2009 and the Investment Report, copies of which are attached hereto as Exhibit “C.” Upon motion by Director Haney, seconded by Director Holder, after full discussion and the question being put to the Board, the Board voted unanimously to accept the Bookkeeper’s Report for the period ending July 15, 2009, to approve the Investment Report, and to authorize payment of check numbers 6520 through 6581 from the Operating Account and check number 5088 from the Capital Project Account, all as listed in the Bookkeeper’s Report. 6. **Discuss and take necessary action in connection with current electricity rates and electricity contract with Suez.** Director Haney reported that the District’s current contract with Suez does not expire until the end of September 2009. Director Haney stated that he will continue to investigate the District’s options in connection with entering into a new electricity contract. 7. **Review Tax Collector’s Report and authorize payment of certain bills.** Mr. Keefe presented to and reviewed with the Board the Tax Assessor and Collector’s Report for the month of June 2009 and the delinquent tax attorney report, copies of which are attached hereto as Exhibit “D.” Mr. Keefe noted that 97.7% of the District’s 2008 taxes had been collected as of June 30, 2009.
Upon motion by Director Haney, seconded by Director House, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Tax Assessor and Collector’s Report and to authorize payment of check numbers 1429 through 1442.

| Project Description | Budget | Engineering Costs | Construction Costs | Total |
|----------------------------------------------|------------|-------------------|--------------------|-------------|
| Atascocita Point Drive SS repair | $70,000 | $34,115.26 | $62,333.88 | $96,449.14 |
| Water Plant Disinfection modifications | $37,443 | $9,075.93 | $6,828.24 | $15,904.17 |
| Water Plant Fence Replacement Project | $215,000 | | | |
| Sanitary Sewer Rehabilitation, Phase IV | $197,482 | | | |
| **TOTALS** | **$519,925** | **$43,191.19** | **$69,162.12** | **$112,353.31** |

**Construction Plan Review:** a. Chateaux at Pinehurst Apartments: Brown & Gay approved the plans in October 2008. The developer is required to provide Brown & Gay and ST video inspections of the existing sanitary sewer mains to confirm the condition of the original construction prior to connecting to the District’s sanitary sewer system. Brown & Gay has not received records of a video inspection. b. NE Corner of Atascocita Road & Town Center Boulevard (Bank to be constructed on 1.853 AC): Brown & Gay provided Bury+Partners the plans and District submittal requirements in October 2008. c. Rowland Interests-Atascocita Business Park/Sports Complex (19505 West Lake Houston Parkway): The preliminary construction plans submitted by H2B, Inc. have been reviewed. Financing has been secured and the developer hopes to commence construction before mid-July 2009. d. Atascocita Lutheran Church: No plans have been received to date. e. Southwest corner of FM 1960 East & Atascocita Shores: Nothing new. f.
Atascocita Shores Personal Warehouse: Nothing new. g. Residential/commercial construction at FM 1960 East & Atascocita Shores: Nothing new. h. Proposed office building north of FM 1960 East and Atascocita Shores Drive: Nothing new. i. NE and NW corners of FM 1960 East and Atascocita Shores Drive: Nothing new. **Water Plants No. 1 and 2 Fence Replacements and Landscape Improvements:** Eleven submittals have been reviewed and approved to date. Construction at Water Plant No. 2 on West Lake Houston Parkway is approximately 95% complete. The contractor is waiting for gate materials. Fencing replacement is approximately 50% complete at Water Plant No. 1 on Rebawood Drive. Pay Estimate No. 1 from T&C Construction, Ltd. in the amount of $74,698.20 has been reviewed and is approved for payment. This invoice includes payment for partial completion of demolition and fencing at Water Plant No. 2, for completion of temporary fencing and change order work at Water Plant No. 2, and for the contractor's performance and payment bonds. Funds totaling ten percent of the completed work to date ($8,299.80) remain on retainage. **Water Plants No. 1 and 2 Disinfection System Improvements:** The construction plans have been signed by the City of Houston. Harris County signatures are still pending but Brown & Gay expects to receive them this week. Per the attached June 26, 2009 letter from the TCEQ, the project is conditionally approved for construction. Brown & Gay is responding to the TCEQ’s approval letter to address comment no. one and correct the statement regarding the District’s emergency water interconnections. Brown & Gay is still awaiting the TCEQ approval of the disinfection conversion (a separate approval letter will be issued). The District’s Operator will be provided fully-approved construction plans and the TCEQ approvals by Brown & Gay as documentation of approval to install the improvements as previously approved by the District.
As part of the required communication with the TCEQ, Brown & Gay must notify the TCEQ when construction commences and must certify that the work is completed as approved in the plans. Brown & Gay will continue to communicate with ST throughout the project to satisfy these conditions. **Atascocita Joint Operations Board (Final Engineering Report from June 23, 2009):** Nothing new. AJOB is waiting for instructions from the TCEQ on how and when to make the required payment to the Gulf Coast Waste Disposal Authority’s “River, Lakes, Bays ‘N Bayous Trash Bash.” Upon motion by Director Haney, seconded by Director House, after full discussion and the question being put to the Board, the Board voted unanimously to approve the Engineer’s Report and to approve Pay Estimate No. One in the amount of $74,698.20 to T&C Construction, Ltd. in connection with the fence replacement and landscaping improvements at water plant no. one and water plant no. two and authorize payment of same. 10. **Review and authorize capacity commitment letters.** Ms. Zapletal stated that no capacity commitment requests have been received since the last meeting. Ms. Zapletal reported that F&J and Brown & Gay met with the new golf course owners and informed them of the requirements for separating the capacity commitments to the clubhouse and the pool/tennis center. To date, Brown & Gay has not yet received a request detailing the division of capacity. Ms. Zapletal reviewed with the Board a summary of the District capacity allocation:

| WWTP ESFC Not Committed | Water ESFC Not Committed | Undeveloped Acreage |
|-------------------------|--------------------------|---------------------|
| | | |

Ms. Zapletal reported that the limiting factor is the remaining hydro-pneumatic tank ("HPT") capacity, which is rated at 17.4 gallons per connection (rather than 20 gallons per connection) as part of the TCEQ's approved variance from the elevated storage requirement.
Brown & Gay previously estimated that in late 2009 the Board should start discussing the necessary water supply system improvements (HPT, ground storage, and booster pumps). Ms. Zapletal noted that bond funds totaling $1,117,600 remain escrowed for construction of such improvements and $192,992 is available for engineering in connection with the improvements. 11. **Award contract for the next phase of the sanitary sewer rehabilitation.** Ms. Zapletal reported that portions of the following areas are included for rehabilitation under the scope of this project:

- Atascocita Shores, Sections 1-5
- Atascocita Villas
- Estates of Pinehurst
- Golf Villas
- Pinehurst of Atascocita, Sections 1-4, 7, 11
- Pines of Atascocita, Sections 1 and 2
- Atascocita Town Center, Sections 1 and 2
- Pinehurst of Atascocita/Atascocita Shores (trunk mains)
- Miscellaneous point repairs

Brown & Gay estimates the following updated schedule for the project:

- Authorization to advertise received on March 19, 2009; amended scope added on April 16, 2009
- All sets of plans have been submitted to Harris County for review
- Harris County comments have been received on all plans
- Comments have been addressed on all sets of plans
- Submitting intermittently for Harris County signatures on all plans by July 24, 2009
- Estimated Notice to Proceed to contractor before the middle of August 2009

The bonds and insurance provided by Insituform Technologies, Inc. have been reviewed and approved by F&J. The contracts are provided today for the Board’s signature. The Agreement will not be dated or become effective until signatures have been received on all sets of construction plans. The preconstruction meeting will be conducted on July 23, 2009 at 10:00 am. The Notice to Proceed will not be issued until the Agreement becomes effective and signatures have been received on all sets of construction plans. Tolunay-Wong Engineers, Inc. (TWEI) provided the attached Proposal No.
P09-C133 for construction material testing services in an estimated amount of $17,078.00. Fees will be invoiced based on actual expenses incurred during construction. Upon motion by Director Haney, seconded by Director House, after full discussion and the question being put to the Board, the Board voted unanimously to authorize execution of the contract with the low bidder, Insituform Technologies, Inc., for phase IV of the sanitary sewer rehabilitation, and to approve Proposal No. P09-C133 with Tolunay-Wong Engineers, Inc. in the amount of $17,078 for construction material testing services. 12. **Discuss and take any action in connection with drainage issues in Kings River Estates, Section 4 (“KRE4”), including award of contract for construction of improvements.** **Diversion Swale and Berm for Kings River Estates, Section Four:** a. C.E. Barker, Ltd. submitted the low bid of $249,678.57 on April 2, 2009. The contracts have been signed by the Board but are still pending final receipt of Harris County signatures on the plans. The reviewer for Harris County Flood Control District (“HCFCD”) returned the mylars again recently with requests for additional information. Brown & Gay sent another letter to HCFCD on July 16, 2009 and is awaiting signature approval on the plans. Brown & Gay and Directors Haney and House met with Mr. Hammond and Mr. Stunja with Pinehurst Trail Holdings, LLC, on July 6, 2009 to review the project alignment, to discuss an alternative box culvert intake structure to minimize impact to the tee box (and Brown & Gay suggested vertical bars on pipe), to provide a simple lake overflow with an aesthetic rock treatment versus plain concrete, and to confirm desires by all parties to minimize tree removal. The attached layout and overflow detail have been reviewed and approved preliminarily by all parties. For the realigned project, easements will be revised. The construction plans will be revised to reflect the agreed-upon facilities.
Brown & Gay will work with the contractor to provide a breakdown of the costs for non-bid items and will confirm use of bid prices for the extended box culvert. The required Small Construction Site Notice (“SCSN”) and Storm Water Pollution Prevention Plan (“SWPPP”) documents will be finalized once the construction dates are known. As also required, copies will be provided to Harris County, which is the Municipal Separate Storm Sewer System (“MS4”) operator and is responsible to the TCEQ for the storm water management program. b. Drainage Improvements (internal improvements) for Kings River Estates, Section Four: C.E. Barker, Ltd. submitted the low bid of $337,586.70 on April 7, 2009. Notice to Proceed was issued for June 15, 2009. The required SCSN and SWPPP documents have been provided to the MS4 Operator, Harris County. Change Order No. One in the net amount of $1.52 (addition of $20,470.56 and deletions of $20,469.04) is proposed to resolve conflicts with the storm sewer by boring new sanitary sewer leads. As authorized in June 2009, Directors Haney and House provided preliminary review and approval of the attached change order. Brown & Gay requests the Board’s formal authorization and signature of Change Order No. One. Pay Estimate No. 1 from C.E. Barker in the amount of $125,294.90 has been reviewed and is approved for payment. Funds totaling ten percent of the completed work to date ($19,888.08) remain on retainage. C.E. Barker is installing the concrete curbing and swales to the new inlets. Adjustments to some of the manholes will be completed prior to the completion of the project. The Embarq telephone line was relocated in two locations to accommodate construction. Embarq estimates the work to cost no more than $750.00 per relocation. One location of damage by the contractor was due to marking mistakes by Embarq. Repairs will not be back-charged to the contractor or the District.
Upon motion by Director Haney, seconded by Director House, after full discussion and the question being put to the Board, the Board voted unanimously to approve Change Order No. One in the net amount of $1.52 and authorize payment of Pay Estimate No. One in the amount of $178,982.71 ($125,294.90 is District’s share and $53,697.81 is KRE4’s share) in connection with the drainage improvements (internal) to serve Kings River Estates, Section Four. 13. **Discuss and take action in connection with request from Atascocita Country Club regarding termination of Amended and Restated Agreement for Maintenance of Drainage Ditch.** Ms. Cogburn reported that she and Ms. Zapletal previously met with the new owners of the Country Club and the Country Club seemed agreeable to having the District maintain the ditch. This item was tabled. 14. **Discuss and take any action in connection with District communications.** Ms. Christensen reported that Ms. Wynn is coordinating with the Board regarding the third quarterly newsletter. It was noted that the District’s operator provided information to Ms. Wynn regarding the Drought Contingency Plan for an article to be posted on the District’s website. It was the consensus of the Board that Ms. Wynn include an article in the next newsletter informing residents about the email blast option and how to sign up. 15. **Approve and authorize execution of an Interlocal Agreement with Harris Galveston Coastal Subsidence District for Waterwise Program.** The Board reviewed a proposed Interlocal Contract, a copy of which is attached hereto as Exhibit “G.” Upon motion by Director Haney, seconded by Director Holder, after full discussion and the question being put to the Board, the Board voted unanimously to approve and authorize execution of an Interlocal Agreement with Harris Galveston Coastal Subsidence District for the Waterwise Program. In response to a question, Ms. Christensen stated that she will contact Ms. 
Brown with the Subsidence District to determine why the enrollment in the Waterwise Program has decreased for the upcoming school year. 16. **Executive Session pursuant to Section 551.071, Texas Government Code, as amended, to discuss litigation.** The Board did not convene in Executive Session at this time. 17. **Executive Session** pursuant to Section 551.076, Texas Government Code, as amended, to discuss security related matters at District facilities. The Board did not convene in Executive Session at this time. 18. **Other matters.** There were no other matters to come before the Board at this time. THERE BEING NO FURTHER BUSINESS TO COME BEFORE THE BOARD, the meeting was adjourned. The above and foregoing minutes were passed and approved by the Board of Directors on August 20, 2009. ATTEST: President, Board of Directors Secretary, Board of Directors (DISTRICT SEAL)
The Tribune may not be on the "inside," but Colonels Bishop and Smith will hardly deny its being "in it" in a mild way. But we are always on the "outside." That was a nasty swipe Bishop of the Courier gave County-Judge Beck, last week. The Colonel deserves to be doubled into the likeness of a jack knife for that dig. It is a matter of wonderment with many that the Times-Democrat is so silent on the vital question of the fall campaign—the A. P. A. Or is he like Bishop—afraid. There is not a single Republican principle at stake in Red Willow county this fall. Which affords the party an excellent opportunity to rebuke the secret society that has assumed the prerogatives and title of the party in this county. The Republican party will do well to throw off this galling yoke of bondage, and be in line for 1896. The A. P. A. leaders are insincere. We have in mind the language of one of the most prominent leaders of October 10th, in which he advised a friend not to allow the association to get a foothold in the community in which the friend lived, stating "the association was a bad thing for a town." Within ninety days this same gentleman was a bright and shining light in the lodge here. The Tribune can only view the impending defeat of the A. P. A. ticket with any degree of complacency at all on the ground that it will be the salvation of the Republican party of this county, and will put us in line for a glorious and complete victory in 1896. But the party must stand alone by itself—even if it has to go through the fires of defeat to do so. This one-man power in the party needs a setback. A few more such leaders must be unloaded. Cleaner and squarer methods adopted. Then the victory is our own. 
The Republican party is the party of property and light, not of proscription and prejudice; and when it allows itself to be run or absorbed, or even to be dictated to, by a secret organization which denies the rights of citizenship to even a small portion of its honorable membership, the party is clearly and reprehensibly forgetful of the glorious purpose of its organization, and of the ends of its very existence. The Republican party of Red Willow county is not in edgewise. Its future will depend upon the heroic action of such Republicans as esteem party integrity above success. A mistake in judgment is often just as fatal in politics as in other matters. Before the Republican county convention, the A. P. A. leaders were almost exclusively consulted. Others of the party who did not affiliate with the order were ignored completely. This error was even followed by the nominees for some time after the convention. But now, as the magnitude of the issue is seen and the indignity of the usurpation of the prerogatives of the Republican party by the order becomes more and more apparent, the dark signs of impending disaster gather about the unfortunate men and ticket, the situation becomes critical, and no earnest effort is being made, outside of this city at least, to deny any admission to the knowledge of the order. These 56 solid votes so triumphantly cast by the clever and astute president of Council 100 may yet prove the death of the ticket. FOR COMMISSIONER. The quite general feeling in this city that this commissioner district should be represented by a well qualified business man, and that the commissioner's home should be in the city, where he is more accessible to all parties interested—both in country and town—has this week found an echo and expression in the announcement of Mayor C. T. Brewer as an independent Republican candidate for commissioner for the Third district. C. T. Brewer is well qualified to represent this district as an active and efficient commissioner. 
He is one of our oldest business men. He is acquainted with the needs of the district. He is one of the county's reliable business men. He has always been identified with the county affairs and is part of the community. He will be a formidable candidate at the coming election. THE A. P. A. is proving a Jonah instead of a Moses. Just about the time people become so excessively intoxicated with a little power, they lose their horse sense and get it in the idiotic shape. H. B. Harris's friends are determined to rally around him in fine shape—although he did not have 26 delegates to nominate him, but 34 out of the power. The leader is a capable and deserving man and his friends will put up a stiff fight for him. When we see such towns as Oxford and Oxford, organize councils of the American Protective Association within their borders, we wish the business men of those town had come to McCook and observed the way that organization has blotched this city. How democratic and loyal our liberties and privileges have been protected and preserved! If they had not been so much pleased would have encouraged the association to come and protect them. Such an organization is the most disturbing thing that could possibly be expected into the commercial, social or political life of a community and all towns will do well and righteously to discourage it. The Republican party of Red Willow county is now encamped within the stumps of the secret organization known as the A. P. A. The secret society has gone outside of its proper sphere in absorbing the Republican party. And the Republican party of good men which has fallen below the dignity and purpose of its existence in allowing itself to be swallowed up by anybody or anything. The result of this fall's campaign will tell largely to decide whether the Republican party shall rehabilitate itself and resume business as usual, or whether the party shall quietly sleep away its days usefulness as a mere playing ground for the A. P. 
A., or whether it shall become a party organization which proscribes many honorable and patriotic members of the party, and operates behind locked doors. Our old friend and "frosty old man" from "among the hills of McCook," Low A. K. of the Meyersdale (Pa.) council—makes a few pertinent observations apropos and anent the recent and puerile attempt of certain newspapers of this county to read and write the publication of the Republican party. He says: "Some of the freak editors of Red Willow county, Neb., are trying to turn down editor F. M. Kimmell of the Tribune. At least one of them is a new recruit to Republicanism, and yet dry and unseasoned, and he is trying to read the veteran, who has fought the battles of the party for twenty years, out of the party. It is a pity such fools enter into the party, and worse yet, into the chamber of sanctums. They are not satisfied with power, but want the throne. If Kimmell is of the stuff the Kimmells of this county are made of, he isn't worrying himself very much about the matter." Awarded Highest Honors—World's Fair, DR. PRICE'S CREAM BAKING POWDER MOST PERFECT MADE. A pure Grape Cream of Tartar Powder, Free from Ammonia, Sulphur or any other adulterant. 40 YEARS THE STANDARD. INDIANOLA. Elmer Rowell was down from McCook, Tuesday. Miss Edna Messer spent Sunday with her parents at Omaha. Judge J. W. Welty is seeing his many friends here today. S. E. Hager made a flying visit to the metropolis, last evening. Mrs. Smith received a visit from her brother from Arapahoe, Sunday. And now poor "Juckes" has run afoul Colonel Bishop. Ah, me! L. B. Beekwith returned from Denver, Friday morning, via McCook. T. E. McDonald, cashier of the Bank of Danbury, was in town on Monday. L. W. Smith and two or three others visited the commercial center, Saturday. J. W. Dolan of the State bank and son James went up to Denver, Tuesday night. Messrs. J. E. Allen and F. M. Kimmell were guests of Mrs. J. B. Messer, Thursday. 
James Robinsons wife and young children up to Denver, Tuesday night, to attend the hunt. I. M. Smith has been looking after his political interests in McCook since victory, this week. A. D. McConaughy & P. Walsh, two of McCook's Democratic war heroes, were in the city, Monday. William Allen and family, from Chicago, visited Saturday. Mrs. Allen's brother, W. W. Gervar, returned with them. It is amusing to see the Courier go after the McCook A. P. A., and then observe the "masterly silence" and "stand in" of the Tribune. J. B. Messer, of McCook, Tuesday evening, returned home on the following evening. He will make a mass meeting at Lebanon, tomorrow evening. Colonna Mitchell and Barnes do not agree as touching the value of reform. But they claim the interest of the city, if we can not subscribe to his political creed. This community is no friendly soil in which to plant a secret society and its aims. McCook is wide open and is keeping her people in a feverish turmoil all the time. W. R. Starr spent a few hours in McCook, Monday evening, telling the boys of his splendid Rebellion majority. Indiana county council would roll up, this fall, for the A. P. A. ticket. Colonel Bishop, himself, lifted up his blueherring and gave his merry and lowly correspondent a broadside he generally goes to the other side of the mark. The Colonel isn't much of a pot luck hunter any way. The Populist central committee in McCook, Saturday last, decided not to fill the place on their ticket made vacant by S. T. Partridge, who has been appointed for county superintendent. It is understood, that the Populists will support Prof. L. W. Smith, the Democratic nominee for the able and popular superintendent of the Indianola public schools, who will run like a steamboat. NORTH STAR GLEANINGS. Al Kelley is husking corn for J. B. Pickering. Ira Neal started, last week, for an extended trip to Minnesota. Edna Whitaker was a visitor at the Smiths' last, last day. Mrs. 
Kilgore of McCook was out looking after her real estate, last week. Mrs. Joseph Dudak returned from an extended eastern visit, the latter part of last week. Ethel Korn of Indianola was the guest of the Misses Endebey, latter part of last week. BENNETT. We now have in stock a full line of the Riverside Oak, the White Oak and the Bonded White Oak Heaters—both soft and hard coal burners. In fact we have the finest thing in heating stoves to be bought in this section of the state, exhibited in this city. Don't fail to see our Ventiduct Heaters. We also have the popular Stove and Cook Stove in stock. Everybody ought to have one of our Square Oven Cooks—they are the latest and the very best. COCHRAN & CO. S. CORDEAL. Notary Public, Reliable Insurance, Collection Agent. SUNNY SIDE DAIRY. We respectfully solicit your business, and guarantee pure milk, full measure, and prompt, courteous service. LEWIS W. SMITH, Bonded Abstractor, B. G. GOSSARD, Ass't. INDIANOLA, - NEBRASKA. JULIUS KUNERT, Carpet Laying, Carpet Cleaning. I am still doing carpet laying, carpet cleaning, lawn cutting and other work. See what I am doing giving such low charges as very reasonable. Send orders at once. JULIUS KUNERT. J. S. MCBRAYER, PROPRIETOR OF THE McCook Transfer Line. BUS, BAGGAGE AND EXPRESS. ANTI-RUST TINWARE. Remember, we are showing the best line of Buggies, Carts and Wagons to be seen in this part of the Republican valley. Cochrans & Co. BUGGIES AND CARTS. Only furniture van in the city. Also have a first class horse moving outfit. Leave orders for buggies at Commercial hotel or at office opposite the depot.
Daniel Martins Coutinho A theory based, data driven selection for the regularization parameter for LASSO Dissertação de Mestrado Thesis presented to the Programa de Pós-graduação em Economia do Departamento de Economia da PUC-Rio in partial fulfillment of the requirements for the degree of Mestre em Economia. Advisor: Prof. Marcelo Cunha Medeiros Rio de Janeiro November 2020 Daniel Martins Coutinho A theory based, data driven selection for the regularization parameter for LASSO Thesis presented to the Programa de Pós-graduação em Economia da PUC-Rio in partial fulfillment of the requirements for the degree of Mestre em Economia. Approved by the Examination Committee: Prof. Marcelo Cunha Medeiros Advisor Pontifícia Universidade Católica do Rio de Janeiro – PUC-Rio Prof. Ricardo Pereira Masini FGV-SP Prof. Anders Bredahl Kock University of Oxford Rio de Janeiro, November the 6th, 2020 Daniel Martins Coutinho Graduação em Ciências Econômicas pela PUC-Rio e mestrado em Ciências Econômicas pela PUC-Rio Bibliographic data Martins Coutinho, Daniel A theory based, data driven selection for the regularization parameter for LASSO / Daniel Martins Coutinho; advisor: Marcelo Cunha Medeiros. – 2020. 38 f.: il.; 30 cm Dissertação (mestrado) - Pontifícia Universidade Católica do Rio de Janeiro, Departamento de Economia, 2020. Inclui bibliografia 1. Economia -- Teses. 2. Aprendizado por Máquina. 3. LASSO. 4. adaLASSO. 5. Parâmetro de Regularização. I. Cunha Medeiros, Marcelo. II. Pontifícia Universidade Católica do Rio de Janeiro. Departamento de Economia. III. Título. CDD: 000 I thank Marcelo Medeiros, my advisor, for the support and ideas. I would also like to thank my family for their support in these two years. I have been fortunate to have many friends along this journey, and I thank, among others, Daniel Sá Earp, Leila Vieira and Lucas Maynard. The staff of the Economics Department at PUC-Rio were always helpful in navigating the bureaucracy. 
I would also like to acknowledge the financial support from CNPq and CAPES. We provide a new way to select the regularization parameter for the LASSO and adaLASSO. It is based on the theory and incorporates an estimate of the variance of the noise. We show theoretical properties of the procedure and present Monte Carlo simulations showing that it is able to handle more variables in the active set than other popular options for the regularization parameter. Keywords Machine Learning; LASSO; adaLASSO; Regularization Parameter. Martins Coutinho, Daniel; Cunha Medeiros, Marcelo. *Selecionando o parâmetro de regularização para o LASSO: baseado na teoria e nos dados*. Rio de Janeiro, 2020. 38p. Dissertação de Mestrado – Departamento de Economia, Pontifícia Universidade Católica do Rio de Janeiro. O presente trabalho apresenta uma nova forma de selecionar o parâmetro de regularização do LASSO e do adaLASSO. Ela é baseada na teoria e incorpora a estimativa da variância do ruído. Nós mostramos propriedades teóricas e, por meio de simulações de Monte Carlo, que o nosso procedimento é capaz de lidar com mais variáveis no conjunto ativo do que outras opções populares para a escolha do parâmetro de regularização. **Palavras-chave** Aprendizado por Máquina; LASSO; adaLASSO; Parâmetro de Regularização. # Table of contents 1 Introduction 10 1.1 Notation 11 2 The Algorithm 12 3 Theory 15 3.1 Convergence of the Algorithm 16 3.2 Regularization Parameter 20 4 Simulations 21 4.1 Convergence 21 4.2 Model selection 23 4.3 Regularization Parameter 27 5 Empirical Example 31 6 Conclusion 33 Bibliography 34 | Figure 4.1 | Experiment 1. Boxplot of estimated variances | 22 | |-----------|---------------------------------------------|----| | Figure 4.2 | Experiment 2. Boxplot of estimated variances | 23 | | Figure 4.3 | Experiment 3. Boxplot of estimated variances | 23 | | Figure 4.4 | Experiment 4. Boxplot of estimated variances | 24 | | Figure 4.5 | Experiment 5. 
Boxplot of estimated variances | 24 | | Figure 4.6 | Boxplot of estimated standard deviation | 29 | | Figure 4.7 | Boxplot of estimated standard deviation | 30 | | Table | Description | Page | |-------|-----------------------------------------------------------------------------|------| | 4.1 | Design 1: Result for 5000 replications | 25 | | 4.2 | Design 2: Result for 3000 replications | 25 | | 4.3 | Design 3: Result for 1000 replications | 25 | | 4.4 | Design 4: Result for 1000 replications | 25 | | 4.5 | Design 5: Result for 1000 replications | 26 | | 4.6 | Design 6: Result for 1000 replications | 26 | | 4.7 | Simulations with fixed design | 26 | | 4.8 | Simulations with Subexponential error | 27 | | 4.9 | Simulations with Polynomial Tails: Student’s t Distribution with 4 degrees of freedom | 27 | | 4.10 | Simulations with Polynomial Tails: Student’s t Distribution with 8 degrees of freedom | 28 | | 4.11 | Simulations with Polynomial Tails: Student’s t Distribution with 3 degrees of freedom | 28 | | 4.12 | Design 1, 2000 replications | 28 | | 4.13 | Design 3, 2000 replications | 28 | | 5.1 | Coefficients: Effect over Violent Crimes | 31 | | 5.2 | Coefficients: Effect over Property Crimes | 31 | | 5.3 | Coefficients: Effect over Murder | 31 | | 5.4 | Number of variables selected: Violent Crimes | 32 | | 5.5 | Number of variables selected: Property Crimes | 32 | | 5.6 | Number of variables selected: Murder | 32 | 1 Introduction The linear model, usually estimated by ordinary least squares (OLS), is the workhorse for the analysis of economic data. It provides reliable statistical properties and easy interpretation. However, nowadays it is not uncommon to have more variables than observations, which precludes the use of OLS. This arises in forecasting, in which one uses a large number of inputs to make better forecasts, or in causal inference, in which one has a large number of variables that are potential confounders that should be used as controls. 
The LASSO (Least Absolute Shrinkage and Selection Operator), first suggested by Tibshirani (1996), extends the usual OLS estimators and allows for more variables than observations. It is able to select variables using the $\ell_1$ norm as a penalty, which induces kinks in the objective function. The main issue with the LASSO is selecting the regularization parameter. There are many possible choices: information criteria, Cross Validation, and some attempts to choose the regularization parameter using the theory created for the LASSO. As shown in Bickel et al. (2009), and discussed in Bühlmann & Van de Geer (2011) and Wainwright (2019), setting the regularization parameter $\lambda = 2\sigma \sqrt{2 \log(p)/n}$ guarantees good results. However, it requires knowledge of the variance of the error. The contribution of this paper is twofold. First, we show how to use the regularization parameter of Bickel et al. (2009) through an iterative procedure in which at each step we estimate the model and, using the coefficients obtained, compute the variance of the residual. We discuss the convergence properties of our algorithm. In general, it is not true that the LASSO version converges: convergence is highly dependent on the sample size, the number of variables and the size of the active set. On the other hand, a simple twist of the LASSO, called the adaptive LASSO, first suggested by Zou (2006), gives much better results regarding convergence. The second contribution is to show that, when using the adaLASSO, one can use $\lambda = \sigma \sqrt{2 \log(p)/n}$ and the proofs still work. We show theoretical results for this regularization parameter for the adaLASSO. Bickel et al. (2009) is the main article on which we base our ideas. The selection of the regularization parameter is a key problem for the LASSO and an active area of research. There are suggestions based on information criteria, such as Zhang et al. (2010), Fan & Tang (2013) and Hui et al. 
(2015), among others; and some suggestions based on the theory, such as Belloni et al. (2012) and Belloni et al. (2013). See Coutinho et al. (2017) for a review of different choices of the regularization parameter for the adaLASSO and the LASSO. The remainder of this paper is organized as follows: the next section describes the algorithm. Section 3 presents the theory. Section 4 shows the Monte Carlo simulations. Section 5 presents an empirical example, and the last section concludes. 1.1 Notation We write $\hat{\beta}$ for the estimated vector of coefficients and $\beta^0$ for the true vector of coefficients. $X_S$ denotes the columns of $X$ for which the corresponding entries of $\beta^0$ are different from zero, and $X_{S^c}$ the columns of $X$ for which the corresponding entries of $\beta^0$ are equal to zero. Therefore, $\beta^0_S \neq 0$ and $\beta^0_{S^c} = 0$. There are $p$ variables, and $s$ is the cardinality of the set $S$. We use $\|x\|_q = (\sum_i |x_i|^q)^{1/q}$, with the convention that $\|x\|_\infty = \max_i |x_i|$. In the algorithm definition, we use $Sd(u) = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (u_i - \bar{u})^2}$, the empirical standard deviation of $u$, in which $\bar{u}$ is the mean of $u$. 2 The Algorithm Formally, the LASSO solves: \[ \beta_{LASSO} \in \arg \min_{\beta} \frac{1}{2n} \sum_{i=1}^{n} (Y_i - X_i \beta)^2 + \lambda \sum_{j=1}^{p} |\beta_j|, \] in which $\lambda$ is the regularization parameter. Algorithms for solving the LASSO problem are well established, and our algorithm focuses on selecting the regularization parameter. The choice of the regularization parameter is an important part of the procedure, as it controls how many variables are added to the model and the amount of shrinkage of the coefficients. Cross Validation is a popular choice (see Hastie et al. (2009)). However, Cross Validation can be slow for large data sets and is not suitable for dependent data without modifications. Information Criteria are also a possibility, but although one could use the AIC or BIC, neither criterion was created with the high dimensional setting in mind. 
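As a concrete illustration (our sketch, not part of the thesis), the LASSO problem above can be solved by cyclic coordinate descent, where each coefficient update is a scalar soft-thresholding step:

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimize (1/2n)||y - X beta||_2^2 + lam * ||beta||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # (1/n) ||x_j||_2^2 for each column j
    resid = y - X @ beta                     # running residual
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * beta[j]       # remove coordinate j from the fit
            rho = X[:, j] @ resid / n        # (1/n) x_j' (partial residual)
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            resid -= X[:, j] * beta[j]
    return beta

# Simulated sparse model (arbitrary illustration settings):
# only the first two coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[0], beta0[1] = 2.0, 1.0
y = X @ beta0 + 0.5 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.1)
```

The soft-thresholding step is what produces exact zeros: whenever the partial correlation `rho` is below `lam` in absolute value, the coefficient is set to zero, which is the variable-selection property discussed above.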
There are criteria created for the high dimensional case, such as Zhang et al. (2010), Fan & Tang (2013) and Hui et al. (2015). Bickel et al. (2009) is one of the first articles suggesting a regularization parameter based on the theory, and Bühlmann & Van de Geer (2011) explicitly use the theory to suggest a feasible regularization parameter based on the variance of the dependent variable. Belloni et al. (2012) and Belloni et al. (2013) also provide a way to select the regularization parameter based on the theory that handles heteroscedasticity. This variety of procedures exists because they solve different problems and work for different kinds of DGPs. Based on Bickel et al. (2009), we suggest $\lambda = \sigma A \sqrt{2 \log(p)/n}$, where $\sigma$ is the standard deviation of the error, $p$ is the number of regressors, potentially $p \gg n$, and $A$ is a parameter (in Bickel et al. (2009), $A = 2$). This regularization parameter is unfeasible, since it depends on the standard deviation of the error. However, we can plug in an estimator of the standard deviation of the error, which we denote by $\hat{\sigma}$, to get a feasible version of the regularization parameter, $\hat{\lambda}$. We propose Algorithm 1, which uses the LASSO to generate the estimate $\hat{\sigma}$ and iterates on it to get $\hat{\beta}$: start with a guess for the standard deviation and compute the LASSO using the regularization parameter of Bickel et al. (2009). This gives a vector of coefficients. Use this vector to compute the residuals $\hat{u} := Y - X\hat{\beta}$. Use the standard deviation of $\hat{u}$ as a new guess and iterate. **Input:** Some guess for $\sigma$, the data **while convergence fails do** 1. Set $\hat{\lambda} = \hat{\sigma}\sqrt{2\log(p)/n}$; 2. Estimate the LASSO ($\hat{\beta}_{LASSO}$) using $\hat{\lambda}$ as the regularization parameter; 3. Compute the residual $\hat{u} = Y - X\hat{\beta}_{LASSO}$; 4. 
Compute $\hat{\sigma} = Sd(\hat{u})$; **if convergence then** Return $\hat{\beta}_{LASSO}$ and report success **else** Go back to step 1 **end** **Algorithm 1:** An algorithm for LASSO There are several ways to define convergence. In the simulations that follow we use one of the two below: 1. $\max(abs(\hat{\beta}_i - \hat{\beta}_{i-1}))$ is smaller than a tolerance $\delta$ 2. $abs(\hat{\sigma}_i - \hat{\sigma}_{i-1})$ is smaller than a tolerance $\varepsilon$ Here $i$ is the iteration number and $abs(\cdot)$ stands for the absolute value function. We also limit the maximum number of iterations, so that the algorithm quits even if there is no convergence. It is simple to extend our algorithm to deal with the adaptive LASSO (adaLASSO), the Elastic Net or the Thresholded LASSO. The adaLASSO uses a two step procedure in which we first estimate the model with the LASSO. In the second step, we solve the following problem: $$\beta_{adaLASSO} \in \arg\min_\beta \frac{1}{2n} \sum_{i=1}^{n} (Y_i - X_i\beta)^2 + \lambda \sum_{j=1}^{p} \omega_j |\beta_j|,$$ in which the $\omega_j$ are a set of positive weights. We implement two versions of the adaLASSO, which differ only in the weights. The first version uses fixed weights based on a first stage LASSO with the initial guess for the regularization parameter, so $\omega = 1/(1/\sqrt{n} + |\hat{\beta}_{LASSO}|)$, in which $n$ is the sample size. These weights are never changed again. The second version uses weights based on the previous adaLASSO estimate, so $\omega = 1/(1/\sqrt{n} + |\hat{\beta}_{i-1}|)$, and the weights are updated at each step. The algorithm for the adaLASSO is shown in Algorithm 2. Input: Some guess for $\sigma$ ($\hat{\sigma}$), the data a. Set $\hat{\lambda} = \hat{\sigma} \sqrt{2 \log(p)/n}$; b. Estimate the LASSO ($\hat{\beta}_{LASSO}$) using $\hat{\lambda}$ as the regularization parameter; c. Set $\omega = \frac{1}{1/\sqrt{n} + |\hat{\beta}_{LASSO}|}$; while convergence fails do 1. 
Set $\hat{\lambda} = \hat{\sigma} \sqrt{2 \log(p)/n}$; 2. Estimate the adaLASSO ($\hat{\beta}_{adaLASSO}$) using $\hat{\lambda}$ as the regularization parameter and $\omega$ as weights; 3. Compute the residual $\hat{u} = Y - X \hat{\beta}_{adaLASSO}$; 4. Compute $\hat{\sigma} = Sd(\hat{u})$; if convergence then | Return $\hat{\beta}_{adaLASSO}$ and report success else | Go back to step 1 end if reweighted then | Set $\omega = \frac{1}{1/\sqrt{n} + |\hat{\beta}|}$, in which $\hat{\beta}$ is the parameter vector obtained in step 2. end Algorithm 2: An algorithm for adaLASSO 3 Theory In this chapter we lay out the theory for two things: 1. That $\lambda = \sigma \sqrt{2 \log(p)/n}$ allows us to say $\|2X'u/n\|_\infty < \lambda$ with high probability. 2. Under which conditions our algorithm converges to the true variance of the error. We focus on model selection. It requires more hypotheses than focusing on prediction would. However, economists are usually interested in causal explanations, which require knowing which variables are relevant and which are irrelevant. Another goal for which economists use variable selection nowadays is choosing controls when estimating a treatment effect; for this goal, the conditions are milder than for model selection. The following assumptions about the Data Generating Process (DGP) are sufficient to prove the theorems below: **Assumption 1** The true model is linear in the parameters: $$Y = X\beta^0 + u,$$ and $X$ is independent of $u$ **Assumption 2** The true vector of coefficients, $\beta^0$, can be sparse: $\text{card}(\beta^0) = s \leq p$ **Assumption 3** The smallest eigenvalue of the sample covariance matrix of the active variables is bounded away from zero. 
**Assumption 4** $u$, the error vector, has independent entries and is subgaussian with parameter $\sigma$ **Assumption 5** $X$ is deterministic **Assumption 6** The weights for the adaLASSO are $\omega_j = 1/(1/\sqrt{n} + |\hat{\beta}_j|)$, in which $\hat{\beta}$ is a consistent estimator of the true vector of coefficients, $\beta^0$ Assumption 1 allows the use of a dictionary of variables, e.g. powers of variables. Assumption 2 allows for the possibility that the true vector of coefficients is sparse. We will always work with sparse coefficients in the simulations. Most bounds below depend on the cardinality of the true vector of coefficients, $s$. We can allow $s = p$; however, this makes most of the bounds below large and potentially useless if $p \to \infty$. Assumption 4 allows for gaussian errors with variance $\sigma^2$, as well as more general distributions that are not heavy tailed. Assumption 5 is a bit unusual in economics, and Bühlmann & Van de Geer (2011) provide ways of relaxing it. Assumption 6 is similar to the hypothesis in Zou (2006). For models in which $p < n$, one could use the Least Squares estimator; for cases in which $p > n$, one could use the LASSO. Medeiros & Mendes (2016) provide guarantees, in a more general setting than ours, that the weights coming from a first stage LASSO penalize the zero coefficients more heavily than the nonzero ones. We will also need a hypothesis concerning both the data and the estimation process: **Assumption 7** $X$ satisfies the Restricted Eigenvalue (RE) condition with $(\kappa, 3)$, i.e. 
$$\frac{1}{n} \| X \Delta \|_2^2 > \kappa \| \Delta \|_2^2 \quad \forall \Delta \in C_\alpha(S),$$ in which $C_\alpha(S) = \{ \Delta \in \mathbb{R}^p : \| \Delta_{S^c} \|_1 < \alpha \| \Delta_S \|_1 \}$ Besides these hypotheses, we will also use the Basic Inequality, which comes from the basic optimality condition: $$0 \leq \frac{\| X \hat{\Delta} \|_2^2}{n} \leq 2 u' X \hat{\Delta}/n + 2 \lambda (\| \beta^0 \|_1 - \| \hat{\beta} \|_1),$$ where $\hat{\Delta} = \hat{\beta} - \beta^0$. ### 3.1 Convergence of the Algorithm We want to show that the procedure outlined in Algorithm 1 converges. Ideally, we would like to show that it converges to somewhere near the true error variance. This is not always true for the LASSO. On the other hand, the algorithm will always converge to some point. The reason is simple: the sequence of regularization parameters is monotonic and bounded, and any bounded monotonic sequence has a limit. To formalize this, consider the function: $$\Lambda(\lambda) = \frac{\| Y - X \beta(\lambda) \|_2}{\sqrt{n}} A \sqrt{2 \log(p)/n}$$ Let us prove both claims. To see that the sequence is bounded, notice that $\|\hat{u}\|_2$ is never larger than $\|y\|_2$, with equality only if no variable is added to the model for a given $\lambda$. So, starting the algorithm at $\|y\|_2/\sqrt{n}$, the sequence can only go down: if, with $\sigma_k = \sigma_y$, no variable is added, then $\sigma_{k+1} = \sigma_y$ and the algorithm quits. On the other hand, $\|\hat{u}\|_2$ is never smaller than zero, since it is a norm. A more careful statistical argument requires breaking this case in two: if there are more variables than observations, the LASSO will select a subset of at most $n - 1$ variables with coefficients different from zero, and with no penalization we fit the OLS estimate for that subset of variables. 
On the other hand, if $p < n$, then all variables will be in the model and we will have the OLS fit, which generates $\|\hat{u}\|_2 \geq 0$. Now let us show that the sequence is monotonic. Assume that from the $k$-th to the $(k+1)$-th iteration we have $\lambda_{k+1} < \lambda_k$. This is equivalent to making the constraint less tight, and therefore $\|\beta_{k+1}\|_1 \geq \|\beta_k\|_1$. Now notice that: $$\min_{\|\beta\|_1 \leq R'} \|y - X\beta\|^2_2 \leq \min_{\|\beta\|_1 \leq R} \|y - X\beta\|^2_2 \quad (3\text{-}2)$$ if $R' > R$, since the solution of the problem on the right-hand side is feasible for the left-hand side. So we have $\|y - X\beta_{k+1}\|^2_2 \leq \|y - X\beta_k\|^2_2$. This leaves two options: if $\|y - X\beta_{k+1}\|^2_2 = \|y - X\beta_k\|^2_2$, the algorithm quits. If $\|y - X\beta_{k+1}\|^2_2 < \|y - X\beta_k\|^2_2$, then $\lambda_{k+2} = A\|u_{k+1}\|_2/\sqrt{n}\,\sqrt{2\log(p)/n} < A\|u_k\|_2/\sqrt{n}\,\sqrt{2\log(p)/n} = \lambda_{k+1}$. If $\lambda_{k+1} > \lambda_k$, the same argument shows that $\lambda_{k+2} \geq \lambda_{k+1}$. Monotonicity and the fact that the algorithm only searches a limited space guarantee the existence of a fixed point, and that iteration will reach a fixed point - this is guaranteed by the Tarski-Kantorovich Theorem (see the Appendix). The theorem also states that there will be a minimal and a maximal fixed point, and that in order to reach the minimal fixed point one needs a point such that $\lambda \geq \Lambda(\lambda)$; in order to reach the maximal fixed point, one needs a point with $\lambda \leq \Lambda(\lambda)$. We have both: if $\lambda = 0$, then $\Lambda(0) = A\|Y - X\beta_{OLS}\|_2/\sqrt{n}\,\sqrt{2\log(p)/n} \geq 0$, which can be equal to zero if $p \geq n$. On the other side, if we use $\lambda_{\sigma_y} = A\sigma_y\sqrt{2\log(p)/n}$, then $\|Y - X\hat{\beta}(\lambda)\|_2/\sqrt{n} \leq \sigma_y$ for all $\lambda > 0$, and therefore $\Lambda(\lambda_{\sigma_y}) \leq \lambda_{\sigma_y}$.
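The iteration just described can be sketched as follows. This is a minimal illustration, not the paper's code: the coordinate-descent solver, the function names, and the stopping rule are our assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/n)||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # ||X_j||_2^2 / n
    r = y.copy()                             # current residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]              # add back j's contribution
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam / 2) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def iterate_lambda(X, y, A=1.0, tol=1e-6, max_iter=50):
    """Iterate lambda_k = A * sigma_{k-1} * sqrt(2 log(p) / n), starting from
    sigma_0 = sd(y); sigma_k is the residual standard deviation of the fit."""
    n, p = X.shape
    sigma = y.std()
    for _ in range(max_iter):
        lam = A * sigma * np.sqrt(2 * np.log(p) / n)
        b = lasso_cd(X, y, lam)
        sigma_new = np.linalg.norm(y - X @ b) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return lam, sigma, b
```

As the monotonicity argument predicts, starting from the standard deviation of $y$ the residual standard deviation can only decrease across iterations.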
The Tarski-Kantorovich Theorem does not tell us how many fixed points there are, or what their values are. However, the existence of this fixed point and how to find it is also of independent interest: most theorems on the consistency of the LASSO depend on the fact that $\|2X'u/n\|_\infty < \lambda$. Frequently, one uses the fact that $\sigma_y\sqrt{2\log(p)/n} > \|2X'u/n\|_\infty$ with high probability. Now, assume one estimates a model by the LASSO and selects the regularization parameter by any method. Then, if the standard deviation of the residual implies that $\sigma \sqrt{2 \log(p)/n} > \lambda$, the researcher faces an internal consistency problem: if the model is right, the choice of regularization parameter violates the most common bound given to guarantee the conditions for $\ell_2$ consistency of the parameters. Fortunately, we can get some bounds on the size of the error. Using Theorem 1, and after some algebra, equation iii yields: $$\frac{\|\hat{u} - u\|_2}{\sqrt{n}} \leq 3 \sqrt{\frac{s}{\kappa}} \lambda$$ Denote by $\hat{u}_k$ the residual obtained at the $k$-th step of Algorithm 1. Then, substituting our choice of $\lambda$, we get: $$\frac{\|\hat{u}_k - u\|_2}{\sqrt{n}} \leq 3A \frac{\|\hat{u}_{k-1}\|_2}{n} \sqrt{\frac{2s \log(p)}{\kappa}}$$ Cancel the $1/\sqrt{n}$ on both sides to get: $$\|\hat{u}_k - u\|_2 \leq 3A \frac{\|\hat{u}_{k-1}\|_2}{\sqrt{n}} \sqrt{\frac{2s \log(p)}{\kappa}} \quad (3\text{-}3)$$ Since the right-hand side involves the norm of the previous residual rather than the distance between the previous residual and the true error, we cannot use stronger results to characterize the fixed point. One could be tempted to pick $A$ in such a way that this bound is really small. However, all the theorems above assume that $\lambda > \|2u'X/n\|_\infty$; choosing $A$ too small will lead to a violation of this hypothesis. In the next section we will show some results that allow us to get around this.
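Although (3-3) is not a true contraction in $\|\hat{u}_k - u\|_2$, a rough triangle-inequality argument suggests why the quantity $3A\sqrt{2s\log(p)/(\kappa n)}$ governs the behavior of the iteration. This is our heuristic sketch, not a formal result from the text:

```latex
\|\hat{u}_k\|_2 \;\le\; \|u\|_2 + \|\hat{u}_k - u\|_2
            \;\le\; \|u\|_2 + c\,\|\hat{u}_{k-1}\|_2,
\qquad c := 3A\sqrt{\frac{2s\log(p)}{\kappa n}}.
```

Iterating, if $c < 1$ then $\limsup_k \|\hat{u}_k\|_2 \le \|u\|_2/(1-c)$, so small values of $c$ keep the limit of the iteration close to $\|u\|_2/\sqrt{n} \approx \sigma$. With $A = 1$ and $\kappa = 1$, $\sigma c$ is exactly the quantity $3\sigma\sqrt{2s\log(p)/n}$ monitored in the convergence simulations.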
We will also work with the adaLASSO; while the proofs for the LASSO can be carried over to the adaLASSO case, we can prove conditions for the adaLASSO that allow more control over the bounds. Let’s start with the weighted LASSO: $$\min_{\beta} \|y - X\beta\|_2^2 + \lambda \|W\beta\|_1,$$ in which $W$ is a $p \times p$ diagonal matrix of weights. We can rewrite the expression above by setting $\beta_w = W\beta$, which gives: $$\min_{\beta_w} \|y - XW^{-1}\beta_w\|_2^2 + \lambda \|\beta_w\|_1$$ We define $\|\beta\|_{w1} := \|W\beta\|_1$, the weighted $\ell_1$ norm. We have a basic inequality for this new penalty: $$0 \leq \frac{1}{n} \|X\hat{\Delta}\|_2^2 \leq \frac{2}{n} u'X\hat{\Delta} + 2\lambda(\|\beta^0\|_{w1} - \|\hat{\beta}\|_{w1}),$$ with $\hat{\Delta} = \beta^0 - \hat{\beta}$. We can also define a $C_\alpha(S)$ cone with respect to $\|.\|_{w1}$: $$C_\alpha^{w1}(S) = \{ \Delta \in \mathbb{R}^p \mid \| \Delta_{S^c} \|_{w1} \leq \alpha \| \Delta_S \|_{w1} \},$$ and we can define a RE condition with respect to this new cone, with parameters $(\kappa_w, \alpha)$, which we will call the weighted RE condition: $$\kappa_w \| \hat{\Delta}_w \|_2^2 \leq \frac{1}{n} \| XW^{-1} \hat{\Delta}_w \|_2^2$$ We will work with Assumption 1 unaltered and with: **Assumption 2A** $X$ satisfies the weighted RE condition with $(\kappa_w, 3)$ Now we can make a slight change to Theorem 1 of Appendix I to use the weighted RE condition: **Theorem 3.1** With $\lambda \geq \| \frac{2}{n} u'XW^{-1} \|_\infty$ and the weighted RE condition $(\kappa_w, 3)$: $$\| \hat{\beta} - \beta^0 \|_2 \leq \frac{3}{\kappa_w} \lambda \sqrt{s}$$ This might not seem like a big change from the LASSO to the adaLASSO; however, notice that the weighted RE condition allows us to write, for the case in which the Gram Matrix is the identity: $$\kappa_w \| \hat{\Delta}_w \|_2^2 \leq \frac{1}{n} \hat{\Delta}_w' W^{-1} X'XW^{-1} \hat{\Delta}_w$$ $$\kappa_w \| \hat{\Delta}_w \|_2^2 \leq \hat{\Delta}_w' W^{-1}
W^{-1} \hat{\Delta}_w$$ $$\kappa_w \| \hat{\Delta}_w \|_2^2 \leq \hat{\Delta}_w' W^{-2} \hat{\Delta}_w \quad (3\text{-}4)$$ Now, since $W$ is a diagonal matrix, it can be written as a vector $\omega$ of size $p$, so $\hat{\Delta}_w' W^{-2} \hat{\Delta}_w = \sum_{j=1}^{p} \omega_j^{-2} \hat{\Delta}_{wj}^2$ and therefore: $$\kappa_w \| \hat{\Delta}_w \|_2^2 \leq \sum_{j=1}^{p} \omega_j^{-2} \hat{\Delta}_{wj}^2 \leq \max_{j=1,\ldots,p} \omega_j^{-2} \| \hat{\Delta}_w \|_2^2$$ So $\kappa_w \leq \max_{j=1,\ldots,p} \omega_j^{-2}$, which is possibly a really large number and helps in our contraction argument. Notice that in the case of an identity Gram matrix, $\kappa = 1$. This result is true for any set of weights. That does not mean it is always useful, since a random set of weights might not generate a useful inequality. One could also argue that we should choose the weights so that $\max_{j} \omega_j^{-2}$ is as large as possible. To avoid these complications, we work with weights that are the inverse of the absolute value of the LASSO estimates.

### 3.2 Regularization Parameter

So what about our regularization parameter? We require that \( \lambda > \|2u'X/n\|_\infty \). While the proof for the case \( \lambda = 2\sigma \sqrt{2 \log(p)/n} \) is available in Bickel et al. (2009), Bühlmann & Van de Geer (2011) and Wainwright (2019), we will show what happens when \( \lambda = A\sigma \sqrt{2 \log(p)/n} \). Theorem 4 justifies why \( A > 2 \) in Bickel et al. (2009): otherwise, \( 2p^{1-A^2/4} \) would not vanish and \( \lambda > \|2u'X/n\|_\infty \) would not hold with high probability. This means that our proposal of \( A = 1 \) would not work for the LASSO. The adaLASSO, on the other hand, requires a different event: \( \lambda > \|2u'XW^{-1}/n\|_\infty \).
This gives us a lot more room: **Theorem 3.2** Assume \( X \) fixed, let \( u \) be subgaussian with parameter \( \sigma \), and let \( \omega_j = (1/\sqrt{n} + |\beta_{L,j}|)^{-1} \), in which \( \beta_L \) is some consistent estimator of the coefficients. Then \( P(\sigma A \sqrt{2 \log(p)/n} > \|2u'XW^{-1}/n\|_\infty) > 1 - 2p^{1-(\omega_{\min} A)^2/4} \), in which \( \omega_{\min} \) is the smallest weight. **Proof.** See the Appendix. \( \blacksquare \)

## 4 Simulations

This chapter shows Monte Carlo experiments. We have two sets of experiments: the first investigates the convergence of the algorithm, showing that the algorithm using the adaLASSO has better convergence properties than the one using the LASSO. The second set of experiments shows how the algorithm behaves with respect to model selection and forecasting. We compare it with some alternatives and show that in a number of cases, particularly when we have many variables in the active set, it behaves reasonably well.

### 4.1 Convergence

We begin by analysing when the algorithm converges. Since the only part of $\lambda$ that is updated is the standard deviation of the error, we will analyse the convergence of the standard deviation of the error. In all simulations of this subsection, $X$ is i.i.d. from a standard normal and the error also comes from a standard normal. Using an i.i.d. design with covariance matrix equal to the identity allows us to say that $\kappa = 1$. In this section, we always make 2000 replications for each experiment. We vary the sample size ($n$), the number of variables included ($p$) and the size of the active set ($s$). We conjecture that convergence of the algorithm depends on $3\sigma \sqrt{2s \log(p)/n}$, and controlling the three parameters above allows us to control this quantity. Smaller values should give better convergence, and the simulations back this claim. In our first experiment, we set $n = 100$, $p = 50$ and $s = 10$.
These values imply $3\sigma \sqrt{2s \log(p)/n} = 2.65$. Our initial guess for the standard deviation of the error is the standard deviation of the dependent variable, which is around $\sqrt{11}$. We show the results in Figure 4.1, for the LASSO and two cases of the adaLASSO: updating the weights at each iteration and not updating them. The former corresponds to adaLASSOrw in the figure. It is clear that the LASSO does not converge to the true value of the standard deviation of the error. The adaLASSO in which the weights are not updated makes things a lot better. On the other hand, always re-estimating the weights allows the adaLASSO to recover the standard deviation with much more precision. The figure also shows a nice feature of the algorithm: in no case does the standard deviation of the error diverge. As a matter of fact, we never reach the limit of iterations. This backs the claim that the algorithm always converges, though not always to the right point. The second experiment keeps all the parameters above the same, but changes the initial guess of the standard deviation to 0.5. A better guess and a finite number of iterations should yield a better estimate of the standard deviation of the error by the LASSO and by the non-re-weighted adaLASSO, as we show in Figure 4.2. The gains for the non-re-weighted adaLASSO are clear, while the gains for the LASSO are less clear. Experiment three only changes the sample size, to $n = 1000$, so $3\sigma \sqrt{2s \log(p)/n} = 0.83$. Figure 4.3 shows the estimates for this case. Notice the change of the y axis: the LASSO comes down from almost double the true value of the standard deviation of the error, in the experiments above, to a value 5% above the true value - and the “outliers” are a bit above 10% off. Experiment four changes the sample size to 400, so $3\sigma \sqrt{2s \log(p)/n} = 1.32$. The results are in Figure 4.4.
Again, the LASSO does not converge and the adaLASSO shows better properties. Experiment five is closely related: it uses $n = 400$ and $s = 5$, so $3\sigma \sqrt{2s \log(p)/n} = 0.93$. The objective is to show that it is not only sample size that matters, but all three elements: the size of the active set, the number of variables included and the sample size. The results are illustrated in Figure 4.5.

### 4.2 Model selection

Building on Coutinho et al. (2017), we test our method (NM) for selecting the regularization parameter against three competitors: the adaLASSO using Cross Validation (CV) or the BIC (Bayesian Information Criterion) for the regularization parameter, and the hdm package\(^1\), which is based on Belloni et al. (2013). We always let the regularization parameter change from the first-step estimation to the second-step estimation (unlike Coutinho et al. (2017)). \(^1\)We change the default to let the package assume that the error is homoscedastic. The results are even worse when we use the default. We have six designs, and the results are reported in Tables 4.1 to 4.6. The variable *Non Zeros Right* records the fraction of relevant coefficients recovered; *Zeros Right* records the fraction of irrelevant coefficients set to zero; *Right model* is a dummy equal to 1 if, in a given replication, the method being tested recovered *all* the variables, setting the irrelevant coefficients to zero and the relevant coefficients different from zero. The designs of the Monte Carlo simulations are: 1. $n = 100, \sigma^2 = 3$, 20 relevant variables and 30 irrelevant variables. 2. $n = 100, \sigma^2 = 3$, 10 relevant variables and 40 irrelevant variables. 3. $n = 100, \sigma^2 = 1$, 10 relevant variables and 40 irrelevant variables. 4. \( n = 1000, \sigma^2 = 1, 40 \) relevant variables and \( 60 \) irrelevant variables. 5. \( n = 100, \sigma^2 = 1, 10 \) relevant variables and \( 90 \) irrelevant variables. 6.
\( n = 100, \sigma^2 = 1, 30 \) relevant variables and \( 70 \) irrelevant variables. We set $\beta_j = 1$ if variable $j$ is relevant and $\beta_j = 0$ otherwise. Designs 1 and 6 are particularly tricky for the LASSO, due to the high variance and to the model not being as sparse as the others, respectively. Only designs 5 and 6 can be considered “big data”, i.e. \( p \geq n \).

**Table 4.1: Design 1: Result for 5000 replications**

| | Non Zeros Right | Zeros Right | Right model? |
|-------|-----------------|-------------|--------------|
| BIC | 1.00 | 0.85 | 0.04 |
| CV | 0.99 | 0.90 | 0.10 |
| NM | 0.97 | 0.98 | 0.44 |
| HDM | 0.27 | 0.99 | 0.00 |

**Table 4.2: Design 2: Result for 3000 replications**

| | Non Zeros Right | Zeros Right | Right model? |
|-------|-----------------|-------------|--------------|
| BIC | 1.00 | 0.93 | 0.17 |
| CV | 0.99 | 0.96 | 0.35 |
| NM | 1.00 | 0.97 | 0.31 |
| HDM | 0.73 | 0.99 | 0.14 |

**Table 4.3: Design 3: Result for 1000 replications**

| | Non Zeros Right | Zeros Right | Right model? |
|-------|-----------------|-------------|--------------|
| BIC | 1.0000 | 0.9647 | 0.3940 |
| CV | 1.0000 | 0.9830 | 0.6990 |
| NM | 1.0000 | 0.9968 | 0.8780 |
| HDM | 0.9926 | 0.9811 | 0.4650 |

**Table 4.4: Design 4: Result for 1000 replications**

| | Non Zeros Right | Zeros Right | Right model? |
|-------|-----------------|-------------|--------------|
| BIC | 1.0000 | 0.9976 | 0.9090 |
| CV | 1.0000 | 0.9999 | 0.9920 |
| NM | 1.0000 | 0.9997 | 0.9800 |
| HDM | 1.0000 | 0.9987 | 0.9250 |

The results show that our option is not always the best: for designs 2 and 4, CV is better than our method, but not by much. On the other hand, for designs 1 and 6, our method dominates the other options, and it is also better in the remaining tests.

Table 4.5: Design 5: Result for 1000 replications

| | Non Zeros Right | Zeros Right | Right model?
|
|-------|-----------------|-------------|--------------|
| BIC | 0.995 | 0.280 | 0.033 |
| CV | 1.000 | 0.983 | 0.525 |
| NM | 1.000 | 0.996 | 0.723 |
| HDM | 0.984 | 0.983 | 0.241 |

Table 4.6: Design 6: Result for 1000 replications

| | Non Zeros Right | Zeros Right | Right model? |
|-------|-----------------|-------------|--------------|
| BIC | 1.00 | 0.93 | 0.04 |
| CV | 1.00 | 0.94 | 0.03 |
| NM | 0.90 | 0.98 | 0.32 |
| HDM | 0.09 | 1.00 | 0.00 |

It’s interesting to note that designs 1 and 6 clearly show the trade-off between getting more zeros right and getting the non-zeros right: our method is worse than CV if the main concern is including all the relevant regressors, and worse than HDM at excluding the irrelevant variables. However, by allowing the exclusion of more variables than CV and fewer than HDM, we get more zeros right. It is interesting to note that this actually makes our method better than the others in situations in which the model is less sparse, namely designs 1 and 6. In Table 4.7, we repeat the same designs as above; however, instead of working with a random $X$ and a random $u$, we keep $X$ fixed. This is more in line with the hypotheses we used in the theory. It should make it easier to recover the right model, and the simulations back this, although the difference is not dramatic.

Table 4.7: Simulations with fixed design

| | Non Zeros Right | Zeros Right | Model Right |
|-------|-----------------|-------------|-------------|
| Design 1 | 0.976 | 0.985 | 0.473 |
| Design 2 | 0.998 | 0.970 | 0.290 |
| Design 3 | 1.000 | 0.992 | 0.714 |
| Design 4 | 1.000 | 0.999 | 0.967 |
| Design 5 | 1.000 | 0.997 | 0.727 |
| Design 6 | 1.000 | 0.996 | 0.743 |

Table 4.8 shows the simulations using an error with a chi-squared distribution - a distribution that is not subgaussian, but subexponential. We change the number of degrees of freedom in order to change the variance.
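One Monte Carlo draw of designs like these can be sketched as follows. The function and argument names are our illustration, not the paper's code, and we center the chi-squared draw so the error has mean zero:

```python
import numpy as np

def make_design(n, s, p_irrelevant, sigma2=1.0, error="gauss", df=4, seed=0):
    """Generate one Monte Carlo draw: beta_j = 1 for the s relevant variables,
    beta_j = 0 otherwise; errors are gaussian or centered chi-squared
    (the chi-squared case is subexponential, not subgaussian)."""
    rng = np.random.default_rng(seed)
    p = s + p_irrelevant
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:s] = 1.0
    if error == "gauss":
        u = rng.normal(0.0, np.sqrt(sigma2), n)
    else:
        # chi-squared with df degrees of freedom has mean df and variance 2*df;
        # subtracting df centers the error at zero
        u = rng.chisquare(df, n) - df
    return X, beta, X @ beta + u
```

For example, `make_design(n=100, s=10, p_irrelevant=40, sigma2=3.0)` reproduces the dimensions of design 2, and `error="chisq"` gives the subexponential variant of Table 4.8.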
We keep $X$ fixed between simulations and make 2000 replications. The numbers of observations, variables and relevant variables are the same as in the designs above. The performance is worse than in the case with gaussian errors, which is unsurprising since the theory is based on the hypothesis that the errors are subgaussian.

Table 4.8: Simulations with Subexponential error

| Design | Non Zeros Right | Zeros Right | Model Right |
|--------|-----------------|-------------|-------------|
| 1 | 0.984 | 0.985 | 0.577 |
| 2 | 0.997 | 0.971 | 0.314 |
| 3 | 1.000 | 0.992 | 0.733 |
| 4 | 1.000 | 0.999 | 0.959 |
| 5 | 1.000 | 0.991 | 0.463 |
| 6 | 0.984 | 0.992 | 0.636 |

Tables 4.9, 4.10 and 4.11 show the results of simulations using Student’s t distribution for the error with different degrees of freedom, with 3000 replications each. We use a fixed design for $X$ and the designs are the same as in the previous simulations. We drop design 2, since the only change between it and design 3 is the variance of the error. Notice that these designs are not completely equivalent to the previous ones, since setting the degrees of freedom defines the variance of the distribution. The numbers of observations, variables and relevant variables are the same as in the designs above. The fact that the variances are not the same makes it harder to compare these results with the previous ones. However, more degrees of freedom make the distributions better behaved, and we would expect better results as the degrees of freedom increase. The Model Right column illustrates exactly that.
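The variance implied by each choice of degrees of freedom is $\nu/(\nu - 2)$ for $\nu > 2$, which is why the comparison across the t-distribution tables is only indirect. A one-line check:

```python
def t_variance(nu):
    """Variance of a Student's t distribution with nu degrees of freedom (nu > 2)."""
    return nu / (nu - 2)

# degrees of freedom 3, 4 and 8 imply variances 3.0, 2.0 and ~1.33
variances = {nu: t_variance(nu) for nu in (3, 4, 8)}
```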
Table 4.9: Simulations with Polynomial Tails: Student’s t Distribution with 4 degrees of freedom

| Design | Non Zeros Right | Zeros Right | Model Right |
|--------|-----------------|-------------|-------------|
| 1 | 0.99 | 0.99 | 0.72 |
| 3 | 1.00 | 0.98 | 0.45 |
| 4 | 1.00 | 1.00 | 0.85 |
| 5 | 0.99 | 0.99 | 0.32 |
| 6 | 0.97 | 0.98 | 0.35 |

### 4.3 Regularization Parameter

Using $\lambda = \sigma \sqrt{2 \log(p)/n}$ instead of $\lambda = 2\sigma \sqrt{2 \log(p)/n}$ is another innovation that needs backing. In this section, we show Monte Carlo simulations that compare our choice to the original proposal in Bickel et al. (2009). We explained why it is not problematic when used with the adaLASSO, and therefore all results are compared using the adaLASSO.

Table 4.10: Simulations with Polynomial Tails: Student’s t Distribution with 8 degrees of freedom

| Design | Non Zeros Right | Zeros Right | Model Right |
|--------|-----------------|-------------|-------------|
| 1 | 1.00 | 1.00 | 0.90 |
| 3 | 1.00 | 0.99 | 0.61 |
| 4 | 1.00 | 1.00 | 0.93 |
| 5 | 1.00 | 0.99 | 0.55 |
| 6 | 0.98 | 0.99 | 0.50 |

Table 4.11: Simulations with Polynomial Tails: Student’s t Distribution with 3 degrees of freedom

| Design | Non Zeros Right | Zeros Right | Model Right |
|--------|-----------------|-------------|-------------|
| 1 | 0.97 | 0.98 | 0.59 |
| 3 | 0.99 | 0.97 | 0.36 |
| 4 | 1.00 | 1.00 | 0.75 |
| 5 | 0.99 | 0.98 | 0.18 |
| 6 | 0.96 | 0.97 | 0.23 |

We start with design 1; the results are shown in Table 4.12. Both cases are fitted using the same algorithm; all that changes is the value of $A$.

Table 4.12: Design 1, 2000 replications

| | Non Zero Right | Zero Right | Model right |
|---------|----------------|------------|-------------|
| $A = 1$ | 0.97 | 0.98 | 0.45 |
| $A = 2$ | 0.59 | 0.99 | 0.00 |

Looking at the Model right column, using $A = 1$ is better than $A = 2$ for this design.
All the gain comes from the fact that we get more non-zero coefficients right than when using $A = 2$. In other words, in a design with a lot of noise and a lot of relevant variables, the regularization parameter from Bickel et al. (2009) does not let enough coefficients be different from zero. Notice that both benefit from using the adaLASSO, but since $A = 1$ was engineered to work with the adaLASSO, it works better than the alternative. It could be that in cases in which the variance of the error is smaller and there are fewer relevant variables we perform (much) worse. We therefore test design 3, shown in Table 4.13. Notice that using $A = 2$ still beats the BIC and HDM results of Table 4.3.

Table 4.13: Design 3, 2000 replications

| | Non Zero Right | Zero Right | Model right |
|---------|----------------|------------|-------------|
| $A = 1$ | 1.0000 | 0.9966 | 0.8710 |
| $A = 2$ | 0.9445 | 0.9997 | 0.6670 |

A valid worry is whether we have a situation in which two errors cancel each other: our method could be overestimating the variance in both cases, with $A = 1$ just correcting for the bias. Figures 4.6 and 4.7 show the estimated residual standard deviation for Tables 4.12 and 4.13, respectively; the horizontal line marks the true standard deviation. Notice that for Design 3, using $A = 2$ generates gross errors. Even for Design 1, the residual standard deviation is a lot more spread out than for $A = 1$. Figure 4.7: Boxplot of estimated standard deviation

## 5 Empirical Example

As an empirical example, we repeat the regressions of Donohue & Levitt (2001) on the effect of abortion on crime. We follow closely the replication done by Belloni et al. (2013), using their data and their program to generate regressors to be selected via the adaLASSO. We compare our implementation with two different guesses for the variance and compare the results to the HDM package in R\textsuperscript{1}. The initial guess is 1 and the lower guess is 0.1.
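The outcome and treatment regressions counted below follow the post-double-selection logic of Belloni et al. (2013): select controls that predict the outcome, select controls that predict the treatment, then run OLS of the outcome on the treatment plus the union of the selected controls. A self-contained sketch, where the coordinate-descent solver and all names are our illustration rather than the authors' code:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/n)||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    b, r = np.zeros(p), y.copy()
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]
            b[j] = soft_threshold(X[:, j] @ r / n, lam / 2) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def double_selection(y, d, X, lam):
    """Post-double selection: union of controls selected for y and for d,
    then OLS of y on [d, selected controls]; returns the treatment
    coefficient and the indices of the selected controls."""
    keep = np.union1d(np.nonzero(lasso_cd(X, y, lam))[0],
                      np.nonzero(lasso_cd(X, d, lam))[0])
    Z = np.column_stack([d, X[:, keep]]) if keep.size else d[:, None]
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[0], keep
```

Lowering the initial variance guess lowers the final $\lambda$, which is why the lower-guess columns of the selection tables retain many more controls.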
Results for the coefficients and standard errors are reported in Tables 5.1, 5.2 and 5.3.

Table 5.1: Coefficients: Effect over Violent Crimes

| | Coef | SE | t-Stat | P-value |
|------|------|----|--------|---------|
| HDM | -0.17 | 0.12 | -1.41 | 0.16 |
| Us | -0.28 | 0.13 | -2.17 | 0.03 |
| Us - Lower Guess | -0.09 | 0.13 | -0.63 | 0.53 |

Table 5.2: Coefficients: Effect over Property Crimes

| | Coef | SE | t-Stat | P-value |
|------|------|----|--------|---------|
| HDM | -0.12 | 0.42 | -0.28 | 0.78 |
| Us | -0.05 | 0.05 | -1.04 | 0.30 |
| Us - Lower Guess | -0.10 | 0.63 | -0.16 | 0.87 |

Table 5.3: Coefficients: Effect over Murder

| | Coef | SE | t-Stat | P-value |
|------|------|----|--------|---------|
| HDM | -0.12 | 0.42 | -0.28 | 0.78 |
| Us | -0.11 | 0.45 | -0.24 | 0.81 |
| Us - Lower Guess | -0.10 | 0.63 | -0.16 | 0.87 |

There are differences between our algorithm with different guesses. However, considering the number of regressors and the size of the series, the previous results indicate that the algorithm will not necessarily converge to a single value, which explains the difference between the guesses. The results are more scattered for the coefficient on violent crimes, with one estimate being significant, while the estimates are remarkably concentrated for murders. To understand better what each algorithm is doing, Tables 5.4, 5.5 and 5.6 show how many regressors are selected by each method in both the Outcome and Treatment regressions.

\textsuperscript{1}Even the Matlab programs available online do not replicate the results reported in Belloni et al. (2013).
Table 5.4: Number of variables selected: Violent Crimes

| | HDM | Us | Us - Lower Guess |
|------------------|-----|----|------------------|
| Outcome $\sim x$ | 3 | 2 | 40 |
| Treat $\sim x$ | 12 | 9 | 26 |

Table 5.5: Number of variables selected: Property Crimes

| | HDM | Us | Us - Lower Guess |
|------------------|-----|----|------------------|
| Outcome $\sim x$ | 6 | 2 | 32 |
| Treat $\sim x$ | 14 | 11 | 30 |

Table 5.6: Number of variables selected: Murder

| | HDM | Us | Us - Lower Guess |
|------------------|-----|----|------------------|
| Outcome $\sim x$ | 0 | 2 | 72 |
| Treat $\sim x$ | 9 | 10 | 22 |

In line with the results from the simulations, our algorithm is able to select more variables than HDM, and the effect is larger when we lower the starting guess of the variance. This might explain the difference in the coefficients we obtain.

## 6 Conclusion

This paper presents yet another way to select the regularization parameter, using both the theory and the data. In the end, we have a relatively simple algorithm that is useful - as shown by the simulations. Using the adaptive LASSO instead of the LASSO proves to be important for the convergence of the algorithm. The adaptive LASSO also plays a key role for variable selection - this was the main point of Zou (2006). Our simulations point to the potential of the adaLASSO, especially in challenging problems that are not “too sparse”. However, its non-asymptotic theory is not completely developed. The simulation results are encouraging about the effectiveness of the algorithm. However, it still requires that the user set an initial guess for the variance, and the result can be quite sensitive to that guess. Understanding this sensitivity, and whether there is an optimal initial guess, would be an important direction to make the algorithm easier to use.

**References**

TIBSHIRANI, R.. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, 1996.
BICKEL, P.
J.; RITOV, Y.; TSYBAKOV, A. B.. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 2009.
BÜHLMANN, P.; VAN DE GEER, S.. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, 2011.
WAINWRIGHT, M. J.. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
ZOU, H.. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 2006.
ZHANG, Y.; LI, R.; TSAI, C. L.. Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association, 2010.
FAN, Y.; TANG, C. Y.. Tuning parameter selection in high dimensional penalized likelihood. Journal of the Royal Statistical Society, Series B, 2013.
HUI, F. K.; WARTON, D. I.; FOSTER, S. D.. Tuning Parameter Selection for the Adaptive Lasso Using ERIC. Journal of the American Statistical Association, 2015.
BELLONI, A.; CHEN, D.; CHERNOZHUKOV, V.; HANSEN, C.. Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain. Econometrica, 80(6):2369–2429, 2012.
BELLONI, A.; CHERNOZHUKOV, V.; HANSEN, C.. Inference on treatment effects after selection among high-dimensional controls. Review of Economic Studies, 2013.
COUTINHO, D.; MEDEIROS, M.; SOUZA, P.. The Illusion of Independence: High Dimensional Data, Shrinkage Methods and Model Selection. 2017.
HASTIE, T.; TIBSHIRANI, R.; FRIEDMAN, J.. The Elements of Statistical Learning, 2nd ed. Springer, 2009.
MEDEIROS, M.; MENDES, E.. L1-regularization of high-dimensional time-series models with non-gaussian and heteroskedastic errors. Journal of Econometrics, 191, 2016.
DONOHUE, J. J.; LEVITT, S. D.. The impact of legalized abortion on crime. Quarterly Journal of Economics, 2001.
GRANAS, A.; DUGUNDJI, J.. Fixed Point Theory. Springer Monographs in Mathematics. Springer, 2013.
COLEMAN, W. J.. Equilibrium in a production economy with an income tax. Econometrica, 59(4):1091–1104, 1991.
**Appendix I: Theorems** **Theorem 1** Under Assumptions 1 and 2 and $\lambda > \max_{j=1,\ldots,p} |2u'X_j|/n$, we have: \[ \|\hat{\beta} - \beta^0\|_2 \leq \frac{3}{\kappa} \sqrt{s}\lambda \\ \frac{\|X(\beta^0 - \hat{\beta})\|_2^2}{n} \leq \frac{9s\lambda^2}{\kappa} \] The proof can be found in Wainwright (2019). **Theorem 2 (Banach Fixed Point Theorem)** Let $(A, d)$ be a complete metric space and $f : A \to A$. If, for all $x, y \in A$, \[d(f(x), f(y)) \leq hd(x, y)\] for some $h < 1$, then $f$ is called a contraction map and has a unique fixed point, which is reached from any starting point $x_0 \in A$ by the sequence $x_n = f^n(x_0)$. **Theorem 3 (Tarski-Kantorovich Fixed Point)** Let $(P, \preceq)$ be a partially ordered set and $F : P \to P$ continuous. Assume that $b \in P$ and: - if $x \succeq y$, then $F(x) \succeq F(y)$ - $b \preceq F(b)$ - every countable chain in $\{x \mid x \succeq b\}$ has a supremum Then $F$ has a fixed point $\mu = \sup_n F^n(b)$, and $\mu$ is the infimum of the set of fixed points of $F$ in $\{x \mid x \succeq b\}$. For a proof, see Granas & Dugundji (2013); for an application similar to the one we do here, see Coleman (1991). **Theorem 4** Assume $X$ fixed and let $u$ be subgaussian with parameter $\sigma$. Then \[P(\sigma A \sqrt{2 \log(p)/n} > \|2u'X/n\|_\infty) > 1 - 2p^{1-A^2/4}\] For a proof of this theorem, see Bickel et al. (2009). **Appendix II: Proofs** **Proof of Theorem 3.2** We will bound the probability of the complementary event, \( \sigma A \sqrt{2 \log(p)/n} \leq \|2u'XW^{-1}/n\|_{\infty} \), using a concentration bound.
The event \( \lambda > \|2u'XW^{-1}/n\|_{\infty} \) is equal to \( \cap_{j=1}^{p} \{ |2u'X_j \omega_j^{-1}|/n < \lambda \} \), so by De Morgan’s law the complementary event is \( \cup_{j=1}^{p} \{ |2u'X_j \omega_j^{-1}|/n \geq \lambda \} \). Plugging in our regularization parameter: \[ P \left( \bigcup_{j=1}^{p} |2u'X_j \omega_j^{-1}|/n \geq A \sigma \sqrt{2 \log(p)/n} \right) \] Boole’s inequality (the union bound) gives: \[ P \left( \bigcup_{j=1}^{p} |2u'X_j \omega_j^{-1}|/n \geq A \sigma \sqrt{2 \log(p)/n} \right) \leq \sum_{j=1}^{p} P \left( |2u'X_j \omega_j^{-1}|/n \geq A \sigma \sqrt{2 \log(p)/n} \right) \] (iv) Use again the facts that \( u \) is subgaussian and that we have a fixed design, and notice that \( \omega_j^{-1} = 1/\sqrt{n} + |\beta_{L,j}| \). Applying Chernoff bounds to the probability above: \[ \sum_{j=1}^{p} P \left( |2u'X_j \omega_j^{-1}|/n \geq A \sigma \sqrt{2 \log(p)/n} \right) \leq \sum_{j=1}^{p} 2 \exp \left( -\frac{2A^2 \sigma^2 \log(p)/n}{2 \times 4 \omega_j^{-2} \sigma^2/n} \right) \leq 2p^{1-(A \omega_{\min})^2/4}, \] where the last step uses that \( \omega_{\min} \leq \omega_j \) for all \( j = 1, \ldots, p \), so each term is at most \( 2p^{-(A\omega_{\min})^2/4} \). \( \blacksquare \) **Lemma 1** The weighted LASSO solution belongs to \( C_3^{w1}(S) \) for \( \lambda \geq \|2u'XW^{-1}/n\|_{\infty} \) **Proof.**
Start with the basic inequality: \[ 0 \leq \frac{1}{n} \|X \hat{\Delta}\|_2^2 \leq \frac{2}{n} u'X \hat{\Delta} + 2\lambda(\|\beta^0\|_{w1} - \|\hat{\beta}\|_{w1}) \] Now, $\hat{\beta} = \beta^0 - \hat{\Delta}$, and substituting this into the norm we get: $$\|\hat{\beta}\|_{w1} = \|\beta^0 - \hat{\Delta}\|_{w1} = \|\beta^0_S - \hat{\Delta}_S\|_{w1} + \|\hat{\Delta}_{S^c}\|_{w1}$$ Plug the expression above into the basic inequality: $$0 \leq \frac{1}{n} \|X\hat{\Delta}\|_2^2 \leq \frac{2}{n} u'X\hat{\Delta} + 2\lambda(\|\beta^0\|_{w1} - (\|\beta^0_S - \hat{\Delta}_S\|_{w1} + \|\hat{\Delta}_{S^c}\|_{w1}))$$ Use the reverse triangle inequality on $\|\beta^0_S - \hat{\Delta}_S\|_{w1}$: $$\|\beta^0_S - \hat{\Delta}_S\|_{w1} \geq \|\beta^0_S\|_{w1} - \|\hat{\Delta}_S\|_{w1}$$ This allows us to rewrite the basic inequality: $$0 \leq \frac{1}{n} \|X\hat{\Delta}\|_2^2 \leq \frac{2}{n} u'X\hat{\Delta} + 2\lambda(\|\beta^0\|_{w1} - (\|\beta^0_S\|_{w1} - \|\hat{\Delta}_S\|_{w1} + \|\hat{\Delta}_{S^c}\|_{w1})) = \frac{2}{n} u'X\hat{\Delta} + 2\lambda(\|\hat{\Delta}_S\|_{w1} - \|\hat{\Delta}_{S^c}\|_{w1})$$ Rewrite $\frac{2}{n} u'X\hat{\Delta}$ as $\frac{2}{n} u'XW^{-1}W\hat{\Delta}$ and use Hölder's inequality to get: $$\frac{2}{n} u'X\hat{\Delta} \leq \left| \frac{2}{n} u'X\hat{\Delta} \right| \leq \left\| \frac{2}{n} u'XW^{-1} \right\|_\infty \|W\hat{\Delta}\|_1 = \left\| \frac{2}{n} u'XW^{-1} \right\|_\infty \|\hat{\Delta}\|_{w1}$$ The last equality comes from the definition of $\|.\|_{w1}$. Plug it once again into the basic inequality: $$0 \leq \frac{1}{n} \|X\hat{\Delta}\|_2^2 \leq \frac{2}{n} u'X\hat{\Delta} + 2\lambda(\|\hat{\Delta}_S\|_{w1} - \|\hat{\Delta}_{S^c}\|_{w1}) \leq \left\| \frac{2}{n} u'XW^{-1} \right\|_\infty \|\hat{\Delta}\|_{w1} + 2\lambda(\|\hat{\Delta}_S\|_{w1} - \|\hat{\Delta}_{S^c}\|_{w1})$$
Use that $\lambda \geq \left\|\frac{2}{n} u'XW^{-1}\right\|_\infty$ to get: $$0 \leq \lambda\|\hat{\Delta}\|_{w1} + 2\lambda(\|\hat{\Delta}_S\|_{w1} - \|\hat{\Delta}_{S^c}\|_{w1}) = \lambda(\|\hat{\Delta}_S\|_{w1} + \|\hat{\Delta}_{S^c}\|_{w1} + 2\|\hat{\Delta}_S\|_{w1} - 2\|\hat{\Delta}_{S^c}\|_{w1}) = \lambda(3\|\hat{\Delta}_S\|_{w1} - \|\hat{\Delta}_{S^c}\|_{w1})$$ Hence $\|\hat{\Delta}_{S^c}\|_{w1} \leq 3\|\hat{\Delta}_S\|_{w1}$, i.e. $\hat{\Delta} \in C_3^{w1}$. $\blacksquare$
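Both results above can be checked numerically. The following is a Monte-Carlo sketch under simplifying assumptions of my own, not code from the text: Gaussian noise $u \sim N(0, \sigma^2 I)$, a fixed design with columns scaled so $\|X_j\|_2^2 = n$, and unit weights $W = I$ (so $\omega_{\min} = 1$). Part (i) checks the union/Chernoff tail bound; part (ii) checks Lemma 1's cone condition for a weighted-LASSO fit computed by a simple ISTA loop.

```python
import numpy as np

# Sketch under the assumptions in the lead-in: Gaussian noise, normalized
# fixed design, unit weights; all constants and the solver are illustrative.
rng = np.random.default_rng(0)
n, p, s, sigma, A = 200, 40, 4, 0.5, 3.0
X = rng.standard_normal((n, p))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)   # normalize: ||X_j||_2^2 = n
w = np.ones(p)                                # unit weights, omega_min = 1

# (i) empirical frequency of the "bad" event vs. 2 p^{1-(A omega_min)^2/4}
lam_theory = A * sigma * np.sqrt(2 * np.log(p) / n)
trials = 2000
hits = sum(
    np.max(np.abs(2 * X.T @ (sigma * rng.standard_normal(n)) / n)) >= lam_theory
    for _ in range(trials)
)
bound = 2 * p ** (1 - A**2 / 4)               # omega_min = 1
assert hits / trials <= bound

# (ii) cone condition ||Delta_{S^c}||_{w1} <= 3 ||Delta_S||_{w1} (Lemma 1)
beta0 = np.zeros(p)
beta0[:s] = 2.0
u = sigma * rng.standard_normal(n)
y = X @ beta0 + u
lam = 1.01 * np.max(np.abs(2 * X.T @ u / n))  # lam >= ||2u'XW^{-1}/n||_inf

# ISTA for (1/n)||y - Xb||_2^2 + 2*lam*||b||_{w1}
step = n / (2 * np.linalg.norm(X, 2) ** 2)    # 1/L of the smooth part
b = np.zeros(p)
for _ in range(5000):
    z = b + step * (2 / n) * X.T @ (y - X @ b)
    b = np.sign(z) * np.maximum(np.abs(z) - 2 * lam * step * w, 0.0)

delta = beta0 - b
lhs = np.sum(w[s:] * np.abs(delta[s:]))       # ||Delta_{S^c}||_{w1}
rhs = 3 * np.sum(w[:s] * np.abs(delta[:s]))   # 3 ||Delta_S||_{w1}
print(hits / trials, round(bound, 4), lhs <= rhs + 1e-6)
assert lhs <= rhs + 1e-6
```

With these settings the empirical exceedance frequency stays well below the theoretical bound, and the fitted error vector lands in the cone, as the lemma predicts.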
Experimental Study on Chemical Treatment Performance of Quicklime Based on a Multi-layer Elastic Model Bingling Yan Inner Mongolia Vocational and Technical College of Communications, Chifeng 024000, China email@example.com When quicklime is added to over-wet soil, $Ca^{2+}$ is dissociated and undergoes exchange reactions with the $Na^+$ and $H^+$ present in the soil, reducing the thickness of the water film on the surface of the original soil particles and thereby lowering the water content and plasticity of the over-wet soil. In this paper, typical thickness and modulus values of the reinforced layer of quicklime-processing over-wet soil were selected. Then, taking the standard dual-wheel single-axle load BZZ-100 as an example and using the multi-layer elastic model as the basis, the influence of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in the soil base was analyzed. It was found that increasing either the thickness or the modulus of the reinforced layer reduces the dynamic deviatoric stress in the soil base; the effect of the thickness is weak, while that of the modulus is significant. 1. Introduction Over-wet soil refers to very moist clay or silty soils with high moisture content, featuring poor stability, low bearing capacity, and a tendency to deform (Zhou, 2017). During construction, the "spring phenomenon" often occurs if over-wet soil is improperly treated: when the subgrade soil is rolled, the over-wet soil subsides under pressure while its surroundings rebound, forming a soft plastic mass; under these conditions the subgrade soil easily fails to meet the specified compaction requirements and becomes loose (Chen, 2017).
If it is not found in time, then after the completion of the project the over-wet soil section can easily suffer instability, deformation, and even subsidence under various ground loads, seriously reducing the quality of the project. At this stage, over-wet soil can be treated by sun-curing (drying), replacement, or admixture methods (Kun, 2017). The sun-curing method requires clear weather and consistently high temperatures; although construction is simple, drying is time-consuming and site-specific. The replacement method, though simple and effective, involves a huge amount of work and high costs. Therefore, most projects choose the admixture treatment method (Xu and Wang, 2017). Despite its cost, admixture treatment can quickly and effectively reduce the water content of the over-wet soil so that the over-wet zone can meet the rolling requirements. The admixtures mainly include quicklime, hydrated lime, cement, lime ash, and NCS curing agents. Considering cost and construction demands, quicklime has become the main admixture for processing over-wet soil. Domestic and foreign scholars have done a great deal of research on the reinforced layer of quicklime-processing over-wet soil, and a series of results have been obtained on the lime incorporation ratio, construction process design, and the mechanical properties of the strengthening layer. In 2014, Qian et al. conducted an experimental study of the effect of water content, degree of compaction, and stress state on the resilience modulus of quicklime-processing over-wet soil, and obtained a resilience-modulus prediction model. Wang Tianliang et al. studied the influence of freeze-thaw cycles on the mechanical properties of quicklime-processing over-wet soil through indoor static and dynamic triaxial tests (Qian, 2016).
It is easy to encounter over-wet soil in road works; it not only severely reduces the stability of the subgrade and brings serious safety hazards, but also shortens the engineering life cycle. As one of the main over-wet soil treatment methods, quicklime processing can not only effectively reduce the water content of over-wet soil but also reduce its plasticity, so that the soil layer reaches a compactable condition. At present, studies have mainly addressed the treatment process, lime incorporation ratio, and treatment methods and effects, while few focus on the mechanical properties of the reinforced layer of quicklime-processing over-wet soil. Investigating the effect of the reinforcement layer on the mechanical response of the pavement helps to further optimize the design parameters of quicklime-processing over-wet soil. In view of this, through mechanical analysis, this paper explores the influence of the modulus and thickness of the reinforced layer of over-wet soil on the dynamic deviatoric stress in the soil base, in order to provide theoretical support for the application of quicklime-processing over-wet soil in road engineering. 2. The mechanism of quicklime-processing over-wet soil 2.1 Identification of over-wet soil Before construction, the soil quality shall be checked by calculating the average consistency of the soil, so as to identify whether it is over-wet soil according to the wetness and consistency of the soil and the relevant engineering specifications. The formula for calculating the average consistency of the soil is $$w_c = \frac{w_L - \bar{w}}{w_L - w_p}$$ where $w_L$ is the liquid limit of the soil, $w_p$ the plastic limit, $\bar{w}$ the average water content, and $w_c$ the average consistency. When $w_c \geq 1$, the soil is semi-solid and can be used normally.
For clay soil, if $w_c \leq 0.8$, the soil is identified as over-wet; for silty soil, it is identified as over-wet when $w_c \leq 0.75$. 2.2 The specific improvement of quicklime on over-wet soil First of all, quicklime can effectively reduce the water content of over-wet soil (Xiao, 2016). On the one hand, when quicklime is incorporated into over-wet soil, the mixing of dry and wet materials reduces the water content. On the other hand, quicklime reacts chemically with water to produce calcium hydroxide: $CaO + H_2O = Ca(OH)_2$. In this process, about 32 grams of water are consumed per 100 grams of calcium oxide, with much heat released at the same time, which further promotes the evaporation of the moisture originally in the over-wet soil. Secondly, the added quicklime can expand the moisture-content range within which over-wet soil can be compacted. There is a clear requirement for the dry density of soil in road construction, and the maximum dry density of each soil corresponds to an optimal moisture content; the range of dry density required by construction corresponds to a minimum and a maximum critical moisture content, and when the soil moisture content in the construction area falls within this range, the construction requirement is met. The incorporation of quicklime into over-wet soil raises the optimum moisture content, and with it the critical moisture-content range corresponding to the required dry density (Han, 2016). Finally, the addition of quicklime can effectively reduce the plasticity of over-wet soil. When quicklime is incorporated into the over-wet soil, $Ca^{2+}$ is dissociated; with the large number of $Na^+$ and $H^+$ in the soil, the three ions undergo exchange reactions that reduce the thickness of the water film on the surface of the original soil particles (Yan et al., 2011).
And through the cementation between lime and soil, the soil particles agglomerate or flocculate, thereby reducing the plasticity of over-wet soil. In this way, compaction can be achieved easily and the soil stability further improved. 3. Analysis of the influence 3.1 Model selection and methods The standard dual-wheel single-axle load BZZ-100 was used to analyze the stress in the quicklime-processing over-wet soil pavement. The contact surface between the tire and the road surface is represented by a circular uniformly distributed load; since the contact area is $P/p$, the diameter of the equivalent circle is $d = 2\sqrt{\frac{P}{\pi p}}$, where the wheel load $P$ (N) is the axle load divided by 4 and $p$ (kPa) is the tire ground pressure. Because wheel loading is transient, the visco-plastic deformation of the pavement structure is extremely small; a high-strength, thick pavement can therefore be treated as a linear elastic system, and a multi-layer elastic model can be used for the calculation and analysis. The multi-layer elastic model is represented in cylindrical coordinates, as shown in Figure 1, where $E_i$ is the rebound modulus of each layer, $v_i$ the Poisson's ratio of each layer, $q$ the circular uniformly distributed load, $a$ its radius, and $H$ the distance between the top surface and the uppermost boundary of the bottom layer. Given that pavement design is mostly based on analyzing the mechanical properties of the soil base through the rebound modulus, this study mainly used the rebound modulus of the top surface of the soil base to analyze the influence of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in the soil base.
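As a quick sanity check on the equivalent-circle relation for the BZZ-100 load (a sketch only: the contact area is $A = P/p$ and, for a circle, $A = \pi d^2/4$, hence $d = 2\sqrt{P/(\pi p)}$; the wheel load and tire pressure below are typical published BZZ-100 values, assumed here for illustration):

```python
import math

# Equivalent-circle diameter for the BZZ-100 load; the wheel load and tire
# pressure are typical published values, assumed here for illustration.
P = 100e3 / 4      # wheel load in N: 100 kN axle load divided by 4
p = 0.7e6          # tire ground pressure in Pa (0.7 MPa)
d = 2 * math.sqrt(P / (math.pi * p))   # contact area P/p = pi*d^2/4
print(round(d, 3))  # 0.213 (m), i.e. an equivalent-circle diameter of about 21.3 cm
```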
Specifically, based on multi-layer elastic theory, the thickness and modulus of the reinforced layer of quicklime-processing over-wet soil and the vehicle load were used to calculate the deflection of the top surface of the soil base in the pavement structure system, and the principle of equivalent deflection was then used to back-calculate the rebound modulus of the top surface of the soil base. The specific calculation process was as follows. Stage 1: Collect the design data, determine the pavement structure, obtain the thickness and modulus values of the reinforced layer of quicklime-processing over-wet soil, set the trial thickness or modulus of the reinforced layer, and then apply the vehicle load to calculate the reference deflection value $l_0$ of the top surface of the reinforcement layer. Stage 2: Under the same or similar vehicle load conditions, place the above pavement structure layer on a homogeneous elastic semi-infinite soil base, then repeatedly adjust the trial rebound modulus and recompute the deflection of the top surface of the soil base until its difference from the value $l_0$ obtained in Stage 1 meets the set tolerance; the final adjusted value is the rebound modulus of the top surface of the soil base. 3.2 The experimental process Before analyzing the effect of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in the soil base, its effect on the strength of the soil base and on the compressive stress at the top of the soil base had to be determined; this was done with the BISAR 3.0 program from the two perspectives of the thickness and modulus of the reinforced layer of quicklime-processing over-wet soil.
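The two-stage back-calculation just described is, in effect, a one-dimensional search: adjust a trial rebound modulus until the computed top-surface deflection matches the reference value $l_0$. A minimal bisection sketch, assuming a user-supplied `deflection(E)` wrapper around the multi-layer elastic calculation (e.g. a BISAR run) that decreases monotonically in the modulus; the function name, bracket, and toy deflection model are my own illustrations, not the paper's procedure:

```python
# Bisection sketch of the Stage 1 / Stage 2 back-calculation; `deflection(E)`
# stands in for the multi-layer elastic program and is assumed monotonically
# decreasing in the trial modulus E.
def back_calculate_modulus(deflection, l0, E_lo=10.0, E_hi=500.0, tol=1e-3):
    """Return the modulus E (MPa) whose deflection matches l0 within tol."""
    E = 0.5 * (E_lo + E_hi)
    for _ in range(100):
        E = 0.5 * (E_lo + E_hi)
        l = deflection(E)
        if abs(l - l0) <= tol:
            break
        if l > l0:      # deflection too large -> trial modulus too soft -> raise E
            E_lo = E
        else:
            E_hi = E
    return E

# Toy stand-in for the elastic-layer program (purely illustrative):
l0 = 1.0 / 40.0
E = back_calculate_modulus(lambda E: 1.0 / E, l0)
print(round(E, 2))  # converges near 40 MPa for this toy model
```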
Firstly, the effect of the thickness and modulus of the reinforced layer of quicklime-processing over-wet soil on the strength of the soil base was analyzed based on the multi-layer elastic model. According to the results of tests with different moisture contents, the modified soil base rebound modulus ranged from 10 MPa to 30 MPa, the thickness of the reinforced layer of quicklime-processing over-wet soil (Antiohos et al., 2006; Ruiz et al., 2008) ranged from 20 cm to 60 cm, and its modulus ranged from 100 MPa to 500 MPa. Within these ranges, the efficacy of the reinforced layer of quicklime-processing over-wet soil can be effectively exerted, as shown in Table 1.

Table 1: Applicable ranges for efficacy of the reinforced layer of quicklime-processing over-wet soil

| Soil base rebound modulus (MPa) | Thickness of reinforced layer (cm) | | | | | Modulus of reinforced layer (MPa) | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | 20 | 30 | 40 | 50 | 60 | 100 | 200 | 300 | 400 | 500 |
| 10 | 10.9 | 11.6 | 12.4 | 13.5 | 14.2 | 11.4 | 12.5 | 13.7 | 14.9 | 16.3 |
| 15 | 16.5 | 17.5 | 18.9 | 19.5 | 20.9 | 17.9 | 18.7 | 19.8 | 21.4 | 22.8 |
| 20 | 21.8 | 23.7 | 24.6 | 25.8 | 27.6 | 22.8 | 23.6 | 25.4 | 27.5 | 29.9 |
| 25 | 27.2 | 28.9 | 30.4 | 31.9 | 33.1 | 28.4 | 29.8 | 31.2 | 34.1 | 36.4 |
| 30 | 32.8 | 34.7 | 36.7 | 39.1 | 40.3 | 32.7 | 35.9 | 37.6 | 40.5 | 43.7 |

From the table it can be seen that, for a given soil base rebound modulus, the strength of the quicklime-processing over-wet soil subgrade increases with both the thickness and the modulus of the reinforcement layer, and over the tabulated ranges the gain from increasing the modulus is larger than the gain from increasing the thickness.
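The comparison just stated can be checked directly against the tabulated values; a minimal sketch over the 30 MPa row of Table 1:

```python
# Values copied from Table 1 for a soil base rebound modulus of 30 MPa;
# keys are reinforced-layer thickness (cm) and modulus (MPa) respectively.
strength_by_thickness = {20: 32.8, 30: 34.7, 40: 36.7, 50: 39.1, 60: 40.3}
strength_by_modulus = {100: 32.7, 200: 35.9, 300: 37.6, 400: 40.5, 500: 43.7}

gain_thickness = strength_by_thickness[60] - strength_by_thickness[20]  # 40.3 - 32.8
gain_modulus = strength_by_modulus[500] - strength_by_modulus[100]      # 43.7 - 32.7
print(gain_thickness, gain_modulus)
# Over their tabulated ranges, the modulus produces the larger strength gain:
assert gain_modulus > gain_thickness
```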
Secondly, the rebound modulus of the over-wet soil base and the modulus of the reinforced layer of quicklime-processing over-wet soil were set to 15 MPa and 200 MPa respectively, and the influence of the thickness and modulus of the reinforced layer on the compressive stress at the top of the soil base was analyzed. The calculation points were set as shown in Figure 2. The x axis is the driving direction of the vehicle, the y axis the direction of the road cross-section, and the z axis the depth direction of the roadbed and road surface. ![Figure 2: Calculating points](image) According to Figure 2 and the contact position between the wheels and the subgrade surface, the coordinates of the calculation points were selected as shown in Table 2. Point 1 is the inner edge of the wheel's contact area with the subgrade surface; Point 2 is the center of the contact area; Point 3 is the outer edge of the contact area; Point 4 is the middle of the gap between the wheels; Point 5 is the midpoint of Points 1 and 2; Point 6 is the midpoint of Points 2 and 3.

**Table 2: Coordinates of calculating points (unit: m)**

| Point | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| x | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| y | 0.053 | 0.16 | 0.267 | 0.0 | 0.106 | 0.213 |

From the calculation results it can be seen that, for the improvement of the performance of the over-wet soil subgrade, when the thickness of the reinforced layer ranged from 20 cm to 60 cm, the maximum change rate of the compressive stress at the top of the soil base was 29%; when the modulus of the reinforcement layer ranged from 100 MPa to 500 MPa, the maximum change rate was 23.7%.
The optimum value of the rebound modulus of the over-wet soil base was accordingly taken as 15 MPa, with the modulus of the reinforced layer of quicklime-processing over-wet soil at 200 MPa. Finally, because the dynamic deviatoric stress generated by the traffic load in the soil base can, by accumulation, lead to plastic deformation of the roadbed, the traffic load was further applied on the basis of the above optimal values to analyze the effect of the thickness and modulus of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in the soil base. The point 1.2 m below the subgrade surface under Point 4 in Figure 2 was taken as the research object, and the dynamic deviatoric stress generated by the traffic load in the soil base was calculated through $\sigma_d = \sigma_z - \frac{\sigma_x + \sigma_y}{2}$, where $\sigma_d$ is the dynamic deviatoric stress due to the traffic load, $\sigma_z$ the vertical stress, and $\sigma_x$ and $\sigma_y$ the stresses in the x and y directions respectively. 3.3 Experimental results For the thickness of the reinforced layer of quicklime-processing over-wet soil, the modulus of the reinforced layer was fixed at 200 MPa, based on the optimum value of 15 MPa for the rebound modulus of the over-wet soil base. Under traffic load, the greater the thickness of the reinforcement layer, the smaller the dynamic deviatoric stress in the soil base, as shown in Figure 3. As the thickness of the reinforced layer increased from 20 cm to 60 cm, the dynamic deviatoric stress in the soil base decreased by 0.03. This means that a change in the thickness of the reinforcement layer has little influence on the dynamic deviatoric stress in the soil base.
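The deviatoric-stress formula above is straightforward to evaluate; a minimal sketch with hypothetical stress values (not results from the study):

```python
# sigma_d = sigma_z - (sigma_x + sigma_y) / 2; the example stresses are
# hypothetical, for illustration only.
def deviatoric_stress(sigma_z, sigma_x, sigma_y):
    """Dynamic deviatoric stress from the vertical and horizontal stresses."""
    return sigma_z - (sigma_x + sigma_y) / 2.0

print(deviatoric_stress(sigma_z=60.0, sigma_x=20.0, sigma_y=30.0))  # 35.0
```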
![Figure 3: The influence of the thickness of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in soil base](image) Similarly, for the modulus of the reinforced layer of quicklime-processing over-wet soil, the thickness of the reinforced layer was fixed at 40 cm, based on the optimal rebound modulus of the over-wet soil base. Under traffic load, the higher the modulus of the reinforced layer, the smaller the dynamic deviatoric stress in the soil base, as shown in Figure 4. As the modulus of the reinforced layer increased from 100 MPa to 500 MPa, the dynamic deviatoric stress in the soil base decreased by 0.27, indicating that a change in the modulus of the reinforcing layer has a significant influence on the dynamic deviatoric stress in the soil base. ![Figure 4: The influence of the modulus of the reinforced layer of quicklime-processing over-wet soil on the dynamic deviatoric stress in soil base](image) 4. Conclusion In summary, the incorporation of quicklime into over-wet soil dissociates $Ca^{2+}$, which, through exchange reactions with the $Na^+$ and $H^+$ in the soil, reduces the water-film thickness on the surface of the original soil particles. In addition, the cementation between lime and soil reduces the water content and plasticity of the over-wet soil and enhances the stability of the reinforced layer. As a component of the subgrade pavement structure, the reinforced layer of quicklime-processing over-wet soil influences the dynamic deviatoric stress in the soil base mainly through its thickness and modulus. When the thickness and modulus of the reinforced layer increase, the dynamic deviatoric stress in the soil base decreases, with little effect from the thickness change and a significant effect from the modulus change.
Because the processing method is simple and the treated soil gains strength, quicklime-processed over-wet soil brings good economic benefits in practical road projects. Therefore, in the future, quicklime treatment of over-wet soil should be fully utilized to provide technical support for road engineering optimization in seasonally frozen areas. References Antiohos S., Papageorgiou A., Tsimas S., 2006, Activation of fly ash cementitious systems in the presence of quicklime. Part II: Nature of hydration products, porosity and microstructure development, Cement and Concrete Research, 36(12), 2123-2131, DOI: 10.1016/j.cemconres.2006.09.013 Chen Q.H., 2017, Construction Method of Overwetting Soil in Roadbed Construction, Value Engineering, 36(34), 127-129. Han H.B., 2016, Treatment of Overwetting Soil in Roadbed Construction, Urban Architecture, 13(5), 287. Kun T.L., 2016, The Treatment and Application of Wet Soil in Highway Subgrade Construction, Urban Architecture, 13(3), 272. Qian M.G., 2016, Technical analysis of the construction of wet soil in highway subgrade construction, Engineering Technology: Abstract Edition, 1(4), 00234. Ruiz V., Ruiz D., Germat A.G., Grimes J.L., Murillo J.G., Wineland M.J., Anderson K.E., Maguire R.O., 2008, The Effect of Quicklime (CaO) on Litter Condition and Broiler Performance, Poultry Science, 87(5), 823-827, DOI: 10.3382/ps.2007-00101 Xiao R., 2016, Treatment Technology and Disease of the Wet Loess Roadbed, Low Carbon World, 6(2), 167-168. Xu K., Wang Z.F., 2017, Analysis of the Deformation Characteristics of Overwet-loess Subgrade in Huangwei Expressway, Journal of West Anhui University, 23(5), 137-140. Yan X.T., Kong Y.R., Yang L., 2011, Study on the Mechanism of Lime-modified Wet Soil Subgrade, Jilin Transportation Technology, 9(1), 17-19.
Zhou Z.J., 2017, Application of Overwetting Soil Treatment Technology in Highway Subgrade Construction, Heilongjiang Transportation Technology, 40(3), 94-94, DOI: 10.3969/j.issn.1008-3383.2017.03.058
THIS ISSUE Unlimited Power... We continue our synopsis, review and commentary on the book and the methods advocated by Anthony Robbins, with examples for the shooter and the shooting coach added by the reviewer. Part Three includes such topics as the power of precision, the magic of rapport, the distinctions of excellence, handling resistance and solving problems, and the power of perspective. NEXT ISSUE Unlimited Power – Conclusion... We complete our synopsis, review and commentary on the book and the methods advocated by Anthony Robbins, with examples for the shooter and the shooting coach added by the reviewer. Part Four includes such topics as anchoring yourself to success, using your values to develop success, five keys to wealth and happiness, the power of persuasion and the challenge of 'living excellence'. One Thin Wire... A story about shooters reaching past their limitations to new levels of success, originally published by Precision Shooting. UNLIMITED POWER - PART THREE Chapter XII – The Power of Precision Robbins says that when Grinder and Bandler studied successful people they found that one of their most important attributes was precise communications skills. They also found that these people distinguished between what they needed to know and what they didn't need to know and focused on the former. Robbins says that in order to get what you want, you need to ask for it. And then he provides guidelines for how to do this: 1. Be specific. 2. Ask the right person (someone who can help you). 3. Create value for them. 4. Ask with authenticity. Be confident, show your conviction and sound sincere. 5. Keep asking until you get what you want. (Change the message or the person you ask, but persist.) In order to keep your communications as precise and to the point as you need, Robbins suggests the following guidelines: 1. Universals rarely are. 
If you hear yourself use a universal term like 'all' or 'never', question whether the statement is really true, and if it is not true, restate it specifically until it is true. So, the shooter who says, "I always blow my last shot," needs to state this more honestly and specifically, probably something like, "I blow my last shot when I give up trying" or "I blow my last shot when I let myself get nervous." As negative as they are, these are statements that the coach can start to do something with. 2. Negatives don't go anywhere. If you use words like 'don't' or 'can't' or 'shouldn't' you are limiting without creating the picture of the possibilities you want to communicate. If the shooter says, "I can't read the wind," he is closing the door to learning how to read the wind. The shooter needs to say, "I need to learn to memorize the details of the flags in order to improve my wind reading." 3. Verbs need to be specific. When you use a verb, make sure that you are conveying the precise action that is involved. Make sure you answer the question "how". It is not enough for a coach to say, "You need to develop a smaller holding pattern" without telling the shooter specifically how that can be done.¹ 4. Nouns need to be specific, whether they depict locations, people, concepts or things. One of the most common "fuzzy" nouns is "they", as in "they are against it." This just puts you in a "stuck state"... you just have to ask, "Who are they and what exactly are they against?" When you get the answers, you have something to work with. 5. "Too much!" This is the dreaded 'unknown comparative'. Your idea or plan is met with the response of "That's too expensive," or "That's too hard," or "That will take too long". To get to the specific objection, you need to ask, "Compared to what?"

--- ¹ John Grinder and Richard Bandler are the inventors of NLP, neuro-linguistic programming, on which much of Robbins' thinking is based.
Robbins identifies several other mental traps that a lack of precision can create. There are certain words that are like red flags to a precision communicator... judgmental words like "good" or "bad", for example. These should be challenged with "According to whom?" or "In what way?" or "How do you know that?" Another red flag is a sentence that includes the phrase "made me"; for example, "He made me mad." Think about the causal relationship here: if you are in control of your mental representations, then you can "make you" mad, but no one else can. Similarly, if someone says, "I just know... something," you need to ask, "How do you know that?" This one is particularly useful for querying your own internal dialogue or self talk. In addition, there are words that are inherently vague, and clear communicators will avoid them when they speak and challenge them when they hear them. These words are basically nouns that have been constructed from verbs, such as "attention", which was formed from the process of "attending", or "experience", which was formed from the process of "experiencing". The easiest way to get more specificity is to change the noun back into a verb; for example, ask, "What do you want to experience?" One of the phrases often heard in shooting circles is "attention control"; perhaps newcomers would better understand "control of the process of attending". Robbins refers back to NLP (neuro-linguistic programming), where asking the right questions is emphasized. And the right questions are "outcome questions". This simply means changing the direction of the comment away from the problem and towards the solution or outcome. As a coach, you can really help your shooter with this technique. If a shooter says, "I flinched on that shot," ask him "What is the solution to flinching?" Another tip Robbins gives is to avoid asking "why" and ask "how" instead. 
For example, instead of asking why a shooter didn't do well (on that shot, on that match) ask him what he needs to perform better or how you can help him get there.² As Robbins says, if you try a piece in a jigsaw puzzle and it does not fit, you don't take it as a failure and stop... you take it as feedback and carry on. Keep looking for the question or phrase that will transform a problem into a communication that will lead to a solution. Chapter XIII - The Magic of Rapport What is rapport? You know it when you have it with someone, but do you know what causes it? Robbins says that a feeling of rapport is generated when you see something similar to yourself in another person. --- ² This is core to the MilCun method of Solution Analysis. We also invented the phrase, "Great advice, Coach, but how?" to focus the coaching on how to get things done. --- So how do we create rapport? We create or discover things we have in common. You can mirror interests (like shooting sports), associations (like friends in common), and beliefs (political, social, sports theories, etc.). These are communicated through words, but verbal communication is a very small part of the whole communication package. Experts estimate that the words we use provide only 7 percent of what we communicate. Another 38 percent comes from our tone of voice. The biggest part of our communication comes from our physiology or our body language... our facial expressions, our gestures, our posture, and our movement. Even if all we have in common is simply body language, rapport develops. If you learn to mirror another person's body language (subtly, of course), you will not only develop rapport, you will genuinely start to understand them better. You need to develop keen observation skills and practice for a while, but after a while you will start to do it automatically. One of the keys to developing rapport is to identify a person's primary representational system: visual, auditory, or kinesthetic.
As previously described, there are a whole set of behaviors that go with each of these systems. Once you have identified a person's representational system, all you have to do is match it. If you think this is manipulative, then be aware that it is what you have been doing unconsciously all of your life. When you do it unconsciously, you are not particularly selective about who you choose to develop rapport with... more likely, they choose you. In fact, if you do not mirror someone, in order to develop rapport, he must mirror you. In the end, it's not a matter of manipulation; it is a matter of volunteering to be the one that is flexible enough to enter another person's world. The most effective leaders in the world are adept at all three representational systems (visual, auditory, kinesthetic). We tend to trust people who communicate with all three systems, and who are congruent (are giving the same message from all three systems as well as with the meaning of their words). Successful people have a great talent for creating rapport. Once you have learned to mirror effectively, you can add another dimension to your skill. Robbins calls it "pacing" and "leading". Once you have mirrored the person and have established rapport, you continue "pacing" with his body language for a time, and then you start to make small changes to your own body language ("leading"). If you have established sufficient rapport, the person will follow your lead. One essential teaching of NLP is that the meaning of your communication is the response you elicit. For coaches, this is a very important statement. The best coaches establish rapport, so their message gets through. The best coaches know not only their subject (shooting) but also their students (shooters). They understand each athlete's representational system, and cater to it. They know how to transfer their knowledge from their own mental map to the shooter's mental map. 
Robbins says that there's another way to establish rapport, and this is the subject of the next chapter. Chapter XIV - Distinctions of Excellence: Metaprograms Robbins says that the quickest way to find out just how different people are, is to do a little public speaking. You can say the same thing to a room full of people and get a hundred different reactions. The reason is that everyone has his own internal way of sorting or filtering your message. The filters they are using are called "metaprograms". These filters help us deal with information. Large amounts of information can be processed by our brains because these filters (or metaprograms) categorize, select and delete the information before we become aware of it. To communicate effectively with a person, you have to understand his metaprograms. There are seven key areas to understand: 1. Moving towards or moving away. Humans are motivated by moving towards things that are pleasurable and/or by moving away from things that are not. While people will use both of these techniques to navigate through life, usually one dominates. To find out which way a person moves, ask him what he wants in something - his profession, his shooting career, his family. If he tells you what he wants, he is moving towards; if he tells you what he doesn't want, he is moving away. If you want to motivate your shooter, you need to know which metaprogram he is using. If he is moving towards, you can motivate him to train by emphasizing the good things that will happen when he trains carefully and well. If he is moving away, you can motivate him to train by emphasizing the bad things that will happen if he fails to attend to his training regime. 2. Internal or external frames of reference. If you have an external frame of reference, you will consider the opinions of others to determine the worth of something. 
If you have an internal frame of reference, your proof of something's worth comes from the inside... it's right because it feels right to you. This pair of metaprograms is context-dependent. If you have the benefit of years of experience, you are more likely to have a strong internal frame of reference. If you are new to something, you are likely to rely more on an external frame of reference. An effective leader (an effective coach) has to have a strong internal frame of reference. While the leader or the coach has to be able to take in new information from the outside, he will usually assimilate it into his own frame of reference. 3. Sorting by self or sorting by others. Some people look at things in the world in terms of "what's in it" for them alone; some people look at things in the world in terms of what they can do for themselves and others. People don't usually fall into one extreme (self-centered egotist) or the other (selfless martyr). 4. Matchers and mismatchers. Matchers tend to see similarities, or similarities with exceptions; mismatchers tend to see differences, or differences with exceptions. Among shooters and shooting coaches, I believe that matchers who see similarities with exceptions are the most effective. The shooting process needs to be consistently the same, unless the conditions change. Mismatchers will tend to feel obliged to vary their routine, and will sabotage the very process that is bringing them success. 5. The verification (or convincer) metaprogram. The first part of the convincer metaprogram is "what does it take to convince the person?" and the second part is "how often does that proof have to be demonstrated?" The "what does it take" part refers to whether the person needs to see it, hear about it, do it, and/or read about it. The second part refers to the number of times that the proof has to be demonstrated: once, two or more times, over a period of time, or every time. 
People have many different needs when it comes to being convinced. For some people, one demonstration is sufficient and they will continue to believe and trust until they feel betrayed. Others need more reassurance, or proof, to maintain their state of being convinced. It is critical for a coach to understand the shooter's needs in this regard. If the shooter needs frequent or constant reinforcement that he is doing the right thing, then the coach needs to give it to him. 6. Possibility versus necessity. Some people are motivated by necessity - they more or less take what life offers, rather than seeking out what they really want. Other people are motivated more by what they want to do than what they have to do - they seek out possibilities and opportunities. 7. Working style. Some people work best when they are "independent"; they have to run their own show. Others function best as a part of a group; we call their strategy "cooperative". A third group, using a so-called "proximity" strategy, is in between; they like to work with others, but take responsibility for their own task. Many shooters are "independent" and that sometimes makes coaching them a bit of a challenge. The best approach that I have found is what I think of as "transaction coaching". There is no contract with this type of shooter, no ongoing relationship. But there can be a transaction between the coach and shooter when the shooter sees something of value that the coach can provide. Shooters with a "proximity" strategy are the best to work with because they want to take responsibility for their own shooting, but they enjoy the coaching relationship as well. (I have not personally met any successful shooters who use the "cooperative" strategy in anything other than social situations.) When I read this section of Robbins' book, a light bulb went on. 
I had recently spoken at a sports association meeting where, at the end of my presentation, one member of the audience spoke very rudely and derogatorily. My partner said to me afterwards, "He can't see that there's a world beyond his own front sight." Exactly the right call. The guy is a "sort by self" kind of guy. My message had to do with a project that would benefit the community, not him personally. My message didn't reach him at all. Worse yet, I described the project in terms of "moving towards" a possible great future, and this guy was a "moving away" kind of guy. I failed to use what I think of as "scare tactics"... but using them and personalizing them to the individual might have produced the response I wanted. There are lots of different metaprograms that are used by lots of different people. Some people sort by logic, others by feelings. Some people respond best to details first and others need to see the "big picture" first. Some people are excited by beginnings and others are not satisfied until things are completed. Metaprograms can be changed. Sometimes a "significant emotional event" will cause us to change them. If you've been burned by your metaprogram or if you've missed a big opportunity because of it, you may be motivated to try another method. The other way you can change is... by deciding to. Understanding metaprograms can help you communicate more effectively with others. Metaprograms can also help you understand yourself and, when you want to change your behavior, changing them can help you do so. Chapter XV – How to Handle Resistance and Solve Problems Every shooting coach has had to face shooter resistance at one time or another. Even the most easy-going athlete occasionally has a sticking point, especially when he is a little nervous about an upcoming competition. And most shooters are pretty strong-willed and independent, so the shooting coach often has to handle built-in resistance before he can get on to solving any problems. 
Robbins' key point is that in order to truly handle resistance you must be flexible. Many of us think that the "other guy" needs to be flexible. Actually, if you're the one with the point of view to communicate, you're the one who needs to be flexible. As he says, "You can't communicate by force of will; you can't bludgeon someone into understanding your point of view. You can only communicate by constant, resourceful, attentive flexibility." If what you've tried before isn't working, break the pattern. Try a new approach. Say something different. If that doesn't work, try another approach. Stay friendly, be flexible, and persist. Robbins writes: "There is no such thing as resistance; there are only inflexible communicators who push at the wrong time and in the wrong direction." The way to handle resistance is to "not disagree". Well, how do you ever get your own point of view across if you agree with the other guy? Start your remarks with phrases that indicate that you understand and accept his point of view: "I appreciate..." or "I respect..." or "I agree and..." This way, you acknowledge his point of view and you respect him for having one. When you start this way, the shooter thinks, "Well, he heard me and understood me..." and, if you're lucky, the shooter further thinks, "I guess it's my turn to hear and understand." In any case, he is more receptive to hearing your point of view. When you use the expression "I agree and...", you are not disagreeing; you are adding your own ideas to his. This is a good way not only to improve his receptiveness to your ideas but also to initiate a creative dialog between the two of you. Robbins later makes a related point about the use of the word "but". He advocates that you banish the word "but" from your vocabulary. Instead, use the word "and". I couldn't agree more. (Long ago, I took two words out of my vocabulary... "but" and "should". 
This improved my own internal dialog, and improved my communications with other people.) When you use the word "but" you negate your athlete's entire thought. If you say, "I agree, but..." your athlete says to himself, "He doesn't agree at all. He is now going to tell me why I'm wrong." Whereas, if you say, "I agree, and..." your athlete says to himself, "He agrees with me and he has something to add to my thoughts." The other key point that Robbins makes is that we are creatures of habit, even when the habit is self-destructive. He recounts a lovely story about a psychiatrist visiting a patient in a mental hospital. The patient insisted that he was Jesus Christ, not just spiritually but completely. The psychiatrist one day asked him, "Are you Jesus Christ?" and the man replied, "Yes, my son." The psychiatrist said that he would be back in a minute, and he left. The man was confused, but in a few minutes the psychiatrist returned with a measuring tape. He asked the man to spread his arms and he measured his arm span; in addition, he measured his height from head to toe. Then the psychiatrist left. When he returned, he had a couple of long boards, some large spikes and a hammer. The patient asked him what he was doing. The psychiatrist asked him, "Are you Jesus?" Again the patient replied, "Yes, my son." The psychiatrist said, "Then you know why I'm here." Apparently this cured the patient, for he then excitedly declared, "I'm not Jesus! I'm not Jesus!" This is an example of what Robbins (and psychiatrists) call a "pattern interrupt" - a device that can help you break a pattern or a habit that is not serving your purposes. We have all, at one time or another, let ourselves go down a path that we know is not constructive, yet we carry on. Arguments with family often go this way. Coaching situations can go this way too. 
I once had a coach who was in the habit of saying things like, "If your standing scores were a little higher, you'd be shooting world championship 3-position." He wasn't a stupid man; he probably knew that this wasn't a good way to motivate a shooter; yet he was stuck in the habit of making this type of remark. If I had known then what I know now, I would have said, "I agree and I need help to improve my standing. What do I need to work on?" This would have helped him break the pattern, and certainly would have gotten me further in my shooting. Robbins makes the same two points over and over in this chapter. He repeats his point of view because he knows that what he is saying is counter to what most of us have been taught. The first point, in a nutshell, is that you can "persuade better through agreement than through conquest." The second point is that you are in control of your own behavior. And the key application of these two truths is that when you are communicating (persuading), you must be flexible. The point that Robbins does not cover directly is that in order to be flexible communicators, we need to stay focused on the objective. The objective is to persuade the other person to change his point of view. (The objective is not to prove how smart we are, or how right we are.) We must have the emotional maturity to let go of our own ego long enough to appreciate the athlete's ego. Chapter XVI - Reframing: The Power of Perspective Robbins starts this chapter by making the point that our own perspective determines how we see the world, and how we interpret what goes on in it. One example he uses is the image that can be seen either as a lovely girl or as an old hag, depending on how you look at it. His key point is that people who are successful consistently represent their experiences in ways that support them in producing even greater results for themselves and for others. 
If you are in the habit of seeing things in ways that do not support you, Robbins says, you need to "reframe"; i.e., you need to change your frame of reference. There are two types of reframing that Robbins describes. The first he calls "context reframing", which turns a bad situation to an advantage. He cites Rudolph the Red-Nosed Reindeer as a classic example of context reframing, where the socially undesirable red nose enables Rudolph to "save the day". In shooting sports, a female shooter's lesser upper body strength is reframed as a lower centre of gravity that provides greater stability. The second type of reframing Robbins calls "content reframing", in which you change the way you see, hear or represent the situation. The example Robbins gives is the blind boy whose mother was so good at reframing that he believed he had "unusual vision", a special insight into people and the world. The most common example of reframing for the coach is enabling the athlete to see mistakes as learning situations, or weaknesses as training objectives, or steps up the competition ladder where others might see matches lost. Reframing is the art of the advertiser and the politician. Most of us understand that professionals put their products into the most favorable possible light, regardless of their shortcomings. We, as individuals, usually understand that the process applies to ourselves when we are writing a resume or going for a job interview. Robbins' key point is that we need to be able to reframe ourselves and our situation all the time, constantly and consistently seeing our abilities and our experiences as contributing to our ultimate success. When you are faced with a very upsetting situation, it is sometimes difficult to reframe it. Robbins offers several practical ways to get control: - Put yourself into a resourceful state (as described in the first section of the book). 
- Disassociate yourself by putting the image of the negative situation (or the key person in the situation) in the palm of your hand. - Ask yourself to see the situation from someone else’s point of view. - Pretend the situation is a movie you are watching in a theatre and play it in reverse or in fast forward, then reconstruct it as a cartoon, then set it to silly music. As a coach, this may be more than you are willing to guide your athlete through. However, you can help him feel more resourceful, disassociate and benefit from a negative experience by asking him one simple question: “If you were a coach, what would you say to an athlete who had just gone through this type of experience?” As a coach, I always try to get my athletes to focus on the answers to the following questions: 1. What did you learn from this situation that can help you (and others) in the future? 2. Of the things that happened, what could you have controlled in your favor and how will you do that in the future? 3. Of the things that happened, what could you not control…of those things, which could you influence, avoid or neutralize in the future? (Are there any other contingency plans you need?) 4. What are you going to do right now that will be your first step towards a better situation next time? Robbins cautions that we are not all fully consciously aware of the deeper reasons (secondary benefits) of our behaviors, and that until we are, we may not be able to produce a long-lasting change. He gives an example of a housewife who, when her foot turns numb, gets the secondary benefits of a helpful husband. When her doctor solves her foot problem, she loses her secondary benefits. Therefore, the foot problem reappears. 
In our next issue of CoachNet, we relate this situation to the shooting coach and we continue our synopsis, review and commentary on Robbins’ book “Unlimited Power”, including such topics as: - Anchoring yourself to success, - Using your values to develop success, - Five keys to wealth and happiness, - The power of persuasion, and - The challenge of ‘living excellence’.
I extend a warm welcome to all the delegates attending the Fifth Meeting of the Parties to the Montreal Protocol. Let me begin by thanking the Government of Thailand for its gracious hospitality and excellent arrangements for these meetings, which happen, fortuitously, to coincide with the peak tourist season in Thailand. I would also like to express my deep gratitude to the outgoing Bureau, its energetic Chairman, Mr. Kamal Nath of India, and the Ministers Mr. David N. Magang of Botswana, Mr. Eduardo Mora Anda of Ecuador, Mr. Hans H.M. Alders of the Netherlands and Mr. Ryszard Purski of Poland. The dedication and strenuous efforts of the Bureau have achieved much in the past year. Ladies and Gentlemen: Every meeting, seminar or conference is an opportunity not only for taking stock of the situation and exchanging views and information, but more significantly for gaining fresh perspectives and discovering new meanings. As we begin the Fifth Meeting of the Parties to the Montreal Protocol, we continue the questioning. The Montreal Protocol has been successful. Why? Can it be attributed to the Protocol's inherently flexible character, its inclusiveness? Or is it because of its political acceptability, or its ability to link diverse issues, demonstrating the common advantage of adhering to it? Or can we attribute it to its transparency? When the Montreal Protocol was originally negotiated, the parties recognized dangers that would affect not only all nations but all life on earth, for times far beyond the normal time frame of governments. More than that, the decisions arrived at required a balancing of probabilities, for it was realized that the risks of waiting for more scientific evidence to emerge were infinitely greater. Clearly one reason for the success of the Montreal Protocol is that it is constituted as an on-going process and not merely as a static solution, a freezing of the status quo. 
Thus, relying on periodic scientific, economic and technological assessments, it has adapted itself progressively to rapidly evolving conditions. The continuum of negotiations from Montreal to London to Copenhagen has served not only to clarify several ambiguous provisions of the Protocol and accelerate the phase-out of several ozone depleting substances, but also to put many ambitious work plans in place. We are now in the midst of an orderly process - of which this meeting is an important part - to deal with the threat of the depletion of the ozone layer. As equal partners in a global endeavor, I am sure our decisions in this meeting will lead to more harmonized measures to protect the ozone layer. It is important to remind ourselves that the state of the depletion of the ozone layer continues to be alarming. While we believe that the Montreal Protocol is working well and that the extent of CFCs in the atmosphere has shown a decline, I urge you not to take a complacent view of the situation. The line that divides complacency from catastrophe is very thin. Even now millions of tons of CFC products are en route to their fatal stratospheric rendezvous. As you are aware, even if CFC emissions were to level off, chlorine would continue to accumulate in the atmosphere for some more years. We can see real improvement only after the year 2000. In 1992, the Antarctic hole was at its largest and the ozone layer had been depleted by 60%. The hole covered 37 million square kilometres, compared to 27.4 million square kilometres previously observed. Some stations reported 100% ozone destruction between heights of 14-20 kilometres. The destruction was also significant in the northern latitudes. In February 1993, the ozone levels over North America and most of Europe were 20% below normal. In 1993, very low ozone values over Antarctica appeared as early as August. 
Record low ozone values reported in September 1993 were the lowest ever for that month, and these values have continued into early October. By size, the surface area covered was the largest ever. Clearly this is not the time to break the momentum towards consensus and treaty obligations. There is another disturbing factor. All countries which have reported their data have complied with the control measures of the Protocol. Figures for the year 1991 reveal that all parties not operating under Article 5 have shown reductions in consumption beyond the percentages mandated by the Protocol. The average reduction for these countries was 45%, with two countries, Austria and Sweden, recording nearly 80% reductions. But of the Article 5 countries, only 9 have shown a decrease in their consumption of controlled substances. In fact, 3 countries in this category have shown an increase of more than 80%. The overall increase in countries operating under Article 5 is 54%. While I am aware that these parties have a grace period of ten years, and that the control measures applicable to them become effective only on 1 January 1999, this rapid increase calls for sober reflection on the state of the ozone layer and for bold decisions on increased assistance to the developing countries. There are certain factors that inhibit the full and effective implementation of the provisions of the Montreal Protocol. The first issue that has been causing concern is that of ratification of the various international agreements. Signing a treaty is only the first step - a declaration of intent. The proof lies in formal ratification. Unless a state actually ratifies a protocol, no binding commitments exist under international law. The number of countries which have ratified the Montreal Protocol now stands at 129, including 88 developing countries. 
Thus the Montreal Protocol now covers more than 90% of the population of the world and nearly 99% of the consumption of ozone depleting substances. It is however a matter of regret that only 69 countries have ratified the London Amendments and only 9 have ratified the Copenhagen amendments. It is one year since the parties took the historic decision in Copenhagen to advance the time tables for the phase-outs of many ozone depleting substances and to include more substances to be controlled. These adjustments and Amendment have been communicated to all the Governments by the depositary of the Protocol, the Secretary-General, on 22 March 1993. Consequently, these adjustments are already in force from 22 September 1993. However, the Amendment will come into force only after ratification by at least 20 parties. Since only 9 countries have so far ratified it, the Amendment can come into force by 1 January 1994 as proposed, only if you persuade your governments to take immediate action. It is clearly not enough for the parties to implement the provisions of the various conventions and protocols faithfully, they have also to demonstrate to the world that they are formally committed to implementing them. UNEP urges all the countries who have not yet ratified the London and Copenhagen Amendments to do so immediately. We would also strongly encourage the remaining 50 or so non-party countries to ratify the Montreal Protocol and its Amendments urgently. The second issue that is a cause of some anxiety is the palpable delay in reporting of data by the Parties to the Montreal Protocol. While we are aware that the implementation of the Protocol is well ahead of schedule, many countries have chosen not to report their data. In fact, a third reminder had to be sent to the Parties in May 1993 to report their 1991 and 1992 data. For the year 1991, out of 74 countries, only 46 reported. 
The picture for 1992 is still emerging: as we come to this Meeting, out of 99 countries only 23 have reported data. Some of these non-reporting countries are non-Article 5 Parties. In 1991 and 1992, the non-Article 5 countries which defaulted numbered 5 and 34 respectively. Will it not reflect poorly on the working of the Montreal Protocol if the world perceives only half the countries as fulfilling their obligations under it? Accurate and timely data are an extremely important element in our monitoring and decision-making process. May I remind this distinguished audience of the requirement under Article 7 for all parties to report their 1992 data on production, consumption, exports and imports of each of the controlled substances not later than nine months after the end of the year to which they relate. At the 9th meeting of the Open-Ended Working Group of the Parties to the Montreal Protocol in Geneva, we specifically inquired whether there were any difficulties in reporting the data accurately and in a timely manner. We sought to find ways in which the UN agencies could ameliorate these problems. Now that UNEP, UNDP, UNIDO and the World Bank - the implementing agencies of the Multilateral Fund - have initiated the preparation of country programmes, I hope that the problems of reporting of data will diminish. The third important issue that we have to address is the requirement of the Multilateral Fund for the years 1994, 1995 and 1996. The meeting at Copenhagen saw the establishment of the Multilateral Fund to replace the Interim Multilateral Fund. The Executive Committee has prepared an excellent report in this regard. I would urge you to contribute the maximum possible resources now to reverse the trend of increasing consumption in developing countries. This is in the interest of the ozone layer. 
It is also in the interest of the donors, who would otherwise have to fund the larger incremental cost of phasing out the larger consumption in developing countries that will result if we economize now. I would also like to mention the contributions due for 1991, 1992 and 1993. Out of the 127 million dollars due for 1991 and 1992, 21 million dollars are still outstanding, while for 1993 only about 53 million dollars of the pledged 114 million have been received. Even if we were to ignore the contributions of the countries who had pleaded temporary difficulties, there are some who could pay, but have not. We are now at a crucial stage when a large number of developing countries are requesting assistance and have expressed willingness to proceed faster than mandated by the Montreal Protocol if they are given the necessary technologies and financial assistance. It is imperative that outstanding commitments to the Multilateral Fund be honored. Finally, the entire administration of the Montreal Protocol hinges on the contributions to the Trust Funds. As you are aware, contributions to the Trust Funds of the Montreal Protocol and of the Vienna Convention are much below the pledges. Quite simply, the Secretariat will not be able to function, nor be able to convene and service your meetings, until the pledged sums are paid in full and on time. We can say with some pride that a most significant achievement in 1993 was the phase-out of halons, which only a few years ago were considered irreplaceable. The parties had decided last year that the phase-out would be subject to exemptions for essential uses. 15 such nominations had been received and were scrutinized by the Halons Option Committee, the Technology and Economic Assessment Panel and the open-ended Working Group of Parties. You have their reports before you. They have all concluded that no exemptions are necessary, since technically and economically feasible alternatives or substitutes are available and since enough halons are available for recycling. 
It was a very pleasant experience at the 9th Meeting of the Open-Ended Working Group, when party after party which had submitted nominations for essential uses announced that they were convinced by the report of the Technology and Economic Assessment Panel and the Halons Option Committee and were withdrawing their nomination. A few representatives had mentioned then that they were unable to withdraw the nomination because of lack of mandate from their governments. I hope that these countries have now received the formal mandate from their respective governments to withdraw their nominations. I do hope that the recommendation of the Open Ended Working Group of the Parties, Technology and Economic Assessment Panel and the Halons Option Committee that the production and consumption of Halons will cease in the developed countries by 1 January 1994 will be accepted by this meeting. The year 1995 will be a very significant year in the on-going implementation of the Montreal Protocol. In 1995, the Parties will review, in accordance with Article 5 Paragraph 8, the situation of the developing countries including the effective implementation of financial cooperation and transfer of technology. The Parties will consider such revisions as necessary regarding the schedule of control measures applicable to developing countries. Under Decision 17C adopted last year, the Parties are required to review the financial mechanism in 1995. The modalities of these two reviews must be decided now so that the work can go ahead during 1994. There is a recommendation before you that the Executive Committee is in the best position to carry out both these reviews and give a report to the Parties in early 1995. The report of the Executive Committee can then be considered by the Open-Ended Working Group of the Parties and the final decision taken at the Seventh Meeting of the parties in 1995. 
1995 is also the year to review the control measures applicable to the developing countries with respect to HCFCs, HBCFs and Methyl Bromide. Whether or not trade measures under Article 4 will be applicable to these substances will be examined. We should settle the methodologies for these reviews in this meeting. The suggestion that the Scientific and the Technology and Economic Assessment Panels should look into these issues and come up with a report by November 1994 so that the Open-Ended Working Group can consider it in 1995 and make a recommendation to the meeting of the Parties in 1995, is before you. I must express my pleasure in noting that Nairobi is recommended as the site of the next meeting of the Parties to the Montreal Protocol. UNEP will welcome you with open arms and endeavor to provide you with support and service. Ladies and Gentlemen: The preparatory work for this meeting has been conducted in a spirit of cooperation and with a recognition that a threat remains. The ozone layer still faces a precarious future. If we are to succeed in saving this global resource, we have to focus our energies into making the Protocol work. It is our only hope.
The undersigned requests that the present international application be processed according to the Patent Cooperation Treaty.

**RECORD COPY**

**Box No. I TITLE OF INVENTION**

Use of non-evaporable getter alloys for the sorption of hydrogen in vacuum and in inert gases

**Box No. II APPLICANT**

This person is also inventor.

Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.)

SAES GETTERS S.p.A.
Viale Italia 77
I - 20020 Lainate MI
Italy

Telephone No. Facsimile No. Teleprinter No. Applicant's registration No. with the Office

State (that is, country) of nationality: Italy
State (that is, country) of residence: Italy

This person is applicant for the purposes of:
- [ ] all designated States
- [X] all designated States except the United States of America
- [ ] the United States of America only
- [ ] the States indicated in the Supplemental Box

**Box No. III FURTHER APPLICANT(S) AND/OR (FURTHER) INVENTOR(S)**

Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.)

CODA Alberto
Via per Uboldo 47
I - 21040 Gerenzano VA
Italy

This person is:
- [ ] applicant only
- [X] applicant and inventor
- [ ] inventor only (If this check-box is marked, do not fill in below.)

Applicant's registration No. with the Office

State (that is, country) of nationality: Italy
State (that is, country) of residence: Italy

This person is applicant for the purposes of:
- [ ] all designated States
- [ ] all designated States except the United States of America
- [X] the United States of America only
- [ ] the States indicated in the Supplemental Box

[X] Further applicants and/or (further) inventors are indicated on a continuation sheet.

**Box No. IV AGENT OR COMMON REPRESENTATIVE; OR ADDRESS FOR CORRESPONDENCE**

The person identified below is hereby/has been appointed to act on behalf of the applicant(s) before the competent International Authorities as:
- [X] agent
- [ ] common representative

Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country.)

ADORNO Silvano, PIZZOLI Pasquale, BARDINI Marco, GERMINARIO Claudio
Società Italiana Brevetti S.p.A.
Via Carducci 8
I - 20123 Milano MI
Italy

Telephone No. +39.02.806331
Facsimile No. +39.02.80633200
Teleprinter No. Agent's registration No. with the Office

Address for correspondence: Mark this check-box where no agent or common representative is/has been appointed and the space above is used instead to indicate a special address to which correspondence should be sent.

Continuation of Box No. III FURTHER APPLICANT(S) AND/OR (FURTHER) INVENTOR(S)

If none of the following sub-boxes is used, this sheet should not be included in the request.

| Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) | This person is: |
| --- | --- |
| GALLITOGNOTTA Alessandro Via Marconi 52 I - 21040 Origgio VA Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) |

| State (that is, country) of nationality: | State (that is, country) of residence: |
| --- | --- |
| Italy | Italy |

| Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) | This person is: |
| --- | --- |
| CORAZZA Alessio Via Rlenza 72 I - 22100 Como CO Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) |

| State (that is, country) of nationality: | State (that is, country) of residence: |
| --- | --- |
| Italy | Italy |

| Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) | This person is: |
| --- | --- |
| CACCIA Debora Via Pasteur 37 I - 20025 Legnano MI Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) |

| State (that is, country) of nationality: | State (that is, country) of residence: |
| --- | --- |
| Italy | Italy |

| Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) | This person is: |
| --- | --- |
| BARONIO Paola Via De Andrè Fabrizio 78 I - 21042 Caronno Pertusella VA Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) |

| State (that is, country) of nationality: | State (that is, country) of residence: |
| --- | --- |
| Italy | Italy |

Further applicants and/or (further) inventors are indicated on another continuation sheet.

Continuation of Box No. III FURTHER APPLICANT(S) AND/OR (FURTHER) INVENTOR(S)

If none of the following sub-boxes is used, this sheet should not be included in the request.

| Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) | This person is: |
| --- | --- |
| TOIA Luca Via della Fontana 14/a I - 21040 Carnago VA Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) |

| State (that is, country) of nationality: | State (that is, country) of residence: |
| --- | --- |
| Italy | Italy |

This person is applicant for the purposes of: □ all designated States □ all designated States except the United States of America ☑ the United States of America only □ the States indicated in the Supplemental Box

Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below.) 
| This person is: | | --- | --- | | PORRO Mario Vicolo Natisone 58 I - 21042 Caronno Pertusella VA Italy | □ applicant only ☑ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) | | Applicant’s registration No. with the Office | | State (that is, country) of nationality: | State (that is, country) of residence: | | --- | --- | | Italy | Italy | | This person is applicant for the purposes of: | □ all designated States □ all designated States except the United States of America ☑ the United States of America only □ the States indicated in the Supplemental Box | | Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant’s State (that is, country) of residence if no State of residence is indicated below.) | This person is: | | --- | --- | | | □ applicant only □ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) | | Applicant’s registration No. with the Office | | State (that is, country) of nationality: | State (that is, country) of residence: | | --- | --- | | | | | This person is applicant for the purposes of: | □ all designated States □ all designated States except the United States of America □ the United States of America only □ the States indicated in the Supplemental Box | | Name and address: (Family name followed by given name; for a legal entity, full official designation. The address must include postal code and name of country. The country of the address indicated in this Box is the applicant’s State (that is, country) of residence if no State of residence is indicated below.) | This person is: | | --- | --- | | | □ applicant only □ applicant and inventor □ inventor only (If this check-box is marked, do not fill in below.) | | Applicant’s registration No. 
with the Office | | State (that is, country) of nationality: | State (that is, country) of residence: | | --- | --- | | | | | This person is applicant for the purposes of: | □ all designated States □ all designated States except the United States of America □ the United States of America only □ the States indicated in the Supplemental Box | | Further applicants and/or (further) inventors are indicated on another continuation sheet. | 1. If in any of the Boxes, except Boxes Nos. VIII(i) to (v) for which a special continuation box is provided, the space is insufficient to furnish all the information: in such case, write "Continuation of Box No...." (indicate the number of the Box) and furnish the information in the same manner as required according to the captions of the Box in which the space was insufficient, in particular: (i) if more than two persons are to be indicated as applicants and/or inventors and no "continuation sheet" is available: in such case, write "Continuation of Box No. III" and indicate for each additional person the same type of information as required in Box No. III. The country of the address indicated in this Box is the applicant's State (that is, country) of residence if no State of residence is indicated below; (ii) if, in Box No. II or in any of the sub-boxes of Box No. III, the indication "the States indicated in the Supplemental Box" is checked: in such case, write "Continuation of Box No. II" or "Continuation of Box No. III" or "Continuation of Boxes No. II and No. III" (as the case may be), indicate the name of the applicant(s) involved and, next to (each) such name, the State(s) (and/or, where applicable, ARIPO, Eurasian, European or OAPI patent) for the purposes of which the named person is applicant; (iii) if, in Box No. II or in any of the sub-boxes of Box No. 
III, the inventor or the inventor/applicant is not inventor for the purposes of all designated States or for the purposes of the United States of America: in such case, write "Continuation of Box No. II" or "Continuation of Box No. III" or "Continuation of Boxes No. II and No. III" (as the case may be), indicate the name of the inventor(s) and, next to (each) such name, the State(s) (and/or, where applicable, ARIPO, Eurasian, European or OAPI patent) for the purposes of which the named person is inventor; (iv) if, in addition to the agent(s) indicated in Box No. IV, there are further agents: in such case, write "Continuation of Box No. IV" and indicate for each further agent the same type of information as required in Box No. IV; (v) if, in Box No. VI, there are more than three earlier applications whose priority is claimed: in such case, write "Continuation of Box No. VI" and indicate for each additional earlier application the same type of information as required in Box No. VI. 2. If the applicant intends to make an indication of the wish that the international application be treated, in certain designated States, as an application for a patent of addition, certificate of addition, inventor's certificate of addition or utility certificate of addition: in such a case, write the name or two-letter code of each designated State concerned and its indication "patent of addition", "certificate of addition", "inventor's certificate of addition" or "utility certificate of addition", the number of the parent application or parent patent or other parent grant and the date of grant of the parent patent or other parent grant or the date of filing of the parent application (Rules 4.11(a)(iii) and 49bis.1(a) or (b)). 3. 
If the applicant intends to make an indication of the wish that the international application be treated, in the United States of America, as a continuation or continuation-in-part of an earlier application: in such a case, write "United States of America" or "US" and the indication "continuation" or "continuation-in-part" and the number and the filing date of the parent application (Rules 4.11(a)(iv) and 49bis.1(d)). Box No. V DESIGNATIONS The filing of this request constitutes under Rule 4.9(a), the designation of all Contracting States bound by the PCT on the international filing date, for the grant of every kind of protection available and, where applicable, for the grant of both regional and national patents. However, - [ ] DE Germany is not designated for any kind of national protection - [ ] JP Japan is not designated for any kind of national protection - [ ] KR Republic of Korea is not designated for any kind of national protection - [ ] RU Russian Federation is not designated for any kind of national protection (The check-boxes above may only be used to exclude (irrevocably) the designations concerned if, at the time of filing, the international application contains in Box No. VI a priority claim to an earlier national application filed in the particular State concerned, in order to avoid the ceasing of the effect, under the national law, of this earlier national application. See the Notes to Box No. V as to the consequences of such national law provisions in these States). Box No. 
VI PRIORITY CLAIM The priority of the following earlier application(s) is hereby claimed:

| | Filing date of earlier application (day/month/year) | Number of earlier application | Where earlier application is: national application: country or Member of WTO; regional application:* regional Office; international application: receiving Office |
|---|---|---|---|
| item (1) | 28 February 2006 (28.02.2006) | MI2006A 000361 | ITALY |
| item (2) | | | |
| item (3) | | | |

Further priority claims are indicated in the Supplemental Box. The receiving Office is requested to prepare and transmit to the International Bureau a certified copy of the earlier application(s) (only if the earlier application was filed with the Office which for the purposes of this international application is the receiving Office) identified above as: - [ ] all items - [ ] item (1) - [ ] item (2) - [ ] item (3) - [ ] other, see Supplemental Box *Where the earlier application is an ARIPO application, indicate at least one country party to the Paris Convention for the Protection of Industrial Property or one Member of the World Trade Organization for which that earlier application was filed (Rule 4.10(b)(ii)): Box No. VII INTERNATIONAL SEARCHING AUTHORITY Choice of International Searching Authority (ISA) (if two or more International Searching Authorities are competent to carry out the international search, indicate the Authority chosen; the two-letter code may be used): ISA / .................................................................. Request to use results of earlier search; reference to that search (if an earlier search has been carried out by or requested from the International Searching Authority): Date (day/month/year) Number Country (or regional Office) Box No. VIII DECLARATIONS The following declarations are contained in Boxes Nos. 
VIII (i) to (v) (mark the applicable check-boxes below and indicate in the right column the number of each type of declaration): - [ ] Box No. VIII (i) Declaration as to the identity of the inventor - [ ] Box No. VIII (ii) Declaration as to the applicant’s entitlement, as at the international filing date, to apply for and be granted a patent - [ ] Box No. VIII (iii) Declaration as to the applicant’s entitlement, as at the international filing date, to claim the priority of the earlier application - [ ] Box No. VIII (iv) Declaration of inventorship (only for the purposes of the designation of the United States of America) - [ ] Box No. VIII (v) Declaration as to non-prejudicial disclosures or exceptions to lack of novelty Number of declarations ## Box No. IX CHECK LIST; LANGUAGE OF FILING | This international application contains: | This international application is accompanied by the following item(s) (mark the applicable check-boxes below and indicate in right column the number of each item): | |----------------------------------------|----------------------------------------------------------------------------------| | (a) on paper, the following number of sheets: | 1. □ fee calculation sheet | | request (including declaration sheets) : 6 | 2. □ original separate power of attorney | | description (excluding sequence listing and/or tables related thereto) : 14 | 3. □ original general power of attorney | | claims : 2 | 4. □ copy of general power of attorney; reference number, if any: | | abstract : 1 | 5. □ statement explaining lack of signature | | drawings : 10 | 6. □ priority document(s) identified in Box No. VI as item(s): | | Sub-total number of sheets : 33 | 7. □ translation of international application into (language): | | sequence listing | 8. □ separate indications concerning deposited microorganism or other biological material | | tables related thereto | 9. 
□ sequence listing in electronic form (indicate type and number of carriers) | | (for both, actual number of sheets if filed on paper, whether or not also filed in electronic form; see (c) below) | (i) □ copy submitted for the purposes of international search under Rule 13ter only (and not as part of the international application) | | Total number of sheets : 33 | (ii) □ (only where check-box (b)(i) or (c)(i) is marked in left column) additional copies including, where applicable, the copy for the purposes of international search under Rule 13ter | | (b) □ only in electronic form (Section 801(a)(i)) | (iii) □ together with relevant statement as to the identity of the copy or copies with the sequence listing mentioned in left column | | (i) □ sequence listing | 10. □ tables in electronic form related to sequence listing (indicate type and number of carriers) | | (ii) □ tables related thereto | (i) □ copy submitted for the purposes of international search under Section 802(b-quater) only (and not as part of the international application) | | (c) □ also in electronic form (Section 801(a)(ii)) | (ii) □ (only where check-box (b)(ii) or (c)(ii) is marked in left column) additional copies including, where applicable, the copy for the purposes of international search under Section 802(b-quater) | | (i) □ sequence listing | (iii) □ together with relevant statement as to the identity of the copy or copies with the tables mentioned in left column | | (ii) □ tables related thereto | 11. □ other (specify): | **Figure of the drawings which should accompany the abstract:** 3 **Language of filing of the international application:** Italian ## Box No. X SIGNATURE OF APPLICANT, AGENT OR COMMON REPRESENTATIVE Next to each signature, indicate the name of the person signing and the capacity in which the person signs (if such capacity is not obvious from reading the request). **ADORNO Silvano** For receiving Office use only 1. 
Date of actual receipt of the purported international application: **28 FEB 2007 / 28/02/2007** 2. Drawings: - [x] received: - [ ] not received: 3. Corrected date of actual receipt due to later but timely received papers or drawings completing the purported international application: 4. Date of timely receipt of the required corrections under PCT Article 11(2): 5. International Searching Authority (if two or more are competent): ISA / 6. [x] Transmittal of search copy delayed until search fee is paid For International Bureau use only Date of receipt of the record copy by the International Bureau:
Lucerne-dominated fields recover native grass diversity without intensive management actions Péter Török\(^1\)*, András Kelemen\(^1\), Orsolya Valkó\(^1\), Balázs Deák\(^2\), Balázs Lukács\(^2\) and Béla Tóthméresz\(^1\) \(^1\)Department of Ecology, University of Debrecen, PO Box 71, H-4010 Debrecen, Hungary; and \(^2\)Hortobágy National Park Directorate, Sumen út 2, H-4024 Debrecen, Hungary Summary 1. Spontaneous succession is often underappreciated in restoration after the cessation of intensive agricultural management. Spontaneous succession could improve the success of restoration programmes, and offers a cost-effective option with little active intervention. 2. We studied the spontaneous recovery of loess grasslands in extensively managed lucerne *Medicago sativa* fields mown twice a year using space for time substitutions to highlight the importance of spontaneous processes in grassland restoration. 3. With increasing field age a gradual replacement of lucerne by perennial native grasses and forbs and an increase of mean species richness was detected. As the age of fields increased, lucerne decreased from 75% to 2% of total vegetation cover, whereas perennial graminoids increased from 0·5% to 50% cover. Mean total cover showed no significant differences between the age groups; weed cover was less than 10%. 4. The phytomass of lucerne was negatively correlated with graminoid phytomass. As the age of the fields increased, lucerne phytomass decreased and grass phytomass increased. We found a negative correlation between litter and forb phytomass but there was no relationship with the age of the field. There was no litter accumulation and no increase of mean total phytomass as the age of fields increased. 5. **Synthesis and applications.** Native grasses within loess grasslands recovered within 10 years, but characteristic native forbs remained rare. 
The advantages of spontaneous succession in lucerne fields compared to technical reclamation include: (i) no early stages dominated by weeds, (ii) minimal litter accumulation, (iii) a spontaneous decrease in lucerne over time, and (iv) negligible cost. In addition, the requirement for twice yearly mowing in the early years will guarantee farmer involvement because of the high forage value of lucerne. The complete restoration of species rich grasslands will require more active management such as propagule transfer by hay and/or moderate grazing to encourage the return of native forbs. Key-words: alfalfa, *Medicago sativa*, old field, phytomass, space for time substitution, succession, weed control Introduction The aim of grassland restoration is to recover and/or improve grassland biodiversity and ecosystem functions (Firn 2007; Reid et al. 2009). Two contrasting approaches are used most often: technical reclamation or spontaneous succession (Prach & Hobbs 2008). Both methods are generally followed up by site management for weed suppression using techniques such as mowing and/or grazing (Warren, Christal & Wilson 2002; Lepš et al. 2007; Kiehl et al. 2010). Recovery can be accelerated and directed by *technical reclamation* methods. In most cases this means adding seeds of desirable species using hay transfer or seed sowing (Pywell et al. 2002; Hölzel & Otte 2003). An alternative approach is *spontaneous succession*, where seeds are not added and the system is left to recover naturally (Prach & Pyšek 2001). Technical reclamation is preferred worldwide despite several promising examples of spontaneous recovery of grasslands (e.g. Ruprecht 2006; Prach & Řehounková 2008). This is especially true when there is an urgent need to heal landscape scars, prevent erosion or suppress weeds (Török et al. 2010; Tropek et al. 2010). *Correspondence author. E-mail: email@example.com © 2010 The Authors. 
Journal of Applied Ecology © 2010 British Ecological Society Recently, there have been attempts to link theories of spontaneous succession with direct restoration efforts to mitigate costs and improve the success of restoration (del Moral, Walker & Bakker 2007; Walker, Walker & del Moral 2007). For example, patterns in vegetation dynamics could be used to judge whether or not invasive weed cover will develop rapidly after agriculture ceases or to judge whether active intervention is necessary to eliminate former crops. Spontaneous succession has several advantages over technical reclamation. (i) The natural value of spontaneously regenerated sites is often higher than that of reclaimed ones (Hodačová & Prach 2003). (ii) Spontaneously colonising species are expected to be better adapted to local conditions than species originating from commercial sources or non-local sites (Mijnsbrugge, Bischoff & Smith 2010). (iii) Increased vegetation patchiness at spontaneously regenerated sites provides improved refuges for animals compared to technical reclamation sites (Tropek et al. 2010). Finally, (iv) spontaneous succession offers cost-effective restoration with a low rate of active intervention (Prach & Hobbs 2008). Spontaneous succession also has some drawbacks compared to technical reclamation, concerning (i) the low level of predictability and control of initial vegetation composition, density and pattern, and (ii) the relatively slow development of vegetation towards the target state, especially where proper donor sites for colonisation are missing (Ruprecht 2006; Prach & Hobbs 2008). However, the value of spontaneous succession in restoration programmes is becoming more widely appreciated, which underlines the importance of reporting relevant case studies (Prach & Pyšek 2001; Prach, Pyšek & Bastl 2001). In Central and Eastern Europe there has been large-scale abandonment of rural areas where productivity is low (Jongepierová, Mitchley & Tzanopoulos 2007; Török et al. 2010). 
After the collapse of state-owned agricultural cooperatives, the socio-economic changes resulted in large-scale abandonment of croplands (Prach, Lepš & Rejmánek 2007; Pullin et al. 2009). Between 1990 and 2004, 600 000 ha of croplands were abandoned in Hungary (Hobbs & Cramer 2007). This has provided an opportunity to use these areas to restore grasslands and improve their continuity for nature conservation (Stevenson, Bullock & Ward 1995; Simmering, Waldhardt & Otte 2006; Lindborg et al. 2008). Most studies reporting spontaneous succession have focused on abandoned fields formerly cultivated with annual crops, or the previous history of the site (e.g. last crop) has been ignored (Csecserits & Rédei 2001; Ruprecht 2006). Generally in these studies, weedy short-lived species are found to dominate in the first years after abandonment (Blumenthal, Jordan & Svenson 2005; Prach, Lepš & Rejmánek 2007). Weed dominance is generally associated with high levels of soil nutrients, which can be difficult and costly to control (Blumenthal, Jordan & Svenson 2003). The dominance of early colonising weedy species can also slow down the regeneration of native vegetation for many years (Collins, Wein & Philippi 2001; Prach & Pyšek 2001). Secondary succession after intensive cultivation of perennial crops has not previously been studied. One of the most important perennial crops worldwide is lucerne *Medicago sativa* L. Lucerne is often used as silage or hay for cattle forage (Horrocks & Valentine 1999; Li, Xu & Wang 2008). In Hungary, more than 130 000 ha of croplands were sown with lucerne, although intensity of use has decreased in recent years (2004–2008; K.S.H. 2008). We studied the regeneration of loess grasslands in extensively managed (mown twice a year) lucerne fields using space for time substitutions. We addressed the following questions: (i) How effective is lucerne in weed control? (ii) How quickly does lucerne disappear? 
(iii) How fast does grassland recover in extensively managed lucerne fields? The overall aim of this study was to examine the value of spontaneous succession in the restoration of grasslands in former lucerne fields as a cost-effective strategy for grassland conservation. **Materials and methods** **STUDY AREA** The study area is located in the Hortobágy Puszta (Hortobágy National Park), in East-Hungary. Hortobágy Puszta with an area of 85 000 ha is one of the largest grassland ecosystems in Europe, with vegetation characteristic of alkali and loess grasslands. The climate is moderately continental with a mean annual temperature of 9·5°C. Mean annual precipitation is about 550 mm. The yearly maximum precipitation falls in June (mean 80 mm) with high year-to-year fluctuations (Pécsi 1989). Historically, loess grassland vegetation (*Festuca rupicola*) covered the highest elevations in the region (Borhidi 2003). At the lower elevations, loess grasslands were surrounded by dry alkali short grasslands (*Festuca pseudovina*), alkali wet meadow (*Alopecurus pratensis*) and alkali marsh vegetation (*Bolboschoenetalia maritimi*) (for more details see Molnár et al. 2008; Molnár & Borhidi 2003). The loess grasslands have been ploughed up in the last centuries and many of the remaining fragments are degraded by moderate or heavy grazing by cattle and/or sheep. The most degraded loess pastures (*Cynodonti-Poëtum angustifoliae*) are characterised by a high cover of grazing tolerant graminoids [*Cynodon dactylon* (L.) Pers., *Poa angustifolia* L., *Festuca pseudovina* Hack. ex Wiesb., *Festuca rupicola* Heuff. and *Carex stenophylla* Wahlbg.] and forbs [*Galium verum* L., *Euphorbia cyparissias* L., *Cruciata pedemontana* (Bell) Ehrend., *Myosotis stricta* Link, *Achillea collina* L., and *Convolvulus arvensis* L.]. At heavily grazed sites, thistles dominate (*Ononis spinosa* L., *Eryngium campestre* L.). 
Only small patches of less degraded loess steppe grasslands (*Salvio nemorosae-Festucetum rupicolae*) have remained. The characteristic graminoids for these grasslands are *Festuca rupicola*, *Bromus inermis* Leyss, *Koeleria cristata* (L.) Pers., *Stipa capillata* L., *Alopecurus pratensis* L., and *Poa angustifolia*. They are rich in perennial forb species, and harbour several characteristic loess specialist species (*Salvia nemorosa* L., *Salvia austriaca* Jacq., *Phlomis tuberosa* L., *Thalictrum minus* L., *Thymus glabrescens* Willd.). In the study region lucerne or alfalfa *Medicago sativa* L. is sown after deep ploughing at the high elevations formerly covered by loess grasslands. Seed sowing density is typically 30 kg ha\(^{-1}\). There are intensively and extensively managed lucerne fields. Intensive management means regular mowing associated with the application of fertilisers and pesticides. After 3 years, intensively managed fields are re-sown or shallow disked. Extensive management means only regular mowing twice a year. Every year, 10–50 ha of intensively managed lucerne fields were converted to extensively managed ones in the Hortobágy National Park. **SAMPLING** The vegetation of 1-, 3-, 5- and 10-year-old extensively managed lucerne fields (three fields in each age group) was monitored in 2009. The study fields were situated on loess plateaux between 87 and 94 m a.s.l., within a 50 km radius, in the vicinity of the villages of Egyek, Tiszacsege, Karcag and Nádudvar (N47 26′; E21 01′). None of the study fields were directly connected to loess grasslands, which was the most common vegetation at this elevation in the region (Török *et al.* 2010). The fields were mown twice a year but no further management was applied. Small patches of loess grasslands and, at lower elevations, alkali marshes, alkali wet meadows and alkali short grasslands were present in close proximity to most of the fields. In each field three 25-m² sample blocks were chosen randomly. 
Within each block, the cover of vascular plants was recorded in four 1 m² plots in early June, before the first mowing. In addition, within each block and near to the plots (< 1 m), 10 aboveground phytomass samples were collected (in total 30 per field, 20 × 20 cm, total aboveground green phytomass and litter). We recorded the vegetation of three variously degraded stands of loess grasslands (*Festuca rupicola*) as base-line vegetation references: (i) a formerly heavily grazed *Cynodonti-Poëtum* stand, (ii) a species rich loess balk stand with *Bromus inermis* dominance, and (iii) a regularly mown species rich stand of *Salvio nemorosae-Festucetum rupicolae* grassland (for detailed species lists see Appendix S1, Supporting Information). We used the same sampling design as described above. Phytomass samples were dried (65 °C, 24 h), then sorted into litter, graminoids (Poaceae and Cyperaceae), lucerne and forbs. Dry weights were measured in a laboratory with an accuracy of 0.01 g. **DATA ANALYSIS** We classified the species into four functional groups using life-form (based on Raunkiaer’s life form system, Raunkiaer 1934) and morphological categories (grasses and forbs). These were perennial graminoids, perennial forbs, short-lived graminoids, and short-lived forbs. Annuals and biennials are short-lived, and geophytes, hemicryptophytes, and chamaephytes are perennials. The functional groups of the weed species were classified using Grime C-S-R strategy types (Grime 1979), which were modified and adapted to local conditions by Borhidi (1995). The cover, species richness and phytomass data of the differently aged fields were compared using General Linear Mixed-Effect Models (GLMM) and Tukey tests (Zuur *et al.* 2009). Field age (time) was included as a fixed effect and field/block structure as a random effect. To analyse correlations between the different phytomass groups and sites we used DCA ordination, with square root transformed datasets. 
DCA was calculated by CANOCO 4.5 (ter Braak & Šmilauer 2002). We used cover based Shannon diversity to characterise vegetation diversity, and Sørensen dissimilarity for vegetation changes. Characteristic species of differently aged lucerne fields and reference grasslands were identified by the IndVal procedure (Dufrêne & Legendre 1997); during the calculations 10 000 random permutations were used. The IndVal procedure was executed by a revised version of the R code published as the electronic appendix of Bakker (2008). To explore similarities between restored and reference sites, we used NMDS ordination with Bray–Curtis similarity (Legendre & Legendre 1998). Other statistical analyses were performed using the R statistical environment (version 2.11.1, R Development Core Team 2010). Nomenclature follows Borhidi (2003) for syntaxa, and Simon (2000) for taxa. **Results** **VEGETATION AND PHYTOMASS** The vegetation of 1- and 3-year-old lucerne fields was characterised by the high cover of lucerne. Several weed species were present; their mean cover was less than 5% (e.g. *Conyza canadensis* (L.) Cronq., *Lamium amplexicaule* L., *Polygonum aviculare* L., *Stellaria media* (L.) Vill., see Appendix S1 in Supporting Information). The mean cover of lucerne decreased from 75.2 to 2.2% with increasing field age. In the vegetation of 5-year-old fields the cover of lucerne was lower than 50% in all studied plots; moreover, in one of the 10-year-old fields no lucerne cover was detected. Conversely, the mean cover of perennial graminoids increased from 0.5 to 50.2% in parallel with increasing field age (GLMM, *P* < 0.001, d.f. = 134, *t* = 14.30; Table 1). The mean total cover of differently aged lucerne fields fluctuated between 77.6 and 86.1% (Table 1). 
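The two community metrics named in the Data analysis section — cover-based Shannon diversity and Sørensen dissimilarity — are simple to compute. A minimal Python sketch follows (illustrative only: the study itself used the R environment and CANOCO, and the species names and cover values below are hypothetical examples, not the study's data):

```python
import math

def shannon_diversity(cover):
    """Cover-based Shannon diversity H' = -sum(p_i * ln p_i),
    where p_i is species i's share of the total cover."""
    total = sum(cover.values())
    return -sum((c / total) * math.log(c / total)
                for c in cover.values() if c > 0)

def sorensen_dissimilarity(a, b):
    """Sørensen dissimilarity on presence/absence species sets:
    1 - 2*|A intersect B| / (|A| + |B|)."""
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical cover values (%) for a young, lucerne-dominated plot
plot = {"Medicago sativa": 75.0, "Conyza canadensis": 4.0, "Poa angustifolia": 1.0}
print(round(shannon_diversity(plot), 2))  # → 0.27

# Hypothetical species sets: a restored field vs. a reference grassland
field = {"Medicago sativa", "Poa angustifolia", "Festuca pseudovina"}
reference = {"Festuca rupicola", "Poa angustifolia", "Salvia nemorosa", "Bromus inermis"}
print(round(sorensen_dissimilarity(field, reference), 2))  # → 0.71
```

When cover is concentrated in a single dominant species, H' stays low, consistent with the low diversity scores of the youngest fields in Table 1; a dissimilarity near 1 indicates little compositional overlap with the reference grassland.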
Altogether 104 vascular plant species were recorded in the vegetation of the studied lucerne fields.

**Table 1.** Cover, species richness and Shannon diversity scores of functional species groups

| Age of lucerne fields | 1-year-old | 3-year-old | 5-year-old | 10-year-old |
|-----------------------|------------|------------|------------|-------------|
| **Cover** (%, mean ± SE) | | | | |
| Total | 85.4 ± 0.4 | 85.8 ± 4.7 | 86.1 ± 12.9 | 77.6 ± 12.6 |
| *Medicago sativa* | 75.2 ± 1.1<sup>a</sup> | 72.8 ± 11.0<sup>a</sup> | 24.1 ± 4.9<sup>b</sup> | 2.3 ± 2.3<sup>c</sup> |
| Perennial forbs (excl. *M. sativa*) | 0.7 ± 0.2<sup>a</sup> | 6.5 ± 4.5<sup>b</sup> | 10.7 ± 2.7<sup>b</sup> | 16.3 ± 2.2<sup>c</sup> |
| Perennial graminoids | 0.5 ± 0.2<sup>a</sup> | 0.9 ± 0.1<sup>a</sup> | 29.8 ± 14.1<sup>b</sup> | 50.2 ± 15.0<sup>c</sup> |
| Short-lived forbs | 8.9 ± 1.6 | 5.4 ± 2.2 | 10.6 ± 7.6 | 6.2 ± 0.5 |
| Short-lived graminoids| 0.1 ± 0.1<sup>a</sup> | 0.2 ± 0.1<sup>a</sup> | 11.0 ± 3.9<sup>b</sup> | 2.6 ± 1.5<sup>a</sup> |
| **Species richness** (mean ± SE) | | | | |
| Perennial species | 2.4 ± 0.2<sup>a</sup> | 3.3 ± 0.4<sup>a</sup> | 6.0 ± 1.1<sup>b</sup> | 5.8 ± 0.4<sup>b</sup> |
| Short-lived species | 6.1 ± 0.7<sup>a</sup> | 5.2 ± 1.6<sup>a</sup> | 8.7 ± 2.1<sup>b</sup> | 8.1 ± 1.0<sup>b</sup> |
| Shannon diversity | 0.5 ± 0.1<sup>a</sup> | 0.6 ± 0.3<sup>a</sup> | 1.6 ± 0.2<sup>b</sup> | 1.5 ± 0.2<sup>b</sup> |

Different superscripted letters indicate significant differences tested with General Linear Mixed-Effect Models and Tukey tests (*P* < 0.05).

The mean total species richness (from 8.5 to 13.9–14.7), the mean species richness of perennials (from 2.4 to 5.8–6.0), and the mean Shannon diversity scores (from 0.5 to 1.5–1.6) increased with field age (GLMM, *P* < 0.001, d.f. = 134, *t* = 11.04 and 11.17, respectively; Table 1). 
No significant differences were found between the total phytomass of differently aged lucerne fields (means ranged between 286 and 689 g m$^{-2}$). As for cover, the phytomass of lucerne decreased with increasing field age (GLMM, *P* < 0.001, d.f. = 350, *t* = 17.17). The phytomass of graminoids was highest in the 5- and 10-year-old fields (Fig. 1). A negative correlation was detected between the phytomass of lucerne and that of graminoids. Litter and forb phytomass were also negatively correlated, but no clear temporal trend was detected. Decreasing lucerne phytomass and increasing grass phytomass were detected with increasing field age (Fig. 2). **LUCERNE FIELDS AND REFERENCE GRASSLANDS** Characteristic grass species of the reference grasslands (e.g. *Festuca rupicola* and *Bromus inermis*) were found at low cover in 5- and 10-year-old lucerne fields. In contrast, some common grasses were dominant (e.g. *Festuca pseudovina*, *Poa angustifolia*, *Agropyron intermedium* (Host) P.B., *Alopecurus pratensis*; see Appendix S1). Decreasing mean dissimilarity of species composition was detected with increasing field age (from a mean of 0.96 in 1-year-old fields to a mean of 0.76 in 10-year-old fields). Characteristic forb species of native loess grasslands were present only in 5- and 10-year-old lucerne fields (e.g. *Vicia hirsuta* (L.) S.F., *V. angustifolia* L., *Galium verum*, *Medicago minima* (L.) Grubfg., *Trifolium angulatum* W. et Kit., *T. retusum* Höjer, *Lathyrus tuberosus* L.). Several other characteristic perennial forbs were not detected even in the vegetation of 10-year-old lucerne fields (e.g. *Ajuga genevensis* L., *Salvia nemorosa*, *S. austriaca*, *Pimpinella saxifraga* L., *Thymus degenianus* Lyka, *Euphorbia cyparissias*, *Veronica prostrata* L.; see Appendix S1). Several disturbance-tolerant and weedy perennial forbs were more frequent in the lucerne fields than in the reference grasslands (e.g. *Cirsium arvense* (L.) 
Scop., *Convolvulus arvensis*, *Taraxacum officinale* Weber ex Wiggers). Species composition in the lucerne fields showed a clear shift along the first axis of the NMDS ordination (Fig. 3). Time is represented by the first axis, and the age groups are separated along it. The vegetation of the 1- and 3-year-old fields showed low variability, while the variability of plots in the older fields was much higher (Fig. 3). The vegetation of the 10-year-old fields showed the greatest similarity with the vegetation of the reference grasslands.

**Fig. 1.** Phytomass scores of *Medicago sativa* and three functional groups in differently aged lucerne fields. Notations: a = *Medicago sativa*, b = graminoids, c = litter, d = other forbs. Different letters indicate significant differences within a phytomass group between years (General Linear Mixed-Effect Models and Tukey test, *P* < 0.05; tests were executed on 20 × 20 cm samples).

**Fig. 2.** The relationship between the various phytomass fractions and time using DCA. The points (main data) were based on mean species percentage cover. All data were pooled at the field level. Notations for the lucerne fields: 1-year-old – ○; 3-year-old – ⋄; 5-year-old – ●; 10-year-old – ●. Notations for the background variables (arrows): Lucerne = phytomass of alfalfa; Forbs = forb phytomass; Grasses = graminoid phytomass; Time = field age; Litter = litter phytomass. Eigenvalues are 0.52 and 0.08 for axes 1 and 2, respectively.

**Discussion** **WEED CONTROL** Previous studies have reported high weed cover after the abandonment of intensively managed crop fields, e.g. weed cover of 5–40% on sandy fields abandoned for 1–10 years (Central Hungary; Csecserits & Rédei 2001; Csecserits et al. 2007), and 10–60% on 1- to 12-year-old abandoned loess fields (Ruprecht 2005, 2006). Low weed cover was found after abandonment only where crop production had lasted just a few years and no mineral fertilizers had been applied (e.g. Jongepierová, Jongepier & Klimes 2004). 
It has been suggested that the rapid development of weed cover can be avoided by sowing seed mixtures of characteristic late-successional species (Prach & Pyšek 2001; Pywell et al. 2002; Warren, Christal & Wilson 2002) or cover-crop grasses (Hansson & Fogelfors 1998). In our study, weedy species did not dominate in the early years: the total cover of weeds was low (less than 5%) regardless of the age of the fields. Our results support the findings of Li, Xu & Wang (2008), where lucerne and other legume species were found to aid in suppressing weeds. It is well known that seeds of weed species are present at high density in the soils of croplands (Hutchings & Booth 1996; Manchester et al. 1999). Török et al. (2010) detected a high cover of short-lived weeds after ploughing and sowing of perennial graminoids in former lucerne fields (1–3 years old), which suggests a high amount of weed seeds in the soil of lucerne fields. The low cover of weeds detected in the present study is therefore more likely explained by the presence of lucerne than by an absence of weed seeds in the soil. The high cover and phytomass of lucerne in the first years suppressed weeds through increased shading of the soil surface (Güsewell & Edwards 1999) and/or the competitive exclusion of short-lived weeds (Bischoff, Auge & Mahn 2005). An allelopathic effect of lucerne may also be responsible for the low weed cover: Ells & McSay (1991) showed that lucerne leaf extract (containing phenolic allelochemicals) was detrimental to the germination and differentiation of susceptible plants. **COVER AND PHYTOMASS OF LUCERNE** In our study the cover of lucerne was over 70% in 1- and 3-year-old lucerne fields. A sharp decline was detected after the third year. This is in accordance with common agricultural practice in this region, where lucerne is re-sown after 3–4 years of cultivation. 
In a sowing experiment conducted by Li, Xu & Wang (2008) on the Loess Plateau in China, the mean cover of lucerne decreased after the first year of sowing (about 50% cover in the first and 29% in the third year after sowing). The more rapid decrease in lucerne cover can be explained by a lower sowing density than in our study (22.5 kg ha\(^{-1}\); in our region 30 kg ha\(^{-1}\) is typical). Our results suggest that lucerne could disappear within a decade from grasslands under extensive management by mowing. The disappearance of lucerne could also be facilitated by low-intensity grazing, since grazing animals preferentially consume leguminous species (Stroh et al. 2002). In previous studies a significant increase in total vegetation cover (Ruprecht 2005; Li, Xu & Wang 2008) or an increase in the cover and/or phytomass of perennials (Štolcová 2002; Feng et al. 2007a,b; Török et al. 2008) has been found during secondary succession. In our study, no such trend was detected: the total cover and the total phytomass scores remained stable during secondary succession. This was caused by the gradual replacement of lucerne by perennial grasses. Török et al. (2010) found litter accumulation an order of magnitude higher between the first and second years after restoration of grasslands with low diversity seed mixtures in former lucerne fields (first-year litter: 28–37 g m\(^{-2}\); second-year litter: 280–289 g m\(^{-2}\)). The litter scores in the second and third years of that study were about two to three times higher than those detected in the present study. Accumulated plant litter has been identified as negatively affecting vascular plant species richness in several studies (Huhta et al. 2001; Enyedi, Ruprecht & Deák 2007). Therefore, high amounts of litter combined with high perennial cover are especially effective in weed suppression (Török et al. 2010). Litter accumulation can also be negative, as litter can reduce the micro-topographical heterogeneity (Tropek et al. 
2010), and decrease the availability of colonisation sites (Jensen & Gutekunst 2003), which can stabilise the community in an undesirable state (Hobbs et al. 2006). High amounts of litter could also hamper the immigration and establishment of several target species by limiting microsite availability (Foster & Gross 1998; Bissels et al. 2006). In this study, no litter accumulation was detected; as a result, germination and colonisation were not hampered, and species richness increased with field age. Other studies of spontaneous grassland succession have found similar links between litter accumulation and reduced germination and colonisation (Jongepierová, Jongepier & Klimes 2004; Ruprecht 2006; Feng et al. 2007a). **RECOVERY OF GRASSLANDS** We found that the recovery of species-poor loess grasslands dominated by perennial native species in former lucerne fields was possible within 10 years. Other old-field studies found that 6–23 years after abandonment was sufficient time for the spontaneous succession of loess grasslands (Molnár & Botta-Dukát 1998; Ruprecht 2005; Csecserits et al. 2007; Feng et al. 2007a,b). The dissimilarity in species composition between lucerne fields and reference grasslands decreased continuously with increasing field age. Dissimilarity scores were, however, high even between 5- and 10-year-old fields and reference grasslands. Several perennial forbs found at high frequency in loess grasslands were not detected in lucerne fields, and several short-lived weeds detected at low cover but high frequency in lucerne fields were missing from reference grasslands (see Appendix S1). Previous studies have reported that the spontaneous immigration of desirable target species is a diaspore-limited process (Donath et al. 2007; Kiehl et al. 2010). There are two reasons for diaspore limitation: (i) limited spatial dispersal (e.g. 
missing dispersal agents and heavy seeds) reduces the movement of seeds into target sites (Simmering, Waldhardt & Otte 2006); (ii) long-term agricultural use often depletes the local seed bank and also increases the amount of weed seeds in the soil (Coulson et al. 2001). Therefore, spontaneous recovery will be most effective where native grassland sites are located nearby (Öster et al. 2009). A further explanation for the persistent differences in species composition between the old fields and reference grasslands is that the perennial forbs may require more time to establish in extensively managed fields (e.g. Prach, Lepš & Rejmánek 2007). **PRACTICAL IMPLICATIONS FOR POLICY** Our results suggest that the initial recovery of loess grasslands may not require technical reclamation methods (i.e. sowing competitor grasses and/or forbs) in lucerne fields where nearby grasslands are present as a seed source. We found that after a decade of regular mowing, lucerne fields were transformed into loess grasslands dominated by native perennial grasses. However, most of the characteristic loess grassland forbs were still missing. Similar results were found under the more common technical reclamation method of sowing low diversity seed mixtures (Hansson & Fogelfors 1998; Lepš et al. 2007; Török et al. 2010). The full recovery of loess grasslands requires more time and/or should be facilitated by the technical introduction of some of the target species (Kirmer et al. 2008; Kiehl et al. 2010). The transfer of hay and/or low-intensity grazing combined with continued mowing could be another option to facilitate the establishment of desirable species. Our results suggest that sowing lucerne in abandoned fields, followed by extensive management, can combine the advantages of both spontaneous succession and technical reclamation in grassland restoration. It offers a cost-effective solution from both the economic (agricultural) and the conservation management point of view. 
The method has several advantages over technical reclamation. In particular, there is no weed-dominated stage and no intensive litter accumulation. Lucerne gradually decreases in abundance once re-sowing and/or fertilizing stop, so microsite limitation is expected to be lower than at technical reclamation sites where competitor grasses are sown. Finally, spontaneous succession is cheaper than technical reclamation, and lucerne fields provide a high-value hay harvest in the first few years. Acknowledgments We thank I. Kapocsi, L. Gál, S. Újfalusi and S. Tóth from the Hortobágy National Park for their help. We are indebted to the graduate students T. Míglez, K. Tóth and Sz. Tasnády for their help in the field and laboratory work. We are grateful to J. Memmott and J. Firn for improving an earlier draft of the paper. References Bakker, J.D. (2008) Increasing the utility of indicator species analysis. *Journal of Applied Ecology*, **45**, 1829–1835. Bischoff, A., Auge, H. & Mahn, E.-G. (2005) Seasonal changes in the relationship between plant species richness and community phytomass in early succession. *Basic and Applied Ecology*, **6**, 385–394. Bissels, S., Donath, T.W., Hölzel, N. & Otte, A. (2006) Effects of different mowing regimes on seedling recruitment in alluvial grasslands. *Basic and Applied Ecology*, **7**, 433–442. Blumenthal, D.M., Jordan, N.R. & Svenson, E.L. (2003) Weed control as a rationale for restoration: the example of tallgrass prairie. *Conservation Ecology*, **7**, [online] URL: http://www.consecol.org/vol7/iss1/art6/ Blumenthal, D.M., Jordan, N.R. & Svenson, E.L. (2005) Effects of prairie restoration on weed invasions. *Agriculture, Ecosystems & Environment*, **107**, 221–230. Borhidi, A. (1995) Social behaviour types, the naturalness and relative indicator values of the higher plants in the Hungarian Flora. *Acta Botanica Hungarica*, **39**, 97–181. Borhidi, A. (2003) Magyarország Növénytársulásai (Plant associations of Hungary). 
Akadémiai Kiadó, Budapest, Hungary (in Hungarian). ter Braak, C.J.F. & Šmilauer, P. (2002) CANOCO Reference Manual and CanoDraw for Windows User’s Guide: Software for Canonical Community Ordination (version 4.5). Microcomputer Power, Ithaca, NY, USA. Collins, B., Wein, G. & Philippi, T. (2001) Effects of disturbance intensity and frequency on early old-field succession. *Journal of Vegetation Science*, **12**, 721–728. Coulson, S.J., Bullock, J.M., Stevenson, M.J. & Pywell, R.F. (2001) Colonization of grassland by sown species: dispersal versus microsite limitation in responses to management. *Journal of Applied Ecology*, **38**, 204–216. Csecserits, A. & Rédei, T. (2001) Secondary succession on sandy old-fields in Hungary. *Applied Vegetation Science*, **4**, 63–74. Csecserits, A., Szabó, R., Halassy, M. & Rédei, T. (2007) Testing the validity of successional predictions on an old-field chronosequence in Hungary. *Community Ecology*, **8**, 195–207. Donath, T.W., Bissels, S., Hölzel, N. & Otte, A. (2007) Large scale application of diaspore transfer with plant material in restoration practice – impact of seed and microsite limitation. *Biological Conservation*, **138**, 224–234. Dufrêne, M. & Legendre, P. (1997) Species assemblages and indicator species: the need for a flexible asymmetrical approach. *Ecological Monographs*, **67**, 345–366. Ells, J.E. & McSay, A.E. (1991) Allelopathic effects of alfalfa plant residues on emergence and growth of cucumber seedlings. *HortScience*, **26**, 368–370. Enyedi, M.Z., Ruprecht, E. & Deák, M. (2007) Long-term effects of the abandonment of grazing on steppe-like grasslands. *Applied Vegetation Science*, **11**, 55–62. Feng, D., Hong-Bo, S., Lun, S., Zong-Suo, L. & Ming-An, S. (2007a) Secondary succession and its effects on soil moisture and nutrition in abandoned old-fields of hilly region of Loess Plateau, China. *Colloids and Surfaces B: Biointerfaces*, **58**, 278–285. Feng, D., Zongsuo, L., Xuexuan, X., Lun, S. 
& Xingchang, Z. (2007b) Community biomass of abandoned farmland and its effects on soil nutrition in the Loess hilly region of Northern Shaanxi, China. *Acta Ecologica Sinica*, **27**, 1673–1683. Finn, J. (2007) Developing strategies and methods for rehabilitating degraded pastures using native grasses. *Ecological Management & Restoration*, **8**, 183–187. Foster, B.L. & Gross, K.L. (1998) Species richness in a successional grassland: effects of nitrogen enrichment and plant litter. *Ecology*, **79**, 2593–2602. Grime, J.P. (1979) *Plant Strategies and Vegetation Processes*. Wiley, Chichester. Güsewell, S. & Edwards, P. (1999) Shading by *Phragmites australis*: a threat for species-rich fen meadows? *Applied Vegetation Science*, **2**, 61–70. Hansson, M. & Fogelfors, H. (1998) Management of permanent set-aside on arable land in Sweden. *Journal of Applied Ecology*, **35**, 758–771. Hobbs, R.J. & Cramer, V.A. (2007) Why Old Fields? Socioeconomic and ecological causes and consequences of land abandonment. *Old Fields: Dynamics and Restoration of Abandoned Farmland* (eds V.A. Cramer & R.J. Hobbs), pp. 1–15. Island Press, Washington. Hobbs, R.J., Arico, S., Aronson, J., Baron, J.S., Bridgewater, P., Cramer, V.A., Epstein, P.R., Ewel, J.J., Klink, C.A., Lugo, A.E., Norton, D., Ojima, D., Richardson, D.M., Sanderson, E.W., Valladares, F., Vilà, M., Zamora, R. & Zobel, M. (2006) Novel ecosystems: theoretical and management aspects of the new ecological world order. *Global Ecology and Biogeography*, **15**, 1–7. Hodáčová, D. & Prach, K. (2003) Spoil heaps from brown coal mining: technical reclamation vs. spontaneous re-vegetation. *Restoration Ecology*, **11**, 385–391. Hölzel, N. & Otte, A. (2003) Restoration of a species-rich flood meadow by topsoil removal and diaspore transfer with plant material. *Applied Vegetation Science*, **6**, 131–140. Horrocks, R.D. & Valentine, J.F. (1999) *Harvested Forages*. Academic Press, San Diego, CA, USA. 
Huhta, A.-P., Rautio, P., Tuomi, J. & Laine, K. (2001) Restorative mowing on an abandoned semi-natural meadow: short-term and predicted long-term effects. *Journal of Vegetation Science*, **12**, 677–686. Hutchings, M.J. & Booth, K.D. (1996) Studies on the feasibility of re-creating chalk grassland vegetation on ex-arable land. I. The potential roles of the seed bank and the seed rain. *Journal of Applied Ecology*, **33**, 1171–1181. Jensen, K. & Gutekunst, K. (2003) Effects of litter on establishment of grassland plant species: the role of successional status. *Basic and Applied Ecology*, **4**, 579–587. Jongepierová, I., Jongepier, A.W. & Klimes, L. (2004) Restoring grassland on arable land: an example of a fast spontaneous succession without weed-dominated stages. *Preslia*, **76**, 361–369. Jongepierová, I., Mitchell, J. & Tzanopoulos, J. (2007) A field experiment to recreate species-rich hay meadows using regional seed mixtures. *Biological Conservation*, **139**, 297–305. Kiehl, K., Kirmer, A., Donath, T.W., Rasran, L. & Hölzel, N. (2010) Species introduction in restoration projects – evaluation of different techniques for the establishment of semi-natural grasslands in Central and Northwestern Europe. *Basic and Applied Ecology*, **11**, 285–299. Kirmer, A., Tischew, S., Ozinga, W.A., von Lampe, M., Baasch, A. & Groenendael, J.M. (2008) Importance of regional species pools and functional traits in colonisation processes: predicting re-colonization after large-scale destruction of ecosystems. *Journal of Applied Ecology*, **45**, 1523–1530. K.S.H. (Hungarian Central Statistical Office) (2008) Economic Accounts for Agriculture in Hungary, 2008. *Statistical Reflections*, **26**, [online] http://portal.ksh.hu/portal/ Legendre, P. & Legendre, L. (1998) *Numerical Ecology*. Elsevier Science, Amsterdam, The Netherlands. 
Lepš, J., Doležal, J., Bezemer, T.M., Brown, V.K., Helland, K., Igual Arroyo, M., Jørgensen, H.B., Lawson, C.S., Mortimer, S.R., Peix Geldari, A., Rodriguez Barrueco, C., Santa Regina, I., Šmilauer, P. & van der Putten, W.H. (2007) Long-term effectiveness of sowing high and low diversity seed mixtures to enhance plant community development on ex-arable fields. *Applied Vegetation Science*, **10**, 97–110. Li, J.-H., Xu, D.-H. & Wang, G. (2008) Weed inhibition by sowing legume species in early succession of abandoned fields on Loess Plateau, China. *Acta Oecologica*, **33**, 10–14. Lindborg, R., Bengtsson, J., Berg, A., Cousins, S.A.O., Eriksson, O., Gustafsson, T., Per Hasund, K., Lenoir, L., Pihlgren, A., Sjödin, E. & Stenske, M. (2008) A landscape perspective on conservation of semi-natural grasslands. *Agriculture, Ecosystems & Environment*, **125**, 213–222. Manchester, S.J., McNally, S., Treweek, J.R., Sparks, T.H. & Mountford, J.O. (1999) The cost and practicality of techniques for the reversion of arable land to lowland wet grassland – an experimental study and review. *Journal of Environmental Management*, **55**, 91–109. Mijnssenbrugge, K.V., Bischoff, A. & Smith, B. (2010) A question of origin: where and how to collect seed for ecological restoration. *Basic and Applied Ecology*, **11**, 300–311. Molnár, Zs. & Borhidi, A. (2003) Hungarian alkali vegetation: origins, landscape history, syntaxonomy, conservation. *Phytocoenologia*, **33**, 377–408. Molnár, Zs. & Botta-Dukát, Z. (1998) Improved space-for-time substitution for hypothesis generation: secondary grasslands with documented site history in SE-Hungary. *Phytocoenologia*, **28**, 1–29. Molnár, Zs., Biró, M., Bölöni, J. & Horváth, F. (2008) Distribution of the (semi-)natural habitats in Hungary I. Marshes and grasslands. *Acta Botanica Hungarica*, **50**, Suppl. 1, 59–106. del Moral, R., Walker, L.R. & Bakker, J.P. 
(2007) Insights gained from succession for the restoration of landscape structure and function. *Linking Restoration and Ecological Succession* (eds L.R. Walker, J. Walker & R.J. Hobbs), pp. 19–45. Springer-Verlag, New York, USA. Öster, M., Ask, K., Römermann, C., Tackenberg, O. & Eriksson, O. (2009) Plant colonization of ex-arable fields from adjacent species-rich grasslands: the importance of dispersal vs. recruitment ability. *Agriculture, Ecosystems & Environment*, **130**, 93–99. Pécsi, M. (ed.) (1989) Magyarország Nemzeti Atlasza (National Atlas of Hungary). Kartográfiai Vállalat, Budapest (in Hungarian). Prach, K. & Hobbs, R.J. (2008) Spontaneous succession versus technical reclamation in the restoration of disturbed sites. *Restoration Ecology*, **16**, 363–366. Prach, K., Lepš, J. & Rejmánek, M. (2007) Old field succession in Central Europe: local and regional patterns. *Old Fields: Dynamics and Restoration of Abandoned Farmland* (eds V.A. Cramer & R.J. Hobbs), pp. 180–201. Island Press, Washington. Prach, K. & Pyšek, P. (2001) Using spontaneous succession for restoration of human-disturbed habitats: experience from Central Europe. *Ecological Engineering*, **17**, 55–62. Prach, K., Pyšek, P. & Bastl, M. (2001) Spontaneous vegetation succession in human-disturbed habitats: a pattern across seres. *Applied Vegetation Science*, **4**, 83–98. Prach, K. & Rehounková, K. (2008) Spontaneous vegetation succession in gravel–sand pits: a potential for restoration. *Restoration Ecology*, **16**, 305–312. Pullin, A.S., Baldi, A., Can, O.E., Dieterich, M., Kati, V., Livoreil, B., Lövei, G., Mihályik, Nevin, O., Selva, N. & Sousa-Pinto, I. (2009) Conservation focus on Europe: major conservation policy issues that need to be informed by conservation science. *Conservation Biology*, **23**, 818–824. Pywell, R.F., Bullock, J.M., Hopkins, A., Walker, K.J., Sparks, T.H., Burke, M.J.W. & Peel, S. 
(2002) Restoration of species-rich grassland on arable land: assessing the limiting processes using a multi-site experiment. *Journal of Applied Ecology*, **39**, 294–309. R Development Core Team (2010) *R: A Language and Environment for Statistical Computing*. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-4. URL http://www.R-project.org. Raunkiaer, C. (1934) *The Life Forms of Plants and Statistical Plant Geography*, Being the Collected Papers of C. Raunkiaer. Oxford University Press, Oxford. Reid, A.M., Morin, L., Downey, P.O., French, K. & Virtue, J.G. (2008) Does invasive plant management aid restoration of natural ecosystems? *Biological Conservation*, **142**, 2342–2349. Ruprecht, E. (2005) Secondary succession in old-fields in the Transylvanian Lowland (Romania). *Preslia*, **77**, 145–157. Ruprecht, E. (2006) Successfully recovered grassland: a promising example from Romanian old-fields. *Restoration Ecology*, **14**, 473–480. Simmering, D., Waldhardt, R. & Otte, A. (2006) Quantifying determinants contributing to plant species richness in mosaic landscapes: a single- and multi-patch perspective. *Landscape Ecology*, **21**, 1233–1251. Simon, T. (2000) A magyarországi edényes flóra határozója (Vascular Flora of Hungary). Nemzeti Tankönyvkiadó, Budapest, Hungary (in Hungarian). Stevenson, M.J., Bullock, J.M. & Ward, L.K. (1995) Re-creating semi-natural communities: effect of sowing rate on establishment of calcareous grassland. *Restoration Ecology*, **3**, 279–289. Štolcová, J. (2002) Secondary succession on an early abandoned field: vegetation composition and production of biomass. *Plant Protection Science*, **38**, 149–154. Stroh, M., Storm, C., Zehm, A. & Schwabe, A. (2002) Restorative grazing as a tool for directed succession with diaspore inoculation: the model of sand ecosystems. *Phytocoenologia*, **32**, 595–625. Török, P., Matus, G., Papp, M. & Tóthmérész, B. 
(2008) Secondary succession in overgrazed Pannonian sandy grasslands. *Preslia*, **80**, 73–85. Török, P., Deák, B., Vida, E., Valkó, O., Lengyel, Sz. & Tóthmérész, B. (2010) Restoring grassland biodiversity: sowing low diversity seed mixtures can lead to rapid favourable changes. *Biological Conservation*, **143**, 806–812. Tropek, R., Kadlec, T., Karesova, P., Spitzer, L., Kocarek, P., Malenovsky, I., Banar, P., Tuf, I.H., Hejda, M. & Konvicka, M. (2010) Spontaneous succession in limestone quarries as an effective restoration tool for endangered arthropods and plants. *Journal of Applied Ecology*, **47**, 139–147. Walker, L.R., Walker, J. & del Moral, R. (2007) Forging a new alliance between succession and restoration. *Linking Restoration and Ecological Succession* (eds L.R. Walker, J. Walker & R.J. Hobbs), pp. 1–19. Springer-Verlag, New York, USA. Warren, J., Christal, A. & Wilson, F. (2002) Effects of sowing and management on vegetation succession during grassland habitat restoration. *Agriculture, Ecosystems & Environment*, **93**, 393–402. Zuur, A., Ieno, E.N., Walker, N., Saveliev, A.A. & Smith, G.M. (2009) *Mixed Effects Models and Extensions in Ecology with R*. Springer, New York, USA. *Received 9 June 2010; accepted 20 October 2010* *Handling Editor: Jennifer Finn* **Supporting Information** Additional Supporting Information may be found in the online version of this article. **Appendix S1.** Characteristic species of target loess grasslands and extensively managed lucerne fields identified by an IndVal procedure; 10 000 random permutations were used.
Hip Fracture Risk Assessment Based on Different Failure Criteria Using QCT-Based Finite Element Modeling Hossein Bisheh\textsuperscript{1, 2}, Yunhua Luo\textsuperscript{1, 3} and Timon Rabczuk\textsuperscript{4,*} Abstract: Precise evaluation of hip fracture risk helps reduce the occurrence of hip fractures and assists in monitoring the effect of a treatment. A subject-specific QCT-based finite element model is introduced to evaluate hip fracture risk using the strain energy, von Mises stress, and von Mises strain criteria during the single-leg stance and sideways fall configurations. Choosing a proper failure criterion is very important in hip fracture risk assessment. The aim of this study is to define a hip fracture risk index using the strain energy, von Mises stress, and von Mises strain criteria and to compare the fracture risk indices calculated with these criteria at the critical regions of the femur. It is found that, under all three criteria, the hip fracture risk at the femoral neck and the intertrochanteric region is higher than in other parts of the femur, probably due to the larger amount of cancellous bone in these regions. The results also show that the strain energy criterion gives a more reasonable assessment of hip fracture risk based on the bone failure mechanism, while the von Mises strain criterion is more conservative than the other two criteria and leads to higher estimates of hip fracture risk indices. Keywords: Hip fracture risk, finite element model, strain energy, von Mises stress, von Mises strain. 1 Introduction Hip fracture is the most common injury of the elderly during a sideways fall. It has been reported that hip fracture may lead to long-term disability and death [Resnick and Greenspan (1989)]. The total number of hip fractures is increasing worldwide [Gullberg, Johnell and Kanis (1997)]. 
Therefore, special attention must be dedicated to this issue in order to provide appropriate plans for the prevention and treatment of hip fracture. Accurate assessment of hip fracture risk in the elderly helps in designing proper preventive schemes, such as effective hip protectors, and in providing proper treatment plans to protect the elderly against future hip fracture. \textsuperscript{1} Department of Mechanical Engineering, University of Manitoba, Winnipeg, R3T 5V6, Canada. \textsuperscript{2} Institute of Structural Mechanics, Bauhaus-Universität Weimar, Weimar, 99423, Germany. \textsuperscript{3} Department of Biomedical Engineering, University of Manitoba, Winnipeg, R3T 5V6, Canada. \textsuperscript{4} Institute of Research and Development, Duy Tan University, Da Nang, Viet Nam. *Corresponding Author: Timon Rabczuk. Email: firstname.lastname@example.org. Received: 09 December 2019; Accepted: 06 January 2020. By integrating an imaging technology such as Dual-Energy X-ray Absorptiometry (DXA) or Quantitative Computed Tomography (QCT) with a numerical method such as the finite element (FE) method, a category of more reliable tools for assessing hip fracture risk has been developed; these tools do not share the limitations of statistical models or of methods based on measuring bone mineral density (BMD). However, in numerical and computational models such as QCT-based finite element models, choosing a proper failure criterion based on the bone microstructure is very important for accurate assessment of hip fracture risk. The human femur consists of inhomogeneous (porous) cancellous bone and nearly homogeneous cortical bone, and their failure mechanisms differ markedly because of their different microstructures. 
The failure mechanism of cancellous bone is often buckling, while the failure of denser cancellous bone and of cortical bone is mostly characterized by local cracking [Mirzaei, Keshavarzian and Naeini (2014); Stölken and Kinney (2003)]. Although stress- and strain-based failure criteria are accurate for ductile materials such as metals, they may not be accurate for bone: its tensile strength is smaller than its compressive strength, indicating that bone should be classified as a brittle material [Cordey and Gautier (1999)]. Due to this property, the strain energy criterion, which combines both stress and strain effects, may lead to a more accurate assessment of hip fracture risk. In the literature, hip fracture risk has usually been estimated using the von Mises stress and von Mises strain criteria [Lotz, Cheal and Hayes (1991); Keyak, Rossi, Jones et al. (1997); Luo, Ferdous and Leslie (2013)], the maximum principal stress and strain criteria [Ota, Yamamoto and Morita (1999); Testi, Viceconti, Baruffaldi et al. (1999); Schileo, Taddei, Cristofolini et al. (2008); Gong, Zhang, Fan et al. (2012)], the maximum shear stress criterion [Keyak and Rossi (2000)], the maximum distortion energy criterion [Keyak and Rossi (2000)], and the strain energy criterion [Kheirollahi and Luo (2015); Kheirollahi (2015)]. To the best of our knowledge, there is no comparative study of hip fracture risk assessment using different failure criteria. The objective of this study is to compare hip fracture risk indices calculated by the strain energy, von Mises stress, and von Mises strain criteria at the critical cross-sections of the human femur.
We construct a finite element model of the femur from the QCT images of clinical cases, simulate the single-leg stance and sideways fall configurations by finite element analysis, assess fracture risk indices in the critical regions of the femur using the strain energy, von Mises stress, and von Mises strain criteria, and finally evaluate and discuss their conservatism and accuracy based on the bone failure mechanism. 2 Methodology The proposed methodology for assessing hip fracture risk in the critical regions of the femur using the strain energy, von Mises stress, and von Mises strain criteria determined from a QCT-based finite element model is shown in Fig. 1. The procedure is explained in detail in the following subsections. 2.1 QCT-Based finite element modeling 2.1.1 QCT scan of femur For accurate assessment of hip fracture risk, a three-dimensional (3D) finite element model of the subject's femur is required. The 3D model is constructed from the QCT image of the subject's femur; the thickness of the QCT slices is usually 1 mm. The QCT images are saved in the Digital Imaging and Communications in Medicine (DICOM) format. To construct a 3D model of a femur, an appropriate segmentation is required to separate the femur from the soft tissue. Each voxel of the QCT image has an intensity expressed in Hounsfield Units (HU), which is associated with bone density [Keyak, Meagher, Skinner et al. (1990); Keaveny, Borchers, Gibson et al. (1993)]. In this study, QCT images of 20 clinical cases, including 10 females and 10 males, were obtained from the Winnipeg Health Science Centre in anonymized form under a human research ethics approval. The cases are in the age range of 51 to 78 years (average 64.5 years). Statistical information of the clinical cases is listed in Tab. 1.
Figure 1: The proposed methodology for calculating hip fracture risk index using the strain energy, von Mises stress, and von Mises strain criteria

Table 1: Statistical information of the 20 clinical cases

|         | Age (years) | Height (cm) | Body weight (kg) | BMI (kg/m²) |
|---------|-------------|-------------|------------------|-------------|
| Range   | 51-78       | 155.8-193.2 | 51.7-111.4       | 18.83-43.36 |
| Average | 64.5        | 170.33      | 81.28            | 28          |

2.1.2 Generation of finite element mesh First, the 3D model of the femur is constructed from the subject's QCT image using Mimics (Materialise, Leuven, Belgium). The QCT images, saved in DICOM format, are imported into Mimics for the required segmentation (Fig. 2(a)) and generation of the 3D model of the femur (Fig. 2(b)). Then, an FE mesh is generated by employing the 3-matic module of Mimics (Fig. 2(c)). In this study, the 4-node linear tetrahedral element SOLID72 in ANSYS is utilized. To analyze model convergence, FE models with different maximum element edge lengths are employed, and the maximum von Mises stress is obtained for each FE model under the same conditions. The largest element edge length leading to converged solutions is determined and utilized in all FE analyses. Figure 2: QCT-based finite element analysis of the femur: (a) QCT-scan of the femur; (b) 3D model constructed from the QCT image; (c) 3D finite element model; (d) inhomogeneous isotropic material properties assignment; (e) single-leg stance configuration; and (f) sideways fall configuration. (Color should be used for this figure) 2.1.3 Material properties assignment To generate a more realistic FE model, inhomogeneous isotropic material properties are assigned to the femur. These properties are extracted from the QCT image data using a correlation between the CT numbers and the bone material properties.
The bone ash density ($\rho_{ash}$) is determined from the HU number by the following empirical equation [Les, Keyak, Stover et al. (1994); Dragomir-Daescu, Buijs, McEligot et al. (2010)], \[ \rho_{ash} = 0.04162 + 0.000854 \text{ HU} \quad (\text{g/cm}^3) \] (1) Eqs. (2)-(4), developed by Keller [Keller (1994)], are used to determine the Young's modulus \( E \), the yield stress \( \sigma_Y \), and the yield strain \( \varepsilon_Y \), respectively, from the bone ash density, \[ E = 10500 \rho_{ash}^{2.29} \quad (\text{MPa}) \] (2) \[ \sigma_Y = 116 \rho_{ash}^{2.47} \quad (\text{MPa}) \] (3) \[ \varepsilon_Y = 0.011 \rho_{ash}^{-0.26} \] (4) A constant Poisson's ratio \( \nu = 0.4 \) is assigned [Keyak, Rossi, Jones et al. (1997); Reilly and Burstein (1975)]. To apply the bone material properties, elements are categorized into several discrete material bins using Mimics (Materialise, Leuven, Belgium), representing the continuous distribution of the inhomogeneous bone mechanical properties. A convergence study is performed to determine the required number of material bins. For this purpose, models with different numbers of material bins are constructed, and the maximum von Mises stress is obtained for each model under the same conditions. The smallest number of material bins generating converged solutions is determined and employed in all remaining simulations. Fig. 2(d) illustrates an isotropic inhomogeneous distribution of bone material properties. ### 2.2 Finite element analysis A femur finite element model with the assigned material properties, extracted from Mimics, is imported into ANSYS for further analyses. In the finite element analysis, the single-leg stance and sideways fall configurations are simulated. For simulation of the single-leg stance configuration, 2.5 times the subject's body weight is imposed as a distributed load on the femoral head [Yoshikawa, Turner, Peacock et al.
(1994)] and the distal end of the femur is considered completely fixed [Keyak, Rossi, Jones et al. (1997); Bessho, Ohnishi, Matsumoto et al. (2009)] (see Fig. 2(e)), \[ F_{\text{Stance}} = 2.5w \quad (N) \] (5) where \( w \) is the subject's body weight in Newtons (N). In the sideways fall configuration, the femur is completely fixed at the distal end and the head of the femur is fixed in the loading direction (Fig. 2(f)) [Koivumäki, Thevenot, Pulkkinen et al. (2012); Nishiyama, Gilchrist, Guy et al. (2013)]. The representative impact force of the sideways fall configuration, applied to the greater trochanter (Fig. 2(f)), is given by [Yoshikawa, Turner, Peacock et al. (1994); Robinovitch, Hayes and McMahon (1991)], \[ F_{\text{Impact}} = 8.25w(\frac{h}{170})^{\frac{1}{2}} \quad (N) \] (6) where \( h \) is the height of the subject in centimeters (cm). All loading and boundary conditions are applied to groups of nodes at the greater trochanter, the femoral head, and the distal end of the femur (Fig. 2(e) and Fig. 2(f)). All FE simulations are performed automatically using ANSYS Parametric Design Language (APDL) codes. The solutions required for hip fracture risk assessment, including the nodal displacements, stresses, and strains, are obtained from the finite element analysis. 2.3 Critical cross-sections of femur The three major types of hip fracture are femoral neck fracture, intertrochanteric fracture, and subtrochanteric fracture (Fig. 3). The intertrochanteric, femoral neck, and subtrochanteric fractures constitute 49, 37, and 14 percent of all hip fractures, respectively [Michelson, Myers, Jinnah et al. (1995)]. Thus, the three critical cross-sections of the femur, which commonly have the highest fracture risk, are the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) (Fig. 3).
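As a concrete illustration, the empirical property relations of Eqs. (1)-(4) and the load magnitudes of Eqs. (5) and (6) can be sketched in Python. This is a sketch only: the function names are ours, and the study itself implements these steps in Mimics, ANSYS, and MATLAB.

```python
import math

def bone_properties(hu):
    """Empirical HU-to-property mapping, Eqs. (1)-(4)."""
    rho_ash = 0.04162 + 0.000854 * hu       # ash density (g/cm^3), Eq. (1)
    E       = 10500.0 * rho_ash ** 2.29     # Young's modulus (MPa), Eq. (2)
    sigma_y = 116.0 * rho_ash ** 2.47       # yield stress (MPa), Eq. (3)
    eps_y   = 0.011 * rho_ash ** -0.26      # yield strain, Eq. (4)
    return rho_ash, E, sigma_y, eps_y

def stance_load(w):
    """Single-leg stance load on the femoral head, Eq. (5); w in N."""
    return 2.5 * w

def fall_impact_load(w, h):
    """Sideways-fall impact force on the greater trochanter, Eq. (6);
    w is body weight in N, h is subject height in cm."""
    return 8.25 * w * math.sqrt(h / 170.0)
```

For example, for a subject of height 170 cm the impact force reduces to 8.25 times body weight, so a 736 N (about 75 kg) subject experiences roughly 6.1 kN at the greater trochanter.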
In this study, we determine these critical cross-sections based on the method proposed in the literature [Kheirollahi and Luo (2015); Kheirollahi and Luo (2017)]. **Figure 3:** Critical femoral cross-sections: the smallest femoral neck cross-section (A-A), the intertrochanteric cross-section (B-B), and the subtrochanteric cross-section (C-C) 2.4 Hip fracture risk index definition In this section, the hip fracture risk index is defined using the strain energy, von Mises stress, and von Mises strain criteria. The strain energy, von Mises stress, and von Mises strain at the three critical cross-sections of the femur induced by the applied forces are computed using in-house MATLAB codes and the data extracted by APDL codes from the finite element solutions. The plane boundaries of the three critical cross-sections, extracted from the finite element mesh, are imported into MATLAB to generate a two-dimensional (2-D) mesh for calculating the cross-sectional strain energy, von Mises stress, and von Mises strain. Fig. 4 shows the generated triangle elements over the smallest femoral neck cross-section, the intertrochanteric cross-section, and the subtrochanteric cross-section.
The strain energy, von Mises stress, and von Mises strain at the three critical cross-sections induced by the applied forces are, respectively, the sum of the strain energy, the sum of the von Mises stress, and the sum of the von Mises strain over all triangle elements of the cross-section, i.e., \[ U = \sum_{e=1}^{m} U_e \] (7a) \[ \sigma = \sum_{e=1}^{m} \sigma_e \] (7b) \[ \varepsilon = \sum_{e=1}^{m} \varepsilon_e \] (7c) where \( U, \sigma, \) and \( \varepsilon \) are, respectively, the strain energy, von Mises stress, and von Mises strain at one of the three critical cross-sections of the femur; \( U_e, \sigma_e, \) and \( \varepsilon_e \) are, respectively, the strain energy, von Mises stress, and von Mises strain in a triangle element (\( e \)) of the cross-section induced by the applied forces; and \( m \) is the number of triangle elements created over the cross-section. **Figure 4:** Generated triangle elements over (a) the smallest femoral neck cross-section, (b) the intertrochanteric cross-section, and (c) the subtrochanteric cross-section The Gaussian integration method is used to calculate the strain energy, von Mises stress, and von Mises strain in a triangle element (\( e \)) of the cross-section. Integration points in each triangle element are determined using in-house MATLAB codes.
By using the Gaussian integration method, the strain energy, von Mises stress, and von Mises strain of a triangle element (\( e \)) induced by the applied forces are calculated as, \[ U_e = \int \int \bar{U}_e \, dA \approx \sum_{i=1}^{n} W_i |J| \bar{U}_i \] (8a) \[ \sigma_e = \int \int \hat{\sigma}_e \, dA \approx \sum_{i=1}^{n} W_i |J| \hat{\sigma}_i \] (8b) \[ \varepsilon_e = \int \int \hat{\varepsilon}_e \, dA \approx \sum_{i=1}^{n} W_i |J| \hat{\varepsilon}_i \] (8c) where \( \bar{U}_e, \hat{\sigma}_e, \) and \( \hat{\varepsilon}_e \) are, respectively, the strain energy density, stress, and strain functions of a triangle element (\( e \)); \( \bar{U}_i, \hat{\sigma}_i, \) and \( \hat{\varepsilon}_i \) are, respectively, the strain energy density, von Mises stress, and von Mises strain values at the integration points of a triangle element \( e \); \( W_i \) is the weight at the integration points; \( |J| \) is the determinant of the Jacobian matrix of the triangle element; and \( n \) is the number of integration points over the triangle element (integration domain). The von Mises stress and strain values at the integration points of the triangle elements are obtained from the results of the FE analysis. The strain energy density at an integration point (\( i \)) is determined from the finite element solution obtained by the 3D QCT-based FE model, i.e., \[ \bar{U}_i = \frac{1}{2} \{ \sigma \}^T \{ \varepsilon \} \] (9) where \(\{\sigma\} = [D]\{\varepsilon\}\) and \(\{\varepsilon\} = [B]\{d\}\).
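The quadrature of Eqs. (8a)-(8c) can be sketched as follows. Since the in-house MATLAB codes are not available, the point locations and weights below are the common degree-2, 3-point rule on the reference triangle, which is our assumption rather than the paper's exact choice:

```python
import numpy as np

# Degree-2 accurate 3-point rule on the reference triangle (0,0)-(1,0)-(0,1);
# the weights sum to the reference area 1/2.
GAUSS_PTS = np.array([[1/6, 1/6], [2/3, 1/6], [1/6, 2/3]])
GAUSS_WTS = np.array([1/6, 1/6, 1/6])

def integrate_over_triangle(f, verts):
    """Approximate the integral of f(x, y) over a physical triangle with
    vertices `verts` (3x2), in the form of Eqs. (8a)-(8c): sum_i W_i |J| f_i."""
    v = np.asarray(verts, dtype=float)
    J = np.column_stack((v[1] - v[0], v[2] - v[0]))  # Jacobian of the linear map
    detJ = abs(np.linalg.det(J))                      # |J| = 2 * triangle area
    total = 0.0
    for (xi, eta), w in zip(GAUSS_PTS, GAUSS_WTS):
        x, y = v[0] + J @ np.array([xi, eta])         # map to physical coordinates
        total += w * detJ * f(x, y)
    return total
```

Integrating the constant function 1 returns the triangle area, a quick sanity check on the weights and |J|.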
The strain energy density at each integration point can be expressed in terms of the finite element solution as, \[ \bar{U}_i = \frac{1}{2} \{d\}_e^T [B]_e^T [D]_e [B]_e \{d\}_e \] (10) where \(\{d\}\) is the vector of displacements at the nodes of the tetrahedral element where the integration point is located; matrix \([B]\) contains the derivatives of the shape functions of the tetrahedral element; and \([D]\) is the material property matrix of the tetrahedral element, \[ [D]_e = \frac{E}{(1+\nu)(1-2\nu)} \begin{bmatrix} 1-\nu & \nu & \nu & 0 & 0 & 0 \\ \nu & 1-\nu & \nu & 0 & 0 & 0 \\ \nu & \nu & 1-\nu & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{2}-\nu & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{2}-\nu & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2}-\nu \end{bmatrix} \] (11) where Poisson's ratio is constant (\(\nu = 0.4\)) and Young's modulus is a function of the bone density obtained from Eq. (2). For each integration point, the Young's modulus is calculated from the bone density at that point, which is the density of the tetrahedral element where the integration point is located. The maximum allowable strain energy, stress, and strain of the three critical cross-sections of the femur are also computed using in-house MATLAB codes and the data extracted by APDL codes from the finite element solutions. The maximum allowable strain energy, stress, and strain (or the yield strain energy, yield stress, and yield strain) of the three critical cross-sections are, respectively, the sum of the yield strain energy, the sum of the yield stress, and the sum of the yield strain over all triangle elements of the cross-section, i.e., \[ U_Y = \sum_{e=1}^{m} U_Y^e \] (12a) \[ \sigma_Y = \sum_{e=1}^{m} \sigma_Y^e \] (12b) \[ \varepsilon_Y = \sum_{e=1}^{m} \varepsilon_Y^e \] (12c) where \(U_Y^e\), \(\sigma_Y^e\), and \(\varepsilon_Y^e\) are, respectively, the yield strain energy, yield stress, and yield strain in a triangle element \((e)\).
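A sketch of Eqs. (10) and (11): assembling the isotropic material matrix \([D]\) and evaluating the strain energy density from a strain vector. Voigt notation with engineering shear strains is assumed here, matching the layout of Eq. (11):

```python
import numpy as np

def elasticity_matrix(E, nu=0.4):
    """Isotropic 6x6 material matrix [D] of Eq. (11), Voigt notation."""
    c = E / ((1 + nu) * (1 - 2 * nu))
    D = np.zeros((6, 6))
    D[:3, :3] = nu                       # off-diagonal normal coupling terms
    np.fill_diagonal(D[:3, :3], 1 - nu)  # normal diagonal terms
    for k in range(3, 6):                # shear terms
        D[k, k] = 0.5 - nu
    return c * D

def strain_energy_density(strain, D):
    """U_bar = 1/2 {eps}^T [D] {eps}, i.e. Eq. (9) with {sigma} = [D]{eps};
    this is the integrand of Eq. (10) once {eps} = [B]{d} has been formed."""
    eps = np.asarray(strain, dtype=float)
    return 0.5 * eps @ D @ eps
```

With \(\nu = 0.4\) the prefactor is \(E/0.28\), so the matrix is strongly sensitive to the density-dependent Young's modulus of Eq. (2).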
The Gaussian integration method is also used to calculate the maximum allowable (yield) strain energy, stress, and strain in each triangle element. The maximum allowable strain energy, stress, and strain that a triangle element \((e)\) can sustain are given by, \[ U_Y^e = \int \int \bar{U}_Y^e \, dA \approx \sum_{i=1}^{n} W_i |J| \bar{U}_{Yi} \] (13a) \[ \sigma_Y^e = \int \int \hat{\sigma}_Y^e \, dA \approx \sum_{i=1}^{n} W_i |J| \hat{\sigma}_{Yi} \] (13b) \[ \varepsilon_Y^e = \int \int \hat{\varepsilon}_Y^e \, dA \approx \sum_{i=1}^{n} W_i |J| \hat{\varepsilon}_{Yi} \] (13c) where \(\bar{U}_Y^e\), \(\hat{\sigma}_Y^e\), and \(\hat{\varepsilon}_Y^e\) are, respectively, the yield strain energy density, yield stress, and yield strain functions in a triangle element \((e)\); and \(\bar{U}_{Yi}\), \(\hat{\sigma}_{Yi}\), and \(\hat{\varepsilon}_{Yi}\) are, respectively, the yield strain energy density, yield stress, and yield strain values at the integration points of a triangle element \((e)\). The yield stress and yield strain at each integration point are obtained from Eqs. (3) and (4) based on its density, which is the density of the tetrahedral element where the integration point is located. The yield strain energy density at an integration point \((i)\) is calculated as, \[ \bar{U}_{Yi} = \frac{1}{2} \sigma_{Yi} \varepsilon_{Yi} = \frac{\sigma_{Yi}^2}{2E_i} \] (14) where \(E_i\), \(\sigma_{Yi}\), and \(\varepsilon_{Yi}\) are, respectively, the Young's modulus, yield stress, and yield strain at the integration point; all are functions of the bone density of the tetrahedral element where the integration point is located, as given in Eqs. (2)-(4).
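The yield strain energy density of Eq. (14), together with the element sums of Eqs. (7) and (12), feeds the fracture risk index defined next in Eq. (15). A minimal sketch, in which the per-element values are illustrative inputs rather than data from the paper:

```python
def yield_strain_energy_density(sigma_y, E):
    """U_Y at an integration point, Eq. (14): sigma_Y^2 / (2 E)."""
    return sigma_y ** 2 / (2.0 * E)

def fracture_risk_index(applied, allowable):
    """eta of Eqs. (15a)-(15c): ratio of the load-induced cross-sectional
    quantity (sum over triangle elements, Eq. (7)) to the maximum allowable
    one (Eq. (12)). Values of eta >= 1 indicate predicted failure."""
    return sum(applied) / sum(allowable)
```

The same ratio applies whichever criterion is used; only the per-element quantities (strain energy, von Mises stress, or von Mises strain) change.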
Hip fracture risk indices at the three critical cross-sections of the femur using the strain energy, von Mises stress, and von Mises strain criteria are defined, respectively, as the ratio of the strain energy, stress, and strain induced by the applied forces to the maximum allowable strain energy, stress, and strain of the femur over the cross-section, \[ \eta = \frac{U}{U_Y} \] (15a) \[ \eta = \frac{\sigma}{\sigma_Y} \] (15b) \[ \eta = \frac{\varepsilon}{\varepsilon_Y} \] (15c) where \(\eta\) is the fracture risk index at one of the three critical cross-sections of the femur based on the strain energy, von Mises stress, or von Mises strain criterion; and \(U\), \(\sigma\), \(\varepsilon\) and \(U_Y\), \(\sigma_Y\), \(\varepsilon_Y\) are obtained from Eqs. (7) and (12), respectively. 3 Results 3.1 Convergence studies 3.1.1 Element size in femur finite element analysis The convergence of finite element solutions for a representative case is shown in Fig. 5. The convergence study shows that the maximum von Mises stress at the narrowest femoral neck converges when the maximum element edge length is smaller than 8 mm. Therefore, in constructing the remaining femur FE models, the maximum element edge length is set to 8 mm. Figure 5: Convergence of the maximum von Mises stress at the femoral neck with element size 3.1.2 Assignment of inhomogeneous material properties 3D femur FE models with different numbers of material bins are constructed to investigate model convergence in the inhomogeneous material properties assignment. For each model, the maximum von Mises stress at the narrowest femoral neck is monitored under the same loading and boundary conditions. As shown in Fig. 6, the results of the convergence study indicate that there is no significant change in the maximum von Mises stress once the number of material bins exceeds 50. Thus, 50 discrete material bins are used in the material properties assignment for all cases.
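The convergence studies in this section all follow the same pattern: refine a discretization parameter until a monitored quantity (here, the maximum von Mises stress) stops changing, then adopt the cheapest converged setting. A generic sketch; the function name and the 2% tolerance are illustrative choices of ours, not values from the paper:

```python
def converged_setting(settings, results, tol=0.02):
    """Pick the cheapest setting (e.g. largest element edge length, or fewest
    material bins) whose monitored result is within `tol` relative error of
    the most refined result. `settings`/`results` must be ordered from
    cheapest to most refined."""
    ref = results[-1]  # most refined model serves as the reference solution
    for setting, value in zip(settings, results):
        if abs(value - ref) / abs(ref) <= tol:
            return setting
    return settings[-1]
```

For instance, if edge lengths of 10, 8, 6, and 4 mm give maximum stresses of 120.0, 101.0, 100.5, and 100.0 MPa, the 8 mm mesh is already within 2% of the refined result and would be retained, mirroring the 8 mm choice reported above.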
3.1.3 Element size in calculating fracture risk index A convergence study is also performed to determine the element size used in integrating the cross-sectional strain energy, as it influences the calculation of the fracture risk index (FRI). The FRI at the smallest cross-section of the femur is calculated with different maximum element edge lengths. The results are plotted in Fig. 7. The FRI does not change significantly once the maximum element edge length is smaller than 5 mm. Therefore, the maximum element edge length is set to 5 mm in calculating the cross-sectional strain energy. Figure 6: Convergence of the maximum von Mises stress at the femoral neck with the number of material bins Figure 7: Convergence of the fracture risk index (FRI) with the maximum element edge length of the triangle elements generated over the smallest cross-section of the femur 3.1.4 Number of integration points in calculating cross-section strain energy, von Mises stress, and von Mises strain The effect of the number of integration points on the calculation of the FRI is investigated in this section. The FRI at the smallest femoral neck cross-section is computed for 5 clinical cases with different numbers of integration points. The relative errors between the FRIs obtained with 3 and 7 integration points are shown in Tab. 2. As can be seen, the errors are not significant. Therefore, the 3-point integration rule is used in this study to reduce the computational time.

Table 2: Femoral neck FRI obtained with different numbers of integration points

| Case No. | 3 integration points | 7 integration points | Relative error (%) |
|----------|----------------------|----------------------|--------------------|
| 1        | 0.239                | 0.2416               | 1.07               |
| 2        | 0.6898               | 0.6975               | 1.1                |
| 3        | 0.2966               | 0.2976               | 0.33               |
| 4        | 0.8885               | 0.899                | 1.16               |
| 5        | 1.1482               | 1.1701               | 1.87               |

3.2 Stress and strain patterns at the critical cross-sections Figs.
8 and 9 show the maximum von Mises stress and von Mises strain at the three critical cross-sections of the femur during both the single-leg stance and the sideways fall for 10 clinical cases, including 5 females and 5 males. The results illustrate that the femoral neck and the intertrochanteric region receive higher stresses than the subtrochanteric region during the sideways fall (Tab. 4). For the single-leg stance configuration, the stress patterns are different (Tab. 3): the differences between the stresses over the three regions are much smaller, and for some cases the stresses at the subtrochanteric region are higher than those in the other two regions (Fig. 8). In contrast, the strains at the three critical regions of the femur follow similar trends in both the single-leg stance and the sideways fall configurations (Tabs. 5 and 6 and Fig. 9).

Table 3: Maximum von Mises stress (MPa) at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases during the single-leg stance

|         | SFN CS      | IntT CS    | SubT CS     |
|---------|-------------|------------|-------------|
| Range   | 19.56-52.38 | 23.55-47.8 | 27.09-43.04 |
| Average | 32.93       | 32.41      | 35.84       |

Table 4: Maximum von Mises stress (MPa) at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases during the sideways fall

|         | SFN CS      | IntT CS   | SubT CS   |
|---------|-------------|-----------|-----------|
| Range   | 22.78-69.97 | 16.2-60.3 | 6.73-33.2 |
| Average | 46.52       | 33.48     | 18.66     |

Table 5: Maximum von Mises strain at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases
during the single-leg stance

|         | SFN CS            | IntT CS           | SubT CS           |
|---------|-------------------|-------------------|-------------------|
| Range   | 5.25E-03-1.55E-02 | 5.49E-03-1.87E-02 | 2.14E-03-4.34E-03 |
| Average | 9.55E-03          | 1.05E-02          | 3.11E-03          |

Table 6: Maximum von Mises strain at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases during the sideways fall

|         | SFN CS            | IntT CS           | SubT CS           |
|---------|-------------------|-------------------|-------------------|
| Range   | 1.67E-02-7.43E-02 | 3.35E-02-1.91E-01 | 5.08E-04-3.31E-03 |
| Average | 4.26E-02          | 9.37E-02          | 1.74E-03          |

Figure 8: Maximum von Mises stress (MPa) at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases during the single-leg stance and the sideways fall. (Color should be used for this figure) Figure 9: Maximum von Mises strain at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 clinical cases during the single-leg stance and the sideways fall. (Color should be used for this figure) 3.3 Hip fracture risk indices obtained using the strain energy, von Mises stress, and von Mises strain criteria For 20 clinical cases (10 females and 10 males), hip fracture risk indices are calculated using the strain energy, von Mises stress, and von Mises strain criteria for the smallest femoral neck, the intertrochanteric, and the subtrochanteric cross-sections of the femur during the single-leg stance and the sideways fall configurations. The calculated fracture risk indices at the three critical cross-sections of the femur using these three criteria for 10 females and 10 males are shown in Figs. 10-12. As shown in Figs.
13 and 14, there is little difference between the average FRI obtained using the strain energy criterion and that obtained using the von Mises stress criterion for the smallest femoral neck, the intertrochanteric, and the subtrochanteric cross-sections during the sideways fall; however, the average FRI obtained using the von Mises strain criterion is much higher for these three cross-sections. During the single-leg stance, as shown in Figs. 10-14, the FRIs obtained using the von Mises stress and von Mises strain criteria are much higher than those obtained using the strain energy criterion. The FRIs obtained using the von Mises stress and von Mises strain criteria during the single-leg stance are rather high for a static loading such as the stance load on the femur, and they are in the range of the FRIs obtained using the strain energy criterion during the sideways fall. This would indicate that, based on the von Mises stress and von Mises strain criteria, the elderly are at risk of hip fracture even during normal walking. Hence, it is concluded that hip fracture risk assessment using the von Mises stress and von Mises strain criteria is more conservative than assessment using the strain energy criterion. Figure 10: Hip fracture risk index (FRI) at the smallest cross-section of the femoral neck (SFN CS) for 10 females and 10 males during the single-leg stance and the sideways fall configurations using the strain energy, von Mises stress, and von Mises strain criteria. (Color should be used for this figure) Figure 11: Hip fracture risk index (FRI) at the intertrochanteric cross-section of the femur (IntT CS) for 10 females and 10 males during the single-leg stance and the sideways fall configurations using the strain energy, von Mises stress, and von Mises strain criteria.
(Color should be used for this figure) Figure 12: Hip fracture risk index (FRI) at the subtrochanteric cross-section of the femur (SubT CS) for 10 females and 10 males during the single-leg stance and the sideways fall configurations using the strain energy, von Mises stress, and von Mises strain criteria. (Color should be used for this figure) Figure 13: Average FRI at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 females during the single-leg stance and the sideways fall configurations using the strain energy, von Mises stress, and von Mises strain criteria. (Color should be used for this figure) Figure 14: Average FRI at the smallest femoral neck cross-section (SFN CS), the intertrochanteric cross-section (IntT CS), and the subtrochanteric cross-section (SubT CS) of the femur for 10 males during the single-leg stance and the sideways fall configurations using the strain energy, von Mises stress, and von Mises strain criteria. (Color should be used for this figure) 4 Discussion Choosing a proper bone failure criterion is challenging. In the literature, stress- and strain-based failure criteria, such as the von Mises stress and von Mises strain criteria and the maximum principal stress and strain criteria, have commonly been used to assess hip fracture risk. In our previous study [Kheirollahi and Luo (2015)], a strain-energy-based failure criterion was used for hip fracture risk assessment. Since cancellous bone failure takes the form of buckling and deformation (strain intensity) while cortical bone failure is related to local cracking (stress intensity), the strain energy failure criterion, which combines both stress and strain intensities, is theoretically more reasonable than the other failure criteria for hip fracture risk assessment.
There are significant differences between the strains in the three critical regions of the femur during both the single-leg stance and the sideways fall (see Tabs. 5 and 6, and Fig. 9), while the differences between the corresponding stresses are not as large (see Tabs. 3 and 4, and Fig. 8), indicating the sensitivity of bone to strain because of its brittleness. Thus, strain effects are of great importance in bone fracture risk assessment. In this study, the strain energy, von Mises stress, and von Mises strain criteria were compared for hip fracture risk assessment. The results indicate that for the sideways fall configuration, the von Mises strain criterion gives higher estimates of hip fracture risk indices and is more conservative than the von Mises stress and strain energy criteria. For the single-leg stance configuration, the von Mises stress and von Mises strain criteria give higher estimates of hip fracture risk than the strain energy criterion, and the FRIs calculated using the von Mises stress and von Mises strain criteria for the single-leg stance are in the range of those obtained by the strain energy criterion for the sideways fall configuration. Therefore, based on the von Mises stress and von Mises strain criteria, the elderly would be at risk of hip fracture even during normal walking. It can be concluded that the von Mises strain criterion is the most conservative failure criterion in hip fracture risk assessment. Based on these three failure criteria, the femoral neck and the intertrochanteric region have higher fracture risk than the subtrochanteric region (see Figs. 10-14), which is consistent with the fact that the femoral neck and the intertrochanteric region have a larger proportion of cancellous bone than the subtrochanteric region, and cancellous bone is generally weaker than cortical bone.
Hence, hip fracture is most likely to initiate at the femoral neck, and then in the intertrochanteric or subtrochanteric region. Therefore, depending on the required degree of conservatism, the strain energy criterion, the von Mises stress criterion, or the von Mises strain criterion can be used to evaluate hip fracture risk in individuals and to devise proper preventive and treatment plans. However, the strain energy criterion gives a more reasonable assessment of hip fracture risk based on the bone failure mechanism, because it considers the effects of stress and strain simultaneously. 5 Conclusion Choosing a reliable failure criterion for assessing hip fracture risk in individuals is crucially important for preventing hip fracture and initiating treatment. The purpose of this study is to compare the strain energy, von Mises stress, and von Mises strain criteria in the estimation of hip fracture risk. The results show that the strain energy failure criterion leads to a more reliable assessment of hip fracture risk than the von Mises stress and von Mises strain criteria, while the von Mises strain criterion is more conservative than the other two criteria. The results of this study can be used in clinical applications to evaluate hip fracture risk and monitor the effects of corresponding treatments, and also in future studies on hip fracture risk assessment. **Conflicts of Interest:** The authors declare that they have no conflicts of interest to report regarding the present study. **References** Bessho, M.; Ohnishi, I.; Matsumoto, T.; Ohashi, S.; Matsuyama, J. et al. (2009): Prediction of proximal femur strength using a CT-based nonlinear finite element method: differences in predicted fracture load and site with changing load and boundary conditions. *Bone*, vol. 45, no. 2, pp. 226-231. Cordey, J.; Gautier, E. (1999): Strain gauges used in the mechanical testing of bones.
Part I: theoretical and technical aspects. *Injury*, vol. 30, Supplement 1, pp. SA7-SA13. Dragomir-Daescu, D.; Buijs, J. O. D.; McEligot, S.; Dai, Y.; Entwistle, R. C. et al. (2010): Robust QCT/FEA models of proximal femur stiffness and fracture load during a sideways fall on the hip. *Annals of Biomedical Engineering*, vol. 39, no. 2, pp. 742-755. Gong, H.; Zhang, M.; Fan, Y.; Kwok, W. L.; Leung, P. C. (2012): Relationships between femoral strength evaluated by nonlinear finite element analysis and BMD, material distribution and geometric morphology. *Annals of Biomedical Engineering*, vol. 40, no. 7, pp. 1575-1585. Gullberg, B.; Johnell, O.; Kanis, J. A. (1997): World-wide projections for hip fracture. *Osteoporosis International*, vol. 7, no. 5, pp. 407-413. Keaveny, T. M.; Borchers, R. E.; Gibson, L. J.; Hayes, W. C. (1993): Trabecular bone modulus and strength can depend on specimen geometry. *Journal of Biomechanics*, vol. 26, no. 8, pp. 991-1000. Keller, T. S. (1994): Predicting the compressive mechanical behavior of bone. *Journal of Biomechanics*, vol. 27, no. 9, pp. 1159-1168. Keyak, J. H.; Rossi, S. A.; Jones, K. A.; Skinner, H. B. (1997): Prediction of femoral fracture load using automated finite element modeling. *Journal of Biomechanics*, vol. 31, no. 2, pp. 125-133. Keyak, J. H.; Rossi, S. A. (2000): Prediction of femoral fracture load using finite element models: an examination of stress- and strain-based failure theories. *Journal of Biomechanics*, vol. 33, no. 2, pp. 209-214. Keyak, J. H.; Meagher, J. M.; Skinner, H. B.; Mote Jr, C. D. (1990): Automated three-dimensional finite element modelling of bone: a new method. *Journal of Biomedical Engineering*, vol. 12, no. 5, pp. 389-397. Kheirollahi, H.; Luo, Y. (2015): Assessment of hip fracture risk using cross-section strain energy determined by QCT-based finite element modeling. *BioMed Research International*, vol. 2015, pp. 1-15. Kheirollahi, H. 
(2015): *Assessment of Hip Fracture Risk Using Cross-Section Strain Energy Determined from QCT-Based Finite Element Model* (M.Sc. Thesis), University of Manitoba, Canada. Kheirollahi, H.; Luo, Y. (2015): Identification of high stress and strain regions in proximal femur during single-leg stance and sideways fall using QCT-based finite element model. *International Journal of Biomedical and Biological Engineering*, vol. 9, no. 8, pp. 633-640. Kheirollahi, H.; Luo, Y. (2017): Understanding hip fracture by QCT-based finite element modeling. *Journal of Medical and Biological Engineering*, vol. 37, no. 5, pp. 686-694. Koivumäki, J. E. M.; Thevenot, J.; Pulkkinen, P.; Kuhn, V.; Link, T. M. et al. (2012): CT-based finite element models can be used to estimate experimentally measured failure loads in the proximal femur. *Bone*, vol. 50, no. 4, pp. 824-829. Les, C. M.; Keyak, J. H.; Stover, S. M.; Taylor, K. T.; Kaneps, A. J. (1994): Estimation of material properties in the equine metacarpus with use of quantitative computed tomography. *Journal of Orthopaedic Research*, vol. 12, no. 6, pp. 822-833. Lotz, J. C.; Cheal, E. J.; Hayes, W. C. (1991): Fracture prediction for the proximal femur using finite element models: part I-linear analysis. *Journal of Biomechanical Engineering*, vol. 113, no. 4, pp. 353-360. Lotz, J. C.; Cheal, E. J.; Hayes, W. C. (1991): Fracture prediction for the proximal femur using finite element models: part II-nonlinear analysis. *Journal of Biomechanical Engineering*, vol. 113, no. 4, pp. 361-365. Luo, Y.; Ferdous, Z.; Leslie, W. D. (2013): Precision study of DXA-based patient-specific finite element modeling for assessing hip fracture risk. *International Journal for Numerical Methods in Biomedical Engineering*, vol. 29, no. 5, pp. 615-629. Michelson, J. D.; Myers, A.; Jinnah, R.; Cox, Q.; Van Natta, M. (1995): Epidemiology of hip fractures among the elderly. Risk factors for fracture type. *Clinical Orthopaedics and Related Research*, no. 
311, pp. 129-135. Mirzaei, M.; Keshavarzian, M., Naeini, V. (2014): Analysis of strength and failure pattern of human proximal femur using quantitative computed tomography (QCT)-based finite element method. *Bone*, vol. 64, pp. 108-114. Nishiyama, K. K.; Gilchrist, S.; Guy, P.; Cripton, P., Boyd, S. K. (2013): Proximal femur bone strength estimated by a computationally fast finite element analysis in a sideways fall configuration. *Journal of Biomechanics*, vol. 46, no. 7, pp. 1231-1236. Ota, T.; Yamamoto, I.; Morita, R. (1999): Fracture simulation of the femoral bone using the finite-element method: how a fracture initiates and proceeds. *Journal of Bone and Mineral Metabolism*, vol. 17, no. 2, pp. 108-112. Reilly, D. T.; Burstein, A. H. (1975): The elastic and ultimate properties of compact bone tissue. *Journal of Biomechanics*, vol. 8, no. 6, pp. 393-405. Resnick, N. M.; Greenspan, S. L. (1989): ‘Senile’ osteoporosis reconsidered. *Journal of the American Medical Association*, vol. 261, no. 7, pp. 1025-1029. Robinovitch, S. N.; Hayes, W. C.; McMahon, T. A. (1991): Prediction of femoral impact forces in falls on the hip. *Journal of Biomechanical Engineering*, vol. 113, no. 4, pp. 366-374. Schileo, E.; Taddei, F.; Cristofolini, L.; Viceconti, M. (2008): Subject-specific finite element models implementing a maximum principal strain criterion are able to estimate failure risk and fracture location on human femurs tested *in vitro*. *Journal of Biomechanics*, vol. 41, no. 2, pp. 356-367. Stölken, J. S.; Kinney, J. H. (2003): On the importance of geometric nonlinearity in finite-element simulations of trabecular bone failure. *Bone*, vol. 33, no. 4, pp. 494-504. Testi, D.; Viceconti, M.; Baruffaldi, F.; Cappello, A. (1999): Risk of fracture in elderly patients: a new predictive index based on bone mineral density and finite element analysis. *Computer Methods and Programs in Biomedicine*, vol. 60, no. 1, pp. 23-33. Yoshikawa, T.; Turner, C. 
H.; Peacock, M.; Slemenda, C. W.; Weaver, C. M. et al. (1994): Geometric structure of the femoral neck measured using dual-energy X-ray absorptiometry. *Journal of Bone and Mineral Research*, vol. 9, no. 7, pp. 1053-1064.
Mechanism of Action of Human P-glycoprotein ATPase Activity PHOTOCHEMICAL CLEAVAGE DURING A CATALYTIC TRANSITION STATE USING ORTHOVANADATE REVEALS CROSS-TALK BETWEEN THE TWO ATP SITES* (Received for publication, April 15, 1998, and in revised form, May 7, 1998) Christine A. Hrycyna†§, Muralidhara Ramachandra¶‖, Suresh V. Ambudkar‡, Young Hee Ko**‡‡, Peter L. Pedersen**‡‡, Ira Pastan¶‖, and Michael M. Gottesman‡§§ From the †Laboratory of Cell Biology and ¶Laboratory of Molecular Biology, Division of Basic Sciences, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 and **Department of Biological Chemistry, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21205 Human P-glycoprotein (P-gp), an ATP-dependent efflux pump responsible for cross-resistance of human cancers to a variety of lipophilic compounds, is composed of two homologous halves, each containing six transmembrane domains and an ATP-binding/utilization domain. To determine whether both sites can hydrolyze ATP simultaneously, we used an orthovanadate (Vi)-induced ADP-trapping technique (P-gp:MgADP:Vi). By analogy with other ATPases, a photochemical peptide bond cleavage reaction occurs within the Walker A nucleotide binding domain consensus sequence (GX$_4$GK(T/S)) when the molecule is trapped with Vi in an inhibited catalytic transition state (P-gp:MgADP:Vi) and irradiated with ultraviolet light. Upon reconstitution into proteoliposomes, histidine-tagged P-gp purified from baculovirus-infected insect cells had drug-stimulated ATPase activity. Reconstituted P-gp was incubated with either ATP or 8-azido-ATP in the presence or absence of Vi under ultraviolet (365 nm) light on ice for 60 min. The resultant products were separated by SDS-polyacrylamide gel electrophoresis and subjected to immunoblotting with seven different human P-gp-specific antibodies covering the entire length of the molecule.
Little to no degradation of P-gp was observed in the absence of Vi. In the presence of Vi, products of approximately 28, 47, 94, and 110 kDa were obtained, consistent with predicted molecular weights from cleavage at either of the ATP sites but not both sites. An additional Vi-dependent cleavage site was detected at or near the trypsin site in the linker region of P-gp. These results suggest that both the amino- and carboxyl-terminal ATP sites can hydrolyze ATP. However, there is no evidence that ATP can be hydrolyzed simultaneously by both sites. One of the main causes of broad-based cellular resistance to a wide variety of cytotoxic agents in cancer cells is expression of a 170-kDa plasma membrane polypeptide known as the multidrug transporter or P-glycoprotein (P-gp), encoded by the *MDR1* gene in humans (1, 2). This 1280-amino acid plasma membrane-associated glycoprotein is composed of two homologous halves, each containing six transmembrane domains and one ATP site. P-gp acts as an ATP-dependent efflux pump for chemotherapeutic agents and other drugs (3). The precise mechanism of action of P-gp, however, remains unknown. ATP binding and hydrolysis are essential for the proper functioning of P-gp. It has been previously demonstrated that each ATP site in P-gp can hydrolyze ATP and that both sites must be intact to retain activity of the transporter (4, 5). These data suggested a model of P-gp action in which the ATP sites alternate and do not hydrolyze ATP simultaneously (6, 7). To determine whether both sites hydrolyze ATP simultaneously, we used orthovanadate (Vi), a phosphate analog that stabilizes the inhibited catalytic transition state of P-gp (P-gp:MgADP:Vi), mimicking the physiological state in which MgADP and phosphate are bound and subsequently released (8). Upon incubation with Vi and ATP, only one cycle of hydrolysis occurs as a result of the stabilization of the inhibitory complex (9). 
When this complex is irradiated with ultraviolet light, a photochemical reaction occurs that modifies the amino acid at the third position within the Walker A nucleotide binding domain consensus sequence (GX$_4$GK(T/S)) (10), followed by cleavage of the peptide bond (11). This technique has been successfully used to study the mechanism of action of myosin-ATPase (12, 13), adenylate kinase (14), and, most recently, the F$_1$-ATPase from rat liver mitochondria (15). The studies described here represent the first use of this technique in the study of an ATP-binding cassette (ABC) transporter. The results indicate that ATP hydrolysis occurs within one or the other ATP site but could not be detected in both simultaneously. **EXPERIMENTAL PROCEDURES** **Expression and Purification of Wild-type P-gp Containing a C-terminal 6-Histidine Tag (P-gpH$_6$)**—Recombinant baculovirus encoding wild-type P-gp containing a six-histidine tag at the C terminus (BV-MDR1(H$_6$)) was used to infect *Trichoplusia ni* (High Five™) cells (Invitrogen, San Diego, CA) as described (16). P-gpH$_6$ was purified by metal affinity chromatography as described (16). Protein concentration of the purified material was determined by the Amido Black 10B protein assay (17). Approximately 300 μg of purified protein was obtained from 20 mg of crude membrane protein prepared from $2 \times 10^6$ cells. **Preparation of Sodium Orthovanadate**—Sodium orthovanadate was freshly prepared in water and heated at 100 °C for 3 min, vortexed, and cooled to room temperature. The concentration of the stock solution was determined spectrophotometrically at A$_{268}$ (molar extinction coefficient, --- * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. § Supported in part by a postdoctoral fellowship from The Jane Coffin Childs Memorial Fund for Medical Research.
¶ Present address: Canji, Inc., 3030 Science Park Rd., San Diego, CA 92121. ‡‡ Supported in part by a grant from NIDDK, NIH (to P. L. P.). §§ To whom correspondence should be addressed: Laboratory of Cell Biology, Bldg. 37, Rm. 1A-09, NCI, National Institutes of Health, 37 Convent Dr. MSC 4255, Bethesda, Md 20892-4255. Tel.: 301-496-1530; Fax: 301-402-0450; E-mail: firstname.lastname@example.org. 1 The abbreviations used are: P-gp, P-glycoprotein; P-gpH$_6$, human P-glycoprotein containing a 6-histidine tag at the C terminus of the protein; Vi, sodium orthovanadate; UV, ultraviolet light at 365 nm; PAGE, polyacrylamide gel electrophoresis; MOPS, 4-morpholinepropanesulfonic acid; aa, amino acid. Vi prepared in this manner may consist of varying amounts of monomeric vanadate and its oligomers, di-, tetra-, and pentavanadate. **Measurement of Vi-sensitive ATPase Activity in Proteoliposomes Reconstituted with Purified P-gp-H₆**—Vi-sensitive ATPase activity in purified P-gp-H₆ preparations was performed as described (16, 18). **Photochemical Cleavage of Purified P-gp-H₆**—Purified P-gp-H₆ (4 μl; ~1.4 μg) was first diluted 25-fold to a final volume of 100 μl to form proteoliposomes in 50 mM MOPS-KOH (pH 7.2), 125 mM KCl, 5 mM MgCl₂ in the presence and absence of 600 μM sodium orthovanadate in 12 × 75-mm glass test tubes and allowed to incubate at room temperature for 3 min. Verapamil (30 μM) was subsequently added to all samples from a 3 mM stock made in Me₂SO, and the tubes were allowed to stand at room temperature for an additional 3 min. Subsequently, 2.5 mM ATP was added, and the reaction mixture was immediately transferred to ice. In reactions where photoactivation was induced by UV light, the samples were transferred to a 96-well flat-bottom cluster and were placed under a 365-nm ultraviolet (UV) lamp (Black-Ray lamp from UVP, Upland, CA) on aluminum foil-covered ice under subdued light conditions. 
The samples were irradiated for 40 min covered with a glass plate at a distance of 1.3 cm and for an additional 20 min uncovered at a distance of 1 cm. For samples containing 8-azido-ATP, the reaction was pre-irradiated for 10 min on ice prior to addition of drug and Vi. **SDS-Polyacrylamide Gel Electrophoresis (PAGE) and Immunoblot Analysis**—Samples were prepared in 1× Laemmli sample buffer (19) and allowed to incubate at room temperature for 30 min prior to electrophoresis. SDS-PAGE was performed (19) using 8, 8–16, and 4–20% Tris/glycine gels (Novex, San Diego, CA) followed by immunoblotting as described (16). Immunoreactive bands were visualized by enhanced chemiluminescence (ECL, Amersham Pharmacia Biotech). **Antibodies**—Monoclonal anti-P-gp antibody C219 (Centocor, Malvern, PA) (20) was used at a 1:2000 dilution. Human-specific anti-P-gp polyclonal antibodies 4007 and 4077 (21) were used at dilutions of 1:1000 and 1:3000, respectively. Polyclonal human-specific anti-P-gp antibodies PEPG-13, PEPG-2, and PEPG-7 were used at dilutions of 1:3000, and PEPG-12 was used at a dilution of 1:1000 (22). **Mild Trypsin Digestion of Purified P-gp-H₆**—Purified P-gp-H₆ (1.4 μg) was diluted in a total volume of 100 μl in 50 mM Tris/HCl (pH 8.0). Modified trypsin (Promega) was added at a ratio of 30:1 protein/trypsin (0.046 μg). The reaction was carried out for 5 min at 37 °C. Subsequently, a 5-fold excess of trypsin inhibitor was added, followed by Laemmli sample buffer. **RESULTS AND DISCUSSION** Human P-gp is a 1280-amino acid protein with two homologous halves functionally connected by a flexible linker region. Each half contains six hydrophobic transmembrane regions, implicated in the binding of substrates and inhibitors by photoaffinity labeling studies and the behavior of mutant transporters, and a highly conserved ATP-binding/utilization domain (1).
Through mutational analysis, it has been demonstrated that both sites are essential for function, since disruption of either nucleotide binding domain results in an inactive protein (23–25). Biochemical analyses using P-gp from Chinese hamster ovary cells have revealed that each ATP site is capable of hydrolyzing ATP (4). In this study, we sought to determine whether both ATP sites of human P-gp act independently and hydrolyze ATP simultaneously, or whether cross-talk exists between the two sites that allows only one hydrolysis event to occur at a time. This alternating catalytic site model of ATP hydrolysis was originally suggested by Senior and colleagues (6) in studies that demonstrated that 1 mol of Mg²⁺–8-azido-ADP was bound per mol of hamster P-gp. This hypothesis has been supported by experiments involving chemical modification of one ATP site that prevented vanadate trapping at the other site (5). **Model for Photooxidative Peptide Bond Cleavage of Human P-gp**—To assess directly whether ATP hydrolysis can occur at both sites simultaneously, we made further use of Vi, a phosphate analog that is photochemically active. Irradiation with UV light at 365 nm results in specific oxidation of protein side chains within Vi-trapped species and in peptide bond cleavage. The mechanism of photocleavage for myosin, which involves a seryl radical intermediate, has been determined by Grammer et al. (11). ATPases form a MgADP-Vi-enzyme inhibitory complex, which upon irradiation results in peptide bond cleavage at the third position within the Walker A motif (GX$_4$GK(T/S)). This has been shown directly for myosin (serine) (12, 13), F₁-ATP synthase (alanine) (15), and adenylate kinase (proline) (14), as well as for the flagellar ATPase dynein (26). Human P-gp has a serine residue at the third position in both nucleotide binding domains.
In the case of the heavy chain of myosin, an additional nucleotide-independent vanadate cleavage site is observed upon incubation with Vi and UV light (12). This second site, termed V2, lies a few amino acids away from the trypsin-sensitive site between the 50- and 20-kDa tryptic fragments of the myosin heavy chain, perhaps because of tetrameric vanadate binding to a series of positively charged lysine residues next to a potentially photosensitive residue (12). Interestingly, human P-gp also has a lysine/arginine-rich region in the linker region between the two halves of the protein, which contains a trypsin site that is sensitive to enzymatic cleavage (27). **Schematic Representation of Potential UV-induced Vanadate Cleavage Sites in Human P-gp**—A schematic diagram of the potential vanadate cleavage sites in human P-gp is shown in Fig. 1. The cleavage products are identified using antibodies specific for P-gp; the epitopes for these various antibodies are shown in Fig. 1F. If simultaneous hydrolysis occurs at both ATP sites, three fragments of approximately 47, 71, and 23 kDa would be generated (Fig. 1D). If hydrolysis occurs only at the N-terminal site (Fig. 1B), only 47- and 94-kDa fragments would be produced as a result of the UV-induced cleavage reaction in the presence of Vi. Conversely, if hydrolysis occurs only at the C-terminal ATP site (Fig. 1C), only fragments of predicted molecular masses of 118 and 23 kDa would be generated. If both sites were active, but not in the same molecule, 47-, 94-, 118-, and 23-kDa fragments would be predicted. If nucleotide-independent cleavage at or near the trypsin site of P-gp occurs, the molecule would also be cleaved into two peptides of 80 and 60 kDa, representing the N-terminal and C-terminal halves of the protein, respectively (Fig. 1E).
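As a rough cross-check of the fragment masses predicted in Fig. 1, the arithmetic can be sketched as follows. This is a hedged illustration: the cleavage positions and the 0.11-kDa average residue mass are our assumptions, chosen only to reproduce the approximate sizes quoted above, not coordinates from the paper.

```python
# Back-of-the-envelope check of the predicted cleavage fragment sizes
# in Fig. 1, assuming an average residue mass of ~0.11 kDa. The Walker A
# cleavage positions below are approximate assumptions, not coordinates
# taken from the paper.
MEAN_RESIDUE_KDA = 0.11
TOTAL_RESIDUES = 1280      # length of human P-gp

N_SITE = 427               # assumed N-terminal Walker A cleavage position
C_SITE = 1070              # assumed C-terminal Walker A cleavage position

def fragment_kda(start, end):
    """Approximate mass of a fragment spanning residues [start, end)."""
    return (end - start) * MEAN_RESIDUE_KDA

# N-terminal site only: ~47- and ~94-kDa fragments (Fig. 1B)
n_only = (fragment_kda(0, N_SITE), fragment_kda(N_SITE, TOTAL_RESIDUES))
# C-terminal site only: ~118- and ~23-kDa fragments (Fig. 1C)
c_only = (fragment_kda(0, C_SITE), fragment_kda(C_SITE, TOTAL_RESIDUES))
# Both sites in one molecule: ~47-, ~71-, and ~23-kDa fragments (Fig. 1D)
both = (fragment_kda(0, N_SITE), fragment_kda(N_SITE, C_SITE),
        fragment_kda(C_SITE, TOTAL_RESIDUES))
```

The same bookkeeping explains why observing the large single-cut fragments (~94 and ~118 kDa) while never observing the ~71-kDa middle fragment argues against cleavage at both sites in one molecule.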
**Purified P-gp Retains Vi-sensitive Drug-stimulated ATPase Activity**—To facilitate identification of the UV-induced vanadate cleavage products and to eliminate other interfering ATPases, we used wild-type P-gp containing a six-histidine tag at the C terminus (P-gp-H$_6$) purified from insect cells using metal affinity chromatography as described under “Experimental Procedures” (16). Using a rapid dilution method to reconstitute the protein into proteoliposomes, Vi-sensitive drug-stimulated ATPase activity of P-gp-H$_6$ was confirmed (16). The reconstituted protein demonstrated high specific activity (5.8 μmol/min/mg of protein) in the presence of 30 μM verapamil. This method reconstitutes approximately 20% of the starting material, 50% of which has ATPase activity (16); this yield of functional P-gp was used to calculate the specific activity of the protein. **UV-induced Vi Cleavage of the Human P-gp-H$_6$ Polypeptide Chain**—We performed the UV-induced vanadate cleavage reactions under the same optimal conditions as in the ATPase activity assay described above, except that MOPS buffer was substituted for Tris, because Tris buffers result in less efficient photocleavage owing to the formation of stable Tris-Vi complexes (28). P-gp ATPase activity is comparable in either Tris or MOPS buffer (data not shown). Purified P-gp-H$_6$ was reconstituted into proteoliposomes by rapid dilution either in the presence or absence of Vi. To start the formation of the MgADP-Vi-P-gp complex, 2.5 mM ATP was added, and the samples were immediately irradiated on ice for a total of 60 min and subjected to SDS-PAGE and immunoblot analysis as described under “Experimental Procedures.” **Photooxidative Peptide Bond Cleavage at the ATP Sites of Human P-gp Is Vi- and UV-dependent**—Peptide bond cleavage occurs only in the presence of Vi and UV irradiation (Fig. 2, lanes 3). UV irradiation alone generated little or no cleavage products (Fig. 2, lanes 2).
Additionally, little or no cleavage was observed in the absence of UV light either in the presence or absence of Vi (Fig. 2, lanes 1 and 4). Because both the amount of functional P-gp-H$_6$ in the reaction (200 ng) and the efficiency of the cleavage reaction were extremely low, we could not visualize the bands by Coomassie Brilliant Blue or silver stain, nor could we generate enough of each fragment for N-terminal sequencing. However, using a variety of human P-gp-specific antibodies that recognize different regions of the molecule, we were able to clearly identify the cleavage products (a), (b), (c), and (d) but not (e) (Fig. 1F). Because of the hydrophobic nature of the peptides generated, there was a slight disparity between the predicted molecular weights of the products (Fig. 1) and the apparent molecular weights as determined by SDS-PAGE. The crucial result, however, is that we detect the higher molecular weight fragments (b and c) but not fragment (e) (Fig. 1), arguing against simultaneous hydrolysis and cleavage at both ATP sites. The absence of this fragment does not prove unequivocally that it is not being formed in amounts below the level of detectability. We do not believe that it is present but migrating anomalously, because we could not detect this fragment in several gel systems with different antibodies. Human P-gp-specific polyclonal antibodies 4077 (21) (Fig. 2A) and PEPG-7 (22) (Fig. 2B), directed against peptides preceding the N-terminal ATP site, recognize fragments (a) and (c). Polyclonal antibody PEPG-13 (22), directed against a peptide between the two ATP sites, recognizes fragments (b) and (c) (Fig. 2C). Polyclonal antibody PEPG-12, directed against a peptide after the C-terminal ATP site, recognizes fragments (b), (c), and (d) (Fig. 2D). PEPG-12 was made against the entire loop between the Walker A motif and the linker dodecapeptide in the ATP-binding domain (22).
This region is 50% identical and 70% similar to the homologous region in the N-terminal half, which may explain the cross-reactivity and the recognition of fragment (c). Polyclonal antibody 4007, which spans both sides of the C-terminal ATP site and is expected to react with all three bands, gave the same pattern as PEPG-12, which is also suggestive of cross-reactivity (data not shown). Fragment (e), the expected product if simultaneous cleavage occurs, was not detected using PEPG-13, C219, PEPG-2, and 4007. The data shown in Fig. 3 provide further evidence for the generation of fragments (a), (b), (c), and (d) but not (e). These experiments were done in the presence and absence of Vi using both ATP (lanes 1 and 2) and 8-azido-ATP (lanes 3) as nucleotides. We initially used 8-azido-ATP, which is also hydrolyzable by P-gp (6), to determine whether ATP cross-linked to P-gp would block UV-induced vanadate cleavage and found that it does not. As can be seen in panel A, using the P-gp-specific monoclonal antibody C219, both nucleotides support the cleavage reaction and the antibody recognizes the (b) and (c) products; however, under the blotting conditions used, the (d) fragment is not recognized. Both PEPG-2 (panel D) and PEPG-13 (panel C) recognize the (b) and (c) cleavage products. 4077 (panel B) recognizes products (a) and (c), while PEPG-12 (panel E) and 4007 (data not shown) recognize (b), (c), and (d). **Vi-induced Cleavage of Human P-gp Near the Trypsin-sensitive Site in the Linker Region**—In our experiments, two additional bands migrating at approximately 80 and 60 kDa are apparent (Figs. 2 and 3). The 80-kDa band cross-reacts with 4077 (Figs. 2A and 3B), PEPG-13 (Figs. 2C and 3C), and PEPG-7 (Fig. 2B), and the 60-kDa band is preferentially recognized by PEPG-12 (Figs. 2D and 3E) and 4007 (data not shown). Both bands were recognized by PEPG-2 (Fig. 3D).
Because the 80-kDa peptide is recognized well by 4077, PEPG-7, and PEPG-9, directed against amino acids 348–419 (data not shown), it is unlikely that this fragment represents fragment (e) (Fig. 1), the product of UV-induced Vi peptide bond cleavage at both ATP sites in the same P-gp molecule. Because both bands were recognized by PEPG-2, the cleavage site must reside between amino acids 637 and 712. Conversely, PEPG-12 and 4007 preferentially recognize the C-terminal half, although PEPG-12 can, under certain conditions, weakly recognize some fragments containing the N-terminal half. Under the conditions used in this experiment, C219 does not detect these fragments (Fig. 3A). Importantly, the linker region of human P-gp, defined as the peptide segment between amino acids 633 and 709, is known to be sensitive to cleavage by mild trypsin digestion (27). Taken together, these results suggest that the two bands most likely represent the N- and C-terminal halves of P-gp produced by vanadate cleavage at or near this lysine/arginine-rich trypsin-sensitive region of P-gp, as has been previously observed for myosin (12). Additionally, upon overexposure of these immunoblots (data not shown), the natural degradation products generated during manipulation of the untreated samples for electrophoretic analysis are apparent and migrate in the same positions as the Vi-induced 80- and 60-kDa products, lending credence to the argument that these bands represent the two halves of the protein. To further confirm the identity of these fragments as the N- and C-terminal halves of P-gpH$_6$, the purified protein was treated mildly with trypsin, followed by SDS-PAGE, immunoblotting, and probing with human P-gp-specific antibodies (Fig. 4). PEPG-2 (Fig. 4A) recognizes both halves of the protein, whereas 4077 (Fig. 4B) recognizes the N-terminal half and 4007 preferentially recognizes the C-terminal half (Fig. 4C).
Migration positions are similar to those observed in the cleavage reactions shown in Figs. 2 and 3. **Mechanism of ATP Hydrolysis of Human P-gp**—In this study, we have demonstrated that both ATP sites of human P-glycoprotein are capable of hydrolyzing ATP because we generate peptide products accounting for cleavage at both active sites. We have no evidence, however, for any detectable amount of the 71 kDa (e) double cleavage fragment under the reaction conditions tested and with any of the antibodies used. This fragment along with the (a) and (d) fragments would have been generated if simultaneous hydrolysis was occurring at both of the ATP sites. Our data suggest that cleavage can occur only at one site at a time and that the two sites are functionally interdependent. These data do not demonstrate, however, that the catalytic sites alternate with equal efficiency but only that they are unlikely to act simultaneously. **REFERENCES** 1. Gottesman, M. M., and Pastan, I. (1993) *Annu. Rev. Biochem.* **62**, 385–427 2. Gottesman, M. M., Hrycyna, C. A., Schoenlein, P. V., Germann, U. A., and Pastan, I. (1995) *Annu. Rev. Genet.* **29**, 607–649 3. Raviv, Y., Pollard, H. B., Bruggemann, E. P., Pastan, I., and Gottesman, M. M. (1990) *J. Biol. Chem.* **265**, 3975–3980 4. Urbatsch, I. L., Sankaran, B., Bhagat, S., and Senior, A. E. (1995) *J. Biol. Chem.* **270**, 26956–26961 5. Senior, A. E., and Bhagat, S. (1998) *Biochemistry* **37**, 831–836 6. Senior, A. E., al-Shawi, M. K., and Urbatsch, I. L. (1995) *FEBS Lett.* **377**, 285–289 7. Senior, A. E., and Gadsby, D. C. (1997) *Semin. Cancer Biol.* **8**, 143–150 8. Goodno, C. C. (1979) *Proc. Natl. Acad. Sci. U. S. A.* **76**, 2620–2624 9. Urbatsch, I. L., Sankaran, B., Weber, J., and Senior, A. E. (1995) *J. Biol. Chem.* **270**, 19383–19390 10. Walker, J. E., Saraste, M., Runswick, M. J., and Gay, N. J. (1982) *EMBO J.* **1**, 945–951 11. Grammer, J. C., Loo, J. A., Edmonds, C. G., Cremo, C. R., and Yount, R. G. 
(1996) *Biochemistry* **35**, 15582–15592 12. Cremo, C. R., Long, G. T., and Grammer, J. C. (1990) *Biochemistry* **29**, 7982–7990 13. Cremo, C. R., Grammer, J. C., and Yount, R. G. (1989) *J. Biol. Chem.* **264**, 6608–6611 14. Cremo, C. R., Loo, J. A., Edmonds, C. G., and Hatlelid, K. M. (1992) *Biochemistry* **31**, 491–497 15. Ko, Y. H., Bianchet, M., Amzel, L. M., and Pedersen, P. L. (1997) *J. Biol. Chem.* **272**, 18875–18881 16. Ramachandra, M., Ambudkar, S. V., Chen, D., Hrycyna, C. A., Dey, S., Gottesman, M. M., and Pastan, I. (1998) *Biochemistry* **37**, 5010–5019 17. Schaffner, W., and Weissmann, C. (1973) *Anal. Biochem.* **56**, 502–514 18. Sarkadi, B., Price, E. M., Boucher, R. C., Germann, U. A., and Scarborough, G. A. (1992) *J. Biol. Chem.* **267**, 4854–4858 19. Laemmli, U. K. (1970) *Nature* **227**, 680–685 20. Geering, E., Stedile, G., Gariepy, J., and Ling, V. (1990) *Proc. Natl. Acad. Sci. U. S. A.* **87**, 1526–1530 21. Tanaka, S., Currier, S. J., Bruggemann, E. P., Ueda, K., Germann, U. A., Pastan, I., and Gottesman, M. M. (1990) *Biochem. Biophys. Res. Commun.* **166**, 180–186 22. Bruggemann, E. P., Chaudhary, V., Gottesman, M. M., and Pastan, I. (1991) *Biotechniques* **10**, 202–209 23. Muller, M., Bakos, E., Welker, E., Varadi, A., Germann, U. A., Gottesman, M. M., Morris, B. S., Roninson, I. B., and Sarkadi, B. (1996) *J. Biol. Chem.* **271**, 18717–1880 24. Azzaria, M., Schurr, E., and Gros, P. (1989) *Mol. Cell. Biol.* **9**, 5289–5297 25. Loo, T. W., and Clarke, D. M. (1995) *J. Biol. Chem.* **270**, 21449–21452 26. Gibbons, I. R., and Mocz, G. (1991) *Methods Enzymol.* **196**, 428–442 27. Bruggemann, E. P., Germann, U. A., Gottesman, M. M., and Pastan, I. (1989) *J. Biol. Chem.* **264**, 15483–15488 28. Cremo, C. R., Grammer, J. C., and Yount, R. G. (1991) *Methods Enzymol.* **196**, 442–449
#1 **Express initial impressions of visual phenomena and artwork/art phenomena with suitable vocabulary**

**Initial impression**
- **identify and explain with reasons an initial reaction based on an important feature of the visual creation**
- **use art terminology to describe initial impression**

| Criteria | 1 Very Low Achievement (VLA) | 2 Low Achievement (LA) | 3 Basic Achievement (BA) | 4 High Achievement (HA) | 5 Very High Achievement (VHA) |
|----------|------------------------------|------------------------|--------------------------|-------------------------|-------------------------------|
| | Verbalizes reaction with only one word (e.g., Yuk!, Wow!), after hasty or cursory scan | Offers a brief reactive comment (I like/dislike it!), after limited examination of evidence | Selects one important feature of the visual creation on which to comment, after building an examination | Identifies several important features in initial reaction, after giving careful consideration to an aesthetic and emotive examination | Identifies many first impressions (aesthetic, emotive, and intellectual) of the whole creation after a thorough examination of all parts, and explains each with sound arguments |
| | Notes an insignificant feature of visual creation | Addresses one feature, but it is marginally significant to the whole | Explains to a fair degree reason(s) for initial impression | Explains each impression with an adequate reason | Selects seminal features as sources for reaction |
| | Opinion, if offered, is not persuasive | Lacks depth of insight to support choice with a good reason(s) | Uses some art terminology correctly in response | Applies appropriate art terms in initial reaction | Demonstrates clear and accurate usage of many appropriate art terms in response |
| | Limited awareness of art terms to express impression | Communicates at least one art term in response, but it may be used incorrectly | In short, synthesizes visual information quite well at first glance | In short, synthesizes visual information extremely well at first glance | Synthesizes crucial visual information extremely well at first glance |

#2 **Describe visual phenomena, artwork/art phenomena and the connections among visual elements, images, and focuses**

**Description**
- describe subject matter or content and its inner relationships
- describe art elements and design principles and relationships between them and with subject matter
- describe technical aspects (art form, media, and making/creating techniques)

**1 Very Low Achievement (VLA)**
- Describes with little detail or explanation one image of the subject matter, but makes no connections between its parts
- Possesses very limited knowledge of elements and principles; therefore, a reference to their relationships in work is sketchy and weak
- Deals briefly with a simple connection between one formal element or principle and the subject matter
- Demonstrates a partial understanding of how the work is created, but does not identify its materials or techniques

**2 Low Achievement (LA)**
- Provides a brief description of basic information about a few images of the subject matter, but does not mention any relationships between them
- Vaguely describes a few of the elements and principles; recognizes one simple connection between them
- Attempts to tie a couple of elements and principles to subject matter with some success
- Can recognize and describe a traditional art form or mode, if not too complex, and a few easily recognizable media

**3 Basic Achievement (BA)**
- Describes most subject matter adequately and points out a few simple relationships between objects or images
- Mentions some of the elements and principles with fair degrees of accuracy, and describes at least one connection between them
- Demonstrates one important connection between formal properties and subject matter
- Explains some technical aspects satisfactorily

**4 High Achievement (HA)**
- Refers to all major features of subject matter in a good description and recognizes several interesting relationships between them
- Describes with detail relevant elements and principles and some connections between them
- Expounds on a few meaningful connections between formal qualities and subject matter
- Describes most technical aspects with proficiency

**5 Very High Achievement (VHA)**
- Describes subject matter thoroughly and with precise details, and notes many complex connections between its features
- Recognizes and discusses with expertise all seminal elements and principles, along with their inner relationships
- Describes both subtle and complex relationships between subject matter and formal properties with excellent and original insights
- All technical aspects are explained masterfully

#3 **Perform formal analysis and express personal feelings and ideas on the aesthetics, style and symbolic meanings of the objects of appreciation and criticism, based on their visual elements and organization**

**Formal and aesthetic analysis**
- express a personal opinion about overall composition
- express knowledge of the concept of “aesthetic value” and explain its existence in an art creation
- express aesthetic value related to issues of beauty/ugly as demonstrated by visual elements
- recognize style and opine about its aesthetic impact on visual creation
- comprehend symbolism in images and express its value to the whole piece

**1 Very Low Achievement (VLA)**
- Presents an unclear overview of composition and its structural devices or techniques; therefore, expresses a weak opinion about it
- Knows very little about what is meant by “aesthetic value,” resulting in a flawed explanation of it in a visual creation with respect to visual organization (Note: “Aesthetic value” is defined as the value a work has to stimulate pleasure or interest or action in the viewer through its aesthetic properties. A positive aesthetic value is beauty.)
□ Expresses one unsupported reason for why visual creation is perceived as beautiful, ugly, or in-between □ May recognize style of visual creation, but is unable to give any ideas about its aesthetic effect □ Names one simple symbol in work, but misunderstands its contribution to aesthetic value of the whole piece | | □ Expresses a simplistic opinion about composition, based on limited knowledge of formal elements as compositional devices □ Possesses partial knowledge about aesthetic value and attempts to explain it in a visual creation □ Offers a partially-supported opinion about aesthetic values of beauty/ugly, but cannot tie these values to visual organization □ States an opinion with some degree of accuracy about style of work and how it affects aesthetic value □ Explains an easy symbol in a piece, and attempts, with limited success, to articulate its connection to aesthetic value | | □ Opines ideas regarding composition that indicate a general understanding of a few compositional devices □ Explains one idea about aesthetic value and its role in a particular visual creation satisfactorily □ Gives one good comment about aesthetic value, as related to beauty/ugly and visual organization □ Knows style of work and offers a simple but credible opinion about its aesthetic effect □ Recognizes symbolism in a work, but has difficulty in tying it to aesthetic value | | □ Recognizes a successful composition and can explain its formal sources along with a well-grounded opinion regarding it □ Explains aesthetic value and gives several verbal examples of it in an art creation □ Expresses several good reasons for aesthetic issues of beauty/ugly with respect to visual organization □ Expresses ideas about style with ease and makes a good connection between style and aesthetic value of a piece □ Points out numerous examples of symbolism in work and offers a sound opinion about its relationship to aesthetic value | | □ Expresses many outstanding insights and opinions regarding 
formal elements as compositional devices □ Comprehends aesthetic value and describes it correctly and easily in visual objects □ Expresses excellent and grounded opinions in regards to aesthetic value (beauty/ugly) in work, as determined by visual elements and organization □ Gives ample and thoughtful opinions demonstrating deep insight on how style of visual creation impacts aesthetic value □ Provides clear evidence of comprehension of symbolism in the work and expresses keen insights regarding its role in determining aesthetic value | | #4 | Discern the style and implications of art creations of different cultures, regions, times, and artists | |----|------------------------------------------------------------------------------------------------------------------| | | **Stylistic differences** • differentiate styles of visual creations from a variety of regions in the world (e.g., Orient, Euro-Western, the Americas, Middle East, Africa, Australia, Polynesia, and the like) • distinguish styles of visual creations related to distinct cultural groups within selected countries or regions • discern stylistic differences of visual creations related to historical time periods • discern stylistic differences of a variety of selected landmark and local artists/designers • discern consequences associated with stylistic differences of the above-mentioned cases | | | □ Barely discriminates between styles of visual creations from two regions of the world □ Distinguishes one very simple stylistic feature of visual creations from two very distinct cultural groups □ Barely indicates knowledge of stylistic differences of visual creations across time periods □ Demonstrates little basic knowledge of differences in style between selected artists/designers □ Exhibits minimal understanding of consequences or implications associated with stylistic differences | | | □ Recognizes there are regional differences in styles of visual creation and pin points one or two examples without 
complete accuracy in practice □ Explains one example of differing styles between two cultural groups □ Discerns one or two simple stylistic differences, particularly between examples where vast differences are apparent, of visual creations throughout time periods □ Makes a few correct remarks about differences in style of selected artists/designers □ Discusses with some degree of accuracy only one example of an implication related to stylistic differences in a selected case | | | □ Makes satisfactory references to differences in styles in a few examples of regional art creations □ Articulates several stylistic differences between examples of art creations from two cultural groups □ Gives some consideration to multiple stylistic differences of art creations over time □ Recognizes that artists/designers have distinct stylistic differences and can discuss several of these effectively □ Recognizes there are implications related to stylistic differences in each case, and can articulate more than a few ideas correctly in a couple of cases | | | □ Clearly communicates differences in styles of each presented example of regional visual creations □ Shows adeptness in comparing numerous stylistic differences of visual creations from three or more cultural groups □ Offers good examples of stylistic differences of visual creations across different time periods □ Pin points many seminal stylistic differences between visual creations of a variety of artists/designers □ Considers many examples of implications related to stylistic differences in a variety of cases and offers major evidence | | | □ Provides detailed explanations of stylistic differences between presented regional visual creations □ Indicates in responses a depth of understanding of stylistic differences between whatever cultural groups are presented □ Notes correctly significant stylistic differences between visual creations throughout history, regardless of the time periods presented □ Offers valid examples of 
stylistic differences between visual creations of any artists/designers presented □ Communicates an accurate analysis of implications associated with stylistic differences in whatever case is presented | | #5 | Interpret artwork/art phenomena in various contexts with appropriate use of knowledge of social, cultural, historical, and other aspects | |----|----------------------------------------------------------------------------------------------------------------------------------| | Interpretation | • interpret the meaning of the visual creation • use research and knowledge to apply various contexts (e.g., social, cultural, historical, and the like) to further elucidate interpretation | | | ☐ Presents an unsupported or “underdetermined” interpretation ☐ Interpretation indicates a lack of prior research or knowledge of various contexts (Note: Examples of contexts may be personal, social, cultural, historical, philosophical, technological, environmental, economic, and aesthetic) | | | ☐ Offers an interpretation that is partially supported or weak ☐ Addresses only one context impacting an interpretation, indicating rudimentary background knowledge of various contexts | | | ☐ Gives a good or supported interpretation, but it may be a little too brief ☐ Alludes to a couple of contexts in interpretation, indicating acceptable research and knowledge of several contexts | | | ☐ Expounds on a good interpretation with more detail and supporting evidence ☐ Evidence embeds proficient research and knowledge of at least three different contexts that influence interpretation ☐ Interpretation may actually be very close to the artist’s own interpretation or intent (i.e., the “correct” interpretation or actual intent) ☐ Enhances interpretation by significant knowledge and research of multiple contexts, some of which were learned independently, that impact an interpretation | | #6 | Produce informed judgements on the appropriateness of the selection of form in accordance with the 
message/function and the significance or values of a particular piece of artwork in the context of appreciation and creation | |---|---| | **Judgement** • judge the visual creation based on “goodness of fit” of all components • judge the value of the visual creation, applying knowledge and skills of art/design appreciation • judge the value of the work based on personal aesthetic experience | | □ Presents a very weak and simplistic judgment with little supporting evidence based on prior knowledge of form or message/function or other components □ Judgement is too hasty, ignoring knowledge and skills related to art/design appreciation □ Does not mention a personal aesthetic experience as a reason for judgement | | □ Takes a final position that is only vaguely informed by a goodness of fit variable, such as form, message/function, or other components □ Barely mentions knowledge and skills of art/design appreciation to help substantiate judgment □ Offers a brief statement that indicates limited knowledge about the value or role of a personal aesthetic experience in making a judgement | | □ Expounds briefly on a judgment that incorporates one good example of a goodness of fit variable like form, or message/function, or other components □ Employs basic knowledge and skills of art/design appreciation to help support judgement □ Addresses role of aesthetic experience to a fair degree in judgement statement | | □ Provides a well-grounded judgement that considers sound reasons based on at least two variables related to goodness of fit (e.g., form, message/function, or other components) □ Builds judgement on acceptable application of knowledge and skills of art/design appreciation □ Describes sufficiently how a personal aesthetic experience relates to final judgement | | □ Takes a strong and well-defined position, relating appropriateness of judgement to multiple variables of goodness of fit (e.g., form, message/function, or other components) □ Bases judgement on a thorough 
comprehension of knowledge and skills of art/design appreciation □ Reflects with keen insights as to how a personal aesthetic experience influences judgement | | #7 and #8 | Perform art appreciation and criticism verbally, in dialogue and writing | Perform art/design appreciation and criticism | |-----------|---------------------------------------------------------------------------------|-----------------------------------------------| | | □ Demonstrates little understanding of skills and techniques necessary for discussing or writing successful criticism or appreciation | □ Knows a few components of a basic model of criticism (based on description, analysis, interpretation, and judgement), but rarely applies them when talking about visual creations with teacher or others | | | □ Demonstrates a very rudimentary understanding of appropriate skills and techniques when discussing and writing art/design criticism or appreciation | □ Recognizes some components related to a basic model of criticism (based on description, analysis, interpretation, and judgement) and uses these on a regular basis to discuss visual creations with teacher or others | | | □ Demonstrates general knowledge of skills and techniques necessary for satisfactory discourse and writing about art/design criticism and appreciation | □ Addresses most components related to a basic model of criticism (based on description, analysis, interpretation, and judgement) in daily practice when speaking with teacher or peers about visual creations | | | □ Offers pertinent and well-structured responses to all required components of criticism and appreciation when speaking or writing | □ Uses correctly and on a regular basis all of the components of a basic model of criticism (based on description, analysis, interpretation, and judgement) when engaged in discourse with teacher or peers about visual creations | | | □ Speaks and writes about art/design criticism and appreciation very effectively and fluently, often 
presenting detailed, thought-provoking, and original comments | □ Verbalizes accurate responses to components of a basic model of criticism (based on description, analysis, interpretation, and judgement) with ease and self-confidence, when speaking on a daily basis about visual creations with teacher or others | • orally discuss or write formal and informal examples of art/design appreciation and criticism • Use art/design appreciation and criticism components in classroom situations
On Sanction-Goal Justifications: How and Why Deterrence Justifications Undermine Rule Compliance

Marlon Mooijman, University of Southern California; Wilco W. van Dijk and Eric van Dijk, Leiden University; Naomi Ellemers, Utrecht University

Citation: Mooijman, M., van Dijk, W. W., van Dijk, E., & Ellemers, N. (2016, December 1). On Sanction-Goal Justifications: How and Why Deterrence Justifications Undermine Rule Compliance. *Journal of Personality and Social Psychology*. Advance online publication. http://dx.doi.org/10.1037/pspi0000084

Authorities frequently justify their sanctions as attempts to deter people from rule breaking. Although providing a sanction justification seems appealing and harmless, we propose that a deterrence justification decreases the extent to which sanctions are effective in promoting rule compliance. We develop a theoretical model that specifies how and why this occurs. Consistent with our model, five experiments demonstrated that sanction effectiveness decreased when sanctions were justified as attempts to deter people from rule breaking, but not when they were given a just-deserts justification. This effect was mediated by people feeling distrusted by the authority. We further demonstrated that (a) the degree to which deterrence fostered distrust was attenuated when the sanction was targeted at others (instead of the participant) and (b) the degree to which distrust undermined rule compliance was attenuated when the authority was perceived as legitimate. We discuss the practical implications for authorities tasked with promoting rule compliance, and the theoretical implications for the literature on sanctions, distrust, and rule compliance.
Keywords: deterrence, distrust, rule compliance, sanction justifications Authorities often provide a justification for their sanctioning behavior. Judges sentence people to prison with the explicit justification that this is meant to deter future rule-breaking behavior (e.g., see Martinez, 2015) and politicians explicitly justify naming-and-shaming policies as attempts to deter crime (e.g., see Langlois, 2012). In the present research, we investigate the consequences of providing such deterrence justifications. We propose that deterrence justifications decrease the extent to which sanctions are effective in promoting rule compliance and we propose that this can be attributed to people feeling distrusted by authorities that justify their sanctions as deterrents. Previous research has mainly focused on the extent to which sanction goals such as deterrence guide sanctioning decisions (Carlsmith, 2006, 2008; Carlsmith, Darley, & Robinson, 2002; Darley, Carlsmith, & Robinson, 2000; Gerber & Jackson, 2013), but has left the effects of sanction-goal justifications on rule compliance unaddressed. Examining how and why deterrence justifications shape people’s willingness to comply with rules can provide valuable insights into how authorities should—and should not—use sanctions. Indeed, societal and organizational authorities (e.g., policymakers, leaders, and managers) tend to justify their use of sanctions by stressing the necessity to deter rule-breaking behavior (Kirchler, Kogler, & Muehlbacher, 2014; Mooijman, van Dijk, Ellemers, & van Dijk, 2015). Understanding how such justifications affect rule compliance may therefore be helpful in explaining the (in)effectiveness of real-life sanctions and suggesting ways to improve the manner in which authorities justify their sanctioning behavior. 
**Sanction Justifications** Authorities frequently stress the aim to deter rule breaking as justification for their use of sanctions (Bentham, 1789/1988; Hobbes, 1651/1998; Kirchler et al., 2014; Mooijman et al., 2015; Nagin, 1998). When having this sanction goal, authorities should be primarily concerned with using sanctions to deter future rule breaking from potential rule breakers rather than punishing past rule breakers proportionate to their crime (Carlsmith et al., 2002). A deterrence goal is thus prospective rather than retroactive and can, as such, be distinguished from just deserts—a retroactive sanction goal (Darley, 2009; Kant, 1780/1961). When having just deserts as the sanction goal, authorities should be primarily concerned with punishing rule breakers proportionately for crimes committed in the past (i.e., achieve balance between crime and punishment), regardless of the sanction’s ability to deter future rule breaking (Carlsmith et al., 2002; Keller, Oswald, Stucki, & Gollwitzer, 2010). Although reliance on deterrence as a sanction goal can affect the type and severity of the sanction used (Mooijman et al., 2015; Tetlock et al., 2007) and thereby influence rule compliance (Ball, Treviño, & Sims, 1994), we argue that authorities’ use of a deterrence goal as justification creates an additional source of influence. That is, independently of the type and severity of a sanction, people’s willingness to comply with rules may be negatively affected by whether an authority provides a deterrence justification or not. We propose that this negative impact is specific to deterrence justifications and does not similarly hold for just-deserts justifications.
Understanding the specificity of the effect is important because scholars typically contrast deterrence with just-deserts sanction goals and thus study these two sanction goals simultaneously (Bentham, 1789/1988; Carlsmith et al., 2002; Hobbes, 1651/1998; Kant, 1780/1961; Keller et al., 2010; Nagin, 1998). **When Sanction Justifications May Signal Distrust** A central aspect of a deterrence goal is that sanctions are aimed at those who are deemed likely to break rules (hence the need to deter them; Nagin, 1998). In other words, authorities that aim to deter rule breaking are focused on the possibility that people will break rules in the future (i.e., they distrust them; Mooijman et al., 2015). In contrast, the goal to give past rule breakers their just deserts is indifferent with regard to people’s likelihood of breaking rules in the future (i.e., trustworthiness is irrelevant; Carlsmith et al., 2002; Kant, 1780/1961; Mooijman et al., 2015). People often infer intentions and considerations from authorities’ decisions (McKenzie, Liersch, & Finkelstein, 2006). Managers’ attempts to incentivize weight loss with financial sanctions have been shown, for instance, to unintentionally signal negative attitudes toward the overweight (Tannenbaum, Valasek, Knowles, & Ditto, 2013). Authorities who justify sanctions as attempts to deter people from rule breaking may therefore signal their distrust to people. That is, authorities using deterrence justifications may signal that sanctions are needed because people are likely to break rules in the absence of sanctions: sanctions are then used as a means to deter people’s future rule-breaking behavior. The communicated “breadth” of deterrence-justified sanctions is thus large (i.e., it targets all potential rule breakers), thereby signaling distrust to those who have not broken any guidelines or rules (yet). This distrust-signaling effect of sanction justifications may be specific to deterrence.
Just deserts signals that authorities’ sanctions are aimed only at those who have broken rules in the past instead of those who may potentially break rules in the future. This reduces the likelihood that people who have not broken any rules yet feel distrusted. Evidence corroborating this reasoning comes from research showing that authorities’ distrust predicts the degree to which they rely on deterrence, but not just deserts, as a sanction goal (Mooijman et al., 2015) and from research showing that people are highly motivated to infer authorities’ considerations from their sanction decisions (Fiske, 1993; Keltner et al., 2003). People may thus infer from authorities’ deterrence justifications that they are expected to have the malicious intention to undermine the interests of the authorities (consistent with definitions of distrust, Kramer, 1999; Yamagishi & Yamagishi, 1994; Zand, 1997). In sum, we hypothesize that justifying a sanction as an attempt to deter people from breaking rules increases the degree to which people feel distrusted by the relevant authorities (Hypothesis 1). **Why Feeling Distrusted May Undermine Rule Compliance** How is people’s rule compliance affected by their feelings of being distrusted by the authorities? Rule compliance is not solely determined by the severity of a sanction or the probability that one receives a sanction (Balliet, Mulder, & Van Lange, 2011). Instead, rule compliance is also determined by how people feel treated by authorities (e.g., interpersonal justice; Tyler & Lind, 1992). For instance, people’s satisfaction with authorities’ decisions decreases when authorities communicate disrespect through pursuing their own interest instead of the interest of the people (De Cremer & Van Knippenberg, 2002) and using nontransparent and biased procedures (Tyler, 2006).
In contrast, authorities that are perceived to pursue the collective interest (Mulder & Nelissen, 2010), show respect for others (Tyler & Blader, 2003), and use transparent and unbiased procedures foster decision acceptance (Tyler, 2006). Importantly, these effects of perceived interpersonal treatment often go beyond the outcome that people (expect to) receive from authorities (Cropanzano et al., 2007). How people feel treated by authorities justifying sanctions may thus be of vital importance for people’s willingness to behave according to authorities’ rules. More specifically, people are motivated to see themselves as trustworthy (Brown, 2012; Sedikides, Meek, Alicke, & Taylor, 2014; Steele, 1988), and want and expect others to trust them (Ellemers, 2012; Tyler & Lind, 1992). Feeling distrusted by an authority is therefore likely to foster the feeling that the authority does not view oneself favorably (e.g., without respect). This perception alone may be sufficient to feel poorly treated, thereby undermining one’s willingness to abide by this authority’s rules. Indeed, perceived interpersonal treatment does not have to revolve around tangible outcomes that one receives, but can also entail subjective assessments of how others view oneself (e.g., respect; Ellemers, Doosje, & Spears, 2004). A perceived lack of trust in one’s willingness to comply with relevant guidelines and rules may seem unwarranted when no prior breach of rules was displayed, and may thus seem disrespectful and unjust. This may in turn decrease people’s willingness to comply with rules—research has indeed demonstrated that even slight signs of distrust (i.e., when senders in a Trust Game do not send their full endowment) can increase interpersonal retaliation in Trust Games (Pillutla, Malhotra, & Murnighan, 2003).
Although sanctions increase the costs of rule breaking regardless of an authority’s sanction justification, we thus predict that the potential effectiveness of a sanction is decreased by the distrust that people experience when an authority provides a deterrence justification. This prediction can also be construed as a behavioral confirmation effect (i.e., self-fulfilling prophecy)—people’s willingness to comply with authorities’ rules is expected to decrease in response to people’s perception that these authorities expect them to break rules. Research has demonstrated that authorities can create self-fulfilling prophecies; subordinates are more likely to be productive when supervisors expect them to be intrinsically motivated and productive, in part because supervisors set behavioral standards through their expectations (Pelletier & Vallerand, 1996). Although our theorizing on how authorities create behavioral confirmation effects is different from this previous research (i.e., we focus on deterrence justifications and the role of feeling distrusted), our predictions can be construed as authorities creating a self-fulfilling prophecy. According to Snyder (1992), a self-fulfilling prophecy has four stages: (a) The perceiver adopts certain beliefs about the target, (b) the perceiver behaves as if these beliefs were true and treats the target accordingly, (c) the target perceives and responds to the perceiver’s behaviors, and (d) the perceiver interprets the target’s behaviors as a confirmation of his or her initial beliefs. Previous research on power and sanctions fits into the first two stages of the model—(a) the power that authorities hold has been shown to increase distrust (Inesi, Gruenfeld, & Galinsky, 2012; Mooijman et al., 2015; Schilke, Reimann, & Cook, 2015), and (b) authorities justify their sanctions as means to deter rule breaking because of this distrust (Mooijman et al., 2015).
The theorizing presented in the current manuscript fits primarily in the third stage (c)—people infer distrust from authorities’ deterrence justifications and are consequently less willing to comply with authorities’ rules. Although the current research does not directly test the fourth and final stage (the perceiver interprets the target’s behaviors as a confirmation of his or her initial beliefs), it seems likely that people’s unwillingness to comply with rules confirms authorities’ initial distrust. In sum, we hypothesize that sanction effectiveness decreases when authorities justify their sanctions as a means to deter people from breaking rules (Hypothesis 2a). We further hypothesize that distrust mediates this negative relationship between deterrence justifications and rule compliance (Hypothesis 2b). **Overview of Current Research** We tested our hypotheses in five experiments in which we (a) manipulated whether or not an authority provided a deterrence justification for its sanctioning behavior, (b) measured the distrust that participants felt, and (c) tested how this affected rule compliance. In most experiments, we thus compared a condition in which an authority justified a sanction as a means to deter to a condition in which an authority provided no justification for a sanction. In two of the experiments, we included a condition in which authorities provided a just-deserts justification or provided both a deterrence and a just-deserts justification. This allowed us to demonstrate that the theorized effects of sanction justifications are specific to deterrence justifications. Across the experiments, we tested how deterrence justifications affected the extent to which participants lied to their team leader to further their own interests (Experiments 1 and 2), their willingness to commit plagiarism (Experiment 3) or fraud (Experiment 4), and their willingness to take resources from their leader (Experiment 5).
We tested our hypotheses in both college samples (Experiments 1 and 4) and Mechanical Turk samples (Experiments 2, 3, and 5). To provide support for the proposed mediating role of distrust and exclude alternative explanations, we assessed the perceived anger of the authority, attitudes toward the authority, and participants’ distrust toward others. Previous research has demonstrated that rule compliance can be affected by the extent to which people perceive others to be angry (Wubben, De Cremer, & van Dijk, 2011), the attitudes that they hold toward authorities (Tyler & Blader, 2003), and the extent to which they trust others to comply with rules (Mulder, van Dijk, De Cremer, & Wilke, 2006). We assessed these control variables in Experiments 2, 3, and 4. In Experiments 4 and 5, we examined the moderating role of authority legitimacy and the perceived target of the sanction (i.e., oneself or others). Based on the notion that deterrence justifications increase distrust because of their “breadth” (i.e., they cast a wide net of suspicion that includes the participant), we expected the relationship between deterrence and distrust to be attenuated when the sanction is perceived to target others instead of oneself. Moreover, based on the notion that distrust undermines rule compliance because of the way people feel treated by the authority, we expected legitimacy to attenuate the degree to which distrust undermines rule compliance; legitimacy has been shown to buffer against relational threats (Tyler, 2006). Consistent with the recommendations of Simmons, Nelson, and Simonsohn (2011), we made sure that every condition had around 30 participants; most of the experiments we conducted had considerably more (i.e., more than 50) participants per condition (cf. Simmons, Nelson, & Simonsohn, 2013).
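The hypothesized causal chain (deterrence justification increases felt distrust, which in turn reduces rule compliance; Hypotheses 1, 2a, and 2b) amounts to a simple mediation model. The sketch below estimates the indirect effect with ordinary least squares on simulated data; the variable names, effect sizes, and noise levels are illustrative assumptions, not the authors' data or analysis code.

```python
import numpy as np

def ols(y, *predictors):
    """Return OLS coefficients (excluding the intercept) of y on the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

rng = np.random.default_rng(seed=1)
n = 5000
# 0 = no justification, 1 = deterrence justification (simulated manipulation)
justification = rng.integers(0, 2, n).astype(float)
# Hypothesis 1: a deterrence justification raises felt distrust (a-path, assumed +0.8)
distrust = 0.8 * justification + rng.normal(0.0, 1.0, n)
# Hypothesis 2b: felt distrust lowers compliance (b-path, assumed -0.5);
# no direct effect of the justification on compliance is simulated here
compliance = -0.5 * distrust + rng.normal(0.0, 1.0, n)

(a,) = ols(distrust, justification)               # a-path estimate
_, b = ols(compliance, justification, distrust)   # b-path, controlling for justification
indirect = a * b                                  # mediated (indirect) effect
```

In a full analysis the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates, but the a × b product captures the mediation claim: under these simulated assumptions the justification lowers compliance only through the distrust it induces.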
Across experiments, participants reported very low rates of suspicion regarding the goal of the experimental manipulations (<5%); results did not change significantly for any experiment when we excluded participants who were suspicious. Consequently, we report the analyses that include the participants who reported being suspicious. Unless indicated otherwise, all measured variables were assessed on seven-point scales, on which participants could indicate their level of agreement (1 = disagree completely, 7 = agree completely). All participants provided informed consent and were debriefed, compensated, and thanked for their participation. **Experiment 1** In Experiment 1, we tested our hypotheses in an experimental tax-paying game. More specifically, we devised an experimental game in which an authority introduces a fine and either provides no justification or justifies this fine as a means to deter. Participants could misreport their revenue to the authority to evade a stipulated tax rule. **Method** **Participants and design.** Seventy U.S. college students (62 females; $M_{age} = 18.87$ years, $SD_{age} = 1.36$) were randomly assigned to one of two justification conditions (deterrence vs. no justification). Participants received course credit for their participation. **Procedure.** **Rule compliance.** Consistent with previous work on experimental tax games (Bilokach, 2006), participants were told that they were randomly assigned to be a “worker” in a work team consisting of eight members, while one other group member was randomly assigned to be the team leader. Workers could earn extra money by finding correct words among scrambled letters. The rule was that 40% of the money would have to be paid to the group leader. The team leader had to evenly redistribute this money among all team members such that all group members could share in a part of the total revenues (i.e., similar to taxes that have to be paid to a governmental authority).
Although participants were told that the money they would earn was contingent on the number of words they found (and could thus vary across participants, depending on their productivity), all participants actually received $1.50, regardless of the number of words they correctly found. Crucially, workers then had to self-report the amount of money they had earned to the team leader. Participants were told that the team leader was able to verify whether this self-reported amount was correct for only two workers (i.e., partial monitoring from an authority). As such, participants had the possibility to misreport the amount of money they earned and keep more of their own revenue (analogous to social dilemma games; see Molemaker, De Kwaadsteniet, & van Dijk, 2014). Thus, participants who reported $1.50 to the team leader fully complied with the rules, whereas lower self-reported amounts reflected less rule compliance. **Sanction justifications.** Participants were informed that the team leader had the ability to fine those who were caught misreporting their revenue; the team leader could do this by decreasing the money participants earned with the task by $1. The type and severity of the sanction were thus held constant across conditions. In the deterrence condition, the team leader justified this sanction as a means to deter workers from misreporting their revenues. That is, the leader stated, “the primary aim of the fine is to deter workers from misreporting revenue.” In the no-justification condition, no justification was given (i.e., no additional information was provided by the team leader). **Distrust.** Perceived distrust was measured with the following three items: “I feel distrusted by the team leader,” “I feel like the team leader does not trust me,” and “I feel like the team leader assumes I am going to lie” (Cronbach’s $\alpha = .78$).
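The effect sizes reported in the Results can be checked against the cell summary statistics alone. A minimal sketch in Python, assuming equal cell sizes of 35 (the text states only that 70 participants were randomly assigned to two conditions) and the pooled-standard-deviation form of Cohen's d:

```python
import math

def cohens_d(m1, s1, m2, s2):
    """Cohen's d using the pooled standard deviation (equal-n form)."""
    pooled_sd = math.sqrt((s1 ** 2 + s2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

def t_from_d(d, n1, n2):
    """Independent-samples t statistic implied by d and the cell sizes."""
    return d * math.sqrt(n1 * n2 / (n1 + n2))

# Distrust in Experiment 1: deterrence (M = 4.03, SD = 1.12) vs.
# no justification (M = 3.18, SD = 1.28); 35 per cell is an assumption.
d = cohens_d(4.03, 1.12, 3.18, 1.28)  # close to the reported d = 0.70
t = t_from_d(d, 35, 35)               # close to the reported t = 2.95
```

Running the same check on the rule-compliance means reproduces the reported d = 0.70 for that contrast as well, which suggests the equal-n assumption is a reasonable approximation.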
**Results** **Distrust.** Participants felt more distrusted in the deterrence condition ($M = 4.03$, $SD = 1.12$) than in the no-justification condition ($M = 3.18$, $SD = 1.28$), $t(70) = 2.95$, $p = .004$, Cohen’s $d = 0.70$. **Rule compliance.** To test to what extent participants misreported revenue on average (regardless of condition), we compared the mean amount of money participants reported earning with the $1.50 that represented full rule compliance. Note that participants reported their earnings in dollar cents. On average, participants underreported the amount of money they had earned ($M = 91.81$, $SD = 31.37$), $t(70) = 15.52$, $p < .001$, $d = 3.27$. Participants’ misreporting also depended on the sanction-justification manipulation. Participants were more likely to underreport the amount of money they earned in the deterrence condition ($M = 81.49$, $SD = 34.83$) than in the no-justification condition ($M = 102.14$, $SD = 23.77$), $t(70) = 2.89$, $p = .005$, $d = 0.70$. **Mediation analyses.** Feeling distrusted was negatively correlated with rule compliance, $r = -.38$, $p = .001$, and mediated the effect of the deterrence justification on rule compliance (95% CI = [−15.26, −1.26], $\kappa^2 = .10$). **Discussion** Consistent with our hypotheses, participants were more likely to underreport their earnings to their team leader when this leader justified a sanction as a means to deter lying. This effect was explained by deterrence justifications increasing the degree to which participants felt distrusted by the team leader. **Experiment 2** In Experiment 2, we attempted to replicate the effect we observed in Experiment 1 while adding a just-deserts justification condition.
This allowed us to demonstrate that the findings of Experiment 1 are specific to deterrence justifications (and not simply attributable to an authority providing additional sanction-goal information), while staying consistent with the previous literature on sanction goals that pits deterrence against just deserts (Carlsmith et al., 2002; Hobbes, 1651/1988; Kant, 1780/1961; Keller et al., 2010; Mooijman et al., 2015; Nagin, 1998). **Method** **Participants and design.** Three hundred twenty-six participants (198 males; $M_{age} = 34.73$ years, $SD_{age} = 10.91$) were recruited from the Mechanical Turk website and participated in exchange for $0.50. Participants were randomly assigned to one of three justification conditions (deterrence vs. just deserts vs. no justification). **Procedure.** **Rule compliance.** The experimental game used was identical to the game used in Experiment 1. **Sanction justifications.** In the deterrence condition, the team leader justified the sanction as a means to deter workers from misreporting their revenue. The team leader stated, “the primary aim of the fine is to prevent workers from misreporting revenue.” In the just-deserts condition, the team leader justified the sanction as giving team members who misreport their revenue their just deserts. The team leader stated, “the primary aim of the fine is to give team members who misreport revenue their just deserts.” In the no-justification condition, no justification was given (i.e., no additional information was provided by the leader).
**Distrust.** Perceived distrust was measured with the following six items: “I feel distrusted by the team leader,” “I feel like the team leader does not trust me,” “I think the leader assumes I am going to lie,” “I think the leader assumes I am going to break the rules,” “The leader expects me to have bad intentions,” and “I think the leader believes I am going to lie” ($\alpha = .83$).2

---

1 We conducted an additional study on Mechanical Turk ($N = 142$) to test what participants believed the authority’s main motive was in the deterrence, just-deserts, deterrence/just-deserts, and control conditions. We used the experimental tax game from Experiments 1 and 2 and included a deterrence, just-deserts, deterrence/just-deserts (see Experiment 4), and no-justification condition. We then asked participants whether they thought the authority was motivated to deter, to provide just deserts, to deter and provide just deserts, or none of the above; participants could also indicate that they did not know the authority’s motivation. Results provided strong evidence for the notion that the sanction-justification manipulations worked as intended: 98% of participants in the deterrence condition indicated that the authority was motivated to deter, and 89% of participants in the just-deserts condition indicated that the authority was motivated to provide just deserts. Furthermore, in the control condition, 40% of participants indicated that the authority was motivated to provide just deserts, 8% indicated that the authority was motivated to deter and provide just deserts, 4% indicated that the authority was motivated to deter, and 48% indicated that they did not know the authority’s motivation. Lastly, 91% of participants in the deterrence/just-deserts condition indicated that the authority was motivated to deter and provide just deserts. These results demonstrate that the manipulations were effective, and that deterrence tends not to be perceived as the authority’s main motivation in the control condition.

---

**Results** **Distrust.** Overall, the sanction justification affected feelings of distrust, $F(2, 323) = 18.11$, $p < .001$, $\eta^2_p = .10$. Participants felt more distrusted in the deterrence condition ($M = 4.83$, $SD = 1.36$) than in the just-deserts condition ($M = 4.11$, $SD = 1.29$), $t(217) = 4.04$, $p < .001$, $d = 0.55$, and the no-justification condition ($M = 3.78$, $SD = 1.29$), $t(215) = 5.82$, $p < .001$, $d = 0.79$. Participants felt marginally more distrusted in the just-deserts condition than in the no-justification condition, $t(214) = 1.84$, $p = .067$, $d = 0.25$. **Rule compliance.** Participants on average underreported their revenues to the team leader ($M = 92.41$, $SD = 57.63$), $t(325) = 18.04$, $p < .001$, $d = 2.00$. Participants’ underreporting also depended on the sanction justification, $F(2, 323) = 4.29$, $p = .014$, $\eta^2_p = .03$. Participants were more likely to underreport the amount of money they had earned in the deterrence condition ($M = 79.95$, $SD = 57.69$) than in the just-deserts condition ($M = 101.93$, $SD = 57.38$), $t(217) = 2.83$, $p = .005$, $d = 0.39$, and the no-justification condition ($M = 95.51$, $SD = 56.08$), $t(215) = 2.04$, $p = .045$, $d = 0.27$. Reporting behavior did not differ between the just-deserts condition and the no-justification condition, $t(214) = 0.83$, $p = .41$, $d = 0.05$. **Mediation analyses.** Feeling distrusted was negatively correlated with rule compliance ($r = -.15$, $p = .007$) and mediated the overall effect of justification condition on rule compliance (95% CI = $[-4.42, -0.24]$, $\kappa^2 = .03$). This was the case for both the deterrence versus just-deserts contrast (95% CI = $[-7.24, -1.05]$, $\kappa^2 = .03$) and the deterrence versus no-justification contrast (95% CI = $[-5.83, -0.44]$, $\kappa^2 = .04$).
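The percentile-bootstrap test of the indirect effect used in these mediation analyses (cf. Hayes, Preacher, & Myers, 2011) can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the study's data or the PROCESS macro; all variable names, sample sizes, and effect sizes below are made up:

```python
import numpy as np

def indirect_effect(x, m, y):
    # a-path: regress the mediator on the predictor (slope = a)
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    # b-path: regress the outcome on mediator and predictor (slope on m = b)
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0][1]
    return a * b  # indirect (mediated) effect

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile 95% CI of the indirect effect over n_boot resamples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(boots, [2.5, 97.5])

# Illustrative data: justification condition -> felt distrust -> compliance
rng = np.random.default_rng(1)
n = 200
condition = rng.integers(0, 2, n).astype(float)   # 0 = none, 1 = deterrence
distrust = 0.8 * condition + rng.normal(0, 1, n)
compliance = -0.7 * distrust + rng.normal(0, 1, n)

lo, hi = bootstrap_ci(condition, distrust, compliance)
# With these (made-up) effects the CI should exclude zero, i.e. mediation.
```

The interval is interpreted exactly as in the text: the indirect effect is treated as significant when the 95% CI does not contain zero.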
**Discussion** Replicating Experiment 1, participants were more likely to underreport their revenues when the team leader justified a sanction as a means to deter lying than when the team leader provided no justification. Participants were also more likely to underreport their revenues when the team leader justified a sanction as a means to deter lying than when the leader justified it as a means to give rule breakers their just deserts. Consistent with our hypotheses and with the results from Experiment 1, these effects were explained by participants feeling distrusted by the leader. These findings provide support for our hypotheses in a different sample, while also demonstrating that the observed findings are specific to deterrence (and not just-deserts) justifications. **Experiment 3** In Experiment 3, we aimed to generalize our findings from experimental tax games to sanctions that are justified as a means to deter people from committing tax fraud. Moreover, Experiment 2 treated the deterrence and just-deserts sanction justifications as mutually exclusive; that is, a sanction was justified as aimed at either deterrence or just deserts. In reality, however, these goals can be, and often are, combined. One could argue that a just-deserts justification might attenuate the negative relationship between deterrence and rule compliance (because just deserts also signals authorities’ focus on past rule breakers). Experiment 3 addresses this issue by investigating the effect of providing a deterrence and a just-deserts justification simultaneously. Because we theorize that deterrence justifications signal distrust by casting a wide net of suspicion that includes the participant, and that this distrust negatively affects rule compliance, we predicted that the presence of a deterrence justification negatively affects rule compliance regardless of whether it is provided simultaneously with a just-deserts justification.
To rule out competing explanations, we also measured the extent to which the deterrence justification affects people’s distrust toward others. Besides making people feel distrusted, authorities that provide deterrence justifications may also make people distrust other group members (because the authority is trying to deter those others as well). People’s distrust toward others has been shown to undermine cooperation by raising fears of exploitation by these others (Mulder et al., 2006). We predict that deterrence impacts rule compliance by increasing the extent to which people feel distrusted by an authority, even when controlling for people’s distrust toward others. Deterrence, in other words, impacts rule compliance primarily by increasing the extent to which people feel distrusted by the authority. **Method** **Participants and design.** One hundred eighty-six participants (113 males; $M_{age} = 33.70$ years, $SD_{age} = 10.42$) were recruited from the Mechanical Turk website and randomly assigned to one of three justification conditions (deterrence vs. deterrence/just deserts vs. no justification). Participants received $1.00 for their participation. **Procedure.** **Sanction justifications.** Participants read an excerpt on tax fraud. Specifically, they read that American citizens are sometimes tempted to commit tax fraud by, for instance, underreporting their earnings to the U.S. Internal Revenue Service (IRS). Participants were further informed that the IRS could fine citizens for committing tax fraud (no-justification condition). It was further added that the primary aim of IRS policies is to “deter citizens from committing such tax fraud with these sanctions” (deterrence condition), or to both “deter citizens from committing tax fraud and give those who commit tax fraud their just deserts” (deterrence/just-deserts condition).
In the deterrence/just-deserts condition, the order of the deterrence and just-deserts explanations was counterbalanced, but this had no significant impact on the results. **Distrust.** Feelings of distrust were measured on a three-item scale. Items included, “I feel distrusted by the IRS,” “I think the IRS assumes I want to break tax rules,” and “I feel like the IRS assumes I have bad intentions” ($\alpha = .96$). **Rule compliance.** We adapted rule compliance measurements from previous research (see Tyler & Blader, 2005). Rule compliance was measured on a five-item scale. Items included, “I feel inclined to behave according to all rules set by the IRS,” “I feel obliged to stick to the rules regarding tax fraud,” “I will act according to the rules even when the IRS will never know if I committed tax fraud,” and “I feel inclined to commit tax fraud when I can get away with it” (reverse-coded; $\alpha = .93$). **Distrust toward others.** Consistent with Mulder et al. (2006), distrust toward other taxpayers was measured on a three-item scale. Items included, “I feel like I cannot trust other citizens to pay their taxes,” “I think taxpayers are tempted to break the rules,” and “I feel like taxpayers cannot be trusted” ($\alpha = .94$).

---

2 We also included a measurement of perceived authority anger (e.g., “I feel like the leader will be angry when subordinates lie”; Wubben et al., 2011) to explore a possible effect of just deserts on rule compliance. However, we did not observe an effect of just deserts on rule compliance. In addition, controlling for perceived authority anger did not significantly change the effects of deterrence on distrust and rule compliance.

---

**Results** The means and standard deviations are reported in Table 1. **Distrust.** Overall, the sanction-justification manipulation affected feelings of distrust ($\eta_p^2 = .09$); participants felt more distrusted in the deterrence condition than in the no-justification condition, $t(123) = 4.09$, $p < .001$.
Participants did not feel significantly more distrusted in the deterrence condition than in the deterrence/just-deserts condition, $t(121) = 1.02$, $p = .31$, $d = 0.18$. Moreover, distrust was higher in the deterrence/just-deserts condition than in the no-justification condition, $t(122) = 3.01$, $p = .003$, $d = 0.55$. **Rule compliance.** The sanction-justification manipulation also influenced the willingness to comply with IRS rules, $F(2, 183) = 4.93$, $p = .008$, $\eta_p^2 = .05$. Rule compliance did not differ between the deterrence condition and the deterrence/just-deserts condition, $t(121) = 0.05$, $p = .96$, $d = 0.01$, but was lower in both the deterrence and the deterrence/just-deserts conditions compared with the no-justification condition, $t(123) = 2.69$, $p = .008$, $d = 0.49$, and $t(122) = 2.75$, $p = .007$, $d = 0.49$, respectively. **Distrust toward others.** The sanction-justification manipulation did not affect distrust toward others, $F(2, 183) = 0.38$, $p = .68$, $\eta_p^2 = .009$. **Mediation analyses.** Using a bootstrap analysis procedure with 5,000 resamples (Hayes, Preacher, & Myers, 2011), we tested whether distrust toward others or perceived distrust toward the self mediated the effect of the sanction-justification manipulation on rule compliance. Results from the bootstrap analyses demonstrated that distrust toward others correlated negatively with rule compliance ($r = -.21$, $p = .004$) but did not mediate the overall effect (or the contrast effects between the significant conditions) of sanction justifications on rule compliance (all 95% CIs fell between $-0.20$ and $0.09$, with zero in the interval).
Instead, results from the bootstrap analyses demonstrated that feeling distrusted by the IRS correlated negatively with rule compliance ($r = -.34$, $p < .001$) and mediated the overall effect of the sanction-justification manipulation on rule compliance (95% CI = $[-0.17, -0.02]$, $\kappa^2 = .07$), even after controlling for participants’ distrust toward others (95% CI = $[-0.15, -0.02]$). This was similar for the deterrence versus no-justification contrast (95% CI = $[-0.79, -0.21]$, $\kappa^2 = .15$) and the deterrence/just-deserts versus no-justification contrast (95% CI = $[-0.22, -0.04]$, $\kappa^2 = .11$). **Discussion** Experiment 3 replicates Experiments 1 and 2 in a different context, while demonstrating that a deterrence justification also undermines rule compliance when presented in combination with a just-deserts justification. Consistent with our theorizing, these effects were attributable to a deterrence justification increasing feelings of being distrusted, not to increasing distrust toward others. These results strongly suggest that the rule-undermining effect of sanction justifications is specific to deterrence. **Experiment 4** Experiment 4 extends the previous three experiments in two ways. First, we manipulated whether or not a university justified its reliance on a sanctioning system as a means to deter students from committing plagiarism. This allowed us to generalize our findings from the fine used in the experimental tax games of Experiments 1 and 2 and the generic IRS sanctions in Experiment 3 to a frequently used and well-known sanction at universities (i.e., exclusion from a course). Second, we investigated the moderating role of the perceived legitimacy of an authority. Legitimacy is the belief that authorities have the right to govern and that people should comply with their rules (Tyler, 2006).
Therefore, we predict that a deterrence justification increases distrust independent of legitimacy beliefs (i.e., because the authority still communicates distrust to all potential rule breakers), but that the extent to which this distrust undermines rule compliance is attenuated by perceptions of (high) legitimacy (i.e., because the belief that one should comply with rules should override the negative impact of feeling distrusted). Thus, we predict that distrust mediates the effect of deterrence on rule compliance to a greater extent when legitimacy is low compared with high. These predictions are consistent with our theorizing on distrust as a perception of interpersonal (mis)treatment; legitimacy has been shown to act as a buffer against relational threats (Tyler, 2006).

Table 1. Means (and standard deviations) per condition, Experiment 3.

| Variable | Deterrence | Deterrence/Just deserts | No justification | Total |
|--------------------|-------------|-------------------------|------------------|-------------|
| Feeling distrusted | 4.57 (2.04) | 4.20 (2.02) | 3.14 (1.83) | 3.97 (2.05) |
| Distrust others | 3.28 (1.93) | 3.38 (2.00) | 3.09 (1.76) | 3.26 (1.89) |
| Rule compliance | 4.86 (1.65) | 4.88 (1.60) | 4.09 (1.59) | 4.61 (1.64) |

**Method** **Participants and design.** One hundred sixteen U.S. college students (79 females; $M_{age} = 23.42$ years, $SD_{age} = 6.64$) participated in exchange for course credit and were randomly assigned to one of two justification conditions (deterrence vs. no justification). **Procedure.** **Legitimacy.** Legitimacy was measured with the following six items, “The university has the right to sanction students,” “The university always has the right to enforce rules,” “The university always has the right to make students comply with the rules,” “Decisions made by the university are legitimate,” “Decisions made by the university should be accepted by students,” and “The university has the right to exclude students from courses” ($\alpha = .91$).
**Sanction justification.** All participants were informed that the study assessed students’ attitudes toward university policy. More specifically, it was explained how university students might sometimes be tempted to directly copy information from professional articles for their own work. It was explained that the policy of their university was to immediately exclude students who committed such plagiarism from their respective courses. In the no-justification condition, no further information was given. In the deterrence condition, participants read, “the primary aim of this punitive policy is to deter students from committing plagiarism.” The severity of the sanction was thus held constant across the two justification conditions. **Distrust.** Perceived distrust was measured with the following six items, “I feel distrusted by the university,” “I feel like the university does not trust me,” “I think the university assumes I am going to commit plagiarism,” “I think the university assumes I am going to break the rules,” “The university expects me to have bad intentions,” and “I think the university believes I am going to break the rules” ($\alpha = .86$). **Rule compliance.** Rule compliance was assessed on an eight-item scale. This scale measured students’ willingness to commit plagiarism. Items included, “I feel inclined to behave according to university rules regarding plagiarism,” “I feel obliged to stick to the rules regarding plagiarism,” “I will act according to the rules even when the university will never know if I committed plagiarism,” and “I feel inclined to break plagiarism rules when I can get away with it” (reverse-coded; $\alpha = .75$). **Results** **Distrust.** Multiple regression analysis was used to test the interactive effects of deterrence and legitimacy on distrust. For the first step, deterrence (coded as $-1$ for no justification and $+1$ for deterrence) and legitimacy (standardized) were included as predictors. 
For the second step, the interaction between deterrence and legitimacy was added. Results demonstrated main effects of deterrence ($\beta = .55$, $t(116) = 7.39$, $p < .001$) and legitimacy ($\beta = .23$, $t(116) = 3.06$, $p = .003$), but no interaction effect between deterrence and legitimacy ($\beta = .21$, $t(116) = 1.29$, $p = .20$). **Rule compliance.** Multiple regression analysis was used to test the interactive effects of deterrence and legitimacy on rule compliance. For the first step, deterrence and legitimacy (standardized) were included. For the second step, the interaction between deterrence and legitimacy was added. Results demonstrated a marginal main effect of deterrence ($\beta = -.15$, $t(116) = 1.19$, $p = .095$), a main effect of legitimacy ($\beta = .36$, $t(116) = 4.22$, $p < .001$), and an interaction effect between deterrence and legitimacy ($\beta = -.39$, $t(116) = -2.08$, $p = .040$). Results from a similar analysis including distrust, legitimacy, and their interaction demonstrated a main effect of legitimacy ($\beta = .33$, $t(116) = 4.18$, $p < .001$), no main effect of distrust ($\beta = -.08$, $t(116) = 1.06$, $p = .29$), and a significant interaction effect between distrust and legitimacy ($\beta = -.20$, $t(116) = 3.02$, $p = .003$). A similar multiple regression analysis including deterrence, distrust, legitimacy, and both interaction terms (deterrence × legitimacy, distrust × legitimacy) yielded a significant interaction term between distrust and legitimacy ($\beta = -.19$, $t(116) = -2.11$, $p = .037$), but no significant interaction term for deterrence and legitimacy ($\beta = -.11$, $t(116) = -1.16$, $p = .11$). This demonstrates that legitimacy moderates the relationship between distrust and rule compliance rather than the relationship between deterrence and rule compliance (Aiken & West, 1991).
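The simple-slopes probing that follows this kind of moderated regression (Aiken & West, 1991) can be sketched on synthetic data. The coefficient values below are illustrative only, chosen to mimic the reported pattern (a negative distrust slope at low legitimacy, none at high); nothing here is the study's data:

```python
import numpy as np

def fit_interaction(y, x, z):
    """OLS of y on x, z, and their product (predictors assumed standardized)."""
    design = np.column_stack([np.ones_like(x), x, z, x * z])
    coefs = np.linalg.lstsq(design, y, rcond=None)[0]
    return coefs  # [intercept, b_x, b_z, b_xz]

def simple_slope(coefs, z_value):
    """Slope of x on y at a chosen moderator value (e.g., +/-1 SD)."""
    _, b_x, _, b_xz = coefs
    return b_x + b_xz * z_value

# Illustrative data: distrust predicts compliance only at low legitimacy
rng = np.random.default_rng(2)
n = 500
distrust = rng.normal(0, 1, n)
legitimacy = rng.normal(0, 1, n)
compliance = (0.35 * legitimacy - 0.08 * distrust
              + 0.18 * distrust * legitimacy + rng.normal(0, 0.5, n))

coefs = fit_interaction(compliance, distrust, legitimacy)
slope_low = simple_slope(coefs, -1.0)   # distrust slope at -1 SD legitimacy
slope_high = simple_slope(coefs, +1.0)  # distrust slope at +1 SD legitimacy
```

Probing at ±1 SD of the moderator is the same convention used for the follow-up analyses and Figure 1.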
Follow-up analyses demonstrated that distrust significantly (and negatively) predicted rule compliance only when legitimacy was relatively low ($\beta = -.26$, $t(116) = -2.71$, $p = .007$), but not when legitimacy was relatively high ($\beta = .10$, $t(116) = 1.03$, $p = .30$; see Figure 1). **Mediation analysis.** Replicating the previous three experiments, feeling distrusted was negatively correlated with rule compliance ($r = -.25$, $p < .001$) and mediated the main effect of the deterrence justification on rule compliance (95% CI $= [-0.22, -0.05]$, $\kappa^2 = .11$). Moreover, we used Model 14 with 5,000 bootstraps in PROCESS (Hayes, 2013) to test the degree to which distrust mediated the relationship between deterrence and rule compliance when we allowed legitimacy to moderate the relationship between distrust and rule compliance (as demonstrated earlier). Results yielded a significant moderated mediation effect (95% CI $= [-0.14, -0.01]$); distrust mediated the negative relationship between deterrence and rule compliance only when legitimacy was low ($-1$ $SD$; indirect effect $= 0.12$, $SE = 0.06$, 95% CI $= [0.04, 0.24]$), but not when legitimacy was high ($+1$ $SD$; indirect effect $= -0.02$, $SE = 0.06$, 95% CI $= [-0.12, 0.09]$; see Figure 2). **Discussion** These results extend the findings from the previous three experiments in at least two ways. First, we replicate the main effect of the deterrence justification in a different context that used a more commonly encountered sanction (i.e., a university sanctioning system). Second, we demonstrate that legitimacy attenuated the degree to which the distrust fostered by the deterrence justification impacted rule compliance.
Indeed, the deterrence justification increased distrust regardless of perceived authority legitimacy (consistent with the notion that the justification communicates distrust to all potential rule breakers, including participants), but this distrust only negatively affected rule compliance when perceived legitimacy was relatively low, not when it was relatively high. These findings are consistent with our theorizing on distrust as a perception of interpersonal (mis)treatment; legitimacy has been shown to act as a buffer against relational threats (see Tyler, 2006). Legitimacy, in other words, can “override” the impact of feeling distrusted on rule compliance. **Experiment 5** Experiment 4 identified an important variable that can counteract the impact of distrust on rule compliance. However, legitimacy is not always easy to gain (people’s perceptions of authorities are unlikely to change overnight), and Experiment 4 did not provide unequivocal support for legitimacy attenuating the relationship between deterrence and rule compliance. Understanding how authorities can directly prevent the negative effects of a deterrence justification in a more practical manner therefore seems desirable. We therefore investigated the effects of framing a deterrence justification as aimed at a group of people that either did or did not include the participant. If a deterrence justification fosters distrust through signaling that one is considered a potential rule breaker, then a deterrence justification aimed at oneself should increase distrust, whereas a deterrence justification aimed at others should attenuate the extent to which one feels distrusted. We predicted that a sanction signals more distrust (and is thus less effective in promoting a willingness to comply with rules) when it is justified as deterring oneself, rather than others, from rule breaking, and compared with a sanction that is provided without a justification.
In addition, we investigated people’s attitudes toward the authority. Liking of an authority can be an important predictor of people’s willingness to comply with authorities’ rules (Tyler & Blader, 2000), and authorities that provide deterrence justifications may be liked less because they make people feel distrusted. We therefore (a) controlled for attitudes toward the authority to demonstrate that the impact of deterrence on rule compliance cannot be fully attributed to authority liking, and (b) investigated whether feeling distrusted by an authority undermined rule compliance through increasing negative attitudes toward the authority. **Method** **Participants and design.** One hundred eighty-five U.S. participants (110 males; $M_{\text{age}} = 33.19$ years, $SD_{\text{age}} = 11.09$) were recruited from the Mechanical Turk website and were randomly assigned to one of three justification conditions (self deterrence vs. other deterrence vs. no justification). Participants received $1 for their participation. **Procedure.** **Rule compliance.** The experimental game used was similar to the game used in Experiments 1 and 2, except for two differences. First, instead of self-reporting to the team leader how much money they received, participants were informed that according to the rules set by the team leader, their work corresponded to a $1 reward. Participants could then each take up to $7 from the team leader as a reward for their own performance. The team leader was able to verify whether the money taken was the correct amount ($1) for only two workers (i.e., partial monitoring from an authority). As such, participants had the possibility to take more money than they had earned. Second, the amount of money participants could gain was thus higher than in Experiments 1 and 2; participants could take up to seven times as much as they received for simply participating in the study. **Sanction justifications.** It was explained to participants (who all reported to be U.S.
citizens) that Mechanical Turk workers hail from different countries; participants read that Mechanical Turk workers from different countries have been shown to behave differently in studies that revolve around money. The justification conditions were identical to those of Experiment 1, with the exception that the fine was justified by the leader as a means to deter U.S. participants (deterrence self condition) versus non-U.S. participants (deterrence other condition) from taking too much money from the team leader. **Distrust.** Perceived distrust was measured with the following four items, “I feel distrusted by the team leader,” “I feel like the team leader does not trust me,” “I think the leader assumes I am going to lie,” and “I think the leader assumes I am going to break the rules” ($\alpha = .83$). **Attitudes toward supervisor.** We measured participants’ attitudes toward the supervisor with four items. Items included, “I like this team leader,” “I have a positive feeling about the team leader,” “I tend to view this team leader positively,” and “I dislike the team leader” (reverse-coded; $\alpha = .93$). **Results** **Distrust.** The sanction-justification manipulation affected the extent to which participants felt distrusted by the team leader, $F(2, 182) = 19.31$, $p < .001$, $\eta^2_p = .18$. Perceived distrust was higher in the deterrence self condition ($M = 4.57$, $SD = 1.83$) than in the deterrence other condition ($M = 2.90$, $SD = 1.69$), $t(120) = 5.36$, $p < .001$, $d = 0.99$, and the no-justification condition ($M = 2.79$, $SD = 1.87$), $t(120) = 5.32$, $p < .001$, $d = 0.97$. The deterrence other condition did not differ from the no-justification condition, $t(120) = 0.29$, $p = .77$, $d = 0.04$. **Rule compliance.** On average, participants took more money than they earned ($M = 26.62$, $SD = 18.96$), $t(184) = 9.93$, $p < .001$, $d = 1.46$. Participants’ rule compliance also depended on the sanction-justification manipulation.
The sanction-justification manipulation influenced rule compliance, $F(2, 182) = 4.38, p = .014$, $\eta^2_p = .05$. Participants took more money in the deterrence self condition ($M = 31.70, SD = 20.99$) than in the deterrence other condition ($M = 24.00, SD = 17.92$), $t(120) = 2.24, p = .037, d = 0.38$, and the no-justification condition ($M = 22.50, SD = 16.56$), $t(120) = 2.67, p = .008, d = 0.47$. The deterrence other condition and no-justification condition did not differ, $t(120) = 0.46, p = .65, d = 0.07$. **Attitudes toward authority.** The sanction-justification manipulation influenced participants’ attitudes toward the supervisor, $F(2, 182) = 10.71, p < .001, \eta^2_p = .12$. Participants in the deterrence self condition liked the supervisor less ($M = 3.79, SD = 1.46$) than participants in the deterrence other condition ($M = 4.46, SD = 1.53$), $t(120) = 2.52, p = .013, d = 0.46$, and the no-justification condition ($M = 4.86, SD = 1.61$), $t(120) = 3.85, p < .001, d = 0.70$. The deterrence other condition and no-justification condition did not differ, $t(120) = 1.40, p = .16, d = 0.25$. **Mediation analyses.** Using a bootstrap analysis procedure with 5,000 resamples (Hayes et al., 2011), we tested whether distrust mediated the effect of the sanction-justification manipulation on rule compliance. Distrust was positively correlated with taking more money than earned ($r = .62, p < .001$), and results from the bootstrap analysis showed that distrust mediated the overall effect of the sanction-justification manipulation on rule compliance (95% CI = [−7.92, −3.40], $\kappa^2 = .26$), even after controlling for attitudes toward the supervisor (95% CI = [−5.41, −1.69]) or after adding attitudes toward the supervisor as an additional mediator (95% CI = [−6.93, −2.59]). Attitudes toward the supervisor also independently mediated the effect of the sanction-justification manipulation on rule compliance (95% CI = [−2.15, −0.14]).
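A percentile-bootstrap test of an indirect effect of this kind can be sketched in plain Python. This is not the authors' code (they used the Hayes et al. procedure); it is a minimal illustration with simulated stand-in variables (condition `x`, distrust `m`, money taken `y`), estimating the a*b path via simple OLS slopes and Frisch–Waugh residualization:

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y regressed on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after regressing out x."""
    b, mx, my = slope(x, y), statistics.fmean(x), statistics.fmean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def indirect_effect(x, m, y):
    """a*b: a = effect of X on mediator M; b = effect of M on Y controlling
    for X (the partial slope, via Frisch-Waugh residualization)."""
    a = slope(x, m)
    b = slope(residuals(x, m), residuals(x, y))
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect, resampling cases
    with replacement."""
    n, stats = len(x), []
    for _ in range(n_boot):
        s = [random.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in s],
                                     [m[i] for i in s],
                                     [y[i] for i in s]))
    stats.sort()
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

# Simulated stand-in data: condition (0 = other/none, 1 = deterrence self)
# raises distrust, which in turn raises the amount of money taken.
random.seed(7)
x = [i % 2 for i in range(60)]
m = [2.8 + 1.7 * xi + random.gauss(0, 1.0) for xi in x]
y = [20.0 + 5.0 * mi + random.gauss(0, 8.0) for mi in m]
lo, hi = bootstrap_ci(x, m, y, n_boot=2000)
print(f"95% bootstrap CI for a*b: [{lo:.2f}, {hi:.2f}]")
```

If the interval excludes zero, the indirect path is deemed significant, which is the decision rule behind the confidence intervals reported above.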
The indirect effect of distrust was also significant for the deterrence self versus no-justification contrast (95% CI = [−16.36, −5.82]) and the deterrence self versus deterrence other contrast (95% CI = [−9.01, −3.79]). Interestingly, for both contrasts the deterrence justification undermined rule compliance through the mediating effect of distrust predicting negative attitudes toward the authority (i.e., deterrence→distrust→attitudes toward authority→rule compliance; 95% CIs: [−6.93, −2.59]). **Discussion** The findings from Experiment 5 demonstrate that the extent to which a deterrence justification fostered distrust and decreased sanction effectiveness was attenuated when it was explicitly aimed at others. This corroborates our assumption that a deterrence justification fosters distrust through signaling that one is considered a potential rule breaker. Moreover, no differences in distrust were observed when the sanction was justified as deterring others compared with the no-justification condition. This strongly suggests that the distrust that authorities communicate with a deterrence justification is in part attributable to people inferring that the sanction is aimed at them. Lastly, the findings from Experiment 5 demonstrate that feeling distrusted by an authority undermines rule compliance in part through increasing negative attitudes toward this authority. **General Discussion** We presented five experiments that examined how sanction justifications affect sanction effectiveness. Across different samples, contexts, and sanctions, we consistently observed that people feel more distrusted when sanctions are justified as attempts to deter them from breaking rules. This distrust is shown to decrease people’s willingness to comply with the rules of their team leader (Experiments 1, 2, and 5), IRS (Experiment 3), and university (Experiment 4). 
These results strongly suggest that justifying a sanction as an attempt to deter people from breaking rules makes people feel more distrusted (Hypothesis 1), which in turn decreases the effectiveness of the sanction (Hypotheses 2a and 2b). **Theoretical and Practical Implications** The present set of studies makes several contributions to the literature on rule compliance. Previous research has mainly focused on the extent to which people use deterrence and just-deserts goals in guiding their sanction decisions (Carlsmith, 2006, 2008; Carlsmith et al., 2002; Darley et al., 2000; Gerber & Jackson, 2013; Mooijman et al., 2015). The present research is, to our knowledge, the first to demonstrate the effects of using one such goal (deterrence) as a *justification*. The reported studies demonstrate that using a deterrence goal as a justification leads an authority to signal the sanction’s underlying considerations to the public. Sanction goals are therefore not only “hidden” motivations that drive punitive-sanction decisions (Carlsmith et al., 2002); as justifications, they can also influence the effectiveness of sanctions. Interestingly, the present research demonstrates that deterrence goals influence rule compliance not only by affecting the severity and type of sanction. Rather, deterrence goals can have a distinct and independent influence, regardless of sanction type and severity. The underlying considerations that an authority signals to others through deterrence justifications are therefore highly relevant for the subsequent effectiveness of the sanction. The way authorities affect others’ tendency to comply with rules is complex. Sanctions are not just means to increase the costs and decrease the benefits of rule breaking (Nagin, 1998). Rather, sanctions are also driven by philosophies and goals (Bentham, 1789/1988; Hobbes, 1651/1988; Kant, 1780/1961) that can directly affect the public.
Previous research has demonstrated that powerful authorities are inclined to rely on deterrence as a sanction goal because they distrust others (Mooijman et al., 2015). The current research implies that the public is able to infer this distrust from sanctions justified as attempts to deter rule breaking. This stresses the notion that authorities should be cautious about how they justify their sanctions. Indeed, the perceived distrust elicited by deterrence justifications played a unique role in undermining rule compliance. The experiments indicated that feeling distrusted has at least a moderate degree of influence on rule compliance (Cohen, 1988; Preacher & Kelley, 2011), and this rule-undermining effect was independent of other variables such as perceived anger (Wubben et al., 2011), attitudes toward authorities (Tyler & Blader, 2003), and distrust toward others (Mulder et al., 2006). The current research thus demonstrates that perceived authority (dis)trust matters for an authority’s ability to promote compliance with cooperative rules. As such, the current studies have direct practical relevance for authorities: judges, policymakers, and managers should be aware of the consequences that a deterrence justification can have for sanction effectiveness. To stimulate rule compliance, it may be more effective to (a) emphasize that sanctions are meant to give people their just deserts, (b) give no sanction justification, or (c) use deterrence justifications that signal that the sanction targets others. Such sanctions foster rule compliance without making people feel (too) distrusted. Although this advice seems straightforward, it may be harder to follow than one might think. Recent research has demonstrated that power increases people’s reliance on deterrence as a goal for punishment (Mooijman et al., 2015): power increases distrust toward others, which in turn increases reliance on deterrence.
Powerful authorities may thus, ironically, be the most inclined to emphasize the deterrence aspects of a sanction, even though this may be one of the least effective courses of action. The present research suggests, however, that deterrence justifications are more effective when they are coupled with an affirmation of trust in the target (e.g., when sanctions are explicitly not targeted at the self). This provides at least one way in which authorities can attenuate the effects of deterrence justifications. Moreover, the distrust fostered by deterrence justifications is shown to be less likely to impair rule compliance when authority legitimacy is high. Ironically, however, authorities with low legitimacy tend to suffer most from rule-compliance problems (Tyler, 1990). This suggests that when authority legitimacy is low, deterrence justifications might exacerbate authorities’ rule-compliance problems, because low legitimacy does not attenuate the effects of feeling distrusted. **Limitations and Future Directions** Whereas the present research supports our hypotheses across different samples, contexts, and measures, some issues should be noted. For instance, we did not focus on individuals who have already broken rules. First, it is possible that these individuals feel distrusted even though this distrust would be partially justified. Note, however, that the majority of people tend to be rule abiding, or at least perceive themselves as such (Brown, 2012; Sedikides, Meek, Alicke, & Taylor, 2014). Justifying a sanction as a deterrent may therefore still make rule breakers feel distrusted and thereby undermine sanction effectiveness. Second, it is also possible that although rule breakers feel distrusted by an authority that aims to deter them from breaking rules, this distrust may not translate into rule-breaking behavior (because breaking the rules would provide the authority with a legitimization of its distrust).
Third, it is also possible that participants’ moral self-image moderates the degree to which distrust undermines rule compliance. Feeling distrusted may at times stimulate more rule compliance among individuals who are likely to perceive themselves as moral. This may especially be the case when individuals can explicitly demonstrate their morality to others (and thereby restore their moral self-image in their own eyes and the eyes of others). Although we did not measure this in the present studies, we believe it to be an interesting avenue for future research. More research may also be needed to flesh out exactly why feeling distrusted undermines rule compliance. Although our findings are consistent with our theorizing and also demonstrate the role of attitudes toward the authority, future research could benefit from directly testing the notion that feeling distrusted is perceived as a form of interpersonal injustice. This could potentially explain why feeling distrusted by an authority, but not distrusting others (which is less likely to be perceived as an interpersonal injustice), explained the negative impact of deterrence justifications on rule compliance. Indeed, feeling distrusted by an authority may also decrease the extent to which people place trust in that authority, thereby contributing to a self-sustaining cycle of mutual distrust between people and authorities. Moreover, sanction goals are typically classified as deterrence or just-deserts goals (Carlsmith et al., 2002). Previous research has primarily focused on the extent to which people use these two goals in guiding their sanction decisions (Carlsmith, 2006, 2008; Carlsmith et al., 2002; Darley et al., 2000; Gerber & Jackson, 2013). The present research is consistent with this approach because we examined how these two goals affect sanction effectiveness.
However, we did not find any effects of a just-deserts justification on rule compliance, mainly because just deserts did not significantly affect distrust. Future research could examine this issue further; for example, framing the wording of a just-deserts justification in terms of retribution or revenge may be more effective in affecting rule compliance through, for instance, the anger that people infer from the authority. The perceived link between revenge and anger may be stronger than the perceived link between just deserts and anger (just deserts revolves around proportionality between crime and punishment, whereas revenge revolves around doling out disproportionate punishments to humiliate the other; see Gerber & Jackson, 2013); the link between revenge and anger is indeed well documented (Eisenberger, Lynch, Aselage, & Rohdieck, 2004; Seip, van Dijk, & Rotteveel, 2014). Future research could also examine how sanction severity interacts with sanction justifications; the present research mainly used relatively mild or unspecified sanctions. For instance, the negative impact of deterrence justifications could be amplified when sanctions are perceived as severe (because severity signals more distrust). Although the focus of the present article is on how authorities justify their sanctioning behavior, future research could also investigate how people infer sanction goals from sanctions themselves. Although this lies outside the scope of the present article, our theorizing and reported experiments can provide a meaningful theoretical framework for formulating predictions about inferred sanction goals. For instance, inferring that a sanction is meant to deter rule breaking can make people feel distrusted, and thus undermine rule compliance. Future research could aim to investigate when sanctions are perceived to reflect a deterrence (or just-deserts) goal.
Lastly, the present research primarily measured rule compliance in relatively small-stakes behavioral experiments. This approach provides experimental control and the ability to establish causality—consistent with a large literature in experimental and behavioral economics—but lacks empirical validation in high-stakes situations. To address this issue, future research could investigate the effects of sanction-goal justifications in real-life field settings involving high stakes (e.g., ethical behavior in government or corporate settings). **Conclusion** Authorities frequently use sanctions to promote rule compliance and often provide a justification for their use. Although providing a deterrence justification may seem appealing, the present article demonstrates that it can in fact reduce how effective a sanction is in promoting future rule compliance. Five experiments demonstrate that sanction effectiveness decreases when sanctions are justified as attempts to deter rule breaking. This can be explained by deterrence justifications fostering feelings of distrust. Thus, although authorities have been shown to rely on deterrence as a sanction justification (Mooijman et al., 2015), doing so may—paradoxically—undermine the effectiveness of their sanctions. References Aiken, L., & West, S. (1991). *Multiple regression: Testing and interpreting interactions*. Newbury Park, CA: SAGE. Ball, G. A., Treviño, L. K., & Sims, H. P., Jr. (1994). Just and unjust punishment: Influences on subordinate performance. *Academy of Management Journal*, 37, 299–322. http://dx.doi.org/10.2307/256831 Balliet, D., Mulder, L. B., & Van Lange, P. A. M. (2011). Reward, punishment, and cooperation: A meta-analysis. *Psychological Bulletin*, 137, 594–615. http://dx.doi.org/10.1037/a0024849 Bentham, J. (1988). *An introduction to the principles of morals and legislation*. New York, NY: Prometheus Books.
(Original work published 1789) Bilotkach, V. (2006). A tax evasion–bribery game: Experimental evidence from Ukraine. *The European Journal of Comparative Economics*, 3, 31–49. Brown, J. D. (2012). Understanding the better than average effect: Motives (still) matter. *Personality and Social Psychology Bulletin*, 38, 209–219. http://dx.doi.org/10.1177/0146167211432763 Carlsmith, K. M. (2006). The roles of retribution and utility in determining punishment. *Journal of Experimental Social Psychology*, 42, 437–451. http://dx.doi.org/10.1016/j.jesp.2005.06.007 Carlsmith, K. M. (2008). On justifying punishment: The discrepancy between words and actions. *Social Justice Research*, 21, 119–137. http://dx.doi.org/10.1007/s11211-008-0068-x Carlsmith, K. M., Darley, J. M., & Robinson, P. H. (2002). Why do we punish? Deterrence or just deserts as motives for punishment. *Journal of Personality and Social Psychology*, 83, 284–299. http://dx.doi.org/10.1037/0022-3514.83.2.284 Cohen, J. (1988). *Statistical power analysis for the behavioral sciences* (2nd ed.). New York, NY: Academic Press. Cropanzano, R., Bowen, D. E., & Gilliland, S. W. (2007). The management of organizational justice. *The Academy of Management Perspectives*, 21, 34–48. http://dx.doi.org/10.5465/AMP.2007.27895338 Darley, J. M. (2009). Morality in the law: The psychological foundations of citizens’ desires to punish transgressions. *Annual Review of Law and Social Science*, 5, 1–23. http://dx.doi.org/10.1146/annurev.lawsocsci.4.110707.172335 Darley, J. M., Carlsmith, K. M., & Robinson, P. H. (2000). Incapacitation and just deserts as motives for punishment. *Law and Human Behavior*, 24, 659–683. http://dx.doi.org/10.1023/A:1005552203727 De Cremer, D., & van Knippenberg, D. (2002). How do leaders promote cooperation? The effects of charisma and procedural fairness. *Journal of Applied Psychology*, 87, 858–866. http://dx.doi.org/10.1037/0021-9010.87.5.858 Eisenberger, R., Lynch, P., Aselage, J., & Rohdieck, S.
(2004). Who takes the most revenge? Individual differences in negative reciprocity norm endorsement. *Personality and Social Psychology Bulletin*, 30, 787–799. http://dx.doi.org/10.1177/0146167204264047 Ellemers, N. (2012). The group self. *Science*, 336, 848–852. http://dx.doi.org/10.1126/science.1220987 Ellemers, N., Doosje, B., & Spears, R. (2004). Sources of respect: Effects of being liked by ingroups and outgroups. *European Journal of Social Psychology*, 34, 155–172. http://dx.doi.org/10.1002/ejsp.196 Fiske, S. T. (1993). Controlling other people: The impact of power on stereotyping. *American Psychologist*, 48, 621–628. http://dx.doi.org/10.1037/0003-066X.48.6.621 Gerber, M. M., & Jackson, J. (2013). Retribution as revenge and retribution as just deserts. *Social Justice Research*, 26, 61–80. http://dx.doi.org/10.1007/s11211-012-0174-7 Hayes, A. F. (2013). *Introduction to mediation, moderation, and conditional process analysis*. New York, NY: Guilford Press. Hayes, A. F., Preacher, K. J., & Myers, T. A. (2011). Mediation and the estimation of indirect effects in political communication research. In E. P. Bucy & R. L. Holbert (Eds.), *Sourcebook for political communication research: Methods, measures, and analytical techniques* (pp. 443–465). New York, NY: Routledge. Hobbes, T. (1996). *Leviathan*. New York, NY: Oxford University Press. (Original work published 1651) Inesi, M. E., Gruenfeld, D. H., & Galinsky, A. D. (2012). How power corrupts relationships: Cynical attributions for others’ generous acts. *Journal of Experimental Social Psychology*, 48, 795–803. http://dx.doi.org/10.1016/j.jesp.2012.01.008 Kant, I. (1952). The science of right. In R. M. Hutchins (Ed.), *Great books of the western world* (W. Hastie, Trans., pp. 397–446). Edinburgh, Scotland: T. & T. Clark. (Original work published 1790) Keller, L. B., Oswald, M. E., Stucki, L. A., & Gollwitzer, M. (2010).
A closer look at an eye for an eye: Laypersons’ punishment decisions are primarily driven by retributive motives. *Social Justice Research*, 23, 99–116. http://dx.doi.org/10.1007/s11211-010-0113-4 Keltner, D., Gruenfeld, D. H., & Anderson, C. (2003). Power, approach, and inhibition. *Psychological Review*, 110, 265–284. http://dx.doi.org/10.1037/0033-295X.110.2.265 Kirchler, E., Kogler, C., & Muehlbacher, S. (2014). Cooperative tax compliance: From deterrence to deference. *Current Directions in Psychological Science*, 23, 87–92. http://dx.doi.org/10.1177/0963721413516975 Kramer, R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring questions. *Annual Review of Psychology*, 50, 569–598. http://dx.doi.org/10.1146/annurev.psych.50.1.569 Langlois, J. (2012). *India to name and shame rapists*. Global Post. Retrieved from http://www.globalpost.com/dispatch/news/regions/asiasouth-asia/120219/india-name-shame-rape-rapist Martinez, M. (2015, January 24). Colorado woman gets 4 years for wanting to join ISIS. *CNN*. Retrieved January 28, 2015, from http://ed.cnn.com/2015/01/23/us/colorado-woman-isis-sentencing/ McKenzie, C. R. M., Liersch, M. J., & Finkelstein, S. R. (2006). Recommendations implicit in policy defaults. *Psychological Science*, 17, 414–420. http://dx.doi.org/10.1111/j.1467-9280.2006.01721.x Molenmaker, W. E., De Kwaadsteniet, E. W., & Van Dijk, E. (2014). On the willingness to costly reward cooperation and punish non-cooperation: The moderating role of type of social dilemma. *Organizational Behavior and Human Decision Processes*, 125, 175–183. http://dx.doi.org/10.1016/j.obhdp.2014.09.002 Mooijman, M., van Dijk, W. W., Ellemers, N., & van Dijk, E. (2015). Why leaders punish: A power perspective. *Journal of Personality and Social Psychology*, 109, 75–89. http://dx.doi.org/10.1037/pspp0000011 Mulder, L. B., & Nelissen, R. (2010).
When rules really make a difference: The effect of cooperation rules and self-sacrificing leadership on moral norms in social dilemmas. *Journal of Business Ethics*, 95, 57–72. http://dx.doi.org/10.1007/s10551-011-0795-9 Mulder, L. B., van Dijk, E., De Cremer, D., & Wilke, H. A. M. (2006). Undermining trust and cooperation: The paradox of sanctioning systems in social dilemmas. *Journal of Experimental Social Psychology*, 42, 147–162. http://dx.doi.org/10.1016/j.jesp.2005.03.002 Nagin, D. (1998). Criminal deterrence research at the outset of the twenty-first century. In M. Tonry (Ed.), *Crime and justice: A review of research* (Vol. 23, pp. 1–42). Chicago, IL: University of Chicago Press. http://dx.doi.org/10.1086/445070 Pelletier, L. G., & Vallerand, R. J. (1996). Supervisors’ beliefs and subordinates’ intrinsic motivation: A behavioral confirmation analysis. *Journal of Personality and Social Psychology*, 71, 331–340. http://dx.doi.org/10.1037/0022-3514.71.2.331 Pillutla, M., Malhotra, D., & Murnighan, J. K. (2003). Attributions of trust and the calculus of reciprocity. *Journal of Experimental Social Psychology*, 39, 448–455. http://dx.doi.org/10.1016/S0022-1031(03)00015-5 Preacher, K. J., & Kelley, K. (2011). Effect size measures for mediation models: Quantitative strategies for communicating indirect effects. *Psychological Methods*, 16, 93–115. http://dx.doi.org/10.1037/a0022658 Schilke, O., Reimann, M., & Cook, K. S. (2015). Power decreases trust in social exchange. *PNAS Proceedings of the National Academy of Sciences of the United States of America*, 112, 12950–12955. http://dx.doi.org/10.1073/pnas.1517057112 Sedikides, C., Meek, R., Alicke, M. D., & Taylor, S. (2014). Behind bars but above the bar: Prisoners consider themselves more prosocial than non-prisoners. *British Journal of Social Psychology*, 53, 396–403. http://dx.doi.org/10.1111/bjso.12060 Seip, E. C., van Dijk, W. W., & Rotteveel, M. (2014).
Anger motivates costly punishment of unfair behavior. *Motivation and Emotion*, 38, 578–588. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. *Psychological Science*, 22, 1359–1366. http://dx.doi.org/10.1177/0956797611417632 Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2013). *Life after p-hacking*. Paper presented at the Fourteenth Annual Meeting of the Society for Personality and Social Psychology, New Orleans, LA. Snyder, M. (1992). Motivational foundations of behavioral confirmation. In M. P. Zanna (Ed.), *Advances in experimental social psychology* (Vol. 25, pp. 67–113). New York, NY: Academic Press. Steele, C. M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self. In L. Berkowitz (Ed.), *Advances in experimental social psychology* (Vol. 21, pp. 261–302). New York, NY: Academic Press. http://dx.doi.org/10.1016/S0065-2601(08)60229-4 Tannenbaum, D., Valasek, C. J., Knowles, E. D., & Ditto, P. H. (2013). Incentivizing wellness in the workplace: Sticks (not carrots) send stigmatizing signals. *Psychological Science*, 24, 1512–1522. http://dx.doi.org/10.1177/0956797612474047 Tetlock, P. E., Visser, P. S., Singh, R., Polifroni, M., Elson, S. B., Mazzocco, P., & Rescober, P. (2007). People as intuitive prosecutors: The impact of social-context cues on attributions of responsibility. *Journal of Experimental Social Psychology*, 43, 195–209. http://dx.doi.org/10.1016/j.jesp.2006.02.009 Tyler, T. R. (1990). *Why people obey the law*. New Haven, CT: Yale University Press. Tyler, T. R. (2006). Psychological perspectives on legitimacy and legitimation. *Annual Review of Psychology*, 57, 375–400. http://dx.doi.org/10.1146/annurev.psych.57.102904.190038 Tyler, T. R., & Blader, S. L. (2000). *Cooperation in groups: Procedural justice, social identity, and behavioral engagement*. Philadelphia, PA: Psychology Press.
Tyler, T. R., & Blader, S. L. (2003). The group engagement model: Procedural justice, social identity, and cooperative behavior. *Personality and Social Psychology Review*, 7, 349–361. http://dx.doi.org/10.1207/S15327957PSPR0704_07 Tyler, T. R., & Blader, S. L. (2005). Can businesses effectively regulate employee conduct? The antecedents of rule following in work settings. *Academy of Management Journal*, 48, 1143–1158. http://dx.doi.org/10.5465/AMJ.2005.19573114 Tyler, T. R., & Lind, E. A. (1992). A relational model of authority in groups. In M. P. Zanna (Ed.), *Advances in experimental social psychology* (Vol. 25, pp. 115–191). New York, NY: Academic Press. http://dx.doi.org/10.1016/S0065-2601(08)60283-X Wubben, M. J., De Cremer, D., & van Dijk, E. (2011). The communication of anger and disappointment helps to establish cooperation through indirect reciprocity. *Journal of Economic Psychology*, 32, 489–501. http://dx.doi.org/10.1016/j.joep.2011.03.016 Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and Japan. *Motivation and Emotion*, 18, 129–166. http://dx.doi.org/10.1007/BF02249397 Zand, D. (1997). *The leadership triad: Knowledge, trust, and power*. New York, NY: Oxford University Press. Received May 21, 2015 Revision received October 18, 2016 Accepted October 19, 2016
FIRST IEEE CONVENTION HIGHLIGHTS OF TECHNICAL SESSIONS, P 63 PREVIEW OF THE EXHIBITS, P 33 Integrated circuit using load-compensated transistor-diode logic No BCD* in this counter The 1150-A Digital Frequency Meter uses ring counting circuits. The advantages are many. The ring counter can readily be made into a decade device without need of fussy feedback circuits and complex decoding matrices. Furthermore, the ring counter is capable of driving readout devices directly; additional stages of amplification are not needed and circuit voltages are not critical. Summing it up, the G-R 1150-A Digital Frequency Meter is straightforward and reliable. You get dependability, in-line Numerik® readout, and a crystal-controlled time base in this low-cost counter.
CONDENSED SPECIFICATIONS
Frequency Range: 10 cps to 220 kc
Accuracy: ±1 count ± time-base stability
Time Base: Internal 100-kc crystal oscillator with ½ ppm stability. Provision for external 100-kc time base.
Sensitivity: Better than 1 volt, peak-to-peak. For pulses, duty cycle should be between 0.2 and 0.8. Input impedance is 0.5 MΩ shunted by less than 100 pf.
Gate Times: 0.1, 1, and 10 seconds. Also manual start/stop.
Reset: Automatic or manual
Display Time: Adjustable from 0.1 to 5 seconds, or infinite.
Self Check: Has provision for counting own 100-kc frequency.
Small Size: Only 3½" x 19" x 10"
Price: $915 in U.S.A.
All solid-state construction. Totalizes events or measures frequency to 220 kc. Oven-controlled 100-kc crystal oscillator with ½ ppm stability. Temperature stability better than 5 ppm over an ambient range of 0° to 50°C. New, brilliant, always in focus, NUMERIK® in-line indicator 120° viewing angle . . . 5000-hour lamp life in counting service. Build NUMERIK® indicators into your equipment. One-third the volume and uses one-half the power of comparable units. Prices start at $32.20. Quantity discounts available. Write for complete information.
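The ad's ring-counter argument can be illustrated with a toy software model. This is a hypothetical sketch, not G-R's circuit: a one-hot ring decade keeps exactly one of ten stages high, so the active stage itself selects the readout digit (no BCD decoding matrix), and a wrap-around carries into the next decade.

```python
class RingDecade:
    """One-hot ring counter decade: exactly one of ten stages is set, so the
    active stage can drive the digit readout directly, with no decoding."""

    def __init__(self):
        self.stages = [1] + [0] * 9  # stage 0 active at reset

    def clock(self):
        """Advance one count; return 1 on wrap-around (carry to next decade)."""
        i = self.stages.index(1)
        self.stages[i] = 0
        nxt = (i + 1) % 10
        self.stages[nxt] = 1
        return 1 if nxt == 0 else 0

    @property
    def digit(self):
        # The readout is simply "which stage is lit"
        return self.stages.index(1)

# Two cascaded decades counting 0-99, as in a frequency meter's display chain
units, tens = RingDecade(), RingDecade()
for _ in range(37):
    if units.clock():
        tens.clock()
print(tens.digit, units.digit)  # prints "3 7"
```

A BCD decade, by contrast, stores the count in four weighted flip-flops and needs a decoding matrix to light the correct digit, which is the circuitry the ad claims the ring design avoids.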
Write for 1150-A Counter Bulletin GENERAL RADIO COMPANY WEST CONCORD, MASSACHUSETTS SEE THIS INSTRUMENT AT THE IEEE SHOW—Booths 3201-3208 Also on display for the first time — a Sweep Generator with standard signal generator features . . . another low-cost Counter with Numerik® in-line indicators . . . an accurate, peak-responding Voltmeter that's easy to use . . . precision Invar Capacitor Standard . . . 200va Power Oscillator and Amplifier . . . Microphone Calibrator based on reciprocity techniques . . . 8-ampere Variac® autotransformer . . . new precision vhf-uhf Coaxial Connectors . . . Optical Pickoff and Flash Delay for locking electronic stroboscopes to non-cyclic motion. CIRCLE 900 ON READER SERVICE CARD INTEGRATED CIRCUIT using load-compensated diode-transistor logic (LCDT) by Siliconix. This modified version of DTL circuit uses a clamping diode. It overcomes some speed and power dissipation problems of other logic circuits. *Our cover was reproduced in natural color from actual diffusion mask negatives.* See p 68 IEEE SPECIAL IEEE PRESIDENT Talks Candidly. We asked Dr. Ernst Weber to outline problems facing the IEEE. *The biggest problem: the split in AIEE and IRE views* PLANAR TRANSISTORS. More of these popular types will bow at the show. *In one, a plastic header slashes costs* IEEE AWARD WINNERS—What They're Like. The careers of nine winners of top awards are profiled. *Did you know, for example, that one winner's study of water troughs helped develop waveguides in the 1930's?* PREVIEW OF EXHIBITS. Instruments Extend Operating Ranges. New instruments at the show continue the trend toward greater utility, higher accuracy and bolder display. *Design improvements make for easier operation, faster measurements* TUBES PACK MORE PUNCH. Tube designers are cutting size and weight while adding to frequency range and power. *Marriage of klystron and traveling-wave-tube techniques is one new concept* MICROCIRCUITS Graduate Into Hardware. 
Integrated circuits and thin-film devices pace development of off-the-shelf equipment lines. *This section of our preview of IEEE Show exhibits also surveys new equipment built with more conventional circuits* FIRST IEEE CONVENTION: Engineering Preview. Again this year our staff has combed the 54 technical sessions to select newsworthy papers. Our efforts, combined with those of our McGraw-Hill News Bureaus and the cooperation of the authors, make this preview possible. *High on the list are electron devices—both tubes and semiconductors—medical electronics and antennas* HIGH-SPEED INTEGRATED CIRCUITS With Load-Compensated Diode-Transistor Logic. These integrated circuits are constructed within and upon a single sliver of silicon. The article gives details on fabrication and the advantages of a new kind of logic circuit—LCDT. *This may be one of the most informative articles ever published on integrated circuits.* By B. T. Murphy, Siliconix CONTENTS continued INCREASING DIGITAL TRANSMISSION RATES With a Unique Synchronization Method. Synchronization of transmitter and receiver is a major problem in radio digital data transmission. Systems usually sync on the leading or trailing edge of pulses. *This system syncs on the middle. Trick is to insert three extra bits centered on the sync pulse.* By K. Roedl and R. Stoner, General Dynamics/Electronics 75 MEASURING EQUIPMENT PERFORMANCE: New Method Uses Common Instruments. Technique permits precise measurements and immediate display; requires only a generator, attenuator and oscilloscope. *Both positive and negative gain can be displayed.* By J. L. Haynes, Consulting Engineer 78 SEMI PERMANENT MEMORY: Latest Use for Twistors. The twistor is a copper wire on which is wound a helix of square-loop magnetic material. This 7,680-bit memory functions by automatically resetting bits to their original state after each read pulse. *Holes punched in removable copper sheet inhibit writing in bit locations desired.* By K. E.
Krylow, J. T. Perry, Jr., and W. A. Reimer, Automatic Electric 80

REALISTIC SONAR TRAINER Generates Ships' Wakes. Wakes of ships can lead sonar operators to the ship that produced them or obscure echoes on the far side. This new simulator adds realism to sonar training. *The digital wake generator can also simulate radar signals from ion trails.* By M. Kaufman and E. Levine, Applied Science Labs, Inc. 84

DEPARTMENTS
Crosstalk. *Program for IEEE* 3
Comment. *Millimeter Waves. Recruiting Students* 4
Electronics Newsletter. *Electroluminescent Diode Has Negative-Resistance Characteristic* 7
Washington This Week. *UHF Tv Wins Test* 12
Meetings Ahead. *Instrument Society of America Conference* 44
Research and Development. *Contest Produces Novel Circuit Designs* 96
Components and Materials. *Components Meet Sales Challenge* 114
Production Techniques. *Variety Spices IEEE Production Topics* 132
New Products. *Generator Tailors Pulses* 146
Literature of the Week 188
New Books. *Lasers: Generation of Light by Stimulated Emission* 190
People and Plants. *IEEE Medal of Honor Goes to Inventor* 196
Exhibitors at the IEEE Show 203
Reprint Information and Order Form 213
Index to Advertisers 225

Program for IEEE

THE International IEEE Convention and Show opening in New York March 25 is a high point of the year for the electronics industry. This meeting will be unsurpassed in sheer magnitude by any technical event anywhere in the world during 1963. An estimated 75,000 engineers will view more than 850 exhibits and listen to over 250 technical papers. Four days of buying and selling products exhibited at the Show will administer a shot in the arm to the electronics industry. And this is good. But the Convention came before the Show. It was the technical papers presented before the two merged groups forming the Institute that attracted engineers in the first place. And it is around this essential technical core that all this highly beneficial economic activity revolves.
Now, this year's announced technical program is disappointing. There are few new revelations and few papers to arouse any real controversy. It cannot be blamed on the program committee because we understand that almost every paper submitted was accepted for presentation. There were just not enough papers submitted so that the committee could be sufficiently selective. Many engineers have observed, and indeed some IEEE officers admit, that many specialized conferences siphon off the best papers reporting new developments and discoveries in our field. These specialized conferences will continue, and for the good of the industry the International Convention must also continue. This dilemma leaves us with the problem of how to build and sustain an interesting and significant technical program in the face of heavy and increasing competition from specialized conferences. Here are a few suggestions: - Let's face up to the fact that new developments cannot always wait until the last week of March for disclosure. Let's have them, by all means, but let's stress panel discussions by renowned engineers treating important and controversial topics of the industry as well as invited papers by specialists summarizing work done in a particular field during the past year. - Let's get away from the professional-group system of organizing the Convention, wherein a group with little new to say gets equal time with one whose activities can barely be scratched in the time allotted. A paper should stand on its own merits, not on its sponsorship. - Let's put an end to the three-ring and four-ring circus aspect of the convention that often makes it impossible for an engineer to be everywhere he wants to be. The answer: fewer papers, but better papers, of broader interest. 
The proper function of the International IEEE Convention as we see it is to cross-fertilize this vast field of electrical-electronics engineering—to be a place where the antenna specialist can learn about new developments in transistor circuits and conversely, or where the computer designer can get up to date on lasers. A smaller but more selective program of stimulating panel discussions and comprehensive tutorial papers as well as important new developments would be truly worthy of the world's largest engineering society.

SEE US AT THE SHOW. While you are at the IEEE Show, why not stop in at our booth too? Comments, criticism, suggestions on articles you would like to see us publish, articles you'd like to author, news tips, chit-chat, or just a friendly hello—we'll welcome them all. ELECTRONICS' booth is 4314-4316, on the fourth floor right near the elevators. There will be an editor at the booth each day from 9:45 a.m. to 8 p.m. (if you want to see a particular editor just leave a message). McGraw-Hill Book Company will be at booth 4331.

COMMENT

Millimeter Waves

I should like to extend my congratulations and appreciation to ELECTRONICS for the excellent coverage of the Orlando IRE Millimeter Wave Conference appearing in your Jan. 18 issue (p 24). In my opinion, and that of many others in the industry, you contribute substantially to the state of the art by such perceptive efforts. JACK G. BUTLER, Butler Roberts Associates, Inc., New York, New York

Recruiting Students

A feature called Who's Minding the Stockroom? (p 3, Nov. 2) was recently called to my attention. It asked what might be done to counteract the alarming decline in the number of young people entering the engineering profession. At the University of Pittsburgh, we have begun an active campaign to recruit high-school students to engineering careers. As part of this campaign, we wrote a nine-week series of radio shows based upon the United States space program.
We distributed these scripts to high schools in half a dozen eastern states. DAVID MARTIN, Public Relations Division, University of Pittsburgh, Pittsburgh, Pennsylvania

Dictionaries and Usage

Mr. Julian Loebenstein's letter (p 4, Dec. 28, 1962) about the use of the word "obsolete" as a verb, and the editorial reply, bring out a very interesting point that most people overlook. Because a dictionary says that a word "means" thus and so, it does not mean that this is the "correct" definition or use of the word, but merely the prevalent one. We forget that a good dictionary is not a definer of words, but a reporter of usage. Every good dictionary will make a point of saying that quite strongly. It merely reports that this is how the word is being used; it does not say that this is how the word should be used. Indeed, any usage of a word is correct, if it is made clear that the word is being used in a special or obscure sense, if such is the case. We tend to forget that very often words are used just that way, or we assume that, because we know that our usage is special, everyone else will know it too. In this sense, H. L. Mencken's statement that Dr. Samuel Johnson was the worst thing that ever happened to the English language is quite correct. Johnson set himself up as an authority, and said, not that this is how words are used, nor even that this is how they should be used, but that this is how they must be used. A language, in order to express any subtlety whatever, must be allowed to evolve with the society using it. Johnson tried to prevent that change. In so doing he almost obsoleted (!) the language, like a lexicographer shouldn't. KIM A. BORISKIN, Burlington, Vermont

New Color TV System

The article on Harries' new color television system (p 33, Dec. 14, 1962) is quite interesting, but doesn't the 50 and 100-Kv rating for the crt place it in the X-ray class? I believe RCA warns of X-radiation at 25 Kv. Will shielding be necessary for these tubes? A. R.
ROGERS, Dunbar, West Virginia

Author Harries replies: The picture tubes are not used at 100 Kv, and having regard to the current and much lower voltage at which they are used, the X-radiation is adequately shielded by the electrode materials in the tube and by the metal plates of the chassis. We went into this very thoroughly in the early stages with the National Physical Laboratory's X-Ray Protection Division in London, and this result has been confirmed by later tests. J. H. OWEN HARRIES, Harries Electronics Corp., Ltd., Devonshire, Bermuda

You pay no more for Lambda environment-engineered LE power supplies

**COMPLETELY PROTECTED** against short circuit and electrical overload, input line voltage transients, excessive ambient temperatures. No voltage spikes due to "turn-on, turn-off" or power failure. **WIDE INPUT RANGE** Wide input voltage and frequency range—105-135 VAC, 45-66 CPS and 320-480 CPS in two bands selected by switch. **CONVECTION COOLED** No blowers or filters; maintenance free. **6 MODELS AVAILABLE** **REMOTELY PROGRAMMABLE AND CONTINUOUSLY VARIABLE** Voltage continuously variable over entire range. Programmable over voltage and current range.

**OTHER FEATURES**
- All solid state.
- Adjustable automatic current limiting.
- 0°C to +50°C ambient.
- Grey ripple finish.
- Ruggedized voltmeters and ammeters per MIL-M-10304B on metered models.

**LE SERIES CONDENSED TENTATIVE DATA**

| Model | Voltage Range | Current Range | Price |
|-------|---------------|---------------|-------|
| LE101 | 0-36 VDC | 0-5 Amp | $420 |
| LE102 | 0-36 VDC | 0-10 Amp | $25 |
| LE103 | 0-36 VDC | 0-15 Amp | $95 |
| LE104 | 0-36 VDC | 0-25 Amp | $75 |
| LE105 | 0-18 VDC | 0-8 Amp | $25 |
| LE109 | 0-9 VDC | 0-10 Amp | $30 |

(1) Current rating applies over entire voltage range. (2) Prices are for non-metered models. For models with ruggedized MIL meters add suffix "M" to model number and add $50 to the non-metered price.
For metered models and front panel control add suffix "FM" and add $50 to the non-metered price.

**REGULATED VOLTAGE:**
Regulation (line and load): Less than .05 per cent or 8 millivolts (whichever is greater), for input variations from 105-135 VAC and for load variations from 0 to full load.
Transient Response (line): Output voltage is constant within regulation specifications for any 15 volt line voltage change within 105-135 VAC. (load): Output voltage is constant within 25 MV for load change from 0 to full load or full load to 0, within 50 microseconds of application.
Remote Programming: 50 ohms/volt, constant over entire voltage range.
Ripple and Noise: Less than 0.5 millivolt rms.
Temperature Coefficient: Less than 0.015%/°C.

**AC INPUT:** 105-135 VAC; 45-66 CPS and 320-480 CPS in two bands selected by switch.

**OVERLOAD PROTECTION:**
Thermal: Thermostat, reset by power switch; thermal overload indicator light on front panel.
Electrical (external overload protection): Adjustable, automatic electronic current limiting.

**METERS:** Ruggedized voltmeter and ammeter to MIL-M-10304B specifications on metered models.

**PHYSICAL DATA:**
Mounting: Standard 19" rack mounting.
Size: LE 101, LE 105, LE 109: 3½" H x 19" W x 16" D. LE 102: 5¼" H x 19" W x 16" D. LE 103: 7" H x 19" W x 16½" D. LE 104: 10½" H x 19" W x 16½" D.

SEND FOR LAMBDA CATALOG. LAMBDA ELECTRONICS CORP., 518 BROAD HOLLOW ROAD • HUNTINGTON, L.I., NEW YORK • 516 MYRTLE 4-4200. SALES OFFICES AND REPRESENTATIVES CONVENIENTLY LOCATED IN MAJOR CITIES.

Use it anywhere! New hp portable AC Voltmeter. 100 µV to 300 V • 5 cps to 2 MC

2% accuracy is yours over a major portion of the frequency range with this new battery-operated hp 403B AC Voltmeter. Carry it anywhere and quickly make direct measurements from 100 microvolts to 300 volts, 5 cps to 2 mc.
Battery charge, easily checked with a front-panel switch, is automatically restored while you use the 403B on your bench or from your ac line. The instrument itself always operates from its battery supply, and the battery operation permits complete isolation of the 403B from power line and external grounds—eliminating hum and ground loops. Signal ground may be ±500 v dc from external chassis. The meter responds to the average value of the input, and is calibrated in the rms value of a sine wave. The solid state, compact 403B weighs only 6½ lbs. Call your Hewlett-Packard representative or write direct for a demonstration on your bench. Data subject to change without notice. Price f.o.b. factory. Hewlett-Packard Company, 1501 Page Mill Road, Palo Alto, California, Area Code 415, DA 6-7000. Sales and service representatives in all principal areas. Europe, Hewlett-Packard S. A., 54 Route des Acacias, Geneva; Canada, Hewlett-Packard (Canada) Ltd., 8270 Mayrand St., Montreal

New GaAs Diode Is Promising Light Switch

ELECTROLUMINESCENT gallium-arsenide diodes with negative-resistance characteristics have been fabricated by IBM scientists. Diode I-V characteristics exhibit a negative resistance at a forward bias of 3 to 5 volts. There is also a nonlinear increase in light intensity that indicates such diodes could make highly efficient light switches. Diodes are prepared by diffusing manganese and then zinc into \( n \)-type gallium arsenide. Because manganese is a deep-level acceptor, there is a freeze-out of holes on the manganese centers at liquid nitrogen temperatures. This produces a relatively wide high-resistance region (1 mil, several thousand ohms) on the \( p \) side of the junction that is responsible for the negative resistance. Below the critical voltage of 3 to 5 v, current is a few milliamperes. Current then jumps through the negative resistance region to typically more than 10 times its pre-breakdown value, depending on load resistance.
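As a back-of-the-envelope illustration (ours, not from the IBM work), the switching behavior just described can be sketched as a piecewise I-V curve: a few milliamperes below the critical voltage, then a jump to more than ten times the pre-breakdown value. All element values below are assumptions chosen for illustration only.

```python
# Toy piecewise model of the reported GaAs diode switching behavior.
# All numbers here are illustrative assumptions, not measured device data.

def diode_current_ma(v_bias, v_critical=4.0, i_pre_ma=2.0, jump_factor=12.0):
    """Return current in mA: a few mA below the critical voltage (3-5 v),
    jumping to more than 10x the pre-breakdown value once breakover occurs."""
    if v_bias < v_critical:
        return i_pre_ma
    return i_pre_ma * jump_factor

ratio = diode_current_ma(5.0) / diode_current_ma(3.0)
print(ratio)  # 12.0 -- "more than 10 times its pre-breakdown value"
```

A piecewise constant like this ignores the load line; as the article notes, the actual post-breakover current depends on the load resistance.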
Scientists feel it might be possible to switch on the order of 1 amp. For a current change by a factor of seven, intensities of the manganese and zinc lines have been observed to change by factors of 10 and 100, respectively. This contrasts with ordinary electroluminescent diodes, where light intensity is roughly linear with current. The emitted light is incoherent. The possibility of laser action exists, but IBM scientists had no comment. Work will be described by K. Weiser, R. S. Levitt, and W. P. Dumke, of IBM Watson Research Center, at the American Physical Society meeting in St. Louis, March 25 to 28.

Temperature-Tunability Features Diode Lasers

BOSTON—Tunability is one of the most important features of the junction-diode laser, says Ali Javan, of MIT. Tunability is a corollary of the fact that the frequency of oscillation is a function of the diode's temperature, and the temperature can be varied. Although Javan did not spell out applications, frequency diversity is an obvious advantage in military uses. In a Boston IEEE lecture on quantum electronics, Javan said another advantage of the semiconductor laser, extremely high power efficiency, makes it important for space applications. It is known that MIT Lincoln Laboratory is exploring possibilities of junction-diode laser radar as a lunar landing aid.

Russians Considering French Color-Tv System

PARIS—Russia, Poland and Czechoslovakia plan to take a serious look at the French "Secam" color tv system (p 57, May 6, 1960), reports Compagnie Francaise de Television, which developed the system. Tests are already underway in six western European countries. At stake is selection of a standard for the Continent and England, and probably the Iron Curtain countries as well. For western Europe, the decision should come in July, when the European Radio Broadcasting Union releases its comparative study of the two systems still in the running—Secam and the American NTSC system. Both are compatible for black-and-white.
They differ principally in the way the color signals are handled.

Dyna-Soar Project Future Looks Dimmer

IN THE OPINION of some Washington officials, Defense Secretary McNamara's mind was already made up to scrap the X-20 Dyna-Soar program before he left Washington Wednesday to visit prime contractor Boeing in Seattle. Stated purpose of the trip is to study the feasibility of continuing USAF's manned space project (p 8, March 1). Cancellation of the project would fall in line with McNamara's stalemate strategy that he outlined to Congress (p 7, Feb. 8).

Deep-Space Tracker Set for Canberra

MELBOURNE, AUSTRALIA—Canberra has been chosen as site of main Australian tracking station for U.S. space shots. Decision follows visit last year of NASA team. Station will be for deep tracking and will be integrated into existing network of stations at Johannesburg, South Africa, Woomera and Los Angeles. Main equipment will be a parabolic antenna 85 feet in diameter, antenna control systems and equipment for transmission, reception and processing of radio signals to and from spacecraft.

IEEE Hard Put to Break IRE's Records

IEEE HAS BOOKED some 860 manufacturers' exhibits into a total of 1,256 booths at the Coliseum for the show March 25 through 28. This is just about the same number of exhibits that were packed into the last IRE show in New York. The reason: while IEEE is far bigger than IRE was, the Coliseum is still the same size. IEEE, however, is predicting an attendance of 75,000 at the Show and Convention. That would set a new record. Last year, the IRE predicted 70,000 and drew about 73,500.

Single-Chip Gate Has 5-nsec Delay

NEW YORK—Single-chip integrated circuits using a modified transistor-transistor logic were introduced here Wednesday by Sylvania's Semiconductor division. The silicon-based epitaxial planar structures include a dual NAND gate, a NAND-OR block and two flip-flops.
These circuits simultaneously attain a minimum noise rejection of 0.7 v at 125 C, a typical fan-out of 25 without a buffer amplifier, propagation delay of 11 nsec, and dissipate 12 mw per stage. The dual NAND includes two 3-input gates that can be cross-coupled to give a 20-Mc set-reset flip-flop. Individual gates have been observed to have propagation delays of 5 nsec and dissipate 7 mw per stage.

Boston Begins Forming Regional Space Firm

BOSTON—Baystate Science Foundation, a non-profit corporation, has been formed in Boston as the initial step in marshalling resources of the New England area for bidding on major space contracts (p 7, Oct. 5, and p 24, Nov. 2, 1962). If pledges of capital are sufficient, Advanced Technology Inc., a profit-making operating company wholly owned by the Foundation, will be formed. The planning committee says that at no time would the firm intrude into the hardware province of industry.

Soviet Plants Increase Tv Production

VIENNA—Tv assembly plants of 500,000 units annual capacity will be established soon in the USSR, Tass reported last week. The plants will produce standard types of tv sets, using automation and mechanization. This move is to boost Soviet tv production far above the 2 million in 1962. An end will come to Soviet production of "a great number of tv sets," Tass said. A basic model having a 13.6-inch screen is up for approval soon; unified types with 18.3 and 23-inch screens will be presented at the end of 1963.

In Brief...

WEST GERMAN Post Office is buying a transportable space communications station from ITT that will have dual transmitters, one for working with Relay and the other for Telstar. SPACE FLIGHT operations center costing $12 to $15 million is planned by Jet Propulsion Laboratory. It will guide unmanned lunar and planetary probes after launch. MINIATURE SONOBUOYS will be produced for Navy by Hazeltine Corp. HIGH-PRESSURE, modular inflation system is being tested for Echo II.
IRC SAYS it will deliver pre-production samples of hybrid circuits this year. JAPAN MAY IMPOSE quota on battery exports. Rule could go into effect April 1. HUNGARY has built a semiconductor plant. KAISER will build $25 million NASA facility at Sandusky, Ohio, for final ground testing of nuclear-powered spacecraft. CONTROL DATA CORP. is buying Bendix Computer division. It will continue the line of Bendix G-15 and G-20 computers. LIGHTWEIGHT tape recorders with large capacity and low power requirements are being developed for Apollo spacecraft by Leach. MODEL RAILROADING breakthrough has been claimed by GE. Microreceiver, using two silicon-controlled rectifiers, provides "realistic" operation. AGREEMENT to make and market TRW-530 computer in Japan has been signed by Thompson Ramo Wooldridge and Mitsubishi. WATKINS-JOHNSON acquired Stewart Engineering Co. in stock transfer. HONEYWELL introduced new models of its 800 and 1800 computers, featuring input-output control center.

THIN-FILM MICROCIRCUITS NOW AVAILABLE FROM SPRAGUE! Smaller than a postage stamp, this typical CERACIRCUIT is a two-stage oscillator and gated amplifier, used as a clock-pulse source in digital systems. LINEAR and DIGITAL CERACIRCUITS FOR GREATER DESIGN FLEXIBILITY... INCREASED RELIABILITY... CIRCUIT ECONOMY! Thin-film CERACIRCUITS allow great flexibility in the choice of components and types of circuits. Chopping size, weight, and cost, while boosting reliability and power utilization, these revolutionary microcircuits are being used by alert design engineers in ever-increasing numbers. Their ease of usability is remarkable. Containing familiar circuit elements such as capacitors, inductors, resistors, diodes, and transistors, CERACIRCUITS offer precision components with a wider choice of tighter parameters, assuring greater design freedom. Custom thin-film CERACIRCUITS are here... Now! A Sprague microcircuit specialist will be glad to discuss the transition of your circuits to thin-film.
He can also supply CERACIRCUITS such as linear amplifiers, oscillators, NOR gates and drivers, indicators, binary counters, and clocks for evaluation of ceramic-base CERACIRCUITS in your equipment. For complete information, write to Technical Literature Service, Sprague Electric Company, 35 Marshall Street, North Adams, Massachusetts. SPRAGUE COMPONENTS: MICROCIRCUITS • CAPACITORS • TRANSISTORS • MAGNETIC COMPONENTS • RESISTORS • INTERFERENCE FILTERS • PULSE TRANSFORMERS • PIEZOELECTRIC CERAMICS • PULSE-FORMING NETWORKS • TOROIDAL INDUCTORS • HIGH TEMPERATURE MAGNET WIRE • CERAMIC-BASE PRINTED NETWORKS • PACKAGED COMPONENT ASSEMBLIES • FUNCTIONAL DIGITAL CIRCUITS • ELECTRIC WAVE FILTERS. GET THE FULL STORY AT IEEE BOOTH 2424. SPRAGUE, THE MARK OF RELIABILITY. 'Sprague' and '®' are registered trademarks of the Sprague Electric Co. CIRCLE 9 ON READER SERVICE CARD

WASHINGTON THIS WEEK

DOOR OPENS FOR UHF TV IN 7 CITIES

UHF TV HAS WON a crucial test. By a four to three vote, FCC vetoed its plan to put new vhf stations in seven cities (Electronics, p 7, March 1). Now, third-network service to these markets can only be developed at uhf. A vhf station at Enid, Okla., will be permitted to move to Oklahoma City in the only exception to the FCC's ruling against allowing new vhf stations at spacings below usual standards. Commissioner Robert E. Lee succeeded in convincing Chairman Newton N. Minow, the swing-man in the vote, that allowing the vhf stations would set back uhf development.

DON'T SELL technical data or processes to Communist countries without checking with the Commerce Department, officials warn. The prohibition on exporting "unpublished" technical information includes the knowhow that enables American technicians to build an efficient facility, even if the principles are well-known. In the first test case of data export controls, Hydrocarbons Research, Inc., was penalized by a curb on its export privileges because it built a petrochemical facility in Rumania.
Congressional critics charge that Commerce acted too little and too late since the Rumanians got U. S. knowhow. Commerce officials promise swifter enforcement, tougher penalties. Maximum penalty is total loss of export privileges.

FAA SETS UP BUYING STAFF

FEDERAL AVIATION AGENCY has established four new contracting branches, each headed by a specialist, to handle procurement. The branches and their newly-named directors are: traffic control and radar, Ray E. Mulari; communications and weather, Harold N. Austin; aircraft and navigation aids, Richard T. Golrick; and facilities and services, Frederick G. Bremer.

PATENT LAW SEEKS AWARDS FOR INVENTORS

PATENT RIGHTS LAW proposed by Rep. Herman Toll (R.-Pa.) includes a requirement that government contractors set up an awards system for employed inventors. Inventor rights are an incidental consideration now in government concern over contractor-government patent interests. In the Senate last week, a Judiciary Committee report said that in Europe a body of patent law to protect rights of employed inventors is growing. Generally, the laws developing in Germany and other countries protect non-research employees—those not hired to invent. In the U. S., employed inventors are protected by commercial law and legal precedents. One study indicates that the courts more often than not protect employed inventors' interests.

ELECTRONICS STILL OUT OF TFX PROBE

McCLELLAN COMMITTEE PROBE into the TFX contract (Electronics, p 18, Dec. 14, 1962) has not yet touched seriously on performance or source selection of the aircraft's electronics. The investigation has disclosed that the secretaries of Defense, Air Force and Navy reversed four source-selection board recommendations favoring Boeing. The secretaries saw "no over-riding margin between the competitors," and gave General Dynamics-Grumman the edge because the team proposed to standardize 85 percent of parts in Air Force and Navy versions of TFX. Boeing proposed 60 percent.
Since 1958, when it first built the AN/DRW 11 (a receiver whose primary function is to destroy malfunctioning missiles), STL has produced more than 400 space communications receivers of 14 different designs. The Able I receiver, the first phase-locked receiver ever to fly, was built by STL. So were the ground station parametric amplifiers that tracked Pioneer V 22 million miles into space. STL built the receiver now being used at Pleumeur-Bodou, France, to track America's first communications satellites. The voice communications receiver for SYNCOM and the space command receiver for NASA's OGO are both STL products.

Scientists and engineers interested in advancing the art of space communications will find Space Technology Laboratories an active place. STL builds spacecraft for NASA and Air Force-ARPA, and continues Systems Management for the Air Force's Atlas, Titan and Minuteman programs. These activities create immediate openings in: Space Physics, Radar Systems, Applied Mathematics, Space Communications, Antennas and Microwaves, Analog Computers, Computer Design, Digital Computers, Guidance and Navigation, Electromechanical Devices, Engineering Mechanics, Propulsion Systems, Materials Research. To obtain additional information regarding positions at Southern California or Cape Canaveral, you may contact Dr. R. C. Potter, One Space Park, Dept. G-3-3, Redondo Beach, California, or P.O. Box 4277, Patrick AFB. STL is an equal opportunity employer. SPACE TECHNOLOGY LABORATORIES, INC., a subsidiary of Thompson Ramo Wooldridge Inc. Los Angeles • Vandenberg AFB • Norton AFB, San Bernardino • Cape Canaveral • Washington, D.C. • Boston • Huntsville • Dayton • Houston. IEEE DELEGATES: VISIT STL PRODUCTS BOOTH 3237-3239

What name is on the first 1.5 Mc recorder? Here it is: a 1.5 Mc per track, multi-track recorder! And Ampex is the first to have it. It's called the FR-1400. It will give you the broadest bandwidth yet in longitudinal recording.
What's more, it utilizes solid state electronics throughout—all in one rack. It has four speeds, each electrically switchable with no adjustments needed. And it comes with tape search and shuttle to provide quick data location and permit any portion of the tape to run repeatedly without operator attention. What about performance? Outstanding! It offers better rise time and minimum ringing on square waves, low intermodulation distortion, and improved flutter. Ampex also brings you a new 1.5 Mc tape. In both you'll find the same engineering precision, the same superior quality, that has made Ampex first in the field of magnetic recording. Write the only company providing recorders and tape for every application: Ampex Corp., 934 Charter St., Redwood City, Calif. Worldwide sales and service.

Allen-Bradley Hot Molded Resistors Help Beckman Engineers Achieve Maximum Reliability

In designing utmost reliability into their 210 high-speed data processing system, Beckman engineers—from the very start—insisted on components of the highest reliability. Thus, A-B hot molded resistors fitted ideally into this development program. For more than three decades, A-B resistors—by the billions—have been delivering superior performance in high quality equipment of all types. Allen-Bradley has developed and perfected a unique hot molding process which assures such consistent year-in and year-out uniformity that long term performance can be accurately predicted... and there is complete freedom from catastrophic failures. When performance takes priority over all else, be certain to begin the planning of your equipment with the built-in reliability that only Allen-Bradley hot molded resistors can deliver. For full details on all Allen-Bradley quality electronic components, please write for Publication 6024. Typical circuit board shows extensive use of A-B resistors in this section of the Beckman 210 system.

CALL COLLINS for advanced components, for these five reasons: 1.
Simplified purchasing with broad product line including mechanical, crystal and LC filters, toroids, transformers, magnetic amplifiers, magnetostrictive devices. 2. Improved delivery schedules because of newly expanded production facilities. 3. Engineering talent in depth to bring you such developments as low cost mechanical filters, microminiature toroids, printed circuit magnetic components, super-selective crystal filters in thumb-size cases. And even more advanced research programs are in full swing in ferrite development, metallurgy, crystal processing, component packaging. 4. High-reliability qualification facilities are among the most well-equipped in industry. 5. Expanded force of experienced engineering sales representatives in principal cities. Call one of those listed today or write for more information. Ask for Data File 205.

A representative sample of Collins mechanical, crystal and LC filters, toroids, transformers, magnetic amplifiers, magnetostrictive devices.

ARIZONA — NEW MEXICO: AR/TEC, INC., 34 East Stetson Drive, P.O. Box 727, Scottsdale, Arizona. Telephone: 947-6304. TWX: 602-949-0183.
CALIFORNIA (SOUTHERN): Engineering Liaison Associates, 3360 Barham Boulevard, Hollywood 28, California. Telephone: HOLlywood 9-2283. TWX: LA 1543.
FLORIDA — ALABAMA: William R. Lehmann Company, P. O. Box 1224, 222 Palmetto Avenue, Orlando, Florida. Telephone: GArden 4-0131. TWX: OR 7207.
ILLINOIS: Edward Schmeichel Company, 5968 West Chicago Avenue, Chicago 91, Illinois. Telephone: ESterbrook 8-2070.
MARYLAND: Daniel And Company, P. O. Box 124, 18 Argyle Road, Lutherville, Maryland. Telephone: VALley 5-3330.
MISSOURI — KANSAS: Midland Engineering Sales Assoc., 2210 West 75th Street, Prairie Village (Kansas City), Kansas. Telephone: ENdicott 2-7397.
NEW ENGLAND STATES: R. H.
Sturdy Company, 103 Morse Street, Newton 58, Massachusetts. Telephone: WAlnut 6-0808.
NEW JERSEY: Harold Gray Associates, 8-10 Highwood Avenue, Tenafly, New Jersey. Telephone: LOwell 7-3585.
NEW YORK: Naylor Electric Company, 1718 Erie Boulevard, East, Syracuse 3, New York. Telephone: GRanite 2-9183. TWX: SS 166.
NORTHERN CALIFORNIA, WASHINGTON — OREGON: Engineering Liaison Associates, North 626 Jefferson Street, Redwood City, California. Telephone: EM9-9515.
OHIO: Electro Com, 5554 Pearl Road, Cleveland 29, Ohio. Telephone: TUXedo 6-2404.
OKLAHOMA — TEXAS — LOUISIANA: Hillman Enterprises, Inc., 3805 Turtle Creek Blvd., Dallas 19, Texas. Telephone: LAKeside 1-2070. TWX: DL 123.
PENNSYLVANIA: Hile and Stitzer Company, 17 South Valley Road, Paoli, Pennsylvania. Telephone: NIagara 4-5500.
CANADA: Collins Radio Company of Canada, Ltd., 11 Bermondsey Road, Toronto 16, Ontario, Canada. Telephone: PLymouth 7-1101.

Collins Radio Company, Components Division, 19700 San Joaquin Road, Newport Beach, California.

Maggs Electronics, 5611 Sheila Street, Los Angeles 22, California, is a Collins distributor for the Collins standard toroidal coil line and is able to provide 24-hour delivery on most types in quantities up through 99 pieces, packaged and marked to customer requirements. Telephone: Code 213, 685-6141. TWX 213-722-6289. See the complete Collins line at New York IEEE. CIRCLE 16 ON READER SERVICE CARD

OHMITE TECHNOLOGICAL BREAKTHROUGH! MOLDED VITREOUS ENAMELED WIRE-WOUND RESISTORS. A NEW PRODUCT... A NEW METHOD. Patent Applied For.

ADVANTAGES
★ Insulated for 1000 V to Ground
★ Uniform Shape
★ Uniform Sizes
★ Permanent "Fired-on" Vitreous Markings, Completely Cleaning-Solvent Resistant
★ Plus All The Advantages of Ohmite Time-Proven Vitreous Enamel

The NEW Series 99 Resistors are the result of an outstanding technological development—an exclusive new molding process for applying vitreous enamel to resistors.
This "Patent Applied For" molding process is the first radical manufacturing change in the history of vitreous enameled resistors—replacing the traditional "wet dipping" process. The dense uniform vitreous enamel jacket created by molding—fired at high temperature—produces the hard, glossy, moisture-resistant covering proved in years of service, as well as the extra advantages featured above. Series 99 Resistors meet all requirements of MIL-R-26C, including pertinent V-block insulation tests*. Construction is all ceramic and metal. Ratings are based on a maximum hot spot temperature of 350°C with a 25°C ambient. Standard tolerance is ±5%, other tolerances available. Standard leads are grade A nickel, tinned for soldering. Also supplied untinned for welding. Other types of lead material are available. *For 1-watt size only, V-block not to exceed length of resistor body. RHEOSTATS • RESISTORS • RELAYS • TAP SWITCHES • R.F. CHOKES VARIABLE TRANSFORMERS • TANTALUM CAPACITORS • SEMICONDUCTOR DIODES MILLIONS OF UNIT-HOURS OF TESTING —This new molded vitreous enamel construction has been test-proven in pilot production. Load-life tests are being conducted at full-rated wattage on all sizes and resistance values which represent the approximate minimum and maximum for each size. The total number of resistors in this test group is 1,966, and 2,000 hours of cyclic "on-time" have been exceeded, thereby producing an equivalent total to date (January, 1963) of 5,242,666 unit test hours (cyclic, 1½ hours on, ½ hour off) of successful operation. Testing on all units continues. | OHMITE STYLE | RATED WATTS AT 25° C | DIMENSIONS (INCHES) | OHMS RANGE (COM/M'L.) | |--------------|----------------------|--------------------|-----------------------| | | | DIAM. 
+.031-.000 | LENGTH .015 | | | 995-1A | 1 | 0.125† | 0.422‡ | 1 TO 3,000 | | 995-3A* | 3 | 0.203 | 0.547 | 1 TO 8,000 | | 995-5A§ | 5 | 0.313 | 0.922 | 1 TO 30,000 | | 995-5B | 5 | 0.203 | 0.938 | 1 TO 18,000 | | 995-10A† | 10 | 0.313 | 1.781 | 1 TO 51,000 | NOTE: Standard lead length is 1½". *Also in MIL style RW69V. §Also in MIL style RW67V. †Also in MIL style RW68V. ‡Tolerance, ±.015-.005 Write for Bulletin 103 OHMITE MANUFACTURING COMPANY 3610 Howard Street, Skokie, Illinois Phone: (312) ORchard 5-2600 SEE ALL OF OHMITE'S NEW PRODUCTS AT BOOTH 2333-35, IEEE (IRE) SHOW CIRCLE 17 ON READER SERVICE CARD CIRCLE 18 ON READER SERVICE CARD eight component lines... one standard of excellence every OHMITE component reflects the controlled quality which has made OHMITE products worthy of your confidence REQUEST CATALOG FROM OHMITE MANUFACTURING COMPANY 3637 HOWARD STREET, SKOKIE, ILLINOIS PHONE: (312) ORchard 5-2600 RHEOSTATS RESISTORS VARIABLE TRANSFORMERS RELAYS TANTALUM CAPACITORS TAP SWITCHES R.F. CHOKES SEMICONDUCTOR DIODES From industry's viewpoint, the distance from farm to plant is shorter in Virginia's Shenandoah Valley. How do you measure the distance from farm to plant? In miles? Or in human trainability? On the latter count, the sturdy farmer stock of Virginia's Shenandoah Valley has made a superb record. Industries from textiles to electronics, from furniture to drugs, report Shenandoah men and women learn new jobs faster, show greater stability and productivity than people in older, more congested industrial areas. Ask VEPCO for industrial site data and economic studies covering the Shenandoah Valley's warmly hospitable, highly livable communities. Write, wire or phone in confidence. VIRGINIA ELECTRIC and POWER COMPANY J. Randolph Perrow, Manager—Area Development Electric Building, Richmond 9, Virginia • Milton 9-1411 Serving the Top-of-the-South with 2,540,000 kilowatts—due to reach 3,500,000 kilowatts by 1965. 
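The unit-hour total quoted in the Ohmite test summary can be checked from the cycle arithmetic: 2,000 hours of "on-time" at 1½ hours on in every 2-hour cycle corresponds to 2,666⅔ elapsed hours per resistor. An editor's sketch of the check, with Python used purely as a calculator:

```python
# Cyclic load-life test from the Ohmite ad: 1.5 h on, 0.5 h off per 2 h cycle
ON_HOURS = 2_000        # cumulative "on-time" per resistor
CYCLE_ELAPSED = 2.0     # elapsed hours per cycle (1.5 on + 0.5 off)
CYCLE_ON = 1.5          # on-hours per cycle
RESISTORS = 1_966       # units in the test group

elapsed_per_unit = ON_HOURS * CYCLE_ELAPSED / CYCLE_ON  # about 2,666.7 h each
total_unit_hours = RESISTORS * elapsed_per_unit

print(int(total_unit_hours))  # -> 5242666, matching the advertised figure
```

The advertised 5,242,666 unit test hours thus counts elapsed (cyclic) hours, not on-time alone.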
The Multi-Sweep Model Video 300 is a wide-range video-vhf sweeping oscillator which provides a full 300 Mc of swept-frequency output by all-electronic frequency-modulating techniques. It provides a linear swept-frequency output, AGC'd for constant output over the frequency band. The Multi-Sweep Model Video 300 includes provision for the insertion of external oscillators to generate variable birdie-bypass type markers on all frequencies. A calibrated frequency dial permits the use of the unit as an IF-VHF oscillator with continuously variable center frequency and sweep width.

**Sweep Frequency Range** The Model 300 is a wide-sweeping swept-frequency oscillator with high and undistorted output, essentially free of spurious signals. Over the entire sweeping range, it generates a 0.5-volt (rms into load) output which is held constant to within ±0.25 db by a fast-acting automatic gain control circuit. The RF output is monitored by a calibrated panel meter.

**Sweep Rate** The repetition rate of the sweep may be locked to the nominal line frequency or varied around this frequency for hum checks. A manually controlled swept output provides a means of varying a c-w signal in sync with the oscilloscope display. The manual control covers the same frequency range to which the Model 300 is set for electronic sweeping.

**Advanced Design** The Multi-Sweep Model 300 employs recently developed techniques in providing a compact and versatile instrument. All elements, including the frequency-modulated source and its means of modulation, use recently developed solid-state circuits. Careful isolation and buffered outputs provide for excellent waveshapes and clean, reliable outputs.

**SPECIFICATIONS**
- **Frequency Range:** Continuously variable 1 Mc to 300 Mc.
- **Sweep Width:** Linear, continuously variable 200 Kc to 300 Mc; CW operation.
- **Sweep Rate:** Variable around line frequency, locks to line. Manual control.
- **RF Output:** 0.5 volt rms into nominal 50 ohms (70 ohms on request); flat to within ±0.25 db over widest sweep — metered.
- **Markers:** Provision for birdie-bypass markers derived from external oscillators. Separate level control and output.
- **Attenuators:** Switched 20, 20, 20, 10, 6, 3 db plus variable 6 db.
- **Power Supply:** Input approx. 20 watts, 117 volts (±10%), 50-60 cps ac, regulated.
- **Dimensions:** 6¾" x 15½" x 13½".
- **Weight:** 24 lbs.
- **Price:** $795.00 f.o.b. factory, $875.00 f.o.s. N.Y.

VISIT KAY AT THE IEEE SHOW: BOOTH 3512-3518. KAY ELECTRIC COMPANY, Dept. E-3, Maple Avenue, Pine Brook, Morris County, New Jersey • Capital 6-4000

IEEE President Talks Candidly
Dr. Ernst Weber says split in AIEE, IRE views is big problem

AS FIRST PRESIDENT of the merged AIEE-IRE, Dr. Ernst Weber is in a unique position to view the problems, the conflicts and the enthusiasms that the new organization, the IEEE, has generated. Recently we went to Polytechnic Institute of Brooklyn, which he heads, to find out what he thinks will be the IEEE's biggest problem. "The difference in viewpoints of the two memberships," he answered. "The AIEE member is happiest when you tell him just what to do. With the IRE man, it's a case of telling him what not to do." This difference, he says, has a profound effect on IEEE's first order of business: combining AIEE's technical committees with IRE's professional groups. IEEE's top brass has adopted a hands-off policy toward this issue, encouraging the committees and groups to work out their own plans for merger. But the traditional methods of operation of the AIEE and IRE produce difficulties here. "For instance, what an AIEE member means by freedom will probably be quite different from what an IRE member means." The technical committees were organized from the top down, he explained, with members chosen by AIEE higher-ups. IRE groups were most often established by the members' own initiative and staffed by volunteers.
Headquarters regulated certain well-defined areas but otherwise only provided a model constitution, which a group might rewrite to suit itself.

COMPANIES' ATTITUDE—What has been the attitude of companies in the industry toward the AIEE-IRE merger, we asked. "They're all for it," he said. Dr. Weber agreed that in recent years many companies had become more and more reluctant to cooperate with the professional societies. Some of them have balked at giving their men time off and travel money to attend meetings. Dr. Weber holds that the principal reason for this has been the proliferation of such meetings. Because the IEEE will be striving to eliminate overlapping and duplication, every firm he has talked to is enthusiastic about IEEE.

ORIGINAL PAPERS—Concerning the meetings themselves, the trend in the future, as in the recent past, will be to steer original papers to the meetings of the professional and technical groups, where researchers can expect to find a "serious and competent" audience. The national meeting in March will continue to be a "three ring circus," he said, with much emphasis on socializing. Papers read at national meetings will be "tutorial."

WEBER, THE WELL-ROUNDED MAN
Most impressive qualities of Dr. Ernst Weber are his energy and his varied interests. In an interview, the energy asserts itself as a desire to answer questions fully but pithily, candidly but diplomatically. Here is a man, you judge, who is making the fullest use of his resources. The record bears this out. In college, he doubled up on his studies and won doctorates in engineering and philosophy. He holds three presidencies: of Polytechnic Institute of Brooklyn, Polytechnic Research and Development Corp., and IEEE. His contributions to microwave engineering are reflected in 50 American and foreign patents.
His nonprofessional interests include music, poetry, mountain climbing—the list stretches on, as we have reported before (Electronics, p 268, March 13, 1959).

More Planar Transistors
Plastic package in one silicon planar type slashes cost

NEW YORK—GE will be luring consumer-products designers into its components booth at the IEEE Show with a silicon planar transistor that sells for 40¢. Chief cost-cutter—a plastic package. A beryllia mounting wafer for collector isolation is the design feature of Pacific Semiconductors' new silicon triple-diffused planar transistors. In a new circuit, one type gives 10-w output at 265 Mc. F-m transmission is among expected applications. A 250-w silicon power transistor in a double-ended stud package, now in advanced stages of development, will be shown by Silicon Transistor Corp. Beta is 10 at 50 amp. A 10-amp series of silicon planar triple-diffused power transistors will be shown by Minneapolis-Honeywell. At 10 amp, breakdown voltages are around 80 v. Gains are 20 to 60 or 40 to 120. Multiple forward voltage regulators with up to 4 units in a DO-7 package are offered by Computer Diode. Electrochemical polishing and other semiconductor finishing services are being introduced by Semiconductor Specialties. International Resistance is showing ¼-inch-cube trimmers with ranges of 50 ohms to 20 K. Screwdriver-adjusted, they are designed for higher packing densities. Among the new products at Sprague Electric's booth are a line of molded solid tantalum capacitors and molded pulse transformers. Buckbee Mears is now making hemispherical grids with up to 1,000 lines an inch in a variety of materials, as well as thin-film memory circuits, masks and other products.

TRIPLE-STRIPE geometry gives this Sylvania silicon epitaxial planar transistor its distinctive appearance (see p 154)

Amperex announces 5 new Ruggedized Tetrodes... direct plug-in replacements for all standard prototypes, ruggedized or non-ruggedized, regardless of brand...
and they are available at no premium in price. CIRCLE 24 ON READER SERVICE CARD

7034W and 7035W are Ruggedized Glass/Metal Power Tetrodes, designed for use as Linear RF Amplifier in class B Television Service; class C RF Oscillator/Amplifier; and class AB1 or AB2 AF Amplifier/Modulator. Fil. Ratings: 7034W (6.0V, 2.6A), 7035W (26.5V, 0.56A). (MIL specifications in preparation.) These Amperex Ruggedized Tetrodes are rated for full 250 watts plate dissipation in applications up to 500 Mc... and simultaneously meet the additional Arco Ruggedization Specifications of 90 G Shock Test and 10 G Vibration Test (10-1000-10 cycles)! Amperex ruggedization has been achieved without alteration of static or dynamic characteristics of the related tube prototypes or of their external physical dimensions. Input and output capacitances and heater ratings remain the same. That is why these Amperex Ruggedized Tetrodes are preferred for BOTH new equipment specifications... and as direct plug-in field replacements... and at no premium in price for ruggedization.

7203W and 7204W are Ruggedized Ceramic/Metal Power Tetrodes, for high-reliability performance as RF Amplifier/Modulator, RF Pulse Amplifier, class C RF Amplifier/Oscillator or class AB1 AF Amplifier/Modulator. Fil. Ratings: 7203W (6.0V, 2.6A), 7204W (26.5V, 0.56A). (MIL specifications in preparation.)

7580W is a Ruggedized Ceramic/Metal Power Tetrode, for SSB and other Linear RF Amplifier applications. (Available to specification MIL-E-1/136SA [Navy].)

These five new ruggedized power tetrodes are now in production at the Amperex Hicksville, Long Island plant... where advanced manufacturing techniques assure high-reliability transmitting tubes in production supply. Write for detailed data today.
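A quick arithmetic check on the filament ratings quoted for each tube pair (6.0 V at 2.6 A versus 26.5 V at 0.56 A) shows the low-voltage and high-voltage heater versions draw nearly the same heater power, consistent with the claim that heater ratings track the prototypes. An editor's sketch, with Python as a calculator:

```python
# Filament ratings from the Amperex announcement above
low_v_power = 6.0 * 2.6     # 7034W / 7203W heater power, watts
high_v_power = 26.5 * 0.56  # 7035W / 7204W heater power, watts

# 15.6 W vs 14.84 W -- within about 5% of each other
print(low_v_power, high_v_power)
```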
Ask Amperex. Amperex Electronic Corporation, Tube Division, Hicksville, Long Island, New York.

IEEE Award Winners — What They're Like
Profiles of 9 men who will be honored at IEEE Convention

FORMER IRE AWARDS will be presented at the IEEE's convention banquet March 27. Winner of one former AIEE prize, the Edison Award, will be "recognized" at the banquet, the award having already been given. Winner of the Lamme Award, who has not yet been named, will also be "recognized." Presentation will be later this year. Next year, IEEE says, all awards will be presented at the same time.

ALEXANDER C. MONTEITH is winner of AIEE's Edison Award for "meritorious achievements in engineering education, management and development of young engineers." Monteith, a Westinghouse vice president, has long been concerned with "getting better engineers and more of them." Born in 1902, he is former president and director of the recently completed Westinghouse Educational Center. While serving on the Engineers' Council for Professional Development, he spearheaded the development of the well-known report, "The First Five Years of Professional Development." The profiles that follow are of IRE prize winners. John Hays Hammond, Jr., co-winner of the Medal of Honor "for pioneering contributions to circuit theory and practice, to the radio control of missiles and to basic communications methods," is profiled on p 196 of this issue.

GEORGE C. SOUTHWORTH, the other co-winner of the Medal of Honor, is one of the grand old men of radio and electronics. Born in 1890 and brought up on a farm near Little Cooley, Pa., he was seeking out information about the spark-gap transmitter and coherer receiver when he was still in high school. While a graduate student at Yale, he became interested in resonant water troughs. He later pursued these studies while working for the Bell System, accumulating enough data for a demonstration on waveguides before the IRE in 1938.
This eventually led to waveguide technique as we know it today. Dr. Southworth retired from Bell recently.

FREDERICK E. TERMAN, winner of the Founders Award, is vice president and provost of Stanford University. He has played a large part in developing Stanford's electronics engineering curriculum into one of the finest in the world. Born in 1900, Dr. Terman still searches every grade card and lab report for evidence of deep analytical talent. Former students include Russell Varian, Robert Hansen, William Hewlett and David Packard. He has written what are now classics in their field, including "Radio Engineering."

IAN M. ROSS, winner of the Morris Liebman Award, has spent his entire career with solid-state devices. At 35, he expects to devote the rest of his working life to that specialty, a prospect that leaves him with no feeling of limitation whatsoever. "So far as I'm concerned, the opportunities are boundless," he told ELECTRONICS. This is not true of transistors and diodes, he thinks. Main advances in these devices will concern quality, reliability, large-volume production and low cost. "This isn't very exciting but I'm afraid that's the state of a mature art," he said. He holds nine patents and has five patents pending on semiconductor devices. He is director of the Semiconductor Device and Electron Tube lab at Bell Telephone.

CHIH-TANG SAH will be given the Browder J. Thompson prize for his paper, "Effect of Surface Recombination and Channel on P-N Junction and Transistor Characteristics," published last year when he was only 29. Until recently he commuted between Fairchild Semiconductor and the University of Illinois, where he was both teacher and researcher. He now works full time as head of Fairchild's solid-state physics department.

RESEARCHER RESEARCHES—Philip J. Rice in the lab

ALLEN H. SCHOOLEY, who will receive the Harry Diamond Award, thrives on a split life. Daytimes he is associate director of research for electronics at the U.S.
Naval Research Laboratory, supervising more than 750 persons and numerous research projects. Nights and weekends he is often back at the lab working on a project of his own. Using inexpensive equipment—a 95-cent motor and paper clips in one case—he has over the years carried out experiments in electronics and oceanography that have led to an impressive list of published papers.

MODULTONE ENCODER. HIGH Stability • LOW Cost • SPACE-Saving. Standard 6-tone encoder and decoder modules supplied with choice of frequencies from 67 to 1600 cps. Patented, proven RESONATORS assure superior frequency stability. Entire circuit printed... rugged, dependable. MODULTONE DECODER. WRITE for additional information on MODULTONE encoders and decoders. SECURITY DEVICES LABORATORY, ELECTRONICS DIVISION OF SARGENT & GREENLEAF, INC., 17 Seneca Ave., Rochester 21, N.Y. SEE MODULTONE IN ACTION... IEEE SHOW • BOOTH 1625

WILLIAM E. EVANS, co-winner of the Zworykin Award, already has several special achievements to his credit. He received a War-Navy Dept. citation for ECM work during World War II, co-designed the first successful high-power phase-to-amplitude broadcast transmitter in the U.S. and holds four patents in the field of outphasing TV modulation, color TV systems and the utilization of scanning techniques in industrial TV systems. Born in 1921, he is engineering manager of the R&D labs at A.B. Dick Co.

LEONARD LEWIN, winner of the W.R.G. Baker Award, started investigating microwaves while working in the British Admiralty during World War II. He continued this work after joining ITT's Standard Telecommunication Laboratories in 1946, where he is now assistant manager. By 1949-50 he had originated so many ideas he was able to publish a book, "Advanced Theory of Waveguides." He suspected, though, that many of the apparently independent examples he cited were really special cases of some linking theorem. Eleven years later he verified this in a paper, "On the Resolution of a Class of Waveguide Discontinuity Problems by the Use of Singular Integral Equations," for which he will be given the Baker Award.

PHILIP J. RICE JR. will receive the Vladimir K. Zworykin Award, along with William E. Evans, "for the development of techniques and equipment for fixing televised images on paper." The award-winning project was a joint effort of A.B. Dick Co. and Stanford Research Institute, where Dr. Rice is manager of the Physical Electronics Lab. It resulted in the development of a videograph utilizing CRT principles to print data on paper rather than displaying it on a screen. The system is now used by some publications to print address labels. Dr. Rice is 46 years old and is presently trying to develop a solid-state device based on a new type of metal-base transistor.

Glass Display Arrays
TRANSPARENT wire arrays for electroluminescent X-Y coordinate display panels, being shown by Corning Glass Works, are made of glass strands coated with conductive metal oxide and transparent insulation.

EXPERIENCED TRAVELER
Here is Melpar's case for space. Experience? We've got a bagful—from the first probing start with Snark to the far-advanced Apollo program, from earth to the moon, to Venus and beyond. If you're aiming for a place in space, Melpar's proven capabilities can help you get there—fast. Thinking small, like circuits ten millionths of an inch thin or refrigerators (thermo-electric coolers) half the size of your thumb nail? Melpar makes them. Thinking big, like data handling complexes (Finder) as large as a basketball court? Melpar produces those, too. If you're traveling or exploring in the spheres of Advanced Electronics, Aerospace, Physical or Life Sciences, just pack up your space problems and bring them to Melpar. We make wonderful traveling companions, because we've been there—and that's a mighty strong case anytime.
And, if you're a scientist or engineer who would like to travel along with this fast-moving leader in space and defense, we're ready to reserve your place in space, too. Write: Professional Employment Manager, 3010 Arlington Blvd., Falls Church, Virginia. Serving Government and Industry MELPAR INC A SUBSIDIARY OF WESTINGHOUSE AIR BRAKE COMPANY 3902 Arlington Boulevard, Falls Church, Virginia. An equal opportunity employer Subsidiaries: Microwave Physics Corp., Garland, Texas • Television Associates of Indiana, Inc., Michigan City, Indiana • Melpar-Fairmont Corp., Fairmont, West Virginia electronics • March 15, 1963 CIRCLE 29 ON READER SERVICE CARD 29 A New Galaxy— An entire circuit module compensated for phase, gain and zero drift over entire temperature range. - 0.1 Cubic Inch Volume - 0.1 Ounces in Weight - Infinite Standby and Service Life - Low Milliwatt Power Consumption - High Shock and Vibration Resistance Electrical zero point and gain, repeatability and stability over entire service life Extremely broad bandwidth Carrier frequencies as high as 1 megacycle Input signal current resolution better than 0.01 μa Absolute reliability Micro Magnetic Modulator Type IMM-655-2 Micro Magnetic Modulator Type IMM-648-1 Micro Magnetic Modulator Type IMM-664-1 Micro Magnetic Modulator Type IMM-680-1 1952-1962...weight reduced from 5 ounces to 0.1 ounce! The product of 2 years of intensive development work, new completely microminiaturized magnetic modulators feature an essentially drift-free circuit with superior phase and gain stability over wide environmental ranges. All the ruggedness, dependability, wide dynamic range and stability that are characteristic of the larger magnetic modulators are engineered into this new magnetic circuit. "MICRO MAG MODS" are shock and vibration proof, provide the ultimate in reliability and unlimited life. 
of "MAG MOD"® MICRO MAGNETIC MODULATORS — provide repeatable data over years of continuous, unattended operation "MAG MODS" provide four quadrant operation, extreme stability with negligible change of phase, gain and zero position over a wide temperature range. Design is simple, lightweight, rugged with no vacuum tubes, semiconductors or moving parts to limit life. "MAG MODS" offer infinite design possibilities and impedance levels, and are adaptable for algebraic addition, subtraction, multiplying, dividing, raising to a power and vector summing. Absolute Reliability in Micro Magnetics | TYPE NUMBER | IMM-655-2 | IMM-648-1 | IMM-664-1 | IMM-680-1 | |-------------|-----------|-----------|-----------|-----------| | Reference Carrier Voltage and Frequency | 3 V @ 400 cps | 2 V @ 2 KC | 10V @ 60 KC | 115 V @ 400 cps | | Input Control Signal Range | 0 to ±100 μA DC | 0 to ±300 μA DC | 0 to ±100 μA DC | 0 to ±10 μA DC | | AM Phase Reversing AC Output Range | 0 to 0.8 V RMS @ 400 cps | 0 to 1.0 V RMS @ 2 KC | 0 to 200 mv RMS @ 60 KC | 0 to 30 mv RMS @ 400 cps | | RMS mv AC Output/μA DC Signal Input | 7 mv/μA | 4 mv/μA | 2 mv/μA | 5 mv/μA | | AC Output Null (Noise Level) RMS | 5 mv RMS Max. | 5 mv RMS Max. | 10 mv RMS Max. | 100 μV RMS Max. | | Output Impedance | 14 K ohms | 1000 ohms | 11 K ohms | Approx. 150 ohms | | External Load | 100 K ohms | 5 K ohms | 50 K ohms | 100 ohms | | Zero Drift over Temperature Range | ±0.1 μA Max. | 0.5 μA Max. | — | 0.05 μA Max. | | Hysteresis In % of Max. Input DC Signal | 0.2% Max. | 0.2% Max. | 0.5% Max. | 0.1% Max. | | % Harmonic Dist. 
In Output Product Wave | 15% | 10% to 15% | 5% | 20% | | Temperature Range | −55°C to +125°C | −55°C to +125°C | −55°C to +125°C | −55°C to +125°C | | Frequency Response | 5 K Series, 108 cps | Over 200 cps | Over 5 KC | Over 100 cps | | Approximate Weight (in Ounces) | 0.2 | 0.1 | 0.2 | 0.2 | GENERAL MAGNETICS • INC 135 BLOOMFIELD AVENUE BLOOMFIELD, NEW JERSEY Catalogs are available on Micro Magnetic Modulators, Standard Magnetic Modulators, Miniaturized Multiplying Modulators and Transistor Oscillators. Call or write for your copies, or ask to have a GENERAL MAGNETICS representative contact you for consultation on specific applications. CIRCLE 31 ON READER SERVICE CARD There's logic in Delco's line of digital circuits and equipment Check it yourself. A large variety of standard logic functions are available in three series of Delco circuit modules: the extra low power 100 KC "SM" series for extreme light weight and wide environmental applications; the 200 KC "DM" series for ultra reliability circuits in the low and medium speed range; the high speed "FM" silicon series operates from DC to 10 MC in a wide range of environments. Circuit cards of 200 KC, 5 MC and 10 MC are available where size is less critical. Digital support equipment includes standard and special power supplies, card racks and special circuits. Delco Radio digital cards, modules and support equipment are available now to reduce component and design time costs in your digital systems. Contact our Military Requirements Department for more data and our new low prices. You'll be convinced of the logic in the Delco line. See our display at the IEEE Show—Booth 1423. PREVIEW OF EXHIBITS Instruments Extend Operating Ranges, Are Easier to Work Trends toward greater utility, higher accuracy, bolder display continue NEW YORK—Greater utility, rather than radically new approaches to measuring techniques, characterizes most of the new instruments being exhibited March 25 through 28 at the Coliseum. 
Instrument manufacturers—almost to a man—appear to have concentrated during the past year on developing new instruments that extend the capabilities of their bread-and-butter lines. Wider ranges, greater sensitivity and accuracy, more automatic operation, human engineering in the form of fewer knobs and bolder indication, digital readout, portability—these are some of the continuing trends.

OSCILLOSCOPES—An automatic d-c to 10 Mc oscilloscope will be shown by California Instruments. Vertical sensitivity, horizontal sweep speed and d-c offset are set, positioned and indicated on a digital display of the parameters. A digital-readout oscilloscope programmer for use with Tektronix' digital dual-trace scope will be introduced by the company. Program cards can be set up to measure such parameters as amplitude, time, start-to-stop time intervals or first or second pulse selection. Lumatron Electronics is showing an improved version of its modular oscilloscope. The 0.35-nsec scope has a trigger capability to several Gc.

FREQUENCY MEASURING—Designed for antenna measurements, a receiving system with a frequency range from 20 Mc to 100 Gc will be shown by Scientific-Atlanta. Dynamic range is 60 db. Its new solid-state broadband signal analyzer will speed up spectrum-signature measurements, reports PRD Electronics. Frequency range is 45 Mc to 11 Gc. The unit is designed for checking jammers, rfi sources, broadband microwave tubes and frequency-diversity radars. An ssb spectrum analyzer that includes a built-in frequency synthesizer and self-check features is being introduced by Lavoie Laboratories. Range is 2 to 80 Mc. An accompanying two-tone generator has a range of 20 cps to 20 Kc.
Frequency of unmodulated and modulated signals including a-m, f-m and fsk from 10 cps to 1 Gc can be monitored and measured by a system being exhibited by Rohde & Schwarz.

DETECTOR produces vswr measurements directly on an oscilloscope, reports Telonic Engineering

SIGNAL GENERATORS—General Radio will introduce a new sweep signal generator featuring very high stability. It covers the range from 700 Kc to 230 Mc plus two lower band spreads—one from 400 to 500 Kc, and one from 10.4 to 11 Mc. The marker system is calibrated in frequency and amplitude. A new series of signal-generator modules will be shown by Polarad Electronic Instruments. Signal generators and sources will have ranges of 3.8 to 8.2 Gc and 6.95 to 11 Gc; doublers will provide outputs up to 21 Gc. A modulator is designed to drive the sources. H. H. Scott will show a self-calibrating, combination f-m generator, audio oscillator and multiplex generator in one package about the same size as one unit of the former models. E-H Research Labs is announcing a new lightweight microwave swept-signal oscillator that gives continuous coverage in octave or greater bandwidths from 1 to 40 Gc. Frequencies may be changed by plug-in heads. Three internal frequency markers, 25 v in amplitude, are continuously adjustable. A sweep oscillator with sweep widths as wide as 300 Mc and as narrow as 100 Kc is being shown by Kay Electric. Frequency range is 0.5 Mc to 1.1 Gc. A video sweep generator designed for three modes is being shown by Jerrold. The unit has low residual frequency modulation of 20 cps in narrow-band and cw modes and 700 cps in wideband mode. Range is 1 Kc to 15 Mc. A short-circuit-proof, solid-state pulse generator with a repetition rate of 100 cps to 5 Mc is being shown by the Velonix division of Pulse Engineering. Pulse width can vary from 50 nsec to 1 msec.
A function generator that provides simultaneous outputs of sine, square and triangle waveforms over the frequency range of 0.001 cps to 10 Kc will be shown by Exact Electronics. A random-noise generator that serves as a stable, calibrated white-noise source for vibration and strain analysis will be exhibited by Quan-Tech Labs. Output is continuously variable from 0.01 to 1,000 μv per root cycle and is white from d-c to 100 Kc.

SYNTHESIZERS—A frequency synthesizer that generates precise signals to 50 Mc in steps of 0.01 cps will be shown by Hewlett-Packard. It can be remotely programmed, or programmed by computer. A new high-stability frequency synthesizer by Manson Labs provides over 690,000 discrete frequencies from 2 to 34 Mc in four bands. The synthesizer offers direct digital readout. A frequency synthesizer producing any stable frequency from 10 to 20 Mc with only one temperature-controlled crystal will be shown by the Measurements division of McGraw-Edison. With external multipliers it reaches 1 Gc. One use is ssb-transceiver design.

METERS—A solid-state a-c/d-c digital multimeter is being exhibited by Electronic Associates. Reading speed is 200 a sec. A d-c digital voltmeter to be exhibited by Cimron Corp. features automatic range and polarity with plug-in a-c converters or preamplifiers. Range is 0.1 μv to 1,000 v. A true-rms voltmeter with ranges from 10 μv to 330 v (reaching 10 Kv rms with optional accessories) is being introduced by Ballantine Labs. The unit features flat amplification from 5 cps to 4 Mc with 90 db gain. Accessories provide for current and rms power measurements. Phase shift can be read directly with digital readout with Narda Microwave's d-c to 5-Gc coaxial phase shifter. A portable L, C and R-measuring bridge from Marconi Instruments can also measure electrolytic capacitance and incremental inductance. Victoreen Instruments will show an electrometer dubbed the Femtometer because it can make measurements in the femtoampere ($10^{-15}$ A) range.
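Several of the specifications in this preview are quoted in decibels (the Scientific-Atlanta receiver's 60-db dynamic range, the Ballantine voltmeter's 90-db gain). As a quick reference, a db figure converts to a voltage ratio as 10^(db/20); an editor's sketch of the conversion, not from the article:

```python
def db_to_voltage_ratio(db: float) -> float:
    """Convert a decibel figure to the equivalent voltage ratio."""
    return 10 ** (db / 20)

# 90 db of amplifier gain corresponds to a voltage ratio of about 31,623:1
print(round(db_to_voltage_ratio(90)))  # -> 31623

# 60 db of dynamic range spans a 1000:1 voltage ratio
print(round(db_to_voltage_ratio(60)))  # -> 1000
```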
Gaussmeter that measures a magnetic field from d-c to 30 Kc

...up to 160 channels in a $5\frac{1}{4}$" panel! Versatile addressable or sequential Multiplexers
- Sampling rate to 50,000 channels/sec
- Variable frame length
- Accuracy $\pm 0.02\%$ full scale
- Input levels to $\pm 10$ V

Texas Instruments Multiplexers are all-solid-state units providing accurate, high-speed bipolar operation with low dynamic crossfeed, fast settling time, and variable strobe. Manual channel-select switches facilitate system set-up and check-out. Frame length is selectable from the front panel. Expandable to 160 channels by means of plug-in printed-circuit cards. Case size $5\frac{1}{4}$ by 19 by 18 inches for standard relay-rack mounting. TI's high-speed Model 834 Analog-Digital Converter is an ideal companion instrument to the TI Multiplexer. High speed: 1.5 $\mu$sec per bit. Built-in sample and hold. Accuracy: $\pm 0.05\%$ full scale. Automatic zero stabilization. Ask a TI Application Engineer for further information on digital data handling equipment for your specific needs.

POWER SUPPLY TECHNICAL LITERATURE from kepco®
For the very latest information, Kepco, Inc., offers a wide variety of useful illustrated booklets described below. For your complimentary copies write today.
1 Notes on SYSTEMS APPLICATIONS OF REGULATED POWER SUPPLIES: 40-page REFERENCE HANDBOOK aids the engineer in making full use of the versatility built into many of today's power supplies. Information ranges from the fundamentals of selecting a power supply to detailed theoretical discussions on applications for systems use.
2 POWER SUPPLY NEWS: Newsworthy and pertinent facts related to the use of power supplies in the electronic industry. Feature articles in the present issue describe the effects of noise; the Power Supply Electronic Field Today; Design Notes; and Power Supplies in Systems Applications.
3 AN ANALYSIS of COOLING METHODS: Advantages of forced-air cooling over large-area convection systems are discussed in an effort to analyze overall equipment reliability. 4 UNDERSTANDING POWER SUPPLY TERMINOLOGY and a NOMOGRAPH on WIRE LOSSES: Comprehensive definitions of the most-commonly used significant terms to assist the engineer in the understanding of Regulated Power Supplies. The Nomograph supplied enables rapid determination of voltage loss across the load supply leads as a function of wire size and current. 5 Designing a CONSTANT-CURRENT POWER SUPPLY: This informative material defines a constant-current source. It then describes a method of converting a standard constant-voltage source to a constant-current source. HYBRID SLOW-WAVE structure in Raytheon's 5-Kw twt (left) for phased arrays helps keep power linear over 1.2 to 1.4 Gc range. On tripod is X-band klystron rated at 50 Kw by Varian Associates—it has generated 106 Kw. Typical of Huggins Labs' new permanent-magnet-focused bwo's is 50-mw model for 8.2 to 12.4 Gc Tubes Pack More Punch NEW YORK—First of the S-band driver klystrons for Stanford's 2-mile-long linear accelerator has just been delivered to the university by Eitel-McCullough. The tube will rate as the newest of the new products at Eimac's IEEE Show display. Development started only six months ago. Designed for 75 Kw peak output and tested to 100 Kw, it weighs only 35 lbs. Weight was cut 80 percent by periodic permanent-magnet beam-focusing. Company also has new 500 Kw pentodes. Another tube that borrows twt concepts is GE's traveling-wave multiple-beam klystron (detailed on p 64 of this issue). Metcom's new X and K-band klystrons use an adjustable dielectric tuning rod in the cavity to get flat power response over a 1.5-Gc band. The ½-w tubes are being used mostly in parametric pumps. Six new pulsed magnetrons—four mm-range models and two 1-lb types that put out 1 Kw and ½ Kw at 9.3 Gc—are being shown by Litton Industries. 
There is also a 30 Kw to 10 Mw hollow-beam klystron and a fiber-optic crt with ultraviolet output. One of CBS Labs' special light-sensing, light-emitting and image-conversion tubes is a multiplier phototube with rubidium-telluride sensing surface and peak quantum efficiency of 25 percent. Another is an image dissector used in deep space probes as a star tracker. ITT is showing a ceramic power tube that dissipates 100 kw and has an evaporative cooling anode. Writing speed less than 1 ips is achieved by ITT's Iatron direct-viewing storage tube. Sylvania is showing a 3-inch crt for use in military airborne displays faced by extremely high ambient light. RCA promises a long list of new tubes and microwave devices. One is a developmental Nuvistor for power supplies and small-signal amplification to 350 Mc.

Varistor helps cut picture interference on latest Zenith TV—automatically A development of the patented "Fringe Lock" circuit incorporated in Zenith TV receivers now automatically cuts annoying picture disturbances, whether made by nearby electrical machines or external influences such as passing automobiles. Function of the circuit is to cut off the twin pentode 6HS8 (see below) when external noise is introduced. Plates of the pentode are connected respectively to the AGC and Sync circuits. Two of the grids are fed by composite video signals. Automatic bias setting, varying with signal level fluctuations and always safely above the Sync tips, is provided by the voltage-sensitive resistance characteristics of the type BNR-331 Carborundum varistor. The varistor replaces a potentiometer that required adjustment for maximum noise protection, particularly in fringe areas. The varistor not only provides automatic control and positive, instantaneous cut out, but also costs one-third less than the potentiometer previously used.
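The voltage-sensitive resistance that makes the varistor useful here is the classic silicon-carbide power law, I = kV^α: current rises far faster than voltage, so the same body idles at working voltage yet conducts hard on peaks. A sketch with illustrative, made-up constants (k and α here are not Carborundum's published values):

```python
def varistor_current(volts, k=1e-6, alpha=5.0):
    """Classic SiC varistor law I = k * V**alpha.
    k and alpha are illustrative stand-ins, not measured constants."""
    return k * volts ** alpha

# Doubling the applied voltage multiplies current by 2**alpha, the steep
# nonlinearity that lets one body both set bias and clamp noise peaks:
ratio = varistor_current(2.0) / varistor_current(1.0)
print(ratio)  # 32.0 when alpha = 5
```

An ordinary resistor (α = 1) would give a ratio of 2 in the same comparison; the gap between the two is the whole point of the device.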
New technical data on varistors points the way to wider applications and production savings Carborundum offers a new bulletin and technical literature to aid in the selection and application of silicon carbide non-linear, voltage-sensitive resistors. A variety of body types and sizes is available, with electrical characteristics suitable for applications requiring microamperes at one volt up to kiloamperes at kilovolts. Typical applications are lightning arrestors; contact arc suppression for relay coils and solenoids; protection for silicon rectifiers, capacitors and other electronic components against high peak inverse voltage; and voltage regulation and control. The bulletin lists standard stock varistors with pertinent design information. Individual technical sheets provide E/I characteristic curves and specifications on over 100 stock varistors. For your copies, write Dept. EL-3R, Electronics Division, Carborundum Company, Niagara Falls, New York. Inquiries regarding application to specific problems are invited. Be sure to visit our booth at the IEEE Show.

CAST-IRON HEAD contains gyroaccelerometer system. Fairchild Controls put it in there to test stresses and strains on humans. One use is in the Apollo test program.

IEEE SPECIAL PREVIEW OF EXHIBITS Microcircuits Graduate Into Hardware Products Integrated and thin-film circuits pace development of off-the-shelf lines NEW YORK—Predictions that the industry is "going micro" this year (Electronics, p 45, Feb. 15) will be amply borne out when the IEEE Show opens March 25. Since the last IRE Show, microcircuit exhibits have multiplied and most of the manufacturers will be offering product lines, rather than production capability. Another sign of the swift pace of product development will be lasers. In addition to the lower-power types now readily available, there will be at least two superpower ruby lasers. One, from Radiation at Stanford, puts out at least 500 joules.
Raytheon's 350-joule unit can blast through \( \frac{1}{4} \)-inch steel. Meanwhile, the producers of tube and transistor equipment haven't been idle during the past year. They'll be offering a mixed bag of scores of new products, ranging from data-processing gear to production test equipment.

MICROCIRCUITS—Sylvania will be featuring hardware applications for thin films. Among these are a cigarette-pack-sized transceiver-beacon, being built for use by pilots downed at sea, and a tape-control unit—also for Navy. Linear molecular circuits by Westinghouse Electric include a variety of amplifiers and an oscillator-mixer. Molecular digital circuits will also be shown. High-speed thin-film, hybrid diffused silicon tantalum logic gates will be introduced by Philco Lansdale.

THIS CIRCUIT, reduced to a \( \frac{3}{8} \)-inch-square integrated circuit by Signetics, provides the sense amplifiers in Univac's new aerospace computer.

Silicon planar/epitaxial devices being introduced by Amperex Electronic include choppers and 11 types of mil-spec high-speed switches and amplifiers. Transitron is reportedly planning to introduce a new line of integrated circuits. Texas Instruments Incorporated is showing its expanded line of integrated circuits.

LOGIC MODULES — Flip-flops, rated at 50 Mc with some tested as high as 80, will be at Varo's booth. Sanders Associates will introduce parts of its new digital logic module line. Microcircuit ten-bit 20-Mc shift register by General Instrument is designed for general computer and data system applications. Micro package of 10 npn diffused silicon transistors is fabricated simultaneously by Burroughs in a common emitter, strip configuration that permits interconnection with diode matrices.

RECORDERS — Sangamo Electric's instrumentation magnetic tape recorder/reproducer reduces speed errors with a new eddy-current drive system that eliminates mechanical coupling between the motor and the capstan.
Seven-channel instrumentation magnetic tape recorder, by American-Concertone, has differential capstan and coaxial reels. Computer tape of 1-mil Mylar T from Audio Devices is compatible with 1.5-mil tapes, but provides more footage on standard-sized reels.

FRONTIER FREQUENCY/TIME STANDARDS are the best across-the-board Here are space-tested System Frequency/Time Standards that meet your most exacting requirements for commercial, industrial or military applications. The same engineering capability that supplies Frequency/Time control to major space projects places in your hands advanced crystal control that fulfills any precision timing assignment. ■ Frontier's line features a wide range of frequencies (30 cps to 100 megacycles), stabilities for every need ($1 \times 10^{-8}$, $20 \times 10^{-6}$ or whatever you require), solid-state circuitry and oven control (no noise-generating thermostats!). The line has unitized modular construction for best shock and vibration resistance — fast start-up and low power consumption — choice of sizes and mountings — individual test documentation shipped with each unit — mounts in any position. Write or call today for complete technical information: Visit us at the I.E.E.E. Show Booth 3017 March 23 to 28 The Coliseum, New York COMMERCIAL | MILITARY SYSTEMS CAPABILITY FRONTIER ELECTRONICS DIVISION INTERNATIONAL RESISTANCE COMPANY 4600 Memphis Avenue • Cleveland 9, Ohio • Phone: 216 749-1570 CIRCLE 39 ON READER SERVICE CARD

VISUAL PRINTOUT, magnetic keyboard, remote programming ability feature Navigation Computer's tape-puncher for numerical machine control. CLEAN LINES improve readability of Honeywell Precision Meter division's new panel meters. Concave plastic forms face. CARTRIDGE-LOADING tape-drive system introduced by IBM has an instantaneous data rate of 170,000 8-bit characters a second.

Rheem Electronics' bidirectional punched-tape reader can sense 300 characters/second and stop on character.
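Frontier's tightest quoted stability, $1 \times 10^{-8}$, is easier to appreciate as accumulated time error: a constant fractional frequency offset builds up clock error linearly with elapsed time. A quick worked example (the function name is ours):

```python
def worst_case_error_sec_per_day(fractional_stability):
    """A constant fractional frequency offset df/f accumulates
    time error linearly: error = (df/f) * elapsed_time."""
    seconds_per_day = 86_400
    return fractional_stability * seconds_per_day

# Frontier's best quoted figure, 1 x 10^-8:
print(f"{worst_case_error_sec_per_day(1e-8) * 1e3:.3f} ms per day")  # 0.864 ms per day
```

The looser $20 \times 10^{-6}$ grade works out to under two seconds a day by the same arithmetic, which is why the line spans "stabilities for every need."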
Bausch & Lomb's X-Y strip chart recorder records multiple inputs directly, d-c volts, ohms, or milliamps, without external converters. Vertical strip-chart recorders for transcribing telemetry and analog computer readout and other uses are being introduced by American Optical. Two low-cost X-Y recorders being shown by Houston Instrument feature built-in time base, selectable time sweep and electric pen lift.

COMMUNICATIONS—North Electric says its digital-to-voice converter/multiplexer is unique. Airlines reservations and stock quotation systems are applications. For amateurs, National Radio has a 6-inch-high ssb transceiver that puts out 200 watts. High-frequency crystal-lattice filter for both receiving and transmitting eliminates multiple conversions. Tv camera that provides 1,000 horizontal lines and 700 vertical lines will be shown by Dage division of Thompson Ramo Wooldridge. AN/ARW-79 receiver with proportional control decoder for controlling pilotless aircraft and missiles will be featured by RS Electronics. Avco will show its satellite radio-command receiver. Rixon Electronics will demonstrate a digital-data modem that operates at up to 3,600 bits/sec. Unique feature is carrier exalting or reinjection.

Yokogawa Electric Works will show equipment for graphical analysis of soft magnetic materials. Medium and high-power semiconductor devices can be tested with a unit from Sierra Electronic division of Philco. SCR gating circuits reduce power dissipation in devices tested. Bridge to measure temperature coefficients of resistors from 1 ohm to 1 Meg is being introduced by Daven. California Technical Industries has updated its automatic circuit analyzer for programming input and output test data.

COMPUTER GEAR — Magnetostrictive delay lines are used for storage in a printer that Potter Instrument reports conserves computer loading time by as much as 20:1. It accepts asynchronous or synchronous data and prints 1,200 alphanumeric lines a minute.
Packard Bell Computer reports its two high-accuracy analog-to-digital converters are fast enough to digitize high-speed transient phenomena and telemetry data online. Speeds are 30,000 15-bit conversions a second and 70 Kc. On-line auto/cross-correlation computer for medical electronics and vibration analysis is being exhibited by Mnemotron. Frequency response is 0.006 to 100 cps. Transfer functions of servo systems can be derived directly with Wayne Kerr's new computer. Frequency range is 100 cps to 5 Kc. Solid-state angle position indicator, featuring a 30-sec repeatability, accuracy of 6 min of arc, and digital readout over 360 degrees is being exhibited by North Atlantic Industries.

ANTENNA DRIVES—A variable-speed tracking antenna system will be demonstrated by Technical Appliance. Hydraulic operation minimizes rfi. Microwave Associates will introduce a multiple-bit phase shifter for L-band antenna scanning, and expects it to have wide applications in phased-array radar. Guarded-crossbar 600-channel scanner will be shown by Dymec division of Hewlett-Packard. Low signal-path resistance permits switching of microvolt signals. Miniature pressure-sensitive cutoff charging devices for use with Yardney Electric's AgZn and AgCd batteries go into operation as gassing increases within the battery to about 2 psig. Panel meters only 3/4 inch in diameter will be among Triplett Electrical Instrument's new products. Infra-pack, a modular system for providing infrared instrumentation, will be shown by Telewave Labs. Consumer electronics competition from Japan is typified by a 16-inch color television set, featuring only a hue control for color adjustment, by Toshiba.

NEW FROM SANGAMO TOUGHER TANTALUM CAPACITORS These solid electrolyte capacitors, Sangamo Type 595, represent a distinct achievement in tantalum capacitors.
They utilize Sangamo's exclusive "Innerseal" construction with the terminals mechanically secured to the tubular container and precisely positioned without regard to the capacitor element. The seal is produced with a minimum of solder and flux, and with minimum thermal and mechanical stress on the glass insulator. There is absolutely no reliance on solder for mechanical strength. That's why these tougher units give peak performance under the most drastic shock and vibration conditions. Sangamo tantalum capacitors comply with all the electrical and mechanical requirements of Mil-C-26655A. Basically, these tantalum capacitors provide the highest capacitance per cubic inch in an extremely small and strong, hermetically sealed package. Sangamo Type 595 capacitors are designed for filter, by-pass, coupling, blocking, and low voltage applications in telemetering devices, airborne systems, computers, missiles, and transistor circuits. They have low dissipation factor, low dc leakage, and excellent shelf life. They are available in capacitance values of 0.22 to 330 mfd, and in voltages from 6 to 35 WVDC. They're suitable for operation at full-rated voltages over a temperature range of -80°C to +65°C and, when properly derated, will operate up to +125°C. Complete information is yours for the asking. ELECTRONIC COMPONENTS SANGAMO ELECTRIC COMPANY SPRINGFIELD, ILLINOIS

The new 4700 Series combines accuracy, application flexibility, and operator convenience unmatched in other instrumentation recorders. **ACCURACY** - Eddy current clutch for smoother tape handling—faster servo response of 50%/sec. over a range of 30% of nominal tape speed. - Low-mass drive resulting in 100 times better TDE than conventional high-mass drives. - Precision guiding for minimum tape skew. - Vacuum tensioning and cleaning producing positive head-to-tape contact. - Phase-equalized electronics. **FLEXIBILITY** - ¼-inch through 2-inch tape-handling capacity with no changes other than heads and guides.
- Reel-to-reel or continuous loop operation with no mechanical changes. - Modular construction for system expansion. - Four speeds of either FM or direct record and reproduce at the flip of a switch. - Direct galvo drive capability plus squelch. - Full IRIG compatibility. **CONVENIENCE** - 8 speeds (15/16 ips through 120 ips) controlled by a single switch—no belt changing. - Attractive control panel designed for the operator. - Eye-level electronics modules for quick, easy setup. - No mechanical brake to adjust with linear DC reel drive servos. There's much more to tell about the new 4700 Series... write, wire, or phone us for the complete story. Unique capstan drive with no mechanical coupling isolates motor from capstan—an eddy current clutch is the secret. This vibration-free coupling system is combined with Sangamo's proven eddy current speed control in the only light-mass tape drive—another "first" for your instrumentation needs from Sangamo. SANGAMO ELECTRIC COMPANY SPRINGFIELD, ILLINOIS

MORE VERSATILE THAN EVER "SPEEDIVAC" MULTIPLE VAPOR SOURCE VACUUM COATING UNIT The following are some of the special features supplied as standard fittings in the EDWARDS 19E6 evaporator: Stainless Steel Bell Jar, Viton Gasketting, Six Position Vapor Source, Substrate Heater, Motor Driven Rotary Substrate Holder, Glow Discharge Cleaning, Ultimate Vacuum with LN₂ trap 2 x 10⁻⁷ Torr. Fast reliable pump downs are, of course, a feature of all EDWARDS evaporators. Write for your free technical reprints, written by members of our research staff, on "Thin Films and Ultra High Vacuum Techniques."

MICRO-CIRCUIT JIG AND MASK CHANGER The micro-circuit jig is complete with a six-position vapor source, enabling six 2" square substrates to be coated with six different materials using six different masks.
The jig is also provided with two substrate heaters, one to preheat the substrate to 150°C., and the second to raise the temperature of the substrate in the evaporation position to 300°C. Resistance monitor pick-up points are provided, and a separate resistance monitor and automatic source shutter are available. Standard EDWARDS patented glow discharge cleaning rings are supplied with the jig, along with the rotating six-position vapor source. The accuracy of registration of each successive mask in contact with a given substrate is within ±0.001".

ELECTRON BOMBARDED VAPOR SOURCE Designed as an inexpensive vapor source for depositing thick films of material containing Ni, Fe or Co. The source is complete with a wire feed mechanism and handwheel assembly for continuous controlled evaporation by feeding wire to the vapor source from the handwheel mounted externally on the coating unit. A complete power supply to operate the source is also available, complete with interlocks to the vacuum system.

MARK II MODULATED BEAM PHOTOMETER The "Speedivac" Modulated Beam Photometer provides a method of controlling the optical thickness of films deposited by evaporation or sputtering by indicating the changing optical characteristics of the films as their thickness increases. The instrument measures the reflection from or the transmission through coated glass surfaces as a function of wavelength. Both these quantities can be measured alternately if two light sensing elements are used.

OMEGATRON MASS SPECTROMETER The "Speedivac" Omegatron Mass Spectrometer analyzer delivers quantitative and qualitative data on minute quantities of residual gases and vapors in vacuum systems. The unit provides the following characteristics: High Sensitivity • Extended Range • Excellent Resolving Power • Rapid Response, Linear Scan • Pressure Measurement Independent of Gas Composition • High Sensitivity Leak Detection • Simple Construction.
GENERAL SPECIFICATIONS INCLUDE: Range—Mass 2-200; Resolution — Complete separation of adjacent peaks to mass 32 and very good separation to mass 60; Sensitivity — The unit's high sensitivity enables the analysis of residual gases in the range of 10⁻⁶ to 10⁻¹¹ Torr. HOW CHEAP IS "CHEAP"? "Why should we buy from you when we can get the 'same thing' from other suppliers at a lower price?" In selecting a supplier of lacing tape (or any component), price and compliance with specifications are not the only criteria. But too often, manufacturers ignore the other factors involved and consequently lose money. For example, in a $15,000 piece of equipment there may be only 15 cents worth of Gudebrod lacing tape. It costs $75 to work this tape. It may be possible to buy the same amount of tape from other suppliers for 2 or 3 cents less... it "will meet the specs" according to these suppliers. But one of our customers recently pointed out why he still specifies only Gudebrod lacing tape in such cases. "We tried buying some cheaper tape that 'met the specs.' Within a few months our production was off by 50%... boy, did the production people really scream about that tape. And our labor costs doubled... our costing people really flipped! "Another thing, why should we risk the possible loss of thousands of dollars when the original material cost difference is only a few cents. Once you put cheaper tape on and something goes wrong after the equipment is finished... you've had it. No, thank you! We learned our lesson! We buy Gudebrod lacing tape!" Whether your firm uses one spool of lacing tape or thousands, there are four advantages in specifying Gudebrod for all your lacing requirements: 1. **Gudebrod lacing tape guarantees increased production!** 2. **Gudebrod lacing tape guarantees reduced labor costs!** 3. **Gudebrod lacing tape guarantees minimal maintenance after installation!** 4. 
**Gudebrod guarantees quality!** On every spool is a lot number and seal which guarantees that all Gudebrod lacing tape is produced under strict quality control. Our standards are more exacting than those required for compliance with Mil-T. Our Technical Products Data Book explains in detail the complete line of Gudebrod lacing tapes for both civilian and military use. For your copy write to Electronics Division --- **MEETINGS AHEAD** **PACIFIC COMPUTER CONFERENCE**, IEEE; California Institute of Technology, Pasadena, Calif., March 15-16. **BIONICS SYMPOSIUM**, United States Air Force; Biltmore Hotel, Dayton, Ohio, March 18-21. **EUROPEAN ELECTRONICS MARKET**, EIA; Statler Hilton, Washington, D.C., March 19-22. **INSTITUTE OF PRINTED CIRCUITS MEETING**, IPC; Barbizon-Plaza Hotel, New York City, March 25-27. **IEEE INTERNATIONAL CONVENTION**, Institute of Electrical and Electronics Engineers; Coliseum and Waldorf-Astoria Hotel, New York, N.Y., March 25-28. **ELECTRON BEAM SYMPOSIUM**, Alloyd Electronics Corp.; Somerset Hotel, Boston, Mass., March 28-29. **ENGINEERING ASPECTS OF MAGNETOHYDRODYNAMICS SYMPOSIUM**, IEEE, IAS, University of California; at UC, Berkeley, Calif., April 10-11. **OHIO VALLEY INSTRUMENT-AUTOMATION SYMPOSIUM**, ISA, et al; Cincinnati Gardens, Cincinnati, Ohio, April 16-17. **CLEVELAND ELECTRONICS CONFERENCE**, IEEE, Case Institute, Western Reserve University, ISA; Hotel Sheraton, Cleveland, O., April 16-18. **OPTICAL MASERS SYMPOSIUM**, IEEE, American Optical Society, Armed Services, et al; Waldorf Astoria Hotel, New York City, April 16-18. **INTERNATIONAL NONLINEAR MAGNETICS CONFERENCE**, IEEE; Shoreham Hotel, Washington, D.C., April 17-19. **SOUTHWESTERN IEEE CONFERENCE & ELECTRONICS SHOW**, IEEE (Region 5); Dallas Memorial Auditorium, Dallas, Texas, April 17-19. **BIO-MEDICAL ENGINEERING SYMPOSIUM**, IEEE, et al; Del Webb's Ocean House, San Diego, Calif., April 22-24. 
**NATIONAL ELECTROMAGNETIC RELAY CONFERENCE**, Oklahoma State University; OSU, Stillwater, Okla., April 23-25. **ADVANCE REPORT** **INSTRUMENT SOCIETY OF AMERICA CONFERENCE**, ISA; McCormick Place, Chicago, Ill., Sept. 9-12. March 30 is deadline for submitting abstracts to: T. A. Abbot, Conference Program Coordinator, Instrument Society of America, Penn Sheraton Hotel, 530 Wm. Penn Place, Pittsburgh 19, Pennsylvania. Technical sessions to be given include: aerospace instrumentation; analog computation & process control; automatic data acquisition; automatic control systems; computer control and systems engineering; data handling and computation; electronic instrumentation; noise, shock and vibration measurement; radiation methods of analysis; reflex voltage devices & techniques; solid state controls & instruments; transducers. The Siliconix 12 nsec 5 mw Dual NAND Gate THIS PLANAR SILICON INTEGRATED CIRCUIT HAS A LOWER POWER-SPEED PRODUCT (60 PICOWATT-SECONDS) AT HIGHER FAN-OUT THAN CONVENTIONAL DIODE-COUPLED NAND GATES BECAUSE OF: a. The unique emitter-follower diode-clamp circuit... b. Small geometry which minimizes capacitance... c. Epitaxially grown collectors. PROPAGATION DELAY VARIES LESS THAN ±7.5% FROM -55°C TO +125°C WITH VCC 4 TO 5 VOLTS. USE THIS GATE AS A NAND, NOT AND-OR, BISTABLE FLIP-FLOP, OR HALF ADDER. ANOTHER EXAMPLE OF THE WAY SILICONIX COMBINES CIRCUIT AND SEMICONDUCTOR TECHNOLOGIES INTO DIGITAL AND LINEAR INTEGRATED CIRCUITS AND COMPONENTS. WRITE FOR DETAILS. Siliconix Incorporated 1140 West Evelyn Ave. 
• Sunnyvale 33, California Telephone 245-1000 • Area Code 408 • TWX 408-737-9948 FOUR YEARS OF PERFORMANCE—FOUR YEARS OF EXPONENTIAL GROWTH Many test instruments have been shipped thru our doors since we first opened them in 1959 and it leaves us with a feeling of appreciation to our customers—not just because you have bought our products but because you have recognized our pledge to maintain performance, reliability, flexibility and instrument accessibility. We estimate some 20 million charts may have been plotted on our equipment by now. But the chart we're proudest of was not plotted on one of our recorders...but by ALL of them. That is our own growth chart. It's fun looking at a growth curve that just goes up. Of course, we're not getting smug about it...we know we're not the biggest in the field. We're just going to keep adding to our line with one thought in mind: our growth can only result by a continuous, successful striving for designed simplicity. May we send you a brochure? SEE US IN BOOTH 3029 AT 1963 IEEE SHOW MARCH 25-29 16 PAGE SHORT FORM CATALOG AVAILABLE ON REQUEST The HLVC-150 log voltmeter-converter's new design principle permits measurements accurate to 0.2 db of voltage or voltage ratios on a true logarithmic scale over a 3160:1 or 70 db continuous range. AC or DC inputs, DC output for recording. $1450. The HR-95 Recorder, a high performance 8½ x 11" recorder featuring plug-in modules and dual regulated zener reference supplies. Front recording panel swings open for easy access to all of the electrical and mechanical components. $1250. Emphasizing a straightforward design approach, the HR-97 is an 11 x 17" XY recorder with 1 mv/in basic sensitivity, 0.25% of full scale accuracy, 15 in/sec pen speed, zener reference voltages, snap-on pen assembly and vacuum paper hold-down. $1390. The HR 80 T-Y* recorder provides rectilinear recording as a function of time on standard graph paper of any variable expressible as DC voltage. $475. 
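Two of the figures quoted above reduce to one-line arithmetic: the Siliconix gate's 60-picowatt-second power-speed product is just 12 nsec times 5 mw (nanoseconds times milliwatts is picojoules), and the HLVC-150's 3160:1 range is 70 db because decibels for voltage ratios are 20 log₁₀(ratio). A quick check in Python, with function names of our own:

```python
import math

def power_speed_product_pj(delay_nsec, power_mw):
    """Switching figure of merit: delay times dissipation.
    Nanoseconds times milliwatts is picojoules (10^-9 x 10^-3 = 10^-12)."""
    return delay_nsec * power_mw

def voltage_ratio_db(ratio):
    """Voltage ratios convert to decibels as 20 * log10(ratio)."""
    return 20 * math.log10(ratio)

print(power_speed_product_pj(12, 5))   # 60 -- Siliconix's 60 picowatt-seconds
print(round(voltage_ratio_db(3160)))   # 70 -- the HLVC-150's 3160:1 range
```

The power-speed product is the quantity the Siliconix ad is bragging about: for a fixed logic family it is roughly constant, so halving delay usually costs double the power.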
The HR 96 is an 8½ x 11" XY recorder with a 1 mv/in basic sensitivity, 0.25% accuracy, 10 in/sec pen speed, zener reference voltages, snap-on pen assembly and 0.5 to 2 in/sec time base. $895. houston instrument Corporation 4950 TERMINAL AVENUE / BELLAIRE 101, TEXAS / MOhawk 7-7403 / Cable: HOINCO TWX: 713-571-2063 †PATENTS PENDING PUMPS • VALVES • BAFFLES • GAUGES VACUUM COMPONENTS DIFFUSION PUMPS Only NRC offers a full line of high-speed diffusion pumps with all these important advantages: - No super-heating... minimizes pump fluid decomposition. - Fractionating jet assembly... constantly purifies pump fluid... lower pressures can be held longer. - Patented Cold Cap*... an NRC exclusive... cuts back-streaming 98%. Available now in 4", 6", 10", 16" and 32" sizes for all your high vacuum testing and production applications, where performance and reliability really count. *Licensed exclusively from Edwards High Vacuum Limited. U.S. Patent No. 2919061. SLIDE VALVES New NRC Slide Valves (HC Series) are very-high and ultra-high vacuum valves at conventional prices. Pressures of $10^{-8}$ to $10^{-10}$ torr range have been produced... without baking... in vacuum systems using these valves. 100% clear opening and low height provide highest conductance. Double-pumped stem seal cuts gas bursts 99%. They're available in 4" and 6" sizes, either hand or air operated. CRYO AND MOLECULAR BAFFLES Now, your high vacuum system can be operated at lowest pressures for extended periods of time with no detectable trace of hydrocarbons reaching the chamber! The reason: NRC's Circular Chevron Cryo Baffle and the all-new NRC Molecular Sorbent Baffle (which utilizes three full trays of zeolite) virtually eliminate back-migration of pump fluid vapors. Yet, they provide exceptionally high conductance for maximum useful pumping speed. VACUUM GAUGES Get accurate, reproducible direct-readings to $10^{-13}$ torr with the new NRC Model 752 Redhead Magnetron Gauge! 
The only really satisfactory gauge commercially available for measurements below $1 \times 10^{-9}$ torr. Increased current readings provide 50 times the sensitivity of hot-wire ionization gauges. Because there's no hot filament, it's magnitudes less "gassy", can't become contaminated by vaporizing of gauge elements. And the 752 Gauge is not X-ray limited. NRC's full line of vacuum gauges and controls also includes an improved Bayard-Alpert type gauge, Model 751, for accurate, reliable measurement in the $1 \times 10^{-8}$ to $10^{-10}$ torr range. SEE THESE AND OTHER NEW NRC VACUUM COMPONENTS AND SYSTEMS —IEEE SHOW BOOTH # 4425-4427 A Subsidiary of National Research Corporation NRC EQUIPMENT CORPORATION 150 Charlemont St. Newton 61, Massachusetts Area Code 617, Decatur 2-5800 MANUFACTURING PLANTS IN NEWTON, MASSACHUSETTS AND PALO ALTO, CALIFORNIA CIRCLE 48 ON READER SERVICE CARD New DPDT TRIMPOT® Relay: 160 mw Sensitivity, Microminiature Size! This new DPDT is more than just small—it's reliable! Subject it to 150 G shock or 30 G, 3000 cps vibration, and you still get the performance that's on the published data sheet. Model 3101 has single-coil design, rotary balanced armature, hermetically sealed case, and self-cleaning contacts. It's designed to meet or exceed all environmental requirements of MIL-R-5757D. Every relay goes through a 5000-operation run-in and 100% final inspection, including mass-spectrometer leak testing, for all important characteristics. In addition, monthly samples undergo the punishment of the Bourns Reliability Assurance Program. This program, originally developed for TRIMPOT potentiometers, is one of the most extensive series of electrical and environmental tests in the electronics industry. It underscores the trustworthiness of the name TRIMPOT in relays, too. Model 3101 relays and their SPDT companion, Model 3100, are available immediately from the factory in a full range of coil-resistances and with voltage or current adjustment. 
Three terminal types, two mounting-bracket styles. Write for complete technical data. Size: .2" x .4" x .6" Maximum operating temperature: 125°C Contacts: DPDT; Rating: 1.0 amp resistive, 26.5 VDC Coil resistances: 65Ω to 2000Ω Pick-up sensitivity: 160 milliwatts Vibration: 30 G standard, 60 G special Shock: 150 G Compare its space requirements with those of the usual crystal-can or half-crystal-can types. Manufacturer: TRIMPOT® potentiometers; transducers for position, pressure, acceleration. Plants: Riverside, Calif.; Ames, Iowa; and Toronto, Canada SEE BOURNS PRODUCTS IN BOOTHS 1429-1431 AT THE IEEE SHOW CIRCLE 49 ON READER SERVICE CARD

The following standard switches are representative of those produced by Tech Laboratories, all of which meet government specifications. Write or teletype for further information. The Type 2A is the standard control switch for electronic instruments and high quality equipment. It is made in a number of combinations with as many as four poles per deck and 24 positions. Insulation is phenolic. The Type 2M is identical except for Melamine insulation. In the table, Code No. "S" stands for "shorting" and "N" for "non-shorting".

| TYPE | CODE | DIA. | RATING | NO. POS. | NO. DECKS | POLES | PRICE |
|--------|----------|---------|--------|----------|-----------|-------|-------|
| 2A | A1S24 | 1 3/4" | 3A | 24 | 1 | 1 | $4.50 |
| 2A | A4S5 | 1 3/4" | 3A | 4 | 1 | 4 | $6.00 |
| 2A | A1N10-2 | 1 3/4" | 3A | 10 | 2 | 1 | $7.00 |
| 2M | M1S24 | 1 3/4" | 3A | 24 | 1 | 4 | $5.50 |
| 2M | M4N3 | 1 3/4" | 3A | 3 | 1 | 4 | $7.00 |

The new Type 3A molded miniature switch, for use in all military and commercial applications in which a superior instrument switch is required, can be furnished with as many as eight decks and up to twelve positions per deck, single pole, or six positions double pole. It has adjustable stops for any lower number of steps.
The switch, 1 1/4" in diameter, carries 5 amp and is furnished "Shorting", with "non-shorting" types available on request. It can be supplied solenoid-operated and hermetically sealed. Write for details and prices. These standard rugged control switches are the standby for equipment calling for dependable switches in a hurry. Will meet all standard government specs. All parts carried in stock. An infinite number of combinations possible. Teletype your specs. The price of the basic frame is the same for all, viz. $4.00 each. The type 600 and 800 are priced at $8.00 per deck plus frame price, and the type 900 switches are $9.50 per deck plus the $4.00 frame price.

| Type | Size | Rating | Max. Pos. Non-Short. | Max. Pos. Short. |
|------|-----------|--------|----------------------|------------------|
| 600 | 1 3/4" sq.| 5A | 24 | 12 |
| 800 | 2 1/4" sq.| 5A | 32 | 16 |
| 900 | 2 3/4" sq.| 10A | 48 | 24 |

TECH LAB switches are now standard equipment in most missiles as well as in ground controls. Furnished sealed to meet all specs. Write for details. Prices on request. A complete line of audio and r.f. attenuators with both manual and remote control. We will design components to meet your specific requirements, if standard units are inadequate. Please write for catalogue and send us your specs for estimate. WRITE FOR INFORMATION ON NEW DIGITAL BINARY SWITCHES WRITE FOR NEW BULLETINS ON SWITCHES • DECADE RESISTORS • POTENTIOMETERS TECH LABORATORIES INC. • Palisades Park, New Jersey March 15, 1963 • electronics Put our 2¢ worth in! Think solid state design's too expensive? Not with AMPin-cert® DUO-TYNE® Flag Connectors. Cost approximates just 2¢ a line! And performance meets and exceeds all UL requirements. This low cost reliability means that now you can design solid state components into all your electrical/electronic products (washers, dryers, power tools, vending machines, organs, etc.). 
And there's no worry here that production problems will stifle your design creativity. This connector is right down the production man's alley. Strip-mounted, crimp, manual snap-in contacts make possible automated job lot assembly. This means speedy automatic wire preparation with an A-MP® crimping machine. Separate harness assembly. Swift loading of connector housings. Penny-pinching features that add up to big dollar savings in the form of lowest installed costs. In-use service is no trouble with snap-in snap-out contact design. This allows for simplified field servicing. Every way you look at it, the AMPin-cert DUO-TYNE Flag Connector is a money-saver—a money-saver that means wider design parameters for you. Learn more about this economical new connector. How it can help you get more design freedom, greater reliability and lowest installed costs. Extensive product line includes 3 position to 22 position housings...with or without mounting ears. Wherever your need, whatever your need, AMP puts an end to every circuit problem! Additional information available on request. *Trademark of AMP INCORPORATED AMP INCORPORATED Harrisburg, Pennsylvania Visit us at the IEEE Show, Booths 2527-31 and 2837, March 25-28, 1963 CIRCLE 51 ON READER SERVICE CARD ANCIENT HISTORY Mincom has delivered reliable 1-megacycle performance for the past 5 years. We'll Demonstrate Predetection at IEEE Booth 3833 Reliable wideband performance at Mincom is an old story — and a good one. Mincom systems were recording and reproducing extremely complex signals at 1 mc as far back as 1955. Today Mincom's 1-mc system, the CM-100, is noted as a pioneer in operational predetection. Another system, the CMP-100, is a smaller mobile unit for recording in the field—also with 1 mc at 120 ips. The CM and CMP (as well as the other two basic Mincom systems) provide the simple, reliable data-gathering capability possible only with longitudinal recording on fixed heads. 
For all the details on Mincom's dependable wideband instrumentation, write us today. Mincom Division 3M COMPANY 2049 South Barrington Avenue, Los Angeles 25 425 13th Street N. W., Washington 4, D. C. SUBMINIATURE CIRCUIT BREAKERS MIL-TYPE The Heinemann Series SM: hermetically sealed and built to take it under rough environmental conditions. One-, two-, and three-pole models. Available in any integral or fractional current rating you want, from 0.050 to 20 amps. For 230V AC maximum, 60 or 400 cycles; or 50V DC maximum. Choice of two time-delay responses for each voltage and frequency. Hydraulic-magnetic actuation provides temperature-stable performance: nominal load-current capacity and specified trip-points are completely unaffected by ambient temperature. Details: Bulletin 3504. COMMERCIAL The Heinemann Series VP: low-cost, molded-case, commercial version of the Series SM, opposite. Exceptionally lightweight: only 1.5 ounces. Single-pole only. Can be supplied in our conventional series-trip construction or any of four special-function models (shunt-trip, relay trip, calibrating-tap, auxiliary-contact). Available in the same range of current and voltage ratings as the Series SM, and with a similar choice of time-delay responses. Hydraulic-magnetic actuation, of course—hence, no de-rating for high ambient temperatures. Details: Bulletin VP. HEINEMANN ELECTRIC COMPANY 2600 Brunswick Pike, Trenton 2, N.J. Special Introductory Offer To new members of the ELECTRONICS and CONTROL ENGINEERS' Book Club ANY ONE FOR ONLY $1.00 YOURS WITH A CHARTER MEMBERSHIP AND SENT WITH YOUR FIRST SELECTION VALUES FROM $7.00 TO $22.50 Electronic Designers' Handbook, Edited by R. W. Landee, D. C. Davis, and A. P. Albrecht. Presents detailed, practical design data. Publisher's Price, $17.50 Club Price, $14.95 Electronic Switching, Timing, and Pulse Circuits by Joseph M. Pettit. Provides practical understanding of complex circuits. 
Publisher's Price, $8.50 Club Price, $7.25 Digital Computer and Control Engineering by Robert S. Ledley. Coverage from basic design to advanced programming techniques. Publisher's Price, $14.50 Club Price, $12.35 Wave Generation and Shaping by Leonard Strauss. Essential features and techniques of practical wave-generating and shaping circuits. Publisher's Price, $12.50 Club Price, $10.65 Pulse and Digital Circuits by J. Millman and H. Taub. Fully covers pulse and digital circuit operation for electronic systems design. Publisher's Price, $14.00 Club Price, $11.90 Mathematics for Electronics with Applications by H. M. Newman and J. A. Albin. Methods for solving practical problems. Publisher's Price, $7.00 Club Price, $5.95 Magnetic Recording Techniques by John E. Stewart. Full description of magnetic recording methods and devices. Publisher's Price, $9.00 Club Price, $7.65 Control Engineers' Handbook by John G. Truxal. A wealth of practical help on automatic feedback control systems. Publisher's Price, $22.50 Club Price, $19.10 Transistor Circuit Design, presented by the Engineering Staff of Texas Instruments. Relates theory to actual practice. Publisher's Price, $15.00 Club Price, $12.75 Information Transmission, Modulation, and Noise by M. Schwartz. A comprehensive approach to communication systems. Publisher's Price, $11.75 Club Price, $9.95 Select one for JUST A DOLLAR! Choose from Electronic Designers' Handbook, Magnetic Recording Techniques, Wave Generation and Shaping, and seven other valuable books. It's your introduction to membership in The Electronic and Control Engineers' Book Club. If you're missing out on important technical literature—if today's high cost of reading curbs the growth of your library—here's the solution to your problem. The Electronic and Control Engineers' Book Club was organized for you, to provide an economical technical reading program that cannot fail to be of value to you. All books are chosen by qualified editors and consultants. 
Their thoroughgoing understanding of the standards and values of the field guarantees the authenticity of the selections. How the Club operates. Periodically you receive free of charge The Electronic and Control Engineers' Book Bulletin (issued eight times a year) which gives you advance notice of the next main selection, as well as a number of alternate selections. If you want the main selection, you need make no choice; it will be mailed to you. If you want an alternate selection, or if you want no book at all for that particular period, notify the Club by returning the convenient card enclosed with each Bulletin. We ask you to agree only to the purchase of three books in a year. Certainly out of the large number of books in your field offered in any twelve months there will be at least three that you will find of interest. By joining the Club you save yourself the bother of searching and shopping, and save in cost about 15 per cent from publishers' prices. Send no money now. Just check any two books you want—one for only $1.00 and one as your first Club selection—in the coupon below. Take advantage of this offer now, and get two books for less than the regular price of one. THIS COUPON IS WORTH UP TO $21.50 The Electronic and Control Engineers' Book Club, Dept. L-3-15 330 West 42nd Street, New York 36, N. Y. Please enroll me as a member of the Electronic and Control Engineers' Book Club, and send me the two books I have indicated at the right. You will bill me for my first selection at the special Club price and $1.00 for my first membership book, plus a few cents for delivery costs. (The Club reserves the right to change book orders.) Forthcoming selections will be described to me in advance and I may decline any book. I need take only 3 selections or alternates in 12 months of membership. (This offer good in U.S. only.) 
PLEASE PRINT Name ................................................................................................................ Address ................................................................................................................ City ........................................................ Zone... State.......................... Company ............................................................................................................ NO RISK GUARANTEE If not completely satisfied, you may return your first shipment within 10 days and your membership will be canceled. Check 2 Books: We will send the higher priced book for only $1.00, and the other as your first selection. ☐ Electronic Designers' Handbook, $14.95 ☐ Electronic Switching, Timing, and Pulse Circuits, $7.25 ☐ Digital Computer and Control Engineering, $12.35 ☐ Wave Generation and Shaping, $10.65 ☐ Pulse and Digital Circuits, $11.90 ☐ Mathematics for Electronics with Applications, $5.95 ☐ Magnetic Recording Techniques, $7.65 ☐ Control Engineers' Handbook, $19.10 ☐ Transistor Circuit Design, $12.75 ☐ Information Transmission, Modulation, and Noise, $9.95 L-3-15 SILICON PLANAR EPITAXIAL DIODES TRANSITRON'S NEW SG5000-5400 SERIES — OUTSTANDING SUCCESSORS TO THE POPULAR SG5000 — PROVIDES THE MOST EFFECTIVE COMBINATION OF HIGH FORWARD CONDUCTANCE, LOW CAPACITANCE AND FAST SWITCHING EVER OFFERED TO THE INDUSTRY. DEMANDED Introduced a short time ago, the SG5000 offered a new high in reliability, performance and versatility that quickly made it a popular component for computer circuit design. Fully aware of the need for a complete range of similar devices, Transitron has now developed a series of premium subminiature glass silicon planar epitaxial diodes. EXPANDED Transitron's new SG5000-5400 series offers a combination of 3 major characteristics that is superior to any now available: higher forward conductance . . . 
200 to 400 mA @ 1 Volt; lower capacitance . . . 2 to 4 pf @ 0 Volts; faster switching . . . down to 2 nsec. All types will fully meet the rigid requirements of military and space exploration high reliability systems. | Type | Minimum Forward Current @ 1 Volt (mA) | Minimum Breakdown Voltage @ 5µA (Volts) | Maximum Capacitance @ 0 Volts (pf) | Maximum Inverse Recovery Time (nsec) | |--------|--------------------------------------|----------------------------------------|-----------------------------------|-------------------------------------| | SG5000 | 200 | 100 | 2 | 2 | | SG5100 | 400 | 50 | 4 | 2 | | SG5200 | 400 | 75 | 4 | 2 | | SG5300 | 300 | 100 | 2 | 2 | | SG5400 | 200 | 150 | 2 | 2 | A balanced combination of very low capacitance and exceptional high current switching makes the diodes of the new SG5000 series ideal for memory core driving applications. And since all types can be custom-encapsulated as multiple-chip assemblies, they are highly compatible with the critical space limitations of computer memory core systems. Another important application is logic systems. The SG5000 series provides tightly controlled lower forward voltages at specified low current levels, and more units can be paralleled and still deliver fast switching. Because these units fulfill maximum diode specifications, it is no longer necessary to use 2 or 3 diode types in a system. Now, only one diode need be evaluated for component procurement. All SG5000-5400 silicon planar epitaxial diodes are digitally marked for quick diode type identification. And all types are also available through your Transitron Distributor... For further information, write for Transitron's "Silicon Planar Epitaxial Diode" bulletins. Transitron electronic corporation wakefield, melrose, boston, mass. SALES OFFICES IN PRINCIPAL CITIES THROUGHOUT THE U.S.A. 
AND EUROPE • CABLE ADDRESS: TRELCO MEET US AT THE IEEE — BOOTH 1720-24 CIRCLE 55 ON READER SERVICE CARD For Every Electrical Protection Need there’s a safe and dependable BUSS or FUSETRON Fuse! BUSS fuse engineers have consistently pioneered the development of new fuses to keep pace with the demands of the Electronic industry. Today, the complete line includes: Single-element fuses for circuits where quick-blowing is needed;—or single-element fuses for normal circuit protection;—or dual-element, “slow-blowing” fuses for circuits where harmless current surges occur;—or indicating fuses for circuits where signals must be given when fuses open. Fuses range in sizes from 1/500 amperes up—and there’s a companion line of fuse clips, blocks and holders. If you have a special protection problem The world’s largest fuse research laboratory, plus the experience gained by solving many, many electrical protection problems is on call to you at all times. Our engineers work with yours and can help you save engineering time and trouble. For more information, write for BUSS bulletin SFB. BUSS: The complete line of fuses and fuse mountings of unquestioned high quality. BUSSMANN MFG. DIVISION McGraw-Edison Co. St. Louis 7, Mo. SMALL PART PROBLEMS? LEVIN Heavy duty instrument lathes offer the best solution to small part lathe operations. 29 standard models for first and second operation work in 3/16", 5/16", and 1/2" collet capacities. Shown above, an ACAF turret lathe set up to produce the small needle valve, illustrated, with a 0.0118" bleed hole. The self indexing turret is extremely sensitive for fine work. Speed regulation is continuously variable from 0 to 4000 r.p.m. with IR drop compensation. SEND FOR COMPLETE CATALOG LEVIN INSTRUMENT LATHES LOUIS LEVIN & SON, INC. 3573 Hayden Ave., Dept. E • Culver City, California New York Representative and Showroom RUSSELL-HOLBROOK & HENDERSON, INC. 292 Madison Ave., New York 17, N.Y. 
PHOTO TUBES INDICATOR TUBES SUBMINIATURE TUBES SUBMINIATURE LAMPS TRANSMITTING TUBES, INCLUDING COMPACTRON TYPES GERMANIUM POWER TRANSISTORS DYNAQUAD™ TOUCH CONTROL MODULES DYNAQUADS SILICON DIFFUSED AND FIELD EFFECT TRANSISTORS TUNG-SOL SHOWCASE Tung-Sol is the one independent domestic manufacturer of vacuum and gas-filled tubes and solid state devices with the capability to supply volume requirements of so many popular-demand types. Tung-Sol components span the whole frequency spectrum. For more than sixty years, Tung-Sol has served America's largest-producing industries. Year after year the confidence of companies of all sizes has been merited by the performance-to-cost ratio of Tung-Sol products, the competence of Tung-Sol engineering and the dependability of Tung-Sol service. You may find it profitable to discuss your component requirements with Tung-Sol, particularly in the design stage. Tung-Sol Electric Inc., Newark 4, New Jersey. TWX: 201-621-7977 Technical assistance is available through: Atlanta, Ga.; Columbus, Ohio; Culver City, Calif.; Dallas, Tex.; Denver, Colo.; Detroit, Mich.; Melrose Park, Ill.; Newark, N. J.; Seattle, Wash. In Canada: Abbey Electronics, Toronto, Ont. BOOTHS 2733-35-37-39 AT THE IEEE SHOW CIRCLE 59 ON READER SERVICE CARD New Value Package Delayed Sweep and Dual-Trace Plug-in Units with the Tektronix Type 561A Oscilloscope You can use the Type 561A Oscilloscope—with Type 3A1 Dual-Trace Amplifier Unit and Type 3B3 Delayed-Sweep Time-Base Unit—for a wide range of DC-to-10 MC laboratory applications. You can observe no-parallax displays and obtain sharp trace photography. For the new rectangular ceramic CRT has an internal graticule with controllable edge lighting. You can display single or dual-trace presentations or algebraic addition. You have 10 mv/cm sensitivity with .035 μsec risetime. You have highly adaptable time-base features including: 1. Calibrated sweep range—from 0.1 μsec/cm to 1 sec/cm for normal and delayed-sweep presentations. 
2. Calibrated sweep delay—for setting and measuring precise delay intervals from 0.5 microsecond to 10 seconds. 3. Single-sweep control—for simplifying waveform photography of normal-sweep presentations. 4. Flexible triggering facilities—with triggered operation extending to beyond 10 megacycles. Also, you can use any of 9 other amplifier and time-base units for differential, multi-trace, sampling, other applications, including matched X-Y displays using the same type amplifier units in both channels. TYPE 561A OSCILLOSCOPE (without plug-ins) $470 TYPE 3A1 DUAL-TRACE AMPLIFIER UNIT $410 (6 cm linear scan • no signal delay) TYPE 3B3 TIME BASE UNIT $525 U. S. Sales Prices f.o.b. Beaverton, Oregon For a demonstration, please call your Tektronix Field Engineer Tektronix, Inc. / P. O. BOX 500 • BEAVERTON, OREGON / Mitchell 4-0161 • TWX—503-291-6805 • Cable: TEKTRONIX. OVERSEAS DISTRIBUTORS IN 27 COUNTRIES AND HONOLULU, HAWAII. Tektronix Field Offices are located in principal cities throughout the United States. Please consult your Telephone Directory. Tektronix Canada Ltd: Montreal, Quebec • Toronto (Willowdale) Ontario • Tektronix International A. G., Terrassenweg 1A, Zug, Switzerland. Illuminated internal graticule rectangular ceramic CRT. In FIXED COMPOSITION RESISTORS, if it's news, expect it first from IRC. IRC Fixed Composition Resistors have a STRONGER LEAD ASSEMBLY. HERE'S WHY...IRC's resistance element is a film of carbon composition thermally bonded to a glass body. This rugged, compact configuration permits 35% more molding around the lead assembly, and a correspondingly thicker molding at the ends. IRC's exclusive talon lead extends farther into the resistor. Ribbed shoulders are imbedded in the molding to prevent twisting or pull-out. The lead is bonded to the element so strongly, IRC resistors are failure-free under MIL-R-11 shock, vibration and acceleration tests. 
In destructive tensile tests, carbon slug types fail at forces averaging 22, 18 and 28 pounds respectively, for brands shown in the X-Ray. IRC carbon composition resistors withstand forces averaging 33 pounds. Even at that force, IRC leads do not pull out...the wire breaks outside the body. Write for GBT bulletin. International Resistance Co., Philadelphia 8, Pa. PERFORMANCE ADVANTAGES IRC Type GBT's also provide - Outstanding load life - Better resistance-temperature characteristics - Lower operating temperatures - Greater moisture protection - Superior high frequency characteristics - Ranges to 100,000 megohms - Weldable leads Analab 1120/700...without a doubt the INDUSTRY’S FINEST dual-trace OSCILLOSCOPE FEATURES: • Superior triggering and over-all stability • Highest sensitivity (100 µv/cm) • Brightest traces • “Instantaneous” beam finder • Delayed trigger output from 1 µsec to 50,000,000 µsec • Bandwidth from DC to 150 kc • Both single-ended and differential amplifier inputs, AC or DC-coupled PLUS the only oscilloscope on which you can make PRECISE QUANTITATIVE MEASUREMENTS! In addition to its exceptionally fine qualitative characteristics, the Analab 1120/700 has consistently proved itself for accurate quantitative measuring of signal amplitude, rise time, pulse duration, frequency, and phase. Use it as a precision AC & DC voltmeter, phase meter, time-interval meter, analog wave-form indicator...thanks to Analab’s exclusive Null-Balance Readout. Call or write for specifications on the 1120/700 and on the full line of Analab scopes, scope camera systems, and accessories. Analab Instrument Corporation Cedar Grove, New Jersey Analab Model 1120 main frame with Model 700 plug-in. 
Analab A subsidiary of THE JERROLD CORPORATION BOOTH 3904-08 CIRCLE 62 ON READER SERVICE CARD PREVIEW OF FIRST IEEE CONVENTION HIGHLIGHTS of the First International Convention of the Institute of Electrical and Electronics Engineers beginning two weeks from Monday in New York's Coliseum and Waldorf-Astoria Hotel are spread over 54 technical sessions. They deal with electron devices, self-repairing circuits, antennas, medical electronics and radar, to mention only a small sampling. MICROWAVE PHOTOTUBE — Discovery of the laser makes possible optical communications systems having thousands of megacycles of bandwidth. But such systems will require photodetectors having similarly wide bandwidth. One such detector is the microwave phototube shown in the photographs. The large photo shows the tube withdrawn from its focusing structure while the inset shows it performing a laser mode-separation measurement. Figure 1 is a cross section of the tube. Light to be demodulated passes through the optical window on the left onto a transmission-type photocathode on a glass disk. The photoelectrons emitted from the cathode are bunched at the modulation frequency of the incident light. As the electrons are accelerated and pass through a traveling-wave-tube type helix, they excite a traveling wave on the helix. This signal is taken out at the output coupler. The electron-beam focusing structure which surrounds the helix uses periodic permanent magnets as do modern traveling-wave tubes. The tube, with its focusing structure, is 18 inches long and weighs five pounds. The photocathode response can be made to match that of any conventional phototube. The tube shown uses an L-band helix but tubes can be built for S, C or K band.(1) PERMACHON SCAN CONVERTER — The tube shown in the photograph is a special type of scan converter. Scan converters are tubes into which information can be written in one format and extracted at the same or a later time in the same or a different format. 
As Fig. 2 shows, there is a writing gun at one end of the tube and a reading gun at the other. The writing gun emits high-velocity electrons; the reading gun emits low-velocity electrons. Between the guns is a target. In the Permachon scan converter this target is made of material that becomes electrically conductive when it is bombarded with electrons. The tube can be read out many times even after the input illumination has been removed for a long time. It can also add together several scans at a low light level to furnish an enhanced visual output. Three kinds of target are used. The first two use so-called EBIC (electron bombardment induced conductivity) materials: an aluminum-oxide supported target and an aluminum supported target. The third type of target is fiber-optics coupled. It furnishes complete isolation between input and output. It has received the acronym FOPT (fiber optics photon transfer). There are two thin transparent and conductive films separated by a fiber optics honeycomb. The film on the writing side is coated with a television-type phosphor while the one on the reading side is a photoconductor. The writing beam excites the phosphor and light travels through the fiber-optics rod and makes the photoconductor conductive. One of the first applications of the Permachon may be to improve the performance of an Army moving-target-indicator radar.(2) MULTIPLE-BEAM TRAVELING-WAVE KLYSTRON — Many modern microwave systems require both high power levels and wide dynamic bandwidth. One approach to the problem is the multiple-beam klystron (MBK). This not only permits generating high power levels but also allows broadbanding by stagger tuning the beams. This could yield a 10 or 15 percent bandwidth. MULTIPLE-BEAM traveling-wave klystron combines the best of two possible approaches to obtaining superpower microwave signals with wide bandwidth. Another approach is the traveling-wave klystron (TWK). It is illustrated in Fig. 3A. 
An elongated ribbon beam crosses the gap between two ridge-loaded waveguides that comprise the input and output circuits. Figure 3B shows how an r-f voltage that propagates on the lower or input guide will velocity modulate the beam as the input signal travels from left to right. Bunching takes place in the drift distance between the guides and a density-modulated current will induce waves in the upper or output guide. This idea was good in theory but it never worked out too well. A long build-up distance was required for the waveguide and it was hard to produce a stable sheet beam. TRAVELING-WAVE KLYSTRON used sheet beam with input and output signals coupled by ridged waveguide (A). Voltage gradient shows how velocity modulation and subsequent bunching was achieved (B)—Fig. 3 The tube shown in the third photo combines the best features of both the MBK and the TWK. It uses cylindrical instead of sheet beams. Build-up distance is decreased by increasing the impedance of the waveguides by periodic inductive and capacitive loading. The experimental MBTWK has eight beams. It operates at 725 Mc. Gain is 24 db and efficiency averaged 44 percent over a sample of ten prototype tubes. SELF-REPAIRING CIRCUITS — As electronic circuits become increasingly complex, especially in computer and control circuits, the problem of reliability increases almost without bound. One approach is to increase component reliability. However, reliability is only the probability of success and no matter how high the reliability there is some finite chance of failure. Another approach is to use circuits that will repair themselves. In the majority-logic network, redundancy is the key. The so-called voter gives the output called for by the majority of the inputs. Even if some circuits fail, operation can continue as long as most work. The next step is to replace the simple voter with an adaptive circuit as shown in Fig. 4A. Here weights are assigned to the inputs to the voter so that a circuit that persists in giving the wrong response is given less and less weight. ADAPTIVE logic network (A) is one key to circuits that repair themselves. Several adaptive circuit elements can apply the weights needed: memistor plating cell (B), silver-iodide adaptive component (C) and two-core magnetic adaptive component (D)—Fig. 4 Figures 4B, 4C and 4D illustrate mechanisms by which this weighting may be accomplished. Each provides a fixed weight with permanent memory except when a so-called adapt signal is received. The memistor, Fig. 4B, uses electroplated copper film for memory; the resistance of the film is the readout. The resistance can be changed smoothly and reversibly by plating or etching. The same principle is used in the device shown in Fig. 4C. Here the ionic conduction of silver iodide (AgI) affords a sort of solid electrolyte. The device in Fig. 4D is a magnetic component that uses second-harmonic readout to provide nondestructive readout of the flux stored in the cores. These three devices not only have possible application in self-repair circuits, they also make excellent integrators and can store an integrated value indefinitely. Thus they may even substitute for motor-driven potentiometers in some applications. For a complete discussion of five types of adaptive components see H. S. Crafts' article Components That Can Learn and How to Use Them in next week's issue. HIGH-FREQUENCY POWER TRANSISTOR — A new silicon transistor has been developed that has a cutoff frequency of 200 Mc. Power output is about 30 watts and maximum current is five amperes. The transistor has an interdigitated emitter-base geometry with emitter and base fingers both 75 microns wide and contact strips 25 microns wide. 
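The adaptive-voter scheme described under SELF-REPAIRING CIRCUITS can be sketched in a few lines of modern code. This is an illustrative sketch only, not the circuit of the convention paper: the +1/-1 signal convention, the function names, and the simple multiplicative weight decay on the "adapt" step are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's circuit) of a weighted majority
# voter with adaptive weights. Inputs are +1 (true) or -1 (false); a
# channel that persists in disagreeing with the vote loses weight and
# eventually stops influencing the output.

def vote(weights, inputs):
    """Weighted majority: sign of the weighted sum of the inputs."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else -1

def adapt(weights, inputs, decay=0.5):
    """The 'adapt signal' step: shrink the weight of every input that
    disagreed with the voted output, leave agreeing weights alone."""
    out = vote(weights, inputs)
    return [w * decay if x != out else w for w, x in zip(weights, inputs)]

# Three redundant copies of one logic signal; copy 3 has failed stuck false.
weights = [1.0, 1.0, 1.0]
for _ in range(3):
    weights = adapt(weights, [1, 1, -1])

print(vote(weights, [1, 1, -1]))   # healthy majority still wins: 1
print(weights)                     # failed channel's weight has decayed
```

Even before any adaptation the redundancy alone outvotes a single failure; the adapt step goes further, so that a later second failure among the still-trusted channels cannot be tipped the wrong way by the already-discredited one.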
This gives an emitter periphery of about 31 millimeters on an area of four square millimeters. The contacts are evaporated aluminum 0.4 micron thick. Two application procedures were used: evaporating a continuous layer and removing the excess with the photoresist technique or evaporating the aluminum through a metal mask. Both techniques were satisfactory. Four gold bonds along the center of the emitter and one to the base provide the electrical connections. The device is encapsulated in a TO3 can with a moisture getter. Epitaxial techniques are used to achieve low saturation resistance. Two epitaxial techniques were evaluated: normal and inverse. In the latter, a low resistivity epitaxial layer is grown on a high resistivity substrate. Conventional diffusion and oxide-masking produce a base region 1.4 microns thick. The emitter junction is planar while the collector junction is passivated subsequent to etching a mesa. Passivation is accomplished in a two-step oxidation. LETTER-RACK ANTENNA—This wideband antenna of log-periodic design can be mounted flush on the surface of an aircraft or missile. Its design evolved from the corrugated surface-wave antenna. However, the corrugated antenna is not in itself suitable for use as a log-periodic antenna when it is fed in the usual manner with a horn or loop exciter at one end. This letter-rack antenna uses a novel feed system that produces the backfire condition. This gives good coupling of the radiated pattern into space and makes the pattern unidirectional. The shift into the backfire region is accomplished with a feed system that provides an extra 180-deg phase shift per cell. ULTRASONIC CARDIAC DIAGNOSIS—Echo ranging with ultrasound permits distinguishing between heart defects that are difficult to tell apart. One is mitral stenosis or a closing down of the opening around the mitral valve of the heart. The other is mitral insufficiency where the opening is so large that some blood flows backwards when the heart pumps. 
Two-megacycle ultrasonic pulses with a peak power of six watts per sq cm and an average power of 12 mw per sq cm are applied to the patient’s chest. The echoes are recorded on film as an A-scope display. A moving film camera is used. Diagnosis takes from five to 10 minutes and replaces difficult and painful catheter procedures. The technique may also be used to diagnose pericardiac effusion or a gathering of fluid in the sac around the heart and possibly aortal defects. In the waveforms, ultrasonic ranging traces are shown with the usual electrocardiograph trace. The normal record (A) is a continuous double-peaked curve of echo presumed to have come from the anterior leaflet of the mitral valve. Record (B) is from a patient suffering from mitral stenosis. His valve velocity is 15 mm per second. The record (C) is from the same patient after a successful heart operation and shows a valve velocity increase to 30 mm per second. MYOCARDIAL PROSTHETIC SYSTEM—Artificial blood pumping has been achieved in dogs using a myocardial prosthesis and external electronic support unit. It may be that someday human beings suffering from heart impairments will be kept alive by this technique much as respiratory patients are now aided by the so-called iron lung. The prosthesis consists of an inner flexible liner bonded to a rigid outer shell. It is surgically implanted in the chest and entirely encapsulates the heart. The support unit is a transistor controller that activates a three-way solenoid valve that regulates the air pressure and suction applied to the stem of the prosthesis. During the period corresponding to the systole, a pulse of air pressure is supplied between the outer shell and inner lining causing the inner lining and hence the myocardium to contract pumping blood through the cardiovascular system. During the period corresponding to the diastole, a suction pulse is applied causing the liner to retract and permitting the ventricles to expand and fill. 
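The arithmetic behind the ULTRASONIC CARDIAC DIAGNOSIS technique is ordinary pulse-echo ranging and can be sketched as follows. The ~1540 m/s speed of sound in soft tissue is a textbook figure, and the round-trip time, frame interval, and leaflet positions are invented numbers for illustration; none of them come from the paper.

```python
# Illustrative pulse-echo arithmetic (assumed values throughout).

C_TISSUE = 1540.0  # speed of sound in soft tissue, m/s (textbook figure)

def echo_depth_mm(round_trip_us):
    """Depth of a reflector from round-trip echo time: d = c * t / 2."""
    t = round_trip_us * 1e-6             # microseconds -> seconds
    return (C_TISSUE * t / 2.0) * 1000.0 # metres -> millimetres

def valve_velocity_mm_per_s(depths_mm, frame_interval_s):
    """Mean speed of an echo (e.g. the mitral leaflet) across frames
    of the moving-film A-scope record."""
    total = sum(abs(b - a) for a, b in zip(depths_mm, depths_mm[1:]))
    return total / (frame_interval_s * (len(depths_mm) - 1))

# A 100-microsecond round trip corresponds to roughly 77 mm of depth:
print(echo_depth_mm(100.0))

# Leaflet positions sampled every 0.1 s, moving 1.5 mm per frame,
# give 15 mm/s -- the stenotic-valve figure quoted for record (B):
print(valve_velocity_mm_per_s([60.0, 61.5, 63.0, 64.5], 0.1))
```

The factor of two in the depth formula is the usual echo-ranging one: the pulse travels to the reflector and back, so only half the round-trip path is depth.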
**MILLIMETER-WAVE PARAMP** Parametric amplification at millimeter wavelengths has been held up by the requirement that the pump frequency had to be at least twice the signal frequency. In this new device, the signal frequency exceeds that of the pump while the amplifier retains its low-noise and high-gain characteristics. Instead of a single idler tank, the amplifier introduces additional idlers. Progressively lower sideband frequencies are generated as the signal passes through the multiple idlers. Two sidebands are selected to form the basis of a regenerative system. They create a negative resistance and lead to power amplification. To date, a 10-Gc paramp using a 7.2-Gc pump and a 13.3-Gc paramp with a 9.6-Gc pump have been constructed. Work is underway on a 13.3-Gc paramp with a 5.6-Gc pump. This approach offers possibilities of parametric amplification all the way from 10 to 100 Gc. **OBSERVING MULTIPLE RADAR TARGETS** Watching many radar targets spread out over a wide volume of space presents problems. One solution is to convert all the target tracks to ultrasonic beams in a transparent acoustic medium and examine this model in a beam of well-collimated monochromatic light projected at right angles to the direction of propagation of the sonic waves. The sonic pressure waves phase modulate the light, and this phase modulation can be converted to intensity modulation by collecting the light passing through the medium with a converging lens and observing the light intensity in the focal plane of the lens. The target tracks are all now a series of dots on a plane surface. A multielement array receives signals continuously from all targets in its volume of coverage. Signals from each antenna element are heterodyned to a common ultrasonic frequency and applied to separate acoustic transducers arranged in an array analogous to the antenna array but propagating into the ultrasonic medium. **REFERENCES** (1) D. Blattner, H. Johnson, J. Ruedy and F.
Sterzer, Microwave Phototube With Transmission Photomultiplier. (2) R. Doyle, Permachron Type Scan Converters. (3) M. R. Boyd, R. A. Dehn and T. G. Milman, The Multiple-Beam Traveling-Wave Klystron. (4) B. Angell, The Need and Means for Self-Rectifying Circuits. (5) A. Goetzberger, N. Zetterquist and R. M. Schmitt, A New High-Frequency Power Transistor. (6) R. Mittra and M. Wahl, The Letter Rack Antenna—A Flush Mount Wide Band Antenna of Low Profile Design. (7) J. M. Reid and C. R. Joyner, Jr., Ultrasonic Echo-Ranging Techniques in Cardiac Diagnosis. (8) I. Klinz, A Bio-Medically Engineered Myocardial Prosthetic System. (9) W. B. Henning, New Form of Parametric Amplifiers Enables Below-Signal Pumping. (10) L. Groginsky and J. D. Young, A New Technique for Simultaneous Radar Observation of Multiple Targets Within a Broad Surveillance Area. All papers to be presented at the First IEEE International Convention, New York, March 25-28, 1963. For integrated circuits, LCDT logic is shown to offer many advantages—including short propagation delay at low power dissipation, high fan-out levels with loose component tolerances, and minimal crosstalk. **High Speed Integrated Circuits With Load-Compensated Diode-Transistor Logic** FOR SEVERAL REASONS integrated logic circuits are usually designed as low-power, high-speed devices. First, of course, these goals are desirable in themselves. Second, since integrated circuits aim at high component packing density, the power dissipation per component must be low. Third, small device size makes it possible to process a large number on a single slice of silicon and to maximize the probability of any given device being good. Since transient delays due to both capacitive and minority carrier effects are related to current densities rather than total currents, small device geometry implies low power operation. With the trend towards low power levels, it is desirable to use logic circuits that operate satisfactorily with small voltage differences between true and false states. Saturating current-steering circuits satisfy the requirement for small line voltage swings. They can also be used at low voltage levels, which further reduces power consumption. Their disadvantages are minority carrier storage effects and their need for low saturation resistance. The former can be made small in comparison with capacitance effects by gold doping techniques, particularly at low power levels. But low saturation resistances are difficult to realize in integrated circuits since all contacts are usually at one side of the device—collector currents must therefore flow along increased path lengths. The low power requirement alleviates this problem, but does not eliminate it because of small device geometries. Low saturation resistance can be satisfactorily achieved only with epitaxial growth techniques. LOGIC CIRCUITS—Diode-transistor logic (DTL), direct-coupled transistor logic (DCTL), and transistor-transistor logic (TTL) have all been scrutinized for their applicability in integrated circuit design. Conventional DTL (Fig. 1A) offers the desired high speed at low power levels. It has a relatively high saturation resistance tolerance, but the current available for turning off the inverter transistor must flow either as recovery current in diodes $D_1$ and $D_2$ or through $R_s$. A compromise is thus forced between circuit gain and speed, which is particularly severe if $R_s$ is grounded to avoid a second power supply. Logic modes DCTL and TTL offer higher speeds at lower operating voltage levels than DTL when the same inverter is used in all three, primarily because of the smaller voltage difference between their true and false states. However, since they have a lower tolerance to saturation resistance, a slower transistor is generally needed for adequate d-c stability. The d-c stability and logical gain of these circuits is further compromised by cross-talk.
Current hogging, which is severe in DCTL, can be reduced to acceptable levels by using resistive coupling, but only by sacrificing switching speed. A similar situation exists in TTL due to inverse gain in the logic transistors, and the requirement of low inverse $\beta$ conflicts with the requirement of low offset voltages. Thus, a confusing situation exists as to the relative suitability of the foregoing circuits in integrated circuit form. In an integrated DTL circuit, it is convenient to form one or both of diodes $D_1$ and $D_2$ (Fig. 1A) as the emitter-base diode of a transistor. This transistor also can be used to increase the total gain of the circuit to a point at which gain no longer presents a problem. The transistor can be used as an emitter follower (Fig. 1B), or as a quasi emitter follower (Fig. 1C) in which saturation is also prevented. In Fig. 1B, the power dissipation is high and gain dependent; in Fig. 1C, power dissipation is high but resistor controlled, as is the additional gain. In both, the overdrive is excessive at low fan-out values; but the circuit of Fig. 1C could be used as a buffer element to complement the conventional lower fan-out DTL circuit, provided the inverter were redesigned (larger geometry) for low saturation resistance. Speed and power dissipation problems posed by the above circuits can be overcome by using a clamping diode as a shunt around the amplifying stages, as in Fig. 1D. Here the excess current from $R_1$ (which causes overdrive in the previous circuits) flows through the clamping diode, while the emitter follower draws just enough current from the power supply to sustain the load current. As load current increases, the driving current also increases and, in this sense, the circuit is load compensated. 
The current drawn from the power supply by the emitter follower is $$I_{EF} = \left\{ \frac{V_{eb}}{R_2} + \frac{I_e}{\beta_1} \right\} / \left\{ 1 + \frac{1}{\beta_2} \right\}$$ The clamping diode considerably improves switching speed by restricting the line voltage swing. Speed improves also because the lower limit of $R_s$ is not now set by gain requirements. Minority carrier storage in the transistor collector region is also avoided, but the clamping diode introduces a similar effect, so that gold doping is still needed to control minority carrier lifetime. Transistor coupling could decrease the effect of carriers stored in the diode, but cross-talk problems would be introduced by this mode of coupling. At first sight, it appears that d-c stability of the load-compensated circuit must always be inferior to that of conventional DTL. This probably will be the case if the best switching speeds are to be attained; however, to obtain the fastest switching speed with DTL, the transistor design should be such that the saturation resistance is close to its maximum permissible limit, which gives minimum permissible d-c stability. Furthermore, the load-compensated version (LCDTL) clamps at an output voltage which can be adjusted by introducing series resistance effects into the diode shunt path (this is easy to do in the integrated circuit). Thus, considerable design freedom is possible in balancing d-c stability and speed requirements, while retaining the load compensation feature of the circuit. To summarize, the LCDT circuit can be used with line voltage swings similar to those of DCTL and TTL. At the same time, it has the relatively high tolerance to saturation resistance of DTL. Switching speed and d-c stability can be traded by introducing resistance in series with the clamping diode.
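As a rough numerical check on the emitter-follower relation above, the sketch below evaluates $I_{EF}$ for a light and a heavy load; the component values ($V_{eb}$, $R_2$ and the gains) are illustrative assumptions, not values from the article.

```python
def i_ef(v_eb, r2, i_load, beta1, beta2):
    """Emitter-follower supply current from the load-compensation
    relation I_EF = (V_eb/R2 + I_e/beta1) / (1 + 1/beta2)."""
    return (v_eb / r2 + i_load / beta1) / (1 + 1 / beta2)

# Assumed values: V_eb = 0.7 v, R2 = 3 kilohms, beta1 = beta2 = 30.
light = i_ef(0.7, 3000, 0.001, 30, 30)   # 1-ma load current
heavy = i_ef(0.7, 3000, 0.010, 30, 30)   # 10-ma load current

# Load compensation: the drive current rises with the load current.
assert heavy > light
```

The first term sets a floor on the drive current; the second scales it with the load, which is the load-compensation behavior described in the text.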
The number of components used is greater than in any of the single-stage circuits, but their tolerances are considerably looser, which makes the device particularly suitable for integrated circuits. The circuit seems to combine the good features of the more conventional saturating circuits, while eliminating their weaknesses. A detailed description of the circuit and its integrated version follows. **LCDT LOGIC**—The most interesting circuit characteristics are d-c stability at various values of fan-out and switching speed. The former can be calculated if the ideal diode equation is assumed. The latter is dependent on minority carrier storage effects in the diodes and transistors and also on current flow in $R_s$ and $R_e$ during the ON-OFF and OFF-ON transients. Such a transient analysis would be a study in itself; information presented here is based on direct measurements. Two requirements need to be satisfied for the circuit to operate at a given value of fan-out; namely, the overall gain should be high enough and the output voltage at that fan-out should be low enough to turn off the next stage with an adequate margin of stability $\Delta V$. The maximum output current from the circuit under worst-case conditions is: $$I_{o \text{ max}} = \beta_1 \beta_2 \left( \frac{V - V_{z \text{ max}}}{R_{1 \text{ max}}} - I_i \right) - \frac{1}{\beta_1} \cdot \frac{V_{eb \text{ max}}}{R_{2 \text{ min}}}$$ \hspace{1cm} (1) where $\beta_1 =$ current gain of the emitter follower, $\beta_2 =$ current gain of the inverter, $I_i =$ worst-case input current ON, and $V$, $V_z$ and $V_{eb}$ are the voltages indicated in Fig. 1D. Voltage $V_{z \text{ max}}$ is the maximum $V_z$ ever occurring in the circuit.
The worst-case input current when the circuit is being held OFF is: $$I_i = \frac{V - V_{z \text{ min}}}{R_{1 \text{ min}}} - \frac{V_{eb \text{ min}}}{\beta_1 R_{2 \text{ max}}}$$ \hspace{1cm} (2) Voltage $V_{z \text{ min}}$ is the lowest $V_z$ that can occur when the input is at the highest permissible false voltage. Since both the emitter follower and the inverter are formed simultaneously in proximity on the device, it is a good approximation to assume that they have the same gain. Then, using Eq. 1 and 2, Table I gives the minimum gain values at $-55$ deg C for various values of fan-out and resistor tolerances. Values for $V_z$ and $V_{eb}$ were measured on a circuit using silicon diodes and transistors; $R_{2 \text{ nominal}}/R_{1 \text{ nominal}}$ is taken to be 1.5, which was found to give an optimum speed-power balance. In an integrated circuit, the required $\beta_{\text{min}}$ values will be lower than those given in Table I, particularly for the looser resistor tolerances, since resistors $R_1$ and $R_2$ tend to increase or decrease together in the same circuit. Assuming that the circuit has sufficient gain, its output voltage when fully loaded is determined by the forward voltages across the emitter-base diodes of $Q_1$ and $Q_2$ and the clamping diode $D$.

**EQUIVALENT CIRCUIT of LCDT logic device (A), and cross-section of the actual device (C); in (B) appears the forward characteristic of transistor used as a diode—Fig. 2**

OUTPUT CHARACTERISTICS of LCDT integrated circuit, 4-v supply (A and B). In (A), max permissible load current at 1.1 v is 87 ma. In (B), saturation effects set in at 100 ma output current; breakdown occurs at over 10 v. Input characteristics of LCDT circuit (4-v supply, 4-v output) shown in (C)—Fig. 3
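Equations 1 and 2 can be exercised numerically to estimate the minimum equal gain $\beta_1 = \beta_2$ needed for a given fan-out, in the spirit of Table I. The sketch below mirrors both equations directly; the supply, threshold, resistor and ON-current values, and the 15 percent resistor tolerance, are illustrative assumptions, not the article's measured conditions.

```python
def i_in_off(beta, v, vz_min, r1_min, veb_min, r2_max):
    # Eq. 2: worst-case input current with the circuit held OFF
    return (v - vz_min) / r1_min - veb_min / (beta * r2_max)

def i_out_max(beta, v, vz_max, r1_max, i_i_on, veb_max, r2_min):
    # Eq. 1: worst-case output current available from the gate
    return beta * beta * ((v - vz_max) / r1_max - i_i_on) \
        - veb_max / (beta * r2_min)

def beta_min(fan_out, v=4.0, vz=1.5, r1=2000.0, r2=3000.0,
             veb=0.7, i_i_on=1e-4, tol=0.15):
    """Smallest equal gain for which the worst-case output current
    covers fan_out OFF-state loads, scanning beta upward.  A single
    vz stands in for both its min and max (a simplification)."""
    beta = 1.0
    while beta < 200.0:
        drive = i_out_max(beta, v, vz, r1 * (1 + tol), i_i_on,
                          veb, r2 * (1 - tol))
        load = fan_out * i_in_off(beta, v, vz, r1 * (1 - tol),
                                  veb, r2 * (1 + tol))
        if drive >= load:
            return beta
        beta += 0.1
    return None

# Required gain grows with fan-out, as Table I indicates.
assert beta_min(10) > beta_min(5) > 1.0
```

Note the $R_2/R_1$ ratio of 1.5 from the text is preserved in the defaults; tightening `tol` lowers the required gain, which matches the text's remark that correlated resistor variations in an integrated circuit relax $\beta_{\text{min}}$.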
$$V_o = V_{eb}(Q_1) + V_{eb}(Q_2) - V_D$$ Similarly, the maximum permissible false voltage at the input is $$V_F = V_{eb}(Q_1) + V_{eb}(Q_2) + V_{D_1} - V_{D_i}$$ The d-c stability margin $\Delta V$ is the difference between the worst-case values of $V_F$ and $V_o$. Using the ideal diode equation $$\Delta V = \frac{kT}{q} \left\{ \log \frac{I_s''}{I_{S1}} + \log \frac{I_s''}{I_{S2}} + \log \frac{I_s''}{I_{SD_1}} - \log \frac{I_i''}{I_{SD_i}} \right\} - \frac{kT}{q} \left\{ \log \frac{I_s'}{I_{S1}} + \log \frac{I_s'}{I_{S2}} \right\} + V_D$$ where subscripts are as in Fig. 1D. Single primes indicate worst-case ON values, double primes indicate worst-case OFF values; $I_S$ values refer to the saturation currents of the various diodes. Rearranging this expression, and noting that to a good approximation $$\text{Fan-out (FO)} = I_s'/I_i''$$ then $$\text{FO} = \left[ \frac{I_s'' I_s'' I_i''}{(I_i'')^2 I_s'} \right] \exp \left( \frac{q}{kT} (V_D - \Delta V - \Delta V') \right)$$ The term $\Delta V'$ accounts for two effects. First, it includes the difference in forward voltage across diodes $D_1$ and $D_i$ at the same current level. For maximum stability $D_1$ should have a low forward voltage relative to $D_i$. Second, it includes the forward voltage tolerances from circuit to circuit on the emitter diodes in $Q_1$ and $Q_2$. The current level in the emitter of $Q_2$ varies enough from ON to OFF that the ideal diode equation is unlikely to hold over the whole range; departure from the ideal equation will be in a direction to reduce F.O. However, even if the ideal equation does not hold exactly, it does give some measure of the effects of various circuit parameters on the overall d-c stability. POWER-SPEED CURVES—Figure 1E shows the power-speed curve for the LCDT gate using a 2N743 inverter, an FD829 clamping diode and a 4-volt supply. Similar curves are shown for DCTL, TTL and DTL, using in each case a 2N743 inverter and a 4-volt supply.
These curves cannot be directly related to integrated versions of the four circuits, since the transistor design and optimum power supply voltage would vary from circuit to circuit. The lower supply voltages which are permissible with TTL and DCTL tend to compensate for the higher saturation resistances which are permissible with DTL. Two further points are worthy of note. First, integrated circuits eliminate the contribution of packaging capacitance to collector-base and clamping-diode capacitance. This is of more significance in DTL and LCDTL than in DCTL and TTL, since turn-off currents are more limited in the former. Second, DCTL requires the use of as many inverters with separate bases and low saturation resistances as there are inputs. These transistors require more layout space and provide more stray capacitance in integrated circuit form than the input transistor-diodes of DTL, TTL and LCDTL. This tends to counterbalance the low component count of DCTL. TOPOLOGY—In designing an integrated circuit to perform LCDTL, it is important to minimize stray capacitances and to avoid cross-overs. The device was planned as a four-input dual NAND for maximum packing density. An equivalent circuit, including principal stray capacitances, is shown in Fig. 2A. Low isolation capacitances, including 1.3 pf of pin capacitance, were realized with adequately low saturation resistances by use of epitaxial growth techniques and tight masking tolerances. A plan view of the device (see photo) shows the function of the different areas. In accordance with the principles outlined, the device was designed for minimum junction areas to allow high-speed operation at low power levels, to obtain as many devices per silicon wafer as possible, and to increase the percentage yield. Thus, the total size for the dual gate is $37 \times 28$ mils. The inverter transistor has a slightly smaller geometry than the 2N709 transistor.

INTEGRATED LCDT PERFORMANCE: resistor and input current variations with temperature (A); worst-case F.O. as a function of d-c stability and temperature (B); propagation delay as a function of voltage and temperature (C); variation with F.O. and temperature of true and false thresholds and fully loaded output voltage (D); and envelope of all 4-v propagation delay measurements obtained on 6 different runs (E)—Fig. 4

The input diode array was formed as a transistor array with a collector-base short for convenience in fabrication, but the arrangement also has operational advantages. Good diode action is obtained in that the ideal diode equation is obeyed over a wide range of forward-current values. This is because most of the forward current flows by transistor action through the collector until the transistor saturates. At that point, which is well outside the operating range, the diode characteristic shows an inflection, as in Fig. 2B. The collector-base short avoids inverse transistor action. To obtain maximum d-c stability, the clamping diode and diode $D_1$ were made as small as was consistent with reproducibility. From this point of view, it would also be desirable to make the input diodes large, but this requirement conflicts with that of minimizing the recovery effects in these diodes and with small total device size; thus a compromise must be made. The sizes of the emitter follower and inverter are unimportant from the point of view of stability, provided that resistive effects are avoided, since each affects both the output and input voltages. FABRICATION—Epitaxial techniques have been used to minimize junction capacitance while maintaining an adequately low saturation resistance. There are many ways of using epitaxial growth techniques to fabricate integrated circuits.
As is standard practice in transistors, the best ones employ a heavily doped layer beneath the surface for current carrying purposes and a lightly doped surface layer in which $p$-$n$ junctions can be formed for transistors, resistors or diodes with minimal capacitance effects. The heavily doped layer can be predeposited either uniformly or selectively, using epitaxial growth or diffusion techniques, or it can be formed as part of a composite layer by proper control of the dopants in the gas stream entering the reaction furnace. Contact can be made to the $n^+$ layer either through the $n$ layer by a special diffusion through it, or by pre-arranging the surface so that the $n^+$ layer appears at certain points. The substrate will generally be high resistivity $p$-type silicon, and isolation between component areas can then be obtained either by etching troughs between them or by a $p$-type diffusion. Etching minimizes capacitance effects but exposes junctions and creates problems in making the desired interconnections between component parts. Isolation by diffusion is simpler, gives planar junctions, and for the purposes of the low-power circuit in hand, gives only marginally more isolation capacitance with adequate values of saturation resistance. The latter technique was used in this design; a cross-section of the device is shown in Fig. 2C. Sheet resistivity and depth of the boron diffusion affect the circuit in a number of ways. The range of useful values of sheet resistivities is generally confined to 100 to 300 ohms/sq., although for special purposes, the limits of this range can be exceeded. Higher values generally are more suitable for forming resistors with small junction areas and obtaining high gain in the transistors. Lower values are more suitable for minimizing resistive effects in the transistors and diodes.
Lower resistivities also maintain high forward voltage drops in the various diodes that determine the threshold voltage $V_T$ and hence indirectly the maximum permissible saturation resistance in the transistor.

LOADING ARRANGEMENT for propagation delay measurements (A); effect of loading on propagation delay (B); ring oscillator and counter arrangement (C) in which counting proceeds at a 15 Mc rate; input and output waveforms of counter are shown in (D). With better techniques and a better waveform on the clock pulse, a higher counting rate should be achieved—Fig. 5

Shallow diffusion depths are necessary to obtain fast switching, since the transit time must be kept small both for its own sake and to minimize the effect on gain of the heavy gold doping required for low storage effects. For the LCDT gate, resistor values are small, gain requirements are at minimum, and, at the same time, resistive effects in the various diodes must be minimized. Thus, a relatively low sheet resistivity of 140 ohms/sq was chosen. Junction depths of 1.8 to 2.1 microns for the base layer and 1.2 to 1.6 microns for the emitter gave satisfactory gains and switching speeds. Based on all units measured to date, a 15 percent tolerance can be maintained on resistors with 93 percent yield. Gold doping is necessary to minimize minority carrier storage effects in the transistors and diodes. One of the more fortunate facts about the diode-coupled logic circuit is that capacitance effects are worst at low temperatures, while minority carrier storage effects are worst at high temperatures. If the gold doping is adjusted correctly, switching speeds can be made worst-case at both high and low temperatures with optimum performance near room temperature and with a minimum of overall variation. INTEGRATED CIRCUIT PERFORMANCE—The d-c characteristics of the LCDT gate can be studied most simply with a transistor curve tracer.
The output, one input, and the ground leads are connected to the collector, base, and emitter posts of the curve tracer; the B+ lead is connected to an appropriate supply voltage. Displays of output voltage against output current with negatively stepped input current, and of input voltage against input current, are the most informative. These displays give: (a) the worst-case output voltage under full load and worst-case input current, (b) the maximum "false" voltage for worst-case output OFF current, (c) the minimum "true" input voltages for various load currents, (d) the value of resistor $R_1$, and (e) the worst-case input current. Typical displays are shown in Fig. 3A, 3B and 3C. Figures 4A and 4D show how these parameters as measured on a typical device vary as a function of temperature. One of the desirable features of DTL is that the variation in input current and power dissipation due to variation of $R_1$ with temperature is partially compensated for by variation in the diode forward voltages with temperature. The LCDT integrated circuit will operate over a minus 55 deg C to plus 125 deg C range at supply voltages between 3 and 6 volts. It seems unlikely, however, that such a wide voltage variation will be experienced by any one device. Preliminary studies indicated that a 4-volt supply was optimum in that component tolerances were loose and that speed and power dissipation varied little with temperature at this voltage. Thus the circuit was designed primarily for operation at 4 volts; d-c and dynamic studies on the device were made primarily at this operating voltage.

**Table II**

| $T$, deg C | -55 | +25 | +70 | +100 | +125 |
|--------|-----|-----|-----|------|------|
| (A) $I_o$, $\mu$a | 5 | 10 | 20 | 40 | 40 |
| (B) $I_L$, $\mu$a | 0.05 | 0.1 | 0.3 | 1 | 2.5 |

The following test program was carried out on a large number of units sampled from 20 different production runs.
First, an output current level $I_o$ was chosen for each temperature at or below which the device was defined as being OFF (see Table IIA), and the input voltage $V_F$ at which this current would flow was measured on all units. Second, input currents at various temperatures were measured, and worst-case values set which gave a good yield. Third, a leakage specification $I_L$ (see Table IIB) was set at each temperature for the input diodes, to be measured with the full B+ voltage applied to the input with the device ON. Fourth, output voltage $V_o$ was measured at various fan-out values and temperatures on all the units with an input current $I_i$, where $$I_i = \text{Fan-In} \times \{I_o' + (\text{FO} - 1)I_L\}$$ These results were used to choose values of $V_F$ and $V_o$ at 125 deg C that would guarantee a d-c stability $\Delta V = 100$ mv at a fan-out of 5 with maximum yield. Units were selected which would pass this test, and plots drawn of their worst-case $V_o$ vs F.O. at each temperature. From these plots, the values of F.O. were obtained for which $$V_o = V_{FW} - \Delta V$$ where $V_{FW}$ is the lowest value of $V_F$ measured on any unit at each temperature. The curves in Fig. 4B, showing fan-out as a function of temperature and d-c stability level, were obtained in this way. If the fan-out values given in Fig. 4B are to be realized, the saturation voltage of the inverter transistor should be lower than $V_o$ at the maximum fan-out at each temperature. Saturation would have two effects. First, $V_o$ would be increased; such an increase would be detected in the tests already described. Second, the power drain from the supply would be increased, since any increase in $V_o$ above the natural clamping level will divert current from the clamping diode into the emitter follower, even though the increased $V_o$ is within specifications. Power drain at maximum fan-out must be measured to safeguard against this effect.
Specifications for the epitaxial layer were chosen to avoid saturation at all temperatures. The following measurements were made to check this. First, the saturation voltage of the inverters on a large number of gates was measured at room temperature with 20 percent overdrive. Nearly all units fell within the range 0.25 to 0.35 volt at 10 ma collector current. Second, the collector current on a worst-case inverter was measured at a collector voltage of $V_o' = V_{FW} - 200$ mv with 20 percent overdrive at various temperatures; $V_o'$ was chosen in this way so that it would be below the clamped output voltage of any unit. Finally, the fan-out values to which these collector currents corresponded were plotted to give the broken line in Fig. 4B. Breakdown voltages in the device are consistently above its operating voltage range. The input diodes, being emitter-base junctions, have breakdown voltages of 6.5 to 7 volts at 10 $\mu A$. Isolation and collector-base junctions (including the clamping diode, which is formed at the same time as the collector-base of the transistor) have breakdown voltages typically in excess of 25 volts at 10 $\mu A$, and both the emitter follower and inverter transistor have breakdown voltages of 8 to 10 volts. **DYNAMICS**—Propagation delays in the device were measured using a five-stage ring oscillator. No significant difference was observed between delays measured across two stages of the ring (average of the delay per stage between 50 percent points) and measurements based on the frequency of oscillation. The dependence of average propagation delay on voltage, temperature and fan-out is illustrated in Fig. 4C, 4E and 5B. The effect of increasing fan-out depends on whether or not the spare input diodes on the additional loads are connected to other gates which remain ON during the 0 to 1 transient. If such a connection is assumed, recovery effects in the input diodes slow down the propagation.
Figure 5A illustrates how this effect was simulated to obtain the results given in Fig. 5B. Care was taken to ensure that voltage $V_i$ was such as to give worst-case effects in these measurements; grounding $V_i$ produced a situation very close to the worst case. The rise time (0 to 1 transient) varies from 10 nanosec at 4 volts with a single load, to 18 nanosec with 10 loads ungrounded or 30 nanosec with 10 loads grounded. The fall time (1 to 0 transient) varies from 5 nanosec at 4 volts with a single load, to 20 nanosec with 10 ungrounded loads and 20 nanosec with 10 grounded loads. The LCDT gate is shown interconnected as a counter, driven by a ring oscillator and counting at a rate of 15 Mc in Fig. 5C and 5D. The interconnection techniques were poor, which accounts for the poor waveform. With better techniques, and a better waveform on the clock pulse, a higher counting rate should be achieved. One final point of note concerning the dynamic characteristics of the device is that ringing is observed when it is overdriven with a low-impedance generator. Although related to the large recovery currents that would flow in the input diodes under such circumstances, the mechanism propagating the ringing is not fully understood. The ringing can be avoided by using a transistor on the generator output. **CONCLUSIONS**—LCDT logic offers a way of achieving short propagation delay at low power dissipation, and high fan-out levels with loose component tolerances, particularly at temperatures below 100 deg C. Cross-talk is virtually nonexistent, and the circuit has a relatively high saturation resistance tolerance. Extra d-c stability can be achieved by a small resistance in series with the clamping diode to depress the clamped output voltage, at a sacrifice in speed and saturation resistance tolerance. The author acknowledges the invaluable contributions of W. F. Perrine, H. L. Schoger and W. R. Faleschini in the development of the LCDT integrated circuit.
**UNIQUE SYNCHRONIZING TECHNIQUE INCREASES Digital Transmission Rates** By K. ROEDL and R. STONER, General Dynamics/Electronics, San Diego, Calif. Since the pulse center is less affected by noise than pulse edges, three extra bits, centered on the sync pulse, are inserted periodically. Regardless of pulse distortion, the total length of the 3 pulses still equals 3 bits.

**MILITARY APPLICATION**—This synchronization system is used in a target observer's reporting device (see photo). Switches are set to indicate target location, description, quantity, heading and activity, as well as date and time. After the message is checked, the operator presses a transmit button, and the entire message is transmitted over standard field equipment. The device weighs about 4½ pounds.

A MAJOR PROBLEM in achieving faster and more accurate radio transmission of digital data to remote receiving stations is that of synchronizing the transmitting and receiving systems. With high-stability timing systems, an initial synchronization is usually adequate for the reception of short messages, but for longer messages it becomes necessary to resynchronize periodically on the transmitted data. Some of the methods developed use the leading or trailing edge of the received pulses for synchronization. In radio transmissions, however, these edges are affected by noise and may shift to such an extent that the timing obtained becomes inaccurate and unreliable. A received signal after demodulation may be narrowed or widened as a result of noise (Fig. 1A). Methods to diminish the detrimental effects of noise have been developed using the center of the received pulse instead of the edges. Since laboratory tests have shown that the center of a modulated signal shifts noticeably less than the edges when affected by noise, more accurate synchronization may be obtained this way. At any signal-to-noise ratio tolerable for reliable communications, the pulse center remains relatively stable.
Under conditions that cause the pulse center to shift significantly, the problem becomes one of communication rather than of synchronization. This method uses the pulse center for synchronization and determines this center by digital means in a simple but unique manner. Three extra bits, called sync bits, are inserted periodically between equal groups of data bits when synchronization is desired. The logic levels of the extra bits may be either 0-1-0 or 1-0-1. The center bit of the three is called the sync pulse, since the center of this pulse is used to synchronize the system. **PRINCIPLE**—The basic principle of the method is shown in Fig. 1B. Time $B$ represents the length of the received sync pulse. As shown in Fig. 1A, this pulse may be shorter or longer than the true bit length, depending upon how the transmission has been affected by noise. Time $C$ is the period from the end of the received sync pulse to the start of the first data bit. If the center of the sync pulse has not shifted (that is, if transmitter and receiver are in sync) time $C$ will equal time $A$, regardless of the length of $B$. This leads to the following equation, which forms the basis for this synchronization method: $B + 2C = 3$ bits. A binary counter performs the arithmetic operation shown in the equation above. The length of the received sync pulse, time $B$, is measured by the counter while counting at its normal rate. At the end of time $B$, the counting rate is doubled and the counter continues at that speed until the count accumulated represents the time of 3 bits. As indicated in Fig. 1B, at this point the counter is synchronized to the transmitted data, since this count of three bits is reached at the end of the third sync bit. If the center of the sync pulse has not been affected significantly by noise, the accuracy of synchronization will be to within one clock period of the counter. Therefore, the accuracy is a function of the clock rate and the number of counter stages used. 
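The counting scheme above can be sketched in a few lines of Python (the clocks-per-bit figure is chosen to match the 600-bit/sec, 9.6-kc system described later; the function name is ours). Counting the sync-pulse width at normal rate, then doubling the rate until the count equals 3 bits, always finishes a fixed 1.5 bit-times after the pulse center, no matter how noise stretches or shrinks the pulse:

```python
# Sketch (assumed parameters): show that the B + 2C = 3 bits counter
# trick locks onto the sync-pulse CENTER, not its edges.
CLOCKS_PER_BIT = 16          # e.g. a 9.6-kc clock at 600 bits/sec

def sync_end_time(pulse_start, pulse_width):
    """Return the clock time at which the counter reaches 3 bits."""
    count = 0
    t = pulse_start
    # normal rate: one count per clock while the sync pulse is present
    while t < pulse_start + pulse_width:
        count += 1
        t += 1
    # double rate: two counts per clock after the pulse ends
    while count < 3 * CLOCKS_PER_BIT:
        count += 2
        t += 1
    return t

nominal = sync_end_time(100, CLOCKS_PER_BIT)        # undistorted pulse
narrowed = sync_end_time(101, CLOCKS_PER_BIT - 2)   # same center, narrower
widened = sync_end_time(98, CLOCKS_PER_BIT + 4)     # same center, wider
print(nominal, narrowed, widened)                   # -> 132 132 132
```

With an odd pulse width the doubled count overshoots by one, which is exactly the "within one clock period" accuracy the text quotes.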
Figure 1C illustrates the application in which the method was first checked. In the system described, 8-bit characters are utilized. The three sync bits are inserted between characters, to provide resynchronization before the start of each new character. The transmission speed is 600 bits per second and the receiving system is controlled by an 8-stage binary timing counter operated by a 9.6-Kc clock (Fig. 2). The timing counter has the triple function of determining when to sample the received data bits for their logical level; of counting these bits to determine when the sync pulse is to be expected; and of finding the center of the sync pulse to reach synchronization at the beginning of the first data bit. Since in this case only eleven bits have to be counted in one cycle, the counter is reset automatically to the count of 01010000 at the end of the eighth data bit. **TIMING COUNTER**—The timing counter (Fig. 2) is a typical binary counter with eight stages of internally cross-coupled flip-flops. When the COUNT NORMAL signal is a logical one, the COUNT FAST signal is a logical zero. In this condition, the one and zero inputs to flip-flop 1 are enabled through AND 1 or OR 1 respectively. Since the flip-flop is cross-coupled, the next clock pulse will provide coincidence at one of its inputs and it will change state. Each time the first stage goes to a one state, flip-flop 2 is enabled through OR 2 and will change state on the next clock pulse. Each of the succeeding stages of the counter is enabled when the stage preceding it is in a one state and will change state on successive clock pulses. Thus the count produced is binary at a speed determined by the clock rate. When the COUNT FAST signal is a logical one, the COUNT NORMAL signal is a logical zero. In this condition, the first flip-flop stage is disabled through AND 1 and the counter is enabled at its second stage through OR 2. This causes the remaining stages to count at twice the normal rate. 
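A plausible reading of the preset value (our arithmetic, not spelled out in the article): eleven bits per cycle at 16 clock periods per bit is 176 counts, and 256 minus 176 is 80, which is binary 01010000, so presetting the 8-stage counter there makes it roll over after exactly one 11-bit cycle:

```python
# Why the counter is preset to 01010000 (assumed interpretation):
# an 8-stage binary counter holds 256 states, and one cycle of
# 8 data bits + 3 sync bits at 16 clocks per bit is 176 counts.
CLOCKS_PER_BIT = 9600 // 600          # 16 clocks per bit
BITS_PER_CYCLE = 8 + 3                # eight data bits + three sync bits
preset = 256 - BITS_PER_CYCLE * CLOCKS_PER_BIT
print(format(preset, '08b'))          # -> 01010000
```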
AND gates 2 and 3 permit setting the counter at a particular count. As long as final-stage flip-flop 8 is in its zero state, its one output is a logical zero and AND gates 2 and 3 are enabled through inverter N2. With these gates enabled, flip-flops 5 and 7 count normally. When the last stage changes to a one state, its inverted output disables AND gates 2 and 3, and the following clock pulse will set flip-flop stages 5 and 7 to a one state. This provides a count of 01010000 at the end of the eighth data bit. This is the count indicated in the timing chart (Fig. 3) at END OF LAST DATA BIT. The count of 01010001, set in the counter by the recognition of the sync pulse, is also indicated. The status of the various stages of the timing counter at the shift from normal to fast count and at the return to normal count is shown in the timing chart at END OF SYNC PULSE and END OF SYNC PERIOD. **EXAMPLE**—For illustration, a clock-rate difference of 3.1 percent between the transmitting and receiving systems is assumed. If the two systems are in synchronism at the start of the first data bit, the accumulated error at the end of the eighth data bit will be 416 microseconds. This error must be corrected during the following sync period by synchronizing to the center of the sync pulse. With sync bits of zero-one-zero, the incoming data is checked for a logical one level throughout the interval from the center of the first sync bit to the center of the last sync bit. As soon as a logical one level is detected, it is recognized as the start of the sync pulse and the timing counter is set by the following clock to 01010001. This is one count greater than when the counter is reset automatically at the end of the eighth data bit, to account for the fact that one clock period has passed between the beginning of the received sync pulse and the setting of the counter. For the duration of the sync pulse, the timing counter is counting at normal speed.
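The example's numbers can be checked directly (exact rates assumed). The raw drift works out to about 413 microseconds; since the counter can only correct in whole clock periods of about 104 microseconds, this appears as roughly four periods, the 416-microsecond figure quoted, and the residual error after centering on the sync pulse is at most about half a period:

```python
# Numbers behind the example (rates taken from the article).
clock_hz = 9600.0
bits_per_sec = 600.0
clock_period_us = 1e6 / clock_hz               # ~104.2 microseconds
char_time_us = 8 / bits_per_sec * 1e6          # eight data bits = 13,333 us
drift_us = 0.031 * char_time_us                # 3.1 percent rate difference
print(round(clock_period_us, 1), round(drift_us),
      round(clock_period_us / 2))              # -> 104.2 413 52
```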
At the end of the sync pulse, as soon as a zero level is detected on the incoming data, the counter is advanced through its second stage with the first stage disabled. Thus the counter now counts at twice its normal rate. When the clock count equivalent to three bits has been reached (01111111), the timing counter has established synchronization, within one counter clock period, to the transmitted data by using the center of the received sync pulse. In this example, the timing error after synchronization is only 52 microseconds. The first stage of the timing counter is again enabled at the beginning of the first data bit, allowing it to count at normal speed to control the timing for reception of the next character. DYNAMIC NULL New Method for Measuring Equipment Performance With simple laboratory components and an oscilloscope, this method allows precise parameter measurements and immediate display. By JOHN L. HAYNES, Consulting Engineer, Redwood City, Calif. THIS DYNAMIC NULL method of measuring equipment performance parameters requires only simple laboratory instruments. It is useful for product testing, for calibrating and adjusting d-c or a-c amplifiers, and even for the complete f-m/f-m data links of many instrumentation systems. By this method, the effect of any component or power supply change on almost all equipment characteristics can be immediately displayed on an oscilloscope. Many modern instrumentation systems offer overall performance accuracies ranging from 0.05 to 1 percent; equipments making up such systems require accuracies two to five times better. An amplifier, for example, may specify gain as $1,000 \pm 0.01$ percent, linearity as 0.05 percent, bandwidth as 100 Kc, output impedance as 0.1 ohm and input impedance as 100,000 ohms. Measurement of such parameters by conventional input-output techniques requires highly accurate test equipment. 
Furthermore, although measurements of d-c transfer characteristics can be made to better than 0.02 percent with a Kelvin bridge or potentiometric voltmeter, the process is tedious. Linearity can be determined by calculating the largest deviation from a plotted or calculated best straight line. If the amplifier is a-c coupled, or for a-c gain and linearity measurements, the difficulty increases, since measurement of a-c signals to 0.01 percent or even 0.1 percent absolute accuracy is virtually impossible with usual laboratory instruments. Measurement of linearity by harmonic analysis would be difficult even if generators with 0.05 percent distortion were available. DYNAMIC NULL — With this method, an oscilloscope gives a dynamic plot of equipment errors in gain or linearity, offset or drift, effects of input and output impedance, phase shifts, all noise and hum, and transient effects such as overshoot, ringing, saturation or slewing errors. Photographs of the oscilloscope display can serve as part of the performance record of individual units. As with a conventional bridge, the test accuracy is dependent on an attenuator. A resistive attenuator has no offset, has linearity better than 10 parts per million, and can be made with negligible phase shift. An absolute gain error measurement is limited by the tolerances of the attenuator (for f-m/f-m tests, gain is usually one; no attenuator needed). However, precision attenuators are usually available or can be made with precision resistors for each amplifier. The attenuator should have an output impedance equal to the recommended source impedance for the amplifier, which should also be terminated in the proper load. A dynamic null test requires only an a-c generator, a few resistors and an oscilloscope. Generator output waveform is not critical—it may be sinusoidal, ramp or triangular, and may have poor distortion and amplitude stability (it could even be a signal from the secondary of a filament transformer). 
Resistors need not be precise for linearity measurements, but should be for accurate gain measurement. **MAKING TESTS**—To test an amplifier with positive gain, the generator is connected to it through the attenuator (Fig. 1A); for negative gain, the connection is as in Fig. 1B. Attenuator loss is adjusted to equal the specified amplifier gain; amplifier output should then equal generator output. The oscilloscope is next connected to the generator output and to the amplifier output, making the vertical deflection proportional to the difference between the attenuator input and amplifier output. This displays any amplifier error. The oscilloscope horizontal input is then connected to make the horizontal deflection proportional to the generator signal (in Fig. 1A, positive voltages will read from left to right since the horizontal trace is inverted). The resulting trace is amplifier error plotted against signal amplitude at the generator frequency. Any transfer error in the system under test shows up at a glance, allowing quick adjustment of errors. For instance, with the generator adjusted for 3 v rms output (10 v p-p) and the scope vertical sensitivity set at 5 mv/cm, a 0.1 percent gain error will be displayed as a 1-cm vertical deflection. Two precautions should be noted: (1) the scope should be re-zeroed periodically to eliminate drift; (2) shielded leads should be used on the output of the attenuator, because signal level is normally low and any noise or hum entering the amplifier terminals at this point will show up as an apparent amplifier output error. A signal ground point should be found which results in the minimum hum and pickup. The ground point may be dictated by an existing ground in the amplifier. The table shows scope patterns of typical errors, their cause and cure. Figures 2A and 2B show a setup for sensitive measurements of input and output impedance.
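A minimal numeric sketch of what the null display shows (the transfer model and names are illustrative, not the article's circuit): with the attenuator folded into a unity nominal transfer, the vertical channel sees only the error, and a 0.1 percent gain error on a 10-v p-p sweep peaks at 5 mv, one centimeter at 5 mv/cm:

```python
# Dynamic-null sketch: the scope vertical sees amplifier output minus
# attenuator input, so only the ERROR is displayed. Model is invented.
def null_trace(vin, gain_error=0.001, cubic=0.0):
    """Unity nominal transfer with a small gain error and an optional
    odd-harmonic (cubic) distortion term."""
    vout = (1.0 + gain_error) * vin + cubic * vin ** 3
    return vout - vin                   # what the vertical channel sees

sweep = [float(v) for v in range(-5, 6)]       # 10 v p-p ramp, 1-v steps
err_mv = [1000.0 * null_trace(v) for v in sweep]
print(round(max(err_mv), 3))                   # -> 5.0 (mv at peak)
```

Setting `cubic` nonzero tilts the trace into the S-shape of odd-harmonic distortion, the case Fig. 2C addresses by retrimming the gain.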
The impedances can be calculated from these equations: \[ Z_{in} = R_s + \Delta R_e (E - \Delta E)/\Delta E \] where \( \Delta E \) is obtained by closing \( S_1 \); and \[ Z_o = R_L (\Delta E R_L + \Delta E \Delta R_L)/(E \Delta R_L - R_L \Delta E) \] where \( \Delta E \) is obtained by closing \( S_2 \). Trade-offs in system errors for best overall output errors are easily assessed. Figure 2C shows the display for an amplifier with no gain error but with odd harmonic distortion which apparently puts the output out of spec. By increasing the gain slightly the curve is tilted to bring the output within spec; thus a unit which would have been rejected by a simple linearity test is proved satisfactory. All of the preceding measurements can be made over time, temperature, power supply variations and/or component substitutions, allowing a complete check of the amplifier under rated environment; with one measurement it is possible to display simultaneously all amplifier errors. QUICK, STRAIGHTFORWARD AND INFORMATIVE Although most examples given here are for amplifiers, this dynamic null measurement method is equally useful for evaluating transfer gain of many other subsystems — f-m modulator-demodulator units, or other voltage-to-frequency converters, analog-to-digital converters, and so forth. This method, says author Haynes, is quick, straightforward and informative, and can even be used by relatively unskilled lab technicians. Semipermanent Memory: LATEST USE Experimental twistor memory with 7,680 bits obtains semipermanent information storage by automatically resetting the twistor bits to their original state after each read pulse. Holes punched in removable copper sheet inhibit writing in desired bit location. EXTENSIVE USE of digital storage techniques in recent years has led to investigation of various information storage devices and methods. 
One of these devices, the twistor\(^1\), is being used as the storage element in an experimental 7,680-bit semipermanent card-changeable memory built to evaluate the overall quality and operating characteristics of a moderate quantity of twistor element. The memory also indirectly measures the effectiveness of fabricating apparatus and procedures and establishes a reference point for circuit design requirements and improvement of existing techniques.

LATEST USE FOR TWISTORS—By K. E. KRYLOW, J. T. PERRY, JR., and W. A. REIMER, Staff Engineers, Automatic Electric Laboratories, Inc., Northlake, Illinois

HIGH-PERMEABILITY plates are affixed to both sides of the code card in this partially assembled memory plane (photo).

**BASIC MEMORY**—The simplest twistor memory consists of a length of twistor and a solenoid concentric with the twistor (Fig. 1A). Because of the helical orientation, the direction of magnetization in the ribbon can be reversed by the magnetic field produced by the current flowing in the twistor core, the current flowing in the solenoid, or a combination of the two. Because of the material's square-loop properties, the component of the magnetic field along the ribbon must exceed a threshold value before such reversal occurs. Three current pulses are required to operate a twistor memory in the temporary-storage mode. Of these, two (\(I_{sw}\) and \(I_{ww}\)) are used for writing and one (\(I_{sr}\)) for reading. A coincidence of write pulses through the core of the twistor and the solenoid is used to write ONES into the memory.

TWISTOR memory (A) uses coincidence of current pulses (\(I_{sw}\) and \(I_{ww}\)) to set the magnetic material under the solenoid to the 1-state. Readout is by current pulse \(I_{sr}\) through the solenoid with readout at point A. Two bits of the memory are shown in (B) while (C) shows a five-turn, printed-wire solenoid. Equivalent circuit (D) shows eddy-current return paths far removed from the copper while (E) shows return path near the copper—Fig. 1

The amplitudes of the write pulses are such that neither pulse is capable of switching the twistor by itself. However, their combined effect during coincidence is sufficient to put the twistor into the ONE state. Read-pulse polarity is opposite to that of the solenoid write pulse and its amplitude is large enough to insure full switching. The read operation sets the bit into the ZERO state and it remains in this state until it is reset to the ONE state, regardless of the number of times it is read out. When a twistor bit in either state is interrogated, a voltage is induced in the twistor core. The voltage induced during the transition of an interrogated bit from the ONE state to the ZERO state is much larger than that induced when the bit is originally in the ZERO state and no change of state occurs. Consequently, the two states are readily distinguishable. In practice, a number of twistors pass through any solenoid and many solenoids can be placed at intervals along the twistors. Each solenoid represents one address and one twistor represents one bit in every address. Each twistor is paired with a plain copper wire to form a transmission-line pair. Properly terminated, this arrangement reduces external noise pick-up and provides uniformity of transmission characteristics.

**SEMIPERMANENT STORAGE**—Two approaches to semipermanent information storage are available. One is to find a means by which the twistor bits can be kept in the desired state as long as necessary, with the state of the bits ascertained in some way without changing it. The other is to find a means by which the twistor bits can be automatically reset to their original state after each read pulse. The memory described here is based on the second approach. Physically, the twistor-and-solenoid arrangement differs from that described above.
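The destructive-readout behavior of the basic (temporary-storage) mode described above can be sketched as a toy state machine (the class and return values are illustrative):

```python
# Destructive-readout sketch: writing a ONE needs coincidence of both
# pulses; reading always leaves the bit at ZERO, and the induced output
# is large only on a ONE -> ZERO transition.
class TwistorBit:
    def __init__(self):
        self.state = 0
    def write(self, wire_pulse, solenoid_pulse):
        if wire_pulse and solenoid_pulse:   # neither pulse alone switches it
            self.state = 1
    def read(self):
        was_one = self.state == 1
        self.state = 0                      # read resets the bit to ZERO
        return "large" if was_one else "small"

bit = TwistorBit()
bit.write(True, False)      # half-select: no switching
print(bit.read())           # -> small
bit.write(True, True)       # coincident write sets the ONE state
print(bit.read())           # -> large
print(bit.read())           # -> small (stays ZERO until rewritten)
```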
The twistor element is placed outside the solenoid and a thin copper sheet is placed over the twistor (see Fig. 1B). During the rise and fall of solenoid current $I_s$, eddy currents $I_e$ are induced in the copper sheet by the time-changing flux linkages. At the twistor location, the magnetic field intensity associated with the currents induced during the rise of the solenoid current aids the field produced by the solenoid. The rate of rise and the final value of the solenoid current can be adjusted so that the magnetic field intensity due to the solenoid and the eddy currents combined is sufficient to switch the twistor, while the magnetic field intensity due to the solenoid current alone is not. Under such conditions, the twistor will switch when the copper sheet is present over the twistor but will not switch when it is absent. In the memory, the presence of copper over the twistor bit codes a ONE while the absence of copper codes a ZERO. This is accomplished by punching holes in the copper sheet over the twistor bits which are to be coded ZERO. If two consecutive solenoid pulses of opposite polarity are used, all twistor bits associated with the selected solenoid which are coded ONE will switch to the ZERO state and then back to the ONE state. Bits which are coded ZERO will remain in this state at all times, since the magnetic field intensity is insufficient to switch them.

WHAT'S A TWISTOR? It's a magnetic information storage device. The original twistor\(^1\) was a magnetic wire under torsion, hence its name. At present, the twistor consists of a copper wire on which a ribbon of square-loop magnetic material is wound in helical fashion. Through selection of materials and processing, the easy direction of ribbon magnetization is made to lie along the ribbon. Information is stored in binary form using the two remanent magnetic states.

**SOLENOIDS**—A switch-core matrix is a convenient way to provide two consecutive pulses of opposite polarity and at the same time supply the means to select a solenoid in the memory. However, no switch core could be found that would give the relatively high currents necessary to switch the twistor with single-turn solenoids. The situation was solved by devising a special five-turn printed-wire solenoid configuration as shown in Fig. 1C. The solenoid is a figure eight in which the section common to the two loops has twice as many conductors as there are in the outer portions of each loop. The central portion of the solenoid is called the information zone, and the outer portions of the solenoid are the return paths. Several variations of this solenoid have been made, including one that provides bipolar outputs—positive ONES and negative ZEROS. The presence of the return paths near the copper sheet increases the induced current flow over the information zone of the solenoid. The effect can be explained by considering the copper sheet over the solenoid as a two-loop circuit, where the common branch represents the information zone. When the return paths are far removed from the copper, a generator, representing the induced voltage, exists only in the common branch (Fig. 1D). When the return paths are near the copper, additional generators are inserted in the outer branches of the circuit (Fig. 1E) that aid the generator in the center branch. This increase of solenoid efficiency permits the use of lower drive-current amplitudes, but imposes an upper limit on the read current. Increasing the read current beyond a certain magnitude will cause the portions of the twistor over the return paths to switch in opposition to the twistor over the information zone, thereby reducing twistor signal output. A memory based on these principles was built and operated successfully. Only small variations of the solenoid pulse magnitudes could be tolerated because of the spread in twistor characteristics.
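The copper-sheet coding rule reduces to a threshold test. In this hedged sketch the field values are invented; only the inequality relationships are taken from the text (solenoid alone below the switching threshold, solenoid plus eddy currents above it):

```python
# Threshold sketch (arbitrary units, assumed values): the twistor
# switches only when the combined field exceeds its threshold.
H_SOLENOID = 0.7      # field from solenoid current alone
H_EDDY = 0.5          # extra field from eddy currents in the copper sheet
H_THRESHOLD = 1.0     # square-loop switching threshold

def bit_switches(copper_present):
    h = H_SOLENOID + (H_EDDY if copper_present else 0.0)
    return h > H_THRESHOLD

# punching a hole (removing copper) codes a permanent ZERO
card = [True, False, True, True]          # copper / hole pattern
readout = [1 if bit_switches(c) else 0 for c in card]
print(readout)    # -> [1, 0, 1, 1]
```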
To increase allowable variation of the solenoid pulses from nominal operating magnitudes, plates of high-permeability magnetic material were placed over the copper sheets. This addition improves the operation of the memory in two ways. It permits the use of higher solenoid currents without switching the bits which are in the ZERO state. Also greater control over the information content of any bit is transferred to the copper sheet; that is, the ratio of the magnetic field intensity due to the eddy currents to the magnetic field intensity due to the solenoid current is larger. These effects can be deduced by considering the distortion of the magnetic field caused by planes of high-permeability materials in the vicinity of the current-bearing conductors. **MEMORY PACKAGE**—The memory consists of a number of transmission line pairs passing over a series of solenoids and under a series of copper sheets. There are as many twistors as there are bits per address, and as many solenoids as there are addresses. The card-changeable feature is provided by attaching the copper sheets to removable code cards that can be inserted in slots in the package. The memory coding is changed by punching the required hole pattern in a copper sheet, attaching the sheet to a code card, and inserting it in the memory. A slight bow in the copper sheet and reasonably tight dimensional tolerances on the slots and code cards provide adequate proximity of the copper to the twistor elements. In designing a package using this coding scheme, all bits must be located accurately in the memory. The spacing between neighboring twistor elements in one direction, and between adjacent solenoids in the other, has to be kept constant. Twistor element spacing is established by encapsulating the required number of pretested twistor elements between two polyester tapes. 
In addition to locating the twistor elements accurately with respect to one another, the encapsulation minimizes the possibility of damaging the elements and simplifies the problems of storing and handling. Uniform solenoid spacing is maintained by printing solenoids on a flexible substrate, using conventional wire-printing techniques. The solenoids are printed in groups to minimize tolerance build-up, provide a convenient subunit within the memory and keep the package to reasonable dimensions. Each group contains eight solenoids of the type shown in Fig. 1C, and constitutes one plane of the memory. A partially assembled memory plane is shown in the photo. A backing plate and a tape containing the encapsulated twistors are affixed to opposite sides of each half of a solenoid group. Two spacer bars and a solenoid terminal strip are added, and the plane is assembled by folding, keeping the twistor tapes to the inside. The spacer bars and the terminal strip are side guides and back stop for the code card, providing an accurate index of the coded copper sheet to the twistor bit locations. Matching hole patterns are provided in the twistor tapes, solenoid groups and package hardware. All components are held in alignment, both during and after assembly, by pins passing through these holes. To build a memory package, these planes are assembled side-by-side along two lengths of twistor tape. Using the tape between planes as hinges, successive planes are stacked on top of each other. The package is completed by adding corner supports, top and bottom plates, and twistor terminal strips. **ELECTRONICS** — The system was constructed solely for evaluating overall performance of twistor elements and techniques used in fabrication, testing and packaging. Consequently, circuits associated with this unit function merely as a memory exerciser. 
The only design considerations, other than the drive requirements imposed by the memory package, were simplicity of construction and reasonable flexibility of programming. The various circuit blocks (flip-flops, diode gates, drivers) are of conventional design, and no special interconnection techniques are used. Since the principles of design, interconnection, and operation of the circuit blocks and the switch-core matrix are covered extensively\(^{4-7}\), only a brief description of overall operation will be given. Referring to Fig. 2, a particular set of logic levels is provided by the seven flip-flops. This set activates one \(X\) gate and one \(Y\) gate, which in turn place the corresponding \(X\) and \(Y\) switches in the ready condition. An enabling pulse \(S\) from the monostable multivibrator \(MS\) closes the switches and simultaneously disconnects the two current shunts from the constant-current-generator drivers. The current pulses sent into the two select lines thus chosen reverse the state of the ferrite switch core at their intersection, causing a READ current pulse to flow in the solenoid connected to that core. When $S$ terminates, the select-line switches open, the shunt switches close, and a d-c bias resets the selected core to its original state, causing a WRITE current pulse to flow in the solenoid. A new set of logic levels is set by the trigger pulse from the flip-flop driver and the entire process is repeated. A wide variety of programs can be set up by altering the flip-flop and gate connections. The output signals from the twistor are coupled into the sense amplifiers through 1:3 (nominal) ferrite pot-core transformers. The primary is center-tapped to ground, and presents an impedance of about 35 ohms to the twistor. Each sense amplifier consists of the transformer, two class-A voltage amplifiers, an emitter-follower buffer and an output discriminator-amplifier. A voltage gain of 80 is provided by the first four units.
The final discriminator stage presents ONE outputs as eight-volt pulses strobed during the interval of switch-enabling pulse $S$. Memory speed limitation is imposed by the characteristics of the switch-core matrix, and not by those of the twistor. The results of another phase of the twistor memory investigation indicate that metallic tape-wound cores would be more suitable as switch matrix elements. Substituted directly for the ferrite core matrix in this system, a tape-wound core matrix would allow a 20-percent reduction in matrix drive requirements and decrease the minimum read-write cycle to 5.0 microseconds. **NOISE**—The memory was susceptible to the influence of external magnetic fields, and excessive internal noise almost entirely masked the output signals. The first problem was completely eliminated by enclosing the memory module with magnetic shielding. The second problem required extensive investigative work before the solution was found. The circuit layout and component placement for the system were based on the extrapolation of data previously obtained from test arrays. The general trend was to place various components as close as possible to each other to keep the lengths of interconnecting leads to a minimum, especially those carrying low-amplitude signals. The switch-core matrix and the first stage of the sense amplifiers were placed directly on the memory module. Changes were made as the troubleshooting of the system proceeded. Part of the noise was eliminated by removing the switch-core matrix from the memory and placing it outside the magnetic shield. The source of the remaining noise was finally traced to capacitive coupling among the relatively long leads between the first and second stages of the sense amplifiers. The problem was solved by removing the first stage of amplification from the memory package, moving it closer to the next stages, and replacing the interconnecting wiring by a shielded twisted pair.
The memory has been in operation since March, 1962 without any malfunction. The system, built as a feasibility model, has now found application as a 60-channel program generator. Since completion, several new memory packages have been developed, featuring increased information-storage density as well as various components designed to facilitate assembly and wiring. An electrically alterable semipermanent twistor memory is under investigation. **REFERENCES** (1) A. H. Bobeck, "A New Storage Element Suitable for Large-Sized Memory Arrays," Bell System Technical Journal, 36, p 1,319, Nov. 1957. (2) J. A. Winkelman, "A Myriabit Magnetic Core Matrix Memory," Proc IRE, 41, 10, p 1,497, Oct. 1953. (3) A. Ashley, S. Bradapies, E. Cohler, M. Stern, and H. Ullman, "Core Memory Systems," Sylvania Technologist, XII, 4, p 10, Oct. 1959. (4) R. P. Schneider and G. H. Barnes, "High Speed, Word Organized Memory Techniques," Electronic Design, p 40, June 15, 1959. (5) L. P. Hunter, *Handbook of Semiconductor Electronics*, 1st Edition, McGraw-Hill Book Company, Inc., New York, 1956. (6) J. Millman and H. Taub, *Pulse and Digital Circuits*, McGraw-Hill Book Co., Inc., New York, 1956. INTRODUCING REALISM Many training aids and simulators are unable to reproduce the real-life effects of the equipment they are simulating. A million-dollar aircraft simulator, for example, still can't provide the sustained acceleration and gravitational forces exerted by the simplest airborne maneuver. This realism deficit has now been largely corrected, for sonarmen at least, by the equipment described here, which puts an important ingredient—ship's wakes—back into the picture. By M. KAUFMAN and E. LEVINE, General Applied Science Laboratories, Inc., Westbury, L. I., New York REALISTIC SONAR TRAINER Digital equipment generates artificial wakes and relates them to the target ship's speed, course and position. Increased realism helps condition classroom trainees to events at sea.
THE WAKES OF SHIPS are simultaneously a hindrance and an aid to sonar operators. A wake, which may last 15 minutes or longer, acts as a reflecting surface that returns echoes to active sonars. It is useful because it leads the operator to the ship that produced it, but a hindrance because it obscures echoes that come from its far side. Methods for simulating ships' wakes for active sonar displays have been considered by several investigators with varying degrees of success. Two basic equations were derived in 1956, based on theoretical and empirical considerations. These relations describe the intensity of a wake echo and the effects of interference produced by other wakes interspersed between the sound (sonar) source and the target. **FUNDAMENTALS**—Based on these two equations an attenuation factor, $A$, can be expressed

$$A = \underbrace{F(S_T)\,F_1(T)}_{\text{interposed wake}} \times \underbrace{F_2(T)\,F(\alpha)\,F(\rho)}_{\text{reflecting wake}}$$

where $F(S_T)$ is a function of the target speed while laying the interposed wake, $F_1(T)$ is a function of the age of the wake segments between the reflecting wake and the sonar, $F_2(T)$ is a function of the age of the reflecting wake, $F(\alpha)$ is a function of the target aspect $\alpha$ (bearing angle $\theta$ minus heading angle $\phi$), and $F(\rho)$ is a function of wake range. This article describes digital equipment for simulating ships' wakes as they affect sonar receivers using multiple hydrophones and ppi displays. The equation does not include the effects of target depth or target accelerations, but these effects may be included.

WAKE SIMULATOR keeps up-to-date with the target ship so that the wake is modified in accordance with the target ship's speed, course and other parameters. Maximum wake storage is 15 minutes, which conforms to a real-life decay rate of a ship's wake of 6 db/minute—Fig. 2

SIMULATES SHIP'S WAKES

**SIMULATOR DESCRIPTION**—Figure 1 is a block diagram of the wake simulator.
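Of the factors in the expression for $A$, the only one the article quantifies is age decay at 6 db per minute; a short sketch (the helper name is ours) shows why 15 minutes of wake storage is ample:

```python
import math

# Age-decay sketch: the 6-db/minute figure is from the article; the
# functional form (exponential amplitude decay) is the standard reading.
DECAY_DB_PER_MIN = 6.0

def age_factor(age_min):
    """Amplitude attenuation of a wake segment of the given age."""
    return 10.0 ** (-DECAY_DB_PER_MIN * age_min / 20.0)

db_down_at_15 = -20.0 * math.log10(age_factor(15.0))
print(round(db_down_at_15))   # -> 90 (a 15-minute-old segment is 90 db down)
```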
Control data for both the sonar and the target is supplied in the form of initial position and velocity. The sonar and target tracks are computed by integrating the velocities. The resultant target position is sampled at a rate dependent on the target velocity, rather than at fixed time intervals, providing position information at incremental target displacements and reducing the amount of data storage needed for slow-moving targets. The sampled target position data is stored in a circulating magnetostrictive delay line together with predetermined functions of target velocity, heading angle and other data. The resolution of the stored position is 60 yards for a 100-mile-square area. A maximum of 15 minutes of wake information may be stored, which, considering the high rate of age attenuation (6 db/min), is adequate. The wake position is assumed to be identical (in plan view) to the stored past position of the target, thus neglecting the effects of currents on the wake position. The latest 15 minutes of target track are held in the absolute target storage and compared to the sonar position to determine the relative wake-sonar position. Other operations and computations give data for solution of wake strength as defined by the relative data computer and processor, Fig. 1. The resulting signals are held in the intermediate storage. The wake range, the speed of the vessel at the time the wake was laid, the wake segment age, and related data are routed from the intermediate storage (which is constantly updated) to the echo amplitude-and-location computer. Based on the range, age, speed and aspect of the wake, an attenuation factor, $A$, is derived in the computer. The range number, $\rho$, and the corresponding $A$ number are rearranged in terms of azimuth angle relative to a line passing stem-to-stern through the sonar-carrying "ownship".
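The displacement-based sampling scheme just described can be sketched in a few lines; the function and the demonstration tracks are hypothetical, with only the 60-yard figure taken from the stated resolution.

```python
# Sketch of incremental-displacement sampling: keep a track point each
# time the target has moved a fixed increment, rather than at fixed
# time intervals.  Names and the demo tracks are illustrative; the
# 60-yd increment matches the article's quoted position resolution.

def sample_track(positions, increment_yd=60.0):
    """positions: list of (x, y) in yards, one per time step.
    Returns the subset kept by incremental-displacement sampling."""
    kept = [positions[0]]
    for p in positions[1:]:
        last = kept[-1]
        if ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5 >= increment_yd:
            kept.append(p)
    return kept

# For the same elapsed time, a slow target (1 yd/step) generates far
# fewer stored samples than a fast one (30 yd/step).
slow = [(float(x), 0.0) for x in range(300)]        # 1 yd per step
fast = [(30.0 * x, 0.0) for x in range(300)]        # 30 yd per step
print(len(sample_track(slow)), len(sample_track(fast)))  # → 5 150
```

This is the storage economy the text points to: the slow track above costs 5 stored points where fixed-interval sampling would cost 300.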
These two numbers are held in azimuth sequence in the magnetostrictive delay line display storage, which circulates in synchronism with the sonar scanning rate at 120 cycles. This synchronous circulating display storage is the key to simulation of wake effects for high speed scanning sonar, where both the sonar and targets may have motion. Because the information in the display storage is arranged in synchronism with the video scan, only two numbers per wake segment are stored. The display storage is divided into 48 sections, one for each beamwidth (7.5 degrees for the particular sonar of interest). Each section may contain several \( \rho \) and \( A \) numbers to accommodate target tracks that cross back and forth in the same 7.5 degree sector. The range data is compared to a range sweep initiated by the sonar ping (in the sonar presentation converter). When a comparison is made, the wake segment attenuation number corresponding to that wake range is converted to an analog voltage for modulating a carrier. This signal is routed to the video amplifiers where, due to synchronism between the display storage and the ppi, it is displayed at the correct angular position. Figure 1 also depicts an abridged wake simulator. The bulk of the total simulator (track generator, digital integrators, samplers, data processing, relative position computation, intermediate storage) is eliminated. Since the innovation in this simulation technique is the synchronous display storage, only this portion of the simulator was constructed and tested. The computations described by the equations were handled on a general-purpose computer, with the resultant range, bearing angle, and attenuation numbers, \( A \), supplied on punched paper tape. The punched tape was used as the input to the abridged simulator and entered into the echo amplitude-and-location computer, as shown. The data was processed and displayed on an operational sonar.
**SIMULATOR DETAILS**—Figure 2 is a detailed block diagram of the abridged wake simulator. The circulating display storage provides video signals for the sonar display. This digital memory has an access time of nominally 8 ms, which corresponds to the sonar ppi scan rate of 120 cycles. The display storage is divided into 48 slots, each corresponding to a 7.5 degree increment of target bearing angle. Within each 7.5 degree segment, capacity is provided for storing seven samples of range (\( \rho \)) and attenuation factor (\( A \)) of a wake, enough to accommodate seven target crossings at the same bearing. The 8-bit \( \rho \) number quantizes the range to 60 yards. The attenuation, \( A \), is held as a 14-bit word to provide greater than 80 db of dynamic range. Two blank bit spaces are provided for each sample of \( \rho \) and \( A \), thereby forming a basic 24-bit word length. Consequently the display storage capacity is 7 words/slot \( \times \) 48 slots \( \times \) 24 bits/word = 8064 bits. At a 1.024-Mc clock rate, the display storage recirculation time is 7,875 \( \mu \)s (equivalent to 127 cps). To obtain synchronism between the ppi and the display storage, the motor that drives the 48-position sonar scanning switch and the sonar sweep generator is disconnected. A computer-synchronized 127-cps signal drives a synchronous motor which, together with a resolver and a ping-synchronized ramp, provides the sonar ppi scan. Referring to Fig. 2, the \( \rho \), \( A \) and \( \theta \) (bearing angle) information stored on paper tape is converted to serial digital numbers and temporarily stored in the buffer store. A bearing angle counter, which counts from 1 to 48, selects any 7.5 degree slot. Every 168 \( \mu \)s (7.5 degrees) the counter is stepped. The counter is compared to the temporarily stored number in the bearing angle comparator. The display storage line containing the range and attenuation numbers is synchronized to the bearing angle counter.
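The storage sizing quoted above can be checked by direct arithmetic; the short fragment below simply recomputes the published figures (the variable names are ours).

```python
# Recomputing the display-storage figures quoted in the text.
BITS_PER_WORD = 8 + 14 + 2   # rho + A + two blank bits = 24-bit word
WORDS_PER_SLOT = 7           # seven (rho, A) samples per 7.5-deg slot
SLOTS = 48                   # 48 slots x 7.5 deg = 360 deg
CLOCK_HZ = 1.024e6           # 1.024-Mc clock

capacity_bits = BITS_PER_WORD * WORDS_PER_SLOT * SLOTS
recirc_us = capacity_bits / CLOCK_HZ * 1e6

print(capacity_bits)           # → 8064 (bits)
print(round(recirc_us, 1))     # → 7875.0 (microseconds)
print(round(1e6 / recirc_us))  # → 127 (cps, matching the 127-cps drive)
```

The recirculation rate works out to 127 cps, which is exactly the computer-synchronized signal used to drive the scan motor.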
When the counter is in position 1, it is possible to place information that has a bearing angle between 0 degrees and 7.5 degrees into the display storage line. Information related to a bearing between 7.5 and 15 degrees is entered into slot 2 (168 \( \mu \)s later) and so on, until, when the counter has reached the 48th slot, the 352.5-to-360 degree portion of the delay line is available. A digital range sweep is initiated by the sonar ping, after which a step sweep is generated consisting of 8 ms steps, each having a weight of 7.5 yards. The simulated ping width is added to each range step. The range plus the ping width digital number and the range number alone represent the upper and lower limits of each range step. Once the input information is entered in the display line, it is repeatedly compared to this simulated radial sweep. When a segment of wake is found to have a range lying between the upper and lower limits of the ping, the gate is activated by the range limit comparator and the \( A \) number is read out of the display storage. This attenuation number, which defines the necessary intensity of the video, is converted to an appropriate analog signal, summed and scaled with special effects, and then routed to the sonar video amplifier. After all the numbers in the display storage line are scanned, compared, and displayed, a maximum range number is detected. At this time, all the information in the delay line has been displayed and a total wake presentation has been viewed. When the maximum range detection is accomplished, the old information in the delay line is erased and the tape reader updates the line with the latest wake information. This process of reading-in, scanning, displaying and reading-in again is repeated every 20.5 seconds (the time between pings) for 15 minutes. The new information is a continuation of the previous information but contains data for an additional 20.5 seconds of movement and aging.
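The range-limit comparison described above (stored \( \rho \) against the stepped range sweep plus the simulated ping width) can be sketched as follows; the function and the demonstration numbers are illustrative.

```python
# Illustrative sketch of the range-limit comparison: a stored wake
# sample is displayed when its range number lies between the current
# sweep step and that step plus the simulated ping width.  The names
# and demo values are ours; ranges are in the article's 60-yd quanta.

def in_ping_window(stored_rho, sweep_rho, ping_width):
    """True when the wake sample falls inside the ping's range window."""
    return sweep_rho <= stored_rho <= sweep_rho + ping_width

# One bearing slot's stored (rho, A) samples, e.g. two track crossings.
slot = [(12, 0.8), (40, 0.3), (41, 0.25)]
sweep = 40                                  # current range-sweep step
hits = [a for rho, a in slot if in_ping_window(rho, sweep, ping_width=2)]
print(hits)  # → [0.3, 0.25]
```

Only the samples inside the ping window are gated out for display on this step; the sample at range 12 waits for an earlier point in the sweep.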
Thus a real-time display of the wake is generated. The authors acknowledge the contribution of Dominick Capuano of The Naval Training Device Center, Port Washington, N.Y., under whose direction this equipment was developed, under Naval Training Device Center's contract N61339-1099. ANNOUNCING THE VERSATILE NEW JERROLD VIDEO SWEEP GENERATOR MODEL 1015 for both wide- and narrow-band response testing from 1 kc to 15 mc High-pass and low-pass filters swept alternately on Model 1015 Video Sweep Generator, in wide-band mode (2-15 mc). Crystal filter swept in narrow-band mode (4 kc) with Jerrold Model LA-5100 Log Amplifier, x-y recorder and Model 1015 Sweep. Jerrold is proud to introduce this versatile, highly stable video sweep generator as the latest in its growing line of sophisticated measuring instruments. Engineered to combine characteristics of a very stable narrow sweep (20 cps residual FM) and a very wide sweep for video applications (10 kc to 15 mc), the Model 1015 provides narrow-band, wide-band, and continuous-wave output modes. Automatic or manual sweeping is provided by a front-panel selector switch. Center frequency is continuously variable from 1 kc to 15 mc in all three modes. In addition to a built-in marker generator on the wide-band range, provision is made for connecting two external marker generators. For fast quantitative measurements of response that otherwise would involve hours of tedious point-by-point compilation, it will pay you to investigate this stable new video sweep generator. Write for complete specifications. $2,540 FEATURES: - Wide-band, 0-15 mc; narrow-band, 0-400 kc; CW - Excellent stability in both narrow and wide modes - Better than 2v metered output in both modes - Low residual FM (20 cps on narrow band and CW) - Continuously variable sweep rate from 60/sec. to 1 per 2½ min. - Built-in high-output birdie-type marker generator Jerrold Electronics Corporation Industrial Products Division Philadelphia 32, Pa. 
A subsidiary of THE JERROLD CORPORATION BOOTH 3904-08 CIRCLE 87 ON READER SERVICE CARD Why depend on 'selected' transistors for your communications applications? See how Amperex production-run P.A.D.T.'s are designed for specific requirements! You need LOW NOISE AT HIGH FREQUENCY: 3-DOT SMALL GEOMETRY gives you low $r_{bb'}$ and high $f_T$, as in the P.A.D.T. 2N2495. You need STABLE HIGH GAIN: 2-DOT SMALL GEOMETRY gives you stable high gain, low, low capacity and high $f_T$, as in the P.A.D.T. 2N2654. You need POWER AT HIGH FREQUENCY: P.A.D.T. STRIPE GEOMETRY gives you lower $r_{bb'}$, high dissipation, high $f_T$ and high beta, as in the P.A.D.T. 2N2786. You need HIGH GAIN, LOW NOISE, LOW COST: 2-DOT LARGE GEOMETRY gives you a universal, high-performance transistor for a wide range of frequencies for entertainment, industrial and military applications, as in the P.A.D.T. 2N2084 (MIL-S-19500/213A NAVY), P.A.D.T. 2N2089, P.A.D.T. 2N2092, P.A.D.T. 2N2671 (VHF) and P.A.D.T. 2N2672 (RF). AMPEREX production-run P.A.D.T. transistors are immediately available from these and other authorized Industrial Electronic Distributors and in volume quantities from our semiconductor plant at Slatersville, Rhode Island. CIRCLE 88 ON READER SERVICE CARD AMPEREX semiconductor specialists have been engaged in a continuing technological program that brings the superior performance and high-level production benefits of P.A.D.T. to ever expanding applications in the areas of communications, radar, instrumentation and AM-FM-TV receivers. From this program there have emerged the four distinctive P.A.D.T. transistor geometries illustrated above...each with "DESIGNED-IN" parameters and performance characteristics of significance to specific end-equipment needs - and reproducible, IN MASS PRODUCTION, without selection...at P.A.D.T. production-run prices! As you are undoubtedly aware, P.A.D.T.
is the unique AMPEREX Post Alloy Diffusion Technique by which simultaneous diffusion and alloying take place under specified and controlled conditions. This exclusive process makes it possible to mass-produce superior communications-type transistors with a base layer of only a few ten-thousandths of an inch and with extremely high cut-off frequencies...and P.A.D.T. does this with consistently high yields and with unequalled uniformity, stability and reliability. For detailed data and/or applications engineering assistance, write to: AMPEREX Electronic Corporation, Semiconductor and Receiving Tube Division, Hicksville, Long Island, New York. (CANADA: PHILIPS ELECTRON DEVICES, LTD., TORONTO 17, ONTARIO.) CIRCLE 89 ON READER SERVICE CARD Designed for Application MAGNETIC SHIELDS Illustrated are a few of the stock mumetal or nicaloy magnetic shields for multiplier photo tubes and cathode ray tubes. Stock shields are available for all popular tubes. Custom designed shields are made for special applications. JAMES MILLEN MFG. CO., INC. MALDEN MASSACHUSETTS CIRCLE 301 ON READER SERVICE CARD FAST DELIVERY ANYWHERE MIDWEC INSTRUMENT GRADE MYLAR* DIELECTRIC CAPACITORS Best Shipping Interval In The Industry—3 Weeks Standard High Reliability and Quality Competitively Priced Specialists in Low Tolerance Units Approved for use in Talos, Minute Man, Titan, Typhon, Telephone Companies ■ 100% Test for dielectric strength, capacitance, insulation resistance and dissipation factor MIDWEC OSHKOSH, NEBRASKA write for data sheets and prices SALES OFFICE: 601 So. Jason St., Denver 23, Colo. TWX: 292-3891—Telephone SH 4-3481—DDD 303 *DuPont TM for Polyester Film. BROADBAND COAXIAL CRYSTAL HOLDERS ■ "Model 1011 Broadband Coaxial Crystal Holder is one of 44 models in our SAGELINE of coax crystal holders. It is designed for mixer and video detector applications using tripolar crystals such as the 1N358A. Its recommended frequency range, without a DC return, is 10-15,000 Mc.
DC returns are available for five specific frequency ranges. The DC returns are internal and do not increase holder size. Output capacity is 6-7 mmf, minimum. ■ The 1011 input is type N male; the output, BNC female. Maximum dimensions are 2½" in diameter and 2¼" in length. ■ The price of the basic 1011 holder is $20 . . . with any DC return, $30 . . . FOB Natick. Quantity discounts are available. Delivery is from stock for quantities up to 100. ■ If you have a question or would like to place an order, I hope you will call the number shown below. We'll look forward to talking with you." William J. Kennedy / Sales Manager SAGE LABORATORIES, INC. 3 HURON DRIVE • NATICK, MASS. • Tel: 617-653-0844 TWX: 617-653-6193 • Cable: SAGELABS-NATICK CIRCLE 302 ON READER SERVICE CARD March 15, 1963 • electronics Especially Designed for Rapid, Easy Installation! PIPE-THREADED CLEAR GLASS HERMETICALLY SEALED OBSERVATION WINDOWS HTW SERIES WINDOWS ARE AVAILABLE IN A WIDE RANGE OF SIZES STANDARD AMERICAN TAPER PIPE THREAD (NPT) FOR QUICK INSTALLATION Dimensions for HTW Series Sealed Windows (In Inches) | PART NO. | A | B | C | D | E | F | G | |----------|-----|-----|-----|-----|-----|-----|-----| | HTW-1 | .525| .687| .500| 3/8 | .125| .410| 5/32| | HTW-2 | .675| .875| .562| 1/2 | .200| .500| 3/16| | HTW-3 | .825| 1.062| .750| 3/4 | .200| .675| 9/32| | HTW-4 | 1.035| 1.437| .812| 1 | .250| .800| 1/4 | | HTW-5 | 1.300| 1.790| .875| 1 1/4| .250| 1.000| 11/32| | HTW-6 | 1.635| 2.000| .937| 1 1/2| .500| 1.300| 3/8 | | HTW-7 | 1.890| 2.500| 1.000| 2 | .500| 1.600| 7/16| Ruggedly constructed and hermetically sealed for AIR CONDITIONING, REFRIGERATION AND HEATING EQUIPMENT ELECTRICAL AND ELECTRONIC AND PHOTO SENSITIVE DEVICES CONTROL AND OTHER SEALED OR PRESSURIZED MECHANISMS E-I sealed glass windows are designed for observing internal conditions in hermetically sealed mechanical, electrical and electronic equipment. These windows are precision made to provide "space age" reliability... 
feature super-rugged E-I compression seals that have been proven for utmost reliability in major missile and space projects. For complete data and recommendations on your particular requirements, call or write E-I, today. ELECTRICAL INDUSTRIES A Division of Philips Electronics & Pharmaceutical Industries Corp. MURRAY HILL, NEW JERSEY—Telephone: 464-3200 (Code 201) Backed by 15 years' proof of RELIABILITY REAC® 500 ANALOG COMPUTER Typical unscheduled downtime LESS THAN 3% REAC 500 is the proud successor to a long line of REAC Analog Computers first produced more than 15 years ago. Most REAC installations are still in operation — many "round-the-clock" — with unscheduled downtime averaging less than 3%. The same high quality built into our previous models has been maintained in the new REAC 500 series, which is now in production. REAC is synonymous with RELIABILITY — safeguarded by our uncompromising standards for performance, construction, and ease of maintenance. Your computer investment is guaranteed when you specify REAC 500. For complete information, write for Data File 103. Qualified engineers who are seeking rewarding opportunities for their talents in this and related fields are invited to get in touch with us. See our dynamic display at the I.E.E.E. Exhibit—Booths 1305-1307 REEVES INSTRUMENT CORPORATION A Subsidiary of Dynamics Corporation of America. Roosevelt Field, Garden City, New York From Bantamweight To Heavyweight. Completion of the world's largest and first high-power, low-frequency WR 2100 Circulator for Rome Air Development Center, Rome, N.Y., demonstrates Sperry's ability to extend the state of the art. The WR 2100 Circulator as well as Sperry's full range of circulators will be exhibited at the IEEE Show. Another example of Sperry's solid state capability is the development and testing of the first total solid state front end system...also displayed at IEEE.
Sperry's technical competence is represented by the full line of Microline test instruments—now available from stock—and the ability to design complete microwave measurement systems for commercial and military use. Radar Performance Analyzers that provide complete overall performance measurement and monitoring, as well as ground spectral measurement equipment, clearly demonstrate Sperry's engineering ingenuity and technical know-how. You'll be interested in seeing the full importance of Sperry's versatility and capability at IEEE booths 3314 - 3318. SPERRY MICROWAVE ELECTRONICS COMPANY, Clearwater, Florida. Need Miniature Ferrite Parts? Indiana General offers you.. Wide Range of Materials for frequency bands from 800 cycles to 1 kilo-megacycles. IGC can furnish miniature parts in a material that will meet your most rigid application requirements — high Q, high permeability, high saturation, low loss, linear temperature coefficient, close physical tolerances. Wide Variety of Configuration — E & I cores, U cores, multi-aperture cores, cup cores, bobbins and sleeves, toroids and recording heads. Design and Consultation Service — IGC field sales engineers will give you valuable assistance in solving application problems. Write to Indiana General Corporation, Electronics Division, Keasbey, N. J. Ask for bulletin 30K. INDIANA GENERAL Toroids E & I Cores Adjustable Inductors Transformer Cores Antenna Rods FIRST SILICON POWER SUPPLY UNDER $100 $89.00 Guaranteed by "Worst Case" Analysis A Big New Price Step—35% Below Germanium Units New Development. Here is a unique combination of low cost and high reliability. Featuring all silicon semiconductors throughout, these supplies are priced as much as 35% lower than comparable germanium supplies. Designed to operate in ambient temperatures up to 50°C without an external heat sink, they may be used up to 75° with a comparatively small external heat sink or with forced air. 
"Worst Case" Analysis designed, these supplies are capable of operating under the worst possible conditions with a "Mean Time Between Failure" that rivals many supplies costing two or three times as much. For the full money-saving details on Con Avionics new HT Series Silicon Power Supplies, send for literature today. SPECIFICATIONS Regulation Accuracy ....................... ±0.5% Ripple ........................................... 10 mv RMS Max. Max. Operating Ambient .................. 75°C Automatic Short Circuit Protection. Models available from 6V @ 1.5A to 32 V @ 600 ma Dimensions ......................... 4½" H, 4¾" W, 5¾" D CONSOLIDATED AVIONICS CORPORATION 800 SHAMES DRIVE, WESTBURY, L.I., NEW YORK See us at Booth 1324 IEEE Show CIRCLE 95 ON READER SERVICE CARD Contest Produces Novel Circuit Designs FOUR ORIGINAL digital circuits were chosen by editors of four electronics publications as winners in the Burroughs Beam-X Switch tube contest, timed for the IEEE Show. One choice was a sequentially delayed pulse generator circuit, with a frequency-independent preset duty ratio, submitted by designer Gordon E. Nelson of Sperry Products Div., Danbury, Conn. The circuit's output pulses are sequentially delayed relative to a reference pulse, so that on each cycle the pulse is delayed by an amount equal to the pulse width of the generated pulse. Suggested applications include electron-beam cutting control, an ultrasonic inspection instrument with expanded scale display, and a radar with expanded scale display. The circuit, shown in Fig. 1, produces an output pulse whose pulse width is equal to the period of the input waveform (from an oscillator or pulse generator) and whose pulse repetition frequency is a fraction, determined by the preset duty cycle, of the input frequency, with each subsequent pulse delayed by a time equal to the period of the input waveform. A fixed or variable-frequency source drives the input driver for the upper beam-switching tube. 
The driver is a bistable flip-flop. Duty ratio is established by the scaling-down ratio between the input to the upper beam-switching tube, and the output target. Each time the output target returns to its off state, a sync output pulse is sent to the driver of the lower beam-switching tube. The upper and lower beam-switching tubes share corresponding target load resistors, so that a coincidence in target currents produces a negative step voltage double that produced by a single target current. A coincidence bus is connected through off-biased diodes, so that a coincidence of target currents is required to deliver a pulse to the bus. The output of the coincidence bus can be used directly for some applications, but feeding it through a Schmitt trigger produces a fixed-amplitude output pulse with a faster leading and trailing edge, and additional level discrimination. A positive output pulse is obtained. For ECM.....ITT 1 kw metal-ceramic C-Band TWT weighing in at 7 pounds! ITT model F-2502 is the smallest, lightest 1 kw TWT available today. This new tube has four other advantages for ECM applications (or when used as a driver for high-power klystrons): - Highest power—1 to 2 kw peak pulse power - Full spectrum—4 to 8 Gc at rated power - Rugged construction—all metal-ceramic - Longer duty cycle—tested at 2% In addition to its broad line of TWT's, ITT has recently introduced a line of PM BWO's covering L through X band. Write for more information or applications assistance. 
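Returning to the contest winner above: the pulse timing of the sequentially delayed generator can be modeled in a few lines. The article specifies only that the output pulse width equals one input period, that the repetition frequency is the input frequency divided by the preset ratio, and that each successive pulse is delayed one additional period; the function and parameter names below are ours.

```python
# Timing sketch of the sequentially delayed pulse generator.  Times are
# expressed in units of the input period; n_ratio is the preset
# scaling-down (duty) ratio.  Names and the demo values are ours.

def pulse_starts(n_ratio, n_pulses):
    """Start time of each output pulse, in input periods: pulse k
    begins at its reference time k*n_ratio plus a slip of k periods."""
    return [k * n_ratio + k for k in range(n_pulses)]

# With a preset ratio of 10, pulses start at 0, 11, 22, 33, ...: each
# one sits one period later in its frame than the pulse before it,
# sweeping across the frame as in an expanded-scale display.
print(pulse_starts(10, 4))  # → [0, 11, 22, 33]
```

After n_ratio frames the delay has swept through a full frame, which is the behavior the expanded-scale radar and ultrasonic-inspection applications exploit.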
| Type | Frequency Gcs | Power kw | Duty Control | |--------|---------------|----------|--------------| | F-7640 | 2-4 | 1 | .005 K | | F-2500 | 2-4 | 1 | .01 G | | F-2501 | 2-4 | 1 | .02 K | | F-2502 | 4-8 | 1 | .02 K | | F-2503 | 4-8 | 1 | .01 G | ELECTRON TUBE DIVISION CLIFTON, NEW JERSEY INTERNATIONAL TELEPHONE AND TELEGRAPH CORPORATION BARKER AND WILLIAMSON HARMONIC B&W AND SPURIOUS TOTALIZER B&W Harmonic and Spurious Totalizer, Model HST, measures total harmonic and spurious radiation from radio transmitters. **Frequency Range:** for transmitters operating from 2-32 mc, measures harmonics and spurious to 90 mc. Total spurious and harmonic levels as low as 65 db below the carrier can be measured. A measurement can be made in a matter of minutes. **Ideal for:** Periodic check of spurious emissions at radio transmitting stations. Development of transmitter equipment. Production testing of radio transmitters. Write for Sales Bulletin #106 for description and specification. BARKER & WILLIAMSON, Inc. Radio Communication Equipment Since 1932 BRISTOL, PENNSYLVANIA • STILLWELL 8-5581 MULTIPHASE GENERATOR uses beam switching tube to allow starting a new phase at any of a number of points in the cycle—Fig. 3 REVOLVING ARITHMETIC UNIT handles addition, subtraction, multiplication, using Beam-X beam switching tubes—Fig. 4 The shock spectrum analyzer consists of a number of peak voltage memory circuits, connected in parallel, each preceded by an L-C filter that through its resonance frequency determines its frequency channel. The shock spectrum of an input pulse is defined by the peak voltage appearing across each filter capacitor. The analyzer requires the use of peak voltage memories that can retain the information long enough for all channels to be recorded; it is also important that the memory be inexpensive since a large number may be needed. 
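The shock spectrum analyzer just described can be sketched numerically: a bank of resonant channels, each retaining the peak voltage it reaches during the input pulse. The undamped single-degree-of-freedom channel model, the half-sine test pulse and all of the numbers below are illustrative assumptions, not values from the article.

```python
import math

# Illustrative sketch of a shock-spectrum analyzer: each frequency
# channel is modeled as an undamped resonator, and the peak it reaches
# during the input pulse is retained (the peak-voltage memory).  The
# model, pulse and channel frequencies are assumptions for demonstration.

def peak_response(f_channel_hz, pulse, dt):
    """Peak normalized response of an undamped resonator driven by pulse."""
    w = 2.0 * math.pi * f_channel_hz
    x = v = 0.0
    peak = 0.0
    for a in pulse:                        # semi-implicit Euler integration
        v += (a - w * w * x) * dt
        x += v * dt
        peak = max(peak, abs(x) * w * w)   # scale so a static input reads 1.0
    return peak

dt = 1e-5
duration = 0.011                           # 11-ms half-sine test pulse
pulse = [math.sin(math.pi * k * dt / duration) for k in range(1100)]
spectrum = {f: peak_response(f, pulse, dt) for f in (20, 50, 100, 400)}

# A channel resonant near the pulse's own frequency content overshoots
# the unit pulse peak; that overshoot is what the analyzer records.
print(max(spectrum.values()) > 1.0)  # → True
```

In hardware each channel's L-C filter plays the role of the resonator here, and the peak-voltage memory replaces the running `max`.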
The peak voltage memory using a Beam-X switching tube is shown in NEW PHILCO PLANAR 40V\textsubscript{CBO}, 20V\textsubscript{CEO} 500 mc f\textsubscript{T} SWITCH Only Philco gives you so much design margin in so many parameters—with proven planar reliability. The new Philco 2N2710 presents industry’s best combination of speed, voltage, and beta. It also is specified for leakage current an order of magnitude lower than I\textsubscript{CEO} specifications of other 500 mc f\textsubscript{T} switches. Every 2N2710 parameter is outstanding. Get samples today from your Philco Industrial Semiconductor Distributor. Write for complete data. Dept. E31563. PHILCO 2N2710 CHARACTERISTICS | Parameter | Value | |-----------|-------| | V\textsubscript{CBO} | 40v min | | V\textsubscript{CEO} | 20v min | | h\textsubscript{FE} | 40 min | | t\textsubscript{ON} | 20 nsec max | | t\textsubscript{OFF} | 35 nsec max | | V\textsubscript{CES} | 30v min | | V\textsubscript{EBO} | 5v min | | I\textsubscript{CBO} | 30 na max | | t\textsubscript{S} | 15 nsec max | | f\textsubscript{T} | 500 mc min | CONSULT PHILCO SEMICONDUCTOR ENGINEERS AT I.E.E.E. BOOTHS 1302-1308 PHILCO A SUBSIDIARY OF Ford Motor Company, LANSDALE DIVISION, LANSDALE, PA. CIRCLE 99 ON READER SERVICE CARD Fred Roberts* can show you... how to measure ac ratios to 1.0 ppm ...at a sensible price In fact, any of North Atlantic's field representatives can quickly demonstrate how NAI's Ratio Boxes will economically meet critical requirements for AC ratio measurements—in the laboratory, or in field and production testing. These high-precision inductive voltage dividers are available in a complete range of models for particular applications. Standard types include Model RB-503 for bench or rack use, the miniaturized RB-521 for panel mounting in military specification equipment and PRB-506, a versatile system module programmable from punched cards or tape for automatic testing. Abridged specifications of these models are given below. 
| MODEL | RB-503 | RB-504 | RB-521 | PRB-506 | |-------|--------|--------|--------|---------| | | RACK OR BENCH | RACK OR BENCH | MINIATURE PANEL MTD. | MINIATURE PROGRAMMED | | Ratio Range | 0.000000 to +1.111110 | -0.111110 to +1.111110 | 0.0000 to +1.1110 | 0.0000 to +1.111110* | | Nominal Accuracy (Term. Linearity) | 10 ppm | 1 ppm | 10 ppm | 10 ppm | | Freq. Range (Useful) | 50 cps-10 Kc | 50 cps-3 Kc | 50 cps-10 Kc | 50 cps-3 Kc | | Input Impedance at 400 cps | > 60K | > 200K | > 50K | > 50K | | Nominal Input Voltage Ratings (f in cps) | 0.5f volts 350v max. | 1.0f volts 350v max. | .35f volts 300v max. | .35f volts 300v max. | | Maximum Output Series Resistance | 3.2Ω | 8.0Ω | 3.5Ω | 3.4-3.9Ω* | | Resolution | 5 decades plus pot. | 5 decades plus pot. | 3 decades plus pot. | 3, 4, 5 or 6 coded decades | | Size | 19" x 3½" x 8½"d | 19" x 3½" x 8½"d | 2¾" x 3¾" x 6¼"L | 9½" x 3¾" x 12" d | | Price | $295.00 | $450.00 | $275.00 | $900 to $1500* | Abridged specification—send for full details *Depends on number of decades Also from North Atlantic: Model RB-510 for 2.5ppm precision at 10kc, RB-503T and -504T with ratio ranges from —1.111110 to +1.111110, and PRS-531 Resolver Ratio Simulator. For complete technical and application data, write for Data File RB, or contact the North Atlantic man in your area. NORTH ATLANTIC industries, inc. TERMINAL DRIVE, PLAINVIEW, L. I., NEW YORK • OVerbrook 1-8600 See us at IEEE—Booth 3939 MULTIPHASE GENERATOR—Another winner from among the contest entries is an unusual multiphase generator by Warren A. Anderson of Raytheon Co., Portsmouth, R. I. Shown in Fig. 3, the circuit produces an output waveform whose phase is locked to the clock-signal driving assembly. A sequence of bridging voltage dividers are connected across the respective target outputs of the Beam-X tube or tubes, making up the basic counter chain. 
Each voltage divider is so adjusted that its output is appropriate to its angular position in the counting cycle, and the composite output of the voltage dividers is obtained by resistive summing. Repeating the resistive phase formers with appropriate connections to the Beam-X tube target buses will yield other phases of the basic counting cycle frequency. In the circuit shown, additional phases can be developed at 36-degree increments. This circuit can be applied in the development of multi-phase carrier voltages appropriate to synchro-resolver-servo computing applications; in developing precise pulse vs sine-wave time relationships, such as are required in navigation and measurement techniques; and in developing stable phase relationships despite variations in clock frequency. Other applications include precise phase shifts, and generation of single or multiple-phase sine-wave signals. ARITHMETIC ACCUMULATOR—Fourth contest winner is an arithmetic accumulator circuit with an automatic carry control, submitted by Richard J. Bartek of the General Motors Defense Research Labs. Raytheon's RM3002 combines a light sensor and high-gain amplifier in one TO-18 package Now, optical readout is simpler because one component replaces three. Raytheon's new RM3002 photo-Darlington, the third in a series of Darlington configurations, combines a lens window with an integral light-sensing amplifier. The result is extreme sensitivity in a very small package. Raytheon's other Darlington amplifiers, without lens windows, include the RM3022, and, with an additional base lead for greater design freedom, the 2N998. For technical data, price and delivery, write Raytheon Company, Semiconductor Division, 350 Ellis Street, Mountain View, California. RM3002 characteristics:

| Parameter | Temp. | Conditions | Limit |
|-----------|-------|------------|-------|
| Dark Current | 25°C | V<sub>CE</sub> = 20 V | 10 nA max. |
| Dark Current | 150°C | V<sub>CE</sub> = 20 V | 100 µA max. |
| Collector Dark Current (I<sub>CBO</sub>) | 25°C | V<sub>CB</sub> = 30 V | 10 nA max. |
| Collector Dark Current (I<sub>CBO</sub>) | 150°C | V<sub>CB</sub> = 30 V | 15 µA max. |
| Light Current Sensitivity (I<sub>CE</sub>) | | V<sub>CE</sub> = 12 V | 25 µA/ft. candle |

RAYTHEON CIRCLE 101 ON READER SERVICE CARD QUALITY CONTROL Starts with Calibration Wide range Model 829D calibrates AC and DC instruments quickly and at modest cost with certificated accuracy traceable to the National Bureau of Standards. Employing TBS comparison standards with a repeatability of reading within ±0.1%, the Model 829D instrument calibration standard has a full scale, direct reading accuracy within ±0.5%, and within ±0.25% using correction factors supplied, for all ranges from 0.25 mV to 2000 volts and 2 μA to 20 amperes. Output frequency for AC calibration is the same as the input frequency; the useful range for stated accuracy is 50-1000 c/s. Automatic interlocks and built-in safety features protect the operator and the instrument under test. On the shock mounts supplied, the Model 829D will maintain long-term calibrated accuracy in mobile van service. Most compact instrument calibration system is also mobile Photo at left shows the Model 829D mounted on the Model 10 Test Equipment Cart, which contains a Model 250 Variable Frequency Power Supply. The latter regulates a 115/230-volt line source to ±0.2% for a 5% line or 25% load change, with low distortion, and supplies calibration frequencies from 50 to 1000 c/s. Performance is certified and guaranteed. Price of the Model 829D for 115-volt operation is $2950, f.o.b. Boonton, N.J., subject to change without notice. Tester Checks Out Thermocouple Circuits By SIGMUND MEIERAN Boeing Company, Seattle, Wash. INSTRUMENT to check out thermocouple installations for thermal contact, electrical continuity and correct polarity, without causing any temperature change on the thermocouple junction, has been developed by the Boeing Company, Seattle.
Advantage is taken of the resistance difference between the thermocouple wires. The resistance, per 100 feet, of 28-gauge wire at 68 deg F varies from 6.489 ohms for copper to 266 ohms for Chromel-P. Thermocouples are often connected NEW PORTABLE TRANSISTORIZED Panoramic* SPECTRUM ANALYZER Completely new from Singer Metrics is the Model TA-2 transistorized portable spectrum analyzer. To be unveiled at the IEEE Show, the TA-2 represents the first of a new series of all-solid-state PANORAMIC units. It is battery operated and easily portable. Its batteries recharge when AC-operated. The compact model TA-2 measures 8¼" wide x 11" high x 18" deep. Its carefully designed solid-state modules provide a high order of reliability. ■ The TA-2 with the AR-1 plug-in module is a sonic spectrum analyzer covering the frequency range from 20 cps to 25 kc. Additional modules will be available in the near future for other frequency ranges. Thus, the one basic instrument will provide incomparable versatility for an extremely wide range of applications. ■ The small size and portability of the new TA-2 with AR-1 module permits on-site analyses of noise and vibration in vehicles and other locations having severe space limitations. A 3½" x 3½" square CRT display provides quick-look capability, and the instrument is designed for high resolution, low drift, and ease of operation. ■ See the all-new TA-2... and many other PANORAMIC instruments... dynamically demonstrated at Booth 3301-3303, IEEE Show. Full engineering attendance at booth. *TRADEMARK OF THE SINGER MANUFACTURING COMPANY SEE IN OPERATION—COMPREHENSIVE ARRAY OF ANALYZERS AND RELATED INSTRUMENTS—BOOTH 3301-3303—IEEE SHOW electronics • March 15, 1963 When you want the range of interest on a meter expanded to occupy the full scale for higher resolution and improved readability, do you have to accept enlarged dimensions? No. The advanced Expando technique expands the scale without back-case extensions. 
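Scale expansion of this kind devotes the whole meter arc to the band of interest instead of starting the scale at zero; a minimal numeric sketch of the mapping (the 105-125 V line-voltage band is an illustrative assumption, not an Expando specification):

```python
def expanded_reading(value, lo, hi):
    """Map a value in the expanded band [lo, hi] to percent of full scale.

    An expanded-scale (suppressed-zero) meter spends the entire arc on
    the narrow band of interest, which is where the improved resolution
    and readability come from.
    """
    return 100.0 * (value - lo) / (hi - lo)

# A hypothetical 105-125 V line-voltage meter: nominal 115 V sits mid-scale.
print(expanded_reading(115.0, 105.0, 125.0))  # -> 50.0
print(expanded_reading(106.0, 105.0, 125.0))  # -> 5.0
```

The same 1-volt change that would move an ordinary 0-150 V meter by well under one scale division moves this expanded scale by five percent of the arc.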
Expando achieves accuracies as fine as 0.1% in completely self-contained meters built into any manufacturer's models. Now you can match meters for a uniform instrument panel. What's more, because Expando's low consumption eliminates costly external circuitry, you get a compact meter with more reliable performance at a lower price. Write for specifications on expanded range AC and DC voltmeters, ammeters, milliammeters, true RMS, frequency meters, and meter relays. EXPANDO METERS • A & M INSTRUMENT, INCORPORATED 48-01 31ST AVENUE, LONG ISLAND CITY, NEW YORK THERMOCOUPLE tester arrangement—Fig. 1 CIRCUIT diagram of thermocouple circuit tester—Fig. 2 to compensating and biasing networks to provide for the reference junction and common mode rejection; as a result, the output terminals have about the same resistance to ground. If the hookup is reversed, this is no longer the case, and an incorrect hookup can be detected by a resistance bridge. The thermocouple installation tester, see Fig. 1, includes a potentiometer to complete the bridge, a zero-center microammeter connected between the potentiometer wiper and the metal structure on which the thermocouples are installed, a suitable battery for resistance-bridge excitation, and a push button for protection of the microammeter and prevention of unnecessary battery drain. Circuit is shown in Fig. 2. The potentiometer is adjusted to unbalance the bridge for a clockwise deflection of the microammeter. The magnitude of the deflection should equal that caused by a reversed thermocouple installation in counterclockwise direction. 
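The go/no-go logic of the bridge test described above can be sketched as follows; the deflection values, tolerance fraction and function name are illustrative assumptions, not the Boeing circuit:

```python
def classify_hookup(deflection, reference, tol_frac=0.2):
    """Interpret the zero-center microammeter deflection.

    `reference` is the clockwise deflection the pre-adjusted bridge gives
    for a known-good installation; a reversed thermocouple drives the
    meter an equal amount counterclockwise, and an open circuit (no
    contact with the grounded structure) passes no current at all.
    The tolerance fraction is an assumed value for illustration.
    """
    tol = tol_frac * abs(reference)
    if abs(deflection - reference) <= tol:
        return "correct polarity"
    if abs(deflection + reference) <= tol:
        return "reversed polarity"
    if abs(deflection) <= tol:
        return "open: no contact with structure"
    return "indeterminate: check wiring"

print(classify_hookup(+9.5, 10.0))   # -> correct polarity
print(classify_hookup(-10.2, 10.0))  # -> reversed polarity
print(classify_hookup(0.0, 10.0))    # -> open: no contact with structure
```

The asymmetry that makes this work is the one the article describes: the two thermocouple legs have very different resistances per foot, so swapping them changes the resistance each output terminal presents to ground and flips the bridge imbalance.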
If the thermocouple is not making contact with the metal structure, there will SANBORN FLEXIBILITY—7 channels — fm, direct record or any combination/plug-in, all solid-state circuitry/record-reproduce amplifiers on same card/4 speeds — 3¾ to 30 ips; 1½ to 15 ips and 7½ to 60 ips optional / 7" high electronics available separately/optional extras include voice channel amplifier, digital input circuit, push-pull input coupler, precision footage indicator, loop adapter and remote control unit. AT AN UNMATCHED $7200. This new Sanborn/Ampex Model 2007 system conforms to accepted IRIG instrumentation standards, provides 1% system accuracy and bandwidths to 100,000 cps with direct recording, 10,000 cps with FM amplifiers. Max. error due to non-linearity is only ±0.5% on DC, ±1% on AC. Basic system features include quickly interchanged, readily accessible printed circuit plug-in modules . . . flutter compensation by using one channel to compensate all others . . . alignment of all FM channels with built-in meter and selector switch, eliminating need for electronic counters . . . automatic squelch circuit . . . entire system in only 31" of rack panel space . . . packaging in either mobile console shown or portable cases for tape transport and electronics. System price of $7200 includes 7-channel tape transport, transfer chassis, playback preamplifiers, power supply and 7 channels of FM Record/Reproduce electronics, housed in metal mobile cabinet. All prices F.O.B. Waltham, Mass., and subject to change without notice. Get the complete specifications on this new Tape System — as well as 3 new types of Sanborn Data Amplifiers, 17" Multi-Trace Scope and other related instrumentation — from your local Sanborn Industrial Sales-Engineering Representative. Ask him for your copy of the complete Industrial Catalog. 
SEE SANBORN'S COMPLETE RANGE OF OSCILLOGRAPHIC TAPE, X-Y AND EVENT RECORDERS — PLUS DATA AMPLIFIERS, TRANSDUCERS AND RELATED INSTRUMENTS — AT BOOTHS 3413-3417, 1963 IEEE SHOW, MARCH 23-28. INDUSTRIAL DIVISION SANBORN COMPANY WALTHAM 54, MASS. A subsidiary of Hewlett-Packard Company CIRCLE 105 ON READER SERVICE CARD "CLAIREXCOR NYC — TELEGRAM "AT 2:01 PST TODAY, DECEMBER 14, 1962, THE MARINER II SPACECRAFT MADE ITS CLOSEST APPROACH TO THE PLANET VENUS WITHIN THE PLANNED MISS CORRIDOR. THIS INTERPLANETARY FLIGHT HAS SET MANY WORLD RECORDS INCLUDING COMMUNICATIONS DISTANCE, QUANTITY AND SIGNIFICANCE OF DATA RECEIVED, THREE AXIS ATTITUDE CONTROL AND INTERPLANETARY SPACE MANEUVER. "We are pleased to report that your cadmium sulfide photoconductor detectors used in the Mariner II sun sensors and sun gate have operated successfully throughout the complete 109 day flight. Your detectors have played a key part in the success of this highly successful mission." JET PROP LAB G W MEISENHOLDER R SCHMIDT R G FOMEY J M WHALEN" THE EYES OF A MODERN MARINER CLAIREX PHOTOCONDUCTIVE CELLS normally served as the detectors in the sun-sensing "eyes" of Mariner II, our Venus space vehicle, controlling reference attitude prior to the critical mid-course correction maneuver which reduced the "miss" from 233,000 to 21,000 miles! The sun sensors also served as panel-orientors throughout the flight for maximum power output of the solar cell panels, signalling position errors to the pitch and yaw stabilization jets. The Clairex cells in Mariner II were the standard CL-605 type now in use in hundreds of other more earth-bound applications. Special single-crystal Clairex components, however, have been utilized in Ranger and other space probe projects as radiation detectors. MID-COURSE CORRECTION AFFECTS FLIGHT PATH Redirecting the vehicle from destination (A) to (B) required a flight correction in the vicinity of point (C), by applied jet propulsion of short duration.
The vehicle's maneuvers prior to corrective propulsion were based on initial proper sun reference via the photoconductive sun sensors. SUN SENSING ARRAY ON MARINER VEHICLE Throughout the life of the craft, prior and subsequent to midcourse correction, the sun sensors (S) signalled error-correction commands to the stabilization jets for pitch and yaw control, thus keeping the solar cell banks properly oriented for maximum power output. PHOTOCONDUCTIVE CELL COMPONENTS Six Standard Series of photoconductive cells, including the Mariner II type (D), are manufactured by Clairex Corporation. Illustrated are units of both Cadmium Sulfide and Cadmium Selenide, in glass or metal containers, offering a wide range of response characteristics. Technical design data available on request. "The light touch in automation and control!" CLAIREX CORPORATION 8 West 30th Street, New York 1, New York THE OLDEST MANUFACTURER OF CADMIUM SULPHIDE AND CADMIUM SELENIDE PHOTOCONDUCTIVE CELLS See us at the I.E.E.E. Show—Booth 1217 Effect of Disarmament on Space R&D to Be Studied CAN ARMS CONTROL and disarmament agreement be circumvented by R&D activities? This is one of the questions to be studied by Aerospace Corporation under a $215,000 contract from the U.S. Arms Control and Disarmament Agency. The contract calls for an assessment of the nature and impact of possible controls on RDT&E of ballistic missile and space systems, inspection procedures and related recommendations. Electron Beam Welder Studied for Space Use AIR FORCE may supply orbital maintenance spacecraft with electron beam welders. To investigate this possibility it has awarded a $340,000 contract to Hamilton Standard division of United Aircraft. Experimental welding equipment will be built to perform tests under the high vacuum conditions of space. 
Narrow heat-affected zones and short cool-down periods would minimize sublimation, or evaporation of metals in space, giving the electron-beam process a major advantage over other welding methods, Hamilton Standard claims. One microvolt not being much, it is reasonable to be efficient in the interest of the most usable signal. The circuit shown is efficient for several reasons readily apparent to a transformer engineer. For one thing, the transformer secondary has four times the impedance of the more usual center-tapped modulator circuit, and this is one real good reason for using a DPDT chopper. (Engineers always seem to want higher impedances and lower noise levels.) Low noise levels are also easier to obtain with DPDT, unless the chopper itself is noisy. It is rather simple to arrange the transformer primary in two identical halves, so that both primary leads are perfectly symmetrical to ground. It is considerably more difficult to provide this precise a balance when the center-tap circuit is used. Chopper models 60 and 61, being also perfectly symmetrical as well as low noise, continue the perfect balance out to the input terminals of the amplifier. The net effect is remarkably good common mode rejection. Since the noise in these choppers is virtually non-existent, the recognition and use of signal levels of a few microvolts becomes feasible, even in strong noise fields. The "initial" permeability of the input transformer core presents a problem. The permeability of some materials falls off seriously as the level approaches zero. The published curves of 80-20 nickel-iron show good permeability even at 0.1 gauss. One assumes hopefully it will still be good at 0.001 gauss. At impedances of 1,000 ohms the power level, E²/R, is 10⁻¹⁵ watts, and the use of the high-permeability alloys is mandatory. It helps to use more turns, too, since impedance varies as the square of the turns. Which brings us full circle back to where we came in. Use a DPDT chopper.
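The figures in the passage above can be checked directly; a minimal sketch of the E²/R power level and the turns-squared impedance rule (the 2:1 ratio stands in for a full winding versus half of a center-tapped one):

```python
# Power of a 1-microvolt signal in a 1,000-ohm circuit, E^2/R:
E = 1e-6   # volts
R = 1e3    # ohms
P = E**2 / R
print(P)   # -> 1e-15, the 10^-15 watts quoted in the text

# Reflected impedance scales with the square of the turns ratio, so a
# DPDT chopper working the full secondary (twice the turns of either
# half of a center-tapped winding) sees four times the impedance.
def reflected_impedance(z, turns_ratio):
    return z * turns_ratio**2

print(reflected_impedance(1000, 2) / reflected_impedance(1000, 1))  # -> 4.0
```

That factor of four is exactly the "four times the impedance of the more usual center-tapped modulator circuit" the article cites in favor of the DPDT connection.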
We have much more information — it's yours for the asking. ANNOUNCING THE BEST PERFORMING, MOST RELIABLE DIODES EVER BUILT: HOFFMAN OXIDE-PASSIVATED ZENERS! BIG DEAL. Heard it all before, have you? About better temperature and impedance and leakage characteristics? Well, not like this! The plain fact is that these new zeners, through oxide passivation, come within a whisker of delivering everything semiconductor theory says a zener should. And that is a big deal. Because look at what it buys you: First and foremost, stability. You can put these zeners into your circuit and forget them. Because the elements that cause deterioration are no longer present; they're not there to begin with and oxide passivation literally locks them out—permanently. We've proved it by "Joy-bomb" testing these devices with—and without—the glass case. And extraordinarily sharp zener knees, a result of leakage rate that's less than 1/100th of the Mil Spec combined with extremely low impedance, provide voltage regulation at microamp current levels—permanently. Temperature characteristics? The wide ambient range of these devices means you have a temperature confidence level that's probably higher than anything else in the circuit. For example, these zeners can be instantly cycled from -190° to +250°C, again and again and again. There's lots more performance data—including life test data—available either at our IEEE Booth #1227-9 ...or from your local Hoffman distributor or sales office. We urge you to look it over. Just one more thing. We do more than talk about Oxide Passivated Zeners. We ship them. Types 1N960A through 1N984A are in stock for immediate delivery. And that's probably the biggest deal of all. Hoffman Electronics Corporation Semiconductor Division 4501 N. ARDEN DR., EL MONTE, CALIF. 
• CUMBERLAND 3-7191 • TWX: 213-442-5633 CIRCLE 109 ON READER SERVICE CARD A Month's Rejects Transicoil delivers highest volume with lowest reject rate on precision temperature-compensated motor tachometers. Daystrom Transicoil's claims to lowest reject rate rest squarely where such claims are proven: at the customer's incoming inspection. Coupled with this reliability, moreover, is our ability to deliver on schedule in volume. In fact, Daystrom Transicoil is the largest known producer of temperature-compensated motor tachometers. In recent months, systems requirements have become increasingly stringent for component reliability. Our Size 11's have successfully met this challenge in such systems as the F-104, A3J, Pershing, Hound Dog, Mirage, and a number of other systems as yet not even officially designated by name. Most delivery promises have been met . . . and even bettered. Why don't you check the specs at right, then find out for yourself how our temperature-compensated motor tachometers can meet your own requirements? DAYSTROM, INCORPORATED TRANSICOIL DIVISION WORCESTER, PENNSYLVANIA TELEPHONE 215-584-2421 See us at Booths 1702–1710, 1801–1809, IEEE Show

SIZE 11 TEMPERATURE-COMPENSATED TACHOMETER — ELECTRICAL CHARACTERISTICS

| Characteristic | Value |
|---|---|
| INPUT VOLTAGE (V) | 115 |
| INPUT POWER (W) | 5.5 |
| INPUT IMPEDANCE (OHMS) | 1500 |
| INPUT CURRENT (A) | 0.077 |
| OUTPUT IMPEDANCE (OHMS) | 5000 |
| OUTPUT VOLTAGE (V/1000 RPM) | 2.75 |
| MAX. NULL RMS (VOLTS) | 0.020 |
| LINEARITY (%) | 0.07 |
| SIGNAL TO NOISE | 140 |
| PHASE SHIFT AT 25°C (DEG.) | 0 ± 0.5° |
| SCALE FACTOR VAR. W/TEMP. | ±0.5% |
| PHASE SHIFT VAR. W/TEMP. | ±0.5° |

In the continuing cold war between West and East, one of the more powerful weapons the United States has is the around-the-clock broadcasts of news and commentary by the U.S. Information Agency.
Heard in 36 languages, this broadcast operation known as the Voice of America is virtually the only means of getting the truth to millions behind the Iron and Bamboo Curtains. Attesting to the success of the USIA program is the fact that the Communists use some 2,000 transmitters in an effort to block out free world broadcasts. To maintain and strengthen the Voice of America, the world's largest transmitting facility is now beaming programs overseas from Greenville, North Carolina. Building this facility was a joint effort by Continental Electronics Systems, Inc., and Alpha of Texas Inc., subsidiary of Collins Radio Co., with Continental responsible for the electronic aspects of the project. The Consolidated East Coast Facilities are at three sites totalling 6,000 acres of cleared timber land. Two transmitter sites and the receiving sites are 18 miles apart. Each of the transmitting sites has three 500,000 watt short wave transmitters supplied by Continental Electronics Manufacturing Company. Other transmitters include: Three 250,000 watt transmitters, three 50,000 watt and two 5,000 watt transmitters, for a combined total transmitting power of 4.82 million watts. Continental Electronics SYSTEMS, INC. MAILING ADDRESS: BOX 17040 • DALLAS 17, TEXAS • EV 1-7161 4212 SOUTH BUCKNER BLVD. LTV SUBSIDIARY OF LING-TEMCO-VOUGHT, INC. Designers and Builders of the World's Most Powerful Radio Transmitters CIRCLE 111 ON READER SERVICE CARD This plug-in relay brings you remarkable savings in time and money Now, for the first time, AE's new Series EIN relay assemblies offer you the advantages of true plug-in relays at low cost. This is made possible through the design of a special integrated socket that accommodates any of AE's standard Class E relays with taper-tab terminals. With Series EIN, you avoid the mounting and wiring charges on relays equipped with octal-type plugs. 
The sockets are available separately from AE stock so that you can wire complete chassis, as you would for tubes, then order the Class E relays to meet production schedules. The relay terminals fit snugly and provide large surface contact, yet the relays are easily removed in the field for repair or replacement. Series EIN relays are provided with clear plastic cases that protect the mechanism from dust and damage, and allow visual examination of the relays in operation. AE circuit engineers will be happy to work with you in applying Series EIN Class E relays to your designs. For full details on Class E relays, Series EIN, write the Director, Control Equipment Sales, Automatic Electric, Northlake, Illinois. AUTOMATIC ELECTRIC Subsidiary of GENERAL TELEPHONE & ELECTRONICS Kynar, the new fluorocarbon resin from Pennsalt Chemicals, offers an outstanding combination of properties for electronic applications. Coupled with high dielectric strength and resistivity, Kynar offers extreme mechanical strength and toughness, stability to temperatures ranging from -80 to +300°F, and resistance to severe environmental stresses caused by weather, radiation and corrosive chemicals. Kynar is readily extruded to form primary wire insulation, abrasion-resistant jackets, and thin-wall tubing. And Kynar-insulated hook-up wire withstands the mechanical stresses imposed by high-speed automatic wrap and assembly without deterioration. Typical properties of 10-mil Kynar insulation extruded over AWG 24 solid soft copper conductor:

| Property | Value |
|---|---|
| Dielectric strength, volts | 10,000 |
| Insulation resistance, meg-ohm/M | > 1,000 |
| Cold bend, ½" dia., 1 lb. weight at -70°F, volts | 8,000 |
| Abrasion resistance, Janco tester, grade 400 alumina, inches of tape | 50 |
| Cut-through, anvil at 90°, 350 gm., hours at 270°F | > 500 |
| Soldering test, flare back | None |
| Flammability | Self-extinguishing |

Write for our new brochure and the names of nearby fabricators who supply Kynar. Plastics Dept., PENNSALT CHEMICALS CORPORATION, 3 Penn Center, Phila. 2, Pa. Components Meet Sales Challenge Electronics mart bigger, wares get better, and competition fiercer EMPHASIS at the big electronics market place this year is to offer better devices and better materials in the face of stiff competition. More electronics firms are scheduled to enter the electronics arena this year in the annual Coliseum contest to win favor for their products. Many of the same words will be used to influence the electronics market. The particular device or material will claim advantage over adversary by claims of greater reliability, at lower cost, to overcome existing limitations, for more efficient utilization, at more critical operating temperatures, to improve circuit performance. Familiar words ever. But fact of the matter is that more devices and better materials are now available to meet claims of superiority than in any other period of electronics' history. Company know-how will be in evidence everywhere. But the same big problem still faces the user: How can he keep intelligently informed on available devices that can fit his needs? Companies that best know how to keep engineers informed will have decided edge in capturing healthy share of a highly-competitive electronics market. Company selling products must develop more competence in technical sales know-how. Sales engineer will render knock-out blow when he learns not only to sell, but to educate. Two-pronged approach involves much more than merely supplying data-sheet information on his company's product. Sales engineer must trigger user interest in applications, and "provide user with what he wants, not what his company has to sell" (see ELECTRONICS, May 11, 1962, p 57).
Here are some of the devices and materials that will be shown at this year's I-Triple-E show. Device engineers may want to find out more about these components for their particular applications: READOUT tubes are smaller this year. Miniature electronic readout tube, shown by Burroughs, displays numerals 0 to 9. Characters are 0.310 inch high. Object of mechanical design of new Nixie tube is to provide minimum readout space when groups of tubes are mounted together. Complete ten-digit display of ¼-in. characters occupies less than five inches of panel width and ⅜-in. of behind-panel space when units are connected to the activating circuit. OPTICAL meter relay, featured by Assembly Products Inc., may be biggest item in their line. Device uses a combination of fiber optics and a reflecting disk. Relay obtains almost instantaneous control action at set point, a dead band of 0.25 percent of full scale or less, and low price. Units can be pro- Components Engineers: YOUR GUIDE TO I-TRIPLE-E SESSIONS

| Subject | IEEE Session | Date | Where Held |
|---|---|---|---|
| Antennas (three sessions) | 17 | Mar 26, pm | a |
| | 25 | Mar 27, am | a |
| | 33 | Mar 27, pm | a |
| Audio | 46 | Mar 28, am | b |
| Component Fabrication | 22 | Mar 26, pm | b |
| Component Horizons | 28 | Mar 27, am | c |
| Components, Miniature | 12 | Mar 26, am | c |
| Component Reliability | 15 | Mar 26, am | b |
| Computer Components | 5 | Mar 25, pm | d |
| Electron Devices | 29 | Mar 26, pm | e |
| Microelectronics | 52 | Mar 28, pm | e |
| Microwaves (three sessions) | 36 | Mar 27, pm | c |
| | 43 | Mar 28, am | c |
| | 51 | Mar 28, pm | c |
| Semiconductors | 11 | Mar 26, am | e |
| Ultrasonics (two sessions) | 11 | Mar 26, am | f |
| | 19 | Mar 26, pm | f |

am sessions begin at 10 am; pm sessions begin at 2:30 pm (a) Waldorf Astoria, Starlight Roof (b) N. Y.
Coliseum, Marconi Hall (c) Waldorf Astoria, Sert Room (d) Waldorf Astoria, Empire Room (e) N. Y. Coliseum, Faraday Hall (f) Waldorf Astoria, Jade Room BOTH THESE MAGNETIC TAPES HAVE A POLYESTER BASE ...BUT ONLY ONE IS MYLAR® (8 YEARS PROVEN) Eight years ago instrumentation tape of Du Pont MYLAR® polyester film appeared on the scene and set new standards of reliability. Naturally enough, people whose needs called for a magnetic tape of highest performance couldn't risk a tape other than MYLAR. Now, other polyester films are beginning to appear. They are not all the same: MYLAR is a polyester film, but other polyester films are not MYLAR. In the past you could safely assume you were getting MYLAR when you specified "polyester base". Today you cannot. There's only one way to be sure you're getting the MYLAR you've used and trusted for magnetic tapes of proven reliability: specify MYLAR by name. E. I. du Pont de Nemours & Co. (Inc.), 10452 Nemours Bldg., Wilmington 98, Delaware. *Du Pont's registered trademark for its polyester film. Pedigree by the yard This man-size stack is representative of records that are kept five years on every XT tantalum capacitor we produce. There's a point to all of this paperwork. This is evidence that every XT capacitor has passed the most critical examination at every phase of production. Our engineers analyze the numbers that go on these log sheets... and if characteristics show a trend toward specification limits, they can take immediate action to head off trouble before it starts. Careful testing, both during production and on every finished capacitor, is a key part of the advanced techniques we have engineered into our Greencastle plant... the first to be built specifically for tantalum capacitor manufacturing. Typical result: our XT series have performed over 45,000 hours on life test. Mallory Capacitor Company, Indianapolis 6, Indiana—a division of P.R. Mallory & Co. Inc. 
WET SLUG, FOIL AND SOLID TANTALUM CAPACITORS CIRCLE 116 ON READER SERVICE CARD Mallory tantalum capacitors delivered from stock at factory prices by these distributors: Baltimore, Md. Radio Electric Service Binghamton, N.Y. Wehle Electronics Birmingham, Ala. Milt's Electronics & Equipment Co. Boston, Mass. Cramer Electronics Delambre Radio Supply Co. Lafayette Radio QPL House, Inc. Bridgeport, Conn. Westcoast Electronics Buffalo, N.Y. Summit Distributors, Inc. Chicago, Ill. Allied Electronics Corp. Newark Electronics Corp. Cincinnati, Ohio United Radio Cleveland, Ohio Planner Electronics Dallas, Texas Engineering Supply Co. Hall-Mark Electronic Corp. Dayton, Ohio Stottlemydman Co. Denver, Colo. Denver Electronics Houston, Texas Harmon Equipment Co., Inc. Lenert Company Indianapolis, Ind. Graham Electronics Los Angeles, Calif. California Electronics Kleruiff Electronics, Inc. Lynch Electronics Radio Product Sales Minneapolis, Minn. Northwest Electronics Corp. Montreal, Que. Canadian Electrical Supply Co. Muskegon, Mich. Fitzpatrick Electric Co. Nashville, Tenn. Electro-West, Co. Newark, N.J. Lafayette Radio New York, N.Y. Hallicrafters Radio Corp. Harvey Radio Co., Inc. Lafayette Radio Miko Electronics Terminal and Mason Electronics Oakland, Calif. Elmar Electronics, Inc. Oklahoma City, Okla. Radio Shack Orlando, Fla. East Coast Electronics Harmon Electronics, Inc. Ottawa, Ont. Wackid Radio-TV Lab. Palo Alto, Calif. Zack Electronics Perth Amboy, N.J. Atlas Electronics Philadelphia, Pa. Herbertson & Rademan Philadelphia Electronics Phoenix, Ariz. Kleruiff Electronics, Inc. Pittsburgh, Pa. Radio Parts Co. Salt Lake City, Utah Kimball Electronics San Antonio, Texas Perry Radio San Francisco, Calif. Kleruiff Electronics, Inc. St. Louis, Mo. Olive Electronics Seattle, Wash. F. B. Connelly Co. Springfield, N.J. Federalist Purchaser, Inc. Tampa, Florida Thurrow Electronics, Inc. Toronto, Ont. Alpha American Radio Co. 
Electronic Sonic Supply Wholesale Radio & Electronics Tulsa, Okla. Engineering Supply Co. Washington, D.C. Capitol Radio Wholesalers Electronic Industrial Sales White Plains, N.Y. Mechanics' Electronic Supply Co., Inc. Winston-Salem, N.C. Electronic Wholesalers Inc. CIRCLE 117 ON READER SERVICE CARD New Design for Planar Devices LEAF configuration is claimed to improve performance of planar transistors for medium power, medium frequency range ANALYSIS of various configurations of existing planar transistors, conducted by Bendix, has resulted in a new surface design of planar devices. Bendix now claims optimum design performance for planar types in medium power, medium frequency range. Company now offers a device said to be particularly important for amplifier and switching applications, one that can handle heavier current with improved reliability factors. Basically what Bendix has done is to design a planar transistor that has a larger emitter area, a larger contact area and, the company says, improved contacts. Leaf design, shown above, has eliminated sharp corners, which may reduce leakage problems. Circuit design engineer attending IEEE will have an opportunity to evaluate results of beta gain, saturation voltage, and temperature-storage effects that bear on the reliability of contacts. CONTACTS—Leaf configuration, shown in above photo, was developed by Robert L. Reber and Albert Schrob. Photo shows top contact connected to white emitter contact area, and lower base contact. Grey areas adjacent to emitter and base contact areas are nonconducting layers. Bendix says leaf design has reduced: collector-base cutoff current, emitter-base cutoff current, collector capacitance, collector saturation voltage, emitter saturation voltage. Company also says gain-bandwidth product is higher and gain at high collector voltage is improved. Company also says breakdown voltages are unchanged.
INTO DIGITAL SYSTEMS DESIGNED AND DEVELOPED BY COMPUTER CONTROL COMPANY, INC. GO "CAREFULLY SELECTED QUALITY COMPONENTS." PAKTRON CAPACITORS FILL THE BILL. For ten years Computer Control Company, Inc. has designed, developed and delivered a broad range of specialized digital systems. Their reputation for high quality and high reliability is well known. They insist on reliable components throughout. We at Paktron are well aware of our obligation to meet their exacting requirements of small size, high reliability and economy. For high quality and outstanding performance, specify Paktron Miniature Molded Mylar® capacitors. DUPONT PAKTRON DIVISION ILLINOIS TOOL WORKS INC. 1321 LESLIE AVENUE • ALEXANDRIA, VIRGINIA AREA CODE 703 King 8-4400 SOLID TANTALUM capacitor is shown by Kemet Co., Div. of Union Carbide. They call it the C series of their epoxy-molded solid-tantalum capacitor line. This is in cylindrical form. Component has application in any welded-module construction item. EPOXY GLASS laminates, claimed to have higher peel strengths that are consistent in most applications, are being shown by General Electric's Laminated Products Department. Company feels product is significant advance. BRUSHLESS d-c motor, shown by Globe Industries, Dayton, has inverter mounted integrally right on blower. Unit is one and one-half inches long. Device is claimed to be half the size of anything now on the market. PRECISION THERMISTORS, now available from Yellow Springs Instrument Co., Ohio, show high degree of sophistication. Most of company's output is directed toward laboratory markets, with a substantial portion in the area of temperature measurement and control. Company has aimed at determining basic limitations on operating temperatures. Company cites thermistor that has operated at 1,200 C for six months with no evidence of degradation of characteristics. CRYSTAL CAN relay, shown by Babcock Electronics, is designed for low-profile mounting.
Half-size crystal can features high sensitivity and durability, company says. Coil operation requires only 175-mw pull-in power to switch any load from dry circuit to 2 amps. Unlike conventional relay motor arrangement, armature is located inside the coil, the region of highest flux density. CONNECTOR called Ultrekon will be displayed by Cinch Manufacturing Co., Chicago. Unit was developed primarily for military and Today's line of ISC HALL*ISTOR products represents over ten years' research, development and application engineering on Hall effect components for a broad range of detection, measurement, computation and control applications. Many of these probes, pick-ups, multipliers, modulators, tape heads, and other devices have been in production and actual service for five years or more. The HALL*ISTOR line begins where others end. Only from ISC can you get a complete selection of advanced devices and components with the latest developments in Hall effect technology. More than 50 different products, all in stock for immediate delivery, are available as standard catalog types. These include probes of Indium Antimonide, Indium Arsenide, or Indium Arsenide Phosphide — all in a choice of crystalline or deposited thin film construction — on ceramic or ferrite substrates. ISC's leadership in the field of Hall effect devices results in the lowest available prices of components together with the industry's widest assortment of sizes, shapes, terminations, outputs, linearities, sensitivities, effective air gaps, and other characteristics. Our applications engineering staff is at your disposal to discuss requirements for complete systems involving processing, automation, material handling, and other automatic controls. Get complete engineering specifications and application data — send for the new ISC HALL*ISTOR Catalog No. H-20014. 
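The multiplier application mentioned above rests on the Hall voltage being proportional to the product of the control current and the flux density; a minimal sketch, with an arbitrary sensitivity constant that is not an ISC figure:

```python
def hall_voltage(sensitivity, current, flux_density):
    """V_H = k * I * B.

    Because the output is proportional to the product of two independent
    inputs (drive current and applied flux), a Hall element can serve as
    an analog multiplier. `sensitivity` lumps the material and geometry
    factors into one illustrative constant.
    """
    return sensitivity * current * flux_density

# Multiplying two signal values, 3 and 4, with a hypothetical
# unit-sensitivity element:
print(hall_voltage(1.0, 3.0, 4.0))  # -> 12.0
```

The same proportionality is what makes the other devices in the catalog (probes, pick-ups, modulators, tape heads) possible: hold one input constant and the output tracks the other.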
HALTEST DIVISION, 111 Cantiague Road, Westbury, L.I., New York (516) WE 8-8000, instrument systems corporation ISC. electronics • March 15, 1963 CIRCLE 119 ON READER SERVICE CARD

Electrically and mechanically, ATOHM'S 1" rectilinear, wirewound TRIMMER POT (SHADED AREA INDICATES RT-12 SIZE) QUALIFIES to new MIL-R-27208A

The RT-12, or "thin case" style, as called out by MIL-R-27208A, is the only 1" rectilinear, wirewound trimmer potentiometer that is MIL-approved for future designs. The popular Atohm Model 120, shown above, meets this new spec mechanically and electrically. Furthermore, the Model 120 has a 2-watt capability, 20% better resolution, an operating temperature range of -65°C to +200°C, a superior mechanical design, and other features which give it exceptional accuracy and reliability. Atohm has manufactured the Model 120 for 3½ years, and is one of only two manufacturers who have been making this style for an extended period. All other manufacturers must re-tool to meet the new spec.

ATOHM lab approved to qualify to MIL-R-27208A. The Defense Electronics Supply Agency (formerly ASESA) has granted approval to the Atohm lab to qualify parts to this long-awaited MIL spec. Since July, 1962, all Atohm MIL-type trimmers have been tested to MIL-R-27208A. Systematic random samples of production units are given complete tests (Group II), including shock, vibration and load life, to determine that production lots continue to meet or exceed the spec. Specify Atohm for thoroughly tested, field-proven trimmers...at no extra cost. When desired, Atohm provides a Certificate of Conformance without charge.

But not dimensionally. The Model 120 is smaller. So, when preparing house specs and specification control drawings, use maximum dimensions where possible. Write for reproducible transparency with shadow area drawing for RT-12 style.

BERYLLIA ceramics will be featured by American Lava, Chattanooga, Tenn.
Company will show thin flat substrates for microminiature applications. Beryllia has a dielectric constant about three-quarters that of dense alumina and a Te value almost twice that of alumina.

CHOPPER of double-pole, double-throw design is said to be smallest on market, according to Airpax. The company also claims the noise level of the new electro-mechanical chopper is the lowest available, less than 25 microvolts into a megohm load. Company quotes small-quantity delivery of four to eight weeks, and says the unit is competitively priced.

VITREOUS ENAMEL resistor with new coating is singled out as best component offered by Ohmite this year. Unit has precise dimen- CIRCLE 121 ON READER SERVICE CARD

How Sylvania produced a 100-watt TWT in a PPM package...in only 4 months

Our microwave engineers pride themselves on being able to redesign an existing traveling-wave tube in a short time to meet new specifications of a customer. "Quick reaction," they call it. In the case of our 100-watt CW X-band tube, the reaction took only four months—something of a record for a power increase of such magnitude. They started with a pulsed Sylvania tube of 10 watts average power, modified the internal structure, and incorporated a new helix design. The result is a whopping 100-watt CW output that system designers have been needing for ECM, long-range space communications, and special equipment for testing high-power components. "Quick reaction" means being able to come up with fast solutions and render on-the-spot engineering assistance. And it requires production lines that can handle either long runs of standards or small runs of special-purpose tubes. That's exactly the way we are set up—a result of our work on the B-58 "Hustler" tube program. Care to give us a try on your traveling-wave tube requirements? Write to Microwave Device Division, Sylvania Electric Products Inc., P.O. Box 87, Buffalo, New York.
SYLVANIA SUBSIDIARY OF GENERAL TELEPHONE & ELECTRONICS NEW CAPABILITIES IN: ELECTRONIC TUBES • SEMICONDUCTORS • MICROWAVE DEVICES • SPECIAL COMPONENTS • DISPLAY DEVICES SEE US AT IEEE — BOOTH 2322-2329 AND 2413-2425

Dilemma: true RMS-measuring instruments are low-impedance, delicate, and damage-prone devices. Increase their sensitivity and they slow down. VTVMs, conversely, measure true RMS of pure sine waves only. Resolution: trio/lab's superb combination of high impedance, sensitivity, and overload immunity — the new Model 120 AC Voltmeter. The trick is turned by driving a laboratory-standard electrodynamometer with an ultra-linear, high-impedance amplifier, gain-stabilized by negative current feedback to better than 0.1%. Results: 0.25% true RMS accuracy, 10-mv to 500-v full-scale sensitivity, 50-2,000 cps frequency range — regardless of time, temperature, or line fluctuations. Convenience and peace of mind for you, too: the Model 120 reads out directly from a 7" edge-indicating, mirror-backed scale; it can be overloaded only by malicious mischief. Put this original concept in instrumentation — derived from trio/lab's 8 pioneering years in producing "build-ins" — to work for you. $985 ships the portable Model 120-1, or the rack-mounted 120-7, from stock. Secondary-Standard Accuracy ¼% true RMS-direct reading triolab TRIO LABORATORIES, INC., PLAINVIEW, L.I., NEW YORK OVERBROOK 1-0400, AREA CODE 516, TWX: 516 433-9573 See us at IEEE, Booth 3222

REED SWITCH, shown by General Electric Receiving Tube Department, is ½ in. in diameter and has a glass envelope about ¾ in. long. Switch arms of equal length permit both arms to move under the influence of the magnet. The switch is normally open, but can be magnet-biased for normally closed use. The firm has started sampling prospective customers.

STEPPER MOTOR, size 8, claimed to increase pulse potential, is featured by Wright Division of Sperry Rand. Motor provides torque at 90 degrees in excess of one ounce-inch.
Step motor employs magnetic detenting, thus eliminating shock loading and wear on mechanical detent mechanisms.

RELAY shown by Phillips Control Company has a special armature for more efficient utilization of the magnetic field. Design includes a single diagonally-mounted coil and a single stamping for integral return spring and backstop. Company also specializes in telephone and power relays.

IMPROVED Alnico 8 material, featured by General Magnetic Corp., is guaranteed to 1,600 oersteds and a maximum energy product (BH) of 5.25 million gauss-oersteds. A magnetic version of Alnico 5 is not for sale as yet; it will run on the order of 8 to 8.5 million BH. The crystallized version of Alnico 5 grows in a spaghetti-like bunch.

SILICON RECTIFIERS, featured by International Rectifier, are designed to meet NEMA standards and operate with high reliability in electro-chemical processing and general industrial service. Devices have voltage ratings to 900-v repetitive prv and 1,200-v transient prv, current ratings to 250 amps continuous, and 4,500-amp peak one-cycle surge current capability. Five types cover the prv range from 650 to 900 volts and have stable reverse characteristics under alternating and direct voltage over a temperature range from -40 to 200 C.

RESONANT RELAY of the self-holding type is a recent development of Mallory Timers Co. Contact action is prevented until a critical operating point is reached by a signal on the coil of the proper frequency, power and duration. At the instant of operation, the resonant reed is pulled away from the magnetic fulcrum and held solidly at an operating gap. Associated circuit contacts close with a positive snap action.

TRIM CAPACITORS, displayed by JFD Electronics, are fixed capacitors of 5-pf and 6-pf values with variable temperature coefficients. Diffused quartz dielectric units are designed for control of frequency variations due to temperature changes from -55 to 200 C. Capacitance variation is ±1,000 parts per million per deg C.
TRAVELING WAVE tube mount, shown by Calvert Electronics, eliminates the need for external leads. The N4047 mount has an ejector bar and socket built in. The twt is designed for use between 5.8 and 7.2 Gc, and has a nominal gain of 43 db, 5-w output, typical saturation output power of 12 w, noise figure of 30 db, and a gain flatness of 0.01 db per Mc measured over a 50-Mc range.

MINIATURE ceramic vacuum relay, shown by Jennings Radio, is

ENCLOSURE'S three-dimensional utility provides each customer with product individuality WHEN DEPTH...OR WIDTH IS IMPORTANT EMCOR II Modular Enclosures provide a major breakthrough in full functional utilization of available enclosure space. No matter what face—side or front—you select, each provides the most favorable housing area for your precision instrumentation. A variety of standard component modifications and/or variations offer each customer product individuality. A custom look is achieved through a choice of recessed, flush or extended panel mountings; single or double width frames, pontoon bases and side panels; assorted customer nameplate styles; aluminum trim or grillwork extrusions (customer's own design extrusions or trim can be readily utilized) and externally removable side panels. Stimulate your imagination; request full details today! EMCOR-The Original Modular Enclosure System By INGERSOLL PRODUCTS, Division of Borg-Warner Corporation, 1000 W. 120th ST. • DEPT. 1242 • CHICAGO 43, ILL. INGERSOLL PRODUCTS SALES ENGINEERS VISIT US AT IEEE BOOTH NOS. 4211-13-15 ALSO REPRESENT MCLEAN BLOWERS FOR ENCLOSURES CIRCLE 123 ON READER SERVICE CARD

FOR SUPERFINE CUTTING OF HARD, BRITTLE MATERIALS THE S.S. WHITE AIRBRASIVE® UNIT We don't know why anyone would want to slice a light bulb up like an onion. But we do think it is an awfully good demonstration of the Airbrasive's ability to cut hard, brittle materials. Imagine, for example, cutting precision slivers like these with a mechanical tool!
This unique industrial tool is doing jobs that were up to now considered impossible. Its secret lies in its superfine jet of gas-propelled abrasive particles that are capable of precision cutting without shock, heat or vibration. Thus the most fragile materials can be shaped, drilled, abraded, or cleaned with complete safety. Use it to make cuts as fine as 0.008"... remove surface coatings ... deburr tiny parts ... wire-strip potentiometers ... adjust microminiature circuits ... cut germanium, silicon, ferrites, glass, ceramics ... in the laboratory or on the production line. The cost is low, too. For under $1000 you can set up an Airbrasive cutting unit in your own shop. Send us samples of your "impossible" jobs and let us test them for you at no cost. WRITE FOR BULLETIN 6006. Complete information. S. S. WHITE INDUSTRIAL DIVISION, Dept. EU, 10 East 40th St., New York 16, N. Y. • Telephone MU 3-3015 collect.

designed to operate at 2,500 volts. Yet the device is only 1½ in. long and weighs only one ounce. At 16 Mc, the relay is rated at 4 amps rms and 2,500 volts withstand voltage. Unit was designed for high-volume production.

NICKEL-CADMIUM batteries, shown by Sonotone, are available in polystyrene and nylon cases. Batteries have vented cells and are made up in more than 100 standard and special configurations. Cells weigh from one pound to 21 pounds.

SENSOR for low-temperature controlled systems, displayed by Electronics Div. of Carborundum, has maximum temperature sensitivity over the range of −55 C to 105 C. Company will also show 24, 115 and 220-v ignitors for the first time. The devices serve as replacements for wire elements, spark plugs and automatic standby pilots. In a typical test, an ignitor operated cyclically, 30 sec on and 90 sec off, showed a surface temperature of 2,362 F initially and 2,280 F after 240,000 cycles.
TWO NEW time-delay proximity switch systems and a new two-position multi-pole rotary switch with snap-acting switch elements will be shown by Honeywell's Micro Switch Division. Time-delay systems are especially valuable for showing when an object has been in a specified position for a certain length of time, for indicating when production lines have slowed or stopped, or when parts jam. Both systems respond to ferromagnetic material within the detection range of the sensor without physical contact. Each system is available with three amplifier models that together cover time delays from 0 to 30 seconds.

FOAMED TEFLON FEP, just developed by DuPont, permits a dielectric constant below 2.0. The resulting dielectric constant depends upon percent void and can be on the order of 1.5 or 1.6. Fluorocarbon resins as a solid material have dielectric constants in the range of 2.05 to 2.20. In the new foamed FEP, air or an inert gas displaces the solid FEP during the foaming process, and the resulting dielectric constant is reduced as the degree of foaming is increased. The foam was developed by DuPont in cooperation with several of its

Now for Line and Core Driving . . . MOTOROLA'S NEW 2N2381-82 the 2N2381-82 ✔ WILL SWITCH 200 mA IN 40 NSEC TYP. ✔ WILL SWITCH AT CURRENTS AS HIGH AS ½ AMP Motorola has designed two entirely new germanium epitaxial PNP transistors for advanced line and core driver applications... for driving pulses on co-axial or resistance lines, or as line drivers for read-out and memory units. These devices, types 2N2381 and 2N2382, cover a switching current range from 100 mA to 500 mA. They feature a typical 200-mA switching time of only 40 nsec (67 nsec max.). See for yourself, by comparing specifications and by trying them in your present circuit, how the 2N2381 and 2N2382 compare with the transistors you are now using.
For additional information on these new Motorola high-current transistors, contact your nearest Motorola District Sales Office or Distributor, or write: Motorola Semiconductor Products Inc., Technical Information Department, Box 955, Phoenix 1, Arizona.

| Type | $BV_{CBO}$ | $BV_{CEO}$ | $V_{CE(sat)}$ (V) @ $I_C / I_B$ (mA) | $t_{ON}$* | $t_{OFF}$* |
|--------|------------|------------|--------------------------------------|-----------|------------|
| 2N2381 | 30 | 15 | 0.4 @ 200 / 20 | 10 nsec | 20 nsec |
| 2N2382 | 45 | 20 | 0.4 @ 200 / 20 | 10 nsec | 20 nsec |
| 2N1204 | 20 | 15 | 0.5 @ 200 / 20 | 15 nsec | 25 nsec |
| 2N1495 | 40 | 25 | 0.3 @ 200 / 20 | 15 nsec | 30 nsec |
| 2N2099 | 25 | 12 | 0.6 @ 200 / 10 | 16 nsec | 50 nsec |
| 2N2100 | 40 | 20 | 0.5 @ 200 / 10 | 16 nsec | 50 nsec |
| 2N2173 | 25 | 15 | 0.5 @ 200 / 10 | 16 nsec | 40 nsec |

*All types measured in the same circuit at 200 mA ($I_{B1} = 40$ mA; $I_{B2} = 40$ mA)

wire and cable customers. The new material offers two basic advantages over solid materials: the external diameter and cable weight can be significantly reduced for a given conductor size; alternatively, the internal conductor can be increased in size while the external diameter stays the same. The larger conductor adds mechanical strength and reduces copper losses. While a vacuum provides the lowest possible dielectric constant, 1.0, it can be approximated by air and certain other gases. Although tensile strength, elongation, and crush resistance are generally lowered, foamed FEP with a good balance of mechanical and electrical properties is being produced by several companies. Manufacturers of wire and cable insulated with foamed FEP include: Brand-Rex Div. of American Enka Corp., Surprenant Div. of ITT, Microdot Corp., and Times Wire & Cable Co.

CADMIUM sulphide photoconductor cells, used for industrial control and lighting applications, will be displayed by Sylvania.
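The foamed-FEP dielectric figures quoted earlier (solid FEP at roughly 2.05-2.20, foam at 1.5-1.6) can be estimated with a simple volume-fraction mixing rule. The sketch below is my own illustration, not DuPont data; real foams follow more involved mixing laws, but the trend the text describes, more void means a lower constant, comes through.

```python
# Rough sketch (illustrative, not DuPont data): linear volume-fraction
# estimate of the effective dielectric constant of a foamed insulation.
# k_solid = 2.1 is an assumed mid-range value for solid FEP; k_gas = 1.0
# approximates air or an inert fill gas.

def foam_dielectric(void_fraction, k_solid=2.1, k_gas=1.0):
    """Return the estimated dielectric constant for a given void fraction."""
    return void_fraction * k_gas + (1.0 - void_fraction) * k_solid

for void in (0.0, 0.25, 0.50):
    print(f"{void:.0%} void -> k about {foam_dielectric(void):.2f}")
```

Under this estimate, roughly half the volume must be displaced by gas to bring the constant down to the 1.5-1.6 region the article cites, which is why the degree of foaming governs the result.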
Units are one-half inch in diameter and offer cell resistance at 2 fc from 750 through 16,000 ohms.

THREE ELECTRON tubes for the home entertainment market are being introduced by ITT. The tubes are two 9-pin miniature audio amplifiers, the ECLL800 (6KH8) and ELL80, and a 9-pin miniature voltage indicator, the EM84A. The first tube provides push-pull audio amplification and phase inversion in a single envelope. The second is designed for use as a twin-channel audio-frequency output tube in stereo amplifiers, recorders, and radios, as well as for push-pull or single-ended circuits. The EM84A is a miniature, sensitive voltage indicator.

COLD CATHODE counters introduced by Baird-Atomic are two new Dekatron glow-transfer types. At the same time the company will introduce packaged transistor drive circuits for all double-pulse types. One new counting tube is a selector type GS10J with a low striking voltage of 150 v, a low pulse amplitude of 24 v, and a maximum counting rate of 1 kc.

Soniline Magnetostrictive Delay Lines, 4 TO 20,000 MICROSECONDS

STANDARD SONILINE MODELS

| 3C Soniline Model | S-33A | S-33A-1 | S-44A | S-66A | S-66B | S-77B | S-88A | S-88B | S-99A | S-99B | S-99C | S-99D |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Delay Range (μsec)** | 4-14 | 4-14 | Max. 1000 | Max. 1500 | Max. 2000 | Max. 2200 | Max. 3500 | Max. 6500 | Max. 4500 | Max. 15,000 | Max. 10,000 | Max. 20,000 |
| **Case Size L x W x H (in.)** | 5 x 1 x ¾ | 5 x 1 x ¾ | 3½ x 3⅞ x ¾ | 4⅛ x 4⅜ x ¾ | 4⅛ x 4⅜ x ¾ | 4⅛ x 4⅜ x ¾ | 4⅛ x 4⅜ x ¾ | 6 x 7 x ¾ | 9⅛ x 10⅝ x ¾ | 9⅛ x 10⅝ x ¾ | 9⅛ x 10⅝ x ¾ | 9⅛ x 10⅝ x ¾ |
| **Maximum Storage Capacity (RZ Binary Bits)** | 28 | 28 | 1000 | 1500 | 2000 | 2200 | 3500 | 5000 | 4500 | 9000 | 10,000 | 10,000 |
| **Bit Rate (Mc)** | 0-2 | 0-2 | 0-1 | 0-1 | 0-1 | 0-1 | 0-1 | 0-0.8 | 0-1 | 0-1 | 0-0.7 | 0-0.5 |
| **INPUT** | | | | | | | | | | | | |
| **V-in (Volts)** | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 25 | 25 |
| **I-in (mA)** | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 80 | 80 |
| **Z-in (Ω)** | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 | 300 |
| **L-in (μH)** | 15 | 15 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 60 | 60 |
| **Pulse Width (μsec)** | 0.20 | 0.20 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.8 | 0.8 |
| **Rise & Fall Time (μsec)** | 0.05 | 0.05 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.2 | 0.2 |
| **OUTPUT** | | | | | | | | | | | | |
| **V-out (mV)** | 70 | 20 | 20 | 20 | 10 | 10 | 10 | 10 | 10 | 5 | 5 | 2 |
| **Z-out (Ω)** | 1500 | 1500 | 1500 | 1500 | 1500 | 1500 | 4000 | 4000 | 4000 | 4000 | 4000 | 4000 |
| **L-out (μH)** | 80 | 80 | 150 | 150 | 150 | 150 | 150 | 150 | 150 | 150 | 300 | 300 |
| **(RZ) Pulse Width (μsec)** | 0.5 ±0.05 | 0.5 ±0.05 | 1.0 ±0.1 | 1.0 ±0.1 | 1.0 ±0.1 | 1.0 ±0.1 | 1.0 ±0.15 | 1.25 ±0.15 | 1.0 ±0.15 | 1.0 ±0.2 | 2.0 ±0.2 | 2.0 ±0.2 |
| **Signal to Spurious Noise (Dynamic)** | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 10:1 | 8:1 | 6:1 |
| **Signal to Spurious Noise (Dynamic)** | 5:1 | 5:1 | 5:1 | 5:1 | 5:1 | 5:1 | 4:1 | 4:1 | 5:1 | 4:1 | 4:1 | 3:1 |
| **MECHANICAL** | | | | | | | | | | | | |
| **Volume (Cu. In.)** | 2.2 | 2.2 | 5.5 | 9.4 | 15.6 | 18.6 | 18.4 | 31.5 | 43.5 | 75 | 100 | 150 |
| **Weight (lbs.)** | 0.2 | 0.2 | 0.4 | 0.6 | 1.0 | 1.2 | 1.1 | 1.25 | 2.7 | 3.0 | 3.1 | 3.4 |
| **Mounts*** | TS | TS | TBI | TBI | TBI | TBI | TBI | TBI | EMTI | EMTI | EMTI | EMTI |
| **ENVIRONMENTAL** | | | | | | | | | | | | |
| **Oper. Temp. Range (°C)** | 0-80 | 0-80 | 0-55 | 0-55 | 0-55 | 0-55 | 0-55 | 0-55 | 0-55 | +10 to +40 | +10 to +40 | +10 to +40 |
| **Max. Delay Change with Temperature (μsec)** | ±0.04 | ±0.05 | ±0.1 | ±0.1 | ±0.1 | ±0.1 | ±0.1 | ±0.15 | ±0.1 | ±0.2 | ±0.3 | |
| **Shock (Non-operating)** | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | 50 g, 11 ms | Normal Handling | Normal Handling | Normal Handling |
| **Vibration (Non-operating)** | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | 20 g, 5-2000 cps | Normal Handling | Normal Handling | Normal Handling |

*TS — Threaded Studs; TBI — Threaded Blind Inserts; EMTI — Edge Mounted Threaded Inserts

ON REQUEST — LIBRARY OF INFORMATION ON DELAY LINES. HOW TO USE — An extensive collection of technical notes describes a variety of typical digital applications for magnetostrictive delay lines, including storage, sequential buffering, cross- and auto-correlation, signal compression and expansion, and video signal analysis □ HOW TO SPECIFY — A 20-page technical booklet discusses specification of magnetostrictive delay lines for digital applications. Details include: capabilities and limitations, principles of operation, modulation techniques, test patterns and effects of temperature □ HOW TO ORDER — 3C Catalog MDL-1 details the complete line of standard Soniline models. Order form included.

COMPUTER CONTROL COMPANY, INC., OLD CONNECTICUT PATH, FRAMINGHAM, MASS. • 2251 BARRY AVENUE, LOS ANGELES 64, CALIF.
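The Soniline figures relate delay, bit rate, and storage in a simple way: a delay line stores the bits that are "in flight" between its transducers, so capacity is just delay times bit rate. A short sketch of that relationship, checked against two of the catalog entries (the helper function is my own, not from the catalog):

```python
# Sketch (not from the 3C catalog): storage capacity of a delay line is
# delay x bit rate -- the bits in transit between input and output
# transducers. Delay in microseconds times rate in megacycles gives bits
# directly, since the time units cancel.

def delay_line_capacity(delay_us, bit_rate_mc):
    """Return the number of bits the line holds at once."""
    return int(delay_us * bit_rate_mc)

# Checked against two table entries: the S-66B (2,000 usec max at up to
# 1 Mc) stores 2,000 bits; the S-99D (20,000 usec at up to 0.5 Mc)
# stores 10,000 bits.
assert delay_line_capacity(2000, 1.0) == 2000
assert delay_line_capacity(20000, 0.5) == 10000
```

The same arithmetic shows why the longest lines in the table are quoted with reduced maximum bit rates: dispersion over the longer path limits pulse rate, so capacity grows more slowly than delay.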
Specify VICTOREEN Special Purpose Components FOR HIGH VOLTAGE, HIGH RESISTANCE APPLICATIONS WITHIN THE RANGE OF 400 TO 27,000 VOLTS

In the 400 to 27,000 volt range, Victoreen components — Corotrons®, triodes, pentodes and resistors — give your circuits both reliability and outstanding performance. Other advantages are circuit simplification for lighter weight and lower manufacturing costs. Let Victoreen help solve your high voltage stabilization problems. Condensed specification data on some Victoreen units is listed below:
- Glow Tubes: 57 to 150 volts
- Corotrons: 400 to 27,000 volts
- Vacuum Tubes: 1000 to 10,000 volts
- Resistors: 200 ohms to 200 megohms

From the manufacturer of famous Victoreen Hi-Meg Resistors and Electrometer Tubes. IEEE BOOTH 2301-03 THE VICTOREEN INSTRUMENT COMPANY, 5806 HOUGH AVENUE • CLEVELAND 3, OHIO. Victoreen European Office: P. O. Box 654, The Hague

20-YEAR HITCH IN DAVY JONES' LOCKER (Mycalex components are built to work on over 4,000 miles of ocean floor until at least 1983)

These amplifiers will be spaced at 20-mile intervals along a single cable on the ocean floor to help the Bell System handle the growing number of intercontinental telephone calls—well over 4,000,000 last year alone. In designing these new amplifiers the Bell System engineers aimed at developing a device that would stand up for at least 20 years under the extreme pressure, for failure of any of the complex components could interrupt vital transoceanic circuits. They looked to Mycalex Corporation of America for 11 key parts—resistors, inductors and transformers—because Western Electric knows from over 20 years of materials testing and experience that our SUPRAMICA® 555 ceramoplastic is one of the most nearly perfect insulating materials. It can be precision molded for reliable operation. It is extremely stable. It has a thermal expansion coefficient close to that of stainless steel. It permits easy soldering of imbedded inserts.
SUPRAMICA (we make three kinds, 555, 560, and 620 "BB") is only one of the products we produce as the world's leading specialists in high-temperature, high-reliability ceramic insulation materials and components. If you'd like a sample of SUPRAMICA 555 plus our newest literature on this amazingly versatile engineering material, please fill out the coupon below. MYCALEX World's largest manufacturer of ceramoplastics, glass-bonded mica and synthetic mica products. Mycalex Corporation of America Dept. E, Clifford Blvd. Clifton, New Jersey Please send me information on SUPRAMICA 555 ceramoplastic and other Mycalex products. My specific interest is— Name__________________________ Title___________________________ Company_______________________ City______________State_______ CIRCLE 129 ON READER SERVICE CARD 129 MODEL WX LABORATORY WINDER The world's most versatile winding machine for all types of radio frequency coils as well as bobbins, transformers, resistors, etc. Features continuous gain or feed adjustment and our exclusive adjustable cam (patented). MODEL CS BOBBIN WINDER Pictured is the current version of our most popular bobbin winder having increased capacities. A complete line of single and multiple units is available. MODEL CK-SL AUTOMATIC WINDER The machine illustrated produces more than 20 completed coils a minute without operator attention. This is one of a large family of completely automatic machines available. MODEL MP HAND WINDER Coils with up to 60 turns produced in a single stroke of the lever. MODEL BRS PRECISION STRIP WINDER This is one of a group of machines for winding a single layer on continuous or precut strips or mandrels of various shapes. The most advanced Coil Winding Equipment in the world today! Write today for complete literature COIL WINDING EQUIPMENT CO. OYSTER BAY, NEW YORK — WALnut 2-5660 (Area Code 516) HERE'S EXTREME ACCURACY FROM 5 CPS TO 50 MCS WITHOUT CORRECTION FIGURES! 
MODEL 540A THERMAL TRANSFER STANDARD—Extreme accuracy is provided by the new Fluke Model 540A Thermal Transfer Standard over a frequency range of 5 cps to 500 KC. 540A incorporates a built-in Lindeck potentiometer and galvanometer and requires only that the DC standardizing voltage be provided from an external source. Prime advantage offered by the Model 540A is that the AC and DC voltages on any range are always applied to the same portion of the transfer circuitry on a 1:1 basis. This feature completely eliminates the cause of inherent inaccuracy found in other transfer type instruments in which the AC voltage is first divided down to a lower level before comparison with the DC standard voltage. A search circuit minimizes the possibility of thermocouple overload and permits more rapid measurements of unknown voltages. PARTIAL SPECIFICATIONS—MODEL 540A | VOLTAGE RANGES | TRANSFER ACCURACY | |----------------|-------------------| | 0.5V | ±0.02% ±0.05% | | 1-10V | ±0.02% | | 20-50V | ±0.02% ±0.2% | | 100-500V | ±0.03% | | 1000V | ±0.05% | Voltage Ranges: 0.5, 1, 2, 3, 5, 10, 20, 30, 50, 100, 200, 300, 500, and 1000 V. (Note: A voltage from ½ to 1½ times the voltage specified by the range selector may be accurately measured. The absolute maximum voltage which may be safely applied is 1000 V DC or 1000 V RMS AC). Price: $795.00. The frequency range of the 540A can be extended to 50 megacycles by using the Model A55 thermal converters. Current measurements from 2.5 ma to 10A can be made with the 540A by utilizing the Fluke Model A40 current shunts. MODEL A55 THERMAL CONVERTERS offer frequency specifications to 50MC with complete coverage of the voltage range (from 0.25 to 50 VAC) provided in nine individual converter units. 
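The transfer principle behind the 540A (the same thermal element reads the unknown AC and a known DC on a 1:1 basis) can be sketched numerically. This is my own illustration of the general AC-DC transfer idea, not Fluke's circuitry: a thermal sensor responds to heating, which goes as the square of RMS voltage, so the DC source is adjusted until the sensor reads the same as it did on AC, and that DC value equals the AC RMS voltage.

```python
# Sketch of the AC-DC thermal transfer principle (illustrative only, not
# the Fluke 540A circuit). A thermocouple's EMF tracks heater power,
# which is proportional to the square of the applied RMS voltage.

import math

def thermo_emf(v_rms, k=1e-3):
    """Thermocouple output: proportional to heater power (k is arbitrary)."""
    return k * v_rms ** 2

def transfer_measure(ac_rms, v_dc_max=1000.0):
    """Bisect for the DC voltage whose thermocouple EMF matches the AC reading."""
    target = thermo_emf(ac_rms)
    lo, hi = 0.0, v_dc_max
    for _ in range(60):            # 60 halvings: resolution far below 1 uV
        mid = (lo + hi) / 2
        if thermo_emf(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A 100-v peak sine wave has RMS 100/sqrt(2); the transfer reading agrees.
rms = 100.0 / math.sqrt(2)
assert abs(transfer_measure(rms) - rms) < 1e-6
```

Because the comparison is made at the same level on the same element, nonlinearity and drift of the thermal sensor cancel, which is the advantage the 540A copy claims over instruments that divide the AC down before comparison.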
Great care has been taken in the electrical design and packaging of these units to provide accuracy comparable to NBS standards under less than ideal conditions of temperature, humidity, and vibration enabling them to be used in production areas as well as in the standards or development laboratory. PARTIAL SPECIFICATIONS—MODEL A55 | VOLTAGE RATING | TRANSFER ACCURACY | |----------------|-------------------| | 5 cps | ±0.01% ±0.01% +0.5% +1.5% | | 1-10V | ±0.01% ±0.03% ±0.10% ±0.10% | | 20-50V | ±0.01% ±0.05% ±0.10% | MODEL 550A TRANSFER STANDARD incorporates: a four dial Lindeck potentiometer, DC reference supply, polarity reversing switch and terminals for external galvanometer. A complete set of accessories is included at no additional cost for convenient interconnection of Models 550A and A55 in any suitable measurement configuration. Price: $395.00. CALIBRATION: All accuracy specifications are guaranteed by John Fluke Mfg. Co., Inc., to be within the indicated deviation limits from zero error as defined by the National Bureau of Standards without correction figures! John Fluke Company or NBS test reports on Models 540A or A55 are available at additional cost. All prices F. O. B. factory, Mountlake Terrace, Washington. Prices and data subject to change without notice. John Fluke Manufacturing Co., Inc. Box 7428, Seattle 33, Washington PR 6-1171, TWX—Halls Lake, TLX—852 See the new 821A Voltmeter and other new instruments to be provided at IEEE Show Booth 3229-3231 DIP-SOLDERING is used to connect pins of Burndy plug having right-angle configuration to circuit board interconnecting RCA Micro-Modules, also dip-soldered. Packaging scheme facilitates micromin system production and also achieves operational advantages. Further refinements of technique to be discussed at IEEE. Topics include packaging, quality, welding, thin films UPWARDS of a dozen papers on production techniques are to be discussed at forthcoming IEEE convention during various technical sessions. 
As a result, variety in topics rather than any particular topic trend seems to be the keynote. A panel will discuss assembly of pellet components. Packaging is of top interest. Development, testing and low-cost production of basic elements used in microelectronic systems appear to be reaching the point where they can be integrated into reliable, maintainable, low-cost packages. This is the theme of a paper by Michael Lazar of Burndy entitled "Case Histories in the Field of Microelectronic Packaging." He deals with several solutions to problems of package integration concerning: interconnections, power supplies, grounding, shielding, air cooling, mechanical mounting, module polarization and guidance. As indicated by the title, Lazar will describe experiences in micro packaging at Burndy and RCA. He says that we are witnessing three microminiature revolutions: (1) microminiaturization of discrete components, (2) thin-film integrated circuitry, (3) semiconductor integrated circuitry. No one now knows, says Lazar, whether any one approach will supplant the other two, whether marriages will take place, or whether each

...a TV set operating under water? That's not water...that's FREON® fluorocarbon solvent. And we'll bet this is the cleanest electronic system at the IEEE show! Because it will play, while completely immersed, for the duration of the show. This demonstration is possible because "Freon" is an excellent dielectric and a selective cleaning agent. There is no arcing, even in the TV set's high-voltage circuitry. "Freon" thoroughly removes dust, grease, lint and chips from components or entire assemblies—without harm to delicate parts, finishes, elastomers or insulation. "Freon" has a uniquely low surface tension that lets it penetrate minute openings. There it wets and displaces soils other solvents cannot. And "Freon" is safe for production people because it's nonexplosive and virtually nontoxic.
It leaves no residue and can easily be recovered for use over and over again for maximum economy. So don't miss this one at IEEE! If you're not going to the show, write for complete technical information, and, if you wish, the services of a cleaning specialist. Du Pont Co., "Freon" Products Division N-2420 E-3, Wilmington 98, Delaware. SEE THIS DEMONSTRATION IN BOOTH #4317-4319 AT IEEE!

EFFECT of process variables on glass-bearing ceramic-to-metal welds must be evaluated on the basis of the recent discovery that glass migrates from the ceramic into the porous molybdenum coating.

will retain its own place. Right now component standardization does not exist, compounding the problems of mechanical packaging and electrical interconnection of systems. These are not likely to be solved easily by use of standard techniques, he says. One problem is keeping the number of connections to a minimum while maintaining module replaceability and system adaptability to incorporation of new microminiature modules. A packaging technique intended to meet this problem is shown in an accompanying illustration. Modules shown are RCA Micro-Modules developed for the U.S. Signal Corps. They consist of wafer-mounted components interconnected with soldered riser wires that protrude from the encapsulated modules for mounting to double-sided printed circuit boards by dip-soldering. Board circuitry interconnects modules and the Ultra-Miniature Printed Circuit connector developed by Burndy for the U.S. Signal Corps. The UPC plug uses three tiers of gold-plated pins of 0.028-inch diameter on 0.100-inch centers, formed in a right-angle configuration and molded in a one-piece plastic body. The plug body is mounted to the board with its pins dip-soldered to board circuitry. Large pins at each end polarize and guide the plug into the receptacle. The receptacle is also a one-piece molded housing containing gold-plated beryllium-copper sockets which provide round pins in back for termination to the interconnecting wiring.
Wire termination is accomplished by tool-wrapping small gauge solid wires to terminal pins and soldering. CERAMIC-METAL SEALS—Relative importance of sintering rate, furnace atmosphere, particle size, metal impurities, glassy-phase wetting of metal and peak temperature in making ceramic-to-metal seals will be pointed out by Sanford S. Cole of Mitronics. In a paper entitled "Basic Mechanisms Affecting Ceramic Seals", he says that different sealing techniques are based on different physical laws. Four techniques provide four categories of seals: (1) glass-bearing ceramics coated with a refractory metal, (2) non-glassy ceramics coated with a refractory metal, (3) seals which adhere as a result of chemical formation of molybdenum aluminate compounds, (4) active metal seals. Since most ceramic-metal seals are in first category, Cole concentrates his discussion on techniques and associated physical laws in making these seals. Adherence of FALL IN! New-type recorder assembles slow or random data, spaces it uniformly on tape for computers If your digital computer is as finicky as most, it won't listen to a magnetic tape that talks like this It will insist on characters uniformly spaced on the tape like this Which means that life can be difficult for people who have data that is otherwise perfectly reputable, but just doesn't happen to occur at the right time intervals to suit the computer. Now comes a wonderful device that will gladly accept irregular data—such as the output of a teletypewriter or an analog-to-digital converter—and put it on magnetic tape just the way the computer wants it. The secret is incremental tape motion. Our new recorder stands still awaiting each character, records it, then moves the tape a uniform distance to await the next. As a result, whether characters arrive 100 per second or 1 per month, they are recorded in a proper, uniform packing density. 
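The incremental-motion idea is simple enough to model in a few lines — each character is written one fixed step along the tape, whatever its arrival time. The sketch below is a toy model, not PI's actual mechanism; only the 200 bits-per-inch density comes from the text:

```python
# Toy model of incremental tape recording: characters arrive at
# irregular times, but each one is written a fixed step further
# along the tape, so packing density stays uniform.
DENSITY_BPI = 200            # characters per inch (from the text)
STEP_IN = 1 / DENSITY_BPI    # tape advance per character: 0.005 inch

def record(arrival_times_sec):
    """Return the tape position (inches) of each recorded character."""
    return [round(i * STEP_IN, 4) for i, _ in enumerate(arrival_times_sec)]

# Wildly irregular arrivals still land on a uniform 0.005-inch grid:
positions = record([0.0, 0.01, 5.0, 3600.0])
print(positions)  # [0.0, 0.005, 0.01, 0.015]
```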
The PI incremental recorder shown here records 200 bits per inch (556 BPI optional), a recording fully compatible with the input requirements of IBM computers. To tell you more, we've put together a brochure fully compatible with the input requirements of discriminating users. Send for bulletin #73; address us at Stanford Industrial Park, PRECISION INSTRUMENT Palo Alto 20, California. slash production costs with new automated coil winder ... fewer operators ... more coils per operator Leesona No. 116 will wind 400 to 1,000 bobbin coils per hour... with one operator. All the operator does, as the rotary table brings the winding heads to her, is load bobbin on arbor, clip start wire. Automatically the No. 116 closes tailstock, tapes start lead, resets counter, starts winding, stops at ± 2 turns, waxes or tapes finish lead, indexes arbor, cuts wire, ejects and sorts. It winds two or more different coils up to 3" diameter by 2³/₈" long... simultaneously. Supports and winds from 100 lb. wire container, and stops when spool or container is empty. Six to twelve two-speed heads wind wire AWG 16 to 50 and finer. No. 116 is designed by Robert Bachi. We build it to the customer's specifications. For details call your nearest Leesona representative. LEESONA CORPORATION Warwick, Rhode Island See us at the IEEE Show, Booths 4325-7 B.2.2 QUALITY WELDING — Developments in high-temperature circuit techniques have made new conductor materials available for interconnecting wiring that make welding of electronic circuit interconnections more feasible. So says F. A. Lally of the Boeing Company in his paper "A Program of Quality Assurance for Welded Electronic Circuitry." This in turn, says Lally, achieves savings in weight and equipment space. But, despite growing activity in welding of electronic circuitry extremely little reliability data exists. However, says Lally, one study by C. J. 
Heslin of Raytheon Company shows that while the future goal for mean-time-between-failures of soldered joints is 100,000,000 hours, weld-joint reliability exceeds 552,000,000 hours. Another study by Heslin indicates a possible 20 to 1 improvement in reliability when using welded rather than soldered joints. However, says Lally, in order to achieve this improvement, welding machines have to be preadjusted to predetermined pressure and energy settings. Operators then only align leads and energize welding circuit, eliminating factor of varying operator skills. At Boeing, a welded-wire-packaging-technique program evaluates welding machinery on the basis of welding-current discharge. NUMBER ONE IN ELECTRICAL CONNECTORS... SOLVE SPACE AGE PROBLEMS BETTER... SPECIFY CANNON® KV 26500 PLUGS SPECIFICALLY DESIGNED TO EXCEED THE PERFORMANCE REQUIREMENTS OF MIL-C-26500. This new CANNON Series provides high temperature, general purpose plugs with crimp snap in contacts...is designed to exceed the requirements of the MIL-C-26500 specification for meeting the increased environmental demands of missile and space vehicles. The CANNON KV 26500 will help you solve your space age connector problems better with these outstanding new features: LEAD-IN CHAMFER ON HARD CLOSED ENTRY SOCKET INSERT • REAR INSERTION/EXTRACTION OF CONTACTS AND TOOL • INTERFACIAL SEAL WITH RAISED BARRIERS AROUND EACH PIN CONTACT • EXPENDABLE PLASTIC TOOL CANNOT DAMAGE INSERT • SIMPLER, STRONGER CONTACTS RESIST BENDING • NEW BUTT-TYPE PERIPHERAL SEAL. Whatever your requirements, whether ground based or outer space, specify CANNON, the world's largest and most experienced manufacturer of electrical connectors. 
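The joint-reliability figures Lally cites can be put in system terms: for statistically independent joints in series, failure rates add, so system MTBF is per-joint MTBF divided by joint count. A rough check (the joint count is illustrative; the MTBF figures are from the studies quoted above):

```python
# Series-system MTBF from per-joint MTBF: for N independent joints,
# lambda_system = N * lambda_joint, so MTBF_system = MTBF_joint / N.
SOLDER_MTBF_H = 100_000_000   # future goal for soldered joints (text)
WELD_MTBF_H   = 552_000_000   # demonstrated weld-joint figure (text)
N_JOINTS = 10_000             # illustrative joint count for one system

solder_system = SOLDER_MTBF_H / N_JOINTS   # 10,000 hours
weld_system   = WELD_MTBF_H / N_JOINTS     # 55,200 hours
print(f"solder: {solder_system:,.0f} h, weld: {weld_system:,.0f} h")
print(f"improvement: {weld_system / solder_system:.2f}x")
```

On these two figures alone the improvement is 5.52 to 1; the 20-to-1 figure quoted from Heslin's other study presumably rests on different joint populations.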
For further information, write to: SEE CANNON AT IEEE BOOTH 2727-2731 CANNON ELECTRIC COMPANY 3208 Humboldt Street, Los Angeles 31, California ADDITIONAL PLANTS IN: SANTA ANA & ANAHEIM, CALIF., PHOENIX, ARIZ., SALEM, MASS., TORONTO • LONDON • PARIS • BORNEM, BELGIUM • TOKYO • MELBOURNE Sales offices and representatives in principal cities of the world © 1963 CANNON ELECTRIC COMPANY CIRCLE 137 ON READER SERVICE CARD FILTERED COOLING AIR IN!... RFI OUT! WITH NEW McLEAN RFI BLOWERS! AND FILTER-GRILLE ASSEMBLIES Now McLean makes it possible for you to pressurize radio shielded electronic cabinets with cool, filtered air without opening the "envelope" to RF interference. These new McLean RFI blowers and filter-grille assemblies have been designed and built for RFI performance in accordance with MIL-I-6181D — meeting or exceeding all requirements including susceptibility, generation and shieldability. This development is one of many by McLean engineers designed to assure reliability of electronic equipment. WRITE TODAY for further information. McLEAN ENGINEERING LABORATORIES World Leader in Packaged Cooling P.O. Box 228, Princeton, New Jersey Phone: Area Code 609 WAlnut 4-4440 TWX 609-799-0245 SEE US AT THE NEW YORK IEEE SHOW BOOTH #1624. SEND FOR NEW 44-PAGE CATALOG Or Our NEW MIL-SPEC BLOWER CATALOG IMPLOSION resistance during evacuation is provided by television tube envelope permanently reinforced with steel bands, resins and glass fiber. DE-EVACUATION CONTROL—New technique for controlled de-evacuation of television tubes to prevent and contain implosions without using glass or plastic shields will be described by Burton W. Spear and Darryl E. Powell of Owens-Illinois Technical Center. Called Kimcode (KImble Method for COntrolled DEvacuation), this is a manufacturing approach for building tubes so that tube envelopes are reinforced with steel bands, resins, and glass fiber. 
The tube is thus made highly resistant to implosion during evacuation. At the same time, Kimcode technique reportedly permits elimination of safety window in front of tube as in bonded or tempered glass construction. Supposedly, it will simplify pro- PRECISION-MADE SUBMINIATURE SWITCHES hundreds of types and assemblies available Where space and weight limitations are critical, MICRO SWITCH "SM" subminiature switches bring a combination of compact size, reliable precision operation and broad range of actuators, assemblies and variations to meet a wide variety of design requirements. Long, reliable life. Wide temperature ranges: −100° to +180°F. SPDT contact arrangement. Catalog listings also include types with extra long life and extra high temperature characteristics, gold contacts for low energy circuits, and bifurcated contacts—to name a few. A field engineer from the nearest MICRO SWITCH Branch (See Yellow Pages) will be happy to show the complete line...or write for Catalog 63 now. NEW 5 KW LOAD BIRD Model 8890 RF Load with blower accessory features forced air cooling. No water required! The BIRD Model 8890 TERMAFINE® Coaxial RF Load Resistor is a portable, general purpose 50-ohm coaxial load. It provides an accurate, non-radiating termination for RF transmission lines. The Model 8890 uses BIRD "QC" Quick-Change Connectors to accommodate any standard series of coaxial line fittings. Female Type LC (illustrated) is normally supplied. Continuous power rating for the Model 8890 utilizing normal air convection cooling is 2500 watts. With accessory blower Model BA-88, this power rating is doubled to 5000 watts continuous duty. 
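The VSWR figure in the specification table that follows (1.1 maximum) pins down how good a termination the load is; the standard transmission-line relation Γ = (VSWR − 1)/(VSWR + 1) gives the reflected-power fraction. A quick check in modern notation:

```python
# Reflected power implied by a VSWR specification:
# |Gamma| = (VSWR - 1) / (VSWR + 1);  P_refl / P_fwd = |Gamma|^2.
def reflected_fraction(vswr):
    gamma = (vswr - 1) / (vswr + 1)
    return gamma ** 2

vswr = 1.1   # Bird Model 8890 maximum spec
print(f"reflected power: {reflected_fraction(vswr) * 100:.2f}%")  # about 0.23%
```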
| SPECIFICATIONS | BIRD Model 8890 TERMAFINE | |----------------|---------------------------| | Resistance: | 50 ohms nominal | | Power rating: | 2.5 KW (air convection cooled) | | | 5 KW with BA-88 Blower accessory | | VSWR: | 1.1 max, 0-1000 mc | | Weight: | 33 pounds net (with blower 49 pounds) | Ambient Air Temperature Range: -40°C to +45°C. Blower Model BA-88: 115V, 50/60 cy, 27w NOTE: Other models available in this series are: Model 8891 with 3/4" EIA flanged line connector Model 8892 with 1¾" EIA flanged line connector Prices, F.O.B. Factory: Model 8890 $410 Model 8891 425 Model 8892 415 Model BA-88 250 Contact BIRD for more information on these and other BIRD products. ELECTRONIC CORPORATION 30303 Aurora Rd., Cleveland 39 (Solon), Ohio Churchill 8-1200 TWX 216-248-6458 CABLE: BIRDELEC Western Representatives: O. H. Brown Co. P. O. Box 128, Palo Alto, Cal. * Phone 321-8867 (415) 3600 Wilshire Blvd., Los Angeles 43, Cal. * Phone 383-4443 (213) HIGH-DENSITY FABRICATION — Aerospace Division of Martin-Marietta will have two authors describing welding and encapsulating techniques for high-density packaging. In their paper, S. Maszy and H. Uglione will discuss economical methods for preparation of molds for encapsulating modules to be used in such packaging. Also they will talk about encapsulating material selection. THIN-FILM RESISTORS — Vacuum evaporation and deposition of nichrome onto various planar substrates with different surface characteristics to form resistor elements with varying characteristics is topic advanced by H. J. Degenhart and I. H. Pratt of U.S. Army Electronics R & D Laboratory. Among factors measured was effect of substrate surface roughness. PELLET PANEL — Application and assembly of pellet microcomponents into systems will be discussed at a panel moderated by S. M. Stuhlbarg of Mallory. 
The panel consisting of representatives from a number of both user and supplier companies has been organized because of increasing interest among design and packaging engineers in pellet microcomponents. This interest reportedly has been aroused by advantages such as: design flexibility, reasonable cost, adaptability to mechanized production. Members of panel will discuss applications and assembly techniques which are under investigation at their respective companies. Availability and useability of pellet microcomponents will also be discussed. Suppliers of both active and passive components will be represented. THIN-FILM CONTROL — Automatic control of thin-film deposition process for resistor elements will be described by R. A. Quinn and H. R. Kaiser of Lockheed. Two instruments—an a-c ohm meter and a-c resistance limit bridge—have been specially developed to terminate process when a predetermined resistance value is attained. In addition to controlling process end-point, units also monitor elecRegardless of its size, type, or frequency any crystal bearing the name McCoy can be relied upon to deliver the ultimate in frequency control despite wide temperature variations and extreme conditions of shock and vibration. **MICRO MODULE CRYSTALS** *SHOWN ACTUAL SIZE* (GLASS) This vacuum sealed, hard glass crystal unit was developed and designed for use with the RCA micromodule wafer shown above. Available in frequencies ranging from 10 mc to 200 mc, the type MM crystal provides electronic miniaturization programs with a reliable evacuated crystal enclosure of excellent stability. **METAL ENCASED STANDARD SIZE AND MINIATURE CRYSTAL UNITS** *SHOWN ACTUAL SIZE* The crystals that made the name of McCoy a synonym for quality. Metal encased, HC-6/U size is available in frequencies from 500.0 kc to 200.00 mc. Fills the need for miniature crystals in frequencies from 2.5 mc to 200.0 mc. Meets specs MIL-C-3098C and ARINC No. 401. 
**ALL GLASS STANDARD SIZE AND MINIATURE CRYSTAL UNITS** *SHOWN ACTUAL SIZE* This vacuum sealed, hard glass crystal unit possesses all of the quality features for which the McCoy M-1 is so famous. It has long term frequency stability five times better than the conventional metal types. Available in frequencies from 1000 kc to 200 mc. This vacuum sealed, hard glass crystal unit meets the new CR-73/U and CR-74/U specifications. It has long term frequency stability five times better than the conventional metal type. Available in frequencies from 5000 kc to 200 mc. **CRYSTAL FILTERS** McCoy crystal filter engineering and production capabilities are among the finest in the world and constantly in demand by industry, the military, and everyone searching for quality. A complete technical staff stands ready at all times to discuss your filter requirements. Many standard models are available without costly design and prototype charges. The following chart shows bandwidths available in specific frequency ranges (expressed as % of center frequency). | Frequency | B.W. | |--------------------|------------| | 1 mc to 30 mc | .01% to 4.0% | | 30 mc to 75 mc | .001% to .04% | | up to 125 mc | up to .01% | **SEE THE NEW McCoy HIGH-FREQUENCY CRYSTALS AND CRYSTAL FILTERS AT BOOTH NO. 2214 RADIO ENGINEERING SHOW NEW YORK COLISEUM MARCH 25-28** **ELECTRONICS CO.** Dept. 1363 MT. HOLLY SPRINGS, PA. Phone: HUnter 6-3411 Area Code: 717 SUBSIDIARY OF OAK MANUFACTURING CO. Write today for our free illustrated catalogs which include complete listings of all McCoy military specifications. For specific needs, write, wire or phone. Our research section is anxious to assist you. A Pump with a History and a Future From a 17th Century Well Sweep Pump to a new parametric amplifier pump Klystron, conceived and in production at Metcom. 
The new Klystron features are: a dielectric tuner, a wider frequency range selection than any other Pump Klystron, a new locking device, and a tuner that can be physically changed to meet different needs and designs. The parametric Pump Klystron is now in production, samples and specifications are available. The Pump Klystron can be adapted or ordered to custom design. Unlike other Klystrons, the user is not limited to specific frequency selections. trochemical activity of process. Thus, with minor modification, it is expected that these units will provide full control over process variables. A-c measurement rather than d-c is used because of direct current present in resistor elements during fabrication. ELECTRONIC CABLE SHEATH —Sheathing of communication and power cable with aluminum performed by a Swedish firm using an electronic method will be described in a paper by C. A. Tudbury of AMF Thermatool Corporation. Aluminum strip is continuously formed into tubing around cable while cable and strip pass through a special mill. Result of a joint effort by Sieverts Kabelverk and Thermatool, the process uses high-frequency seam welding to join strip edges at such a high rate of speed that cable and its insulation are unaffected. Two electrical effects are exploited in welding: skin effect and proximity effect. These make possible very high concentration of heating energy at strip edges without undue heating of other areas. (Skin effect is well known and describes tendency of alternating current to be more concentrated at conductor's surface rather than its center. Proximity effect describes further concentration of current along adjacent sides of two closely-located conductors having pronounced skin effect and comprising a go-and-return circuit). Current at 450 kc enters and leaves strip by two sliding contacts. 
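The concentration of welding current just described follows from the standard skin-depth relation δ = √(ρ/(πfμ)). A quick estimate for aluminum at the 450-kc welding frequency (the resistivity is a handbook value, not from the article):

```python
import math

# Skin depth in aluminum at the 450-kc seam-welding frequency:
# delta = sqrt(rho / (pi * f * mu)), with mu ~ mu0 for aluminum.
RHO_AL = 2.8e-8            # ohm-meters, handbook value for aluminum
MU0 = 4 * math.pi * 1e-7   # permeability of free space, henry/meter
F = 450e3                  # welding frequency from the text, cps

delta_m = math.sqrt(RHO_AL / (math.pi * F * MU0))
print(f"skin depth: {delta_m * 1000:.3f} mm")  # ~0.126 mm
```

A current layer roughly a tenth of a millimeter deep is what allows the strip edges to reach welding heat before the cable insulation beneath is affected.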
Skin effect and proximity effect cause bulk of current to flow in a thin film along one edge of vee-shaped opening strip to apex of sheath edges and back along other edge. This could be your calendar—any week in 1963. We believe BUSINESS WEEK’s “Short-Notice Closing” is the fastest ad closing of any major national magazine. It gives you last-minute, same-week insertion for one or two pages of advertising in any issue. BUSINESS WEEK is read by the most important people in America for business advertisers—management men. With “Short-Notice Closing,” you reach these important decision-makers fast, with up-to-the-minute information they need for quick decisions. Here are the details: Deadline for Reservations: Monday at 4 p.m. Our Business Department in New York must have your reservation, at the latest, by 4 p.m. on Monday of week-of-issue. For quickest service, wire (TWX N.Y. 1-1636) or phone K. D. Reynolds, Production Manager, BUSINESS WEEK, Longacre 4-3000 (Dial New York Area Code 212). Deadline for Plates: Tuesday at 1 p.m. To meet our “Short-Notice Closing,” your plates must be in the hands of our Production Manager, in our New York office (330 West 42nd Street, New York 36, N.Y.), by 1 p.m. on Tuesday of week-of-issue, at the latest. (Sorry, no extensions possible.) Size of Units: Black-and-White Page or Spread. Either one or two single black-and-white, non-bleed full pages, or one black-and-white two-page spread (gutter bleed only) per issue. Only complete plates can be accommodated. Corrections, additions, or plate refinements are not possible on so tight a schedule. Price: A premium of 10% will be charged over and above regular advertising space rates for the “Short-Notice Closing” service. Agency commission applies to premium. To get fast decisions, there’s nothing like being in the right place at the right time. The right place is BUSINESS WEEK. 
Now the right time is any time—with BUSINESS WEEK’s “Short-Notice Closing.” You advertise in BUSINESS WEEK when you want to inform management A McGraw-Hill Magazine By Misunderstanding and Mishandling the “Cash-Flow” . . . Let’s Not Eat The Goose That Lays The Golden Eggs Because of a basic misunderstanding, labor and management may be heading toward a battle that need not be fought. The issue is whether cash-flow or profit is the best measure of business earnings, and therefore the best measure of business’ ability to raise wages, improve fringe benefits and shorten hours of work. When business and union negotiators sit down around the bargaining table they frequently clash. This is probably inevitable. But it is pure waste—for both sides and for the public as well—when the clashes are caused by a misunderstanding rather than by realities. The “cash-flow vs. profits” issue, should it develop as suggested by reports from Washington and the public pronouncements of labor leaders, will be a prime example of such waste. This editorial, one of a series on business profits, is designed to point this up. It discusses the difference between cash-flow and profits. And it shows how confusion between the two might have disastrous results. The Meaning Of Cash-Flow Cash-flow can be calculated in various ways. One way—and the most common one among businessmen—is to add (1) after-tax profits minus dividend payments to stockholders, and (2) depreciation allowances. Another way—the one used by the AFL-CIO—is to add (1) total after-tax profits, and (2) depreciation allowances. This adding of depreciation allowances (roughly the cost of buildings and machines either worn out in production or rendered obsolete by time) to profits (what is left over after all costs and taxes are met) may seem a clear case of adding apples and pears and coming up with a mixed fruit compote. But the practice does have its uses—as, for instance, in predicting business outlays on plant and equipment. 
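The two calculations of cash-flow given above differ only in whether dividends are subtracted before depreciation allowances are added back. A worked example with hypothetical round figures makes the difference concrete:

```python
# Two ways of computing "cash-flow" from the editorial above,
# using hypothetical round numbers (millions of dollars).
after_tax_profit = 100
dividends = 40
depreciation = 60

# Businessman's version: retained earnings plus depreciation.
cash_flow_business = (after_tax_profit - dividends) + depreciation  # 120

# AFL-CIO version: total after-tax profit plus depreciation.
cash_flow_aflcio = after_tax_profit + depreciation                  # 160

print(cash_flow_business, cash_flow_aflcio)  # 120 160
```

The gap between the two figures is exactly the dividend payment; neither figure, as the editorial argues, is a measure of profit.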
Since cash-flow represents the total funds corporations generate internally for replacing used-up facilities and acquiring new ones, it is a rough measure of industry’s ability to invest. But organized labor apparently sees cash-flow in an entirely different light, as being virtually the same as corporate earnings. Here, from the June 1962 issue of The American Federationist, the official monthly magazine of the AFL-CIO, is an example: “The cash-flow, which is reported profits plus depreciation allowances, is the accurate measure of a company’s returns since it is the amount of money left over after payment of all costs and taxes.” The key phrases in the above are (1) “profits plus depreciation” and (2) “after payment of all costs,” which are linked in a way that makes the statement an out-and-out denial that depreciation is a cost. (Nor is this a mere slip of the pen. Virtually the same thing is said, in only slightly different words, four times in the same article.) Quite clearly, however, depreciation allowances are designed to cover costs, which now and forevermore are the opposite of profits. There are no real profits or net returns to a business enterprise until all costs are recovered, including the cost of buildings and machines either used up in production or made obsolete by time. To argue otherwise is to strip logic from the language of economics, to quash intelligible conversation on the subject of profits. If business spends its depreciation allowances on higher wages or dividends, it is failing to replace its worn out and antiquated productive facilities. The Measurement Of Profit Aside from the dispute over the meaning of “cash-flow,” “depreciation,” and “profit,” there is also the question of profit measurement. Labor points out, and correctly so, that profits as reported by the U.S. Commerce Department's Office of Business Economics have been distorted over the years by revisions in the federal tax laws. 
Among these are several new ways of calculating depreciation allowances inaugurated in 1954, and the new Internal Revenue Procedure 62-21 introduced in mid-1962. (A recent Department of Commerce study attempts to measure the effect of some of these revisions.) These changes were designed to enable businesses to charge off their depreciation costs at a rate more closely in line with the rate at which their facilities actually wear out and become out of date. But these more realistic techniques of figuring depreciation allowances in no way disturb this basic fact: depreciation is a cost and not a profit. Moreover, tax changes have not permitted firms to charge off more than the original cost of their facilities, but only to speed the timing of the charges. As a result, any profit understatement owing to stepped-up depreciation during the years immediately following tax changes is necessarily followed by a profit overstatement in subsequent years. So it is important to remember that changes in the timing of depreciation allowances work both ways. Some tend to understate current profits relative to those of earlier years. Others do the reverse. **The Correct Measure** As the chart in the box below shows, the corporate cash-flow has not been squeezed during the past several years nearly so much as profits. This alone offers a temptation to suggest that cash-flow — rather than profit — is the best measure of corporate returns or earnings. But the temptation must be sternly resisted, for profit—not profit plus depreciation—is the correct measure of a firm's returns. Those who argue otherwise are treating the language of economics in a cruel and unusual way. They should cease and desist before killing effective conversation altogether. **Eating The Goose** There is not the slightest inclination here to suggest that the profit figures released by the U.S. Office of Business Economics are perfect. Like many statistics, they may not always reveal everything they seem to. 
But we should remember that they are the most comprehensive and useful measure of over-all corporate profitability we have. It is even more important for us to remember that depreciation is a cost and not a profit. The funds attributed to depreciation allowances, like any other funds business has, can be paid out in dividends to stockholders or in higher wages to workers — but only if the economy is liquidating; only if it is failing to replace its antiquated and worn out facilities. The depreciation reform, announced in July by the Treasury, was designed to make depreciation allowances for tax purposes more truly representative of the rate at which machinery actually wears out and becomes obsolete. This, in turn, was intended to speed up machinery and equipment replacement, which will increase productivity, cut costs and give U.S. business a better crack at world markets. It would be ironical, indeed, if this long-needed reform were used to justify wage increases so large that they would actually cut into the funds needed for our program of modernization. This would be a pure and simple case of eating the goose that lays the golden eggs—a point both labor and management should certainly keep firmly in mind. --- **PROFITS vs. CASH-FLOW** *(Percent of Gross Corporate Product)* ![Graph showing Profits vs. Cash Flow] *Source: Dept. of Commerce; McGraw-Hill Dept. of Economics* --- This message was prepared by my staff associates as part of our company-wide effort to report on major new developments in American business and industry. Permission is freely extended to newspapers, groups or individuals to quote or reprint all or part of the text. *Donald C. 
McElwain* **PRESIDENT** **McGRAW-HILL PUBLISHING COMPANY** Generator Provides Tailored Pulses Pulse rise and fall times, width, delay and amplitude are independently variable ANNOUNCED by the Industrial Products Group of Texas Instruments Inc., 3609 Buffalo Speedway, Houston, Texas, the variable rise and fall (VRF) module extends the flexibility of the Series 6500 pulse generators. The module provides, at 20 Mc repetition rates, independent rise and fall time control from 20 ns to 0.5 μsec for coincident positive and negative outputs, coincidentally variable pulse width control from 40 ns to 1 ms, coincident variable pulse delay control from 50 ns to 1 ms and independent variable positive and negative outputs to 10 v into 50 ohms. Short-circuit protection is provided and protection is complete even when reset is attempted with a dead short on the output. Output circuits are designed for 80-percent duty-cycle operation and high duty cycles will not damage the generator. Other devices include an avalanche pulse generator capable of producing up to 1 ampere pulses, 50 v across 50 ohms, with rise and fall times well under a nanosecond at repetition rates from 100 cps to well into the Mc region. Pulse delay will be adjustable by front-panel controls. A dual mixer module whose output is the algebraic sum of two applied inputs will also be available. The sketch shows operation of the variable rise and fall unit and the model 6563 pulse generator using the VRF with a pulse generator operating between 100 cps and 25 Mc. CIRCLE 401, READER SERVICE CARD Preset Counter Performs Many Functions INTRODUCED by Hewlett-Packard Co., 1501 Page Mill Road, Palo Alto, California, the model 5214L preset counter not only measures frequency, period and totalizes but also measures normalized rate, N, 10N or 100N periods, ratio, normalized ratio, time for N events to occur, counts N, 10N or 100N events giving an output pulse at start and end of count and allows N to be remotely preset. 
N may be set to any integer from 1 to 100,000. Separate output signals are available to operate external equipment when gate opens or closes. Self-check provisions are incorporated for rate, time, preset at N and ratio functions. The internal time base aging rate is ±2 parts in 10⁶ per week. Printer output of 4-line BCD (1-2-2-4) at 100,000 ohms per line is also available. Readout is by 5 display tubes with display storage. The sketch shows setup for rate and ratio measurements. In rate measurements, the gate is controlled by... MR. RELAY by Allied Control 1. I'D LIKE YOU TO MEET S, THE ONLY RELAY FOR SANDWICH CIRCUIT BOARDS. HE'S WELDED ALL THE WAY AND CONTAMINATION-FREE. 2. HIS CONTACTS ARE BIFURCATED TO INSURE DRY CIRCUIT RELIABILITY AND LESS BOUNCE TOO. 3. SEE, S IS ONLY HALF THE SIZE OF A CRYSTAL CAN RELAY AND INTERCHANGEABLE ... IN EVERY WAY. 4. AND S IS REALLY A CHAMP WITH HIS PERMANENT MAGNET DESIGN. There's more news worth noting about Allied's new S relay. Flux contamination, for example, is a thing of the past. We use the latest heliarc welding techniques to seal the S relay within an inert atmosphere. Since there's no bobbin (the coil is wound directly on the magnetic core), Allied eliminates possible contamination here, too. And talk about immunity to shock and vibration! S is really rugged with its balanced rotary action armature. All S relays are calibrated for contact over-travel of the energized contacts during production, so they stay and stay on the job. Want complete application data? Write for Catalog Sheet S or call your nearest Allied representative. OPERATING CONDITIONS | Contact Rating: (at nominal coil voltage) | 2 amperes resistive at 29 volts d-c Low level contacts available | |------------------------------------------|------------------------------------------------------------------| | Contact Arrangement: | Two pole double throw | | Shock: | 50g operational | | Vibration: | 5 to 55 cps at 0.125 inch D.A., 55 to 2000 cps at a constant 20g | | Operate & Release Time: (at +25°C) | 4 milliseconds maximum at nominal coil voltage | | Terminals: | Plug-in, printed circuit, hook type solder terminals and 3 inch leads | | Weight: | 0.3 ounce maximum | ©1962 BY ALLIED CONTROL COMPANY, INC. ALLIED CONTROL COMPANY, INC. 2 EAST END AVENUE, NEW YORK 21, N. Y. the time base, preset decades and multiplier. The preset decades may be set to keep the gate open for N cycles of the time base directly or through the multipliers. This enables normalized readings or converting frequency to practical units. For example: if a generator produces 100 pulses per revolution, the gate can be set to 10 ms to measure rps directly or to 600 ms to measure rpm. For ratio, the signal is connected to B and goes through the multiplier and preset decades to control gate time. The signal connected to A goes to the readout decades. Consequently, signal A is counted for (N times multiplier setting) cycles of signal B. CIRCLE 402, READER SERVICE CARD Voltmeter Measures True RMS to 50 Mc MANUFACTURED by Keithley Instruments, Inc., 12415 Euclid Ave., Cleveland 6, Ohio, the model 121 true rms wideband voltmeter has a frequency range from 15 cps to 50 Mc, voltage range between 1 mv and 300 v full scale, and full-scale accuracy ±1 percent from 20 cps to 10 Mc, ±3 percent from 18 cps to 20 Mc and ±5 percent from 15 cps to 50 Mc. The crest factor is 6/1 at full scale and 60/1 at tenth scale. Input noise is 70 μV rms maximum. The device also has a built-in a-c and d-c amplifier for oscilloscopes and recorders. A-c output is 100 mv/rms with 6 ns risetime and d-c output is 100 mv with less than one second response to input signal. The device uses an a-c to d-c thermal converter with d-c feedback amplification techniques to measure true rms. Thermocouple output of converter is directly proportional to effective heating value of applied a-c signal. 
The meter and d-c outputs indicate changes in d-c resulting from the applied a-c signal. A second thermal converter is used to buck-out or null d-c potentials induced by variations in ambient temperature. A quiescent d-c bias is applied to heater of signal thermal converter. When a-c signal is applied, thermocouple output goes up. This increased voltage causes the chopper amplifier to subtract d-c current from the heater. Equilibrium occurs when total a-c and d-c heating equals initial quiescent d-c level. (403) Measuring Resistance to 0.0005 Percent ANNOUNCED by Julie Research Laboratories, Inc., 211 W. 61 St., New York 23, N. Y., the model PRB-205S primary resistance measuring system makes possible resistance measurement to 0.0005-percent accuracy referred to NBS units. Range is 1,000, 10,000 and 100,000 ohms, 1 megohm and 10 megohms, accuracy is 0.0001 percent of full scale plus accuracy of standard and resolution is six digits. The system consists of three independently-usable primary standard instruments, a resistance bridge, a multiple resistance standard and a primary standard voltage divider. The primary standard voltage divider and the multiple resistance standard are individually certifiable by NBS. The device requires only the addition of a sensitive null detector. (404) Semiconductor Chopper Has Two Emitters MANUFACTURED by Sperry Semiconductor Division of Sperry Rand Corp., Norwalk, Connecticut, the 33K3 multi-element assured tracking chopper (MATCH) is a device having two emitters with common base and collector. Mounted in a 4-lead TO-18 package, the device features offset voltage as low as 50 μV maximum from −25 to +100 C for I_B of 0.1 to 3 ma, saturation resistance of 15 ohms maximum for I_B of 2 ma, gain-bandwidth product of 100 Mc minimum and off resistance of 1,000 megohms minimum. Emitter-base recovery time is 5 μsec. Since the base and collector are common for both emitters, only one base current limiting resistor is needed. 
Low transfer resistance allows relaxed drive waveform requirements. A typical shunt chopper is shown in the sketch. Here, R_L is the input resistance of the a-c amplifier in parallel with an external resistor (if used) and R_0 is the total resistance in series with the signal source. The maximum d-c error voltage produced in such a circuit (offset voltage plus saturation drop) using a ±10 mv signal with source resistance R_0 equal to 10,000 ohms is 65 μV. Since base-emitter capacitance is low (9 pF at zero volts), an output spike amplitude of less than 1 mv is obtained with R_L equal to one megohm and C_L equal to 70 pF. (405) Silicon Snap-Off Diodes Turn Off in 0.2 Ns ANNOUNCED by General Electric Co., Semiconductor Products Dept., Electronics Park, Syracuse, New York, the SSA550 series of silicon snap-off diodes have typical turn-off times of 0.3 ns for one type and 0.2 ns for another. Produced in JEDEC standard DO-7 packages, they are rated at 250 mw power dissipation at a case temperature of 25 C and have a 1-μsec peak surge current rating of 2 amperes. For applications requiring high stored charge, one type is rated at 20 picocoulombs per milliampere minimum and 100 pc/ma maximum. Where low stored charge is desired, another type is rated at 1.0 pc/ma minimum to 5.0 pc/ma maximum.
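The 65-μV maximum error quoted above for the MATCH shunt chopper can be checked from the stated figures (50-μV offset, 15-ohm saturation resistance, ±10-mv signal, 10,000-ohm source); treating the closed chopper as a plain series resistance is an assumption of this sketch.

```python
# Back-of-envelope check of the quoted 65-uV maximum d-c error:
# offset voltage plus the saturation drop of signal current flowing
# through the closed chopper (modeled as a simple series resistance,
# an assumption not spelled out in the text).

V_OFFSET = 50e-6     # V, maximum offset voltage
R_SAT = 15.0         # ohms, maximum saturation resistance
V_SIGNAL = 10e-3     # V, peak signal
R_SOURCE = 10_000.0  # ohms, series source resistance R_0

i_signal = V_SIGNAL / (R_SOURCE + R_SAT)  # signal current through chopper
v_error = V_OFFSET + i_signal * R_SAT     # total d-c error voltage

assert round(v_error * 1e6) == 65         # microvolts, matching the text
```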
When connected in shunt with a load (sketch p151), the snap-off diodes produce a fast leading edge for volt… 18 NEW SILICON MODULES FROM EECO 125°C circuits in 1 Mc and 10 Mc versions

| 1 Mc Series | 10 Mc Series | Circuits Available |
|-------------|--------------|--------------------|
| U-501 | U-701 | Triple 4-input NOR circuit |
| U-502 | U-702 | Eight driver circuits |
| U-503 | U-703 | Dual flip-flops |
| U-504 | U-704 | Multivibrator and three drivers |
| U-505 | U-705 | Three one-shots |
| U-506 | U-706 | Two exclusive-OR (NAND) circuits |
| U-507 | U-707 | Two exclusive-OR (NOR) circuits |
| U-508 | U-708 | Full adder |
| U-509 | U-709 | Three 4-input-OR circuits |

The ability of all 18 modules in this new silicon family to exceed the searing demands of MIL-E-5400F, Class II, for temperature typifies their excellent performance in general. Superior materials and special packaging techniques make these circuit cards your logical answer to any problem in high temperatures or reliability. Designs are based on derated specifications for the components used, and the resulting specifications are then further derated to give you reliability in reserve. Should any module ever fail to perform according to specs under the terms of the company's warranty, it will be repaired or replaced free. Standard, conservative loading specifications and the availability of compatible hardware make it easy for you to determine your design requirements. Write, wire or phone today for free technical literature or a call from one of our applications engineers. Power required: +12VDC, -12VDC. Logic levels: 0 and +6VDC, nominal. Card dimensions: 4½ x 5½ x 1/16 in. Contacts: Two sides rhodium-plated with beveled edges for insertion into standard 22-pin etched circuit board connectors. (Special contacts also available.) Construction: Glass-epoxy etched circuit card with funnel eyelets. See the EECO Line in Booth 1518 at the I.E.E.E. Show.
THE NEW CASE FOR RELIABILITY Electronic Components from Westinghouse For your needs . one source See us at the IEEE Show Booths 1402-1408 1601-1607 The industry's standard for silicon power transistors—now in a double ended case! In response to customer demand, Westinghouse now makes available its field-proven silicon power transistor in a new double-ended case. Performance, reliability and construction features are the same as have been successfully used in Westinghouse military type transistors for the last three years. Over 5 megawatts of 30 ampere transistors are now serving in military and industrial applications. The new double-ended transistor, 2N2757 series, comes in voltage ratings to 250 volts, current ratings to 30 amperes, and a variety of gain classes. Rock top transistor for highest power ratings The 250 watt, 300 volt 2N1809-2N2109 series in the rugged "rock top" case features the highest power dissipation ratings available in silicon transistors. Conventional case for convenient mounting The 2N2739-2N2754 series (formerly Type 109) offers the convenience of a low mounting profile. Dissipation ratings to 200 watts, currents to 20 amperes. New procurement specifications Procurement specifications on each of the above units are available in military format for designers and reliability engineers. These specifications outline electrical and environmental capabilities under standard Mil-spec conditions. Write for a free copy today on your company letterhead: Westinghouse Semiconductor Division, Youngwood, Pa. You can be sure...if it's Westinghouse. We never forget how much you rely on Westinghouse CIRCLE 151 ON READER SERVICE CARD Power Supply has Regulated Output 0 to 2,500 V ON THE MARKET from Kepco Inc., 131-38 Sanford Ave., Flushing 52, N. 
Y., the model ABC2500M continuously-adjustable regulated power supply has output from zero to 2,500 v at up to 2 ma, better than 0.05-percent regulation and stability, ripple of less than 0.5 mv rms and less than 50-μsec recovery time from abrupt line and load variations. A ten-position range selector and a ten-turn potentiometer permit resolution of not more than 25 mv. Remote programming by resistance or voltage and constant-current operation are available. The device also has short-circuit protection and no transient overshoot at input power turn-on or turn-off. As shown in the sketch, output current $I_o$ must pass through the plate resistance of $V_p$, which is controlled by grid bias. This bias is controlled by a transistorized amplifier. Input of the amplifier is connected across null points $A$ and $B$ of a bridge consisting of reference voltage source $V_r$, reference resistor $R_r$, and output voltage $E_u$. The system is phased to balance the bridge and force the voltage across $A$ and $B$ to approach zero. With the bridge in balance, a constant current $I_d$, determined by the ratio of $V_r$ to $R_r$, circulates. For example, with $V_r$ at 6 v and $R_r$ at 6,000 ohms, $I_d$ is 6/6,000 amp, or 1 ma. This also determines the system control ratio, in this case 1,000 ohms per volt. (407) 35-MM Camera Requires 2 Minutes Developing Time RELEASED by Analab Instrument Corp., 30 Canfield Rd., Cedar Grove, New Jersey, are the type 3030-C 35-mm electric pulsed film advance camera and the type 100J Rapromatic process developing unit. The new camera features a data chamber that records a 24-hour clock, 4-digit counter and platen data automatically on each frame. Short strips of film or the entire 100-ft roll may be removed from the camera and developed and fixed in 2 minutes. The process works by placing the take-up spool with exposed film in the developing … SAVE SPACE without sacrificing performance and operating capabilities.
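The bridge arithmetic in the Kepco example above works out directly; the relations used here (I_d = V_r/R_r, and a control ratio of 1/I_d ohms per volt) are taken from the text.

```python
# Worked form of the text's example: with the bridge balanced, a constant
# current I_d = V_r / R_r circulates, and the output voltage is programmed
# at 1 / I_d ohms of resistance per volt.

V_REF = 6.0      # volts, reference source V_r
R_REF = 6_000.0  # ohms, reference resistor R_r

i_d = V_REF / R_REF        # circulating current, amperes
control_ratio = 1.0 / i_d  # programming resistance per volt of output

assert abs(i_d - 1e-3) < 1e-12           # 1 ma, as stated
assert abs(control_ratio - 1000) < 1e-9  # 1,000 ohms per volt, as stated
```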
New Dunco Type FC-1 Relays are only ½ the size of conventional crystal cans yet they do every job the standard size units can do. *Example:* they withstand shock at 50 G for 11 milliseconds and vibration at 30 G to 2,000 cycles. Only .400" high, they're ideal as direct replacements for side-terminal crystal can types used in printed circuits. TESTED IN ACCORDANCE WITH MIL-R-5757D, Type FC-1 Relays are specially designed for missile, ground support equipment, computers, communications, and control systems. They're hermetically sealed in controlled atmospheres. All-welded internal construction prevents solder flux contamination. Only non-gassing materials are used. Parts and components are cleaned repeatedly during assembly. All this assures reliable contact performance at loads ranging from dry circuit conditions to 2 amps resistive. Write for Data Bulletin FC-1. Address: Struthers-Dunn, Inc., Pitman, N. J. Variable Relay Can Operate From Light Source RECENTLY ANNOUNCED by STL Products, 139 Illinois St., El Segundo, California, the model 2A trigger delay generator provides accurately controlled delayed pulses for triggering equipment from either optical or electrical inputs. The output pulse can be delayed in four decades from 0 to 99.99 μsec with reference to the zero delay pulse. The zero delay pulse appears 30 ns after the input signal. The five-foot fiber-optic probe requires 250 μw to produce a pulse. Thirty levels of optical attenuation are available up to $10^6$, and superimposed timing marks show both the trigger threshold of the input and the delayed output pulse. Repeatability and calibration are within 0.01 percent. The delay setting is digitally displayed on the front panel. When optical input is used, a monitor output permits oscilloscope observation with super… Now there is a new approach to micro-circuit packaging . . . BIPCO® Diode Matrices and Transistor Strips.
They provide the only approach combining:
- Total function logic
- Connection oriented packaging
- Connection oriented batch manufacturing
See how these unique features will benefit you. Above is the logic diagram for a full adder and its equivalent BIPCO circuit. Note how "total function" logic is performed with matrices of diodes and strips of transistors and resistors. Since the interconnections are always the same, other functions (counting, decoding, accumulating, etc.) can be performed by simply changing the arrangement of the diodes within the matrix. You can specify parameters and logic levels. BIPCO devices containing up to 100 silicon diodes and 10 silicon transistors are available as individual packages or as printed circuit assemblies for counting, decoding and code-converting applications. Because the diodes and transistors are manufactured and connected in batches, the cost of these units is competitive with that of conventional components and less than that of other micro-circuit devices. Write today for our newest brochure . . . "BIPCO Logic: the Total Function Approach". See us at the I.E.E.E. Show Booths #1211-1215 ELECTRONIC CONTRIBUTIONS BY Burroughs Corporation ELECTRONIC COMPONENTS DIVISION PLAINFIELD, NEW JERSEY CIRCLE 153 ON READER SERVICE CARD NEW BETTER-THAN-EVER RELIABILITY for long-distance point-to-point communications NORTHERN RADIO NEW 16-CHANNEL TRANSISTORIZED VOICE FREQUENCY DIVERSITY CARRIER TELEGRAPH TERMINAL TYPE 235 MODEL 3 MIL DESIGNATION AN/FGC-61A ... All units militarized; components and design approved by U.S. Military. ... Converters have equalized gain and adjustable time delay in each channel for better diversity performance and interchangeability. ... Switching Panels provide "local" or "remote" selection of 2-channel or 4-channel diversity modes. ...
Combiners have adjustable gains in each channel, for complete switching flexibility, and the combining follows an ideally modified square law function for both 2-channel space or frequency and 4-channel space plus frequency diversity. ... Keyers have adjustable "threshold" sensitivity control and simplified input circuit selection. ... Detter and Delay Indicator provides a test keying signal source for keyers and delay equalizers in all channels. Write for complete literature. Pace-Setters in Quality Communication Equipment NORTHERN RADIO COMPANY, inc. 147 WEST 22nd ST., NEW YORK 11, NEW YORK In Canada: Northern Radio Mfg. Co., Ltd., 1950 Bank St., Billings Bridge, Ottawa, Ontario. Micropower Transistor Switches in 12 Ns FROM Sylvania Semiconductor Division, 100 Sylvan Road, Woburn, Massachusetts, the 2N2784 is a silicon epitaxial planar switch having a total switching time of 12 nanoseconds in a saturated circuit. The device is designed for optimum efficiency in the microwatt and milliwatt range. Gain-bandwidth product is greater than 1 Gc, typical beta is 70 at 8 ma with gradual falloff at higher I_C, and aluminum-to-aluminum bonding eliminates catastrophic purple-plague junction deterioration. The transistor is available in the TO-18 package and will shortly be available in TO-51 and TO-46 packages. Performance results from a new geometric configuration, a three-stripe design. Two connected base areas, one on each side … New Reeves-Hoffman 2.5 mc Frequency Standard offers stability of $2 \times 10^{-11}$ (2 parts in 100,000,000,000) Reeves-Hoffman's 2.5 mc Frequency Standard, Model S2075, uses an AT-cut 5th overtone crystal of our own manufacture. It provides an ultra-stable, in-house standard that can be compared continually with VLF transmissions.
Other important specifications are: Double proportional control oven construction; phase stability of $7 \times 10^{-3}$ degrees peak-to-peak during a 20 millisecond period; solid state construction; output frequencies of 100 kc, 1 mc and 5 mc simultaneously; setability to within $1 \times 10^{-11}$. Model S2075, which also provides power failure alarm, fits into a 5¼-inch rack panel and will maintain specifications over a temperature range of 0 to 40°C. See it at our Booth 1309 at the I.E.E.E. SHOW ... or write for Bulletin S2075 for complete specifications. PRODUCERS OF PRECISION FREQUENCY CONTROL DEVICES... crystals * crystal-controlled frequency sources, standards, filters * component ovens. REEVES-HOFFMAN CARLISLE, PENNSYLVANIA Data Handling EQUIPMENT ALL SOLID STATE CONSTRUCTION ASSURING SUPERIOR PERFORMANCE and RELIABILITY DIC-1 DATA INSERTION CONVERTER Generates 3 completely independent FM/FM telemetry signals with up to 20 different subcarrier oscillators removable from the front panel. Each subcarrier oscillator module has independent level controls for each of the 3 output mixers. Direct input modules and FSK modules are available. PTS-2 PCM SIMULATOR A laboratory test instrument or field checkout aid in PCM telemetry systems, the PTS-2 will generate a variety of PCM codes and formats. The simulator generates a periodic frame synchronization word followed by a preset number of identical data words in a continuous serial pulse train, with each bit controlled by a front panel switch. RFT-2 REFERENCE OSCILLATOR/MIXER Generates a 50 or 100 kc reference signal which is linearly mixed with up to 4 independent telemetry signals for recording on magnetic tape. Blocking Oscillator Has Variable Width Output NEW from Polyphase Instrument Co., East Fourth St., Bridgeport, Pennsylvania, are the series N-100 pulse transformers whose output pulse widths can be continuously controlled between 0.06 and 5 μsec. 
Pulse width is controlled by a potentiometer controlling bias current. Repetition rates vary from 100 Kc for the N-100 having a nominal pulse width of 0.06 μsec to 4 Kc for the N-150 having a nominal pulse width of 5 μsec. Rise time is less than 40 ns and droop is less than 5 percent for all pulse widths. Conventional blocking oscillators are limited to operation between $B_m$ and $B_r$ on the core BH curve (see sketch), and this allows short pulse widths only. Addition of a bias winding makes it possible to set the core to any desired position between $B_r$ and $-B_m$, resulting in a large $\Delta B$. When bias current is increased so as to reset core to $-B_m$, maximum pulse width is obtained. Reducing bias current brings corresponding reduction in pulse width. Pulse reductions of up to 10 percent for the 0.06 μsec and up to 70 percent for the 5 μsec unit are possible. (411) 10-W Swept-Signal Source From 1 to 18 Gc NEW from Paradynamics, Inc., 10 Stepar Place, Huntington Station, New York, the series 888C swept-signal source delivers 10 w output power between 1 and 18 Gc. The unit can be used as a stable c-w signal source, a swept source with sweep rates between 0.01 and 100 cps (with external capacitor, sweeps can be up to hours), or it can be pulse modulated. The device also can be remotely programmed. Output is leveled to 1 db, sweep output voltage is a 100 v peak linear sawtooth, oscilloscope blanking is a 50-v pulse, residual f-m is 0.0025 percent of maximum frequency, residual a-m is 30 db nominal below c-w level and r-f dynamic range is 20 db minimum. Modulation can be by internal square wave between 800 and 1,200 cps, external a-m from d-c to 500 Kc or external pulse. Both the bwo and twt may be pulse modulated separately or simultaneously. As shown in the sketch, a milliwatt signal is amplified to the 10-w level by a twt. A portion of the output is coupled to a detector where it is rectified, amplified and compared with a stable reference source. 
The difference is applied to the twt a-m input. This system holds output power to 1 db over wide bandwidths. (412) Coaxial Cables Are Solid Jacketed MICRODELAY DIVISION of Uniform Tubes, Inc., Collegeville, Pa., offers MicroCoax cables that feature: low loss, total shielding; easy to strip, to connect; no loose frayed ends; solderable; uniform, close-tolerance construction. Characteristic impedance is 50 ohms; outer jacket, solid copper; dielectric, Teflon (TFE) … Electron Tubes Special quality tubes Frame grid construction with high transconductance to plate current ratio for use in critical industrial and military applications, in which service reliability is of primary importance.

| Type | Description | Plate supply voltage (V) | Plate current (mA) | Transconductance (µmhos) | Max plate voltage (V) | Max plate dissipation (W) | Max cathode current (mA) |
|------|-------------|--------------------------|--------------------|--------------------------|-----------------------|---------------------------|--------------------------|
| E88C | UHF triode | 160 | 12.5 | 13,500 | 200 | 2.4 | 15 |
| E88CC/6922 | Twin triode | 100 | 15 | 12,500 | 220 | 1.5 | 20 |
| E188CC/7308 | Twin triode | 100 | 15 | 12,500 | 250 | 1.65 | 22 |
| E288CC/8223 | Twin triode | 100 | 30 | 18,000 | 250 | 3.0 | 40 |

For further information and application engineering assistance regarding these electron tubes manufactured by Siemens & Halske AG • Germany, please write to their distributor in the U.S.A.: SIEMENS AMERICA INCORPORATED 350 Fifth Avenue, New York 1, N.Y. • Tel: LOngacre 4-7674. Telex 01-2070, Cable: siemens newyork in Canada: SIEMENS HALSKE SIEMENS SCHUCKERT (CANADA) LTD. 407, McGill Street, Montreal 1, P.Q. • Tel: 849-5783. Telex 012800, Cable: siemenscan THE MODEL LM-401, HIGH RESOLUTION, MONITOR OSCILLOSCOPE provides more data on 14" screen than 17" scopes With a resolution of 25 lines per centimeter (65 lines per inch) the new ITT Model LM-401 Monitor Oscilloscope can present more data with greater precision across the full screen than the old style 17" scopes.
This new, low-cost, 14-inch model has a full screen frequency response 5 times greater than previous equipment...to beyond 50 kc. Other important features include: linearity of 1%, stable DC amplifiers, easy conversion to bench or rack mounting, modular design, and high sensitivity in horizontal and vertical axes. For more information, write for Data File E-1914-2. Coaxial Relays Display Very Low VSWR C. P. CLARE & CO., 3101 Pratt Bldg., Chicago 45, Ill. New coaxial relays assure reliable, high quality switching over r-f ranges to 700 Mc. Among applications are: i-f switching in microwave networks, transmit-receive antenna switching in the uhf range of 100-500 Mc, and switching pcm data in telemetering systems without deterioration of square wave form. Type HGS2C (shown at left) displays a crosstalk isolation of 35 db min, 70-700 Mc and the HGS4C displays a crosstalk isolation of 80 db min. (414) Coil Bobbins Are Precision-Molded GRIES REPRODUCER CORP., 151 Beechwood St., New Rochelle, N. Y. Designed for ferrite cup core assemblies for telecommunication filter networks and other applications, a complete line of precision-molded coil bobbins for "International Standards Series" cup cores are announced. The bobbins—seven … Another New High Order of Reliability! El-Menco MYLAR-PAPER DIPPED CAPACITORS ASSURE A LOW FAILURE RATE OF Only 1 Failure in 7,168,000 Unit-Hours for 0.1 MFD Capacitors* Setting A New High Standard Of Performance! SPECIFICATIONS - **TOLERANCES**: 10% and 20%. Closer tolerances available on request. - **INSULATION**: Durez phenolic, epoxy vacuum impregnated. - **LEADS**: No. 20 B & S (.032") annealed copper clad steel wire crimped leads for printed circuit application. - **DIELECTRIC STRENGTH**: 2 or 2½ times rated voltage, depending upon working voltage. - **INSULATION RESISTANCE AT 25°C**: For .05MFD or less, 100,000 megohms minimum. Greater than .05MFD, 5000 megohm-microfarads.
- **INSULATION RESISTANCE AT 160°C**: For .05MFD or less, 1400 megohms minimum. Greater than .05MFD, 70 megohm-microfarads. - **POWER FACTOR AT 25°C**: 1.0% maximum at 1 KC These capacitors will exceed all the electrical requirements of E. I. A. specification RS-164 and Military specifications MIL-C-918 and MIL-C-25C. Write for Technical Brochure CAPACITANCE AND VOLTAGE CHART - Five case sizes in working voltages and ranges:

| Voltage (V) | Capacitance (MFD) |
|-------------|-------------------|
| 200 | .018 to .5 |
| 400 | .0082 to .33 |
| 600 | .0018 to .25 |
| 1000 | .001 to .1 |
| 1600 | .001 to .05 |

[Chart: minimum life expectancy (unit-hours for one failure) of 1.0 MFD Mylar*-paper dipped capacitors as a function of voltage and temperature; the number of unit-hours is inversely proportional to the capacity in MFD. *Registered Trade Mark of DuPont Co.] THE ELECTRO MOTIVE MFG. CO., INC. WILLIMANTIC, CONNECTICUT Dipped Mica • Molded Mica • Silvered Mica Films • Mica Trimmers & Padders Mylar-Paper Dipped • Paper Dipped • Mylar Dipped • Tubular Paper AMCO ELECTRONICS, INC. Centurion Drive, Great Neck, L. I., New York Authorized Distributors: J. L. Johnson & Co., Inc. New York, N. Y. Collins & Hynix Co., 531 Hollywood Blvd., Los Angeles, California 2360 Wilshire Boulevard, Los Angeles, California electronics • March 15, 1963 NOW... high accuracy synchro/resolver testing — GERTSCH STANDARDS AND BRIDGES REPLACE COSTLY ELECTRO-MECHANICAL METHODS There is a Gertsch synchro or resolver instrument to meet virtually all requirements. Whether testing simple components or complete systems, you get accuracies up to 2 seconds-of-arc — accuracies maintained without constant checking and recalibration. These versatile units employ the same time-proven design techniques as Gertsch RatioTrans® assuring high input impedance, low output impedance, and very low phase shift. Minimum operator error. Angles are selected with positive detent knob — requires no critical adjustments.
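The El-Menco failure-rate figures above follow the chart's stated rule that unit-hours per failure scale inversely with capacitance; the 1.0-MFD value in this sketch is derived from that rule, not quoted directly.

```python
# Applying the stated inverse proportionality between unit-hours per
# failure and capacitance, anchored at the quoted 0.1-MFD figure.
# The 1.0-MFD result is derived, not a figure printed in the ad.

UNIT_HOURS_AT_0_1_MFD = 7_168_000  # unit-hours per failure at 0.1 MFD

def unit_hours_per_failure(capacitance_mfd):
    """Unit-hours per failure, scaled inversely with capacitance."""
    return UNIT_HOURS_AT_0_1_MFD / (capacitance_mfd / 0.1)

assert unit_hours_per_failure(0.1) == 7_168_000
assert abs(unit_hours_per_failure(1.0) - 716_800) < 1e-3
```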
Direct-reading digital display reduces readout error. Simplified circuitry — least susceptible to the effects of stray capacitance, pickup, loading. Fewer accessories needed, hence less error from associated equipment. Over 100 synchro/resolver test instruments are available from Gertsch — synchro standards, resolver standards, synchro bridges, resolver bridges. In addition to conventional, manually-operated units, all standards and bridges can be supplied as rotary solenoid, relay (programmable), and decade (.001° resolution) instruments. Gertsch dividing heads (both manual and automatic), in combination with Gertsch phase angle voltmeters, provide complete checkout capabilities for AC rotating components. Complete information on all Gertsch synchro/resolver test instruments in catalog #11 — 40 pages of technical information, specifications, theory, application data and engineering bulletins. A valuable reference source for design and test engineers. Gertsch GERTSCH PRODUCTS, INC. 3211 S. La Cienega Blvd., Los Angeles 16, Calif. • Upton 0-2761 • VErmont 9-2201 standard sizes with choice of 2, 3 or 4 flanges—are molded in Delrin (duPont’s acetal resin) to precise tolerances and exacting specifications in a single automatic operation. Delrin offers high strength and stiffness at elevated temperatures, plus high dielectric strength and low dielectric constant. CIRCLE 415, READER SERVICE CARD Deviation Meter GREIBACH INSTRUMENTS CORP., 315 North Ave., New Rochelle, N. Y., offers a new line of single or multi-range meters capable of measuring current deviation from a pre-established value of as little as 10 parts per million with accuracy as high as 0.0005 percent. (416) Nuvistor Tube Sockets In 5- and 7-Pin Types INDUSTRIAL ELECTRONIC HARDWARE CORP., 109 Prince St., New York 12, N. Y., introduces two new Nuvistor tube sockets in both five and seven pin types. The JETEC base numbers are E5-79 and E5-65 for the five-pin types, and E7-83 for the seven-pin type. 
These sockets are for tube type numbers: 13CW4, 6DS4, 6CW4, 7895, 8058, 8056, 7587, 7586 and 7895. (417) Digital Voltmeter Uses Reed Relays INDUSTRIAL INSTRUMENTS INC., 89 Commerce Road, Cedar Grove, N. J., offers the DVM-2 digital voltmeter. Balancing speed is approximately twice that of conventional stepping-switch digital instruments. Longlife components are used and switching is accomplished by highly reliable sealed reed relays. Plug-in circuits make it a highly flexible instrument. Unit covers d-c voltage from ±0.001 to 999.9 in ranges of 0.000 to 9,999.99/999.9. Accuracy is ±0.01 percent of full scale +1 digit. (418) Rotary Switch with Positive Positioning VEMALINE PRODUCTS CO., Box 1, Franklin Lakes, N. J. Series 700 rotary switch features: fully enclosed 1½ in. diameter; a variable detent action; positive positioning; 12 contacts in either nickel silver or coin silver; 5 amp at 115 v a-c res, 3 amp at 28 v d-c res, 2 amp at 28 v d-c ind; available in shorting or nonshorting models; three solder lugs on common ring wafer instead of the usual one lug construction; meets military specifications. (419) Colored Lamp Filters Are Unbreakable SILKROME DIVISION of APM-Hexseal Corp., 41 Honeck St., Englewood, N. J., introduces a line of elastomeric lamp filters which can be easily slipped over clear incandescent lamps to change their color. Designed for lighting panels, switch now... controlled diffusion with longer flat zones! Insure high yields of top-quality doped silicon or germanium wafers... continuously... with multi-chamber, four-on-one Hayes Model 4-DHS0330 Diffusion Furnace. Thermal flat zones (16" to 18" ± 1°C) plus zirconia outer muffle dampen temperature "ripples" in depositing chambers... assure positive control of temperature and predictable quality of furnace output. Furnace features removable diffusion units, each independent of the other three so others need not be shut off to service one... three-zone movable source chambers... 
and recessed, modular panels which can be removed for servicing outside "clean room". Furnace is available with iron-chrome, molybdenum, or platinum elements. Single and double unit models available. Also many other furnaces for continuous, cycling and/or programmed diffusion processes. Flat zones to 24". Request Data Sheet F-11 for complete details. C.I. Hayes, Inc., 845 Wellington Ave., Cranston 10, R. I. Ph.: 401-461-3400 … indicators, instrument lighting, consoles, etc., the filter caps produce colors which conform to the limits as specified in MIL-C-25050 (yellow, red, blue, green, lunar white), MIL-L-25467 (instrument lighting red), and MIL-L-27160A (instrument lighting white). One of the many applications is in the Polaris missile system. CIRCLE 420, READER SERVICE CARD Voltage Stabilizers Have No Moving Parts STANCOR ELECTRONICS, INC., 3501 Addison St., Chicago 18, Ill., announces the Powerguard group of automatic voltage stabilizers. Initially, 30, 60, 250, 500, 1,000 and 3,000 v-a units are being produced. Units correct line voltage variations of ±15 percent to within ±1 percent. Voltage correction time is almost instantaneous: only 25 milliseconds are required to bring voltage to rated output. Units contain no moving parts, thus providing maintenance-free operation and high resistance to physical and mechanical shock. (421) Circular Plugs for Space Use CANNON ELECTRIC CO., 3208 Humboldt St., Los Angeles 31, Calif. The KV series plugs are designed for space and high performance applications. Shell hardware of the plugs conforms to MIL-C-26500B. Improved design concepts and low cost are added advantages. Insulator of the plug incorporates the … HEADED FOR AUTOMATION ...AND DESIGNED TO MEET ALL MIL SPECS. THAT'S THE STORY ON DAYSTROM TRANSITRIM. The Daystrom Transitrim potentiometer, in a TO-5 configuration, is designed to facilitate the automatic assembly of PC board circuitry.
In addition, it is designed and manufactured to comply with the operational requirements of MIL-R-27208A. The Series 510 wire-wound Transitrim offers 1.25 watts dissipation in still air, resistance ranges from 10 ohms to 30 K, and an operating temperature range from -55°C to +175°C. Some features of the Transitrim potentiometer include: a vacuum-tight glass-to-metal seal header with O-ring under compression on the adjustment screw; an all-metal housing free of plastic parts for greatest strength, durability, and heat dissipation; and 1½ inch rigid bare wire leads for automatic assembly. The Transitrim is impervious to humidity, salt spray, sand and dust, etc. No other line offers so much...send for data! DAYSTROM POTENTIOMETERS ARE ANOTHER PRODUCT OF WESTON Instruments & Electronics Division of Daystrom Incorporated 614 FREILINGHUYSEN AVENUE, NEWARK 14, NEW JERSEY See us at BOOTHs 1702-1710, 1801-1809, IEEE SHOW CIRCLE 163 ON READER SERVICE CARD Little Caesar rear release system which provides simple insertion and extraction from the rear and high contact retention. Plugs can withstand temperatures up to 200 C, and all are intermateable with other connectors designed to MIL-C-26500. CIRCLE 422, READER SERVICE CARD Bus-Line Wiring Speeds Assembly ELCO CORP., Willow Grove, Pa. Buss-line wiring technique offers contacts on strips which act as wires, furnished in endless reels, thereby eliminating high cost and unreliability of soldering contacts to wires individually. Buss-lines also speed mass assembly wiring of complex circuitry. (423) Lightweight Fan for Heavy-Duty Use ROTRON MFG. CO., INC., Woodstock, N.Y., has available the Feather Fan for cooling electronic packages in computer consoles, relay racks, power supplies, instruments and many other applications. It will deliver 270 cu ft of air per minute at free delivery. Weighing only 1.5 lb, its compact design (7 in. in diameter and only 2½ in. thick) permits simple and easy mounting to any equipment panel. 
It can be used continuously at any temperature from -55 C to 65 C; draws only 22 watts; 3,380 rpm; 115 v a-c, 50-60 cps, 1-phase operation. A 2 μf capacitor can be supplied mounted on the fan. (424) **Cooling Blower Uses Transverse Flow** THE TORRINGTON MFG. CO., 100 Franklin Dr., Torrington, Conn. The 4-in. Crossflo transverse-flow blower is one of three sizes of handmade models (other two sizes have 2 in. and 3.15 in. impeller diameters) now available. Transverse-flow units are particularly useful in electronic cooling because of their relatively high pressure coefficients, an inherent ability to produce thin bands of air flow, and unusual flexibility in selecting the orientation of air inlet and discharge. (425) **Indicator Lights** DRAKE MFG. CO., 4626 North Olcott Ave., Chicago 31, Ill. Type MF indicator lights with a rectangular lens for midget flange base lamps are used in missile and electronic equipment, as well as commercial applications. (426) **Quartz Crystals Are Glass Mounted** BLILEY ELECTRIC CO., Union Station Building, Erie, Pa. Miniature glass mounted quartz crystals for use in frequency and reference standards have an aging characteristic of less than 0.01 ppm per day and less than 0.03 ppm per week after 24 hours … --- **Vacuum-melted alloys for glass hermetic seals** **RODAR®** **NIRON® 52** **NIROMET® 46** Specified Industry-wide for **PERMANENTLY-BONDED VACUUM-TIGHT SEALS!** **Thermal Expansion** [Table: average thermal expansion of Rodar, cm/cm/°C x 10^-7, over ranges beginning at 30°C: 43.3 to 53.0 for 30° to 300° C, with individual values of 44.1, 45.4, 50.3 and 57.1 (upper temperature limits not legible)] **NOMINAL ANALYSIS: 29% Nickel, 17% Cobalt, 0.3% Manganese, Balance—Iron** Rodar matches the expansivity of thermal shock resistant glasses, such as Corning 7052 and 7040. Rodar produces a permanent vacuum-tight seal with simple oxidation procedure, and resists attack by mercury.
Available in bar, rod, wire, and strip to customers' specifications. **COEFFICIENT OF LINEAR EXPANSION** *As determined from cooling curves, after annealing in hydrogen for one hour at 300° C. and for 15 minutes at 1100° C.* **NIRON® 52** **NOMINAL ANALYSIS: 51% Nickel, Balance—Iron** For glass-to-metal seals with Corning #0120 glass. **NIROMET® 46** **NOMINAL ANALYSIS: 46% Nickel, Balance—Iron** For vitreous enameled resistor terminal leads. **NIROMET® 42** **NOMINAL ANALYSIS: 42% Nickel, Balance—Iron** For glass-to-metal seals with GE #1075 glass. **CERAMVAR** **NOMINAL ANALYSIS: 27% Nickel, 25% Cobalt, Balance—Iron** For high alumina ceramic-to-metal seals. Call or write for Sealing Alloy Bulletin WILBUR B. DRIVER CO. NEWARK 4, NEW JERSEY, U.S.A. IN CANADA: Canadian Wilbur B. Driver Company, Ltd. 50 Ronson Drive, Rexdale (Toronto) Precision Electrical, Electronic, Mechanical and Chemical Alloys for All Requirements CIRCLE 165 ON READER SERVICE CARD NEW DESIGN OSCILLOSCOPE 247A GENERAL PURPOSE PORTABLE 1) Bandwidth: DC - 1 MHz, Sensitivity: 50 mV/cm 2) Bandwidth: 10 Hz - 200 KHz, Sensitivity: 5 mV/cm • Direct reading calibration on both axes. • Free running, triggered and single sweep operation. • Sweep range: 0.5 µs/cm to 1 s/cm in 20 steps. • X5 magnifier • Selection of triggering level • Horizontal amplifier: DC - 500 KHz • CRT diameter: 5" • Direct access to CRT plates through commutation system • Dimensions: 15" long, 8" wide, 12" high Photoconductive Cells In Compact Case CLAIREX CORP., 8 W. 30th St., New York 1, N.Y. The 900 series offers 10 distinct types of cells in a compact (0.21 in. diameter by 0.15 in. high), rugged, TO-18 metal case. Types have either a 75 v or a 250 v rating, and a power dissipation rating of 50 mw. (428) Spectrum Analyzer Features Compactness PENTRONIX ASSOCIATES, INC., 2037 61st St., Brooklyn 4, N.Y.
Model 100 wide band, 100 Mc dispersion, spectrum analyzer has less than 1 Kc of incidental frequency modulation in S band. Extreme compactness (5.25 in. rack panel) is achieved through the use of solid state components combined with high performance circuits. (429) Test Chamber Has Range of −100 to +350 F TENNEY ENGINEERING, INC., 1090 Springfield Road, Union, N.J. The Tenney-Jr. is a compact, mechanically refrigerated bench model high-low temperature precision test chamber. It has a temperature range of −100 to +350 F in a work space 14 in. wide, 10 in. high and 10 in. deep. Operating cost on a continuous basis is approximately two cents an hour. The chamber was designed for small batch, small unit testing of semiconductors and other components to military specifications, as well as for testing medical and consumer products. Price is $990. CIRCLE 430, READER SERVICE CARD CONTROLLED AVALANCHE RECTIFIERS 1000 times more immune to destructive voltage transients than conventional rectifiers Because carefully controlled non-destructive internal avalanche breakdown occurs across the entire junction area . . . the new G-E developed Controlled Avalanche Rectifiers protect themselves and the rest of the circuit against high levels of peak power in the reverse direction. Derating headaches are a thing of the past; your transient voltage problems are solved more efficiently, more economically than ever before. To merit the designation "Controlled Avalanche," G-E silicon rectifier diodes must satisfy these three important requirements: - Have rigidly specified maximum and minimum avalanche voltage characteristics - Be able to operate steady-state in their avalanche region without damage - Be able to dissipate momentary power surges in the avalanche region without damage, and have ratings defining this capability The new 0.5 amp A7 is a subminiature type in 150, 200, 300, 400 and 500 volt working PRV ratings, and can dissipate up to 310 watts peak power in the reverse direction. The 12 amp A27 is available in 600, 800, 1000 and 1200 working PRV types, with built-in "zener" diode protection even well beyond 1200 volts, and can dissipate up to 3900 watts peak power in the reverse direction. And the A92 is a 250 amp unit in 600, 700, 800, 900, 1000, 1100 and 1200 volt working PRV types, and can dissipate up to 80,000 watts peak power in the reverse direction. For complete details, see your G-E Semiconductor District Sales Manager and ask for bulletin 200-27. Or write Section 16C101, Rectifier Components Department, General Electric Company, Auburn, N.Y. In Canada: Canadian General Electric, 189 Dufferin St., Toronto, Ont. Export: International General Electric, 159 Madison Ave., New York 16, N.Y. Rugged Meter Relay Features Taut Band WESTON INSTRUMENTS and Electronics Division, Daystrom, Inc., 614 Frelinghuysen Ave., Newark, N. J. The MagTrak meter relay, which combines magnetic and electromagnetic latching, now incorporates a taut-band suspension mechanism. The double-action principle assures repeated reliability of contact closure. Replacement of the pivot-and-jewel mechanism with the taut-band suspension allows full scale deflection at 5 μA as compared with the previous 10 μA. (431) Microminiature Relay Has High Sensitivity TELEX/AEMCO, 10 State St., Mankato, Minn., announces a 50 mw sensitivity dpdt microminiature relay. Unit will switch 1.0 amp at 30 v d-c resistive. Life of the relay will exceed 100,000 operations at the rated contact load. Meets vibration requirements of MIL-R-5757D. Pull-in is 7.1 ma d-c. Also available with 40 mw sensitivity with two form A contacts. (433) P-C Connectors Have Polarizing Slots CONTINENTAL CONNECTOR CORP., 34-63 56th St., Woodside 77, N. Y. Series 600-123 features polarizing slots integral with the molding and accommodates a polarizing key in any desired location. This eliminates the need to sacrifice any contact position for polarization.
Eighteen beryllium copper contacts with gold plate over silver plate have solder lug terminations, and accept a ¼ in. p-c board. Molding is glass filled diallyl phthalate per MIL-M-19833, type GDI-30. (432) Coaxial Connectors Are 50-Ohm Devices MICON ELECTRONICS, INC., Roosevelt Field, Garden City, N. Y. Subminiature matched impedance coaxial connectors are available in both straight-through and right angle designs. The connectors, whose mating characteristics conform to MIL-C-22557, are crimp types designed for rapid assembly on cables such as RG174, RG188, and RG196. NEMA voltage rating is 500 v. Flashover at sea level is rated at 2,000 v minimum, and at 70,000 ft, 1,000 v minimum. They are corona- SEE ALL THE NEW PERFORMERS IN THE BROADEST BECKMAN LINE EVER! NEW YORK CITY...IEEE SHOW...MARCH 25, 26, 27, 28 INTERNATIONAL SUBSIDIARIES: GENEVA, SWITZERLAND; MUNICH, GERMANY; GLENROTHES, SCOTLAND Beckman INSTRUMENTS, INC. BERKELEY DIVISION Richmond, California CIRCLE 169 ON READER SERVICE CARD SERIES 1025 This series is the industry's smallest molded coil. Stock values are available from .15 uh through 100 uh (30 pieces). Available on a custom basis in values up through 1000 uh. Physical Size: .100" dia. x .250" length. Environmental Conformance: MIL-C-15305 Grade 1 Class B. SERIES 1537-700 Features complete electromagnetic shielding — shielding along body and at ends. Completely molded. Inductance Range: 1 uh through 700,000 uh. Physical Size: .166" dia. x .375" length. Environmental Conformance: MIL-C-15305 Grade 1 Class B. SERIES 1890 Similar to Series 1537-700 in being a completely shielded coil. It is not molded, but has epoxy end seals. Elimination of the molding technique allows further miniaturization at the expense of environmental protection. Inductance Range: .1 uh through 200,000 uh. Physical Size: .128" dia. x .330" length. Environmental Conformance: MIL-C-15305 Grade 2 Class B.
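For a coil line like this, the reactance a given stock value presents follows the standard relation X_L = 2πfL. A minimal sketch of that arithmetic, applied to the endpoints of the Series 1025 stock range — the 10 Mc test frequency is an illustrative assumption, not a figure from the listing:

```python
import math

def inductive_reactance(freq_hz: float, inductance_h: float) -> float:
    """Reactance in ohms of an ideal inductor: X_L = 2 * pi * f * L."""
    return 2 * math.pi * freq_hz * inductance_h

# Series 1025 stock range: 0.15 uh through 100 uh (per the listing).
f = 10e6  # assumed 10 Mc operating frequency for illustration
for l_uh in (0.15, 1.0, 10.0, 100.0):
    x = inductive_reactance(f, l_uh * 1e-6)
    print(f"{l_uh:7.2f} uh -> {x:10.1f} ohms")
```

The spread is nearly three decades of reactance across the stock range at a fixed frequency, which is why such lines are stocked in fine value steps.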
SERIES 2501 A shielded, fully molded coil series featuring radial leads (at .200" spacing) with inductance values up to 220,000 uh. Physical Size: .250" dia. x .310" length. Environmental Conformance: MIL-C-15305 Grade 1 Class B. Reliability Built into Every Design MICRO-MINIATURE COILS Through the use of new materials, new designs and new manufacturing methods, Delevan leads the industry in micro-miniaturization, offering a line of coils which are superior in every respect. Delevan's precision engineered coil products assure unmatched reliability. Environmental testing of all coil products in Delevan's modern laboratory is continuous to assure conformance to MIL-C-15305C. Design engineers requiring built-in reliability specify Delevan with confidence. Write for descriptive catalog today. TRAK MICROWAVE CORP., Tampa, Fla., announces an S-band oscillator with a power output of 100 w peak minimum and 150 w typical. Weighing only 3 oz, the type 9186S has a diameter of ½ in. and is 4 in. long excluding projections. Manual tuning range is 2.7 to 3.0 Gc; power input requirements, 1,000 v pulse at 0.7 amp peak Ip, 6.3 v at 280 ma. Frequency stability is ±2 Mc, −20 C to +70 C; shock, 100 g, 7 ms, less than 1.0 Mc f-m. Vibration, 15 g, 15-2,000 cps, 3 axes, f-m less than ±1.0 Mc. (435) AUGAT, INC., Attleboro, Mass., announces a line of insulated aluminum isolators for use with TO-3 and TO-36 power transistors. Average thermal resistance values of 0.2 C/watt have been attained on an aluminum chassis. Prices run approximately 15 cents each in 1,000 lots. (436) AUTOMATIC ELECTRIC CO., 400 N. Wolf Road, Northlake, Ill.
Series E1N will resist normal shock and vibration, yet permit instant re- Predetection Recording by DCS gives you these 7 features: - Best s/n performance - Best transient characteristics - Up to 800,000 bit/second response - Tape speed compensation - Off-the-shelf modular flexibility - 100% solid state - Usable with most receivers and recorders Considering predetection recording? Only DCS can give you all these advantages: First, the phase lock loop design of the GFD-4 Discriminator permits playback at the recorded frequency without incurring the noise and transient degradation typical of up-conversion systems. And in addition, response from DC to beyond that required for 800 kilobit NRZ PCM is provided, for full IRIG requirements. What's more, DCS has the only system providing tape speed compensation of reproduced data. Components are all solid state...modular (just plug 'em in!)...and available off the shelf. Whether you need a complete predetection recording system, or want to build one using your present receiver and recorder (DCS components are compatible with most), DCS can help you. Write us for complete information. Address: Dept. E-7-2. DATA-CONTROL SYSTEMS, INC. Instrumentation for Research Los Angeles • Santa Clara • Wash., D.C. • Cape Canaveral Home Office: E. Liberty St., Danbury, Conn. • Pioneer 3-9241 electronics • March 15, 1963 new low-cost precision THERMISTORS match standard curves, −40° to 150° C (resistance-tolerance curve shown for the 30K thermistor, No. 44008) - YSI precision thermistors can now be stocked by the thousands, used interchangeably. The high cost problems of matching, padding, auxiliary resistances or individual calibration have been eliminated. - Stock base resistances at 25° C of: 100 Ω, 300 Ω, 1K, 3K, 10K, 30K, 100K - For 5 years YSI has manufactured precise, interchangeable thermistors for laboratory instrumentation.
- Now we offer as components a family of precision thermistors which match the same resistance-temperature curves to within ±1% over a wide temperature range. - $4.90 each, with substantial discounts on quantity orders. - Quantities under 100 available from stock at Newark Electronics Corporation and its branches. For complete specifications and details write: YSI—COMPONENTS DIVISION Yellow Springs, Ohio Flexible Coupling Is Subminiaturized RENBRANDT, INC., 6 Parmelee St., Boston 18, Mass., introduces a subminiaturized version of the Tinymite flexible coupling with the insulating nylon insert scaled down to ½ in. o-d by ⅛ in. long. Standard bore sizes: ½ in. and ¾ in., zero backlash. (439) Fixed Delay Lines Have Small Diameter HELIPOT DIVISION of Beckman Instruments, Inc., 2500 Harbor Blvd., Fullerton, Calif. The Spiradel distributed constant fixed delay lines offer delays from 20 nsec to 300 nsec in case diameters from 0.6 in. to 1.50 in. by 0.375 in. height. (440) Magic Tee Mixer for 2 to 4 Gc Range SAGE LABORATORIES, INC., 3 Huron Drive, East Natick Industrial Park, Natick, Mass., offers a coaxial magic tee balanced mixer which operates from 2 to 4 Gc. It features high LO-to-signal isolation, low noise figure, compactness and light weight. Its high isolation, low cost in large quantities, and closely reproducible phase characteristics are of particular advantage in multi-mixer applications such as phased-array radar. The unit significantly reduces filter requirements in any mixer application. (438) Parallel Printer Requires Low Power VICTOR COMPTOMETER CORP., Business Machines Div., 3900 N. Rockwell St., Chicago 18, Ill. Lowest power requirement of any parallel entry printer with decimal input is claimed by the new Victor Digit-Matic, for greater compatibility with solid state systems. Solenoids need only 5 w to index, 10 w for print command at 24 v d-c applied potential. Lower current eliminates need for signal amplification. Available after July 1.
(441) Tape Systems Have 7 and 14 Tracks SANBORN CO., 175 Wyman St., Waltham, Mass., announces 7-speed, 7- and 14-track magnetic data recording systems with compact solid-state electronics. Models 2107 and 2114 meet accepted IRIG instrumentation standards, have record/reproduce amplifiers on the same card, use interchangeable p-c plug-ins for direct and f-m recording, and for seven channels occupy only 31 in. of panel space. System specs include speeds of 15/16, 1⅞, 3¾, 7½, 15, 30 and 60 ips; nonlinearity less than ±0.5 percent on d-c, ±1 percent on a-c; 1 percent system accuracy; 100-100,000 cps direct record bandwidth, d-c to 10,000 cps f-m bandwidth. (442) **Delay Network Housed in Small Case** ESC ELECTRONICS CORP., 534 Bergen Blvd., Palisades Park, N. J. Model 52-77 provides a total delay time of 24.65 μsec with taps every 1.45 μsec. The unit, housed in a case only 4.25 in. by 2 in. by 1 in., has a tolerance of ±0.05 μsec. Characteristic impedance is 470 ohms. Attenuation is 7.5 db max. Terminating resistance is 470 ohms. (443) **Sweep Generator for F-M System Checkout** TELONIC INDUSTRIES, INC., 60 N. First Ave., Beech Grove, Ind., offers a sweep generator capable of providing complete checkout of signal response of f-m receivers in production and inspection. A switch on the front panel selects oscillators for either the i-f or r-f bands, each having separate attenuator systems and output connectors. This allows r-f, local oscillator, and i-f adjustments to be made in a suitably equipped test fixture without reconnecting cables or changing input signal levels. Crystal-controlled pulse markers, at customer-specified frequencies, are included as standard equipment. Five markers are provided on the 98 Mc band and three on the 10.7 Mc band. Accuracy is typically 0.01 percent. CIRCLE 444, READER SERVICE CARD **Introducing MICROBOND THIN FILM WELDER AND MICROPOSITIONER** **SPECIFICATION HIGHLIGHTS** **Welding Capability** Wire size: .0005 to .005 in. dia. Ribbon size: .0005 x .0025 to .004 x .020 in. Film thickness: 500 to 5000 Å **Electrical** Weld power: 3 sequential cycles, each independently controllable for weld pulse duration and amplitude. Cycle durations (each): 5-100 milliseconds. Cycle amplitudes (each): 1.35 to 240 watts. Total power input to weld: .02 to 72 watt-sec. For detailed specifications including optional features, price and delivery, write Weldmatic, 950 Royal Oaks Drive, Monrovia, California. WELDMATIC DIVISION / UNITEK CIRCLE 173 ON READER SERVICE CARD **Amplifiers** INSTRUMENTS FOR INDUSTRY, INC., Hicksville, L. I., N. Y., offers a new 45 Mc phase-matched transistorized i-f amplifier, a commercial super-video amplifier, a 30 Mc log amplifier, and a portable communications, navigation and interrogation unit which is also suitable for general laboratory and production line testing. (445) **UHF Octave Amplifier Comes in Two Models** COMMUNITY ENGINEERING CORP., 234 E. College Ave., State College, Pa. Models 1033 and 1035 together cover frequencies from 250-1,000 Mc. Model 1033, 250-500 Mc, has a noise figure of 7 db max; Model 1035, 490-1,000 Mc, a noise figure of 10 db max. Each is made up of two identical amplifiers, each supplied with its own solid state power supply. Each amplifier module has a gain of 18 db nominal. Band flatness is ±0.5 db for the 1033 and ±1 db for the 1035. Impedance in and out is 50 ohms with a vswr of 1.75:1 max. (446) **Fasteners Feature Concealed Heads** PENN ENGINEERING & MFG. CORP., Box 311, Doylestown, Pa., offers a new concept in fastener mounting—concealed-head studs and stand-offs which make it possible for the designer to achieve undistorted exterior panel surfaces. Concealed-head stud is available in seven thread-diameter choices ranging from 4-40 to ⅜-18. Six lengths of stud shanks run from ⅜ in. to 1½ in. Concealed-head standoff is available with a range of thread sizes from No. 4-40 to ¼-20, and in eight lengths from ⅜ in. to 1 in. CIRCLE 447, READER SERVICE CARD Solder Pot Heats Rapidly ORYX CO., 13804 Ventura Blvd., Sherman Oaks, Calif., introduces a miniature quick-heating solder pot. Designed for a variety of production line and laboratory applications, the pot has a capacity of 2½ cc. In addition to obvious use in tinning wires and leads of miniature electronic components, the pot may be used to heat waxes, shellacs, and potting compounds. Pot operates directly from 115 v a-c or d-c. Power consumption is approximately 15 w. Operating temperature is 550-600 F. Heating time is 4 minutes. (448) Damping Compounds Are Visco-Elastic LORD MFG. CO., Erie, Pa. DC-322 is a controlled visco-elastic material that may be applied to virtually any structural configuration — horizontal, vertical or overhead; flat, curved or irregular — to provide additive damping. Available in cured sheets, bonded structural components or in uncured two-part kits, it makes possible predictable structural response in components and systems. The material affords good damping over a wide frequency range from 50 to 11,000 cps. CIRCLE 449, READER SERVICE CARD UNIQUE The Genalex Miniature High-Speed Stepping Switch FOR: automatic switching, circuit selection and timing control FEATURING: 80 steps per second on impulse drive 30 contacts per bank 12 banks maximum 17 oz lightweight 7 levels sequence switching. Over 5,000,000 Steps Without Replacements Write today for complete data — Also, data available on Genalex one-way and two-way stepping switches. IMTRA CORPORATION 11 UNIVERSITY ROAD, CAMBRIDGE 38, MASS. U. S. AGENTS FOR THE GENERAL ELECTRIC COMPANY, LTD. OF ENGLAND CIRCLE 307 ON READER SERVICE CARD NEW FROM NORTHEASTERN! Compact 25 MC Solid State Counter Features Time Interval Measurement Northeastern's Model 40-81 meets the demand for a low 5⅜" panel height, 8-digit in-line presentation, fully solid state 25 Mc counter which features time interval measurement in the basic unit as well as frequency, period and ratio. Remote operation and programmability are included features. Specifications: Frequency Measurement Range: 0 to 25 Mc; Standard Gate Times: 1 μsec to 10 sec in decade steps; Period Measurement Range: (single) 0 to 1 Mc, (multiple) 0 to 300 Kc; Time Interval Range: 1 μsec to 10⁹ sec (digit capacity); Stability: ±7 parts in 10⁹/day (averaged over 7 days); Temperature: −20°C to +65°C; Power: 115 v a-c ±10%, 50-60 cps; Dimensions: basic unit 12" W x 15½" D x 5¼" H, w/rack mount 14" W x 15½" D x 5¼" H, w/plug-in 17" W x 15½" D x 5¼" H, w/plug-in & rack mount 19" W x 15½" D x 5¼" H; Weight: 28 pounds, w/plug-in hardware 33 pounds. BOOTH 3226 IEEE SHOW NORTHEASTERN ENGINEERING INCORPORATED A SUBSIDIARY OF ATLANTIC RESEARCH CORPORATION DEPARTMENT 4-A, MANCHESTER, NEW HAMPSHIRE Employment Opportunities Open At All Levels WHAT do you want lacing cords and tapes to do for you? • Tie faster, easier, tighter! • Knots that don't slip! • Greater stability under high heat! NYLON and DACRON CORDS and FLAT BRAIDED TAPES give you all these advantages In addition — they meet Govt. Spec. MIL-T-713B. Available in wax-coated, wax-free or "G. E." finish. Write for free samples THE HEMINWAY & BARTLETT MFG. CO. Electronics Division: 500 Fifth Avenue, New York 36, N.Y. CIRCLE 308 ON READER SERVICE CARD IEEE BOOTH 3701-3-5 new UNIVERSAL BRIDGE TRANSISTORIZED PORTABLE $375 Model 2700 RANGES: C: 0.5 pF to 1100 μF L: 0.3 μH to 110 H R: .01 Ω to 11 MΩ ACCURACY: ±1% FREQUENCY: Internal 1 Kc, External 20 cps to 20 Kc ALSO MEASURES: Incremental 'L', Incremental 'R', 'C' with bias Write for detailed catalog sheet. MARCONI INSTRUMENTS DIVISION OF ENGLISH ELECTRIC CORPORATION 111 CEDAR LANE • ENGLEWOOD, NEW JERSEY Main Plant: St. Albans, England CIRCLE 309 ON READER SERVICE CARD A significant experiment is rapidly drawing to a climax. Soon a weather satellite will go into polar orbit carrying an Automatic Picture Transmission System (APT). As the satellite orbits the Earth it will continually photograph the cloud cover below and transmit pictures to a number of new low-cost ground stations scattered around the world. Through these stations, for the first time, local weathermen will see millions of square miles of the Earth's weather at a glance. Remote ocean, desert and mountain areas, oftentimes the breeding ground for the most devastating storms, will be subjected to regular surveillance. This new approach to weather analysis will probably undergo preliminary tests using the Tiros satellite in the middle of the year. Toward the end of the year, the Nimbus satellite, for which the system was designed, will be launched. Fairchild Stratos-Electronic Systems Division has developed and is producing APT ground stations under the technical direction of NASA's Goddard Space Flight Center. For more information on this system, contact our Director of Customer Relations. When there's a need to know: Fairchild Stratos-Electronic Systems Division capabilities are best reflected in an integrated approach to data requirements. Extensive experience in acquisition, processing, transmission and display has given FS-ESD engineers a particularly sensitive awareness of both final information needs and the many subsystems required to answer them. For knowledgeable engineers interested in career opportunities in advanced data techniques, may we suggest a note to our Director of Industrial Relations for the brochure "Grow Your Own Future". FS-ESD, an equal opportunity employer. FAIRCHILD STRATOS ELECTRONIC SYSTEMS DIVISION WYANDANCH, LONG ISLAND, NEW YORK Now you can build a fine Schober Organ for only $550. You can assemble this new Schober Spinet Organ for $550 — or half the cost of comparable instruments you have seen in stores. The job is simplicity itself because clear, detailed step-by-step instructions tell you exactly what to do. And you can assemble it in as little as 50 hours. You will experience the thrill and satisfaction of watching a beautiful musical instrument take shape under your hands. The new Schober Electronic Spinet sounds just like a big concert-size organ — with two keyboards, thirteen pedals and magnificent pipe organ tone. Yet it's small enough (only 38 inches wide) to fit into the most limited living space. You can learn to play your spinet with astounding ease. From the very first day you will transform simple tunes into deeply satisfying musical experiences. Then, for the rest of your life, you will realize one of life's rarest pleasures — the joy of creating your own music. For free details on all Schober Organs, mail the coupon now. No salesman will call. The Schober Organ Corporation, Dept. EL-2, 43 West 61st Street, New York 23, N. Y. Also available in Canada and Australia. ☐ Please send me FREE booklet and other information about Schober Organs. ☐ Please send me the Hi-Fi demonstration record. I enclose $2 which is refundable when I order my first kit. Name / Address / City, Zone, State Metal Foils for R-F Shielding EMERSON & CUMING, INC., Canton, Mass. Low-cost r-f shielding method provides −100 db enclosures. System centers around the application of specially developed metal foils, Eccoshield WP, in new construction, or for making existing structures into high-performance r-f shielded areas.
Conductive adhesive and caulking compounds are used in applying the shielding panels and in rendering all seams and joints r-f tight. Eccoshield WP is installed by stapling and/or bonding to walls, ceiling and floor. Price of the various types of foil ranges from $1 to $3 per sq ft in quantities over 100 sq ft. (450) Four-Terminal Bridge Has Extended Range ANGSTROHM PRECISION INC., 7341 Greenbush Ave., W. Hollywood, Calif. A direct reading percent deviation principle four terminal extended range Wheatstone-Kelvin bridge covers the ranges from 0.0001 ohm to 10' ohms with self-contained accuracies of 0.01 per- TOYO ELECTRONICS INDUSTRY CORPORATION P. O. BOX 103 CENTRAL KYOTO JAPAN Circle 310 on Reader Service Card circuit designers...is your appointment in space with Hughes? Today, Hughes is one of the nation's most active space/electronics firms. Projects include: MMRBM (Mobile Mid-Range Ballistic Missile—Integration, Assembly & Checkout), TFX(N) Electronics, SURVEYOR, SYNCOM, VATE, BAMBI, POLARIS guidance and others. This vigor promises the qualified engineer or scientist more and bigger opportunities for both professional and personal growth. Many immediate openings exist. The engineers selected for these positions will be assigned to the following design tasks: the development of high power airborne radar transmitters, the design of which involves use of the most advanced components; the design of low noise radar receivers using parametric amplifiers; solid state masers and other advanced microwave components; radar data processing circuit design, including range and speed trackers, crystal filter circuitry and a variety of display circuits; high efficiency power supplies for airborne and space electronic systems; telemetering and command circuits for space vehicles, timing, control and display circuits for the Hughes COLIDAR* (Coherent Light Detection and Ranging).
If you are interested and believe that you can contribute, make your appointment today. Please airmail your resume to: Mr. Robert A. Martin, Head of Employment, Hughes Aerospace Divisions, 11940 W. Jefferson Blvd., Culver City 11, California. We promise you a reply within one week. Creating a new world with electronics HUGHES HUGHES AIRCRAFT COMPANY AEROSPACE DIVISIONS An equal opportunity employer. Don't argue with him, Freddy, he may be right. That's one measurement we've never checked! But that's about the only one we haven't used in assuring the quality of REEVES-HOFFMAN CRYSTALS for standard and precision applications for commercial and military requirements. See for yourself. We've printed specifications concerning both the "milk" and "cream" of our crystal production in bulletin QCI. Write for your copy today. PRODUCERS OF PRECISION FREQUENCY CONTROL DEVICES . . . crystals • crystal-controlled frequency sources, standards, filters • component ovens. Pulse Transformer Enclosed in Epoxy PCA ELECTRONICS, INC., 16799 Schoenborn St., Sepulveda, Calif., offers a subminiature RX molded pulse transformer designed in a cube-type configuration. Length of each side: only 0.300 in. Designed for installation in a wide variety of transistorized circuits where space is critical, these compression molded pulse transformers, enclosed in flame-proof epoxy, are all the same size. Price is $3.20 each in lots of 1,000. (452) Patchboard Offered in 1200 Contact Size VECTOR ELECTRONIC CO. INC., 1100 Flower St., Glendale 1, Calif. A 1200 contact size pre-programming patchboard is offered for computer, systems, and test equipment manufacturers. It features a rear contact design that allows solderless slip-on wiring connections. The slip-on contact slides onto the contact pins at the rear of the patchboard, making an extremely low resistance connection which can be readily changed if required.
The slip-on contacts can be readily crimped to leads with hand pliers or automatic crimping equipment. The same rear contact can also be soldered if desired. (453) Mechanical Filter for Upper Sideband COLLINS RADIO CO., 19700 San Joaquin Rd., Newport Beach, Calif., offers an upper sideband mechanical filter built to meet rigid missile telemetry specifications. Composition of the ferrite transducer has been modified, further increasing the mechanical strength of the filter and reducing insertion loss. Another benefit of the new ferrite transducer is a reduction in pass-band ripple. New metallurgical treatment of the nickel-alloy disks which serve as the filter's resonant elements reduces drift to less than 1 part per million per deg C over a temperature range of −25 C to +85 C. (454) Power Supplies Are Modular Type HARRISON LABORATORIES, 45 Industrial Road, Berkeley Heights, N. J. The 6340 series of modular power supplies is designed to meet both the need for a well-regulated, inexpensive chassis-mounting supply and the need for a line of supplies of low power rating capable of being efficiently grouped on rack panels. Both load and line regulation are less than 3 mv or 0.02 percent, DIGITAL VOLTMETER AT LOWEST COST portable style — series "200" base price $287.50 36 standard models FEATURING: • Choice of 0.1% or 0.2% full scale accuracy. • .025% resolution and readability. • Readings from .0001 to 1,000 V-DC. • Reliable transistorized circuit. • Bi-directional tracking—without flicker. • Floating or grounded input. • 1-year guarantee. • Individually calibrated and certified. • Specific variations to your OEM requirements. Stocking distributors throughout United States & Canada. Write or Wire for Demonstration. UNITED SYSTEMS CORPORATION 918 Woodley Road, Dayton 3, Ohio CIRCLE 181 ON READER SERVICE CARD Your electronics BUYERS' GUIDE should be kept in your office at all times—as accessible as your telephone book.
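A temperature-drift figure like the Collins filter's "less than 1 part per million per deg C over −25 C to +85 C" translates directly into a worst-case frequency shift once a center frequency is fixed. A minimal sketch of that conversion — the 455 Kc center frequency and the 25 C reference point are illustrative assumptions, not figures from the item:

```python
def worst_case_drift_cps(center_cps: float, ppm_per_degc: float,
                         t_min: float, t_max: float, t_ref: float = 25.0) -> float:
    """Worst-case frequency shift for a linear drift coefficient
    quoted in parts per million per deg C, taken at the temperature
    extreme farthest from the reference point."""
    max_delta_t = max(abs(t_min - t_ref), abs(t_max - t_ref))
    return center_cps * ppm_per_degc * 1e-6 * max_delta_t

# Spec as listed: < 1 ppm/deg C, -25 C to +85 C.
# 455 Kc is an assumed example center frequency.
drift = worst_case_drift_cps(455e3, 1.0, -25.0, 85.0)
print(f"worst-case drift: {drift:.1f} cps")
```

The hot extreme governs here (60 deg above the assumed 25 C reference versus 50 below), so the bound is 60 ppm of the center frequency.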
Wire miniaturized components with Wire-Wrap® tools Now you can wire miniaturized components with Gardner-Denver "Wire-Wrap" tools. Use wire as fine as 30 or 32 gauge. Connections with 32-gauge wire are possible on 1/10-in. modular spacings—permitting at least 100 terminals per square inch. All you need is a newly designed bit and nosepiece which fit on present battery-powered or other "Wire-Wrap" tools. All Gardner-Denver Wire-Wrap tools are simple and easy to use. Permanent connections are made fast—in only 3 seconds, to be exact. They end failure headaches. These tools are rapidly—and understandably—replacing less reliable methods. Proof? Fifteen billion solderless wrapped connections; not one reported failure. Get further proof. Write for Bulletin 14-1 today. NEW DIMENSIONS IN RELIABILITY whichever is greater, and ripple and noise are less than 1 mv rms for any combination of line voltage, output voltage and load current. Operating temperature range is 0 to 50 C and temperature coefficient is less than 0.033 percent plus 2 mv per deg C. Prices range from $120 to $225. CIRCLE 455, READER SERVICE CARD Broadband Oscillator Is Highly Stable LFE ELECTRONICS, 1079 Commonwealth Ave., Boston 15, Mass. Model 831-X-1 is continuously tunable over the 8.2 Gc to 12.4 Gc band. It achieves long-term stability of one part in 10⁶ per hr and short-term stability of two parts in 10⁶ over a 20 Kc disturbance band. It features provision for electronic sweeping by an external sawtooth. Start of each sweep is determined by the main dial setting. Sweep width is controlled by a front-panel dial calibrated from 100 Mc to 4.2 Gc. Sawtooth signals from an oscilloscope can be used to synchronize the output of the 831 to provide a calibrated swept signal. (456) Tantalum Capacitors Offered in 3 Types INTERNATIONAL ELECTRONIC INDUSTRIES, Box 9036, Melrose, Nashville, Tenn., announces three new miniature tantalum electrolytic capacitor lines.
They comprise high reliability dry slug tantalum capacitors with advanced performance.

Now Multiple PPI Displays Under High Ambient Light Conditions... With GEC Scan Converter With GEC's transistorized 6021 Scan Converter, it is no longer necessary to look at rapidly decaying PPI displays in dark surroundings. Any number of inexpensive TV monitors can be operated from one PPI source with controlled image storage time affording more reliable evaluation of displayed information. Readily tailored to your specific requirements through its plug-in functional modules, the 6021 Scan Converter is capable of: - TRANSLATION of video information from one scanning mode to any other. - STORAGE and INTEGRATION of video information. - TIME-COORDINATE TRANSFORMATION for expansion or reduction of bandwidth. Contact GEC for more information about conversion of radar PPI to TV, TV standards conversion or conversion of slow scan narrow band TV to standard TV or vice versa. Qualified electronic engineers are needed for work in the field of Scan Conversion. Address inquiries to Professional Placement Manager. An equal opportunity employer. ... advanced electronics at work GENERAL ELECTRODYNAMICS CORPORATION 4430 FOREST LANE • GARLAND, TEXAS • BROADWAY 5-1161

electronics • March 15, 1963

small size... BIG performance! NEW 1/4" round single turn MECHATRIM trimmer potentiometer FEATURES: - infinite resolution - 200°C temperature performance - ± .015%/°C temp. coefficient - 100 megacycle frequency range - mil spec moisture resistance - non-wire wound reliability - limit stops PHONE OR WRITE FOR DETAILS SM/I SERVOMECHANISMS/INC. MECHATROL DIVISION NEW YORK - Home Office 1200 Prospect Avenue Westbury, New York Area Code 516 - EDgewood 3-6000 CALIF.
- (Branch Office) Mechatrol of California 200 North Aviation Boulevard El Segundo, California Area Code 213 - ORegon 8-7841 SEE US AT THE IEEE SHOW Booth 2316 CIRCLE 184 ON READER SERVICE CARD

Heater-Cooler for Electronic Systems MCLEAN ENGINEERING LABORATORIES, P. O. Box 228, Princeton, N. J. This unit will keep an electronic enclosure at an even 65 F with surrounding temperature in still air as low as 10 F. It also will flush and cool electronic cabinets when the remote thermostat indicates that cooling is required. When moderate heating is required, internal dampers are changed to recirculate air within the enclosure without drawing in fresh air. Heat comes from internal electronic units. When substantial heating is required, an internal relay energizes a 1,000 w electric heater to maintain a set temperature. The Mil-Spec centrifugal blowers and motors are guaranteed to run continuously for 20,000 hr. (458)

Round Connectors Have Grommet Seal Contacts WINCHESTER ELECTRONICS, INC., 19 Willard Road, Norwalk, Conn. Series RM-RS connectors meet environmental requirements of MIL-C-26482. They are available in 8, 10, 12, 14, 16, 18, 20, 22 shell sizes. CIRCLE 457 ON READER SERVICE CARD

Preformed GRID-WIRE CONNECTORS Fastest to apply... Lowest installed cost... No maintenance... For GRID TYPE ARRAYS These unique helically formed connectors are in service on U. S. Government signal installations. For instance, the Tapered Aperture Horn Antenna — TAHA — at LaPlata, Maryland employs over 90,000 PREFORMED Grid-Wire Connectors. They are also used on several Voice of America projects and on rhombic antennas, log periodics, conical monopoles, and horn antennas. PREFORMED Grid-Wire Connectors are wrapped on by hand; no tools are needed. They provide uniform holding; prevent stress points. No parts can loosen to create noise. High mechanical strength and electrical conductivity are assured.
T-CONNECTORS designed for terminating a wire at a cross-wire or catenary, prevent premature fatigue damage often caused by high-stress fittings... available in both standard and reducing configurations. L-CONNECTORS an excellent electrical and mechanical connector suitable for a wide range of holding strengths, made of compatible materials... available in standard or reducing types, for wires of equal or different diameters. CROSS-TIES interlock design holds grid wires securely yet permits adjustment to various angles. Available in standard or reducing configurations, as well as spacer types, which join intersecting but noncontacting wires. Use the reader service card to request complete information, or write for Bulletin SP-2041. PREFORMED LINE PRODUCTS CO. 5349 St. Clair Avenue Cleveland 3, Ohio 881-4900 (DDD 216) 600 Hansen Way Palo Alto, California 327-0170 (DDD 415) Made in accordance with U.S. Patent 2,691,865 CIRCLE 312 ON READER SERVICE CARD Pin and socket inserts are interchangeable with plug and receptacle shells. Series RM has crimp type contacts; series RS has solder type. Polarization is achieved by shells, with keys and keyways. Contacts are furnished in two sizes: No. 20 Awg, 7.5 amp; and No. 14 Awg, 13 amp. (459) D-C Power Supplies Feature Compactness SORENSEN, Richards Ave., S. Norwalk, Conn. Custom specifications have been designed into eight new transistorized d-c power supplies. The low-priced new QB series models provide nominal outputs of 5 to 36 v at 90 or 180 w capacity. They provide regulation of ±0.01 percent (line and load combined), ripple of only 300 μv rms and response time of 25 μsec (typical). (460) Tuning Fork Oscillator Mounted on P-C Board FORK STANDARDS, INC., 1915 North Harlem Ave., Chicago 35, Ill., has available a complete tuning fork oscillator built on a printed-circuit board.
... and see how the output voltage remains constant at 24 volts. The points we want to demonstrate, of course, are that the G5 GTO operates directly on a 200 volt line, makes for simple circuitry, has fast switching speed (up to 100 kc) and handles power at least as smoothly as that high megatane rated gasoline you hear so much about on TV. (If you'd like the story on this regulated supply, write us and ask for the note prepared by Denis Graham. Additional information is available in Application Note 200.23, which gives you a number of circuits ideal for regulated supplies in applications such as computers, test equipment, airborne and missile equipment, and industrial controls, among others.)

Another Thing to See ... at I triple E will be a direct power control with the G-E L7 light activated silicon switch. You'll be able to actuate a motor-driven aperture disc between a miniature incandescent lamp and the L7, trigger a xenon flash tube shining on the same L7, thus directly controlling a 120 volt lamp and an industrial control relay. And with the use of glass fiber to transmit light to the L7, you can trigger the L7, even around corners. If that sounds like a lot, come to Booth 2902 and see. We can prove it. Or if you'd like the whole story about the L7, write us and ask for Application Note 200.29. A last closing comment: we'll also show you (among other things) how G-E Controlled Avalanche Rectifiers live through reverse voltage transients and also operate in series strings without resistance dividers (whether you like it or not). Any questions? Write Section 16C102, Rectifier Components Department, General Electric Company, Auburn, New York. In Canada: Canadian General Electric, 189 Dufferin St., Toronto, Ont. Export: International General Electric, 159 Madison Ave., N.Y. 16, N.Y.

Accuracy is 0.005 percent at room temperature and 0.010 percent from 0 to 85 C over a frequency range of 60 to 10,000 cps. Output is 3 v rms sine wave or 8 v p/p square
wave with a 10,000 ohm load. Standard board is 3 by 6 by \( \frac{1}{2} \) in., but the circuit can also be built using the customer's standard sized board and connector. CIRCLE 461, READER SERVICE CARD

R-F Calorimeter Is Self-Contained AVNET INSTRUMENT CORP., 91 Commercial St., Plainview, L. I., N. Y. Model HS-12 high power r-f calorimeter, housed in a Widney Dorlec enclosure, is a self-contained portable precision instrument capable of quickly and accurately measuring average, pulsed or c-w microwave power to 50 Kw. Instrument employs a calibrated constant volume, sealed, distilled water circulating system. (462)

Shielding Material In Three Forms METEX ELECTRONICS CORP., Walnut Ave., Clark, N. J., offers Polashield, a shielding material that provides both an rfi shield and an integral pressure seal. It is made from thousands of oriented wires that are molded in a matrix of elastomeric material. The wires are aligned perpendicular to the surface of the shield, increasing the insertion loss through the gasket. The material yields an overall system attenuation of 125-135 db and has an insertion loss measurement of as much as 100 db. Pressure seals up to 30 psi can be maintained. Polashield is available in three forms: strip, ring and formed gasket. (463)

Programming Switches Are Miniaturized SEALECTRO CORP., 139 Hoyt St., Mamaroneck, N. Y., announces miniature programming switches featuring relay-type contacts. These Actan switches are approximately one-third the size of conventional units and utilize a barrel actuator into which activating pins can be inserted to set up the desired sequence of events. A standard switch will offer up to 16-pole, double-throw operation and is also available with as many as 32 poles, double throw. Contact life is claimed to be in excess of 100 million operations.
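The Polashield attenuation figures above (100 db gasket insertion loss, 125-135 db overall system attenuation) can be restated as power ratios with the standard decibel conversion. A minimal sketch; the function name is mine, the db values are from the announcement:

```python
# Convert shielding attenuation in decibels to the factor by which
# incident power is reduced: ratio = 10 ** (db / 10).
# The 100, 125 and 135 db figures are from the Polashield announcement.

def db_to_power_ratio(db):
    """Attenuation in db -> incident/transmitted power ratio."""
    return 10 ** (db / 10)

for db in (100, 125, 135):
    print(f"{db} db attenuates power by a factor of {db_to_power_ratio(db):.2e}")
```

The 100 db insertion loss alone corresponds to a ten-billion-fold reduction in transmitted power.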
(464)

D-C/A-C Choppers for P-C Board Mounting STEVENS-ARNOLD, INC., 7 Elkins St., South Boston 27, Mass., offers a-c driven and d-c driven d-c/a-c choppers for printed circuit board mounting. The new models, with twin-contact construction, include all features required for easy mounting, either plug-in or solder-in. For a-c driven applications there are models for 50, 60, 94, and 120 cycles. For d-c applications, the company can furnish, for customer convenience, a transistorized driver unit to convert 12 v d-c into 94-cycle square wave a-c to operate the chopper. When the user wishes to make a driver unit, the company furnishes complete information, a circuit diagram, and a parts list. (465)

PROVEN RELIABILITY— SOLID-STATE POWER INVERTERS, over 260,000 logged operational hours— voltage-regulated, frequency-controlled, for missile, telemeter, ground support; 135°C all-silicon units available now— Interelectronics all-silicon thyratron-like gating elements and cubic-grain toroidal magnetic components convert DC to any desired number of AC or DC outputs from 1 to 10,000 watts. Ultra-reliable in operation (over 260,000 logged hours), no moving parts, unharmed by shorting output or reversing input polarity. High conversion efficiency (to 92%, including voltage regulation by Interelectronics patented reflex high-efficiency magnetic amplifier circuitry.) Light weight (to 6 watts/oz.), compact (to 8 watts/cu. in.), low ripple (to 0.01 mv. p-p), excellent voltage regulation (to 0.1%), precise frequency control (to 0.2% with Interelectronics extreme environment magnetostrictive standards or to 0.0001% with fork or piezoelectric standards.) Complies with MIL specs for shock (100G 11 msec), acceleration (100G 15 min.), vibration (100G 5 to 5,000 cps), temperature (to 150 degrees C), RF noise (MIL-I-26600).
AC single and polyphase units supply sine waveform output (to 2% harmonics), will deliver up to ten times rated line current into a short circuit or actuate MIL type magnetic circuit breakers or fuses, will start gyros and motors with starting current surges up to ten times normal operating line current. Now in use in major missiles, powering telemeter transmitters, radar beacons, electronic equipment. Single and polyphase units now power airborne and marine missile gyros, synchros, servos, magnetic amplifiers. Interelectronics—first and most experienced in the solid-state power supply field produces its own all-silicon solid-state gating elements, all high flux density magnetic components, high temperature ultra-reliable film capacitors and components, has complete facilities and know how—has designed and delivered more working KVA than any other firm! For complete engineering data, write Interelectronics today, or call Ludlow 4-6200 in New York. INTERELECTRONICS CORP. 2432 Gr. Concourse, N. Y. 58, N. Y. For Dependability KINNEY COMPOUND HIGH VACUUM VANE PUMPS SERIES KCV The KCV Series of two-stage, vane-type, compound high vacuum pumps has a range of free air displacements from 2 to 7 cfm and attains ultimate pressures of 0.2 micron. Gas ballasting, a standard feature of all Kinney Pumps, reduces oil contamination and consequent poor vacuum caused by condensable vapors. The series has been developed specifically to provide quiet, vibration-free operation, and includes long-lasting filter elements to eliminate smoke and fumes from the discharge. KINNEY . . . EVERYTHING IN VACUUM KINNEY VACUUM DIVISION The New York Air Brake Company 3529 Washington Street Boston, Massachusetts For laboratory or field research applications MODEL 700/1400 SERIES NEW MAGNETIC TAPE RECORDING SYSTEMS ACCURATE ± 0.2 linearity for analog data FLEXIBLE as many data channels as you need from 2 to 14 COMPACT a 7-channel system fits in less than 2 ft. 
of rack space LOW COST modest initial cost, true operating economy MENOMOTRON NEW MODEL 700/1400 SERIES MAGNETIC TAPE RECORDING SYSTEMS record any electrical quantity from DC up to 5000 cps. A uniquely simple pulse-frequency modulation technique insures that data signal intelligence is free from non-linearity due to tape coating or other distortions. Select as many data channels as you need, up to 14. Choose the tape format you want—¼", ½" in-line, or standard IRIG. If standardization is desired, simply specify 7 channels on ½-inch tape in the standard IRIG configuration. Record/Reproduce electronics for each channel are integrated in a single plug-in module featuring unity gain. An integral speed switch permits selection of data conversion for 2, 3 or 4 tape speeds — no additional plug-ins needed. For maximum flexibility, each multi-channel input is isolated. Data can be accepted from unbalanced, differential or push-pull outputs, or different DC levels on input signal ground returns can be preserved. Test points allow monitoring of input during recording, output voltage level when reproducing. Write for the pleasant details. Visit us at Booth 3027 — IEEE Show Division of TECHNICAL MEASUREMENT CORPORATION Executive Sales Offices: 202 Mamaroneck Ave., White Plains, N.Y. INDUSTRIAL LITERATURE SERVICE McGraw-Hill Book Co., Training Materials & Information Services Div., 330 W. 42nd St., New York 36, N. Y. Brochure describes custom services available, including preparation of technical bulletins, house organs, sales brochures, facility reviews and the like. CIRCLE 466, READER SERVICE CARD SSB SPECTRUM ANALYZER Lavoie Laboratories, Inc., Morganville, N. J. A catalog sheet contains advance specifications for the LA-40 single sideband spectrum analyzer with 2-32 Mc range, and the LA-41 two-tone generator. 
CIRCLE 467, READER SERVICE CARD

SOLDERING METHODS Oryx Co., 13804 Ventura Blvd., Sherman Oaks, Calif., has available a technical bulletin on soldering methods and techniques. CIRCLE 468, READER SERVICE CARD

DELAY LINES Polyphase Instrument Co., East Fourth St., Bridgeport, Pa. Bulletin 25DL covers nanosecond, microsecond and millisecond delay lines. CIRCLE 469, READER SERVICE CARD

PHOTOCONDUCTIVE CELLS Clairex Corp., 8 West 30th St., New York 1, N. Y. A 16-page booklet covers the use of photoconductive cells under various light, circuit and application conditions. CIRCLE 470, READER SERVICE CARD

SWITCHING MODULE Vitramon, Inc., P.O. Box 544, Bridgeport 1, Conn. Catalog of 6 pages contains specifications and operating characteristics of VG low level switching module. CIRCLE 471, READER SERVICE CARD

MICROWAVE ABSORBERS Emerson & Cuming, Inc., Canton, Mass. Color chart presents performance and physical data on a full line of Eco-sorb microwave absorbers designed for "free space" and waveguide applications. CIRCLE 472, READER SERVICE CARD

MATERIALS PROCESSING SERVICE Semiconductor Specialties Corp., 252 Garibaldi Ave., Lodi, N. J. Single-page bulletin discusses the company's available service for slicing, lapping, dicing, etching and sizing of materials. CIRCLE 473, READER SERVICE CARD

WIRELESS MICROPHONE Bergen Laboratories Inc., 60 Spruce St., Paterson 1, N. J., has published a data sheet describing the Radio-Mike, a professional wireless microphone that requires no trailing cord to the p-a amplifier. CIRCLE 474, READER SERVICE CARD

INTEGRATED SERVO ASSEMBLY Daystrom, Inc., Transicoil Division, Worcester, Pa., has published a catalog sheet describing an integrated servo assembly—three components in one housing. CIRCLE 475, READER SERVICE CARD

MICROWAVE REFLECTOMETER Paradynamics, Inc., 10 Stepar Place, Huntington Station, L. I., N. Y., has available a comprehensive, illustrated brochure describing the newly developed precision microwave reflectometer.
CIRCLE 476, READER SERVICE CARD

HIGH-SPEED PRINTER SYSTEM Potter Instrument Co., Inc., East Bethpage Road, Plainview, N. Y. Catalog No. 400-2-1 illustrates and describes the LP-1200, a complete high-speed printer system. CIRCLE 477, READER SERVICE CARD

VOLTAGE SURGE PROTECTION International Rectifier Corp., 233 Kansas St., El Segundo, Calif. A 20-page manual, KL-601, provides data on the protection of semiconductors through the use of selenium transient voltage suppressors. CIRCLE 478, READER SERVICE CARD

PARAMETRIC AMPLIFIERS Sperry Microwave Electronics Co., P.O. Box 1828, Clearwater, Fla. A 10-page brochure on parametric amplifiers covers design, typical characteristics, structures and systems. CIRCLE 479, READER SERVICE CARD

STACK SWITCHES Switchcraft, Inc., 5555 N. Elston Ave., Chicago 30, Ill. Catalog S-308 covers stack switch components and assemblies for the industrial electronic industry. CIRCLE 480, READER SERVICE CARD

RESOLVER-AMPLIFIER COMBINATIONS General Precision Aerospace, Little Falls, N. J. Catalog sheet describes size 8 and size 11 winding-compensated resolver-amplifier combinations designed for coordinate chain applications. CIRCLE 481, READER SERVICE CARD

CAPACITORS Aerovox Corp., New Bedford, Mass., has issued a bulletin on type V146XR Aerofilm Wrap & Fill Mylar capacitors. CIRCLE 482, READER SERVICE CARD

PRECISION POT Giannini Controls Corp., 1600 S. Mountain Ave., Duarte, Calif. A recent two-page bulletin describes the Tempot, a precision potentiometer capable of operating in extreme temperature environments. CIRCLE 483, READER SERVICE CARD

FERRITE CORES Electronic Memories, Inc., 9430 Bellanca Ave., Los Angeles 45, Calif., offers specification sheets on two new ferrite memory cores for application in coincident current memories. CIRCLE 484, READER SERVICE CARD

IMMITTANCE CHART Avco Corp., Cincinnati 41, O., offers a 17 in. by 22 in. immittance chart which permits direct conversion from impedance to admittance or vice versa.
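The conversion the Avco immittance chart performs graphically is the complex reciprocal Y = 1/Z. A minimal sketch using Python's built-in complex arithmetic; the function name and the 50 + j50 ohm example are illustrative, not from the listing:

```python
# Impedance-to-admittance conversion, the operation an immittance
# chart performs graphically. For Z = R + jX,
# Y = 1/Z = G + jB, with G = R/(R^2 + X^2) and B = -X/(R^2 + X^2).

def impedance_to_admittance(r_ohms, x_ohms):
    """Return conductance G and susceptance B (in mhos) for Z = R + jX ohms."""
    y = 1 / complex(r_ohms, x_ohms)
    return y.real, y.imag

g, b = impedance_to_admittance(50.0, 50.0)  # G = 0.01 mho, B = -0.01 mho
```

Running the conversion the other way (admittance to impedance) is the same reciprocal, which is why the chart can be read in either direction.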
CIRCLE 485, READER SERVICE CARD

POWER SUPPLIES Deltron Inc., Fourth and Cambria Sts., Philadelphia 33, Pa. A 32-page catalog, A-631, covers a wide range of solid state and vacuum tube power supplies. CIRCLE 486, READER SERVICE CARD

TRUE-RMS VOLTMETER Ballantine Laboratories Inc., Boonton, N. J. Four-page brochure describes model 320A true-rms voltmeter for accurate measurements on a wide range of waveforms. CIRCLE 487, READER SERVICE CARD

ELECTRICALLY ISOLATED R-F ROOMS Erik A. Lindgren & Associates, Inc., 4575 N. Ravenswood Ave., Chicago 40, Ill., offers comprehensive catalogs, drawings and specifications covering scientifically designed and constructed double electrically isolated r-f rooms. CIRCLE 488, READER SERVICE CARD

The quality of our IF strips has been hidden within our systems... until now. What's behind Loral's success in meeting—or exceeding—MIL-SPECS in the creation of "black boxes," both systems and subsystems, for the military for over 15 years? The quality built into components such as this IF amplifier. This unit, one of a series of IF amplifiers operating at center frequencies from 30 to 160 megacycles, was developed for a Loral system that meets MIL-E-5400. It is now ready for YOU through our General Products Division. Such amplifiers are available as virtually "off-the-shelf" items and are representative of Loral's R & D capacity to create electronic components that are the best possible buy in the smallest, most reliable package—Value Engineered throughout. We may have, right now, the electronic component that will help YOU do an important defense job while saving YOU the unnecessary time and cost of undertaking your own R & D. For further information on our complete line of amplifiers and other precision microwave products, write: General Products Division, LORAL ELECTRONICS CORPORATION, 825 Bronx River Avenue, The Bronx 72, New York.
| Specification | Value |
|---|---|
| PART NUMBER | IF-301 |
| MIL SPEC | MIL-E-5400 |
| CENTER FREQUENCY (MCS) | 100 |
| BANDWIDTH AT 3 db (MCS) | 20 |
| BANDPASS RIPPLE | 0 |
| NOISE FIGURE | 7 db |
| VOLTAGE GAIN | 50 db RF, 66 db video |
| GAIN CONTROL | Yes |
| AGC | Yes |
| INPUT IMPEDANCE | 50 ohms |
| OUTPUT IMPEDANCE | 50 ohms RF, 91 ohms video |
| POWER REQUIREMENTS | 25 v, 110 ma |
| TRANSISTOR COMPLEMENT | 2N1195 |
| WEIGHT | 11.5 oz. |
| DIMENSIONS | 11 x 1¼ x 1 |

NEW BOOKS

Lasers: Generation of Light by Stimulated Emission By BELA A. LENGYEL John Wiley & Sons, Inc., New York, 1962, 125 p, $6.95. At last there is a competent introduction in book form to lasers, without either the sensationalism and oversimplification of the popular press or the abstruseness of highly specialized journals. The author sets out to bring together the highly scattered literature on lasers, and to present it to technically qualified readers in a unified form. In this he largely succeeds. There is, of course, probably no field less static today than lasers, and any book on them cannot hope to remain up to date for long. The gallium-arsenide diode laser, for instance, appeared only after this book went to press. Nevertheless, it will serve as an excellent introduction to "classical" laser theory. A good bibliography of laser literature closes the book.—G.V.N.

Component Parts Failure Data Compendium Electronic Industries Association, New York, 1962, 195 p, $2.50. This long-awaited work by the EIA Ad Hoc Group on Component Parts Failure Data has arrived. Twenty-eight organizations contributed data to this study and it is the most complete compilation available. Data is presented in chart form on 61 different components under various environmental conditions. Failure rates are given in percent per 1,000 hours. However, as is the case with most failure-rate data, even this compendium should be marked "use with discretion."
Some of the data goes back to the early days of reliability work and represents nothing more than someone's "educated" guess. In other cases, conditions of use (mostly in the field) and test hours are carefully documented and these data should prove highly enlightening.—J.M.C.

from the House of Zeners... US SEMCOR a completely New Series 1N4016B thru 1N4042B THE INDUSTRY'S ONLY 5 WATT ZENER UNIQUE MINIATURE PACKAGE • 6 YEARS PRODUCTION EXPERIENCE IN BASIC PACKAGE DESIGN • SAVE ON WEIGHT, SPACE AND COST • INCREASE RELIABILITY Solves intermediate power problems for 1 thru 5 watt applications where you are now using 10 watt 7/16 package (1/3 weight - 20% of volume) SEE IT AT IEEE . . .

Electromechanical Energy Conversion By SAMUEL SEELY McGraw-Hill Book Co., Inc., New York, 1962, 336 p, $10.75. Three areas in particular are covered by this textbook: energy storage, energy transfer, and energy conversion. The electrical devices that fall within these broad categories are analyzed in a fundamental way, developing and using tools of analysis such as Lagrange's equations and dynamic problem analysis, which will prove useful in other fields as well. From transducers and converters the book progresses to the rotary power converter and generalized machine theory, and the n-m symmetrical machine. Though its main interest lies in the power engineering field, the fundamentals and methods explained are of importance to electronics engineers as well.

Reliability: Theory and Practice By IGOR BAZOVSKY Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 292 p, $8.75. Reliability Principles and Practices By S. R. CALABRO McGraw-Hill Book Co., Inc., New York, 1962, 371 p, $10.50. In a field in which university courses are just beginning, the appearance of a usable textbook has been eagerly awaited. Both these books may find use in reliability courses both in universities and in industry. Bazovsky's book may be favored by some university professors who prefer to concentrate on
the basics of the subject and introduce a certain level of rigor in their courses. Calabro's book is one that no engineer practicing reliability in industry should be without. It is thorough, comprehensive and easily understood. It may also find use as a textbook.—J.M.C.

SEMCOR'S DISPLAY SUITE MEURICE HOTEL • 145 W. 58th ST. • NEW YORK, N.Y.

Textbook on Mechanized Information Retrieval By ALLEN KENT Interscience Publishers, div. of John Wiley & Sons, New York, 1962, 268 p, $9.50. REVIEW of information retrieval methodology largely from the point of view of the librarian. Includes heavy concentration on special coding techniques used at Western Reserve University. Gives little space to recent advances in machine indexing that appear to hold great promise.—J.M.C.

Handbook of Nonparametric Statistics By JOHN E. WALSH D. Van Nostrand Company, Inc., Princeton, New Jersey, 1962, 549 p, $15. LARGE number of nonparametric tests for randomness, point estimation and confidence regions. Presentation is extremely compact and the book will be of value only to those having a sound foundation both in classical and distribution-free statistics. Some nonparametric tests described will be useful in reliability work.—J.M.C.

Electromechanical System Theory By HERMAN E. KOENIG and WILLIAM A. BLACKWELL McGraw-Hill Book Co., Inc., New York, 504 p, $14.50. ALTHOUGH the subject matter of this book relates to electrical power rather than electronic engineering, the analysis techniques developed are applicable to both fields. A large portion of the book has to do with the formulation of problems, and methods of analysis. Two-terminal and multi-terminal system concepts are dealt with; the second half of the book relates to analysis of d-c and a-c rotating machines and of non-linear systems.

Six-Language Dictionary of Electronics, Automation and Scientific Instruments By A. F. DORIAN Prentice-Hall, Inc., Englewood Cliffs, N.
J., 1963, 752 p, $16.95. TECHNICAL translations are always impeded by the fact that words have more than one meaning, and a common-usage word will often have a very unexpected connotation when used in a technical sense. Thus, this dictionary will prove very useful to those who deal with foreign technical literature.

Are you a COMPLETELY INFORMED electronics engineer? Today you may be working in microwaves. But on what project will you be working tomorrow? You could have read electronics this past year and kept abreast of, say, microwave technology. There were 96 individual microwave articles between July, 1961 and June, 1962! But suppose tomorrow you work in some area of standard electronic components, in semiconductors, in systems? Would you be up-to-date in these technologies? Did you read the more than 3,000 editorial pages that electronics' 28-man editorial staff prepared last year? electronics is edited to keep you current wherever you work in the industry, whatever your job function(s). If you do not have your own copy of electronics, subscribe today via the Reader Service Card in this issue. Only 7½ cents a copy at the 3 year rate.

New FILM-THIN Copper-clad Laminates for space-saver or multi-layer printed circuits Visit us at I.E.E.E., Booths 4421-23 Synthane copper-clad laminates are now being produced with a base laminate of only .0035" and up—with 1 or 2 oz. cladding available on one or both sides. A pre-impregnated glass cloth with epoxy resin filler is also available for bonding multi-layer circuits. These new materials are produced under clean room conditions. Property values are comparable to military specs for the same materials in standard thicknesses. Write for folder of Synthane metal-clad laminates. SYNTHANE CORPORATION OAKS, PENNA. Glendale 2-2211 (Area Code 215) TWX 215-666-0589 Synthane-Pacific, 818 W. Garfield Ave., Glendale 4, Calif. TWX GLDL 4417U Synthane Corporation, 36 River Rd., Oaks, Pa.

The dictionary could have been
improved by adding the definitions of some of the terms which have more than one meaning even within electronics: thus, while P.M. in English can mean photomultiplier as well as permanent magnet, only the latter sense is translated into German. Languages included are English, Russian, French, German, Spanish and Italian. Entries are listed alphabetically in English, referred to by number from glossaries in the other languages.

Gentlemen: Please send me Synthane metal-clad laminates folder. Name__________________________________________ Address_________________________________________ City_______________________Zone_____State_______

Models of Transistors and Diodes By JOHN G. LINVILL McGraw-Hill Book Co., Inc., New York, 1963, 190 p, $7.95. THE purpose of this book, as stated, is to develop a set of models for transistors and semiconductor diode devices based on their basic internal physics. It makes no reference, in the explanations, to other electronic devices such as vacuum tubes, and thus will be a suitable text for the young engineers who learn about transistors first. Models are developed, successively, for the semiconductor carrier transport mechanisms, for the \(p-n\) junction, and for the \(p-n\) diode and the transistor. In the last two cases, lumped models are used that approximate the distributed devices. The book closes with functional models of the transistor and examples of transistor circuits.—G.V.N.

Introduction to Electronics By WALTER H. EVANS Prentice-Hall, Inc., Englewood Cliffs, N. J., 1962, 518 p, $14.65. AS introductions to electronics go, this one is unusually complete and can probably serve well both the engineering student and the executive or scientist in a related field who wishes to increase his knowledge of electronic circuits.
Not just a popular exposition of the subject, the book contains enough detail and enough mathematics to explain adequately a number of circuit designs, including digital and switching circuits as well as amplifiers and communication circuits. A descriptive chapter is given on microwaves, radar and antennas. Practical exercises give practice in circuit design procedures.—G.V.N. exhibit at IEEE show 1963 a new 0 to 520 Mc frequency-converter extending the frequency measuring range of WESTON-ROCHAR 20 Mc frequency-meter A 1149 (Weston ref. model 2052) Direct reading and recording of complete data from the counter Complete set including: 1 20 Mc F.T.P. counter model 2052 1 0 to 520 Mc frequency-converter 1 Codeverter (for digital recording) Input sensitivity: 50 mV from 1 to 60 Mc 20 mV from 60 to 560 Mc Input impedance: 50 Ω 4 new D.A.V.O. Meters (Digital Amp Volt Ohm Meters) fully transistorized 1 nanoampere to 20 mA DC/AC (or more with external shunts) 100 μV to 2,000 V DC/AC 1 Ω to 20 MΩ 0.03 % of reading ± 1 digit Point and unit displays Calibration-test: 1.018 V (Weston cell) Automatic polarity (all models) Automatic ranging (2 models B) Distributed in U.S.A. and Canada by: WESTON - Newark (N.J.) For complete information on our line of products and address of our agency in your country please apply to ROCHAR-ELECTRONIQUE, 51, rue Racine MONTROUGE (Seine) FRANCE. IEEE Medal of Honor Goes to Inventor AMID the massive team efforts that are characteristic of today's electronics, rare indeed is "the inventor." But that title is a natural appendage to the name of John Hays Hammond Jr. of Gloucester, Mass., who will receive the IEEE Medal of Honor this year. His hundreds of patents underlie much of modern electronic technology. He is cited by the IEEE for "pioneering contributions to circuit theory and practice, to the radio control of missiles and to basic communications methods." 
The setting of his work is a medieval castle-museum above the rocky Gloucester coastline not far from the reef of Norman's Woe, the setting of Longfellow's "The Wreck of the Hesperus." "I love old things," says this inventor of new things for more than a half-century. In the Great Hall of the castle is the magnificent 10,000-pipe organ designed by Hammond and built over a period of 20 years. The Great Hall has been used for recordings by some of the major record companies, and some of the greatest organists in the world have played there. Here in this museum, sections of which are open to the public during the Summer, are Hammond's living quarters, and here is the Hammond Research Corp., whose vice president, Ellison S. Purington, has worked in close association with Hammond since 1920. The work of Hammond, now 74 years old, spans about the same range of years as the IRE. From 1912 until 1928, the Hammond Laboratory was in a building on the same Gloucester coastline property, and here much pioneering work in radio was accomplished. From 1928 on, the work was done in laboratory areas in the castle. Hammond did extensive work for the U.S. military services starting in 1912 when the chief of Coast Artillery for the Army witnessed in Gloucester the successful radio control of a boat from shore. During both world wars, the Hammond group developed radio and other remote control systems applicable to waterborne and airborne missiles. Only recently, Hammond Research Corp. completed a communications project for the Navy. Hammond helped develop some of the stabilization and homing principles used in modern missiles. In communications, the Hammond group contributed to development of the triode for amplification purposes, of the i-f principle for selectivity, and of f-m techniques for broadcasting and telephony.
The list of Hammond colleagues, correspondents and consultants over the years reads like a roster of the radio-electronic pioneers: deForest, Alexanderson, Tesla, Lowenstein, G. W. Pierce, Langmuir, David Sarnoff. Harvard's Dr. E. L. Chaffee, now professor emeritus, became a consultant to the Hammond Laboratory as early as 1918, and even now traverses the 40-plus miles from Belmont, Mass., each Wednesday to spend the day at Hammond Research Corp. as a consultant and old friend. Hammond is a director and a research consultant for RCA, and many of the Hammond and Purington patents are turned over to RCA for development and manufacture. Son of a millionaire mining engineer, Hammond was graduated from Yale and received a doctorate in science from George Washington University. An early member of the IRE, he has served as treasurer and a director. **Clancy Accepts New Position** WILLIAM E. CLANCY has been appointed vice president and director of sales of John E. Fast & Co., Chicago based capacitor manufacturer. Prior to this appointment Clancy was vice president and director of sales of Thordarson-Meissner. **Vactek Announces Two Appointments** VACTEK, INC., new wholly owned subsidiary of Geophysics Corp. of America, Bedford, Mass., announces the appointment of Herbert Roth, Jr., as manager, and Bernard Bernstein as technical director. Roth, formerly a vice president of Nuclear Corp. of America, served previously with Radio Corp. of America. Bernstein, formerly head of Nu… Get These Special Reports on Manufacturing and Marketing Opportunities in Atlanta Listed below are eleven recently completed reports on specific manufacturing and marketing opportunities in Atlanta. Compiled and written by the Industrial Development Division of Georgia Tech, they are accurate, up-to-date and completely objective. No eye-wash. No glib generalities. Any or all are yours on request. Also available are 17 other studies of various aspects of Atlanta's economic make-up.
For the reports you want just check and mail coupon with your company letterhead. There's no cost. No obligation. Inquiries held confidential. Reports on Manufacturing and Marketing Opportunities in Atlanta for: - Fluorescent Lamp Ballasts - Current Carrying Devices (for building construction) - Plumbing Fixture Fittings - Household Waxes and Polishes - Antibiotics - Calculators and Computers - Refrigeration and Air Conditioning Equipment - Electronic Testing and Measuring Instruments - Drugs and Proprietaries - Industrial Valves and Pipe Fittings - Packaging General Reports on Atlanta: - Educational and Training Facilities in Metropolitan Atlanta - Atlanta's Metalworking Industry - World Trade - Insurance - Data Processing - Utilities - Communications - Taxes - Industrial Districts - Medical Complex - Transportation - Finance - Population - Manufacturers Guide - Georgia Data - Atlanta Facts - World Trade Directory "Forward Atlanta," Paul Miller, Industrial Manager Atlanta Chamber of Commerce 1330 Commerce Bldg., Atlanta 3, Ga. Phone 521-0845 Please send me the special "Forward Atlanta" reports checked. We would be interested primarily in a new: ( ) plant ( ) warehouse ( ) sales office ( ) other__________________________ Name____________________________________Title_____________________________________ Product____________________________________ Company____________________________________ Street_____________________________________ City________________Zone_____State_________ Vactek's products and capabilities for the laboratory and precision industrial segments of the vacuum industry are expected to complement those of Vacuum Specialties, Inc., another GCA subsidiary. **GI Rectifier Division Appoints Davis** EMANUEL DAVIS has been appointed to the new post of director of quality control and reliability of the General Instrument Corporation Rectifier division. He was formerly with General Electric Co.
In his new post, Davis will be responsible for all quality control and reliability programs for the division's entire line of silicon and selenium rectifiers. Reporting to him will be a group of approximately 45 engineers and technicians. General Instrument Rectifier division has plants at Newark, N. J., and Brooklyn, N. Y. **Illinois Tool Works Appoints Templeton** ILLINOIS TOOL WORKS INC., Chicago, Ill., recently appointed J. Earl Templeton vice president, Electronics divisions. Templeton was formerly with P. R. Mallory & Co., where he was director, Western operations. He is a vice president and director of Electronics Industry Show Corp. which produces the Electronics Parts Show held annually in Chicago. **System Development Upgrades Melahn** WESLEY S. MELAHN, Air Defense division manager of System Development Corp. … **Raytheon Promotes Cassevant** ALBERT F. CASSEVANT has been named manager of Raytheon Company's Electronic Services Operation with responsibility for its worldwide support activities. He previously served as corporate special projects manager. Before joining the firm in 1962, he was vice president and general manager for ITT's Kellogg division. A New Alloy Offering Substantial Savings In the manufacture of high performance sliding and wiping contacts Send for free technical bulletin What is 239 Alloy? Leach & Garner #239 Alloy is a new Gold Palladium Base Alloy developed specifically as an equivalent functional substitute for Leach & Garner #226 Alloy (a Palladium Silver Base Alloy). Unique properties of this new alloy permit it to be clad to base materials and fabricated into a new type, low material cost, potentiometer sliding contact with high reliability and long, low noise life. Immediate use can be made in trimming potentiometer manufacture. Further testing should also validate its high performance in both precision potentiometers and a wide range of other electronic component applications. Please direct your inquiries to 52 Pearl St., Attleboro, Mass.
Shown 1/2 actual size Leach & Garner for Alloys and Clad Metals Over 60 years' successful experience has established Leach & Garner as a leader in the production of clad and solid alloys for a wide range of industries. In addition a program, carefully developed by unique owner-management, has created a completely new, clean and separate department where this experience is applied to bonding, rolling and fabrication of clad semiconductor materials. Leach & Garner Company Attleboro, Massachusetts General Findings Inc. Attleboro, Massachusetts Specialized experience through production of countless miniature precision parts for the electronic industry is also combined with the most modern facilities to offer semiconductor manufacturers the service needed to meet the most demanding requirements at low cost with absolute assurance of complete conformity. General Findings for Precision Parts Fabrication **Shelley Named to Stoddart Post** TAMAR ELECTRONICS, INC., Anaheim, Calif., has announced the appointment of Rulon Gene Shelley as vice president and general manager of Stoddart Aircraft Radio, Inc., a wholly owned subsidiary of Tamar. Shelley was formerly vice president and general manager of the Tamar Electronics division. He has been with the Tamar organization since January of 1962. Prior to that, he spent 12 years with North American Aviation where he was chief engineer of the Armament and Flight Control division of Autonetics. **Bell Heads Up Pearce-Simpson** PHILIP BELL, executive vice president of Pearce-Simpson, Inc., TOTAL CAPABILITY-through UTL Services Your Company can attain total capability by utilizing UTL services in Reliability Engineering, Technical Services and Testing. As an independent and objective organization, the technical assistance rendered by UTL on many major weapon and space programs has provided industry and government with highly valued and advanced developments in the fields of...
**RELIABILITY ENGINEERING**...program plans, training instruction, prediction, design review, part selection, quality assurance. **TECHNICAL SERVICES**...installation, evaluation, logistic support, design, technical documentation. **TESTING**...simulation of space environments and weapon system mission profiles, electronic ordnance and cryogenic parts and systems testing. The UTL technical staff, with comprehensive nationwide facilities, can assist you from development through delivery and installation. Call or write to the address nearest you: - Test Facilities • Sunnyvale, Calif. (San Francisco Area), 150 Wolfe Road, RE 9-5900. - Monterey Park, Calif. (Los Angeles Area), 573 Monterey Pass Road, CU 3-4168. - Alexandria, Va. (Washington, D.C. Area), 4416 Wheeler Ave., 836-7200. UNITED TESTING LABORATORIES a division of United ElectroDynamics, Inc. CIRCLE 315 ON READER SERVICE CARD --- High Quality Nichicon Capacitors for all electronic equipment Nichicon research and experience assures remarkable strength and stability in all its capacitors. Nichicon produces a complete line of capacitors designed for every need. MAIN PRODUCTS: Oil Paper Capacitor, Electrolytic Capacitor, Tantalum Capacitor, Metallized Paper Capacitor, Ceramic Capacitor, Mica Capacitor and Mylar Capacitor, etc. Nichicon Capacitor Ltd. HEAD OFFICE: Uehara Bldg., Oikedori, Karasumahigashi-iru Nakagyo-ku, Kyoto, Japan CABLE ADDRESS: CAPACITOR KYOTO CIRCLE 316 ON READER SERVICE CARD --- COSTS DOWN! WIRE FORMS PRECISION UP! BY ARTWIRE Art Wire's high speed, automatic machines and production economies mean BIG SAVINGS for you...with guaranteed precision from the first to the millionth unit. Widest variety of wire components, made for today's automatic production lines, delivered on schedule to assure uninterrupted work flow. Send a Sample or Blueprint for Estimates ART WIRE AND STAMPING CO. 18 Boyden Place, Newark 2, N.J. 
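The UTL notice above lists "prediction" among its reliability-engineering services. A minimal sketch of the standard series-system arithmetic behind such a prediction (constant-failure-rate model; the part types, failure rates and counts below are invented for illustration and are not UTL data):

```python
import math

# Hedged sketch of series-system reliability prediction. The failure
# rates and part counts are illustrative assumptions, not UTL figures.
FAILURE_RATES_PER_HOUR = {   # assumed lambda_i for each part type
    "transistor": 0.2e-6,
    "capacitor": 0.1e-6,
    "resistor": 0.05e-6,
}
PART_COUNTS = {"transistor": 40, "capacitor": 120, "resistor": 300}

def system_failure_rate():
    """In a series system every part must work, so failure rates add."""
    return sum(FAILURE_RATES_PER_HOUR[p] * n for p, n in PART_COUNTS.items())

def reliability(hours):
    """R(t) = exp(-lambda_total * t) under the constant-failure-rate model."""
    return math.exp(-system_failure_rate() * hours)

lam = system_failure_rate()
print(f"total failure rate: {lam * 1e6:.1f} failures per million hours")
print(f"predicted MTBF: {1 / lam:,.0f} hours")
print(f"1,000-hour mission reliability: {reliability(1000):.4f}")
```

The same parts-count bookkeeping, with handbook failure rates in place of the assumed ones, is what a prediction service of this era would deliver for a weapon or space program.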
CIRCLE 201 ON READER SERVICE CARD Miami, Fla., has been named president and chief executive officer. He succeeds William S. Simpson. The new president has been executive vice president, chief executive officer, and a director since March 1962, after having joined the company as general manager in 1961. Pearce-Simpson produces marine radio-telephones and Citizens Band radios manufactured by its electronics division. **PEOPLE IN BRIEF** Harold E. Francis leaves Chandler Evans Corp. to take post of v-p in charge of sales for Alloy Nuclear Corp. Electro-Optical Systems, Inc. promotes John M. Teem to technical director, and Henry L. Richter, Jr., to mgr., Advanced Systems Development Operations. Harold E. Watson (Maj. Gen., USAF, Ret.) has joined GE's Defense Programs Operation as a consultant on aerospace and defense technology. Sol Sparer elevated to president and chief executive officer of Pacotronics, Inc. George J. Tatnall, formerly of the Naval Air Development Center, now with Corning Glass Works as supervisor of radome engineering. Ralph F. Woodward, previously a staff associate at Stanford U., named quality control mgr., mfg. div., of Warnecke Electron Tubes, Inc. IBM Corp. advances L. R. Bickford, Jr., to director of general science at the Thomas J. Watson Research Center. Gerald deG. Cowan promoted to director of engineering for Sperry Rail Service. Hamilton O. Hauck moves up to director of corporate development, Western region, of Raytheon Co. Motorola ups Forrest G. Hogg to mgr. for NASA Programs of the Military Electronics div. Edmond A. Roelof leaves Eldon Industries, Inc. to join Midland Mfg. Co. as v-p, mfg. Larry Kaufman advances to director of research at Man-Labs, Inc. Consolidated Electrodynamics Corp. ups C. Kenneth Hines to g-m of the DeVar-Kinetics div. Directors of the Gudebrod Bros. Silk Co., Inc., have elevated F. W. Krupp to president and W. T. Hooven to chairman of the board. Let **DYNASERT®** CUT COSTS, SAVE TIME! 
The Dynasert Component Inserting Machine will pay for itself in a year or less in direct labor savings. It feeds, cuts, forms, inserts and clinches a wide range of axial lead components — up to ten times faster than by hand assembly. And with the new Pantograph Positioning Table you get even greater economies. For use where multiple components of the same size are to be inserted in parallel positions. Find out more. Write or call Mr. D. R. Knight, Dynasert, United Shoe Machinery Corporation, Boston 10, Massachusetts. Area Code 617, LI 2-9100. See us at Booth #4241 at the IRE Show CIRCLE 317 ON READER SERVICE CARD NEW JERSEY "cradle of industrial research" IEEE SPECIAL Exhibitors at the IEEE Show and their booth numbers are as follows: A ADC Products ........................................ 1923 A.M.P., Inc. ........................................... 1928 AMP, Inc. ............................................. 2527-2531 & 2837 A & M Instrument, Inc. ............................ 2704 A.P.M. Hexseal Corp. ................................ 2744 APR Electronics, Inc. ................................ 2822 Arc Electronics Associates, Inc. .................. 1921-1926 Ace Engineering & Machine Co., Inc. ............ 3928 Acro Products Corp. .................................. 3930 Acton Labs., Inc. ..................................... 4130 Advanced Measurement Instruments, Inc. ....... 4046 Advanced Vacuum Products, Inc. .................. 2607 Ad-Yu Electronics Labs., Inc. ...................... 3612 Aeropulse Corp. ....................................... 4120 Aetna Electronics Corp. ............................. 4123 Affiliated Manufacturers, Inc. ..................... 4131 AGASTAT Timing Instruments ....................... 2345 Airborne Instruments Lab. .......................... 2349 Airborne Instruments Lab. .......................... 3802-3810 Aircom, Inc. .......................................... 1108 Airco Electronics, Inc. ............................. 
1205 Airparc Electronics Inc. ............................ 2906 Aladdin Industries, Inc. ............................. 1924 Alberco Corp. ......................................... M-12 Allen Elect. & Impulse Recording Equip. Co. .... 1611 Allen Products Corp. ................................. 1618-1915 Allen Papers and Engineering Co., Inc. .......... 1909 Alford Manufacturing Co. ........................... 1716-1718 Alfred Electronics .................................... 3030 Allen-Bradley Co. ..................................... 2313-2318 Allied Chemical Corp. ............................... 4303-4305 Allied Control Co., Inc. ............................ 2905-2907 Allison Corporation, Inc. ........................... 3407 Alloys Unlimited Inc. ............................... 4145 Alpha Metals, Inc. ................................... 4328 Alpha Wire Corp. ..................................... 1326-1328 Alpha Engineering Co. .............................. 4329 American Aluminum Co. ............................. 4041 American Electrical Heater Co. ..................... 4033 American Electronic Laboratories, Inc. .......... 3045 American Enka Corp. .................................. 4301 American Lava Corp. .................................. 4401 American Metal Products Co. ....................... 4285 American Optical Co. ................................ 3008-3010 American Silver Co., Inc. ........................... 4224-4226 American Smelting and Refining Co. .............. 4005 Amperex Electronic Corp. ........................... 2222-2224 Amphenol-Borg Electronics Corp. .................. 1802-1810 1901-1910 Amplex Electronics, Inc. ........................... M-28 Anchor Alloys Inc. ................................... 1921 Andersen Corp. ....................................... 1502-1504 Anheuser-Busch, Inc. ................................ 4311 Angelica Uniform Co. .................................. 2960 ARRA (Antenna & Radome Research Assoc.) .......
1929-1930 Antenna Systems, Inc. .............................. 2527-2541 Arthur Ansley Manufacturing Co. .................. 1820 ANT-LAB, Inc. ........................................ 3223-3225 Applitek Research Inc. ............................. 2339 The Arnold Engineering Co. ........................ 2315-2319 Arte Engineering Co. ............................... 4020-4022 Arwood Corp. ......................................... 4115 Assembly Products, Inc. ............................ 3916-3918 Associated American Winding Machine Co., Inc. .. 4410-4412 Associated Testing Laboratories, Inc. ............ 3927-3929 Audio Devices, Inc. .................................. 2227 Augat, Inc. ........................................... 2229 Automatic Electric Sales Corp. .................... 1908-1910 Automatic Metal Products Corp. .................... 1911 Autowires Inc. ....................................... 1903 Avco Corp. ........................................... 3840-3842 Avnet Electronics Corp. ............................ 3803-3807 Axel Electronics, Inc. .............................. 1221 B B & F Instruments, Inc. ............................. 3123 B & K Manufacturing Co. ............................ 3043 Babcock Electronics Corp. .......................... 1005 Ball-Metalco, Inc. .................................... 3210-3213 Balco Research Laboratories, Inc. .................. 2338 Ballantine Laboratories, Inc. ...................... 3502-3504 Barber-Colman Co. .................................... 2242-2244 Many of the scientific developments that will shape tomorrow's world are germinating right now in New Jersey's more than 500 industrial research institutions. The smaller manufacturer who does not have his own research staff can easily find top-notch facilities and personnel available close by to help work out his problems.
Because of its great contributions in such fields as electricity, electronics, chemistry, metallurgy and aviation, New Jersey has been called "the cradle of industrial research". Your executive and technical people will find a stimulating environment here, and ample opportunity for advanced study. Write for our 40-page "New Jersey Industrial Guide". Department of Conservation and Economic Development Promotion Section 951-U, 520 East State Street, Trenton 25, New Jersey NEW JERSEY Bureau of Commerce, in the geographic center of the world's richest market CIRCLE 318 ON READER SERVICE CARD electronics • March 15, 1963 Accepted by industry as the quality line of Coaxial cables. Conform to Military Specifications including MIL-C-17C—or your own special requirements. Send for complete Coaxial Cable catalog. CHESTER CABLE CORP, CHESTER, NEW YORK a subsidiary of TENNESSEE CORPORATION CIRCLE 203 ON READER SERVICE CARD Now you can mark each wire or piece of plastic tubing with its own circuit number... quickly... economically, right in your own plant. You reduce wire inventories because you need only one color of wire for as many circuits as necessary. Simplify your assembly methods and speed production with the same machine that has proven so successful in the aircraft and missile field. Write for details. KINGSLEY MACHINES 850 Cahuenga • Hollywood 38, Calif. See us in Booth 4232—IEEE Show—March 25-28 CIRCLE 204 ON READER SERVICE CARD Here's how Atlee spells RELIABILITY BERYLLIUM COPPER COMPONENT HOLDERS To even the most critical component mounting problems, Atlee's 100-300* series Component Holders bring the added assurance of functional superiority. When checked against holders made of conventional materials Beryllium Copper Component Holders are . . . always superior . . . superior all ways.
✓ Tensile Strength ✓ Electrical Conductivity ✓ Thermal Conductivity ✓ Corrosion Resistance ✓ Wear Resistance Atlee component holders and clips are ideal for mounting capacitors, resistors, relays, wires, cables, tubing and related components against shock and vibration; accommodating component diameters from .175" to 3.00". Atlee's contour design automatically increases holding power as environmental stress increases. Finishes Available: Cadmium Dichromate, Silver Dalcoat, Silver, Nickel, Black Matte, Hot Tin Dip, Electro Tin, Dalcoat B (a dielectric), and Natural. For further information contact our Engineering Department or request our Application Data Sheet. *Beryllium Copper atlee corporation 2 LOWELL AVE. • WINCHESTER, MASS. CIRCLE 319 ON READER SERVICE CARD Boeing has been awarded primary developmental, building and test responsibility for NASA's Saturn S-IC advanced first stage booster. Aero-Space Division's new Saturn Booster Branch has immediate, long-range openings offering professional challenge and rapid advancement to graduate engineers and scientists. This new Saturn program is expanding rapidly, providing unique advancement advantages and ground-floor opportunities to properly qualified Structural Design, Electronics/Electrical, Propulsion, Aeronautical, Cryogenics, Systems Test, Thermodynamics, Mechanical Design, Industrial and Manufacturing Engineers, as well as to graduate Physicists and Mathematicians. Salaries are commensurate with all levels of education and experience. Minimum requirements are a B.S. degree in any applicable scientific discipline. Boeing pays travel and moving allowances to newly-hired engineers. Assignments are in New Orleans as well as in Huntsville, Alabama. Positions with Saturn, and other missile and space programs at Boeing — including the solid-fuel Minuteman ICBM and X-20 Dyna-Soar boost-glide vehicle — are also available at Seattle, Cape Canaveral and Vandenberg AFB, California.
Send your resume to Mr. L. Wendell Hays, The Boeing Company, P. O. Box 26088 - ECR, New Orleans, La. An equal opportunity employer. NEW SATELLITE HEART LIVES LONGER IN SPACE World's First TRUE Brushless DC Motor Has Highest Efficiency/Weight Ratio - The new Sperry Farragut self-starting brushless direct current servomotor* offers highest efficiency/weight ratios, longest life and dependable static-free operation. It is the only TRUE brushless direct current motor made for battery operation. Others require inversion units. This solid-state commutator motor lives longer in space because there are no brushes to wear out... gone are the friction, arcing and wear of switching commutation necessary for conventional D.C. motors. *Patent Pending FREE TECHNICAL REPORT TECHNICAL BULLETIN SPERRY-FARRAGUT SERVOMOTOR BRUSHLESS DC Another Space-Age First From SPERRY FARRAGUT COMPANY DIVISION OF SPERRY RAND CORPORATION BRISTOL, TENNESSEE Write today to Dept. EL-3 Characteristics of Model Being Made for NASA's Goddard Space Flight Center Efficiency — 50% Torque — .67 in.-oz. @ 3,000 rpm Weight — 7.8 oz. Elgin National Watch Co. ............ 2519 Ellen Instruments, Inc. ............. 3838 Embree Electronics Corp. ........... 3948 Emerson & Cuming, Inc. ............ 3106 Emilco, Inc. .......................... 3505-3506 Engelhard Industries, Inc. .......... 4403-4411 Engineered Electronics Co. .......... 1518 English Electric Valve Co., Ltd. .... 2425 Epee Products, Inc. .................. 2887 Epoxy Products ....................... M-30 Epco Inc. .............................. 3311-3313 Equipe Electronics Corp. ............ 4310 State Labs., Inc. ...................... 2909 L. M. Ericsson Co. .................... 2909 Eureka Engineering Co. ............... 3439 Eugene Engineering Co., Inc. ........ 4325 Exact Electronics, Inc. .............. 3659 F FMT Corp. ............................. 4044 Fairchild Camera & Inst. Corp. ...... 2701-2715 Falcon Co. 
............. 4321-4323 Fansteel Metallurgical Corp. .......... 4050-4052 Federal Tool Engineering Co. ......... 4428 20 to 200 D.P. Send your prints for quotations - SPURS - HELICALS - WORM AND WORM GEARS - STRAIGHT BEVELS - LEAD SCREWS - RATCHETS - CLUSTER GEARS - RACKS - INTERNALS - ODD SHAPES A few of the many varieties of straight bevels we are regularly producing are shown above. Tell us your needs. THE FINEST IN GEARS Beaver Gear Works Inc. 1021 Parmele Street, Rockford, Illinois CIRCLE 320 ON READER SERVICE CARD Magneline® THE INDICATOR WITH INHERENT MEMORY Two new series for digital readout, ideal for multiplex applications SERIES 14000—FOR SOLID STATE LOGIC Character Size ........................................... 3/32" x 1/4" No. of Characters ........................................ Up to 11 Leads .................................................. 11 plus a common Watts .................................................... 2.4 SERIES 15000—FOR RELAY LOGIC Character Size ........................................... 5/16" x 1/4" No. of Characters ........................................ Up to 10 Leads .................................................. 5 plus a common* Watts .................................................... 1.3—1.7 *Requires switching of lead in combination with reversal of polarity to change indicator. Units hold last reading without power. Totally enclosed, self-stacking housing for front or rear mounting. Jewel bearings, only one moving part. Standard voltages 6, 12, 24, or 28 V.D.C. Readability 12 feet at normal room lighting. Options include special voltage, special characters, and internal lighting for dark room applications.
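The Series 15000 footnote above implies a simple drive scheme: one of five leads, energized with either supply polarity, latches one of ten characters (5 leads × 2 polarities = 10 states). A minimal sketch of that enumeration, where the lead names and digit assignment are illustrative assumptions and not Patwin's actual wiring:

```python
from itertools import product

# Hedged sketch of the Series 15000 drive scheme: selecting a lead and a
# supply polarity yields 5 x 2 = 10 latching states, one per character.
# Lead labels and the character assignment below are assumptions.
LEADS = ["L1", "L2", "L3", "L4", "L5"]   # five signal leads (plus a common)
POLARITIES = ["+", "-"]                  # normal and reversed drive polarity

def drive_states():
    """Enumerate every (lead, polarity) combination the indicator can latch."""
    return list(product(LEADS, POLARITIES))

states = drive_states()
assert len(states) == 10                 # ten characters from five leads
for digit, (lead, polarity) in enumerate(states):
    print(f"character {digit}: energize {lead} with polarity {polarity}")
```

Because the indicator holds its last reading without power, each of these ten states only needs to be pulsed, which is why the ad can quote so low a wattage for relay logic.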
Write or teletype (203-753-9341) for free literature PATWIN ELECTRONICS A DIVISION OF THE PATENT BUTTON COMPANY WATERBURY 20, CONNECTICUT CIRCLE 207 ON READER SERVICE CARD MINIATURE FREQUENCY-ACTUATED REMOTE SELECTOR SWITCHES These miniaturized resonant reed selectors are designed for remote signaling and control in multiplex telemetry, mobile communications and similar applications. Each selector will respond to one of 40 audio frequencies spaced at 15 cps intervals from 262.5 to 847.5 cps, and actuate signals, counters, controls or other devices. Normal drive current is 2.5 ma. and driving power needed is only 1.8 mW. Selectivity is ±1.5 cps of calibrated frequency and stability is within ±0.5 cps of calibrated frequency from −10 to +50°C. Fujitsu resonant reed selectors are particularly useful where space and weight are at a premium. A new electro-mechanical design permits both the reed and driving coil to be sealed in a case only 36mm long and 12.6mm in diameter. For detailed specifications and applications information contact our nearest representative. FUJITSU LIMITED Communications and Electronics Tokyo, Japan Represented by: U.S.A.: HAR-WELL ASSOCIATES, INC., Southbury, Connecticut, Phone: 264-8222 THE NISSHO PACIFIC CORP., 120 Montgomery St., San Francisco 4, California, Phone: Yukon 2-7901, 7906 Canada: NISSHO (CANADA) LTD., 100 University Avenue, Toronto, Phone: EMpire 2-4794 United Kingdom: WALMORE ELECTRONICS LIMITED 11-15 Betterton Street, Drury Lane, London W.C. 2, Phone: TEMplebar 0201-5 Germany: NEUMULLER & CO. GMBH, 8 München 13, Schraudolphstr 2a, Phone: 29 97 24 CIRCLE 208 ON READER SERVICE CARD HELP YOUR POST OFFICE TO SERVE YOU BETTER BY MAILING EARLY IN THE DAY NATIONWIDE IMPROVED MAIL SERVICE PROGRAM the finest precision coaxial connectors* *TM® (miniaturized TNC) General RF Fittings, Inc. 
702 BEACON STREET, BOSTON 15, MASSACHUSETTS Telephone: 617 267-5120 CIRCLE 321 ON READER SERVICE CARD Around the world it's KEW MODEL F-98 MODEL EW-16 MODEL P-22 MODEL VO-38 MODEL VR-2P MODEL TK-20A MODEL FL-202 MODEL PV-200 MODEL TR-A SWR & RF WATTMETER KYORITSU ELECTRICAL INST. WORKS, LTD. No. 120, Nakane-cho, Meguro-ku, Tokyo, Japan Cable Address: "KYORITSUKEIKI TOKYO" Tel: (717) 0131 ~ 5 ~ 0151 ~ 3 CIRCLE 322 ON READER SERVICE CARD solve speed and control problems Pickups convert mechanical motion into AC voltage without contact These standard magnetic pickups generate voltage and power without the aid of additional power sources. They actuate electronic or electrical circuitry without amplification in most cases. No extra bearings, mechanical linkages, etc., are required. A wide variety of stock models are available for immediate delivery, and unlimited variations can be engineered on special order. CATALOG MP-562 shows varied applications, characteristics and easy selection guide... write today! ELECTRO PRODUCTS LABORATORIES 6125-F WEST HOWARD, CHICAGO 48 (NILES), ILLINOIS PHONE: 647-6125 Proximity Switches • Magnetic Pickups • Pres-on Controls • Tachometers Dynamic Micrometers • DC Power Supplies CIRCLE 209 ON READER SERVICE CARD THE PERFECT PACKAGE PRESENT APPLICATIONS: VOLTAGE DIVIDERS REFERENCE OR RATIO STANDARDS COMPUTER APPLICATIONS LADDER TYPE CONVERTERS SUMMING NETWORKS MISSILE CHECKOUT SYSTEMS DIGITAL TO ANALOG CONVERSION FOR HIGH PERFORMANCE APPLICATIONS KELVIN CUSTOM DESIGNED RESISTANCE NETWORKS Our experienced engineers will answer your application inquiries accurately and promptly. 
Send specifications or requirements to: Representatives in principal cities KELVIN ELECTRIC COMPANY 5907 Noble Ave., Van Nuys, Calif., Triangle 3-3430 New York: Yonkers, 916 McLean Ave., Beverly 7-2500 CIRCLE 323 ON READER SERVICE CARD Electrical Characteristics Available: - Nominal resistance tolerances to ± .005% - Resistance ratio tolerances as close as ± .002% - Long term resistance stability of ± .002% per year. - Low reactances to provide rise times as low as 50 nanoseconds. - Temperature coefficients of resistors track as close as 17PPM/°C from −55°C to +125°C. Kelvin has specialized for years in the custom design and production of resistance networks to suit individual customer requirements. Recognized, high quality Kelvin precision wire-wound resistors are designed to obtain the ultimate in both accuracy and stability. Units perform in airborne and missile environments with altitude, shock, vibration, humidity and wide temperature ranges. Networks are packaged in hermetically sealed cases or encapsulated in epoxy resin to meet exact mechanical specifications. NIMS NATIONWIDE IMPROVED MAIL SERVICE PROGRAM For Better Service Your Post Office Suggests That You Mail Early In The Day! Send for FREE Catalog 28 pages of professional electronic equipment in kit and wired form—for Lab...Line...Home EICO, 3300 N. Blvd., L.I.C. 1, N. Y. E-3A ☐ Send free 32-page catalog & dealer's name. ☐ Send new 36-page Guidebook to Hi-Fi for which I enclose 25c for postage & handling. Name ____________________________________________________________ Address ___________________________________________________________ City ________________________ Zone ______ State ________________ EICO 3300 N. Blvd., L.I.C. 1, N. Y. Export Dept., Roburn Agencies 431 Greenwich St., N.Y. 13, N.Y. 
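The Kelvin characteristics above quote both an element tolerance (± .005% nominal resistance) and a tighter ratio tolerance (± .002%). The worst-case arithmetic for a two-resistor divider, one of the "voltage divider / ratio standard" applications the copy lists, shows why the two figures differ; the 10k:10k topology below is an illustrative assumption, not Kelvin's design procedure:

```python
# Hedged sketch: worst-case output-ratio error of a two-resistor divider
# built from elements with the ad's +/-0.005% nominal tolerance. The
# divider values are illustrative assumptions.
def divider_ratio(r_top, r_bottom):
    """Vout/Vin = R_bottom / (R_top + R_bottom)."""
    return r_bottom / (r_top + r_bottom)

def worst_case_ratio_error(r_top, r_bottom, tol):
    """Check all four tolerance corners for the largest relative ratio shift."""
    nominal = divider_ratio(r_top, r_bottom)
    worst = 0.0
    for s_top in (1 - tol, 1 + tol):
        for s_bottom in (1 - tol, 1 + tol):
            ratio = divider_ratio(r_top * s_top, r_bottom * s_bottom)
            worst = max(worst, abs(ratio - nominal) / nominal)
    return worst

err = worst_case_ratio_error(10_000.0, 10_000.0, 0.00005)
print(f"worst-case ratio error: {err * 100:.4f}%")
```

With uncorrelated ± .005% elements the ratio can shift by about ± .005%, so a ± .002% ratio tolerance can only come from matching and trimming the network as a unit, which is also what the quoted 17 PPM/°C tracking figure reflects.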
See Us at IEEE Booth 3101 CIRCLE 210 ON READER SERVICE CARD March 15, 1963 • electronics WHAT'S NEW FROM GARLOCK Electronic Products CHEMELEC® Printed Circuit Test Points reflect the latest thinking in advanced design and precision manufacture. TFE insulator material provides exceptional dielectric properties, chemical inertness, resistance to temperatures from -110°F to +500°F. Beryllium copper contacts and brass brackets are silver plated and gold flashed. Flashover (Short time, sea level) — 3000 VRMS. Capacitance (Frequency 1000 KC) — .25 MMFD. Available, in all ten RMA colors, from local stock. Write for AD-169. CHEMELEC® Miniature Tube Sockets for high frequency applications, such as radar equipment and wide band oscilloscopes. FEP body insulating material has outstanding impact strength, resists temperatures from -395°F to +400°F. Water absorption is zero. All metal parts precision made, and plated to JAN specifications. Furnished in 7 and 9 pin Shield Base Type, 7 and 9 pin Saddle Base Type, and 9 pin Bottom Mounting Type. Available from local stock. Write for AD-169. For full information, contact your Garlock Electronic Products distributor or representative. Or, write GARLOCK ELECTRONIC PRODUCTS, GARLOCK INC., Camden, New Jersey. GARLOCK SEE OUR BOOTH 2814-2816 AT THE IEEE SHOW CIRCLE 211 ON READER SERVICE CARD 211 The advanced design and precision construction of Ainslie antenna systems and associated equipment bear testimony to nearly two decades of microwave communication, detection and identification experience. By virtue of complete design-to-delivery capabilities and facilities, Ainslie Corporation offers its customers not only comprehensive standard lines of mesh, spun and horn antennas, but also the flexibility required to develop custom designed prototypes for on-schedule delivery. 
See us at the IEEE Show Booth #1819 Ainslie CORPORATION 531 Pond Street Braintree 85, Massachusetts Acoustical Components of Superior Quality JAPAN PIEZO supplies 80% of Japan's crystal product requirements. STEREO CARTRIDGE Crystal — "PIEZO" Y-130 XTAL STEREO CARTRIDGE At 20°C, response: 50 to 10,000 c/s with a separation of 16.5 db. 0.6 V output at 50 mm/sec. Tracking force: 6 ± 1 gm. Compliance: 1.5 × 10⁻⁶ cm/dyne. Termination: 1 MΩ + 150 pF. Write for detailed catalog on our complete line of acoustical products including pickups, microphones, record players, phonograph motors and many associated products. JAPAN PIEZO ELECTRIC CO., LTD. Kami-renjaku, Mitaka, Tokyo, Japan CIRCLE 324 ON READER SERVICE CARD March 15, 1963 • electronics PHASE METERS **TYPE 202** - Phase Range: 0-19, 0-29, 0-45, 0-100, 0-20° up to 360° - Voltage Range: 0.01v up to 100v in ten ranges. - Frequency Range: 15 cps to 500 mc. - Accuracy: ±0.02° or ±2%. - Price: $698 **TYPE 405 SERIES** - Frequency Range: 1 cps to 500 kc without adjustment. - Voltage Range: 0.3v to 70v without amplitude adjustment. - Phase Range: 0-129°, 0-258°, 0-390° up to 360°. - Accuracy: ±0.25% relative, 1% or 2% absolute. - Price: $668 and up. Write for literature. AD-YU ELECTRONICS INC 249 TERHUNE AVE., PASSAIC, N. J. GRegory 2-5622 CABLE: AD-YU PASSAIC Visit Our Booth No. 3612 at the IEEE Show CIRCLE 325 ON READER SERVICE CARD Coils for Contact Capsules 1 to 5 Reeds .095 — .215 dia. 6 to 48 V.D.C. Lead or Pin Term. Also Available with Reeds. Write for Bulletin and Prices ELECTRICAL COIL WINDINGS Wire sizes #6 to #56, Classes A, B, F and H. Complete engineering service available. Coto-Coils COTO-COIL COMPANY INC. 65 Pavilion Avenue, Providence 5, R. I. 
CIRCLE 326 ON READER SERVICE CARD Unusual Professional Openings Exist At Atomic Energy Research Laboratory for Engineers and Physicists to participate in the design, development, construction and operation of large particle accelerators and liquid hydrogen bubble chambers. Specific openings include: - Design and developmental experience and interest in high and low level electronics, RF systems, microwaves and solid state devices for power and control. - The operation and development of a 3 BEV Synchrotron, a 33 BEV Synchrotron and an 80" liquid hydrogen bubble chamber. These facilities include a wide range of electronic systems, sophisticated high power systems and cryogenic devices. A broad background in electronics as well as a solid understanding of physics fundamentals is necessary. - Design, development and operation of electrical and electronic instrumentation and control equipment for dynamic devices operating in the cryogenic region. Experience in both digital and analogue control systems is preferred. - Design, development and operation of advanced devices for analysis of photographic data, using "ON-LINE" operation of an IBM 7090 or similar computer. Experience with digital circuitry, computer hardware and interface equipment is essential. NEW YORK CITY INTERVIEWS DURING I.E.E.E. SHOW AT THE COLISEUM OFFICE BUILDING (MARCH 25 TO 28) For Interview Appointment With Dr. G. K. Green And Staff, Phone COlumbus 5-2090 during the show or send resume to his attention at Building 185 BROOKHAVEN NATIONAL LABORATORY ASSOCIATED UNIVERSITIES, INC. • UPTON, LONG ISLAND, NEW YORK An Equal Opportunity Employer ATTENTION: ENGINEERS, SCIENTISTS, PHYSICISTS This Qualification Form is designed to help you advance in the electronics industry. It is unique and compact. Designed with the assistance of professional personnel management, it isolates specific experience in electronics and deals only in essential background information.
The advertisers listed here are seeking men with professional experience. Fill in the Qualification Form below. STRICTLY CONFIDENTIAL Your Qualification Form will be handled as "Strictly Confidential" by ELECTRONICS. Our processing system is such that your form will be forwarded within 24 hours to the proper executives in the companies you select. You will be contacted at your home by the interested companies. WHAT TO DO 1. Review the positions in the advertisements. 2. Select those for which you qualify. 3. Notice the key numbers. 4. Circle the corresponding key number below the Qualification Form. 5. Fill out the form completely. Please print clearly. 6. Mail to: Classified Advertising Div., ELECTRONICS, Box 12, New York 36, N. Y. (No charge, of course). (Continued on page 223) IMMEDIATE OPENINGS WITH GENERAL DYNAMICS/ELECTRONICS The large number of diversified development contracts now in the house at General Dynamics/Electronics provides immediate assignments for additional professional personnel in the following disciplines: SYSTEMS ENGINEERING SENIOR DESIGN ENGINEER. To assist in evaluation of complex electronic reconnaissance systems. Requires experience in 2 or more of the following: digital, RF, pulse, audio, CRT, photorecorders, magnetic recorders, pulse multiplex and frequency multiplex. SENIOR ENGINEER. With broad knowledge of Aerospace Ground Electronic design. Will analyze proposed electronic subsystems to establish test requirements and determine equipment needs. Experience in Air Force shop or Naval carrier installations desirable, with emphasis on equipment layout, intercabling, work flow analysis, and operational and calibration procedures. PROJECT ENGINEERS. To supervise design and integration of test equipments and test procedures. Should be familiar with all types of test equipment and techniques.
Knowledge of more than one of the following areas: flight control systems, radar, HF-UHF navigation and communication equipment, microwave equipment, antenna systems and electronic countermeasures. DIGITAL EQUIPMENT DESIGN SENIOR ENGINEERS. To supervise and do design work on MODEMS, logic and input/output devices for data communication equipment used in industrial and military systems. Work includes transistor circuit design, logic design, modulation techniques for radio and wire line data transmission, mechanical design of input/output devices, packaging design and integration of complete communications systems. CIRCUIT DESIGN ENGINEERS. With experience in the design of transistorized logic circuits, pulse generators and other digitally controlled circuits such as numerical indicators. MAINTAINABILITY Long Range Programs in Development/Test/Evaluation/Production of Aerospace Electronic Equipment for: PRINCIPAL ENGINEER. To establish and operate an elite group — experience with all phases of MIL-M-26512; maintenance engineering analysis; principal practices and techniques in the design, maintenance and use of Aerospace Electronic equipment. — Supervisory Position. SENIOR ENGINEERS. To implement maintainability tasks — experience with design principles, practices and techniques on Aerospace Electronic hardware; analysis, control and demonstration means; familiar with aerospace ground equipment specifications and Government maintenance procedures. ENGINEERS. To maximize maintainability on Aerospace Electronic Equipment; perform analysis, monitor, audit and review designs; coordinate demonstration testing, simulations; reporting and documentation responsibilities. Please send your resume to Mr. R. W. Holmes, Dept. 22. RF EQUIPMENT DESIGN MICROWAVE ENGINEERS. Experienced in the design of signal generators and receivers in the following frequency bands: L, S, C, T, Ku, Ka. Should also know techniques for remote control of frequency and signal amplitude. ENGINEERS.
Experienced in the design of RF and microwave receivers, digital display circuits, data handling and CRT displays including storage tube circuits. ENGINEERS. Experienced in the design and development of solid state receivers for reconnaissance telemetry, Doppler and communication applications. Experience with tracking filters, phase lock, and synthesizer circuits desirable. LOW FREQUENCY DESIGNERS. Experienced in the design of audio and sweep signal generators and servo systems test equipment. Senior engineers are also required with experience in the design of LF receivers and transmitters. HF-UHF ENGINEERS. With experience in design of signal generators, using both transistorized and vacuum tube circuitry. Knowledge of techniques for digital selection of frequency, analog frequency synthesis and remote control of signal amplitude is required. SENIOR ENGINEERS. Experienced in the design and development of single side band receivers and transmitters. RELIABILITY Long Range Programs in Aerospace Electronic Equipment. Positions available in staff functional areas and state-of-the-art systems programs for: PRINCIPAL ENGINEERS. To provide reliability technical group support and program project task support—experience in reliability activities of the following: Analysis, Design Review, Surveillance, Audit, Sub-Contractor Liaison, Apportionment, Allocation and Assessment. Responsible for the application of techniques on Aerospace Electronic programs and general system methods and procedures. Staff and program positions available.—Supervisory. SENIOR ENGINEERS. To implement reliability engineering and reliability services group tasks. Experience required in Aerospace Electronic equipment reliability activities. Positions available in all reliability areas including: Analysis, Review, Audit, Surveillance, Monitoring, Sub-Contractor Liaison, Statistical Demonstration Testing Studies, etc. Staff and program positions available. ENGINEERS.
To perform reliability tasks of all kinds on Aerospace Electronic equipment. ENGINEERS · SCIENTISTS measure the MAGNITUDE of the new range technology ...where a satellite may be just one element in a vast instrumentation system Using a satellite to assist in monitoring flight performance of a Manned Space Vehicle is only one of the forward-looking projects under study by the Advanced Planning Group of PAN AM's Guided Missiles Range Division at Cape Canaveral. Since 1953, the need to match range instrumentation systems with the constantly advancing capabilities of new missiles and space vehicles has spurred PAN AM to create a whole new range technology for the Atlantic Missile Range. TODAY THE EFFORT IS ACCELERATING. PLANNING IS UNDER WAY AT 3 TIME LEVELS. 1. To meet the specific needs of scheduled launchings immediately ahead. 2. To meet the requirements of launch programs of the next 5 years. 3. To prepare for manned lunar flights and work as far into the future as the late 70's projecting range technology for interplanetary vehicles now existing in concept only. YOU ARE INVITED TO INQUIRE ABOUT THE FOLLOWING OPPORTUNITIES: Systems Engineers—EE's, Physicists capable of assuming complete project responsibility for new range systems. Instrumentation Planning Engineers—EE's, Physicists to be responsible for specific global range instrumentation concepts. Advance Planning Engineers—EE's, Physicists to evaluate and project the state-of-the-art in all applications of range instrumentation. Experience in one or more of these areas: Pulse radar, CW techniques, telemetry, infrared, data handling, communications, closed circuit TV, frequency analysis, command control, underwater sound, timing, shipboard instrumentation. Why not write us today, describing your interests and qualifications in any of the areas above. Address Dr. Charles Carroll, Dept. 28C-3 Pan American World Airways, Inc., P.O. Box 4465, Patrick Air Force Base, Florida. 
GUIDED MISSILES RANGE DIVISION PATRICK AIR FORCE BASE, FLORIDA AN EQUAL OPPORTUNITY EMPLOYER PAN AM is now creating the range technology for launches of DYNA-SOAR, GEMINI, APOLLO, ADVANCED SATURN BOOSTERS March 15, 1963 • electronics We believe in versatile engineers here — men with specialties, yes — but with broad knowledge in many allied areas. Those who have an especially strong interest in diversification are encouraged to grow into creative systems engineering. And the contacts between engineering operations and laboratory research are close. For example, circuit design engineers work hand-in-hand with applied physicists in extending the state of the art in molecular circuitry. An instance of the success of this cooperation: Norden's solid state servo amplifier, which produces a significant power level of 1.5 watts. (Work to obtain even higher outputs is underway.) Climate for Achievement at Norden. Engineers and scientists find a working atmosphere at Norden that encourages continued learning and growth. Here, staff members work on problem-solving teams, gaining broad exposure to many technical aspects of a project. Opportunities for advanced study at nearby academic institutions are open to qualified engineers under our graduate program. Unsurpassed test and research facilities are available. And Norden's location near Long Island Sound is outstandingly attractive and convenient, easily reached from Northern New Jersey, Westchester, New York City, Long Island, and of course, all of Connecticut. Opportunities at all technical levels on programs in the areas of submarine, helicopter, fixed wing aircraft and space vehicle display integration: VIDEO CIRCUITS • CATHODE RAY TUBE DRIVE CIRCUITS • HIGH SPEED ANALOG & DIGITAL PROCESSING • VIDEO SIGNAL SYNTHESIS • RADAR & TV SYNCHRONIZERS • HIGH VOLTAGE POWER SUPPLIES Also openings for: SEMICONDUCTOR DEVICE SCIENTISTS & ENGINEERS. R&D of silicon functional electronic blocks. 
Requires experience with oxide masked multi-diffused structures and knowledge of transistorized circuitry. SYSTEMS ENGINEERS. Aerospace applications of military ground support equipment; and modern microwave and optical radar systems. RELIABILITY ENGINEERS. Review system and subsystem tests for design approval. Will recommend design modifications. EQUIPMENT DESIGN ENGINEERS. Knowledge of stress analysis, heat transfer, high density electronic packaging. Please forward your resume to Mr. James E. Fitzgerald, Employment Dept., Helen Street, Norwalk, Connecticut. Look what EE's are doing at ELECTRIC BOAT • Design of Special Instrumentation for Measurement of Acoustic & Vibration Data • Design & Installation of Interior Communication Systems, Navigation Systems, Ship Control Systems, Depth Control Systems, Steering & Diving Devices • Application and Systems Engineering of Radio, Radar, Sonar & Countermeasures Systems & Components • Design and Installation of Electric Power Plants & Distribution Systems • Quality/Reliability Control & Assurance • Nuclear Power Plant Systems Schematics Review • Advanced Circuit Design • Electronic Systems Engineering • Missile Fire Control, Guidance and Checkout Systems & Equipment • Installation and Test of Reactor Plant Auxiliary Power Supplies • Integration of Control and Instrumentation Systems • Navigation Systems and Equipment • Procurement • Signal Systems Analysis • Vendor Product Application Design • Electrical Power & Control Systems & Component Design • Test Development & Instrumentation Design • Vendor Performance Analysis • Process Control Engineering & Instrumentation • Sound, Shock & Vibration Analysis As a world of technology in miniature—incorporating missile launching systems, a nuclear propulsion plant, and life support systems—the nuclear submarine is an engineering challenge of the highest order. 
The Electrical and Electronic Engineer working at Electric Boat has a unique opportunity for professional development, not only in his own specialty but through broad knowledge gained in the unity of all technologies. Your resumes are invited. Please address Mr. Peter Carpenter. GENERAL DYNAMICS ELECTRIC BOAT Groton, Connecticut AN EQUAL OPPORTUNITY EMPLOYER EXCEPTIONAL ENGINEERING POSITIONS BENDIX—KANSAS CITY ELECTRONIC TEST EQUIPMENT DESIGN ENGINEERS To develop, design and supervise construction of special electronic test instruments, and to direct the technical activities of others in the organization. These positions require familiarity with test equipment problems and inspection techniques. Past association with military electronic equipment or experience in precision measurement of production items would assist you in qualifying for these positions. EE degree required. ELECTRONIC MANUFACTURING ENGINEERS For these positions we would prefer experienced engineers with a degree or equivalent experience in light product tooling and machining methods or electronic encapsulation packaging. Responsibilities include determining manufacturing processes, procedures, approving tool designs and capital equipment and facilities planning. COMPONENTS APPLICATION SPECIALISTS EE or Physics degree with minimum of 2 years' experience in the application of one or more of the following: Semiconductors, relays, motors, switches, gas filled tubes, or other electronic components. As a specialist you will work as a consultant with engineering design groups, quality engineers, purchasing and manufacturing personnel on component problems. PLANT LAYOUT ENGINEERS These positions require EE with power option or equivalent experience in the preparation of general plant layouts of electrical distribution for lighting and equipment installation. 
This activity will involve project responsibility for engineering and design, including coordination with maintenance following the project to completion. YOUR FUTURE AT BENDIX! The Kansas City Division of Bendix, long-term prime contractor for the U.S. Navy, offers a pleasant, stimulating environment for professional growth. Kansas City is also a delightful place to live, often called America's most beautiful city. Living costs are moderate; recreational, cultural and educational facilities are abundant. Close suburban living only minutes away, no traffic problems. For prompt attention, address your confidential inquiry to: MR. K. L. BEARDSLEY Technical Personnel Representative BENDIX CORPORATION Box 303-HL Kansas City 41, Mo. An Equal Opportunity Employer KANSAS CITY DIVISION DURING IEEE INVESTIGATE A POSITION WITH AIR FORCE SYSTEMS COMMAND OR AIR FORCE LOGISTICS COMMAND THAT OFFERS Professional Challenge Recognition Career Advancement Financial Assurance At 22 installations throughout the nation Engineers and Scientists of Air Force Systems Command and Air Force Logistics Command are discovering, defining and solving important aerospace problems. If you possess a degree and your competence demands more than routine assignments . . . if you seek the opportunity to gain professional stature and recognition . . . and if you have a desire to contribute to important, long-term programs, you are invited to seriously consider the openings listed below. All of these positions offer full Civil Service status and benefits. PHYSICISTS—Must have extensive knowledge of the physics of plasmas, fluid media (including the upper atmosphere) and electromagnetic scattering theory. ELECTRONICS ENGINEERS—With a background in development, fabrication, installation, maintenance and experimental operation of large radar sites.
ELECTRONIC ENGINEERS (INSTRUMENTATION)—Must have background in range instrumentation with at least five years in missile testing. ELECTRONIC ENGINEERS—Should possess a background in training and engineering planning and the ability to recognize deficiencies and incompatibilities in plans. ELECTRONIC ENGINEERS—With necessary experience to maintain technical surveillance over contractor electrical and electronic engineering activities. ELECTRONIC ENGINEERS—To conduct system and equipment analysis studies to determine the feasibility and applicability of new and novel techniques to range instrumentation systems. ELECTRONIC ENGINEERS (ELECTRO-MAGNETICS)—With ability to conduct analysis and design studies of complex search and height finding equipments. If an interview is inconvenient at this time, you are invited to direct your resume or Civil Service Application (SF 57) in complete confidence to: TO ARRANGE A CONVENIENT INTERVIEW IN NEW YORK CITY DURING IEEE Phone PLaza 2-5110 AFSC-AFLC JOINT PROFESSIONAL PLACEMENT OFFICE Room 401 527 Madison Ave. • New York 22, N.Y. An equal opportunity employer ENGINEERS SYSTEMS - CONTROLS INSTRUMENTATION Positions in a firm of established Consulting Engineers where Engineering is our prime function. Well-trained independent-minded Design ENGINEERS in many fields find satisfying assignments with Sverdrup & Parcel and Associates, Inc. In addition to a general ability to comprehend broad systems design, specialized experience in one or more areas is necessary. If you are qualified in the following fields, contact us at once. TEST INSTRUMENTATION: Provide preliminary and final design on Telemetering Systems, Transducers, Data Acquisition, Transmission and Processing Techniques. AUTOMATIC CHECKOUT SYSTEMS: Check out of Components, System Test Calibration, Test Programming. FACILITY INSTRUMENTATION AND CONTROLS: Propellents, Cooling Water, Gas and Miscellaneous Systems, Communications, and Warning Systems. 
You are invited to submit your qualifications to: SVERDRUP & PARCEL AND ASSOCIATES, INC. ENGINEERS • ARCHITECTS 915 Olive Street, St. Louis 1, Mo. An equal opportunity employer MICROWAVE AND RADIO COMMUNICATIONS ENGINEERS U.S. and Foreign Assignments MICROWAVE ENGINEERS. Responsibilities include systems engineering and/or installation supervision of telecommunications equipment. Ability to conduct and evaluate acceptance tests important. Working acquaintance with site selection and microwave path design helpful. RADIO ENGINEERS. Position requires transmitter and receiver station design, including rehabilitation of domestic and international communications systems. Knowledge of broadcasting, mobile telephone, marine and public safety networks helpful. Television Associates engineers and supervises construction of world-wide communications systems. Generous living allowance provided. Write or call R. J. Rhinehart TELEVISION ASSOCIATES OF INDIANA, INC. MICHIGAN CITY, INDIANA A Subsidiary of Melpar, Inc. An Equal Opportunity Employer ACROSS INTERDISCIPLINARY BOUNDARIES Electronic and Electrical Engineers who prefer the climate of non-routine careers are needed for intermediate and senior levels of responsibility. Immediate openings exist for engineers in the fields of: - Digital Electronics - Communications, including U.H.F.—V.H.F.—Microwave - Information Retrieval - Command and Control - Systems Integration - Data Control - Operations Research Experience in practical applications required. During the I.E.E.E. Show Call Robert Flink in New York, PLaza 2-8774 (evenings 612), from 1:00 p.m. to 7:00 p.m. Or write: Robert Flink, Director of Personnel, 4815 Rugby Ave., Bethesda, Maryland. BOOZ • ALLEN APPLIED RESEARCH, INC. Scientific and Technical Services AN EQUAL OPPORTUNITY EMPLOYER ELECTRONICS National Coverage All Depths SATELLITE—ELECTRICAL COMMUNICATIONS—TELEMETRY Send resume in confidence. Dr. L. I. Gilbertson AEROSPACE PLACEMENT CORP. P.O. Box 2125, Phila.
3, Pa. POSITION VACANT Position for Electrical Engineer with electronics experience available in Philadelphia's Department of Streets. Work includes determining signal sequence for traffic movement, evaluating and recommending necessary electrical equipment, designing and testing circuits, and developing methods and systems for traffic control or traffic flow analysis. Require degree in electrical engineering plus four years experience including two years electronics experience. Equivalent training and experience will be considered. Salary $9272-$10,630. Liberal fringe benefits. Applicants must be U.S. citizens. Apply Director of Recruiting, 792 City Hall, Philadelphia 7, Pa. SELLING OPPORTUNITY WANTED Mfrs' Export Reps with exc. world-wide connect, on their toes around the clock, are eager to boost your sales. Aacor International, 198 Broadway, N. Y. S, BA 7-0482. "Put Yourself in the Other Fellow's Place" TO EMPLOYERS TO EMPLOYEES Letters written offering Employment or applying for same are written with the hope of satisfying a current need. An answer, regardless of whether it is positive or negative, is appreciated. MR. EMPLOYER, won't you remove the mystery about the status of an employee's application by acknowledging all applicants and not just the promising candidates? MR. EMPLOYEE, you, too, can help by acknowledging applications and job offers. This would encourage more companies to answer position wanted ads in this section. We make this suggestion because we realize how helpful cooperation between employers and employees can be. This section will be the more useful to all as a result of this consideration. Classified Advertising Division McGRAW-HILL PUBLISHING CO., INC. 330 West 42nd St., New York 36, N. Y. CHIEF ENGINEER (Electronic Packaging) Our client is a well established Connecticut manufacturer of oceanic data processing systems. They seek a man to direct the packaging design of all equipment in a major expansion of their product lines.
He will have several years' experience in packaging electronic systems and probably an EE or ME degree. This position offers an excellent career opportunity for professional growth and recognition with a top starting salary. Reply in confidence including present earnings to Allen West VEZAN-WEST & CO. Management Consultants 1000 FARMINGTON AVE. WEST HARTFORD 7, CONN. Inquiries are invited concerning other Supervisory and Sr Engineering positions with this company. FREE TRIP IEEE (IRE) SHOW NEW YORK CITY MARCH 25-28TH Electronic Scientists and Engineers you can have an all-expense-paid trip to New York to interview with the nation's electronic companies. Do you qualify? Requirements: BS, MS, or PhD and one year's experience in one or more of these areas: Research — Design — Development — Sales — Marketing — Applications. Air Mail Resume in Complete Confidence—No Obligation Write or Call Collect: Dept E for Full Details Boston: (617) 444-7113 Alan Glour—Technical/Scientific Sid Hopper—Sales/Marketing PERSPECTIVE A PROFESSIONAL PLACEMENT ORGANIZATION Ten Kearney Road, Needham Heights 94, Mass. Highland Ave., Exit 56W Off Rte. 128—Marr Bldg. INTERESTING TO NOTE . . . NEW YORK, N. Y. Feb. 1963: Universal Relay Corp., 42 White St., New York 13, N. Y. announces that their 1963 catalog is progressing toward publication and will be mailed shortly. With publication of the catalog, they inform their customers that "normal inventory includes over 2,000,000 relays in approximately 30,000 types. In most cases stock is sufficient to give immediate delivery of production quantities. This catalog is, therefore, not just a listing of items available 'on order' but, by and large, it is an indication of in-stock items (either as complete units or as ready-to-assemble components). The average shipment is made within 48 hours. Where coils and frames require assembly, or relays require special testing or adjustment, shipments are made within one week to ten days.
Universal is completely equipped to assemble, adjust and thoroughly test any type of relay. Assembly and test facilities have been imitated by some relay manufacturers. A personal interest is taken in every order. This interest is maintained as the order is processed. And, it continues even after the customer receives the merchandise until he makes sure that it satisfies his needs. All merchandise is guaranteed, subject to customers' inspection and approval, and may be returned within 30 days for replacement or credit. The catalog is full of items to fill everyday relay requirements". Catalog E-163 may be obtained by writing directly to: UNIVERSAL RELAY CORP. 42 White Street, New York 13, N. Y. WAlker 5-6900 electronics IS EDITED TO KEEP YOU FULLY INFORMED — a "well-rounded" engineer What's your present job in electronics? Do you work on computers? (electronics ran 158 articles on computers between July, 1961 and June, 1962!) Are you in semiconductors? (For the same period, electronics had 99 articles, not including transistors, solid-state physics, diodes, crystals, etc.) Are you in military electronics? (electronics had 179 articles, not including those on aircraft, missiles, radar, etc.) In all, electronics' 28-man editorial staff provided more than 3,000 editorial pages to keep you abreast of all the technical developments in the industry. No matter where you work today or in which job function(s), electronics will keep you fully informed. Subscribe today via the Reader Service Card in this issue. Only 7½ cents a copy at the 3 year rate. ## INDEX TO ADVERTISERS ### Audited Paid Circulation | Company Name | Page | |------------------------------------------------------------------------------|------| | A & M Instrument Inc. | 104 | | AMP Corp. | 51 | | Ad-Yu Electronics Inc. | 214 | | Ainslie Corporation | 212 | | Airpax Electronics, Inc. | 107 | | Allen-Bradley Co. | 15 | | Allied Control Company, Inc. | 147 | | Amperex Electronic Corp. 
| 24, 25, 88, 89 | | Ampex Corporation | 14 | | Anabah Sub. of Jerrold Corp. | 62 | | Art Wire & Stamping Co. | 201 | | Atlanta Chamber of Commerce | 197 | | Atlee Corp. | 204 | | Atomh Electronic | 120 | | Automatic Electric Sub. of General Telephone & Electronics | 112 | | Automatic Metal Products Corp. | 134 | | Barker & Williamson, Inc. | 98 | | Beaver Gear Works, Inc. | 207 | | Beckman Instruments, Inc. Berkeley Division | 169 | | Bird Electronic Corporation | 140 | | Bliley Electric Co. | 154 | | Boeing Co., The | 205 | | Bourns Inc. | 49 | | Brookhaven National Laboratory | 215 | | Brush Instruments Div. of Clevite Corp. | 3rd Cover | | Burroughs Corporation Electronic Components Div. | 153 | | Business Week | 143 | | Bussmann Mfg. Co. Div. of McGraw Edison Co. | 56 | | Cannon Electric Co. | 137 | | Carborundum Company, The | 37 | | Celco-Constantine Engineering Laboratories | 161 | | Chester Cable Corp. | 202, 203 | | Clairex Corp. | 106 | | Coil Winding Equipment Co. | 130 | | Collins Radio Co. | 16 | | Computer Control Co. Inc. | 127 | | Consolidated Avionics Corp. | 95 | | Constantine Engineering Laboratories | 161 | | Continental Electronics Systems Inc. | 111 | | Coto-Coil Co., Inc. | 214 | | Data-Control Systems, Inc. | 171 | | Daystrom Incorporated Transicoil Division | 110 | | Defense Electronics, Inc. | 156 | | Delco Radio | 32 | | Delevan Electronics Corp. | 170 | | Driver Co., Wilbur B. | 165 | | duPont de Nemours & Co., Inc. E. I. | 115, 133 | | Dynamics Instrumentation Co. | 200 | | Edwards High Vacuum Inc. | 43 | | Electrical Industries | 91 | | Electro Motive Mfg. Co., Inc. | 159 | | Electro Products Laboratories | 209 | | Electronic Instrument Co., Inc. (EICO) | 210 | | Engineered Electronics Co. | 149 | | Fairchild Stratos | 177 | | Fluke Mfg. Co., Inc., John | 131 | | Frederick Electronics Corp. | 164 | | Frontier Electronics Div. of International Resistance Co. | 39 | | Fujitsu Ltd. 
| 208 | | Gardner-Denver Company | 182 | | Garlock Electronics Products Inc. | 211 | | General Electric Co. Re-tailer Components Dept. | 167, 185 | | General Electrodynamics Corporation | 183 | | General Findings Inc. | 199 | | General Magnetics Inc. | 30, 31 | | General RF Fittings, Inc. | 209 | | General Radio Corp. | 2nd Cover | | Genatron Inc. Sub. of Genisco Inc. | 186 | | Gertsch Products, Inc. | 160 | | Green Instrument Co., Inc. | 190 | | Guidebrod Bros. Silk Co., Inc. | 44 | | Harman Karison Sub. of Jerrold Corp. | 4 | | Hayes, Inc. U. L. | 162 | | Heinemann Electric Co. | 53 | | Hemingway & Bartlett Mfg. Co., The | 175 | | Hewlett-Packard Company | 6 | | Hexacon Electric Co. | 180 | | Hoffmann Electronics Corp. | 108, 109 | | Houston Instrument Corp. | 46, 47 | | Hughes Aircraft Co. Aerospace Divisions | 179 | | ITT Electron Tube Div. | 97 | | Imtra Corp. | 175 | | Indiana General Corp. | 94 | | Ingersoll Products Division of Borg-Warner Corp. | 123 | | Instrument Systems Corp. | 119 | | Interelectronics Corp. | 187 | | International Resistance Co. | 61 | | International Telephone and Telegraph Corp. Industrial Products Division | 158 | | Japan Piezo Electric Co., Ltd. | 212 | | Jerrold Electronics Corp. | 87 | | Kay Electric Co. | 21 | | Keithley Instruments, Inc. | 161 | | Kelvin Electric Co. | 210 | | Kepco, Inc. | 36 | | Kingsley Machines | 204 | | Kinney Vacuum Div. of New York Air Brake Co. | 187 | | Kyoritsu Electrical Instruments Works, Ltd. | 209 | | Lambda Electronics Corp. | 5 | | Leach and Garner Co. | 199 | | Leesona Corp. | 136 | | Leget High Frequency Laboratories, Inc. | 38 | | Levin and Son, Inc., Louis | 57 | | Loral Electronics Corp. | 189 | | Machlett Laboratories Inc., The | 27 | | Mallory and Co., Inc., P. R. | 116, 117 | | Marconi Instruments | 176 | | Met'oy Electronics Co. | 141 | | McGraw-Hill Book Co. | 54 | | Melcan Engineering Laboratories | 138 | | Melpar Inc. | 29 | | Meteor Inc. | 142 | | Micro Instrument Co. 
| 214 | | Microswitch Division of Honeywell | 139 | | Midwec | 90 | | Miller Mfg. Co., Inc., James | 90 | | Minnesota Mining & Mfg. Co. Mincom Division | 52 | | Monotron Corp. | 188 | | Motorola Semiconductor Products Inc. | 125 | | Mycalex Corp. of America | 129 | *See advertisement in the July 25, 1962 issue of Electronics Buyers’ Guide for complete line of products or services.* NRC Equipment Corp. 48 New Jersey Bureau of Commerce 203 Nichicon Capacitor 201 North Atlantic Industries, Inc. 100 Northeastern Engineering, Inc. 175 Northern Radio Co., Inc. 154 Ohmite Mfg. Co. 17, 18, 19 Paktron Div. of Illinois Tool Works 118 Patwin Electronics 207 Pennsalt Chemicals Corp. 113 Philco Sub. of Ford Motor Co. 99 Polarad Electronic Instruments A Division of Polarad Electronics Corp. 174 Potter Instrument Co., Inc. 23 Precision Instrument Co. 135 Preformed Line Products Co. 184 Premier Metal Products Co., Inc. Sub. of Renwell Ind. Inc. 200 Radio Corex, Inc. Permacor Div. 164 Radio Corporation of America 4th Cover Radio Frequency Laboratories, Inc. 102 Raytheon Company 101 Reeves-Hoffman 155 Reeves-Hoffman Div. of Dynamics Corp. of America 180 Reeves Instrument Corp. Div. of Dynamics Corp. of America 92 Ribet-Desjardins 166 Rochar Electrique 195 Rohn Mfg. Co. 174 Sage Laboratories Inc. 90 Sanborn Company 105 Sangamo Electric Co. 41, 42 Schober Organ Corp. The 178 Seco Electronics Inc. 192 Security Devices Laboratory Electronics Div. of Sargent & Greenleaf, Inc. 28 Servomechanisms Inc. Mechatrol Div. 184 Siemens America Inc. 157 Siliconix Inc. 45 Singer Metries Div. Singer Mfg. Co. 103 Space Technology Laboratories, Inc. 13 Sperry Parragut Co. Div. of Sperry Rand Corp. 206 Sperry Microwave Electronics Co. Div. of Sperry Rand Corp. 93 Sprague Electric Co. 9 Stromberg Carlson Div. General Dynamics 194 Struthers-Dunn Inc. 152 Sylvania Electric Products, Inc. Microwave Device Division 121 Synthane Corp. 193 Tech Laboratories, Inc. 50 Tektronix Inc. 
60 Texas Instruments Incorporated Industrial Products Group 35 Texas Instruments Incorporated Semiconductor Components Division 10, 11 Thermo American Fused Quartz Co., Inc. 168 Toyo Electronics Ind. Corp. 178 Transitron Electronic Corp. 55 Trio Laboratories, Inc. 122 Tru-Ohm Products 198 Tung-Sol Electric, Inc. 58, 59 United Systems Corp. 181 United Testing Laboratories 201 United Shoe Machinery Corp. Dynasert Div. 202 U. S. Semcor 191 Unitek/Weldmatic Division 173 Utica Drop Forge & Tool Division, Kelsey-Hayes Co. 126 Victoreen Instrument Co., The 128 Virginia Electric & Power Co. 20 Westinghouse Electric Corp. 150, 151 Weston Instruments & Electronics A Division of Daystrom Inc. 163 West Penn Power 176 White S.S. 124 Yellow Springs Instrumant Co., Inc. 172 CLASSIFIED ADVERTISING F. J. Eberle, Business Mgr. EMPLOYMENT OPPORTUNITIES 217-223 EQUIPMENT (Used or Surplus New) For Sale 223, 224 INDEX TO CLASSIFIED ADVERTISERS Aeropace Placement Corp. 222 ACF Industries Inc. 223 AFSC-AFLC Joint Professional Placement Office 221 Bell Aerosystems Co. 223 Bendix Corporation, Kansas City Div. 220 Booz Allen Applied Research Inc. 222 Electro Testing Co. 223 General Dynamics/Electric Boat 220 General Dynamics/Electronics 217 Norden Division of United Aircraft Corp. 219 Pan American World Airways Inc. Guided Missiles Range 218 Perspective 222 Radio Research Instrument Co. 223 Sverdup & Parcel and Associates Inc. 222 Television Associates of Indiana, Inc. 222 Universal Relay Corp. 224 U. S. Dept. of Commerce, National Bureau of Standards Boulder Labs. 222 Vezan-West & Co. 222 See advertisement in the July 25, 1962 issue of Electronics Buyers' Guide for complete line of products or services. This Index and our Reader Service Numbers are published as a service. Every precaution is taken to make them accurate, but electronics assumes no responsibilities for errors or omissions. ADVERTISING REPRESENTATIVES ATLANTA (9): Michael H. Miller, Robert C. Johnson 1375 Peachtree St. 
N.E., Trinity 5-0523 (area code 404) BOSTON (16): William R. Hodgkinson, Donald R. Furth McGraw-Hill Building, Copley Square, Congress 2-1160 (area code 617) CHICAGO (11): Harvey W. Warncke, Robert M. Denmead 645 North Michigan Avenue, Mohawk 4-5800 (area code 312) CLEVELAND (13): Paul T. Teigler 55 Public Square, Superior 1-7000 (area code 216) DALLAS (1): Frank Le Beau The Vaughn Bldg., 1712 Commerce St. Riverside 7-9721 (area code 214) DENVER (2): John W. Patton Tower Bldg., 1700 Broadway, Alpine 5-2981 (area code 303) HOUSTON (25): Joseph C. Page, Jr. Prudential Bldg., Holcombe Blvd., Riverside 8-1280 (area code 713) LOS ANGELES (17): Ashley P. Hartman, John G. Zisch, William C. Gries 1125 W. 6th St., Huntley 2-5450 (area code 213) NEW YORK (36): Donald A. Miller, Henry M. Shaw, George F. Werner 500 Fifth Avenue, LO-4-3000 (area code 212) PHILADELPHIA (3): Warren H. Gardner, William J. Boyle 6 Penn Center Plaza, LOcust 8-4330 (area code 215) SAN FRANCISCO (11): Richard C. Alcorn 255 California Street, Douglas 2-4600 (area code 415) LONDON W1: Edwin S. Murphy Jr. 34 Dover St. FRANKFURT/Main: Matthée Herfurth 85 Westendstrasse GENEVA: Michael R. Zeynel 2 Place du Port TOKYO: George Olcott, 1, Katohiracho, Shiba, Minato-ku March 15, 1963 • electronics At Brush it's a matter of record ... and this record speaks for itself. It was made by the new oscillograph Series 2300 . . . a product of Brush's advanced recording system design. Its unique optical system produces these high contrast traces . . . at all writing speeds. An extremely stable tungsten light source eliminates cluttered records caused by ultra-violet, "jitter" or RF interference. The start-up of this low-cost lamp is virtually instantaneous. Overall system linearity is better than 2%. Eight record speeds are controlled by pushbuttons. Paper take-up is built-in. A complete line of accessories accommodates special requirements. 
So now, you can record over the whole range of most-used frequencies with Brush systems incorporating all the known refinements in oscillography. Write for full details. brush INSTRUMENTS DIVISION OF CLEVITE 37TH AND PERKINS, CLEVELAND 14, OHIO CIRCLE 901 ON READER SERVICE CARD MIL-SPEC NUVISTORS The RCA-7586, 7895 and 7587 nuvistors are designed and manufactured to meet current military specifications. Now you can incorporate the proven performance advantages of the tiny nuvistors into your military equipment designs. Three important nuvistor types are designed to meet the current military specifications listed below: | NUVISTOR TYPE | MIL-SPEC. NO. | DATE | |----------------------------------------------------|------------------------|------------| | JAN-7586 medium-mu triode. General-purpose type for military and industrial applications | MIL-E-1/1397/A | 5 July 1962| | USA-7895 high-mu triode (mu=64). General-purpose type for military and industrial applications | MIL-E-1/1433 (Sig C) | 1 Feb. 1962| | USA-7587 sharp-cutoff tetrode. General-purpose type for military and industrial applications | MIL-E-1/1434 (Sig C) | 5 Feb. 1962| Nuvistors have exceptional uniformity of characteristics from tube to tube and throughout life; and high transconductance at low plate current and voltage. These highly reliable tubes feature: - Low voltage operation - Low power consumption - High input impedance - Low noise AND...nuvistors are in the class of active electronic circuit components least susceptible to catastrophic failure from nuclear radiation. MIL-SPECS...compactness...outstanding performance—are the benefits you have immediately available to you by designing these nuvistors into your military equipment. It will be well worth your while to find out more about these unique new tubes by calling your nearest RCA Electron Tube Div. Field Representative. RCA ELECTRON TUBE DIVISION, HARRISON, N. J. The Most Trusted Name in Electronics
VISION

To build a vibrant multicultural learning environment founded on value-based academic principles, wherein all involved shall contribute effectively, efficiently and responsibly to the nation and the global community.

Inside this issue....
- From Director's Desk
- 6th Convocation of NIT Hamirpur
- RuTAG at NIT Hamirpur
- Ceremonies, Celebrations
  - Independence Day Celebration
  - Hindi Diwas & Release of 'Trishul'
  - Inter Uni. Badminton Championship
- Publications in International Journals of Repute
- Publications in International/National Conferences
- Foreign Visits & Programmes Attended
- Awards, Recognitions & Milestones Achieved
- Conferences/Workshops/STCs Conducted
- Expert Lectures Delivered

राष्ट्रीय प्रौद्योगिकी संस्थान, हमीरपुर (हि.प्र.)
NATIONAL INSTITUTE OF TECHNOLOGY, HAMIRPUR (H.P.)
www.nith.ac.in

This newsletter is a significant platform in that it gives me an opportunity to connect with everyone who is directly or indirectly associated with, and contributing to, the growth of NIT Hamirpur. The last few months have been remarkable for the Institute: it has played host to some truly exceptional personalities who have contributed much to the overall growth of this country. The Institute organized several excellent training programs under TEQIP-II, tailored to the needs of faculty members of engineering institutions in the country. The whole endeavour of the Institute is pivoted on the spirit of contributing to the improvement of higher education in a systematic and comprehensive way. I am gratified that the faculty members are contributing to their fullest potential. This academic session we commenced a new UG program in the Chemical Engineering discipline, making the education imparted at this NIT more versatile and distinct. I hope that the pedagogy of this program reaches standards of global acceptance and that its graduates establish newer horizons of professionalism and competence.
The Institute is now expanding beyond its traditional ways of contributing to the nation. A snippet worth sharing is that we have added another feather to our cap by becoming the first among all NITs to be designated a RuTAG Centre (a Government of India initiative for uplifting rural India) by the office of the Principal Scientific Advisor to the Govt. of India. I envisage this centre contributing in a big way to rural India, especially in the hilly state of Himachal, in the times to come.

This part of the Indian calendar year is full of jubilance on account of the many festivals to be celebrated. These festivals invigorate us and prepare us to strive for the pinnacles of glorious achievement. I sincerely convey my wishes for this festive period and wish for the wellbeing of everyone in NIT Hamirpur and academia. Wish you all a very happy and prosperous Deepawali.

Prof. Rajnish Shrivastava

6th Convocation of NIT Hamirpur

The sixth convocation of NIT Hamirpur concluded on 13th June 2013. At this solemn academic ceremony Professor D. K. Bandyopadhyaya, Vice Chancellor, GGSIPU Delhi, signed the scroll of degrees in his capacity as Chairman, Board of Governors, NIT Hamirpur. Professor Rajnish Shrivastava, Director, NIT Hamirpur, conferred the degrees on all successful candidates. It was a matter of great pride that the Principal Scientific Advisor to the Govt. of India, Dr. R. Chidambaram, delivered the convocation address to the degree recipients. On this occasion 548 degrees were awarded: 263 candidates received their degree in person and 285 in absentia. 13 candidates were awarded the Doctor of Philosophy, 139 the Master's degree, 26 the MBA degree and 370 the Bachelor's degree.

RuTAG at NIT Hamirpur

NIT Hamirpur has become the first among all NITs to establish a RuTAG centre, with the support of the office of the Principal Scientific Advisor to the Govt. of India.
The centre will work towards the upliftment of people living in rural areas through sustainable technologies developed especially for rural communities. It will be the first of its kind fully devoted to solving the problems of rural areas, and will work under the flagship of Prof. Rajnish Shrivastava, Director, NIT Hamirpur. Three coordinators have also been appointed for the smooth functioning of the centre.

Ceremonies and Celebrations

Independence Day Celebrations

The 67th Independence Day of India was celebrated with full enthusiasm and patriotic fervour at NIT Hamirpur on 15th August 2013. On this auspicious occasion, Director-In-Charge Professor Rakesh Sehgal unfurled the tricolour. In his inspiring speech, Prof. Sehgal touched on some very important aspects of the freedom movement and urged the students to understand its importance and sanctity.

Hindi Diwas Celebrations

Hindi Saptah (Week) was celebrated at NIT Hamirpur from 7-14th September 2013. On 13th September the annual Hindi magazine “TRISHUL” was released by Professor Rajnish Shrivastava, Director, NIT Hamirpur, along with other officials. On this occasion Professor Shrivastava, in a prolific speech, extolled the virtues of the language and called for its promotion in the day-to-day working of the Institute.

NIT Hamirpur organizes Inter University Badminton Championship

The Inter Technology University Sports Association (ITUSA) and NIT Hamirpur organized the Badminton Championship (Boys & Girls) of the Northern Region on 4th & 5th October 2013. Teams from various NITs of the northern region, PEC Chandigarh and Thapar University Patiala participated in the event. The Boys' Team event was won by NIT Hamirpur. Mr. Vedant Kumar, a student of Mechanical Engineering, was instrumental in this win, performing exceptionally during the event.

**Publications in International Journals of Repute**

| S. No. | Authors | Title | Journal (with issue, page no. etc.) | Publisher |
|--------|---------|-------|--------------------------------------|-----------|
| 1 | S.P. Guleria and R.K. Dutta | Study of flexural strength and leachate analysis of fly ash-lime-gypsum composite mixed with treated tire chips | KSCE Journal of Civil Engineering, 17(4), 662-673 | Springer |
| 2 | Pushpender Kumar and Narottam Chand | Clustering in Wireless Multimedia Sensor Networks Using Spectral Graph Partitioning | International Journal of Communications, Network and System Sciences, Vol. 6, No. 3, pp. 128-133, 2013 | Scientific Research |
| 3 | Rajeev Singh and T.P. Sharma | A Secure WLAN Authentication Scheme | Transactions on Smart Processing and Computing | IEE, Korea |
| 4 | Kulwardhan Singh and T.P. Sharma | REDD: Reliable Energy-Efficient Data Dissemination in Wireless Sensor Networks with Multiple Mobile Sinks | World Academy of Science, Engineering and Technology | WASET |
| 5 | Varun Gupta, Durg Singh Chauhan and Kamlesh Dutta | Incremental development & revolutions of E-learning software systems in education sector: a case study approach | Journal of Human-centric Computing and Information Sciences, May 2013, 3:8, doi:10.1186/2192-1962-3-8 | Springer-Verlag |
| 6 | Prashant Kumar Tiwari and Yog Raj Sood | An Efficient Approach for Optimal Allocation and Parameters Determination of TCSC with Investment Cost Recovery under Competitive Power Market | IEEE Transactions on Power Systems, paper ID TPWRS-00348-2012, Vol. 28, No. 3, August 2013, pp. 2475-2484 | IEEE |
| 7 | Naveen Kumar Sharma, Prashant Kumar Tiwari and Yog Raj Sood | Assessment of Indian Competitive Power Market Availability, Demand and Shortage | Electrical India, Vol. 53, Issue 8, August 2013 | Electrical India |
| 8 | Amita Nandal, T. Vigneswaran and Ashwani K. Rana | An efficient design of full adder using testable reversible gate | Vol. 20, No. 5, May 2013, SCI, IF 0.267 | WIJ, Austria |
| 9 | A.D. Thakur, J.N. Sharma and Y.D. Sharma | Disturbance due to point loads in a piezo-thermoelastic continuum | J. Thermal Stresses, 36, 259-283, 2013 | Taylor & Francis |
| 10 | J.N. Sharma and Ramandeep Kaur | Analysis of forced vibrations in micro-scale anisotropic thermoelastic beams due to concentrated loads | J. Thermal Stresses, 2013 (Accepted) | Taylor & Francis |
| 11 | Nirmal Singh, Rakesh Sehgal and Vishal Sharma | Effect of tempering after cryogenic treatment of tungsten carbide-cobalt bonded inserts | Bulletin of Materials Science, Ms. No. BOMS-D-12-00195R1 | Springer |
| 12 | U.C. Jha and Sunand Kumar | Effect of TQM on Customer Satisfaction in Indian Manufacturing Industry | Review of Business and Technology Research, Vol. 9, No. 1, 2013, pp. 171-179, ISSN 1941-9414 | MTMI, USA |
| 13 | Poonam Tanwar, T.V. Prasad and Kamlesh Dutta | Hybrid technique for effective knowledge representation | Advances in Computing and Information Technology, Advances in Intelligent Systems and Computing, Vol. 178, 2013, pp. 33-43 | Springer |

**Publications in International / National Conferences**

- Dutta R.K., Khatri V.N.
and Venkataraman G, “Compaction and CBR behaviour of clay reinforced with NaOH treated coir fibres,” in National Conference on Geotechnical and Geoenvironmental Aspects of Wastes and Their Utilization in Infrastructure Projects, Ludhiana, India, 15-16th February, 2013.
- Dubey R. and Kumar P, “An empirical approach for the optimization of fly ash content in self-consolidating concrete”, in 5th North American Conference on Design and Use of Self Compacting Concrete (SCC 2013), Chicago, USA, 12-15th May, 2013.
- Kumar P. and Dubey R, “Influence of superplasticizer dosages on fresh properties of self-consolidating concrete”, in 5th North American Conference on Design and Use of Self Compacting Concrete (SCC 2013), Chicago, USA, 12-15th May, 2013.
- Dharmendra, “Harness of rainwater needs for today and tomorrow”, in National Conference on Environmental Sustainability and Society, The Growing Paradigm (ESS-2013), 30-31st March, 2013.
- Gautam G., Dharmendra and Gandhi S, “Waste to energy recovery from urban solid waste – a case study for Delhi city”, in National Conference on Environmental Sustainability and Society, The Growing Paradigm (ESS-2013), 30-31st March, 2013.
- Gupta A, “Workability of fiber reinforced self-compacting concrete”, in Proc. of UKIERI Concrete Congress: Innovations in Concrete Construction, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Punjab, India, 5-8th March, 2013.
- Chand N, “Energy efficient cooperative caching in WSN,” in International Conference on Computer and Communication Networks Engineering (ICCCNE), pp. 674-679, Germany, May 2013.
- Singh R. and Sharma T.P, “A sequence number based WLAN authentication scheme for reducing the MIC field overhead”, in International Conference on Computer and Communication Technologies (WOCN’13), 26-28th July, 2013.
- Singh R.
and Sharma T.P, “A key refreshing technique to reduce 4-way handshake latency in 802.11i based networks”, in International Conference on Computer and Communication Technologies (ICCCT’13), 20-22nd September, 2013.
- Singh K. and Sharma T.P, “Reliable energy-efficient data dissemination (REDD) in wireless sensor network”, in International Conference on Computer Networks and Systems Security (ICCNSS 2013), New York, USA, 5-6th June, 2013.
- Chandra P., Pandey K.S. and Chauhan S, “An energy aware dispatch scheme for WSNs”, in International Conference on Computer, Communication and Information Sciences and Engineering (ICCCISE-2013), Paris, 27-28th June, 2013.
- Uttam V. and Gupta N, “Clustering in WSN based on minimum spanning tree using divide and conquer approach”, in International Conference on Electrical, Computer, Electronics and Communication (ICECECE-2013), London, UK, pp. 851-855, July, 2013.
- Dubey B.B. and Chauhan N, “ZSDMAC: zone sensing directional MAC protocol for vehicular ad hoc networks”, in International Conference on Electrical, Computer, Electronics and Communication (ICECECE-2013), London, UK, pp. 851-855, July, 2013.
- Kumar A. and Kumar R, “Selective forwarding attack and its detection algorithms: A review”, in International Conference on Electrical, Computer, Electronics and Communication (ICECECE-2013), London, UK, pp. 851-855, July, 2013.
- Sharma N.K. and Sood Y.R, “Optimal location and rating of wind power generator with maximization of social welfare in competitive electricity market”, in IEEE Power & Energy Society (PES) General Meeting, Vancouver, BC, Canada, 21-25th July, 2013.
- Kumar S. and Sharma B.B, “Formation control of multi-vehicle systems using PID like consensus algorithm”, in International Conference on Advances in Electronics, Electrical and Computer Engineering, Shivalik College of Engineering, Dehradun, 22-23rd June, 2013.
- Soni S.K, “Energy efficient clustering and data aggregation in wireless sensor networks”, in International Conference on Computer and Communication Networks Engineering (ICCCNE 2013), Berlin, Germany, Issue 77, pp. 868-874, May 2013.
- Kumar A. and Kumar V, “Fuzzy logic based improved range free localization for wireless sensor networks,” in International Conference on Computer and Communication Networks Engineering (ICCCNE 2013), Berlin, Germany, Issue 77, pp. 886-894, May 2013.
- Keshari A.K. and Khanna G, “Characterization and analysis of fully depleted SOI MOSFET,” in Proceedings of the International Conference on Electrical, Electronics and Computer Science Engg. (ICEECS), Dehradun, Uttarakhand, 21-24th June, 2013.
- Sharma R. and Goswami A, “A robust approach for construction of irregular LDPC codes”, in FTEE-2013, Bangkok, Thailand, 13-14th July, 2013.
- Rana A. and Priyadarshi P, “Impact of radiation on MOSFET in nano regime”, in FTEE-2013, Bangkok, Thailand, 13-14th July, 2013.
- Rathore R.S., Kumar V. and Pundir H, “A low-power sample-and-hold amplifier using 0.05-µm CMOS technology”, in FTEE-2013, Bangkok, Thailand, 13-14th July, 2013.
- Sehgal R. and Sharma M.D, “Influence of machining parameters on main cutting force and surface roughness during turning of AISI A2 steel alloy”, in the proceedings of the Clute Institute International Academic Conference, Paris, France, Paper ID No. ENG-294, 9-11th June, 2013.
- Kumar S, “Effect of TQM on customer satisfaction in Indian manufacturing industry”, at the annual MTMI International Conference on Global Issues in Business Technology, Virginia, United States of America, 20-21st September, 2013.

**Foreign Visits & Programmes Attended**

Dr. Pradeep Kumar visited Chicago, USA, to attend the 5th North American Conference on “Design and use of self compacting concrete”, held on 12-15th May, 2013.
Prof. R. S. Banshu attended the “Davos atmospheric and cryospheric assembly (DACA)-2013”, 8-12th July, 2013 at Davos, Switzerland.
Sh.
Chander Prakash attended the International Conference “Davos atmospheric and cryospheric assembly 2013 (DACA-2013)”, 8-12th July, 2013 at Davos, Switzerland.
Prof. Sushil Chauhan & Sh. Rajesh Kumar attended a three-day training program on “Best practices in power system operation & economics” at Engineering Staff College, Hyderabad, 12-14th August, 2013.
Prof. Sushil Chauhan attended a two-day workshop on “Outcome based accreditation and education” at NIT Surathkal, 23-24th August, 2013.
Dr. Bharat Bhushan Sharma attended the National Workshop on “Promoting excellence in research among NITs through e-journals” organized by NIT Warangal (AP) at Warangal, 12-13th July, 2013.
Dr. Bharat Bhushan Sharma attended the 43rd ISTE Section Faculty Convention organized by Rayat-Bahra College of Engineering and Nano-Technology for Women, Hoshiarpur, Punjab, 26-27th July, 2013.
Dr. Ravinder Nath and Dr. Ashwani Chandel attended an STC on “Condition monitoring & health assessment of power” at ERDA Vadodara, Gujarat, 30-31st July, 2013.
Dr. R. K. Jarial & Dr. O.P. Rahi attended an STC on “Preparation of specifications, laying of LT, HT and power cables and their condition” at ESIC Hyderabad under TEQIP-II, 27-30th Aug., 2013.
Dr. S. Soni visited Berlin, Germany to attend the International Conference on Computer and Communication Networks Engineering (ICCCNE 2013), 18-25th May, 2013.
Dr. Ashok Kumar visited Berlin, Germany to attend and present a paper on “Fuzzy logic based improved range free localization for wireless sensor networks” in the International Conference on Computer and Communication Networks Engineering (ICCCNE 2013), 18-25th May, 2013.
Dr. Ashwani Rana, Mr. Vinod Sharma and Mr. Rakesh Sharma presented technical papers in FTEE 2013 at Bangkok, Thailand, 13-14th July, 2013.
Er. Rakesh Sharma attended an FDP on “Linux system administration” at ESCI Hyderabad, under TEQIP-II, 26-29th August, 2013.
Prof. J. N.
Sharma presented a paper entitled “Free vibration analysis in axisymmetric functionally graded thermoelastic sphere” at the 10th International Congress held at Nanjing University of Technology, Nanjing, China, 31st May - 4th June, 2013.
Dr. P. K. Sharma presented a paper entitled “In plane vibrations in clamped thermoelastic solid disks” at the 10th International Congress held at Nanjing University of Technology, Nanjing, China, 31st May - 4th June, 2013.
Dr. R. K. Vats presented a paper “Triple fixed point theorems via a-series in partially ordered metric spaces” at the International Conference held at Abant Izzet Baysal University, Bolu, Turkey.
Dr. (Mrs.) Kamlesh Dutta attended a workshop on “Awareness of cloud computing” at Engineering Staff College of India, Hyderabad, 1-3rd Aug., 2013.
Er. Siddhartha Chauhan attended a workshop on “Mail server administration” at Engineering Staff College of India, Hyderabad, 18-20th Sept., 2013.
Dr. Naveen Chauhan attended a workshop on “Data communication for power sector” at Engineering Staff College of India, Hyderabad, 23-27th Sept., 2013.
Er. Rajeev Kumar attended a workshop on “Data communication for power sector” at Engineering Staff College of India, Hyderabad, 23-27th Sept., 2013.
Er. Nitin Gupta attended a School on “Advance algorithms” at IIITDM Jabalpur, sponsored by IMPECS Germany, 11-14th June, 2013.
Er. Nitin Gupta attended a workshop on “Linux server administration” at Engineering Staff College of India, Hyderabad, 26-29th Aug., 2013.
Er. Pardeep Singh attended a workshop on “Linux server administration” at Engineering Staff College of India, Hyderabad, 26-29th Aug., 2013.
Prof. Sunand Kumar attended the annual MTMI International Conference on “Global issues in business technology” at Virginia Beach Resort Hotel, 2800 Shore Drive, Virginia Beach, Virginia, United States of America, 20-21st Sept., 2013.
Prof. R. K.
Dutta attended the training programme “Current requirements in environmental impact assessment (EIA) process and procedure” as per MOEF Guidelines, Engineering Staff College of India, Hyderabad, 12-14th Aug., 2013.
Dr. Pradeep Kumar attended a 5-day STC on “Leakage and water proofing of buildings” at Engineering Staff College of India, Hyderabad, 16-20th Sept., 2013.
Dr. U. K. Pandey attended an STTP at ESCI Hyderabad on “Leakages and water proofing treatment in buildings”, 16-20th Sept., 2013.

**Awards, Recognitions & Milestones Achieved**

- Dr. Narottam Chand chaired a technical session in the International Conference on Computer and Communication Networks Engineering (ICCCNE), Berlin, Germany, 22-23rd May, 2013.
- Dr. Kamlesh Dutta was appointed a member of the committee “Building IPv6 talent pool through training: development of standardized IPv6 training certified courses” constituted by the Ministry of Communications & IT, Department of Telecommunications.
- Prof. Sunand Kumar chaired a session at the annual MTMI International Conference on Global Issues in Business Technology, Virginia, United States of America, 20-21st Sept., 2013.
- Dr. Bharat Bhushan Sharma received the ISTE section chapter level “Best Teacher Award” for the year 2013 on 26th July, 2013 during the 43rd ISTE Section Faculty Convention, organized by Rayat-Bahra College of Engineering and Nano-Technology for Women, Hoshiarpur, Punjab.
- Dr. Surender Soni completed his Ph.D on the topic “Enhancing lifetime of wireless sensor networks using energy efficient protocols” in June, 2013 from NIT Hamirpur, HP.
- Dr. Ashok Kumar completed his Ph.D on the topic “Localization and location aware protocols for wireless sensor networks” in June, 2013 from NIT Hamirpur, HP.
- Prof. Rakesh Sehgal received the ‘Best Paper Award’ for the paper “Influence of machining parameters on main cutting force and surface roughness during turning of AISI A2 steel alloy” in the International Academic Conference, Paris, France, 9-11th June, 2013.
**Conferences/ Workshops/ STCs Conducted**

- Dr. Raman Parti conducted a 3-day training course on “DPR preparation and quality assurance for rural roads” for HPPWD engineers under NRRDA and NITH during 29-31st Aug., 2013 at NIT Hamirpur.
- Dr. Pradeep Kumar organized a 3-day workshop on “Universal human values” for the faculty of NIT Hamirpur, under TEQIP-II, during 27-29th Sept., 2013 at NIT Hamirpur.
- Dr. V.K. Bansal organized an STC on “Computational methods in engineering” from 13-17th May, 2013, sponsored by TEQIP-II, for faculty from engineering colleges.
- Dr. H. K. Vinayak organized a workshop on “Rapid visual survey format for structural and non-structural elements in schools of Himachal Pradesh” for SSA engineers under the National School Safety Programme at NIT Hamirpur, 1-2nd May, 2013.
- Sh. Chander Prakash conducted a four-day training program for SJVNL engineers on “Surveying with total station & data interpretation” from 30th May to 2nd June, 2013.
- A one-week STC on “Recent trends in VLSI & communication systems (RTVCS-13)” was organized by the E&CE department from 10-14th June, 2013 under TEQIP-II. More than 25 participants attended the STC. The resource persons were Prof. S.S. Pattnaik from NITTTR Chandigarh and Prof. J.S. Sahambi from IIT Ropar.
- The E&CE department organized an STC on “Development in VLSI devices and technology (DiVDAT-13)” under TEQIP-II from 24-28th June, 2013. The main resource persons were Sh. H.S. Jatana from SCL Mohali, Dr. Brajesh Kumar from IIT Roorkee and Dr. Rohit Sharma from IIT Ropar.
- The E&CE department organized a national workshop on “VLSI design and automation techniques (VDAT-2013)” from 1-3rd July, 2013 under TEQIP-II. Prof. M.J. Kumar from IIT Delhi was the main expert for the workshop.
- An induction training programme on “Effective teaching learning process (ETLP-13)” was organized for the internal faculty of the institute on 3-4th August, 2013 under TEQIP-II. The resource persons were Prof.
Yoginder Verma, Pro Vice-Chancellor, HP Technical University, and Dr. Vinod K. Sanwal, Gautam Buddha University, Greater Noida.
- The Department of Mathematics organized a "National conference on advances in mathematics & its applications (AMA-2013)" during 25-27th June, 2013. In this conference, more than fifty-five research papers were presented and eight invited lectures were delivered by experts to the participants.

**Expert Lectures Delivered**
- Prof. Rakesh Sehgal delivered an expert lecture on the topic "General research methodologies & research initiatives in tribology" during the TEQIP-II Workshop on Research Methodologies, UIET, Panjab University, Chandigarh, on 13th July, 2013.
- Dr. R. K. Dutta delivered a lecture on "Alternate low cost materials for low volume roads" in the course on "DPR preparation and quality assurance for rural roads" under NRRDA and NIT Hamirpur during 29-31st Aug., 2013, 5-7th Sept., 2013 and 12-14th Sept., 2013.
- Dr. Pradeep Kumar delivered a series of lectures on rapid visual screening codal provisions, retrofitting of structures codal provisions, and techniques of retrofitting of structures during the 3-day training programme organized for SSA engineers on rapid visual screening, non-structural mitigation measures and retrofitting of structures during 29-31st March, 2013 at NIT Hamirpur, HP.
- Dr. Pradeep Kumar delivered a lecture under the aegis of the Rotary Club on "Learning disasters and disaster mitigation" at the DC Office, Hamirpur, for public awareness on 31st Aug., 2013.
- Dr. Ankit Gupta delivered expert lectures to SSA engineers on "Non structural mitigation measures and retrofitting techniques" in the training course on RVS, NSMM and retrofitting techniques under the National School Safety Programme by the HP State Disaster Management Authority during 15-17th March and 29-31st March, 2013.
- Dr.
Ankit Gupta delivered expert lectures to HP PWD engineers on "Pavement material testing and quality control" in the training course on DPR preparation and quality assurance for rural roads under NRRDA and NIT Hamirpur during 29-31st Aug. and 5-7th Sept., 2013.
- Dr. Kamlesh Dutta delivered expert talks on "Website testing: Issues and challenges" and "Website testing tools and techniques" in the training programme on building dynamic websites and testing, JNJEC Sundernagar, 6-7th April, 2013, under TEQIP-II.
- Dr. Kamlesh Dutta delivered expert talks on "Analyzing IP and ICMP header" and "Software vulnerability issues in security" in the STP on "Recent advances in computer networks and information security," 8-12th July, 2013, organized by NIT Jalandhar under TEQIP-II.
- Dr. Kamlesh Dutta delivered expert talks on "Genetic algorithm" and "Next generation protocol: IPv6" in the STP on "Emerging trends in physics and information technology," 10-14th June, 2013, organized by NIT Jalandhar under TEQIP-II.
- Er. Siddhartha Chauhan delivered an expert lecture on "Security in internet" at Govt. Degree College, Hamirpur, on 2nd June, 2013.
- Er. Arvind Dixit, Director, Advance Technology, Chandigarh, delivered an expert lecture on "Emerging trends in embedded system design"; the lecture was organized under the Industry-Institute Interaction scheme of TEQIP-II for B.Tech and M.Tech students of the E&CE department on 16th April, 2013.

**Books Published / Course Material Developed**
- Dr. R.K. Jarial has edited a book, *Problems and Solutions in Electric Machinery*, published as a special Indian edition in May/June 2013. The book presents numerical solutions to the unsolved numerical problems given in the well-known book *Electric Machinery* by A.E. Fitzgerald, Charles Kingsley Jr. and Stephen D. Umans (McGraw Hill Education Pvt. Ltd., Delhi).

**Upcoming Activities**
- Winter School on Mobile and Distributed Systems: Theory and Challenges, 2-6th Dec., 2013.
- 40th National Conference on Fluid Mechanics and Fluid Power, 12-14th Dec., 2013.
- STTP on Renewable Energy on Sustainable Development and Environment, 2-6th Dec., 2013.
- STC on Communication Networks, 16-20th Dec., 2013.
- STTP on Digital Signal Processing with Applications using MATLAB (DSPAM), 23-27th Dec., 2013.

The Editorial Board Wishes You A Very Happy And Prosperous Deepawali

EDITORIAL BOARD:
Chairman: Prof. Rajnish Shrivastava, Director, NIT Hamirpur
Editor: Dr. Siddhartha
Members: Prof. Rakesh Sehgal, Prof. Vinod Kapoor, Dr. Rajeevan Chandel

NATIONAL INSTITUTE OF TECHNOLOGY HAMIRPUR (H.P.)-177005
Phone: 01972-222308, 254010 | Fax: 01972-223834 | E-mail: email@example.com
are deemed to be synonymous for the purposes of the provisions of law regarding the licensure and regulation of optometry. [S. B&P] SB 921 (Maddy), as introduced March 4, would provide that it is unprofessional conduct for an optometrist to fail to advise a patient in writing of any pathology that requires the attention of a physician when an examination of the eyes indicates a substantial likelihood of any pathology. [S. B&P] SB 842 (Presley), as amended April 13, would authorize the Board to issue interim orders of suspension and other license restrictions, as specified, against its licensees. [A. CPGE&ED] LITIGATION In California Optometric Association (COA) v. Division of Allied Health Professions, Medical Board of California, No. 531542 (filed January 11 in Sacramento County Superior Court), and Engineers and Scientists of California (ESC), et al. v. Division of Allied Health Professions, Medical Board of California, No. 706751-0 (filed October 8, 1992 in Alameda County Superior Court), COA and ESC challenge the validity of DAHP's medical assistant regulations. Following the enactment of SB 645 (Royce) (Chapter 666, Statutes of 1988), it took DAHP over three years to adopt section 1366 of the CCR, its regulation defining the technical support services which unlicensed medical assistants (MAs) may perform and establishing standards for appropriate MA training and supervision. During the lengthy rulemaking process, DCA rejected DAHP's proposed regulations twice and the Office of Administrative Law rejected them once before finally approving them in March 1992. 
During the rulemaking hearings, COA and the Board of Optometry objected to language in the proposed regulations stating that MAs are permitted to perform "automated visual field testing, tonometry, or other simple or automated ophthalmic tests not requiring interpretation in order to obtain test results, using machines or instruments, but are precluded from the exercise of any judgment or interpretation of the data obtained on the part of the operator." [12:1 CRLR 88-89] However, DAHP overruled the objections and included this language in its final regulations. COA and ESC claim that section 1366 is invalid because the conduct authorized is beyond the scope of DAHP's authority and conflicts with DAHP's enabling statutes; further, it conflicts with Business and Professions Code sections 3040 and 3041 (which define the practice of optometry and prohibit unlicensed persons from engaging in optometry). At this writing, the Attorney General has filed an answer on behalf of DAHP; no court hearing has been set. RECENT MEETINGS At the February 18 meeting, Executive Officer Karen Ollinger reviewed previously-approved budget changes, and reported that the Board is close to covering its costs. Ollinger also announced that the occupational analysis by Human Resource Strategies is proceeding on schedule. [13:1 CRLR 59] Finally, Board President Thomas Nagy, OD, announced that Board member Stephen R. Chen, OD, was named Optometrist of the Year at the annual California Optometric Association Congress. FUTURE MEETINGS November 17–18 in Orange County. BOARD OF PHARMACY Executive Officer: Patricia Harris (916) 445-5014 Pursuant to Business and Professions Code section 4000 et seq., the Board of Pharmacy grants licenses and permits to pharmacists, pharmacies, drug manufacturers, wholesalers and sellers of hypodermic needles. It regulates all sales of dangerous drugs, controlled substances and poisons. 
The Board is authorized to adopt regulations, which are codified in Division 17, Title 16 of the California Code of Regulations (CCR). To enforce its regulations, the Board employs full-time inspectors who investigate accusations and complaints received by the Board. Investigations may be conducted openly or covertly as the situation demands. The Board conducts fact-finding and disciplinary hearings and is authorized by law to suspend or revoke licenses and permits for a variety of reasons, including professional misconduct and any acts substantially related to the practice of pharmacy. The Board consists of ten members, three of whom are public. The remaining members are pharmacists, five of whom must be active practitioners. All are appointed for four-year terms. MAJOR PROJECTS Restructuring the Enforcement Unit. As the Board has not augmented its enforcement program in at least ten years, it spent considerable time at its October 1992 meeting discussing the need to expand the program in light of the increasing number of pharmacies and licensed pharmacists in California, the establishment of new registration programs such as medical device retailers and pharmacy technicians, and changes in the law governing the practice of pharmacy. [13:1 CRLR 60] At the Board's April 28–29 meeting, Executive Officer Patricia Harris reported that the Governor and the budget subcommittees in both houses of the legislature have tentatively approved a $703,000 increase to the Board's 1993–94 budget to establish eight additional enforcement unit positions: five inspectors, one supervising inspector, one consumer services representative, and one office technician. The increase in staff will enable the Board to establish a public assistance unit staffed by complaint handlers to assist consumers who call with questions regarding pharmacy services and pharmacists; complaints would be opened by this unit and referred to the inspection staff for investigation. 
This process is expected to enable Board inspectors to focus their efforts on inspection, not complaint processing. Harris cautioned that the full legislature has yet to pass the Governor's budget, and that the budget augmentation may be revised or eliminated. Board Discusses Request for Regulatory Change. At its January 20–21 meeting, the Board noted that it had received several requests to revise section 1719(1c), Title 16 of the CCR, which provides that, as of April 16, 1992, all candidates for the pharmacist licensure examination who are graduates of a foreign pharmacy school (any school located outside the United States) must demonstrate proficiency in English by achieving a score of at least 220 on the Test of Spoken English administered by the Educational Testing Service. Board member Gilbert Castillo noted that the issue was originally discussed by the Board and referred to its Committee on Licensure for evaluation; the Committee held preliminary hearings and invited public input. Following discussion, the Board unanimously agreed that it is in the best interest of the consumer to continue to require that foreign pharmacy graduates pass the Test of Spoken English. Board Considers Electronic Transmission of Prescriptions. At the Board's January 20–21 meeting, the Board's Committee on Electronic Transmission and Faxing of Prescriptions recommended that the Board pursue statutory and regulatory changes to allow for the electronic transmission of prescriptions. Under the Committee's proposal, the term "electronic transmission prescription" would include both electronic image transmission prescriptions (any prescription order for which a facsimile of the order is received by a pharmacy from a licensed prescriber) and electronic data transmission prescriptions (any prescription order, other than an electronic image transmission prescription, which is electronically transmitted from a licensed prescriber to a pharmacy). 
Under the proposal, if a prescription is electronically transmitted to a pharmacy, the pharmacy may maintain a hard copy. Following discussion, the Board unanimously agreed to pursue statutory and regulatory changes to allow for the electronic transmission of prescriptions; the proposal was subsequently included in the Department of Consumer Affairs' omnibus bill, AB 1807 (Bronshvag) (see LEGISLATION). **Board Considers New Rulemaking Proposals.** At its January 20–21 and April 28–29 meetings, the Board discussed a proposal to amend section 1732.3, Title 16 of the CCR, regarding continuing education (CE) courses. Among other things, section 1732.3 currently provides that a recognized CE provider's coursework shall be valid for one year following the initial Board approval; the Board is considering amending this section to provide that such coursework would be valid for up to three years following Board approval. This modification was suggested by the Board's Continuing Education Committee in recognition of the American Council on Pharmaceutical Education's policy allowing its approved CE providers to use an expiration date of three years for some courses. The Board is expected to pursue this regulatory change; at this writing, however, the Board has not published notice of its intent to do so in the *California Regulatory Notice Register*. At its April 28–29 meeting, the Board discussed the possibility of amending section 1717(a), Title 16 of the CCR, which specifies that no medication shall be dispensed on prescription except in a new container which conforms with standards established in the official compendia; section 1717(a) provides for an exception to the rule and designates one type of prescription container which may be reused under specific conditions, including the condition that the container be used for the same drug for the same patient. 
The Board is expected to pursue an amendment to section 1717(a) to include an additional type of prescription container which may be reused under specific circumstances; at this writing, however, the Board has not published notice of its intent to do so in the *California Regulatory Notice Register*. **Rulemaking Update.** The following is a status update on rulemaking proposals discussed in detail in previous issues of the *Reporter*: - **Compounding for Prescriber Office Use.** The Board's adoption of new sections 1716.1 and 1716.2, Title 16 of the CCR, defines the quantity of compounded medication which a pharmacist may provide to a prescriber for office use, and specifies the minimum types of records that pharmacies must keep when they furnish compounded medication to prescribers in quantities larger than required for the prescriber's immediate office use or when a pharmacy compounds medication for future furnishing. [13:1 CRLR 61] The Office of Administrative Law (OAL) originally disapproved this regulatory action in June 1992 on the basis that it failed to meet the clarity and necessity standards of the Administrative Procedure Act. The Board amended its proposal at its October 1992 meeting to resolve OAL's concerns, and released the modified language for a fifteen-day public comment period in December. The Board then re-submitted the proposal to OAL, which approved the action on April 15. - **Medical Device Retailers' Locked Storage Requirements.** On January 19, OAL approved the Board's adoption of new section 1748.2, Title 16 of the CCR, which provides that a medical device retailer (MDR) may leave a dangerous device in a retail area of the MDR premises during an absence of an exemptee if the item is of sufficient size and weight that removal from the premises would be difficult. Any dangerous devices designated for display under section 1748.2 shall be specifically listed in the written policies and procedures of the MDR. 
[13:1 CRLR 62] However, OAL disapproved the Board's proposed adoption of section 1748.1, Title 16 of the CCR, also regarding MDR locked storage; among other things, the original version of section 1748.1 would have provided that dangerous devices shall be furnished from locked storage only upon the oral or written authorization of an exemptee to an employee of the MDR who operates the service vehicle. OAL found that the Board lacked statutory authority to allow a non-licensed person to dispense dangerous devices at the direction of an exemptee in this manner. The Board subsequently modified the language, released it for a fifteen-day public comment period, and resubmitted it to OAL; among other things, the modified version provides that dangerous devices shall be furnished from locked storage only by an exemptee. OAL approved the Board's adoption of section 1748.1 on May 12. - **Patient Consultation Regulations.** On March 3, OAL approved the Board's amendments to sections 1707.1 and 1707.2, and its adoption of new section 1707.3, Title 16 of the CCR, which revise the Board's patient consultation requirements to comply with federal Omnibus Budget Reconciliation Act of 1990 (OBRA 90) standards. [13:1 CRLR 61] However, on April 15, 1993, the Board disapproved further amendments to section 1707.2. Subsection (a) of section 1707.2 requires a pharmacist to provide oral consultation to a patient or his/her agent, upon request or whenever the pharmacist deems it warranted. Under subsection (b), a pharmacist must provide oral consultation whenever the prescription drug has not previously been dispensed to a patient, and whenever a prescription drug not previously dispensed to a patient in the same dosage, form, strength, or with the same written directions is dispensed by the pharmacy. 
Subsection (e) states that, notwithstanding the requirements in (a) and (b), a pharmacist is not required to provide oral consultation when a patient or the patient's agent refuses such consultation. According to the Department of Health Services (DHS), the Board's current regulations may not be in compliance with OBRA 90, which apparently requires pharmacists to offer an oral consultation—an element which the Board's regulations lack. Following discussion, the Board unanimously agreed to leave its consultation regulations as they are, and to seek clarification from DHS and the Health Care Finance Administration (HCFA) on this issue. **LEGISLATION** **AB 260 (W. Brown),** as amended April 12, and SB 1048 (Watson), as introduced March 5, would each establish the Clean Needle and Syringe Exchange Pilot Project, and would authorize pharmacists, physicians, and certain other persons to furnish hypodermic needles and syringes without a prescription or permit as prescribed through the pilot project. [A Floor; S. H&S] **AB 667 (Boland).** The Pharmacy Law regulates the use, sale, and furnishing of dangerous drugs and devices, as defined; the law prohibits a person from furnishing any dangerous device, except upon the prescription of a physician, dentist, podiatrist, or veterinarian. However, existing law provides that this prohibition does not apply to the furnishing of any dangerous device by a manufacturer, wholesaler, or pharmacy to each other or to a physician, dentist, podiatrist, veterinarian, or physical therapist acting within the scope of his/her license under sales and purchase records that correctly give the date, the names and addresses of the supplier and the buyer, the device, and its quantity. As amended March 29, this bill would provide that the prohibition also does not apply to the furnishing of any dangerous device by a manufacturer, wholesaler, or pharmacy to a chiropractor acting within the scope of his/her license. 
Existing law authorizes a medical device retailer to dispense, furnish, transfer, or sell a dangerous device only to another medical device retailer, a pharmacy, a licensed physician, a licensed health care facility, a licensed physical therapist, or a patient or his/her personal representative. This bill would additionally authorize a medical device retailer to dispense, furnish, transfer, or sell a dangerous device to a licensed chiropractor. [A. Health] SB 849 (Bergeson). Under the Pharmacy Law, a "hospital pharmacy" means and includes a pharmacy licensed by the Board of Pharmacy and located within any hospital, institution, or establishment that maintains and operates organized inpatient facilities for the diagnosis, care, and treatment of human illnesses in accordance with certain requirements. As amended April 26, this bill would instead define a "hospital pharmacy" to mean a pharmacy licensed by the Board and located within a general acute care hospital, as defined, acute psychiatric hospital, as defined, or a special hospital, as defined in accordance with certain requirements. [S. B&P] SB 842 (Presley), as amended April 13, would permit the Board to issue interim orders of suspension and other license restrictions, as specified, against its licensees. [A. CPGE&ED] AB 1807 (Bronshvag), as amended May 3, would require a pharmacy, except a nonresident pharmacy, that ships or mails prescriptions to residents of California to provide certain toll-free telephone service, and written notification of the availability of that service to patients. Existing law defines the term "prescription" for the purposes of existing law relating to licensure of pharmacists, regulation of pharmacies, and regulation of controlled substances. This bill would revise the definition, for these purposes, to include electronically transmitted prescriptions, as defined. 
Under existing law, it is a misdemeanor for any person to falsely represent himself/herself to be a person who can lawfully prescribe a drug, or to falsely represent that he/she is acting on behalf of a person who can lawfully prescribe a drug, in a telephone communication with a registered pharmacist. This bill would also make it a misdemeanor to make these false representations by electronic communication. [A. W&M] AB 2099 (Epple). The Pharmacy Law prohibits a pharmacist from dispensing any prescription except in a container correctly labeled with certain types of information. As amended April 28, this bill would additionally require the container label to identify the condition for which the drug was prescribed if the patient requests that the prescription identify the condition. [A. W&M] AB 2155 (Polanco). Existing law requires prescription blanks in triplicate to be issued by the Department of Justice and furnished to any practitioner authorized to write a prescription for Schedule II controlled substances. Existing law prohibits the Department of Justice from issuing more than 100 triplicate prescription blanks to any authorized practitioner. As introduced March 5, this bill would establish the Medical and Pharmacy Ad Hoc Committee within the Department of Consumer Affairs, and require it to study all matters regarding the Department of Justice's authority to monitor and oversee activities involving prescriptions for Schedule II controlled substances, and to advise the Attorney General on these matters. It would require the Committee membership to consist of a pharmacist and various persons who are engaged in prescribed specialties of medical practice. [A. W&M] SB 432 (Greene). Existing law generally requires every prescription for a controlled substance classified in Schedule II to be in writing. 
One exception to this general requirement is when failure to issue a prescription for a controlled substance classified in Schedule II to a patient in a licensed skilled nursing facility, an intermediate care facility, or a licensed home health agency providing hospice care would, in the opinion of the prescriber, present an immediate hazard to the patient's health and welfare or result in intense pain and suffering to the patient; under the circumstances, the prescription may be dispensed upon an oral prescription. As amended May 19, this bill would instead provide that any order for a Schedule II controlled substance in a licensed skilled nursing facility, intermediate health care facility, or a licensed home health agency providing hospice care may be dispensed upon an oral prescription. [S. Jud] SB 1051 (McCorquodale). The Pharmacy Law requires a pharmacist to inform a patient orally or in writing of the harmful effects of a drug dispensed by prescription if the drug poses a substantial risk to the person consuming the drug when taken in combination with alcohol or if the drug may impair a person's ability to drive a motor vehicle, whichever is applicable, and the Board determines that the drug requires the warning. The Pharmacy Law requires any pharmacy located outside California that ships, mails, or delivers any controlled substances or dangerous drugs or devices into this state pursuant to a prescription to register with the Board, disclose information regarding the pharmacy to the Board, and meet other conditions. Under the Pharmacy Law, one of those conditions is the requirement that the pharmacy, within a prescribed time period, provide toll-free telephone service to facilitate communication between patients in California and a pharmacist at the pharmacy who has access to the patient's records. It also requires the toll-free number to be disclosed on a label affixed to each container of drugs dispensed to patients in this state. 
As amended April 21, this bill would require the Board to adopt regulations that apply the same requirements or standards for oral consultation between the pharmacy and the patient, under certain circumstances, to a nonresident pharmacy as are applicable to a pharmacy that has been issued a permit by the Board. [S. Appr] SB 1153 (Watson). Existing law provides for the Medi-Cal program administered by DHS, pursuant to which medical benefits, including certain prescription drugs, are provided to public assistance recipients and certain other low-income persons. As amended April 28, this bill would establish the Drug Utilization Review Board to review, evaluate, and make recommendations to DHS on retrospective drug utilization reviews, standards applications, educational interventions, and drug utilization program profile development. This bill would also require the Board of Pharmacy, with the assistance of the Drug Utilization Review Board, to adopt and publish guidelines and standards to be used by pharmacists in their counseling of Medi-Cal recipients. [S. Appr] AB 2020 (Isenberg), as amended May 18, would, among other things, authorize optometrists to use, prescribe, and dispense specified pharmaceutical compounds to a patient. This bill would also make it a misdemeanor for any person licensed as an optometrist to refer a patient to a pharmacy that is owned by that licensee or in which the licensee has proprietary interest. [A. Floor] SB 1136 (Kelley). Under the Medi-Cal program administered by DHS, pharmacists are reimbursed for covered drugs based on prices determined by the Department; existing law authorizes pharmacists to select a generic drug type, as defined, over a name brand drug product when filling a prescription, unless the prescriber specifies otherwise. 
As amended May 5, this bill would require that a generically substitutable product shall not be reimbursable if the DHS Director determines that a product from a company subject to rebates as an innovator company under federal law is lower in net cost to the state than a generically substitutable product not subject to the rebates; it would require the Director to notify pharmacists of these determinations. [S. Appr] LITIGATION Plaintiffs are appealing the trial court's ruling in *Californians for Safe Prescriptions v. California State Board of Pharmacy*, No. BS019433 (Dec. 15, 1992), which held that the Board followed and complied with the Administrative Procedure Act in promulgating and adopting its pharmacy technician regulations. [13:1 CRLR 63] The plaintiffs, a non-profit organization consisting of approximately 5,000 licensed pharmacists, filed a notice of appeal on January 5; at this writing, no date for oral argument has been set. On February 18, the California Supreme Court granted the pharmacy's petition for review of the Fifth District Court of Appeal's decision in *Huggins v. Longs Drug Stores California, Inc.*, No. F016033 (Dec. 4, 1992). The appellate court held that a pharmacist's provision of incorrect dosage amounts for a prescription which the pharmacist knew or should have known would be administered to an infant by the infant's parents constitutes negligent action directed at the parent caregivers, which may allow the caregivers to recover damages for negligent infliction of emotional distress. [13:1 CRLR 63] RECENT MEETINGS At the Board's January 20–21 meeting, the Board considered its Long-Term Care Committee's recommendation that it adopt proposed standards for pharmacies servicing long-term care facilities. 
Among other things, the standards state the obligations of pharmacies servicing such facilities, which include establishing procedures for obtaining and providing necessary drugs in a timely manner, including on a 24-hour basis, and for the availability of emergency drug supplies, in conformity with federal and state laws and regulations, and maintaining drug information services available to facility nursing staff, prescribers, other physicians, and the facility's consultant pharmacist. Following discussion, the Board unanimously adopted the standards. FUTURE MEETINGS October 6–7 in Sacramento. BOARD OF REGISTRATION FOR PROFESSIONAL ENGINEERS AND LAND SURVEYORS Executive Officer: Harold L. Turner (916) 263-2222 The Board of Registration for Professional Engineers and Land Surveyors (PELS) regulates the practice of engineering and land surveying through its administration of the Professional Engineers Act, sections 6700 through 6799 of the Business and Professions Code, and the Professional Land Surveyors Act, sections 8700 through 8800 of the Business and Professions Code. The Board's regulations are found in Division 5, Title 10 of the California Code of Regulations (CCR). The basic functions of the Board are to conduct examinations, issue certificates, registrations, and/or licenses, and appropriately channel complaints against registrants/licensees. The Board is additionally empowered to suspend or revoke registrations/licenses. The Board considers the proposed decisions of administrative law judges who hear appeals of applicants who are denied a registration/license, and those who have had their registration/license suspended or revoked for violations. The Board consists of thirteen members: seven public members, one licensed land surveyor, four registered Practice Act engineers and one Title Act engineer. Eleven of the members are appointed by the Governor for four-year terms which expire on a staggered basis. 
One public member is appointed by the Speaker of the Assembly and one by the Senate Rules Committee. The Board has established four standing committees and appoints other special committees as needed. The four standing committees are Administration, Enforcement, Examination/Qualifications, and Legislation. The committees function in an advisory capacity unless specifically authorized to make binding decisions by the Board. Professional engineers are registered through the three Practice Act categories of civil, electrical, and mechanical engineering under section 6730 of the Business and Professions Code. The Title Act categories of agricultural, chemical, control system, corrosion, fire protection, industrial, manufacturing, metallurgical, nuclear, petroleum, quality, safety, and traffic engineering are registered under section 6732 of the Business and Professions Code. Structural engineering and geotechnical engineering are authorities linked to the civil Practice Act and require an additional examination after qualification as a civil engineer. At its January 29 meeting, PELS selected Harold L. Turner as its new Executive Officer; Turner, formerly California's Deputy Auditor General, was hired to replace Darlene Stroup, who resigned in August 1992. [13:1 CRLR 64] In February, Governor Wilson announced the appointment of Stephen H. Lazarian as PELS' new public member; Lazarian is a self-employed attorney from Pasadena who formerly served on the Contractors State License Board from 1985–92 and was its chair from 1988–89. MAJOR PROJECTS Proposed Elimination of Title Act Protection for Traffic Engineers. At its March 12 meeting, PELS discussed the possible elimination of Title Act coverage for traffic engineering. The proposed action is opposed by S.E. 
Rowe, General Manager of the City of Los Angeles' Department of Transportation, who contends that title protection for traffic engineering is necessary primarily because of its "extreme implications in saving lives and reducing injuries and property damage to the public." Rowe also contends that, as the law is currently worded, a registered civil engineer with little or no experience in traffic engineering could make traffic recommendations on behalf of his/her clients. Department of Consumer Affairs (DCA) legal counsel Don Chang noted that preparation of certain traffic mitigation or worksite traffic control plans does not constitute the practice of civil engineering. Board President Larry Dolson referred the matter to a special committee to further consider the scope of Title Act coverage. At PELS' April 23 meeting, Board member Ted Fairfield reported that, based on a review of current National Council of Examiners for Engineering and Surveying (NCEES) test questions and most college curricula, there appear to be inconsistencies between the definition and educa-
January 7, 1998 City Council Sacramento, California Honorable Members in Session: SUBJECT: AN ORDINANCE AMENDING SECTION 7.05.057 OF THE SACRAMENTO CITY CODE AND ADDING SECTION 7.05.058 TO THE SACRAMENTO CITY CODE, TO AUTHORIZE THE SALE OF FOOD AND BEVERAGES ON THE SIDEWALKS AND BOARDWALKS OF OLD SACRAMENTO LOCATION AND COUNCIL DISTRICT: 1 (Old Sacramento) RECOMMENDATION: It is recommended that Council approve, by Resolution, the proposed amendment to an existing ordinance regulating sidewalk vending in Old Sacramento. CONTACT PERSON: Ed Astone, Old Sacramento Town Manager, 264-7031 FOR COUNCIL MEETING OF: January 27, 1998 SUMMARY: This report provides information on the present ordinance pertaining to street and sidewalk vending in Old Sacramento and describes the proposed changes, which would permit vendors with a fixed business location to vend food and beverages year-round on Old Sacramento sidewalks for a one-time permitting fee of $250. COMMITTEE/COMMISSION ACTION: At its regularly scheduled meeting on January 6, 1998, the Law and Legislation Committee reviewed and recommended for Council approval the proposed ordinance and fee changes. BACKGROUND INFORMATION: Under current City regulations, street or sidewalk vending in Old Sacramento is prohibited except during certain special events such as the Jazz Jubilee, the Children's Festival, the Railroad Festival and others as determined by the Director of the Downtown Department. During these specified events, vendors with a fixed business location in Old Sacramento may vend goods from the sidewalk immediately adjacent to the fixed business location. Over the past several years, Old Sacramento Management and merchants have contended that the economic prosperity of the merchants and the attractiveness and ambiance of Old Sacramento would be enhanced by extending the special events policy to permit year-round sidewalk sales of food and beverages outside their fixed business locations. 
Following meetings with Old Sacramento merchants, stakeholders, and Police Department alcoholic beverage licensing staff, City staff agrees. It is recommended that the proposed ordinance changes be adopted to permit year-round sidewalk food and beverage sales on submission and City acceptance of a permit application and the vendor's one-time payment of a new fee of $250 for the application. The permit is revocable at will by the Director of the Downtown Department. The consumption of alcoholic beverages on Old Sacramento streets and sidewalks would continue to be prohibited. The proposed ordinance requires that any tables, chairs, and/or vending carts must: (1) be positioned to leave a minimum of six feet clear on every sidewalk or boardwalk and (2) be removed each evening. The proposed ordinance also requires that all tables, chairs, vending carts and new signage meet Old Sacramento standards and receive the approval of the Old Sacramento Town Manager to ensure that they are in keeping with the historic theme of Old Sacramento. The proposed fee has been submitted for public notice in accordance with City policies. January 7, 1998 City Council Sacramento, California Honorable Members in Session: SUBJECT: AN ORDINANCE AMENDING SECTION 7.05.057 OF THE SACRAMENTO CITY CODE AND ADDING SECTION 7.05.058 TO THE SACRAMENTO CITY CODE, TO AUTHORIZE THE SALE OF FOOD AND BEVERAGES ON THE SIDEWALKS AND BOARDWALKS OF OLD SACRAMENTO LOCATION AND COUNCIL DISTRICT: 1 (Old Sacramento) RECOMMENDATION: It is recommended that the item be passed for publication of title and continued to January 27, 1998. CONTACT PERSON: Ed Astone, Old Sacramento Town Manager, 264-7031 FOR COMMITTEE MEETING OF: January 20, 1998 SUMMARY: The item is presented at this time for approval of publication of title pursuant to City Charter, Section 32. 
BACKGROUND INFORMATION: Prior to publication of an item in a local paper to meet legal advertising requirements, the City Council must first pass the item for publication. The City Clerk then transmits the title of the item to the paper for publication and for advertising the meeting date. Respectfully submitted, Thomas V. Lee, Deputy City Manager RECOMMENDATION APPROVED: William H. Edgar, City Manager ORDINANCE NO. ADOPTED BY THE SACRAMENTO CITY COUNCIL ON DATE OF _______________________ AN ORDINANCE AMENDING SECTION 7.05.057 OF THE SACRAMENTO CITY CODE, AND ADDING SECTION 7.05.058 TO THE SACRAMENTO CITY CODE, TO AUTHORIZE THE SALE OF FOOD AND BEVERAGES ON THE SIDEWALKS AND BOARDWALKS OF OLD SACRAMENTO BE IT ENACTED BY THE COUNCIL OF THE CITY OF SACRAMENTO: SECTION 1. Section 7.05.057 of the Sacramento City Code is hereby amended to read as follows: 7.05.057 Vendors of merchandise with fixed business locations permitted to vend from sidewalk during specified special events Section 7.05.055 notwithstanding, a vendor who has a fixed business location within Old Sacramento from which the vendor regularly sells goods, wares, or merchandise other than food or beverages may vend these goods, wares, or merchandise from the sidewalk or boardwalk immediately adjacent to the fixed business location during the following special events: Jazz Jubilee, Children's Festival, Railroad Festival, Thursday Night Afterglow, and any other Old Sacramento special event for which the director of the Downtown Department determines that sidewalk vending is appropriate. All tables and vending carts used for this purpose shall be approved in advance by the director of the Downtown Department, or the director's designee, and shall be positioned so as to leave a minimum of six (6) feet clear on every sidewalk or boardwalk for pedestrians. SECTION 2. 
Section 7.05.058 is hereby added to the Sacramento City Code to read as follows: 7.05.058 Food and beverage vendors with fixed business locations permitted to vend from sidewalks year-round. (a) Section 7.05.055 notwithstanding, a vendor who has a fixed business location within Old Sacramento from which the vendor regularly sells food or beverages may apply for a permit, revocable at will by the Director, to vend food or beverages from the sidewalk or boardwalk immediately adjacent to the fixed business location at any time. No alcoholic beverages shall be sold or consumed on the sidewalk or boardwalk. All tables, chairs, and vending carts used for this purpose shall be approved in advance by the director of the Downtown Department, or the Director's designee, and shall be positioned so as to leave a minimum of six (6) feet clear on every sidewalk or boardwalk for pedestrians. (b) The provisions of Title 12 of this code and the Zoning Ordinance relating to outdoor sidewalk cafes notwithstanding, an application for a revocable permit under this section shall be submitted to the Old Sacramento Management office, and shall be accompanied by a non-refundable application fee set by resolution of the City Council. At the Director's discretion, the permit may be renewed annually upon payment of a renewal fee set by resolution of the City Council. (c) Prior to selling food or beverages from the sidewalk or boardwalk, each vendor shall: (1) Through Old Sacramento Management, obtain written approvals from the Fire Department and the Police Department for the furnishings to be used and their placement on the sidewalk or boardwalk. (2) Provide and thereafter maintain comprehensive general liability insurance in an amount not less than one million dollars, naming the City of Sacramento, and its officers, employees, agents and volunteers as additional insureds. The form and substance of the insurance required shall be approved by the City Risk Manager. 
(d) The Director may establish reasonable rules and regulations concerning the sale of food and beverages from the sidewalk or boardwalk. DATE PASSED FOR PUBLICATION: DATE ENACTED: DATE EFFECTIVE: ATTEST: ___________________________ MAYOR CITY CLERK FOR CITY CLERK USE ONLY ORDINANCE NO. ________________ DATE ADOPTED: ________________ FINANCIAL CONSIDERATIONS: No significant financial consequences are anticipated as a result of the proposed ordinance change. It is anticipated that the application fee of $250 will generate approximately $1,000 annually in new application fees. ENVIRONMENTAL CONSIDERATIONS: The Planning Services Division, Environmental Section has reviewed the project and has determined that it has no potential for causing a significant impact on the environment and is therefore exempt from the California Environmental Quality Act (CEQA) under Section 15061(b)(3) of the CEQA Guidelines. POLICY CONSIDERATIONS: The proposed ordinance change is consistent with City policy to create an environment conducive to providing opportunities for economic success while enhancing the attractiveness of Sacramento attractions in a cost-effective manner. The adoption of the proposed ordinance changes at this time will permit participating merchants to have any necessary tables, chairs and vending carts approved and in place for the beginning of the upcoming California Sesquicentennial celebrations. MBE/WBE: None. No goods or services are to be purchased. Respectfully submitted, Thomas V. Lee, Deputy City Manager RECOMMENDATION APPROVED: William H. Edgar, City Manager
Don't Look to Diamanda Galás for Comfort By WILLIAM HARRIS DIAMANDA GALÁS DEFIES SIMPLE categorization. She is a writer, composer and performer, yet the term performance artist fails to capture her rage or vocal skills. Musician is only slightly more accurate. True, Ms. Galás, 41, has released eight albums and is a classically trained pianist and opera singer known for her three-and-a-half-octave range. But what comes out of her mouth in performance is a visceral collage of notes, chants, shrieks, gurgles, hisses — you name it — often at extreme volumes, frequently distorted electronically and accompanied by a torrent of words. Ms. Galás typically shapes a libretto by mashing passages from the Bible with her own writings. On Thursday at Alice Tully Hall, Ms. Galás will perform her new solo, "Insektia," for the opening of the Serious Fun festival at Lincoln Center. It will be repeated the next evening. In the fall, she will embark on a four-week tour of Europe, followed by a seven-city tour of the United States. Two things about Ms. Galás are clear: She has a formidable stage presence, and by design her work is not soothing. The performer, who wears her jet black hair long and straight, parted in the middle, and on whose left-hand fingers are tattooed the words "We are all H.I.V. +," is a cultural activist. Her art is about — and tries to physically embody — the emotional and physical pain of people who have been marginalized by society, particularly those suffering from AIDS. Conceptually, her work is closest to that of the late David Wojnarowicz, the visual artist-activist who died of AIDS in 1992. Theatrically, Ms. Galás conjures up the spirit of the classic Greek heroine Antigone — outspoken, passionate and defiant, both on stage and off. Ms. Galás first began addressing AIDS issues in 1984, while living in San Francisco. At that time, she started composing a multi-segmented mass that would take years to complete. The 90-minute piece, "Plague Mass," was not presented in its entirety until 1990, at the Cathedral of St. John the Divine in Manhattan. In one section, "There Are No More Threats to the Church," a bare-chested Ms. Galás, covered in stage blood, equates the brutality of AIDS deaths with the pain of crucifixion. Earlier portions of the work were seen here and abroad; in Italy, some reviewers called the work "blasphemous" and referred to Ms. Galás as "an evil singer." Yet Bernard Holland wrote in The New York Times: "Ms. Galás ... is a narrow surge of energy, one that is difficult to sidestep, much less sneer at. Ms. Galás is a kind of terrorist. She sabotages old ways of making music, just as she is an unsettling presence in the war against AIDS." 
"I don't know anyone or anything quite like her," said Christopher Hunt, who presented Ms. Galás in 1985 at the acclaimed PepsiCo SummerFest, where he was the curator. "She is one of the few artists who makes the political content of her work valid and not just an excuse for a lot of histrionics. I find her compulsive listening, although not always something one immediately likes in any ordinary sense." "Insektia," like Ms. Galás's earlier pieces, is concerned with physical and spiritual suffering. It has been developed in collaboration with the sound designers Eric Liljestrand and Blaise Dupuy and the director Valeria Vasilievski at the Kitchen, the avant-garde performance space where she first presented her work to New York audiences back in 1982 and where last year she produced "Vena Cava," a two-act work about AIDS-related dementia and depression. Since mid-April, she has been the Kitchen's first artist-in-residence — meaning that she has had access to free rehearsal space and support facilities. "My work is about individuals in extreme isolation and in response to a quarantine mentality," said Ms. Galás recently at her East Village apartment. "It is concerned with the destruction of the mind and issues of survival. The emotions I'm dealing with are ugly, and a lot of the feelings are not polite or therapeutic when voiced. The reason I have this [tattoo] is to reflect an anti-quarantine mentality. We are all H.I.V.-positive until the epidemic ends." She then greeted a visitor with an apology, explaining that she wears tinted glasses indoors to protect her light-sensitive eyes, not as an intimidation tactic or fashion statement. Her one-bedroom apartment, like her art, is not tidy. Everything, including much of the floor, seems to be littered with newspapers, concert posters, clippings, scraps of information, research materials, file folders — her composition tools. Even the walls are covered with useful jottings: ideas, phone numbers and scribbled fragments of texts. 
Ms. Galás defends "Plague Mass," explaining that it "is for and about persons who are fighting to stay alive in the face of indifference. I'm showing modern-day saints crucified by society. When I chant, 'Were you a witness? On that holy day and on that bloody day, were you a witness?,' I mean, did you protest the action of this crucifixion, this extermination, this execution, or did you just watch as a voyeur, an audience of cowards?" An AIDS activist herself, Ms. Galás was one of dozens of members of the AIDS Coalition to Unleash Power (Act Up) arrested during a demonstration at St. Patrick's Cathedral four years ago, and many people assume that she is either a lesbian or H.I.V.-positive. She is neither; but her brother, the playwright Philip-Dimitri Galás, died of AIDS in 1986. "A lot of straight men assume I do this work in reaction to the death of my brother," Ms. Galás said. "They dismiss it as a hysterical female reaction. I find that attitude irritating, because it assumes I have no vision at all. It belittles grieving and implies that there is something intrinsically wrong with a woman who would respond to her brother's death from AIDS." FOR "INSEKTIA," MS. GALÁS HAS been researching biological defense experiments as well as the language of schizophrenics and the use of drugs like Thorazine and Mellaril to control behavior. 
"On the surface," she said, "the title refers to something that is small and insignificant. I'm also using it to refer to a faceless population, such as those in concentration camps, institutions or in a prison, and therefore a population that is available for experimentation." "In the first section, or aria you might say, I tell the fragmented story of a person who had been raped with a knife. This person is trying to describe that treatment but is incapable of direct, linear speech. She discusses whoever attacked her by describing the smells she remembers. The image I'm working with is of being put in a powerless situation and also being part of a powerless population." The allusions in "Insektia" to the dehumanizing way many AIDS patients are treated, while not specific, are resonant. Ms. Galás, who moved to New York in 1990 after years of living on both the West Coast and in various European countries, grew up in San Diego, the daughter of first-generation Greek immigrants. She began studying the piano at age 5, and as a child, she was often asked to accompany the gospel choir her father led. At 14, she performed as a soloist with the San Diego Symphony, playing Beethoven's Piano Concerto No. 1. While attending the University of California at San Diego, from which she holds both a bachelor's and a master's degree in music performance, she became fascinated with the jazz compositions of Ornette Coleman and John Coltrane. Jazz encouraged her to experiment with sound. Her interest in, and study of, voice followed. Gospel remains an important component of her work, which is not surprising, since gospel inspires hope and was, historically, the music of resistance. Strains from traditional Greek dithyrambs also are heard in her work. Dirge singing transforms the act of mourning into an oath of vengeance. Ms. Galás cites Antonin Artaud and his writings on the "theater of cruelty" as another influence. When composing, Ms. Galás does not rely on musical notation. 
Instead, she describes in longhand the sounds she wants. In one section of "Insektia," for instance, her score calls for "sonically magnified, physiological body functions (intestinal, excretory, heart, salivary, mastication) ... a sense of a natural proximity to the human organism is desired." Ultimately, it is Ms. Galás's voice that remains most indelible. "My work," said Ms. Galás, "is not just a call to activism, although it has functioned that way. It is more than mere propaganda. For me, the definition of mediocre performance is preaching. I developed my voice so that I could sing what I heard, to explore the outer limits of the soul." William Harris is a consulting editor of Dance Talk.
Big Data Privacy by Design Computation Platform

Rui Nuno Lopes Claro

Thesis to obtain the Master of Science Degree in Information Systems and Computer Engineering

Supervisors: Prof. Dr. Miguel Filipe Leitão Pardal and Dr. José Miguel Ladeira Portêlo

Examination Committee
Chairperson: Prof. Dr. Paolo Romano
Supervisor: Prof. Dr. Miguel Filipe Leitão Pardal
Member of the Committee: Prof. Dr. Alexandre Paulo Lourenço Francisco

May 2018

## Declaration

I declare that this document is an original work of my own authorship and that it fulfills all the requirements of the Code of Conduct and Good Practices of the Universidade de Lisboa.

## Abstract

We live in the age of Big Data. Personal user data, in particular, are necessary for the operation and improvement of everyday Internet services such as Google, Facebook, WhatsApp, and Spotify. Often, the capture and use of personal data are not made explicit to the users, even though they are central to the business model of the companies. However, the right to privacy of each individual has to be respected. How can these two conflicting needs be reconciled, i.e., how can we build useful Big Data systems that are respectful of user privacy? The goal of this work is to design and implement a proof of concept of a platform for performing privacy-preserving computations, providing an easy-to-use method to implement privacy-preserving techniques. This system can be used to encapsulate algorithms that, for example, monitor the vital signs of patients (without exposing the data to other people) or produce real-time recommendations based on location (without disclosing the location to others). This proof of concept implements privacy-preserving versions of Machine Learning algorithms and compares them against a baseline reference, allowing a practical understanding of the trade-offs involved in using privacy-preserving technology.
## Keywords

Privacy-Preserving Computations; Machine Learning; Data Mining; Big Data; Data Processing; Secure Multi-Party Computations

## Acknowledgements

My sincerest thanks to my supervisors Miguel Pardal and José Portêlo for proposing this thesis, for the guidance provided, and for the help and advice that ultimately allowed the development of this work. To my parents, who allowed me to reach this goal, to my sister, my girlfriend, my family, my friends, my college colleagues, and my work colleagues: my sincerest thank you. A final word of appreciation to Altran for the opportunity of an internship in conjunction with this thesis.

## Contents

- 1 Introduction
  - 1.1 Contributions
  - 1.2 Outline
- 2 Related Work
  - 2.1 Data Security
    - 2.1.1 Data Security Principles
    - 2.1.2 Examples of Data Security Breaches
  - 2.2 Data Privacy
    - 2.2.1 Privacy Protection Principles
    - 2.2.2 European Union Legislation
    - 2.2.3 Examples of Data Privacy Breaches
  - 2.3 Privacy Implications of Personal Data Processing
    - 2.3.1 Attack Models
  - 2.4 Privacy-Preserving Techniques
    - 2.4.1 Anonymization
    - 2.4.2 Differential Privacy
    - 2.4.3 Secure Multi-Party Computations
    - 2.4.4 Oblivious Transfer
    - 2.4.5 Garbled Circuits
    - 2.4.6 Homomorphic Encryption
    - 2.4.7 Functional Encryption
  - 2.5 Privacy-Preserving Machine Learning
  - 2.6 Use Cases
  - 2.7 Summary
- 3 BARD
  - 3.1 Motivation
  - 3.2 Objectives
  - 3.3 Architecture
  - 3.4 Our Contributions to BARD
  - 3.5 Summary
- 4 Implementation
  - 4.1 Platform
  - 4.2 Use Case: Healthcare
  - 4.3 Structure
  - 4.4 Datasets Used
  - 4.5 Data Preprocessing
  - 4.6 ML Algorithms
    - 4.6.1 Decision Trees
    - 4.6.2 Support Vector Machines
    - 4.6.3 $k$-Means
    - 4.6.4 Logistic Regression
  - 4.7 Privacy-Preserving Algorithms
    - 4.7.1 Garbled Circuits and Decision Trees
    - 4.7.2 Garbled Circuits and $k$-Means
    - 4.7.3 Homomorphic Encryption and Logistic Regression
    - 4.7.4 Homomorphic Encryption and Support Vector Machines
  - 4.8 Summary
- 5 Evaluation
  - 5.1 Evaluation Metrics
  - 5.2 Experimental Setup
    - 5.2.1 Baseline Parameters
    - 5.2.2 Garbled Circuits Toolkits and Parameters
    - 5.2.3 Homomorphic Encryption Toolkits and Parameters
  - 5.3 Baseline Results
    - 5.3.1 Pima Indians Diabetes Dataset
    - 5.3.2 Breast Cancer Wisconsin Diagnostic Dataset
    - 5.3.3 Credit Approval Dataset
    - 5.3.4 Adult Income Dataset
  - 5.4 Comparison with the Baseline Results
  - 5.5 Execution Time Results
    - 5.5.1 Garbled Circuits and Decision Trees
    - 5.5.2 Garbled Circuits and $k$-Means
    - 5.5.3 Homomorphic Encryption and Logistic Regression
    - 5.5.4 Homomorphic Encryption and Support Vector Machines
  - 5.6 Communication Cost Results
    - 5.6.1 Garbled Circuits and Decision Trees
    - 5.6.2 Garbled Circuits and $k$-Means
    - 5.6.3 Partially Homomorphic Encryption
    - 5.6.4 Fully Homomorphic Encryption
  - 5.7 Discussion
  - 5.8 Summary
- 6 Conclusion
  - 6.1 Future Work
- A Detailed Results
  - A.1 Execution Time
    - A.1.1 Garbled Circuits and Decision Trees
    - A.1.2 Garbled Circuits and $k$-Means
    - A.1.3 Partially Homomorphic Encryption and Logistic Regression

## List of Figures

- 2.1 Process diagram showing the relationship between the different phases of CRISP-DM [63].
- 3.1 Platform architecture for BARD.
- 4.1 Conceptual view of the platform.
- 4.2 Steps and processes of the implementation.
- 4.3 Boolean circuit of each node in a DT.
- 4.4 Expansion of binary trees.
- 4.5 Boolean circuit of the prediction of the $k$-Means ($k$-M) algorithm.
- 5.1 GC+DT. Runtime per data sample, in seconds. All datasets.
- 5.2 GC+$k$-M. Runtime per data sample, in seconds. All datasets.
- 5.3 PHE+LR. Execution time per data sample, in seconds. All datasets.
- 5.4 FHE+LR. Execution time per data sample, in seconds. All datasets.
- 5.5 PHE+SVM. Execution time per data sample, in seconds. All datasets.
- 5.6 FHE+SVM. Execution time per data sample, in seconds. All datasets.
- 5.7 GC+DT. Amount of bytes per data sample (in kB) received during runtime by the Garbled Circuits (GC) evaluator. All datasets.
- 5.8 GC+$k$-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. All datasets.

## List of Tables

- 2.1 Privacy-Preserving Machine Learning (PPML) algorithms.
- 4.1 Datasets used in the evaluation of BARD.
- 5.1 Binary confusion matrix.
- 5.2 Baseline results, in percentage. *Pima Indians Diabetes Dataset*. “A” represents Accuracy, “F” represents F-Measure.
- 5.3 Baseline results, in percentage. *Breast Cancer Wisconsin Diagnostic Dataset*. “A” represents Accuracy, “F” represents F-Measure.
- 5.4 Baseline results, in percentage. *Credit Approval Dataset*. “A” represents Accuracy, “F” represents F-Measure.
- 5.5 Baseline results, in percentage. *Adult Income Dataset*. “A” represents Accuracy, “F” represents F-Measure.
- 5.6 GC+DT. Average label prediction error when compared with the baseline.
- 5.7 GC+$k$-M. Average label prediction error when compared with the baseline.
- 5.8 GC+DT. Average pre-computation times per data sample, in seconds. All datasets.
- 5.9 GC+$k$-M. Average pre-computation times per data sample, in seconds.
- 5.10 GC+DT.
Average amount of bytes per data sample (in kB) sent during pre-computation (PC-S), received during pre-computation (PC-R) and sent during runtime (R-S) by the GC evaluator. All datasets.
- 5.11 GC+$k$-M. Average amount of bytes per data sample (in kB) sent during pre-computation (PC-S), received during pre-computation (PC-R) and sent during runtime (R-S) by the GC evaluator. All datasets.
- 5.12 PHE. Communication costs in kilobytes (kB). All datasets.
- 5.13 FHE. Communication costs in megabytes (MB). All datasets.
- A.1 GC+DT. Runtime per data sample, in seconds. *Pima Indians Diabetes* dataset.
- A.2 GC+DT. Runtime per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.
- A.3 GC+DT. Runtime per data sample, in seconds. *Credit Approval* dataset.
- A.4 GC+DT. Runtime per data sample, in seconds. *Adult Income* dataset.
- A.5 GC+$k$-M. Runtime per data sample, in seconds. *Pima Indians Diabetes* dataset.
- A.6 GC+$k$-M. Runtime per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.
- A.7 GC+$k$-M. Runtime per data sample, in seconds. *Credit Approval* dataset.
- A.8 GC+$k$-M. Runtime per data sample, in seconds. *Adult Income* dataset.
- A.9 PHE+LR.
Execution time in seconds. *Pima Indians Diabetes Dataset*.
- A.10 PHE+LR. Execution time in seconds. *Breast Cancer Wisconsin Diagnostic Dataset*.
- A.11 PHE+LR. Execution time in seconds. *Credit Approval Dataset*.
- A.12 PHE+LR. Execution time in seconds. *Adult Income Dataset*.
- A.13 FHE+LR. Execution time in seconds. All datasets.
- A.14 PHE+SVM. Execution time in seconds. *Pima Indians Diabetes Dataset*.
- A.15 PHE+SVM. Execution time in seconds. *Breast Cancer Wisconsin Diagnostic Dataset*.
- A.16 PHE+SVM. Execution time in seconds. *Credit Approval Dataset*.
- A.17 PHE+SVM. Execution time in seconds. *Adult Income Dataset*.
- A.18 FHE+SVM. Execution time in seconds. All datasets.
- A.19 GC+DT. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Pima Indians Diabetes Dataset*.
- A.20 GC+DT. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Breast Cancer Wisconsin Diagnostic Dataset*.
- A.21 GC+DT. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Credit Approval Dataset*.
- A.22 GC+DT. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Adult Income Dataset*.
- A.23 GC+$k$-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Pima Indians Diabetes Dataset*.
- A.24 GC+$k$-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Breast Cancer Wisconsin Diagnostic Dataset*.
- A.25 GC+$k$-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Credit Approval Dataset*.
- A.26 GC+$k$-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. *Adult Income Dataset*.
## List of Acronyms

| Acronym | Description |
|---------|-------------|
| BARD | Big dAta pRivacy by Design platform |
| CRISP-DM | Cross Industry Standard Process for Data Mining |
| DM | Data Mining |
| DT | Decision Trees |
| DP | Differential Privacy |
| EMR | Electronic Medical Records |
| ED | Euclidean Distance |
| EU | European Union |
| ENISA | European Union Agency for Network and Information Security |
| FN | False Negative |
| FP | False Positive |
| FHE | Fully Homomorphic Encryption |
| FE | Functional Encryption |
| GC | Garbled Circuits |
| GDPR | General Data Protection Regulation |
| HE | Homomorphic Encryption |
| k-M | $k$-Means |
| LR | Logistic Regression |
| ML | Machine Learning |
| MUX | Multiplexer |
| OT | Oblivious Transfer |
| PHE | Partially Homomorphic Encryption |
| PII | Personally Identifiable Information |
| PPDM | Privacy-Preserving Data Mining |
| PPML | Privacy-Preserving Machine Learning |
| RBF | Radial Basis Function |
| SMPC | Secure Multi-Party Computations |
| STPC | Secure Two-Party Computations |
| SVM | Support Vector Machines |
| TN | True Negative |
| TP | True Positive |

## 1 Introduction

With the so-called “Big Data revolution”, vast amounts of data are now analyzed and processed by companies that take advantage of the enormous quantities of data generated every day\(^1\). The Big Data and Business Analytics market reflects this growth, and is expected to reach the $210 billion mark in the year 2020\(^2\). Through this data processing, meaningful information can be obtained to improve existing systems or to discover new approaches in business models. An example is the deployment of Data Mining (DM) algorithms by companies to better understand their customers and to devise better recommendation systems, in order to surpass their competitors in customer satisfaction.
Another example lies in the field of healthcare, where it can be beneficial to match patient records from different hospitals in order to identify inefficiencies and develop best practices [42]. Data often contain Personally Identifiable Information (PII) of individuals, such as daily routines or health records. This kind of data cannot be freely processed, because doing so leads to breaches of privacy, such as the AOL Search Leak\(^3\) or the Microsoft Hotmail privacy breach\(^4\). Due to these types of breaches, consumers show increasing concern about privacy threats [13]. The privacy of an individual may be violated due to, for example, unauthorized access to personal data, or the use of personal data for purposes other than the ones for which the data were collected. To deal with the privacy issues in DM, a sub-field known as Privacy-Preserving Data Mining (PPDM) has been gaining attention in recent years [18]. The goal of PPDM is to guarantee the privacy of sensitive information while, at the same time, preserving the utility of the data for DM purposes [2]. This can be achieved by using one or more privacy-preserving techniques, such as Differential Privacy (DP) [19] or Secure Multi-Party Computations (SMPC) [18]. Machine Learning (ML) algorithms in the context of Big Data processing are also producing significant results, making it possible to gather knowledge from datasets in order to predict future labels (i.e. classes of data) or clusters (i.e. groups of related data) as new data are acquired. An example application of ML algorithms in DM is Classification [39].

---

\(^1\)http://www.vcloudnews.com/every-day-big-data-statistics-2-5-quintillion-bytes-of-data-created-daily/
\(^2\)https://www.idc.com/getdoc.jsp?containerId=prUS42371417
\(^3\)https://www.networkworld.com/article/2185187/security/15-worst-internet-privacy-scandals-of-all-time.html
\(^4\)https://www.networkworld.com/article/2185187/security/15-worst-internet-privacy-scandals-of-all-time.html
In Classification, a training set is processed in order to create a classifier for the data, and that classifier is then used to predict class labels for new data. These applications are having a great impact in the field of medicine: for example, Google DeepMind builds ML algorithms to process admissions in hospitals\(^5\), and IBM Watson assists medical personnel in considering treatment options for their patients\(^6\). Some examples of ML algorithms include Decision Trees (DT), \(k\)-Means (\(k\)-M), and Support Vector Machines (SVM) [39]. By combining ML algorithms and privacy-preserving techniques, it is possible to create DM processes that allow knowledge learning on large datasets while maintaining a level of privacy that individuals desire and that complies with the applicable legislation [18].

### 1.1 Contributions

The main contribution of this thesis is the design and creation of a proof-of-concept platform for privacy-preserving distributed ML computations. Since the platform has its foundations in privacy-preserving techniques, it can be used to satisfactorily address the privacy demands that individuals have for their data. We show a possible usage for this platform in the field of healthcare, with a scenario of privacy-preserving processing of Electronic Medical Records (EMR). We provide a detailed comparison of four ML algorithms, DT, SVM, \(k\)-M and Logistic Regression (LR), combined with two privacy-preserving techniques, Garbled Circuits (GC) and Homomorphic Encryption (HE), allowing us to understand which combination is right for each ML algorithm, depending on the context of the data and on the operations to be performed. Joining the algorithms and techniques mentioned above, we propose a platform that provides Privacy-Preserving Computation as a Service.

---

\(^5\)https://deepmind.com/applied/deepmind-health/
\(^6\)https://www.mskcc.org/about/innovative-collaborations/watson-oncology
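The classification workflow outlined above (process a training set to create a classifier, then predict class labels for new data) can be sketched with a toy example. The snippet below implements a minimal nearest-centroid classifier in pure Python; it is purely illustrative and is not the platform's code (the thesis evaluates DT, SVM, \(k\)-M and LR instead), and all data values are made up.

```python
# Toy illustration of the train-then-predict classification workflow.
# A "model" here is one centroid (mean feature vector) per class label;
# prediction assigns a new sample to the class of the closest centroid.
from collections import defaultdict
import math

def train(samples, labels):
    """Compute one centroid (mean feature vector) per class label."""
    groups = defaultdict(list)
    for x, y in zip(samples, labels):
        groups[y].append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))

# Training set: feature vectors (e.g., two clinical measurements) and labels.
X_train = [[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]]
y_train = [0, 0, 1, 1]

model = train(X_train, y_train)
print(predict(model, [1.1, 0.9]))  # near the class-0 centroid -> 0
print(predict(model, [5.1, 5.1]))  # near the class-1 centroid -> 1
```

The algorithms compared in this thesis follow the same two-phase shape (fit on a training set, then predict on new samples), which is what the privacy-preserving variants must reproduce without revealing the inputs.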
With this platform, we wish to contribute to the faster integration of solutions developed by the scientific community into enterprise systems, thus reducing the time required for innovation to reach products used by many people, where privacy improvements are urgently needed. This thesis is part of a larger project developed at Altran, called the Big dAta pRivacy by Design platform (BARD). The implementation was done by a team of developers. In Section 3.4 we detail what our contributions to the project were, and which other results developed in BARD are presented in this thesis for completeness.

### 1.2 Outline

This dissertation is structured as follows. In Chapter 2 we present an overview of related work on the Privacy-Preserving Machine Learning (PPML) paradigm. Chapter 3 presents the Altran project that this work is a part of. In Chapter 4 we discuss the implementation specifics of the platform. Chapter 5 presents the results obtained with the implementation. Finally, in Chapter 6 we wrap up the dissertation with the conclusions and propose directions for future work.

## 2 Related Work

This chapter provides an overview of the concepts relevant to privacy-preserving data processing. We start by defining Data Security and Data Privacy, in Sections 2.1 and 2.2 respectively, and describe the differences between them. Section 2.3 presents the concepts of data processing and Data Mining (DM), gives an overview of the Cross Industry Standard Process for Data Mining (CRISP-DM) model, and defines the attack models that can be assumed when developing a Privacy-Preserving Data Mining (PPDM) solution. To develop a PPDM algorithm, we can implement one or more of the privacy-preserving techniques briefly presented in Section 2.4. We present Privacy-Preserving Machine Learning (PPML) in Section 2.5. Finally, in Section 2.6 we discuss use cases that show what can be achieved.
### 2.1 Data Security

Data Security refers to the protective digital measures that are applied to prevent unauthorized access to computers, databases, and websites that store data, as well as to prevent data destruction or alteration.

### 2.1.1 Data Security Principles

We define here the core security principles widely accepted in the literature, often known as the CIA triad: Confidentiality, Integrity, and Availability [33].

- **Confidentiality** is defined as the property of data, and of services that process such data, that prevents them from being accessed by unauthorized entities.
- **Integrity** is defined as the property of data, and of services that process such data, that prevents them from being modified in an unauthorized or undetected manner.
- **Availability** is defined as the property that access to data, and to services that process such data, is always possible when needed by the authorized parties and in a timely manner.

To apply Data Security measures, various technologies can be implemented:

- **Data backups** ensure that data that have been lost can be recovered. This technique is a standard procedure for most companies, since the permanent loss of crucial data can seriously cripple a company’s business.
- **Data erasure**, in contrast to backups, is a technique to permanently delete data from a hard drive or other digital media, to ensure that no sensitive data are leaked when a company wants to permanently remove an asset from usage, or when it is required to do so by court order.
- **Data encryption**, or disk encryption, refers to techniques that allow a user to encrypt data on a disk, such that the disk remains protected and cannot be decrypted by an unauthorized party.
- **Identity-based security** is a method to limit access to data such that only a user who has been authenticated and has permission to access a piece of data can do so.

These techniques offer ways to protect data, but sometimes this is not enough.
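As a toy illustration of the integrity principle defined above, one can attach a keyed authentication tag to data so that any unauthorized modification is detected. The sketch below uses Python's standard `hmac` module; it is not part of the thesis's platform, key management is out of scope, and the key and record contents are placeholders.

```python
# Sketch: detecting unauthorized modification of data with a keyed
# MAC (HMAC-SHA256). If the data change, verification fails.
# Illustrative only; the key below is a placeholder.
import hashlib
import hmac

KEY = b"shared-secret-key"  # hypothetical shared key, for illustration

def tag(data: bytes) -> bytes:
    """Compute an authentication tag over the data with the shared key."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

def verify(data: bytes, t: bytes) -> bool:
    """Return True only if the data have not been modified since tagging."""
    return hmac.compare_digest(tag(data), t)

record = b"patient=123;blood_pressure=120/80"
t = tag(record)
print(verify(record, t))                                # intact: True
print(verify(b"patient=123;blood_pressure=999/80", t))  # tampered: False
```

Note that `hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences during comparison.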
Usually, due to programming bugs, vulnerabilities occur in the software, allowing unauthorized parties to bypass these techniques and get access to data that should be confidential.

### 2.1.2 Examples of Data Security Breaches

Data Security breaches refer to attacks, usually through unauthorized access, on systems that contain private data. These attacks are commonly made by organized hacker groups to gain leverage against companies or to make a profit by selling the data on the black market. Next, we present some recent examples of Data Security breaches.

- **Sony Pictures hack**\(^1\). In 2014, a hacker group leaked confidential data from Sony Pictures in an attempt to gain leverage with the company to make it comply with their demands. The hacker group threatened to commit acts of terrorism in theaters if Sony released a movie related to the North Korean leader.
- **Ashley Madison data breach**\(^2\). In 2015, a group of hackers stole user data from the adultery website Ashley Madison, and threatened to release usernames and Personally Identifiable Information (PII) if the website was not shut down.
- **Yahoo! data breach**\(^3\). In 2016, Yahoo! reported two separate data breaches, occurring in 2013 and 2014, of over 1.5 billion user accounts, including Yahoo! email access, which in turn can reveal bank and family details as well as passwords for other services.

### 2.2 Data Privacy

Data Privacy, also referred to as Information Privacy, is the relationship between the collection and dissemination of data and the legal issues surrounding them. It refers to the measures taken to provide individuals with defenses for their personal data. Privacy can be defined as the ability or right of an individual to protect his personal information, and extends to the ability or right to prevent invasions of the personal space of said individual [4].
Privacy is an important field in information security because it gives an individual his personal space and defines his personal private information, giving the individual the right to decide which information is for sharing and which should be kept confidential. The right to privacy also limits the access that other entities, be they governments or private companies, have to personal data.

---

\(^1\)https://www.washingtonpost.com/news/the-switch/wp/2014/12/18/the-sony-pictures-hack-explained/
\(^2\)http://fortune.com/2015/08/26/ashley-madison-hack/
\(^3\)https://www.theguardian.com/technology/2016/dec/14/yahoo-hack-security-of-one-billion-accounts-breached

One of the prime examples of Privacy applied to information technology problems is related to Electronic Medical Records (EMR) [42]. These records must be handled with extra care because they contain a large amount of sensitive information about patients. Patient record systems should be able to disclose information only to selected personnel. Moreover, not all the information about the patient should be disclosed, only what is necessary to proceed in helping the patient. This example illustrates the tension between having access to the data, which can be useful, and at the same time keeping them closed to other users.

### 2.2.1 Privacy Protection Principles

New privacy protection principles have arisen in recent years [19]. Their definitions are as follows:

- **Unlinkability** is defined as the property that ensures privacy-relevant data cannot be linked across domains that are constituted by a common purpose and context. In other words, multiple actions from the same user or entity cannot be linked together.
- **Transparency** is defined as the property that ensures all privacy-relevant data processing can be understood and reconstructed at any time.
Transparency has to cover not only the actual processing, but also the planned processing and the processing already performed, so that the actions and entities involved are fully known. Transparency is related to the principle of openness and is a prerequisite for accountability. The individual must know and understand how his private data are being handled.
- **Intervenability** is defined as the property that ensures mediation is possible concerning privacy-relevant data processing, in particular by the people whose data are being processed. Intervenability is related to the rights of an individual, in the sense that the owner of privacy-relevant data must have the means to rectify or erase said data.

### 2.2.2 European Union Legislation

It is also important to mention, in the context of this work, the current legislation in the European Union (EU) regarding data protection. The Data Protection Directive\(^4\) is the current law regarding privacy in the EU and has been in force since 1995. More recently, a replacement has been proposed and accepted in the EU, the General Data Protection Regulation (GDPR)\(^5\), which will take effect in May 2018. Both laws are supported by guidance from European bodies, namely the European Union Agency for Network and Information Security (ENISA)\(^6\) for the GDPR, and the Article 29 Data Protection Working Party\(^7\) for the Data Protection Directive. According to ENISA, EU data protection law applies to any processing of personal data [18]. Personal data are defined as any information related to an identified or identifiable natural person. In the context of Big Data analytics, the focus is more on indirect identification, which translates into three different approaches:

- The possibility of isolating some or all records which identify an individual in a dataset.
- The linking of at least two records concerning the same individual, in the same database or in different databases.
- The possibility of inferring the value of an attribute in a dataset from the values of other attributes.

Another important cornerstone of the GDPR is its principles relating to data quality:

- The *fairness principle* requires that personal data should never be processed without the individual actually being aware of it.
- The *purpose limitation principle* implies that data can only be collected for specified, explicit and legitimate purposes.
- The *data minimization principle* states that the data processed should be only what is strictly necessary for the specific purpose previously determined by the data controller.

Together, these three principles mandate that data processing must be done with the consent of the subject, that the subject must be informed of the purpose of the processing, and that the processing must not deviate from this purpose without informing the subject.

---

\(^4\)http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:31995L0046
\(^5\)http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG&toc=OJ:L:2016:119:TOC
\(^6\)https://www.enisa.europa.eu/
\(^7\)http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083

Finally, it is also important to mention, in the context of this work, the rights of the data subject according to the GDPR. Two rights of the subject are particularly important: the *right of access* and the *right to object*. The *right of access* ensures that any data subject is entitled to obtain from the data controllers communication of the data subjected to processing, and to know the logic involved in any processing of data concerning him. This is particularly relevant in the context of Big Data analytics because it limits technological lock-ins and other competition impediments, and it enhances transparency and trust between users and service providers.
The *right to object* ensures that data subjects have the right to revoke any prior consent, and to object to the processing of data relating to them, giving them the power to remove themselves, completely or partially, from any data processing mechanism that uses their personal data.

### 2.2.3 Examples of Data Privacy Breaches

Recent years have seen a number of attacks on systems that handle personal information. Some relevant examples of Data Privacy breaches follow.
- **Massachusetts GIC medical encounter database**\(^8\). In 1997, a researcher from Carnegie Mellon University linked the anonymized database (which contained birth date, sex, and ZIP code) with voter registration records and was able to link medical records to individuals.

\(^8\)https://techpinions.com/can-you-be-identified-from-anonymous-data-its-not-so-simple/7627

- **AOL search data leak**\(^9\). In 2006, AOL released to the general public a text file containing search keywords from a large number of users, intended for research purposes. The users were not identified, but PII was present in many of the queries. The queries carried a user *id* attributed by AOL, and an individual could be identified and matched to their account and search history by combining this information with “voting lists”.
- **Netflix Prize**\(^{10}\). In 2007, Netflix created a contest to improve its recommendation system. To do that, it released a training dataset with all personal information about customers removed and customer *ids* replaced by randomized *ids*. Later, a group of researchers showed that this was not enough: by linking public information on another movie-rating website (IMDb) with the released dataset, they were able to partially de-anonymize the training dataset, compromising the identity of some users.
- **Target Pregnancy Leak**\(^{11}\). In 2012, Target, an American retail company, started merging data from user searches with demographic data in order to predict when its customers were pregnant, so as to approach them with specific advertisements. This constituted a clear violation of sensitive private information about its customers and their private lives.

### 2.3 Privacy Implications of Personal Data Processing

Data processing is the conversion of raw data into meaningful information: operations are performed on a given set of data to extract the required information, and data are manipulated to produce results that lead to the resolution of a problem or the improvement of an existing situation. DM is the process of discovering interesting patterns and knowledge from large amounts of data [32]. We describe the DM process according to the widely used CRISP-DM model [63] (Figure 2.1), in which the process is separated into six major phases, as described next.

\(^9\)https://techcrunch.com/2006/08/06/aol-proudly-releases-massive-amounts-of-user-search-data
\(^{10}\)https://www.wired.com/2009/12/netflix-privacy-lawsuit
\(^{11}\)https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#3001668b6668

- **Business Understanding**: In this initial phase, the project goals and requirements must be understood from a business perspective and then converted into a DM problem.
- **Data Understanding**: During this phase, an initial data collection is done, followed by a number of activities to get familiar with the data, understand how the data are organized, identify whether the data have quality problems, or even detect interesting subsets in the data collected.
- **Data Preparation**: This phase covers all the data preparation tasks needed to construct the final dataset from the initial raw data collected.
These tasks include attribute selection, data cleaning, and the transformation of data to fit the modeling tools.
- **Modeling**: In this phase, various modeling techniques are selected and applied, and their underlying parameters are calibrated to optimal values. Usually, several techniques can be applied to the same DM problem type, and some of these techniques require specific data formats.
- **Evaluation**: At this point in the DM process, one or more of the models that have been developed are expected to be of high quality, from a data analysis perspective. These models must be evaluated thoroughly to ensure that they properly achieve the business goals.
- **Deployment**: In this last phase, the knowledge gained through the DM process needs to be organized and presented in a way that the customer can use. This, of course, depends on the requirements presented at the beginning of the process. A common deployment resulting from DM is a simple report on the knowledge obtained.

In the DM process, we must be aware that private information about individuals can sometimes be used, which may lead to breaches of privacy. We define this private information as PII [55], i.e. information that can be used, alone or in conjunction with other information, to identify, contact, or locate a single person, or to identify an individual in a context.

### 2.3.1 Attack Models

When considering the security of a system, one must take into account the concepts of threat model, attack model, and type of adversary, to understand how best to implement an efficient and trustworthy security layer. Two types of adversary are commonly considered. The honest-but-curious adversary follows the protocol, but tries to extract information from its viewpoint to gain some form of advantage or access to confidential information. The malicious adversary can deviate from the protocol specification at will, and will try to disrupt the protocol and/or collect as much information as it can.
Cryptographic attacks are possible whenever a target system relies on cryptography for protection. An attack model is, in terms of cryptanalysis, a classification of cryptographic attacks specifying the type of access the attacker has to a system when attempting to break an encrypted message. Cryptographic attacks can be summarized in the following four categories:
- **Ciphertext-only attack**: The adversary has access only to the ciphertext and has no access to the plaintext. This is the most common type of attack, and resistance to it is a requirement for modern ciphers. An example is the brute force attack, where the attacker uses trial and error to decrypt the ciphertext.
- **Known-plaintext attack**: The adversary has access to a number of pairs of plaintexts and the corresponding ciphertexts.
- **Chosen-plaintext attack**: The adversary is able to encrypt arbitrary plaintexts and access the resulting ciphertexts, allowing a statistical analysis of the plaintext space.
- **Chosen-ciphertext attack**: The adversary is able to choose arbitrary ciphertexts and obtain the corresponding plaintexts.

When designing privacy solutions using cryptography, protections must be put in place against these kinds of attacks.

### 2.4 Privacy-Preserving Techniques

In this section, we describe some of the techniques used to preserve Data Privacy, namely Anonymization, Differential Privacy (DP), Secure Multi-Party Computations (SMPC), Garbled Circuits (GC), Oblivious Transfer (OT), Homomorphic Encryption (HE), and Functional Encryption (FE).

#### 2.4.1 Anonymization

Data anonymization is a type of information sanitization technique whose ultimate intent is privacy protection.
This can be achieved by either removing or encrypting PII from datasets so that the individuals whom the data describe remain anonymous [49]. Anonymization techniques use a variety of approaches, for example, *suppression*, where a piece of information (e.g., name, age) is removed from the dataset; *generalization*, where data are coarsened into less refined sets; *perturbation*, where data are modified by adding noise; and *permutation*, where sensitive associations between entities in the dataset are swapped. The goals behind data anonymization are tightly intertwined with the privacy goals we want to achieve for the data being processed. Usually, one or more of the techniques mentioned above are applied to the data until certain properties are met, for example, *k-anonymity*, *ℓ-diversity* or *t-closeness*:
- *k-anonymity* states that each individual whose data are released in a dataset must be indistinguishable from at least $k-1$ other individuals also present in the release [59].
- *ℓ-diversity* is an extension of *k-anonymity* that furthers the anonymization of data by reducing the granularity of the data representation such that, for any given record, there exist at least $\ell$ different sensitive attribute values, in addition to the guarantees made by *k-anonymity* [43].
- *t-closeness* is a further refinement of *ℓ-diversity*. It requires that the distribution of a sensitive attribute in any equivalence class be close to the distribution of that attribute in the overall table, effectively limiting the amount of individual-specific information an observer can learn [40].

### 2.4.2 Differential Privacy

The concept of Differential Privacy (DP) arose in response to Data Privacy breaches such as the ones mentioned in Section 2.2.3.
The classical security standard for statistical databases, which states that access to a statistical database should not enable an adversary to learn anything about an individual that could not be learned without that access, is not achievable because of the existence of auxiliary information, i.e. information available from sources other than the statistical database [23]. DP is a process of maximizing the accuracy of queries on statistical databases while minimizing the chances of identifying their records. The core of the procedure is based on randomized response [62], making it possible to infer statistical information from the dataset while still ensuring high levels of privacy. Detailed information about DP and algorithms designed to achieve it can be found in the literature [24].

### 2.4.3 Secure Multi-Party Computations

Secure Multi-Party Computations (SMPC) (also known as secure computation, multi-party computation, or privacy-preserving computation) is a protocol that allows different parties to jointly compute a function over their inputs while keeping those inputs private. The problem of computing functions while preserving the privacy of the inputs is referred to in the literature as a SMPC problem [64]. Generally speaking, a SMPC problem deals with computing any probabilistic function on any input, while also ensuring the correctness of the computation. It also guarantees that no more information is revealed to a participant in the computation than what can be inferred from that participant's input and output [29]. One strategy to solve these problems is to trust an external entity (a trusted third party) that can mediate the computation. This approach can be risky because it requires a third party that all participants agree to trust, which can sometimes be difficult to find. Sometimes, the data have such a high degree of importance to the participants that even disclosing them to a trusted third party is not viable.
When building a SMPC protocol, the most important properties that must be ensured are input privacy and correctness [28]:
- The *input privacy* property states that no information about the private data held by the parties can be inferred during the execution of the protocol. The only inferences about private data are those that could be drawn from seeing the output of the computation made by the protocol.
- The *correctness* property relates to the existence of malicious parties that could try to deviate from the normal functioning of the protocol. In these cases, the protocol should prevent honest parties from outputting incorrect results. There are two alternative approaches to implementing the correctness property: either the protocol is robust, i.e. it guarantees that the honest parties compute the correct output, or the honest parties abort the computation if they find an error during the execution of the protocol.

The first implementation of secure computation was introduced as Secure Two-Party Computations (STPC) [64]. It is a simplification of the SMPC problem, and a well-known protocol for STPC is Yao's GC protocol, detailed in Section 2.4.5. In STPC, there are only two participants in the computation; usually one is responsible for starting and encoding the computation mechanism, while the other is responsible for evaluating the computation. In SMPC, the parties have no special roles: the encoding is shared amongst the parties by secret sharing, and the evaluation is made by a protocol. An example of a SMPC implementation is FairplayMP [8]. Some recent implementations of SMPC protocols are based on *Secret Sharing*, which allows one party to distribute a secret among a number of parties by giving each party a share. Three of the most commonly used secret sharing techniques are Shamir Secret Sharing [56], Additive Secret Sharing, and Replicated Secret Sharing.
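As a minimal illustration of additive secret sharing, the sketch below splits a secret into shares that sum to it modulo a public prime. Because the scheme is linear, each party can add its shares of two secrets locally, and the resulting shares reconstruct the sum; the modulus and party count here are illustrative choices, not values from any particular protocol.

```python
import random

P = 2**61 - 1  # public prime modulus; all shares live in Z_P

def share(secret, n_parties=3):
    """Additive secret sharing: n random-looking shares that sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)  # last share fixes the sum
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Linearity: each party adds its two shares locally, with no communication;
# the resulting shares reconstruct x + y.
x_shares, y_shares = share(20), share(22)
z_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```

Any strict subset of the shares is uniformly random and reveals nothing about the secret; only the full set reconstructs it.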
### 2.4.4 Oblivious Transfer

Oblivious Transfer (OT) [48] is a protocol in which a sender transfers one of potentially many pieces of information to a receiver, but remains oblivious as to which piece has been transferred. Let $s^0$ and $s^1$ be two strings held by a sender who wants to transfer one of them to a receiver holding a selection bit $b$; the protocol allows only the input $s^b$ to be transferred; the receiver learns nothing about $s^{1-b}$, and the sender does not learn $b$. An interesting implementation of OT is the one by Naor and Pinkas [44], in which the authors describe extensions of the basic 1-out-of-2 OT protocol to a 1-out-of-$N$ protocol and a $k$-out-of-$N$ protocol. OT has been shown to be a fundamental cornerstone of modern cryptography [36], and it is an essential building block for communication between parties in a SMPC protocol.

### 2.4.5 Garbled Circuits

Yao's Garbled Circuits (GC) [65] are a cryptographic protocol that allows two mutually mistrusting parties to evaluate a function over their private inputs without resorting to a trusted third party. In other words, GC allow parties holding inputs $x$ and $y$ to evaluate an arbitrary function $f(x, y)$ without leaking any information about their inputs beyond what can be inferred from the function output. The idea behind GC is that one party prepares an encrypted version of a circuit that computes $f(x, y)$, and the other party then computes the output of the circuit without learning any intermediate values. Some optimizations have been proposed for Yao's GC: Kolesnikov and Schneider [37] present a technique that eliminates the need to garble XOR gates, and Pinkas et al. [46] present a technique that reduces the size of a garbled table from four to three ciphertexts.
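To make the garbling idea concrete, the following toy sketch garbles and evaluates a single AND gate. It is an illustration of the principle only, not of the optimized constructions cited above: a SHA-256 hash stands in for a proper key-derivation/encryption function, and a block of zero bytes acts as a validity tag so the evaluator can recognise the one row it can decrypt (real implementations use point-and-permute instead).

```python
import os, hashlib, random

def H(ka, kb):
    # Toy row key: hash of the two input-wire labels (32 bytes).
    return hashlib.sha256(ka + kb).digest()

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and_gate():
    # One random 16-byte label per wire value, for wires a, b (inputs) and c (output).
    wires = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            row_plain = wires["c"][va & vb] + b"\x00" * 16   # output label + zero tag
            table.append(xor(H(wires["a"][va], wires["b"][vb]), row_plain))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return wires, table

def evaluate(table, ka, kb):
    # The evaluator holds exactly one label per input wire and can decrypt
    # exactly one row; the zero tag identifies it with overwhelming probability.
    for row in table:
        pt = xor(H(ka, kb), row)
        if pt[16:] == b"\x00" * 16:
            return pt[:16]
    raise ValueError("no row decrypted")

# Truth-table check: each input pair yields the output label for a AND b,
# while the evaluator never sees the other rows' plaintexts.
wires, table = garble_and_gate()
for a in (0, 1):
    for b in (0, 1):
        assert evaluate(table, wires["a"][a], wires["b"][b]) == wires["c"][a & b]
```

In the full protocol the evaluator obtains its own input-wire labels via OT (Section 2.4.4), so the garbler never learns the evaluator's input bits.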
### 2.4.6 Homomorphic Encryption

Homomorphic Encryption (HE) [51] is a cryptographic technique that allows computations to be carried out over ciphertexts so that, when decrypted, the resulting plaintext reflects the computations made. In other words, HE allows some computation, for example addition, to be performed on a ciphertext without decrypting it, and the result is the same as performing that computation on the plaintext. This is of great importance because it allows chaining multiple services that perform computations on a ciphertext, without the need to expose the data to those services. Homomorphic cryptosystems can be classified into two distinct groups: partially homomorphic cryptosystems and fully homomorphic cryptosystems.
- In Partially Homomorphic Encryption (PHE), only one operation is permitted, for example addition, multiplication or XOR. Some examples of existing partially homomorphic cryptosystems are the ElGamal cryptosystem [26], unpadded RSA [52], and the Paillier cryptosystem [45].
- In Fully Homomorphic Encryption (FHE), it is possible to compute two different operations on the ciphertext, namely addition and multiplication, which together suffice to evaluate arbitrary circuits. This concept was first introduced in the 1970s [51], but it remained a theoretical result until recently, when fully homomorphic implementations were developed, for example Gentry's cryptosystem [27].

### 2.4.7 Functional Encryption

In Functional Encryption (FE) systems, a decryption key allows a user to learn a specific function of the encrypted data, while preventing that same user from learning anything more about the encrypted data. In other words, having a secret key only allows the computation of a specific function over the ciphertext [11]. When comparing FE with FHE, the main difference is that, in an FHE scheme, we compute an encryption of $f(x)$ from an encryption of $x$, whereas in an FE scheme we compute $f(x)$, in the clear, from an encryption of $x$. More details can be found in the literature [3].
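To make the additively homomorphic property of Section 2.4.6 concrete, the sketch below implements a toy Paillier cryptosystem with deliberately tiny, insecure primes (real deployments use primes of 1024+ bits). Multiplying two ciphertexts modulo $n^2$ yields an encryption of the sum of the plaintexts:

```python
import random
from math import gcd

# Toy Paillier cryptosystem -- tiny primes, for illustration only.
p, q = 10007, 10009                      # insecure demo primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                     # valid because we pick g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    # c = g^m * r^n mod n^2, with g = n + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def add(c1, c2):
    # Homomorphic addition: multiplying ciphertexts adds the plaintexts.
    return (c1 * c2) % n2

assert decrypt(add(encrypt(5), encrypt(7))) == 12
```

The randomizer $r$ makes encryption probabilistic: two encryptions of the same plaintext are different ciphertexts, yet both decrypt (and combine) correctly.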
### 2.5 Privacy-Preserving Machine Learning

The combination of Machine Learning (ML) and privacy-preserving techniques stems from the need to learn over large datasets while protecting the privacy of the data, without degrading data quality through anonymization techniques. In the context of knowledge acquisition, ML techniques are an important addition to the data processing step. An example is the classification method. Classification is a two-step process: a classification algorithm is employed to build a classifier for the data by analyzing a training set made of tuples of data and their associated labels, and the classifier is then used to predict class labels for new data. Due to the large size of the datasets produced in Big Data operations, classification algorithms have a large quantity of data to learn from, making them less prone to erroneous classification of new data. In the field of ML, we can identify a number of algorithms that can be used in knowledge learning. In Table 2.1, we list some of them, with a brief description, and identify a PPML implementation present in the literature.

Table 2.1: PPML algorithms.

| Machine Learning Algorithm | Short summary | Privacy-Preserving Technique | Reference |
|----------------------------|-------------------------------------------------------------------------------|------------------------------|-----------|
| DT | Protocol for distributed learning of Decision-Tree classifiers. | SMPC | [14] |
| Naive Bayes | Differentially private naive Bayes classifier. Centralized access to the dataset. | DP | [60] |
| SVM | Algorithm for Support Vector Machines classification over vertically partitioned data. | SMPC | [67] |
| k-M | $k$-Means clustering based on additive secret sharing. | SMPC | [22] |
| LR | Logistic Regression based on Differential Privacy. | DP | [15] |

### 2.6 Use Cases

In terms of PPML and its applications, it is important to distinguish the context of the data being processed. Different data can be subject to different constraints regarding laws and privacy: some sensitive data may only be processable in a local environment, while other data can only be processed in a less individualized way. We now detail three relevant and diverse use cases that are bound to different privacy constraints.
- **Health records:** Healthcare systems are one example where vast amounts of data are collected every day. It is relevant to do knowledge learning on patient records, for a better understanding of patients and to improve the healthcare system. Patient records contain very sensitive information about individuals and cannot be processed unless the DM system complies with the applicable legislation on Data Privacy. It is therefore of interest to build privacy-preserving systems for healthcare, so that hospitals and other health-related organizations can share and infer knowledge without violating the privacy of their patients.
- **Governance (students and taxes):** A group of researchers conducted a statistical study in 2015 using SMPC to look for correlations between working during university studies and failing to graduate on time [10]. For this study, it was necessary to link the database of individual tax payments with the database of higher education institutions. These types of governmental data are subject to strict legislation and cannot simply be handled without strong privacy guarantees. To solve this problem, a SMPC system was developed and deployed that could assure a level of privacy in compliance with the laws on Data Privacy. The data processing steps were all performed using SMPC between three parties, using OT so that no party would learn the others' inputs.
In the end, the study using SMPC was compared with an anonymized study using $k$-anonymity with $k = 3$. The loss of samples in the latter was 10%-30%, depending on the demographic group, suggesting that studies run on existing databases using SMPC to enforce privacy can give more accurate results than the same studies run under $k$-anonymity measures.
- **Human mobility:** Another subject that provides great challenges in the field of Data Privacy is the mobility traces generated by people when driving, walking, etc. Mobility traces are highly unique, so it is possible, even after anonymizing a dataset, to link individuals to their mobility patterns [20]. Since mobility data contain the approximate whereabouts of individuals, they can be used to reconstruct their movements across space and time. Applying privacy-preserving techniques to process these highly sensitive data can result in robust privacy-compliant geographic-based recommendation systems.

### 2.7 Summary

This chapter provided an overview of the state of the art surrounding privacy-preserving data processing. We started by defining Data Security and Data Privacy in Sections 2.1 and 2.2. We described the concepts of DM and data processing in Section 2.3. In Section 2.4, we described privacy-preserving techniques that can be used to implement a PPML algorithm. We presented PPML in Section 2.5. Finally, in Section 2.6, we discussed known use cases that illustrate what can be achieved in the field.

In this chapter, we present the Big dAta pRivacy by Design platform (BARD), the Altran project that this work is a part of. In Section 3.1, we present the motivations behind the creation of BARD, and we define its objectives in Section 3.2. Section 3.3 presents the architectural specifications of the project. Finally, Section 3.4 details our individual contributions to BARD, since the development of this project was a team effort.
### 3.1 Motivation

The idea for this project was motivated by the current growth trend in the Big Data market. The evolution of Big Data in recent years, driven by the increasing number of devices connected to the Internet, has provided analysts with the data to develop and improve systems in a wide range of domains, such as Healthcare and the Automotive Industry. But these data contain private information about individuals, and, as we have shown before, processing them without certain precautions leads to breaches of private information. The BARD project contributes to solving this problem by raising awareness of it and by providing solutions in the form of methods and protocols for building privacy-preserving Big Data systems.

### 3.2 Objectives

The objectives defined for BARD are described below:
- Define methods to support the protection of personal data when harvesting, sharing, querying and processing data assets, supporting all the decisions to be taken while developing the platform.
- Analyze the effects of the current legislation on the validation of the proposed solution, as it may restrict which technical solutions can be used.
- Conceive a Privacy-by-Design architecture which strikes the right balance between the data subject's needs, the data consumer's demands and the legal constraints.
- Develop a Privacy-by-Design platform based on a reference architecture for the entire data flow process, in order to maximize value for both people and companies.

The expected result of BARD is a platform that provides mechanisms for the protection of personal data, complies with the current legislation, and assures Privacy by Design and by Default.

### 3.3 Architecture

We now present the architectural specifications of BARD. The Cross Industry Standard Process for Data Mining (CRISP-DM) model described in Figure 2.1 was used as a baseline to represent the data life cycle.
BARD focuses on Data Quality (e.g., cleaning, annotation), Data Representation (e.g., anonymization, ciphering) and Data Processing (e.g., computation on ciphertext). The Data Quality step refers to the transformation of raw input data into a structured, consistent, and, whenever possible, complete representation. An important aspect of this step is that the data must be processed in plaintext by a trusted entity, meaning that this is done by the data owner or by an entity that the data owner trusts and that has explicit permission to perform the operations. The Data Representation step refers to the protection of Personally Identifiable Information (PII) contained in the data. These data should be: integrated using privacy-preserving data integration techniques; aggregated using anonymization techniques; and represented using either hashing techniques or homomorphic cryptosystems. The Data Processing step follows two different approaches. On one hand, secure computation techniques, such as Secure Multi-Party Computations (SMPC) with Garbled Circuits (GC), or Homomorphic Encryption (HE), are performed over the data. On the other hand, Machine Learning (ML) algorithms, adapted to work with hashed or encrypted data, are used to perform knowledge learning. We now describe the internal components of a BARD solution. As stated in the objectives (Section 3.2), the main goal of BARD is to produce a platform providing mechanisms with which companies can perform privacy-preserving computations for ML algorithms that are respectful of user privacy and comply with the legislation. A solution is described by the following components:
- **A dataset** to train the ML algorithms, or the values representing the already trained algorithms.
- **A sample** or a set of samples representing the user inputs to be predicted.
- **Prediction algorithms** that depend on the ML algorithms and the privacy-preserving techniques chosen.
- **A set of toolkits** for each of the techniques used.

Together, these components allow the user of the platform to perform Privacy-Preserving Machine Learning (PPML) over data of their own choosing. In Figure 3.1, we present the architecture of BARD. With it, we aim to provide companies with a way to integrate their Big Data processes with privacy-preserving ML algorithms, allowing them to offer additional data privacy guarantees to their clients. We assume that only two parties exist: the client and the server. The client represents a user or an individual who owns the data and wishes to perform some processing over it. The server represents a cloud or service provider who has the computational capabilities and know-how to perform such processing. The data are provided by the client in the form of a dataset. That dataset is then pre-processed and the data are sanitized to remove outliers and missing values. The model training is performed in the usual manner. The model evaluation process is the main focus of our work, and it is where the privacy-preserving techniques are deployed. At the end of the flow, the platform produces the prediction results.

### 3.4 Our Contributions to BARD

Our contribution to the BARD project was the development of a solution using the VIPP toolkit for privacy-preserving computations with GC. This includes the development of a baseline system, the adapted algorithms, the testing of the various toolkits, and the actual development of the final solution with GC. As mentioned in Section 1.1, this work is part of a larger project, and the development of the solution was done by a team. Some of the results shown in this dissertation are presented for completeness only, as they were developed by the BARD team at Altran.
Those results are presented in Sections 5.5.3 and 5.5.4, for the results obtained using HE with Logistic Regression (LR) and with Support Vector Machines (SVM), respectively, and in Sections 5.6.3 and 5.6.4, for the communication costs of using Partially Homomorphic Encryption (PHE) and Fully Homomorphic Encryption (FHE). All the remaining results were obtained by the author of this dissertation.

### 3.5 Summary

In this chapter, we discussed the Altran project that this thesis is a part of. We explained the motivations behind its creation in Section 3.1, and its objectives in Section 3.2. We detailed the architecture of BARD in Section 3.3. Finally, we identified our individual contributions to the project in Section 3.4.

When building a platform for Privacy-Preserving Machine Learning (PPML), we must go beyond the traditional data processing steps considered in the CRISP-DM model, discussed in Section 2.3 and in Figure 2.1, and take extra care when preprocessing data so as to incorporate the cryptographic techniques. This chapter describes the work done in implementing a PPML platform. In Section 4.1, we present the conceptual view of our platform. To illustrate the applicability of the solution developed, Section 4.2 details a use case for our implementation. In Section 4.3, we describe the structure we followed in implementing and evaluating the platform. Section 4.4 offers a description of the datasets chosen to test our platform. We then explain the preprocessing applied to those datasets in Section 4.5. Section 4.6 presents the baseline implementation of the chosen Machine Learning (ML) algorithms, resorting to a widely used ML toolkit for Python. In Section 4.7, we present the cryptographic protocols used, why we used them, and how we implemented them.

### 4.1 Platform

In Figure 4.1, we present the conceptual view of our platform.
The *data resources* represent the datasets used in the classification process. The data processing itself is done by combining ML algorithms with cryptographic techniques for performing privacy-preserving computations. The Application Programming Interface (API) layer abstracts details and provides the operations of the platform itself, allowing a simplified building of applications and data visualizations. The use cases describe the various subjects that can be addressed using this platform, and allow us to place it in real-world scenarios with high impact and demand in Big Data operations. Use cases beyond Healthcare, Mobility and Finance are possible, as the platform is designed for general use.

### 4.2 Use Case: Healthcare

As mentioned in Section 2.6, healthcare systems generate vast amounts of data every day. Processing these data can be beneficial both for the health institutions (hospitals, clinics) and for the patients. However, institutions cannot freely use Electronic Medical Records (EMR) without the consent of the patients and compliance with data protection legislation. As a result, this processing is performed *in-house*, with only a few exceptions\(^1\). The problem is that developing and/or maintaining a Data Mining (DM) infrastructure incurs costs that an institution may not be willing to support. Our contribution to mitigating this standoff between the gains and costs of mining EMR is to provide a product that removes the costs of maintenance and development from the institutions, while at the same time providing enough privacy guarantees to comply with existing legislation. We now describe a typical use case scenario for privacy-preserving processing of EMR.
\(^1\)https://www.reuters.com/article/us-health-medicalrecords-sharing/few-u-s-hospitals-can-fully-share-electronic-medical-records-idUSKCN1C72UV

• **Description:** Design and implementation of a platform to process EMR in order to improve treatments and diagnoses, while keeping identities private. This is achieved by training models using these data and then predicting medical conditions for future patients. All the computations should be done resorting to privacy-preserving techniques.

• **Actors involved:** Healthcare institutions, patients, medical staff.

• **Preconditions:** Access to data and to EMR of patients. Consent from each patient regarding the processing of their data.

• **Basic Flow:**
1. The institution supplies the platform with data to train the models for one or more ML algorithms. This training must be done in an encrypted and/or anonymized domain.
2. A new patient arrives at the institution and is asked whether they consent to the use of the platform to speed up their diagnosis, including consent to data collection and data processing. If the patient agrees, the process can continue.
3. Patient data are collected by the medical staff, including symptoms, medical history, etc.
4. These data are supplied to the platform, which performs one or more predictions, depending on the number of models it has, using privacy-preserving techniques to do so.
5. The platform informs the doctor of the prediction results.
6. The doctor decides on the appropriate medical action, taking into account their medical background and the information supplied by the platform.

• **Postconditions:** The platform has received data from the institution. The platform has trained different instantiations of ML algorithms. The platform successfully predicted the labels for the new samples.

4.3 Structure

The process of implementing and testing a PPML platform can be achieved in a number of steps.
The plan we designed was composed of three steps, each with three processes that are repeated for all steps.

- **Step 1**: Use a toolkit (scikit-learn) to implement baseline versions of the chosen ML algorithms. Each of the processes was performed in black boxes\(^2\) supplied by the toolkit:
1. Compute the model using the training set.
2. Use the model to predict the labels of the testing set.
3. Evaluate the performance of the model by comparing predicted labels with the real ones.
- **Step 2**: Write scripts implementing the ML algorithms that explicitly contain all the equations for the processes described above.
- **Step 3**: Rewrite the scripts from Step 2 using cryptographic techniques to perform the necessary computations, adding protection to all the processes described above.

It is important to mention that the processes in Steps 2 and 3 were implemented in reverse order, not only because training is more complex than prediction, which in turn is more complex than evaluation, but also because this ordering better matches the purpose and logic of implementing them with cryptographic techniques for privacy-preserving purposes. Figure 4.2 presents a visual representation of the structure we followed, and shows what was left for future work.

\(^2\)A black box is a device, system or object which can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings.

4.4 Datasets Used

For running the experiments, we used datasets that are commonly considered in the literature, namely the *Breast Cancer Wisconsin Diagnostic* dataset\(^3\), the *Pima Indians Diabetes* dataset\(^4\), the *Credit Approval* dataset\(^5\), and the *Adult Income* dataset\(^6\). Table 4.1 presents a brief description of each dataset, and the number of features after the preprocessing techniques presented in Section 4.5.
Table 4.1: Description of the datasets used, and the number of features before and after preprocessing.

| Dataset | Subject | Instances | #Features | #Features after one-hot encoding |
|--------------------------|-------------|-----------|-----------|----------------------------------|
| Pima Indians Diabetes | Healthcare | 768 | 8 | 8 |
| Breast Cancer Wisconsin | Healthcare | 569 | 30 | 30 |
| Credit Approval | Finance | 690 | 15 | 51 |
| Adult Income | Governance | 48842 | 14 | 108 |

\(^3\)https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
\(^4\)https://archive.ics.uci.edu/ml/datasets/pima-indians-diabetes
\(^5\)http://archive.ics.uci.edu/ml/datasets/credit+approval
\(^6\)https://archive.ics.uci.edu/ml/datasets/adult

4.5 Data Preprocessing

Although our data are obtained from publicly available sources, some preprocessing is still required. For example, the existence of categorical data, which some ML algorithms cannot process directly, must be addressed. We now describe some of the techniques we used in the data preparation phase for the datasets described in Table 4.1.

- **One-hot Encoding**\(^{[34]}\) was initially used in digital circuits in order to determine the state of a state machine without using a decoder. The binary code is converted into a group of bits in which only one bit can have the value high (1), and all the others low (0). In ML, we apply this technique to deal with the problems that arise when using datasets with categorical (or nominal) data. Most ML algorithms require numerical representations of the data. A solution can be to assign an integer number to each different value present in the data, but this leads to the model assuming a natural ordering between categories that may not exist. Using one-hot encoding on categorical data allows us to circumvent this problem by creating additional binary variables for each unique value, e.g., if a variable describing pets has the values “dog”, “cat” and “fish”, after encoding, three more variables will be added to the dataset, each representing a possible value.
Then, a high (1) will be placed on the binary variable that represents the original value, and low (0) on all the others. Using the pets example, if the original value was “dog”, the resulting three binary variables will be high on the variable representing “dog”, and low on the other two, i.e. “dog” becomes \((1, 0, 0)\).

- **Feature Scaling** is a technique to normalize the range of the data. For some ML algorithms, a broad range of values in one of the features may cause that feature to dominate the modeling. There are different ways of achieving this, for example, *rescaling*, where the range of values is scaled to a target range (usually \([0, 1]\) or \([-1, 1]\)), or *standardization*, where the features are rescaled so that they have the properties of a standard normal distribution, i.e. mean equal to 0 and standard deviation equal to 1.

In our implementation, we used one-hot encoding in the *Adult Income* dataset and in the *Credit Approval* dataset. These two datasets contain categorical features that required encoding before being processed by the ML algorithms. In the *Adult Income* dataset, we have multiple examples of categorical features, namely *work-class*, *education*, *marital-status*, *occupation*, *relationship*, *race*, *sex* and *native-country*. These were all encoded so that only numerical features remained, causing a large growth in the number of features, from the initial 14 to 108. In the *Credit Approval* dataset, we encoded all the non-numerical features, changing the number of features from 15 to 51. Feature scaling was used in all of the datasets, in order to reduce the errors caused by wide ranges. We rescaled all values to numbers between 0 and 1. Besides the techniques mentioned above, it was also necessary to deal with the missing values that existed in the datasets. For the numerical features, we replaced those missing values with the median of the existing values.
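As an illustration only, the preprocessing just described (one-hot encoding, rescaling to \([0, 1]\), and median imputation for numerical features) can be sketched with pandas; the toy frame and column names below are hypothetical stand-ins, not one of the actual datasets:

```python
import pandas as pd

# Toy frame standing in for a dataset with one numerical and one
# categorical feature (hypothetical values, for illustration only).
df = pd.DataFrame({
    "age": [25.0, None, 47.0, 33.0],
    "pet": ["dog", "cat", "fish", "dog"],
})

# Median imputation for the missing numerical values.
df["age"] = df["age"].fillna(df["age"].median())

# One-hot encoding: "dog" becomes (0, 1, 0) over (cat, dog, fish).
df = pd.get_dummies(df, columns=["pet"], dtype=float)

# Rescaling of all values to the [0, 1] range.
df = (df - df.min()) / (df.max() - df.min())
```

The binary columns produced by the encoding already lie in \([0, 1]\), so the rescaling step leaves them unchanged.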
In the case of the categorical features, the missing values are treated as another valid value for that feature, which makes it easy to apply one-hot encoding to the whole dataset. Finally, our setup required a validation and a testing set, which some of the datasets did not contain originally. We divided those datasets into training, validation and testing sets, using a proportion of 70/15/15, respectively. In the cases where a testing set already existed, the validation set was created from the training set, and forced to have the same size as the testing set.

### 4.6 ML Algorithms

The baseline approach consists of setting up reference values so that meaningful comparisons can be achieved. To understand the overhead created by privacy-preserving technologies, we implemented the baseline using the publicly available ML toolkit for Python, *scikit-learn*\(^7\). In order to understand and explain what was done and why, we next detail the ML algorithms that were implemented in the baseline and in Section 4.7.

\(^7\)http://scikit-learn.org

4.6.1 Decision Trees

Decision Trees (DT) [47] are a decision support tool that uses a tree-like graph to represent decisions and their possible outcomes. The algorithm is composed of conditional control statements. DT learning uses DT as a predictive model for classification. These DT are composed of nodes and leaves. The nodes represent decisions to take, or more specifically, thresholds that a feature is compared against in order to decide which branch of the tree to follow. The leaves represent class labels. To build a DT classifier there are a number of algorithms that can be used, each with different approaches and benefits. Examples include ID3, C4.5, C5.0 and CART [58]. These algorithms focus on optimizing one or more metrics, such as Gini impurity or information gain [53]. The classification of a sample with a DT is an intuitive process.
Each node of the tree has the information on which feature in the sample to compare against the threshold. Classifying a sample with a DT is done by traversing the tree starting at the top, comparing the value selected at each node with its respective threshold, and, depending on the result, choosing one child node or the other. When a leaf is reached, a class label is retrieved from the leaf and assigned to the sample, ending the classification process. At each tree node, a decision is computed using:

\[ f_{DT}(x_i) = x_i \geq \theta_j \] (4.1)

where \(x_i\) is the feature value of interest of the testing sample and \(\theta_j\) is the decision threshold of node \(j\). If the output is 0, the left-hand child is selected; if it is 1, the right-hand child is selected.

4.6.2 Support Vector Machines

Support Vector Machines (SVM) [17] are supervised learning models used for classification and regression analysis. A SVM model represents the examples as points in space, mapped so that the margin between the two classes of the data is as wide as possible. The training examples that lie on the boundary of this margin, closest to the separating *hyperplane*, are called support vectors. New samples are mapped into that same space and assigned a predicted class based on which side of the gap they lie. To calculate the SVM for linear classification in the *hard-margin* case, i.e. when the training data is linearly separable, we select two parallel hyperplanes so that the distance between them is as large as possible. These hyperplanes can be described by Equations 4.2.

\[ \begin{align*} \vec{w} \cdot \vec{x} - b &= 1, \\ \vec{w} \cdot \vec{x} - b &= -1 \end{align*} \] (4.2)

where $\vec{w}$ is the normal vector to the hyperplanes, $\vec{x}$ is a training sample and $b$ is a scalar.
To maximize the distance between the hyperplanes, we minimize the value of $||\vec{w}||$ subject to $y_i(\vec{w} \cdot \vec{x}_i - b) \geq 1$, for $i = 1, \ldots, n$, where the $y_i$ are either 1 or $-1$, depending on the class label, and $n$ is the number of samples in the training set. In the case where the data is not linearly separable (*soft-margin*), we minimize instead the *hinge* loss function given by Equation 4.3.

\[ f(w, \lambda) = \left[ \frac{1}{n} \sum_{i=1}^{n} \max (0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)) \right] + \lambda ||\vec{w}||^2 \] (4.3)

where the parameter $\lambda$ determines the tradeoff between increasing the margin size and ensuring that the $\vec{x}_i$ lie on the correct side of the margin. For SVM non-linear classification, we use the *kernel trick* [54], in which the dot product is replaced by a non-linear kernel function. The most used kernels are:

- Linear: $k(\vec{x}_i, \vec{x}_j) = (\vec{x}_i \cdot \vec{x}_j)$
- Polynomial: $k(\vec{x}_i, \vec{x}_j) = (\vec{x}_i \cdot \vec{x}_j)^d$
- Radial Basis Function (RBF): $k(\vec{x}_i, \vec{x}_j) = \exp(-\gamma ||\vec{x}_i - \vec{x}_j||^2)$

The classification of new samples in SVM is done using the scoring function in Equation 4.4, where each testing sample $x$ is assigned a prediction label.

$$f_{\text{SVM}}(x) = \sum_{i=1}^{m} \alpha_i K(x_{SV}^{(i)}, x) + b \quad (4.4)$$

where $\alpha_i$ is the coefficient associated with the support vector $x_{SV}^{(i)}$, $K$ is the chosen kernel function, and $b$ is a scalar.

### 4.6.3 k-Means

$k$-Means ($k$-M) clustering [41] is a method commonly used to partition a dataset into $k$ groups. It proceeds by selecting $k$ initial cluster centers (centroids) and then iteratively refining them. This refining is done in two distinct steps:

1. Each instance is assigned to its closest cluster. This is done by calculating the Euclidean Distance (ED) between each instance and each cluster center.
Then, the lowest distance indicates which cluster the instance must be assigned to.

2. Each cluster center is updated to be the mean of all the instances assigned to it.

The algorithm stops when the centroids no longer change position. Depending on the data and the initialization, it is not guaranteed that the optimal solution is found [35]. The classification of each testing sample is done similarly to step 1, i.e. by computing the ED of the new sample to each centroid and finding the cluster whose centroid is closest to it. The label of that cluster becomes the predicted label of the sample. Equation 4.5 describes the prediction:

$$f_{k\text{-M}}(x) = \arg\min_{j} d_E(x, C_j) \quad (4.5)$$

where $C_j$ are the centroids of the clusters, $x$ is the testing sample and $d_E$ is the ED.

4.6.4 Logistic Regression

Logistic Regression (LR) [61] is a regression model in which the dependent variable is categorical. This variable is usually binary, i.e., it can only take two values, usually 0 or 1, representing opposite outcomes such as “win/lose” or “healthy/sick”. This binary logistic model is used to estimate the probability of a binary response based on one or more variables. To define LR, one must begin with the logistic function, given by Equation 4.6.

\[ \sigma(t) = \frac{e^t}{e^t + 1} = \frac{1}{1 + e^{-t}} \] (4.6)

where \( t \) is the input. If we express \( t \) as \( t = \beta_0 + \beta_1 x \), we can write the logistic function as in Equation 4.7.

\[ F(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}} \] (4.7)

To classify testing samples we use Equation 4.8 to assign a prediction label to each sample \( x \).
\[ f_{LR}(x) = \beta_0 + \sum_{i=1}^{m} \beta_i x_i \] (4.8)

4.7 Privacy-preserving Algorithms

In the final step of our implementation, we made adjustments to the evaluation processes of the ML algorithms discussed above, so that we could apply two privacy-preserving techniques, namely Garbled Circuits (GC), described in Section 2.4.5, and Homomorphic Encryption (HE), described in Section 2.4.6. These two techniques offer different means to obtain privacy-preserving computations, and we must consider their characteristics when choosing which cryptographic technique to pair with each ML algorithm. GC builds ciphered Boolean circuits, and most computations can be implemented on them. However, arithmetic computations require a large number of logic gates, creating an overhead that makes GC very slow. For this reason, for some of the ML algorithms, we used a HE system instead, since it offers arithmetic operations as core operations. Due to these constraints, we decided to adapt DT and k-M to be evaluated using GC, and SVM and LR to be evaluated using HE.

4.7.1 Garbled Circuits and Decision Trees

The process of evaluating a DT in a privacy-preserving context is similar to evaluating it in the usual manner, as described in Section 4.6.1. The main differences are the use of ciphered Boolean circuits instead of plain operations, i.e. basic operations such as additions and comparisons are replaced with logic gates, and the fact that the evaluation of the DT involves evaluating every single node in it. Although the use of GC effectively hides the DT evaluation process from unwanted parties, it does not hide the sparseness of the DT, which could leak some relevant information about the original data, meaning that the use of expanded DT is preferable for preserving privacy.

Figure 4.3: Boolean circuit of each node in a DT.

In Figure 4.3 we show how the computations are done inside each node of the DT.
Each node contains the featureID, the ID of the feature to be selected from the sample to be classified, and the threshold, the value that is compared against the selected feature and that decides which branch of the tree to follow next. The first Multiplexer (MUX) gate selects from the sample the feature to be compared. The greater-than gate compares the selected feature with the threshold. Then, the value from the comparison (0 or 1) is used as a selection bit in the second MUX to choose the next_featureID and next_threshold for the next node in the tree. It is also important to mention that the trees we used are always complete, i.e. the number of nodes $n$ is always the maximum possible, which can be defined as $n = 2^{h+1} - 1$, where $h$ is the height of the tree. We can see in Figure 4.4 the implications of this expansion of the binary trees.

Figure 4.4: Expansion of binary trees. (a) Original DT. (b) Complete DT.

In Figure 4.4(a) we have a binary DT with height 3 and with different path lengths to the leaves, depending on the branch of the tree taken, while in the tree in Figure 4.4(b), all paths have the same length. With this, we can effectively hide the sparseness of the tree, which could leak relevant information about the original data. However, this solution exponentially increases the total number of nodes that need to be evaluated, which decreases performance significantly, as we will see in Chapter 5.

### 4.7.2 Garbled Circuits and k-Means

As in the previous section, evaluating the k-M algorithm in a privacy-preserving manner is similar to evaluating it in the usual manner. We took the operations in the prediction step and transformed them into Boolean circuits, with logic gates such as multiplexers, adders, etc. In Figure 4.5 we show how we designed the circuit to represent the $k$-M prediction.
The $d_E$ blocks represent the operations to calculate the ED between the sample and each centroid provided by the $k$-M model. The ARGMIN block determines which of the ED is the smallest. As shown in the results in Section 5.5.2, there is an exponential decrease in performance when the number of clusters increases or the sample has a larger number of features. This is caused by the multiplications necessary for computing the ED. Usually, this would be a reason to use HE instead, but comparisons under HE are very complex to implement and require considerable computational power [9].

### 4.7.3 Homomorphic Encryption and Logistic Regression

For the LR algorithm, in order to use a Fully Homomorphic Encryption (FHE) system, the prediction function described in Equation 4.8 must be converted to:

$$f_{FHE}(x) = D_k \left( E_k(\beta_0) + \sum_{i=1}^{m} E_k(\beta_i) \cdot E_k(x_i) \right) \quad (4.9)$$

where $E_k$ represents the encryption operation and $D_k$ represents the decryption operation using the key $k$. Converting Equation 4.9 to be computed using a Partially Homomorphic Encryption (PHE) system is straightforward, but it can only be done under two assumptions: 1) the data to be evaluated \((x)\) and the model parameters \((\beta_0, \beta_1, \ldots, \beta_m)\) must come from two different parties, and 2) the owner of the model parameters must be the one processing the data. Under these assumptions, the linear prediction function for a multiplicative PHE system becomes:

\[ f_{PHE}(x) = D_k \left( E_k(\beta_0) \cdot \prod_{i=1}^{m} E_k(x_i)^{\beta_i} \right) \] (4.10)

### 4.7.4 Homomorphic Encryption and Support Vector Machines

For the SVM algorithm, we only considered the linear kernel, as it simplifies the scoring function.
Equation 4.4 is then simplified to the following:

\[ f(x) = \sum_{i=1}^{m} \alpha_i (x_{SV}^{(i)} \cdot x) + b = \sum_{i=1}^{m} \alpha_i \sum_{j=1}^{n} x_j x_{SV}^{(i,j)} + b \] (4.11)

To compute this function using a FHE system, we must convert it to:

\[ f_{FHE}(x) = D_k \left( \sum_{i=1}^{m} E_k(\alpha_i) \cdot \sum_{j=1}^{n} E_k(x_j) \cdot E_k(x_{SV}^{(i,j)}) + E_k(b) \right) \] (4.12)

where \(E_k\) represents the encryption operation and \(D_k\) represents the decryption operation using the key \(k\). As in the previous Section, converting it to be computed using a PHE system is equally straightforward, but it must be done under the same two assumptions: 1) the data to be evaluated \((x)\) and the model parameters \((\alpha_i, x_{SV}^{(i,j)}, b)\) must come from two different parties, and 2) the owner of the model parameters must be the one processing the data. Under these assumptions, the scoring function for a multiplicative PHE system becomes:

\[ f_{PHE}(x) = D_k \left( \prod_{i=1}^{m} \left( \prod_{j=1}^{n} E_k(x_j)^{x_{SV}^{(i,j)}} \right)^{\alpha_i} \cdot E_k(b) \right) \] (4.13)

### 4.8 Summary

In this Chapter, we discussed the implementation of a privacy-preserving ML platform. We started by describing in Section 4.1 the conceptual view of our platform. To illustrate the uses of the solution developed, we detailed in Section 4.2 a use case for our platform. We described the datasets chosen to evaluate the platform in Section 4.4, and then explained the preprocessing step in Section 4.5. Section 4.6 presented the implementation of the baseline for the chosen ML algorithms. In Section 4.7, we detailed which cryptographic protocols we used, why we used them, and how we implemented them.

In this Chapter we describe the experiments that were conducted on the implementation of Machine Learning (ML) algorithms using privacy-preserving techniques, as a proof of concept of the platform.
In Section 5.1, we present the metrics used in the experiments. Section 5.2 describes the setup that was used to run the experiments, as well as the toolkits used and the parameters chosen for those toolkits. In Section 5.3, we present the best baseline results obtained for the datasets in question, and in Section 5.4 we compare those results with the ones obtained using the toolkits. In Section 5.5, we present the execution times for the combinations of ML algorithms and privacy-preserving techniques that were implemented. In Section 5.6 we present the communication costs for those combinations. Finally, in Section 5.7 we make some final observations about the obtained results.

5.1 Evaluation Metrics

To evaluate our implementation, a set of metrics was considered, namely: accuracy, precision, recall, and F-measure. For the definition of these metrics, we need to define the events that can occur when making predictions, shown in Table 5.1 below.

Table 5.1: Possible outcomes when making predictions.

| Predicted label | Real label: +1 | Real label: -1 |
|-----------------|---------------------|---------------------|
| +1 | True Positive (TP) | False Positive (FP) |
| -1 | False Negative (FN) | True Negative (TN) |

*Accuracy* measures how close the predictions are to the real values. In our implementation, it represents how often the predictions calculated by the generated ML models match the class of the testing samples. In mathematical terms, it is represented by:

\[ accuracy = \frac{TP + TN}{TP + TN + FP + FN} \] (5.1)

*Precision* is defined as the fraction of relevant instances among the retrieved instances. *Recall* is defined as the fraction of relevant instances that have been retrieved over the total number of relevant instances.

\[ precision = \frac{TP}{TP + FP} \] (5.2)

\[ recall = \frac{TP}{TP + FN} \] (5.3)

*F-measure* is a measure of the accuracy of a test.
It considers both the precision and the recall of the test to compute the score:

\[ F_1 = 2 \cdot \frac{precision \cdot recall}{precision + recall} \] (5.4)

Besides these metrics, we also used additional ones in order to understand how much the computational overhead due to the use of cryptography influences the system. We compared the *results* obtained by the privacy-preserving versions of the ML algorithms with the ones obtained using the baseline. We also take into account the *execution times* of the system, which show the overhead caused by the additional computational cost of cryptography. Finally, we also show the increase in communication costs that occurs when cryptography is involved, as all values must be represented by ciphertexts instead of plaintext integer or float values.

### 5.2 Experimental Setup

All the experiments were performed on a machine with an Intel Core i5-4300M CPU @2.60GHz with 3MB of L3 cache and 12 GB of RAM. For obtaining the experimental results, we started by applying the preprocessing techniques mentioned in Section 4.5 to the datasets described in Table 4.1. As described before, all datasets that were composed of a single file were split into three sets: training, validation and testing, with the proportion 70/15/15. Each ML model was trained using the training set, the best model configuration was chosen using the validation set, and the model performance was evaluated using the testing set. Taking into account that the baseline implementation was done using scikit-learn, we could not explicitly observe the operations done by the toolkit, since they were performed in a black box. This could lead to an inability to distinguish prediction errors caused by the privacy-preserving systems from errors caused by our implementation of the prediction equations. To solve this problem, we implemented the prediction processes of the ML algorithms without using the toolkit, i.e. directly from the algorithm equations.
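As a concrete sketch of this recreation for LR (assuming scikit-learn's `LogisticRegression`, whose fitted parameters are exposed as `coef_` and `intercept_`; the dataset loader and parameter values here are illustrative, not our exact pipeline):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(solver="liblinear", C=2**2).fit(X, y)

# Equation 4.8, computed directly from the extracted parameters.
beta_0 = model.intercept_[0]
beta = model.coef_[0]
scores = beta_0 + X @ beta

# The recreated scores match the toolkit's own decision function,
# confirming the prediction process was reproduced correctly.
assert np.allclose(scores, model.decision_function(X))
```

The same idea applies to the other algorithms: once the model parameters are extracted, the prediction equations can be computed outside the black box.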
We successfully recreated the prediction processes by retrieving from the toolkit the specifications of each ML model.

- For the Decision Trees (DT), we needed to retrieve the whole model, i.e. the binary tree, so that we could traverse it according to each testing sample, comparing the feature with the threshold of each node, and choosing which branch of the tree to follow, until a label is reached.
- In the case of Support Vector Machines (SVM), we retrieved from the toolkit all the coefficients needed to compute Equation 4.4, namely the support vectors $x_{SV}^{(i)}$, the $\alpha_i$ coefficients for each support vector, the kernel function $k(\vec{x}_i, \vec{x}_j)$ that was used (linear or polynomial), the exponent for the polynomial kernel if needed, and the scalar value $b$.
- For k-Means ($k$-M), we just required from the toolkit the centroids of each cluster, as well as the prediction labels associated with each one. The classification of each testing sample was done by discovering which centroid was closest to it.
- In Logistic Regression (LR), we extracted the $\beta = (\beta_0, \beta_1, \ldots, \beta_m)$ that appear in Equation 4.8. $\beta_0$ is the intercept, and the $\beta_i$ are the regression coefficients that multiply each feature of the sample.

5.2.1 Baseline Parameters

The following parameters are the ones used in the scikit-learn toolkit. For the experiments with DT, we tested values for max_depth of 5%, 10%, 20%, 50%, 100%, 200%, and 500% of the total number of features, and values for min_samples_leaf of 0.001%, 0.002%, 0.005%, 0.01%, 0.02%, 0.05%, 0.1%, 0.2%, 0.5%, 1%, 2% and 5% of the total number of training samples. For the experiments with SVM, we used kernel values of linear, poly and rbf. For all kernels, we used C values of $2^{-10}$, $2^{-6}$, $2^{-2}$, $2^2$, $2^6$ and $2^{10}$. For the polynomial kernel, we used degree values of 2, 3 and 4.
For the Radial Basis Function (RBF) kernel, we used $\gamma$ values of $2^{-9}$, $2^{-5}$, $2^{-1}$, $2^1$ and $2^3$. For the experiments with k-M, we tested with a variable number of clusters, i.e. with num_clusters values of 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100. For the experiments with LR, we used the liblinear solver with C values of $2^{-10}$, $2^{-6}$, $2^{-2}$, $2^2$, $2^6$ and $2^{10}$. These variations on the parameters allowed us to train models with all possible configurations without the need to specifically adapt the parameters to the different datasets.

5.2.2 Garbled Circuits toolkits and parameters

For the experiments with Garbled Circuits (GC), we used the toolkit developed by the VIPP group from the University of Siena\footnote{http://clem.dii.unisi.it/~vipp/index.php/home}. This toolkit was not our first choice, since it has known issues with computation times, but the other toolkits that we tested contained limitations that we could not overcome, as stated below:

- **ABY** [21]: We found it impossible to define gate-to-gate wires, which removed the ability to finely control how wires are used and combined.
- **JustGarble** [7]: This toolkit could not be fully compiled due to conflicts with current versions of the GNU gcc compiler.
- **Ciphermed** [12]: This toolkit is efficient for small DT, but is exponentially slower for larger trees (above 10 nodes).
- **TinyGarble** [57]: The current version of this toolkit does not support the open source synthesis tool (Yosys\(^2\)) recommended by the authors, and only supports a paid one.
- **CompGC** [30]: The implementations of all the examples of ML algorithms in this toolkit are hardcoded, making it extremely difficult to adapt to our needs.

In the experiments using GC, we tested values of 8, 12, 16, 20 and 24 bits for the numeric precision of the data and model parameters. This choice is reflected in the circuit size and in the accuracy of the results.
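To illustrate what this precision parameter means (a sketch under the assumption of a plain fixed-point scaling, which is not necessarily the exact wire encoding used by the toolkit), a value rescaled to \([0, 1]\) can be mapped to a b-bit unsigned integer as follows:

```python
def to_fixed(value, bits):
    """Map a value in [0, 1] to an unsigned integer of the given width
    (illustrative encoding: scale by 2^bits - 1 and round)."""
    assert 0.0 <= value <= 1.0
    return round(value * ((1 << bits) - 1))

def from_fixed(word, bits):
    """Inverse mapping, recovering an approximation of the value."""
    return word / ((1 << bits) - 1)

# The representation error is bounded by 0.5 / (2^bits - 1), so it
# shrinks as the bit width grows, at the cost of a larger circuit.
x = 0.7321
errors = [abs(x - from_fixed(to_fixed(x, b), b)) for b in (8, 12, 16, 20, 24)]
```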
Larger values were not considered because 24 bits are already sufficient for an exact representation of the input values and model parameters.

### 5.2.3 Homomorphic Encryption toolkits and parameters

For the experiments using Partially Homomorphic Encryption (PHE), we implemented our own version of the Paillier cryptosystem [45]. We decided to do it this way, instead of using an existing toolkit, because the system is simple to implement and because it provided a good learning experience on the inner workings of a cryptographic system. For the experiments using Fully Homomorphic Encryption (FHE), we used the HElib toolkit [31]. Two methods were considered in the implementation, method M1: multiply and sum arrays without packing, and method M2: use coefficient packing, and invert and multiply polynomials, which was inspired by a HElib tutorial\(^3\). It is also important to note that, due to intrinsic limitations of HElib, we had to pre-compute \( \alpha_i \cdot x_{SV}^{(i)} \) when evaluating the SVM. However, this does not negatively affect the experiments, since these model parameters are both owned by the same party.

\(^2\)http://www.clifford.at/yosys/
\(^3\)https://mshcruz.wordpress.com/2016/09/27/scalar-product-using-helib/

For the experiments using PHE, we used values of 128, 256, 512, 1024 and 2048 bits (NBits) for the length of the cryptographic keys. For the experiments using FHE, no parameter search was made, since adequate default values were pre-determined for most parameters, and we chose a large enough number for the modulus \((2^{15})^4 = 2^{60}\). Changing these parameters should not affect the obtained predicted labels in any way, although it may have minimal effects on the actual output of the evaluation functions of the LR and SVM algorithms. The values considered are large enough not to affect the results, and correspond to a cryptographic security factor.
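On the PHE side, since we implemented the Paillier cryptosystem ourselves, a minimal sketch of the scheme and of the evaluation style of Equation 4.10 is shown below. This is an illustration, not the exact code used in the experiments: it uses toy primes for readability (the experiments used 128- to 2048-bit keys) and assumes integer-scaled inputs.

```python
import math
import random

def keygen(p, q):
    """Paillier key generation from two distinct primes (toy sizes here)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                      # standard simplified generator choice
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be invertible modulo n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    # L(x) = (x - 1) / n, applied to c^lam mod n^2
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen(17, 19)            # toy primes, n = 323 (not secure!)
n2 = pk[0] ** 2

# Data owner encrypts the (integer-scaled) sample features.
x = [2, 5, 3]
enc_x = [encrypt(pk, xi) for xi in x]

# Model owner evaluates the linear score as in Eq. 4.10:
# E(b0) * prod E(x_i)^{b_i}  decrypts to  b0 + sum b_i * x_i.
beta_0, beta = 4, [3, 1, 2]
score = encrypt(pk, beta_0)
for ci, bi in zip(enc_x, beta):
    score = (score * pow(ci, bi, n2)) % n2

assert decrypt(pk, sk, score) == beta_0 + sum(b * v for b, v in zip(beta, x))
```

Note how the model owner never sees the plaintext features: multiplying ciphertexts adds the underlying values, and exponentiating a ciphertext by a plaintext coefficient multiplies the underlying value, which is exactly what Equation 4.10 exploits.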
### 5.3 Baseline Results

We now show the experimental results obtained by applying the different ML algorithms to the datasets mentioned above. In terms of execution time, the baseline results are in the order of milliseconds per data sample. In each section below, for each dataset, we present the different ML algorithms and parameters. The results shown in the following tables are the ones obtained on the testing sets using the parameters that provided the best accuracy or F-Measure results on the validation set. We also present results found in the literature, for comparison.

#### 5.3.1 Pima Indians Diabetes Dataset

We present the best baseline results obtained on the testing set for the *Pima Indians Diabetes Dataset* in Table 5.2. We can see that our results are comparable to the ones found in the literature.

Table 5.2: Baseline results, in percentage. *Pima Indians Diabetes Dataset*. “A” represents Accuracy, “F” represents F-Measure.

| ML algorithm | DT | k-Means | LR | SVM Linear | SVM Poly | SVM RBF |
|--------------|--------|---------|--------|------------|----------|---------|
| Baseline | A: 73.04 | A: 72.17 | A: 75.65 | A: 75.65 | A: 76.52 | A: 77.39 |
| | F: 63.53 | F: 52.94 | F: 58.82 | F: 61.11 | F: 59.70 | F: 62.86 |
| Literature | A: 75.39[5] | A: 73.7[25] | A: 77.95[5] | A: - | A: - | A: 80.2[1] |
| | F: - | F: - | F: - | F: - | F: - | F: - |

#### 5.3.2 Breast Cancer Wisconsin Diagnostic Dataset

We present the best baseline results obtained on the testing set for the *Breast Cancer Wisconsin Diagnostic Dataset* in Table 5.3. As we can see, our baseline results are, again, close to the ones found in the literature.

Table 5.3: Baseline results, in percentage. *Breast Cancer Wisconsin Diagnostic Dataset*. “A” represents Accuracy, “F” represents F-Measure.
| ML algorithm | DT | k-Means | LR | SVM Linear | SVM Poly | SVM RBF |
|--------------|--------|---------|--------|------------|----------|---------|
| Baseline | A: 92.94 | A: 91.76 | A: 95.29 | A: 94.12 | A: 94.12 | A: 94.12 |
| | F: 90.91 | F: 90.91 | F: 93.75 | F: 92.06 | F: 92.31 | F: 92.06 |
| Literature | A: 95.13[6] | A: 92.79\textsuperscript{4} | A: 93.50[50] | A: - | A: 97.54[66] | A: 97.13[6] |
| | F: 94.88[6] | F: - | F: - | F: - | F: - | F: 96.25[6] |

#### 5.3.3 Credit Approval Dataset

We present the best baseline results obtained on the testing set for the *Credit Approval Dataset* in Table 5.4. We obtained results that are similar to the ones found in the literature.

Table 5.4: Baseline results, in percentage. *Credit Approval Dataset*. “A” represents Accuracy, “F” represents F-Measure.

| ML algorithm | DT | k-Means | LR | SVM Linear | SVM Poly | SVM RBF |
|--------------|--------|---------|--------|------------|----------|---------|
| Baseline | A: 78.64 | A: 83.50 | A: 85.44 | A: 85.44 | A: 85.43 | A: 84.47 |
| | F: 75.00 | F: 81.32 | F: 84.21 | F: 84.54 | F: 83.87 | F: 83.33 |
| Literature | A: 85.5\textsuperscript{5} | A: 86.3[25] | A: 87.9[16] | A: 86.2\textsuperscript{5} | A: 84.8[16] | A: 85.5\textsuperscript{5} |
| | F: - | F: - | F: - | F: - | F: - | F: - |

\textsuperscript{4}https://www.linkedin.com/pulse/using-k-means-clustering-tableau-diagnose-breast-cancer-mayand-tiwari
\textsuperscript{5}http://docplayer.net/storage/53/32532528/1505920161/lql-Akt2A2T_EvaGfgnQww/32532528.pdf

#### 5.3.4 Adult Income Dataset

We present the best baseline results obtained on the testing set for the *Adult Income Dataset* in Table 5.5. We can see that our baseline results are comparable to the ones found in the literature, and are therefore a good reference for the privacy-preserving platform.

Table 5.5: Baseline results, in percentage. *Adult Income Dataset*. “A” represents Accuracy, “F” represents F-Measure.
| ML algorithm | DT | k-Means | LR | SVM Linear | SVM Poly | SVM RBF |
|--------------|-------------|-------------|-------------|------------|----------|---------|
| Baseline | A: 85.56 | A: 81.95 | A: 85.08 | A: 69.67 | A: 80.82 | A: 82.79 |
| | F: 67.37 | F: 55.80 | F: 66.87 | F: 57.04 | F: 65.91 | F: 61.15 |
| Literature | A: 82.20[5] | A: - | A: 80.00[5] | A: - | A: 84.55\(^b\) | A: 84.93[38] |
| | F: - | F: - | F: - | F: - | F: - | F: - |

### 5.4 Comparison with the Baseline Results

It is important to mention that, after analyzing the results obtained using the VIPP toolkit to implement GC, we verified that changing the number of bits used for the numeric precision of the data and model parameters affects the accuracy of the results. The magnitude of this error is shown in Tables 5.6 and 5.7, for the DT and k-M experiments respectively. Note that this error is computed against the baseline prediction results, not the prediction labels from the dataset.

Table 5.6: GC+DT. Average label prediction error when compared with the baseline.

| bits | Pima Indians | Breast Cancer | Credit Approval | Adult Income |
|------|--------------|---------------|-----------------|--------------|
| 8 | 1.88% | 0.55% | 8.70% | 0.00% |
| 12 | 0.00% | 0.13% | 1.11% | 0.00% |
| 16 | 0.00% | 0.13% | 0.31% | 0.00% |
| 20 | 0.00% | 0.13% | 0.31% | 0.00% |
| 24 | 0.00% | 0.13% | 0.31% | 0.00% |

By observing these tables, we can conclude that the loss of prediction performance caused by using the privacy-preserving versions of the ML algorithms is not relevant, as long as at

\(^b\)http://www.dudonwai.com/docs/gt-omschs-cs7641-a1.pdf?pdf=gt-omschs-cs7641-a1

Table 5.7: GC+k-M. Average label prediction error when compared with the baseline.
| bits | Pima Indians | Breast Cancer | Credit Approval | Adult Income |
|------|--------------|---------------|-----------------|--------------|
| 8 | 2.03% | 3.07% | 0.05% | 0.02% |
| 12 | 0.39% | 0.85% | 0.00% | 0.00% |
| 16 | 0.29% | 0.72% | 0.00% | 0.00% |
| 20 | 0.29% | 0.72% | 0.00% | 0.00% |
| 24 | 0.00% | 0.00% | 0.00% | 0.00% |

least 16 bits are used to represent the data. Since both DT and k-M only output an integer representing the label, and not a real number, the visible effect of changing the number of bits is minimal.

After analyzing the results obtained using the PHE and FHE systems, we verified that all predicted labels and almost all function evaluation outputs match the baseline. The few cases in which an exact match does not happen come mostly from the SVM scoring evaluation function implemented in HElib, and are most likely caused by the accumulation of the intrinsic noise generated every time an operation is performed between two ciphertexts. Therefore, we can conclude that our privacy-preserving versions of the ML algorithms using PHE and FHE have no relevant loss of prediction performance.

### 5.5 Execution Time Results

In order to better assess the execution time required by each privacy-preserving version of the different ML algorithms, we analyze each of the combinations separately. We do not present total execution times for whole datasets, because execution times per sample are independent of the dataset size, and the per-sample cost is what is expected in a real-life scenario where a large computer cluster is available and data samples are supplied in a continuous fashion.

#### 5.5.1 Garbled Circuits and Decision Trees

We present the execution times obtained by using the toolkit to build a GC implementation of DT for all the datasets in the tables below. The results are presented in terms of average pre-computation times per data sample and runtimes per data sample.
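Before turning to the timing results, the precision effect reported in Tables 5.6 and 5.7 can be reproduced in miniature. The sketch below is our own illustrative fixed-point quantizer, not the toolkit's exact encoding, and the threshold values are made up: at 8 bits, two values on opposite sides of a DT threshold can quantize to the same number, flipping the comparison, while at 16 bits they remain distinguishable.

```python
def quantize(x, bits, frac_bits):
    # Round x to the nearest signed fixed-point value with `bits` total bits,
    # `frac_bits` of which are fractional; clamp to the representable range.
    scale = 1 << frac_bits
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, round(x * scale))) / scale

# Hypothetical DT node: is feature < threshold? (True without quantization.)
threshold, feature = 0.531, 0.529
assert quantize(feature, 8, 4) == quantize(threshold, 8, 4)    # 8 bits: tie
assert quantize(feature, 16, 8) < quantize(threshold, 16, 8)   # 16 bits: order kept
```

With equal quantized values, the strict comparison at the node evaluates to false while the unquantized comparison is true; occasional flips of this kind are the plausible mechanism behind the small label prediction errors at 8 bits.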
Table 5.8 presents the average pre-computation times per data sample for all datasets. We can observe that the average pre-computation times are all very similar, despite a slight dependence on the size of the GC, which is defined by the numeric precision. This means that pre-computation poses no restrictions on the scalability of our approach.

Table 5.8: GC+DT. Average pre-computation times per data sample, in seconds. All datasets.

| Dataset | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---------|--------|---------|---------|---------|---------|
| Pima | 0.219 | 0.285 | 0.310 | 0.344 | 0.356 |
| Breast | 0.205 | 0.240 | 0.281 | 0.325 | 0.356 |
| Credit | 0.224 | 0.253 | 0.271 | 0.290 | 0.315 |
| Adult | 0.233 | 0.271 | 0.313 | 0.355 | 0.373 |

Regarding the runtimes per data sample (Figure 5.1), we can observe that they are much larger than the pre-computation times. Although they scale slightly sub-linearly with the numeric precision and the number of features, they scale super-linearly with the number of nodes in the DT. This can be a problem in terms of scalability, since we are using fully expanded DT, which means that increasing the DT depth leads to an exponential increase in the number of nodes. When comparing these results with those of the baseline, they are many orders of magnitude above, with the non-privacy-preserving approach in the order of milliseconds, and the privacy-preserving approach varying from seconds to hundreds of seconds. More detailed results are presented in Section A.1.1.

Figure 5.1: **GC+DT**. Runtime per data sample, in seconds. (a) *Pima Indians Diabetes Dataset*; (b) *Breast Cancer Wisconsin Diagnostic Dataset*; (c) *Credit Approval Dataset*; and, (d) *Adult Income Dataset*.

#### 5.5.2 Garbled Circuits and k-Means

We present the execution times obtained by using the toolkit to build a GC implementation of k-M for all the datasets in the tables below. The results are presented in terms of average
pre-computation times per data sample and runtimes per data sample. Table 5.9 presents the average pre-computation times per data sample for all datasets. We can see that the average pre-computation times are all very similar, despite a slight dependence on the GC size, meaning that pre-computation does not impact the scalability of our solution.

Table 5.9: **GC+k-M**. Average pre-computation times per data sample, in seconds.

| Dataset | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---------|--------|---------|---------|---------|---------|
| Pima | 0.260 | 0.283 | 0.307 | 0.331 | 0.339 |
| Breast | 0.225 | 0.251 | 0.274 | 0.289 | 0.301 |
| Credit | 0.226 | 0.249 | 0.253 | 0.265 | 0.272 |
| Adult | 0.214 | 0.214 | 0.232 | 0.266 | 0.262 |

Regarding the runtimes per data sample (Figure 5.2), we observe again that they are much larger than the pre-computation times. They scale linearly with the number of features and slightly super-linearly with the number of clusters, neither of which compromises the scalability of our approach. However, runtimes scale quadratically with the numeric precision, which is caused by the multiplications required to compute the Euclidean Distance (ED). Although this causes scalability issues for large numeric precision, the results in Table 5.7 show that the loss of accuracy is negligible even when only 12 bits are considered, allowing us to safely ignore this issue. The runtimes per data sample are also considerably large for the instances with a large number of clusters, but in our baseline system we verified that the best results were always obtained when fewer than 10 clusters were considered. Even if this were not the case, we could sacrifice a small amount of accuracy by lowering the number of clusters in order to obtain much faster runtimes.
When comparing these results with those of the baseline, they are many orders of magnitude above, with the non-privacy-preserving approach in the order of milliseconds, and the privacy-preserving approach varying from seconds to hundreds of seconds. Finally, it is important to mention that we did not compute all the entries in the tables above, because the execution times for the larger datasets became very long as the number of clusters and the numeric precision grew, and some examples were impossible to run due to insufficient RAM. More detailed results are presented in Section A.1.2.

#### 5.5.3 Homomorphic Encryption and Logistic Regression

As mentioned in Section 3.4, the results presented in this section were developed by other team members in the Big dAta pRivacy by Design platform (BARD) project at Altran, and are presented for completeness. We present the execution times obtained by using the PHE and FHE systems for all datasets in the tables below. For the FHE system, we present the times for methods M1 and M2 in the same cell, for ease of comparison. The results are presented in terms of execution time per data sample.

When observing the execution times obtained using PHE (Figure 5.3), we see a linear increase in encryption and computation times as the number of features in the samples increases, but a constant decryption time, independent of the number of features. We can also observe a linear increase in computation times, and a super-linear increase in encryption and decryption times, as the value of NBits increases. This can cause a scalability problem, but it can be safely ignored since the execution times per sample are still very small. When comparing these results with those of the baseline, the non-privacy-preserving results are in the order of milliseconds, and the privacy-preserving approach varies from deciseconds ($10^{-1}$s) to seconds. More detailed results are presented in Section A.1.3.
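The PHE+LR evaluation pattern can be sketched as follows (our own toy illustration with integer-scaled features and weights and toy Paillier parameters, not the project's actual code): the client encrypts its features, the server combines them with its plaintext model to obtain the encrypted linear score, and only the client can decrypt; the sigmoid or threshold is applied by the client after decryption.

```python
import math
import random

# Compact textbook Paillier (toy primes; real keys use NBits-sized primes).
def keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
    return n, (lam, pow(lam, -1, n))

def enc(n, m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(n, key, c):
    lam, mu = key
    return (pow(c, lam, n * n) - 1) // n * mu % n

n, key = keygen(61, 53)

# Client: encrypts its integer-scaled feature vector and sends the ciphertexts.
x = [3, 1, 4]
cx = [enc(n, xi) for xi in x]

# Server: keeps weights and bias in plaintext and computes Enc(w.x + b),
# using Enc(a)*Enc(b) = Enc(a+b) and Enc(a)^k = Enc(k*a).
w, b = [2, 5, 1], 7
c_score = enc(n, b)
for ci, wi in zip(cx, w):
    c_score = c_score * pow(ci, wi, n * n) % (n * n)

# Client: decrypts only the linear score (2*3 + 5*1 + 1*4 + 7 = 22) and
# applies the sigmoid/threshold locally.
assert dec(n, key, c_score) == 22
```

Note that the server never decrypts anything and the client never sees the model parameters, which matches the client/server data separation assumed throughout this chapter.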
When analyzing the results obtained using FHE (Figure 5.4), we can observe that the packing used by method M2 greatly decreases the encryption and computation times when compared to method M1. We also observe that method M2 makes the encryption and computation times independent of the number of features. Overall, method M2 is much more efficient than method M1, showing the clear advantage of packing the features of each data sample in a single ciphertext before performing any computation. More detailed results are presented in Section A.1.4.

**Figure 5.3:** PHE+LR. Execution time per data sample, in seconds. (a) Pima Indians Diabetes Dataset; (b) Breast Cancer Wisconsin Diagnostic Dataset; (c) Credit Approval Dataset; and, (d) Adult Income Dataset.

When comparing PHE and FHE, we observe that method M2 of FHE has lower execution times than PHE, despite the complexity of the underlying algorithm. The most likely explanation is the positive effect of feature packing, especially the gains in encryption time.

#### 5.5.4 Homomorphic Encryption and Support Vector Machines

Again, as mentioned in Section 3.4, the results presented in this section were developed by other team members in the BARD project at Altran, and are presented for completeness. We present the execution times obtained by using the PHE and FHE systems for all datasets in the tables below. Once again, for the FHE system, we present the times for methods M1 and M2 in the same cell, for ease of comparison. The results are presented in terms of execution time per data sample for the PHE case, and execution time per sample and support vector for the FHE case.

Figure 5.4: FHE+LR. Execution time per data sample, in seconds. All datasets.

When observing the execution times obtained using PHE (Figure 5.5), we can see that computation times carry a significant overhead for small numbers of features and Support Vectors, as these variables only visibly affect the results in the Adult Income Dataset, where the dependency appears to be linear. We also observe a linear increase in encryption times with an increasing number of features, and a linear increase in computation times with increasing NBits. When comparing these results with those of the baseline, the non-privacy-preserving results are in the order of milliseconds, and the privacy-preserving approach varies from deciseconds ($10^{-1}$s) to seconds. Finally, we can see that decryption times are constant with an increasing number of features and Support Vectors, but there is a slightly super-linear increase in encryption and decryption times with increasing NBits. This can cause a scalability problem, but it can be safely ignored since the execution times per sample are still very small.

**Figure 5.5:** PHE+SVM. Execution time per data sample, in seconds. (a) Pima Indians Diabetes Dataset; (b) Breast Cancer Wisconsin Diagnostic Dataset; (c) Credit Approval Dataset; and, (d) Adult Income Dataset.

Additionally, for SVM we obtained encryption times similar to those obtained for LR, which is not surprising, since the encryption is only done on one side of the protocol (due to the way the Paillier cryptosystem works). We can also observe that decryption times are comparable in both cases, as decryption only occurs once, when all operations have been performed. More detailed results are presented in Section A.1.5. When observing the execution times obtained using FHE (Figure 5.6), we can see results similar to those obtained for LR.
In particular, we see that the packing used by method M2 greatly decreases the encryption and computation times when compared with method M1, and also makes the encryption and computation times independent of both the number of features and the number of Support Vectors. We can also observe that the decryption times are independent of the method used, the number of features and the number of Support Vectors. More detailed results are presented in Section A.1.6. Once again, method M2 is much more efficient than method M1, showing the clear advantage of packing the features of each data sample in a single ciphertext before performing the computation.

Figure 5.6: FHE+SVM. Execution time per data sample, in seconds. All datasets.

When comparing PHE and FHE, unlike what was observed for LR, here the former has lower execution times than the latter. Even considering the feature packing of method M2, the many multiplications that have to be performed (one for each Support Vector) make FHE slower than PHE when evaluating an SVM.

### 5.6 Communication Cost Results

As we will see in this section, the communication cost is primarily determined by the cryptographic technique considered and only secondarily by the ML algorithm. We will focus on the former and address the specifics of the latter as needed. Given that the cryptographic keys and the ciphertext containing the desired result each only need to be sent once, the bulk of the communication cost comes from transmitting and receiving the ciphertexts containing the actual data values. Since the number of bytes sent by one of the parties is equal to the number received by the other, and vice-versa, we only present the costs for one of the parties.
We also do not present total communication costs for whole datasets, because communication costs per sample are independent of the dataset size, and the per-sample cost is what is expected in a real-life scenario where a large computer cluster is available and data samples are supplied in a continuous fashion.

#### 5.6.1 Garbled Circuits and Decision Trees

We present the communication costs obtained by using a GC implementation of DT for all datasets in the tables below. We present results in terms of the average number of bytes per data sample sent during pre-computation, received during pre-computation and sent during runtime, and the number of bytes per data sample received during runtime by the GC evaluator. In Table 5.10, we can see that all communication costs per data sample increase linearly with the variables of interest. Both the number of bytes sent and received by the GC evaluator during pre-computation depend only on the numeric precision, and the number of bytes sent

Table 5.10: **GC+DT**. Average amount of bytes per data sample (in kB) sent during pre-computation (PC-S), received during pre-computation (PC-R) and sent during runtime (R-S) by the GC evaluator. All datasets.
| Dataset | Metric | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---------|--------|--------|---------|---------|---------|---------|
| Pima | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.513 | 4.685 | 5.856 | 7.027 |
| | R-S | 8.441 | 12.662 | 16.883 | 21.104 | 25.324 |
| Breast | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.165 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.856 | 7.027 |
| | R-S | 31.656 | 47.483 | 63.311 | 79.139 | 94.967 |
| Credit | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.812 | 7.027 |
| | R-S | 53.815 | 80.722 | 107.628 | 134.536 | 161.444 |
| Adult | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.855 | 7.027 |
| | R-S | 113.961 | 170.940 | 227.919 | 284.901 | 341.881 |

during runtime depends only on the numeric precision and the number of features. Regarding the number of bytes per data sample received during runtime (Figure 5.7), it depends linearly on the numeric precision, the number of features and the number of DT nodes, and therefore does not compromise the scalability of our approach. For larger DT, the communication cost becomes considerably large, but it can easily be reduced by using the original DT instead of the fully expanded ones. More detailed results are presented in Section A.2.1.

#### 5.6.2 Garbled Circuits and k-Means

We present the communication costs obtained by using a GC implementation of k-M for all datasets in the tables below. We present results in terms of the average number of bytes per data sample sent during pre-computation, received during pre-computation and sent during runtime, and the number of bytes per data sample received during runtime by the GC evaluator. In Table 5.11, we can see that all communication costs per data sample increase linearly with the variables of interest.
Both the number of bytes sent and received by the GC evaluator during pre-computation depend only on the numeric precision, and the number of bytes sent during runtime depends only on the numeric precision and the number of features. Regarding the number of bytes per data sample received during runtime (Figure 5.8), it depends linearly on the number of features and the number of clusters, and quadratically on the numeric precision. However, as we have seen before, the results in Table 5.7 showed that the loss of accuracy is negligible even when only 12 bits are considered, meaning we can easily minimize its effects.

Table 5.11: **GC+k-M**. Average amount of bytes per data sample (in kB) sent during pre-computation (PC-S), received during pre-computation (PC-R) and sent during runtime (R-S) by the GC evaluator. All datasets.

| Dataset | Metric | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---------|--------|--------|---------|---------|---------|---------|
| Pima | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.856 | 7.027 |
| | R-S | 8.442 | 12.662 | 16.883 | 21.104 | 25.324 |
| Breast | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.856 | 7.027 |
| | R-S | 31.656 | 47.483 | 63.311 | 79.139 | 94.967 |
| Credit | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.856 | 7.027 |
| | R-S | 53.815 | 80.722 | 107.629 | 134.536 | 161.444 |
| Adult | PC-S | 1.055 | 1.583 | 2.110 | 2.638 | 3.166 |
| | PC-R | 2.342 | 3.514 | 4.685 | 5.856 | 7.027 |
| | R-S | 113.960 | 170.941 | 227.921 | 284.901 | 341.881 |
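As a quick consistency check of the linear scaling just described, dividing the PC-S values from Table 5.11 by the corresponding numeric precision gives an (almost) constant per-bit cost:

```python
# PC-S values (kB) from Table 5.11, one per numeric precision.
bits = [8, 12, 16, 20, 24]
pc_s = [1.055, 1.583, 2.110, 2.638, 3.166]

# kB per bit of precision: near-constant, confirming the linear dependence.
ratios = [v / b for v, b in zip(pc_s, bits)]
assert max(ratios) - min(ratios) < 1e-3   # ~0.132 kB per bit for every entry
```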
Finally, it is important to mention that we did not compute all the entries in the tables above, because the execution times for the larger datasets became very long as the number of clusters and the numeric precision grew, and some examples were impossible to run due to insufficient RAM. More detailed results are presented in Section A.2.2.

#### 5.6.3 Partially Homomorphic Encryption

As mentioned in Section 3.4, the results presented in this section were developed in the BARD project at Altran, and are presented for completeness. For the PHE systems, both the key size and the ciphertext size depend only on the number of bits chosen (NBits). For the Paillier cryptosystem in particular, both the public and private keys are composed of two $2 \times$ NBits numbers and any ciphertext is a $2 \times$ NBits number.

Figure 5.8: GC+k-M. Amount of bytes per data sample (in kB) received during runtime by the GC evaluator. (a) Pima Indians Diabetes Dataset; (b) Breast Cancer Wisconsin Diagnostic Dataset; (c) Credit Approval Dataset; and, (d) Adult Income Dataset.

Under the assumption that one of the parties owns the data to be evaluated and the other owns the evaluation model and has the computational power to perform the evaluation, we only need to determine the communication cost of transmitting the data to be evaluated from one party to the other. This cost is independent of the ML algorithm considered. For each data sample, each individual feature value needs to be encrypted. The communication cost, in bits, is therefore given by:

\[ cost_{comm} = \underbrace{2(2NBits)}_{\text{public key}} + \underbrace{Nn(2NBits)}_{\text{ciphered data}} + \underbrace{2NBits}_{\text{ciphered result}} \]

where \( N \) is the number of samples and \( n \) is the number of features per sample. As mentioned before, the ciphertexts containing the actual data overwhelm the other contributions.
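The formula above can be checked against the per-sample costs reported in Table 5.12. The helper below is our own sketch; it takes kB = 1000 bytes and sets N = 1, so the public key and the result ciphertext are counted once per sample:

```python
def phe_cost_kb(nbits, n_features):
    # Per-sample cost in kB: public key (two 2*NBits numbers), one 2*NBits
    # ciphertext per feature, and one 2*NBits ciphertext for the result.
    bits = (2 + n_features + 1) * 2 * nbits
    return bits / 8 / 1000

# Matches Table 5.12 for NBits = 128.
assert phe_cost_kb(128, 8) == 0.352    # Pima Indians (8 features)
assert phe_cost_kb(128, 30) == 1.056   # Breast Cancer (30 features)
```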
We present the communication costs for the datasets considered in Table 5.12.

Table 5.12: PHE. Communication costs in kilobytes (kB). All datasets.

| NBits | Pima cost / sample | Breast Cancer cost / sample | Credit Approval cost / sample | Adult Income cost / sample |
|-------|-------------------|-----------------------------|-------------------------------|---------------------------|
| 128 | 0.352 | 1.056 | 1.728 | 3.552 |
| 256 | 0.704 | 2.112 | 3.456 | 7.104 |
| 512 | 1.408 | 4.224 | 6.912 | 14.208 |
| 1024 | 2.816 | 8.448 | 13.824 | 28.416 |
| 2048 | 5.632 | 16.896 | 27.648 | 56.832 |

As expected, the communication costs increase linearly with NBits, the number of samples and the number of features. The communication costs for most datasets are quite small, around a few megabytes in total. Even for the largest dataset, the Adult Income Dataset, the higher total cost is due only to the much larger number of samples; the cost per sample is still around a few kilobytes.

#### 5.6.4 Fully Homomorphic Encryption

As mentioned in Section 3.4, the results presented in this section were developed in the BARD project at Altran, and are presented for completeness. Considering the FHE system used by the HElib toolkit, there was no easy way to precisely compute the total communication cost: the details of the cryptographic key generation process are not included in the toolkit documentation, and both the cryptographic keys and the ciphertexts are represented using their own structure. By printing several examples of cryptographic keys and ciphertexts, we estimated that each key is composed of approximately 400,000 64-bit values ($w_{key} \approx 3200$ kB = 3.2 MB) and each ciphertext is composed of approximately 100,000 64-bit values ($w_{ciphertext} \approx 800$ kB = 0.8 MB).
Under the assumption that one of the parties owns the data to be evaluated and the other owns the evaluation model and has the computational power to perform the evaluation, we only need to determine the communication cost of transmitting the data to be evaluated from one party to the other. This cost is independent of the ML algorithm considered. For each data sample, each individual feature value needs to be encrypted. The communication cost is therefore given by:

\[ \text{cost}_{\text{comm}} = \underbrace{w_{\text{key}}}_{\text{public key}} + \underbrace{Nnw_{\text{ciphertext}}}_{\text{ciphered data}} + \underbrace{w_{\text{ciphertext}}}_{\text{ciphered result}} \] (5.6)

where \( N \) is the number of samples and \( n \) is the number of features per sample. As mentioned before, the ciphertexts containing the actual data overwhelm the other contributions. We present the communication costs for the datasets considered in Table 5.13.

Table 5.13: **FHE**. Communication costs in megabytes (MB). All datasets.

| | Pima | Breast Cancer | Credit Approval | Adult Income |
|---|------|---------------|-----------------|--------------|
| cost / sample | 10.4 | 28.0 | 44.8 | 90.4 |

Once again, the communication costs increase linearly with the number of samples and the number of features. However, we can observe the negative effect of the extremely large keys and ciphertexts required by the FHE system: the communication costs for all datasets are extremely high. Even if only a single data sample is considered, several megabytes are required to transmit the corresponding ciphertext.

### 5.7 Discussion

We now make some final observations on the obtained results, analyzing the advantages and disadvantages of our GC and Homomorphic Encryption (HE) approaches.
Although we did not compare the performance of GC and HE directly, for instance by choosing one ML algorithm and implementing it with both privacy-preserving techniques, it is clear that the HE approach is adequate for ML algorithms that rely on arithmetic operations, while the GC approach is adequate for ML algorithms that rely on non-arithmetic operations. An example pointing in this direction is the quadratic increase in runtime verified in the GC+k-M experiments, due to the need to perform multiplications to compute the ED.

An important remark on our experiments with GC concerns our choice to analyze only fully expanded DT instead of the original ones, in order to prevent any information leakage regarding the shape of the original tree. In most cases, however, this causes an exponential growth of the number of nodes with increasing tree depth, leading to proportional increases in both the execution times and the communication costs. We also observed that the execution time increases by five orders of magnitude when comparing the baseline results with the privacy-preserving approach.

Another important conclusion from our experiments with HE concerns when each of the techniques should be used. We verified that PHE is usable in practice, but under some restrictions (e.g., if there is no need for complex composition of operations and if the data is separated between client and server), while FHE is more flexible but still too computationally expensive. However, by using the coefficient packing of method M2, FHE can be more efficient than PHE for evaluating some ML algorithms (e.g., LR).

### 5.8 Summary

In this chapter, we detailed the experiments conducted with our implementation of the solution. We presented the metrics used to evaluate the results obtained, as well as the setup and toolkits used to run the experiments.
The results we obtained using the privacy-preserving algorithms, when compared with the baseline, show that the loss of prediction performance is very small, as long as the numeric precision is 16 bits or more. We observed that ML algorithms that rely on arithmetic operations perform better with the HE approach, while the GC approach performs better for ML algorithms that rely on non-arithmetic operations. We also observed the differences between PHE and FHE, and were able to discern when each of the techniques should be used.

## 6 Conclusion

Big data is very useful for the operation and improvement of everyday services, but because the data contain private information about individuals, they cannot be freely processed without risking breaches of privacy. Privacy-preserving processing techniques can help mitigate this problem. In this work we presented BARD, a privacy-preserving Machine Learning (ML) platform that provides companies with the means to apply the privacy-preserving paradigm in their Big Data operations. We discussed the existing techniques that provide a level of privacy compliant with the laws in force, and matched those techniques with the most commonly used ML algorithms. We evaluated our solution using publicly available datasets that reflect subjects of relevance, and showed that the platform is generic and can be used for other use cases. We compared two privacy-preserving techniques, Garbled Circuits (GC) and Homomorphic Encryption (HE), and identified their limitations when computing comparisons or arithmetic operations. We also observed that these techniques introduce a significant overhead when compared to the baseline. With this work, we contributed an accurate privacy-preserving ML platform that achieves a level of privacy compliant with the laws in force while also maintaining the quality of the data for knowledge learning.
6.1 Future Work

For future work, we propose the following enhancements to the functionality and performance of the platform:

- Extend the platform to support more ML algorithms (e.g., Neural Networks or Naive Bayes), so that it can be used for more purposes (e.g., Deep Learning);
- Optimize the Secure Multi-Party Computation (SMPC) techniques used, to improve the performance of the platform;
- Implement and test the SMPC techniques using other toolkits, also to improve performance;
- Build a catalog of platform applications in which lessons learned from actual deployments of the technology can be shared with industry practitioners and with the scientific community, to guide future research into new and improved algorithms.

Detailed Results

This appendix contains the tables detailing the experimental results for the execution times and communication costs considered in Chapter 5.

## A.1 Execution Time

### A.1.1 Garbled Circuits and Decision Trees

Table A.1: **GC+DT**. Runtime per data sample, in seconds. *Pima Indians Diabetes* dataset.
| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 1 | 0.374 | 0.502 | 0.663 | 0.782 | 0.901 |
| 4 | 0.457 | 0.548 | 0.677 | 0.888 | 1.087 |
| 6 | 0.654 | 0.877 | 1.051 | 1.177 | 1.195 |
| 8 | 1.622 | 1.689 | 1.889 | 2.099 | 2.180 |
| 10 | 3.469 | 3.644 | 3.112 | 4.300 | 4.343 |
| 12 | 7.884 | 9.727 | 12.459 | 16.270 | 17.488 |
| 13 | 16.289 | 18.353 | 20.926 | 25.120 | 33.508 |

Table A.2: **GC+DT**. Runtime per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.

| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 1 | 1.015 | 1.383 | 1.735 | 2.082 | 2.442 |
| 3 | 1.057 | 1.425 | 1.804 | 2.187 | 2.550 |
| 4 | 1.107 | 1.533 | 1.920 | 2.319 | 2.724 |
| 5 | 1.307 | 1.713 | 2.122 | 2.665 | 2.974 |
| 6 | 1.538 | 2.030 | 2.522 | 2.939 | 3.330 |
| 7 | 1.966 | 2.393 | 2.823 | 3.507 | 3.787 |

Table A.3: **GC+DT**. Runtime per data sample, in seconds. *Credit Approval* dataset.

| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 1.598 | 2.333 | 2.956 | 3.591 | 4.190 |
| 5 | 1.852 | 2.603 | 3.295 | 3.953 | 4.576 |
| 7 | 2.702 | 3.600 | 4.359 | 5.743 | 5.848 |
| 8 | 3.365 | 4.280 | 6.541 | 7.337 | 8.998 |
| 9 | 4.511 | 6.930 | 8.202 | 10.990 | 12.227 |

Table A.4: **GC+DT**. Runtime per data sample, in seconds. *Adult Income* dataset.

| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 5 | 3.777 | 5.169 | 6.502 | 8.171 | 9.424 |
| 6 | 4.111 | 5.774 | 7.074 | 8.813 | 10.816 |
| 9 | 9.256 | 13.864 | 20.294 | 24.314 | 27.236 |
| 10 | 17.167 | 23.056 | 30.553 | 39.114 | 54.987 |

### A.1.2 Garbled Circuits and $k$-Means

Table A.5: **GC+$k$-M**. Runtime per data sample, in seconds. *Pima Indians Diabetes* dataset.
| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 0.629 | 0.848 | 1.163 | 1.350 | 1.533 |
| 3 | 0.813 | 1.055 | 1.270 | 1.494 | 1.761 |
| 4 | 0.852 | 1.143 | 1.429 | 1.664 | 1.969 |
| 5 | 0.943 | 1.246 | 1.531 | 1.824 | 2.066 |
| 6 | 1.140 | 1.261 | 1.513 | 2.266 | 2.145 |
| 7 | 1.124 | 1.319 | 1.620 | 2.319 | 2.188 |
| 8 | 1.178 | 1.339 | 1.697 | 2.130 | 2.337 |
| 9 | 1.227 | 1.464 | 2.356 | 2.178 | 2.504 |
| 10 | 1.228 | 1.544 | 2.439 | 2.497 | 3.717 |
| 20 | 1.568 | 1.896 | 3.374 | 3.952 | 5.751 |
| 30 | 2.142 | 2.223 | 3.794 | 5.315 | 7.346 |
| 40 | 1.762 | 3.379 | 5.099 | 6.941 | 9.697 |
| 50 | 1.803 | 3.670 | 5.979 | 8.300 | 12.376 |
| 60 | 3.005 | 4.927 | 7.025 | 12.218 | 16.839 |
| 70 | 3.420 | 5.250 | 7.616 | 15.795 | 19.339 |
| 80 | 3.513 | 6.187 | 8.649 | 15.795 | 21.302 |
| 90 | 3.490 | 6.239 | 12.278 | 16.691 | 23.669 |
| 100 | 3.580 | 7.876 | 14.497 | 23.599 | 32.248 |

Table A.6: **GC+$k$-M**. Runtime per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.
| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 1.619 | 2.102 | 2.788 | 3.616 | 3.838 |
| 3 | 1.868 | 2.451 | 3.037 | 3.550 | 5.674 |
| 4 | 1.800 | 3.082 | 3.180 | 5.028 | 5.604 |
| 5 | 1.912 | 2.598 | 4.760 | 5.410 | 7.402 |
| 6 | 2.062 | 2.707 | 4.872 | 5.275 | 7.951 |
| 7 | 2.351 | 2.882 | 5.133 | 7.149 | 7.906 |
| 8 | 2.930 | 2.805 | 4.978 | 6.946 | 9.124 |
| 9 | 2.570 | 3.917 | 4.935 | 7.044 | 9.136 |
| 10 | 2.291 | 4.215 | 5.990 | 7.349 | 10.802 |
| 20 | 4.223 | 6.797 | 9.580 | 14.877 | 25.232 |
| 30 | 5.269 | 9.234 | 15.693 | 24.228 | 36.767 |
| 40 | 7.126 | 14.724 | 25.754 | 37.984 | 49.574 |
| 50 | 7.885 | 14.701 | 24.245 | 40.602 | 67.651 |
| 60 | 9.156 | 19.862 | 35.970 | 56.269 | - |
| 70 | 11.585 | 23.550 | 41.306 | - | - |
| 80 | 11.433 | 23.367 | 45.067 | - | - |
| 90 | 13.642 | 27.030 | 53.124 | - | - |
| 100 | 16.684 | 32.046 | 59.372 | - | - |

Table A.7: **GC+$k$-M**. Runtime per data sample, in seconds. *Credit Approval* dataset.

| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 2.541 | 3.652 | 4.151 | 5.729 | 6.837 |
| 3 | 2.615 | 3.752 | 5.547 | 6.369 | 8.512 |
| 4 | 2.902 | 3.663 | 5.934 | 8.114 | 9.692 |
| 5 | 3.016 | 5.036 | 5.825 | 8.633 | 10.725 |
| 6 | 3.129 | 5.359 | 7.474 | 9.820 | 12.961 |
| 7 | 3.058 | 5.216 | 7.463 | 10.031 | 16.429 |
| 8 | 3.033 | 5.360 | 7.912 | 11.342 | 15.633 |
| 9 | 3.082 | 5.326 | 8.911 | 13.911 | 19.112 |
| 10 | 4.196 | 6.569 | 9.412 | 14.731 | 18.997 |
| 20 | 6.368 | 10.641 | 19.121 | 28.797 | 44.666 |
| 30 | 9.060 | 17.660 | 28.409 | 49.931 | - |
| 40 | 11.152 | 25.673 | 42.835 | - | - |
| 50 | 14.681 | 27.539 | 54.795 | - | - |
| 60 | 18.326 | 37.088 | - | - | - |
| 70 | 19.318 | 41.164 | - | - | - |
| 80 | 23.371 | 56.422 | - | - | - |
| 90 | 27.361 | 62.862 | - | - | - |
| 100 | 27.908 | 67.835 | - | - | - |

Table A.8: **GC+$k$-M**.
Runtime per data sample, in seconds. *Adult Income* dataset.

| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 4.703 | 5.536 | 8.098 | 11.307 | 13.672 |
| 3 | 4.361 | 7.535 | 9.620 | 14.500 | 18.896 |
| 4 | 4.396 | 6.986 | 10.364 | 15.635 | 20.343 |
| 5 | 6.083 | 8.688 | 11.681 | 17.928 | 25.292 |
| 6 | 5.587 | 11.307 | 15.884 | 23.006 | 31.464 |
| 7 | 6.441 | 10.278 | 15.981 | 24.306 | 31.277 |
| 8 | 6.394 | 10.871 | 17.157 | 27.155 | 38.732 |
| 9 | 8.029 | 12.958 | 21.050 | 32.607 | 48.497 |
| 10 | 8.706 | 14.956 | 24.468 | 32.010 | 47.241 |
| 20 | 14.834 | 24.567 | 41.386 | - | - |
| 30 | 19.150 | 39.870 | - | - | - |
| 40 | 27.248 | 60.023 | - | - | - |
| 50 | 33.243 | - | - | - | - |
| 60 | 42.933 | - | - | - | - |
| 70 | 52.514 | - | - | - | - |
| 80 | 61.952 | - | - | - | - |
| 90 | - | - | - | - | - |
| 100 | - | - | - | - | - |

### A.1.3 Partially Homomorphic Encryption and Logistic Regression

Table A.9: **PHE+LR**. Execution time per data sample, in seconds. *Pima Indians Diabetes* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.002 | 0.001 | 0.000 |
| 256 | 0.005 | 0.001 | 0.000 |
| 512 | 0.014 | 0.003 | 0.002 |
| 1024 | 0.035 | 0.004 | 0.005 |
| 2048 | 0.168 | 0.014 | 0.025 |

Table A.10: **PHE+LR**. Execution time per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.005 | 0.002 | 0.000 |
| 256 | 0.013 | 0.004 | 0.000 |
| 512 | 0.024 | 0.005 | 0.001 |
| 1024 | 0.100 | 0.018 | 0.005 |
| 2048 | 0.570 | 0.028 | 0.020 |

Table A.11: **PHE+LR**. Execution time per data sample, in seconds. *Credit Approval* dataset.
| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.008 | 0.004 | 0.000 |
| 256 | 0.012 | 0.006 | 0.000 |
| 512 | 0.035 | 0.013 | 0.001 |
| 1024 | 0.170 | 0.026 | 0.004 |
| 2048 | 0.970 | 0.045 | 0.018 |

Table A.12: **PHE+LR**. Execution time per data sample, in seconds. *Adult Income* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.011 | 0.008 | 0.000 |
| 256 | 0.020 | 0.016 | 0.000 |
| 512 | 0.070 | 0.023 | 0.001 |
| 1024 | 0.334 | 0.058 | 0.003 |
| 2048 | 2.106 | 0.124 | 0.019 |

### A.1.4 Fully Homomorphic Encryption and Logistic Regression

Table A.13: **FHE+LR**. Execution time per data sample, in seconds. All datasets.

| Dataset | Method | Encryption | Computation | Decryption |
|---|---|---|---|---|
| Pima | M1 | 2.193 | 0.049 | 0.036 |
| Pima | M2 | 0.168 | 0.005 | 0.036 |
| Breast | M1 | 7.494 | 0.169 | 0.036 |
| Breast | M2 | 0.168 | 0.005 | 0.036 |
| Credit | M1 | 12.467 | 0.296 | 0.038 |
| Credit | M2 | 0.180 | 0.006 | 0.038 |
| Adult | M1 | 24.253 | 0.590 | 0.036 |
| Adult | M2 | 0.169 | 0.005 | 0.036 |

### A.1.5 Partially Homomorphic Encryption and Support Vector Machines

Table A.14: **PHE+SVM**. Execution time per data sample, in seconds. *Pima Indians Diabetes* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.001 | 0.033 | 0.000 |
| 256 | 0.001 | 0.054 | 0.000 |
| 512 | 0.005 | 0.096 | 0.001 |
| 1024 | 0.024 | 0.221 | 0.003 |
| 2048 | 0.185 | 0.747 | 0.025 |

Table A.15: **PHE+SVM**. Execution time per data sample, in seconds. *Breast Cancer Wisconsin Diagnostic* dataset.
| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.002 | 0.026 | 0.000 |
| 256 | 0.005 | 0.043 | 0.000 |
| 512 | 0.018 | 0.085 | 0.001 |
| 1024 | 0.093 | 0.213 | 0.003 |
| 2048 | 0.572 | 0.663 | 0.019 |

Table A.16: **PHE+SVM**. Execution time per data sample, in seconds. *Credit Approval* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.003 | 0.026 | 0.000 |
| 256 | 0.008 | 0.042 | 0.000 |
| 512 | 0.030 | 0.084 | 0.001 |
| 1024 | 0.155 | 0.208 | 0.003 |
| 2048 | 0.964 | 0.593 | 0.019 |

Table A.17: **PHE+SVM**. Execution time per data sample, in seconds. *Adult Income* dataset.

| NBits | Encryption | Computation | Decryption |
|---|---|---|---|
| 128 | 0.007 | 0.317 | 0.000 |
| 256 | 0.016 | 0.492 | 0.000 |
| 512 | 0.066 | 1.051 | 0.001 |
| 1024 | 0.344 | 2.504 | 0.003 |
| 2048 | 2.057 | 6.982 | 0.019 |

### A.1.6 Fully Homomorphic Encryption and Support Vector Machines

Table A.18: **FHE+SVM**. Execution time per data sample, in seconds. All datasets.

| Dataset | Method | Encryption | Computation | Decryption |
|---|---|---|---|---|
| Pima | M1 | 1.127 | 0.104 | 0.058 |
| Pima | M2 | 0.231 | 0.199 | 0.052 |
| Breast | M1 | 4.100 | 0.359 | 0.058 |
| Breast | M2 | 0.233 | 0.200 | 0.053 |
| Credit | M1 | 5.892 | 0.615 | 0.059 |
| Credit | M2 | 0.235 | 0.201 | 0.053 |
| Adult | M1 | 11.895 | 1.296 | 0.057 |
| Adult | M2 | 0.232 | 0.199 | 0.053 |

## A.2 Communication Cost

### A.2.1 Garbled Circuits and Decision Trees

Table A.19: **GC+DT**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Pima Indians Diabetes* dataset.
| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 1 | 23.542 | 34.941 | 46.339 | 57.743 | 69.137 |
| 4 | 90.858 | 131.485 | 172.120 | 212.741 | 253.369 |
| 6 | 321.667 | 462.496 | 603.318 | 744.170 | 885.012 |
| 8 | 1244.83 | 1786.52 | 2328.19 | 2869.91 | 3411.55 |
| 10 | 4937.70 | 7082.78 | 9227.74 | 11372.8 | 13517.9 |
| 12 | 19708.8 | 28267.3 | 36825.7 | 45384.2 | 53942.9 |
| 13 | 39403.7 | 56513.5 | 73623.3 | 90733.1 | 107843 |

Table A.20: **GC+DT**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Breast Cancer Wisconsin Diagnostic* dataset.

| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 1 | 85.280 | 127.547 | 169.814 | 212.084 | 254.348 |
| 3 | 175.738 | 261.128 | 346.551 | 431.980 | 517.377 |
| 4 | 296.296 | 439.239 | 582.210 | 725.153 | 868.094 |
| 5 | 537.460 | 795.483 | 1053.44 | 1311.47 | 1569.51 |
| 6 | 1019.83 | 1507.91 | 1996.08 | 2484.10 | 2972.24 |
| 7 | 1984.46 | 2932.86 | 3881.12 | 4829.36 | 5777.78 |

Table A.21: **GC+DT**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Credit Approval* dataset.

| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 193.908 | 289.759 | 385.628 | 481.480 | 577.331 |
| 5 | 889.564 | 1323.15 | 1756.74 | 2190.37 | 2623.99 |
| 7 | 3274.64 | 4866.18 | 6457.81 | 8049.41 | 9640.96 |
| 8 | 6454.73 | 9590.25 | 12725.9 | 15861.4 | 18997.0 |
| 9 | 12815.1 | 19038.6 | 25262.1 | 31485.6 | 37709.3 |

Table A.22: **GC+DT**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Adult Income* dataset.
| DT Depth | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 5 | 1843.72 | 2753.97 | 3664.21 | 4574.43 | 5484.64 |
| 6 | 3485.83 | 5205.30 | 6924.63 | 8643.95 | 10363.3 |
| 9 | 26476.2 | 39523.2 | 52569.9 | 65617.1 | 78663.8 |
| 10 | 52751.1 | 78743.8 | 104736 | 130729 | 156721 |

### A.2.2 Garbled Circuits and $k$-Means

Table A.23: **GC+$k$-M**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Pima Indians Diabetes* dataset.

| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 309.849 | 591.326 | 961.850 | 1421.45 | 1970.15 |
| 3 | 456.502 | 874.347 | 1425.81 | 2110.88 | 2929.57 |
| 4 | 603.113 | 1157.36 | 1889.79 | 2800.34 | 3889.04 |
| 5 | 749.726 | 1440.35 | 2353.72 | 3489.77 | 4848.50 |
| 6 | 896.376 | 1723.41 | 2817.74 | 4179.15 | 5807.92 |
| 7 | 1042.96 | 2006.43 | 3281.61 | 4868.58 | 6767.40 |
| 8 | 1189.58 | 2289.42 | 3745.53 | 5557.99 | 7726.80 |
| 9 | 1336.19 | 2572.47 | 4209.50 | 6247.40 | 8686.30 |
| 10 | 1482.85 | 2855.49 | 4673.53 | 6936.81 | 9645.61 |
| 20 | 2949.03 | 5685.65 | 9312.94 | 13831.1 | 19240.0 |
| 30 | 4415.20 | 8515.81 | 13952.4 | 20725.3 | 28834.3 |
| 40 | 5881.34 | 11345.9 | 18592.0 | 27619.6 | 38428.7 |
| 50 | 7347.55 | 14176.2 | 23231.6 | 34514.1 | 48023.0 |
| 60 | 8813.84 | 17006.3 | 27871.1 | 41408.0 | 57617.9 |
| 70 | 10280.0 | 19836.4 | 32510.5 | 48302.4 | 67211.9 |
| 80 | 11746.1 | 22665.5 | 37510.1 | 55196.6 | 76806.3 |
| 90 | 13212.4 | 25496.9 | 41789.7 | 62090.9 | 86400.4 |
| 100 | 14678.6 | 28326.8 | 46429.1 | 68985.2 | 95995.1 |

Table A.24: **GC+$k$-M**. Amount of data per sample, in kB, received during runtime by the GC evaluator.
*Breast Cancer Wisconsin Diagnostic* dataset.

| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 1160.22 | 2215.08 | 3604.00 | 5326.97 | 7383.92 |
| 3 | 1706.36 | 3271.44 | 5337.51 | 7904.74 | 10973.1 |
| 4 | 2252.49 | 4327.76 | 7071.10 | 10482.5 | 14562.2 |
| 5 | 2798.68 | 5384.18 | 8804.74 | 13060.4 | 18151.1 |
| 6 | 3344.74 | 6440.50 | 10538.2 | 15638.3 | 21740.2 |
| 7 | 3890.90 | 7496.85 | 12271.9 | 18215.9 | 25329.3 |
| 8 | 4437.04 | 8553.14 | 14005.4 | 20793.9 | 28918.7 |
| 9 | 4983.15 | 9609.47 | 15739.1 | 23371.7 | 32507.7 |
| 10 | 5529.26 | 10666.0 | 17472.6 | 25949.5 | 36096.7 |
| 20 | 10990.4 | 21229.2 | 34808.3 | 51728.0 | 71987.3 |
| 30 | 16451.7 | 31792.6 | 52144.1 | 77506.3 | 107878 |
| 40 | 21913.0 | 42356.3 | 69479.9 | 103284 | 143769 |
| 50 | 27374.2 | 52919.6 | 86815.3 | 129062 | 179660 |
| 60 | 32835.5 | 63482.7 | 104151 | 154841 | - |
| 70 | 38296.9 | 74046.3 | 121487 | - | - |
| 80 | 43758.2 | 84609.9 | 138822 | - | - |
| 90 | 49219.1 | 95173.1 | 156159 | - | - |
| 100 | 54680.5 | 105737 | 173494 | - | - |

Table A.25: **GC+$k$-M**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Credit Approval* dataset.
| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 1972.05 | 3765.09 | 6126.07 | 9054.88 | 12551.6 |
| 3 | 2899.50 | 5559.67 | 9071.71 | 13435.4 | 18650.7 |
| 4 | 3827.13 | 7354.19 | 12017.0 | 17815.7 | 24750.1 |
| 5 | 4754.49 | 9148.68 | 14962.5 | 22196.1 | 30849.3 |
| 6 | 5682.00 | 10943.2 | 17908.1 | 26576.5 | 36948.4 |
| 7 | 6609.56 | 12737.8 | 20853.7 | 30956.8 | 43047.6 |
| 8 | 7536.99 | 14532.3 | 23799.2 | 35337.2 | 49146.9 |
| 9 | 8464.34 | 16326.9 | 26744.6 | 39717.9 | 55246.2 |
| 10 | 9391.91 | 18121.3 | 29690.1 | 44098.0 | 61345.4 |
| 20 | 18666.6 | 36066.6 | 59145.0 | 87901.8 | 122338 |
| 30 | 27941.7 | 54011.9 | 88600.0 | 131706 | - |
| 40 | 37216.4 | 71957.2 | 118055 | - | - |
| 50 | 46491.1 | 89902.4 | 147510 | - | - |
| 60 | 55765.9 | 107848 | - | - | - |
| 70 | 65040.7 | 125793 | - | - | - |
| 80 | 74315.8 | 143738 | - | - | - |
| 90 | 83590.5 | 161683 | - | - | - |
| 100 | 92865.1 | 179629 | - | - | - |

Table A.26: **GC+$k$-M**. Amount of data per sample, in kB, received during runtime by the GC evaluator. *Adult Income* dataset.
| Number of Clusters | 8 bits | 12 bits | 16 bits | 20 bits | 24 bits |
|---|---|---|---|---|---|
| 2 | 4174.78 | 7971.66 | 12971.1 | 19173.0 | 26577.5 |
| 3 | 6136.87 | 11769.3 | 19205.7 | 28445.8 | 39489.5 |
| 4 | 8098.94 | 15567.1 | 25440.1 | 37718.2 | 52401.5 |
| 5 | 10061.2 | 19364.8 | 31675.0 | 46990.8 | 65313.5 |
| 6 | 12023.2 | 23162.6 | 37909.2 | 56263.7 | 78225.4 |
| 7 | 13985.3 | 26960.3 | 44143.4 | 65536.3 | 91137.4 |
| 8 | 15947.4 | 30757.9 | 50378.0 | 74808.8 | 104049 |
| 9 | 17909.6 | 34555.5 | 56612.9 | 84081.3 | 116961 |
| 10 | 19871.7 | 38353.1 | 62847.2 | 93354.2 | 129873 |
| 20 | 39493.0 | 76330.2 | 125193 | - | - |
| 30 | 59114.1 | 114307 | - | - | - |
| 40 | 78735.3 | 152284 | - | - | - |
| 50 | 98356.6 | - | - | - | - |
| 60 | 117978 | - | - | - | - |
| 70 | 137598 | - | - | - | - |
| 80 | 157221 | - | - | - | - |
| 90 | - | - | - | - | - |
| 100 | - | - | - | - | - |
Alkaloids in Bulgarian *Pancratium maritimum* L.

Strahil Berkov\textsuperscript{a,*}, Luba Evstatieva\textsuperscript{a}, and Simeon Popov\textsuperscript{b}

\textsuperscript{a} Institute of Botany, Bulgarian Academy of Sciences, 23 Acad. G. Bonchev Str., 1113 Sofia, Bulgaria. Fax: +359/2719032. E-mail: firstname.lastname@example.org

\textsuperscript{b} Institute of Organic Chemistry with Centre of Phytochemistry, Bulgarian Academy of Sciences, 9 Acad. G. Bonchev Str., 1113 Sofia, Bulgaria

* Author for correspondence and reprint requests

Z. Naturforsch. \textbf{59c}, 65–69 (2004); received May 22/July 7, 2003

A GC/MS analysis of the alkaloids from leaves, bulbs and roots of *Pancratium maritimum* was performed. Of the 16 alkaloids identified, 5 are reported for the first time for this plant. Several compounds with pharmacological activity were found. Haemanthamine was the main alkaloid in the leaves and bulbs, whereas galanthamine was found to be the main alkaloid in the roots.

\textit{Key words}: Amaryllidaceae Alkaloids, GC/MS, *Pancratium maritimum*

\section*{Introduction}

The Amaryllidaceae have attracted attention as a source of valuable biologically active alkaloids. The genus *Pancratium* includes about 15 species distributed in the Mediterranean, Africa and Asia (Willis, 1973). The alkaloid composition of only a few of them has been investigated in detail. *Pancratium maritimum* L. is characteristic of sandy coastal habitats of the Mediterranean. The plant is endangered and protected in Bulgaria. Bulb and leaf extracts of *P. maritimum* have purgative (Iordanov, 1964), acaricidal, insecticidal (Abbassy et al., 1998) and antifungal activities (Sur-Altiner et al., 1999).
About 40 alkaloids have been reported for this species: dihydrolycorine, norpluviine (Sandberg and Michel, 1968), lycorine, 6-\textit{O}-methylhaemanthidine, \textit{O},\textit{N}-dimethylnorbeladine, hippeastrine, hordenine, habranthine, ungiminorine, ungiminorine \textit{N}-oxide, vittatine (Tato et al., 1988), tazettine, pancrachine, lycorenine, galanthamine, sickenbergine, homolycorine, haemanthidine, hippadine, trisphaeridine, haemanthamine, pseudolycorine, 9-\textit{O}-demethylhomolycorine, 11-hydroxyvittatine, ungeremine, zefbetaine, narciclasine-4-\textit{O}-\beta-\textit{D}-glucopyranoside (Abou-Donia et al., 1991), 3,11-dihydroxy-1,2-dehydrocrinane (Sener et al., 1993), buphanisine, crinine, 6-hydroxy-3-methoxy-1,2-dehydrocrinane, 6,11-dihydroxy-3-methoxy-1,2-dehydrocrinane, 6,11-dihydroxy-1,2-dehydrocrinane, 8-hydroxy-9-methoxycrinine (Sener et al., 1994), pancratistatine (Pettit et al., 1995), \textit{N}-demethylgalanthamine, 2-\textit{O}-demethylmonanthine (Sener et al., 1998), marithidine, lycoramine (Youssef and Frahm, 1998); pancritamine, acetyllycoramine, \textit{N}-demethyllycoramine (Youssef, 1999). Some of these alkaloids have interesting pharmacological properties, such as anti-tumor (pancratistatine and ungiminorine; Pettit et al., 1995), anti-viral (lycorine), anti-cholinesterase (galanthamine) and analgesic activities (lycorine and galanthamine; Bastida and Viladomat, 2002). Gas chromatography/mass spectrometry (GC/MS) proved to be a useful method for the investigation of complex mixtures of different alkaloid groups (Wink et al., 1983; Witte et al., 1987; Kreh et al., 1995). In order to increase the volatility of the alkaloids and make them suitable for GC/MS investigation, the alkaloid mixtures can be silylated before analysis, but the spectra obtained give limited information (Kreh et al., 1995). The spectra of underivatized alkaloids proved to be much more informative. 
There are only a few reports on GC/MS of underivatized alkaloid mixtures from Amaryllidaceae plants; these showed that the alkaloids retain their characteristic EIMS fragmentation patterns under GC/MS conditions (Kreh et al., 1995; Tram et al., 2002). The alkaloid composition of *P. maritimum* plants from the Bulgarian seacoast has not been studied, and we therefore performed GC/MS analysis of the alkaloid fractions from leaves, bulbs and roots of *P. maritimum* growing in Bulgaria.

\section*{Experimental}

\textit{Plant material}: Samples of *P. maritimum* were collected in May 2002 from the Black Sea coast near the Kavatsite camping site, Bulgaria. A voucher specimen (COM-Co 974) is deposited at the herbarium of the Institute of Botany, Bulgarian Academy of Sciences.

\textit{Isolation of the alkaloid fractions}: Fresh plant tissues were cut into small pieces and extracted three times (48 h each) with ethanol. The extracts were concentrated *in vacuo*, acidified with 3% sulfuric acid to pH 1–2 and defatted with chloroform (3×). The acidic aqueous phase was then alkalinized with 25% NH$_4$OH to pH 10–11 and the alkaloids were extracted three times with chloroform. The chloroform extracts were combined, dried over anhydrous Na$_2$SO$_4$ and then evaporated. The residues obtained were dissolved in methanol and subjected to GC/MS analysis.

\textit{GC/MS analysis}: The GC/MS spectra were recorded on a Hewlett Packard 5890/MSD 5972A instrument operating in EI mode at 70 eV. An HP5 MS column (30 m × 0.25 mm × 0.25 μm) was used. The temperature program was 80 to 280 °C at 10 °C·min$^{-1}$, with a 10 min hold at 280 °C. The injector temperature was 280 °C. The flow rate of the carrier gas (He) was 0.8 ml·min$^{-1}$. The identification of the alkaloids was confirmed by comparing the mass spectral data with those of authentic compounds from the NIST 98 database (a Hewlett Packard Mass Spectral Library, Hewlett Packard, Palo Alto, CA, USA) or with data obtained from the literature. 
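Library identification of this kind is commonly scored with spectral cosine similarity. The sketch below is our own illustration, not the authors' software; the reference fragment lists are abridged from Table I, the "unknown" spectrum is hypothetical, and all names are invented for the example.

```python
import math

def cosine_match(spectrum, reference):
    """Cosine similarity between two EI mass spectra given as
    {m/z: relative intensity} dicts (a common library-search score)."""
    mzs = set(spectrum) | set(reference)
    dot = sum(spectrum.get(m, 0) * reference.get(m, 0) for m in mzs)
    na = math.sqrt(sum(v * v for v in spectrum.values()))
    nb = math.sqrt(sum(v * v for v in reference.values()))
    return dot / (na * nb) if na and nb else 0.0

# Abridged fragment lists taken from Table I (illustrative only)
library = {
    "haemanthamine": {301: 15, 272: 100, 240: 18, 211: 16, 181: 23},
    "lycorine":      {287: 27, 286: 25, 268: 20, 250: 10, 226: 100},
}

# Hypothetical query spectrum close to haemanthamine
unknown = {301: 14, 272: 100, 240: 20, 211: 15, 181: 25}
best = max(library, key=lambda name: cosine_match(unknown, library[name]))
```

With these data, `best` resolves to `"haemanthamine"`, since the query shares no fragments with the abridged lycorine entry.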
\section*{Results and Discussion}

We analyzed the alkaloid composition of roots, bulbs and leaves from *P. maritimum* in order to establish the presence of biologically active alkaloids in the different tissues and to obtain some data on the alkaloid metabolism. More than 30 compounds from the investigated alkaloid mixtures showed the characteristic mass spectral fragmentation of Amaryllidaceae alkaloids. Almost all of them produced well-separated GC peaks. Sixteen compounds were identified (Table I, Fig. 1). To the best of our knowledge, five alkaloids, namely graciline (2), 6α-deoxytazettine (8), galanthane (9), *N*-formylgalanthamine (15), and crinane-3-one (16), are reported for the first time for *P. maritimum*. Alkaloids 2, 6, 8 and 16 were present in trace amounts in the alkaloid mixtures, and GC/MS appears to be the only practical method for their identification. The mass spectra of eleven other compounds (P-1 to P-11) with fragmentation characteristic of Amaryllidaceae alkaloids are listed in Table I. We did not identify them because of the absence of similar spectra in the available literature or databases. Alkaloids P-1 and P-2 show mass spectral fragmentation typical of lycorenine-type alkaloids: no molecular ion peak and very low intensities of all fragments besides the base peak at $m/z$ 109. For this type of alkaloid, the M$^+$ ion cannot be determined unambiguously with an electron impact mass detector (Kreh et al., 1995). Alkaloid P-6 possesses fragments at $m/z$ 185, 199, 214 and 270, as well as an intense M$^+$ ion, characteristic of pancracine derivatives (Wildman and Brown, 1968). Alkaloid P-10 shows fragmentation very similar to that of 6α-deoxytazettine (8), differing only in the relative intensities of some ion fragments; the two must be isomers. Alkaloid P-11 shows M$^+$, M$^+$–15 and base peaks two mass units lower than those of tazettine (12), as well as fragments at $m/z$ 227, 211, 181, 152 and 141 which are present in the mass spectrum of **12**. Evidently, P-11 is a dehydroderivative of tazettine. Alkaloids P-3 and P-8 have intense M$^+$–H peaks, characteristic of lycorine-, phenanthridine- and galanthamine-type alkaloids. The M$^+$ ions in the spectra of alkaloids P-6, P-7 and P-9 form the most prominent peaks, which is characteristic of many crinine-type alkaloids. The crinine-type alkaloids haemanthamine and crinine appeared to be the main alkaloids of the Bulgarian *P. maritimum*. The major alkaloid in the roots was galanthane, and in the bulbs and leaves haemanthamine. Tazettine was also present at relatively high levels in the alkaloid fractions from roots and bulbs. Crinane-3-one might be produced by crinine oxidation in the leaves. Several alkaloids with pharmacological activity were found. The most interesting was the intensively studied acetylcholinesterase inhibitor galanthamine. This compound was found at higher concentrations in the leaves, whereas the galanthamine precursor *N*-demethylgalanthamine (Bastida and Viladomat, 2002) accumulated mainly in the bulbs. The further transformation of galanthamine to *N*-formylgalanthamine probably proceeds in the bulbs. Another compound of interest is lycorine. A previous study by Tato et al. (1988) of Spanish *P. maritimum* showed that lycorine is the main component of the alkaloid fraction from bulbs. In contrast, we found that this compound accumulates only as a minor component in the plant tissues. The major alkaloid of bulbs and leaves, haemanthamine, exhibits cytotoxic and hypertensive properties (Bastida and Viladomat, 2002). Taking into account the complexity of the alkaloid fractions, GC/MS is the method of choice for a rapid analysis of *Pancratium* alkaloids.

Fig. 1. Structures of alkaloids identified in *P. maritimum*. Numbers are identical with the numbers in Table I. (Figure not reproduced.)

Table I. Alkaloids of *Pancratium maritimum* L.

| Compound | [M$^+$] | $m/z$ (rel. int.) | Roots | Bulbs | Leaves | MS ref. |
|---|---|---|---|---|---|---|
| Trisphaeridine (1) | 223(100) | 222(39), 167(10), 165(10), 164(16), 138(22) | 2.1 | 3.05 | 3.14 | Ali et al., 1986 |
| Graciline (2) | 283(4) | 282(4), 264(5), 254(6), 240(5), 227(2), 226(20), 225(100), 139(7) | 0.43 | 0.22 | – | Noyan et al., 1998 |
| Galanthamine (3) | 287(82) | 286(100), 244(24), 230(12) | 1.56 | 1.37 | 2.65 | Kreh et al., 1995 |
| Buphanisine (4) | 285(100) | 270(33), 258(23), 230(18), 215(8), 201(23), 187(10), 185(18), 172(18), 157(20), 115(31) | 1.01 | 1.32 | 3.06 | Viladomat et al., 1995 |
| $N$-Demethylgalanthamine (5) | 273(98) | 272(100), 230(33) | – | 2.11 | 1.34 | Kreh et al., 1995 |
| $\alpha$-Dihydrocaranine (6) | 273(35) | 272(100), 254(6), 242(2), 226(2), 214(5), 200(3), 188(2), 174(3), 162(4) | 0.8 | – | – | NIST 98 |
| Crinine (7) | 271(100) | 270(14), 254(10), 228(23), 214(12), 199(65), 187(57), 173(18), 115(22) | 6.27 | 8.91 | 14.16 | Viladomat et al., 1995 |
| 6$\alpha$-Deoxytazettine (8) | 315(27) | 300(49), 231(100), 217(7), 211(4), 191(8), 181(12), 150(10), 152(6), 141(5), 128(8), 115(1), 70(51) | 0.91 | – | – | NIST 98 |
| Galanthane (9) | 251(45) | 250(100), 220(4), 204(2), 192(14), 191(12), 165(6), 152(2), 139(4), 96(7), 95(9) | 15.2 | – | 4.81 | NIST 98 |
| Demethylmarithidine (10) | 273(100) | 230(25), 201(86), 189(54), 175(22), 157(16), 128(19), 115(12) | – | 2.04 | – | Bastida et al., 1988 |
| Haemanthamine (11) | 301(15) | 272(100), 240(18), 211(16), 181(23) | 4.93 | 19.53 | 38.2 | Kreh et al., 1995 |
| Tazettine (12) | 331(30) | 316(14), 298(22), 260(5), 247(100), 227(13), 211(12), 201(14), 181(12), 152(10), 141(9), 128(10), 115(16) | 7.02 | 6.38 | 1.65 | Duffield et al., 1965 |
| Pancracine (13) | 287(100) | 286(100), 268(18), 250(10), 214(14), 199(18), 185(29), 141(8), 128(8), 115(10) | 0.89 | 2.43 | – | Wildman and Brown, 1968 |
| Lycorine (14) | 287(27) | 286(25), 268(20), 250(10), 227(6), 226(100), 212(5), 147(8), 135(4), 119(8) | 1.88 | 3.36 | 0.51 | Likhitwitayawuid et al., 1993 |
| $N$-Formylgalanthamine (15) | 301(100) | 272(2), 243(6), 230(8), 225(15), 211(16), 128(11), 115(10) | 2.9 | 4.72 | 2.02 | Bastida et al., 1987 |
| Crinane-3-one (16) | 271(100) | 270(41), 240(14), 238(14), 226(8), 211(22), 181(65), 153(15), 152(15), 115(9) | – | – | 0.36 | NIST 98 |
| P-1 | – | 250(4), 238(3), 209(4), 190(1), 152(3), 135(3), 110(8), 109(100), 94(5), 82(6) | 0.24 | – | – | – |
| P-2 | – | 207(2), 199(1), 164(4), 152(3), 135(3), 110(8), 109(100), 108(25), 94(6), 82(6) | 0.47 | 2.90 | – | – |
| P-3 | – | 253(49), 252(100), 224(38), 181(7), 166(16), 152(11), 128(3), 115(6) | 0.23 | 0.32 | 0.59 | – |
| P-4 | – | 265(12), 227(100), 199(34), 128(5), 115(9) | – | 0.38 | – | – |
| P-5 | 301(49) | 286(15), 272(14), 245(100), 229(51), 128(7), 115(20) | tr. | – | – | – |
| P-6 | 299(100) | 284(22), 270(33), 244(35), 227(83), 214(7), 199(33), 185(20), 141(19), 128(15), 115(26), 115(17) | tr. | – | – | – |
| P-7 | 271(100) | 256(19), 238(16), 211(16), 181(14), 165(15), 128(11), 115(17) | – | 0.42 | – | – |
| P-8 | 273(60) | 272(6), 257(43), 224(100), 212(10), 199(44), 166(10), 141(19), 128(1), 115(15) | – | 2.12 | 1.34 | – |
| P-9 | 277(100) | 211(21), 181(70), 153(31), 152(31), 128(5), 115(11) | 0.72 | – | 4.81 | – |
| P-10 | 315(19) | 308(10), 231(10), 217(3), 211(15), 197(11), 185(5), 159(4), 152(8), 141(8), 128(11), 115(11), 70(9) | 1.43 | 2.04 | – | – |
| P-11 | 329(21) | 314(25), 295(25), 245(100), 227(14), 211(18), 181(9), 152(10), 141(9), 128(5), 115(10) | tr. | – | – | – |

* The ion current generated depends on the characteristics of the compound and is not a true quantification. 
It requires a minimum of plant material and allows the identification of numerous compounds, some of them of pharmacological interest. **Acknowledgements** This work was supported by the Ministry of Environment and Waters, Bulgaria (project 3228/264). Ali A., El Sayed H., Abdallah O., and Steglich W. (1986), Oxocrinine and other alkaloids from *Crinum americanum*. Phytochemistry **25**, 2399–2401. Abbassy M., Gougery O., El-Hamady S., and Sholo M. (1998), In vitro cytotoxicity and antitumouristic effects of soosin, *Pancratium maritimum* extracts and constituents. J. Egypt. Soc. Parasitol. **28**, 197–205. Abou-Donia A., Giulio A., Evidente A., Gaber M., Habib A.-A., Lanzetta R., and El-Din A. (1991), Narciclasine-4-O-β-D-glucopyranoside, a glucosyloxy amide phenanthridone derivative from *Pancratium maritimum*. Phytochemistry **30**, 1445–1446. Bastida J., Viladomat F., Llabres J., Codina C., Felitz M., and Rubiralta M. (1987), Alkaloids from *Narcissus confusus*. Phytochemistry **26**, 1519–1524. Bastida J., Llabres J., Viladomat F., Codina C., Rubiralta M., and Felitz M. (1988), 9-O-Demethylmariitidine: A new alkaloid from *Narcissus radiganorum*. Planta Med. **54**, 524–526. Bastida J., and Viladomat F. (2002), Alkaloids of *Narcissus*. In: Medicinal and Aromatic Plants – Industrial Profiles: The Genus *Narcissus* (Hanks G., ed.), Taylor and Francis, London and New York, pp. 141–214. Duffield A., Aplin R., Budzikiewicz H., Djerassi C., Murphy C., and Wildman W. (1965), Mass spectrometry and stereochemistry. Problems LXXXII. A study of the fragmentation of some Amaryllidaceae alkaloids. J. Am. Chem. Soc. **87**, 4902–4917. Iordanov D. (1964), Genus *Pancratium*. In: Flora of People’s Republic of Bulgaria (Iordanov D., ed.). Academic Press, Sofia, Vol. 2, pp. 323–324. Kreh M., Matusch R., and Witte L. (1995), Capillary gas chromatography-mass spectrometry of Amaryllidaceae alkaloids. Phytochemistry **38**, 771–776. 
Likhitwitayawuid K., Angerhofer C., Chai H., Pezzuto J., and Cordell G. (1993), Cytotoxic and antimalarial alkaloids from the bulbs of *Crinum amabile*. J. Nat. Prod. **56**, 1331–1338. Noyan S., Rentsch G., Onur M., Gozler T., Gozler B., and Hesse M. (1998), The gracilines: a novel subgroup of the Amaryllidaceae alkaloids. Heterocycles **48**, 1777–1784. Pettit G., Pettit G. 3rd, Groszek G., Backhaus R., Doubek D., Barr R., and Meerow A. (1995), Antineoplastic agents, 301. An investigation of the Amaryllidaceae genus *Hymenocallis*. J. Nat. Prod. **58**, 756–759. Sandberg F., and Michel K.-H. (1968), Alkaloids of *Pancratium maritimum*. Acta Pharm. Suec. **5**, 61–66. Sener B., Konukol S., Kruk C., and Pandit U. (1993), New crinine type alkaloids from *Pancratium maritimum*. Fitoterapia **64**, 281–284. Sener B., Konukol S., Kruk C., and Pandit U. (1994), Alkaloids of Amaryllidaceae. II. Alkaloids of crinine class from *Pancratium maritimum*. J. Chem. Soc. Pak. **16**, 275–279. Sener B., Konukol S., Kruk C., and Pandit U. (1998), Alkaloids of Amaryllidaceae. III. Alkaloids from the bulbs of *Pancratium maritimum*. Nat. Prod. Sci. **4**, 148–152. Sur-Altiner D., Gurkan E., Mutlu G., Tuzlaci E., and Ang O. (1999), The antifungal activity of *Pancratium maritimum*. Fitoterapia **70**, 187–189. Tato P., Castedo L., and Riguera R. (1988), New alkaloids from *Pancratium maritimum*. Heterocycles **27**, 2833–2836. Tram N., Mitova M., Bankova V., Handjieva N., and Popov S. (2002), GC/MS of *Crinum latifolium* L. alkaloids. Z. Naturforsch. **57c**, 239–242. Viladomat F., Codina C., Bastida J., Mathee S., and Campbell W. (1995), Further alkaloids from *Brunsvigia josephinae*. Phytochemistry **40**, 961–965. Wildman W., and Brown R. (1968), Mass spectra of 5,11-dihydroxyphenanthridine alkaloids. The structure of pancracine. J. Am. Chem. Soc. **90**, 6439–6446. Willis J. C. (1973), A Dictionary of the Flowering Plants and Ferns. Cambridge University Press, p. 847. 
Wink M., Witte L., Hartmann T., Theuring T., and Volz V. (1983), Accumulation of quinolizidine alkaloids in plants and cell suspension cultures: Genera *Lupinus*, *Galega*, *Glycyrrhiza*, *Laburnum* and *Sophora*. Planta Med. **48**, 253–257. Witte L., Muller K., and Arfmann H.-A. (1987), Investigation of the alkaloid pattern of *Datura innoxia* plants by capillary gas-liquid-chromatography-mass-spectrometry. Planta Med. **53**, 192–197. Youssef D., and Frahm A. (1998), Alkaloids of the flowers of *Pancratium maritimum*. Planta Med. **64**, 669–671. Youssef D. (1999), Further alkaloids from the flowers of *Pancratium maritimum*. Pharmazie **54**, 535–537.
East Japan Railway Company Procedural Regulations for IC Cards for Foreign Visitors to Japan (Aims of These Regulations) Article 1 These regulations are intended to stipulate the content and conditions of use of the services which the East Japan Railway Company (hereinafter referred to as "the Company") provides for users of the monetary value etc. recorded in an unregistered card embedded with an IC chip (hereinafter referred to as a “Welcome Suica”), sold to persons such as foreign visitors to Japan, and thereby aim to enhance the convenience of users. (Terms of Use) Article 2 The services offered by Welcome Suica shall be as stipulated in these regulations. 2. If these regulations are revised, the services offered by Welcome Suica from that point onwards shall be as stipulated in the said revised regulations. 3. Any matters not stipulated in these regulations shall be as stipulated in the East Japan Railway Company IC Fare Card Procedural Regulations (East Japan Railway Company Public Announcement No. 24, of October 2001, hereinafter referred to as the “IC Regulations”) and the like (but only stipulations that can be applied given the nature of the said Suica). When referring to the IC Regulations, “Suica” shall be read as “Welcome Suica”. (Terminology Definitions) Article 3 The main terminology definitions for these regulations are as listed below. (1) Child Welcome Suica: An unregistered Welcome Suica provided for a child to use. (2) Bulk Sale Welcome Suica: A Welcome Suica sold to persons such as foreign visitors to Japan through a travel agency or transport operator etc. (hereinafter referred to by the generic term “travel agency or the like”) which has been designated by the Company. (3) Welcome Suica Card: A card-style data storage medium which the Company has stipulated can be used as a Welcome Suica. (4) Reference Paper: A written document of card data such as the Welcome Suica’s period of validity and the passenger category (adult or child). 2. 
Any terminology definitions not stipulated in these regulations shall be as stipulated in the East Japan Railway Company Passenger Transport Business Regulations (East Japan Railway Company Public Announcement No. 4 of April 1987, hereinafter referred to as the “Passenger Transport Regulations”), and the IC Regulations. When referring to the IC Regulations, “Suica” shall be read as “Welcome Suica”. (When the Contract Becomes Valid) Article 4 The contract with regard to Welcome Suica based upon these regulations shall become valid when the Company itself, or via a travel agency or the like, issues the passenger with a Welcome Suica. (The Assignment and Ownership Rights of the Welcome Suica Card) Article 5 The Company shall assign the Welcome Suica Card when the user has applied to use Welcome Suica. 2. If the preceding paragraph applies, the ownership rights of the Welcome Suica Card belong to the user. (The Equivalent Value of the Card) Article 6 When the Company assigns the Welcome Suica Card as described in the preceding article, the Company shall not receive the equivalent value of the Welcome Suica Card. (Sale of a Welcome Suica) Article 7 When the Company assigns a Welcome Suica Card to the user as described in Article 5, the Company shall use a separately stipulated method to receive from the user the SF equivalent and/or amount equivalent to the Suica Discount Passes (hereinafter this procedure will be referred to as “the sale of a Welcome Suica”). 2. When the Company assigns to the user a Bulk Sale Welcome Suica Card, the Company shall assign a Welcome Suica Card in which the Suica Discount Passes data are already recorded and shall receive from the user the amount equivalent to the Suica Discount Passes (hereinafter this transaction is referred to as the “sale of a Bulk Sale Welcome Suica”). 3. 
When an application for a Child Welcome Suica is submitted, the user must show the person in charge of the sale an official document such as a passport to prove that the user of the said Child Welcome Suica is a child. If the application is submitted via a Welcome Suica vending machine, the child’s date of birth must be registered via the Welcome Suica vending machine. 4. When an application for a Child Welcome Suica is part of an application submitted for a Bulk Sale Welcome Suica, the user must show the travel agency or the like from which they are making the purchase an official document such as a passport to prove that the user of the said Child Welcome Suica is a child. (Start Date for Use of a Welcome Suica) Article 8 The date that shall be calculated as the start of the period of validity (hereinafter referred to as “the start date for use”) shall be the date on which the user purchased the Welcome Suica Card from a Welcome Suica vending machine or from a device stipulated separately by the Company. 2. Regardless of the stipulation in the previous paragraph, the start date of use for a Bulk Sale Welcome Suica shall be processed in accordance with one of the following clauses. (1) When the said Welcome Suica Card is used at an automatic ticket gate to enter a station within the valid section of line for the Suica Discount Passes that has been registered beforehand in the Welcome Suica Card (2) When the start date of use for the Suica Discount Passes that has been registered beforehand in the Welcome Suica Card has been specified by an automatic vending machine (excluding reserved seat ticket machines) or multipurpose ticket machine capable of processing the said Suica Discount Passes (3) When the Welcome Suica Card has been charged by an automatic vending machine (excluding reserved seat ticket machines) or multipurpose ticket machine capable of charging Welcome Suica Cards. 3. 
If a process corresponding to those in the clauses of the preceding paragraph is not carried out, the said Welcome Suica cannot be used. (Validity Period of Welcome Suica) Article 9 The validity period of a Welcome Suica is 28 days, including the start day of use. 2. If the validity period has expired, the user will lose his/her rights with regard to the said Welcome Suica. (Expiration Date of the Bulk Sale Welcome Suica) Article 10 An expiration date is set for a Bulk Sale Welcome Suica. 2. The expiration date is the last day of the month displayed on the back of the Welcome Suica Card, in the position designated by the Company. 3. If a process corresponding to those in the clauses of Article 8 Paragraph 2 is not carried out before the expiration date printed on the said Welcome Suica Card, the said Welcome Suica shall be invalid. (Reference Paper) Article 11 The Company issues a Reference Paper to the user when it sells a Welcome Suica. 2. When the Company sells a Welcome Suica Card with a Suica Discount Passes stipulated in IC Regulations Article 27 Paragraph 2, it issues a Reference Paper printed with the said Suica Discount Passes data. 3. Regardless of the stipulations in the previous paragraphs, Reference Papers will sometimes be issued after the sale of a Bulk Sale Welcome Suica. 4. The Reference Paper has no validity as a Welcome Suica. (Welcome Suica Restrictions, etc.) Article 12 The user of a Child Welcome Suica can no longer use the said Child Welcome Suica after the first March 31 following the user’s 12th birthday. 2. The Suica Season Ticket Card defined in IC Regulations Article 26 is not dealt with in the Welcome Suica services. 3. When using a Child Welcome Suica, the user must carry with him/her an official document such as a passport which proves that the user is a child, and must show that document whenever asked to do so by an official. 4. 
Excluding some use of the Bulk Sale Welcome Suica approved by the Company, when using a Welcome Suica Card, the user must carry with him/her the Reference Paper of the said Welcome Suica and must show it whenever asked to do so by an official. 5. A Suica Discount Passes cannot be purchased if its period of validity exceeds the period of validity of the said Welcome Suica Card. (Checking the SF Usage Log) Article 13 Regardless of the stipulations in IC Regulations Article 14, the usage log of an expired Welcome Suica cannot be checked. (Revisions) Article 14 The revisions stipulated in IC Regulations Article 8 shall not be implemented. (Reimbursement) Article 15 The reimbursement of the SF balance stipulated in IC Regulations Article 15 Paragraph 1 cannot be made. (Reissue) Article 16 The reissuance of a lost card stipulated in IC Regulations Article 16 shall not be made. 2. The reissuance due to a malfunction stipulated in IC Regulations Article 17 shall not be made. (Special Procedures When a Malfunction Has Occurred) Article 17 If damage or the like makes it impossible to use a Welcome Suica at automatic ticket gates, in transactions for fare tickets or the like at ticket vending machines etc., for fare adjustments at automatic fare adjustment machines or to use a valid Suica Discount Passes, and when the user fills in the required entries in a separate application form prescribed by the Company and submits it along with the said Welcome Suica at a station, the Company will issue the user with a separately prescribed certificate which will be handled according to the provisions in the following clauses. This does not apply in cases where it is deemed that the damage was caused intentionally by the user or is the result of gross negligence on the part of the user. 
(1) Any remaining SF balance will be reimbursed when the user submits the said Welcome Suica Card and the separately prescribed certificate issued by the Company at a station within the period from the day after the user submitted his/her application to the 14th day after the expiration date of the said Welcome Suica. However, this will not be done if the number printed on the back of the card is indecipherable. (2) If Suica Discount Passes data is registered in the said Welcome Suica Card, the Reference Paper is to be shown to the station official in addition to the said Welcome Suica Card and the separately prescribed certificate issued by the Company. It is then handled as follows. a. If, within the period from the day after the user submitted his/her application to the 14th day after the expiration date of the said Welcome Suica, the user requests reimbursement for a Suica Discount Passes that has not yet been used and is still during the period of its validity, the amount equivalent to the Suica Discount Passes will be reimbursed in addition to the reimbursement of the SF balance stipulated in the previous clause. b. If the user requests to continue his/her use of a Suica Fare Plan Card that is still during the period of its validity, regardless of whether this is prior to or after his/her commencement of its use, procedures will be taken for the continued use of the said Suica Fare Plan Card. Once this continued use is over, the reimbursement of the SF balance stipulated in the previous clause shall be made if the user has applied again at a station. (Special Procedures When a Malfunction Has Occurred in an IC Card Issued by an Operator Other than the Company) Article 18 If a malfunction has occurred in an IC card named in the following clause(s) that was issued by a business operator specified in IC Regulations Article 61 Paragraph 2, it shall be handled as in the preceding article. 
However, the reimbursement liability shall be confined to the company that issued the said IC card. (1) PASMO PASSPORT issued by PASMO Co., Ltd. (Applicable Laws and Court of Jurisdiction) Article 19 These regulations shall be governed by the laws of Japan. 2. Any disputes related to these regulations shall be submitted to the exclusive jurisdiction of the Tokyo District Court in the first instance. Supplementary Provision The Japanese version of these regulations is the authorized version. If there are any discrepancies between the content of the Japanese and English versions, the content of the Japanese version shall take precedence.
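The calendar rules in Articles 9, 10 and 12 reduce to simple date arithmetic. The sketch below is our own illustration of those rules, not part of the regulations; all function names are invented for the example, and leap-day birthdays are not handled.

```python
from datetime import date, timedelta
import calendar

def last_valid_day(start: date) -> date:
    # Article 9: a Welcome Suica is valid for 28 days including the start day.
    return start + timedelta(days=27)

def bulk_sale_expiry(year: int, month: int) -> date:
    # Article 10: expiry is the last day of the month printed on the card.
    return date(year, month, calendar.monthrange(year, month)[1])

def child_cutoff(birthday: date) -> date:
    # Article 12: a Child Welcome Suica may be used until the first
    # March 31 following the user's 12th birthday.
    # (A 12th birthday falling on March 31 counts as its own cutoff here.)
    twelfth = birthday.replace(year=birthday.year + 12)
    year = twelfth.year if twelfth <= date(twelfth.year, 3, 31) else twelfth.year + 1
    return date(year, 3, 31)
```

For example, a card first used on 1 May is last usable on 28 May of the same year, and a card printed "2/2024" expires on 29 February 2024.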
An investigation of the reproductive ecology of crab’s-claw in the Trent River, Ontario, Canada NICHOLAS WEISSFLOG AND ERIC SAGER* ABSTRACT Crab’s-claw (*Stratiotes aloides* L.) is an aquatic macrophyte native to northern Eurasia and often sold in North America in the aquarium and water garden plant trade. In 2008, the first wild crab’s-claw population in North America was discovered in the Trent-Severn Waterway in Ontario, Canada. A lack of crucial information on the reproductive ecology of the plant in the invaded habitat presents a barrier to effective control and management strategies. Specifically, the extent to which the plant propagates via the production of turions and offsets is unknown, as is the residency time of its turions. A field study was completed to evaluate the density and biomass of plants, as well as the number and fate of turions and offsets produced by different phenotypic forms of the plant, in order to identify any potential variability in reproduction between forms in the area of infestation. The submerged phenotype produced, on average, significantly more turions and offsets than the emergent phenotype. Secondly, experiments were conducted to assess turion viability and residency times. It was found that turions of crab’s-claw do not persist in sediment for longer than 8 to 9 mo; indeed, turions likely last no longer than the period between growing seasons. This may bode well for management, as there may be a period of the year in which all of the crab’s-claw biomass is vulnerable to control. Key words: control and management, invasive aquatic macrophyte, offset, *Stratiotes aloides*, turion. INTRODUCTION In 2008, the first wild population of crab’s-claw (*Stratiotes aloides* L.), herein referred to as CC, was found in North American inland waters. 
CC is thought to have been initially introduced from a water garden in the vicinity of the hamlet of Trent River in the Kawartha/Northumberland region of Ontario, Canada. As of 2015, the plant has spread west, downriver, to Percy’s Reach, near Campbellford, ON, with a separate population in the Black River, a tributary of Lake Simcoe, to the east (Figure 1; R. McGowan, pers. comm.; OISAP 2016). It has established dense monodominant stands of vegetation within the river, some over 20 ha in size, presenting a potential risk to the local ecology and to human values derived from the river ecosystem (OISAP 2016). This is the first documented establishment of a wild population of CC outside of its native range; as such, there is limited information available as to how it may impact the ecology of the river and the human values associated with the river. Data have shown that it often excludes phytoplankton from its stands through allelopathy and competition for nutrients (Crackles 1982, Mulderij et al. 2006). Forbes (2000) notes that waterfowl predation of CC has not been explicitly mentioned in previous studies; however, during this study of the Trent River CC population, Canada geese (*Branta canadensis*) were observed feeding on the leaf tips of emergent plants. The CC population in the Trent River was also observed in this study to have developed an association with the invasive zebra mussel (*Dreissena polymorpha*), which is consistent with observed behavior in its native range (Lewandowski and Ozimek 1997). The lengthy, wide, and robust leaves of CC seem to provide an excellent surface for the zebra mussel, thus contributing to the perpetuation of another invasive species (Lewandowski and Ozimek 1997, Toma 2006). CC is a member of the Hydrocharitaceae family, which includes other well-known invasives such as hydrilla [*Hydrilla verticillata* (L. f.) Royle] and Brazilian egeria (*Egeria densa* Planch.) (Les et al. 2006). 
The family, despite being relatively small, contains an incredibly morphologically diverse array of members (Les et al. 2006). A recent genetic review of CC classification argued that it should likely be placed within its own subfamily (Les et al. 2006). CC is dioecious; however, there have been rare cases of plants with a fertile stamen produced in a female flower (Cook and Urmi-König 1983). This inconsistent dioecious behavior indicates sexual phenotypic instability, and observational studies of the plant have noted that phenotypic expression of sex might be temperature dependent, as no males are seen in the northerly reaches of its native range (Forbes 2000). However, even in ranges where both sexes are present, recruitment from seed is thought to be minimal relative to asexual recruitment (Cook and Urmi-König 1983, Smolders et al. 1995). In one field study by Erixon (1979), recruitment was observed to be greater than 100% per year, with plant densities doubling between June and September. Research on the Trent River population has documented a doubling in CC biomass between September and November, usually a time when the native plant community is senescing (Canning 2014). In a laboratory study by Renman (1989), recruitment (new plants being added to the population) was observed to be 70%, with no mortality in the rest of the population. *First and second authors: Graduate student and Professor, Ecological Restoration Program, Trent University, 1600 West Bank Dr., Peterborough, Ontario, K9L 0G2, Canada. Corresponding author’s E-mail: firstname.lastname@example.org. Received for publication November 30, 2015 and in revised form April 7, 2016.* CC can behave as either a submerged or an emergent aquatic plant.
In the spring and summer, some plants generate photosynthetic gases in their leaves, allowing them to become buoyant and float to the water surface; when autumn returns, CC loses many of its leaves and sinks back down to the sediment (Smolders et al. 2003). Beyond this, the plant has interesting phenotypic variety. CC is typically described as having two phenotypes: an emergent phenotype, characterized by a plant with some or all leaves above the surface of the water, and a submerged phenotype, with all leaves below the surface (Erixon 1979, Renman 1989, Efremov and Sviridenko 2008). However, it has been theorized that CC has as many as three (Strzalek 2004) or four morphologically distinct phenotypes (Toma 2006). CC has three methods of propagation: by seed, offset, and turion (Forbes 2000). Seed production in the Trent-Severn Waterway population has been noted as absent, which is consistent with information from its native range where, north of a particular line, in climates more similar to that of southern Ontario, male plants are found only rarely and no viable seed has been found (de Geus-Kruyt and Segal 1973, Forbes 2000). Further, even in areas where the plant reproduces sexually, it predominantly relies upon asexual reproduction by offset and by turion (Kornatowski 1979). Turions are frost-hardy propagules that can also act as long-range dispersal propagules, as they are susceptible to greater physical forces such as water current and wind-induced wave action (Erixon 1979). CC turion production begins in July and lasts until November (Erixon 1979, Kunze et al. 2010, Canning 2014). It is known that axillary turions of aquatic macrophytes are able to both float and sink; this is thought to be caused by variable starch densities, which allow turions either to sink or to float as water temperature (and therefore, at constant pressure, water density) varies (Weber and Noodén 2005).
A study by Erixon (1979) found that turions collected in January sank when replaced in the water. Offsets are stand-densifying propagules that are able to create their own rooting structures while attached to the mother plant. Offsets are vulnerable to fragmentation by current and wave action and can act as a long-range dispersal mechanism (Erixon 1979). The mechanisms by which CC competes are of great interest in understanding potential interactions between CC and the newly invaded environments in Ontario. CC’s ability to overwinter as a green plant gives it an additional resilience mechanism, contributing to its extremely low mortality rate from year to year (Renman 1989, Kunze et al. 2010). CC’s phenotypic plasticity also contributes to its ability to survive in lower light conditions. In a study by Harpenslager et al. (2015), it was observed that under lower light levels CC adapted by developing thinner leaves with a higher efficiency of photosystem II and higher chlorophyll content. In mesotrophic to eutrophic conditions with high photosynthetically active radiation (PAR) and high dissolved carbon dioxide (CO$_2$), CC is able to form dense emergent patches, with one study by Erixon (1979) reporting biomasses of 5,500 kg of dry weight per hectare in September (Harpenslager et al. 2015). These dense floating patches are easily capable of heavily shading the bottom sediment, significantly reducing the light available to other plants. Due to the potential threat this plant poses to native ecosystems and its close proximity to the Great Lakes, the Ontario provincial government, in partnership with the Ontario Federation of Anglers and Hunters, has established a goal of complete eradication of CC from the Trent-Severn Waterway. Although studies of appropriate management approaches are ongoing, information on the reproductive ecology of the plant in this newly invaded region is lacking, presenting a barrier to effective management.
Turion sediment residency time has been a large question for authorities managing CC because the turion is the only propagule in the current context with the potential to act as a long-term resilience mechanism. Information on how long turions can persist in sediment will help determine the minimum repeat treatment period for any area invaded by CC. The viability of turions after overwintering is also unknown; this information would provide a better estimate of the plant’s recruitment rates and ability to spread successfully on an annual basis. Another gap in our knowledge is the difference in the quantity of propagules produced by each CC phenotype within the area of infestation. Although previous studies have examined the differences in propagule production between emergent and submerged phenotypes, it is prudent to check for consistency in a new range (Erixon 1979). This information may help identify targeting priorities; i.e., if a significant difference in propagule output between phenotypes exists, one phenotype may be identified as contributing more to spread than the other. MATERIALS AND METHODS Study area The study was conducted at two locations within the Kawarthas/Northumberland region of Ontario. The first location was a small (40 by 15 m), human-dug, hydrologically isolated pond in Blackstock, ON (44°0′57.52″N; 78°51′13.61″W) that is believed to have been colonized by CC sometime prior to 2011. The second location, near the hamlet of Trent River in the reach of the Trent-Severn Waterway known as Lake Seymour, was composed of two sets of sites (44°23′10.60″N; 77°50′22.05″W and 44°22′44.67″N; 77°49′33.75″W). The sampling sites at both locations consisted of a plant community largely dominated by CC (> 90% by sediment cover) with patches of both the emergent and submerged phenotypes.
Offset and turion production by river populations The first portion of this research was to determine the differences in reproductive ecology between the emergent and submerged phenotypes. Data were collected in two sampling periods (August 2013 and September 2015) at the Trent-Severn Waterway location. The August 2013 sampling period was used to collect information on stand density, stand biomass, and individual biomass by phenotype; this occurred at the site with coordinates 44°23′10.60″N; 77°50′22.05″W. Patches were selected by having a diver move along a 50-m transect, with each patch encountered being sampled. A patch was defined as a distinct area of sediment covered by a single phenotype of CC. One sample was taken from each patch by spearing a pole of polyvinyl chloride (PVC) piping into the sediment and placing a 0.5 by 0.5–m quadrat over the top of the spear, with all vegetation present within the quadrat harvested by hand. Upon harvest, each individual plant was placed within its own plastic garbage bag. In total, 5 emergent patches and 14 submerged patches were sampled. Any roots were removed from the sampled plants and all remaining biomass was spun in a salad spinner to remove excess moisture. All of the spun biomass was then placed on a scale until the mass reading stabilized (± 1 g). The September 2015 sampling period was used to collect information on turion and offset production by CC phenotype. Sampling was done in a small bay within the Lake Seymour reach of the Trent River; this area was chosen because it received relatively little disturbance from boat traffic and other recreational users of the water body. Two submerged and six emergent CC sites were chosen for sampling in late September 2015. Sites were defined as a 2 by 2–m area within a single patch of CC of a particular phenotype.
Samples were taken at each site by spearing a pole of PVC piping into the sediment and placing a 0.5 by 0.5–m quadrat over the top of the spear, holding a corner of the quadrat to the spear to maintain relative position for sampling. Samples were taken within the quadrat until 15 plants had been harvested or no plants were left within the site. Each plant then had its associated turions and offsets counted and recorded. For statistical analysis, Student’s $t$ tests and Mann-Whitney Rank Sum tests were used (based on the normality or nonnormality of the particular data, respectively). All statistical comparisons were considered significant at a $P$ value of less than 0.05 (alpha). All statistical analysis was done in the software program SigmaPlot 12.0.\(^1\) Turion persistence The turion persistence study was conducted at both the pond and the river sites in early August 2013, prior to the fall and winter release of turions, by collecting bulk sediment samples using a standard Ponar® Grab sediment sampler\(^2\) with a 523-cm\(^2\) sample area (Kunze et al. 2010). At the pond site, transect lines were set up covering half the length of the pond, with five lines, each 5 m apart, and six sampling points per line, each 2 m apart, for a total of 30 sampling points. The Ponar Grab sediment sampler was dropped once at each point and the contents of the sample were emptied onto a 5-mm wire mesh screen. Water was then poured through the screen to break up and wash away the sediment and reveal any turions. The number of turions per 523 cm\(^2\) and the depth of the sample were recorded. River site sampling was conducted by recording 10 global positioning system (GPS) positions within the site. At each GPS position, five samples were taken from different positions off the boat. Samples were collected and processed as described for the pond site, and the number of turions per 523 cm\(^2\) and the depth of each sample were recorded.
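The nonparametric comparison named in the statistical methods above (the Mann-Whitney Rank Sum test, used for nonnormal count data such as turions per plant) can be sketched in a few lines. The implementation below uses the normal approximation with mid-ranks for ties; the per-plant counts are hypothetical illustrative values, not the study's field data.

```python
import math
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test using the normal approximation,
    assigning mid-ranks to tied values (no tie correction on sigma)."""
    pooled = sorted(a + b)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    r_a = sum(rank[v] for v in a)           # rank sum of the first group
    n1, n2 = len(a), len(b)
    u = r_a - n1 * (n1 + 1) / 2             # U statistic for group a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

# Hypothetical per-plant turion counts (illustrative only, not field data):
submerged = [6, 7, 5, 6, 8, 6]
emergent = [1, 2, 2, 1, 3, 2]
u, p = mann_whitney_u(submerged, emergent)
```

With these toy samples the test rejects the null at alpha = 0.05, mirroring the kind of comparison reported in the Results; a statistics package such as SigmaPlot (used in the study) additionally applies exact tables and tie corrections for small samples.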
Turion incubation and sprouting A laboratory experiment was set up to assess overwintering dynamics and turion viability by incubating three sets of five sealed plastic containers, each containing six turions, at 4 C for 2, 3, and 4 mo (Berhardt and Dunaway 1986; Adamec 1999). Turions used in the experiment were harvested in November using a throw rake to collect plants, from which a total of 90 mature turions were taken. To best simulate winter conditions in the river, sediment from the river was used to line the bottom of the containers and river water was used to fill them, controlling for any impact nutrient and substrate conditions may have on sprouting. Light and temperature were controlled by placing the turions in a dark refrigerator set at 4 C to mimic the overwintering conditions in Lake Seymour. Each turion was weighed and its position in the tray recorded. Once the appropriate incubation time was reached, each set of containers was brought into a growth chamber with growing conditions set at 20 C and a 12-h light–dark cycle. Sprouting percentages were assessed by container and by turion mass. RESULTS AND DISCUSSION Plant density results Emergent communities were found to have a significantly higher mean wet biomass density (Student’s t test; $P < 0.001$) than submerged communities, on average 13.9 kg m$^{-2}$ and 6.00 kg m$^{-2}$, respectively (Figure 2). Additionally, individual emergent plants were found to have significantly more wet biomass on average (Student’s t test; $P < 0.001$) than submerged plants, 279 g and 94 g, respectively (Figure 3), a finding consistent with the literature (Toma 2006). The disparity in biomass density and individual plant biomass, despite the lack of a significant difference in plant density between emergent and submerged phenotypes, could possibly be explained by the emergent plants’ increased access to CO$_2$ and light (Bowes and Salvucci 1989).
In highly productive, shallow, freshwater riverine ecosystems, both water and nutrients are rarely lacking; however, because of the high turbidity and the heavy competition that high nutrient availability produces, light, together with the limited diffusion of CO$_2$ into water, becomes the chief limiting factor (Bowes and Salvucci 1989). It has been well established that CC in its submerged form produces a significant quantity of marl, meaning that the submerged form uses a large amount of dissolved bicarbonate (HCO$_3^-$) for its photosynthetic processes, a more energetically expensive process than dissolved CO$_2$ assimilation (Brammer 1979, Madsen and Sand-Jensen 1991). In a study by Harpenslager et al. (2015) it was found that under low dissolved CO$_2$ levels, submerged CC would use HCO$_3^-$ to supplement its carbon needs; however, this resulted in lower rates of photosynthesis, decreased emergent leaf formation, and increased precipitation of marl (calcium carbonate) on leaves. Further, this study found that, even with high levels of CO$_2$, submerged plants at low levels of PAR formed less biomass and produced no emergent leaves compared with those exposed to high PAR (Harpenslager et al. 2015). Hence, the freer availability of both CO$_2$ and light for emergent plants could explain the large disparity in individual plant biomass as well as biomass density between these two phenotypes. No significant difference was found in mean plant density between emergent and submerged phenotypic forms (Mann-Whitney Rank Sum test; $P = 0.08$) (Figure 6). However, the submerged phenotype had much greater variation, ranging from 32 to 96 plants m$^{-2}$, compared to 40 to 60 plants m$^{-2}$ for emergents (Figure 6). This could be explained by the unequal and lower sample size of the emergent data, resulting in a Type II error ($n = 5$ vs.
$n = 14$); however, submerged plants must also live within a much greater range of PAR levels, especially in a eutrophic system like the Trent-Severn Waterway, where PAR attenuates rapidly with depth due to high water turbidity. This great range of PAR levels may partly explain the high variability among submerged plants; a study by Harpenslager et al. (2015) found that less new biomass was produced by submerged CC at lower PAR levels. There were significantly more turions produced, on average, by submerged plants (6 turions plant\(^{-1}\)) than by emergent plants (1.7 turions plant\(^{-1}\)) (Figure 4) (Mann-Whitney Rank Sum test, \(P < 0.001\)). There were also significantly more offsets produced, on average, by submerged (6.2 offsets plant\(^{-1}\)) than by emergent plants (4.1 offsets plant\(^{-1}\)) when measured in early August (Figure 5) (Mann-Whitney Rank Sum test, \(P < 0.001\)). When combined with information on stand density, this might indicate that submerged patches are priority targets for reducing propagule output. However, this study represents a comparative snapshot of propagule production by phenotype; annual data documenting patch size increases by phenotype, in nature, would provide valuable information on the expected rate of spread. Turion persistence No turions were found at the pond site or the river site, suggesting that no turions of CC stay dormant longer than 8 to 9 mo, given that sampling occurred in early August and mature turions begin detaching in late fall (November to December) (Kunze et al. 2010).
This conclusion likely reflects a true absence of turions rather than sampling error, for three reasons: similar equipment and methods have been used in other studies to assess the presence of turions (Sutton and Portier 1985); the limited area available to turions at the pond site, which was human constructed, hydrologically isolated, and small (40 by 15 m), favored detection; and curlyleaf pondweed (*Potamogeton crispus* L.) turions were found in the sediment samples taken from the river site, showing that the method was capable of finding turions when they were present. This conclusion is also largely consistent with the literature on turion persistence of similar species. It has been documented that hydrilla axillary turions last for a year at most (Van and Steward 1990). Similar maximum persistence periods (10 to 12 mo) have also been found for the axillary turions of the genus *Utricularia* as well as other carnivorous aquatic macrophytes (Adamec 1999). Turion viability and overwintering dynamics Sprouting rates for all containers in all sets were 100% following incubation at 4 C; thus, viability was unaffected by incubation period (Table 1). Further, variation in turion mass also seemed to have no effect on sprouting: despite some turions being up to three times more massive than others, there was no failure to sprout (Table 1). Similar to the conclusions drawn by Van and Steward (1990) concerning hydrilla axillary turions, this extremely high viability likely helps explain the extremely low persistence. In addition, as Van and Steward (1990) note, because the turion is designed for dispersal, has a relatively low mass and no hard outer coating, and stays on the surface of the sediment, it tends to be highly influenced by fluctuations in the external environment, contributing further to its extremely low persistence (Strzalek 2004).
Turions are the only propagule of the wild CC population currently in North America capable of dormancy. Because there may be a period within which no dormant propagules are present, and because the turion lifespan experiment indicates that there is no multiyear propagule bank, the chances of an eradication strategy succeeding are considerably higher than they would otherwise be. However, the extremely high viability of turions will make managing the plant even more challenging, as the small plants into which turions develop may be extremely difficult to find. This makes the study and modeling of turion range and movement through the environment a potential next step. Figure 6. Mean ± SD plants m$^{-2}$ by phenotype from population analysis in early August. Line with P value below indicates significance level between bars (Mann-Whitney Rank Sum test). It can be concluded that the turions of CC persist for at most 8 to 9 mo and that there may be a significant period during the summer months within which no turions are present in the sediment at all. The turions of CC have an extremely high viability rate (around 100%) regardless of mass and length of incubation. This may mean that CC recruitment from turions is extremely high, given the plant’s ability to live in a suppressed state (Harpenslager et al. 2015). Analysis of patch density and of the number of turions and offsets produced per plant showed that the submerged phenotype produced both more turions and more offsets, and that patch densities of the two phenotypes were not significantly different. This suggests that submerged patches are priority targets for reducing propagule output. However, this study represents a comparative snapshot of propagule production by phenotype; annual data documenting patch size increases by phenotype, in nature, would provide valuable information on the expected rate of spread.
It was also found that emergent phenotypes have denser biomass and larger plants than submerged phenotypes, likely due to increased access to CO$_2$ and light. **LITERATURE CITED** Adamec L. 1999. Turion overwintering of aquatic carnivorous plants. Carnivorous Plant Newsl. 28:19–24. Berhardt R, Bannwarth JM. 1983. Decay of pondweed and hydrilla hibernacula in water. J. Aquat. Plant Manage. 21:29–34. Bowes G, Salvucci ME. 1989. Plasticity in the photosynthetic carbon metabolism of submersed aquatic macrophytes. Aquat. Bot. 34:235–266. Brammer E. 1979. Exclusion of phytoplankton in the proximity of dominant water soldier (*Stratiotes aloides*). Freshw. Biol. 9:223–230. Canning R. 2014. Aquatic invasive species management: the effect of treatment type and application timing on *Stratiotes aloides* in Ontario. In: Annual Meeting of the Aquatic Plant Management Society: Proceedings Abstracts. Savannah, Georgia. 25 pp. Cook CDK, Urmi-König K. 1983. A revision of the genus *Stratiotes* (Hydrocharitaceae). Aquat. Bot. 16:213–249. Crackles E. 1982. *Stratiotes aloides* L. in the East Riding of Yorkshire. Naturalist (Leeds) 107:99–101. De Geus-Kruyt M, Segal S. 1973. Notes on the productivity of *Stratiotes aloides* in two lakes in the Netherlands. Pol. Arch. Hydrobiol. 20:195–205. Efremov AN, Sviridenko BF. 2008. The eco-biomorph of water soldier *Stratiotes aloides* L. (Hydrocharitaceae) in the west Siberian part of its range. Russ. Water Biol. 1:232–239. Erixon G. 1979. Preliminary studies of a *Stratiotes aloides* L. stand in a riverside lagoon in N. Sweden. Hydrobiologia 7:215–221. Forbes RS. 2000. Assessing the status of *Stratiotes aloides* L. (water-soldier) in Co. Fermanagh, Northern Ireland (v.c. H33). Watsonia 23:179–190. Harpenslager SP, Smolders AJP, Aarts AAM, Roelofs JGM, Lamers LPM. 2015. To float or not to float: How interactions between light and dissolved inorganic carbon species determine the buoyancy of *Stratiotes aloides*.
PLOS ONE 10(4):e0124026. doi:10.1371/journal.pone.0124026. Kornatowski J. 1979. Turions and offsets of *Stratiotes aloides* L. Acta Hydrobiol. 21:195–204. Kunze K, Nagler A, Zacharias D, Schirmer M, Jordan R, Kesel R, Kundel W. 2010. Erprobung von Managementmaßnahmen in Bremen zum Erhalt der Krebsschere als Leitart für die ökologisch wertvollen Graben-Gruben-Gebiete der Kulturlandschaft Nordwestdeutschlands. Deutsche Bundesstiftung Umwelt. Les DH, Moody ML, Soros CL. 2006. A reappraisal of phylogenetic relationships in the monocotyledon family Hydrocharitaceae (Alismatidae). Aliso 22:211–230. Lewandowski K, Ozimek T. 1997. Relationship of *Dreissena polymorpha* (Pall.) to various species of submerged macrophytes. Pol. Arch. Hydrobiol. 44:431–443. Madsen TV, Sand-Jensen K. 1991. Photosynthetic carbon assimilation in aquatic macrophytes. Aquat. Bot. 41:43–49. Mulderij G, Smolders AJP, Van Donk E. 2006. Allelopathic effect of the aquatic macrophyte, *Stratiotes aloides*, on natural phytoplankton. Freshw. Biol. 51:554–561. [OISAP] Ontario’s Invading Species Awareness Program. 2016. Water Soldier Control. [http://www.invasivespecies.com/get-involved/water-soldier-monitoring/](http://www.invasivespecies.com/get-involved/water-soldier-monitoring/). Accessed March 22, 2016. Renman G. 1989. Life histories of two clonal populations of *Stratiotes aloides* L. Hydrobiologia 185:211–229. Smolders AJP, Den Hartog C, Roelofs JGM. 1995. Observations on fruiting and seed-set of *Stratiotes aloides* L. in the Netherlands. Aquat. Bot. 51:259–268. Smolders AJP, Lamers LPM, Den Hartog C, Roelofs JGM. 2003. Mechanisms involved in the decline of *Stratiotes aloides* L. in the Netherlands: sulphate as a key variable. Hydrobiologia 509:605–610. Strzalek M. 2004. A green win on the offensive, or *Stratiotes* in water ecosystems. Wiadomości Ekologiczne 50:81–107. Sutton D, Portier K. 1985. Density of tubers and turions of hydrilla in south Florida. J. Aquat. Plant Manage. 23:64–67.
Toma C. 2006. Growth and competition of two morphological forms of water soldier (*Stratiotes aloides* L.): a case study on Lake Słosineckie Wielkie (northwest Poland). Biodiv. Res. Conserv. 3–4:251–257. Van TK, Steward KK. 1990. Longevity of monoecious hydrilla propagules. J. Aquat. Plant Manage. 28:71–76. Weber JA, Noodén LD. 2005. The causes of sinking and floating in turions of *Myriophyllum verticillatum*. Aquat. Bot. 83:219–226.
3D acoustic imaging applied to the Baikal Neutrino Telescope K. G. Kebkal, a,* R. Bannasch, a O. G. Kebkal, a A. I. Panfilov, b and R. Wischnewski c a) EvoLogics GmbH, Blumenstraße 49, 10243 Berlin, Germany b) Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312, Russia c) DESY, Platanenallee 6, 15735 Zeuthen, Germany Abstract A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad-band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50-m square; the acoustic pulses were “linear sweep-spread signals” - multiple-modulated wide-band signals (10 → 22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ~0.2 m (along the beam) and ~1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom-based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km$^3$-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these remain completely passive. © 2008 Elsevier Science. All rights reserved PACS: 43.30.Vh, 43.35.Wa, 43.60.Rw, 43.60.Vx, 84.40.Xb Keywords: hydro-acoustic imaging; 3D sonar; sonars with arbitrary aperture; acoustic data transmission; S2C technology; Baikal; neutrino telescopes. 1.
Introduction Precise knowledge of the relative positions of all light sensors of an underwater neutrino telescope is essential for the spatial reconstruction of particle tracks; their absolute geo-referenced positions are needed to point back to astronomical sources. Usually, this calibration is done by an “acoustic positioning system”, made up of a number of telescope elements equipped with active acoustic beacons and a geo-referenced antenna capable of localizing the beacon positions. While this method yields reliable results (and high spatial resolution), its drawbacks are relative complexity, inflexibility, and difficulty of repair in case of failure. Also, it is difficult to include “passive” elements of complex setups which are not connected to power/acquisition systems. This paper presents pilot test results of an alternative approach for acoustic localization of telescope elements. The main difference consists in the absence of any active beacons on the telescope. Localization of telescope elements is carried out “silently” as a result of 3D acoustic imaging and is based on the broad-band acoustic echo signals from the telescope elements. Acoustic techniques are widely used underwater for inspections, imaging, obstacle avoidance, etc. Highly intelligent sonars, including 3D sonars, are available and applied for various practical purposes. The main practical applications are imaging of the sea bottom and/or of single underwater objects. For non-standard tasks, however, such as 3D imaging of a large number of discrete objects closely spaced in the water volume, problems such as the appearance of “phantom” images reduce imaging capability. The 3D acoustic imaging test was carried out at the site of the Lake Baikal neutrino experiment [1,2] in early March, 2008 by EvoLogics GmbH, Berlin, jointly with the Baikal collaboration.
The test was done before the yearly maintenance period of the telescope, during which the telescope NT200 is hauled up on the single carrying rope (see Fig. 1), serviced, and then re-deployed in early April. Because of the rotational degree of freedom during redeployment, only the central string position is fixed; the peripheral strings end up in different positions every year. The main objective of this pilot 3D acoustic imaging test was to evaluate the possibilities for (1) localization of reference elements (e.g. buoys) of the telescope by a temporary acoustic setup from the ice surface, while the telescope stays in its standard working position (1100-1200 m), (2) a future setup for stationary acoustic imaging, and (3) future improvements of signal parameters. Note that (1) allows for an independent verification of the NT200 beacon acoustic system [2], and is essential in case of its complete failure. Fig. 1 gives a sketch of the NT200 telescope - the central part of the NT200+ neutrino detector [1,2]. Seven peripheral strings and one central string are mounted on an umbrella-like frame with 21.5 m radius. The oval line in Fig. 1 indicates the telescope segment on which the main effort of this 3D acoustic imaging was concentrated: the seven outer strings are suspended under cylindrical end buoys (~2.0 m height and ~1.0 m diameter), made of small aluminum spheres. Because of their large size, these string-buoys are good reflectors - their localization would determine the string positions. We note that two of the seven buoys are larger (by ~50%); we also expect slight inclinations of some buoys - thus yielding different reflection strengths. The acoustic antenna used in this test is made up of four acoustic transducers, spaced 50 m from each other and placed under the ice at 9.5 m depth. A horizontal projection of the setup is shown in Fig. 2. The antenna center is shifted 91 m horizontally from the center of the telescope, which is at ~1100 m depth.
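The viewing geometry described above can be checked with a quick calculation, assuming straight-line propagation (ignoring refraction in the water column): with the antenna at 9.5 m depth, the telescope at ~1100 m depth, and a 91 m horizontal offset, the look direction is only a few degrees off vertical, comfortably inside the 60-degree transmit beam.

```python
import math

# Pilot-test geometry (values from the text): antenna ~9.5 m below the
# surface, telescope at ~1100 m depth, antenna center offset 91 m.
depth_diff = 1100.0 - 9.5    # vertical separation (m)
horizontal = 91.0            # horizontal offset (m)

slant_range = math.hypot(horizontal, depth_diff)               # ~1.1 km
look_angle = math.degrees(math.atan2(horizontal, depth_diff))  # off-vertical
```

This yields a slant range of roughly 1.09 km and a look angle of about 5 degrees, consistent with the km-scale localization distances quoted in the abstract.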
The insonification of the telescope was carried out by means of multiple-modulated wide-band signals, i.e. linear sweep-spread signals. The signals are similar to those used in underwater data transmission with hydro-acoustic modems of S2C technology (S2C - “sweep-spread carrier” communication technology) [3]. The sweep-spread signals had a linear frequency change from 10 to 22 kHz and a length of 51.2 s. The signal level was 195 dB re 1 µPa, and the beam-width of the transmitted signal was 60 degrees. The processing of the reflection signals from the telescope elements was similar to the procedure used by multi-static sonars [4]. In general, the signals reflected from the telescope elements showed very low signal-to-noise ratios (SNR). After optimal filtering, the SNR varied between 1 and 2 dB. We identified only 3 out of the 8 similarly sized objects (buoys). The most reasonable explanation for this “loss” is the extremely low SNR, so that only the three objects with the largest reflection strength (due to size and orientation) were detectable; the other buoys “disappeared” below the noise level. To verify the result (the positions of the resolved objects), the 3D acoustic image was overlaid with an independent measurement of the end-buoy positions (by the regular acoustic localization system of NT200 [2]), shown in Fig. 3 as white circles. We find good agreement. The accuracy of localization of the 3 objects is estimated as the width of the response main lobe at its half-height (standard method) - we find ~0.2 m along the beam and ~1.0 m transverse to the beam. We mention that the 3D imaging was obtained despite the complicated reverberating environment (strongly reflecting ice layer), as well as ambiguities due to the large number of objects with low-level reflection signals. Finally, it is worth noting that for the rotationally symmetric NT200 telescope the achieved determination of a few string positions is sufficient to calculate those of the remaining ones.
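The echo-processing principle above — correlating the received signal against the known sweep template and reading range off the correlation peak — can be illustrated with a toy matched filter. All parameters below (sample rate, sweep band, amplitudes, sound speed) are illustrative stand-ins, not the actual 10-22 kHz / 51.2 s system values.

```python
import math
import random

FS = 8000.0    # toy sample rate (Hz); the real system sweeps 10-22 kHz
C = 1430.0     # assumed sound speed in cold fresh water (m/s)

def chirp(f0, f1, dur):
    """Linear frequency sweep used as the matched-filter template."""
    k = (f1 - f0) / dur
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / FS for i in range(int(FS * dur)))]

def matched_filter_lag(rx, template):
    """Lag (in samples) that maximizes the cross-correlation with rx."""
    m = len(template)
    best_lag, best = 0, float("-inf")
    for lag in range(len(rx) - m + 1):
        s = sum(rx[lag + i] * template[i] for i in range(m))
        if s > best:
            best, best_lag = s, lag
    return best_lag

random.seed(1)
template = chirp(200.0, 3000.0, 0.05)                     # 400-sample sweep
true_lag = 700                                            # echo delay (samples)
rx = [0.3 * random.gauss(0.0, 1.0) for _ in range(2000)]  # background noise
for i, v in enumerate(template):
    rx[true_lag + i] += 0.5 * v                           # weak echo in noise

lag = matched_filter_lag(rx, template)
echo_range = lag / FS * C / 2.0   # two-way travel time -> one-way range (m)
```

The correlation gain is what lets an echo well below the per-sample noise level be localized; in the real system, longer sweeps buy exactly this kind of processing gain, which is why the text estimates 3-6 dB more SNR from increased signal duration.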
We estimate that with increased energy content of the acoustic signals (longer duration), a gain of 3-6 dB in SNR, and thus localization of the remaining buoys, should be feasible. 4. Perspective: BAN Antenna Our pilot test has shown that acoustic tomography is technically feasible, as demonstrated for the specific, ice-layer driven application for the Baikal telescope. For a stationary application, an alternative and more general approach can be considered, which is of interest also for other underwater telescopes (without a seasonal ice cover): installation of acoustic antennas on the lake's bottom. This is sketched in Fig.4. Three (or more) Bottom Acoustic Nodes (BANs) form an "antenna network". They enclose the underwater telescope (or parts of it, in the case of multi-km scale arrays). Fig.4: A bottom-based km-scale tomography system: 3 bottom acoustic nodes (BANs) enclose the underwater telescope. Every BAN consists of an acoustic transducer (for sonification of the telescope) and a hydro-acoustic data modem of S2C technology [3] (for data exchange between the BANs). Using the inherent feature of S2C modems to accurately evaluate their mutual distances, the positions of the BANs, and thus the antenna aperture, can be precisely determined. The absolute geographical coordinates of every BAN can be determined via a geo-referenced surface modem (using another inherent S2C modem feature, the ultra-short baseline ("USBL") capability). This calibration is carried out only once, from a ship. Positioning results would be logged in one of the BANs and transmitted periodically (or on request) by the S2C modem to the neutrino telescope's acoustic modem (or to any other modem-equipped permanent or temporary unit; alternatively, surface buoys equipped with radio/satellite capability can also be used). BANs could work for a long time in autonomous mode (>1 yr), needing only occasional recovery for battery replacement or recharging. 
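The ranging-based positioning that such a BAN network relies on can be sketched in two dimensions. This is an illustration only: the BAN coordinates and the target position are invented, and a real deployment would solve the 3D problem with sound-speed corrections and a least-squares fit over repeated measurements.

```python
import math

# Three BAN positions (invented coordinates, metres) and a target whose
# position we pretend not to know; only the distances are "measured".
bans = [(0.0, 0.0), (1000.0, 0.0), (500.0, 900.0)]
target = (420.0, 310.0)
r1, r2, r3 = (math.dist(b, target) for b in bans)

# Subtracting the first range equation (x-xi)^2 + (y-yi)^2 = ri^2 from
# the other two removes the quadratic terms, leaving a 2x2 linear system.
(x1, y1), (x2, y2), (x3, y3) = bans
a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2

# Solve by Cramer's rule.
det = a11 * a22 - a12 * a21
x = (b1 * a22 - b2 * a12) / det   # recovers target x
y = (a11 * b2 - a21 * b1) / det   # recovers target y
```

With exact ranges the target position is recovered exactly; with noisy ranges the same linear system becomes an overdetermined least-squares problem when more than three nodes are used.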
Obviously, not only localization/positioning data can be transmitted by such an underwater acoustic network to the telescope (or to the surface): the acoustic modems can easily interface to a variety of devices collecting local underwater environmental data, seismic data, etc., thus opening the road to a flexible underwater array for "related science" [1]. 5. Conclusions This pilot study shows that accurate acoustic 3D imaging of medium-sized elements of underwater neutrino telescopes (e.g. buoys) is possible up to km-scale distances, without using any active elements on the telescope. We performed 3-dimensional imaging of key structural elements of the Baikal neutrino telescope's main structure. The imaging system, installed directly below the ice surface, located 3 major buoys at 1.1 km depth (with longitudinal and transverse precision of 0.2 m and 1 m), in the presence of many close-by objects and with large reflections from the ice surface. The method uses the linear sweep-spread signal based S2C modem technology. Improvements of the currently quite low SNR are feasible. For stationary application, autonomous antennas on the lake's bottom are suggested, using acoustic modems to connect with each other and with a central data acquisition unit. Acoustic imaging may be of interest for underwater neutrino telescopes of km$^3$ scale and beyond, since it does not require any active elements on the telescope, and is thus complementary to beacon-based systems. In particular, it can be applied to passive objects spread over large volumes, and/or in emergency situations. Acknowledgements The authors thank the Baikal collaboration for the opportunity to test the 3D hydro-acoustic imaging under realistic conditions. References [1] V. Aynutdinov et al., The Baikal neutrino experiment: physics results and perspectives, these proceedings. [2] I. Belolaptikov et al., The Baikal underwater neutrino telescope: design, performance and first results, Astropart. Phys. 7 (1997) 263. [3] K.G. Kebkal 
and R. Bannasch, Sweep-spread carrier for underwater communication over acoustic channels with strong multipath propagation, J. Acoust. Soc. Am. 112 (2002) 2043. [4] S. Coraluppi, Multistatic sonar localization, IEEE J. Oceanic Eng. 31 (2006) 964-974.
2006 SUSTAINABILITY REPORT

THE 12 FEATURES OF A SUSTAINABLE SOCIETY

**Features of Natural Capital**
1. In their extraction and use, substances taken from the earth do not exceed the environment's capacity to disperse, absorb, recycle or otherwise neutralize their harmful effects (to humans and/or the environment).
2. In their manufacture and use, artificial substances do not exceed the environment's capacity to disperse, absorb, recycle or otherwise neutralize their harmful effects (to humans and/or the environment).
3. The capacity of the environment to provide ecological system integrity, biological diversity and productivity is protected or enhanced.

**Features of Human Capital**
4. At all ages, individuals enjoy a high standard of health.
5. Individuals are adept at relationships and social participation, and throughout life set and achieve high personal standards of their development and learning.
6. There is access to varied and satisfying opportunities for work, personal creativity, and recreation.

**Features of Social Capital**
7. There are trusted and accessible systems of governance and justice.
8. Communities and society at large share key positive values and a sense of purpose.
9. The structures and institutions of society promote stewardship of natural resources and development of people.
10. Homes, communities and society at large provide safe, supportive living and working environments.

**Features of Manufactured Capital**
11. All infrastructure, technologies and processes make minimum use of natural resources and maximum use of human innovation and skills.

**Features of Financial Capital**
12. Financial capital accurately represents the value of natural, human, social and manufactured capital.

Welcome to Strategic Sustainability Consulting's first sustainability report. We've been in business for a year, and now that we have collected some data we thought it was high time to disclose it. 
We tell our clients that transparency is the single most important factor in being a responsible organization, and so we're thrilled to be walking our talk! Being a small business, we're especially cognizant of the challenges faced by under-resourced organizations trying to be socially and environmentally responsible. In the coming months, we'll be focused on developing better, more rigorous indicators to track the right data in a way that's cost-effective and truly measures our progress towards a sustainable business model. For now, we've focused on qualitative aspects of our business, often-overlooked yet critical measures of our commitment to operating in the most ethical, most sustainable manner possible. As we move from a start-up to a bona fide business model, we'll face additional questions—with serious environmental and social consequences. Should we move from a home-based business to dedicated office space? Should we move from a consultant network model to direct employees? How can we best leverage our involvement in the local community? And how do we balance the economic challenges of growing a small business with the desire to maximize our social and environmental performance? While we don't have all the answers, we do have a set of principles that guide our strategic decisions. These 12 Features of a Sustainable Society, promoted by Forum for the Future (and based on the Four System Conditions of The Natural Step and the Five Capitals of Natural Capitalism), are characteristics of an ideal world. We'd like to make that vision a reality, and so all of our decisions are judged against whether or not we're moving towards those 12 Features. In the coming year, we hope to capitalize on our early successes and continue to grow in an environmentally responsible, socially just, and economically viable way. It's an exciting time here at Strategic Sustainability Consulting, and we're eager to get started on our second year of operations. 
We encourage you, our stakeholder, to give us feedback. Like what we’re doing? Think we’re missing an important issue? Want to get involved in one of our projects? Let us know! Jennifer K. Woofter Compared to other companies, Strategic Sustainability Consulting has a rather unusual profile. For one thing, we’re much smaller than most other organizations—in fact, we have just one direct employee (although we have several dozen consultants in an extended network). And we don’t have designated office space, so our environmental impacts are a bit smaller than other companies. Nonetheless, we do have a sustainability footprint—which we’ve tried to assess below: Keeping in mind our core impacts, our main goals for the coming year include: - Restructuring as a limited liability company (LLC). - Increasing our client base, and thus our operating budget. - Expanding our environmental management system to cover materials and waste. - Formalizing our sustainability consultant network to ensure a just and competent workforce. To make sure we stay on track, we’ve committed to a twice-yearly strategic review of our operations and impacts. We believe that these goals are both ambitious and achievable, and we look forward to reporting on our progress next year. 
### OUR SUSTAINABILITY ASSESSMENT SUMMARY (main impacts are designated with *) | Our Stakeholders | Our Economic Impacts | |------------------|----------------------| | SSC Clients* | Pro-bono Services* | | SSC Network Consultants* | Competitive Pricing of Services* | | Local Community | Taxes | | Our Environmental Impacts | Our Social Impacts | |---------------------------|--------------------| | Environmental Services* | Social/Community/Stakeholder Services* | | Energy Use | Work/Life Balance of Consultants* | | Waste/Recycling | Labor/Human Rights in Supply Chain | | Transportation* | | | Major Strengths | Major Weaknesses | |-----------------|------------------| | Network of Sustainability and CSR Consultants and Practitioners* | Small Client Base* | | Market Niche | Administrative Burdens | | Business Structure (Low Overhead, Ability to Work Remotely) | Budget Constraints for Marketing | | Major Opportunities | Major Challenges | |---------------------|------------------| | Local SMEs as Potential Clients* | Overall Business Environment – Lack of Interest from Many SMEs* | | Formal Consultant Network* | Legal Constraints of Being a Sole Proprietorship* | | Expanding Internationally | | Our Services Sustainability Assessments — The most essential component of corporate social responsibility (CSR) is understanding an organization’s key social and environmental impacts. Whether you’re starting from scratch, facing a specific challenge, or tracking the progress of new initiatives, our sustainability assessments can help you identify the challenges and opportunities associated with corporate citizenship. Supply Chain Standards — Our most popular service, Supplier Audits, is a highly customized, client-based service that helps small and medium size organizations tackle corporate social responsibility issues in their supply chain. 
Unlike expensive auditing firms or niche advocacy groups, Supplier Audits allows clients to focus on the social and environmental issues important to them. More importantly, Supplier Audits provides guidance through each step of supply chain management, so that even organizations new to corporate social responsibility can feel confident that they are implementing best practices from start to finish. Sustainability Reporting and Disclosure — Once an organization has taken the initial steps along the path to social and environmental responsibility, it's time to make that hard work pay off! One of the best ways to reap the benefits of corporate citizenship is a sustainability report. We can help you figure out what information to report, when to report it, and how to report it. Stakeholder Consultation — Consulting your stakeholders is a great way to get an "outside the box" view of an organization's operations, and can boost your credentials as a socially responsible organization. As a third-party facilitator, we can help you build trust among employees, suppliers, customers, and community members. Together, we can determine where your organization shines, and where it needs a little polishing. Strategic Sustainability Consulting is a small business located in Bethesda, Maryland—a suburb in the Washington, D.C. metropolitan area. We specialize in helping under-resourced organizations manage their social and environmental impacts through a variety of products and services. For our first year of operations, we've been structured as a sole proprietorship. Strategic Sustainability Consulting's founder and president, Jennifer K. Woofter, is the only direct employee of the organization, and she works with a network of additional sustainability consultants on a project-by-project basis. 
This model allows Strategic Sustainability Consulting to maintain low overhead costs, and provides our clients with an individually-customized team for each project we undertake (see Labor Standards). During the reporting period, we worked with clients in the United States, Canada, and the U.K. Our primary business was sustainability assessments and supply chain management, with a secondary focus on sustainability reporting, stakeholder engagement, and shareholder advocacy services. Additionally, we've been involved in one-off projects including background CSR research for other consultancies, freelance writing for social enterprises, and strategy consulting for a new CSR organization. What's Missing We've chosen not to reveal specific financial information in this report (like net sales, revenue, assets, and capitalization). Being a sole proprietorship, that kind of transparency is a little too much! In the coming year, however, we'll be investigating ways to disclose financial information in an appropriate manner. For now, we're relying on standardized financial accounting methods to keep us on the straight and narrow. And since this is our first report, we don't have any "major changes" to report—like facility openings or closings. This sustainability report covers our first year of operations, from July 2005 to July 2006. We're committed to annual sustainability reporting, and plan to expand the breadth and depth of our disclosure as time goes by. As a micro-enterprise, we face obvious reporting challenges—most notably a VERY tight budget. Additionally, we're aware of the materiality issues associated with a new small business's sustainability report. In general, we have reported with the information at hand, and have tried to identify areas where we need to improve our metrics. We'll openly admit that for many of the indicators to follow, we've had to use estimates, and one of our main goals for 2007 is to implement a relevant quantitative tracking system. 
In the meantime, we’ll explain our measurement (or “best guess”) techniques throughout this report. To get an outsider perspective, we’ve asked a number of external stakeholders to review drafts of the report, to highlight opportunities for improvement and give constructive criticism when appropriate. A partial list of stakeholder reviewers can be found at the end of the report. Since this is our first report, we’ve chosen to use the Global Reporting Initiative’s G3 Sustainability Reporting Guidelines to help structure our economic, environmental, and social disclosures. Although they are currently in draft format, we believe the G3 Guidelines are the best standards out there—and we want to use the best! Find out more about the G3 Guidelines at www.grg3.org. If you have questions or comments about this report, please contact Jennifer K. Woofter at 1–202–470–3248 or firstname.lastname@example.org. Because we are organized as a sole proprietorship complemented by a network of independent consultants, governance structures at Strategic Sustainability Consulting are somewhat more “vertical” than at other organizations. SSC President Jennifer K. Woofter makes the major strategy decisions, with input from colleagues in the sustainable development network. Additionally, a biannual strategic planning process (in January and July), based on the 12 Features of a Sustainable Society (inside front cover), provides oversight for day-to-day decisions and makes sure that the business is focused on the triple bottom line and aligned with our guiding values. Our Guiding Values At Strategic Sustainability Consulting, we believe that corporate social responsibility is not just the “right” thing to do, but also makes good business sense. With the goal of long-term sustainable development in mind, we commit to: **Integrity** – we go beyond mere compliance with the law and look for ways to be more honest, more accountable, and more transparent in everything we do. 
**Positive Social Impact** – we offer products and services that make the world a better place, including pro-bono work to clients who would otherwise be unable to fund CSR initiatives. **Environmental Responsibility** – we choose environmentally-friendly alternatives, encourage e-meetings, and offset our carbon emissions. **Social Responsibility** – we endorse the Universal Declaration of Human Rights and strive to buy only from suppliers who respect ILO Conventions. **Community Service** – we participate in the local community through volunteerism and charitable giving. At present, Strategic Sustainability Consulting does not have a governance committee, although we hope to formalize a Board of Advisors in 2007. In the meantime, consultants in the SSC network are encouraged to take ideas and complaints directly to the top. Stakeholder Engagement During our first year of operations, Strategic Sustainability Consulting did not undertake any formal stakeholder engagement. We did, however, spend a LOT of time networking with local sustainable development organizations, including: **DC Sustainable Business Network** www.dcsbn.org **Washington Area Business Alliance for Sustainability** www.wabas.com **DC Net Impact (Professional Chapter)** http://finance.groups.yahoo.com/group/DC_Net_Impact/ **The William James Foundation** www.williamjamesfoundation.org **The Clean Energy Partnership** www.cleanenergypartnership.org We solicit feedback from stakeholders through occasional surveys, as well as our quarterly e-newsletter, sent to more than 150 colleagues, clients, competitors, and other interested parties. And we regularly post to relevant listservs on a variety of topics, inviting dialogue on topics ranging from sustainability metrics for small business to work/life balance for entrepreneurs. Stakeholder Survey: D.C. 
Churches on the Road to Sustainable Development In May, we sent out a survey to 450 local churches asking how they are incorporating environmental stewardship, community service, and social justice into their day-to-day operations. The results will be published in a special report, Caring for His Creation: D.C. Area Churches on the Road to Sustainability, in late 2006. The report will provide an overview of how local faith communities are integrating issues of sustainable development into their worship practices, community outreach services, and overall theological mission. Trends revealed in the survey will help non-faith organizations learn how to better work with churches. For example, how can the local non-profits best help Christian outreach efforts? What environmental advocacy and social justice groups are best suited to work with local churches? What issues and concerns do both faith and non-faith communities share? Results of this survey will help inform our work with churches, and, we hope, provide a valuable service to our local community as well. What's Missing We've chosen not to disclose executive compensation. Economic Performance While Strategic Sustainability Consulting is a for-profit business, we explicitly pursue a revenue model designed for maximum social and environmental impact. Recognizing that few—if any—consultancies specialize in helping small and medium size organizations achieve social and environmental excellence, we designed a system that keeps our overhead low and our services affordable. To be honest, our main economic goal for the coming year is achieving financial viability as a company. Starting up the company, developing a client base, investing in basic office equipment, and traveling to several conferences has cost money—and we're just now beginning to see a positive cash flow. 
Our projections for 2007 show a modest profit, and with any luck next year we’ll be reporting on how that profit is making a positive social and environmental impact in our community. Even before we make a profit, though, we’re committed to pro-bono work—we think of it as a win-win situation. As a new business, all the work we do (even if it’s for free) helps to build our portfolio, and at the same time allows small businesses and organizations with limited budgets to explore new sustainability management options. In the last year, we’ve donated our time and services to the following organizations: Redeem PLC (UK), Tsunami — Stories of Human Resilience (US), US Responsible Media Forum (US), Wildlife Habitat Canada (Canada), and William James Foundation (US) We’re thrilled to announce that Strategic Sustainability Consulting has been selected as a semi-finalist in the Eileen Fisher “Eileen’s Vision Grant Program 2006”. The program provides grants to women-owned businesses “with a strong vision, a social conscience and a solid business plan.” We won’t know if we’ve advanced to the next round until August, so keep your fingers crossed! For more information on Eileen’s Vision Grant Program, go to www.eileenfisher.com. Working with SMEs Helping small and medium size organizations move towards sustainability is our business—and it presents a unique set of challenges and opportunities. On the bright side, the SME market is largely underserved by sustainability consultancies and so we have a huge percentage of the available market share. On the flip side, many (well, most) smaller organizations have yet to see the real benefits of implementing corporate social responsibility programs. The data is out there—but we face a real challenge in communicating the triple bottom line value to an audience too often focused on staying afloat for another quarter. 
| GRI INDICATOR | 2005-2006 | 2006-2007 | |---------------|-----------|-----------| | EC1 Economic value generated and distributed, including revenues, operating costs, employee compensation, donations and other community investments, retained earnings, and payments to capital providers and to governments | Given our organization as a sole proprietorship, we've decided not to disclose this information—although we are proud to say we're in compliance with all tax requirements and use standard financial accounting methods to track our economic footprint. | We plan to reorganize into an LLC at the end of 2006, which will make financial transparency less of a privacy issue. In particular, we'll be devoting attention to better IT systems to track expenditures and standardize our billing procedures. | | EC2 Financial implications of climate change | Our main financial risk from climate change is increasing energy costs associated with electricity and transportation (auto and airplane). | As small businesses increasingly face the realities of climate change, we see a business opportunity in offering strategies to improve efficiency and offset emissions. | | EC5 Entry level wage compared to local minimum wage for significant locations of operation | We do not have any "entry level" positions. Our services are designed to bill at $50-$150 an hour. | We do not anticipate hiring for any "entry level" positions in 2006-2007. | | EC6 Practices and proportion of spending on locally-based suppliers at significant locations of operation | Our major expense for the reporting period was for graphic design services, for which we used a local company. | Our goal is to purchase locally (within 100 miles) whenever possible—that is, when an economically, socially, and environmentally equivalent product is available. 
| | EC7 Procedures for local hiring, and proportion of senior management in locations of significant operation from the local community | Our consulting network is designed to be national (and sometimes international). We don’t have a policy specifying a preference for local vs. non-local hires. | None anticipated | | EC9 Indirect economic impacts | We donated more than $5,000 in pro-bono services during the reporting period. | We’ll be looking for better ways to track and quantify our indirect economic impacts for 2006-2007, including a formal giving program. | What’s Missing We don’t have a pension plan (EC3), nor did we receive any financial assistance from the government (EC4). Additionally, we didn’t make any infrastructure investments during the reporting period (EC8). We don’t have any economic certifications (although we plan to self-certify as a woman-owned small business (WOSB) with the Small Business Administration in 2007), and we haven’t received any economics-related civil and criminal fines or other penalties during the reporting period (or ever). Environmental Performance For our first year of operations, we’ve focused on the environmental aspects of our products and services—making sure that we deliver the most environmentally sound solutions to our clients via sustainability assessments, supply chain management, stakeholder engagement, and sustainability reporting. But we’ve also tried to be cognizant of the environmental impacts of our own operations. We don’t have designated office space, but instead use our personal living space, shared conference rooms, and the occasional coffee shop to conduct our business—so accounting for our direct environmental impact requires us to be a little creative. In fact, identifying the best ways to track our environmental progress has been one of our key challenges during the past year. 
For now we don’t have a formal environmental policy, but we do publicly commit to choosing environmentally-friendly alternatives, encouraging e-meetings, and offsetting our carbon emissions. During this reporting period, we’ve chosen to focus on direct energy consumption (our computer use) and energy related to transportation (auto and air travel). We’re especially proud of our carbon-free status. In July we calculated our carbon emissions from our first year of operations and were so pleased by the ease of the process that we offset 1000% of our carbon emissions—that’s 10 times our actual impact. It’s a small but meaningful step; one we hope can be an inspiration to our colleagues and clients. Next year, we’ll expand our tracking to material use and waste (specifically recycling)—although if this year was any indication, we have minimal material use (less than 1 ream of paper/person) and even less waste. Accounting for biodiversity impacts and water use are longer term goals. And within the next 2–3 years, we hope to formalize our environmental management system with a recognized certification, like ISO 14001. | GRI INDICATOR | 2005-2006 | 2006-2007 | |---------------|-----------|-----------| | **EN3** Direct energy consumption broken down by primary energy source | We estimate that SSC used 104 kWh/person (or roughly $10/person) worth of electricity for powering our computers—our only direct energy consumption. | We anticipate equal electricity consumption for the next reporting period. | | **EN6** Total energy saved due to conservation and efficiency improvements | We have our laptops set to power down after 15 minutes of inactivity, but have not measured this energy savings. | We don’t anticipate measuring our energy savings for the next reporting period. | | **EN7** Initiatives to provide energy-efficient products and services | We include energy efficiency audits as part of our standard “Green Office Audit”. 
| We will partner with energy experts in the coming year to provide more robust energy efficiency options to our clients. | | **EN12** Location and size of land owned, leased, or managed in, or adjacent to, protected areas | We don't have designated office space, but instead use our personal living space and the occasional coffee shop to conduct the majority of our business. Thus, we don't have specific land impacts, but try to encourage work in multi-use space—it keeps our overhead low and our environmental impact at a minimum. | | | **EN17** Greenhouse gas emissions | Based on the carbon calculator from Carbonfund.org that tracks electricity consumption, vehicle miles, and air miles, we emitted 0.4 tons of carbon dioxide in our first year of operations. We offset 1000% of those carbon emissions—ten times our actual impact. | As a growing company, we will probably have a larger carbon footprint in the coming year. We remain committed to minimizing our energy use, and will offset our carbon emissions each year. | | **EN22** Total number and volume of significant spills | None | None anticipated | | **EN24** Initiatives to manage the environmental impacts of products and services and extent of impact reduction | We are always seeking ways to improve the quality of our environmental services—including working with technical experts, collaborating with academics, and networking with relevant organizations. | We'll be formalizing relationships with several new groups in the coming year, which will provide our clients with cutting edge environmental strategy planning. | | **EN27** Percentage of products sold that are reclaimed at the end of the products' useful life by product category | Because we are a service-based company, we don't sell products per se. That said, all of our reports are printed on recycled paper or are delivered electronically. 
| | **EN28** Incidents of, and fines or non-monetary sanctions for, non-compliance with applicable environmental regulations | We incurred no environmental compliance penalties, nor do we anticipate any fines or sanctions in the coming reporting period. | | | **EN29** Significant environmental impacts of transportation used for logistical purposes | Our main transportation impacts are related to offsite meetings and conferences. During the current reporting period, we logged 575 miles by car and 450 miles by airplane. We seek out meeting locations close to public transit options whenever e-meetings aren't a feasible option. | With at least one international trip planned in the coming year, we anticipate an increased number of both vehicle and airplane miles for the next reporting period. | | **EN30** Total environmental protection expenditures by type | None | None anticipated | **What's Missing** For this reporting period, we haven't tracked water use (EN9-EN11, EN21), indirect/renewable energy use (EN4-EN5, EN8), biodiversity impact (EN13-EN16, EN25), or non-carbon emissions (EN18-EN19, EN23). Additionally, we haven't tracked our materials use (EN1-EN2) or waste (EN20, EN24), although we estimate that less than 50 lbs of waste/person was generated (mainly paper supplies) during the reporting period. Social Performance Social Performance: Labor Practices and Decent Work Starting up a business is tough work, and there have been many times when the goal of work/life balance has seemed like a pipe dream. With a year under our belt, we're now seeing the light at the end of the tunnel, and are well on our way to being "one of the best places to work" for sustainability consultants. In fact, we've been named one of the Best Workplaces for Commuters by the EPA for our telework policies—a practice we believe is essential to our success. We admit it—we do some of our best "big thinking" in slippers. 
As we mentioned in the introduction, Strategic Sustainability Consulting is structured as a sole proprietorship, with only one direct employee (President Jennifer K. Woofter). To supplement our staff, we have more than two dozen consultants spread over four continents—each specializing in a different aspect of sustainable development and/or corporate social responsibility. These consultants are hired on a project-by-project basis, and while they do not receive employee benefits we do strive to provide them with enriching, well-paid projects that add to the ultimate goal of sustainable development. Because of this structure, our most obvious labor-related risk is Strategic Sustainability Consulting’s dependence on its founder and president, Jennifer K. Woofter. In her current role, Ms. Woofter draws upon 7+ years of experience in the fields of corporate social responsibility, ethical investing, and organizational accountability systems to help clients make the leap between good intentions and long-term sustainable performance. Currently, and for the foreseeable future, the SSC brand will be inextricably linked to Ms. Woofter’s reputation and expertise. Up until now, we’ve worked with consultants on an informal basis. It’s worked well—we haven’t had any labor-related fines or complaints—but we can definitely do more. In the coming year, we’ll formalize the process, compiling a “virtual notebook” of SSC policies and practices—including privacy protection, whistleblower and grievance procedures, public policy involvement, gift restrictions, and fair competition. We endorse the Universal Declaration of Human Rights and strive to buy only from suppliers who respect ILO Conventions. As we look to the future and an expanding consultant network, we plan to create a more structured labor policy and program to ensure that we respect diversity, fair labor standards, health and safety regulations, and human rights—so stay tuned! 
LABOR PRACTICES & DECENT WORK PERFORMANCE INDICATORS What’s Missing Since SSC has only one direct employee, we haven’t reported on things like gender breakdown or labor-relations. And as a service-based business operating wherever there’s a power outlet for our laptops, we don’t have much of an office health and safety program, nor do we have a formal training/review process. Finally, no one working with SSC is represented by a labor union, we don’t have a benefits program, and we don’t have a work/life policy—although we do occasionally take days off to watch important television events like the World Cup. In short, we don’t report on GRI labor indicators LA1-LA15. Social Performance: Human Rights We take human rights seriously—and we promote the Universal Declaration of Human Rights and the ILO Conventions on Non-Discrimination, Freedom of Association, Child Labor, Forced Labor, and Compulsory Labor. Within our organization, that means choosing Fair Trade products whenever possible and avoiding suppliers with a record of human rights abuses. And we emphasize human rights in our services—especially in our Supply Chain services. In the coming year, our main goals related to human rights include compiling best practices for our consultant network and expanding our screening to 100% of our suppliers. | GRI INDICATOR | 2005-2006 | 2006-2007 | |-------------------------------------------------------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------------| | HR1 Percentage of significant investment agreements that include human rights clauses or that underwent human rights screening | We did not make any significant investment agreements during the reporting period. 
| None anticipated | | HR2 Percentage of major suppliers and contractors that underwent screening on human rights | We screened our major suppliers (office supplies, computer hardware, and graphic design) for human rights issues—all met our requirements. | For the coming year, we will screen 100% of our suppliers on issues of human rights (as well as environment, workplace, community, product, and ethics issues). | | HR3 Type of employee training on policies and procedures concerning aspects of human rights relevant to operations, including number of employees trained | We make sure that our consultants are familiar with relevant human rights and labor agreements (such as the UDHR). | We plan to put together a training manual with relevant materials to standardize the training process. | | HR4 Incidents of discrimination | None—and we make a specific effort to work with diverse clients, consultants, and networks. | None anticipated | | HR5 Incidents of violations of freedom of association and collective bargaining | None | None anticipated | | HR6 Incidents of child labor | None | None anticipated | | HR7 Incidents of forced or compulsory labor | None | None anticipated | | HR8 Procedures for complaints and grievances filed by customers, employees, and communities concerning human rights, including provisions for non-retaliation | We have an open door policy—if consultants, clients, and other stakeholders have concerns, they are encouraged to go straight to the top and complain to the president. | As part of the planned training manual, we will include a provision on whistle-blower protection and grievance procedures. 
| | HR9 Percentage of security personnel trained in organization’s policies or procedures regarding human rights | None | None anticipated | | HR10 Incidents involving rights of indigenous people | None | None anticipated | What's Missing We don't have any human rights certifications, and haven't won any human rights awards, but neither have we incurred any human rights fines or sanctions. Social Performance: Society Performance At Strategic Sustainability Consulting, we go beyond mere compliance with the law and look for ways to be more honest, more accountable, and more transparent in everything we do—this sustainability report is a prime example. In the coming year, as we begin to formalize our consultant network, we’ll adopt specific policies on gifts, lobbying and public policy, and competitive business practices. Until then, we implement corporate governance guidelines on a project-by-project basis. | GRI INDICATOR | 2005-2006 | 2006-2007 | |---------------|-----------|-----------| | **SO1** Programs and practices for assessing and managing the impacts of operations on communities, including entering, operating and exiting | During this reporting period and for the next 12 months, we’re committed to focusing on the local community—working with local businesses, supporting local organizations, partnering with local universities, and contributing to local foundations that promote socially responsible business. | | | **SO2** Extent of training and risk analysis to prevent corruption | None | As small businesses increasingly face the realities of climate change, we see a business opportunity in offering strategies to improve efficiency and offset emissions. | | **SO3** Actions taken in response to instances of corruption | None | In the coming year, we’ll be reorganizing into an LLC, which will increase our financial accountability and allow us to be more transparent across our triple bottom line. 
| | **SO4** Participation in public policy development and lobbying | We’ve recently become involved with the Clean Energy Partnership (www.cleanenergypartnership.org), a nonpartisan, not-for-profit business group that lobbies for more sustainable energy policies. And in the coming year, we’re committed to becoming more active in public policy development in support of sustainable development. | | | **SO5** Total value of contributions to political parties or related institutions broken down by country | SSC does not make political contributions, nor will we do so in the future. Individual consultants, however, are encouraged to be active in the political process. | | | **SO6** Instances of legal actions for anti-competitive behavior, anti-trust, and monopoly practices and their outcomes | None | None anticipated | **What’s Missing** We don’t have any community/society certifications, and we haven’t received any awards/fines related to community involvement or associated issues. Social Performance: Product Responsibility As a consultancy focusing on sustainable development management systems, we don’t have traditional issues of “product responsibility”—like worrying about product recalls. Instead, our responsibility lies in helping our clients improve their product impacts. We aim to “offer products and services that make the world a better place” through organizational systems that minimize and mitigate social and environmental impacts. And while we don’t have a formal product quality policy or program, we think our performance speaks for itself. Just ask some of our clients! Looking to the future, we hope to add more technical aspects to our services—like pairing up with energy audit and waste management experts. Right now, we’re in contact with several groups that provide these services, and we’re moving towards formalizing those relationships in the next year. 
| GRI INDICATOR | 2005-2006 | 2006-2007 | |---------------|-----------|-----------| | PR1 Procedures for improving health and safety across the life cycle of products and services | Our Sustainability Assessment services include an examination of the organization’s health and safety policies, programs, and performance. | We don’t anticipate adding any specific health and safety services in the coming year. | | PR2 Number and type of instances of non-compliance with regulations concerning health and safety effects of products and services | None | None anticipated | | PR3 Procedures for product and service information and labeling | Although we don’t have specific procedures for labeling, our policy is to be as transparent and accountable as possible. We encourage stakeholders to contact us with questions or concerns about information we disclose. | None anticipated | | PR4 Number and type of instances of non-compliance with regulations concerning product and service information and labeling | None | Our goal is to purchase locally (within 100 miles) whenever possible—that is, when an economically, socially, and environmentally equivalent product is available. | | PR5 Procedures related to customer satisfaction, including results of surveys measuring customer satisfaction | Because we work with clients on a project-by-project basis, we get very specific, immediate feedback. | We hope to track longer-term customer satisfaction over the coming year. 
| | PR6 Procedures and programs for adherence to laws, standards, and voluntary codes related to marketing communications including advertising, promotion and sponsorship | None | None anticipated | | PR7 Number and type of instances of non-compliance with regulations concerning marketing communications including advertising, promotion and sponsorship | None | None anticipated | | PR8 Percentage of customer data covered by the data protection procedures | Although we don’t have any formal procedures for protecting client data, all information is considered confidential unless express permission is given to share details with the public. Additionally, all SSC information is backed up on an external hard drive on a weekly basis. | As part of formalizing our consultant network, our partners will all sign confidentiality agreements to ensure that client data is protected. | | PR9 Number of substantiated complaints regarding breaches of customer privacy | None | None anticipated | What’s Missing We haven’t won any product or service awards, but neither have we incurred any product fines or sanctions. As a charitable environmental conservation organization, Wildlife Habitat Canada is very involved in granting to smaller grassroots conservation organizations and conducting conservation projects of our own. These projects are seen and felt by the Canadian conservation community and people outside the organization—outwardly, we do good things for the environment and Canadians. However, when it comes to our internal office operations, we rarely devote the time, resources or energy required to critically analyse our corporate actions. We know that the way we operate on a day-to-day basis speaks to the core values of the organization and gives us credibility for “walking the talk”, yet we tend to get wrapped up in projects and ignore this very critical aspect of any sustainable organization. Enter Strategic Sustainability Consulting. 
All we had to do was provide a list of suppliers we wanted to audit for their environmental and social performance and SSC did the rest—providing us with both comprehensive and summary reports of our suppliers’ performance and suggesting helpful actions we could take moving forward. With SSC’s results we could easily evaluate our suppliers’ performance against our own values and work towards creating a sustainable procurement policy for the organization. The unanticipated benefit of the SSC audit was the momentum it created in the office in terms of engaging the staff in an active dialogue around how we as colleagues, individuals, parents, friends and neighbours could change our actions to become more sustainable! AMY SEABROOKE Wildlife Habitat Canada Stakeholder Feedback Strategic Sustainability Consulting would like to thank the following people (and several others who chose to remain anonymous), who gave us excellent feedback in preparing this report. Ronan Chester, The Healthiest Home and Building Supplies Arvin Ganesan, Pew Center on Global Climate Change Dave Nelson, Independent Consultant Carmen Turner, Teck Cominco Limited Congratulations to Strategic Sustainability Consulting for publishing their first Sustainability Report while demonstrating their commitment to strong leadership in sustainability. Adopting a brand new set of guidelines (G3 Guidelines) with the GRI is an impressive accomplishment as SSC is likely one of the first organizations to release a report based on the new standards. Although following the G3 clearly requires more disclosure on a formal management approach to sustainability, SSC has been able to produce an SR that is still interesting to read! This report not only provides valuable insight to SSC stakeholders, it is also a benchmark for other companies as they start to comply with the new GRI guidelines. CARMEN TURNER Teck Cominco Ltd. 
Sustainability & Corporate Affairs ## 2007 GOALS AT A GLANCE ### Top-Line Business Goals - Restructure as a limited liability corporation (LLC). - Increase our client base, and thus our operating budget. - Establish a Board of Advisors. ### Economic Goals - Standardize our billing procedures. - Implement system to better track indirect economic impacts. - Increase disclosure of key economic performance data. ### Environmental Goals - Solidify partnerships with environmental experts to expand client offerings. - Expand our environmental tracking system to cover materials and waste. ### Social Goals - Formalize our sustainability consulting network to ensure a just and competent workforce. - Compile a “virtual notebook” of SSC policies and practices, including privacy protection, whistleblower and grievance procedures, public policy involvement, gift restrictions, and fair competition. - Expand our supplier screening to ensure 100% of our vendors meet our labor and human rights criteria. - Develop a system to track client satisfaction. 4938 Hampden Lane, Suite 221 Bethesda, MD 20814 (202) 470.3248 email@example.com WWW.SUSTAINABILITYCONSULTING.COM This report is a production of EcoVision Partners, a collaboration of Strategic Sustainability Consulting, Studio 22, and Ecoprint. For more information, go to www.ecovisionpartners.com. Printed on 100% Post-consumer Recycled, Process Chlorine Free paper using 100% Wind Energy in a Carbon Neutral process.
On dispersion relations and the statistical mechanics of Hawking radiation Roberto Casadio* Dipartimento di Fisica, Università di Bologna and Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Italy November 16, 2018

Abstract We analyze the interplay between dispersion relations for the spectrum of Hawking quanta and the statistical mechanics of such radiation. We first find the general relation between the occupation number density and the energy spectrum of Hawking quanta and then study several cases in detail. We show that both the canonical and the microcanonical picture of the evaporation lead to the same linear dispersion relation for relatively large black holes. We also compute the occupation number obtained by instead assuming that the spectrum levels out (and eventually falls to zero) for very large momenta and show that the luminosity of black holes is not appreciably affected by the modified statistics.

1 Introduction

Gravitational collapse as described by general relativity can lead to the formation of space-times with a peculiar causal structure and profound consequences for the (quantum) matter propagating on them. The most striking example is probably that, once the (apparent) horizon has formed, Hawking radiation [1] generically sets in [2] and the mass of the source should then decrease by this quantum mechanical process. The main problem that remains with such a semiclassical picture is the determination of the backreacted metric and the corresponding time evolution of the source. A different (but possibly related) problem is the role played by very high (trans-Planckian) frequencies. 
In fact, by tracking back in space a photon of frequency $\omega$ as measured by a distant observer, one immediately finds that its frequency is blue-shifted, in the optical approximation and neglecting the backreaction, according to the formula $$\omega^* \sim \left(1 - \frac{R_H}{r_e}\right)^{-1/2} \omega , \hspace{1cm} (1.1)$$ where $r_e$ is the radial coordinate at the point of emission. It is clear that $\omega^*$ is unbounded from above if $r_e$ approaches the horizon radius $R_H$, and one is led to conclude that, in order to have $\omega$ finite, $\omega^*$ will soon exceed the Planck mass if $r_e \sim R_H$, as is expected for Hawking quanta. This is a very strong conclusion, since to study such energetic states one would need a full-fledged theory of quantum gravity. More recently it was shown that, contrary to the above argument, trans-Planckian frequencies do not play a significant role in the Hawking process and the evaporation looks indeed insensitive to the presence of a UV (short distance) cut-off (for a review and list of references, see [3]). This opens up the possibility that the spectrum of emitted quanta is not linear and that a non-trivial dispersion relation avoids the production of trans-Planckian modes (see, e.g. Refs. [4, 5, 6] for interesting proposals). About the origin of the new dispersion relation little is known. One might argue that the blue-shift in Eq. (1.1) must be corrected for the true (backreacted) metric in the vicinity of the (apparent) horizon. In fact, since the Hawking quanta are produced inside the potential barrier that surrounds the horizon, a fraction of them gets trapped and forms a (thermal) bath which backreacts on the metric. Moreover, the quanta which escape through the barrier do not propagate in vacuum since they must cross such a bath [7]. The correct dispersion relation must then account for both aspects and, as such, reflects our inability to solve the main problem with black hole evaporation. 
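To make Eq. (1.1) concrete, here is a small numerical sketch (ours; the mass and frequency values are illustrative, not taken from the text) showing how quickly $\omega^*$ crosses the Planck scale as $r_e \to R_H$:

```python
import math

def blueshift(omega, eps):
    """Eq. (1.1) with the emission point written as r_e = R_H * (1 + eps),
    so that 1 - R_H/r_e = eps/(1 + eps) (avoids cancellation for tiny eps)."""
    return omega / math.sqrt(eps / (1.0 + eps))

# Illustrative numbers (ours), in Planck units: a solar-mass black hole has
# M ~ 1e38, so its Hawking quanta have omega ~ 1/(8*pi*M) ~ 1e-39 at infinity.
omega = 1e-39
for eps in (1e-2, 1e-40, 1e-80):
    print(f"eps = {eps:.0e}  ->  omega* = {blueshift(omega, eps):.1e}")
# By eps ~ 1e-80 the local frequency omega* already exceeds the Planck mass (= 1).
```

The point is only the scaling $\omega^* \sim \omega/\sqrt{\epsilon}$: for any finite $\omega$ at infinity, a sufficiently small $\epsilon$ pushes $\omega^*$ past the Planck scale.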
In the present paper we shall explore the connection between the dispersion relation and the statistical mechanics of Hawking quanta. We shall first work out general expressions in Section 2, which we then apply to both the canonical picture and the better-grounded microcanonical picture in Section 3. The starting point of the latter approach is the idea that black holes are (excitations of) extended objects ($p$-branes), a gas of which satisfies the bootstrap condition [8, 9, 10]. This yields a picture in which a black hole and the particles it emits are of the same nature, and the statistical mechanics of the radiation then follows straightforwardly from the area law of black hole degeneracy [11]. One obtains an improved law of black hole decay which is consistent with unitarity (energy conservation) and no information loss paradox is expected. In fact, black holes approximately decay exponentially in this picture, although departures from the canonical behavior occur only around (or below) the Planck mass [10]. Of course, the statistical mechanical approach is global and does not allow us to fully determine the local behaviour of the fields (although some explicit connection with the dynamics of the local geometry and the backreaction can be drawn [12]). In particular, it yields the occupation number density of Hawking quanta, but one then needs an extra hypothesis to determine the wave modes that lead to such a density. In principle, the new wave modes should describe the propagation in the black hole metric with the backreaction included. A hypothesis of this sort was put forward in Ref. [9] and we shall show that only quantitatively negligible corrections to the linear dispersion relation predicted by the canonical picture are required by the microcanonical treatment of the Hawking radiation, except that there is a natural cut-off at $\omega = M$, where $M$ is the black hole mass. 
This is practically ineffective for the problem of trans-Planckian frequencies since $M > 1$ (in units of the Planck mass, with $c = \hbar = G = 1$) for a (classical) black hole in four dimensions. In the last part of the paper, Section 4, we shall reverse our line of reasoning and assume a dispersion relation in order to determine the corresponding occupation number and compare it with the canonical and microcanonical quantities. In particular, we shall choose a spectrum of the type proposed in Ref. [6] and show that it does not produce appreciable modifications to the luminosity of large black holes (in agreement with the general framework of Ref. [3]). In Section 5, we conclude by mentioning that this result might be significantly modified by the existence of extra dimensions [13], as the (microcanonical) luminosity was shown to depend strongly on the dimensionality of space-time [14].

2 Occupation number density and wave modes

An easy and instructive way of obtaining the standard (canonical) occupation number density of Hawking quanta is the following [15]. Consider a spherically symmetric four-dimensional metric in the Painlevé-Gullstrand form \[ ds^2 = -c^2(r, t) \, dt^2 + [dr - v(r, t) \, dt]^2 + r^2 \, d\Omega^2 , \] (2.1) where \(d\Omega^2\) is the line element of a unit two-sphere. The metric admits an (apparent) horizon if \(r = R_H\) exists such that \(v(R_H) \equiv v_H = -c_H \equiv -c(R_H)\). The surface gravity is then given by \[ \kappa = \frac{g_H}{c_H} , \] (2.2) where \[ g_H = \left. \frac{1}{2} \frac{d}{dr} \left( c^2 - v^2 \right) \right|_{r=R_H} . 
\] (2.3) The wave modes \[ \phi(r, t) = A(r, t) \exp[\varphi(r, t)] = A(r, t) \exp \left[ i \omega t - i \int^{r} k(r') \, dr' \right] , \] (2.4) solve the d’Alembertian equation in the eikonal approximation, \[ \partial_\mu \varphi \partial^\mu \varphi + i \epsilon = 0 , \] (2.5) provided the wave-number is given by \[ k = \frac{\omega}{\sigma (1 + i \epsilon) c + v} , \] (2.6) where \(\sigma = +1 (-1)\) for outgoing (ingoing) modes. Near the horizon (\(r \sim R_H\)), ingoing modes \((\sigma = -1)\) have wave-number \[ k_{\text{in}} \approx -\frac{\omega}{2 c_H} , \] (2.7) and are defined for \(r - R_H\) both positive and negative. Instead, purely outgoing modes \((\sigma = +1)\) exist only outside the horizon \((r > R_H)\) with \[ k_{\text{out}} \approx \frac{\omega}{\kappa (r - R_H)} . \] (2.8) Additionally, in this set of coordinates one can consider “straddling” modes that are defined for all values of \(r > 0\) and are swept “downstream” inside the horizon. For such modes one has (again for \(r \sim R_H\)) \[ \phi_{\text{straddle}}(r, t) \approx \exp \left\{ i \omega t - i \frac{\omega}{\kappa} \ln |r - R_H| \right\} \left[ \Theta(r - R_H) + \exp \left\{ +\frac{\pi \omega}{\kappa} \right\} \Theta(R_H - r) \right] , \] (2.9) where the Boltzmann-like factor inside the square brackets emerges from the usual analyticity argument which relates ingoing and outgoing amplitudes [16] and corresponds to the inverse Hawking temperature \[ \beta_H = \frac{2\pi}{\kappa} . \] (2.10) In fact, the physical vacuum is defined with respect to \( \phi_{\text{straddle}} \), since freely falling observers should not see any peculiarities as they cross the horizon. The Bogolubov coefficients of the transformation from the basis \( \{ \phi_{\text{straddle}} \} \) to the basis \( \{ \phi_{\text{in}}, \phi_{\text{out}} \} \) are then simply given by the (normalized) amplitude of the ingoing (\( N_{\text{in}} \)) and outgoing (\( N_{\text{out}} \)) parts of the “straddling” modes. From Eq. 
(2.9) one finds \[ |N_{\text{in}}|^2 = e^{\beta_H \omega} |N_{\text{out}}|^2 . \] (2.11) The (Wronskian) normalization condition \[ |N_{\text{in}}|^2 - |N_{\text{out}}|^2 = 1 , \] (2.12) then yields the thermal occupation number density \[ n_\beta = |N_{\text{out}}|^2 = \frac{1}{e^{\beta_H \omega} - 1} , \] (2.13) for the outgoing Hawking quanta. To summarize, one has obtained the occupation number density as a consequence of the near horizon geometry. By reversing the above argument, one could in principle assume a specific function for the occupation number density and then reconstruct the related possible metrics. If the exact \( n \) were known, one would obtain some insight for the metric which takes the backreaction properly into account. It is therefore useful to rewrite some of the above expressions in terms of a (thus far) unspecified function \( n(\omega) \). In particular, by replacing \( n_\beta \) with \( n \) one obtains new Bogolubov coefficients such that \[ |N_{\text{in}}|^2 = e^{\ln[1 + n^{-1}(\omega)]} |N_{\text{out}}|^2 , \] (2.14) and the backreacted “straddling” modes are determined as \[ \phi_{\text{straddle}}(r,t) \approx \exp \left\{ i \omega t - \frac{i}{2\pi} \ln \left[ 1 + n^{-1}(\omega) \right] \ln |r - R_H| \right\} \\ \times \left[ \Theta(r - R_H) + \sqrt{1 + n^{-1}(\omega)} \Theta(R_H - r) \right] , \] (2.15) again in the vicinity of the (apparent) horizon (\( r \sim R_H \)).

### 3 Canonical and microcanonical dispersion relations

For the following, it is useful to introduce the dimensionless \( \tilde{k} \equiv (r - R_H) k_{\text{out}} \). From Eq. (2.15) and any number density \( n \) one finds that, for \( r > R_H \), \[ \tilde{k} \approx \ln \left[ 1 + n^{-1}(\omega) \right] . \] (3.1) This relation is in general difficult to invert, depending on the form of $n(\omega)$. For $n = n_\beta$, Eq. 
(3.1) reduces to $\tilde{k} \approx \beta_H \omega$ and, for fixed values of $r > R_H$, one finds $$\frac{d\omega}{dk} \approx \frac{1}{\beta_H} = \frac{1}{8 \pi M}.$$ \hspace{1cm} (3.2) We now recall that the occupation number density in the microcanonical ensemble is certainly a better approximation to the (unknown) exact expression [8]. It can be obtained directly from the area law without solving for the wave equation and is given by [8, 9] $$n_M = \begin{cases} C(\omega) \sum_{l=1}^{[M/\omega]} \frac{\exp[4\pi(M-l\omega)^2]}{\exp(4\pi M^2)} & \omega < M \\ 0 & \omega > M, \end{cases} \hspace{1cm} (3.3)$$ where $M = 1/(4\kappa)$ is the (instantaneous) black hole mass and the function $C$ is an (unknown) factor which might account for high energy corrections coming, e.g., from string theory. In the following we shall set $C \sim 1$ unless otherwise specified. We also note that there is a natural cut-off at $\omega = M$ ($> 1$). For a comparison between $n_M$ and $n_\beta$ see Fig. 1. For $n = n_M$, $\tilde{k}$ is a rather complicated function of $\omega$. However, one can make some approximations by considering that we are particularly interested in the high frequency regime, i.e., $\omega \sim 1$. For $M \gg 1$, it is useful to consider $M$ as a large integer and, for $1 - \epsilon < \omega < 1$ (with $0 < \epsilon \ll 1$), one can easily compute $$\frac{d\omega}{dk} \approx -\frac{n_M(1+n_M)}{dn_M/d\omega}. \hspace{1cm} (3.4)$$ In this range \[ n_M = e^{-4 \pi M^2} \sum_{l=1}^{M} e^{4 \pi (M-l\omega)^2} \simeq e^{-8 \pi M \omega} \simeq n_\beta , \] and \[ \frac{dn_M}{d\omega} = e^{-4 \pi M^2} \sum_{l=1}^{M} 8 \pi l \, (l\omega - M) \, e^{4 \pi (M-l\omega)^2} \simeq -8 \pi M \, e^{-8 \pi M \omega} \simeq \frac{dn_\beta}{d\omega} , \] from which \[ \left. \frac{d\omega}{dk} \right|_{\omega \sim 1} \approx \frac{1}{8 \pi M} , \] in agreement with Eq. (3.2). 
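These estimates are easy to check numerically. The following sketch (ours; it sets $C = 1$ and sums Eq. (3.3) in log space, since the individual Boltzmann-like factors under- and overflow ordinary floats) compares the slope $d\tilde{k}/d\omega$ obtained from $n_M$ through Eq. (3.1) with the canonical value $\beta_H = 8\pi M$:

```python
import math

def log_n_M(omega, M):
    """ln n_M from Eq. (3.3) with C = 1, summed in log space:
    each term has exponent 4*pi*[(M - l*omega)^2 - M^2] = 4*pi*l*omega*(l*omega - 2M)."""
    terms = [4.0 * math.pi * l * omega * (l * omega - 2.0 * M)
             for l in range(1, int(M / omega) + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def k_tilde(omega, M):
    """Eq. (3.1): k ~ ln(1 + 1/n_M); here n_M is astronomically small,
    so ln(1 + 1/n) is indistinguishable from -ln n."""
    return -log_n_M(omega, M)

M = 10.0
beta_H = 8.0 * math.pi * M                    # canonical inverse temperature
w1, w2 = 0.95, 1.0                            # high frequency regime, omega ~ 1
slope = (k_tilde(w2, M) - k_tilde(w1, M)) / (w2 - w1)
print(slope / beta_H)   # within ~10% of 1 for M = 10, approaching 1 as M grows
```

The ratio reflects the subleading $4\pi\omega^2$ correction in the exponent of the $l = 1$ term, which becomes irrelevant for $M \gg 1$.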
The above turns out to be a rather good estimate for \(M \sim 10\) and greater, as one can check numerically (see Table 1). For small values of \(M (\sim 1)\), one must properly take into account the finite sum appearing in Eq. (3.3). The result of a numerical analysis is shown in Fig. 2 for \(M = 1\) and in Fig. 3 for \(M = 10\). As was expected from the figures given in Table 1, the dispersion relation for \(M = 10\) is visibly linear and therefore does not differ from the canonical picture. For \(M = 1\) the curve departs from linearity and turns upward for \(\omega\) approaching the Planck energy.

Figure 2: Microcanonical dispersion relation for $M = 1$.

Figure 3: Microcanonical dispersion relation for $M = 10$.

### 4 Occupation number from dispersion relations

Upon solving Eq. (3.1) for \(n(\omega)\) one obtains \[ n = \frac{1}{e^{\tilde{k}} - 1} , \] (4.1) which is uniquely defined only for intervals of \(\omega\) where the function \(\tilde{k}(\omega)\) is single-valued. This seems to be true for the two cases inspected in the previous Section, and would also hold for a spectrum which goes asymptotically constant for large \(\tilde{k}\) [4]. However it does not apply to the Epstein functions suggested in Ref. [6]. The latter can be considered as an extension to all values of \(\tilde{k}\) of the spectrum studied in Ref. [5], \[ \omega^2 = \frac{\tilde{k}^2}{k_0^2} \left( 1 - \frac{\tilde{k}^2}{\Lambda^2} \right) , \] (4.2) which is defined only for $\tilde{k} < \Lambda$ and is presumably meaningful only as the next-to-leading order expansion at small $\tilde{k}$ of the correct dispersion relation. We shall here consider a particular case of the family of functions studied in Ref. [6], namely $$\omega^2 = \frac{\tilde{k}^2}{k_0^2} \left[ \frac{\epsilon}{1 + e^{\tilde{k}/k_C}} + \frac{(4 - 2 \epsilon) e^{\tilde{k}/k_C}}{\left(1 + e^{\tilde{k}/k_C}\right)^2} \right] ,$$ (4.3) where $k_C$ determines the location of the maximum of $\omega$. 
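For the $\epsilon = 0$ member of the family (4.3), the square bracket reduces to $\mathrm{sech}^2[\tilde{k}/(2 k_C)]$, so $\omega = (\tilde{k}/k_0)\,\mathrm{sech}[\tilde{k}/(2 k_C)]$ and its maximum sits where $x \tanh x = 1$, with $x = \tilde{k}/(2 k_C)$. A short root-finding sketch (ours) locating it:

```python
import math

def f(x):
    """Stationarity condition for omega(k) = (k/k0) * sech(k/(2*k_C)):
    d(omega)/dk = 0  <=>  x * tanh(x) = 1, with x = k/(2*k_C)."""
    return x * math.tanh(x) - 1.0

# simple bisection on [1, 2], where f changes sign
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
print(x_star)   # ~1.1997, hence k_m = 2*k_C*x_star ~ 2.4*k_C ~ (5/2)*k_C
```

The root $x_\star \simeq 1.2$ gives the maximum at $k_m = 2 k_C x_\star \simeq (5/2) k_C$, the value used in the calibration of $k_C$ below.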
We also demand that $n(\omega) \simeq n_\beta(\omega)$ for $\omega \ll 1$. This uniquely determines the coefficients $k_0 = \beta_H$ and $\epsilon = 0$ from equating the two lowest order coefficients in the Taylor expansion of Eq. (4.3) near $\tilde{k} = 0$ to the right hand side of Eq. (3.2). The constant $k_C$ can be fixed by requiring that the maximum of $\omega$ is close to 1. Since the derivative of $\omega(\tilde{k})$ with respect to $\tilde{k}$ vanishes for $\tilde{k} = k_m \simeq (5/2) k_C$, from $\omega(k_m) = 1$ one obtains $k_C \simeq (3/4) \beta_H$ and $k_m \simeq (15/8) \beta_H$. Finally, $$\omega \simeq \frac{\tilde{k}}{\beta_H} \text{sech} \left( \frac{2 \tilde{k}}{3 \beta_H} \right) .$$ (4.4) In Fig. 4 we plot a comparison between the dispersion relation (4.4) and the canonical dispersion relation (3.2) for $M = 10$ (we recall that the same dispersion relation follows from the microcanonical picture for such a large mass as shown in Table 1 and Fig. 3). In Fig. 5 we then display a comparison between the corresponding occupation number densities $n$ and $n_\beta$. We note that in the range $0 < \omega < 1$, $n_M \geq n_\beta$ while $n \leq n_\beta$. One might from this infer that the microcanonical number density must be corrected for $\omega \sim 1$ by a suitable $C(\omega)$ in Eq. (3.3). Having determined a novel occupation number density, we can now proceed to estimate the corresponding luminosity for an evaporating black hole, which can be formally written as [1] $$L = A \int_0^\infty \Gamma(\omega) n(\omega) \omega^3 \, d\omega ,$$ (4.5) where $\Gamma \sim 1$ is the grey-body factor and $A = 16 \pi M^2$ the horizon area. We now note that, since $k_m \gg 1$ for a black hole of mass $M > 1$, the number density for momenta $\tilde{k} > k_m$ is highly suppressed by the form of Eq. (4.1). Further, $\omega(\tilde{k})$ vanishes exponentially for $\tilde{k} \gg k_m$. One can therefore neglect the contribution of such modes, use the number density in Eq. 
(4.1) in the range $0 < \tilde{k} < k_m$ and approximate the luminosity as $$L \simeq \frac{\beta_H^2}{4} \int_0^1 n(\omega) \omega^3 \, d\omega$$ $$\simeq \frac{1}{4 \beta_H^2} \int_0^{k_m} \text{sech}^4 \left( \frac{2 \tilde{k}}{3 \beta_H} \right) \left[ 1 - \frac{2 \tilde{k}}{3 \beta_H} \tanh \left( \frac{2 \tilde{k}}{3 \beta_H} \right) \right] \frac{\tilde{k}^3 \, d\tilde{k}}{e^{\tilde{k}} - 1}$$ $$\sim M^{-2} ,$$ (4.6) where $k_m$ is the value of $\tilde{k}$ at which the term in square brackets (proportional to $d\omega/d\tilde{k}$) vanishes and the last line follows from dimensional analysis. Eq. (4.6) is just the standard canonical result [1]. The integral can also be estimated more precisely by changing the integration variable to $x = 2 \tilde{k}/(3 \beta_H)$, $$L = \frac{81 \, \beta_H^2}{64} \int_0^{x_m} \text{sech}^4(x) \left[ 1 - x \tanh(x) \right] \frac{x^3 \, dx}{e^{3 \beta_H x / 2} - 1} ,$$ (4.7) where \( x_m \equiv x(k_m) \simeq 1.2 \), and performing the integration numerically. The result is displayed for comparison with the canonical luminosity in Fig. 6, which shows that Eq. (4.6) is indeed correct to an exceedingly good approximation. Figure 4: Comparison between the dispersion relation (4.4) (solid line) and the canonical dispersion relation (3.2) (dotted line) for $M = 10$. Figure 5: Behavior of the occupation number density $n$ (solid line) corresponding to the dispersion relation (4.4) compared with $n_\beta$ (dotted line) for $M = 10$. We finally note that, since the microcanonical luminosity does not significantly differ from the canonical expression for four-dimensional black holes (necessarily with \( M > 1 \) [10]), the luminosity (4.6) computed in this section also agrees with the microcanonical result. 5 Conclusions and outlook In this paper we have studied how the statistical mechanics of black hole evaporation is affected by the high energy behavior of Hawking quanta in four dimensions.
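As a quick numerical sanity check of the dispersion relation (4.4) and of the $M^{-2}$ scaling in Eq. (4.6), the following sketch (ours, not part of the original analysis; the $\beta_H$ values and variable names are illustrative) locates the maximum of $\omega$ and compares the luminosity for two values of $\beta_H \propto M$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def omega(k, beta_H):
    """Dispersion relation (4.4): omega = (k/beta_H) * sech(2k/(3 beta_H))."""
    return (k / beta_H) / np.cosh(2.0 * k / (3.0 * beta_H))

# d(omega)/dk vanishes where 1 - x*tanh(x) = 0, with x = 2k/(3 beta_H)
x_m = brentq(lambda x: 1.0 - x * np.tanh(x), 0.5, 2.0)  # ~1.2, as quoted in the text
u_max = 1.5 * x_m              # k_m / beta_H; the text quotes ~15/8 = 1.875
omega_max = omega(u_max * 10.0, 10.0)  # maximum of omega (independent of beta_H), close to 1

def luminosity(beta_H):
    """Second line of Eq. (4.6), integrated up to k_m = 1.5 * x_m * beta_H."""
    def integrand(k):
        y = 2.0 * k / (3.0 * beta_H)
        return (1.0 - y * np.tanh(y)) * k**3 / (np.cosh(y)**4 * np.expm1(k))
    val, _ = quad(integrand, 0.0, 1.5 * x_m * beta_H, limit=200)
    return val / (4.0 * beta_H**2)

# With beta_H proportional to M, doubling the mass should roughly quarter L:
ratio = luminosity(20.0) / luminosity(40.0)
print(x_m, u_max, omega_max, ratio)
```

Doubling $\beta_H$ indeed reduces $L$ by roughly a factor of four, consistent with the canonical $M^{-2}$ law quoted above.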
We have found that one can consider large deviations from a linear dispersion relation at near Planckian frequencies without changing the luminosity of a black hole. In particular, the fact that the new dispersion relations advocated in Ref. [6] do not change the luminosity of a black hole with \( M > 1 \) is a direct verification of the general framework described in Ref. [3]. We have, however, not attempted any estimate of how such modifications affect the laws of black hole thermodynamics, nor whether they can indeed be derived from a fundamental theory. Although we have not explicitly considered black holes lighter than the Planck mass in this paper, we suspect one would obtain different results for those cases [10]. Such objects are outside the domain of classical general relativity in four dimensions, since their Compton wavelength would be larger than the horizon radius. However, with more than four available dimensions [13], black holes could exist with \( M_0 < M < 1 \) (where \( M_0 \) is of the order of the fundamental mass scale of gravity). Further, the scale below which microcanonical corrections to the luminosity become significant in that scenario is given by the critical mass $M_c$ (much larger than the Planck mass) above which the black hole starts to behave like a purely four-dimensional object [14]. It then follows that for black holes with $M_0 < M < M_c$ one expects that modifications of the dispersion relation for frequencies $1 < \omega < M_c$ indeed affect the luminosity. We hope to extend our analysis along this line in a future publication. **Acknowledgement** I would like to thank M. Bastero-Gil and L. Mersini for useful discussions and B. Harms for reading the manuscript. **References** [1] S.W. Hawking, Nature **248** (1974) 30; Comm. Math. Phys. **43** (1975) 199. [2] P. Hajicek, Phys. Rev. D **36** (1987) 1065. [3] T. Jacobson, Prog. Theor. Phys. Suppl. **136** (1999) 1. [4] W. Unruh, Phys. Rev. D **51** (1995) 2827. [5] S.
Corley and T. Jacobson, Phys. Rev. D **54** (1996) 1568. [6] M. Bastero-Gil, *What can we learn by probing trans-Planckian physics*, hep-ph/0106133; L. Mersini, *Dark energy from the transplanckian physics*, hep-ph/0106134; L. Mersini, M. Bastero-Gil and P. Kanti, Phys. Rev. D **64** (2001) 043508. [7] R. Parentani, Phys. Rev. D **63** (2001) 041503. [8] B. Harms and Y. Leblanc, Phys. Rev. D **46** (1992) 2334; Phys. Rev. D **47** (1993) 2438; Ann. Phys. **244** (1995) 262; Ann. Phys. **244** (1995) 272; Europhys. Lett. **27** (1994) 557; Ann. Phys. **242** (1995) 265; P.H. Cox, B. Harms and Y. Leblanc, Europhys. Lett. **26** (1994) 321; R. Casadio, B. Harms and Y. Leblanc, Phys. Rev. D **57** (1998) 1309. [9] R. Casadio and B. Harms, Phys. Rev. D **58** (1998) 044014. [10] R. Casadio and B. Harms, Mod. Phys. Lett. **A17** (1999) 1089. [11] J.D. Bekenstein, Lett. Nuovo Cim. **4** (1972) 737; Phys. Rev. D **7** (1973) 2333. [12] R. Casadio, Phys. Lett. **B 511** (2001) 285. [13] N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. **B 429** (1998) 263; Phys. Rev. D **59** (1999) 086004; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. **B 436** (1998) 257; L. Randall and R. Sundrum, Phys. Rev. Lett. **83** (1999) 4690. [14] R. Casadio and B. Harms, Phys. Rev. D **64** (2001) 024016; *Can black holes and naked singularities be detected in accelerators?*, hep-th/0110255. [15] M. Visser, *Essential and inessential features of Hawking radiation*, hep-th/0106111. [16] N.D. Birrell and P.C.W. Davies, *Quantum fields in curved space* (Cambridge University Press, Cambridge, 1982).
Oxidative stress and immune system analysis after cycle ergometer use in critical patients Eduardo Eriko Tenório de França, Luana Carneiro Ribeiro, Gabriela Gomes Lamenha, Isabela Kalline Fidelix Magalhães, Thainá de Gomes Figueiredo, Marthley José Correia Costa, Ubirácê Fernando Elihimas Júnior, Bárbara Luana Feitosa, Maria do Amparo Andrade, Marco Aurélio Valois Correia Júnior, Ferrari Ramos, Célia Maria Machado Barbosa de Castro ¹Universidade Católica de Pernambuco (UNICAP), Fisioterapia, Recife/PE, Brazil. ²Universidade Federal de Pernambuco (UFPE), Fisioterapia, Recife/PE, Brazil. ³Universidade de Pernambuco (UPE), Fisioterapia, Recife/PE, Brazil. ⁴Hospital Agamenon Magalhães (HAM), Fisioterapia, Recife/PE, Brazil. ⁵Hospital Agamenon Magalhães (HAM), UTI Geral, Medicina Intensiva, Recife/PE, Brazil. ⁶Universidade Federal de Pernambuco (UFPE), Microbiologia, LIKA, Recife/PE, Brazil. OBJECTIVE: The passive cycle ergometer aims to prevent hypotrophy and improve muscle strength, with a consequent reduction in hospitalization time in the intensive care unit and functional improvement. However, its effects on oxidative stress and immune system parameters remain unknown. The aim of this study is to analyze the effects of a passive cycle ergometer on the immune system and oxidative stress in critical patients. METHODS: This paper describes a randomized controlled trial in a sample of 19 patients of both genders who were on mechanical ventilation and hospitalized in the intensive care unit of the Hospital Agamenon Magalhães. The patients were divided into two groups: one group underwent passive cycle ergometer exercise at 30 cycles/min on the lower limbs for 20 minutes; the other group did not undergo any therapeutic intervention during the study and served as the control group.
A total of 20 ml of blood was analysed, in which nitric oxide levels and some specific inflammatory cytokines (tumour necrosis factor alpha (TNF-α), interferon gamma (IFN-γ) and interleukins 6 (IL-6) and 10 (IL-10)) were evaluated before and after the study protocol. RESULTS: Regarding the demographic and clinical variables, the groups were homogeneous in the early phases of the study. Nitric oxide analysis revealed a reduction in nitric oxide values in unstimulated cells ($p=0.0021$) and stimulated cells ($p=0.0076$) after passive cycle ergometer use compared to the control group. No differences in the evaluated inflammatory cytokines were observed between the two groups. CONCLUSION: We can conclude that the passive cycle ergometer promoted reduced levels of nitric oxide, showing beneficial effects on oxidative stress reduction. As assessed by inflammatory cytokines, the treatment was not associated with changes in the immune system. However, further research in a larger population is necessary for more conclusive results. KEYWORDS: Cytokines; Oxidative stress; Musculoskeletal abnormalities. França EE, Ribeiro LC, Lamenha GG, Magalhães IK, Figueiredo TG, Costa MJ, et al. Oxidative stress and immune system analysis after cycle ergometer use in critical patients. Clinics. 2017;72(3):143-149. Received for publication on September 13, 2016; First review completed on November 9, 2016; Accepted for publication on December 16, 2016 *Corresponding author. E-mail: email@example.com INTRODUCTION Critical patients who require mechanical ventilation (MV) for an extended time period due to underlying disease and adverse effects of drugs undergo an important functional loss. The long immobilization period causes severe osteomyoarticular system dysfunction, which is increasingly frequent in these patients (1,2). These irregularities create muscle function losses ranging from a daily decline of 1.3% of force to 3 - 10% during an entire week of immobility (3).
Immobilization in bed and increased MV dependence may adversely affect several organs and systems, leading to the following consequences: muscle contractures; functional loss; reduced maximal oxygen uptake (VO$_2$ Max); muscle weakness in the intensive care unit (ICU); deep venous thrombosis; pressure ulcers; pneumonia; atelectasis; bone demineralization; and changes in the emotional state such as anxiety, apathy and depression (4). Muscle weakness, which is common in critical patients, is associated with an inflammatory dysregulation that appears to contribute to myopathy. The mechanism of muscle decay due to immobility has not yet been completely clarified. Two molecular interactions are involved: oxidative stress and selected proinflammatory cytokines. It is believed that the synergy between oxidative stress, inflammatory cytokines and inactivity causes or accelerates muscular atrophy (5,6). Many studies have been developed with the objective of preventing, or at least minimizing, the deleterious effects of ICU-acquired paresis. Amongst the recommended procedures are passive exercises and/or activities, daily sedation suspension, reduced infusion of drugs such as neuromuscular blockers and corticosteroids, and the maintenance of electrolyte homeostasis and nutritional intake (7). Studies of the effects of physical exercise in critical patients for the prevention of atrophy and the improvement of muscle strength, with a consequent reduction in intensive care hospitalization time and functional improvement, have rapidly expanded in recent years. However, changes in oxidative stress and immune system parameters in these patients are not well defined and require further study to identify the mechanism of this approach and to improve our understanding of the effect of isolated passive cycle ergometry use in critical patients.
Therefore, this study analyses the oxidative stress and immune system parameters after passive cycle ergometry use on the lower limbs in critical patients. MATERIALS AND METHODS This was a randomized controlled trial, with a sample consisting of 19 MV patients of both genders hospitalized in the ICU of Hospital Agamenon Magalhães (HAM) who met the inclusion criteria. The study protocol was conducted in the ICU of HAM, but the blood analysis was performed in the Laboratory of Immunology Keizo Asami (LIKA) at the Universidade Federal de Pernambuco. This study was approved by the Ethics and Research Committee (ERC) of the hospital under CAAE number 04563612.8.0000.5197, and all legal guardians of the patients signed a free and informed consent form. The patients who underwent MV presented a good cardiovascular reserve, demonstrated by less than 20% heart rate (HR) variability, a systolic blood pressure (SBP) between 90 and 200 mmHg and a normal electrocardiogram (no evidence of acute myocardial infarction or arrhythmia), as well as a good respiratory reserve, demonstrated by a peripheral oxygen saturation (SpO₂) greater than 90% and an inspired fraction of oxygen (FiO₂) less than 60%, without signs of respiratory distress and with a respiratory rate (RR) less than 25 breaths per minute. Other clinical parameters necessary for inclusion in this study were as follows: stable haemoglobin of > 7 g/dL, stable platelet count of > 20,000 cells/mm³, white blood cell count of 4,300 - 10,800 cells/mm³, body temperature < 38°C and blood glucose levels of 3.5 – 20 mmol/L.
Other parameters included an acceptable patient status regarding appearance, pain, fatigue, shortness of breath and emotional state; a stable conscious state; no other neurological complications; no orthopaedic contraindications; no recent skin graft or flap to the lower limbs or trunk; medical stability without vasoactive drugs and/or with minimal doses; a body weight that could be safely managed; no attachments contraindicating mobilisation; a safe environment; appropriate staffing and expertise; and patient consent. Patients who presented with hemodynamic instability, the inability to walk without assistance before the acute disease in the ICU, age under 21 years, pregnancy, a body mass index (BMI) greater than 35 Kg/m², neuromuscular disease or vascular disease, a history of cerebrovascular accident, non-consolidated fractures or any osteoarticular limitation that precludes cycle ergometry use were excluded from the study. After being recruited to the study, the participants were evaluated through medical records, demographic information, medical history and diagnosis. Data on neuromuscular blockers, sedatives and vasoactive drugs were also collected. When this initial review of all patients was completed, they were submitted to blood collection by central venous access, both before and 1 hour after the study protocol finalization. For each patient, a total of 20 ml of blood was collected with vacuum tubes (Vacutainer®) containing dipotassium EDTA (Juiz de Fora, Minas Gerais, Brazil) to evaluate the oxidative stress and immune system parameters as indicated by the cytokines tumour necrosis factor alpha (TNF-α), interferon gamma (IFN-γ) and interleukins 6 (IL-6) and 10 (IL-10). Oxidative stress was evaluated using monocytes obtained from the peripheral blood. The blood was diluted at a ratio of 1:2 with sterile PBS culture medium at a room temperature of 22 to 25°C (10 ml of blood + 10 ml of PBS).
A total of 10 ml of histopaque (1077-SIGMA) was added to 20 ml of the suspension, and all contents were centrifuged for 30 minutes at 1,600 rpm (25°C). Soon after, the plasma was aspirated, and the layer formed by the peripheral blood mononuclear cells (PBMC) was collected and transferred to another test tube. The same amount of aspirated PBS was added and centrifuged for 15 minutes under the same conditions as before. The supernatant was decanted, and the pellet was resuspended in 1 ml of RPMI 1640 medium containing 3% bovine foetal serum and antibiotics (100 U/ml penicillin and 100 µg/ml streptomycin). Cell counting was performed in a Neubauer chamber by adding a suspension of cells and trypan blue dye at a 1:10 dilution. This dye was used for cell counting and for assessing cell viability. Based on these counts, the suspension was standardized to a concentration of \(1 \times 10^6\) cells for each 1 ml of culture medium. Nitric oxide (NO) production in cultured monocytes treated with *Escherichia coli* lipopolysaccharide (LPS) In each group, the concentration was adjusted to \(1 \times 10^6\) cells in 1 ml of culture medium in each well. The cells were treated with a dose of 10 µg/ml of LPS for 24 hours. NO release was evaluated using the Griess method. First, 50 µl of Griess reagent (1 g of sulphanilamide, Sigma 2251; 0.1 g of naphthylethylenediamine dihydrochloride, Sigma P885; 2.5 ml of phosphoric acid PA; and distilled water qsp 100 ml) was added. Then, the plate was incubated for 10 minutes in the dark. The reading was performed at 540 nm in an ELISA reader (Dynatech MR 5000). The sensitivity threshold of the test was 1.56 µM. Analysis of immune system parameters by quantification of IL-6, IL-10, TNF-α and IFN-γ Serum levels of IL-6, IL-10, TNF-α and IFN-γ were determined using commercial ELISA kits (BioSource®, Nivelles, Belgium) according to the manufacturer’s instructions.
In this technique, a specific monoclonal antibody is adsorbed to the plate. After the addition of the serum sample containing the mediator to be measured, during incubation, the antigen molecules bind to the antibodies adsorbed to the plate. Through washing, any non-fixed materials are eliminated. Next, new enzyme-conjugated antibodies with specificity for an antigenic determinant of the molecule bound to the plate are added, resulting in an antibody–antigen–antibody–enzyme complex (sandwich technique). A second wash is performed for the removal of unlinked antibodies. Then, a substrate is added that has the property of assuming a different coloration when in contact with the enzyme; this colour is proportional to the amount of mediator (antigen) present in the sample. The reading is obtained in a plate reader (Bio-Rad, Tokyo, Japan) at 450 nm and compared to a standard curve obtained with known concentrations of recombinant mediators. **Study protocol** The study population was divided into two groups: the control group, consisting of 10 patients who did not receive any type of therapeutic intervention at the moment when they were submitted to the study protocol, and the intervention group, consisting of 9 patients who were submitted to passive cycle ergometry on their lower limbs with the speed adjusted to 30 cycles per minute for 20 minutes, using a cycle ergometer (Flex Motor with sensor; Cajumoro; Bragança Paulista, São Paulo, Brazil). Figure 1 demonstrates the protocol in the lower limbs. The allocation to one of the two groups in this study was randomly determined using Microsoft Office Excel 2007. All results and demographic characteristics were assessed using GraphPad Prism 4 software and Microsoft Office Excel 2007. The measured variables were presented in tables and figures. The median and the percentile (25-75%) were used to present continuous variables, whereas categorical data were presented using absolute and relative frequencies.
To test the normality assumption of the variables in the study, the Shapiro-Wilk test was used. The comparative analysis between the two groups was performed using the Mann-Whitney test, and the Wilcoxon test was used for comparisons within the same group. Fisher’s exact test was used to evaluate the differences between proportions. The relationship between variables was assessed using the Spearman correlation (non-normal distribution). All findings were considered at a 5% significance level. **RESULTS** During the study period, from December 2013 to February 2016, a total of 465 patients with various diseases were admitted to the general ICU, of whom 439 individuals met the exclusion criteria of the study. Only 26 patients were randomized into the two groups; amongst these, only 19 patients finished the analysis, distributed as follows: control group (n=10) and exercise group (n=9) (Figure 2). Table 1 presents the median and percentile (25 - 75%) of the demographic and clinical variables of each of the two groups: control and cycle ergometer. There were no differences between the two groups regarding age, height, weight, BMI, APACHE II, water balance (WB) in the last 24 hours, RASS sedation scale, MV time, hospitalization time in the ICU, hemoglucotest (HGT), respiratory system compliance (Cst), respiratory system resistance (Rr), HR, SpO₂, SBP, diastolic blood pressure (DBP), temperature (T) and ICU mortality, demonstrating the homogeneity between the groups. Figures 3a and 3b present the results obtained from the NO analysis of stimulated monocytes (positive control) and unstimulated monocytes (negative control) collected before and after the study protocol from both groups. These results revealed a reduction in NO production by stimulated monocytes ($p=0.0021$) and unstimulated monocytes ($p=0.0076$) after passive cycle ergometry use compared to the control group.
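For illustration, the nonparametric comparisons described in the analysis plan above (Shapiro-Wilk screening, Mann-Whitney between groups, Wilcoxon within groups) can be sketched with scipy.stats; the numbers below are hypothetical placeholders generated with the study's group sizes, not the actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical NO readings (arbitrary units) with the study's group sizes;
# these placeholders are NOT the study's data.
control_before = rng.normal(10.0, 1.0, size=10)
control_after  = rng.normal(10.0, 1.0, size=10)
ergo_before    = rng.normal(10.0, 1.0, size=9)
ergo_after     = ergo_before - rng.normal(3.0, 0.5, size=9)  # simulated NO reduction

# Normality screening with the Shapiro-Wilk test, as in the analysis plan
print(stats.shapiro(control_before).pvalue)

# Between-group comparison of the before/after change: Mann-Whitney U test
u = stats.mannwhitneyu(control_after - control_before,
                       ergo_after - ergo_before, alternative="two-sided")

# Within-group before-vs-after comparison: Wilcoxon signed-rank test
w = stats.wilcoxon(ergo_before, ergo_after)

print(u.pvalue, w.pvalue)  # both well below the 5% significance level here
```

With real data, the same calls would be applied to the measured NO and cytokine values, and Fisher's exact test (`stats.fisher_exact`) to the categorical proportions.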
Table 2 shows the median and percentile (25 - 75%) values for the cytokines TNF-α, IFN-γ, IL-6 and IL-10, analysed before and after the study protocol for each group; there were no significant changes in cytokines between the moments before and after for either group. Correlations were made between the cytokines (TNF-α, IL-6, IL-10 and IFN-γ) and the NO concentration in the stimulated cells, positive control (C+) and negative control (C-), without any significant difference at $p > 0.05$. **DISCUSSION** There is no evidence in the literature that describes the effects of a passive cycle ergometer on oxidative stress and immune system parameters in critical patients. However, several other benefits of this intervention have been widely observed, particularly regarding muscle mass loss and improved functionality. During the period that these patients are restricted to bed, muscle fibre transformation to type II occurs, including reduced oxidative capacity, mitochondrial density and blood capillaries. In addition, cardiovascular performance is reduced due to lower systolic ejection volume and increased HR. Venous stasis occurs due to reduced activity of the muscle pump in the limbs, with an increased risk of developing thrombosis. The period of immobility can also contribute to bone demineralization and sodium and body water reduction (8). In the present study, the values described in Table 1 show that no differences were found in the demographic and clinical characteristics of the patients in both groups, which demonstrates the homogeneity between the groups for all evaluated parameters. This homogeneity between the groups is important because it does not expose either group to a greater risk factor for compromise of its clinical condition. Muscle dysfunction is a common problem in critical patients submitted to prolonged MV and hospitalization in the ICU.
The magnitude of muscle weakness in the ICU is extremely variable and presents an intimate association with time spent in bed and exposure to risk factors, such as nutritional condition and MV dependence (4,9). Critical patients are vulnerable to synthesizing oxidative agents and reducing antioxidants. Oxidative stress increases oxidation and has an important role in the pathophysiological process of muscular dysfunction. Reactive oxygen species (ROS) promote lipid peroxidation, leading to the release of toxins and arachidonic acid derivatives that damage cell membranes, inactivating membrane receptor enzymes and changing ionic responses. In myocytes, this process of degradation can be modified by numerous intracellular signals observed in vitro, including the activation of nuclear factor kappa B (NF-κB) (10). Products of cell destruction by ROS produce a positive feedback, generating more ROS. Simultaneously, the interaction of ROS with cytokines and other intercellular molecules is associated with muscle degradation. Figures 3a and 3b show that for both stimulated monocytes (C+) and unstimulated monocytes (C-), there was a reduction in NO values in the passive cycle ergometer group compared to those in the control group. NO values decreased after the intervention compared with before, a phenomenon not observed in the control group. These findings suggest a potential beneficial effect of passive cycle ergometry on oxidative stress reduction in critical patients; it can be considered a moderate-intensity physical activity for this type of patient, providing a positive change in the redox status of cells and tissues from basal levels, decreasing oxidative damage and increasing resistance to oxidative stress (11-13). In fact, regular moderate exercise results in adjustments in antioxidant capacity, which protects cells against the damaging effects of oxidative stress, preventing subsequent cell damage (14,15).
Our results are consistent with the findings of Mercken et al. (16), who assessed different intensities of exercise and their effects on oxidative stress, immediately after and 4 hours after a cycle ergometer test with a 60% workload, in patients with chronic obstructive pulmonary disease (COPD). There was a reduction of the induction of systemic oxidative stress triggered by exercise, particularly after submaximal exercise. Similar to oxidative stress, some selected cytokines also influence muscle degradation and inflammation in critical patients. In a model to explain cachexia, Reid and Li (17) suggested that the interaction between ROS and proinflammatory cytokines such as TNF-α is synergistic, perhaps indicative of a pathological positive feedback cycle that downregulates the repair of damaged muscle tissue. Thus, it is not only the direct suppression of muscle activity that leads to dysfunction in the presence of TNF-α, but a decrease in repair and/or an increase of apoptosis that results in muscle weakening mediated by these inflammatory cytokines. Table 1 – Comparison of demographic and clinical variables between the two groups.
| Demographic and clinical variables | Control (n=10) | Passive cycle ergometer (n=9) | p * value |
|-----------------------------------|---------------|-------------------------------|-----------|
| Age (years) | 56.0 (44.0 – 70.5) | 77.0 (32.5 – 81.0) | 0.35 |
| Height (cm) | 164.5 (157.5 – 172.0) | 160.0 (153.5 – 162.5) | 0.09 |
| Weight (kg) | 67.5 (60.0 – 80.0) | 70.0 (55.0 – 80.0) | 0.84 |
| BMI (Kg/m²) | 25.1 (23.9 – 26.1) | 27.3 (22.1 – 33.1) | 0.40 |
| APACHE II | 23.0 (19.0 – 27.0) | 25.0 (15.5 – 29.5) | 0.90 |
| WB-24 h (ml) | 113.0 (100.0 – 181.0) | 487.0 (38.0 – 341.0) | 0.65 |
| MV T (days) | 4.0 (2.5 – 8.0) | 5.0 (4.0 – 8.0) | 0.11 |
| ICU T (days) | 4.0 (2.5 – 8.0) | 6.0 (5.0 – 8.0) | 0.06 |
| HGT | 130.5 (92.5 – 176.5) | 122.0 (88.0 – 177.5) | 0.06 |
| Cst (ml/cmH₂O) | 32.2 (27.1 – 37.1) | 27.0 (17.6 – 33.9) | 0.15 |
| Rr (cmH₂O/L/s) | 12.0 (10.0 – 18.0) | 10.0 (7.0 – 14.0) | 0.65 |
| HR (bpm) | 83.0 (61.0 – 99.0) | 86.0 (78.5 – 94.5) | 0.60 |
| SpO₂ (%) | 98.5 (94.0 – 99.5) | 98.0 (97.5 – 99.0) | 0.71 |
| SBP (mm Hg) | 137.5 (108.5 – 162.5) | 142.0 (125.0 – 151.0) | 0.71 |
| DBP (mm Hg) | 80.0 (62.0 – 93.0) | 74.0 (66.0 – 88.5) | 0.84 |
| Temperature (°C) | 36.0 (36.0 – 36.5) | 36.5 (36.5 – 37.0) | 0.21 |
| ICU mortality | 6 (60.0) | 7 (77.7) | 0.63 |
| Primary reason for admission | | | |
| Respiratory problem | 4 (40.0) | 4 (44.4) | |
| Cardiac problem | 2 (20.0) | 2 (22.2) | – |
| Sepsis/infection | 1 (10.0) | 2 (22.2) | |
| Other | 3 (30.0) | 1 (11.1) | |
| Comorbid conditions | | | |
| Respiratory | 2 (20.0) | 1 (11.1) | |
| Cardiac | 2 (20.0) | 3 (33.3) | |
| Endocrine | 2 (20.0) | 1 (11.1) | – |
| Urinary | 1 (10.0) | 1 (11.1) | |
| Chronic renal failure | 3 (30.0) | 2 (22.2) | |
| Sepsis/infection | 1 (10.0) | 1 (11.1) | |
Data are the median (25 - 75% percentile) before testing.
* Mann-Whitney test and Fisher’s exact test. Body mass index (BMI); Acute Physiology and Chronic Health Evaluation (APACHE II); water balance in the last 24 hours (WB-24 h); mechanical ventilation time (MV T); intensive care unit time (ICU T); hemoglucotest (HGT); static compliance of the respiratory system (Cst); resistance of the respiratory system (Rr); heart rate (HR); peripheral oxygen saturation (SpO₂); systolic blood pressure (SBP); diastolic blood pressure (DBP); temperature and intensive care unit (ICU) mortality. Figures 3a and 3b - Variation in the nitric oxide (NO) values in stimulated, positive control cells (C+) and unstimulated cells in the two groups studied: control and passive cycle ergometer. * Mann-Whitney Test. Differences between the control group and cycle ergometer group: 3a (*p=0.0021) and 3b (*p=0.0076). Table 2 shows that there was no significant difference between the values of TNF-α, IFN-γ, IL-6 and IL-10 compared between the moments before and after the cycle ergometer intervention. The comparison between the cycle ergometer and control groups demonstrated that the passive cycle ergometer protocol in this study was not enough to promote changes in the immune system 1 hour after passive exercise. The level of exercise intensity and the kinetics of the cytokines may have been responsible for the lack of changes in the serum levels of these cytokines in the studied groups. According to Winkelman et al. (18), in healthy individuals, the increase in TNF-α varies in response to exercise; it is greater when oxidative stress is present during the exercise. Serum levels of TNF-α also reduce antioxidant levels in some skeletal muscles. Because TNF-α is a proinflammatory cytokine, it stimulates the synthesis of several factors, including adhesion molecules and ROS. Another very important cytokine in the inflammatory process is IL-6, which has a wide range of biological activities.
It is synthesized by the immune system and by skeletal muscle cells, adipocytes, endothelial cells and intestinal epithelium cells (19). Similar to TNF-α, IL-6 is released early in the inflammatory cascade. Unlike TNF-α, IL-6 increases in myocytes and appears to have a role in the maintenance of the power supply during the lifetime of the myocyte. Table 2 - Comparison of cytokine measurements before and after for the control and passive cycle ergometer groups.
| Cytokines | Control (n=10) Before | Control (n=10) After | p* value | Passive cycle ergometer (n=9) Before | Passive cycle ergometer (n=9) After | p* value |
|---|---|---|---|---|---|---|
| TNF-α | 3.05 (2.83-3.22) | 2.91 (2.70-3.30) | 0.625 | 3.35 (2.72-3.59) | 2.87 (2.74-3.28) | 0.301 |
| IFN-γ | 1.88 (1.87-1.91) | 1.88 (1.86-1.91) | 0.152 | 1.91 (1.86-1.95) | 1.92 (1.87-1.98) | 0.944 |
| IL-6 | 2.87 (2.42-3.86) | 2.67 (2.33-3.61) | 0.695 | 2.37 (2.30-2.96) | 2.39 (2.23-2.88) | 0.820 |
| IL-10 | 1.86 (1.83-1.93) | 1.85 (1.83-1.93) | 0.998 | 1.91 (1.88-1.93) | 1.92 (1.91-1.93) | 0.528 |
Proinflammatory cytokines (TNF-α, IFN-γ and IL-6) and a cytokine with antiinflammatory properties (IL-10). Values written in bold are statistically significant. Data are the median (25 – 75% percentile) before and after testing. *Wilcoxon test. TNF-α, tumour necrosis factor alpha; IFN-γ, interferon gamma; IL-6, interleukin 6; IL-10, interleukin 10. Muscle contraction induces IL-6 production and release into the plasma in large quantities. An increase of up to 100 times the initial value has been found in some studies in humans. The synthesis of IL-6 during exercise is independent of the production of TNF-α. During exercise, even at low intensity, IL-6 is also synthesized and put into circulation by the muscle-tendon and fatty tissues (20). Apart from its proinflammatory properties, high levels of IL-6 act to stimulate the emergence of antiinflammatory cytokines in the plasma, including IL-10 and IL-1Rα.
IL-6 derived from muscles reduces the production of TNF-α, interrupting muscle degradation through the destruction of myosin (18). The abolition of proinflammatory cytokines such as IL-6, and particularly the powerful IL-1β and TNF-α, can benefit the critically ill. IL-10 is an anti-inflammatory cytokine that was initially identified by its ability to interrupt the production of cytokines by T cells. Studies have shown that IL-10 inhibits the synthesis of IL-1β, IL-6, TNF-α, reactive oxygen intermediates and other proinflammatory factors, suppressing various immune responses through individual actions on multiple cell types. After exercise, high circulating IL-6 levels are followed by increased production of IL-10. Studies suggest that exercise has an antiinflammatory action by inducing IL-10 and IL-6 and inhibiting TNF-α and IL-1β. The cellular messengers IL-6 and IL-10 are involved in the maintenance of muscle function during stretching and some types of exercise. Exercise thus acts as a regulator of inflammation and muscle function (18). Indeed, important clinical implications have been demonstrated for physical exercise and neuromuscular electrical stimulation (NMES) as some of the main preventive factors for the muscle function of critical patients. Although no changes in inflammatory cytokines were observed in a single session using the cycle ergometer, we believe that the implementation of the passive cycle ergometer has beneficial effects on this response, similar to a report by Karanatsas et al. (21), who studied the application of six weeks of NMES to the lower limbs of patients with severe heart disease. They observed that NMES could promote a direct effect on endothelial function and on peripheral markers of antiinflammatory activation, with reduced levels of TNF-α and IL-6 and improved blood flow in the brachial artery observed by ultrasound with doppler.
These cytokines may act systemically as an anabolic stimulus to muscle, countering the catabolic effects of critical illness and paralysis. NMES and exercise can also activate a bioenergetic pathway that systemically improves the mitochondrial function of skeletal muscle. In another study evaluating effects on the immune system, Akar et al. (22) studied patients with COPD under mechanical ventilation, with the objective of investigating the effect of active mobilization and NMES, over a total of 20 sessions, on the weaning process, discharge and inflammatory mediators. A total of 30 patients were divided into three groups of 10: group 1 received active mobilization of the extremities plus NMES, group 2 received NMES alone, and group 3 underwent only active mobilization of the extremities. Significant improvement in peripheral muscle strength, particularly of the lower extremities, was observed in the groups that performed NMES with exercise and NMES alone. In addition, a reduction of IL-6 and IL-8 was observed in patients submitted to NMES. It is too early to draw definitive conclusions from our findings. Nevertheless, passive cycle ergometry of the lower limbs was enough to reduce cellular NO levels compared to the control group, showing the benefit of passive exercise for oxidative stress reduction in the study population. Regarding the behaviour of the inflammatory cytokines evaluated in this study, the use of the passive cycle ergometer did not cause changes in the immune system. Additional research in a larger population is necessary for more conclusive results. AUTHOR CONTRIBUTIONS All of the authors participated in the design and interpretation of the studies, data analyses and review of the manuscript.
França EE, Ribeiro LC, Lameira GG, Magalhães IK, Figueiredo TG, Costa MJ, Eihimam-Junior UF and Feitosa BF carried out the experiments and the laboratory analyses of oxidative stress and inflammatory cytokines. Andrade MA and Castro CM supervised the elaboration of the research. Correia-Junior MA and Ramos FW wrote the manuscript and performed the statistical analysis. REFERENCES 1. Chiang LL, Wang LY, Wu CP, Wu HD, Wu YT. Effects of physical training on functional status in patients with prolonged mechanical ventilation. Phys Ther. 2008;88(9):1071-80, http://dx.doi.org/10.2522/ptj.20070435. 2. Joly LM, Bouché-Garni S, D’Angelo MC, Goulet M, Lebedev P, Cerf G, et al. Respiratory weakness is associated with limb weakness and delayed weaning in critical illness. Crit Care Med. 2007;35(9):2007-15, http://dx.doi.org/10.1097/01.ccm.0000281450.01881.d8. 3. Topp R, Dittmer M, Kang K, Doherty BS, Zernyak J 3rd. The effect of bed rest and potential of prehabilitation on patients in the intensive care unit. 4. De Jonghe B, Shinshir T, Lefaucheur JP, Autier PJ, Durand-Zaleski I, Bourgeois F, et al. Passive retraining in intensive care unit: a prospective multicenter study. JAMA. 2002;288(22):2859-67, http://dx.doi.org/10.1001/jama.288.22.2859. 5. Mader MJ, Iskakani E. Skeletal muscle dysfunction in chronic obstructive pulmonary disease. Respir Res. 2003;24(4):216-24, http://dx.doi.org/10.1186/r970. 6. Diacu B, Annex BH, Green HJ, Pippen AM, Kraus WE. Deconditioning fails to predict changes in skeletal muscle alterations in men with chronic heart failure. J Am Coll Cardiol. 2002;39(7):1170-4, http://dx.doi.org/10.1016/S0735-1097(02)01740-0. 7. Bax L, Staes F, Verhagen A. Does neuromuscular electrical stimulation stimulate muscle protein synthesis? A systematic review of randomized controlled trials. Sports Med. 2005;35(3):191-212, http://dx.doi.org/10.2165/00007256-200535030-00002. 8. Nava S, Taggi A, De Giorgio E, Carlucci A.
Muscle retraining in the ICU patients. Intensive Care Med. 2002;28(5):341-5. 9. Latronico N, Rastrollo FA. Presentation and management of ICU myopathy and rhabdomyolysis. Curr Opin Crit Care. 2010;16(2):123-7, http://dx.doi.org/10.1097/MCC.0b013e32833655a0. 10. Li YP, Schwartz RJ, Waddell ID, Holloway BR, Reid MB. Skeletal muscle myocytes undergo protein loss and reactive oxygen-mediated NF-kappaB activation in response to tumor necrosis factor alpha. FASEB J. 1998;12(10):873-80. 11. Niess AM, Dickhuth HH, Northoff H, Fehrenbach E. Free radicals and oxidative stress in exercise-immunological aspects. Exerc Immunol Rev. 1999;5:10-24. 12. Di Meo S, Venditti P. Mitochondria in exercised-induced oxidative stress. Biol Signals Recept. 2001;10(1-2):125-40, http://dx.doi.org/10.1139/y00046880. 13. Cooper CE, Villard NB, Choueiri T, Wilson MT. Exercise, free radicals and oxidative stress. Biochem Soc Trans. 2002;30(2):288-5, http://dx.doi.org/10.1042/bst030280. 14. Dekkers JC, Van Doornen LJ, Kemper HC. The role of antioxidant vitamins and enzymes in the prevention of exercise-induced muscle damage. Sports Med. 1996;21(3):213-38, http://dx.doi.org/10.2165/00007256-199621030-00002. 15. Aguilo A, Tauler P, Pilar Guix M, Villa G, Córdova A, Tur JA, et al. Effect of exercise intensity and training on antioxidants and cholesterol profile in cyclists. J Nutr Biochem. 2003;14(6):319-25, http://dx.doi.org/10.1016/S0955-2863(03)00040-8. 16. Mercken EM, Hageman CJ, Schols AM, Akkermans MA, Bast A, Wouters EF. Rehabilitation decreases exercise-induced oxidative stress in patients with obstructive pulmonary disease. Am J Respir Crit Care Med. 2005;172(8):994-1001, http://dx.doi.org/10.1164/rcm.200411-1580OC. 17. Reid MB, Li YP. Tumor necrosis factor-alpha and muscle wasting: a new perspective. Respir Res. 2007;8:209-7, http://dx.doi.org/10.1186/rnr70. 18. Winkelman C. Inactivity and inflammation in the critically ill patient. Crit Care Clin. 
2007;23(1):21-34, http://dx.doi.org/10.1016/j.ccc.2006.11.001. 19. Fink MP. The prevention and treatment of sepsis is interleukin-6 a drug target or not? Crit Care Med. 2006;34(3):919-21, http://dx.doi.org/10.1097/01.CCM.0000208108.30000.0C. 20. Norredt D, Hong S, Mills PJ, Ziegler MG, Hill M, Cooper DM. Systemic vs. local cytokine and leukocyte responses to unilateral wrist flexion exercises. J Appl Physiol. 2002;92(2):546-54, http://dx.doi.org/10.1152/japplphysiol.00383.2001. 21. Karavidas AI, Ratsiakis KG, Parisis JT, Tsekoura DK, Adamopoulos S, Korres DA, et al. Functional electrical stimulation improves endothelial function and reduces systemic inflammatory responses in patients with chronic heart failure. Eur J Cardiovasc Prev Rehabil. 2006;13(4):592-7, http://dx.doi.org/10.1097/01.hpr.0000219111.02544.ff. 22. Akar O, Gunay E, Ulasi SS, Ulasi AM, Kacar E, Sarıaydın M, et al. Efficacy of Neuromuscular Electrical Stimulation in Patients with COPD Followed in Intensive Care Unit. Clin Respir J. 2013.
Variance Estimation in Nonparametric Regression via the Difference Sequence Method Lawrence D. Brown *University of Pennsylvania* Michael Levine *Purdue University* **Recommended Citation** Brown, L. D., & Levine, M. (2007). Variance Estimation in Nonparametric Regression via the Difference Sequence Method. *The Annals of Statistics, 35*(5), 2219-2232. [http://dx.doi.org/10.1214/0090536070000000145](http://dx.doi.org/10.1214/0090536070000000145) Abstract: Consider a Gaussian nonparametric regression problem having both an unknown mean function and unknown variance function. This article presents a class of difference-based kernel estimators for the variance function. Optimal convergence rates that are uniform over broad functional classes and bandwidths are fully characterized, and asymptotic normality is also established. We also show that for suitable asymptotic formulations our estimators achieve the minimax rate. Keywords: nonparametric regression, variance estimation, asymptotic minimaxity VARIANCE ESTIMATION IN NONPARAMETRIC REGRESSION VIA THE DIFFERENCE SEQUENCE METHOD BY LAWRENCE D. BROWN\textsuperscript{1} AND M. LEVINE\textsuperscript{2} University of Pennsylvania and Purdue University 1.
Introduction. Let us consider the nonparametric regression problem \begin{equation} y_i = g(x_i) + \sqrt{V(x_i)}\epsilon_i, \quad i = 1, \ldots, n, \end{equation} where $g(x)$ is an unknown mean function and the errors $\epsilon_i$ are i.i.d. with mean zero, variance 1 and finite fourth moment $\mu_4 < \infty$, while the design is fixed. We assume that $\max_i |x_{i+1} - x_i| = O(n^{-1})$ for $i = 0, \ldots, n$; the usual convention $x_0 = 0$ and $x_{n+1} = 1$ applies. The problem we are interested in is estimating the variance $V(x)$ when the mean $g(x)$ is unknown. In other words, the mean $g(x)$ plays the role of a nuisance parameter. The problem of variance estimation in nonparametric regression was first seriously considered in the 1980s, and its practical importance has been amply illustrated. Variance estimation is needed to construct a confidence band for any mean function estimate (see, e.g., Hart [24], Chapter 4). It is of interest in confidence interval determination for turbulence modeling (Ruppert et al. [34]), financial time series (Härdle and Tsybakov [23], Fan and Yao [18]), covariance structure estimation for nonstationary longitudinal data (see, e.g., Diggle and Verbyla [10]), estimating the correlation structure of heteroscedastic spatial data (Opsomer et al. [31]), nonparametric regression with lognormal errors as discussed in Brown et al. [2] and Shen and Brown [36], and many other problems. In what follows we describe in greater detail the history of a particular approach to the problem. von Neumann [40, 41] and then Rice [33] considered the special, homoscedastic situation in which $V(x) \equiv \sigma^2$ in the model (1) but $\sigma^2$ is unknown. \textsuperscript{1}Supported in part by NSF Grant DMS-04-05716. \textsuperscript{2}Supported in part by a 2004 Purdue Research Foundation Summer Faculty grant. Received August 2006; revised December 2006. AMS 2000 subject classifications. 62G08, 62G20.
Key words and phrases. Nonparametric regression, variance estimation, asymptotic minimaxity. They proposed relatively simple estimators of the form \[ \hat{V}(x) = \frac{1}{2(n-1)} \sum_{i=1}^{n-1} (y_{i+1} - y_i)^2. \] The next logical step was made in Gasser, Sroka and Jennen-Steinmetz [19], where three neighboring points were used to estimate the variance, \[ \hat{V}(x) = \frac{2}{3(n-2)} \sum_{i=1}^{n-2} \left( \frac{1}{2} y_i - y_{i+1} + \frac{1}{2} y_{i+2} \right)^2. \] A further generalization was made in Hall, Kay and Titterington [21]. The following definition is needed first. **Definition 1.1.** Let us consider a sequence of numbers \( \{d_i\}_{i=0}^r \) such that \[ \sum_{i=0}^r d_i = 0 \] and \[ \sum_{i=0}^r d_i^2 = 1. \] Such a sequence is called a difference sequence of order \( r \). For example, when \( r = 1 \), we have \( d_0 = \frac{1}{\sqrt{2}}, d_1 = -d_0 \), which defines the first difference \( \Delta Y_i = \frac{y_i - y_{i-1}}{\sqrt{2}} \). The estimator of Hall, Kay and Titterington [21] can be defined as \[ \hat{V}(x) = (n-r)^{-1} \sum_{i=1}^{n-r} \left( \sum_{j=0}^r d_j y_{j+i} \right)^2. \] The conditions (4) and (5) are meant to ensure the unbiasedness of the estimator (6) when \( g \) is constant and also the identifiability of the sequence \( \{d_i\} \). A different direction was taken in Hall and Carroll [20] and Hall and Marron [22], where the variance was estimated by an average of squared residuals from a fit to \( g \); for other work on constant variance estimation, see also Buckley, Eagleson and Silverman [5], Buckley and Eagleson [4] and Carter and Eagleson [7]. The difference sequence idea introduced by Hall, Kay and Titterington [21] can be modified for the case of a nonconstant variance function \( V(x) \). As a rule, the average of squared differences of observations has to be localized in one way or another, for example, by using the nearest neighbor average, a spline approach or local polynomial regression.
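The constant-variance estimators above are straightforward to implement. A minimal NumPy sketch (not the authors' code) of the first-difference estimator (2), the three-point estimator (3), and the general difference-sequence estimator (6), checked on simulated homoscedastic data; the mean function and noise level are illustrative choices. For $r = 1$ with the sequence from Definition 1.1, (6) reduces exactly to (2):

```python
import numpy as np

def rice(y):
    """von Neumann/Rice first-difference estimator (2) of a constant variance."""
    return np.sum(np.diff(y) ** 2) / (2 * (len(y) - 1))

def gasser(y):
    """Three-point estimator (3) of Gasser, Sroka and Jennen-Steinmetz."""
    c = 0.5 * y[:-2] - y[1:-1] + 0.5 * y[2:]
    return 2 * np.sum(c ** 2) / (3 * (len(y) - 2))

def hkt(y, d):
    """Hall-Kay-Titterington estimator (6) for a difference sequence d
    satisfying (4) and (5): sum(d) = 0 and sum(d**2) = 1."""
    d = np.asarray(d, float)
    assert abs(d.sum()) < 1e-10 and abs((d ** 2).sum() - 1) < 1e-10
    r = len(d) - 1
    pseudo = np.correlate(y, d, mode="valid")   # sum_j d_j y_{j+i}
    return np.sum(pseudo ** 2) / (len(y) - r)

rng = np.random.default_rng(1)
n, sigma2 = 2000, 0.25
x = np.arange(1, n + 1) / n
y = np.sin(2 * np.pi * x) + rng.normal(0.0, np.sqrt(sigma2), n)

d1 = [2 ** -0.5, -(2 ** -0.5)]   # order-1 sequence from Definition 1.1
print(rice(y), gasser(y), hkt(y, d1))   # all close to sigma2 = 0.25
```

Because the differences cancel the smooth mean, the bias contributed by $g$ is of order $n^{-2}$ here, negligible next to the stochastic error.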
The first to try to generalize it in this way were probably Müller and Stadtmüller [27]. It was further developed in Hall, Kay and Titterington [21], Müller and Stadtmüller [28], Seifert, Gasser and Wolf [35], Dette, Munk and Wagner [9], and many others. An interesting application of this type of a variance function estimator for the purpose of testing the functional form of the given regression model is given in Dette [8]. Another possible route to estimating the variance function $V(x)$ is to use the local average of the squared residuals from the estimation of $g(x)$. One of the first applications of this principle was in Hall and Carroll [20]. A closely related estimator was also considered earlier in Carroll [6] and Matloff, Rose and Tai [26]. This approach has also been considered in Fan and Yao [18]. Some of the latest work in the area of variance estimation includes attempts to derive methods that are suitable for the case where $X \in \mathbb{R}^d$ for $d > 1$; see, for example, Spokoiny [38] for generalization of the residual-based method and Munk, Bissantz, Wagner and Freitag [29] for generalization of the difference-based method. The present research describes a class of nonparametric variance estimators based on difference sequences and local polynomial estimation, and investigates their asymptotic behavior. Section 2 introduces the estimator class and investigates its asymptotic rates of convergence as well as the choice of the optimal bandwidth. Section 3 establishes the asymptotic normality of these estimators. Section 4 investigates the question of asymptotic minimaxity for our estimator class among all possible variance estimators for nonparametric regression. 2. Variance function estimators. Consider the model (1). We begin with the following formal definition. 
**Definition 2.1.** A pseudoresidual of order $r$ is $$\Delta_i \equiv \Delta_{r,i} = \sum_{j=0}^{r} d_j y_{j+i-\lfloor r/2 \rfloor},$$ where $\{d_j\}$ is a difference sequence satisfying (4)–(5) and $i = \lfloor \frac{r}{2} \rfloor + 1, \ldots, n + \lfloor \frac{r}{2} \rfloor - r$. Let $K(\cdot)$ be a real-valued function such that $K(u) \geq 0$ and is not identically zero; $K(u)$ is bounded [$\exists M > 0$ such that $K(u) \leq M$ for all $u$]; $K(u)$ is supported on $[-1, 1]$ and $\int K(u) \, du = 1$. We use the notation $\sigma_K^2 = \int u^2 K(u) \, du$ and $R_K = \int K^2(u) \, du$. Then, based on $\Delta_{r,i}$, we define a variance estimator $\hat{V}_h(x)$ of order $r$ as the local polynomial regression estimator based on $\Delta_{r,i}^2$, $$\hat{V}_h(x) = \hat{a}_0,$$ where \[ (\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_p) = \arg \min_{a_0, a_1, \ldots, a_p} \sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} [\Delta^2_{r,i} - a_0 - a_1(x - x_i) - \cdots - a_p(x - x_i)^p]^2 \times K \left( \frac{x - x_i}{h} \right). \] The value \( h \) in (8) is called the bandwidth and \( K \) is the weight function. It should be clear that these estimators are unbiased under the assumption of homoscedasticity \( V(x) \equiv \sigma^2 \) and constant mean \( g(x) \equiv \mu \). We begin with the definition of the functional class that will be used in the asymptotic results to follow. **Definition 2.2.** Define the functional class \( C_\gamma \) as follows. Let \( C_1 > 0, C_2 > 0 \). Let us denote \( \gamma' = \gamma - \lfloor \gamma \rfloor \), where \( \lfloor \gamma \rfloor \) denotes the greatest integer less than \( \gamma \). We say that the function \( f(x) \) belongs to the class \( C_\gamma \) if for all \( x, y \in (0, 1) \) \[ |f^{(\lfloor \gamma \rfloor)}(x) - f^{(\lfloor \gamma \rfloor)}(y)| \leq C_1 |x - y|^{\gamma'}, \] \[ |f^{(k)}(x)| \leq C_2, \] for \( k = 0, \ldots, \lfloor \gamma \rfloor - 1 \).
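A minimal numerical sketch of estimator (8), not the authors' code: it takes $r = 1$, local linear fitting ($p = 1$) and the Epanechnikov kernel, solving the weighted least-squares problem directly; the bandwidth `h` and the simulated $g$, $V$ below are illustrative choices:

```python
import numpy as np

def variance_estimate(x_grid, x, y, h):
    """Estimator (8) with r = 1, p = 1 (local linear) and the
    Epanechnikov kernel K(u) = 0.75*(1 - u^2) supported on [-1, 1]."""
    d = np.array([2 ** -0.5, -(2 ** -0.5)])        # order-1 difference sequence
    delta2 = np.correlate(y, d, mode="valid") ** 2  # squared pseudoresiduals
    xi = x[: len(delta2)]                           # design points carrying Delta_i^2
    out = np.empty(len(x_grid))
    for k, x0 in enumerate(x_grid):
        u = (x0 - xi) / h
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
        # Weighted least-squares fit of Delta_i^2 on (1, x0 - x_i).
        X = np.column_stack([np.ones_like(xi), x0 - xi])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], delta2 * sw, rcond=None)
        out[k] = beta[0]                            # hat a_0 = hat V_h(x0)
    return out

rng = np.random.default_rng(2)
n = 3000
x = np.arange(1, n + 1) / n
V_true = lambda t: 0.2 + t ** 2                     # illustrative variance function
y = np.sin(2 * np.pi * x) + np.sqrt(V_true(x)) * rng.standard_normal(n)

est = variance_estimate(np.array([0.3, 0.5, 0.7]), x, y, h=0.1)
print(est)   # roughly (0.29, 0.45, 0.69)
```

The intercept of the local fit is the estimate, exactly as in (8); higher $p$ or a different $r$ only changes the design matrix and the pseudoresiduals.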
Note that \( C_\gamma \) depends on the choice of \( C_1, C_2 \), but for our convenience we omit this dependence from the notation. There are also similar types of dependence in the definitions that immediately follow. **Definition 2.3.** Let \( \delta > 0 \). We say the function is in the class \( C_\gamma^+ \) if it is in \( C_\gamma \) and in addition \[ f(x) \geq \delta. \] These classes of functions are familiar in the literature, as in Fan [15, 16], and are often referred to as Lipschitz balls. **Definition 2.4.** Define the pointwise risk of the variance estimator \( \hat{V}_h(x) \) (its mean squared error at a point \( x \)) as \[ R(V(x), \hat{V}_h(x)) = E[\hat{V}_h(x) - V(x)]^2. \] **Definition 2.5.** Define the global mean squared risk of the variance estimator \( \hat{V}_h(x) \) as \[ R(V, \hat{V}_h) = E \left( \int_0^1 (\hat{V}_h(x) - V(x))^2 \, dx \right). \] Then the bandwidth \( h_n \) that is globally optimal in the minimax sense is defined as \[ h_n = \arg \min_{h > 0} \sup \{ R(V, \hat{V}_h) : V \in C_\gamma, g \in C_\beta \}. \] Note that $h_n$ depends on $n$ as well as $C_1$, $C_2$, $\beta$ and $\gamma$. A similar definition applies in the setting of Definition 2.4. **Remark 2.6.** In the special case where $\gamma = 2$ and $\beta = 1$, the finite sample performance of this estimator has been investigated in Levine [25], together with the possible choice of bandwidth. A version of $K$-fold cross-validation has been recommended as the most suitable method. When utilized, it produces a variance estimator that in typical cases is not very sensitive to the choice of the mean function $g(x)$. **Theorem 2.7.** Consider the nonparametric regression problem described by (1), with estimator as described in (8). Fix $C_1$, $C_2$, $\gamma > 0$ and $\beta > \gamma/(4\gamma + 2)$ to define functional classes $C_\gamma$ and $C_\beta$ according to Definition 2.2. Assume $p > \lfloor \gamma \rfloor$.
Then the optimal bandwidth is $h_n \asymp n^{-1/(2\gamma + 1)}$. Let $0 < \underline{a} \leq \overline{a} < \infty$. Then there are constants $\underline{B}$ and $\overline{B}$ such that $$ \underline{B} n^{-2\gamma/(2\gamma + 1)} + o(n^{-2\gamma/(2\gamma + 1)}) \\ \leq R(V, \hat{V}) \leq \overline{B} n^{-2\gamma/(2\gamma + 1)} + o(n^{-2\gamma/(2\gamma + 1)}) $$ for all $h$ satisfying $\underline{a} \leq n^{1/(2\gamma + 1)} h \leq \overline{a}$, uniformly for $g \in C_\beta$, $V \in C_\gamma$. Theorem 2.7 refers to properties of the integrated mean square error. Related results also hold for minimax risk at a point. The main results are stated in the following theorem. **Theorem 2.8.** Consider the setting of Theorem 2.7. Let $x_0 \in (0, 1)$. Assume $p > \lfloor \gamma \rfloor$. Then the optimal bandwidth is $h_n(x) \asymp n^{-1/(2\gamma + 1)}$. Let $0 < \underline{a} \leq \overline{a} < \infty$. Then there are constants $\underline{B}$ and $\overline{B}$ such that $$ \underline{B} n^{-2\gamma/(2\gamma + 1)} + o(n^{-2\gamma/(2\gamma + 1)}) \leq R(V(x_0), \hat{V}_{h_n}(x_0)) \\ \leq \overline{B} n^{-2\gamma/(2\gamma + 1)} + o(n^{-2\gamma/(2\gamma + 1)}) $$ for all $h(x)$ satisfying $\underline{a} \leq n^{1/(2\gamma + 1)} h \leq \overline{a}$, uniformly for $g \in C_\beta$, $V \in C_\gamma$. The proof of these theorems can be found in the Appendix. The minimax rates obtained in (13) and (14) will be shown in Theorems 4.1 and 4.2 to be optimal in the setting of Theorem 2.7. At this point, the following remarks may be helpful. **Remark 2.9.** If one assumes that $\beta = \gamma/(4\gamma + 2)$ in the definition of the functional class $C_\beta$, the conclusions of Theorems 2.7 and 2.8 remain valid, but the constants $\underline{B}$ and $\overline{B}$ appearing in them become dependent on $\beta$. If $\beta < \gamma/(4\gamma + 2)$, the conclusion (14) does not hold. For more details, see comments preceding Theorem 4.2 and the Appendix. Remark 2.10. 
Müller and Stadtmüller [28] considered a general quadratic-form-based estimator similar to our (8) and derived convergence rates for its mean squared error. They were also the first to point out an error in the paper by Hall and Carroll [20] (see Müller and Stadtmüller [28], pages 214 and 221). They use a slightly different (more restrictive) definition of the classes $C_\gamma$ and $C_\beta$ and only establish rates of convergence and error terms on those rates for fixed functions $V$ and $g$ within the classes $C_\gamma$ and $C_\beta$. Our results resemble theirs, but we also establish the rates of convergence uniformly over the functional classes $C_\beta$ and $C_\gamma$, and therefore our bounds are of the minimax type. Remark 2.11. It is important to notice that the asymptotic mean squared risks in Theorems 2.7 and 2.8 can be further reduced by a proper choice of the difference sequence $\{d_j\}$. The proof in the Appendix, supplemented with material in Hall, Kay and Titterington [21], shows that the asymptotic variance of our estimators will be affected by the choice of the difference sequence, but the choice of this sequence does not affect the bias in asymptotic calculations. The effect on the asymptotic variance is to multiply it by a constant proportional to $$C = 2 \left( 1 + 2 \sum_{k=1}^{r} \left( \sum_{j=0}^{r-k} d_j d_{j+k} \right)^2 \right).$$ For any given value of $r$ there is a difference sequence that minimizes this constant. A computational algorithm for these sequences is given in Hall, Kay and Titterington [21]. The resulting minimal constant as a function of $r$ is $C_{\text{min}} = (2r + 1)/r$. 3. Asymptotic normality. As a next step, we establish that the estimator (8) is asymptotically normal.
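The constant of Remark 2.11 is easy to compute for any candidate sequence. A small sketch (taking the inner sum over $j = 0, \ldots, r-k$, so that $j + k \le r$), comparing two example sequences against $C_{\text{min}} = (2r+1)/r$:

```python
import numpy as np

def variance_constant(d):
    """C = 2*(1 + 2*sum_{k=1}^{r} (sum_{j=0}^{r-k} d_j d_{j+k})^2)."""
    d = np.asarray(d, float)
    r = len(d) - 1
    lagged = [float(np.dot(d[: r + 1 - k], d[k:])) for k in range(1, r + 1)]
    return 2 * (1 + 2 * sum(a * a for a in lagged))

def c_min(r):
    """Minimal attainable constant (2r+1)/r (Hall, Kay and Titterington)."""
    return (2 * r + 1) / r

# r = 1: the first-difference sequence attains the minimum C_min = 3.
d1 = [2 ** -0.5, -(2 ** -0.5)]
print(variance_constant(d1), c_min(1))

# r = 2: the normalized (1/2, -1, 1/2) weights from (3) do not attain C_min = 2.5.
d2 = np.array([0.5, -1.0, 0.5]) / np.sqrt(1.5)
print(variance_constant(d2), c_min(2))
```

For $r = 2$ the normalized second-difference weights give $C = 35/9 \approx 3.89$, noticeably above the optimal $2.5$, which is exactly the efficiency gap the optimal sequences of Hall, Kay and Titterington close.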
We recall that the local polynomial regression estimator $\hat{V}_h(x)$ can be represented as $$\hat{V}_h(x) = \sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} K_{n;h,x}(x_i) \Delta^2_{r,i},$$ where $K_{n;h,x}(x_i) = K_{n,x}\left(\frac{x-x_i}{h}\right)$. Here $K_{n,x}\left(\frac{x-x_i}{h}\right)$ can be thought of as a centered and rescaled nonnegative local kernel function whose shape depends on the location of the design points $x_i$, the point of estimation $x$ and the number of observations $n$. We know that $K_{n,x}\left(\frac{x-x_i}{h}\right)$ satisfies the discrete moment conditions $$\sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} K_{n,x}\left(\frac{x-x_i}{h}\right) = 1,$$ $$\sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} (x-x_i)^q K_{n,x}\left(\frac{x-x_i}{h}\right) = 0$$ for any $q = 1, \ldots, p$. We also need the fact that the support of $K_{n,x}(\cdot)$ is contained in that of $K(\cdot)$; in other words, $K_{n;h,x}(x_i) = 0$ whenever $|x_i - x| > h$. For more details see, for example, Fan and Gijbels [17]. Now we can state the following result. **Theorem 3.1.** Consider the nonparametric regression problem described by (1), with estimator as described in (8). We assume that the functions $g(x)$ and $V(x)$ are continuous for any $x \in [0, 1]$ and that $V$ is bounded away from zero. Assume $\mu_{4+\nu} = E|\varepsilon_i|^{4+\nu} < \infty$ for some $\nu > 0$. Then, as $h \to 0$, $n \to \infty$ and $nh \to \infty$, we find that $$\sqrt{nh}(\hat{V}_h(x) - V(x) - O(h^{2\nu}))$$ is asymptotically normal with mean zero and variance $\sigma^2$, where $0 < \sigma^2 < \infty$. **Proof.** To prove this result, we rely on the CLT for partial sums of a generalized linear process $$X_n = \sum_{i=1}^{n} a_{ni} \xi_i,$$ where $\xi_i$ is a mixing sequence. This and several similar results were established in Peligrad and Utev [32]. Thus, the estimator (8) can be easily represented in the form (20) with $K_{n;h,x}(x_i)$ as $a_{ni}$.
What remains is to verify the conditions of Theorem 2.2(c) in Peligrad and Utev [32]. - The first condition is $$\max_{1 \leq i \leq n} |a_{ni}| \to 0$$ as $n \to \infty$, and it is immediately satisfied since $$K_{n;h,x}(x_i) = O((nh)^{-1})$$ uniformly for all $x \in [0, 1]$. - The second condition is $$\sup_{n} \sum_{i=1}^{n} a_{ni}^2 < \infty.$$ It can be verified by using the Cauchy–Schwarz inequality and (22). - To establish uniform integrability of $\xi_i^2 \equiv \Delta_{r,i}^4$, we use a simple criterion mentioned in Shiryaev [37] that requires the existence of a nonnegative, monotonically increasing function $G(t)$, defined for $t \geq 0$, such that $$\lim_{t \to \infty} \frac{G(t)}{t} = \infty$$ and \[ \sup_i E[G(\Delta_{r,i}^4)] < \infty. \] It is enough to choose \( G(t) = t^{1+\nu/4} \), so that \( E[G(\Delta_{r,i}^4)] = E|\Delta_{r,i}|^{4+\nu} < \infty \) by the moment assumption, to have these conditions satisfied. Finally, the remaining three conditions of Peligrad and Utev [32] are trivially satisfied. \[\square\] 4. Asymptotic minimaxity and related issues. Lower bounds on the asymptotic minimax rate for estimating a nonparametric variance in formulations related to that in (1) have occasionally been studied in earlier literature. Two papers seem particularly relevant. Munk and Ruymgaart [30] study a different, but related problem. Their paper contains a lower bound on the asymptotic minimax risk for their setting. In particular, their setting involves a problem with random design, rather than the fixed design case in (1). Their proof uses the Van Trees inequality and relies heavily on the fact that their \((X_i, Y_i)\) pairs are independent and identically distributed. While it may well be possible to do so, it is not immediately evident how to modify their argument to apply to the setting (1). Hall and Carroll [20] consider a setting similar to ours.
Their equation (2.13) claims (in our notation) that there is a constant \( K > 0 \), possibly depending on \( C_1, C_2, \beta \) such that for any estimator \( \hat{V} \) \[ \sup\{R(V(x_0), \hat{V}(x_0)): V \in C_\gamma, g \in C_\beta\} \] (24) \[ \geq K \max\{n^{-2\gamma/(2\gamma+1)}, n^{-4\beta/(2\beta+1)}\}. \] Note that \( n^{-2\gamma/(2\gamma+1)} = o(n^{-4\beta/(2\beta+1)}) \) for \( \beta < \gamma/(2\gamma + 2) \). It thus follows from (14) in our Theorem 2.8 that for any \( \gamma/(4\gamma + 2) < \beta < \gamma/(2\gamma + 2) \) and \( n \) sufficiently large \[ \sup\{R(V(x_0), \hat{V}_{h_n}(x_0)): V \in C_\gamma, g \in C_\beta\} \] (25) \[ \ll K \max\{n^{-2\gamma/(2\gamma+1)}, n^{-4\beta/(2\beta+1)}\}, \] where \( h_n \) is yet again the optimal bandwidth. This contradicts the assertion in Hall and Carroll [20], and shows that their assertion (2.13) is in error—as is the argument supporting it that follows (C.3) of their article. For a similar commentary see also Müller and Stadtmüller [28]. Because of this contradiction it is necessary to give an independent statement and proof of a lower bound for the minimax risk. That is the goal of this section, where we treat the case in which \( \beta \geq \gamma/(4\gamma + 2) \). The minimax lower bound for the case in which \( \beta < \gamma/(4\gamma + 2) \) requires different methods which are more sophisticated. That case, as well as some further generalizations, have been treated in Wang, Brown, Cai and Levine [42] as a sequel to the present paper. That paper proves ratewise sharp lower and upper bounds for the case where \( \beta < \gamma/(4\gamma + 2) \). We have treated both mean squared error at a point (in Theorem 2.8) and integrated mean squared error (in Theorem 2.7). Correspondingly, we provide statements of lower bounds on the minimax rate for each of these cases. The local version of the lower bound result for the minimax risk is obtained under the assumption of normality of errors $\varepsilon_i$. 
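The exponent comparison underlying (24) and (25) can be checked directly: $n^{-2\gamma/(2\gamma+1)} = o(n^{-4\beta/(2\beta+1)})$ exactly when $2\gamma/(2\gamma+1) > 4\beta/(2\beta+1)$, which is equivalent to $\beta < \gamma/(2\gamma+2)$. A quick exact-arithmetic sketch for $\gamma = 2$ (an illustrative choice):

```python
from fractions import Fraction

def variance_rate_exp(gamma):
    """Exponent in the minimax rate n^{-2*gamma/(2*gamma+1)} for V."""
    return 2 * gamma / (2 * gamma + 1)

def mean_rate_exp(beta):
    """Exponent in n^{-4*beta/(2*beta+1)}, the term driven by the unknown g."""
    return 4 * beta / (2 * beta + 1)

gamma = Fraction(2)
threshold = gamma / (2 * gamma + 2)              # beta < gamma/(2*gamma+2)

for beta in (Fraction(1, 5), Fraction(1, 3), Fraction(1, 2)):
    faster = variance_rate_exp(gamma) > mean_rate_exp(beta)
    print(beta, beta < threshold, faster)        # the two booleans agree
```

At the threshold $\beta = \gamma/(2\gamma+2)$ the two exponents coincide, so the second term of the maximum in (24) dominates only below it.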
See Section 2 for the definition of $R$ and other quantities that appear in the following statements. **Theorem 4.1.** Consider the nonparametric regression problem described by (1). Fix $C_1$, $C_2$, $\beta$ and $\gamma$ to define functional classes $\mathcal{C}_\gamma$, $\mathcal{C}_\beta$ according to Definition 2.2. Also assume that the $\varepsilon_i$ are independent $N(0, 1)$. Then there is a constant $K > 0$ such that $$\inf\{\sup\{R(V, \tilde{V}): V \in \mathcal{C}_\gamma^+, g \in \mathcal{C}_\beta\} : \tilde{V}\} \geq Kn^{-2\gamma/(2\gamma+1)},$$ where the inf is taken over all possible estimators of the variance function $V$. Our argument relies on the so-called “two-point” argument, introduced and extensively analyzed in Donoho and Liu [11, 12]. **Theorem 4.2.** Consider the nonparametric regression problem described by (1). Fix $C_1$, $C_2$, $\beta$ and $\gamma$ to define functional classes $\mathcal{C}_\gamma$, $\mathcal{C}_\beta$ according to Definition 2.2. Also assume that the $\varepsilon_i$ are independent $N(0, 1)$. Then there is a constant $K > 0$ such that $$\inf\{\sup\{R(V(x_0), \tilde{V}(x_0)): V \in \mathcal{C}_\gamma, g \in \mathcal{C}_\beta\} : \tilde{V}\} \geq Kn^{-2\gamma/(2\gamma+1)},$$ where the inf is taken over all possible estimators of the variance function $V$. **Proof.** It is easier to begin with the proof of Theorem 4.2 and then proceed to the proof of Theorem 4.1. We will use a two-point modulus-of-continuity argument to establish the lower bound. Such an argument was pioneered by Donoho and Liu [11, 12] for a different though related problem. See also Hall and Carroll [20] and Fan [16]. We assume without loss of generality that $g \equiv 0$. Define the function $$h(t) = \begin{cases} 2 - |t|^\gamma, & \text{if } 0 \leq |t| \leq 1, \\ (2 - |t|)^\gamma, & \text{if } 1 < |t| \leq 2, \\ 0, & \text{if } |t| > 2. \end{cases}$$ Assume (for convenience only) that $C_1 > 2$.
Let $d$ be a constant satisfying $0 < d < C_2$ and let $$f_{\delta, l}(x) = d + l\delta h\left(\frac{x - x_0}{\delta^{1/\gamma}}\right).$$ Then $f_{\delta, \pm 1} \in C_\gamma$ for $\delta > 0$ sufficiently small. Let $H$ denote the Hellinger distance between densities, that is, for any two probability densities $m_1$, $m_2$ dominated by a measure $\mu(dz)$, $$H^2(m_1, m_2) = \int (\sqrt{m_1(z)} - \sqrt{m_2(z)})^2 \mu(dz).$$ Here are two basic facts about this metric that will be used below. If $Z = \{Z_j : j = 1, \ldots, n\}$ where the $Z_j$ are independent with densities $\{m_{kj} : j = 1, \ldots, n\}$, $k = 1, 2$ and $m_k = \Pi_j m_{kj}$ denotes the product density, then $$H^2(m_1, m_2) \leq \sum_j H^2(m_{1j}, m_{2j});$$ and if $m_i$ are univariate normal densities with mean 0 and variance $\sigma_i^2$, $i = 1, 2$, then $$H^2(m_1, m_2) \leq 2 \left( \frac{\sigma_1^2}{\sigma_2^2} - 1 \right)^2.$$ For more details see Brown and Low [3] and Brown et al. [1]. It follows that if $m_k$, $k = 1, 2$, are the joint densities of the observations $\{x_i, Y_i, i = 1, \ldots, n\}$ of (1) with $g \equiv 0$ and $f_k = f_{\delta, (-1)^k}$ then $$H^2(m_1, m_2) \leq \sum_i 2 \left( \frac{f_{\delta, -1}(x_i)}{f_{\delta, 1}(x_i)} - 1 \right)^2$$ $$\leq 8 \sum_i \delta^2 h^2 \left( \frac{x_i - x_0}{\delta^{1/\gamma}} \right) = O(n \delta^{(2\gamma + 1)/\gamma}).$$ For this setting the Hellinger modulus of continuity, $\omega(\cdot)$ (Donoho and Liu [12], equation (1.1)), is defined as the inverse function corresponding to the value $H(m_1, m_2)$. Hence it satisfies $$\omega^{-1}(\delta) = O(n^{1/2} \delta^{(2\gamma + 1)/2\gamma}).$$ Equation (27) then follows, as established in Donoho and Liu [12]. Although this completes the proof of Theorem 4.2, we also provide a sketch of the argument based on (34). See Donoho and Liu [12] and references cited therein for more details. **Proof of Theorem 4.1.** We omit this proof for the sake of brevity.
It begins from the result in Theorem 4.2 and then follows along the lines first described in detail in Donoho, Liu and MacGibbon [13]. This theorem can also be viewed as a consequence of the general results on the global convergence of nonparametric estimators by Stone [39] and Efromovich [14], which do not require normality of the errors $\varepsilon_i$. □ APPENDIX PROOFS OF THEOREMS 2.7 AND 2.8. Fix $r$ and the functional classes $\mathcal{C}_\gamma$ and $\mathcal{C}_\beta$. For the sake of brevity, we write $\Delta_i \equiv \Delta_{r,i}$. Our main tools in this proof are the representation (16) of the variance estimator $\hat{V}_h(x)$ and the properties (17)–(18). We also use the property $$\sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} (K_{n;h,x}(x_i))^2 = O\left(\frac{1}{nh}\right). \tag{35}$$ (35) follows from (22) and the Cauchy–Schwarz inequality. Here and later, $O$ is uniform for all $V \in \mathcal{C}_\gamma$, $g \in \mathcal{C}_\beta$ and $\{h\} = \{h_n\}$. Now, $$E(\Delta_i^2) = \text{Var}(\Delta_i) + (E(\Delta_i))^2, \tag{36}$$ where $$\text{Var}(\Delta_i) = \sum d_j^2 \text{Var}(y_{j+i-\lfloor r/2 \rfloor}) = V(x_i) + O\left(\left(\frac{1}{n}\right)^\gamma\right) \tag{37}$$ and $$E(\Delta_i) = O\left(\left(\frac{1}{n}\right)^\beta\right) \tag{38}$$ since $\sum d_j = 0$, $\sum d_j^2 = 1$ and $x_{i+r-\lfloor r/2 \rfloor} - x_{i-\lfloor r/2 \rfloor} = O\left(\frac{1}{n}\right)$. This provides an asymptotic bound on the bias as $$\text{Bias} \hat{V}_h(x) = \sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} (V(x_i) - V(x)) K_{n;h,x}(x_i) + O(n^{-\gamma}) + O(n^{-\beta}) \tag{39}$$ $$= O(h^\gamma) + O(n^{-\gamma}) + O(n^{-\beta}).$$ The last step in (39) is a very minor variation of the technique employed in Wang, Brown, Cai and Levine [42] (see pages 10–11). Next, we need to use the fact that $\Delta_i$ and $\Delta_j$ are independent if $|i-j| \geq r+1$.
Hence, $$\text{Var} \hat{V}_h(x) = \text{Var}\left(\sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} K_{n;h,x}(x_i) \Delta_i^2\right)$$ $$= \sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} \sum_{j=i-r}^{i+r} K_{n;h,x}(x_i) K_{n;h,x}(x_j) \text{Cov}(\Delta_i^2, \Delta_j^2)$$ $$\leq \sum_{i=\lfloor r/2 \rfloor + 1}^{n+\lfloor r/2 \rfloor - r} \sum_{j=i-r}^{i+r} 4^{-1}((K_{n;h,x}(x_i))^2 + (K_{n;h,x}(x_j))^2)$$ $$\times (\text{Var} \Delta_i^2 + \text{Var} \Delta_j^2).$$ It is easy to see that $$\Delta_i^2 = \left( \sum_{j=0}^{r} d_j y_{j+i-\lfloor r/2 \rfloor} \right)^2 = \left( \sum_{j=0}^{r} d_j \sqrt{V(x_{j+i-\lfloor r/2 \rfloor})} \varepsilon_{i+j-\lfloor r/2 \rfloor} + O(n^{-\beta}) \right)^2,$$ and this means, in turn, that $$\text{Var} \Delta_i^2 \leq C_2^2 \text{Var} \left( \sum_{j=0}^{r} d_j \varepsilon_{i+j-\lfloor r/2 \rfloor} + O(n^{-\beta}) \right)^2 \leq C_2^2 (r+1) \mu_4 + O(n^{-2\beta}) + O(n^{-4\beta}) = O(1).$$ Hence, $$\text{Var} \hat{V}_h(x) \leq O(1) \sum_{i=\lfloor r/2 \rfloor+1}^{n+\lfloor r/2 \rfloor-r} \sum_{j=i-r}^{i+r} ((K_{n;h,x}(x_i))^2 + (K_{n;h,x}(x_j))^2) = O\left( \frac{1}{nh} \right). \tag{40}$$ Combining the bounds in (39) and (40) yields the assertion of the theorem since $2\beta > \gamma/(2\gamma+1)$. **Acknowledgments.** We wish to thank T. Cai and L. Wang for pointing out the significance of the article of Hall and Carroll [20] and its relation to our (14). **REFERENCES** [1] **Brown, L. D., Cai, T., Low, M. and Zhang, C.-H.** (2002). Asymptotic equivalence theory for nonparametric regression with random design. *Ann. Statist.* **30** 688–707. MR1922538 [2] **Brown, L. D., Gans, N., Mandelbaum, A., Sakov, A., Shen, H., Zeltyn, S. and Zhao, L.** (2005). Statistical analysis of a telephone call center: A queueing-science perspective. *J. Amer. Statist. Assoc.* **100** 36–50. MR2166068 [3] **Brown, L. D. and Low, M.** (1996).
A constrained risk inequality with applications to nonparametric function estimation. *Ann. Statist.* **24** 2524–2535. MR1425965 [4] **Buckley, M. J. and Eagleson, G. K.** (1989). A graphical method for estimating the residual variance in nonparametric regression. *Biometrika* **76** 203–210. MR1016011 [5] **Buckley, M. J., Eagleson, G. K. and Silverman, B. W.** (1988). The estimation of residual variance in nonparametric regression. *Biometrika* **75** 189–199. MR0946047 [6] **Carroll, R. J.** (1982). Adapting for heteroscedasticity in linear models. *Ann. Statist.* **10** 1224–1233. MR0673657 [7] **Carter, C. K. and Eagleson, G. K.** (1992). A comparison of variance estimators in nonparametric regression. *J. Roy. Statist. Soc. Ser. B* **54** 773–780. MR1185222 [8] DETTE, H. (2002). A consistent test for heteroscedasticity in nonparametric regression based on the kernel method. *J. Statist. Plann. Inference* **103** 311–329. MR1896998 [9] DETTE, H., MUNK, A. and WAGNER, T. (1998). Estimating the variance in nonparametric regression—what is a reasonable choice? *J. R. Stat. Soc. Ser. B Stat. Methodol.* **60** 751–764. MR1649480 [10] DIGGLE, P. J. and VERBYLA, A. (1998). Nonparametric estimation of covariance structure in longitudinal data. *Biometrics* **54** 401–415. [11] DONOHO, D. and LIU, R. (1991). Geometrizing rates of convergence. II. *Ann. Statist.* **19** 633–667. MR1105839 [12] DONOHO, D. and LIU, R. (1991). Geometrizing rates of convergence. III. *Ann. Statist.* **19** 668–701. MR1105839 [13] DONOHO, D., LIU, R. and MACGIBBON, B. (1990). Minimax risk over hyperrectangles and implications. *Ann. Statist.* **18** 1416–1437. MR1062717 [14] EFROMOVICH, S. (1996). On nonparametric regression for iid observations in a general setting. *Ann. Statist.* **24** 1125–1144. MR1401841 [15] FAN, J. (1992). Design-adaptive nonparametric regression. *J. Amer. Statist. Assoc.* **87** 998–1004. MR1209561 [16] FAN, J. (1993). 
Local linear regression smoothers and their minimax efficiencies. *Ann. Statist.* **21** 196–216. MR1212173 [17] FAN, J. and GIJBELS, I. (1996). *Local Polynomial Modelling and Its Applications*. Chapman and Hall, London. MR1383587 [18] FAN, J. and YAO, Q. (1998). Efficient estimation of conditional variance functions in stochastic regression. *Biometrika* **85** 645–660. MR1665822 [19] GASSER, T., SROKA, L. and JENNEN-STEINMETZ, C. (1986). Residual variance and residual pattern in nonlinear regression. *Biometrika* **73** 625–633. MR0897854 [20] HALL, P. and CARROLL, R. (1989). Variance function estimation in regression: The effect of estimating the mean. *J. Roy. Statist. Soc. Ser. B* **51** 3–14. MR0984989 [21] HALL, P., KAY, J. and TITTERINGTON, D. (1990). Asymptotically optimal difference-based estimation of variance in nonparametric regression. *Biometrika* **77** 521–528. MR1087842 [22] HALL, P. and MARRON, J. (1990). On variance estimation in nonparametric regression. *Biometrika* **77** 415–419. MR1064818 [23] HÄRDLE, W. and TSYBAKOV, A. (1997). Local polynomial estimators of the volatility function in nonparametric autoregression. *J. Econometrics* **81** 223–242. MR1484586 [24] HART, J. (1997). *Nonparametric Smoothing and Lack-of-Fit Tests*. Springer, New York. MR1461272 [25] LEVINE, M. (2006). Bandwidth selection for a class of difference-based variance estimators in the nonparametric regression: A possible approach. *Comput. Statist. Data Anal.* **50** 3405–3431. MR2236857 [26] MATLOFF, N., ROSE, R. and TAI, R. (1984). A comparison of two methods for estimating optimal weights in regression analysis. *J. Statist. Comput. Simul.* **19** 265–274. [27] MÜLLER, H.-G. and STADTMÜLLER, U. (1987). Estimation of heteroscedasticity in regression analysis. *Ann. Statist.* **15** 610–625. MR0888429 [28] MÜLLER, H.-G. and STADTMÜLLER, U. (1993). On variance function estimation with quadratic forms. *J. Statist. Plann. Inference* **35** 213–231.
MR1220417 [29] MUNK, A., BISSANTZ, N., WAGNER, T. and FREITAG, G. (2005). On difference-based variance estimation in nonparametric regression when the covariate is high dimensional. *J. R. Stat. Soc. Ser. B Stat. Methodol.* **67** 19–41. MR2136637 [30] MUNK, A. and RUYMGAART, F. (2002). Minimax rates for estimating the variance and its derivatives in nonparametric regression. *Aust. N. Z. J. Statist.* **44** 479–488. MR1934736 [31] OPSOMER, J., RUPPERT, D., WAND, M., HOLST, U. and HÖSSJER, O. (1999). Kriging with nonparametric variance function estimation. *Biometrics* **55** 704–710. [32] Peligrad, M. and Utev, S. (1997). Central limit theorem for linear processes. *Ann. Probab.* **25** 443–456. MR1428516 [33] Rice, J. (1984). Bandwidth choice for nonparametric kernel regression. *Ann. Statist.* **12** 1215–1230. MR0760684 [34] Ruppert, D., Wand, M., Holst, U. and Hössjer, O. (1997). Local polynomial variance-function estimation. *Technometrics* **39** 262–273. MR1462587 [35] Seifert, B., Gasser, T. and Wolf, A. (1993). Nonparametric estimation of the residual variance revisited. *Biometrika* **80** 373–383. MR1243511 [36] Shen, H. and Brown, L. (2006). Nonparametric modelling for time-varying customer service times at a bank call center. *Appl. Stoch. Models Bus. Ind.* **22** 297–311. MR2275576 [37] Shiryaev, A. N. (1996). *Probability*, 2nd ed. Springer, New York. MR1368405 [38] Spokoiny, V. (2002). Variance estimation for high-dimensional regression models. *J. Multivariate Anal.* **82** 111–133. MR1918617 [39] Stone, C. J. (1980). Optimal rates of convergence for nonparametric estimators. *Ann. Statist.* **8** 1348–1360. MR0594650 [40] von Neumann, J. (1941). Distribution of the ratio of the mean square successive difference to the variance. *Ann. Math. Statist.* **12** 367–395. MR0006656 [41] von Neumann, J. (1942). A further remark concerning the distribution of the ratio of the mean square successive difference to the variance. *Ann. Math. 
Statist.* **13** 86–88. MR0006657 [42] Wang, L., Brown, L., Cai, T. and Levine, M. (2006). Effect of mean on variance function estimation in nonparametric regression. Technical report, Dept. Statistics, Univ. Pennsylvania. Available at www-stat.wharton.upenn.edu/~tcai/paper/html/Variance-Estimation.html. Department of Statistics The Wharton School University of Pennsylvania Philadelphia, Pennsylvania 19104-6340 USA E-MAIL: email@example.com Department of Statistics Purdue University 150 N. University Street West Lafayette, Indiana 47907 USA E-MAIL: firstname.lastname@example.org
Scope: To identify the items required to be posted on a project's bulletin board to satisfy EEO requirements.

On projects which are funded in whole or in part with federal funds, the contractor is required to post certain informational documents at the jobsite for the benefit of the construction workers. All required information is to be posted on a bulletin board. The bulletin board shall be weatherproof and watertight and located in an area readily accessible to the project employees. The enclosed form has been developed to assist each Residency office in verifying that this contractual requirement has been fulfilled by the contractor. This form shall be completed by Residency personnel at the beginning of contract work and filed in the project file. The bulletin board should be monitored throughout the project duration to ensure that it remains on the project site and that the posted information does not weather and become unreadable.

George Raymond, P.E.
Construction Engineer

The Contractor is required to post a weatherproof and watertight bulletin board in a readily accessible area where employees gather to start work on the project site. The bulletin board must contain the following items:

| Yes | No | Item |
|-----|----|------|
| | | A. Poster-OFCCP-1420 “Equal Opportunity is the Law” |
| | | B. Poster-OFCCP-1420 “La Igualdad De Oportunidades De Empleo Es La Ley” |
| | | C. Poster-WH-1321 “Notice to Employees” Davis-Bacon Wage Rate |
| | | C-1. Poster-WH-1321 SPA “Notice to Employees” Davis-Bacon Wage Rate (Spanish) |
| | | D. Poster-FHWA-1022 “Notice Federal Aid Projects” |
| | | G. Poster-OSHA-3165 “Job Safety and Health Protection” |
| | | H. Poster-OSHA-3167 “Seguridad En El Trabajo Y Proteccion De La Salud” |
| | | I. Poster-WH-1088 “Your Rights Federal Minimum Wage” |
| | | J. Poster-WH-1088SP “Derechos De Empleados” |
| | | K. Poster-WH-1284 “Notice to Workers with Disabilities Paid at Special Minimum Wages” |
| | | L. Poster-WH-1420 “Your Rights Under the Family and Medical Leave Act of 1993” |
| | | M. Poster-WH-1420SP “Sus Derechos bajo La Ley de Ausencia Familiar y Medica de 1993” |
| | | N. Poster-WH-1462 “Notice Employee Polygraph Protection Act” |
| | | O. Poster-OSHA-1926.5 “Emergency Telephone Numbers of Medical Facilities and Ambulance Services” |
| | | Q. Poster-State Minimum Wage “Your Rights Under the Oklahoma Minimum Wage Act” (two posters) |
| | | R. Contractor’s EEO Policy Statement |
| | | S. Letter Appointing the Contractor’s EEO Officer for the Project |
| | | T. Letter Naming Contractor’s EEO Officer and all Subcontractors’ EEO Officer(s) with Contact Information |
| | | U. Letter of Certification of Nonsegregated Facilities |
| | | V. Contractor’s Training Program Information |
| | | W. Contractor’s Procedure for Resolving Discrimination Complaints |
| | | X. Contractor’s Designated Safety Officer |
| | | Y. Wage Scale from Project Contract |
| | | Z. ODEQ Authorization to Discharge Certificate with Emergency Contact Name and Phone Number |

Project Number: __________________________________________
Signature: _______________________________________________
Title: ___________________________________________________
Date: ___________________________________________________

OKLAHOMA DEPARTMENT OF TRANSPORTATION

DATE: December 26, 1997
TO: Field Division Engineers, Division Construction Engineers, and Resident Engineers
FROM: Byron Poynter, Construction Engineer
SUBJECT: CONSTRUCTION CONTROL DIRECTIVE NO. 971226. BULLETIN BOARD POSTINGS

On projects which are funded in whole or in part with federal funds, the contractor is required to post certain informational documents at the jobsite for the benefit of the construction workers. To assist you in verifying the postings, the list of the documents to be posted is enclosed.
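For tracking purposes, the posting checklist lends itself to a simple data representation. The sketch below is purely illustrative (the dictionary holds only a few of the form's items, and the function name is invented, not part of any ODOT system):

```python
# A few of the required postings, keyed by the item letters used on the
# form above (the full form runs A through Z).
REQUIRED_POSTINGS = {
    "A": 'Poster-OFCCP-1420 "Equal Opportunity is the Law"',
    "C": 'Poster-WH-1321 "Notice to Employees" (Davis-Bacon wage rates)',
    "D": 'Poster-FHWA-1022 "Notice Federal Aid Projects"',
    "R": "Contractor's EEO Policy Statement",
    "Y": "Wage Scale from Project Contract",
}

def missing_postings(checked_yes):
    """Item letters whose 'Yes' box was not checked during the site visit."""
    return sorted(set(REQUIRED_POSTINGS) - set(checked_yes))

# inspector checked A, C, and Y on this visit:
assert missing_postings({"A", "C", "Y"}) == ["D", "R"]
```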
Byron Poynter
Construction Engineer

Document list enclosed
Copy to: Distribution List

CHECKLIST FOR PROJECT BULLETIN BOARDS

PROJECT NUMBER ____________________ DATE _________

POSTERS:
___ EQUAL OPPORTUNITY IS THE LAW (EEOC-P/E-1)
___ LA IGUALDAD DE OPORTUNIDADES DE EMPLEO ES LA LEY (EEOC-P/E-S)
___ JOB SAFETY AND HEALTH PROTECTION (OSHA-2203)
___ SEGURIDAD EN EL TRABAJO Y PROTECCION DE LA SALUD (OSHA-2200)
___ NOTICE TO EMPLOYEES (USDOL-1321)
___ NOTICE FEDERAL AID PROJECTS (FHWA-1022)
___ YOUR RIGHTS FEDERAL MINIMUM WAGE (USDOL-1088)
___ YOUR RIGHTS FEDERAL MINIMUM WAGE (SPANISH)
___ WAGE RATE INFORMATION (FHWA-1495)
___ INFORMACION SOBRE ESCALAS DE SALARIOS (FHWA-1495A)
___ EMERGENCY TELEPHONE NUMBERS OF MEDICAL FACILITIES AND AMBULANCE SERVICES (OSHA-1926.5)
___ YOUR RIGHTS UNDER THE FAMILY AND MEDICAL LEAVE ACT OF 1993 (WH-1420)
___ NOTICE EMPLOYEE POLYGRAPH PROTECTION ACT (WH-1462)
___ STATE MINIMUM WAGE ACT - YOUR RIGHTS UNDER THE OKLA. MIN. WAGE ACT

OTHER ITEMS:
___ CONTRACTOR'S EEO POLICY STATEMENT
___ LETTER DESIGNATING EEO OFFICER
___ CERTIFICATION OF NONSEGREGATED FACILITIES
___ CONTRACTOR'S PROCEDURE FOR RESOLVING DISCRIMINATION COMPLAINTS
___ WAGE SCALE FROM PROJECT CONTRACT

The Contractor is required to post a weatherproof and watertight bulletin board in a readily accessible area where employees gather to start work on the project site. The bulletin board must contain the following items:

| Yes | No | Item |
|-----|----|------|
| | | A. Poster-OFCCP-1420 “Equal Opportunity is the Law” |
| | | B. Poster-OFCCP-1420 “La Igualdad De Oportunidades De Empleo Es La Ley” |
| | | C. Poster-WH-1321 “Notice to Employees” Davis-Bacon Wage Rate |
| | | D. Poster-FHWA-1022 “Notice Federal Aid Projects” |
| | | E. Poster-FHWA-1495 “Wage Rate Information” |
| | | F. Poster-FHWA-1495A “Informacion Sobre Escalas De Salarios” |
| | | G. Poster-OSHA-3165 “Job Safety and Health Protection” |
| | | H. Poster-OSHA-3167 “Seguridad En El Trabajo Y Proteccion De La Salud” |
| | | I. Poster-WH-1088 “Your Rights Federal Minimum Wage” |
| | | J. Poster-WH-1088SP “Derechos De Empleados” |
| | | K. Poster-WH-1284 “Notice to Workers with Disabilities Paid at Special Minimum Wages” |
| | | L. Poster-WH-1420 “Your Rights Under the Family and Medical Leave Act of 1993” |
| | | M. Poster-WH-1420SP “Sus Derechos bajo La Ley de Ausencia Familiar y Medica de 1993” |
| | | N. Poster-WH-1462 “Notice Employee Polygraph Protection Act” |
| | | O. Poster-OSHA-1926.5 “Emergency Telephone Numbers of Medical Facilities and Ambulance Services” |
| | | P. Poster-Whistle Blower “Know Your Rights Under The Recovery Act!” [ARRA funded (STIM) projects only] |
| | | Q. Poster-State Minimum Wage “Your Rights Under the Oklahoma Minimum Wage Act” (two posters) |
| | | R. Contractor’s EEO Policy Statement |
| | | S. Letter Appointing the Contractor’s EEO Officer for the Project |
| | | T. Letter Naming Contractor’s EEO Officer and all Subcontractors’ EEO Officer(s) with Contact Information |
| | | U. Letter of Certification of Nonsegregated Facilities |
| | | V. Contractor’s Training Program Information |
| | | W. Contractor’s Procedure for Resolving Discrimination Complaints |
| | | X. Contractor’s Designated Safety Officer |
| | | Y. Wage Scale from Project Contract |
| | | Z. ODEQ Authorization to Discharge Certificate with Emergency Contact Name and Phone Number |

Project Number: ________________________________
Signature: ______________________________________
Title: ___________________________________________
Date: ___________________________________________

OKLAHOMA DEPARTMENT OF TRANSPORTATION

DATE: December 22, 1997
TO: Field Division Engineers, Division Construction Engineers, and Resident Engineers
FROM: Byron Poynter, Construction Engineer
SUBJECT: CONSTRUCTION CONTROL DIRECTIVE NO. 971222.
PROJECTS WITH COMPLETE-BY DATES

For many projects, time charges begin on the effective date of the work order and end on a specific date. Typically, the amount of time between these dates is more than is needed to perform the work. The intent is that the contractor may begin work at any time during this period but must complete the work on or before the complete-by date. When a contract has a complete-by date, the specified date is a term of the contract. If the date is changed, it is a modification of the terms of the agreement and must be accomplished by a change order. The justification for moving a complete-by date must be that the Department has failed to clear the project of utilities, or some similar reason for which the contractor is not responsible. Weather may be used as a reason only when the amount of unusually severe weather is so extreme that, combined with the physical aspects of the project, it renders on-time completion impossible. It may not always be possible to have the final inspection on the day the work is completed. It is acceptable to set the completion date on the date the work is completed even if the final inspection is a week or so later. However, as noted in Control Directive No. 940406, the completion date releases the contractor from the physical project and therefore must be set only after all of the work is done.

Byron Poynter
Construction Engineer

Copy to: Distribution List

SPECIFICATION CHANGE - DRILLED SHAFT FOUNDATIONS

Portions of the 1996 Specification for Drilled Shaft Foundations (metric) appear to be infeasible with respect to constructability. The specification will soon be revised to comply with a consensus reached between the industry and ODOT.
This is your authority to make immediate adjustments to the handling of the Drilled Shafts on all ongoing projects in accordance with the following:

Section 516.02(b) Concrete: This section will be changed to provide for a slump of 150 mm at the initiation of concrete placement, with a slump of 100 mm maintained until the placement is complete, the temporary casing is removed, and the top of the shaft is aligned.

Section 516.04(a) Contractor Qualifications: This section will be changed to require a limited work plan for regular Drilled Shafts and a more comprehensive plan when slurry displacement or polymer-modified concrete is specified. The three-year experience reference will be omitted. A draft copy of the revised specification is enclosed.

Byron Poynter
Construction Engineer

Copy to: Distribution List

These Special Provisions revise, amend, and where in conflict, supersede applicable sections of the Standard Specifications for Highway Construction, Edition of 1996. These Special Provisions are generally written in the imperative mood. In sentences using the imperative mood, the subject, "the Contractor," is implied. Reference to the Contractor is also implied by the use of "shall", "shall be", or similar words and phrases. In material specifications, the subject may also be the supplier, fabricator, or manufacturer supplying material, products, or equipment for use on the project. Wherever "directed", "required", "prescribed", or other similar words are used, the "direction", "requirement", or "order" of the Engineer is intended. Similarly, wherever "approved", "acceptable", "suitable", "satisfactory", or other similar words are used, they mean "approved by", "acceptable to", or "satisfactory to" the Engineer. The word "will" generally pertains to decisions or actions of the Engineer. These specifications are written in metric (SI) units.
(Replace Subsection 516.02(b) with the following:) (b) **Concrete.** Furnish Class AA concrete modified as follows. Limit the maximum nominal aggregate size to 19 mm. Increase minimum cement content 10% for concrete placed under water or slurry. Adjust approved admixtures for site conditions to ensure that the concrete has at least 150 mm of slump at the start of placement and at least 100 mm of slump at the completion of placement and casing/reinforcement alignment. Maintain the concrete temperature under 30°C during placement. (Replace Subsection 516.04(a) with the following:) (a) **Contractor Qualifications and Installation Plan.** Use personnel with appropriate experience for the construction of drilled shafts. Submit an installation plan for approval at least 30 calendar days before constructing drilled shafts. Include the following information in the plan for drilled shafts. - Details of reinforcement placement including support and centering methods. - Details of concrete placement including proposed operational procedures for tremie and pumping methods. - Details of the concrete mix design including results of concrete trial mix and slump loss tests at ambient temperatures appropriate for the site conditions. Include the following information in the plan for slurry displacement drilled shafts. - List of proposed equipment to be used including cranes, drills, augers, bailing buckets, final cleaning equipment, desanding equipment, slurry pumps, core sampling equipment, tremies, concrete pumps, casings, etc. (Analyze the capacity of the equipment to drill the size, depth, and hardness of the planned excavations.) - Details of overall construction operation sequence and the sequence of shaft construction in bents or groups. - Details of shaft excavation methods and procedures for maintaining correct horizontal and vertical alignment of the excavation. - Details of excavated materials use or disposal. 
- Details of the methods to mix, circulate, desand, dispose of the slurry. - Details of methods to clean the shaft excavation. - Personnel resumes of project experiences and appropriate documentation including names, addresses, and telephone numbers of organizations or associations that verify the information. Approval of the installation plan, personnel, and, if appropriate, trial shafts does not relieve the responsibility for obtaining the required results. Revise and resubmit for approval if the installation plan does not provide satisfactory results. Submit any request for changing the type of shaft elevations, as needed, with the installation plan. (Replace from the beginning of Subsection 516.04(c)1 through the end of Subsection 516.04(c)1.1 with the following:) 1. **Hole Excavation.** Excavate holes according to the approved installation plan. Before drilling, excavate for structure footings supported on drilled shafts and construct embankments and fills. Position the drilled shaft within 75 mm of the required position in a horizontal plane at the top of the shaft elevation. Do not allow the alignment of a vertical shaft to vary from the required alignment by more than one percent of shaft depth. Do not allow the alignment of a battered shaft to vary from the required battered alignment by more than 2% of shaft depth. Use excavation equipment and methods that provide a flat bottom for the completed shaft, not deviating from a level horizontal plane more than 3% of shaft diameter. Use excavation equipment that provides a shaft diameter not less than 25 mm smaller than the required diameter. Excavate to the plan elevation, extending the excavation below the plan elevation only when it is determined that the load bearing material encountered during excavation does not satisfy plan requirements. Take soil samples or rock cores as shown on the plans or directed by the Engineer to determine the character of the material directly below the shaft excavation. 
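The excavation tolerances above reduce to simple arithmetic on the shaft dimensions. A minimal sketch of the position and alignment checks (the helper function is hypothetical, not part of the specification):

```python
def check_shaft_tolerances(depth_mm, horiz_offset_mm, alignment_dev_mm,
                           battered=False):
    """Check two of the Subsection 516.04(c)1 tolerances quoted above:
    horizontal position within 75 mm of the required position, and
    alignment deviation within 1% of shaft depth for vertical shafts
    (2% for battered shafts)."""
    align_limit_mm = (0.02 if battered else 0.01) * depth_mm
    return {
        "position_ok": horiz_offset_mm <= 75.0,
        "alignment_ok": alignment_dev_mm <= align_limit_mm,
    }

# A 12 m vertical shaft, 60 mm off position and 100 mm out of plumb,
# passes both checks (the alignment limit is 1% of 12000 mm = 120 mm):
assert check_shaft_tolerances(12000, 60, 100) == {
    "position_ok": True, "alignment_ok": True}
# The same deviation on a 9 m vertical shaft fails alignment (limit 90 mm):
assert not check_shaft_tolerances(9000, 60, 100)["alignment_ok"]
```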
Immediately notify the Engineer of any significant deviation from the plans in subsurface conditions that may result in a shaft depth change. Check dimensions and alignment of each shaft excavation in the presence of the Engineer before concrete placement. Final shaft depth shall be measured after final cleaning. When it is determined that the hole sidewall has softened due to excavation methods, swelled due to delays in concreting, or degraded as a result of slurry cake buildup, overream the sidewall a minimum of 12 mm and maximum of 75 mm to sound material. Immediately prior to concrete placement, clean the hole so no more than 50% of the bottom of each hole has more than 12 mm of sediment and the maximum depth of sediment or debris at any place on the bottom of the hole does not exceed 38 mm. For dry holes, reduce the depth of water to 150 mm or less before placing concrete. Use one or more of the following methods for excavation. Do not use methods prohibited by the plans or special provisions: 1.1 **Dry Method.** Use the dry construction method at sites where the groundwater level and soil conditions are suitable to permit construction of the shaft in a relatively dry excavation and where the sides and bottom of the shaft may be visually inspected before placing concrete. The dry method consists of drilling the shaft, removing accumulated water, removing loose material from the excavation, placing the reinforcing cage, and concreting the shaft in a relatively dry condition. The dry construction method can only be used when the trial shaft excavation demonstrates the following: - Less than 300 mm of water accumulates above the bottom of the hole during a one hour period with no pumping. - The sides and bottom of the hole remain stable without detrimental caving, sloughing, or swelling over a four hour waiting period immediately following the completion of excavation. 
- Loose material and water can be satisfactorily removed before inspection and before concrete placement.

When caving, sloughing, or swelling conditions exist, or when groundwater seepage exceeds the described limits, discontinue the dry method and use an approved alternative method.

Scope: To establish the procedure for monitoring payrolls submitted by the contractor and performing the periodic wage rate interviews.

The purpose of this Directive is to offer guidelines for monitoring the requirements of the Davis-Bacon Act. The primary purpose of this act is to ensure that persons working on Federally funded contracts (Federal-aid contracts on the Federal-aid highway system) are paid at least the minimum hourly wage rate for their job classification. Specific information concerning the Davis-Bacon Act can be found on the U.S. Department of Labor, Wage and Hour Division's website at www.dol.gov. The applicable federal regulations are found in 29 CFR Part 1 and Part 5.

Contracts containing projects designated as Local Roads or Rural Minor Collectors exempt the contractor from submitting weekly payrolls. Additionally, the Residency office administering the contract will not be required to perform the wage rate interviews of the contractor's employees on these projects. ODOT identifies Local Road or Rural Minor Collector projects by placing a "D" in the project number, just before the "hole number" (e.g., BRO-144D(33)CO). All other contracts containing a project which is Federally funded, in whole or in part, will require that the contractor submit weekly payrolls and will require the performance of the wage rate interviews by the Residency. For contracts that contain Federally funded projects tied with projects that are exempt, all of the projects in the contract will require both the submittal of weekly payrolls by the contractor and the performance of the wage rate interviews by the Residency.
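The project-number convention for the exemption can be checked mechanically. The sketch below is illustrative only: the pattern is inferred from the directive's single example, and the function name is invented:

```python
import re

def is_exempt_project(project_number):
    """True when the project number carries the Local Road / Rural Minor
    Collector designation: a 'D' immediately before the parenthesized
    'hole number', as in 'BRO-144D(33)CO'."""
    return re.search(r"D\(\d+\)", project_number) is not None

# the directive's example is exempt; the same number without the 'D' is not
assert is_exempt_project("BRO-144D(33)CO")
assert not is_exempt_project("BRO-144(33)CO")
```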
**Contractor Payrolls** The prime contractor and all approved subcontractors performing work on a Federally funded contract are required to submit weekly payroll records to the Residency. All payroll records from the prime contractor or subcontractor shall be received within two weeks of the end of the payroll reporting period. Payrolls for periods of "no work in progress" will not be required. The Residency will be required to stamp all payrolls with the date on which they were received. The Residency must monitor the payroll records received weekly and should notify the prime contractor in writing of any failure to submit the required payrolls or to submit a record with the necessary information (as detailed below) within the two-week period. The written notification to the prime contractor may state actions that could be taken by the Residency, including holding future progress payments until the contractual requirement has been satisfied. Any such correspondence must be stored in the project's payroll files. **Wage Rate Interviews** Residency employees shall conduct systematic spot interviews of the prime contractor's and approved subcontractors' employees to verify that the minimum wage and other labor standards of the contract are being fully complied with and that no employee is misclassified. Only those employees, laborers, and mechanics whose classifications are subject to the Davis-Bacon Act will be interviewed. Examples of exempt classifications include supervisors, foremen, salaried employees, and survey crews. One employee of the prime contractor or a subcontractor shall be interviewed each month during the duration of the original contract time. A minimum of two employees shall be interviewed on a given contract. The Residency should ensure that interviews cover subcontractors' employees as well as the prime contractor's employees. An employee shall not be interviewed more than once per contract.
Refer to the attached interview form. This form shall be used to record the information obtained from the interview. Once an employee is interviewed, the results of the interview should be checked against the information contained in the weekly payroll record for that date, and the payroll record should be reviewed for completeness. There is no mandatory format for the contractor's or subcontractor's payroll records; however, payroll records received by the Residency shall contain, at a minimum, the following information:

1. Each employee's full name and individual employee identification number. The employee's home address and full social security number shall not be used.
2. Each employee's classification.
3. Each employee's hourly wage rate and, where applicable, overtime hourly rate.
4. The daily and weekly hours worked in each of the employee's classifications, including actual overtime hours worked.
5. The itemized deductions made for each employee. Any deductions listed as "Other" shall be explained on page 2 of the payroll form.
6. The net wages paid to each employee.

During the Residency's review of the payroll record from the prime contractor or subcontractor for whom the interviewed employee works, the Residency will review the record and note any deviations from the following:

1. The employee was paid at least 1½ times the regular hourly rate for every hour worked beyond 40 hours per week.
2. The employee was paid, at a minimum, the rate specified in the contract for the associated classification.
3. The record contains a certified statement executed by the person who supervises the payment of wages by the contractor or subcontractor with respect to the wages paid during the payroll period.

Any deficiency discovered during the Residency's review shall be brought to the appropriate contractor's attention for corrective action. All corrections should be reflected on future payrolls submitted to the Residency.
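The first two payroll checks amount to simple comparisons against the contract wage rates. A minimal sketch (field names and the helper are illustrative, not ODOT's payroll format):

```python
def review_payroll_line(hours, hourly_rate, overtime_rate, minimum_rate):
    """Flag deviations from the two wage checks described above:
    (1) the regular rate is at least the contract minimum for the
        classification, and
    (2) hours beyond 40 per week are paid at 1.5x the regular rate."""
    issues = []
    if hourly_rate < minimum_rate:
        issues.append("rate below contract minimum for classification")
    if hours > 40 and overtime_rate < 1.5 * hourly_rate:
        issues.append("overtime not paid at 1.5x the regular rate")
    return issues

# 44 hours at $20/h with overtime at $28/h against an $18.50 minimum:
# the rate is fine, but overtime should be at least $30/h.
assert review_payroll_line(44, 20.00, 28.00, 18.50) == [
    "overtime not paid at 1.5x the regular rate"]
```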
**Personnel Providers**

Contractors or subcontractors may use workers from a provider firm. The payroll submittal and interview requirements are the same as in any contract utilizing Federal funding. Payrolls must be submitted and certified by the provider on behalf of the contractor. The certified payrolls must show the actual wages paid to the employees, regardless of the employer, whether a labor finder or contractor.

**Deficiency Reporting**

The Residency shall report all cases of classification or wage rate violations discovered during the Residency review or received by complaint to ODOT's Civil Rights Division – External Programs.

George Raymond, P.E. Construction Engineer

**Employee Interview**

1. Have you seen the posting of minimum wage rates? - [ ] Yes/Sí - [ ] No
2. Have you been advised that this project has minimum established wage rates? - [ ] Yes/Sí - [ ] No
3. What is your job classification? / ¿Cuál es su clasificación de trabajo? ______________
4. What is your wage rate? / ¿Cuál es su tarifa de salario? ______________
7. Are you currently enrolled in a training program? - [ ] Yes/Sí - [ ] No
8. Is your daily work consistent with your job classification? - [ ] Yes/Sí - [ ] No
9. Who do you work for? / ¿Para quién trabaja usted? ______________
10. How are you paid (cash or check)? / ¿Cómo le pagan? (¿Efectivo o cheque?) ______________
11. Are you paid weekly? - [ ] Yes/Sí - [ ] No
12. Are you paid overtime for work over 40 hours per week? - [ ] Yes/Sí - [ ] No
13. Is there money deducted from your pay besides income and social security taxes? - [ ] Yes/Sí - [ ] No
14. Do you know where the project EEO Bulletin Board is? - [ ] Yes/Sí - [ ] No
15. Do you know who your Company EEO Officer is? - [ ] Yes/Sí - [ ] No

**EMPLOYEE'S NAME**

Printed Name / Nombre en letra: ______________________________________
Signature / Firma: ______________________________________________ (Employee Signature)

**Payroll Review**

1. Has the Prime Contractor submitted his and all Subcontractors' weekly payrolls? - [ ] Yes/Sí - [ ] No
2. Is the Contractor paying 1½ times the regular rate for hours worked above 40 hours? - [ ] Yes/Sí - [ ] No
3. Have wage rates been checked to ensure that rates paid were at least as much as the minimum rate established? - [ ] Yes/Sí - [ ] No
4. Were any discrepancies noted? (If discrepancies are found in the wage rate or classification, examine the documents in the file and verify that the amount due is paid.) - [ ] Yes/Sí - [ ] No
5. In your opinion, has the Contractor taken the required affirmative action to comply with all of the E.E.O. requirements in his contract? - [ ] Yes/Sí - [ ] No Remarks:
6. Did you compare wages with Contract Wages (Davis-Bacon Wages)? - [ ] Yes/Sí - [ ] No

Reviewed By: _________________________________________ Date: _______________________________________________

OKLAHOMA DEPARTMENT OF TRANSPORTATION

DATE: November 14, 1997
TO: Field Division Engineers, Division Construction Engineers, and Resident Engineers
FROM: Byron Poynter, Construction Engineer
SUBJECT: CONSTRUCTION CONTROL DIRECTIVE NO. 971114

This Directive Cancels Construction Control Directive No. 960724

MONITORING THE DAVIS-BACON ACT

The purpose of this Directive is to offer guidelines for monitoring the Davis-Bacon Act. The primary purpose of this act is to ensure that persons working on projects which include federal funds are paid at least the minimum hourly rate for their job classification. Projects designated as Local Roads or Rural Collectors are exempt from submittal of payrolls. These projects are identified with a "D" in the Project Number, just before the "hole number". Please advise the Contractor early of this exemption. All other projects which are federally funded, all or in part, require submission of contractor payrolls. Payrolls should be received within two weeks of the end of the payroll period. Submittal of payrolls is a contractual requirement.
You must ensure that they are submitted. Payrolls for periods of "no work in progress" are not required. In order to compare the payrolls to the contract rates, the payroll reports must indicate the worker's classification; that is, the job title must relate to the contract titles. Example: Concrete Finisher, Form Setter, Bulldozer Operator, Motor Grader (Fine Grade), Motor Grader (Rough), etc.

Construction Control Directive No. 971114 Continued.

INTERVIEWS

Interview project workers periodically as to hourly rate of pay and compare to the payrolls to ensure that at least the minimum hourly rate is paid. Interviews should be conducted weekly for the first two or three weeks of a project to ensure that the contractor is in compliance. Afterwards, one interview per month, randomly selected, should be adequate. A minimum of ten percent of all workers should be interviewed during the course of the project.

PERSONNEL PROVIDERS

Some contractors use workers from a Provider Firm. The requirements are the same. Payrolls must be submitted and certified by the provider on behalf of the contractor. If a deficiency is detected, work through the contractor to resolve the matter.

DEFICIENCY REPORTING

Report all cases of act violations or deficiencies to the Construction Division (for the annual report to the FHWA).

Byron Poynter Construction Engineer

Copy to: Distribution list.

Scope: To establish the procedure for monitoring payrolls submitted by the contractor and performing the periodic wage rate interviews. The purpose of this Directive is to offer guidelines for monitoring the requirements of the Davis-Bacon Act. The primary purpose of this act is to ensure that persons working on Federally funded contracts (Federal aid contracts on the Federal aid highway system) are paid at least the minimum hourly wage rate for their job classification. Specific information concerning the Davis-Bacon Act can be found on the U.S.
Department of Labor, Wage and Hour Division's website at www.dol.gov. The applicable federal regulations are found in 29 CFR Part 1 and Part 5. Contracts containing projects designated as Local Roads or Rural Minor Collectors exempt the contractor from submitting weekly payrolls. Additionally, the Residency office administering the contract will not be required to perform the wage rate interviews of the contractor's employees on these projects. ODOT identifies Local Road or Rural Minor Collector projects by placing a "D" in the project number, just before the "hole number" (e.g., BRO-144D(33)CO). All other contracts containing a project which is Federally funded, all or in part, will require that the contractor submit weekly payrolls and will require the performance of the wage rate interviews by the Residency. For contracts that contain Federally funded projects tied with projects that are exempt, all of the projects in the contract will require both the submittal of weekly payrolls by the contractor and the performance of the wage rate interviews by the Residency.

**Contractor Payrolls**

The prime contractor and all approved subcontractors performing work on a Federally funded contract are required to submit weekly payroll records to the Residency. All payroll records from the prime contractor or subcontractor shall be received within two weeks of the end of the payroll reporting period. Payrolls for periods of "no work in progress" will not be required. The Residency will be required to stamp all payrolls, indicating the date on which they were received. The Residency must monitor the payroll records received weekly and should notify the prime contractor in writing of any failure to submit the required payrolls, or to submit a record with the necessary information (as detailed below), within the two-week period.
The written notification to the prime contractor may state actions that could be taken by the Residency, including holding future progress payments until the contractual requirement has been satisfied. Any such correspondence must be stored in the project's payroll files.

**Wage Rate Interviews**

Residency employees shall conduct systematic spot interviews of the prime contractor's and approved subcontractors' employees to verify that the minimum wage and other labor standards of the contract are being fully complied with and that no employee is misclassified. Only those employees, laborers, and mechanics whose classifications are subject to the Davis-Bacon Act will be interviewed. Examples of exempt classifications include supervisors, foremen, salaried employees, and survey crews. One employee of the prime contractor or a subcontractor shall be interviewed each month for the duration of the original contract time, and a minimum of two employees shall be interviewed on any given contract. The Residency should ensure that subcontractors' employees are interviewed as well as the prime contractor's. An employee shall not be interviewed more than once per contract. Refer to the attached interview form, which shall be used to record the information obtained from the interview. Once an employee is interviewed, the results of the interview should be checked against the information contained in the weekly payroll record for that date, and the payroll record should be reviewed for completeness. There is no mandatory prescribed format for the contractor's or subcontractor's payroll records; however, payroll records received by the Residency shall contain, at a minimum, the following information:

1. Each employee's full name and individual employee identification number. The employee's home address and full social security number shall not be used.
2. Each employee's classification.
3. Each employee's hourly wage rate and, where applicable, overtime hourly rate.
4. The daily and weekly hours worked in each of the employee's classifications, including actual overtime hours worked.
5. The itemized deductions made for each employee.
6. The net wages paid to each employee.

During the Residency's review of the payroll record from the prime contractor or subcontractor for whom the interviewed employee works, the Residency will note any deviations from the following:

1. The employee was paid at least 1½ times the regular hourly rate for every hour worked beyond 40 hours per week.
2. The employee was paid, at a minimum, the rate specified in the contract for the associated classification.
3. The record contains a certified statement executed by the person who supervises the payment of wages by the contractor or subcontractor with respect to the wages paid during the payroll period.

Any deficiency discovered during the Residency's review shall be brought to the appropriate contractor's attention for corrective action. All corrections should be reflected on future payrolls submitted to the Residency.

**Personnel Providers**

Contractors or subcontractors may use workers from a provider firm. The payroll submittal and interview requirements are the same as in any contract utilizing Federal funding. Payrolls must be submitted and certified by the provider on behalf of the contractor.

**Deficiency Reporting**

The Residency shall report all cases of classification or wage rate violations discovered during the Residency review or received by complaint to ODOT's Regulatory Services Branch.

Monitoring The Davis-Bacon Act, January 22, 2009. George Raymond, P.E., Construction Engineer

Due to ever-changing circumstances, especially on projects of long duration, there is a need for some informal documentation of changes made in the field, whether they result in a formal change or not.
This will help to ensure that any agreement made in the field will proceed to implementation and payment even if the agreeing parties are no longer present when the project is finalized. We have modified a form used for this purpose by another state and suggest that it be used to secure field agreements. The superintendent and the Resident Engineer (or delegated inspector) should sign the agreement (copy enclosed).

Byron Poynter Construction Engineer

Copy to: Distribution List

STATE OF OKLAHOMA DEPARTMENT OF TRANSPORTATION FIELD WORK ORDER

TO: (Contractor, Name and Address) PROJECT NO. ORDER NO. STATION

You are hereby ordered to perform extra work described below in compliance with Subsections 104.03 and 109.03 of the Specifications and the conditions listed herein:

DESCRIPTION OF WORK: (Include specifications if non-standard items)

| ITEM OF WORK | UNIT | APPROX QUANTITY | AGREED UNIT PRICE | AMOUNT |
|--------------|------|-----------------|-------------------|--------|
| | | | | |
| | | | | |
| | | | | |

FOR THE DEPARTMENT OF TRANSPORTATION Signature: Name: Title: DATE:

FOR THE CONTRACTOR We Concur Signature: Name: Title: DATE:

OKLAHOMA DEPARTMENT OF TRANSPORTATION

DATE: September 12, 1997
TO: Field Division Engineers, Division Construction Engineers, and Resident Engineers
FROM: Byron Poynter, Construction Engineer
SUBJECT: CONSTRUCTION CONTROL DIRECTIVE NO. 970912, STORMWATER PERMITS TO DEQ

THIS DIRECTIVE CANCELS CONTROL DIRECTIVE NO. 970826.

The National Pollutant Discharge Elimination System (NPDES) permit we have been operating under since 1992 expired September 9, 1997. Projects which will not be stabilized until after October 9, 1997 will require a new Notice of Intent (NOI) submittal. At the same time, the responsibility for permitting has been moved from the Environmental Protection Agency (EPA) to the Oklahoma Department of Environmental Quality (DEQ).
For projects which require a new submittal of the NOI, a copy must be completed for each co-permittee. Use the enclosed DEQ Form and submit to the following address:

Stormwater Notice of Intent
Oklahoma Department of Environmental Quality
Water Quality Division
1000 NE Tenth Street
Oklahoma City, Oklahoma 73117-1212

Even though the national permit has expired, the EPA has transferred their files to the DEQ. Because of this, you will have to submit a Notice of Termination (NOT) to close out any project that has become stabilized. The new NOT form is not yet available; use the old EPA form and submit to the DEQ until you receive the new DEQ form.

BILLING: There is now a $240.00 fee for filing the NOI. The fee is paid annually, and for projects that have a duration of more than one year, the fee will be paid for each year until stabilization occurs. The Department has arranged to pay this fee directly on a monthly billing by the DEQ. You will not need to be concerned with payment, except that the billing will continue until you have submitted a NOT. Please send a copy of the NOI and the NOT to the Comptroller to assist with the payment and termination of charges.

ADDITIONAL CERTIFICATION: The new guidelines (copy enclosed) require that subcontractors sign off on a certification ensuring that the subcontractor is aware of the terms of the NOI (page 19, Part E.1 of the guidelines). Since these submittals are required 30 days before work begins, we will make this certification part of the subcontractor approval system. When you receive a subcontractor approval, place a copy with the Stormwater Runoff Plan.
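The billing rule above (a $240.00 fee, recurring annually until the NOT is submitted) is easy to project for budgeting purposes. The sketch below is illustrative only; it assumes the fee recurs once per year or partial year until stabilization, and DEQ actually bills the Department directly:

```python
import math

def total_noi_fees(duration_years, annual_fee=240.00):
    """Project the total NOI filing fees over a project's life, assuming
    the annual fee is charged once per year (or partial year) until
    stabilization and submittal of the NOT.  Illustrative sketch only;
    the Residency never pays this itself, the Department does."""
    return annual_fee * math.ceil(duration_years)

print(total_noi_fees(1.0))   # 240.0  -- project stabilized within one year
print(total_noi_fees(2.5))   # 720.0  -- fee recurs for each year until stabilization
```

The example underlines why the directive asks for prompt NOT submittal: billing continues, year after year, until the NOT reaches DEQ.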
Enclosed: New NOI Form, New Guidelines

Byron Poynter Construction Engineer

Copy to: Distribution List Storm.bcp 2 of 2

Notice of Intent (NOI) for Storm Water Discharges Associated with CONSTRUCTION ACTIVITY Under an OPDES General Permit

Submission of this Notice of Intent constitutes notice that the party identified in Section II of this form intends to be authorized by an OPDES permit issued for storm water discharges associated with industrial activity in the State of Oklahoma. Becoming a permittee obligates such discharger to comply with the terms and conditions of the permit. IN ORDER TO OBTAIN AUTHORIZATION, ALL REQUESTED INFORMATION MUST BE PROVIDED ON THIS FORM. SEE INSTRUCTIONS ON BACK OF FORM.

Facility Owner/Operator Information
Name: ________________________________ Phone: ________________________________
Address: ________________________________ Status of Owner/Operator: ______
City: ________________________________ State: ______ Zip Code: __________ - ______

Site Information
Name of the Project: ________________________________
Location of Project: ________________________________
City: ________________________________ State: ______ Zip Code: __________ - ______
Quarter: _________ Section: _________ Township: _________ Range: _________
Latitude: _________ Longitude: _________ County: _________
Is Pollution Prevention Plan (PPP) developed? □ Yes □ No
Is PPP Implemented? □ Yes □ No
Address or location of PPP for viewing: □ Address in I. above □ Address in II.
above □ Other, please specify below
Address: ________________________________ Phone: ________________________________
State: ______ Zip Code: __________ - ______
Other Operator OPDES Number: ________________________________ or NPDES Number: ________________________________
Name of Receiving Water: ________________________________
Construction Start Date (Month/Day/Year): ____ / ____ / ____ Completion Date (Month/Day/Year): ____ / ____ / ____
Estimated area to be disturbed (to nearest acre): ________________________________
Is the Storm Water Pollution Prevention Plan in compliance with all applicable local sediment and erosion plans? □ Yes □ No □ None

Certification

I certify under penalty of law that I have read and understand the Part I.B. eligibility requirements for coverage under the general permit for storm water discharges from construction activities, including those requirements relating to the protection of endangered species identified in Part I.B.3.a. I further certify that I have followed the procedures found in Addendum A to protect listed endangered and threatened species and designated critical habitat and that the discharges covered under this permit and BMPs to control storm water runoff meet one or more of the eligibility requirements of Part I.B.3.e.(1) of this permit. Check the box(es) corresponding to the part of Part I.B.3.e.(1) under which you claim compliance with the eligibility requirements of this permit: a) □ b) □ c) □ d) □ e) □

I understand that continued coverage under this permit is contingent upon maintaining eligibility as provided for in Part I.B. Utility companies: check here □ if applying for coverage as described in Section II(A)(4). The following certification statement additionally applies: I certify that I understand that authorization to discharge is contingent upon a principal operator of the construction project being granted coverage under this, or an alternative NPDES permit.
I certify under penalty of law that this document and all attachments were prepared under my direction or supervision in accordance with a system designed to assure that qualified personnel properly gather and evaluate the information submitted. Based on my inquiry of the person or persons who manage this system, or those persons directly responsible for gathering the information, the information submitted is, to the best of my knowledge and belief, true, accurate, and complete. I am aware there are significant penalties for submitting false information, including the possibility of fine and imprisonment for knowing violations.

Date: ________________________________ Signature: ________________________________

Instructions - DEQ Form xx
Notice of Intent (NOI) for Storm Water Discharges Associated with Construction Activity To Be Covered Under an OPDES Permit

Who Must File A Notice Of Intent Form

Under the provisions of the Clean Water Act, as amended (33 U.S.C. § 1251 et seq., the Act), the Oklahoma Environmental Code, Title 27A of the Oklahoma Statutes, Section 2-14-101 et seq., and the rules at OAC 252:002-15, discharge of storm water from construction activities is prohibited without an Oklahoma Pollutant Discharge Elimination System (OPDES) permit. The conduct of a construction project which discharges storm water into a water body requires a NOI under an OPDES Storm Water General Permit (GP 005). If you have questions about whether you need a permit under the OPDES Storm Water program, or if you need information, write to or telephone the Water Quality Division, Department of Environmental Quality (DEQ), at (405) 271-2205.

Where to File NOI Form

The form must be sent to the following address: DEQ Water Quality Division, 1000 NE 10th Street, Oklahoma City, OK 73117-1212

Completing The Form

You must type or print, using upper-case letters, in the appropriate areas only. Please place each character between the marks.
Abbreviate if necessary to stay within the number of characters allowed for each item. Use one space for breaks between words, but not for punctuation marks unless they are needed to clarify your response. If you have any questions on this form, call DEQ-WQD at (405) 271-5205.

Section I. Facility Owner/Operator Information

Provide the legal name, mailing address, and telephone number of the person, firm, public organization, or any other entity that either individually or taken together meet the following two criteria: (1) they have operational control over the specifications (including the ability to make modifications in specifications); and (2) they have actual day-to-day operational control of those activities at the site necessary to ensure compliance with plan requirements and permit conditions. Do not use a colloquial name. Enter the appropriate letter to indicate the legal status of the operator of the facility: F = Federal; S = State; M = Public (other than Federal or State); P = Private.

Section II. Site Information

Enter the Project's official or legal name and complete street address, including city, county, state, ZIP code, and phone number. If the site lacks a street address, indicate with a general statement the location of the site (e.g., intersection of state highways 61 and 34). The applicant must also provide the latitude and longitude of the facility in degrees, minutes, and seconds, or the area of subdivision (section, quarter section, township and range [to the nearest quarter section]), for the approximate center of the site. The location for subdivisions shall be quarter, section, township, and range. The latitude and longitude of your facility can be located on USGS quadrangle maps. The quadrangle maps can be obtained at 1-800-447-4762. Longitude and latitude may also be obtained at the Census Bureau Internet site: http://www.census.gov/cgi-bin/gazetteer.
Only one location description is needed: address; section, township, and range; or latitude and longitude. Indicate whether the facility is on Indian Lands. Indicate if the Pollution Prevention Plan (PPP) has been developed, and also indicate if the PPP has been implemented. Refer to Part IV of the General Permit for information on PPPs. "Yes" means the PPP is ready to be implemented upon notification of coverage or at the time the NOI form is submitted. Provide the address and phone number where the PPP can be viewed, if different from an address previously given, and check the appropriate box. Enter the name of the receiving water body; if no water body exists on site, enter the name of the closest predominant receiving water body. Contact the appropriate state office to obtain more information on water bodies. Enter the construction start and completion dates, listing four digits for the year. Enter the estimated area to be disturbed, including but not limited to grubbing, excavation, grading, and utilities and infrastructure installation; indicate to the nearest acre. Indicate if the PPP is in compliance with all other applicable local sediment and erosion plans. Indicate if any structures listed in Attachment A of the General Permit are in proximity to the storm water discharges or BMP construction associated with the discharges and requirements to be covered by this permit. Indicate if land disturbing activities will be conducted for the construction of storm water controls. Indicate if the applicant is subject to and in compliance with a written historic preservation agreement.

Section III. Certification

Indicate under which criteria the applicant claims compliance with the Endangered Species Act. Refer to Part I.B.3.e.(1) of the General Permit. If the applicant is a Utility Company, indicate if applying for coverage as described in Section II.(A).(4) of the General Permit.
Federal statutes provide for severe penalties for submitting false information on this application form. Federal regulations require this application to be signed as follows:

For a corporation: by a responsible corporate officer, which means (i) a president, secretary, treasurer, or vice-president of the corporation in charge of a principal business function, or any other person who performs similar policy- or decision-making functions; or (ii) the manager of one or more operating facilities employing more than 250 persons or having gross annual sales or expenditures exceeding $25 million (in second-quarter 1980 dollars), if authority to sign has been assigned or delegated to the manager in accordance with corporate procedures;

For a partnership or sole proprietorship: by a general partner or the proprietor, respectively; or

For a municipality, state, Federal, or other public facility: by either a principal executive or ranking elected official.

PLEASE MAKE SURE YOU ACQUIRE A COPY OF THIS PERMIT AND READ ALL TERMS AND CONDITIONS.

GENERAL PERMIT (GP-005) FOR STORM WATER DISCHARGES FROM CONSTRUCTION ACTIVITIES WITHIN THE STATE OF OKLAHOMA

OKLAHOMA DEPARTMENT OF ENVIRONMENTAL QUALITY WATER QUALITY DIVISION SEPTEMBER 8, 1997

Storm Water General Permit for Construction Activities Cover Page Permit No. [GP-005]

Authorization to Discharge Under the Oklahoma Pollutant Discharge Elimination System Act (OPDES)

In compliance with the provisions of the OPDES, 27A O.S. 2-6-201 et seq., as amended, except as provided in Part I.B.3 of this permit, operators of storm water discharges from construction activities, located in an area specified in Part I.A., are authorized to discharge in accordance with the conditions and requirements set forth herein.
Only those operators of storm water discharges from construction activities in the general permit area who submit a Notice of Intent in accordance with Part II of this permit are authorized under this general permit. This permit shall become effective on September 9, 1997. This permit and the authorization to discharge shall expire at midnight, September 8, 2002. (Signature of Executive Director) This signature is for the permit conditions in Parts I through IX and for any additional conditions in Part X which apply to facilities located in Oklahoma.

General Permit for Storm Water Discharges From Construction Activities: Table of Contents

Part I. Coverage Under this Permit
A. Permit Area
B. Eligibility
C. Obtaining Authorization
D. Terminating Coverage

Part II. Notice of Intent Requirements
A. Deadlines for Notification
B. Contents of Notice of Intent
C. Where to Submit

Part III. Special Conditions, Management Practices, and Other Non-Numeric Limitations
A. Prohibition on Non-Storm Water Discharges
B. Releases in Excess of Reportable Quantities
C. Spills
D. Discharge Compliance with Water Quality Standards
E. Responsibilities of Operators

Part IV. Storm Water Pollution Prevention Plans
A. Deadlines for Plan Preparation and Compliance
B. Signature, Plan Review and Making Plans Available
C. Keeping Plans Current
D. Contents of Plan
E. Contractor and Subcontractor Certifications

Part V. Retention of Records
A. Documents
B. Accessibility
C. Addresses

Part VI. Standard Permit Conditions
A. Duty to Comply
B. Continuation of the Expired General Permit
C. Need to Halt or Reduce Activity not a Defense
D. Duty to Mitigate
E. Duty to Provide Information
F. Other Information
G. Signatory Requirements
H. Penalties for Falsification of Reports
I. Oil and Hazardous Substance Liability
J. Property Rights
K. Severability
L. Requiring an Individual Permit or an Alternative General Permit
M. State Environmental Laws
N. Proper Operation and Maintenance
O. Inspection and Entry
P.
Permit Actions

Part VII. Reopener Clause

Part VIII. Termination of Coverage
A. Notice of Termination
B. Addresses

Part IX. Definitions

Addenda
A. Endangered Species Guidance
B. Notice of Intent (NOI) Form
C. Notice of Termination (NOT) Form

Part I. Coverage Under This Permit

A. Permit Area

The permit language is structured as if it were a single permit, with area-specific conditions specified in Part XI. Permit coverage is actually provided by legally separate and distinctly numbered authorizations.

B. Eligibility

1. This permit authorizes discharges of storm water from construction activities as defined in OAC 252:605-1-5(c)(3)(K) [being clearing, grading, and excavation activities that result in the disturbance of five or more acres of total land area, including smaller areas that are part of a larger common plan of development or ownership] and those construction site discharges designated by the Executive Director as needing a storm water permit. Any discharge authorized by a different NPDES permit may be commingled with discharges authorized by this permit.

2. This permit also authorizes storm water discharges from support activities related to a construction site (e.g., concrete or asphalt batch plants, equipment staging yards, material storage areas, etc.) from which there otherwise is a storm water discharge from a construction activity, provided: a. The support activity is not a commercial operation serving multiple unrelated construction projects, and does not operate beyond the completion of the construction activity; and b. Appropriate controls and measures are identified in the storm water pollution prevention plan for the discharges from the support activity areas.

3. Limitations on Coverage. The following storm water discharges from construction sites are not authorized by this permit: a. Post Construction Discharges.
Storm water discharges that originate from the site after construction activities have been completed and the site has undergone final stabilization. b. Discharges Mixed with Non-storm Water. Discharges that are mixed with sources of non-storm water other than discharges which are identified in Part III.A.2. of this permit and which are in compliance with Part IV.D.5 (non-storm water discharges) of this permit. Any discharge authorized by a different NPDES permit may be commingled with discharges authorized by this permit. c. Discharges Covered by Another Permit. Storm water discharges associated with construction activity that have been issued an individual permit or required to obtain coverage under an alternative general permit in accordance with paragraph VII.L; d. Discharges Threatening Water Quality. Storm water discharges from construction sites that the Executive Director determines will cause, or have the reasonable potential to cause, excursions above water quality standards. (Where such determinations have been made, the discharger will be notified by the Director that an individual permit application is necessary.); e. Discharges that are not Protective of Endangered and Threatened Species. (1) A discharge of storm water associated with construction activity is covered under this permit only if the applicant certifies that it meets at least one of the following criteria. Failure to continue to meet one of these criteria during the term of the permit will result in the storm water discharges associated with construction being ineligible for coverage under this permit. 
(a) The storm water discharge(s), and the construction and implementation of Best Management Practices (BMPs) to control storm water runoff, are not likely to adversely affect species identified in Addendum A of this permit or critical habitat for a listed species; or (b) The applicant’s activity has received previous authorization under section 7 or section 10 of the Endangered Species Act (ESA) and that authorization addressed storm water discharges and/or BMPs to control storm water runoff (e.g., developer included impact of entire project in consultation over a wetlands dredge and fill permit under Section 7 of the Endangered Species Act); or (c) The applicant’s activity was considered as part of a larger, more comprehensive assessment of impacts on endangered and threatened species under section 7 or section 10 of the Endangered Species Act which accounts for storm water discharges and BMPs to control storm water runoff (e.g., where an area-wide habitat conservation plan and section 10 permit is issued which addresses impacts from construction activities, including those from storm water, or a National Environmental Policy Act (NEPA) review is conducted which incorporates ESA section 7 procedures); or (d) Consultation under section 7 of the Endangered Species Act is conducted for the applicant’s activity which results in either a no-jeopardy opinion or a written concurrence on a finding of no likelihood of adverse effects; or (e) The applicant’s activity was considered as part of a larger, more comprehensive site-specific assessment of impacts on endangered and threatened species by the owner or other operator of the site, and that permittee certified eligibility under item (a), (b), (c), or (d) above (e.g., the owner was able to certify no adverse impacts for the project as a whole under item (a), so the contractor can then certify under item (e)). Utility companies applying for permit coverage for the entire permit area of coverage as defined under Part I.A.
may certify under item (e) since authorization to discharge is contingent on a principal operator of a construction project having been granted coverage under this, or an alternative NPDES or OPDES permit for the areas of the site where utilities installation activities will occur. (2) All applicants must follow the procedures provided at Addendum A of this permit when applying for permit coverage. (3) The applicant must comply with any terms and conditions imposed under the eligibility requirements of paragraphs (1)(a), (b), (c), (d), or (e) above to ensure that storm water discharges or BMPs to control storm water runoff are protective of listed endangered and threatened species and/or critical habitat. Such terms and conditions must be incorporated in the applicant's storm water pollution prevention plan. (4) For the purposes of conducting consultation to meet the eligibility requirements of paragraph (1)(d) above, applicants are designated as non-Federal representatives. See 50 CFR 402.08. However, applicants who choose to conduct consultation as a non-Federal representative must notify DEQ. (5) This permit does not authorize any "take" (as defined under section 9 of the Endangered Species Act) of endangered or threatened species unless such takes are authorized under sections 7 or 10 of the Endangered Species Act. (6) This permit does not authorize any storm water discharges nor require any BMPs to control storm water runoff that are likely to jeopardize the continued existence of any species that are listed as endangered or threatened under the Endangered Species Act or result in the adverse modification or destruction of habitat that is designated as critical under the Endangered Species Act. C. Obtaining Authorization 1.
In order for storm water discharges from construction activities to be authorized to discharge under this general permit, a discharger must: (a) First develop a Pollution Prevention Plan (covering either the entire site or all portions of the site for which they are operators—see definition in Part IX) according to the requirements in Part IV (preparation and implementation of the Plan may be a cooperative effort where there is more than one operator at a site), and then (b) Submit a Notice of Intent (NOI) in accordance with the requirements of Part II, using an NOI form provided by the Oklahoma Department of Environmental Quality (DEQ) (or a photocopy thereof). The Pollution Prevention Plan must be implemented upon commencement of construction activities. 2. For construction sites where the operator changes, or where a new operator is added after the submittal of an NOI under Part II, a new NOI must be submitted in accordance with Part II. 3. Once authorization is issued by the DEQ, dischargers who submit an NOI in accordance with the requirements of this permit are authorized to discharge storm water from construction activities under the terms and conditions of this permit. DEQ may deny coverage under this permit and require submittal of an application for an individual OPDES permit based on a review of the NOI or other information (see Part VI.L of this permit). D. Terminating Coverage 1. Operators wishing to terminate coverage under this permit must submit a notice of termination (NOT) in accordance with Part VIII of this permit. 2. All permittees must submit an NOT within thirty (30) days after completion of their construction activities and final stabilization of their portion of the site, or another operator taking over all of their responsibilities at the site. A permittee cannot submit an NOT without final stabilization unless another party has agreed to assume responsibility for final stabilization of the site.
Appropriate enforcement actions may be taken for permit violations where a permittee submits an NOT but the permittee has not transferred operational control to another permittee or the site has not undergone final stabilization. Utility company operators are not required to submit project-by-project NOTs for installation of utilities at construction sites if the utility company operator has been authorized to discharge in the full area of coverage for a given permit as defined in Part I.A. of this permit. Part II. Notice of Intent Requirements A. Deadlines for Notification 1. Except as provided in Parts II.A.3, II.A.4, II.A.5, or II.A.6, parties with operational control over project specifications (e.g., owner or developer) should submit an initial Notice of Intent (NOI) in accordance with the requirements of this Part at least thirty (30) days prior to the commencement of construction activities (i.e., the initial disturbance of soils associated with clearing, grading, excavation activities, or other construction activities); 2. Except as provided in Parts II.A.3, II.A.4., or Part II.A.5, parties defined as operators solely due to their day-to-day operational control over those activities at a project site which are necessary to ensure compliance with the storm water pollution prevention plan or other permit conditions (e.g., general contractor, erosion control contractor, etc.) should submit an NOI at least thirty (30) days prior to commencing work at the site. 3. For storm water discharges from construction sites where the operator changes, (including projects where an operator is added after an NOI has been submitted under Parts II.A.1 or II.A.2) an NOI in accordance with the requirements of this Part should be submitted at least thirty (30) days prior to when the new operator assumes operational control over site specifications or commences work at the site. 4. Utility Companies (i.e., telephone, electric, gas, water, sewer, cable TV, etc.
companies that provide service to the public) whose involvement in an individual construction project is limited to installation of underground or above-ground service lines and associated equipment to provide connections from a main transmission line to individual customers (e.g., homes, apartments, businesses, etc.) or a location where the site operator's utility subcontractor will tap in (e.g., public water utility installs a stub with a tap into the main trunk line and developer's utility contractors run the distribution lines), may file a single NOI to obtain coverage for all such activities in the permit areas defined in Part I.A. Permit coverage obtained in this manner is limited to the utility company's activities on sites where: a. An operator of the individual construction project has obtained permit coverage under this or an alternative general permit or under an individual permit; b. The pollution prevention plan for the site identifies control measures for utilities installation activities; and c. The party responsible for implementation of each control measure for utilities installation is clearly identified. Where a utility company is constructing a main transmission line, or other project for itself, the utility company must obtain permit coverage on a site-by-site basis. Note: Utility contractors hired by a utility company or other site operator and not meeting the definition of "operator" are considered subcontractors and are covered by the subcontractor certification requirements of Part IV.E. 5. The DEQ reserves the right to bring appropriate enforcement actions for any unpermitted activities that may have occurred between the time construction commenced and authorization of future discharges. 6. Permittees with construction projects authorized to discharge under the previous general permit issued in 1992 by EPA and now replaced by this OPDES permit must: a.
Submit a new NOI within thirty (30) days of the effective date of this permit in order to be authorized to discharge after [insert date 30 days after effective date of permit]. If the permittee will be eligible to submit a Notice of Termination (NOT) (e.g., construction finished and final stabilization complete) before the 30th day, no NOI is required. b. Update their current pollution prevention plan to comply with the requirements of Part IV no later than [insert date 30 days from the effective date of the permit]. B. Contents of Notice of Intent 1. Notice of Intent for Individual Construction Projects The Notice(s) of Intent shall be signed in accordance with Part VI.G of this permit and shall include the following information: a. The street address (description of location if no street address is available), county, and the latitude and longitude of the approximate center of the construction site/project for which the notification is submitted; b. The name, address, and telephone number of the operator(s) filing the NOI for permit coverage and operator status as a Federal, State, private, or other public entity; c. The name, address, and telephone number of the construction site owner and owner's status as a Federal, State, private, or other public entity; d. The name of the receiving water(s), or if the discharge is through a municipal separate storm sewer, the name of the municipal operator of the storm sewer and the receiving water(s); e. The permit number of any NPDES and OPDES permit(s) for any discharge(s) (including any storm water discharges or any non-storm water discharges) from the site, to the extent available. f. An estimate of project start date and completion dates, estimates of the number of acres of the site on which soil will be disturbed, and g. 
A certification that a storm water pollution prevention plan, including both construction and post-construction controls, has been prepared for the site in accordance with Part IV of this permit, and such plan provides compliance with approved State and/or local sediment and erosion plans or permits and/or storm water management plans or permits in accordance with Part IV.D.2.d of this permit. (A copy of the plans or permits should not be included with the NOI submission). h. Whether, based on the instruction in Addendum A, any species identified in Addendum A are in proximity to the storm water discharges covered by this permit or the BMPs to be used to comply with permit conditions. i. Under which section of Part I.B.3.e.(1) (Endangered Species) the applicant is certifying eligibility. j. The following certifications shall be signed in accordance with Part VI.G. I certify under penalty of law that I have read and understand the Part I.B. eligibility requirements for coverage under the general permit for storm water discharges from construction activities, including those requirements relating to the protection of endangered species identified in Addendum A. I further certify that I have followed the procedures found in Addendum A to protect listed endangered and threatened species and designated critical habitat and that the discharges covered under this permit and BMPs to control storm water runoff meet one of the eligibility requirements of Part I.B.3.e.(1) of this permit. Check the box(es) corresponding to that part of Part I.B.3.e.(1) under which you claim compliance with the eligibility requirements of the permit—(a), (b), (c), (d), or (e). 2. Notice of Intent for Permit Issuance Area-wide Coverage of Utility Companies While Installing Utility Service The Notice(s) of Intent for utility companies filing for area-wide coverage in accordance with Part II.A.4.
shall be signed in accordance with Part VI.G of this permit and shall include the following information: a. The name, address, and telephone number of the utility company filing the NOI for permit coverage and operator status as a Federal, State, private, or other public entity; b. The area for which coverage is being requested and whether or not any construction projects will be located on an Indian reservation; c. The name, address, and telephone number of the utility company’s point of contact for the utility company’s compliance with the area-wide coverage granted by the permit; d. A certification that a storm water pollution prevention plan with standard operating procedures for the limited utility company construction activities related to installation of service connections has been prepared in accordance with Part IV of this permit, and such plan provides compliance with approved State and/or local sediment and erosion plans or permits and/or storm water management plans or permits in accordance with Part IV.D.2.d of this permit. (A copy of the plans or permits should not be included with the NOI submission.) e. Under which sections of Part I.B.3.e.1. (Endangered Species) the applicant is certifying eligibility. f. The following certifications shall be signed in accordance with Part VI.G. I certify under penalty of law that I have read and understand the Part I.B. eligibility requirements for coverage under the general permit for storm water discharges from construction activities, including those requirements relating to the protection of endangered species identified in Part I.B.3.e.
I further certify that I understand that authorization to discharge is contingent on a principal operator of a construction project having been granted coverage under this, or an alternative NPDES or OPDES permit for the areas of the site where utilities installation activities will occur and that a pollution prevention plan including appropriate control measures for activities related to installation of utility service has been prepared and will be implemented. I further certify that I have followed the procedures found in Addendum A to protect listed endangered and threatened species and designated critical habitat and that the discharges covered under this permit and BMPs to control storm water runoff meet one of the eligibility requirements of Part I.B.3.e.(1) of this permit. Check the box(es) corresponding to that part of Part I.B.3.e.(1) under which you claim compliance with the eligibility requirements of the permit: (a), (b), (c), (d), or (e). I understand that continued coverage under this storm water general permit is contingent upon maintaining eligibility as provided for in Part I.B. C. Where to Submit 1. NOIs, signed in accordance with Part VI.G of this permit, are to be submitted to the DEQ at the following address: Storm Water Notice of Intent, Oklahoma Department of Environmental Quality, Water Quality Division, 1000 N.E. Tenth Street, Oklahoma City, Oklahoma 73117-1212. 2. The following shall be posted at the construction site for public viewing: a copy of the DEQ’s acknowledgment of coverage under the general permit and assignment of a permit number, and a local contact telephone number/address for public access to view the pollution prevention plan at reasonable times during regular business hours (advance notice by the public of the desire to view the plan may be required, not to exceed two working days). The permit does not require that free copies of the plan be provided to interested members of the public, only that they have reasonable access to view the document and copy it at their own expense.
A brief description of the project shall also be posted at the construction site in a prominent and safe place for public viewing during regular business hours (alongside the building permit if the building permit is required to be displayed). Part III. Special Conditions, Management Practices, and Other Non-Numeric Limitations A. Prohibition on Non-Storm Water Discharges 1. Except as provided in paragraph I.B.2 or 3 and III.A.2, all discharges covered by this permit shall be composed entirely of storm water. 2. Discharges of material other than storm water that are in compliance with a NPDES permit (other than this permit) issued for that discharge may be mixed with discharges authorized by this permit. 3. The following non-storm water discharges are authorized by this permit provided the non-storm water component of the discharge is in compliance with paragraph IV.D.5: discharges from fire fighting activities; fire hydrant flushings; waters used to wash vehicles or control dust in accordance with Part IV.D.2.c.(2); potable water sources including waterline flushings; routine external building washdown which does not use detergents; pavement washwaters where spills or leaks of toxic or hazardous materials have not occurred (unless all spilled material has been removed) and where detergents are not used; air conditioning condensate; springs; uncontaminated ground water; and foundation or footing drains where flows are not contaminated with process materials such as solvents. B. Releases in Excess of Reportable Quantities The discharge of hazardous substances or oil in the storm water discharge(s) from a facility shall be prevented or minimized in accordance with the applicable storm water pollution prevention plan for the facility. This permit does not relieve the permittee of the reporting requirements of 40 CFR 117 and 40 CFR 302. 
Where a release containing a hazardous substance in an amount equal to or in excess of a reportable quantity established under either 40 CFR 117 or 40 CFR 302 occurs during a 24-hour period: 1. The permittee is required to notify the National Response Center (NRC) (800-424-8802; in the Washington, DC metropolitan area 202-426-2675) in accordance with the requirements of 40 CFR 117 and 40 CFR 302 as soon as he or she has knowledge of the discharge; 2. The permittee shall, within 14 calendar days of knowledge of the release, submit a written description of the release (including the type and estimate of the amount of material released), the date that such release occurred, the circumstances leading to the release, and steps to be taken to minimize the chance of future occurrences to the EPA Regional Office at: United States EPA, Region VI, Waste Management Division (6W-EA), Storm Water Staff, First Interstate Bank Tower at Fountain Place, 1445 Ross Avenue, 12th Floor, Suite 1200, Dallas, Texas 75202; and 3. The storm water pollution prevention plan required under Part IV of this permit must be modified within 14 calendar days of knowledge of the release to provide a description of the release, the circumstances leading to the release, and the date of the release. In addition, the plan must be reviewed to identify measures to prevent the recurrence of such releases and to respond to such releases, and the plan must be modified where appropriate. C. Spills This permit does not authorize the discharge of hazardous substances or oil resulting from an on-site spill. D. Discharge Compliance With Water Quality Standards Dischargers seeking coverage under this permit shall not be causing or have the reasonable potential to cause or contribute to a violation of a water quality standard.
Where a discharge is already authorized under this permit and is later determined to cause or have the reasonable potential to cause or contribute to the violation of an applicable State Water Quality Standard, the permitting authority will notify the operator of such violation(s) and the permittee shall take all necessary actions to ensure future discharges do not cause or contribute to the violation of a water quality standard and document these actions in the pollution prevention plan. If violations remain or recur, then coverage under this permit will be terminated by the permitting authority and an alternative permit may be issued. Compliance with this requirement does not preclude any enforcement activity as provided by the Oklahoma Environmental Quality Code for the underlying violation. E. Responsibilities of Operators 1. Developer/Owner Operator--The permittee(s) with operational control over project specifications (including the ability to make modifications in specifications) (e.g., developer or owner) must: a. Ensure the project specifications for the portion of the site for which they are operators meet the minimum requirements of Part IV (Pollution Prevention Plan Development) and all other applicable conditions; b. Ensure that the pollution prevention plan indicates which areas of the project they have operational control over and ensure that if modifications are made to the pollution prevention plan, where other operators are implementing portions of the plan, that these other operators be immediately notified of such modifications. c. Ensure that the pollution prevention plan for the portion of the site for which they are operators indicates the name and NPDES or OPDES permit number for parties with day-to-day operational control of those activities necessary to ensure compliance with the storm water pollution prevention plan or other permit conditions.
If these parties have not been identified at the time the pollution prevention plan is initially developed, the permittee with operational control over project specifications shall be considered to be the responsible party until such time as the authority is transferred to another party (e.g., general contractor hired) and the plan updated; d. Ensure that the pollution prevention plan complies with measures to identify and protect listed threatened and endangered species and/or critical habitat as specified in Part I.B.3.e., Addendum A of this permit and as may be required as a result of consultation; 2. Full Site Operator—The permittee(s) with day-to-day operational control of those activities at a project site which are necessary to ensure compliance with the storm water pollution prevention plan or other permit conditions (e.g., general contractor) must: a. Ensure the pollution prevention plan for the portion of the site for which they are operators meets the minimum requirements of Part IV (Pollution Prevention Plan Development) and identifies the parties responsible for implementation of control measures identified in the plan; b. Ensure that the pollution prevention plan indicates which areas of the project they have operational control over and ensure that if modifications are made to the pollution prevention plan, where other operators are implementing portions of the plan, that these other operators be immediately notified of such modifications; c. Ensure that the pollution prevention plan for the portion of the site for which they are operators indicates the name and NPDES or OPDES permit number of the party with operational control over project specifications (including the ability to make modifications in specifications); d.
Ensure that the pollution prevention plan complies with measures to identify and protect listed threatened and endangered species and/or critical habitat as specified in Part I.B.3.e., Addendum A of this permit and as may be required as a result of consultation; 3. Partial Site Operators. Permittees with operational control over only a portion of a larger construction site (e.g., one of four homebuilders in a particular subdivision, utility companies, etc.) are responsible for compliance with all applicable terms and conditions of this permit as it relates to their activities on their portion of the construction site, including protection of endangered species, and implementation of pollution prevention plan measures. Partial site operators shall ensure (either directly or through coordination with another permittee) that their activities do not render another party’s pollution controls ineffective. Partial site operators must either implement their portions of a common pollution prevention plan developed by a full site operator or develop and implement their own pollution prevention plan. Part IV. Storm Water Pollution Prevention Plans A storm water pollution prevention plan shall be developed for each construction site covered by this permit (at least one per permit area for utility company service connection permit coverage). For more effective coordination of BMPs and opportunities for cost sharing, a cooperative effort by the different operators at a site to prepare and participate in a comprehensive pollution prevention plan is encouraged. Individual operators at a site may, but are not required to, develop separate pollution prevention plans that cover only their portion of the project provided reference is made to other operators at the site. Storm water pollution prevention plans shall be prepared in accordance with good engineering practices.
The plan shall identify potential sources of pollution which may reasonably be expected to affect the quality of storm water discharges from the construction site. The plan shall describe and ensure the implementation of practices which will be used to reduce the pollutants in storm water discharges associated with construction activity at the construction site and to assure compliance with the terms and conditions of this permit. When developing pollution prevention plans, applicants must follow the procedures in Addendum A of this permit to determine whether endangered and threatened species would be affected by the applicant's storm water discharges or BMPs to control storm water runoff. Any information on whether endangered and threatened species and their critical habitat are found in proximity to the construction site must be included in the pollution prevention plan. Any terms or conditions that are imposed under the eligibility requirements of Part I.B.3.e and Addendum A of this permit to protect endangered and threatened species and/or critical habitat from storm water discharges or BMPs to control storm water runoff must be incorporated into the pollution prevention plan. Permittees must implement the applicable provisions of the storm water pollution prevention plan required under this part as a condition of this permit. A. Deadlines for Plan Preparation and Compliance The plan shall: 1. Be completed (including certifications required under Part IV.E) prior to the submittal of an NOI to be covered under this permit and updated as appropriate; and 2. Provide for compliance with the terms and schedule of the plan beginning with the initiation of construction activities. B. Signature, Plan Review and Making Plans Available 1. The plan shall be signed in accordance with Part VI.G, and be retained on-site at the facility which generates the storm water discharge in accordance with Part V (retention of records) of this permit.
If the site is inactive or does not have an on-site location adequate to store the pollution prevention plan, the location of the plan, along with a contact phone number, shall be posted on site. If the plan is located off site, reasonable local access to the plan, during normal working hours, must be provided as described below. 2. The permittee shall make plans available upon request to the DEQ or any other State or local agency approving sediment and erosion plans, grading plans, or storm water management plans; interested members of the public; local government officials; or to the operator of a municipal separate storm sewer receiving discharges from the site. Viewing by the public shall be at reasonable times during regular business hours (advance notice by the public of the desire to view the plan may be required, not to exceed two working days). The permit does not require that free copies of the plan be provided to interested members of the public, only that they have reasonable access to view the document and copy it at their own expense. The copy of the plan required to be kept onsite (or locally available) must be made available to the DEQ (or authorized representative) for review at the time of an onsite inspection. 3. The DEQ, or authorized representative, may notify the permittee (co-permittees) at any time that the plan does not meet one or more of the minimum requirements of this Part. Such notification shall identify those provisions of the permit which are not being met by the plan, and identify which provisions of the plan require modifications in order to meet the minimum requirements of this Part. Within 7 calendar days of receipt of such notification from the Director or authorized representative (or as otherwise provided by the Director), the permittee shall make the required changes to the plan and shall submit to the DEQ a written certification that the requested changes have been made.
The DEQ may take appropriate enforcement action for the period of time the permittee was operating under a plan that did not meet the minimum requirements of the permit. C. Keeping Plans Current The permittee must amend the plan whenever: 1. There is a change in design, construction, operation, or maintenance, which has a significant effect on the discharge of pollutants to the waters of the State and which has not otherwise been addressed in the plan; 2. Inspections or investigations by site operators, local, State or federal officials indicate the storm water pollution prevention plan is proving ineffective in eliminating or significantly minimizing pollutants from sources identified under Part IV.D.2 of this permit, or is otherwise not achieving the general objectives of controlling pollutants in storm water discharges associated with construction activity; and 3. A new contractor and/or subcontractor that will implement a measure of the storm water pollution prevention plan is added (see Part IV.E); the plan shall be amended to identify that party. The plan must also be amended to address any measures necessary to protect endangered and threatened species. Amendments to the plan may be reviewed by DEQ in the same manner as Part IV.B above. D. Contents of Plan The storm water pollution prevention plan shall include the following items: 1. Site Description Each plan shall provide a description of pollutant sources and other information as indicated: a. A description of the nature of the construction activity; b. A description of the intended sequence of major activities which disturb soils for major portions of the site (e.g., grubbing, excavation, grading, utilities and infrastructure installation, etc.); c. Estimates of the total area of the site and the total area of the site that is expected to be disturbed by excavation, grading, or other activities; d.
An estimate of the runoff coefficient of the site after construction activities are completed and existing data describing the soil or the quality of any discharge from the site; e. A general location map (e.g., portion of a city or county map or similar scale) and a site map indicating drainage patterns and approximate slopes anticipated after major grading activities, areas of soil disturbance, an outline of areas which are not to be disturbed, the location of major structural and nonstructural controls identified in the plan, the location of areas where stabilization practices are expected to occur, surface waters (including wetlands), and locations where storm water is discharged to a surface water; f. A description of any discharge associated with industrial activity other than construction (including storm water discharges from dedicated asphalt plants and dedicated concrete plants) covered by the permit; and the location of that activity; g. The name of the receiving water(s), and areal extent of wetland acreage at the site; h. A copy of the permit requirements (may simply attach copy of permit language); i. Information on whether listed endangered or threatened species and/or critical habitat are found in proximity to the construction activity and whether such species are adversely affected by the applicant’s storm water discharges or BMPs to control storm water runoff as required under Addendum A of the permit. 2. Controls Each plan shall include a description of appropriate controls and measures that will be implemented at the construction activity.
The plan must clearly describe for each major activity identified in Part IV.D.1.b: (a) appropriate control measures and the timing during the construction process that the measures will be implemented and (b) which permittee is responsible for implementation (e.g., perimeter controls for one portion of the site will be installed by Contractor A after the clearing and grubbing necessary for installation of the measure, but before the clearing and grubbing for the remaining portions of the site. Perimeter controls will be actively maintained by Contractor B until final stabilization of those portions of the site upward of the perimeter control. Temporary perimeter controls will be removed by Owner after final stabilization). The description and implementation of controls shall address the following minimum components: a. Erosion and Sediment Controls. (1) Short and Long Term Goals and Criteria: (a) The construction-phase erosion and sediment controls should be designed to retain sediment on site to the maximum extent practicable. (b) All control measures must be properly selected, installed, and maintained in accordance with the manufacturer's specifications and good engineering practices. If periodic inspections or other information indicates a control has been used inappropriately, or incorrectly, the permittee must replace or modify the control for site situations. (c) If sediment escapes the construction site, off-site accumulations of sediment must be removed at a frequency sufficient to minimize offsite impacts (e.g., fugitive sediment in street could be washed into storm sewers by the next rain and/or pose a safety hazard to users of public streets). (d) Sediment must be removed from sediment traps or sedimentation ponds when design capacity has been reduced by 50%. (e) Litter, construction debris, and construction chemicals exposed to storm water shall be picked up prior to anticipated storm events (e.g.
forecasted by local weather reports), or otherwise prevented from becoming a pollutant source for storm water discharges (e.g. screening outfalls, picked up daily, etc.). (f) Offsite material storage areas (also including overburden and stockpiles of dirt, etc.) used solely by the permitted project are considered a part of the project and shall be addressed in the pollution prevention plan. (2) Stabilization Practices: A description of interim and permanent stabilization practices, including site-specific scheduling of the implementation of the practices. Site plans should ensure that existing vegetation is preserved where attainable and that disturbed portions of the site are stabilized. Stabilization practices may include: temporary seeding, permanent seeding, mulching, geotextiles, sod stabilization, vegetative buffer strips, protection of trees, preservation of mature vegetation, and other appropriate measures. Use of impervious surfaces for stabilization should be avoided. A record of the dates when major grading activities occur, when construction activities temporarily or permanently cease on a portion of the site, and when stabilization measures are initiated shall be included in the construction site log along with inspections. Except as provided in paragraphs IV.D.2.a.(2)(a), (b), and (c) below, stabilization measures shall be initiated as soon as practicable in portions of the site where construction activities have temporarily or permanently ceased, but in no case more than 14 days after the construction activity in that portion of the site has temporarily or permanently ceased. (a) Where the initiation of stabilization is precluded by severe and/or adverse climatological conditions, stabilization measures shall be initiated as soon as practicable.
(b) Where construction activity on a portion of the site is temporarily ceased, and earth disturbing activities will be resumed within 21 days, temporary stabilization measures do not have to be initiated on that portion of the site. (c) In semi-arid areas (areas with an average annual rainfall of 10 to 20 inches), and areas experiencing droughts where the initiation of stabilization measures by the 14th day after construction activity has temporarily or permanently ceased is precluded by seasonal arid conditions, stabilization measures shall be initiated as soon as practicable. (3) Structural Practices: A description of structural practices to divert flows from exposed soils, store flows or otherwise limit runoff and the discharge of pollutants from exposed areas of the site to the degree attainable. Such practices may include silt fences, earth dikes, drainage swales, sediment traps, check dams, subsurface drains, pipe slope drains, level spreaders, storm drain inlet protection, rock outlet protection, reinforced soil retaining systems, gabions, and temporary or permanent sediment basins. Placement of structural practices in floodplains should be avoided to the degree attainable. The installation of these devices may be subject to section 404 of the Clean Water Act (CWA). (a) For common drainage locations that serve an area with 10 or more acres disturbed at one time, a temporary (or permanent) sediment basin providing 3,600 cubic feet of storage per acre drained, or equivalent control measures, shall be provided where attainable until final stabilization of the site. The 3,600 cubic feet of storage per acre drained does not apply to flows from offsite areas and flows from onsite areas that are either undisturbed or have undergone final stabilization where such flows are diverted around both the disturbed area and the sediment basin.
For drainage locations which serve 10 or more disturbed acres at one time and where a temporary sediment basin providing 3,600 cubic feet of storage per acre drained, or equivalent controls, is not attainable, smaller sediment basins and/or sediment traps should be used. At a minimum, silt fences, vegetative buffer strips, or equivalent sediment controls are required for all downslope boundaries of the construction area and for those side slope boundaries deemed appropriate as dictated by individual site conditions. (b) For drainage locations serving less than 10 acres, sediment basins and/or sediment traps should be used. At a minimum, silt fences, vegetative buffer strips, or equivalent sediment controls are required for all downslope boundaries (and those side slope boundaries deemed appropriate as dictated by individual site conditions) of the construction area unless a sediment basin providing 3,600 cubic feet of storage per acre drained is provided. b. Storm Water Management. A description of measures that will be installed during the construction process to control pollutants in storm water discharges that will occur after construction operations have been completed. Structural measures should be placed on upland soils to the degree attainable. The installation of these devices may be subject to section 404 of the CWA. This permit only addresses the installation of storm water management measures, and not the ultimate operation and maintenance of such structures after the construction activities have been completed and the site has undergone final stabilization. Permittees are only responsible for the installation and maintenance of storm water management measures prior to final stabilization of the site, and are not responsible for maintenance after storm water discharges associated with construction activity have been eliminated from the site.
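The sediment basin sizing rule in Part IV.D.2.a.(3) above is simple arithmetic; the following is a minimal illustrative sketch (function and constant names are our own, not part of the permit) of the 3,600 cubic feet of storage per acre drained requirement for common drainage locations serving 10 or more disturbed acres at one time:

```python
# Illustrative sketch of the basin sizing rule in Part IV.D.2.a.(3)(a).
# Names are hypothetical; only the numeric thresholds come from the permit.

STORAGE_PER_ACRE_FT3 = 3_600   # required storage per acre drained
BASIN_THRESHOLD_ACRES = 10     # disturbed acres triggering the basin provision

def basin_required(disturbed_acres_drained: float) -> bool:
    """Whether the temporary (or permanent) sediment basin provision applies."""
    return disturbed_acres_drained >= BASIN_THRESHOLD_ACRES

def required_basin_storage_ft3(disturbed_acres_drained: float) -> float:
    """Storage the basin must provide, in cubic feet.

    Per the permit, flows from offsite areas, and from onsite areas that are
    undisturbed or finally stabilized, do not count toward the acreage when
    they are diverted around both the disturbed area and the basin.
    """
    if disturbed_acres_drained <= 0:
        return 0.0
    return STORAGE_PER_ACRE_FT3 * disturbed_acres_drained

# Example: a drainage location serving 12 disturbed acres
# needs 3,600 x 12 = 43,200 cubic feet of storage.
print(basin_required(12), required_basin_storage_ft3(12))
```

Where providing this storage is not attainable at a qualifying location, the permit falls back to smaller basins and/or traps plus silt fences or equivalent controls on downslope boundaries, as described above.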
However, post-construction storm water BMPs that discharge pollutants from point sources once construction is completed may, in themselves, need authorization under a separate OPDES permit. (1) Such practices may include: storm water detention structures (including wet ponds); storm water retention structures; flow attenuation by use of open vegetated swales and natural depressions; infiltration of runoff onsite; and sequential systems (which combine several practices). The pollution prevention plan shall include an explanation of the technical basis used to select the practices to control pollution where flows exceed predevelopment levels. (2) Velocity dissipation devices shall be placed at discharge locations and along the length of any outfall channel for the purpose of providing a non-erosive velocity flow from the structure to a water course so that the natural physical and biological characteristics and functions are maintained and protected (e.g., no significant changes in the hydrological regime of the receiving water). c. Other Controls. (1) No solid materials, including building materials, shall be discharged to waters of the State, except as authorized by a permit issued under section 404 of the CWA. (2) Off-site vehicle tracking of sediments and the generation of dust shall be minimized. (3) The plan shall ensure and demonstrate compliance with State and/or local waste disposal, sanitary sewer or septic system regulations to the extent these are located within the permitted area. (4) The plan shall include a narrative description of practices to reduce pollutants from construction related materials which are stored onsite, including an inventory of construction materials (including waste materials), storage practices to minimize exposure of the materials to storm water, and spill prevention and response.
(5) A description of pollutant sources from areas other than construction (including storm water discharges from dedicated asphalt plants and dedicated concrete plants), and a description of controls and measures that will be implemented at those sites. (6) The plan shall include measures to protect listed endangered and threatened species and/or critical habitat (if applicable), including any terms or conditions that are imposed under the eligibility requirements of Part I.B.3.e and Addendum A of this permit to protect such species and/or critical habitat from storm water discharges or BMPs to control storm water runoff. Failure to include these measures will result in the storm water discharges from the construction activities being ineligible for coverage under this permit. d. Approved Local Plans. (1) Permittees which discharge storm water associated with construction activities must include in their storm water pollution prevention plan procedures and requirements specified in applicable sediment and erosion site plans or site permits, or storm water management site plans or site permits approved by local officials. Permittees shall provide a certification in their storm water pollution prevention plan that their storm water pollution prevention plan reflects requirements applicable to protecting surface water resources in sediment and erosion site plans or site permits, or storm water management site plans or site permits approved by State, Tribal or local officials. Permittees shall comply with any such requirements during the term of the permit. This provision does not apply to provisions of master plans, comprehensive plans, non-enforceable guidelines or technical guidance documents that are not identified in a specific plan or permit that is issued for the construction site.
(2) Storm water pollution prevention plans must be amended to reflect any change applicable to protecting surface water resources in sediment and erosion site plans or site permits, or storm water management site plans or site permits approved by local officials for which the permittee receives written notice. Where the permittee receives such written notice of a change, the permittee shall provide a recertification in the storm water pollution prevention plan that the plan has been modified to address such changes. (3) Dischargers seeking alternative permit requirements shall submit an individual permit application in accordance with Part VI.L of the permit at the address indicated in Part V.C of this permit for the DEQ, along with a description of why requirements in approved local plans or permits, or changes to such plans or permits, should not be applicable as a condition of an NPDES or OPDES permit. 3. Maintenance A description of procedures to ensure the timely maintenance of vegetation, erosion and sediment control measures and other protective measures identified in the site plan in good and effective operating condition. Maintenance needs identified in inspections or by other means shall be accomplished before the next anticipated storm event, or as necessary to maintain the continued effectiveness of storm water controls. If maintenance prior to the next anticipated storm event is impracticable, maintenance must be scheduled and accomplished as soon as practicable. 4.
Inspections Qualified personnel (provided by the permittee or cooperatively by multiple permittees) shall inspect disturbed areas of the construction site that have not been finally stabilized, areas used for storage of materials that are exposed to precipitation, structural control measures, and locations where vehicles enter or exit the site at least once every fourteen calendar days, before anticipated storm events (or series of storm events such as intermittent showers over one or more days) expected to cause a significant amount of runoff, and within 24 hours of the end of a storm event of 0.5 inches or greater. Where sites have been finally or temporarily stabilized, where runoff is unlikely due to winter conditions (e.g. site covered with snow, ice, or frozen ground), or during seasonal arid periods in semi-arid areas (areas with an average annual rainfall of 10 to 20 inches), such inspections shall be conducted at least once every month. a. Disturbed areas and areas used for storage of materials that are exposed to precipitation shall be inspected for evidence of, or the potential for, pollutants entering the drainage system. Erosion and sediment control measures identified in the plan shall be observed to ensure that they are operating correctly. Where discharge locations or points are accessible, they shall be inspected to ascertain whether erosion control measures are effective in preventing significant impacts to receiving waters. Locations where vehicles enter or exit the site shall be inspected for evidence of offsite sediment tracking. b. Based on the results of the inspection, the site description identified in the plan in accordance with paragraph IV.D.1 of this permit and pollution prevention measures identified in the plan in accordance with paragraph IV.D.2 of this permit shall be revised as appropriate, but in no case later than 7 calendar days following the inspection.
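The inspection timing requirements above (routine inspections every fourteen calendar days, a post-storm inspection within 24 hours of a 0.5-inch or greater event, and a reduced monthly schedule for stabilized, frozen, or seasonally arid sites) reduce to a small amount of date arithmetic. A minimal illustrative sketch, with hypothetical function names of our own:

```python
# Illustrative sketch of the inspection schedule in Part IV.D.4.
# Function and variable names are hypothetical; the intervals and the
# 0.5-inch storm trigger come from the permit text.
from datetime import date, timedelta
from typing import Optional

ROUTINE_INTERVAL = timedelta(days=14)   # "at least once every fourteen calendar days"
REDUCED_INTERVAL = timedelta(days=30)   # "at least once every month" (approximated as 30 days)
STORM_TRIGGER_INCHES = 0.5              # storm event of 0.5 inches or greater

def next_inspection_due(last_inspection: date, reduced_schedule: bool) -> date:
    """Latest date the next routine inspection may occur.

    reduced_schedule is True for finally/temporarily stabilized sites,
    winter conditions, or seasonal arid periods in semi-arid areas.
    """
    interval = REDUCED_INTERVAL if reduced_schedule else ROUTINE_INTERVAL
    return last_inspection + interval

def storm_inspection_deadline(storm_end: date, rainfall_inches: float) -> Optional[date]:
    """Deadline for the post-storm inspection, or None if none is triggered."""
    if rainfall_inches >= STORM_TRIGGER_INCHES:
        return storm_end + timedelta(days=1)  # within 24 hours of the end of the event
    return None

# Example: last routine inspection on March 1, active (non-reduced) schedule.
print(next_inspection_due(date(2024, 3, 1), reduced_schedule=False))
```

Note that plan revisions identified by an inspection carry their own separate 7-calendar-day deadline under paragraph IV.D.4.b.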
Such modifications shall provide for timely implementation of any changes to the plan within 7 calendar days following the inspection. c. A report summarizing the scope of the inspection, name(s) and qualifications of personnel making the inspection, the date(s) of the inspection, major observations relating to the implementation of the storm water pollution prevention plan (including the location(s) of discharges of sediment or other pollutants from the site and of any control device that failed to operate as designed or proved inadequate for a particular location), and actions taken in accordance with paragraph IV.D.4.b of the permit shall be made and retained as part of the storm water pollution prevention plan for at least three years from the date that the site is finally stabilized. Such reports shall identify any incidents of non-compliance. Where a report does not identify any incidents of non-compliance, the report shall contain a certification that the facility is in compliance with the storm water pollution prevention plan and this permit. The report shall be signed in accordance with Part VI.G of this permit. 5. Non-Storm Water Discharges Except for flows from fire fighting activities, sources of non-storm water listed in Part III.A.2 of this permit that are combined with storm water discharges associated with construction activity must be identified in the plan. The plan shall identify and ensure the implementation of appropriate pollution prevention measures for the non-storm water component(s) of the discharge. E. Contractor and Subcontractor Certifications 1. Contractors and Subcontractors Implementing Storm Water Control Measures The storm water pollution prevention plan must clearly identify, for each control measure identified in the plan, the party that will implement the measure.
The Permittee(s) shall ensure that all contractors and subcontractors identified in the plan as being responsible for implementing storm water control measures sign a copy of the following certification statement, in accordance with Part VI.G of this permit, before performing any work in the area covered by the storm water pollution prevention plan. All certifications must be included with the storm water pollution prevention plan. I certify under penalty of law that I understand the terms and conditions of the Oklahoma Pollutant Discharge Elimination System Act (OPDES) general permit that authorizes storm water discharges associated with construction activity from the construction site identified as part of this certification. The certification must include the name and title of the person providing the signature in accordance with Part VI.G of this permit; the name, address and telephone number of the contracting firm; the address (or other identifying description) of the site; and the date the certification is made. 2. Contractors and Subcontractors Impacting Storm Water Control Measures The permittee shall ensure that contractor(s) and/or subcontractor(s) that will conduct activities that impact the effectiveness of control measures identified in the plan, but who do not meet the definition of "operator" (Part IX), sign a copy of the following certification statement, in accordance with Part VI.G of this permit, before beginning work on site. All certifications must be included with the storm water pollution prevention plan. I certify under penalty of law that I will coordinate, either through the general contractor, owner, or directly, with the contractor(s) and/or subcontractor(s) identified in the pollution prevention plan having responsibility for implementing storm water control measures to minimize any impact my actions may have on the effectiveness of these storm water control measures.
The certification must include the name and title of the person providing the signature in accordance with Part VI.G of this permit; the name, address and telephone number of the contracting firm; the address (or other identifying description) of the site; and the date the certification is made. 3. Utility Companies The storm water pollution prevention plan must clearly identify, for each control measure identified in the plan relating to the installation of utility service, the party that will implement the measure. The Permittee(s) shall provide to the site operator(s) responsible for maintenance of the pollution prevention plan addressing impacts of utilities installation, a copy of the following certification statement, signed in accordance with Part VI.G of this permit, before performing any work in the area covered by the storm water pollution prevention plan. All certifications must be included with the storm water pollution prevention plan. I certify under penalty of law that I understand the terms and conditions of the Oklahoma Pollutant Discharge Elimination System Act (OPDES) general permit that authorizes storm water discharges associated with construction activity from the portion of the construction site that will be disturbed during my installation of utility service. The certification must include the name and title of the person providing the signature in accordance with Part VI.G of this permit; the name, address and telephone number of the permittee; the address (or other identifying description) of the site; and the date the certification is made. Part V. Retention of Records A. Documents The permittee shall retain copies of storm water pollution prevention plans and all reports required by this permit, and records of all data used to complete the Notice of Intent to be covered by this permit, for a period of at least three years from the date that the site is finally stabilized. 
This period may be extended by request of the Director at any time. B. Accessibility The permittee shall retain a copy of the storm water pollution prevention plan required by this permit (including a copy of the permit language) at the construction site (or other local location accessible to the DEQ and the public) from the date of project initiation to the date of final stabilization. The permittees with day to day operational control over pollution prevention plan implementation shall have a copy of the plan available at a central location onsite for the use of all operators and those identified as having responsibilities under the plan whenever they are on the construction site. C. Addresses Except for the submittal of NOIs (see Part II.C of this permit), all written correspondence concerning discharges in Oklahoma covered under this permit should be sent to Storm Water Unit, Oklahoma Department of Environmental Quality, Water Quality Division, 1000 N.E. Twelfth Street, Oklahoma City, Oklahoma 73117-1212. Part VI. Standard Permit Conditions A. Duty To Comply 1. The permittee must comply with all conditions of this permit. Any permit noncompliance constitutes a violation of OPDES and is grounds for enforcement action; for permit termination, revocation and reissuance, or modification; or for denial of a permit renewal application. 2. Penalties for Violations of Permit Conditions. a. Criminal. (1) Negligent Violations. OPDES provides that any person who negligently violates permit conditions is subject to a fine of not less than $2,500 nor more than $25,000 per day of violation, or by imprisonment for not more than 1 year, or both. (2) Knowing Violations. OPDES provides that any person who knowingly violates permit conditions is subject to a fine of not less than $5,000 nor more than $50,000 per day of violation, or by imprisonment for not more than 3 years, or both. (3) Knowing Endangerment.
OPDES provides that any person who knowingly violates permit conditions and who knows at that time that he is placing another person in imminent danger of death or serious bodily injury is subject to a fine of not more than $250,000, or by imprisonment for not more than 15 years, or both. (4) False Statement. OPDES provides that any person who knowingly makes any false material statement, representation, or certification in any application, record, report, plan, or other document filed or required to be maintained under OPDES, or who knowingly falsifies, tampers with, or renders inaccurate any monitoring device or method required to be maintained under OPDES, shall upon conviction be punished by a fine of not more than $10,000 or by imprisonment for not more than two years, or by both. If a conviction is for a violation committed after a first conviction of such person under this paragraph, punishment shall be by a fine of not more than $20,000 per day of violation, or by imprisonment of not more than four years, or by both. b. Civil Penalties. OPDES provides that any person who violates a permit condition is subject to a civil penalty not to exceed $10,000 per day for each violation. c. Administrative Penalties. OPDES provides that any person who violates a permit condition is subject to an administrative penalty not to exceed $10,000 per violation, nor shall the maximum amount exceed $125,000. B. Continuation of the Expired General Permit This permit expires five years after the effective date. However, an expired general permit may continue in force and effect. To retain coverage under the continued permit, permittees should provide notice of their intent to remain covered under this permit at least thirty (30) days prior to the expiration date. The notice must be signed in accordance with Part VI.G.1. of this permit and must contain the following information: 1. Name, address and telephone number of the operator. 2.
The existing storm water construction permit number. C. Need To Halt or Reduce Activity Not a Defense It shall not be a defense for a permittee in an enforcement action that it would have been necessary to halt or reduce the permitted activity in order to maintain compliance with the conditions of this permit. D. Duty to Mitigate The permittee shall take all reasonable steps to minimize or prevent any discharge in violation of this permit which has a reasonable likelihood of adversely affecting human health or the environment. E. Duty to Provide Information The permittee shall furnish to the DEQ or an authorized representative of the DEQ any information which is requested to determine compliance with this permit or other information. F. Other Information When the permittee becomes aware that he or she failed to submit any relevant facts or submitted incorrect information in the Notice of Intent or in any other report to the DEQ, he or she shall promptly submit such facts or information. G. Signatory Requirements All Notices of Intent, storm water pollution prevention plans, reports, certifications or information either submitted to the Director or the operator of a large or medium municipal separate storm sewer system, or that this permit requires be maintained by the permittee, shall be signed as follows: 1. All Notices of Intent shall be signed as follows: a. For a corporation: by a responsible corporate officer. 
For the purpose of this section, a responsible corporate officer means: a president, secretary, treasurer, or vice-president of the corporation in charge of a principal business function, or any other person who performs similar policy or decision-making functions for the corporation; or the manager of one or more manufacturing, production or operating facilities employing more than 250 persons or having gross annual sales or expenditures exceeding $25,000,000 (in second-quarter 1980 dollars) if authority to sign documents has been assigned or delegated to the manager in accordance with corporate procedures; b. For a partnership or sole proprietorship: by a general partner or the proprietor, respectively; or c. For a municipality, State, Federal, or other public agency: by either a principal executive officer or ranking elected official. For purposes of this section, a principal executive officer of a Federal agency includes (1) the chief executive officer of the agency, or (2) a senior executive officer having responsibility for the overall operations of a principal geographic unit of the agency (e.g., Regional Administrators of EPA). 2. All reports required by the permit and other information requested by the DEQ or authorized representative of the DEQ shall be signed by a person described above or by a duly authorized representative of that person. A person is a duly authorized representative only if: a. The authorization is made in writing by a person described above and submitted to the Director. b. The authorization specifies either an individual or a position having responsibility for the overall operation of the regulated facility or activity, such as the position of manager, operator, superintendent, or position of equivalent responsibility or an individual or position having overall responsibility for environmental matters for the company. 
(A duly authorized representative may thus be either a named individual or any individual occupying a named position). c. Changes to authorization. If an authorization under paragraph II.B. is no longer accurate because a different operator has responsibility for the overall operation of the construction site, a new notice of intent satisfying the requirements of paragraph II.B must be submitted to the Director prior to or together with any reports, information, or applications to be signed by an authorized representative. d. Certification. Any person signing documents under paragraph VI.G shall make the following certification: I certify under penalty of law that this document and all attachments were prepared under my direction or supervision in accordance with a system designed to assure that qualified personnel properly gathered and evaluated the information submitted. Based on my inquiry of the person or persons who manage the system, or those persons directly responsible for gathering the information, the information submitted is, to the best of my knowledge and belief, true, accurate, and complete. I am aware that there are significant penalties for submitting false information, including the possibility of fine and imprisonment for knowing violations. H. Penalties for Falsification of Reports OPDES provides that any person who knowingly makes any false material statement, representation, or certification in any record or other document submitted or required to be maintained under this permit, including reports of compliance or noncompliance, shall, upon conviction, be punished by a fine of not more than $10,000, or by imprisonment for not more than two years, or by both. I.
Oil and Hazardous Substance Liability Nothing in this permit shall be construed to preclude the institution of any legal action or relieve the permittee from any responsibilities, liabilities, or penalties to which the permittee is or may be subject under OPDES, section 311 of the CWA or section 106 of the Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA). J. Property Rights The issuance of this permit does not convey any property rights of any sort, nor any exclusive privileges, nor does it authorize any injury to private property nor any invasion of personal rights, nor any infringement of Federal, State or local laws or regulations. K. Severability The provisions of this permit are severable, and if any provision of this permit, or the application of any provision of this permit to any circumstance, is held invalid, the application of such provision to other circumstances, and the remainder of this permit shall not be affected thereby. L. Requiring an Individual Permit or an Alternative General Permit 1. The DEQ may require any person authorized by this permit to apply for and/or obtain either an individual OPDES permit or an alternative OPDES general permit. Any interested person may petition the DEQ to take action under this paragraph. Where the DEQ requires a discharger authorized to discharge under this permit to apply for an individual OPDES permit, the DEQ shall notify the discharger in writing that a permit application is required. This notification shall include a brief statement of the reasons for this decision, an application form, a statement setting a deadline for the discharger to file the application, and a statement that on the effective date of issuance or denial of the individual OPDES permit or the alternative general permit as it applies to the individual permittee, coverage under this general permit shall automatically terminate. Applications shall be submitted to the DEQ. 
The DEQ may grant additional time to submit the application upon request of the applicant. If a discharger fails to submit in a timely manner an individual OPDES permit application as required by the DEQ under this paragraph, then the applicability of this permit to the individual OPDES permittee is automatically terminated at the end of the day specified by the DEQ for application submittal. 2. Any discharger authorized by this permit may request to be excluded from the coverage of this permit by applying for an individual permit. In such cases, the permittee shall submit an individual application in accordance with the requirements of OAC 252:605, with reasons supporting the request, to the DEQ at the address indicated in this permit. The request may be granted by issuance of any individual permit or an alternative general permit if the reasons cited by the permittee are adequate to support the request. 3. When an individual OPDES permit is issued to a discharger otherwise subject to this permit, or the discharger is authorized to discharge under an alternative OPDES general permit, the applicability of this permit to the individual OPDES permittee is automatically terminated on the effective date of the individual permit or the date of authorization of coverage under the alternative general permit, whichever the case may be. When an individual OPDES permit is denied to an owner or operator otherwise subject to this permit, or the owner or operator is denied for coverage under an alternative OPDES general permit, the applicability of this permit to the individual OPDES permittee is automatically terminated on the date of such denial, unless otherwise specified by the DEQ. M. 
Proper Operation and Maintenance The permittee shall at all times properly operate and maintain all facilities and systems of treatment and control (and related appurtenances) which are installed or used by the permittee to achieve compliance with the conditions of this permit and with the requirements of storm water pollution prevention plans. Proper operation and maintenance also includes adequate laboratory controls and appropriate quality assurance procedures. Proper operation and maintenance requires the operation of backup or auxiliary facilities or similar systems, installed by a permittee, only when necessary to achieve compliance with the conditions of the permit. N. Inspection and Entry The permittee shall allow the DEQ, an authorized representative of the DEQ, or, in the case of a construction site which discharges through a municipal separate storm sewer, an authorized representative of the municipal operator of the separate storm sewer receiving the discharge, upon the presentation of credentials and other documents as may be required by law, to: 1. Enter upon the permittee's premises where a regulated facility or activity is located or conducted or where records must be kept under the conditions of this permit; 2. Have access to and copy at reasonable times, any records that must be kept under the conditions of this permit; and 3. Inspect at reasonable times any facilities or equipment (including monitoring and control equipment). P. Permit Actions This permit may be modified, revoked and reissued, or terminated for cause. The filing of a request by the permittee for a permit modification, revocation and reissuance, or termination, or notification of planned changes or anticipated noncompliance does not stay any permit action. Part VII. Reopener Clause A.
If there is evidence indicating that the storm water discharges authorized by this permit cause, or have the reasonable potential to cause or contribute to, a violation of a water quality standard, the discharger may be required to obtain an individual permit or an alternative general permit in accordance with Part I.C of this permit, or the permit may be modified to include different limitations and/or requirements. B. Permit modification or revocation will be conducted according to Oklahoma law. Part VIII. Termination of Coverage A. Notice of Termination Where a site has been finally stabilized and all storm water discharges from construction activities that are authorized by this permit are eliminated, or where the operator of all storm water discharges at a facility changes, the permittee must submit a Notice of Termination that is signed in accordance with Part VI.G of this permit. The Notice of Termination shall include the following information: 1. The street address (or a description of the location if no street address is available) of the construction site for which the notification is submitted; 2. The name, address and telephone number of the permittee submitting the Notice of Termination; 3. The OPDES permit number for the storm water discharge identified by the Notice of Termination; 4. An indication of whether the storm water discharges associated with construction activity have been eliminated or the operator of the discharges has changed; 5. For changes in operators, the name, address, and phone number of the new operator; and 6.
The following certification signed in accordance with Part VI.G (signatory requirements) of this permit: I certify under penalty of law that either: (a) all storm water discharges associated with construction activity from the portion of the identified facility where I was an operator have ceased or have been eliminated or (b) I am no longer an operator at the construction site and a new operator has assumed operational control for those portions of the construction site where I previously had operational control. I understand that by submitting this notice of termination, I am no longer authorized to discharge storm water associated with construction activity under this general permit, and that discharging pollutants in storm water associated with construction activity to waters of the State is unlawful under OPDES where the discharge is not authorized by an NPDES or OPDES permit. I also understand that the submittal of this notice of termination does not release an operator from liability for any violations of this permit or OPDES. For the purposes of this certification, elimination of storm water discharges associated with construction activity means that all disturbed soils at the portion of the construction site where the operator had control have been finally stabilized and temporary erosion and sediment control measures have been removed or will be removed at an appropriate time to ensure final stabilization is maintained, or that all storm water discharges associated with construction activities from the identified site that are authorized by an NPDES or OPDES general permit have otherwise been eliminated from the portion of the construction site where the operator had control. B. Addresses All Notices of Termination are to be sent, using the form provided by the Director (or a photocopy thereof), to the address specified on the NOT form. Part IX.
Definitions “Best Management Practices” (“BMPs”) means schedules of activities, prohibitions of practices, maintenance procedures, and other management practices to prevent or reduce the discharge of pollutants to waters of the State. BMPs also include treatment requirements, operating procedures, and practices to control plant site runoff, spillage or leaks, sludge or waste disposal, or drainage from raw material storage. "Control Measure"--As used in this permit, refers to any Best Management Practice or other method used to prevent or reduce the discharge of pollutants to waters of the United States. "Commencement of Construction"--The initial disturbance of soils associated with clearing, grading, or excavating activities or other construction activities. "CWA" means the Clean Water Act or the Federal Water Pollution Control Act, 33 U.S.C. 1251 et seq. "DEQ" means the Oklahoma Department of Environmental Quality. "Discharge of Storm Water Associated with Construction Activity"--As used in this permit, refers to storm water "point source" discharges from areas where soil disturbing activities (e.g., clearing, grading, or excavation), construction materials or equipment storage or maintenance (e.g., fill piles, concrete truck washout, fueling), or other industrial storm water directly related to the construction process (e.g., concrete or asphalt batch plants) are located. "Executive Director" means the Executive Director of the Oklahoma Department of Environmental Quality. "Final Stabilization" means that all soil disturbing activities at the site have been completed, and that a uniform (e.g., evenly distributed, without large bare areas) perennial vegetative cover with a density of 70% of the native background vegetative cover for the area has been established on all unpaved areas and areas not covered by permanent structures, or equivalent permanent stabilization measures (such as the use of riprap, gabions, or geotextiles) have been employed.
In some parts of the country, background native vegetation will cover less than 100% of the ground (e.g., arid areas). Establishing at least 70% of the natural cover of native vegetation meets the vegetative cover criteria for final stabilization. For example, if the native vegetation covers 50% of the ground, 70% of 50% would require 35% total cover for final stabilization. "Flow-weighted composite sample" means a composite sample consisting of a mixture of aliquots collected at a constant time interval, where the volume of each aliquot is proportional to the flow rate of the discharge. "Large and Medium municipal separate storm sewer system" means all municipal separate storm sewers that are either: (i) Located in an incorporated place (city) with a population of 100,000 or more as determined by the latest Decennial Census by the Bureau of Census (these cities are listed in Appendices F and G of 40 CFR 122); or (ii) Located in the counties with unincorporated urbanized populations of 100,000 or more, except municipal separate storm sewers that are located in the incorporated places, townships or towns within such counties (these counties are listed in Appendices H and I of 40 CFR 122); or (iii) Owned or operated by a municipality other than those described in paragraph (i) or (ii) and that are designated by the Director as part of the large or medium municipal separate storm sewer system. "NOI" means notice of intent to be covered by this permit (see Part II of this permit). "NOT" means notice of termination (see Part VIII of this permit). "OPDES" means the Oklahoma Pollutant Discharge Elimination System Act. 
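The final-stabilization vegetative-cover test above reduces to a simple proportion. As an illustration only (this sketch is not part of the permit, and the function names are hypothetical), it can be expressed as:

```python
# Illustrative sketch of the "Final Stabilization" cover criterion described
# above: established vegetative cover must reach at least 70% of the native
# background cover for the area. Not part of the permit text.

def required_cover(native_background_cover: float) -> float:
    """Total ground cover needed for final stabilization:
    70% of the native background vegetative cover (both as fractions)."""
    return 0.70 * native_background_cover

def meets_final_stabilization(native_background_cover: float,
                              established_cover: float) -> bool:
    """True if the established cover reaches 70% of the native background."""
    return established_cover >= required_cover(native_background_cover)

# Worked example from the permit text: if native vegetation covers 50% of
# the ground, 70% of 50% = 35% total cover is required.
print(required_cover(0.50))                    # 0.35
print(meets_final_stabilization(0.50, 0.35))   # True
```

This mirrors the arid-area example in the text: a 35% total cover satisfies the criterion where the native background cover is only 50%.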
"Operator" means any party associated with the construction project that meets either of the following 2 criteria: (1) The party has operational control over project specifications (including the ability to make modifications in specifications), or (2) the party has day-to-day operational control of those activities at a project site which are necessary to ensure compliance with the storm water pollution prevention plan or other permit conditions (e.g., they are authorized to direct workers at the site to carry out activities identified in the storm water pollution prevention plan or comply with other permit conditions). “Point Source” means any discernible, confined, and discrete conveyance, including but not limited to, any pipe, ditch, channel, tunnel, conduit, well, discrete fissure, container, rolling stock, concentrated animal feeding operation, landfill leachate collection system, vessel or other floating craft from which pollutants are or may be discharges. This term does not include return flows from irrigated agriculture or agricultural storm water runoff. “Runoff coefficient” means the fraction of total rainfall that will appear at the conveyance as runoff. “Storm Water” means storm water runoff, snow melt runoff, and surface runoff and drainage. “Storm Water Associated with Industrial Activity” is defined at 40 CFR 122.26(b)(14) and incorporated here by reference. Most relevant to this permit is 40 CFR 122.26(b)(14)(x), which relates to construction activity including clearing, grading and excavation activities. 
“Waters of the State” means all streams, lakes, ponds, marshes, watercourses, waterways, wells, springs, irrigation systems, drainage systems, storm sewers, and all other bodies or accumulations of water, surface and underground, natural or artificial, public or private, which are contained within, flow through or border upon this state or any portion thereof, and shall include in all circumstances the waters of the United States which are contained within the boundaries of, flow through or border upon this state or any portion thereof. Addendum A--Endangered Species Guidance I. Instructions Below is a list of endangered and threatened species that EPA has determined may be affected by the activities covered by the baseline construction general permit (BCGP). These species are listed by county. In order to get BCGP coverage, applicants must: + Indicate in the box provided on the NOI whether any species listed in this Addendum or critical habitat are in proximity to the facility, + Certify pursuant to Section I.B.3.e that they have followed the procedures found in Addendum A to protect listed endangered and threatened species and designated critical habitat and that the storm water discharges and BMPs to control storm water runoff covered under this permit meet one or more of the eligibility requirements of Part I.B.3.e.(1) of this permit, while checking the box(es) that correspond to paragraph (a), (b), (c), (d), or (e) of Part I.B.3.e.(1) for which eligibility is claimed. To do this, please follow steps 1 through 6 below when developing the pollution prevention plan. Step 1: Determine if the Construction Site Is Found Within Designated Critical Habitat for Listed Species Some (but not all) listed species have designated critical habitat. Exact locations of such habitat are provided in the Service regulations at 50 CFR parts 17 and 226.
To determine if their construction site occurs within (also known as "in proximity to") critical habitat, applicants should either review those regulations or contact the United States Fish and Wildlife Service (FWS) Office, 222 S. Houston, Tulsa, Oklahoma, 74127. If the construction site is not located in designated critical habitat, then the applicant need not consider impacts to critical habitat when following steps 2 through 5. If the applicant's site is located within (i.e., in proximity to) critical habitat, then the applicant must look at impacts to critical habitat when following steps 2 through 6. (EPA notes that many measures imposed to protect listed species under steps 2 through 6 will also protect critical habitat. However, obligations to ensure that an action is not likely to result in the destruction or adverse modification of critical habitat are separate from those of ensuring that an action is not likely to jeopardize the existence of threatened and endangered species. Thus, meeting the eligibility requirements of this permit may require measures to protect critical habitat that are separate and distinct from those to protect listed species.) Step 2: Review the County Species List To Determine if any Species Are Located in the County Where the Construction Activities Occur If no species are listed in a facility's county or if a facility's county is not found on the list, an applicant is eligible for BCGP coverage and may indicate in the NOI that no species are found in proximity and certify that it is eligible for BCGP coverage under Part I.B.3.e.(1)(a) of the permit by marking box a. in the certification provisions of the NOI. Where a facility is located in more than one county, the lists for all counties should be reviewed. If species are located in the county, follow step 3 below.
Step 3: Determine if any Species May Be Found "In Proximity" to the Construction Activity's Storm Water Discharges A species is in proximity to a construction activity's storm water discharge when the species is: + Located in the path or immediate area through which or over which contaminated point source storm water flows from construction activities to the point of discharge into the receiving water. + Located in the immediate vicinity of, or nearby, the point of discharge into receiving waters. + Located in the area of a site where storm water BMPs are planned or are to be constructed. The area in proximity to be searched/surveyed for listed species will vary with the size and structure of the construction activity, the nature and quantity of the storm water discharges, and the type of receiving waters. Given the number of construction activities potentially covered by the BCGP, no specific method to determine whether species are in proximity is required for permit coverage under the BCGP. Instead, applicants should use the method or methods which best allow them to determine to the best of their knowledge whether species are in proximity to their particular construction activities. These methods may include: + Conducting visual inspections: This method may be particularly suitable for construction sites that are smaller in size or located in non-natural settings such as highly urbanized areas or industrial parks where there is little or no natural habitat, or for construction activities that discharge directly into municipal storm water collection systems. + Contacting the nearest U.S. Fish and Wildlife Service (FWS) office. Many endangered and threatened species are found in well-defined areas or habitats. That information is frequently known to State or Federal wildlife agencies. + Contacting local/regional conservation groups. These groups inventory species and their locations and maintain lists of sightings and habitats. 
+ Conducting a formal biological survey. Larger construction sites with extensive storm water discharges may choose to conduct biological surveys as the most effective way to assess whether species are located in proximity and whether there are likely adverse effects. + Conducting an Environmental Assessment Under the National Environmental Policy Act (NEPA). Some construction activities may require environmental assessments under NEPA. Such assessments may indicate if listed species are in proximity. (BCGP coverage does not trigger NEPA because it does not regulate any dischargers subject to New Source Performance Standards under section 306 of the Clean Water Act. See CWA Sec. 511(c). However, some construction activities might require review under NEPA because of Federal funding or other Federal nexus.) If no species are in proximity, an applicant is eligible for BCGP coverage and may indicate that in the NOI and certify that it is eligible for BCGP coverage under Part I.B.3.e.(1)(a) of the permit by marking box a. in the certification provisions of the NOI. If listed species are found in proximity to a facility, applicants must indicate the location and nature of this presence in the Pollution Prevention Plan and follow step 4 below. Step 4: Determine if Species or Critical Habitat Could Be Adversely Affected by the Construction Activity’s Storm Water Discharges or by BMPs To Control Those Discharges Scope of Adverse Effects: Potential adverse effects from storm water include: + Hydrological. Storm water may cause siltation, sedimentation or induce other changes in the receiving waters such as temperature, salinity or pH. These effects will vary with the amount of storm water discharged and the volume and condition of the receiving water. Where a storm water discharge constitutes a minute portion of the total volume of the receiving water, adverse hydrological effects are less likely. + Habitat. Storm water may drain or inundate listed species habitat.
+ Toxicity. In some cases, pollutants in storm water may have toxic effects on listed species. The scope of effects to consider will vary with each site. Applicants must also consider the likelihood of adverse effects on species from any BMPs to control storm water. Most adverse impacts from BMPs are likely to occur from the construction activities. However, it is possible that the operation of some BMPs (for example, larger storm water retention ponds) may affect endangered and threatened species. If adverse effects are not likely, then the applicant should certify that it is eligible for BCGP coverage under Part I.B.3.e.(1)(a) of the permit by marking box a. in the certification provisions of the NOI. If adverse effects are likely, applicants should follow step 5 below. Step 5: Determine if Measures Can Be Implemented To Avoid any Adverse Effects If an applicant determines that adverse effects are likely, it can receive coverage if appropriate measures are undertaken to avoid or eliminate any actual or potential adverse effects prior to applying for permit coverage. These measures may involve relatively simple changes to construction activities such as re-routing a storm water discharge to bypass an area where species are located, relocating BMPs, or limiting the size of construction activity that will be subject to storm water discharge controls. At this stage, applicants may wish to contact the FWS to see what appropriate measures might be suitable to avoid or eliminate adverse impacts to listed species and/or critical habitat. (See 50 CFR 402.13(b).) This can entail the initiation of informal consultation with the FWS, which is described in more detail below at Step 6. If applicants adopt measures to avoid or eliminate adverse effects, they must continue to abide by them during the course of permit coverage. These measures must be described in the pollution prevention plan and may be enforceable as permit conditions.
If appropriate measures to avoid the likelihood of adverse effects are not available to the applicant, the applicant should follow Step 6 below. Step 6: Determine if the Eligibility Requirements of Part I.B.3.e.(1)(b)-(e) Can Be Met Where adverse effects are likely, the applicant must contact the EPA and FWS. Applicants may still be eligible for BCGP coverage if any likelihood of adverse effects is addressed through meeting the criteria of Part I.B.3.e.(1)(b)-(e) of the permit. To do so, the applicant may rely on one of the following: + I.B.3.e.(1)(b). The applicant’s activity has received previous authorization through an earlier section 7 consultation or issuance of an ESA section 10 permit (incidental taking permit), and that authorization addressed storm water discharges and/or BMPs to control storm water runoff (e.g., developer included impact of entire project in consultation under Section 7 of the Endangered Species Act over a wetlands dredge and fill permit). If the applicant is eligible for coverage under this criterion, it should indicate this by marking box (b) of the certification provisions. + I.B.3.e.(1)(c). The applicant’s activity was considered as part of a larger, more comprehensive assessment of impacts on endangered and threatened species and/or critical habitat under section 7 or section 10 of the Endangered Species Act which accounts for storm water discharges and BMPs to control storm water runoff (e.g., where an area-wide habitat conservation plan and section 10 permit are issued which address impacts from construction activities including those from storm water, or a NEPA review is conducted which incorporates ESA section 7 procedures). If the applicant is eligible for coverage under this criterion, it should indicate this by marking box (c) of the certification provisions. + I.B.3.e.(1)(d). Enter section 7 consultation with the FWS for the applicant’s storm water discharges and BMPs to control storm water runoff.
In such cases, EPA automatically designates the applicant as a non-federal representative. See I.B.3.e.(4). When conducting section 7 consultation as a non-federal representative, applicants should follow the procedures found in 50 CFR part 402, the ESA regulations. Applicants must also notify EPA and the appropriate FWS office of their intention to conduct consultation as a non-federal representative. Coverage by the BCGP is permissible under Part I.B.3.e.(1)(d) if the consultation results in either: (1) FWS written concurrence with a finding of no likelihood of adverse effects (see 50 CFR 402.13) or (2) issuance of a biological opinion in which FWS finds that the action is not likely to jeopardize the continued existence of listed endangered or threatened species or result in the adverse modification or destruction of critical habitat (see 50 CFR 402.14(h)). Any terms and conditions developed through consultations to protect listed species and critical habitat must be incorporated into the pollution prevention plan. As noted above, applicants may, if they wish, initiate consultation during Step 5 above (upon becoming aware that endangered and threatened species are in proximity to the facility). If the applicant is eligible for coverage under this criterion, it should indicate this by marking box (d) of the certification provisions. + I.B.3.e.(1)(e). The applicant’s activity was considered as part of a larger, more comprehensive site-specific assessment of impacts on endangered and threatened species by the owner or other operator of the site when it developed a SWPPP, and that permittee certified eligibility under items I.B.3.e.(1)(a), (b), (c), or (d) of the permit (e.g., owner was able to certify no adverse impacts for the project as a whole under item (a), so the contractor can then certify under item (e)).
Utility companies applying for area-wide permit coverage may certify under item (e) since authorization to discharge is contingent on a principal operator of a construction project having been granted coverage under this, or an alternative NPDES or OPDES permit, for the areas of the site where utilities installation activities will occur. If the applicant is eligible for coverage under this criterion, it should indicate this by marking box (e) of the certification provisions. The applicant must comply with any terms and conditions imposed under the eligibility requirements of paragraphs I.B.3.e.(1)(a) through (e) to ensure that storm water discharges or BMPs to control storm water runoff are protective of listed endangered and threatened species and/or critical habitat. Such terms and conditions must be incorporated in the applicant’s storm water pollution prevention plan. If the eligibility requirements of Part I.B.3.e.(1)(a)-(e) cannot be met, then the applicant may not receive coverage under the BCGP. Applicants should then consider applying to DEQ for an individual permit. This permit does not authorize any “taking” (as defined under section 9 of the Endangered Species Act) of endangered or threatened species unless such takes are authorized under section 7 or 10 of the Endangered Species Act. Applicants who believe their construction activities may result in takes of listed endangered and threatened species should be sure to get the necessary coverage for such takes through an individual consultation or section 10 permit. This permit does not authorize any storm water discharges or BMPs to control storm water runoff that are likely to jeopardize the continued existence of any species that are listed as endangered or threatened under the Endangered Species Act or result in the adverse modification or destruction of designated critical habitat.
| State/County | Group name | Inventory name | Scientific name | IR/FE | |--------------|------------|----------------|-----------------|-------| | CARTER | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | CHEROKEE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | CHICOTAW | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | CINAMARON | PLANTS | ORCHID, WESTERN PRAIRIE FRINGED | Habenaria leucopetala. | | | CLEVELAND | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | | FISHES | SHIVER, ARKANSAS RIVER | NOTROPIS GIRARDI. | | | | DIFOS | CRANE, WHOOPING | Grus americana. | | | COMANCHE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | COTTON | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | CRAIG | FISHES | CAVERISH, OZARK | Amblycysis rosea. | | | | | MUSSELS, PIGEON | Musculium pigginsi. | | | CREEK | PLANTS | ORCHID, WESTERN PRAIRIE FRINGED | Platanthera praecoxa. | | | | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | CUSTER | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | DELAWARE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. 
| | | DEWEY | FISHES | CAVERISH, OZARK | Amblycysis rosea. | | | | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | ELLIS | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | Charadrius melodus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | GARFIELD | BIRDS | CRANE, WHOOPING | Grus americana. | | | GARVIN | BIRDS | CRANE, WHOOPING | Grus americana. | | | GRADY | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST, INTERIOR (POPULATION) | Sterna antillarum. | | | State/County | Group name | Inventory name | Scientific name | IR/FF | |--------------|------------|----------------|-----------------|-------| | GRANT | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | GREER | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | HARMON | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | HARPER | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | HADSELL | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. 
| | | HUGHES | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | JACKSON | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | JEFFERSON | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | JOHNSTON | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | KAY | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | KINGFISHER | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | KIOWA | BIRDS | CRANE, WHOOPING | Grus americana. | | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | LE FLORE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | | PLOVER, PIPING | +haradnus melocus. | | | | | TERN, INTERIOR (POPULATION) | Sterna antillarum. | | | | | TERN, LEAST. | Sterna antillarum. | | | | CLAMS | ROCK-POCKETBOOK, OUACHITA | Arkansa (=Arcidens) wheeleri. 
| | | State/County | Group name | Inventory name | Scientific name | |--------------|------------|----------------|-----------------| | LINCOLN | FIGIUS | ROCK-POCKETBOOK, OUACHITA (+WHEELER'S PM). | Arkansas (=Arctidens) wheeleri. | | | BIRDS | DARTER, LEOPARD | Pteronetta australis. | | | | CRANE, WHOOPING | Grus americana. | | LOGAN | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | CRANE, WHOOPING | Grus americana. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | LOVE | BIRDS | CRANE, WHOOPING | Grus americana. | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MAJOR | BIRDS | CRANE, WHOOPING | Grus americana. | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MARSHALL | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MAYES | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | CAVEPHER, OZARK | Amblycassis rosea. | | MCCLAIN | BIRDS | CRANE, WHOOPING | Grus americana. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MCCURTAIN | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | FISHES | DARTER, LEOPARD | Pteronetta carolinensis. | | | REPTILES | ALLIGATOR, AMERICAN | Alligator mississippiensis. 
| | MCINTOSH | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MURRAY | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | MUSKOGEE | BIRDS | CRANE, WHOOPING | Grus americana. | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | NOBLE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +Haradnus melodus. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST). | Sterna antillarum. | | NOWATA | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | OKLAHOMA | BIRDS | PLOVER, PIPING | +Haradnus melodus. | | | | CRANE, WHOOPING | Grus americana. | | State/County | Group name | Inventory name | Scientific name | |--------------|------------|----------------|-----------------| | OSAGE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +haradrius melodus. | | | | TERN, INTERIOR (POPULATION LEAST) | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST) | Sterna antillarum. | | | FISHES | CRANE, WHOOPING | Grus americana. | | | | CURI, EW, ESKIMO | Numenius borealis. | | | | EAGLE, BALD | Haliaeetus leucocephalus. | | | | PLOVER, PIPING | +haradrius melodus. | | | | TERN, INTERIOR (POPULATION LEAST) | Sterna antillarum. | | | | TERN, INTERIOR (POPULATION LEAST) | Sterna antillarum. | | OTTAWA | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. | | | | CAVERISH, OZARK | Amblyopsis rosea. | | | | MADTOM, NEOSHO | Noturus platanus. | | | | CRANE, WHOOPING | Grus americana. | | | | EAGLE, BALD | Haliaeetus leucocephalus. 
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| PAWNEE | BIRDS | CRANE, WHOOPING | Grus americana. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| PAYNE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| PITTSBURG | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| PONTOTOC | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| POTTAWATOMIE | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| PUSHMATAHA | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | CLAMS | ROCK-POCKETBOOK, OUACHITA (=WHEELER'S PEARLY MUSSEL) | Arkansia (=Arcidens) wheeleri. |
| | FISHES | DARTER, LEOPARD | Percina pantherina. |
| ROGER MILLS | BIRDS | CRANE, WHOOPING | Grus americana. |
| | | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| ROGERS | BIRDS | CRANE, WHOOPING | Grus americana. |
| | | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| SEMINOLE | PLANTS | ORCHID, WESTERN PRAIRIE FRINGED | Platanthera praeclara. |
| | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| STEPHENS | BIRDS | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| | | CRANE, WHOOPING | Grus americana. |
| TEXAS | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | CRANE, WHOOPING | Grus americana. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| TILLMAN | BIRDS | CRANE, WHOOPING | Grus americana. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| TULSA | BIRDS | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| WAGONER | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| WASHINGTON | BIRDS | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| | | CRANE, WHOOPING | Grus americana. |
| WASHITA | BIRDS | EAGLE, BALD | Haliaeetus leucocephalus. |
| WOODS | BIRDS | PLOVER, PIPING | Charadrius melodus. |
| | | CRANE, WHOOPING | Grus americana. |
| | | CURLEW, ESKIMO | Numenius borealis. |
| | | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |
| WOODWARD | BIRDS | CRANE, WHOOPING | Grus americana. |
| | | EAGLE, BALD | Haliaeetus leucocephalus. |
| | | PLOVER, PIPING | Charadrius melodus. |
| | | TERN, LEAST (INTERIOR POPULATION) | Sterna antillarum. |

OKLAHOMA DEPARTMENT OF TRANSPORTATION STORMWATER RUN-OFF SUBCONTRACTOR CERTIFICATION PROJECT NO.______________________ JOB PIECE: ________________ COUNTY: __________________________ This certification is to be signed by subcontractors implementing Stormwater Control Measures (earthwork, erosion control, etc.) on the above-identified project(s), as part of the subcontract approval. I certify under penalty of law that I understand the terms and conditions of the Oklahoma Pollutant Discharge Elimination System (OPDES) General Permit that authorizes storm water discharges associated with construction activity from the construction site identified as part of this certification. _________________________________ ___________________________________ Name of Certifier Name of Firm _________________________________ ___________________________________ Date Address of Firm _________________________________ Firm's Phone No. _________________________________ Resident Engineer: This completed form is to be placed with the Stormwater Run-Off Plan. Constr. 9-23-97 The permit for the National Pollutant Discharge Elimination System (NPDES) we have been operating under since 1992 will expire September 9, 1997. Projects which will not be stabilized until after October 9, 1997 will require a new Notice of Intent (NOI) submittal to "administratively extend" the current permit. With reference to the enclosures, please note the following: Notices of Termination (NOT) will be automatic on all active projects September 9, 1997. Projects which will not be stabilized within 30 days (by October 9, 1997) will require a new submittal of the NOI. Use the new EPA Form 3510-6 (enclosed). Prepare one for the contractor and one for the Local Government entity if applicable. 
For projects which will become stabilized (NOT stage) by October 9, 1997, no further action is necessary (an NOI is not required). Enclosure: 3 sheets Byron Poynter Construction Engineer Copy to: Distribution List Instructions for Permittees During Reissuance of the EPA Storm Water Construction General Permit EPA is preparing to reissue its Storm Water Construction General Permit. Please read these instructions carefully. Following these instructions will assist you in maintaining NPDES Storm Water General Permit coverage for construction activities in States where EPA is the permitting authority during the permit reissuance period. General Information The 1987 Congressional Amendments to the Clean Water Act require EPA to control the discharge of pollutants from storm water point sources. Regulations were finalized by EPA in 1990, and storm water permits for construction sites disturbing five or more acres were required starting in 1992. The 1992 EPA Baseline General Permit for Construction Activities expires at midnight, September 9, 1997, or midnight, September 25, 1997, depending on where the construction activity is located\(^1\). EPA proposed a new Construction General Permit\(^2\) in the Federal Register on June 2, 1997 (Volume 62, Number 105, pages 29785-29825). Public comments will be accepted on the proposed permit through August 1, 1997. Copies of the proposed permit are available through the USEPA Office of Water Resources Center at (202) 260-7786 or through the following internet sites: http://www.epa.gov/owmtnet/pipes/storm.htm http://www.epa.gov/earth1r6/6en/w/sw/home.htm There is the potential that the new Construction General Permit will not be issued prior to the expiration of the 1992 Baseline Construction General Permit. 
According to the Administrative Procedures Act, permittees that wish to continue permit coverage for construction activities under the 1992 Baseline General Permit beyond September 9, 1997 (or September 25, 1997 in certain areas\(^3\)) must "administratively extend" their existing Baseline Construction General Permit to have continuing permit coverage until EPA issues the new permit. The following instructions provide guidance on how to administratively extend your existing permit and how to apply for the new permit once it is final. (Please note that the following instructions are based on the terms and conditions of the proposed new Construction General Permit published in the Federal Register on June 2, 1997.) --- \(^1\) The 1992 EPA Baseline Construction Permit expires at midnight, September 25, 1997, in Massachusetts, Washington DC, Guam, American Samoa, non-Indian lands in Florida, Indian lands in New York and at Federal Facilities in Delaware. The permit expires at midnight, September 9, 1997, in all other areas where EPA is the permitting authority. It should be noted that there is conflicting information in the 1992 Baseline Construction General Permit that states that the expiration date is October 1, 1997 (57 FR 41223 and 57 FR 44454). However, EPA believes that the more consistent reading of the permit in accordance with the Clean Water Act would provide for the permit to expire at midnight, September 9, 1997, and September 25, 1997, respectively. \(^2\) The proposed EPA Construction General Permit does not extend coverage to construction activities in Florida. EPA Region IV proposed a separate Construction General Permit to cover those discharges. 
This permit was proposed in the Federal Register on April 16, 1997 (Volume 62, Number 73, pages 18605-18628); the proposal was modified and extended to include coverage of discharges from construction activities on Indian country lands in Alabama, Mississippi, Florida, North Carolina and South Carolina on June 27, 1997 (Volume 62, Number 124, pages 35053 - 35057). Comments on the Region IV proposal are due on or before August 26, 1997. A. TO EXTEND EXISTING BASELINE GENERAL PERMIT COVERAGE: 1. Submit a Notice of Intent (NOI) form for extended coverage under the 1992 Baseline Construction General Permit prior to September 9, 1997 (or September 25, 1997 in certain areas) to the address given in B.3 below. Use EPA NOI Form 3510-6 (enclosed); the form number is printed on the bottom left corner of the form. Submitting this form indicates that you wish to continue coverage under an "administratively extended" Baseline Construction General Permit until EPA publishes the new Construction General Permit. Include the project's existing NPDES Permit Number in Section IV of the NOI form. If the NPDES Permit Number is not known, contact the EPA NOI Processing Center at (703) 931-3230. 2. Continue to follow the terms and conditions of the 1992 Baseline Construction General Permit until coverage is acquired under the new Construction General Permit as described below. NOTE: Permittees that have terminated construction activity and do not wish to remain covered under the Baseline Construction General Permit should not submit an NOI for an administrative extension. Permittees may submit a Notice of Termination (NOT) (EPA Form 3510-7) to terminate coverage at any time prior to September 9, 1997 (or September 25, 1997 in certain areas), but coverage will terminate automatically when the permit expires at midnight, September 9, 1997 (or September 25, 1997 in certain areas), unless an NOI for extended permit coverage is submitted. B. 
TO ACQUIRE CONSTRUCTION GENERAL PERMIT COVERAGE UNDER THE EPA REISSUED CONSTRUCTION GENERAL PERMIT WHEN FINAL 1. Obtain a copy of the final reissued 1997 EPA Construction General Permit when it is published in the Federal Register. 2. Read and comply with all aspects of the new EPA Construction General Permit (note that some requirements may differ from those of the 1992 Baseline Construction General Permit). 3. Submit the new construction NOI form that was published with the final Construction General Permit within 30 days of the effective date of the final Construction General Permit to: Storm Water Notice of Intent (4203) USEPA 401 M Street, S.W. Washington, DC 20460 NOTE: The new Construction General Permit contains a construction-specific NOI that is different from NOIs that have been used in the past. It is EPA's intent to publish the new construction NOI form with the new EPA Construction General Permit in the Federal Register. Applicants must submit the new construction NOI form when applying for coverage under the new EPA Construction General Permit once it is issued in final. Applicants that submit an NOI for coverage under the new Construction General Permit are required to remain in compliance with the 1992 Baseline Construction General Permit during the time between the effective date of the new permit and 30 days thereafter. If your construction activity will meet the requirements for termination of coverage (i.e., will be finally stabilized) prior to 30 days after the effective date of the new Construction General Permit, submittal of an NOI for coverage under the new Construction General Permit is not required. However, submittal of an NOI for extended coverage beyond September 9, 1997 (or September 25, 1997 in certain areas) under the 1992 Baseline Construction General Permit is still required if construction is expected to extend beyond these dates. 
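Taken together, sections A and B above describe a small decision procedure based on when a site will be finally stabilized. The sketch below restates that logic; it is illustrative only. The function name is ours, and the October 9, 1997 cutoff is an assumption standing in for "30 days after the effective date of the final permit," whose actual date was not known when these instructions were written.

```python
from datetime import date

# A sketch of the filing logic in sections A and B above. NEW_PERMIT_CUTOFF
# assumes the reissued permit takes effect at the Baseline permit's
# expiration, so the 30-day window is approximated as October 9, 1997;
# the real cutoff depends on when the final permit is published.

BASELINE_EXPIRES = date(1997, 9, 9)           # most areas
BASELINE_EXPIRES_SPECIAL = date(1997, 9, 25)  # MA, DC, Guam, etc. (footnote 1)
NEW_PERMIT_CUTOFF = date(1997, 10, 9)         # assumed 30-day window

def required_filings(final_stabilization: date, special_area: bool = False) -> list:
    """Return the NOI filings a construction permittee must make."""
    expiration = BASELINE_EXPIRES_SPECIAL if special_area else BASELINE_EXPIRES
    filings = []
    if final_stabilization > expiration:
        # Section A: administratively extend the 1992 Baseline permit.
        filings.append("NOI to extend 1992 Baseline permit (Form 3510-6)")
    if final_stabilization > NEW_PERMIT_CUTOFF:
        # Section B: apply for coverage under the reissued permit.
        filings.append("NOI for reissued Construction General Permit")
    return filings
```

For example, a site finally stabilized in August 1997 files nothing, while one running into 1998 files both NOIs.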
In this case, the permittee is required to remain in compliance with the 1992 Baseline Construction General Permit during the time between September 9, 1997 (or September 25, 1997 in certain areas) and the date on which the site is finally stabilized. General The United States Environmental Protection Agency is issuing a new National Pollutant Discharge Elimination System (NPDES) Phase I Storm Water Construction General Permit. EPA intends to issue the new permit to coincide with the expiration of the Baseline Construction General Permit in order to allow for continued general permit availability. The Baseline Construction General Permit was issued by EPA on September 9, 1992 in most areas (57 FR 41297) and September 25, 1992 in other areas (57 FR 44412) and will expire on September 9, 1997 or September 25, 1997, respectively. EPA Region IV is issuing a separate permit for construction activities in the State of Florida. Issuance of the new Construction General Permit will not affect areas where the State is the NPDES permitting authority. The proposed Construction General Permit was published in the Federal Register on June 2, 1997 (62 FR 29785). Background The 1987 Congressional Amendments to the Clean Water Act required EPA to control pollution from storm water discharges. Regulations were finalized by EPA in 1990, and storm water permits for construction sites disturbing five or more acres were required beginning in 1992. What's New? Revisions to the technical aspects of the permit involve improvements in clarity and certain new requirements. 
The most significant proposed changes include: expanded conditions to protect endangered and threatened species; new conditions to protect historic properties; a requirement to post a copy of confirmation of permit coverage and a brief description of the project; a requirement to provide for public access to copies of a pollution prevention plan on the site or in another nearby location; terms for construction activities transitioning from the existing permit; clarification of who must become a permittee and their requirements; a streamlined permitting option for utility companies; a requirement to submit a Notice of Termination (NOT) when construction is completed; the ability to acquire permit coverage for other construction-dedicated industrial activities (e.g., a concrete batching plant) under one general permit; and pollution prevention plan performance standards. Endangered species and historic preservation requirements were modeled after those in the 1995 NPDES Storm Water Multi-Sector General Permit for industrial activities. The process was changed to allow the developer of a site to comply with the requirements of the Endangered Species Act and National Historic Preservation Act while avoiding duplication of effort on the part of subsequent operators (e.g., contractors, etc.). The Endangered Species Act and the National Historic Preservation Act require EPA to consult with the U.S. Fish and Wildlife Service, National Marine Fisheries Service, and Advisory Council on Historic Preservation on the effect that discharges authorized under this permit have on endangered species and historic properties. In general, these Acts prohibit EPA from authorizing discharges that would jeopardize the survival of endangered species or adversely impact historic properties. Public Comment Public comments on the proposed permit will be accepted through August 1, 1997. 
Public Hearings Public hearings will be held during the months of June and July in Boston, MA, Portland, ME, Concord, NH, Houston, TX, Albuquerque, NM, Dallas, TX, Phoenix, AZ, Boise, ID, Seattle, WA, and Anchorage, AK. Consult with local EPA Storm Water Coordinators for exact times, dates, and locations. Additional Information Copies of the proposed permit as published in the Federal Register are available through the EPA Office of Water Resources Center at (202) 260-7786 or through the following internet sites: http://www.epa.gov/owmimel/pipes/storm.htm http://www.epa.gov/earth1r6/sw/en/sw/home.htm PUBLIC HEARING DATES, TIMES, and LOCATIONS for the NEW CONSTRUCTION GENERAL PERMIT EPA Region 1 Portland, Maine Date: Tuesday, July 22, 1997. Time: 2:00 pm-5:00 pm. Place: Portland City Hall, 389 Congress Street, Room 208, Portland, ME 04101. Boston, Massachusetts Date: Thursday, July 24, 1997. Time: 6:00 pm-9:00 pm. Place: John A. Volpe National Transportation Systems Center, 55 Broadway--Kendall Square, Cambridge, MA 02142. EPA Region 6 Public Meetings were held previously in Dallas, TX, Houston, TX, and Albuquerque, NM, and a Public Hearing was held in Dallas, TX for this Region. EPA Region 9 Phoenix, Arizona Date: July 24, 1997. Time: 1-5 p.m. Place: Arizona Department of Environmental Quality, Public Meeting Room, 3033 North Central Ave., Phoenix, Arizona. EPA Region 10 Boise, Idaho Date: Thursday, July 24, 1997. Time: 6:00 pm-10:00 pm. Place: Idaho Public Television Building, Telemedia Room (First Floor), 1455 North Orchard, Boise, Idaho 83706. Seattle, Washington Date: Tuesday, July 29, 1997. Time: 6:00 pm-10:00 pm. Place: Park Place Building, Denali/Kenai Room (14th Floor), 1200 6th Avenue, Seattle, Washington 98101. Anchorage, Alaska Date: Thursday, July 31, 1997. Time: 5:00 pm-9:00 pm. Place: Federal Building/United States Court House, Room 135, 222 West 7th Avenue, Anchorage, Alaska 99513. 
Notice of Intent (NOI) for Storm Water Discharges Associated with Industrial Activity Under an NPDES General Permit The completion of this Notice of Intent constitutes notice that the party identified in Section II of this form intends to be authorized by an NPDES permit issued for storm water discharges associated with industrial activity in the State identified in Section III of this form. Becoming a permittee obligates such discharger to comply with the terms and conditions of the permit. ALL NECESSARY INFORMATION MUST BE PROVIDED ON THIS FORM. I. Permit Selection: You must indicate the NPDES Storm Water general permit under which you are applying for coverage. Check one box only. - Baseline Industrial - Baseline Construction - Multi-Sector (Group Permit) II. Facility Operator Information Name: ___________________________ Phone: ___________________________ Address: ___________________________ Status of Owner/Operator: [ ] City: ___________________________ State: ______ ZIP Code: ______ III. Facility/Site Location Information Name: ___________________________ Is the facility located on Indian Lands? (Y or N) Address: ___________________________ City: ___________________________ State: ______ ZIP Code: ______ Latitude: ______ Longitude: ______ Quarter: ______ Section: ______ Township: ______ Range: ______ IV. Site Activity Information MS4 Operator Name: ___________________________ If you are filing as a co-permittee, enter storm water general permit number: ___________________________ SIC or Designated Activity Code: Primary: ______ 2nd: ______ Is the facility required to submit monitoring data? (1, 2, 3, or 4) [ ] If You Have Another Existing NPDES Permit, Enter Permit Number: ___________________________ V. 
Additional Information Required for Construction Activities Only Project Start Date: ______ Completion Date: ______ Estimated Area to be Disturbed (in Acres): ______ Is the Storm Water Pollution Prevention Plan in compliance with State and/or Local sediment and erosion plans? (Y or N) [ ] VI. Certification: The certification statement in Box 1 applies to all applicants. The certification statement in Box 2 applies only to facilities applying for the Multi-Sector storm water general permit. BOX 1 ALL APPLICANTS: I certify under penalty of law that this document and all attachments were prepared under my direction or supervision in accordance with a system designed to assure that qualified personnel properly gather and evaluate the information submitted. Based on my inquiry of the person or persons who manage the collection or preparation of the information, I believe that the information submitted is accurate and complete. I am aware that there are significant penalties for making false statements, including the possibility of fine and imprisonment for knowing violations. BOX 2 MULTI-SECTOR STORM WATER GENERAL PERMIT APPLICANTS ONLY: I certify under penalty of law that I have read and understand the Part I.B. eligibility requirements for coverage under the Multi-Sector storm water general permit, including those requirements relating to the protection of species identified in Addendum H. To the best of my knowledge, the discharges covered under this permit, and construction of BMPs to control storm water run-off, are not likely to adversely affect any species identified in Addendum H of the Multi-Sector storm water general permit, or are otherwise eligible for coverage due to previous authorization under the Endangered Species Act. 
To the best of my knowledge, I further certify that such discharges, and construction of BMPs to control storm water run-off, do not have an effect on properties listed or eligible for listing on the National Register of Historic Places under the National Historic Preservation Act, or are otherwise eligible for coverage due to a previous agreement under the National Historic Preservation Act. I understand that continued coverage under the Multi-Sector general permit is contingent upon maintaining eligibility as provided for in Part I.B. Print Name: ___________________________ Date: ______ Who Must File A Notice Of Intent (NOI) Form Federal law at 40 CFR Part 122 prohibits point source discharges of storm water associated with industrial activity to a water body(ies) of the United States without a National Pollutant Discharge Elimination System (NPDES) permit. The operator of an industrial activity that has such a storm water discharge must submit an NOI for coverage under an NPDES Storm Water General Permit. If you have questions about whether you need a permit under the NPDES Storm Water program, or if you need information as to whether a particular program is administered by EPA or a state agency, telephone or write to the Notice of Intent Processing Center at (703) 931-3230. Where To File NOI Form NOIs must be sent to the following address: Storm Water Notice of Intent (4203) 401 M Street, S.W. Room 2104 Northeast Mall Washington, DC 20460 (202) 260-9541* * This telephone number should be used as the recipient's number for express deliveries. The telephone number at the Notice of Intent Processing Center is (703) 931-3230. Completing The Form You must type or print, using upper-case letters, in the appropriate areas only. Please place each character within the marks. Abbreviate if necessary to stay within the number of characters allowed for each item. 
Use one space for breaks between words, but not for punctuation marks unless they are needed to clarify your response. If you have any questions on this form, call the Notice of Intent Processing Center at (703) 931-3230. Section I Permit Selection You must indicate the NPDES storm water general permit under which you are applying for coverage. Check one box only. The Baseline Industrial and Baseline Construction permits were issued in September 1992. The Multi-Sector Permit became effective October 1, 1995. Section II Facility Operator Information Provide the legal name of the person, firm, public organization, or any other entity that operates the facility or site described in Section III. The operator may or may not be the same as the owner of the facility. The responsible party is the legal entity that controls the facility's operation, rather than the plant or site manager; do not use a colloquial name. Enter the complete address and telephone number of the operator. This will be the address to which EPA will send correspondence related to the NOI. Enter the appropriate letter to indicate the legal status of the operator of the facility. F = Federal; S = State; M = Public (other than federal or state); P = Private. Section III Facility/Site Location Information Enter the facility's or site's official or legal name and complete street address, including city, state, and ZIP code. Do not provide a P.O. Box number as the street address. If applying for a Baseline Permit and the facility or site lacks a street address, indicate the state and enter either the latitude and longitude of the facility to the nearest 15 seconds or the quarter, section, township, and range (to the nearest quarter section) of the approximate center of the site. 
If applying for the Multi-Sector Permit, indicate the complete street address and either the latitude and longitude of the facility to the nearest 15 seconds or the quarter, section, township, and range (to the nearest quarter section) of the approximate center of the site. All applicants must indicate whether the facility is located on Indian lands. Section IV Site Activity Information If the storm water discharge is to a municipal separate storm sewer system (MS4), enter the name of the operator of the MS4 (e.g., municipality, county, city, town) and the receiving water of the discharge from the MS4. (An MS4 is defined as a conveyance or system of conveyances (including roads with drainage systems, municipal streets, catch basins, curbs, gutters, ditches, man-made channels, or storm drains) that is owned or operated by a state, city, town, borough, county, parish, district, association, or other public body which is designed or used for collecting or conveying storm water.) If the facility discharges storm water directly to receiving water(s), enter the name of the receiving water(s). If you are filing as a co-permittee and a storm water general permit number has been issued, enter that number in the space provided. Indicate the monitoring status of the facility. Refer to the permit for information on monitoring requirements. Indicate the monitoring status by entering one of the following: 1 = Not subject to monitoring requirements under the conditions of the permit. 2 = Subject to monitoring requirements and required to submit monitoring data. 3 = Subject to monitoring requirements but not required to submit data. 4 = Subject to monitoring requirements but submitting certification for monitoring exemption. List, in descending order of significance, up to two 4-digit standard industrial classification (SIC) codes that best describe the principal products or services produced at the facility or site identified in Section III of this application. 
If you are applying for coverage under the construction general permit, enter "CO" (which represents SIC codes 1500 - 1799). For industrial activities defined in 40 CFR 122.26(b)(1)(iv)-(xi) that do not have SIC codes that accurately describe the principal products produced or services provided, use the following 2-character codes. HZ = Hazardous waste treatment, storage, or disposal facilities, including those that are operating under interim status or a permit under subtitle C of RCRA [40 CFR 122.26 (b)(1)(iv)]; LF = Landfills, land application sites, and open dumps that receive or have received any industrial wastes, including those that are subject to regulation under subtitle D of RCRA [40 CFR 122.26 (b)(1)(v)]; SE = Steam electric power generating facilities, including coal handling sites [40 CFR 122.26 (b)(1)(vi)]; TW = Treatment works treating domestic sewage or any other sewage sludge or wastewater treatment device or system, used in the storage, treatment, recycling, and reclamation of municipal or domestic sewage [40 CFR 122.26 (b)(1)(ix)]; or, CO = Construction activities [40 CFR 122.26 (b)(1)(xi)]. If there is another NPDES permit presently issued for the facility or site listed in Section III, enter the permit number. If an application for the facility has been submitted but no permit number has been assigned, enter the application number. Facilities applying for coverage under the Multi-Sector storm water general permit must answer the last three questions in Section IV. Refer to Attachment H of the Multi-Sector general permit for a list of species that are either proposed or listed as threatened or endangered. "BMP" means "Best Management Practices" that are used to control storm water discharge. Indicate whether any construction will be conducted to install or develop storm water runoff controls. 
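The Section IV coding rules above can be restated as simple lookups. This is an illustrative sketch only: the dictionary, set, and function names are ours, not part of the EPA form, and the monitoring entries reproduce only the statuses these instructions describe.

```python
# Illustrative lookups for the Section IV coding rules described above.
# These names are not part of the EPA form; they only restate the
# instructions in executable form.

# Monitoring status codes (the form accepts 1, 2, 3, or 4).
MONITORING_STATUS = {
    1: "Not subject to monitoring requirements under the permit",
    2: "Subject to monitoring requirements and required to submit data",
    3: "Subject to monitoring requirements but not required to submit data",
    4: "Subject to monitoring but submitting certification for exemption",
}

# 2-character activity codes for industrial activities whose SIC codes do
# not accurately describe the principal products or services.
ACTIVITY_CODES = {"HZ", "LF", "SE", "TW", "CO"}

def section_iv_activity_code(sic: str = "", activity: str = "") -> str:
    """Pick the Section IV code: a 4-digit SIC code when one fits,
    "CO" for construction SICs, or a 2-character activity code."""
    if sic.isdigit() and len(sic) == 4:
        # Construction SIC codes 1500-1799 are reported as "CO".
        if 1500 <= int(sic) <= 1799:
            return "CO"
        return sic
    if activity in ACTIVITY_CODES:
        return activity
    raise ValueError("Supply a 4-digit SIC code or a valid activity code")
```

For example, a highway contractor (SIC 1611) would enter "CO", while a petroleum refinery (SIC 2911) would enter its SIC code directly.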
Section V Additional Information Required for Construction Activities Only Only construction activities need to complete Section V, in addition to Sections I through IV. Enter the project start date and the estimated completion date for the entire development plan. Provide an estimate of the total number of acres of the site on which soil will be disturbed (round to the nearest acre). Indicate whether the storm water pollution prevention plan for the site is in compliance with approved state and/or local sediment and erosion plans, permits, or storm water management plans. Section VI Certification Federal statutes provide for severe penalties for submitting false information on this application form. Federal regulations require this application to be signed as follows: For a corporation: by a responsible corporate officer, which means a president, secretary, treasurer, or vice president of the corporation in charge of a principal business function, or any other person who performs similar policy- or decision-making functions, or the manager of one or more manufacturing, production, or operating facilities employing more than 250 persons or having gross annual sales or expenditures exceeding $25 million (in second-quarter 1980 dollars), if authority to sign documents has been assigned or delegated to the manager in accordance with corporate procedures; For a partnership or sole proprietorship: by a general partner or the proprietor; or For a municipality, state, Federal, or other public facility: by either a principal executive officer or ranking elected official. Paperwork Reduction Act Notice Public reporting burden for this application is estimated to average 0.5 hours per application, including time for reviewing instructions, searching existing data sources, gathering the data needed, and completing and reviewing the application. 
Send comments regarding the burden estimate, any other aspect of the collection of information, or suggestions for improving this form, including any suggestions which may increase or reduce this burden, to: Chief, Information Policy Branch, 2136, U.S. Environmental Protection Agency, 401 M Street, SW, Washington, DC 20460, or Director, Office of Information and Regulatory Affairs, Office of

The method of payment for permanent inner casing (CGSP casing) in drilled shafts using the Double Casing Method has changed from the 1988 Standard Specification to the 1996 Metric Standard Specification. Under the 1996 metric spec, CGSP casing is not a pay item. The use of CGSP casing and the Double Casing Method has not changed. Under either specification, the contractor typically has the option to use the Double Casing Method to facilitate construction of drilled shafts. If plan notes specify the Double Casing Method, CGSP casing must be used. If plan notes prohibit the Double Casing Method, CGSP casing cannot be used. Permanent casing cannot be used when the drilled shaft relies upon skin friction to carry the load, because permanent casing interferes with the development of skin friction between the drilled shaft and the surrounding soil. When permanent casing is used, it should never extend to the bottom of the shaft. Where casing is neither required nor prohibited, the contractor may elect to use casing to facilitate his operations at his own expense. If there are no plan notes, many methods of excavation may be acceptable depending upon the soil conditions; refer to the Standard Specifications. Depending on which specification controls (1988 vs. 1996 edition) and the plan notes, the payment method for CGSP casing varies. The following table details the differences:

| Plan Note Requirements | Install CGSP? (1988) | Separate Pay Item? (1988) | Install CGSP? (1996) | Separate Pay Item? (1996) |
|------------------------|----------------------|---------------------------|----------------------|---------------------------|
| No special notes | Contractor's option | YES | Contractor's option | NO |
| Double Casing Method required; cost of CGSP included in other items | YES | NO | YES | NO (the "cost" plan note is not needed; it is covered by the spec) |
| Double Casing Method required; CGSP pay item included in the summary | YES | YES | NOT APPLICABLE (the Bridge Division will not specify a CGSP pay item under the 1996 Standard Specification) | |
| Double Casing Method prohibited | NO | NO | NO | NO |

If you think the double casing method should be specifically included or excluded on your project, be sure to inform the bridge project engineer and/or consultant well before the letting, preferably at the plan-in-hand meeting. If you find a category not mentioned above, contact this office or the Bridge Division before taking action. Byron Poynter Construction Engineer Copy to: Distribution List

OKLAHOMA DEPARTMENT OF TRANSPORTATION Date: June 5, 1997 July 11, 1997 (corrected typo) To: Field Division Engineers, Division Construction Engineers, Resident Engineers, and County Bridge Engineers. From: Byron Poynter, Construction Engineer Subject: CONSTRUCTION CONTROL DIRECTIVE NO. 970605 COARSE AGGREGATE GRADATION FOR PORTLAND CEMENT CONCRETE The coarse aggregate gradation specification for P.C. Concrete has been adjusted to allow from 0% to 2% passing the No. 200 sieve for gradation sizes 57, 67 and 7. Refer to Section 701.06 of the Standard Specifications and the gradation below. 701-2(a) 91s 9-23-93 701.06. COARSE AGGREGATE: Revise the requirements for the No. 200 in the table in Subsection (c) as follows:

| Sieve Size | No. 3 (2" to 1") | No. 357 (2" to #4) | No. 57 (1" to #4) | No. 67 (3/4" to No. 4) | No. 7 (1/2" to #4) |
|------------|------------------|--------------------|-------------------|------------------------|-------------------|
| No. 200 | 0-1.5 | 0-1.5 | 0-2.0 | 0-2.0 | 0-2.0 |

The metric specifications have been issued with this gradation.
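The revised No. 200 limits amount to a simple range check per gradation size. A minimal illustrative sketch of that check (the function name and data layout are hypothetical, not part of any ODOT software):

```python
# Illustrative only: check percent passing the No. 200 sieve against the
# revised limits in the gradation table above. Names are hypothetical.

NO_200_LIMITS = {
    "No. 3": (0.0, 1.5),
    "No. 357": (0.0, 1.5),
    "No. 57": (0.0, 2.0),
    "No. 67": (0.0, 2.0),
    "No. 7": (0.0, 2.0),
}

def no_200_ok(gradation_size: str, percent_passing: float) -> bool:
    """Return True if the percent passing the No. 200 sieve is within limits."""
    low, high = NO_200_LIMITS[gradation_size]
    return low <= percent_passing <= high

print(no_200_ok("No. 57", 1.8))  # within the revised 0-2.0 limit
print(no_200_ok("No. 3", 1.8))   # exceeds the 0-1.5 limit
```

For example, 1.8% passing is acceptable for a size 57 aggregate under the revised limit but remains out of tolerance for a size 3.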
This adjustment will apply to projects let under the 1988 Specifications. If needed, you may apply this gradation to the 1988 Specifications with a "no cost" change order. Byron Poynter Construction Engineer Copy to: Distribution List ccDAGG.BP

DATE: March 19, 1997 TO: Field Division Engineers, Division Construction Engineers, and Resident Engineers FROM: Byron Poynter, Construction Engineer SUBJECT: CONSTRUCTION CONTROL DIRECTIVE NO. 970319. USE OF LINE NUMBERS - LIST OF CHANGE ORDERS Line numbers on the pay estimates are often used for purposes not intended, resulting in confusing or odd print-outs. For example, a line labeled "deduction" sometimes carries a plus amount instead of a negative amount. To help clear up this matter, it is suggested that the various special pay items be placed on the lines as follows:

| LINE NO. | ITEM |
|----------|------|
| 600 - 799 | CHANGE ORDERS |
| 800 - 899 | MATERIALS ON HAND |
| 900 - 989 | MATERIAL TAKEN INTO WAREHOUSE AND QA/QC ADJUSTMENTS |
| 991 - 992 | SPECIAL DEDUCTIONS |
| 993 | LIQUIDATED DAMAGES |

****** A list of change orders is required for the final estimate. This has long been placed on separate sheets as part of the final document packet. Since change orders with supplemental agreements have to be listed on the final estimate, you may wish to list all change orders on the final estimate in lieu of providing a separate list. If you choose to do this, please identify each change with one or two words. Also, if the change adds new contract items, show the new item(s) and the amount paid. Original contract items that have been overrun are to be paid in the body of the estimate as always (unless there is a need to segregate for proper allocation of funds). A SAMPLE LIST MIGHT BE AS FOLLOWS:

| Line No. | Description | Unit | Quantity | Rate | Amount |
|----------|-------------|------|----------|------|--------|
| 600 | Change Order No. 1 SE (Tax Bond) | | | | |
| 601 | Change Order No. 2 (rejected) | | | | |
| 602 | Change Order No. 3 SE - 48" RCP | LF | 150.00 | 50.00 | 7,500.00 |
| 603 | Change Order No. 4 OR - Asphalt Ty A | | | | |
| 604 | Change Order No. 5 SE - Time | | | | |
| 605 | Change Order No. 6 SE - 12" Piling | LF | 84.00 | 28.00 | 2,352.00 |
| 606 | Change Order No. 7 SE - Add Detour | | | | |
| | TBSC | Ton | 118.00 | 30.00 | 3,540.00 |
| 607 | Change Order No. 8 AA - See Explanation | | | | |

OR = Overrun of existing items. SE = Supplemental Agreement. AA = Additional Appropriation. The intent of listing all of the change orders on the estimate is to save time, effort, and paper. If this does not result in a savings, please advise. Byron Poynter Construction Engineer Copy to: Distribution List

Every effort should be made to obtain (as a minimum) the number of tests recommended by guidelines for acceptance of highway construction materials. However, occasionally a test will be lost or, due to oversight, not performed. One of the actions that you may take with regard to lack of testing is to waive the tests or the portion that is lacking. A limit has NOT been set on the amount of testing that may be waived. However, the amount of testing waived will continue to be monitored by the Construction Division and Materials Division. A certain amount of common sense should be applied to all testing. Following are some comments that should help guide you in making waivers. After the project or work unit is complete, there is no value in testing lots of material that will not be incorporated into the project merely in order to have the proper number of tests. Almost all materials can be tested after they have been placed.
However, if testing is lacking, the use of the material should be evaluated, before any funds are spent to verify quality, to see whether the actual application will receive stresses or whether it is more ornamental. When there are failing tests on temporary work, such as a shoo-fly detour, and the detour has endured through the time period it was needed, or the contractor has maintained the detour to guarantee its success, the materials should be accepted and 100% of the money earned should be paid. An exception to this would be when the failing tests and subsequent failure of the detour caused the Department to incur additional costs which would not have occurred had the material and workmanship been successful. Some manufacturers of items such as "stick-down" traffic stripe tape put more tape on the roll than what is reflected on the invoice. This is done to ensure a "full measure" of what is purchased. Field measurements will almost always result in a greater amount than what is included in the test report. The amount measured should be paid, and the portion not covered by the test report should be waived. When a material item has been supported by several passing tests and it is found that some of the material is not covered by tests, and there is no other evidence that the portion not covered is any different from the portion tested, a waiver should be considered. On the other hand, there are cases when all of the tests are present and the material is marked as tested, but due to damage during delivery or other reasons the material is not acceptable for use in the work and should be rejected. ******** It is desired that some level of practicality be applied with regard to testing waivers. Keep in mind that the need is for quality materials to be used in the construction. Byron Poynter, P.E. Construction Engineer Copy to: Distribution List The following proposal has been approved for implementation.
As part of the process, each Field Division Engineer is to notify the Construction Division of the amount of authority to be delegated to each Resident Engineer (by name, please); we will refer to this as the Delegation List. This is for matching the signatures with the delegated authority. The Division Engineer need not sign changes within the authority level of the Resident Engineer. This process will begin as soon as the Delegation List has been received by Construction. However, no Changes will be returned for these purposes. THE PROPOSAL: Part 1 - The purpose of this authorization is to allow the Division Engineer to be more responsive to conditions in the field and to make timely decisions necessary to minimize the cost of changes. Part 2 - To present Change Orders to the Transportation Commission in a prioritized manner to facilitate the approval process. The authorizations referred to in this Directive are for the purpose of completing the project as outlined in the plans and contract. Redesign of bridges, pavement typical sections, or alignments, and changes in the general scope of the plans, will require approval of the Central Office as in the past. Change Orders generally fall into one or more of three categories: 1. **OVERRUN OF EXISTING ITEMS**: This type of change represents an overrun of existing contract items and does not alter the contract or require a Supplemental Agreement. The scope of the plans usually is not changed. 2. **ADDED CONTRACT ITEMS MANDATED BY THE PLANS**: This change adds contract items that are specified in the plans but are not listed in the contract. 3. **ADDED CONTRACT ITEMS FOR ADDITIONS TO THE PROJECT**: This type of change typically extends the scope of the project and may include both existing and added contract items. Part 1 The Division Engineer is authorized to proceed with any Change Order having a value up to $50,000.00.
It is suggested that the Division Engineer limit the authority further delegated to Resident Engineers to $10,000.00 but that authority may be increased to the Division Engineer's limit depending on the experience and performance of the individual Resident Engineer. Change Orders not exceeding $50,000.00 will be approved by either the Division Engineer or the Resident Engineer, depending solely on the ceiling amount the Division Engineer has elected to delegate. All such delegations should be made in writing and if subsequent delegations are made, the later delegation should formally revoke its predecessor. Change Orders which change the scope of the project with a value greater than $50,000, up to $150,000 would be approved by the Director or his designee. Change Orders which add features to the project with a value greater than $150,000 would be brought to the Transportation Commission for approval prior to substantive changes being made but without delaying the project. If it becomes evident that sound engineering judgment has not been exercised on a Change Order approved within delegated authority and it appears that the system might be strengthened, the Construction Engineer may require a debriefing to examine the questioned Change Order and will participate in such debriefing, if requested. Part 2 The State Construction Engineer will categorize all changes and make a routine explanation to the Transportation Commission with regard to changes in category 3 and when changes in category 3 are mixed with items in other categories. A routine explanation would also be made when changes in categories 1 and 2 are inordinately costly or involve substantive changes in the scope of the project. None of this proposal would preclude formal approval of Change Orders. 
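The thresholds above form a small decision table, which can be sketched as follows. This is an illustrative paraphrase only: the function name and routing labels are not official ODOT terminology, and the sketch ignores the directive's distinction between scope-changing and other orders.

```python
# Illustrative sketch of the change order approval thresholds described
# in the directive. Dollar figures follow the text; everything else is
# a hypothetical paraphrase.

def approval_level(value: float, resident_limit: float = 10_000.00) -> str:
    """Map a change order value to its approving authority.

    resident_limit is the amount the Division Engineer has delegated
    (suggested $10,000.00, up to the Division Engineer's $50,000.00 limit).
    """
    if value <= resident_limit:
        return "Resident Engineer"
    if value <= 50_000.00:
        return "Division Engineer"
    if value <= 150_000.00:
        return "Director or designee"
    return "Transportation Commission"

print(approval_level(7_500.00))    # within the suggested delegated limit
print(approval_level(200_000.00))  # goes to the Commission
```

For example, a $7,500 overrun stays with the Resident Engineer under the suggested delegation, while a $200,000 addition would be brought to the Transportation Commission.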
[Signature] Byron Poynter Construction Engineer Copy to Distribution List

## CHANGE ORDER AUTHORIZATION LEVELS

| DIVISION | RESIDENT | AMOUNT |
|----------|-------------------|----------|
| ONE | All Residencies | $10,000.00 |
| TWO | Brent Frank | $10,000.00 |
| | David Huddleston | $10,000.00 |
| | M. C. Ollar | $10,000.00 |
| | Vacant | $0.00 |
| THREE | All Residencies | $10,000.00 |
| FOUR | All Residencies | $10,000.00 |
| FIVE | All Residencies | $10,000.00 |
| SIX | All Residencies | $25,000.00 |
| SEVEN | Reese Knight | $5,000.00 |
| | Mark Zishka | $15,000.00 |
| | Jerry Harwell | $20,000.00 |
| EIGHT | All Residencies | $10,000.00 |

Since March 1, 1997, the Resident Engineer has been responsible for Certification of Materials on assigned projects. This revision is to clarify the certification format and include any waivers of tests that have been made. All tests, certifications, and brochure items for materials used in the construction of the project are to be tallied and compared to the as-built quantities (L-5). The certification is to address all failing tests, noncomplying materials, and waivers. See also Construction Control Directive No. 970317, "Waivers of Acceptance Testing". The certification is as follows: DATE ___________________________ PROJECT NO. _______________________ JOB Piece_______________________ COUNTY__________________________ This is to certify that: The results of the tests used in the acceptance program indicate that the materials incorporated in the construction work, and the construction operations controlled by sampling and testing, were in conformity with the approved plans and specifications. All Independent Assurance samples and tests are within tolerance limits of the samples and tests that are used in the acceptance program. (See Note 1.) Exceptions to the plans and specifications are as follows: (List each failing test and what action was taken with regard to the materials represented by the failing test.
Also include any material test that was waived.) ______________________________________________ Resident Engineer Note 1. If Independent Assurance Sampling was not done, omit the last sentence in the certification. CONSTRUCTION CONTROL DIRECTIVE NO. 970115 CONTINUED Only the Resident Engineer is authorized to sign and seal the certification. Residency Managers that are not licensed are to initial the certification and forward it to the Assistant Division Engineer or Division Engineer to be signed and sealed. Send a final list of all of the items and quantities and the certification to the Materials Division and to the Construction Division (see Note 2 below). When the project has a contract amount of $1,000,000 or more and is on the National Highway System, send the Federal Highway Administration a copy also. You will not receive a response from the Materials Division as in the past. Proceed with finalization of the contract. Note 2. You may use the L-5 Form, but it may be easier to use the audit sheet which transmits information between the Residency and Division. Be sure to list all items, including the earthwork items and items established by change order. Please summarize the exceptions on the certification. A sample certification might appear as follows: DATE December 6, 1997 PROJECT NO. NH-88(99) JOB Piece 05678(04) COUNTY Acapulco This is to certify that: The results of the tests used in the acceptance program indicate that the materials incorporated in the construction work, and the construction operations controlled by sampling and testing, were in conformity with the approved plans and specifications. All Independent Assurance samples and tests are within tolerance limits of the samples and tests that are used in the acceptance program. Exceptions to the plans and specifications are as follows:

| Date | Item | Quantity | Test Result | Status |
|--------|-----------------------------|-----------|---------------------|----------|
| 3-2-97 | Type A Asphalt | 1254 tons | No. 10 sieve out 2% | Accepted |
| 3-6-97 | Beads for Striping | 64 lb | | Accepted |
| 5-25-97 | Removable Striping Tape | 87 LF | no certification | Accepted |
| 5-29-97 | Type B Asphalt | 26 tons | no test | Waived |
| 8-22-97 | 24" Conc. Storm Sewer Pipe | 7 LF | | Waived |

John Doe Resident Engineer See Note 1. It is not necessary to attach copies of the test or work sheets; we need only the summary indicating what action was taken. ADMINISTRATION OF MATERIALS TESTING Production and distribution of the monthly Materials Received Report, which has been done routinely for several years, is found to have little value and is no longer required. CATEGORIES OF TESTING: There are basically five areas of testing and verification for acceptance: tests performed by the Residency, pretested stock materials, Manufacturer's Certification, brochure items, and tests by the Central Laboratory for acceptance. The types of materials in each category will be addressed on the following pages.

| Sampling and Testing by the Construction Residency | The Resident Engineer will sample, test, and maintain records at his office. Copies of the Asphalt Plant Inspector's Report and Test Reports of Concrete Cylinders are to be sent to the Materials Engineer. |
|--------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Pre-tested Materials (materials that have been sampled by the Materials Division or certified as pretested stock to a supplier) | The Supplier will send the original shipping report (DT Form 217) to the Materials Engineer and one (1) copy each to the Resident Engineer and Contractor. |
| Manufacturer's Certification | The Supplier will send the original certification to the Materials Engineer and one (1) copy each to the Resident Engineer and Contractor. (Some materials require sampling in addition to the certification.) After the certification is deemed acceptable by the Materials Division, one copy each will be sent to the Resident Engineer and the prime contractor. The Materials Engineer may authorize the Resident Engineer to accept certifications for specific materials. |
| Brochure Items | The Contractor/Supplier will send five (5) copies to the ODOT Approving Authority (see Note 3). On approval, four (4) copies will be sent to the Materials Engineer for distribution. The Materials Engineer will send one (1) copy to the Contractor and two (2) copies to the Resident Engineer. |
| Materials Sampled & Tested by Central Lab for Acceptance (Project Specific) | One copy of the report is sent to the Contractor and one copy to the Resident Engineer. |
| Shop Drawings | See separate Construction Control Directive for the process of submittals of shop drawings, falsework details, approval of concrete, and related documents. |

Note 3. Send a copy of the transmittal only to the Resident Engineer so that he will know that the brochure has been submitted. The Approving Authority is the Division (i.e., Traffic, Bridge, Roadway, Local Gov.) that has primary responsibility for the approval. 1. Tests performed or samples acquired by the Construction Residency and forwarded to the Materials Division are as follows: - Filter Blanket - Cement - Electric Conduit - Fly Ash - Waterproofing Materials - Aggregate (all types) - Fencing Materials - Soils - Jute Mesh - Asphaltic Mixtures - Lime - Rip Rap - Electric Conductors - Liquid Asphalt - Penetrating Water Repellant - P.C. Concrete 2. Materials from Pretested Stockpiles: These are materials that have been purchased in bulk by a supplier.
The materials are tested in lots and later shipped to the project accompanied by a test report that identifies the lot source. The testing entity may be the Central Laboratory, a consulting firm, and in some cases, another state. The materials in this category typically are: - R/W Markers - Curing Compound (membrane & emulsion) - Delineators - Glass Beads - Traffic Stripe (solvent based paint & plastic) - Castings - Brick - Joint Fillers - Pull Boxes, Lids - Joint Sealers (other than silicone) - Reinforced Concrete Pipe - Guardrail End Sections - Signs (all) - Wood Fence Posts - Guardrail Posts & Blocks (wood & steel) - Paint - Pavement Markers & Epoxy - Underdrain Pipe (plastic) - Compression Joint - Sign Post Stubs - Reinforcing Steel (for CRCP) - Welded Wire Fabric 3. Manufacturer Certified Materials: - Aluminum Bridge Rails - Valves & Boxes - Air Entraining Agents - Bearing Pads (1) - Fertilizer - Seeding - Penetrating Water Repellant (1) - Glare Screen - Geogrid Reinforcing Materials - Corrugated Metal Pipe - Unfabricated Steel Sign Posts - Acrylic Waterborne Paint - Filter Fabric (1) - Reinforcing Fabric (1) - Separator Fabric (1) - Cast Iron Pipe & Fittings - Fire Hydrants - Admixtures - Liquid Asphalt (1) - Construction Traffic Control - Ditch Liner Protection - Fly Ash (1) - Attenuator Modules - Silicone Joint Sealants - Coated Metal Pipe - Guardrail, Hardware & End Terminals - Portland Cement (1) - Paint Systems (for Structural Steel) (1) Sample Required 4. Items Verified by Brochure: The specification for signal and lighting items may be met by several manufacturers. After award of the project, the contractor indicates which manufacturer will furnish the materials by submitting a brochure depicting the model number, etc. The brochure is then sent to the designer for approval.
Typically, the items are Signal Heads, Overhead Lighting, Signal Controllers and Cabinets, and other miscellaneous hardware items. 5. Materials Sampled and Tested by the Central Laboratory for Acceptance (Project Specific): - Precast Concrete Median Barriers - Concrete Boxes, Manholes & Inlets - Reinforcing Steel - Pipe Railing - Wire Mesh - Fabricated Steel Pipe Posts - Culvert End Treatments - Epoxy Coated Steel - Heavy Weld Steel Grates - Overhead Sign Structures (alum.) - Prestressed Concrete Beams - Timber and Lumber - Piling (all) - Structural Steel PROCESSING SHOP DRAWINGS See separate Construction Control Directive for the process of submittals of shop drawings and related documents. Byron Poynter Construction Engineer Copy to: Distribution List NOTE TO WRITER: DIRECTIVE NO. 970115 HAS REFERENCES TO THE DIRECTIVE FOR SHOP DRAWING SUBMITTALS. IF EITHER IS REVISED, THE OTHER WILL REQUIRE REVISION ALSO.
May 30, 2021 — The Solemnity of the Most Holy Trinity Welcome to Holy Savior Catholic Church Holy Savior would like to extend a warm welcome to our new families. We are delighted to have you and would like you to know that we have a special place here for you and your family. Holy Savior is a place for spiritual nourishment, a place for worship, a place for learning, and a place for friendship. We hope you will find a true home with us here at Holy Savior. Pastor Rev. Jean-Marie Nsambu Mass Schedule Saturday: Vigil Mass at 4:00pm Sunday: 7:00am & 10:00am No daily Mass June 1 –18 Sacramental and Funeral Information Reconciliation: The sacrament of reconciliation is available on Saturday from 3:15pm - 3:45pm or call the parish office for an appointment. Anointing of the Sick: All arrangements should be made through the church office. Baptism: Call the parish office for an appointment. Wedding: Couples should make arrangements at least six months in advance. Call the parish office for further information. Funeral: Arrangements have to be made through the parish office. May They and Their Caretakers Know God’s Peace Ashley Adams, Dee & David Adams, Donna Adams, Harlie Adams, Kenzie Adams, Taylor Adams, Hazel Allemand, Belinda Billiot, Katty Boudreaux, Pearl Boudreaux, Lola Cantrelle, Terry & Cheryl Cavalier, Maci Cedotal, Ray Champagne, Pawnee & Roger Curole, Emma Daigle, Raymond Daigle, Jerry & Anna Mae Daous, Larry Doucet, George Dufrene Jr., Johanna Dufrene, Sherman Dufrene, Velma Dufrene, Nora Fields, Debbie Ford, Perry Ford, Toby & Sandra Foret, Mika Frickey, Ronnie & Rose Frickey, Angelle Gautreaux, Harris Griffin, Sarah Griffin, Sister Mary Elizabeth Guidry, Sister Gwen Grillot, O. 
Carm., Walter Hartman, Gerald Hebert, Helen Hendrix, June Lagarde Herman, Judy LeBlanc Hill, Vickie Hotard, Lionel Lagarde, Cindy Bollinger Landry, Babson LeBlanc, Mary Ledet, Robert “Sam” Leek, Amanda Matherne, Dolores Matherne, Rickey & Sallie Naccio, Carol Parks, Joshua Parks, Deanie Pere, Roy Perrillioux Jr., Janet Pitre, Joe Pitre, Alice & Nessie Plaisance, Gary Plaisance, Kate Plaisance, Melissa Plaisance, Dylan Sampey, Corey Savoie, Patti Savoie, Leroy Tastet Jr., Stacey Tastet, Stanley Tastet Sr., Stanley Tastet Jr., Alvin Vedros, Hank & Paula Wells, Shawn Willard Died in Christ Errol Chiasson Allen Foret Sr. Curtley Boudreaux Winnie Adams Marguerite Breaux Mass Intentions May 29 Saturday 4:00pm: Intentions of our Bulletin Sponsors- Ernie Colby- Michael Brunet Sr.- Beulah Griffin- Carl “Bud” Zenthoefer- Gene & Angie Orgeron- Johnny & Mary Frances Loupe (WA)- Francis Boudreaux- Robert “Bob” Folse- Pearl Baudoin- Rodney & Laura Strevig (WA) May 30 Sunday The Most Holy Trinity 7:00am: Luke & Bernice Aucoin- Roland J. Savoie (DA)- Arthur Breaux Jr. May 30 Sunday The Most Holy Trinity 10:00am: Nellie & Russell Arcement- V.H. Boudreaux- Dr. Harvey Detillier- Wilbur “Pete” Arcement- Sammy Pizzolato- Brittany Barrios No daily Masses June 1-18. Mass intentions for those Masses have been moved to the weekend Masses. Safe Environment All employees and volunteers must update their Safe Environment certification. The course is available on the Diocesan website. The school computer lab will be open on June 1-3 starting at 4:30pm for any volunteer to use. Vacation Bible Camp June 14 - 18 8am - Noon Ages 3 years - 5th grade Registration is open.
Scripture Readings for Sunday - **Sunday, May 30, 2021** - The Solemnity of the Most Holy Trinity - First reading: Deuteronomy 4:32-34, 39-40 - Psalm: 33 - Second reading: Romans 8:14-17 - Gospel: Matthew 28:16-20 Items Needed for Vacation Bible Camp: - artificial Christmas trees, wet wipes, - gallons of water, empty 2-liter bottles, cotton balls, - beach balls, hand sanitizer, sponges, rope, pie pans, - small paper cups, sandwich-sized Ziploc bags, - black spray paint, paper towels, empty spray bottles, - blue, red, green plastic table coverings, crepe paper, - sponges, pool noodles If you can help, you may bring donations to the office. We are excited to have the VBC this year. Thank you for your support! Calendar/ Events - May 31- Memorial Day - June 1-3: 4:30pm Safe Environment training - June 3: 6pm Finance Council meeting - June 1-18: daily Masses cancelled - June 7: deadline for Father’s Day flowers - June 10: 6:30pm Fundraiser meeting - June 14– 18: Vacation Bible Camp - June 20: Father’s Day Parish Sacrificial Giving Collection for week of May 23, 2021 Total: $2,895.00 Thank you for your generous support! Faith, Hope & Healing Program Thank you to all parishioners who have completed and submitted your Commitment Card. For those parishioners who have not yet had the opportunity to submit their card, a letter will be mailed to you with a personalized Commitment Card. We invite all faithful parishioners to complete a card as our parish goal is 100% participation! You may return your card by: - mailing it to the church using the pre-addressed envelope included with the letter - or bringing it to Mass next Sunday - or you may visit our parish website at https://holysaviorchurch.org/ and complete an electronic Commitment Card. All cards need to be returned to the parish by Sunday, June 20. Electronic giving is the preferred, easy, safe and secure way to give to our parish. 
To schedule a recurring electronic gift directly from your bank account or credit card, please visit our website and follow the instructions. We appreciate all parishioners using this means to support our parish. In Loving Memory of Dr. Harvey Detillier Ravenswood Pltn. Jerome & Lola Cantrelle In Loving Memory of Sidney & Sadie Laris In Loving Memory of Janice G. Davaine and all Cancer Victims In Loving Memory of Orga T. Toups In Loving Memory of John and Anita Braniff In Loving Memory of the Bennett & Barrios Family In Loving Memory of Doris & Bill Drawe, Jr. Inez & Guy Lefort, Sr. Maggie & Alfred Guidry In Thanksgiving for Graces The Bellini Family Pray for Our Beloved Deceased In Loving Memory of Raymond A. Birdsall, Sr. In Loving Memory of Atney Uzee Jr. LARIS INSURANCE AGENCY Life—Health—Auto—Home Group—Business—Marine Financial Services 810 Crescent Ave. Lockport 532-5576 (800)375-6013 Fax: 532-5001 Patrick Barker Agent 503 Crescent Ave. Lockport, LA 70374 Phone: 985.532.5596 Fax: 985.532.3935 Email: firstname.lastname@example.org ROGERS PARTS INC. IN LOVING MEMORY OF JAMES (JIMMY) N. ROGERS SR. 910 CRESCENT AVE., LOCKPORT, LA 70374 Phone: (985)532-3311 T/F's Shell Station LLC. 1000 Hwy. 1 Lockport, LA 70374 Our Mission Is To Serve Society Trene' J. Crosseon Owner Space Available For Advertisement Please Mention Holy Savior Church When Visiting Our Loyal Advertisers UNITED COMMUNITY BANK 4626 HWY 1 RACELAND, LA 70394 985.537.5283 UCBANKING.COM VALENTINE CHEMICALS, LLC 129 VALENTINE DR.-LOCKPORT, LA 70374 PH:(985)-532-2541-FAX:(985)532-6806 SAMPEY FABRICATION Ross Sampey Phone:(985)532-7003 * Fax:(985)532-7030 Cell: 985-856-9733 email@example.com ACE LOCKPORT FARM MARKET and BUILDING SUPPLIES, INC. 
KIRTH CHIASSON PH: 532-3323 Fax: 532-6527 7814 Highway 308 LOCKPORT, LA 70374 Anyone interested in advertising in the bulletin may contact the Church office at 985-532-3533 The BROADWAY ELDER LIVING AND REHABILITATION 985-532-1011 99A Non-Profit Organization Compliments of BOLLINGER SHIPYARDS James Matassa Agent 985-532-0936 Serve God with all your heart Rescue Wayne's Air & Heat Escaping You From The Heat & Cold - Saving You Money From High Prices • Central Air & Heat • Mobile Homes • Servicing On All Brands FREE ESTIMATES • LICENSED & INSURED Wayne Bourgeois, Jr. Owner/Technician www.rescuelwayneairheat.com (985) 532-6640 or (985) 633-9112 GOUAUX LAW FIRM Eugene G. Gouaux, Jr. Antoinette "Toni" Gouaux Eugene G. Gouaux, III 111 Barataria St. Lockport, La. 70374 Phone: 985-532-6507 Fax: 985-532-3998 CHILDREN'S CLINIC OF RACELAND LUISA C. BACUTA-TAGORDA, M.D. FAAP CELESTE GEORGE, FNP-C 110 ACADIA DRIVE - RACELAND, LA 70394 OFFICE PHONE: 985-537-8867 BONNIE MATASSA FINE ART & PORTRAIT STUDIO Bayou Acadiane Credit PERSONAL LOANS Need Extra Cash? Call Us Today! 5324 Hwy 1 Raceland, LA Ph: 985.532.3150 Fax: 985.532.3990 Grow in Christ
Robust Sampled-Data Adaptive Control of the Rohrs Counterexamples E. Dogan Sumer, Dennis S. Bernstein Department of Aerospace Engineering, University of Michigan, 1320 Beal Ave., Ann Arbor, MI 48109 Abstract—We revisit the Rohrs counterexamples within the context of sampled-data adaptive control. In particular, retrospective cost adaptive control (RCAC) is applied to the sampled continuous-time plant with unmodeled high-frequency dynamics, which involves nonminimum-phase (NMP) sampling zeros. It is shown that, without knowledge of these NMP zeros, RCAC stabilizes the uncertain plant and asymptotically follows the sinusoidal command. I. INTRODUCTION The history of adaptive control is marked by two key events. The first was the tragic accident in 1967 involving the X-15. The second was the publication in 1982 of [1], which presented two counterexamples showing the fragility of model reference adaptive control (MRAC) schemes. These counterexamples considered plants with high-frequency unmodeled dynamics that can induce a large, unknown phase shift in the plant’s open-loop response, leading to an unbounded response. These events dampened enthusiasm for adaptive control and led to a cautionary view of these techniques [2, 3]. Nevertheless, adaptive control continued to be developed and applied to a vast range of applications [4–6]. The purpose of the present paper is to revisit both Rohrs counterexamples using retrospective cost adaptive control (RCAC). RCAC is a discrete-time, direct adaptive control technique that can be used for plants that are possibly MIMO, nonminimum phase (NMP), and unstable [7–11]. This approach relies on knowledge of Markov parameters and, for NMP open-loop-unstable plants, estimates of the NMP zeros. For SISO systems that are either open-loop asymptotically stable or minimum phase, a single Markov parameter typically suffices. This information can be obtained from either analytical modeling or system identification [12].
Alternatively, an identified FIR model based on phase matching can be used [11, 13, 14]. The goal of the present paper is thus to apply sampled-data adaptive control to the Rohrs counterexamples. From a sampled-data point of view, the challenging aspect of these problems for RCAC is not the unmodeled dynamics per se, but rather the sampling zeros, which may be NMP under fast sampling. Since the Rohrs counterexamples are open-loop asymptotically stable, RCAC is able to provide reliable performance without knowledge of either the unmodeled high-frequency dynamics or the NMP sampling zeros [11]. II. PROBLEM FORMULATION Consider the MIMO discrete-time system \[ x(k+1) = Ax(k) + Bu(k) + D_1 w(k), \] (1) \[ y(k) = Cx(k) + D_2 w(k), \] (2) \[ z(k) = E_1 x(k) + E_0 w(k), \] (3) where \( k \geq 0 \), \( x(k) \in \mathbb{R}^n \), \( z(k) \in \mathbb{R}^{l_z} \) is the measured performance variable to be minimized, \( y(k) \in \mathbb{R}^{l_y} \) contains additional measurements that are available for control, \( u(k) \in \mathbb{R}^{l_u} \) is the input signal, \( w(k) \in \mathbb{R}^{l_w} \) is the exogenous signal that can represent either a reference command, an external disturbance, or both. The system (1)–(3) can represent a sampled-data application arising from a continuous-time system with sample and hold operations with the sampling period \( h \), where \( y(k) \) represents \( y(kh) \), \( z(k) \) represents \( z(kh) \), and so on. The operator matrix from \( u \) to \( z \) is thus given by \[ G_{zu}(q) \triangleq E_1(qI - A)^{-1}B, \] (4) where \( q \) is the shift operator which accounts for possibly nonzero initial conditions. Furthermore, for a positive integer \( i \), \( H_i \triangleq E_1A^{i-1}B \) is the \( i \)th Markov parameter of \( G_{zu} \). Now, consider the output-feedback controller \[ x_c(k+1) = A_c(k)x_c(k) + B_c(k)y(k), \] (5) \[ u(k) = C_c(k)x_c(k), \] (6) where \( x_c \in \mathbb{R}^{n_c} \). 
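As a small illustration of the Markov parameters defined above, the sketch below computes $H_i = E_1 A^{i-1} B$ for an assumed two-state SISO system (the matrices are illustrative, not taken from the paper); the index of the first nonzero Markov parameter equals the relative degree.

```python
def matmul(X, Y):
    """Plain nested-list matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Illustrative two-state SISO system (values assumed, not from the paper)
A  = [[0.5, 1.0],
      [0.0, 0.25]]
B  = [[0.0],
      [1.0]]
E1 = [[1.0, 0.0]]

# H_i = E1 A^{i-1} B; here H_1 = 0 and H_2 != 0, so the relative degree is 2
H, AiB = [], B
for i in range(1, 5):
    H.append(matmul(E1, AiB)[0][0])
    AiB = matmul(A, AiB)

print(H)  # [0.0, 1.0, 0.75, 0.4375]
```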
The closed-loop system with output feedback (5), (6) is thus given by \[ \tilde{x}(k+1) = \tilde{A}(k)\tilde{x}(k) + \tilde{D}_1(k)w(k), \] (7) \[ y(k) = \tilde{C}\tilde{x}(k) + D_2 w(k), \] (8) \[ z(k) = \tilde{E}_1\tilde{x}(k) + E_0 w(k), \] (9) where \( \tilde{x} \triangleq \begin{bmatrix} x^\top & x_c^\top \end{bmatrix}^\top \), \[ \tilde{A}(k) = \begin{bmatrix} A & BC_c(k) \\ B_c(k)C & A_c(k) \end{bmatrix}, \quad \tilde{D}_1(k) = \begin{bmatrix} D_1 \\ B_c(k)D_2 \end{bmatrix}, \] \[ \tilde{C} = \begin{bmatrix} C & 0_{l_y \times n_c} \end{bmatrix}, \quad \tilde{E}_1 = \begin{bmatrix} E_1 & 0_{l_z \times n_c} \end{bmatrix}. \] The goal is to develop an adaptive output feedback controller to minimize the performance measure \( z^\top z \) in the presence of the exogenous signal \( w \) with limited modeling information about the dynamics and exogenous signal. The model reference adaptive control (MRAC) problem can be formulated in terms of (1)–(3), where \( z \triangleq y_0 - y_m \) is the command-following error between the plant output \( y_0 \) and the output $y_m$ of a reference model $G_m$ whose input is the reference signal $r$. For MRAC, the measurement of the reference signal $r$ is assumed to be available for feedforward compensation, as shown in Figure 1. ![Fig. 1. MRAC Problem](image) For the adaptive controller (5), (6), the closed-loop state matrix $\tilde{A}(k)$ may be time-varying. To monitor the ability of the adaptive controller to stabilize the plant, we compute the spectral radius $\text{spr}(\tilde{A}(k))$ at each time step. If the controller converges, and $\text{spr}(\tilde{A}(k))$ converges to a number less than 1, then the asymptotic closed-loop system is internally stable. ### III.
Retrospective Cost Adaptive Control We represent (5), (6) by \[ u(k) = \theta^T(k)\phi(k-1), \] (10) where $\phi(k-1) = [y^T(k-1) \cdots y^T(k-n_c) u^T(k-1) \cdots u^T(k-n_c)]^T$, $\theta(k) = [N_1^T(k) \cdots N_{n_c}^T(k) M_1^T(k) \cdots M_{n_c}^T(k)]^T$, and, for all $1 \leq i \leq n_c$, $N_i(k) \in \mathbb{R}^{l_u \times l_y}$, $M_i(k) \in \mathbb{R}^{l_u \times l_u}$. The control law (10) can be reformulated as \[ u(k) = \Phi(k-1)\Theta(k), \] (11) where $\Phi(k-1) \triangleq I_{l_u} \otimes \phi^T(k-1) \in \mathbb{R}^{l_u \times l_u n_c(l_u + l_y)}$, and $\Theta(k) \triangleq \text{vec}(\theta(k)) \in \mathbb{R}^{l_u n_c(l_u + l_y)}$. Now, for a positive integer $r$, we define the finite-impulse-response (FIR) transfer matrix \[ G_f(q) \triangleq \frac{K_1 q^{r-1} + K_2 q^{r-2} + \cdots + K_r}{q^r}, \] (12) where $K_i \in \mathbb{R}^{l_z \times l_u}$ for $1 \leq i \leq r$. Next, for $k \geq 1$, we define the retrospective performance variable \[ \hat{z}(\hat{\Theta}(k), k) \triangleq z(k) + \Phi_l(k-1)\hat{\Theta}(k) - u_l(k), \] (13) with \[ \Phi_l(k-1) \triangleq G_f(q)\Phi(k-1) \in \mathbb{R}^{l_z \times l_u n_c(l_u + l_y)}, \] (14) \[ u_l(k) \triangleq G_f(q)u(k) \in \mathbb{R}^{l_z}, \] (15) where $\hat{\Theta}(k)$ will be determined by optimization below. For $k > 0$, we define the cumulative cost function \[ J(\hat{\Theta}(k), k) \triangleq \sum_{i=1}^{k} \lambda^{k-i}\hat{z}^T(\hat{\Theta}(k), i)\hat{z}(\hat{\Theta}(k), i) + \sum_{i=1}^{k} \lambda^{k-i}\eta(i)\hat{\Theta}^T(k)\Phi_l^T(i-1)\Phi_l(i-1)\hat{\Theta}(k) + \lambda^k(\hat{\Theta}(k) - \Theta_0)^T P_0^{-1}(\hat{\Theta}(k) - \Theta_0), \] (16) where $\lambda \in (0, 1]$, $P_0$ is positive definite, and $\eta(k) \geq 0$. In this paper, we choose \[ \eta(k) \triangleq \eta_0 \sum_{j=0}^{p_c-1} z^T(k-j)z(k-j), \] (17) where $\eta_0 \geq 0$, and $p_c \geq 1$. The following result, which follows from RLS theory [4, 5], provides the global minimizer of the cost function (16) and thus the update law.
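Because (16) is quadratic in $\hat{\Theta}(k)$, its minimizer can be propagated recursively, as stated formally in Proposition 3.1. A minimal scalar sketch ($l_u = l_z = 1$, a single gain, $\lambda = 1$, $\eta = 0$; the regressor and data are fabricated so that a gain of 0.7 zeroes the retrospective error):

```python
import math

theta_star = 0.7      # gain that zeroes the retrospective error (fabricated)
theta, P = 0.0, 1e6   # Theta_0 = 0 and a large P_0 (weak prior)
lam, eta = 1.0, 0.0   # forgetting factor lambda and control penalty eta

for k in range(1, 51):
    Phi = 2.0 + math.sin(k)   # scalar regressor Phi_l(k-1), fabricated
    zmu = -Phi * theta_star   # z(k) - u_l(k), consistent with theta_star
    # Gain, covariance, and parameter updates of Proposition 3.1 (scalar case)
    K = P * Phi / (lam / (1.0 + eta) + Phi * P * Phi)
    P = (P - K * Phi * P) / lam
    theta = (1.0 - K * Phi) * theta - P * Phi * zmu

print(theta)  # converges to ~0.7
```

With noiseless, consistent data the recursion reduces to standard least squares, so the gain converges after a few regressor samples.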
**Proposition 3.1:** Let $P(0) = P_0$ and $\Theta(0) = \Theta_0$. Then, for all $k \geq 1$, the cumulative cost function (16) has a unique global minimizer $\hat{\Theta}(k) = \Theta(k)$, which is given by \[ \Theta(k) = [I - K(k)\Phi_l(k-1)]\Theta(k-1) - P(k)\Phi_l^T(k-1)[z(k) - u_l(k)], \] where $P(k)$ satisfies \[ P(k) = \frac{1}{\lambda} \left[ P(k-1) - K(k)\Phi_l(k-1)P(k-1) \right], \] and \[ K(k) \triangleq P(k-1)\Phi_l^T(k-1) \left[ \frac{\lambda}{1 + \eta(k)} I_{l_z} + \Phi_l(k-1)P(k-1)\Phi_l^T(k-1) \right]^{-1}. \] ### IV. Construction of $G_f$ In this section, we discuss two methods for constructing $G_f$. Since the Rohrs counterexamples are the focus of this paper, we limit the discussion to SISO plants. #### A. NMP-Zero-Based Construction of $G_f$ We rewrite (4) as $G_{zu}(q) = H_d \frac{N(q)}{D(q)}$, where $D(q)$ is a monic polynomial of degree $n$, $N(q)$ is a monic polynomial of degree $n-d$, and $d$ is the relative degree of $G_{zu}$. Assume that $H_d$ and the nonminimum-phase (NMP) zeros of $G_{zu}$, if any, are known. Now, consider the numerator factorization \[ N(q) = \beta_U(q)\beta_S(q), \] (18) where $\beta_U(q)$ and $\beta_S(q)$ are monic polynomials of orders $n_U$ and $n_S = n - d - n_U$, respectively, and each NMP zero of $G_{zu}$ is a root of $\beta_U(q)$. The NMP-zero-based construction of $G_f$ is given by \[ G_f(q) = H_d \frac{\beta_U(q)}{q^{n_U+d}}. \] (19) The robustness of this construction is discussed in [8] for minimum-phase systems, where it is shown that RCAC has a 6-dB downward gain margin and an infinite upward gain margin with respect to uncertainty in $H_d$. Finally, this construction does not require $\eta_0 > 0$ in (17) as long as $G_f$ captures the NMP zeros of $G_{zu}$. #### B.
Phase-Matching-Based Construction of $G_f$ For $\Omega \in [0, \pi]$ rad/sample, consider the phase mismatch $\Delta(\Omega)$ between $G_f$ and $G_{zu}$ defined by \[ \Delta(\Omega) \triangleq \cos^{-1} \frac{\text{Re} \left[ G_{zu}(e^{j\Omega})\overline{G_f(e^{j\Omega})} \right]}{|G_{zu}(e^{j\Omega})||G_f(e^{j\Omega})|} \in [0, 180] \text{ deg}. \] (20) Note that $\Delta(\Omega)$ represents the angle between $G_{zu}(e^{j\Omega})$ and $G_f(e^{j\Omega})$ in the complex plane. For the phase-matching-based construction, $G_f$ is chosen to satisfy $$\Delta(\Omega) \leq 90 \text{ deg}, \text{ for all } \Omega \in [0, \pi] \text{ rad/sample}. \quad (21)$$ A weaker condition is sufficient when $G_{zu}$ is asymptotically stable and the exogenous signal $w(k)$ is harmonic. In this case, the phase-matching-based construction requires only $$\Delta(\Omega) \leq 90 \text{ deg}, \quad \Omega \in \text{spec}(w), \quad (22)$$ where $\text{spec}(w)$ is the frequency spectrum of $w$. The phase-matching-based construction of $G_f$ is applicable to plants that are either minimum-phase or Lyapunov stable, that is, plants that are not both unstable and NMP. For NMP systems, this construction requires that $\eta_0$ be positive. The robustness of the phase-matching-based construction to phase mismatch is addressed in [11, 13]. Assuming that $w$ is harmonic, the numerical examples in [13] suggest that (22) together with $|G_f(e^{j\Omega})| > 0$ for all $\Omega \in \text{spec}(w)$ is sufficient for the performance to converge to zero, and the asymptotic convergence is robust to the choice of tuning parameters $\eta_0$ and $P_0$. It is also shown that (22) is not necessary for zero steady-state error, and, when this condition is not satisfied, an appropriate choice of tuning parameters may still lead to zero asymptotic performance. However, in this case, the asymptotic performance is sensitive to the choice of $\eta_0$ and $P_0$.
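The mismatch (20) is straightforward to evaluate on a frequency grid. A sketch under assumed data: a first-order stable plant $G_{zu}(z) = 1/(z - 0.5)$ and the one-Markov-parameter FIR model $G_f(q) = H_1 q^{-1}$ with $H_1 = 1$ (neither is from the paper); for this pair the mismatch stays below 90 deg on all of $[0, \pi]$, so condition (21) holds:

```python
import cmath
import math

def Gzu(z):
    # Assumed stable first-order plant 1/(z - 0.5), not from the paper
    return 1.0 / (z - 0.5)

def Gf(z):
    # One-tap FIR model H_1 q^{-1} built from the first Markov parameter H_1 = 1
    return 1.0 / z

def mismatch_deg(Omega):
    """Phase mismatch (20): angle between Gzu and Gf at e^{j Omega}, in deg."""
    a = Gzu(cmath.exp(1j * Omega))
    b = Gf(cmath.exp(1j * Omega))
    c = (a * b.conjugate()).real / (abs(a) * abs(b))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Scan [0, pi] rad/sample; the worst-case mismatch for this pair is 30 deg
# (attained near Omega = pi/3), so (21) holds over the whole axis.
worst = max(mismatch_deg(i * math.pi / 200.0) for i in range(201))
print(mismatch_deg(0.0), worst)
```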
We stress that (22) is not required for signal boundedness and stability properties, but only for the performance to converge to zero. Two methods for minimizing the phase mismatch are presented in [14]. These methods fit the IIR plant $G_{zu}$ with an FIR transfer function $G_f$. One method solves a constrained linear least squares problem to bound $\Delta(\Omega)$, while the other solves a nonlinear least squares problem to minimize $\Delta(\Omega)$ over the FIR fit. V. Sampling Zeros of the Rohrs Plant Consider a discrete-time sampled-data system consisting of a zero-order hold, a continuous-time transfer function $T_{zu}(s)$, and a sampler with sampling period $h$, connected in series. The resulting discrete-time system is characterized by the pulse transfer function $G_{zu}(z)$ given by [16] $$G_{zu}(z) = (1 - z^{-1})Z\{T_{zu}(s)/s\}. \quad (23)$$ If the relative degree of $T_{zu}(s)$ is at least 2, then $G_{zu}(z)$ has more zeros than $T_{zu}(s)$. The additional zeros are called sampling zeros [15]. **Proposition 5.1:** Let $T_{zu}(s)$ be the $n^{\text{th}}$-order rational transfer function $$T_{zu}(s) = H\frac{(s - z_1) \cdots (s - z_m)}{(s - p_1) \cdots (s - p_n)} \quad (24)$$ with relative degree $d = n - m \geq 2$, and let $G_{zu}(z)$ be the corresponding pulse transfer function. Then, as the sampling period $h$ approaches 0, $n - d$ zeros of $G_{zu}(z)$ approach 1, and the remaining $d - 1$ zeros of $G_{zu}(z)$ approach the roots of $B_d(z)$, where $$B_d(z) \triangleq \beta_{d,1}z^{d-1} + \beta_{d,2}z^{d-2} + \cdots + \beta_{d,d}, \quad (25)$$ and, for $k \in \{1, \ldots, d\}$, $$\beta_{d,k} \triangleq \sum_{i=1}^{k}(-1)^{k-i}i^d \binom{d+1}{k-i}. \quad (26)$$ **Proof:** See Theorem 1 of [15]. □ All of the zeros of $B_d(z)$ are negative, and $B_d(z)$ has at least one zero that is on or outside the unit circle [17]. For $d \geq 3$, $B_d(z)$ has at least one zero outside the unit circle.
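The coefficients (26) are easy to tabulate; they are the Euler–Frobenius coefficients. A sketch that reproduces the low-order cases:

```python
from math import comb

def Bd_coeffs(d):
    """Coefficients [beta_{d,1}, ..., beta_{d,d}] of B_d(z) from (26)."""
    return [sum((-1) ** (k - i) * i ** d * comb(d + 1, k - i)
                for i in range(1, k + 1))
            for k in range(1, d + 1)]

print(Bd_coeffs(2))  # [1, 1]         -> B_2(z) = z + 1: zero at -1, on the circle
print(Bd_coeffs(3))  # [1, 4, 1]      -> zeros -2 +/- sqrt(3); -3.73 lies outside
print(Bd_coeffs(4))  # [1, 11, 11, 1]
```

The $d = 3$ case illustrates the claim above: one asymptotic sampling zero lies strictly outside the unit circle, so fast sampling of a relative-degree-3 plant yields an NMP pulse transfer function.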
As a consequence of Proposition 5.1, sampled-data systems are typically NMP. In particular, for sufficiently small $h$, the pulse transfer function of a continuous-time system whose relative degree is at least 3 is NMP. We now discuss the complications that arise in sampled-data control of the Rohrs counterexamples due to unmodeled high-frequency dynamics. In Section IV, the NMP-zero-based construction of $G_f$ requires knowledge of the NMP zeros of $G_{zu}(z)$, rather than the NMP zeros of $T_{zu}(s)$. Therefore, we consider the pulse transfer function $G_{zu}(z)$. We consider the first-order transfer function $T_0(s) = \frac{2}{s+1}$ cascaded with the unmodeled high-frequency dynamics $$\Lambda(s) = \frac{229}{(s + 15 - j2)(s + 15 + j2)}.$$ The plant is given by $T_{zu}(s) \triangleq T_0(s)\Lambda(s)$, which is minimum phase. Although the phase of $T_0(j\omega)$ is in $[-90, 0]$ deg for all $\omega$, $T_{zu}(j\omega)$ has a phase crossover frequency of $\omega_{pc} = 16.1$ rad/sec. Since the relative degree of $T_0(s)$ is 1, the pulse transfer function $G_0(z)$ has no sampling zeros for any sampling period $h$, and thus $G_0(z)$ is minimum phase. However, due to the unmodeled dynamics $\Lambda(s)$, the relative degree of the plant $T_{zu}(s)$ is 3. Therefore, in accordance with Proposition 5.1, $G_{zu}(z)$ is NMP for all sufficiently small $h$. Applying (23) to $T_0(s)$ and $T_{zu}(s)$, the numerator polynomials corresponding to the pulse transfer functions $G_0(z) = N_0(z)/D_0(z)$ and $G_{zu}(z) = N_{zu}(z)/D_{zu}(z)$ are $$N_0(z) = 2(1 - e^{-h}), \quad (27)$$ $$N_{zu}(z) = \beta_2z^2 + \beta_1z + \beta_0, \quad (28)$$ where $$\beta_0 = -2e^{-31h} + 2.29e^{-30h} + 1.03e^{-16h}\sin 2h$$ $$- 0.29e^{-16h}\cos 2h, \quad (29)$$ $$\beta_1 = -0.29e^{-30h} + 4.29(e^{-16h} - e^{-15h})\cos 2h$$ $$+ 0.29e^{-h} - 1.03e^{-15h}\sin 2h, \quad (30)$$ $$\beta_2 = 0.29e^{-15h}\cos 2h - 2.29e^{-h} + 2$$ $$+ 1.03e^{-15h}\sin 2h.
\quad (31)$$ Figure 2 illustrates the zeros of (28). We observe that for all $h \lesssim 0.2$, one of the sampling zeros is outside the unit circle, and thus $G_{zu}(z)$ has an unknown NMP zero, which is caused by the high-frequency dynamics $\Lambda(s)$. Neither the presence nor the location of this NMP zero can be assumed to be known, because $\Lambda(s)$ is assumed to be unmodeled. ![Fig. 2. Sampling Zeros of $G_{zu}(z)$ as a function of $h$.](image) **VI. ROBUSTNESS OF RCAC FOR THE ROHRS COUNTEREXAMPLES** For $h > 0.2$ sec, the Rohrs sampled-data plant $G_{zu}(z)$ is minimum phase. In this case, for $\eta_0 = 0$, the robustness of the NMP-zero-based construction is determined by the ratio of the first Markov parameters of $G_0(z)$ and $G_{zu}(z)$, as discussed in Section IV-A. In Figure 3, we illustrate the first Markov parameters $H_{0,1} = 2(1 - e^{-h})$ and $H_{zu,1} = \beta_2$ of $G_0(z)$ and $G_{zu}(z)$ for $h \in [0, 5]$. As $h \to \infty$, it follows from (27) and (31) that both Markov parameters approach 2. Therefore, $\frac{H_{zu,1}}{H_{0,1}} \geq 0.5$ for all $h$. Hence, the Markov parameter uncertainty is not a robustness issue for the adaptive system. However, for $h \lesssim 0.2$, the available model $G_0(z)$ does not capture the NMP sampling zeros, and therefore the NMP-zero-based construction will not work. ![Fig. 3. First Markov parameters of $G_0(z)$ and $G_{zu}(z)$.](image) On the other hand, using the error-dependent control penalty $\eta(k)$ (17) with $\eta_0 > 0$ ensures robustness and closed-loop stability, whether or not $G_{zu}(z)$ is NMP. Intuitively, closed-loop stability is expected with $\eta_0 > 0$. Indeed, suppose that the closed-loop system becomes unstable and $z(k)$ diverges to infinity. In this case, the term $\sum_{i=1}^{k} \lambda^{k-i} \eta(i) \hat{\Theta}^T(k) \Phi_l^T(i-1) \Phi_l(i-1) \hat{\Theta}(k)$ in (16) starts dominating the other terms.
Therefore, assuming $\sum_{i=1}^{k} \Phi_l^T(i-1)\Phi_l(i-1) \geq \alpha I > 0$, the optimization problem reduces to $\min_{\Theta(k)} \|\Theta(k)\|$, which gives $\Theta = 0$. Thus, the closed-loop system reverts to open loop. Since the open-loop plant is asymptotically stable, $z(k)$ cannot diverge to infinity, which contradicts the assumption that the closed-loop system is unstable. Since closed-loop stability does not imply zero asymptotic performance, using $\eta_0 > 0$ does not guarantee zero asymptotic performance. For zero asymptotic performance, we use the phase-matching-based construction to satisfy (22). Since $T_0(s)$ and $T_{zu}(s)$ may have a phase difference larger than 90 deg at high frequencies, fitting an FIR model to $G_0(z)$ may result in poor phase matching at high frequencies. However, as discussed in Section IV-B, (22) is not a necessary condition for zero steady-state error. **VII. SAMPLED-DATA ADAPTIVE CONTROL OF THE ROHRS COUNTEREXAMPLES WITH RCAC** We now apply RCAC to the Rohrs counterexamples [1]. In each example, the goal is to follow the output of the reference model $G_m(s) = \frac{3}{s+3}$. Each simulation is initialized with the controller gain vector $\Theta(0)$ set to zero, and RCAC is turned on at $k = 5$. We use $\lambda = 1$ in all simulations. For consistency with the MRAC architecture, we use the measurements of the plant output $y_0$ and the reference signal $r$, so that $y = \begin{bmatrix} y_0 & r \end{bmatrix}^T$. All modeling information we use is based on $G_0(z)$ rather than $G_{zu}(z)$. In each case, we illustrate the time traces of $z(k)$, $u(k)$, $\Theta(k)$, and the closed-loop spectral radius $\text{spr}(\tilde{A}(k))$. **A.
First Rohrs Counterexample: Sinusoidal Reference Inputs** In this section, we provide simulation results that illustrate the effectiveness of the error-dependent weighting $\eta(k)$ in preserving closed-loop stability, as predicted in Section VI, regardless of the frequency content of the reference signal. We first examine the NMP-zero-based construction with $\eta(k) = 0$, and show that it exhibits instability when the sampling period is small enough to cause the sampling zeros to become NMP. We illustrate that the NMP sampling zero is the only cause of instability: when the sampling period is large, the method suffers neither instability nor parameter drift, regardless of the frequency spectrum of the reference input. Next, we introduce the performance-dependent penalty $\eta(k)$ by letting $\eta_0 > 0$, and show that the closed-loop system remains stable even in the presence of the unknown NMP sampling zero, independently of the frequency content of the reference signal. 1) **NMP-Zero-Based Construction with $\eta_0 = 0$:** We first consider the reference input $r_1(t) = 0.3 + 2 \sin(8.0t)$. We sample the continuous-time plant with $h = 0.25$ sec/sample, so that the Nyquist frequency $\omega_N = 4\pi$ rad/sec is larger than the largest reference frequency 8 rad/sec. For this sampling period, the sampling zeros are minimum-phase. The first Markov parameters corresponding to the pulse transfer functions $G_{zu}(z)$ and $G_0(z)$ are $H_{zu,1} = 0.2341$ and $H_{0,1} = 0.4424$, respectively. We let $G_f = H_{0,1}q^{-1}$, and choose $P_0 = 10I$, $n_c = 10$. As shown in Figure 4, $z$ converges to zero, $u$ remains bounded, $\Theta$ converges, and $\text{spr}(\tilde{A}(k))$ converges below 1. Keeping $h$ the same, we now consider the reference input $r_2(t) = 0.3 + 1.8 \sin(16.1t)$, which causes parameter drift and instability in traditional adaptive methods [1].
Note that the frequency of the reference signal is selected at the point where $T_{zu}(s)$ has a 180-deg phase lag. Furthermore, note that the Nyquist frequency $\omega_N$ is smaller than the largest reference frequency 16.1 rad/sec. However, the goal here is to show that closed-loop stability is maintained independently of the frequency of the reference command, as long as the sampling zeros arising from the unknown dynamics are minimum-phase. Fig. 4. Response to the reference signal $r_1(t) = 0.3 + 2\sin(8.0t)$ with $h = 0.25$ sec/sample and NMP-zero-based construction with $\eta_0 = 0$. Choosing the same controller and tuning parameters, the parameters converge, and the closed-loop system is stable after convergence, as shown in Figure 5. Of course, since $h$ is not small enough to reconstruct $r_2(t)$ from the sampled data, the performance $z(t)$ is not equal to zero between consecutive sampling instants. Fig. 5. Response to the reference signal $r_2(t) = 0.3 + 1.8\sin(16.1t)$ with $h = 0.25$ sec/sample and NMP-zero-based construction with $\eta_0 = 0$. Finally, to improve the intersample behavior, we reduce $h$ to 0.1 sec/sample, and consider $r_2(t)$ again. We showed in Section VI that $G_{zu}(z)$ is NMP for this sampling rate, and predicted that the choice $G_f = H_{0,1}q^{-1}$ with $\eta_0 = 0$ would lead to instability, since $G_f$ does not capture the NMP zeros of $G_{zu}$. The first Markov parameters are now $H_{zu,1} = 0.037$ and $H_{0,1} = 0.1903$, and we choose $G_f = H_{0,1}q^{-1}$, $P_0 = 10I$, and $n_c = 10$. RCAC destabilizes the closed-loop system as shown in Figure 6. Similar behavior is obtained with $r_1(t)$ and other reference signals, which confirms that the only cause of instability is the unknown NMP sampling zero. Fig. 6. Response to the reference signal $r_2(t) = 0.3 + 1.8\sin(16.1t)$ with $h = 0.1$ sec/sample and NMP-zero-based construction with $\eta_0 = 0$.
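The first Markov parameters quoted in this section can be checked directly against the closed-form numerators (27)–(31), since $H_{0,1}$ and $H_{zu,1}$ are the leading numerator coefficients. A sketch:

```python
import math

def H0_1(h):
    # First Markov parameter of G_0(z): the numerator (27), 2(1 - e^{-h})
    return 2.0 * (1.0 - math.exp(-h))

def Hzu_1(h):
    # First Markov parameter of G_zu(z): leading coefficient beta_2 of (28),
    # evaluated from (31)
    return (0.29 * math.exp(-15.0 * h) * math.cos(2.0 * h)
            - 2.29 * math.exp(-h) + 2.0
            + 1.03 * math.exp(-15.0 * h) * math.sin(2.0 * h))

print(round(H0_1(0.25), 4), round(Hzu_1(0.25), 4))  # 0.4424 0.2341
print(round(H0_1(0.10), 4), round(Hzu_1(0.10), 4))  # 0.1903 0.037
```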
2) **Phase-Matching-Based Construction with $\eta_0 > 0$:** We now introduce the performance-dependent weighting $\eta(k)$, and use the phase-matching-based construction for zero asymptotic performance. We sample the plant with $h = 0.1$ sec/sample. We use the linear fitting method outlined in [14] to obtain $G_f = 0.1946q^{-1} + 0.1761q^{-2}$, which bounds $\Delta(\Omega)$ by 65 deg from above, where $\Delta(\Omega)$ is defined as in (20) with $G_{zu}$ replaced by the available model $G_0$. Consequently, this choice does not guarantee (22); in fact, the mismatch with respect to $G_{zu}$ satisfies $\Delta(\Omega) > 90$ deg for $\Omega \in [0.6, 1.77] \cup [2.73, \pi]$ rad/sample. Note that the NMP sampling zero $-1.82$ of $G_{zu}$ is not captured by $G_f$. We first consider $r_1(t)$. We have $\Delta(0) = 0$ deg and $\Delta(0.8) = 94$ deg at the reference frequencies. Choosing $\eta_0 = 0.3$, $p_c = 10$, $P_0 = I$, and $n_c = 10$, $z$ converges to zero, and the asymptotic closed-loop system is stable with no parameter drift, as shown in Figure 7. Fig. 7. Response to the reference signal $r_1(t) = 0.3 + 2\sin(8t)$ with $h = 0.1$ sec/sample and phase-matching-based construction with $\eta_0 = 0.3$. Keeping $G_f$, $\eta_0$, $P_0$, and $n_c$ the same, we now consider $r_2(t)$. We have $\Delta(1.61) = 92$ deg at the sinusoidal component of the reference spectrum. To ensure that no parameter drift occurs, we simulate the adaptive system for 2000 seconds. The performance converges to zero, and the asymptotic closed-loop system is stable, as shown in Fig. 8. Fig. 8. Response to the reference signal $r_2(t) = 0.3 + 1.8\sin(16.1t)$ with $h = 0.1$ sec/sample and phase-matching-based construction with $\eta_0 = 0.3$. B. Second Rohrs Counterexample: Sensor Noise and Lack of Persistent Excitation Unknown additive sensor noise is identified as the second main robustness challenge for common adaptive methods [1]. In this section, we show that RCAC is unconditionally robust to sensor noise with either construction method.
We consider the unknown additive sensor noise \( d(t) \), and modify the measurement vectors \( y \) and \( z \) so that \[ y(k) \triangleq \begin{bmatrix} y_0(k) + d(k) & r(k) \end{bmatrix}^T, \quad z(k) \triangleq y_0(k) + d(k) - y_m(k). \] Hence, RCAC interprets the sensor noise as an additional component of the command that needs to be followed, and the performance measurement \( z \) is not equal to the command-following error \( y_0 - y_m \). For illustration, we consider the step reference input \( r(t) = 2 \), which is persistently exciting of order one, with the unknown sensor noise \( d(t) = 0.5 \sin 8t \), which is persistently exciting of order two. 1) **NMP-Zero-Based Construction with \( \eta_0 = 0 \):** We sample the continuous-time plant with \( h = 0.25 \) sec/sample, and thus the sampling zeros are minimum-phase. Applying RCAC with \( G_f = H_{0,1} q^{-1} \), \( n_c = 10 \), and \( P_0 = 10I \), the performance measurement (not the command-following error) is driven to zero, the parameters converge, and the closed-loop system is stable as shown in Figure 9. ![Fig. 9. Response to the reference input \( r(t) = 2 \) and sensor noise \( d(t) = 0.5 \sin 8t \) with \( h = 0.25 \) sec/sample and NMP-zero-based construction.](image) 2) **Phase-Matching-Based Construction with \( \eta_0 > 0 \):** We now sample the continuous-time plant with \( h = 0.1 \) sec/sample, and thus one of the sampling zeros is NMP. Applying RCAC with \( G_f = 0.1946q^{-1} + 0.1761q^{-2} \), \( \eta_0 = 0.3 \), \( p_c = 10 \), \( P_0 = I \), and \( n_c = 10 \), \( z \) converges to zero, the parameters converge, and the closed-loop system is stable as shown in Figure 10. ![Fig. 10. Response to \( r(t) = 2 \) and \( d(t) = 0.5 \sin 8t \) with \( h = 0.1 \) sec/sample and phase-matching-based construction with \( \eta_0 = 0.3 \).](image) VIII.
CONCLUSIONS We revisited the Rohrs counterexamples within the context of sampled-data adaptive control using the RCAC algorithm. From a sampled-data point of view, it turns out that the challenging aspect of these problems for RCAC is not the unmodeled dynamics per se, but rather the sampling zeros, which may be NMP under fast sampling. These sampling zeros are induced by the unmodeled dynamics, and thus cannot be assumed to be known. Nevertheless, since the Rohrs counterexamples are open-loop asymptotically stable, with the use of a performance-dependent weighting, RCAC is able to provide reliable performance without knowledge of either the unmodeled high-frequency dynamics or the NMP sampling zeros, regardless of the frequency content of the reference input. Finally, the presence of output disturbances does not adversely affect the closed-loop stability of the adaptive system, regardless of the degree of persistency of the reference input or the disturbance signal. REFERENCES [1] C. E. Rohrs, L. Valavani, M. Athans, and G. Stein, “Robustness of Adaptive Control Algorithms in the Presence of Unmodeled Dynamics,” *Proc. Conf. Dec. Contr.*, December 1982. [2] B. D. O. Anderson and A. Dehghani, “Challenges of Adaptive Control—Past, Present and Future,” *Annual Reviews in Control*, Vol. 32, pp. 123–135, 2008. [3] B. D. O. Anderson, “Failures of Adaptive Control Theory and Their Resolution,” *Communications in Information and Systems*, Vol. 5, No. 1, pp. 1–20, 2005. [4] K. J. Åström and B. Wittenmark, *Adaptive Control*, 2nd ed., Addison-Wesley, 1995. [5] G. C. Goodwin and K. S. Sin, *Adaptive Filtering, Prediction and Control*, Prentice Hall, 1984. [6] P. A. Ioannou and J. Sun, *Robust Adaptive Control*, Prentice Hall, 1996. [7] R. Venugopal and D. S. Bernstein, “Adaptive Disturbance Rejection Using ARMARKOV System Representations,” *IEEE Trans. Contr. Sys. Tech.*, Vol. 12, pp. 257–266, 2004. [8] J. B. Hoagg, M. A. Santillo, and D. S.
Bernstein, “Discrete-Time Adaptive Command Following and Disturbance Rejection for Minimum-Phase Systems with Unknown Exogenous Dynamics,” *IEEE Trans. Autom. Contr.*, Vol. 53, pp. 912–928, 2008. [9] A. M. D’Amato and D. S. Bernstein, “Adaptive Control Based on Retrospective Cost Optimization,” *AIAA J. Guid. Contr. Dyn.*, Vol. 33, pp. 289–304, 2010. [10] J. B. Hoagg and D. S. Bernstein, “Retrospective Cost Adaptive Control for Nonminimum-Phase Discrete-Time Systems, Part 1: The Ideal Controller and Error System; Part 2: The Adaptive Controller and Stability Analysis,” *Proc. Conf. Dec. Contr.*, pp. 893–904, Atlanta, GA, December 2010. [11] A. M. D’Amato, E. D. Sumer, and D. S. Bernstein, “Frequency-Domain Stability Analysis of Retrospective-Cost Adaptive Control for Systems with Unknown Nonminimum-Phase Zeros,” *Proc. Conf. Dec. Contr.*, pp. 1096–1103, Orlando, FL, December 2011. [12] M. S. Fledderjohn, M. S. Holzel, H. J. Palanthandalam-Madapusi, R. J. Fuentes, and D. S. Bernstein, “A Comparison of Least Squares Algorithms for Estimating Markov Parameters,” *Proc. Amer. Contr. Conf.*, pp. 3735–3740, Baltimore, MD, June 2010. [13] E. D. Sumer, A. M. D’Amato, and D. S. Bernstein, “Robustness of Retrospective-Cost Adaptive Control to Markov-Parameter Uncertainty,” *Proc. Conf. Dec. Contr.*, pp. 6085–6090, Orlando, FL, December 2011. [14] E. D. Sumer, M. S. Holzel, A. M. D’Amato, and D. S. Bernstein, “FIR-Based Phase Matching for Robust Retrospective-Cost Adaptive Control,” *Proc. Amer. Contr. Conf.*, Montreal, Canada, June 2012. [15] K. J. Åström, P. Hagander, and J. Sternby, “Zeros of Sampled Systems,” *Automatica*, Vol. 20, No. 1, pp. 31–38, 1984. [16] B. C. Kuo, *Digital Control Systems*, Holt, Rinehart and Winston, 1980. [17] S. R. Weller, W. Moran, B. Ninness, and A. D. Pollington, “Sampling Zeros and the Euler–Frobenius Polynomials,” *IEEE Trans. Autom. Contr.*, Vol. 46, No. 2, pp. 340–343, 2001.
Government of Tonga Public Service Tribunal PST Appeal No. 2 of 2018 Mrs. Tupou’ahau Fakakovikaetau Appellant Public Service Commission Respondent PUBLIC SERVICE TRIBUNAL: Mr. ‘Aisea Taumoepau, SC Chairman Mr. Timote Katoanga Member Mrs. Lepolo Taunisila Member REPRESENTATION: Appellant: Mrs. Ane Tavo Counsel for the Appellant Mrs. Tupou’ahau Fakakovikaetau In attendance Respondent: Mr. Sione Sisifa Solicitor General Mrs. Eunice Moala Public Service Commission Date of Hearing: 5th February, 2019 Date of Ruling: 5th March, 2019 Mrs. Tupou’ahau Fakakovikaetau (Appellant) v Public Service Commission (Respondent) PST Appeal No. 2 of 2018 1. This is an appeal by the Appellant seeking the following decision from the Tribunal: (a) To make the appropriate directions in respect of the Public Service Commission’s (PSC) letter of 22 November, 2017. 2. The Appellant relied on the following grounds, namely: (i) It was not the Appellant’s intention to take ‘special leave without pay’ when she applied for special leave in lieu of overtime on 29 September 2017. (ii) That the PSC’s decision dated 22 November 2017 was not received by the Ministry of Internal Affairs (MIA) office until 30 November 2017. The Appellant had resumed duty on 27 October 2017, which did not give her any opportunity to respond regarding whether to continue with her leave or resume work earlier to avoid deductions in salary. (iii) That the Appellant did not receive notification of the withholding of her salary for the first half of December 2017, and one day from the second half of December, until she went to the Bank of the South Pacific (BSP) ATM in December 2017 and discovered that her salary had not been deposited by Treasury; she had not consented to this withholding. BACKGROUND 3. The Appellant is a Principal Programme Officer at the MIA. 4.
On 29 September 2017, the Appellant submitted an application for 17 days special leave with pay, in lieu of overtime, to the Deputy Chief Executive Officer (DCEO) for the Women’s Affairs Division (WAD), Mrs. Polotu Paunga, to commence on 04 October 2017 and end on 26 October 2017. 5. The completed Application for Leave Form from the Appellant was then signed and recommended by the DCEO WAD on 29 September 2017. 6. On 29 September 2017, the Acting Chief Executive Officer (ACEO), Ms. Kalesita Taumoepoeau, approved the Appellant’s special leave application. 7. The Appellant commenced her special leave on 04 October 2017. 8. The Appellant resumed duty on 27 October 2017. 9. On 15 December 2017, the Appellant found out that her salary for the first half of December 2017 had not been deposited to her BSP account. 10. In following up with the MIA Accounts Section, she was told that her salary for the first half, plus one day from the second half, of December 2017 had been withheld. 11. On 18 December 2017, the Appellant sent an email to Ms. Taumoepoeau and stated: “.... Kalesita, I feel that I should have been properly informed prior to the approval of my 11 days without pay because this is not what I had applied for, I could have decided not to take the 11 days without leave. Given the information provided, I ask that inquiry regarding the holding of my salary for the first half of December, 2017 be looked into.” 12. On 16 January 2018, Ms. ‘Emeline Tongotea, Corporate Services Division of MIA, forwarded email correspondence between the PSC and the Ministry of Finance to the Appellant, which stated that the reason for withholding her pay for 11 days was in accordance with the Public Finance Management Act (Treasury Instructions) 2010: “All staff overtime shall be settled within one (1) month if paid by cash, or leave shall be taken within three (3) months if time off in lieu”. 13. The Appellant complained to Ms.
Tongotea, asking why she was not informed in time, as she would have taken the option of returning to work rather than taking leave without pay. 14. The Appellant lodged her complaint with the Public Service Tribunal (Tribunal) on 29 June 2018. **PSC ACTION AND DECISION** 15. On 06 October 2017, the ACEO of MIA wrote to the CEO of the PSC to advise that the following officers had been approved for Annual Leave with pay: “ 1. Tupou’ahau Fakakovikaetau, Principal Programme Officer, 17 days – Lieu off overtime – (04/10/2017 – 27/10/2017) 2. …” 16. The ACEO of MIA sent a Savingram to the CEO of the PSC on 30 October 2017 to advise that “Mrs. Tupou’ahau Fakakovikaetau, Principal Programme Officer resumed duty on 27th October 2017 from her 17 days Lieu off overtime as from 04th October to 26th October 2017.” 17. On 22 November 2017, the CEO of the PSC sent a Savingram to the ACEO for MIA to advise that approval had been given to Mrs. Fakakovikaetau “…to be granted 17 days leave (i.e. working days off in lieu of overtime (overtime on Mar & April, ’17 and Jun & Jul, ’17) plus 11 working days special leave without pay) with effect from 4 October, 2017. For your information and further necessary action, please.” 18. In an email sent on 18 December 2017, the ACEO, Ms. Taumoepeau, raised the following question to Falemei Fale and Malia Pome’e of the PSC: “Falemei/Malia – If “Leave” requested by the officer was specifically to be taken in lieu of “Overtime” hrs she had worked, why had PSC recorded 11 days of those days be recorded as Special Leave Without Pay without the officer’s consent, or that of MIA’s CEO?” 19. Lisimeili Loloa of the PSC sent an email to Melisa Pulu of the Ministry of Finance on 10 January 2018 to check whether they would accept Tupou’ahau Fakakovikaetau’s hours of overtime from 2014 – 2016 to cover her leave without pay, or whether they would still have to adhere to the Treasury Instructions 2010. 20.
Melisa Pulu responded that the position is quite clear from the Public Finance Management Act (Treasury Instructions) 2010, section 40(9), and quoted: “All staff overtime shall be settled within one (1) month if paid by cash, or leave shall be taken within three (3) months if time off in lieu”. Melisa also emphasized that they work under the Instructions to be fair to everyone. 21. Lisimeili Loloa noted the advice and position of the Ministry of Finance. 22. The email correspondence of Lisimeili and Melisa was referred to Tupou’ahau Fakakovikaetau by ‘Emeline Tongotea of MIA on 16 January, 2018. 23. On 19 January, 2018, Lisimeili Loloa of the PSC emailed Melisa Pulu of the Ministry of Finance, conveying that Tupou’ahau Fakakovikaetau continued to dispute the decision of the Ministry of Finance to withhold her salary for 11 days of leave and sought to have this returned to her. In her email, Lisimeili sought Melisa’s views and raised that Tupou Fakakovikaetau had 19.5 days in lieu of overtime from 2014 and 2015, and that she had utilized all her days in lieu of overtime for 2016 and 2017. 24. Melisa replied to Lisimeili’s email on the same day and stated that she had consulted her Supervisor and confirmed the Ministry of Finance’s position that overtime, when paid, must be paid one month after the overtime, and days in lieu of overtime must be taken within three months. 25. On 13 April, 2018 the Appellant emailed the ACEO of the MIA, raising her concerns over the delay involved in notifying her of the decision, as timely notice would have provided her an opportunity to decide whether to take special leave without pay or return to work. The Appellant emphasized that she found out that her salary for December 2017 was being withheld only when she received no salary. 26. On 07 June, 2018 Leinolo Lakai of MIA emailed Falemei Fale of PSC to request an explanation for why the PSC failed to advise the MIA in a more timely manner, as this would have avoided withholding the officer’s salary. 27.
Consequently, the Appellant appealed to the Tribunal against the content of the letter from the CEO of the PSC to the ACEO of MIA of 22 November, 2017, as if that were a decision of the PSC. SUBMISSIONS 28. Both parties filed written submissions, and the Tribunal is grateful to both counsel, Mrs. Ane Tavo (on behalf of the Appellant) and the Solicitor General (on behalf of the Respondent). The parties further supported their written submissions with helpful oral elaboration and clarification. 29. The Appellant’s submissions focused on the grounds of appeal stated in paragraph 2 above, and will not be repeated here. 30. The Respondent’s submissions may be summarized as follows: (i) That the issue complained of by the Appellant cannot be dealt with as an appeal under section 21A(2) of the Public Service Act, due to the following: a. That time off in lieu of overtime should be distinguished from a leave that may be granted by a CEO; b. That the authority to approve time off in lieu of overtime payment is a decision for the CEO for the MIA, and not the PSC; c. That the CEO for MIA is obliged under the Public Finance Management Act (Treasury Instructions) 2010 to observe the Instructions, and to verify the number of days which the Appellant might take in lieu of overtime, before consenting to the Appellant’s application to take time in lieu of overtime; and d. That the action of the PSC in issuing the letter of 22 November 2017 amounts to advice, rather than a decision that can be appealed under section 21A(2) of the Public Service Act or any regulations made under the Act. (ii) That the issue complained of by the Appellant should more appropriately have been dealt with under the Public Service (Grievance and Dispute Procedures) Regulations 2010. 31. In response, the Appellant’s submission is summarized as follows: (i) That the Appellant has been a victim of unfair treatment as a result of the decision of the PSC dated 22 November, 2017, due to the following: a.
That the response from the PSC to the notification of the Appellant’s application for special leave in lieu of overtime, sent in on 06 October, 2017, was unreasonably delayed. b. That the unreasonable delay has caused significant unfairness to the Appellant. (ii) That the Appellant has in fact exhausted the grievance procedure available under the Public Service (Grievance & Dispute Procedures) Regulations 2010. TRIBUNAL’S FINDINGS 32. The Appellant’s ground of appeal is an assertion that the PSC’s failure to provide timely advice on the approval of the Appellant’s leave has disadvantaged the Appellant, who did not intend to take ‘special leave without pay’ at the time she applied for leave. 33. It remains unclear why the PSC took so long to advise the MIA that the Appellant had been approved 11 days special leave without pay. 34. Evidence was provided that it was the Ministry of Finance’s decision, and not the PSC’s, to withhold the Appellant’s salary for 11 days. The decision was made in accordance with the Public Finance Management Act (Treasury Instructions) 2010, section 40(9). 35. The Tribunal notes that the Appellant followed the proper channels when applying for leave and obtained the approval of the ACEO for MIA. 36. In the written and oral submissions it was asserted that this shortfall lay with the ACEO for MIA, who was in a position to know the Treasury Instructions and to ensure compliance, and that it would therefore be more practical to discipline the ACEO by deducting the equivalent of 11 days of the Appellant’s salary from the ACEO and reimbursing the Appellant. 37. However, as a matter of natural justice, it would be quite unfair to impose any disciplinary penalty on the ACEO, who is not a party to this appeal and is unable to present her arguments. 38.
Furthermore, the onus is also on the Appellant to be aware of the relevant Treasury Instructions relating to the process of the conversion of overtime into leave days, as provided under the Public Finance Management Act (Treasury Instructions) 2010, section 40(9), which reads as follows: “All staff overtime shall be settled within one (1) month if paid by cash, or leave shall be taken within three (3) months if time off in lieu”. 39. It must be noted that the legislation quoted above forms part of the conditions of employment of public employees, including the Appellant. 40. The Tribunal also notes that there is no Public Service Commission decision. 41. Matters highlighted in this case include the importance of informing employees in a timely manner of situations that may affect them, and the relevant authorities should be encouraged to act accordingly. ACKNOWLEDGEMENT 42. The Tribunal acknowledges the efforts of both parties in providing the necessary documentation, and the capable manner in which both counsel conducted this case. ORDER OF THE TRIBUNAL 43. Section 21F of the Public Service Act 2002, as amended, provides that the Tribunal may make an order to affirm, vary, or set aside the PSC’s decision. 44. The Tribunal makes the following orders: (a) For the reasons given above, this appeal is dismissed. (b) The parties are at liberty to apply. Mr. ‘Aisea Taumoepeau SC Mr. Timote Katoanga Mrs. Lepolo Taunisila
The barycenters of the $k$-additive dominating belief functions & the pignistic $k$-additive belief functions Thomas Burger Lab-STICC, Université Européenne de Bretagne, Université de Bretagne-Sud, CNRS, Vannes, France Email: email@example.com Fabio Cuzzolin Department of Computing Oxford Brookes University Oxford, U.K. Email: firstname.lastname@example.org Abstract—In this paper, we consider the dominance properties of the set of the pignistic $k$-additive belief functions. Then, given $k$, we conjecture the shape of the polytope of all the $k$-additive belief functions dominating a given belief function, starting from an analogy with the case of dominating probability measures. Under such conjecture, we compute the analytical form of the barycenter of the polytope of $k$-additive dominating belief functions, and we study the location of the pignistic $k$-additive belief functions with respect to this polytope and its barycenter. Keywords: Belief functions, pignistic transform, pignistic $k$-additive belief functions, $k$-additive dominating belief functions, permutation. I. INTRODUCTION Let $\Omega = \{x_1, x_2, \ldots, x_{|\Omega|}\}$ be a set of hypotheses (or events, or outcomes) of cardinality $|\Omega|$. As often stressed (such as in [1] or [2]), manipulating belief functions on $\Omega$ is not always convenient: the meaning of each focal set in terms of mass is difficult to understand and to interpret, the computations on the powerset $\mathcal{P}(\Omega)$ are painstaking to perform, and finally decision making in a game-of-chance context is not trivial. This is why it is advised, in the Transferable Belief Model [2], to convert a mass function into a pignistic probability for decision making.
The pignistic probability function associated to a mass function $m$ corresponds to the following Bayesian mass function: $$m^{[BetP]}(\{x\}) = \sum_{A \ni x \atop A \subseteq \Omega} \frac{m(A)}{|A|} \quad \forall \{x\} \in \left\{ \{x_1\}, \ldots, \{x_{|\Omega|}\} \right\},$$ while $m^{[BetP]}(A) = 0$ if $A$ is not a singleton. The belief function $b^{[BetP]}$ corresponding to $m^{[BetP]}$ reads as: $$b^{[BetP]}(A) = \sum_{\{x\} \in A} m^{[BetP]}(\{x\}) \quad \forall A \subseteq \Omega.$$ The latter is known to correspond to the Shapley value: $$b^{[S]}(B) = b^{[BetP]}(B) = \sum_{A \subseteq \Omega} \frac{m(A) \cdot |A \cap B|}{|A|} \quad \forall B \subseteq \Omega.$$ Alternatively, $k$-additive belief functions ($k \leq |\Omega|$) have been proposed to face the difficulties of the manipulation of generic belief functions [1]. A $k$-additive belief function $b$ on $\Omega$ is such that its mass function $m$ has one (or more) focal sets of cardinality $k$ and no focal set of cardinality $> k$. We denote by $\mathcal{F}^k(\Omega)$ the set of the possible focal sets of $k$-additive belief functions (i.e., the subsets of $\Omega$ of size smaller than or equal to $k$). The compactness of $k$-additive belief functions makes them convenient to deal with, as they require fewer computations and are easier to understand and manipulate. Of course, Bayesian belief functions are 1-additive belief functions. Choosing the hypothesis that maximizes $m^{[BetP]}$ is one of the most popular methods to make a decision [2]. Thus, the question of generalizing it has naturally arisen, in various ways [3]. Among the various generalisations, the one presented in [4] and applied to a real decision making problem in [5] is based on the notion of $k$-additive belief functions. This generalization is designed to help compare hypotheses of different cardinalities in order to make a decision: thanks to its use, it is possible to consider various singleton hypotheses, and to choose a set of them.
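As a side illustration (ours, not the authors'), the pignistic transform defined above can be sketched in a few lines of Python; the encoding of a mass function as a dict of frozensets and the example masses are arbitrary choices:

```python
def pignistic(m):
    """Pignistic transform m^[BetP]: split each mass m(A) equally
    among the singletons of A.

    m: dict mapping frozenset focal sets to masses (summing to 1).
    Returns the resulting Bayesian mass function on the singletons.
    """
    bet = {}
    for focal, mass in m.items():
        for x in focal:                       # each x in A receives m(A)/|A|
            bet[x] = bet.get(x, 0.0) + mass / len(focal)
    return bet

# Arbitrary example on Omega = {x, y, z}
m = {frozenset({'x'}): 0.5,
     frozenset({'y', 'z'}): 0.2,
     frozenset({'x', 'y', 'z'}): 0.3}
bet = pignistic(m)    # x: 0.5 + 0.3/3, y: 0.2/2 + 0.3/3, z: 0.2/2 + 0.3/3
```

The resulting `bet` sums to 1, as the transform merely redistributes mass.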
The interesting point is that the cardinality of this set is not determined in advance, as it is only stated that its cardinality is lower than or equal to a dedicated threshold $\gamma$. Thus, depending on the context and the evidence, it is possible to focus on a singleton hypothesis, or, on the contrary, to remain imprecise in the decision process, and to choose the set of the 2, 3, ... or $\gamma$ “best” hypotheses. From a mathematical point of view, this transformation converts a given belief function into a $\gamma$-additive belief function. The goal of this paper is to begin a comparative study of the properties of the pignistic $k$-additive belief functions and the barycenters of the polytopes of $k$-additive b.f.s, as two different but sensible ways of generalizing the pignistic transform. In Section II, some background information is recalled, and several notations are set. In Section III, we establish some dominance properties of the set of pignistic $k$-additive belief functions. Then, in Section IV, we propose a conjecture on the shape of the polytope of $k$-additive dominating belief functions. In particular, we claim that its vertices are associated with permutations of all focal sets of size $|A| \leq k$, even though not uniquely. In Section V, the analytical expression of the barycenter of this polytope is given. A tentative comparison with the set of pignistic $k$-additive belief functions is sketched in Section VI. II. Background & notations A. The set of pignistic $k$-additive belief functions First, we recall some results on the generalisation of the pignistic transform described in [4]. When using this transform, the first thing the expert should do is to define a hesitation threshold $\gamma \in \{1, \ldots, |\Omega|\}$, according to the maximum amount of imprecision which is acceptable for the decision, regarding the constraints of his/her problem.
Once the hesitation $\gamma$ is chosen, we have, $\forall B \subseteq \Omega$ such that $|B| \leq \gamma$: $$m_\gamma^{[B]}(B) = m(B) + \sum_{A \supset B, A \subseteq \Omega, |A| > \gamma} \frac{m(A) \cdot |B|}{N'(|A|, \gamma)}$$ \hspace{1cm} (2) with $m_\gamma^{[B]}(B) = 0$ whenever $|B| > \gamma$, where $$N'(|A|, \gamma) = \sum_{k=1}^{\gamma} \binom{|A|}{k} \cdot k = \sum_{k=1}^{\gamma} \frac{|A|!}{(k - 1)!(|A| - k)!}$$ represents the “weighted” number of subsets of $A$ of cardinality at most $\gamma$, each of them weighted by its cardinality. The mass $m(A)$ associated with a focal set $A$ of cardinality $|A| > \gamma$ is divided into $N'(|A|, \gamma)$ equal parts, and these parts are redistributed to the focal sets of cardinality $\leq \gamma$ in a manner proportional to their cardinality. Let us denote by $H_\gamma^A(B)$ the mass inherited by $B$ from $A$, and by $H_\gamma(B)$ the total mass inherited by $B$ from focal sets of cardinality $> \gamma$. Of course, we have $$H_\gamma(B) = m_\gamma^{[B]}(B) - m(B) = \sum_{A \supset B, |A| > \gamma} H_\gamma^A(B).$$ \hspace{1cm} (3) From the definition, it is obvious that the belief function $b_\gamma^{[B]}$ derived from the mass function $m_\gamma^{[B]}$ is $\gamma$-additive. Moreover, we have that $b_1^{[B]} = b^{[S]}$, i.e., the pignistic transform corresponds to the particular case where $\gamma = 1$ [4]. Finally, for any belief function $b$ which is $k$-additive (possibly $k = |\Omega|$; any $b$ is at most $|\Omega|$-additive), it is possible to define $k - 1$ such belief functions $b_\gamma^{[B]}$ with $1 \leq \gamma \leq k - 1$. This leads to the definition of the set $$\mathcal{PBF}[b] = \left\{ b^{[S]}, b_2^{[B]}, \ldots, b_{k-1}^{[B]}, b \right\}$$ so that, $\forall \gamma : 1 \leq \gamma \leq k$, the $\gamma$-th element of $\mathcal{PBF}[b]$ is a $\gamma$-additive belief function. We call $\mathcal{PBF}[b]$ the set of pignistic $k$-additive belief functions of $b$. B.
Dominance properties The “least commitment principle” [6] postulates that, given a set of mass functions compatible with a number of constraints, the most appropriate one is the “least informative”. As pointed out by Denoeux [7], in some sense it plays a role similar to that of maximum entropy in probability theory. There are many ways of measuring the information content of a belief function. This is done in practice by defining a partial order in the space of belief functions [8]–[10]. The partial order relation called weak inclusion is defined according to the notion of dominance: a belief function $b'$ dominates another one $b$ if the belief values of $b'$ are greater than those of $b$ for all events $A \subseteq \Omega$: $$b \ll b' \equiv b(A) \leq b'(A) \quad \forall A \subseteq \Omega.$$ \hspace{1cm} (4) The set of probability measures $$\mathcal{P}[b] = \{ p \in \mathcal{P} : b(A) \leq p(A) \quad \forall A \subseteq \Omega \}$$ \hspace{1cm} (5) corresponds to the set of Bayesian (or 1-additive) belief functions more committed than $b$ according to (4). We call $\mathcal{P}[b]$ the set of probabilities dominating $b$. As it has been proven in [11], [12], the set of dominating probabilities (5) is a polytope, whose vertices are probabilities determined by permutations of the elements of $\Omega$. **Proposition 1:** The set $\mathcal{P}[b]$ of all the probability functions consistent with a b.f. $b$ (of mass $m$) is the polytope $$\mathcal{P}[b] = Cl(p^\rho[b] \ \forall \rho),$$ where $Cl(.)$ denotes the convex closure operator and where $\rho$ is any permutation $\{x_{\rho(1)}, \ldots, x_{\rho(n)}\}$ of the singletons of $\Omega$ ($n = |\Omega|$), and the vertex $p^\rho[b]$ is the Bayesian b.f.
such that $$p^\rho[b](x_{\rho(i)}) = \sum_{A \ni x_{\rho(i)};\, A \not\ni x_{\rho(j)} \, \forall j < i} m(A).$$ \hspace{1cm} (6) Each probability function (6) attributes to each singleton $x = x_{\rho(i)}$ the mass of all the focal elements of $b$ which contain it, but do not contain the elements which precede $x$ in the ordered list $\{x_{\rho(1)}, \ldots, x_{\rho(n)}\}$ generated by the permutation $\rho$. In [13], the authors consider the dominance properties of $k$-additive belief functions for any type of capacity [14]. There, they provide some results to characterize $\mathcal{B}_k[b]$, the polytope of $k$-additive belief functions dominating another belief function. In this paper, we will formulate a conjecture on the form of $\mathcal{B}_k[b]$ analogous to Proposition 1, and discuss the location of $\mathcal{PBF}[b]$ with respect to the set of $k$-additive dominating belief functions and its barycenter for all $k$. III. Dominance properties of the set of pignistic $k$-additive belief functions Let us start with a convenient property, which states that iteratively computing several pignistic $\gamma$-additive belief functions, with various values of $\gamma$, is equivalent to directly computing the one with the smallest $\gamma$: **Proposition 2:** Let $b$ be a $k$-additive belief function and $\gamma_1, \gamma_2 < k$. We have: $$\left(b_{\gamma_1}^{[B]}\right)^{[B]}_{\gamma_2} = b_{\min(\gamma_1, \gamma_2)}^{[B]}$$ **Proof:** To show this, the simplest way is to consider the redistribution process in the case of two consecutive transformations with thresholds $\gamma_1$ and $\gamma_2$, and in the case of a single transformation with the threshold $\min(\gamma_1, \gamma_2)$. Then, it is sufficient to check that the redistribution process in these two scenarios leads to the same results. To do so, it is sufficient to analyze the critical case of the mass attributed to a set of cardinality $> \gamma_1$ when $\gamma_1 > \gamma_2$.
So, let us consider $A$ a subset of $\Omega$ with $|A| > \gamma_1$. In both scenarios, $m(A)$ is eventually redistributed to subsets of cardinality $\leq \gamma_2$. Let us call $B$ any subset of $\Omega$ such that $\gamma_2 < |B| \leq \gamma_1$, and $C$ any subset with $|C| \leq \gamma_2$. In the first scenario, a single transform ($\gamma = \gamma_2$) is used. Each $C \subseteq \Omega$ with $|C| \leq \gamma_2$ receives directly a number of parts of $m(A)$ which is, by definition, proportional to $|C|$: $H^A_{\gamma_2}(C) \propto |C|$. In the second scenario, two transforms (first $\gamma = \gamma_1$, and then, $\gamma = \gamma_2$) are used. After the first transform, the sets $C$ and $B$ receive some part of $m(A)$. Then, after the second transform, the mass of the sets $B$ is redistributed to the sets $C$. As the $B$ have received some part of $m(A)$ after the first transform, these parts of $m(A)$ are redistributed to $C$ after the second transform. Thus, $C$-type sets receive directly some mass from $A$ (first transform) but also receive indirectly some mass from $A$ that has transited via the sets $B$. If we note $H^{A \rightarrow B}_{\gamma_1,\gamma_2}(C)$ the mass that has transited from $A$, via $B$, to $C$, we have that: $$H^{A \rightarrow B}_{\gamma_1,\gamma_2}(C) \propto |B|.$$ This can be verified as, first we have $H^A_{\gamma_1}(B) \propto |B|$, and then, for each $B$, $H^A_{\gamma_1}(B)$ is shared and redistributed in a manner $\propto |C|$, which explains the previous equation. Hence, $C$ receives from $A$ the mass: $$\left( \sum_{B} H^{A \rightarrow B}_{\gamma_1,\gamma_2}(C) + H^A_{\gamma_1}(C) \right) \propto |C|.$$ Finally, it is easy to check that, whatever the scenario, $C$-type sets receive all the mass initially associated with $A$, so that it is shared among such $C$’s in a manner proportional to their cardinality. As $m(A)$ and the sum of the cardinalities of the sets $C$ are determined once and for all, both scenarios lead to the same mass redistribution. □
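Proposition 2 also lends itself to a direct numerical check. The sketch below (our illustration under our reading of Equation (2), not code from the paper) implements the generalized transform for mass functions encoded as dicts of frozensets, with an arbitrary example, and verifies that applying $\gamma_1 = 2$ and then $\gamma_2 = 1$ coincides with applying $\gamma = 1$ directly:

```python
from itertools import combinations
from math import comb

def n_prime(a, gamma):
    # N'(a, gamma): number of subsets of an a-element set of size <= gamma,
    # each subset weighted by its cardinality
    return sum(comb(a, j) * j for j in range(1, gamma + 1))

def pignistic_gamma(m, gamma):
    """Generalized pignistic transform (Equation (2)): each m(A) with
    |A| > gamma is split into N'(|A|, gamma) parts, redistributed to the
    subsets B of A with |B| <= gamma, proportionally to |B|."""
    out = {}
    for focal, mass in m.items():
        if len(focal) <= gamma:
            out[focal] = out.get(focal, 0.0) + mass
        else:
            denom = n_prime(len(focal), gamma)
            for size in range(1, gamma + 1):
                for sub in combinations(sorted(focal), size):
                    B = frozenset(sub)
                    out[B] = out.get(B, 0.0) + mass * size / denom
    return out

# Arbitrary 3-additive example on Omega = {x, y, z}
m = {frozenset({'x'}): 0.1,
     frozenset({'x', 'y'}): 0.3,
     frozenset({'x', 'y', 'z'}): 0.6}
iterated = pignistic_gamma(pignistic_gamma(m, 2), 1)   # (b_2)_1
direct = pignistic_gamma(m, 1)                         # b_min(2,1) = b_1
```

On this example both computations yield the same Bayesian mass function, as the proposition predicts; with $\gamma = 1$ the transform also reduces to the ordinary pignistic one.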
**Corollary 1:** Let $b$ be a $k$-additive belief function. It is possible to compute in a recursive manner all the elements of $\mathcal{PBF}[b]$, starting from $b = b^{[B]}_k$ and finishing with $b^{[B]}_1 = b^{[S]}$, using decreasing values for the hesitation threshold. Now we can study the dominance properties of $\mathcal{PBF}[b]$. **Proposition 3:** Let $b$ be a $k$-additive belief function (possibly $k = |\Omega|$), and let $\gamma < k$. We have that: $$b \ll b^{[B]}_\gamma.$$ **Proof:** We need to show that, $\forall A \subseteq \Omega$, $b(A) \leq b^{[B]}_\gamma(A)$. By definition, $b(A) = \sum_{B \subseteq A} m(B)$ and $b^{[B]}_\gamma(A) = \sum_{B \subseteq A} m^{[B]}_\gamma(B)$. Moreover, by Equation (2), one has that: $$H_\gamma(B) = m^{[B]}_\gamma(B) - m(B) \geq 0 \text{ if } |B| \leq \gamma,$$ as the terms $H_\gamma(B)$ correspond to some mass inherited from focal sets of cardinality $> \gamma$, redistributed to focal sets of cardinality $\leq \gamma$. Now: - If $|A| \leq \gamma$, then, $b^{[B]}_\gamma(A) - b(A) = \sum_{B \subseteq A} H_\gamma(B) \geq 0$. - If $|A| > \gamma$, then, $$b^{[B]}_\gamma(A) = \sum_{B \subseteq A, |B| \leq \gamma} m^{[B]}_\gamma(B) + \underbrace{\sum_{B \subseteq A, |B| > \gamma} m^{[B]}_\gamma(B)}_{= 0} = \sum_{B \subseteq A, |B| \leq \gamma} (m(B) + H_\gamma(B)).$$ According to the previous notation (3), it is possible to decompose $H_\gamma(B)$ with respect to the origin of the mass received by $B$ from all $C \subseteq \Omega$ s.t. $|C| > \gamma$.
Some of them are included in $A$, some others are not: $$H_\gamma(B) = \sum_{C \subseteq A, |C| > \gamma} H^C_\gamma(B) + \sum_{C \not\subseteq A, |C| > \gamma} H^C_\gamma(B)$$ so that $$b^{[B]}_\gamma(A) = \sum_{B \subseteq A, |B| \leq \gamma} m(B) + \sum_{B \subseteq A} \sum_{C \subseteq A, |B| \leq \gamma, |C| > \gamma} H^C_\gamma(B) + \sum_{B \subseteq A} \sum_{C \not\subseteq A, |B| \leq \gamma, |C| > \gamma} H^C_\gamma(B).$$ Now we can notice that: $$\sum_{B \subseteq A} \sum_{C \subseteq A, |B| \leq \gamma, |C| > \gamma} H^C_\gamma(B) = \sum_{B \subseteq A, |B| > \gamma} m(B),$$ as the mass associated to subsets of $A$ with cardinality $> \gamma$ is redistributed to the subsets of $A$ with cardinality $\leq \gamma$. Thus, $$b^{[B]}_\gamma(A) = \sum_{B \subseteq A, |B| \leq \gamma} m(B) + \sum_{B \subseteq A, |B| > \gamma} m(B) + \underbrace{\sum_{B \subseteq A} \sum_{C \not\subseteq A, |B| \leq \gamma, |C| > \gamma} H^C_\gamma(B)}_{\geq 0},$$ i.e., $b^{[B]}_\gamma(A) \geq b(A)$, and $b \ll b^{[B]}_\gamma$. □ Let us now summarize in a single theorem all the results on the set of pignistic $k$-additive belief functions, as well as the consequences of these results: **Theorem 1:** Let $b$ be a $k$-additive belief function. The set $\mathcal{PBF}[b]$ of pignistic $k$-additive belief functions has the following properties: - when $\gamma = k$ the transform is idle, as $b = b^{[B]}_k$, while when $\gamma = 1$, we obtain the Shapley value: $b^{[B]}_1 = b^{[S]}$; - $\forall \gamma \leq k$, $b^{[B]}_\gamma$ is a $\gamma$-additive belief function; - $\forall \gamma \leq k$, $b^{[B]}_\gamma$ dominates $b$; - as a consequence, given $\gamma \leq k$, there exists a unique pignistic $\gamma$-additive belief function which dominates $b$; - $\forall \gamma_2 < \gamma_1 \leq k$, $b^{[B]}_{\gamma_1} \ll b^{[B]}_{\gamma_2}$.
In particular, we have that $$b = b^{[B]}_k \ll b^{[B]}_{k-1} \ll \cdots \ll b^{[B]}_2 \ll b^{[B]}_1 = b^{[S]};$$ - $\forall \gamma \leq k$, $\mathcal{PBF}[b^{[B]}_\gamma] \subseteq \mathcal{PBF}[b]$. IV. THE POLYTOPE OF $k$-ADDITIVE DOMINATING BFS Now, let us turn to the polytope $B_k[b]$. Proposition 1 states that the polytope of dominating probabilities (1-additive belief functions) $\mathcal{P}[b]$ has vertices associated with permutations of the list of elements of $\Omega$. This suggests that the set of dominating $k$-additive belief functions could have a similar form, with each vertex associated with a permutation of the list of focal elements of size smaller than or equal to $k$. Let us denote by $\mathfrak{P}^k(\Omega) = \{ A \subseteq \Omega, |A| \leq k \}$ the set of subsets of $\Omega$ of cardinality smaller than or equal to $k$. **Conjecture 1:** Given a belief function $b : \mathfrak{P}(\Omega) \rightarrow [0, 1]$, with mass function $m$, the region $B_k[b]$ of all the $k$-additive belief functions on $\Omega$ which dominate $b$ according to order relation (4) is the polytope: $$B_k[b] = Cl(b^\rho[b] \ \forall \rho),$$ where $\rho$ is any permutation $\{A_{\rho(1)}, \ldots, A_{\rho(|\mathfrak{P}^k(\Omega)|)}\}$ of the focal elements of $\Omega$ of size at most $k$, and the vertex $b^\rho[b]$ is the $k$-additive belief function with the following mass function: $$m^\rho[b](A_{\rho(i)}) = \sum_{B \supseteq A_{\rho(i)}; \, B \not\supseteq A_{\rho(j)} \ \forall j < i} m(B). \quad (7)$$ We illustrate the plausibility of this conjecture on a simple example. A.
A toy example: the binary case In the case of a binary frame $\Omega = \{x, y\}$ the list of focal elements of size at most $k = 2$ obviously reads as $\mathfrak{P}^2(\Omega) = \{\{x\}, \{y\}, \{x, y\}\}$, so that the possible permutations of such a list are six: $$\begin{align*} \rho_1 &= (\{x\}, \{y\}, \Omega) & \rho_2 &= (\{x\}, \Omega, \{y\}) \\ \rho_3 &= (\{y\}, \{x\}, \Omega) & \rho_4 &= (\{y\}, \Omega, \{x\}) \\ \rho_5 &= (\Omega, \{x\}, \{y\}) & \rho_6 &= (\Omega, \{y\}, \{x\}). \end{align*}$$ According to our conjecture on the nature of the vertices of the polytope of $k$-additive dominating belief functions (Equation (7)), both of the permutations in each row above generate the same 2-additive belief function. Namely, having denoted\footnote{With a slight abuse of notation we denote by $m(x)$, $b(x)$ instead of $m(\{x\})$, $b(\{x\})$ the values of the set functions of interest on a singleton $x \in \Omega$.} by $\bar{m} = [m(x), m(y), m(\Omega)]'$ the vector encoding the basic probability assignment of a belief function, the above pairs of permutations generate the following vertices: $$\begin{align*} \rho_1, \rho_2 &\rightarrow [m(x) + m(\Omega), m(y), 0]' \\ \rho_3, \rho_4 &\rightarrow [m(x), m(y) + m(\Omega), 0]' \\ \rho_5, \rho_6 &\rightarrow [m(x), m(y), m(\Omega)]'. \end{align*} \quad (8)$$ B. Geometry of $B_2[b]$ in the binary case Given a frame of discernment $\Omega$, a belief function $b : 2^\Omega \rightarrow [0, 1]$ is completely specified by its $N - 2$ belief values $\{b(A), \emptyset \subsetneq A \subsetneq \Omega\}$, $N = 2^{|\Omega|}$, and can be represented as a vector with $N - 2$ entries, i.e., a point of $\mathbb{R}^{N - 2}$ [15]. The set $B$ of points of $\mathbb{R}^{N - 2}$ which correspond to a belief function is called *belief space*.
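The binary-case vertices (8) can be reproduced mechanically from the conjectured formula (7). The following sketch (our own illustration, not code from the paper, with arbitrary masses encoded as frozensets) computes the vertex for two permutations of the same row and confirms that they yield the same 2-additive belief function:

```python
def vertex_k(m, order):
    """Conjectured vertex m^rho[b] of B_k[b] (Equation (7)): the i-th set in
    the permutation receives the mass of every focal set B containing it but
    containing none of the sets that precede it in the permutation."""
    out = {}
    for i, A in enumerate(order):
        out[A] = sum(mass for B, mass in m.items()
                     if B >= A and not any(B >= C for C in order[:i]))
    return out

# Binary frame Omega = {x, y} with arbitrary masses m(x), m(y), m(Omega)
x, y = frozenset({'x'}), frozenset({'y'})
omega = frozenset({'x', 'y'})
m = {x: 0.2, y: 0.3, omega: 0.5}

rho1 = (x, y, omega)       # rho_1 in the text
rho2 = (x, omega, y)       # rho_2 in the text
v1, v2 = vertex_k(m, rho1), vertex_k(m, rho2)
# both give [m(x) + m(Omega), m(y), 0]', the first vertex in (8)
```

The same check on the remaining permutations reproduces the other two rows of (8).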
If we denote by $b_A$ the *categorical* [2] belief function assigning all the mass to a single subset $A \subseteq \Omega$ ($m_{b_A}(A) = 1$, $m_{b_A}(B) = 0$ for all $B \neq A$), the belief space $B$ is a *simplex*, and each belief function $b \in B$ can be written as a convex sum of the vectors $b_A$ representing the categorical belief functions as: $$b = \sum_{\emptyset \subsetneq A \subseteq \Omega} m_b(A) \cdot b_A. \quad (9)$$ Figure 1 depicts the belief space and the polytope $B_2[b]$ of the 2-additive belief functions dominating a given belief function $b$ for a frame $\Omega$ of cardinality 2. Here each b.f. is a vector $b = [m(x), m(y)]'$ and $B = Cl(b_x, b_y, b_\Omega)$. As can be appreciated, the last vertex in (8) of $B_2[b]$ corresponds to the original belief function $b$, while the first two are nothing but the vertices of the set $\mathcal{P}[b] = B_1[b]$ of dominating probabilities. [Figure 1: The polytope $B_2[b]$ of the 2-additive belief functions dominating a given belief function $b$ defined on a frame of size 2. The vertices of such polytope meet the conjectured form (7), and are given by the basic probability assignments of Equation (8).] We can notice two facts: on one side, the conjecture seems confirmed by the analysis of the binary case $k = 2$. On the other side, unlike the case of dominating probabilities, there is no 1-1 correspondence between the vertices of the polytope and the permutations of focal elements, as each vertex is produced by two different permutations. However, all vertices are associated with the same number of permutations. It is sensible to conjecture that this holds in the general case too. **Conjecture 2:** All the vertices of $B_k[b]$ are associated with the same number of permutations of $\mathfrak{P}^k(\Omega)$. This allows us to deal with the computation of the center of mass of $B_k[b]$ in a straightforward manner. V.
THE BARYCENTER OF THE SET OF $k$-ADDITIVE DOMINATING BELIEF FUNCTIONS We first go through the (already known) computation of the barycenter of the polytope $\mathcal{P}[b]$ of dominating probabilities (1-additive belief functions), in a way that can be generalized to $k$-additive b.f.s. Then, we will move to that of $\mathcal{B}_k[b]$, following a similar proof. A. The barycenter of dominating probabilities If we use the shorthand notation $\# \rho$ for the cardinality of the set of the permutations $\rho$ of $\Omega$, the center of mass $\overline{\mathcal{P}[b]}$ of $\mathcal{P}[b]$ is given by $$\sum_{\rho} \frac{p^{\rho}[b]}{\# \rho}$$ which, by Equation (6), corresponds to a Bayesian mass function which assigns to any singleton $\{x\}$ the value $$\sum_{B \supseteq \{x\}} m(B) \frac{\# \rho : \forall x' <_{\rho} x : B \not\ni x'}{\# \rho}.$$ To simplify this expression, we need to compute, for each focal element $B \supseteq \{x\}$, the number of permutations $\rho$ of $\Omega$ such that $B$ does not include any singleton $x'$ which comes before $x$ ($x' <_{\rho} x$) in the associated list $\{x_{\rho(1)}, \ldots, x_{\rho(|\Omega|)}\}$. For all possible positions of $x$ in the list, the permutation must be such that all elements before $x$ are extracted from $B^c$, the complement of $B$. In any admissible permutation, $x$ has to appear in one of the first $|\Omega| - |B| + 1$ locations (as otherwise some other elements of $B$ would come before $x$ in the list). For each position $i$ of $x$, the number of admissible permutations is given by the possible dispositions $\frac{(|\Omega| - |B|)!}{[(|\Omega| - |B|) - (i - 1)]!}$ of the $(|\Omega| - |B|)$ elements of $B^c$ over the $i - 1$ locations before $x$, multiplied by the number $(|\Omega| - i)!$ of permutations of the remaining $|\Omega| - i$ singletons, which can appear after $x$ in any order.
Then, $\overline{\mathcal{P}[b]}$ is given by a mass function which assigns to $\{x\}$ the value: $$\sum_{B \supseteq \{x\}} m(B) \sum_{i=1}^{|\Omega| - |B| + 1} \frac{(|\Omega| - |B|)!}{[(|\Omega| - |B|) - (i - 1)]!} \frac{(|\Omega| - i)!}{|\Omega|!}.$$ We can further simplify the multiplicative coefficient of $m(B)$ in the above expression. Since $(|\Omega| - |B|) - (i - 1) = (|\Omega| - i) - (|B| - 1)$, we have $$\sum_{i=1}^{|\Omega| - |B| + 1} \frac{(|\Omega| - |B|)!}{[(|\Omega| - |B|) - (i - 1)]!} \frac{(|\Omega| - i)!}{|\Omega|!} = \frac{(|\Omega| - |B|)!}{|\Omega|!} \sum_{i=1}^{|\Omega| - |B| + 1} \frac{(|\Omega| - i)!}{[(|\Omega| - i) - (|B| - 1)]!} = \frac{(|\Omega| - |B|)! \, (|B| - 1)!}{|\Omega|!} \sum_{i=1}^{|\Omega| - |B| + 1} \binom{|\Omega| - i}{|B| - 1},$$ which, after recalling that $$\sum_{i=1}^{|\Omega| - |B| + 1} \binom{|\Omega| - i}{|B| - 1} = \binom{|\Omega|}{|B|},$$ becomes $$\frac{(|\Omega| - |B|)! \, (|B| - 1)!}{|\Omega|!} \binom{|\Omega|}{|B|} = \frac{(|B| - 1)!}{|B|!} = \frac{1}{|B|}.$$ As a consequence, $\overline{\mathcal{P}[b]}$ corresponds to the pignistic probability $BetP[b]$ [2], as: $$\sum_{B \supseteq \{x\}} \frac{m(B)}{|B|} = BetP[b](x). \quad (10)$$ B. The case of dominating $k$-additive belief functions The proof of the analytical form of the center of mass $\overline{\mathcal{B}_k[b]}$ of $\mathcal{B}_k[b]$ follows the one given for the barycenter $\overline{\mathcal{P}[b]} = BetP[b]$ of $\mathcal{P}[b]$. Let us denote by $$\mathcal{M}(|\Omega|, k) \triangleq \sum_{i=1}^{k} \binom{|\Omega|}{i}$$ the number of non-empty subsets of $\Omega$ of size at most $k$.
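The pignistic redistribution of Equation (10) can be sketched in a few lines (an illustration, not code from the paper; masses are stored as a dictionary from focal sets to values):

```python
def pignistic(m, frame):
    """Pignistic probability BetP (Eq. 10): each focal element B shares
    its mass m(B) equally among its |B| singletons."""
    return {x: sum(v / len(B) for B, v in m.items() if x in B) for x in frame}

frame = ('x', 'y', 'z')
m = {frozenset('x'): 0.5, frozenset('xy'): 0.2, frozenset('xyz'): 0.3}
betp = pignistic(m, frame)
# betp['x'] = 0.5 + 0.2/2 + 0.3/3, betp['y'] = 0.2/2 + 0.3/3, betp['z'] = 0.3/3
```

Note that the resulting values always sum to one, since each focal mass is fully redistributed.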
Note that $$\mathcal{M}(a, b) \neq \mathcal{N}(a, b) = \sum_{i=1}^{b} \binom{a}{i} \cdot i,$$ as in the definition of $\mathcal{N}$ the contribution of each focal set is weighted by its cardinality. Under the assumption that Conjectures 1 and 2 are true, the barycenter of $\mathcal{B}_k[b]$ is $$\sum_{\rho} \frac{b^{\rho}[b]}{\# \rho}.$$ By Equation (6) this corresponds to a mass function which assigns to each focal set $A: |A| \leq k$ the value: $$\sum_{B \supseteq A} m(B) \frac{\# \rho : \forall A' <_{\rho} A : B \not\supseteq A'}{\# \rho}. \quad (11)$$ Again the coefficient of $m(B)$ in the above equation is proportional to the number of permutations $\rho$ of $\mathfrak{P}^k(\Omega)$ such that $B$ does not contain any element of $\mathfrak{P}^k(\Omega)$ that comes before $A$ in the permutation. In the present case, there are $\mathcal{M}(|\Omega|, k)$ elements in $\mathfrak{P}^k(\Omega)$. Of these, $\mathcal{M}(|\Omega|, k) - \mathcal{M}(|B|, k)$ are not included in $B$. Let $l = |B|$. As before, for each position $i$ of $A$, the number of admissible permutations is given by the possible dispositions $$\frac{(\mathcal{M}(n, k) - \mathcal{M}(l, k))!}{[(\mathcal{M}(n, k) - \mathcal{M}(l, k)) - (i - 1)]!}$$ of the $\mathcal{M}(n, k) - \mathcal{M}(l, k)$ subsets of size $\leq k$ which are not included in $B$ over $i - 1$ locations (the elements of the list before $A$), multiplied by the number $(\mathcal{M}(n, k) - i)!$ of permutations of the remaining $\mathcal{M}(n, k) - i$ elements of $\mathfrak{P}^k(\Omega)$, which can appear after $A$ in any order. The same derivations of Section V-A then hold for the case of dominating $k$-additive belief functions too, once we replace $|\Omega|$ with $\mathcal{M}(|\Omega|, k)$ and $|B|$ with $\mathcal{M}(|B|, k)$. Therefore, the multiplicative coefficient of $m(B)$ in Equation (11) turns out to be $\frac{1}{\mathcal{M}(|B|, k)}$.
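Assuming Conjectures 1 and 2, the resulting mass redistribution can be sketched numerically (an illustration only; `barycenter_mass` is a hypothetical helper following the notation above, with $\binom{l}{i} = 0$ for $i > l$ handled by `math.comb`):

```python
from itertools import combinations
from math import comb

def M(l, k):
    """M(l, k): number of non-empty subsets of size at most k of an l-set."""
    return sum(comb(l, i) for i in range(1, k + 1))

def barycenter_mass(m, k):
    """Candidate barycenter mass: the mass m(B) of each focal element B is
    re-assigned equally to its M(|B|, k) non-empty subsets of size <= k."""
    out = {}
    for B, v in m.items():
        for r in range(1, min(k, len(B)) + 1):
            for A in combinations(sorted(B), r):
                A = frozenset(A)
                out[A] = out.get(A, 0.0) + v / M(len(B), k)
    return out

m = {frozenset('xy'): 0.6, frozenset('xyz'): 0.4}
m1 = barycenter_mass(m, 1)   # k = 1: reduces to the pignistic probability
m2 = barycenter_mass(m, 2)   # k = 2: masses shared among subsets of size <= 2
```

For `k = 1` each singleton $x$ receives $\sum_{B \ni x} m(B)/|B|$, recovering Equation (10).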
**Theorem 2.** If Conjectures 1 and 2 hold, given a belief function \( b : 2^{\Omega} \rightarrow [0, 1] \) with mass function \( m \), the center of mass \( \overline{\mathcal{B}_k[b]} \) of the simplex \( \mathcal{B}_k[b] \) of \( k \)-additive belief functions dominating \( b \) is given by the mass function \( m_k^{[C]} \) which reads \[ m_k^{[C]}(A) = \sum_{B \supseteq A} m(B) \frac{1}{\mathcal{M}(|B|, k)} \quad \forall A \in \mathfrak{P}^k(\Omega), \qquad m_k^{[C]}(A) = 0 \quad \text{otherwise}, \quad (12) \] where \( \mathcal{M}(|B|, k) = |\mathfrak{P}^k(B)| \) is the number of non-empty subsets of \( B \) of size at most \( k \). At this point, let us stress two important facts about \( m_k^{[C]} \). First, as expected, for \( k = 1 \) the expression (12) reduces to that of the pignistic function (10), since \( \mathcal{M}(|B|, 1) = |B| \). Moreover, the interpretation of the barycenter (12) of the set of \( k \)-additive dominating belief functions is straightforward. As the pignistic function is the result of a redistribution process in which the mass of each focal element is re-assigned on an equal basis among its elements (size-1 subsets), Equation (12) represents an analogous redistribution process in which the mass of each focal element is re-assigned on an equal basis to each of its subsets of size \( \leq k \). **VI. Discussion** Several questions arise on the barycenters \( \overline{\mathcal{B}_k[b]}, \forall k \leq |\Omega| \), on the set \( \mathcal{PBF}[b] \), and on the interplay of these two sets of belief functions, or equivalently, on the respective locations of these two sets of vectors in the belief space [15]. First, it is interesting to consider the location of all the centers of mass \( \overline{\mathcal{B}_k[b]}, \forall k \leq |\Omega| \), with respect to one another, knowing that their coordinates in the belief space [15] are given by the mass functions \( m_k^{[C]}, \forall k \leq |\Omega| \). Of course, the question is “Are they all located on the line joining \( b \) and \( b^{[S]} \)?”.
We tend to think they are. In addition, beyond their geometrical interpretation, the question of the semantics of the barycenters \( m_k^{[C]} \) of the \( k \)-additive dominating b.f.s arises. Is it worthwhile to use them for decision making, as another possible generalization of the pignistic probability? If yes, to what kind of behaviour do they correspond? Concerning the \( k \)-additive pignistic transforms \( b_k^{[B]}, \forall k \leq |\Omega| \), we can ask ourselves: - what is the distance between the elements of \( \mathcal{PBF}[b] \) and \( b^{[S]} \)? We know that \( b_1^{[B]} = b^{[S]} \), and we can conjecture that \( \| b_{k_1}^{[B]} - b^{[S]} \| \geq \| b_{k_2}^{[B]} - b^{[S]} \| \) iff \( k_1 > k_2 \); - what is the nature of the difference \( b_k^{[B]} - b^{[S]} \) as a function of \( k \) and \( |\Omega| \)? The most interesting question, possibly, regards the characterization of the difference vector joining, in the belief space, the \( k \)-th element of \( \mathcal{PBF}[b] \) and the corresponding barycenter \( \overline{\mathcal{B}_k[b]} \) of \( \mathcal{B}_k[b] \). In the belief space, the coordinates of a vector representing a b.f. are given by its basic probability assignment \( m_b \). Such a difference vector will therefore be expressed as: \[ \sum_{A \in \mathfrak{P}^k(\Omega)} \big( m_k^{[B]}(A) - m_k^{[C]}(A) \big) \cdot b_A. \] The study of this difference is likely to shed some light on the nature of the two different redistribution processes generating \( m_k^{[B]} \) and \( m_k^{[C]} \), and will be pursued in the near future. **VII. Conclusions** In this paper, we investigated some dominance properties of the set of pignistic \( k \)-additive belief functions. In parallel, we proposed two natural conjectures on the set of dominating \( k \)-additive belief functions, inspired by the case of dominating probabilities. Surprisingly, the associated barycenter has a very simple and elegant analytical form in terms of degrees of belief and mass redistribution.
This led to the definition of another, “geometrical” set of pignistic \( k \)-additive belief functions. A number of questions on the interplay of these two sets of functions in the polytope of \( k \)-additive belief functions naturally arise and need to be answered in the near future. The next natural step along this line of research will be the formal proof of the two conjectures, following the intuition provided by the case of dominating probabilities. **References** [1] M. Grabisch, “K-order additive discrete fuzzy measures and their representation,” *Fuzzy Sets and Systems*, vol. 92, pp. 167–189, 1997. [2] P. Smets and R. Kennes, “The transferable belief model,” *Artificial Intelligence*, vol. 66, no. 2, pp. 143–174, 1994. [3] M. Dubois, “On transformations of belief functions to probabilities,” *International Journal of Intelligent Systems, special issue on Uncertainty Processing*. [4] T. Burger and A. Caplier, “A generalization of the pignistic transform for partial belief,” in *Proceedings of ECSQARU’2009, Verona, Italy*, July 2009, pp. 252–263. [5] O. Aran, T. Burger, A. Caplier, and L. Akarun, “A belief-based sequential fusion approach for fusing manual and non-manual signs,” *Pattern Recognition*, vol. 42, no. 5, pp. 812–822, May 2009. [6] P. Smets, “Belief functions: The disjunctive rule of combination and the generalized pignistic theorem,” *International Journal of Approximate Reasoning*, vol. 9, pp. 1–35, 1989. [7] T. Denoeux, “A new justification of the unnormalized Dempster’s rule of combination from the Least Commitment Principle,” in *Proceedings of FLAIRS’08, Special Track on Uncertain Reasoning*, 2008. [8] D. R. Aggar, “The Dempster-Shafer principle: Dempster-Shafer granules,” *International Journal of Intelligent Systems*, vol. 1, pp. 247–262, 1986. [9] D. Dubois and H. Prade, “A set-theoretic view of belief functions: logical operations and approximations by fuzzy sets,” *Int. J. of General Systems*, vol. 12, pp. 193–226, 1986. [10] T.
Denoeux, “Conjunctive and disjunctive combination of belief functions induced by non distinct bodies of evidence,” *Artificial Intelligence*, 2007. [11] F. Cuzzolin, “On the credal structure of consistent probabilities,” in *Logics in Artificial Intelligence*. Springer Berlin / Heidelberg, 2008, vol. 5293/2008, pp. 31–49. [12] A. Chateauneuf and J.-Y. Jaffray, “Some characterizations of lower probabilities and other monotone capacities through the use of Möbius inversion,” *Mathematical Social Sciences*, vol. 17, pp. 263–283, 1989. [13] P. Miranda, M. Grabisch, and P. Gil, “Dominance of capacities by \( k \)-additive belief functions,” *European Journal of Operational Research*, vol. 175, pp. 912–921, 2006. [14] P. Walley, “Towards a general theory of imprecise probability,” *International Journal of Approximate Reasoning*, vol. 24, pp. 125–148, 2000. [15] F. Cuzzolin, “A geometric approach to the theory of evidence,” *IEEE Trans. Systems, Man, and Cybernetics C (in press)*, vol. 38, no. 3, May 2008.
Total revenue collections for the FY 2016/17 amounted to UGX 88,894,496,280 against the target of UGX 112,699,000,000, registering a performance of 79% and a deficit of UGX 23,804,503,720. This compared to the FY 2015/16, **REVENUE ADMINISTRATION** **Taxpayer education and sensitization** Carried out public sensitization and awareness campaigns - Organized and held 159 sensitizations/barazas/workshops/engagements, which attracted 13,092 participants. Out of the 159 sensitizations, 12 were professional targeted engagements, 25 were local council driven, 12 were on request of councilors and area Members of Parliament, 30 were upon the request of the Division leadership, and 79 were general planned taxpayer sensitizations. - Held 14 radio talk shows and 4 TV talk shows. Vote: 122 Kampala Capital City Authority - Organized and held 3 press conferences. - 205,868 bulk SMS were sent out to the different taxpayers regarding different issues relating to revenue collection and administration. **Tax Audits** Conducted a total of 98 tax audits; the collectible revenue identified was UGX 1,503,967,773, of which UGX 383,043,790 was realized during the period. **Enforcements** - 32,476 shops were sealed for trading without a valid trade license, and UGX 2.9 Bn was realized. - 1,921 properties were enforced on for defaulting on property rates, and UGX 1.6 Bn was realized. - 15,256 taxis were impounded while others were clamped for non-compliance with commercial road user fees, realizing UGX 2.5 Bn. **Revenue Modernization Project (CAM/CAMV)** Revenue collection is now administered on eCitie (the KCCA online payment platform) while the automation of the remaining revenue sources is ongoing. **Mass Valuation of Properties** - Completed the valuation exercise for 15,018 properties in the Central Division with a ratable value of UGX 359,589,505,733. - As at 30th June 2017, 42,998 properties had been inspected in Central and Nakawa Divisions.
Out of these, 31,054 were uploaded onto the system and 13,419 properties were quality assured. Data collection in Nakawa Division is still ongoing. **City Address Model (CAM)** The City Address Model has been developed using the Geographic Information System (GIS). This system captures, stores, analyses, manages, and presents data that are linked to location(s). Performance as of BFP FY 2017/18: UGX 16,890,795,572 was collected against a target of UGX 27,324,082,716, representing 61.8% and registering a deficit of UGX 10,433,287,144. A total of 56,728 properties had been inspected under CAMV (computer aided mass valuation) during the period; out of these, 26,672 had been uploaded and 14,415 were quality assured in Nakawa Division. A total of 6,399 properties in Nakawa were assessed with a ratable value of UGX 24,617,586,819. The trading license, local service tax, local hotel tax, property and ground rent revenue registers were updated on a regular basis. A total of thirty (30) sensitizations were conducted during the quarter and 1,725 people directly attended these sensitizations. These sensitizations were geared towards popularizing CAM/CAMV activities in Nakawa, enhancing revenue collections and, in particular, sensitization on the Trade Licensing (Amendment) Act 2015. Seventeen (17) audits were completed during the quarter and the total amount of revenue identified from the completed audits was UGX 112,275,416. FY 2018/19 Planned Outputs - Collection of UGX 116 Bn as target for NTR for FY 2018/19 - General revenue collection and administration - Taxpayer Registration Expansion Project (TREP) activities - Property valuation exercise - Develop the system for computer aided mass valuation of properties - Office tools, computers and equipment - Enhancement of revenue/tax compliance through audits and taxpayer sensitization - Procurement of accountable stationery and office tools.
- Enhance staff competencies through reskilling and training Medium Term Plans - Enhancing mobilization of local revenue - Development partner financing - Promoting alternative financing mechanisms - Public Private Partnerships - Kampala City Bond - The Kampala Development Corporation - Kampala Development Foundation - Kampala City Lottery Efficiency of Vote Budget Allocations GoU UGX 0.43 Bn, NTR UGX 3.17 Bn; total allocation UGX 3.6 Bn to deliver the outputs proposed above. Vote Investment Plans N/A Major Expenditure Allocations in the Vote for FY 2018/19 - Finalize property valuation - Full automation of the revenue administrative processes - Developing operational guidelines for revenue administration - Strengthening partnerships with other agencies and interagency systems, i.e. KDLB - Enhancing staff competencies through reskilling and training - Tax compliance programs - Review of regulatory framework and alternative tax sources V3: PROGRAMME OUTCOMES, OUTCOME INDICATORS AND PROPOSED BUDGET ALLOCATION Table V3.1: Programme Outcome and Outcome Indicators | Vote Controller : | | |-------------------|---| | Programme | 09 Revenue collection and mobilisation | | Programme Objective | To mobilize funds that will ensure service delivery for the different activities in the City. | | Responsible Officer | Director Revenue Collection | Programme Outcome: Efficiency in the collection and management of public resources to ensure value for money in service delivery. Sector Outcomes contributed to by the Programme Outcome 1.
Value for money in the management of public resources

| Programme Performance Indicators (Output) | 2016/17 Actual | 2017/18 Target | Base year | Baseline | 2018/19 Target | 2019/20 Target | 2020/21 Target |
|---|---|---|---|---|---|---|---|
| • Number | 0 | | | | 116,613,000,000 | 122,613,000,000 | 125,766,000,000 |

Table V3.2: Past Expenditure Outturns and Medium Term Projections by Programme

| Billion Uganda shillings | 2016/17 Outturn | 2017/18 Approved Budget | 2017/18 Spent By End Q1 | 2018-19 Proposed Budget | 2019-20 (MTEF) | 2020-21 (MTEF) | 2021-22 (MTEF) | 2022-23 (MTEF) |
|---|---|---|---|---|---|---|---|---|
| Vote: 122 Kampala Capital City Authority | | | | | | | | |
| 09 Revenue collection and mobilisation | 0.420 | 0.434 | 0.010 | 0.434 | 0.529 | 0.609 | 0.730 | 0.876 |
| Total for the Vote | 0.420 | 0.434 | 0.010 | 0.434 | 0.529 | 0.609 | 0.730 | 0.876 |

V4: SUBPROGRAMME PAST EXPENDITURE OUTTURN AND PROPOSED BUDGET ALLOCATIONS Table V4.1: Past Expenditure Outturns and Medium Term Projections by SubProgramme

| Billion Uganda shillings | 2016/17 Outturn | FY 2017/18 Approved Budget | FY 2017/18 Spent By End Sep | 2018-19 Proposed Budget | 2019-20 | 2020-21 | 2021-22 | 2022-23 |
|---|---|---|---|---|---|---|---|---|
| Programme: 09 Revenue collection and mobilisation | | | | | | | | |
| 06 Revenue Management | 0.420 | 0.434 | 0.010 | 0.434 | 0.529 | 0.609 | 0.730 | 0.876 |
| Total For the Programme: 09 | 0.420 | 0.434 | 0.010 | 0.434 | 0.529 | 0.609 | 0.730 | 0.876 |

V5: VOTE CHALLENGES FOR 2018/19 AND ADDITIONAL FUNDING REQUESTS Vote Challenges for FY 2018/19 - Limited implementation of the CRUF Instrument 2015; implementation of the Commercial Road User Fees instrument (buses, boda-bodas, trucks, lorries, pickups and other road user types not contributing) has proved cumbersome and the
projected increase in revenues from this sector has not been achieved. This has been further complicated by the recent Presidential directive on streamlining the collection of fees from this sector, which has created total non-compliance even from those who had previously complied. - Setback in the implementation of the Trading License Act amendment 2015; following the amendment to the Trading License Act, which brought professionals under the ambit of business licensing, a number of professional firms have gone to the courts of law to seek redress on account of excessive taxation and have managed to secure an injunction from Court restraining us from collecting the license fee. - Limited involvement of political leadership in revenue administration; revenue administration is greatly aided when the political leadership at the highest level takes a lead role in revenue mobilization. The extent of political involvement has been rather low, although we see some involvement, especially at the level of Urban Division councils. The same needs to be replicated at the Authority level. - Delays in approval of some proposed revenue enhancement proposals; some revenue enhancement proposals, such as the revision in the physical planning fees, have not been approved, yet the corresponding revenues were budgeted and included in the revenue estimates for the FY 2017/18. - Taxpayer apathy; this is partly informed by the perception among taxpayers that they are over-taxed, and sometimes by ignorance; taxpayers sometimes confuse the KCCA levies with those of URA. - Conflict between tax laws and alternative administrative directives and pronouncements; for example, the Presidential directive of 10th November 2008 on promotion and empowerment of market vendors in management and development of markets has not been translated into law. This has created a vacuum where vendors often quote that directive and outrightly refuse to let KCCA manage and collect from the markets.
Such actions have caused vendors in Nakasero Market to forcefully take over the market. - Furthermore, the recent Presidential directive on harmonization of taxi fees and market dues is at present in conflict with existing laws, yet it has already impacted revenue collections from the mentioned revenue sources. - Absence of a clear and harmonized leadership in the commercial transport sector; this has severely constrained reforming the sector, since most of the sector associations hold ulterior motives and are less concerned with streamlining the sector, which complicates revenue administration in the sector. This is manifested through illegal stages and the violent behavior of some operators. - Limited trade order in the city; illegal stages for taxis and boda-bodas, coupled with vending in every corner of the city, impact revenue administration by affecting the compliance behavior of formal businesses, who complain that the activities of the street vendors impact their business and hence are unwilling to settle their obligations. - Public expectation gap (tax payment vs service delivery); this promotes non-compliance. - Inadequate staffing (numbers) and limited tools and equipment for work; revenue administration within our jurisdiction is heavily reliant on staff numbers and equipment such as motor vehicles to facilitate delivery of demand notices, follow-up efforts and enforcement activities. - Delays in carrying out revaluation of properties; this in the past has been due to the cost of revaluation and inadequate records of the previous valuation exercise. This results in slower growth in property tax revenues. - Limited taxpayer compliance; this increases the cost of tax administration since revenue yields can only be sustained through enforcement.
- Limitations in some tax administration laws impede full realization of revenue potential.

| Additional requirements for funding and outputs in 2018/19 | Justification of requirement for additional outputs and funding |
|----------------------------------------------------------|---------------------------------------------------------------|
| **Vote : 122 Kampala Capital City Authority** | |
| **Programme : 09 Revenue collection and mobilisation** | |
| **OutPut : 01 Registers for various revenue sources developed** | Each division has one vehicle attached to revenue collection activities, with a sitting capacity of four (4), which constrains the movement of the over 10 staff at a division at a go. Taxpayers usually comply with increased staff visibility in the field. Transport eases staff mobility and will lead to increased compliance and growth in the register; over 30,000 taxpayers will be registered. An additional three (3) omnibuses of 414 seating capacity put in a pool will deal with the transport need. |
| **OutPut : 02 Local Revenue Collections** | Trade licences and property rates are tax types that require validation and verification, respectively. If the said gadgets are availed, they will reduce forgery and ease identification of properties using our GIS platform. The gadgets will enable staff to access the system while in the field to register payments and confirm payments in real time. This is projected to enhance revenue collection and growth by 5%. |

Funding requirement UShs Bn: 2.400 Funding requirement UShs Bn: 0.600
Stability Analysis of a More General Class of Systems with Delay-Dependent Coefficients Chi Jin *Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec-CNRS-Université Paris Sud, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France*, email@example.com Keqin Gu *Southern Illinois University Edwardsville*, firstname.lastname@example.org Islam Boussaada *Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec-CNRS-Université Paris Sud, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France*, email@example.com Silviu-Iulian Niculescu *IPSA & Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec-CNRS-Université Paris Sud, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France*, firstname.lastname@example.org Recommended Citation Jin, Chi; Gu, Keqin; Boussaada, Islam; and Niculescu, Silviu-Iulian, "Stability Analysis of a More General Class of Systems with Delay-Dependent Coefficients" (2019). SIUE Faculty Research, Scholarship, and Creative Activity. 84. https://spark.siue.edu/siue_fac/84 This Article is brought to you for free and open access by SPARK. It has been accepted for inclusion in SIUE Faculty Research, Scholarship, and Creative Activity by an authorized administrator of SPARK.
Cover Page Footnote Scheduled to be published in May 2019 Stability Analysis of a More General Class of Systems with Delay-Dependent Coefficients Chi Jin, Keqin Gu, IEEE Senior Member, Islam Boussaada, and Silviu-Iulian Niculescu, IEEE Fellow Abstract—This paper presents a systematic method to analyze the stability of systems with a single delay in which the coefficient polynomials of the characteristic equation depend on the delay. Such systems often arise in, for example, life science and engineering. A method to analyze such systems was presented by Beretta and Kuang in a 2002 paper, but under some very restrictive assumptions. This work extends their results to the general case, with the exception of some degenerate cases. It is found that a much richer behavior is possible when the restrictive assumptions are removed. The interval of interest for the delay is partitioned into subintervals so that the magnitude condition generates a fixed number of frequencies as functions of the delay within each subinterval. The crossing conditions are expressed in a general form, and a simplified derivation of the first-order derivative criterion is obtained. Illustrative examples are also presented. Index Terms—Delay Systems, Stability Analysis I. INTRODUCTION The presence of time delay has been widely observed in physical and engineering systems; it is often caused by the finite time needed to transfer material, energy and information. Such systems may be modeled as delay differential equations, and have attracted significant attention from scholars in mathematics, engineering, life science and economics for many years. See [3], [9], [10], [11] for some recent progress. For a linear system with constant coefficients and a single delay or multiple commensurate delays, a number of effective methods have been proposed [1], [2], [5].
The methods are along the line of D-subdivision [6], [7], also known as the $\tau$-decomposition method [17], as the parameter involved in this case is the delay $\tau$. These methods roughly proceed as follows: starting from one value of delay $\tau_0$ for which one knows the number of characteristic roots in the right-half plane (usually $\tau_0 = 0$), one sweeps through an interval $(\tau_0, \tau_N)$ of delays of interest, and identifies all delays $\tau_k$, $k = 1, 2, \ldots, N - 1$, for which there are characteristic roots on the imaginary axis. By identifying the direction in which these roots cross the imaginary axis, the change of the number of right-half plane roots as $\tau$ goes through $\tau_k$ can be determined. Thus, the interval $(\tau_0, \tau_N)$ is divided into subintervals $(\tau_{k-1}, \tau_k)$, and the number of right-half plane roots within each subinterval is constant and can be explicitly determined. In particular, the subintervals of delay for which the system is stable can be identified. There are, however, practical systems in, for example, life science and engineering, for which the coefficients of the system characteristic equation depend on the delay value. For example, in [15], the source and dissipative process of a stellar dynamo is described by the following equations $$ \begin{cases} \dot{B}_\phi(t) &= c_1 e^{-c_2 T_0} A(t - T_0) - c_2 B_\phi(t), \\ \dot{A}(t) &= c_3 e^{-c_2 T_1} B_\phi(t - T_1) - c_2 A(t), \end{cases} $$ where $B_\phi$ is the strength of the toroidal field, $A$ is the strength of the poloidal field, and $c_1$, $c_2$, $c_3$, $T_0$, $T_1$ are positive constants. The characteristic equation of the above system is easily obtained as the following, with delay-dependent coefficients: $$ \lambda^2 + 2c_2 \lambda + c_2^2 - c_1 c_3 e^{-c_2 \tau} e^{-\tau \lambda} = 0, $$ where $\tau = T_0 + T_1$. A model of hematopoietic stem cell dynamics is given in [16]. The model is nonlinear, and possesses two equilibria.
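The characteristic equation of the dynamo model above can be cross-checked numerically: substituting $B_\phi = b\,e^{\lambda t}$, $A = a\,e^{\lambda t}$ yields a $2 \times 2$ characteristic matrix whose determinant should match the quoted characteristic function with $\tau = T_0 + T_1$. The sketch below uses arbitrary illustrative constants (not values from [15]):

```python
import cmath

# Arbitrary illustrative constants; tau = T0 + T1 as in the text.
c1, c2, c3, T0, T1 = 1.3, 0.7, 2.1, 0.4, 0.9
tau = T0 + T1

def det_char_matrix(lam):
    """Determinant of the characteristic matrix of the linearized system."""
    a11 = lam + c2
    a12 = -c1 * cmath.exp(-c2 * T0 - lam * T0)
    a21 = -c3 * cmath.exp(-c2 * T1 - lam * T1)
    a22 = lam + c2
    return a11 * a22 - a12 * a21

def char_fn(lam):
    """The quoted characteristic function with delay-dependent coefficient."""
    return lam**2 + 2*c2*lam + c2**2 - c1*c3*cmath.exp(-c2*tau - tau*lam)
```

Evaluating both functions at any complex $\lambda$ gives the same value up to floating-point error, confirming the reduction of the two-delay system to a single delay $\tau = T_0 + T_1$.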
The linearized equation in the neighborhood of the nonzero equilibrium has the following characteristic equation $$ \lambda + A(\tau) - B(\tau) e^{-\lambda \tau} = 0, $$ where $A$, $B$ are nonlinear functions of $\tau$. Therefore, delay-dependent coefficients may result from the linearized dynamics of a nonlinear time-delay system. Time-delay systems with delay-dependent coefficients can also arise from the analysis of partial differential equations. As an example, the modeling of cell density in a generic compartment in [27] suggests an advection or reaction-convection equation of the following form: $$ \frac{\partial x(t, a)}{\partial t} + V \frac{\partial x(t, a)}{\partial a} = -\gamma(t, a) x(t, a). $$ A time-delay system can be obtained from the above equation using the method of characteristics $$ \dot{S}(t) = 2\beta(S(t - \tau_s)) e^{-\gamma_s \tau_s} S(t - \tau_s) - [\beta(S(t)) + \delta] S(t). $$ Detailed derivation and the meaning of the variables and functions in the above equation can be found in [27]. It is clear that the delay parameter $\tau_s$ enters the system coefficients through the exponential term $e^{-\gamma_s \tau_s}$. Other examples of systems with delay-dependent coefficients include the sunflower model [26], control systems using a finite-difference scheme for stabilization [21], as well as various population dynamics models [14]. Indeed, it has been pointed out in [12] that the dynamics of a population that goes through distinct life stages in general involves delay-dependent parameters. While it is possible to use the existing methods mentioned above to determine the stability of such a system for a given delay value, they are no longer sufficient to determine the range of delays for which the system is stable. Beretta and Kuang [12] presented an effective method to carry out such a stability analysis for systems with a single delay.
However, the authors of [12] made some very restrictive assumptions, and the main attention has been paid to the crossing direction of the characteristic roots at the imaginary axis. No procedure was given in [12] to identify all the pairs \((j\omega, \tau)\) that satisfy the characteristic equation. In general, the structure of the functions \(\omega(\tau)\) implicitly defined by \(F(\omega, \tau)\) has not been sufficiently described in [12] to systematically identify all such pairs. The purpose of this paper is to extend the method to the general case with the exception of some degenerate cases. As we will see, the removal of such restrictive assumptions means that a much richer behavior is possible. More specifically, the interval of interest for delay needs to be divided into subintervals so that the number of continuous functions \(\omega(\tau)\) remains constant within each subinterval. The number of such functions may change as the delay moves from one subinterval to another. The dividing points of the interval are those delays for which two polynomial equations have a common real solution. Based on such a structure, the crossing delays and the corresponding crossing frequencies may be identified systematically. Furthermore, the delay intervals such that the system is stable may be determined based on the crossing directions of each critical delay-frequency pair. The crossing direction in the general case may be determined numerically. With additional nondegeneracy assumption, the crossing direction may be conveniently determined analytically similar to the method given in [12], although we will show that a simplified derivation is possible. A preliminary version of this paper was presented in [20]. The following notation will be used in this paper. For a polynomial, \(\text{ord}(\cdot)\) denotes its order. For any complex number \(c\), \(\Re(c)\), \(\Im(c)\) and \(\bar{c}\) denote its real part, imaginary part and conjugate, respectively. 
\(\mathbb{R}\) stands for the set of real numbers and \(\mathbb{R}_+\) for non-negative reals. We will use \(\partial\) with a subscript to denote partial derivatives. For instance, \(\partial_\lambda D(\lambda, \tau) := \frac{\partial D(\lambda, \tau)}{\partial \lambda}\). II. Problem Statement Consider a time-delay system with characteristic equation of the form: \[ D(\lambda, \tau) = P(\lambda, \tau) + Q(\lambda, \tau)e^{-\tau \lambda} = 0, \] where \(P(\lambda, \tau)\) and \(Q(\lambda, \tau)\) are continuous in \(\tau\) and are polynomials of \(\lambda\) with real coefficients for each given \(\tau \in \mathcal{I}\), and \(\mathcal{I} = [\tau^l, \tau^u]\) is the range of delay parameters \(\tau\) of interest, \(0 \leq \tau^l < \tau^u\). In some contexts, we may write \(P_\tau(\lambda)\) and \(Q_\tau(\lambda)\) instead of \(P(\lambda, \tau)\) and \(Q(\lambda, \tau)\) in order to emphasize them as functions (polynomials in this case) of \(\lambda\) for a given \(\tau\). The same convention is also used for other functions of two independent variables with \(\tau\) as one of them. For example, we may write \(D_\tau(\lambda)\) instead of \(D(\lambda, \tau)\) to emphasize that we are considering \(D\) as a function of \(\lambda\) for a given \(\tau\), even though it is no longer a polynomial. As we will see later on, the solutions of (2) with \(\lambda\) on the imaginary axis play an important role in stability analysis, in which case (2) becomes \[ D(j\omega, \tau) = 0, \] where \(\omega\) is real. For this purpose, we define: \[ F(\omega, \tau) = P(j\omega, \tau)P(-j\omega, \tau) - Q(j\omega, \tau)Q(-j\omega, \tau). \] It is not difficult to see that a necessary but \textit{not sufficient condition} for \((\omega, \tau)\) to satisfy (3) is \[ F(\omega, \tau) = 0. \] The equation (5) is known as the magnitude condition, as it means that the two complex numbers \(P(j\omega, \tau)\) and \(Q(j\omega, \tau)\) have equal magnitude.
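Since $P$ and $Q$ have real coefficients, $P(-j\omega, \tau) = \overline{P(j\omega, \tau)}$, so $F$ in (4) is real-valued and equals $|P(j\omega, \tau)|^2 - |Q(j\omega, \tau)|^2$. A minimal numerical sketch, with hypothetical coefficient functions:

```python
import math

# Numerical sketch of F in (4): for real-coefficient P and Q we have
# P(-j*w, tau) = conj(P(j*w, tau)), so F(w, tau) = |P(j*w)|^2 - |Q(j*w)|^2
# and is real-valued.  The coefficient functions below are hypothetical.

def P(lam, tau):
    return lam**2 + (1.0 + tau)*lam + 2.0     # illustrative P(lambda, tau)

def Q(lam, tau):
    return -1.5*math.exp(-tau)                # illustrative Q(lambda, tau)

def F(w, tau):
    """F(w, tau) = P(jw)P(-jw) - Q(jw)Q(-jw); the imaginary part vanishes."""
    val = P(1j*w, tau)*P(-1j*w, tau) - Q(1j*w, tau)*Q(-1j*w, tau)
    return val.real

# The modulus form gives the same value:
w, tau = 1.3, 0.7
assert abs(F(w, tau) - (abs(P(1j*w, tau))**2 - abs(Q(1j*w, tau))**2)) < 1e-12
```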
We will restrict ourselves to systems that satisfy the following four assumptions: **Assumption I.** For all \(\tau \in \mathcal{I}\), \(P_\tau\) satisfies \[ \text{ord}(P_\tau) = n. \] Furthermore, \[ \lim_{\omega \to \infty} \left| \frac{Q_\tau(j\omega)}{P_\tau(j\omega)} \right| < 1. \] **Assumption II.** No \((\omega, \tau) \in \mathbb{R}_+ \times \mathcal{I}\) satisfies \[ P(j\omega, \tau) = 0, \] \[ Q(j\omega, \tau) = 0, \] simultaneously. **Assumption III.** Any \((\omega^*, \tau^*) \in \mathbb{R}_+ \times \mathcal{I}\) that satisfies (3) must also satisfy \[ \partial_\lambda D(\lambda, \tau) \big|_{\tau = \tau^*, \lambda = j\omega^*} \neq 0. \] Furthermore, let \(\lambda(\tau)\) be the function implicitly defined by (2) in a sufficiently small neighborhood of \((j\omega^*, \tau^*)\) within \(\mathbb{R}_+ \times \mathcal{I}\); then for all \(\tau \neq \tau^*\), \(\tau \in \mathcal{I}\), with \(|\tau - \tau^*|\) sufficiently small, \[ \Re(\lambda(\tau)) \neq 0. \] **Assumption IV.** There are only a finite number of \((j\omega, \tau)\) in \(\mathbb{R}_+ \times \mathcal{I}\) that simultaneously satisfy (5) and \[ \partial_\omega F(\omega, \tau) = 0. \] These four assumptions are less restrictive than those typically made in the literature, either explicitly or implicitly. Assumption I above requires the leading coefficient of \(P_\tau\) not to vanish for any \(\tau \in \mathcal{I}\), and that \[ \text{ord}(Q_\tau) \leq n. \] For time-delay systems of retarded type, (10) is satisfied with strict inequality. When (10) is an equality, the time-delay system is of neutral type, and (7) requires the absolute value of the leading coefficient of $Q_\tau(\lambda)$ to be strictly less than that of $P_\tau(\lambda)$. Systems of neutral type involve some surprising subtleties. See [2] for an example for systems with a single delay. For more comprehensive coverage, see [4] and [9].
Assumption II is much less restrictive than the counterpart in [12], which is $$P(j\omega, \tau) + Q(j\omega, \tau) \neq 0 \text{ for all } (\omega, \tau) \in \mathbb{R}^2.$$ (11) Indeed, the two complex equations in Assumption II are equivalent to four real equations with two real “unknowns” $\omega$ and $\tau$. Obviously, cases that violate this assumption are degenerate and rare. On the other hand, the set $$\{P(j\omega, \tau) + Q(j\omega, \tau) \mid (\omega, \tau) \in \mathbb{R}^2\}$$ is a region in the complex plane, and (11) requires this region not to include the origin, which is obviously much more restrictive. As will be presented later, the analysis is based on the phase condition on the set of parameters that satisfy the magnitude condition (5). The violation of this assumption makes the phase condition discontinuous at such points, and requires a separate treatment which will not be pursued here. In Assumption III, Condition (8) guarantees that $\lambda(\tau^*) = j\omega^*$ is a simple characteristic root, and consequently $\lambda(\tau)$ is well defined in a small neighborhood of $\tau = \tau^*$ by the implicit function theorem [22], and $\lambda'(\tau)$ exists at $\tau^*$ if $D(\lambda, \tau)$ is differentiable with respect to $\tau$ at $(\lambda(\tau^*), \tau^*)$. The remaining part of the assumption means that the curve $\lambda(\tau)$ is on the imaginary axis only at the single point $\lambda^* = \lambda(\tau^*)$ in this neighborhood. A more restrictive assumption is to assume $\Re(\lambda'(\tau)) \neq 0$, which is implicitly assumed in most works of this nature, including [12]. Assumption IV is also rather natural. It requires two real equations in two real variables to admit a finite number of solutions in the set $\mathbb{R}_+ \times \mathcal{I}$. This assumption holds for most systems with delay-dependent coefficients in practice.
This assumption allows the delay interval $\mathcal{I}$ to be divided into a finite number of sub-intervals such that the polynomial $F_\tau(\omega)$ has a constant number of simple positive roots within each subinterval. In most cases, we may choose the lower limit $\tau^l$ of $\mathcal{I}$ to be 0, and the upper limit $\tau^u$ sufficiently large. We leave them in this general form so that the method we present here is still valid even if some of the assumptions are violated for some $\tau < \tau^l$ or $\tau > \tau^u$. This paper provides an extension of the analysis in [12] so that it is still applicable when the condition (11) is violated. In [12], it is also implicitly assumed that the number of real roots, $\pm \omega_k(\tau), k = 1, 2, \cdots, m$, of $F_\tau(\omega)$ remains constant within the delay interval of interest $\mathcal{I}$, and that they are continuously differentiable. With our relaxed assumptions, these are no longer true. In particular, real roots may suddenly emerge or disappear as the delay $\tau$ increases within $\mathcal{I}$. It is therefore essential to understand the structure of this solution set in order to solve the stability problem. This will be discussed in the next section. III. Stability Analysis The main idea for stability analysis here is along the line of the $\tau$-decomposition method outlined in the introduction. The validity of the method is based on the fact that, for any closed interval of $\tau$, there exists a constant $c > 0$ such that all roots of $D_\tau(\lambda)$ with $\Re(\lambda) > -c$ vary continuously as $\tau$ changes. This is true under Assumption I [2][4][9].
The critical aspects of the stability analysis are: (i) identifying the values of $\tau$ such that there is at least one root of $D_\tau(\lambda)$ on the imaginary axis, as well as the corresponding imaginary roots, and (ii) determining whether these imaginary roots move from the left-half plane to the right-half plane, or vice versa, or return to the original side as $\tau$ increases through these values. In this section, we will consider the first aspect, and describe the process of stability analysis assuming we know the answer to the second aspect. In the next section, we will describe some methods of accomplishing the second aspect. To accomplish the first aspect stated in the last paragraph, it is useful to introduce the notation $$S(\lambda, \tau) = -\frac{P(\lambda, \tau)}{Q(\lambda, \tau)} e^{\tau \lambda},$$ (12) whenever $$Q(\lambda, \tau) \neq 0.$$ (13) Then $$S(j\omega, \tau) = W(\omega, \tau) e^{j\theta(\omega, \tau)},$$ (14) where $$W(\omega, \tau) = \left| \frac{P(j\omega, \tau)}{Q(j\omega, \tau)} \right|,$$ (15) $$\theta(\omega, \tau) = \angle P(j\omega, \tau) - \angle Q(j\omega, \tau) + \omega \tau + \pi.$$ (16) When $\lambda = j\omega$ is on the imaginary axis, we note that (3) is equivalent to the following two conditions $$W(\omega, \tau) = 1,$$ (17) $$\theta(\omega, \tau) = 2r\pi, \text{ for some integer } r,$$ (18) provided that (13) holds. Equation (17) is equivalent to (5), and represents the magnitude condition. Equation (18) is the phase condition. To capture essentially the same phase relationship, a function different from $\theta(\omega, \tau)$ is introduced in [12], which requires the more restrictive condition (11). Let $$\mathcal{W} = \{(\tau, \omega) \mid \tau \in \mathcal{I}, \omega \in \mathbb{R}, F(\omega, \tau) = 0\},$$ (19) then $(\tau, \omega) \in \mathcal{W}$ if and only if $(\tau, \omega)$ satisfies (13) and (17) in view of Assumption II.
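The decomposition (14)-(18) can be checked on a scalar toy system $D(\lambda, \tau) = \lambda + q\,e^{-\tau\lambda}$ (so $P = \lambda$, $Q = q$; a hypothetical choice, not from the paper), whose crossing is known in closed form: for $q = 2$, the root $\lambda = 2j$ lies on the imaginary axis exactly when $\tau = \pi/4$.

```python
import cmath, math

# Check of the magnitude/phase conditions (17)-(18) on the scalar example
# D(lambda, tau) = lambda + q*exp(-tau*lambda), i.e. P = lambda, Q = q.
# This system is a hypothetical illustration chosen for its closed-form
# crossing: with q = 2, lambda = 2j is a root exactly at tau = pi/4.

q = 2.0
P = lambda lam: lam
Q = lambda lam: q

def W(w, tau):
    return abs(P(1j*w))/abs(Q(1j*w))                    # eq. (15)

def theta(w, tau):
    return (cmath.phase(P(1j*w)) - cmath.phase(Q(1j*w)) # eq. (16)
            + w*tau + math.pi)

w_star, tau_star = 2.0, math.pi/4
D = P(1j*w_star) + Q(1j*w_star)*cmath.exp(-tau_star*1j*w_star)
assert abs(D) < 1e-12                                   # (3) holds
assert abs(W(w_star, tau_star) - 1.0) < 1e-12           # magnitude condition (17)
assert abs(theta(w_star, tau_star) - 2*math.pi) < 1e-12 # phase condition (18), r = 1
```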
Therefore, an effective approach to determine all $(\tau, \omega)$ satisfying (3) is to first determine the set $\mathcal{W}$, and then choose from $\mathcal{W}$ those $(\tau, \omega)$ that also satisfy (18). To understand the structure of $\mathcal{W}$, we will examine the function $F(\omega, \tau) = F_\tau(\omega)$ more closely. For any given $\tau$, $F_\tau(\omega)$ is a $2n^{th}$ order polynomial with real coefficients in view of Assumption I, and it is an even function. It can also be written as an $n^{th}$ order polynomial of $\omega^2$, \begin{align} \hat{F}(\alpha, \tau) &= F(\omega, \tau), \\ \alpha &= \omega^2. \end{align} Therefore \begin{equation} \hat{F}(\alpha, \tau) = 0 \end{equation} provides $n$ solutions $\alpha_k$, $k = 1, 2, \ldots, n$. Without loss of generality, let $\alpha_k$, $k = 1, 2, \ldots, n_p$, $n_p \leq n$, be the only real and non-negative solutions. Then, all the real solutions of (5) are $\pm \omega_k$, $k = 1, 2, \ldots, n_p$, where $\omega_k = \sqrt{\alpha_k}$. In general, the number of non-negative real roots $n_p$ depends on $\tau$. In order to understand this dependence, let $\tau^{(i)}$, $i = 1, 2, \ldots, K - 1$ be the set of all $\tau \in \mathcal{I}$ such that $(\omega, \tau)$ simultaneously satisfies (5) and (9) for some $\omega \in \mathbb{R}_+$ (recall that this set is indeed finite according to Assumption IV). We agree to order the $\tau^{(i)}$ in ascending order $$\tau^{(1)} < \tau^{(2)} < \cdots < \tau^{(K-1)}.$$ We will also write $\tau^{(0)} = \tau^l$ and $\tau^{(K)} = \tau^u$. Then, we may partition $\mathcal{I}$ into $K$ subintervals \begin{equation} \mathcal{I}^{(i)} = [\tau^{(i-1)}, \tau^{(i)}], \ i = 1, 2, \ldots, K. \end{equation} The interior of $\mathcal{I}^{(i)}$ is denoted as $\mathcal{I}_o^{(i)} = (\tau^{(i-1)}, \tau^{(i)})$. Then the structure of the set $\mathcal{W}$ may be described very clearly in the following proposition.
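Before stating the structural result, the substitution $\alpha = \omega^2$ just described can be sketched numerically: the nonnegative real roots $\omega_k$ of $F_\tau$ are obtained from the real nonnegative roots $\alpha_k$ of $\hat{F}_\tau$. The coefficients below are illustrative.

```python
import numpy as np

# Sketch of the substitution alpha = omega**2: for a fixed tau, F_tau is an
# even polynomial in omega, so its nonnegative real roots come from the
# n-th order polynomial F_hat(alpha, tau).  The coefficients used here are
# hypothetical numbers for illustration.

def positive_real_roots(coeffs_alpha, tol=1e-9):
    """Return the omega_k > 0 from the polynomial in alpha = omega**2."""
    alphas = np.roots(coeffs_alpha)
    out = []
    for a in alphas:
        if abs(a.imag) < tol and a.real > tol:   # keep real, positive alpha
            out.append(float(np.sqrt(a.real)))   # omega_k = sqrt(alpha_k)
    return sorted(out)

# F_hat(alpha) = alpha^2 + 8*alpha - 9 has roots alpha = 1 and alpha = -9,
# so exactly one positive frequency omega = 1 results.
omegas = positive_real_roots([1.0, 8.0, -9.0])
assert len(omegas) == 1 and abs(omegas[0] - 1.0) < 1e-9
```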
**Proposition 1.** For a given $i$, the number of real roots of $F_\tau(\omega)$ is the same for all $\tau \in \mathcal{I}_o^{(i)}$, and they are all simple. These real simple roots are continuous functions of $\tau$, and may be expressed as $\pm \omega_k^{(i)}(\tau)$, $k = 1, 2, \ldots, m^{(i)}$, where $m^{(i)} \leq n$, and $\omega_k^{(i)}(\tau) > 0$ for all $\tau \in \mathcal{I}_o^{(i)}$. **Proof.** For a fixed $i$, by definition, for all $\tau \in \mathcal{I}_o^{(i)}$, any $\omega \in \mathbb{R}$ that satisfies \begin{equation} F_\tau(\omega) = 0 \end{equation} must satisfy \begin{equation} F'_\tau(\omega) = \partial_\omega F(\omega, \tau) \neq 0, \end{equation} from which we conclude that all real roots of $F_\tau(\omega)$ are simple. As $F_\tau(\omega)$ is an even function of $\omega$, we can also conclude that $-\omega$ is also a root if $\omega$ is a real root, and that $\omega = 0$ is not a root (otherwise, it could not be simple). To show the invariance of the number of real solutions within $\mathcal{I}_o^{(i)}$, let $\tau^* \in \mathcal{I}_o^{(i)}$, and let $\omega_k^*$, $k = 1, 2, \ldots, m$ be the only real roots of $F_{\tau^*}(\omega)$. By the continuity of roots with respect to coefficients [8], we may define $m$ continuous functions $\omega_k(\tau)$, $k = 1, 2, \ldots, m$ in $\mathcal{I}_o^{(i)}$ with $\omega_k(\tau^*) = \omega_k^*$, such that each $\omega_k(\tau)$ is a root of $F_\tau(\omega)$. The proof is complete if we show that all $\omega_k(\tau)$ are real in $\mathcal{I}_o^{(i)}$, as this also implies that the $\omega_k(\tau)$ are simple roots of $F_\tau(\omega)$. For a given $k$, let $$\tau_M = \sup \{ \tau_a \mid \omega_k(\tau) \in \mathbb{R} \text{ for all } \tau \in [\tau^*, \tau_a] \}.$$ By continuity, $\omega_k(\tau_M)$ is real. We will show $\tau_M = \tau^{(i)}$. If not, then for arbitrarily small $\varepsilon > 0$, $\omega_k(\tau_M + \varepsilon)$ is not real, and it can be made arbitrarily close to $\omega_k(\tau_M)$ with sufficiently small $\varepsilon$.
But this means that its complex conjugate $\overline{\omega}_k(\tau_M + \varepsilon)$ is also a root of the polynomial with real coefficients $F_{\tau_M + \varepsilon}(\omega)$ and arbitrarily close to $\omega_k(\tau_M)$. The continuity of roots with respect to the coefficients then means that $\omega_k(\tau_M)$ cannot be a simple root of $F_{\tau_M}(\omega)$, which contradicts the first part of this proposition that we have already proven. We have thus shown that $\omega_k(\tau)$ is real for all $\tau \in [\tau^*, \tau^{(i)})$. Similarly, we can show that $\omega_k(\tau)$ is real for all $\tau \in (\tau^{(i-1)}, \tau^*]$, and the proof is complete. □ As $\tau$ moves rightward from a point in $\mathcal{I}_o^{(i)}$, some, say $m$, real roots and $2l$ complex roots of $F_\tau(\omega)$ may merge to form a multiple root as $\tau$ reaches $\tau^{(i)}$, and some, say $2k$, become complex while $m + 2l - 2k$ roots remain real as $\tau$ enters $\mathcal{I}_o^{(i+1)}$. The most common scenarios are that either two real roots merge and become complex, or two complex roots merge and become real, as $\tau$ moves from $\mathcal{I}_o^{(i)}$ to $\mathcal{I}_o^{(i+1)}$ through $\tau^{(i)}$. A real root of $F_\tau(\omega)$ in $\mathcal{I}_o^{(i)}$, say $\omega_k^{(i)}(\tau)$, $k \leq m^{(i)}$, that does not merge with other roots at $\tau^{(i)}$ remains real, and becomes $\omega_l^{(i+1)}(\tau)$ for some $l \leq m^{(i+1)}$ as $\tau$ moves from $\mathcal{I}_o^{(i)}$ to $\mathcal{I}_o^{(i+1)}$ through $\tau^{(i)}$. For a given $i$ and $k$, as $\omega_k^{(i)}$ depends on $\tau$ continuously in $\mathcal{I}_o^{(i)}$, we will require $\angle P(j\omega_k^{(i)}(\tau), \tau)$ and $\angle Q(j\omega_k^{(i)}(\tau), \tau)$ to be continuous functions of $\tau$. This means that \begin{equation} \theta_k^{(i)}(\tau) = \theta(\omega_k^{(i)}(\tau), \tau), \ k = 1, 2, \ldots, m^{(i)} \end{equation} are continuous functions of $\tau$ within $\mathcal{I}_o^{(i)}$; they will be known as the phase functions.
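The emergence and disappearance of real roots described above can be observed numerically by sweeping $\tau$ and counting the positive real roots of $\hat{F}_\tau$. The sketch below uses an illustrative $\hat{F}(\alpha, \tau) = \alpha^2 + 8\alpha + 16 - 900e^{-4\tau}$, for which the count drops from one to zero near $\tau \approx 1.007$:

```python
import numpy as np

def n_positive_roots(tau):
    # F_hat(alpha, tau) = alpha^2 + 8*alpha + 16 - 900*exp(-4*tau)
    # (an illustrative F_hat with a delay-dependent constant term)
    alphas = np.roots([1.0, 8.0, 16.0 - 900.0*np.exp(-4.0*tau)])
    return sum(1 for a in alphas if abs(a.imag) < 1e-9 and a.real > 1e-9)

# Sweep tau over a grid; the count is constant inside each subinterval and
# changes where a double root of F_tau appears (here at tau^(1) ~ 1.007).
taus = np.linspace(0.0, 2.0, 401)
counts = [n_positive_roots(t) for t in taus]
boundaries = [0.5*(taus[i] + taus[i + 1])
              for i in range(len(taus) - 1) if counts[i] != counts[i + 1]]
```

The grid only brackets the subinterval boundaries; in practice they would be refined by solving (5) and (9) simultaneously.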
On the other hand, this continuity requirement means that the values of $\angle P(j\omega_k^{(i)}(\tau), \tau)$, $\angle Q(j\omega_k^{(i)}(\tau), \tau)$ and $\theta_k^{(i)}(\tau)$ may not be restricted to any $2\pi$ range. Furthermore, if $\omega_k^{(i)}(\tau)$ and $\omega_l^{(i)}(\tau)$ merge at, say, $\tau^{(i)}$, and we extend the definition of $\theta_k^{(i)}(\tau)$ and $\theta_l^{(i)}(\tau)$ to $\tau^{(i)}$ by continuity, then it is possible that $$\theta_k^{(i)}(\tau) - \theta_l^{(i)}(\tau) = 2\pi r,$$ for some integer $r \neq 0$ even though \begin{equation} \omega_k^{(i)}(\tau^{(i)}) = \omega_l^{(i)}(\tau^{(i)}). \end{equation} Going through each interval $\mathcal{I}^{(i)}$ and each curve $\omega_k^{(i)}(\tau)$, we may identify all $\tau = \tau_l$ such that \begin{equation} \theta_k^{(i)}(\tau_l) = 2\pi r, \ r \text{ integer}, \end{equation} for some $k$ with $\tau_l \in \mathcal{I}^{(i)}$. Note that the end points of the intervals, $\tau^{(i)}$, $i = 0, 1, \ldots, K$, should also be included. We will order such $\tau_l$ in ascending order $$\tau^l \leq \tau_1 < \tau_2 < \cdots < \tau_L \leq \tau^u.$$ Each $\tau_l$ is known as a critical delay. For each given $\tau_l$, it is possible that more than one $k$ satisfies (28), and we denote the corresponding $\omega_k^{(i)}(\tau_l) \geq 0$ as $\omega_{lh}$, $h = 1, 2, \ldots, H_l$. Therefore, we can identify all the pairs $(\omega_{lh}, \tau_l)$, $h = 1, 2, \ldots, H_l; l = 1, 2, \ldots, L$, that satisfy (3). It should also be pointed out that a simple root $j\omega$ of $D_\tau(\lambda)$ may correspond to a double root $\omega$ of $F_\tau(\omega)$. In other words, for some $\tau = \tau^{(i)}$, an $\omega$ that simultaneously satisfies (5) and (9) may satisfy (18) without violating Assumption III. Such points pose special difficulty in determining the crossing direction, as will be shown in the next section.
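The identification of critical delays amounts to locating the crossings of a continuous phase function with integer multiples of $2\pi$. A generic sketch with a hypothetical smooth $\theta(\tau)$ standing in for a phase function $\theta_k^{(i)}(\tau)$:

```python
import math

# Sketch of critical-delay detection on one frequency curve: sample a
# continuous phase function theta(tau) and locate every tau where it
# crosses an integer multiple of 2*pi.  theta below is a hypothetical
# smooth curve used only for illustration.

def theta(tau):
    return 5.0 + 3.0*math.sin(2.0*tau)

def critical_delays(theta, a, b, n=2000, tol=1e-10):
    # g is the signed distance from theta to the nearest multiple of 2*pi
    g = lambda t: theta(t) - 2.0*math.pi*round(theta(t)/(2.0*math.pi))
    crossings = []
    ts = [a + (b - a)*i/n for i in range(n + 1)]
    for t0, t1 in zip(ts, ts[1:]):
        # a genuine crossing has g small and changing sign; the jump of g
        # at odd multiples of pi is excluded by the |g(t0)| < 1 guard
        if g(t0)*g(t1) < 0.0 and abs(g(t0)) < 1.0:
            lo, hi = t0, t1
            while hi - lo > tol:              # bisection refinement
                mid = 0.5*(lo + hi)
                if g(lo)*g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            crossings.append(0.5*(lo + hi))
    return crossings

taus = critical_delays(theta, 0.0, 3.0)
```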
Now we turn to the second aspect mentioned at the beginning of this section, i.e., the movement of the imaginary roots. For a given pair \((\omega_{lh}, \tau_l)\) that satisfies (3), a sufficiently small \(\varepsilon > 0\), and any \(\tau \in (\tau_l, \tau_l + \varepsilon)\), there is a unique solution \(\lambda_{lh}^+\) of (2) in the neighborhood of \(j\omega_{lh}\). Assumption III and continuity mean that \(\Re(\lambda_{lh}^+)\) must be nonzero, and have the same sign for any \(\tau \in (\tau_l, \tau_l + \varepsilon)\). Similarly, let \(\lambda_{lh}^-\) be the unique solution of (2) in the neighborhood of \(j\omega_{lh}\) corresponding to a given \(\tau \in (\tau_l - \varepsilon, \tau_l)\); then \(\Re(\lambda_{lh}^-)\) must have the same sign for all such \(\tau\). We define \[ \text{Inc}(\omega_{lh}, \tau_l) = \frac{\text{sgn}(\Re(\lambda_{lh}^+)) - \text{sgn}(\Re(\lambda_{lh}^-))}{2}. \] (29) If \(\text{Inc}(\omega_{lh}, \tau_l) = 1\), a root of \(D_\tau(\lambda)\) moves from the left-half plane to the right-half plane, crossing the imaginary axis at \(j\omega_{lh}\), as \(\tau\) increases from \(\tau_l - \varepsilon\) to \(\tau_l + \varepsilon\). On the other hand, if \(\text{Inc}(\omega_{lh}, \tau_l) = -1\), then the root moves from the right-half plane to the left-half plane as \(\tau\) increases from \(\tau_l - \varepsilon\) to \(\tau_l + \varepsilon\). If \(\text{Inc}(\omega_{lh}, \tau_l) = 0\), the root moves towards the imaginary axis, touches it at \(j\omega_{lh}\), then returns to the same half-plane without crossing the imaginary axis. We also define \[ \text{Inc}(\tau_l) = 2 \sum_{h=1}^{H_l} \text{Inc}(\omega_{lh}, \tau_l). \] (30) Then, as \(\tau\) increases from \(\tau_l - \varepsilon\) to \(\tau_l + \varepsilon\), there is a net increase of \(\text{Inc}(\tau_l)\) roots on the right-half plane.
Notice that \(\omega_{lh} > 0\), \(h = 1, 2, \ldots, H_l\) only accounts for the roots on the upper half of the imaginary axis; the coefficient 2 in front of the summation sign in (30) accounts for the fact that the roots of \(D_\tau(\lambda)\) are symmetric with respect to the real axis. Let the number of right-half plane roots of \(D_\tau(\lambda)\) be \(N^u(\tau)\). Then, for any \(\tau \in \mathcal{I}\), \(\tau \neq \tau_l\), \(l = 1, 2, \ldots, L\), we have \[ N^u(\tau) = N^u(\tau') + \sum_{l=1}^{L_\tau} \text{Inc}(\tau_l), \] (31) where \(\tau' = \tau^l\) and \(L_\tau = \max\{l \mid \tau_l < \tau\}\). If \(\tau' = 0\), then \(D_{\tau'}(\lambda)\) is a polynomial, and \(N^u(\tau')\) is easily obtained. If \(\tau' > 0\), \(N^u(\tau')\) may be obtained by a method covered in [5] or [1] (but notice the correction [2]). If there are imaginary roots of \(D_{\tau'}(\lambda)\), \(N^u(\tau')\) should not count these imaginary roots, and \(\text{Inc}(\omega_{lh}, \tau')\) should instead be defined as \[ \text{Inc}(\omega_{lh}, \tau') = \begin{cases} 1, & \text{if } \text{sgn}(\Re(\lambda_{lh}^+)) = 1, \\ 0, & \text{otherwise.} \end{cases} \] (32) Obviously, \(N^u(\tau)\) remains the same in the interval \((\tau_l, \tau_{l+1})\) for any given \(l\). The system is stable if \(N^u(\tau) = 0\). IV. CROSSING DIRECTION CONDITIONS In the last section, a procedure for determining the range of \(\tau\) in \(\mathcal{I}\) such that \(D_\tau(\lambda)\) is stable has been developed, provided a method of determining \(\text{Inc}(\omega_{lh}, \tau_l)\) is available. It is not difficult to determine \(\text{Inc}(\omega_{lh}, \tau_l)\) according to the definition if a numerical method is used.
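The bookkeeping of (31) is straightforward once the critical delays and their $\text{Inc}(\tau_l)$ values are known. A sketch, using the critical-delay data that will be obtained for the stellar dynamo example of Section V:

```python
# Bookkeeping of (31): starting from the number of unstable roots at the
# reference delay, accumulate Inc(tau_l) at each critical delay passed.
# The critical-delay data below are those found for the stellar dynamo
# example of Section V, with Inc(tau_l) = 2*sum_h Inc(omega_lh, tau_l).

def unstable_count(tau, n_ref, critical):
    """critical: list of (tau_l, Inc(tau_l)); tau must differ from every tau_l."""
    return n_ref + sum(inc for tau_l, inc in critical if tau_l < tau)

critical = [(0.2748, 2), (0.5314, -2)]
assert unstable_count(0.10, 0, critical) == 0   # stable
assert unstable_count(0.40, 0, critical) == 2   # one unstable pair
assert unstable_count(1.00, 0, critical) == 0   # stable again
```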
Indeed, as the solution \((j\omega_{lh}, \tau_l)\) is already known for \(D(\lambda, \tau)\), the Newton-Raphson method may be used to find the unique solution in the neighborhood of \(j\omega_{lh}\) when \(\tau\) is very close to \(\tau_l\) and \(D(\lambda, \tau)\) is differentiable with respect to \(\tau\) in a neighborhood of \((\omega_{lh}, \tau_l)\) [8]. In many cases, however, a simple analytical method can be used, which is described as follows. The simplest case is when \[ \Re(\lambda_{lh}'(\tau))_{\tau=\tau_l} \neq 0, \] (33) where \(\lambda_{lh}(\tau)\) is the implicit function defined by (2) in the neighborhood of \((j\omega_{lh}, \tau_l)\), provided that \(\lambda_{lh}(\tau)\) is differentiable at \(\tau_l\). This can be guaranteed by requiring \(D(\lambda, \tau)\) to be differentiable with respect to \(\tau\) at \((j\omega_{lh}, \tau_l)\) [8]. Indeed, provided that (33) is satisfied, it is easy to see that \[ \text{Inc}(\omega_{lh}, \tau_l) = \text{sgn}(\Re(\lambda_{lh}'(\tau_l))), \] (34) if \(\tau_l > \tau'\). On the other hand, if \(\tau_l = \tau'\), we have \[ \text{Inc}(\omega_{lh}, \tau_l) = \max\left\{0, \text{sgn}(\Re(\lambda_{lh}'(\tau_l)))\right\}. \] (35) If (33) is violated, and \(D(\lambda, \tau)\) is differentiable to a sufficiently high order at \((j\omega_{lh}, \tau_l)\), then it follows from equation (8) in Assumption III and the implicit function theorem that the derivatives of \(\lambda(\tau)\) exist up to a sufficiently high order at the point \((j\omega_{lh}, \tau_l)\) [8]. Consequently, we may express \(\text{Inc}(\omega_{lh}, \tau_l)\) using higher order derivatives. Suppose \[ \Re\left(\frac{d^k\lambda(\tau)}{d\tau^k}\right)_{\tau=\tau_l} = 0, \ k = 1, 2, \ldots, m-1, \] \[ \Re\left(\frac{d^m\lambda(\tau)}{d\tau^m}\right)_{\tau=\tau_l} \neq 0.
\] If \(\tau_l > \tau'\), then \[ \text{Inc}(\omega_{lh}, \tau_l) = \begin{cases} \text{sgn}\left(\Re\left(\frac{d^m\lambda(\tau_l)}{d\tau^m}\right)\right), & \text{if } m \text{ is odd}, \\ 0, & \text{if } m \text{ is even}. \end{cases} \] (36) If \(\tau_l = \tau'\), on the other hand, then \[ \text{Inc}(\omega_{lh}, \tau_l) = \max\left\{0, \text{sgn}\left(\Re\left(\frac{d^m\lambda(\tau_l)}{d\tau^m}\right)\right)\right\}. \] (37) If the condition (8) in Assumption III is violated for some imaginary characteristic root \(\lambda = j\omega^*\), we are then faced with a characteristic root with multiplicity, and cannot regard it as a locally differentiable function of \(\tau\). In this case, the trajectory of characteristic roots parameterized by \(\tau\) may have several branches passing through the point \(j\omega^*\) on the imaginary axis. One may still determine the increment in the number of unstable roots based on these branches of curves, which can be locally characterized by the Newton-Puiseux series. Comprehensive analysis of this problem can be found in [23], [24] and [25]. An eigenvalue perturbation approach is taken in [23] and [24], which applies also to systems represented by state-space matrices, whilst the analysis in [25] is based on the characteristic equations. We will now give an explicit expression for \(\text{sgn}(\Re(\lambda_{lh}'(\tau_l)))\) and leave the higher-order analysis to future work. The expression is similar to that given in [12], but our derivation here is more succinct. For this purpose, we henceforth replace Assumption III by the following one: **Assumption IIIa.** Any pair \((\omega^*, \tau^*) \in \mathbb{R} \times \mathcal{I}\) that satisfies (3) must also satisfy \[ \partial_\omega F(\omega^*, \tau^*) \neq 0. \] Furthermore, \(D(\lambda, \tau)\) is differentiable with respect to \(\tau\) in a neighborhood of \((j\omega^*, \tau^*)\).
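The purely numerical route mentioned at the beginning of this section can be sketched as follows: starting from the known imaginary root $j\omega_{lh}$, Newton's method is applied to $D_\tau(\lambda)$ at $\tau = \tau_l \pm \varepsilon$, and the signs of the resulting real parts give $\text{Inc}$ per (29). The system below is the stellar dynamo example of Section V, with its first critical pair.

```python
import cmath, math

# Numerical determination of Inc per definition (29): Newton's method on
# D(lambda, tau) started from the known imaginary root j*omega_lh at
# tau = tau_l +/- eps.  D below is the stellar dynamo characteristic
# function of Section V; the critical pair is the one found there.

c1, c2, c3 = -10.0, 2.0, 3.0

def D(lam, tau):
    return lam**2 + 2*c2*lam + c2**2 - c1*c3*math.exp(-c2*tau)*cmath.exp(-tau*lam)

def dD_dlam(lam, tau):
    return 2*lam + 2*c2 + tau*c1*c3*math.exp(-c2*tau)*cmath.exp(-tau*lam)

def newton_root(lam0, tau, iters=60):
    lam = lam0
    for _ in range(iters):
        lam -= D(lam, tau)/dD_dlam(lam, tau)
    return lam

tau_l, omega_lh = 0.2748, 3.6490    # first critical pair of Section V
eps = 1e-3
re_plus = newton_root(1j*omega_lh, tau_l + eps).real
re_minus = newton_root(1j*omega_lh, tau_l - eps).real
Inc = (int(math.copysign(1.0, re_plus)) - int(math.copysign(1.0, re_minus)))//2
```

Here $\varepsilon$ must be small enough that the tracked root stays in the neighborhood of $j\omega_{lh}$, yet large enough that the sign of the real part is numerically unambiguous.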
The above assumption is stronger than the first part of Assumption III, as indicated by the following lemma. **Lemma 1.** Any pair \((\omega^*, \tau^*)\) that satisfies Assumption IIIa must also satisfy (8). **Proof.** At \((\omega^*, \tau^*)\), writing \(\tilde{P}\) and \(\tilde{Q}\) for the complex conjugates of \(P(j\omega^*, \tau^*)\) and \(Q(j\omega^*, \tau^*)\), we have \[ F = \tilde{P}P - \tilde{Q}Q = 0, \] \[ e^{-\tau \lambda} = -P/Q = -\tilde{Q}/\tilde{P}, \] \[ \partial_\lambda D = \partial_\lambda P + (\partial_\lambda Q)e^{-\tau \lambda} - \tau Q e^{-\tau \lambda}. \] Therefore, \[ \partial_\omega F = 2\Re(j\tilde{P}\partial_\lambda P - j\tilde{Q}\partial_\lambda Q) \] \[ = -2\Im(\tilde{P}\partial_\lambda P - \tilde{Q}\partial_\lambda Q) \] \[ = -2\Im\left(\tilde{P}\partial_\lambda P - \tilde{P}\frac{\tilde{Q}}{\tilde{P}}\partial_\lambda Q + \tau \tilde{P}P\right) \] \[ = -2\Im\left(\tilde{P}\partial_\lambda P + \tilde{P}e^{-\tau \lambda}\partial_\lambda Q - \tau \tilde{P}Q e^{-\tau \lambda}\right) \] \[ = -2\Im(\tilde{P}\partial_\lambda D). \] In the third step, the real quantity \(\tau \tilde{P}P = \tau|P|^2\) was added, which does not affect the imaginary part. The above indicates that \(\partial_\omega F(\omega^*, \tau^*) \neq 0\) implies (8). \(\square\) It should be pointed out that the *converse is not necessarily true*. Indeed, the proof above shows that \(\partial_\omega F(\omega^*, \tau^*) = 0\) only implies that \(\partial_\lambda D(j\omega^*, \tau^*)\) is parallel to \(P(j\omega^*, \tau^*)\), which does not necessarily mean \(\partial_\lambda D(j\omega^*, \tau^*) = 0\). **Proposition 2.** Let \((\omega^*, \tau^*) \in \mathbb{R} \times \mathcal{I}\) satisfy (3) and Assumption IIIa.
Then (2) defines \(\lambda\) as a differentiable function of \(\tau\) in a sufficiently small neighborhood of \((j\omega^*, \tau^*)\), and \[ \text{sgn}\left(\Re\left(\frac{d\lambda}{d\tau}\right)_{\tau=\tau^*}\right) = \text{sgn}\left(\partial_\omega F(\omega, \tau)\right)_{\tau=\tau^*, \omega=\omega^*} \times \text{sgn}\left(\frac{d_F \theta}{d\tau}\right)_{\tau=\tau^*, \omega=\omega^*}, \] (39) where \[ \frac{d_F \theta}{d\tau} = \partial_\omega \theta \frac{d_F \omega}{d\tau} + \partial_\tau \theta \] is the total derivative of \(\theta(\omega, \tau)\) with respect to \(\tau\) when \(\omega\) is considered as a function of \(\tau\) defined implicitly by (5) in a sufficiently small neighborhood of \((\omega^*, \tau^*)\), and \(\frac{d_F \omega}{d\tau}\) is the derivative of the function \(\omega(\tau)\) so defined. **Proof.** Lemma 1 and Assumption IIIa indicate that \(\partial_\lambda D(\lambda, \tau) \neq 0\) and \(\partial_\tau D(\lambda, \tau)\) exists in a neighborhood of \((j\omega^*, \tau^*)\). Therefore, the equation (2), or equivalently \[ S(\lambda, \tau) = 1, \] (40) defines \(\lambda\) as a differentiable function of \(\tau\) in a small neighborhood of \(\tau^*\) in view of the implicit function theorem. A differentiation of (40) yields \[ \partial_\lambda S \frac{d\lambda}{d\tau} + \partial_\tau S = 0, \] from which \[ \frac{d\lambda}{d\tau} = -\partial_\tau S/\partial_\lambda S = -\partial_\tau S(\overline{\partial_\lambda S})/\left|\partial_\lambda S\right|^2. \] But, at \(\lambda = j\omega^*\), \[ \partial_\lambda S(\lambda, \tau) = \frac{1}{j}\partial_\omega S(j\omega, \tau) \] \[ = \frac{1}{j}\left[(\partial_\omega W)e^{j\theta} + j(\partial_\omega \theta)We^{j\theta}\right] \] \[ = -j\frac{1}{W}\partial_\omega W + \partial_\omega \theta. \] In the last step, (40) and (14) have been used. Similarly, we may obtain \[ \partial_\tau S = \frac{1}{W}\partial_\tau W + j\partial_\tau \theta.
\] Therefore, \[ \text{sgn}\left(\Re\left(\frac{d\lambda}{d\tau}\right)\right) = -\text{sgn}\left(\Re\left(\left(\frac{1}{W}\partial_\tau W + j\partial_\tau \theta\right)\left(\partial_\omega \theta + j\frac{1}{W}\partial_\omega W\right)\right)\right) = \text{sgn}\left(\frac{\partial_\omega W\,\partial_\tau \theta - \partial_\tau W\,\partial_\omega \theta}{W}\right). \] (41) When \(\omega\) is a function of \(\tau\) defined implicitly by (5), or equivalently by (17), we have: \[ \frac{d_F \omega}{d\tau} = -\partial_\tau W/\partial_\omega W = -\partial_\tau F/\partial_\omega F. \] (42) In view of \(|Q(j\omega^*, \tau^*)| = |P(j\omega^*, \tau^*)|\), it is easy to show that \[ \frac{1}{W}\partial_\omega W\bigg|_{\tau=\tau^*, \omega=\omega^*} = \frac{1}{2|P|^2}\partial_\omega F\bigg|_{\tau=\tau^*, \omega=\omega^*}. \] (43) Substituting (42) and (43) into (41) yields \[ \text{sgn}\left(\Re\left(\frac{d\lambda}{d\tau}\right)\right) = \text{sgn}\left(\frac{1}{2|P|^2}\partial_\omega F\left(\frac{d_F \omega}{d\tau}\partial_\omega \theta + \partial_\tau \theta\right)\right), \] from which (39) can be easily derived. \(\square\) We now make a useful observation about the first factor in (39). **Proposition 3.** For any given \(i\) and \(k\), the quantity \[ \text{sgn}\left(\partial_\omega F(\omega, \tau)\right)_{\omega=\omega_k^{(i)}(\tau)} \] (44) remains constant for all \(\tau \in \mathcal{I}_o^{(i)}\). **Proof.** Due to the continuity of \(\partial_\omega F(\omega, \tau)\), in order for \(\partial_\omega F(\omega_k^{(i)}(\tau), \tau)\) to change sign, it must first vanish, which violates the definition of \(\mathcal{I}_o^{(i)}\). \(\square\) The above proposition indicates that the first factor in (39) only needs to be checked once for each curve \(\omega_k^{(i)}(\tau)\) within the interval $\mathcal{I}_o^{(i)}$. Next, we will provide an explicit expression for the second factor.
**Proposition 4.** If $(\omega, \tau)$ satisfies (3), $$\frac{d_F \theta}{d \tau} = \frac{1}{|P|^2} \left( P_r \frac{d_F P_i}{d \tau} - P_i \frac{d_F P_r}{d \tau} - Q_r \frac{d_F Q_i}{d \tau} + Q_i \frac{d_F Q_r}{d \tau} \right) + \tau \frac{d_F \omega}{d \tau} + \omega,$$ where the subscripts $r$ and $i$ denote the real and imaginary parts of the respective quantities, and the total derivatives may be calculated by $$\frac{d_F \phi}{d \tau} = \partial_{\omega} \phi \frac{d_F \omega}{d \tau} + \partial_{\tau} \phi,$$ where $\phi$ may be $P_r$, $P_i$, $Q_r$ or $Q_i$, and $$\frac{d_F \omega}{d \tau} = -\partial_{\tau} F / \partial_{\omega} F.$$ **Proof.** Consider the identity $$S = We^{j \theta} = -\frac{Pe^{j \omega \tau}}{Q}. \quad (45)$$ Taking the total derivative with respect to $\tau$, with $\omega(\tau)$ implicitly defined by (5), and noticing that $$W(\omega(\tau), \tau) = 1 \text{ for all } \tau,$$ we obtain $$j \frac{d_F \theta}{d \tau} We^{j \theta} = -\frac{d_F}{d \tau} \left( \frac{P}{Q} \right) e^{j \omega \tau} - j \left( \tau \frac{d_F \omega}{d \tau} + \omega \right) \frac{Pe^{j \omega \tau}}{Q}.$$ Solving the above for $d_F \theta/d \tau$ and using (45), we obtain $$\frac{d_F \theta}{d \tau} = \frac{1}{j} \left( \frac{1}{P} \frac{d_F P}{d \tau} - \frac{1}{Q} \frac{d_F Q}{d \tau} \right) + \tau \frac{d_F \omega}{d \tau} + \omega. \quad (46)$$ In view of $|P|^2 = |Q|^2$, the expression in the parentheses in (46) can be written as $$\frac{1}{P} \frac{d_F P}{d \tau} - \frac{1}{Q} \frac{d_F Q}{d \tau} = \frac{\bar{P}}{P \bar{P}} \frac{d_F P}{d \tau} - \frac{\bar{Q}}{Q \bar{Q}} \frac{d_F Q}{d \tau} = \frac{1}{|P|^2} \left( \bar{P} \frac{d_F P}{d \tau} - \bar{Q} \frac{d_F Q}{d \tau} \right).$$ Substituting the above into (46) completes the proof. □ While no explicit expression was given for $d_F \theta/d \tau$ in [12], an explicit expression of $S_n'(\tau)$ in [12] could be obtained by going through the proof of Theorem 2.2 in [12].
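Both sign factors in (39) can also be evaluated by finite differences, with $d_F\omega/d\tau$ from (42) and the total derivative of $\theta$ assembled as in Proposition 4. The sketch below uses an illustrative second-order system $P = \lambda^2 + 4\lambda + 4$, $Q = 30e^{-2\tau}$, and assumes the crossing delays $\tau \approx 0.2748$ and $\tau \approx 0.5314$ are known from a prior frequency sweep:

```python
import cmath, math

# Finite-difference evaluation of the two sign factors in (39):
# d_F omega/d tau from (42), then the total derivative of theta.  The
# second-order P and delay-dependent Q below are illustrative, and the
# crossing delays ~0.2748 and ~0.5314 are assumed known in advance.

def Pj(w):
    return (1j*w)**2 + 4.0*(1j*w) + 4.0

def F(w, tau):
    return abs(Pj(w))**2 - (30.0*math.exp(-2.0*tau))**2

def theta(w, tau):
    return cmath.phase(Pj(w)) + w*tau + math.pi    # angle(Q) = 0 since Q > 0

def crossing_sign(tau, h=1e-6):
    w = math.sqrt(30.0*math.exp(-2.0*tau) - 4.0)   # point on the curve F = 0
    dF_dw = (F(w + h, tau) - F(w - h, tau))/(2*h)
    dF_dt = (F(w, tau + h) - F(w, tau - h))/(2*h)
    dw_dt = -dF_dt/dF_dw                           # eq. (42)
    dth_dw = (theta(w + h, tau) - theta(w - h, tau))/(2*h)
    dth_dt = (theta(w, tau + h) - theta(w, tau - h))/(2*h)
    d_theta = dth_dw*dw_dt + dth_dt                # total derivative of theta
    return int(math.copysign(1.0, dF_dw))*int(math.copysign(1.0, d_theta))

assert crossing_sign(0.2748) == 1    # crossing towards the right-half plane
assert crossing_sign(0.5314) == -1   # crossing back to the left-half plane
```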
Proposition 3 above can be considered as a consequence of Theorem 2.2 and Remark 2.2 in [12]. Indeed, it can be seen that $S_n(\tau)$ in Theorem 2.2 of [12] is equal to $(\theta(\tau) - 2n\pi)/\omega(\tau)$ here. Remark 2.2 in [12] indicates that the factor $\omega(\tau)$ does not affect the sign of the derivative at the crossing point. It is interesting to apply the conclusions of Proposition 2 to the case of delay-independent coefficient polynomials, i.e., when $P(\lambda, \tau)$ and $Q(\lambda, \tau)$ are independent of $\tau$. In this case, $F(\omega, \tau)$ is independent of $\tau$, the curves $\omega_k^{(i)}(\tau)$ become constants, and $d_F \theta/d \tau = \omega =$ constant. As a result, the crossing direction given in (39) is independent of the delay. This fact is well known in the literature on single or commensurate delay systems with delay-independent coefficients, and has been stated either implicitly [5] or explicitly [19] as the *invariance property*. More generally, for systems with delay-dependent coefficient polynomials discussed in this paper, we may still identify delay intervals where the crossing direction is invariant, provided $P(\lambda, \tau)$ and $Q(\lambda, \tau)$ are continuously differentiable with respect to $\tau$. Indeed, for a given subinterval $\mathcal{I}_o^{(i)} = (\tau^{(i-1)}, \tau^{(i)})$ and frequency curve $\omega_k^{(i)}(\tau)$, we may identify all the delay values $\tau_{kl}^{(i)}$, $l = 1, 2, \ldots, L - 1$, $\tau^{(i-1)} < \tau_{k1}^{(i)} < \tau_{k2}^{(i)} < \cdots < \tau_{k,L-1}^{(i)} < \tau^{(i)}$, such that $(d_F \theta/d \tau)_{\tau=\tau_{kl}^{(i)}} = 0$. Let $\tau_{k0}^{(i)} = \tau^{(i-1)}$, $\tau_{kL}^{(i)} = \tau^{(i)}$. Then, we may conclude, by continuity, that the crossing direction at the curve $\omega_k^{(i)}(\tau)$ remains invariant for all $\tau \in (\tau_{k,l-1}^{(i)}, \tau_{kl}^{(i)})$, $l = 1, 2, \ldots, L$.
Note that the intervals of invariant crossing direction $(\tau_{k,l-1}^{(i)}, \tau_{kl}^{(i)})$ are, in general, different for different frequency curves.

### V. NUMERICAL EXAMPLES

In this section, we present three examples to illustrate the method developed in this paper.

**Example 1.** We first consider the stellar dynamos model mentioned in the introduction. The system characteristic equation is given in (1). Therefore, $$P(\lambda, \tau) = \lambda^2 + 2c_2 \lambda + c_2^2,$$ $$Q(\lambda, \tau) = -c_1 c_3 e^{-c_2 \tau}.$$ The parameters are set as $c_1 = -10$, $c_2 = 2$, $c_3 = 3$. We are concerned with the stability of the system for $\tau \in \mathcal{I} = [0, 2]$. Since $\text{ord}(P_\tau) = 2$ and $\text{ord}(Q_\tau) = 0$, Assumption I holds. Assumption II requires that the following two equations not hold simultaneously for real $\omega$ and $\tau \in \mathcal{I}$: $$-\omega^2 + 2jc_2 \omega + c_2^2 = 0,$$ $$-c_1 c_3 e^{-c_2 \tau} = 0,$$ which is obviously true. The other assumptions can be verified as we carry out the remaining analysis. The function $F$ in this case is $$F(\omega, \tau) = \omega^4 + 2c_2^2 \omega^2 + c_2^4 - c_1^2 c_3^2 e^{-2c_2 \tau}. \quad (47)$$ Only one pair of parameters $(\omega, \tau) = (0, \tau^{(1)})$ simultaneously satisfies (5) and (9), where $$\tau^{(1)} = -\frac{1}{2c_2} \ln\left(\frac{c_2^4}{c_1^2 c_3^2}\right) \approx 1.007.$$ Therefore, Assumption IV is satisfied. The interval $\mathcal{I}$ is thus partitioned into two subintervals $\mathcal{I}^{(1)} = [\tau^{(0)}, \tau^{(1)}]$, $\mathcal{I}^{(2)} = [\tau^{(1)}, \tau^{(2)}]$, where $\tau^{(0)} = 0$, $\tau^{(2)} = 2$. There is one positive real root $\omega_1^{(1)}(\tau)$ of $F_\tau(\omega)$ for $\tau \in (0, \tau^{(1)})$. As $\tau$ reaches $\tau^{(1)}$, this solution merges with the negative solution $-\omega_1^{(1)}(\tau)$; the pair becomes complex as $\tau$ enters $\mathcal{I}^{(2)}$, so $F_\tau(\omega)$ has no real solution for $\tau \in \mathcal{I}^{(2)}$.
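The partition point can be reproduced numerically. The small check below (an illustrative sketch, using (47) and the closed form of $\tau^{(1)}$ above) evaluates $\tau^{(1)}$ and verifies that $\omega = 0$ indeed satisfies the magnitude condition there:

```python
import math

c1, c2, c3 = -10.0, 2.0, 3.0

def F(w, tau):
    # F(omega, tau) from (47)
    return w**4 + 2 * c2**2 * w**2 + c2**4 - c1**2 * c3**2 * math.exp(-2 * c2 * tau)

# tau^(1): the delay at which omega = 0 satisfies F(0, tau) = 0
tau_1 = -math.log(c2**4 / (c1**2 * c3**2)) / (2 * c2)

print(round(tau_1, 4))     # approximately 1.007
print(abs(F(0.0, tau_1)))  # approximately 0
```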
In this case, we have $$\omega_1^{(1)}(\tau) = \sqrt{|c_1 c_3| e^{-c_2 \tau} - c_2^2}.$$ Corresponding to $\omega = \omega_1^{(1)}(\tau)$, $\theta_1^{(1)}(\tau)$ defined in (26) is plotted against $\tau$ in the top diagram of Figure 1. It can be seen that the curve intersects the horizontal line $2\pi$ at $\tau_1 \approx 0.2748$ and $\tau_2 \approx 0.5314$. Therefore, $H_1 = 1$, $\omega_{11} = \omega^{(1)}_1(\tau_1) \approx 3.6490$, and $H_2 = 1$, $\omega_{21} = \omega^{(1)}_1(\tau_2) \approx 2.5228$. Since both $\tau_1$ and $\tau_2$ are different from $\tau^{(1)}$, it is easy to verify that Assumption IIIa holds because (9) does not hold for any $(\omega_h, \tau_i)$. Assumption III is further implied by Assumption IIIa.

![Graph](image)

**Fig. 1.** The stability analysis of the stellar dynamos. The two intersections between the graph of $\theta^{(1)}_1$ and the black-dashed line located at $2\pi$ correspond to the two delay values for which the system has a pair of imaginary roots. $N^u$ is the number of unstable roots of the stellar dynamos.

It can be verified that $\partial_\omega F(\omega^{(1)}_1(\tau), \tau) > 0$ for $\tau = 0.5$, and, according to Proposition 3, the inequality then holds for all $\tau \in \mathcal{I}_o^{(1)}$. It can be easily calculated that $$\frac{d}{d\tau} \theta^{(1)}_1(\tau_1) > 0, \quad \frac{d}{d\tau} \theta^{(1)}_1(\tau_2) < 0,$$ which is also obvious from the top diagram in Figure 1. Therefore, we conclude from (39) that a pair of characteristic roots cross the imaginary axis from the left-half plane to the right-half plane as $\tau$ increases through $\tau_1$, and this pair of characteristic roots return to the left-half plane as $\tau$ further increases through $\tau_2$. In other words, $\text{Inc}(\omega_{11}, \tau_1) = 1$, and $\text{Inc}(\omega_{21}, \tau_2) = -1$. A simple calculation shows that the system is asymptotically stable for $\tau = 0$.
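The two crossing delays can also be located by direct bisection on the phase condition $S = 1$, without constructing $\theta_1^{(1)}$ explicitly: a crossing exists exactly where the principal phase of $-P(j\omega)e^{j\omega\tau}/Q$ vanishes. A sketch (assuming only the closed form of $\omega_1^{(1)}$ above; the bracketing intervals are read off the figure):

```python
import math, cmath

c1, c2, c3 = -10.0, 2.0, 3.0

def omega(tau):
    # omega_1^(1)(tau), valid for tau < tau^(1)
    return math.sqrt(abs(c1 * c3) * math.exp(-c2 * tau) - c2 ** 2)

def phase_defect(tau):
    # principal phase of S = -P(j w) e^{j w tau} / Q;
    # a pair of imaginary characteristic roots exists where this vanishes
    w = omega(tau)
    P = (1j * w) ** 2 + 2 * c2 * (1j * w) + c2 ** 2
    Q = -c1 * c3 * math.exp(-c2 * tau)
    return cmath.phase(-P * cmath.exp(1j * w * tau) / Q)

def bisect(f, a, b, n=60):
    # plain bisection, assuming f(a) and f(b) have opposite signs
    for _ in range(n):
        m = 0.5 * (a + b)
        if (f(a) > 0) == (f(m) > 0):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

tau1 = bisect(phase_defect, 0.20, 0.40)
tau2 = bisect(phase_defect, 0.50, 0.70)
print(round(tau1, 4), round(omega(tau1), 4))  # approximately 0.2748, 3.6490
print(round(tau2, 4), round(omega(tau2), 4))  # approximately 0.5314, 2.5228
```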
A plot of $N^u(\tau)$ is shown in the bottom diagram of Figure 1, from which we conclude that the system is stable for $\tau \in [0, \tau_1) \cup (\tau_2, \tau^{(2)})$; it is unstable for $\tau \in (\tau_1, \tau_2)$.

**Example 2.** Consider the following characteristic equation representing the population dynamics in [13], $$\lambda^2 + a\lambda + c + (b(\tau)\lambda + d(\tau))e^{-\lambda\tau} = 0, \quad (48)$$ where $$b(\tau) = k_1 e^{-m\tau}, \quad d(\tau) = k_2 e^{-m\tau}.$$ The parameters are set as $$a = 2, \quad c = 1, \quad k_1 = 4, \quad k_2 = 2, \quad m = 0.35.$$ We analyse the stability of the system for $\mathcal{I} = [0, 2.5]$. By definition, we have $$P(\lambda, \tau) = \lambda^2 + a\lambda + c,$$ $$Q(\lambda, \tau) = b(\tau)\lambda + d(\tau).$$ Since $\text{ord}(P_\tau) = 2$ and $\text{ord}(Q_\tau) = 1$, Assumption I holds. Assumption II requires that the following two equations not hold simultaneously for real $\omega$ and $\tau \in \mathcal{I}$: $$-\omega^2 + aj\omega + c = 0,$$ $$b(\tau)j\omega + d(\tau) = 0,$$ which can be easily verified to be true. The function $F$ in this case is $$F(\omega, \tau) = \omega^4 + (a^2 - b^2(\tau) - 2c)\omega^2 + c^2 - d^2(\tau). \quad (49)$$ Solving (5) and (9) together for $(\omega, \tau) \in \mathbb{R}_+ \times \mathcal{I}$, we obtain two pairs of solutions approximately equal to $(0, 1.981)$ and $(0.720, 2.391)$. The interval $\mathcal{I}$ is thus partitioned into three subintervals $\mathcal{I}^{(1)} = [\tau^{(0)}, \tau^{(1)}]$, $\mathcal{I}^{(2)} = [\tau^{(1)}, \tau^{(2)}]$, $\mathcal{I}^{(3)} = [\tau^{(2)}, \tau^{(3)}]$, where $\tau^{(0)} = 0$, $\tau^{(1)} \approx 1.981$, $\tau^{(2)} \approx 2.391$, $\tau^{(3)} = 2.5$. The polynomial $F_\tau(\omega)$ has one positive real root, namely $\omega^{(1)}_1(\tau)$, in the interval $(\tau^{(0)}, \tau^{(1)})$ and two positive roots, namely $\omega^{(2)}_1(\tau)$ and $\omega^{(2)}_2(\tau)$, in the interval $(\tau^{(1)}, \tau^{(2)})$.
It has no real root for $\tau \in (\tau^{(2)}, \tau^{(3)})$. We have the following expressions: $$\omega^{(1)}_1(\tau) = 2^{-1/2} \sqrt{(b^2(\tau) + 2c - a^2) + \Delta^{1/2}(\tau)}, \quad \tau \in \mathcal{I}^{(1)},$$ $$\omega^{(2)}_1(\tau) = 2^{-1/2} \sqrt{(b^2(\tau) + 2c - a^2) + \Delta^{1/2}(\tau)}, \quad \tau \in \mathcal{I}^{(2)},$$ $$\omega^{(2)}_2(\tau) = 2^{-1/2} \sqrt{(b^2(\tau) + 2c - a^2) - \Delta^{1/2}(\tau)}, \quad \tau \in \mathcal{I}^{(2)},$$ where $\Delta(\tau) = (b^2(\tau) + 2c - a^2)^2 - 4(c^2 - d^2(\tau))$. We observe that $\pm \omega^{(2)}_2(\tau)$ emerge as a pair of real roots of $F_\tau(\omega)$ at $\tau = \tau^{(1)}$, with $\omega^{(2)}_2(\tau^{(1)}) = 0$. As $\tau$ approaches $\tau^{(2)}$ from the left, the solution $\omega^{(2)}_2(\tau)$ merges with $\omega^{(2)}_1(\tau)$. These two positive roots become complex as $\tau$ increases beyond $\tau^{(2)}$. The corresponding phase functions $\theta^{(1)}_1(\tau)$, $\theta^{(2)}_1(\tau)$, $\theta^{(2)}_2(\tau)$ are plotted against $\tau$ in the top diagram of Figure 2. These curves intersect the horizontal line 0 at $\tau_1 \approx 0.7576$ and $\tau_2 \approx 2.1745$. Therefore, $H_1 = 1$, $\omega_{11} = \omega^{(1)}_1(\tau_1) \approx 2.7556$ and $H_2 = 1$, $\omega_{21} = \omega^{(2)}_1(\tau_2) \approx 1.1837$. Since both $\tau_1$ and $\tau_2$ are different from either $\tau^{(1)}$ or $\tau^{(2)}$, it is easy to see that (9) does not hold for any $(\omega_h, \tau_i)$. Consequently, Assumption IIIa holds, which also implies Assumption III. It can be verified that $$\partial_\omega F(\omega^{(1)}_1(1), 1) > 0, \quad \partial_\omega F(\omega^{(2)}_1(2), 2) > 0,$$ and therefore, by Proposition 3, $\partial_\omega F(\omega^{(1)}_1(\tau), \tau) > 0$ for $\tau \in (\tau^{(0)}, \tau^{(1)})$ and $\partial_\omega F(\omega^{(2)}_1(\tau), \tau) > 0$ for $\tau \in (\tau^{(1)}, \tau^{(2)})$.
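The partition points and crossing delays of this example can be reproduced numerically. A sketch follows (the parameter value $m = 0.35$ is used, consistent with the quoted partition points $\tau^{(1)} \approx 1.981$ and $\tau^{(2)} \approx 2.391$; the bracketing intervals for the bisection are read off the figure):

```python
import math, cmath

a, c, k1, k2, m = 2.0, 1.0, 4.0, 2.0, 0.35

b = lambda tau: k1 * math.exp(-m * tau)
d = lambda tau: k2 * math.exp(-m * tau)

def Delta(tau):
    return (b(tau)**2 + 2*c - a**2)**2 - 4*(c**2 - d(tau)**2)

# partition points: F(0, tau) = c^2 - d^2(tau) = 0, and Delta(tau) = 0;
# for these parameters, with u = exp(-2*m*tau), Delta = 256*u^2 - 48*u
tau_p1 = math.log(k2 / c) / m               # d(tau) = c
tau_p2 = -math.log(3.0 / 16.0) / (2 * m)    # u = 3/16
print(round(tau_p1, 4), round(tau_p2, 4))   # approximately 1.9804, 2.3914

def omega1(tau):
    # larger positive root of F_tau: omega_1^(1) on I^(1), omega_1^(2) on I^(2)
    s = b(tau)**2 + 2*c - a**2
    return math.sqrt((s + math.sqrt(Delta(tau))) / 2)

def phase_defect(tau):
    # principal phase of S = -P(j w) e^{j w tau} / Q(j w, tau); zero at a crossing
    w = omega1(tau)
    P = -w**2 + a*1j*w + c
    Q = b(tau)*1j*w + d(tau)
    return cmath.phase(-P * cmath.exp(1j*w*tau) / Q)

def bisect(f, lo, hi, n=60):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau1 = bisect(phase_defect, 0.5, 1.0)    # on omega_1^(1)
tau2 = bisect(phase_defect, 2.0, 2.35)   # on omega_1^(2)
print(round(tau1, 4), round(tau2, 4))    # approximately 0.7576, 2.1745
```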
Computation shows that $$\frac{d}{d\tau} \theta^{(1)}_1(\tau_1) > 0, \quad \frac{d}{d\tau} \theta^{(2)}_1(\tau_2) < 0,$$ which also follows from the graph of the phase functions plotted in the top diagram of Figure 2. We deduce by using (39) that a pair of characteristic roots cross the imaginary axis from the left-half plane to the right-half plane as $\tau$ increases through $\tau_1$. Another pair of characteristic roots cross the imaginary axis from the right-half plane to the left-half plane as $\tau$ increases through $\tau_2$. Consequently, we have $\text{Inc}(\omega_{11}, \tau_1) = 1$ and $\text{Inc}(\omega_{21}, \tau_2) = -1$. It is easy to verify that (48) is asymptotically stable for $\tau = 0$. Therefore, we conclude that the system is asymptotically stable for $\tau \in [0, \tau_1) \cup (\tau_2, 2.5]$; it is unstable for $\tau \in (\tau_1, \tau_2)$. The plot of $N^u(\tau)$ is given in the bottom diagram of Figure 2. In these two examples, after all the crossing pairs have been identified, the method in [12] may also be used to determine the crossing direction of each such pair, thus completing the stability analysis. However, a systematic method to identify such pairs, which requires us to divide the delay interval of interest into subintervals, was not considered in [12]. The following example shows that it is not always necessary to divide the interval of interest even if condition (11), which is assumed in [12], is violated.

**Example 3.** Consider a system with the following characteristic equation for $\mathcal{I} = [0, 1]$: $$\lambda^2 + 4 + ((1 - 2e^{-2\tau})\lambda + 1 - 4e^{-2\tau})e^{-\lambda\tau} = 0. \quad (50)$$ We notice that $P(j\omega, \tau) + Q(j\omega, \tau) = 0$ when $\tau = \frac{1}{2}\ln(2)$ and $\omega = \sqrt{3}$. Therefore condition (11), which is assumed in [12], is not satisfied. However, we can verify that all of our assumptions are satisfied.
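Since condition (11) concerns only the coefficient polynomials and not the exponential factor, its violation at a point can be checked directly. The sketch below (an illustrative check, not from the original text) verifies that $P + Q$ vanishes at $(\omega, \tau) = (\sqrt{3}, \frac{1}{2}\ln 2)$, while the characteristic function (50) itself does not vanish there:

```python
import math, cmath

# the point where P(j*omega, tau) + Q(j*omega, tau) = 0, cf. the discussion above
tau0 = 0.5 * math.log(2.0)
w0 = math.sqrt(3.0)

def P(lam):
    return lam ** 2 + 4

def Q(lam, tau):
    u = math.exp(-2 * tau)
    return (1 - 2 * u) * lam + 1 - 4 * u

s = 1j * w0
res_pq = abs(P(s) + Q(s, tau0))                           # condition (11) quantity
res_char = abs(P(s) + Q(s, tau0) * cmath.exp(-s * tau0))  # characteristic function (50)
print(res_pq)    # essentially 0: condition (11) of [12] fails here
print(res_char)  # clearly nonzero: (j*w0, tau0) is not a characteristic root
```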
We have $$F(\omega, \tau) = \omega^4 - (4e^{-4\tau} - 4e^{-2\tau} + 9)\omega^2 + 15 + 8e^{-2\tau} - 16e^{-4\tau}.$$ We find that no $(\omega, \tau) \in \mathbb{R}_+ \times \mathcal{I}$ simultaneously satisfies (5) and (9), which means $\mathcal{J}^{(1)} = \emptyset$. There are two positive roots of $F_\tau(\omega)$ for all $\tau \in \mathcal{I}^{(1)}$; therefore $\omega_1^{(1)}(\tau)$ and $\omega_2^{(1)}(\tau)$ are defined on $\mathcal{I}^{(1)}$. With the corresponding phase functions plotted in the upper diagram of Figure 3, we observe that $\theta_1^{(1)}(\tau)$ intersects the horizontal line 0 at $\tau_1 \approx 0.1982$ and $\theta_2^{(1)}(\tau)$ intersects the horizontal line $2\pi$ at $\tau_2 \approx 0.6933$. We also have $\omega_{11} = \omega_1^{(1)}(\tau_1) \approx 1.4945$ and $\omega_{12} = \omega_2^{(1)}(\tau_2) \approx 2.2656$. Computation shows that $\partial_\omega F(\omega_{11}, \tau_1) < 0$ and $\partial_\omega F(\omega_{12}, \tau_2) > 0$. From Figure 3 it is easy to see that $\frac{d}{d\tau}\theta_1^{(1)}(\tau_1) > 0$ and $\frac{d}{d\tau}\theta_2^{(1)}(\tau_2) > 0$. Accordingly, we can deduce that the characteristic root $j\omega_{11}$ moves toward the left-half plane and the characteristic root $j\omega_{12}$ moves toward the right-half plane as $\tau$ increases through $\tau_1$ and $\tau_2$, respectively. The system has two unstable characteristic roots for $\tau = 0$; therefore it is asymptotically stable for $\tau \in (\tau_1, \tau_2)$ and unstable for $\tau \in [0, \tau_1) \cup (\tau_2, 1]$.

![Figure 2](image1.png)

*Fig. 2.* The stability analysis of the population dynamics (48). The graphs of $\theta_1^{(1)}$ and $\theta_1^{(2)}$ intersect the black-dashed line located at 0 at $\tau_1$ and $\tau_2$, respectively, which correspond to the two delay values for which the system has a pair of imaginary characteristic roots. $N^u$ is the number of unstable roots of (48).

![Figure 3](image2.png)

*Fig. 3.* The stability analysis of Example 3.
The graph of $\theta_1^{(1)}$ intersects the black-dashed line located at 0 at $\tau_1$ and the graph of $\theta_2^{(1)}$ intersects the black-dashed line located at $2\pi$ at $\tau_2$. Therefore the system admits imaginary roots at $\tau_1$ and $\tau_2$. $N^u$ is the number of unstable roots of the system.

### VI. CONCLUSION

A method of stability analysis for time-delay systems with coefficients depending on the delay has been developed. The method is an extension of the one given in [12] to a more general case. The method partitions the range of interest for the delay into subintervals so that the magnitude condition yields a fixed number of frequency solutions $\omega$ as functions of the delay $\tau$ within each subinterval. The crossing condition is expressed in a general form, and a simplified derivation of the first-order derivative crossing criterion is obtained.

### REFERENCES

[1] K. L. Cooke and P. van den Driessche, “On zeroes of some transcendental equations,” *Funkcialaj Ekvacioj*, vol. 29, pp. 77–90, 1986. [2] F. G. Boese, “Stability with respect to the delay: On a paper by K. L. Cooke and P. van den Driessche,” *J. Math. Anal. Appl.*, vol. 228, pp. 293–321, 1998. [3] K. Gu, V. L. Kharitonov, and J. Chen, *Stability of Time-Delay Systems*, Birkhäuser, Boston, 2003. [4] K. Gu, “A review of some subtleties of practical relevance for time-delay systems of neutral type,” *ISRN Applied Mathematics*, vol. 2012, Article ID 725783, 46 pages, 2012. doi: 10.5402/2012/725783. [5] K. Walton and J. E. Marshall, “Direct method for TDS stability analysis,” *IEE Proc.*, vol. 134, part D, pp. 101–107, 1987. [6] L. E. El’sgol’ts and S. B. Norkin, *Introduction to the Theory and Application of Differential Equations with Deviating Arguments*, translated by J. L. Casti, Academic Press, New York, 1973. [7] E. N. Gryazina, B. T. Polyak, and A. A. Tremba, “D-decomposition technique state-of-the-art,” *Automation and Remote Control*, vol. 69, no. 12, pp. 1991–2026, 2008. [8] K. Knopp,
*Theory of Functions*, Parts I and II, translated by F. Bagemihl, Dover, Mineola, NY, 1996. [9] W. Michiels and S.-I. Niculescu, *Stability, Control, and Computation for Time-Delay Systems: An Eigenvalue-Based Approach*, vol. 27, SIAM, Philadelphia, 2014. [10] S.-I. Niculescu, *Delay Effects on Stability: A Robust Control Approach*, vol. 269, Springer, Heidelberg, 2001. [11] R. Sipahi, S.-I. Niculescu, C. T. Abdallah, W. Michiels, and K. Gu, “Stability and stabilization of systems with time delay,” *IEEE Control Systems*, vol. 31, no. 1, pp. 38–65, Feb. 2011. doi: 10.1109/MCS.2010.939135. [12] E. Beretta and Y. Kuang, “Geometric stability switch criteria in delay differential systems with delay dependent parameters,” *SIAM Journal on Mathematical Analysis*, vol. 33, no. 5, pp. 1144–1165, 2002. [13] R. M. Nisbet, W. S. C. Gurney, and J. A. J. Metz, “Stage structure models applied in evolutionary ecology,” *Biomathematics*, vol. 18, pp. 428–449, 1989. [14] R. Bence and R. M. Nisbet, “Space-limited recruitment in open systems: The importance of time delays,” *Ecology*, vol. 70, pp. 1434–1441, 1989. [15] A. L. Wilmot-Smith, D. Nandy, and G. Hornig, “A time delay model for solar and stellar dynamos,” *The Astrophysical Journal*, vol. 652, no. 1, p. 696, 2006. [16] F. Crauste, “A review on local asymptotic stability analysis for mathematical models of hematopoiesis with delay and delay-dependent coefficients,” *Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity*, vol. 9, pp. 121–143, 2011. [17] M. S. Lee and C. S. Hsu, “On the $\tau$-decomposition method of stability analysis for retarded dynamical systems,” *SIAM J. Control*, vol. 7, pp. 249–259, 1969. [18] D. Hertz, E. J. Jury, and E. Zeheb, “Stability independent and dependent of delay for delay differential systems,” *J. Franklin Inst.*, vol. 318, no. 3, pp. 143–150, 1984. [19] X. G. Li, S.-I. Niculescu, A. Cela, et al., “Invariance properties for a class of quasipolynomials,” *Automatica*, vol. 50, no. 3, pp. 890–895, 2014. [20] K. Gu, C. Jin, I. Boussaada and S. I.
Niculescu, “Towards more general stability analysis of systems with delay-dependent coefficients,” *2016 IEEE 55th Conference on Decision and Control (CDC)*, Las Vegas, NV, 2016, pp. 3161–3166. [21] C. Jin, S.-I. Niculescu, I. Boussaada, and K. Gu, “Stability analysis of control systems subject to delay-difference feedback,” *Proceedings of the IFAC World Congress*, Toulouse, France, 2017. [22] L. V. Ahlfors, *Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable*, McGraw-Hill, 1953. [23] J. Chen, P. Fu, S.-I. Niculescu, and Z. Guan, “An eigenvalue perturbation approach to stability analysis, part I: Eigenvalue series of matrix operators,” *SIAM Journal on Control and Optimization*, vol. 48, no. 8, pp. 5564–5582, 2010. [24] J. Chen, P. Fu, S.-I. Niculescu, and Z. Guan, “An eigenvalue perturbation approach to stability analysis, part II: When will zeros of time-delay systems cross imaginary axis?,” *SIAM Journal on Control and Optimization*, vol. 48, no. 8, pp. 5583–5605, 2010. [25] X. G. Li, S.-I. Niculescu, A. Cela, L. Zhang, and X. Li, “A frequency-sweeping framework for stability analysis of time-delay systems,” *IEEE Transactions on Automatic Control*, vol. 62, no. 8, pp. 3701–3716, Aug. 2017. [26] D. Israelsson and A. Johnsson, “A theory for circumnutations in Helianthus annuus,” *Physiol. Plant.*, vol. 20, pp. 957–976, 1967. [27] C. Foley and M. C. Mackey, “Mathematical model for G-CSF administration after chemotherapy,” *Journal of Theoretical Biology*, vol. 257, no. 1, pp. 27–44, 2009.

---

**Chi Jin** was born in Shanghai, China in 1989. He received the B.S. degree from Tongji University, China, in 2012. He is currently a Ph.D. student with L2S (Laboratory of Signals and Systems) and Université Paris-Sud, located at Gif-sur-Yvette, France. His research interests include time-delay systems and nonlinear control, with applications to automotive vehicles and robotics.
**Keqin Gu** is a Distinguished Research Professor in the Department of Mechanical and Industrial Engineering, Southern Illinois University Edwardsville. He received his B.S. and M.S. degrees from Zhejiang University, and his Ph.D. from the Georgia Institute of Technology. His research interests include control systems and nonlinear dynamical systems, with emphasis on time-delay systems. He has authored or co-authored more than 100 papers in archival journals and technical conferences, and is the lead author of the book Stability of Time-Delay Systems. He was the US coordinator of three US-France cooperative research projects. He serves or has served on the editorial boards of a number of major technical journals in the systems and control area, including Automatica, IEEE Transactions on Automatic Control, and Systems and Control Letters. He has also served as a member of the program committees of a number of international conferences and workshops in the area, including the Conference on Decision and Control and the American Control Conference.

**Islam Boussaada** received his M.Sc. degree in Mathematics from University Tunis II as well as an M.Sc. degree in Pure Mathematics from University Paris 7 in 2004. In December 2008, he received his Ph.D. degree in Mathematics from Normandy University. In 2016 he obtained the French habilitation (HDR) in Physics from University Paris Saclay. In 2010, he began serving for two years as a post-doctoral fellow in control of time-delay systems at L2S, Supélec-CNRS-University Paris Sud. Since 2012, he has been serving as an associate professor at IPSA and as an associate researcher at L2S of University Paris Saclay, CentraleSupélec-CNRS-University Paris Sud. Since 2016, he has been an associate member of the Inria Saclay DISCO project. His research interests lie in the qualitative theory of dynamical systems and its application in control.
It covers the analysis of the delay effect on dynamics, stability and stabilization of delay systems and hyperbolic partial differential equations, oscillations and periodic solutions of functional differential equations, and the control of vibrations. He has authored or co-authored 2 monographs and more than 30 papers in journals, book chapters, and international conference proceedings.

**Silviu-Iulian Niculescu** received the B.S. degree from the Polytechnical Institute of Bucharest, Romania, the M.Sc. and Ph.D. degrees from the Institut National Polytechnique de Grenoble, France, and the French Habilitation (HDR) from Université de Technologie de Compiègne, all in Automatic Control, in 1992, 1993, 1996, and 2003, respectively. He is currently Research Director at the CNRS (French National Center for Scientific Research) and the head of L2S (Laboratory of Signals and Systems), a joint research unit of CNRS with CentraleSupélec and Université Paris-Sud located at Gif-sur-Yvette. He is author or coauthor of 10 books and of more than 475 scientific papers. His research interests include delay systems, robust control, operator theory, and numerical methods in optimization, and their applications to the design of engineering systems. He has been responsible for the IFAC Research Group on “Time-delay systems” since its creation in October 2007. He served as Associate Editor for several journals in the control area, including the IEEE Transactions on Automatic Control (2003-2005). Dr. Niculescu was awarded the CNRS Silver and Bronze Medals for scientific research and the Ph.D. Thesis Award from INPG, Grenoble (France) in 2011, 2001, and 1996, respectively. For further information, please visit http://www.l2s.centralesupelec.fr/perso/silviu.niculescu.
BUILDING AND MAINTAINING CLIENT RELATIONSHIPS

LEADING LAWYERS ON ATTRACTING NEW CLIENTS, DEVELOPING EFFECTIVE MARKETING TECHNIQUES, AND ESTABLISHING A STRONG REPUTATION

Benjamin F. Wilson, Beveridge & Diamond PC
Joseph D. Garrison, Garrison, Levin-Epstein, Chimes, Richardson & Fitzgerald PC
George P. McAndrews, McAndrews, Held & Malloy Ltd.
Ronald H. Shechtman, Pryor Cashman LLP
Kevin R. Pinegar, Durham Jones & Pinegar PC

| Author | Title | Page |
|---------------------------------------------|----------------------------------------------------------------------|------|
| Benjamin F. Wilson | Managing Principal, Beveridge & Diamond PC | 7 |
| | MEETING CLIENT NEEDS IN ENVIRONMENTAL LAW | |
| Joseph D. Garrison | Managing Shareholder, Garrison, Levin-Epstein, Chimes, Richardson & Fitzgerald PC | 21 |
| | SUCCESSFUL MARKETING STRATEGIES FOR AN EMPLOYMENT LAW PRACTICE | |
| George P. McAndrews | Founding and Managing Partner, McAndrews, Held & Malloy Ltd. | 33 |
| | A PATENT FIRM'S CHALLENGES IN GROWING ITS CLIENT BASE | |
| Ronald H. Shechtman | Managing Partner and Chair, Labor and Employment Group, Pryor Cashman LLP | 47 |
| | PROVIDING VALUE AND SUPERIOR CLIENT SERVICE IN AN ERA OF COST PRESSURE | |
| Kevin R. Pinegar | President, Durham Jones & Pinegar PC | 59 |
| | CLIENT SATISFACTION: KEY TO CLIENT ATTRACTION AND RETENTION | |
| | Appendices | 75 |

Providing Value and Superior Client Service in an Era of Cost Pressure

Ronald H. Shechtman
Managing Partner and Chair, Labor and Employment Group
Pryor Cashman LLP

**Providing Value-Added Service**

The game has changed for law firms of all sizes. Since the recession, clients are demanding that counsel do more with less and are keeping a close eye on costs. Now, more than ever, law firms must ensure clients clearly understand the value of the legal work they provide. For firms to survive, the mantra “know your client” must be ingrained in every member—partners, associates, and staff.
For some firms, the new reality of value-added service has created something of a culture shock. They can no longer rely on simply being the biggest or most respected name to retain clients. They must provide demonstrable value that clients will recognize when they review statements—something they are now doing with a much more critical eye. It is no longer enough for firms to throw innumerable partners and associates at a particular matter and bill the client. The cost must reflect the value of the legal work, not just the size of the team. For other firms, however, this has not been such a cataclysmic shift. At Pryor Cashman, for example, we have always believed in being a “right-sized” firm that is unencumbered by bureaucracy and not pressured by excessive leverage of associates to partners. We work closely with our clients to determine their specific needs and staff accordingly. We look at what a matter really needs and then work with the client to achieve its goals. This lean, cost-sensitive approach also keeps the firm agile in responding to market shifts. Compared with other firms, we are more able to make quick decisions regarding alternative fee arrangements and billing, which has proven advantageous. But the success of any client service philosophy can be measured only in results. For Pryor Cashman, there has been a demonstrable benefit to operating in the way we do. We have not only survived, but also excelled, as a mid-sized firm in New York, which is arguably the most competitive legal market in the world. While other firms began a long spiral downward in 2008, that year was one of the best ever for Pryor Cashman. We experienced turbulence in the beginning of 2009 and saw a drop in work for our real estate and corporate practices in particular, as did most firms. However, the last quarter of 2009 was the best ever for our firm, and 2010 looks strong. 
In light of this, we believe our client-focused model for value-added legal services works—and can be adapted for any firm, regardless of market, size, or core practices. It allows us to take advantage of boom periods, while providing near bulletproof protection in the face of economic instability. **Methods to Maximize Client Satisfaction** Relationships are at the core of what we do. We find that friendships often develop between our attorneys and their clients, even if the original purpose for the relationship was purely business. This is the natural result of the way in which we partner with clients. Our attorneys pay close attention to the needs of their clients. They listen. They come to completely understand, and share in, the hopes, goals, and fears of their clients. This creates a personal bond that is crucial to client satisfaction. When clients feel they are truly understood, they are more likely to see value in the legal work we do for them. But even strong client relationships can experience bumps in the road. When this happens, a client’s trust in the attorney’s complete understanding of their needs becomes invaluable. There is already a precedent for two-way communication, and a client will feel comfortable bringing the matter up. If attorneys have truly been doing their client-service job and demonstrating value, the client will communicate the problem and ask for help to fix it. If not, the client is far more likely to just move on. Because we place such a high value on knowing our clients, we have never experienced a need for client satisfaction surveys. Having third parties speak to our clients would be inconsistent with who we are and seem out of character. If we are doing our job and providing value-added, client-focused service, we know what our clients think about our work because we have been listening throughout the engagement. 
We are in constant contact with our clients, so a formal survey to take the pulse of the relationship is unnecessary.

**Client Development Based on Relationships**

Relationships are also at the core of how we manage and develop clients. This is driven not only by our firm’s philosophy, but also by our client base. While we may represent some companies that fall within the Fortune 100, the majority of our clients are mid-sized, mid-market companies. They usually have come to the firm largely because of a relationship with one of our partners. This relationship then informs all of what we do for that client for as long as they are with us. These relationships also shape our approach to staffing. Generally, one partner is responsible for guiding our work for a client. This partner not only works directly with the client, but also ensures all matters are properly staffed, assigned, and billed. The relationship partner is also responsible for ensuring a matter is not passed off to another attorney who is unfamiliar with the client’s business and industry and the issue at hand. Pryor Cashman clients will always know the attorneys working on their matters. It is a given in our firm. Relationship partners are also responsible for ensuring that clients receive value for their legal services dollar and that we are providing what they need in terms of service.

**Adding New Practice Areas to Attract New Clients**

In addition to pressures on billing, the recession has also brought about dramatic changes in the profitability of practice areas once thought to be bulletproof. It is no secret that in the past two years real estate and merger and acquisition (M&A) practices lost much of their luster. Anything connected to banking or financial services also took a hit. Firms were scrambling to pinpoint new profit centers and attract clients. Figuring out the new growth practices has meant firms must think creatively and reconsider long-held beliefs.
For example, before the economic downturn, the conventional wisdom among larger firms was that personal services practices—trusts and estates, family law, and other areas—were less profitable than corporate work. This is no longer the case. At Pryor Cashman, we saw these practices as a way to retain clients, expand services, and develop new business. In 2009, we expanded our family law practice, added a business immigration practice, and grew a practice with China-based companies. In addition, in early 2010 we expanded our charities and tax-exempt organizations practice, complementing our larger litigation and corporate practices. By offering a wider array of services, we were able to secure additional business from current clients, when we might otherwise have lost them completely when their real estate or M&A matters dried up. Moreover, we lured in new clients by building our reputation as a firm that could handle the full scope of their legal work at a reasonable price. Where we were once thought of as just a corporate firm, we are now known for our personal services practices, as well. Our willingness to embrace change has also meant that we have been able to tap entirely new client markets. When we saw our traditional U.S.-based mid-market M&A work contract, we looked beyond national borders for new clients. That led us to target companies in China, a market that is vibrant and growing. We recruited a young attorney from China with an American law degree and the potential to develop business relationships in her native country. While we already handled finance work for a few China-based companies, our new hire enabled us to add significant value to what we could offer these existing clients, create critical mass, and attract new business. These clients are very cost-conscious about what they will spend on American legal services, and our reputation for lean staffing and exceptional value has resonated well with them.
**New Client Billing Strategies**

As with exploring new practice areas, the recession forced firms to explore alternative and creative billing. For some firms, this was merely a Band-Aid solution used to appease clients and did not reflect a true change in philosophy. Again, if these firms had been listening to clients before the economic downturn, they would have already been sensitive to billing issues. Because we have always focused on being cost-effective, Pryor Cashman had already considered alternative billing and had implemented alternative fee arrangements with great success in appropriate circumstances for many years. In the end, if it means providing value to our clients, we are willing to experiment with billing. Our structure allows us to make decisions about alternative arrangements promptly and carefully, while keeping in mind that alternative billing does come with risks. Because we have traditionally paid such close attention to staffing, we have a good sense of what given matters will require in terms of the amount of work, and this reduces the risk that we will underestimate the arrangement. For firms that do not have this kind of internal culture, however, the prospect of fixed-fee billing is likely far more risky. Of course, alternative billing arrangements are better suited for certain types of work. With transactional matters, for example, it is easier to anticipate the requirements of a given matter and estimate costs. It is also easier to foresee potential problems and plan for them. When developing alternative billing arrangements for transactional work, we set specific parameters and expectations with a client so there are no surprises. Litigation poses a unique challenge in terms of alternative fee arrangements. For appropriate matters, we have used premium billing or upside arrangements to offset significant rate adjustments. We will, for example, implement billing caps or use premiums to reduce fees based on our results.
Generally, these arrangements have worked out well for both the firm and our clients. **Strategies for Retaining Top Attorneys** A law firm’s success depends on the quality of the attorneys who work there. A firm must foster the growth of partners and associates and strategically recruit new talent. Where some firms fall down is in retaining these top attorneys. If a firm does not provide challenging work and an environment that encourages all attorneys to grow and develop, they will leave. At Pryor Cashman, we have always taken this risk seriously and have taken steps to address it. Unlike large firms where even partners complain about how hard it is to stand out, we provide ample opportunities for our attorneys to shine. The entrepreneurial nature of the firm allows and encourages attorneys to bring in their own clients and develop business. They work on lean teams of three or four individuals in which associates are granted a high level of responsibility. Work is assigned based on ability, not rank, and associates are afforded an opportunity to assume important day-to-day roles on cases from inception through trial, which is unheard of at most firms of similar size. Our attorneys work hard, but the workload is manageable, and they have lives outside the office. There is also a clear trajectory from associate to partner. As a result, we are able to attract top talent and offer them a different kind of practice. A new hire may work on more matters in their first few months at Pryor Cashman than they did during years at their previous firm. The nature of much of our litigation, including intellectual property work, creates cases that resolve without years of discovery and gives associates the opportunity to see a case through from beginning to end. We are also an entrepreneurial firm, which is another key to retaining talent.
Our attorneys are encouraged and rewarded for their efforts, success, and business development—and this process starts at an early stage. Associates receive commissions on work they originate; attorneys at all levels are encouraged and supported to develop business; and our compensation model generally favors originators. This means we have built a culture of business development. It is no longer a shock to associates when they make partner and are told they must now shoulder marketing responsibilities they have not been trained for. Attorneys are far less likely to feel expendable during difficult times because they have developed the skill to bring in new business. They simply put what they know to use and go after new business. Attorneys are also encouraged to sell internally and cross-promote their expertise among current clients. In this collaborative environment, partners open their books of business and identify opportunities to bring in colleagues to work on that business. Long gone are the days when partners were reluctant to share clients for fear that the quality of the work would be compromised or their compensation would be diminished. This creates an environment of collegiality and teamwork that talented attorneys are reluctant to leave. **Mentoring Young Attorneys** Developing talent in up-and-coming attorneys is important to secure the future of the firm. We have a one-to-one ratio of partners to non-partners. Associates work closely with seasoned attorneys who have more experience with clients and added responsibilities at the firm. Partners are expected to closely monitor the work of the associates they mentor or manage. They give clear information as to time parameters and expectations and operate with an open-door approach. If an associate feels she is spending more time on a matter than has been approved, she is encouraged to address this with the responsible partner, who may offer further direction, insight, or instruction.
This not only provides an environment where associates are encouraged to ask questions and learn, but it also ensures we are being cost-effective in our work for clients. Associates are not permitted merely to churn out hours; they are held accountable for keeping their work focused and directed. **Trends in Summer Associate Hiring and Billing** The recession has surely affected firms’ hiring and billing practices when it comes to summer associates. With a still-sluggish economy, companies will no longer pay for a summer associate’s on-the-job training or commit to hiring levels more than two years beyond the hiring decision. A recent article in the *New York Law Journal* (“Summer Colds are the Worst: Client Freeze Paying for 2Ls Time,” June 8, 2010), for example, reported that of the ten large firms surveyed, nine had reduced their summer associate class sizes by at least 20 percent, and some by as much as 80 percent. The article also noted that Citibank has put its law firms on notice that it will not pay for summer associate time. Viacom has not allowed law firms to bill for summer associate work for several years. This is clearly a sea change for most firms. Had the *New York Law Journal* surveyed Pryor Cashman for its story, the reporter would have discovered that the firm took a conservative approach to summer associate billing well before the recession. For our summer program, we generally hire a select group of three to five second-year law students whom we hope to hire as first-year lawyers after they graduate. If candidates are not right for our firm, we do not make offers simply to round out a summer class. In terms of billing, we address the issue on a case-by-case basis. Our well-vetted summer associates are managed closely by mentoring attorneys who have authority to write off their time on a matter.
We see this as a form of alternative billing in which the attorney responsible for the client relationship serves as the gatekeeper, ensuring that all bills accurately reflect the value of services rendered. As the *Law Journal* article indicated, our approach is the one favored by clients. **Being a Cost-Effective Law Firm** As we pull out of the recession, recognizing that the road to recovery is long and setbacks remain possible, the watchword for law firms will be *value*. Not only are clients weary of escalating legal costs, but they are refusing to pay them and taking their business to more receptive and cost-sensitive firms. Firms must be cost-effective if they are to survive, and this means that some long-standing practices must end. It is no longer a given that law firms will offer astronomical salaries and bonuses to first-year associates fresh out of law school and then pass those costs on to their clients. Clients will simply not stand for that in this economic climate. They will not subsidize the training of inexperienced lawyers. Today’s law firm client must see a demonstrable relationship between the fees they are charged and the value they receive. Money matters, and clients will no longer pay outside counsel whatever they charge. This is more of a problem for larger law firms that were built on a culture of unlimited client cash flow. Mid-sized firms, because of their client base and operating constraints, have always had to be more sensitive to cost concerns. They were slower to increase salaries and fees, and they are more driven to control costs. As a result, they have had less of a problem making the value case for the cost of their services with clients.
Simply put, clients are more likely to look at rising legal fees and ask, “Can we afford this any longer?” For firms unable to justify the value in their fees, the answer they receive from clients will more frequently be “no.” For mid-sized firms that operate on a lean basis, without frills that cost clients money, this is a time of opportunity. They have always been the choice for value-added service, and clients are coming to understand this. **Final Thoughts** For any law firm to succeed in this environment, a relationship has to be created between the responsible attorney and the client. The relationship has to be managed so the client understands that the responsible attorney is completely dedicated to understanding the client’s business and business needs, as reflected in the counsel he or she provides. The attorney must work with the client to develop a plan and a relationship in which the client is getting value and understands that the arrangement is fair and reasonable. Throughout the representation, management must remain sensitive to value and responsiveness, ensuring that the attorneys working on the matter are doing exactly what the client wants and needs. The client should never have to figure out who is responsible and in charge, or wonder where to go to find out what is happening, why it is happening, and what it means in terms of either case development or cost. **Key Takeaways**

- Client relationships fit no single mold. They should develop naturally and follow from the client’s needs and the firm’s measure of what is possible.
- Sensitivity to cost means skillfully monitoring one’s resources and using them effectively.
- A client-focused process is the only way to provide value-added legal services. It can be adapted by any firm, regardless of market, size, or core practices. It is a necessity to thrive in a robust economy and survive in the face of economic instability.

Ronald H.
Shechtman is Pryor Cashman’s managing partner and chair of the firm’s Labor and Employment Group. Before joining Pryor Cashman, he was a partner with Gordon & Shechtman PC. Mr. Shechtman represents diverse clients in labor-management relations matters and in employment matters dealing with the increasing legal complexity of today’s workplace. He litigates labor-management, Equal Employment Opportunity (EEO), wrongful discharge, Employee Retirement Income Security Act (ERISA), and related matters and assists clients in developing strategies to mitigate exposure to litigation and liability arising from the employment relationship. Named a “Super Lawyer” in the area of Employment and Labor Law, Mr. Shechtman frequently lectures and publishes in this area. He is on the faculty of New York University Law School, where he has taught labor law courses. In addition, he is a board member of the Law School’s Center for Labor and Employment Law. Mr. Shechtman also represents many companies, non-profit organizations, and artists in the entertainment, insurance, and restaurant industries, among others. He is active on a pro bono basis with a number of non-profit organizations, primarily in the performing arts. As Pryor Cashman’s managing partner, Mr. Shechtman focuses on enhancing client relationships, attracting and retaining top legal talent to serve clients, and developing the firm’s strategic direction. Since assuming the role in 2007, he has focused on preserving a culture at Pryor Cashman that fosters collegiality and an entrepreneurial spirit, allowing partners independence in how they develop and maintain their practices. The firm has recently been named by *The American Lawyer* as one of the forty “hot” mid-sized firms in the United States and by *Crain’s New York Business* as one of the fifty best places to work in New York City.
A 1972 graduate of New York University School of Law, where he was an Arthur Garfield Hays Fellow and editor of the *New York University Law Review*, Mr. Shechtman has been a member of its faculty since 2004. He received his B.A. from Amherst College. Mr. Shechtman is AV Peer Review Rated, Martindale-Hubbell’s highest peer recognition for ethical standards and legal ability. Dedication: We dedicate this chapter to our valued clients.
On July 20, 2000, Avista Utilities (Avista or the company), a division of Avista Corporation, filed its integrated resource plan (IRP) in accordance with Public Utility Commission of Oregon (Commission) Order No. 89-507. Avista held technical conferences prior to filing its plan. A summary of those activities is contained in Appendix A. Staff circulated a draft proposed order, recommending that the Commission acknowledge Avista's plan, on December 11, 2000. Staff's final proposed order was distributed January 16, 2001. At a public meeting on January 23, 2001, the Commission considered and adopted staff's final proposed order. **PROVISIONS OF THE PLAN AND COMMENTS** **Avista’s Least-Cost Plan** Unlike Avista’s previous plans, this least-cost plan (LCP, IRP or the plan) integrates both Avista’s North (Washington and Idaho) and South (Oregon) operating regions into one concise plan entitled *2000 Natural Gas Integrated Resource Plan*. The entire document was submitted to the Oregon, Washington and Idaho commissions. The document summarizes the resource decision-making process, its conclusions, and its two-year action plan. Technical appendices, modeling exhibits, and a glossary provide detailed supporting documentation. Avista’s 2000 IRP describes the basic components of the company’s planning process. The planning process includes a forecast of its future market demand, assessments of demand-side and supply-side resource options, analysis and selection of resource options for meeting future needs, and identification of actions required in the next two-year period to carry out the company’s resource strategy. • **Forecast.** The 2000 IRP uses a 10-year forecast horizon. In prior IRPs, Avista produced a 20-year plan consistent with Commission Order No. 89-507. Early in 1999, the Company requested and was granted permission to reduce the planning horizon to 10 years for this plan.
The Company's planning horizon for capital budgeting and pipeline capacity is 10 years; for revenue budgeting it is 5 years. The shorter planning horizon is consistent with current natural gas industry practice, as pipelines are now able to increase capacity in a more timely fashion than in the past. Also, due to the volatility in the natural gas market, local distribution companies' (LDCs') planning horizons are much shorter than 20 years to allow for flexibility with the market. Avista is achieving forecast efficiencies by utilizing common forecast assumptions between its natural gas forecasts and its electric operations forecasts. The forecast captures economic trends for Avista’s five-county service area. It aggregates expected population growth patterns, employment, income, anticipated natural gas prices, and potential impacts from the developing natural gas vehicle market. Econometric models, along with weather data, are employed to produce usage patterns for residential, firm commercial and firm industrial, interruptible, and special contract customers. High and low scenarios, along with the company’s base case, provide a range of sensitivities. For the base case, Avista expects sales to residential customers to grow at a compound rate of 4.1% over the ten-year period. Firm commercial and firm industrial sales are expected to grow at compound rates of 2.6% and 0.5%, respectively, over the ten-year period. Overall, total firm sales are expected to grow at a compound rate of 3.4% over the ten-year period. • **Demand-Side Resources.** Avista's demand-side resources are undergoing change and re-evaluation. The high-efficiency gas equipment programs for space and water heating have been operating at reduced incentive levels. Avista expected to end the direct customer incentive phase of the program as of December 31, 2000, but has recently filed to remove the termination date. Avista will fund an energy-efficiency conservation message to persuade customers to choose high-efficiency appliances.
Overall, between 1994 and the third quarter of 1999, the residential high-efficiency program attained 340,880 therms of first-year savings. Avista continues to offer its commercial incentive program as well as commercial and residential energy audits. Avista has committed to remain active in pursuing cost-effective DSM programs by re-evaluating the viability of additional gas DSM offerings as gas avoided costs increase. It is, in fact, the gas cost increases that have led the company to forestall termination of its high-efficiency equipment programs. Avista will also continue to investigate new gas end-use technologies and gas DSM implementation techniques. The IRP states that with expected higher avoided costs, additional DSM programs may become feasible, which the company will review. • **DSM’s Impact on Small Businesses.** Avista continues to utilize the private sector for providing DSM measures and programs. This addresses the concern, expressed in Section 303 of the Energy Policy Act of 1992, about the potential impact that utility integrated resource planning and DSM activities could have on small businesses. • **Supply-Side Resources.** Under the currently approved Gas Benchmark Mechanism, Avista Energy manages Avista Utilities’ supply and transportation needs, contracts, and capacity releases. The Gas Benchmark Mechanism expires in March of 2002. Avista Utilities employs traditional supply-side options such as storage and flowing gas supplies through interstate pipelines. Avista contracts with Northwest Pipeline Corporation (NPC) for interstate pipeline transportation into the Avista service areas. Avista also contracts with NPC for Jackson Prairie storage and Plymouth LNG. Jackson Prairie Storage is an underground storage project located next to NPC’s mainline near Chehalis, Washington. Plymouth LNG is a liquefied natural gas storage facility located next to NPC’s mainline near Plymouth, Washington.
Avista contracts with Pacific Gas and Electric Gas Transmission – Northwest (GTN) for interstate pipeline transportation to Medford, which commenced in November 1995. For the 2000 IRP, the company’s strategy is to contract for a reasonable amount of firm transportation to serve firm customers should a design peak day occur within about a seven- to ten-year period. From the company’s perspective, too much firm transportation could impair its goal of being a low-cost energy provider; with the increasing ability to release capacity, however, this risk is minimized. On the other hand, too little firm transportation reduces the company’s ability to be a reliable energy provider. The company is evaluating the potential expansion of the Eugene lateral or additional transportation on GTN to obtain additional capacity to Southern Oregon. • **Environmental Externality Costs.** Consistent with OPUC Order No. 93-695, Avista’s plan includes an analysis to consider the impact of environmental externality costs in planning for future energy resources. For the 2000 IRP, Avista’s analysis includes potential cost impacts that range from $0.06082 to $0.24166 per therm based on the emission cost adders specified in the OPUC Order. This analysis considers the natural gas environmental cost impacts from emitting carbon dioxide, nitric oxide, carbon monoxide, and methane. • **Integration Strategies.** Avista’s integrated resource portfolio, developed using the company’s SENDOUT model, indicates: DSM options are not chosen due to cost-effectiveness considerations; Alberta supplies via GTN firm transportation are taken at a high level, with swings coming from supplies via NPC; spot resources from AECO, Sumas, and the Rockies have an increasing supply role in the later years of the planning period; and GTN firm transportation will provide the additional capacity needed by the Avista system for load growth into the next decade.
For peak day system-wide planning purposes, results show unserved demand beginning with a small amount in 2003 and doubling thereafter through 2007. The company’s resource strategy maintains the Oregon-mandated DSM measures it has budgeted; continues diversification of its firm transportation sources by increasing its supply access via firm GTN and NPC transportation; and, under the gas benchmark mechanism, optimizes value by pursuing flexible capacity releases of firm transportation. • **Two-Year Action Plan.** Avista’s Two-Year Action Plan describes the actions the company will undertake through 2001 to implement its resource strategy and accomplish its goal of meeting customers’ needs for low-cost and reliable gas services. Avista will focus on five primary areas to further its objective of integrating the company's operations with its resource planning process: sales forecasting, modeling, supply/capacity activities, demand-side activities, and distribution planning. Forecasting and modeling tasks include re-estimating temperature-sensitive customer usage models using alternative measures of degree days, studying price elasticity impacts on the lag variable of the existing model, installing a daily forecasting system, collaborating with New Energy Associates (NEA) on software improvements to the gas resource optimization model, and increasing the use of the resource optimization model as a decision aid for gas operations. Supply-side/capacity tasks include monitoring the actions taken by Avista Energy under the Gas Benchmark Mechanism (GBM) for managing swing and peak supply contracts and Jackson Prairie and Plymouth LNG storage. GBM monitoring also includes planning tasks related to capacity releases, exchanges, off-system sales, and financial hedging instruments. The DSM action items include improving the reporting of DSM efforts, maintaining stakeholder relationships, supporting cost-effective energy code changes, and monitoring new DSM technologies.
Distribution planning tasks include continuing development of the Stoner gas flow model and integrating the GIS system into planning operations. **Comments of the Parties** The Commission staff developed extensive comments on the company’s draft integrated resource plan submitted in December 1999, and circulated a draft proposed order on the company’s final IRP to all parties on December 11, 2000. Even though parties were given notice of Staff’s activities during the development of this IRP, no other parties participated. **Commission Staff Comments.** As a result of the company’s cooperative approach to resource planning and its resolution of all of the substantive issues prior to filing its final integrated resource plan in July 2000, staff makes no suggestions for modification to the company's IRP. On January 16, 2001, staff distributed its final recommendation that the Commission acknowledge Avista's IRP. **OPINION** **Jurisdiction** Avista is a public utility in Oregon, as defined by ORS 757.005, which provides natural gas service to or for the public. On April 20, 1989, pursuant to its authority under ORS 756.515, the Commission issued Order No. 89-507 in Docket UM 180 adopting least-cost planning for all energy utilities in Oregon. **Requirements for Least-Cost Planning under Order No. 89-507** Order No. 89-507 establishes procedural and substantive requirements for least-cost planning and requires the Commission's acknowledgment of plans that meet the requirements of the order. **Procedural requirements.** At a minimum, the least-cost planning process must involve the Commission and the public prior to making resource decisions rather than after the fact. See Order No. 89-507 at 3. Avista sought public input during the planning process by informing the general public and customers about its planning process and by conducting technical conferences on the plan.
The company's technical advisory group, consisting of representatives from other utilities, regulatory agencies, industrial customers, county government, and pipeline companies, provided input on planning assumptions, energy resource options, and future scenarios that influence the demand for and supply of energy. The company distributed a draft plan for comment before developing and submitting the final plan to the Commission. Appendix A reflects these activities. **Substantive requirements.** The substantive requirements were also set forth in the order as follows:

1. All resources must be evaluated on a consistent and comparable basis.
2. Uncertainty must be considered.
3. The primary goal must be least cost to the utility and its ratepayers consistent with the long-run public interest.
4. The plan must be consistent with the energy policy of the state of Oregon as expressed in ORS 469.010.

Order No. 89-507 at 7. **Evaluation of Resources.** Avista's IRP evaluates both supply- and demand-side resources consistently and comparably over time. Numerous linear programming model runs, including a Staff-requested model run, were completed to evaluate resource scenarios for the company's plan and related gas operations. In addition, the company has included estimates of potential costs for environmental externalities consistent with Order No. 93-695, issued May 17, 1993, regarding the treatment of external environmental costs. The company also applied the same discount rate to costs for both demand- and supply-side resources. We conclude that Avista satisfactorily complied with this requirement for purposes of this plan. **Uncertainty.** Avista's IRP planning approach addressed both uncertainty in demand and uncertainty in resource availability. The company considered uncertainty in demand by developing a range of demand forecasts. The forecasts include a medium case as well as high and low load growth scenarios.
These scenarios reflect a range of possible economic and weather events that may affect customer demand. Other factors considered by the company to address planning uncertainty include customer price sensitivity, environmental externalities, changes in financial condition, pricing of alternative fuels, and the effects of changing public policy. A gas utility's primary source of traditional supply is flowing gas transported using interstate pipeline capacity. The cost and availability of pipeline capacity, however, are dependent on the actions of third-party pipelines, other project sponsors, government agencies, and other market participants. The actions of these parties represent an element of uncertainty that is difficult to quantify for planning purposes. For example, Avista's IRP describes uncertainty generated by FERC Order No. 636 and how it influenced the company's current resource decisions. We are satisfied that Avista's IRP is sufficiently flexible to allow the company to respond to the uncertainties identified in the planning process. **Primary Goal of Plan Must Be Least Cost.** The objective of least-cost planning is to plan for resources that both meet the needs of the utility's customers and minimize total system costs over the long term. Avista has set forth its integrated resource plan goals to "properly balance the need to be a reliable" and "low-cost provider of energy." Avista realizes that to be successful it must not only plan for, but implement, a least-cost resource path, and believes that its 2000 IRP assists the company in meeting the reliability expectations of its customers at competitive prices. Based on the company's analysis and its commitment to continue to develop and utilize the optimization modeling capability it has acquired, we are satisfied that Avista has met this requirement for purposes of this integrated resource plan. **Consistency with Oregon's Energy Policy.**
The Legislature mandated certain energy-related goals in ORS 469.010. These goals relate primarily to the development of sustainable energy resources. Avista's plan is consistent with these goals. Avista has considered conservation resources in its resource plan. In addition, the company has indicated it will continue to assess the potential for additional residential, commercial, and firm industrial DSM programs. **Commission Decisions on Parties' Comments** Staff's final recommendation document recommends Commission acknowledgment of Avista's plan. We adopt that recommendation. **Conclusion** Based on review of Avista's planning efforts, Avista's 2000 Natural Gas Integrated Resource Plan is acknowledged. Avista's IRP meets the minimum substantive and procedural requirements of Order No. 89-507. Achievement of the objectives in the company's 2000-2001 Action Plan will enhance the company's efforts in the development of future integrated resource plans and assist the company in remaining a reliable and low-cost provider of natural gas service over the ten-year planning horizon. **EFFECT OF THE PLAN ON FUTURE RATE-MAKING ACTIONS** Order No. 89-507 sets forth the Commission's role in reviewing and acknowledging a utility's LCP, or least-cost plan, as follows: The establishment of least-cost planning in Oregon is not intended to alter the basic roles of the Commission and the utility in the regulatory process. The Commission does not intend to usurp the role of utility decision-maker. Utility management will retain full responsibility for making decisions and for accepting the consequences of the decisions. Thus, the utilities will retain their autonomy while having the benefit of the information and opinion contributed by the public and the Commission. * * * * * Plans submitted by utilities will be reviewed by the Commission for adherence to the principles enunciated in this order and any supplemental orders.
If further work on a plan is needed, the Commission will return it to the utility with comments. This process should eventually lead to acknowledgment of the plan. Acknowledgment of a plan means only that the plan seems reasonable to the Commission at the time the acknowledgment is given. As is noted elsewhere in this order, favorable rate-making treatment is not guaranteed by acknowledgment of a plan. Order No. 89-507 at 6 and 11. This order does not constitute a determination on the rate-making treatment of any resource acquisitions or other expenditures undertaken pursuant to Avista's 2000 IRP. As a legal matter, the Commission must reserve judgment on all rate-making issues. Notwithstanding these legal requirements, we consider the integrated resource planning process to complement the rate-making process. In rate-making proceedings, in which the reasonableness of resource acquisitions is considered, the Commission will give considerable weight to utility actions that are consistent with acknowledged integrated resource plans. Utilities will also be expected to pursue unanticipated least-cost opportunities beneficial to ratepayers that arise after Commission acknowledgment or, alternatively, explain why such opportunities were not pursued. **CONCLUSIONS** 1. Avista is a public utility subject to the jurisdiction of the Commission. 2. Avista's 2000 Natural Gas Integrated Resource Plan reasonably adheres to the principles for least-cost planning set forth in Order No. 89-507. The plan will assist in ensuring that Avista's customers receive adequate service at fair and reasonable rates and is otherwise in the public interest. **ORDER** IT IS ORDERED that the 2000 Natural Gas Integrated Resource Plan filed by Avista on July 20, 2000, as modified herein, is acknowledged in accordance with the terms of this order and Order No. 89-507. Made, entered, and effective FEB 09 2001. Ron Eachus, Chairman; Roger Hamilton, Commissioner; Joan H.
Smith, Commissioner. **PUBLIC INVOLVEMENT** Part of the integrated resource plan is to involve the public in the least-cost planning process. To accomplish this, the Company held three public Technical Advisory Committee (TAC) meetings to review different phases of the plan during 1999. The first meeting was held jointly with the state utility commission staffs from Washington and Idaho and with the Avista Corp. electric TAC members. The second meeting was held with the commission staff from Oregon, and the third meeting was held jointly with the state utility commission staffs from Washington, Idaho, and Oregon. In addition to state commission staff, the meetings included representatives from other state government agencies, several industrial customers, county government, and pipeline companies. Table 1 lists the Technical Advisory Committee meetings that were held. Comments regarding the December 6, 1999 draft filing of this plan were received from George Fink, Idaho Public Utilities Commission, on March 13, 2000, and from Ray Nunez, Oregon Public Utility Commission, on March 6, 2000.

---

**TABLE 1**
**TECHNICAL ADVISORY COMMITTEE MEETINGS**

**August 19, 1999 (Spokane, Washington)** — joint meeting with Avista Utilities electric IRP TAC. Topics of Discussion:
- Purpose of IRP
- Background of Least Cost Planning
- Forecast Methodology
- Washington/Idaho Forecast
- Washington/Idaho Demand Side Management

**August 26, 1999** — Topics of Discussion:
- Purpose of IRP
- Background of Least Cost Planning
- Forecast Methodology
- Oregon Forecast
- Oregon Demand Side Management

**October 15, 1999** — Topics of Discussion:
- Explanation of Distribution Planning
- Demonstration of Distribution Model
- Demonstration of GIS
- Explanation of Resource Planning
- Demonstration of SENDOUT Planning Model
Antidepressants of the Serotonin-Antagonist Type Increase Body Fat and Decrease Lifespan of Adult *Caenorhabditis elegans* Kim Zarse¹, Michael Ristow¹,²* ¹Institute of Nutrition, University of Jena, Jena, Germany, ²German Institute of Human Nutrition, Potsdam-Rehbrücke, Germany

**Abstract**

It was recently suggested that specific antidepressants of the serotonin-antagonist type, namely mianserin and methiothepin, may exert anti-aging properties and specifically extend the lifespan of the nematode *C. elegans* by causing a state of perceived calorie restriction (Petrascheck M, Ye X, Buck LB: An antidepressant that extends lifespan in adult *Caenorhabditis elegans*; Nature, Nov 22, 2007;450(7169):553–6, PMID 18033297). Using the same model organism, we instead observe a reduction of life expectancy when employing the commonly used, standardized agar-based solid-phase assay while applying the same or lower concentrations of the same antidepressants. Consistent with a well-known side effect of these compounds in humans, the antidepressants not only reduced lifespan but also increased body fat accumulation in *C. elegans*, reflecting the mammalian phenotype. Taken together, and in conflict with previously published findings, we find that antidepressants of the serotonin-antagonist type not only promote obesity but also decrease nematode lifespan.

**Citation:** Zarse K, Ristow M (2008) Antidepressants of the Serotonin-Antagonist Type Increase Body Fat and Decrease Lifespan of Adult *Caenorhabditis elegans*. PLoS ONE 3(12): e4062. doi:10.1371/journal.pone.0004062

**Editor:** Georges Chapouthier, L'université Pierre et Marie Curie, France

**Received** November 3, 2008; **Accepted** November 30, 2008; **Published** December 29, 2008

**Copyright:** © 2008 Zarse et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Funding:** The authors have no support or funding to report.

**Competing Interests:** The authors have declared that no competing interests exist.

*E-mail: email@example.com*

---

**Introduction**

In recent years, the nematode *Caenorhabditis elegans* has become a well-established model organism for identifying compounds that may be capable of extending lifespan not only in invertebrates but also in mammals. Accordingly, several research groups have published nematode-based findings on such compounds [1–30]. For most of these compounds it is currently unknown whether they exert similar effects in mammals, while for others an effect on rodent lifespan has been proposed [31], or at least a reduction of aging-associated physiological alterations without an extension of lifespan [32]. Like numerous other psychoactive compounds, the antidepressant mianserin has been shown to increase appetite [33] as well as body mass [34] in humans. Conversely, obesity has been shown to decrease lifespan in humans [35] as well as in *C. elegans* [25], and in both species serotonin signalling has been implicated in body fat accumulation [36]. In conflict with this evidence, recently published findings unexpectedly suggest that mianserin and additional antidepressants of the serotonin-antagonist type might extend *C. elegans* lifespan [24], which would, surprisingly, imply that obesity promotes longevity. While the latter study has employed liquid media to determine *C.
elegans* lifespan, we have employed standardized and widely accepted agar-based assays in an attempt to replicate these findings, and unexpectedly observe a dose-dependent reduction of *C. elegans* lifespan, suggesting primarily that different assays for determining nematode lifespan generate opposing results.

**Results and Discussion**

To replicate the findings previously published by Petrascheck and colleagues [24], we applied both compounds described as life-extending in the original paper, mianserin and methiothepin, to Bristol N2 *C. elegans*, which in our case were maintained on solid-phase agar media, as described in Materials and Methods. We repeatedly observed significantly *decreased* life expectancies for the key compound mianserin when applying this substance at the final concentration given in the original paper (50 µM, p<0.001), as well as at 5 µM (p<0.001) and 500 nM (p<0.001) (Fig. 1a). Similar results were obtained for a functionally related compound, methiothepin, at concentrations of 10 µM (p<0.001) and 1 µM (p<0.005), whereas this compound showed no significant effect at a concentration of 100 nM (Fig. 1b). Methiothepin was shown to extend lifespan in the original study at a concentration of 10 µM [24]. Petrascheck and colleagues used liquid media not only for 96-well-based screening assays but also for the final determination of lifespans [24]. These liquid media are not commonly used for definitive lifespan determinations, since they have repeatedly been reported to cause differences in lifespan compared with the well-established, standard solid-phase media; the first such report was in fact published more than 30 years ago [37]. Liquid media have produced opposing results when applied by different laboratories using apparently identical protocols [3,9]. Moreover, according to their *Methods Summary* section [24], Petrascheck et al. not only based their liquid media on recipes from a publication [3] that was fundamentally called into question [9], but also on those of another laboratory [38] that had previously published a striking lack of correlation between lifespan results obtained with liquid versus solid-phase media [39].

**Figure 1.** Antidepressants of the human serotonin-antagonist type do not extend *Caenorhabditis elegans* lifespan. Panel A: The antidepressant mianserin shortens *C. elegans* lifespan at concentrations of 50 µM (dark blue boxes; this concentration was shown to extend lifespan in the original publication [ref. 24]), 5 µM (medium blue boxes), and 500 nM (light blue boxes). Untreated control nematodes are depicted by black circles. Panel B: The chemically and functionally related compound methiothepin similarly shortens *C. elegans* lifespan at concentrations of 10 µM (red boxes; this concentration was shown to extend lifespan in the original publication [ref. 24]) and 1 µM (orange boxes), and has no significant effect on lifespan at a lower concentration of 100 nM (yellow boxes). Untreated control nematodes are depicted by black circles. doi:10.1371/journal.pone.0004062.g001

**Figure 2.** Antidepressants of the human serotonin-antagonist type increase *Caenorhabditis elegans* body fat content. Panel A: The antidepressant mianserin increases *C. elegans* body fat content at a concentration of 50 µM (dark blue bar, right side; this concentration was shown to extend lifespan in the original publication [ref. 24]) after ten days of treatment; untreated control nematodes are depicted as a black bar. Panel B: The related compound methiothepin increases *C. elegans* body fat content at a concentration of 10 µM (red bar, right side; this concentration was shown to extend lifespan in the original publication [ref. 24]) after ten days of treatment; untreated control nematodes are depicted as a black bar. doi:10.1371/journal.pone.0004062.g002
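The disagreement between the two studies ultimately rests on comparing survival distributions obtained under different culture conditions. As a minimal, purely illustrative sketch (the day-of-death values below are hypothetical, not data from either study, and the function name is ours), summary lifespan statistics for a control and a treated cohort can be computed as follows:

```python
from statistics import mean, median

def lifespan_summary(days_of_death):
    """Summarize a cohort's lifespans, given one day-of-death per animal."""
    return {
        "n": len(days_of_death),
        "mean": mean(days_of_death),
        "median": median(days_of_death),
    }

# Hypothetical cohorts (NOT real data from either study).
control = [16, 17, 18, 18, 19, 20, 21]
treated = [12, 13, 14, 15, 15, 16, 17]

c, t = lifespan_summary(control), lifespan_summary(treated)
print(f"control mean {c['mean']:.1f} d, treated mean {t['mean']:.1f} d")
```

Note that in practice the significance values quoted above (e.g., p<0.001) come from survival-curve comparisons such as log-rank tests, not from a simple comparison of means.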
Lastly, and most importantly, Petrascheck and colleagues observe a mean life expectancy of at least 23.6 days in N2 nematodes using their liquid media [24], whereas we [25] and others [40,41] consistently observe a significantly shorter mean lifespan when using solid-phase media. This suggests that nematodes maintained in liquid media are kept in an a priori state of calorie restriction, which is known to extend lifespan per se, i.e. in the absence of life-extending compounds [42], and which has recently been shown to alter multiple pathways of energy metabolism [43], as would be expected in a priori states of calorie restriction [44–46]. Accordingly, and to test whether the solid-phase media used in our *C. elegans* experiments reflect the situation in humans, we attempted to replicate the observation that mianserin increases human body mass [34] by applying this compound to nematodes. Indeed, both compounds significantly increased body fat after ten days of incubation at the concentrations used by Petrascheck and colleagues [24] (Figs. 2a and 2b), whereas other pharmacological interventions known to extend *C. elegans* lifespan have previously been shown to decrease body fat content [25]. Nevertheless, it should be noted that one specific genetic disruption that extends *C. elegans* lifespan, namely of insulin-/IGF1-receptor signaling (*daf-2*) [40], has been shown to increase *C. elegans* body fat [47]. Taken together, and consistent with the findings in humans in regard to obesity [33,34], we find that antidepressants of the serotonin-antagonist type do not extend *C. elegans* lifespan under the most commonly used and generally accepted experimental conditions.

**Materials and Methods**

**Nematodes**

The strain used in this study was Bristol N2, which was obtained from the Caenorhabditis Genetics Center (CGC, University of Minnesota, USA). Nematodes were grown and maintained on NGM agar plates as described previously [25,48,49]. All experiments were performed at 20°C. *C.
elegans* stocks and prefertile animals were maintained on OP50 bacteria.

**Compounds**

The antidepressants mianserin and methiothepin were both obtained from Sigma-Aldrich (St. Louis, MO, USA). Agar plates containing experimental treatments were prepared from the same batch of NGM agar as the control plates, except that the respective chemical was added from a sterile stock solution (10 µM each) to obtain the indicated final concentrations.

**Fat content analyses**

Triglyceride content was determined as previously described [25]: briefly, nematodes were flash-frozen and stored at −80°C until further processing. Approximately 25 mg of nematodes was weighed and ground in a nitrogen-chilled mortar together with 250 µl of frozen phosphate buffer. The frozen material was gathered in a reaction tube and kept on ice. Extracts were sonicated three times and centrifuged for 7 min at 12,000 g. Fat content was determined with a commercially available triglyceride determination kit (Sigma-Aldrich) as previously described [50] and normalized to protein content, which was determined according to the Bradford method [51].

**Author Contributions**

Conceived and designed the experiments: MR. Performed the experiments: KZ. Analyzed the data: KZ. Wrote the paper: MR.

**References**

1. Harrington LA, Harley CB (1988) Effect of vitamin E on lifespan and reproduction in Caenorhabditis elegans. Mech Ageing Dev 43: 71–78. 2. Adachi H, Ishii N (2000) Effects of tocotrienols on life span and protein carboxylation in Caenorhabditis elegans. J Gerontol A Biol Sci Med Sci 55: B230–285. 3. Melov S, Ravenscroft J, Malik S, Gill MS, Walker DW, et al. (2000) Extension of life-span with superoxide dismutase/catalase mimetics. Science 289: 1567–1569. 4. Bakaev VV, Bakueva LM, Nikitin VP, Shabalina AV (2002) Effect of 1-butyrylglutamic acid dibutyl ester on life longevity in the nematode Caenorhabditis elegans. Biogerontology 3 (Suppl 1): 23–24. 5.
Bakaev VV, Lyudmila MB (2002) Effect of ascorbic acid on longevity in the nematoda Caenorhabditis elegans. Biogerontology 3 (suppl 1): 12–16. 6. Cypser JR, Johnson TE (2002) Multiple stressors in Caenorhabditis elegans induce stress hormesis and extended longevity. J Gerontol A Biol Sci Med Sci 57: B109–114. 7. Wu Z, Smith JV, Paramasivam V, Bukto P, Khan I, et al. (2002) Ginkgo biloba extract EGB 761 increases stress resistance and extends life span of Caenorhabditis elegans. Cell Mol Biol (Noisy-le-grand) 48: 725–731. 8. Strayer A, Wu Z, Christen Y, Link CD, Luo Y (2003) Expression of the small heat-shock protein Hsp-6-2 in Caenorhabditis elegans is suppressed by Ginkgo biloba extract EGB 761. FASEB J 17: 2305–2307. 9. Krauss M, Gems D (2003) No increase in lifespan in Caenorhabditis elegans upon treatment with the superoxide dismutase mimic EUK-3. Free Radic Biol Med 34: 277–282. 10. Ishii N, Senoo-Matsuda N, Miyake K, Yasuda K, Ishii T, et al. (2004) Coenzyme Q10 can prolong *C. elegans* lifespan by lowering oxidative stress. Mech Ageing Dev 125: 41–46. 11. Wood JG, Rogina B, Lavu S, Howitz K, Helfand SL, et al. (2004) Sir2 proteins function as NAD-dependent protein deacetylases. *Science* 304: 482–489. 12. Viswanathan M, Kim SK, Berdichevsky A, Guarente L (2005) A role for SIR-2.1 regulation of ER stress response genes in determining *C. elegans* life span. Dev Cell 9: 605–615. 13. Evans JA, Huang C, Yamben I, Crevy DF, Kornfeld K (2005) Anticonvulsant medications extend worm life-span. Science 307: 259–262. 14. Kornfeld K, Evason K (2006) Effects of anticonvulsant drugs on life span. Arch Neurol 63: 491–496. 15. Brown MK, Evans JL, Luo Y (2006) Beneficial effects of natural antioxidants EGCG and alpha-lipoic acid on life span and age-dependent behavioral declines in Caenorhabditis elegans. Pharmacol Biochem Behav 85: 620–628. 16. Wilson MA, Shukitt-Hale B, Kalt W, Ingram DK, Joseph JA, et al. 
(2006) Blueberry polyphenols increase lifespan and thermotolerance in Caenorhabditis elegans. Aging Cell 5: 39–68. 17. Gruber J, Tang SY, Halliwell B (2007) Evidence for a trade-off between survival and fitness caused by resveratrol treatment of Caenorhabditis elegans. Ann N Y Acad Sci 1106: 530–542. 18. Winkelspecht A, Gomking N, Nwankam C, Zurawski RF, Timpel C, Chovolou Y, et al. (2007) Effects of the flavonoids kaempferol and fisetin on thermotolerance, oxidative stress and FoxO transcription factor DAF-16 in the model organism Caenorhabditis elegans. Arch Toxicol 81: 849–858. 19. Gerisch B, Rotters V, Li D, Motola DL, Cummins CL, et al. (2007) A bile acid-like steroid modulates Caenorhabditis elegans lifespan through nuclear receptor signaling. Proc Natl Acad Sci U S A 104: 5014–5019. 20. Miller DL, Roth MB (2007) Hydrogen sulfide increases thermotolerance and lifespan in Caenorhabditis elegans. Proc Natl Acad Sci U S A 104: 20618–20622. 21. Bass TM, Weinkove D, Houthoofd K, Gems D, Partridge L (2007) Effects of resveratrol on lifespan in *Drosophila melanogaster* and Caenorhabditis elegans. Mech Ageing Dev 128: 346–352. 22. Broue E, Lierc P, Kenyon C, Baillieu EE (2007) A steroid hormone that extends the lifespan of Caenorhabditis elegans. Aging Cell 6: 87–94. 23. Zou S, Sinclair J, Wilson MA, Carey JR, Liedo P, et al. (2007) Comparative approaches to facilitate the discovery of prolongevity interventions: effects of tocopherols on lifespan of three invertebrate species. Mech Ageing Dev 128: 222–226. 24. Petrascheck M, Ye X, Buck LB (2007) An antidepressant that extends lifespan in adult Caenorhabditis elegans. Nature 450: 553–556. 25. Schulz TJ, Zarse K, Voigt A, Urban N, Birringer M, et al. (2007) Glucose restriction extends Caenorhabditis elegans life span by inducing mitochondrial respiration and increasing oxidative stress. Cell Metab 6: 280–293. 26.
Evason K, Collins JJ, Huang C, Hughes S, Kornfeld K (2008) Valproic acid extends Caenorhabditis elegans lifespan. Aging Cell 7: 305–317. 27. Benedetti MG, Foster AL, Vantipalli MC, White MP, Sampayo JN, et al. (2008) Compounds that confer thermal stress resistance and extended lifespan. Exp Gerontol 43: 882–891. 28. Kampkötter A, Timpel C, Zurawski RF, Ruhl S, Chovolou Y, et al. (2008) Increase of stress resistance and lifespan of Caenorhabditis elegans by quercetin. Comp Biochem Physiol B Biochem Mol Biol 149: 314–323. 29. Kim J, Takahashi M, Shimizu T, Shirasawa T, Kajita M, et al. (2008) Effects of a potent antioxidant, platinum nanoparticle, on the lifespan of Caenorhabditis elegans. Mech Ageing Dev 129: 322–331. 30. Wiegan TA, Surinova S, Yuma E, Langehaar-Makkijnie M, Wikman G, et al. (2008) Plasmid adaptation increase lifespan and stress resistance in *C. elegans*. Biogerontology, in press. 31. Baur JA, Pearson KJ, Price NL, Jamieson HA, Lerin C, et al. (2006) Resveratrol improves health and survival of mice on a high-calorie diet. Nature 444: 337–342. 32. Pearson KJ, Baur JA, Lewis KN, Peshkin L, Price NL, et al. (2008) Resveratrol Delays Age-Related Deterioration and Mimics Transcriptional Aspects of Dietary Restriction without Extending Life Span. Cell Metab 8: 157–168. 33. Harris B, Harper M (1980) Unusual appetites in patients on mianserin. Lancet 1: 590. 34. Parder RM, Blum A, Stulemeijer SM, Barres M, Mokszardi M, et al. (1980) A double-blind multicentre trial comparing the efficacy and side-effects of mianserin and chlorimipramine in depressed in- and outpatients. Int Pharmacopsychiatry 15: 218–227. 35. Fontaine KR, Redden DT, Wang C, Westfall AO, Allison DB (2003) Years of life lost due to obesity. JAMA 289: 187–193. 36. Ashraki F, Chang FY, Watts JL, Fraser AG, Kamath RS, et al. (2003) Genome-wide RNA analysis of Caenorhabditis elegans fat regulatory genes. Nature 421: 268–272. 37. 
Croll NA, Smith JM, Zuckerman BM (1977) The aging process of the nematode Caenorhabditis elegans in bacterial and axenic culture. Exp Aging Res 3: 175–185. 38. Johnson TE, de Castro E, Hegi de Castro S, Cypser J, Henderson S, et al. (2001) Relationship between increased longevity and stress resistance as assessed through gerontogene mutations in Caenorhabditis elegans. Exp Gerontol 36: 1609–1617. 39. Shook DR., Johnson TE (1999) Quantitative trait loci affecting survival and fertility-related traits in Caenorhabditis elegans show genotype-environment interactions, pleiotropy and epistasis. Genetics 153: 1233–1243. 40. Kenyon C, Chang J, Gensch E, Rudner A, Tabtiang R (1993) A *C. elegans* mutant that lives twice as long as wild type. Nature 366: 461–464. 41. Tissenbaum HA, Guarente L (2001) Increased dosage of a sir-2 gene extends lifespan in Caenorhabditis elegans. Nature 410: 227–230. 42. Houthoofd K, Braeckman BP, Lenaerts I, Brys K, De Vreese A, et al. (2002) Axenic growth up-regulates mass-specific metabolic rate, stress resistance, and extends life span in Caenorhabditis elegans. Exp Gerontol 37: 1371–1378. 43. Castelain N, Hoogewijs D, De Vreese A, Braeckman BP, Vanleteren JR (2008) Dietary restriction by growth in axenic medium induces discrete changes in the transcriptional output of genes involved in energy metabolism in Caenorhabditis elegans. Biotechnol J 3: 803–812. 44. Lenaerts I, Walker GA, Van Hoosebeke L, Gems D, Vanleteren JR (2008) Dietary restriction of Caenorhabditis elegans by axenic culture reflects nutritional requirement for constituents provided by metabolically active microbes. J Gerontol A Biol Sci Med Sci 63: 242–252. 45. Smith ED, Kaerberlein TL, Lydum BT, Sager J, Welton KL, et al. (2008) Age- and calorie-independent life span extension from dietary restriction by bacterial deprivation in Caenorhabditis elegans. BMC Dev Biol 8: 49. 46. 
Surpren GL, Kaerberlein M (2008) Dietary restriction by bacterial deprivation increases life span in wild-derived nematodes. Exp Gerontol 43: 130–135. 47. Kimura KD, Tissenbaum HA, Liu Y, Ruvkun G (1997) daf-2, an insulin receptor-like gene that regulates longevity and diapause in Caenorhabditis elegans. Science 277: 942–946. 48. Brenner S (1974) The genetics of Caenorhabditis elegans. Genetics 77: 71–94. 49. Zarse K, Schulz TJ, Birringer M, Ristow M (2007) Impaired respiration is positively correlated with decreased life span in Caenorhabditis elegans models of Friedreich Ataxia. FASEB J 21: 1271–1275. 50. Ristow M, Pfister MF, Yee AJ, Schubert M, Michael L, et al. (2000) Frataxin activates mitochondrial energy conversion and oxidative phosphorylation. Proc Natl Acad Sci U S A 97: 12239–12243. 51. Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72: 248–254.
Microscopic Evaluation of Mandibular Symphyseal Distraction Osteogenesis Ismet Duran\textsuperscript{a}; Siddık Malkoç\textsuperscript{b}; Haluk İşeri\textsuperscript{c}; Mustafa Tunalı\textsuperscript{d}; Murat Tosun\textsuperscript{e}; Hasan Küçükkolbaşı\textsuperscript{f}

Abstract: The purpose of this study was to evaluate microscopically the newly formed hard tissue after a consolidation period of mandibular symphyseal distraction osteogenesis (MSDO). Sixteen patients underwent MSDO treatment. After a latency period of seven days, the distraction device was activated by the patient once in the morning and once in the evening, for a total of one mm per day for a mean of 10.1 ± 2.8 days; the mean opening of the device was 8.1 ± 1.7 mm. The device was maintained in position for approximately 90 days after surgery. After the completion of the distraction period, the lower anterior teeth were bonded and tooth movement into the distraction site was initiated. After a consolidation period, a second surgery was performed to remove the distraction devices. During the second surgery, hard tissue biopsies were taken at the apical region of the two central incisors and the left canine. The samples were fixed in 10% buffered formalin and decalcified in 3% HNO\textsubscript{3} solutions. New bone formation was present within the distraction gap immediately after the consolidation period. The cellular construction was more irregular in the distraction sections than in the normal bone sections. Maturation of the newly distracted area was not complete immediately after the consolidation period. Furthermore, the newly formed bone had a membranous structure, which indicates continuing maturation. Bone exposed to stretching forces undergoes new bone formation, and the newly formed bone is of a membranous type, also known as woven bone. (Angle Orthod 2006;76:369–374.)
Key Words: Microscopic evaluation; Symphyseal distraction; Tooth movement

INTRODUCTION

Distraction osteogenesis (DO), initially reported in 1905 by Codivilla,\textsuperscript{1} is a process of growing new bone by mechanical stretching of the preexisting bone tissue. DO controls these dynamic forces and leads to new bone formation in the direction of the distraction vectors.\textsuperscript{1} There is a large amount of literature on the use of DO to treat a wide variety of dentofacial problems.\textsuperscript{1} The method is currently being developed for orthodontic applications such as canine retraction,\textsuperscript{2} alveolar distraction osteogenesis\textsuperscript{3} (ADO) before oral implant reconstruction, mandibular widening,\textsuperscript{4–7} recovery of ankylosed teeth,\textsuperscript{8} segmental translation,\textsuperscript{9,10} and interdental distraction.\textsuperscript{11} To improve the distraction protocol, some microscopic, morphologic, and human clinical studies have been performed on the type and quality of bone obtained, especially by ADO.\textsuperscript{3,11} The results are encouraging in that they suggest that reliable tissues can be obtained for implant treatment. In clinical orthodontics, reconstruction of the occlusion after DO is at the forefront of research. This topic is of particular interest, especially when DO is applied in the tooth-bearing area, because a dental gap is created between the distracted bony segments. Mandibular symphyseal distraction osteogenesis (MSDO), initially reported by Guerrero in 1990,\textsuperscript{4} has since been used sparingly by others. Despite early reports\textsuperscript{4–7} of success, important questions remained unanswered. What is the biologic foundation of DO to widen the symphysis, and what is the response of alveolar bone during mandibular widening? The purpose of this study was to evaluate microscopically the newly formed bone after the consolidation period of MSDO and to verify the influence of tooth movement into immature, fibrous, and less mineralized bone.

FIGURE 1. The custom-made, intraoral, rigid tooth- and bone-borne distractor.

FIGURE 2. Mandibular symphyseal distraction osteogenesis surgery.

MATERIALS AND METHODS

Patient population

The sample comprised 16 patients (nine male and seven female) with a mean age of $20.4 \pm 1.2$ years (range, 16.4–23.8 years) at the time of surgery. Clinical indications for MSDO were severe mandibular anterior dental crowding, V-shaped mandible, unilateral or bilateral scissor bite, and a maxillomandibular transverse deficiency. None of the patients had any systemic problems. Patients and their parents were informed about the proposed treatment plan involving the surgical phase as well as the conventional alternative option, and their consent was obtained. A detailed study design was explained to patients and their parents, and only volunteers were included in this study. The research project was approved by the Ethics Committee of the School of Dentistry, Selçuk University.

Appliance design

A custom-made, intraoral, rigid bone- and tooth-borne distraction device was used. The device consisted of a HYRAX (GAC, New York, NY) screw and two footplates (Stryker-Leibinger, Freiburg, Germany) (Figure 1). The distractor was positioned in front of the lower incisors at the gingival level, and the opening holes of the screw were placed over the mandibular symphysis. The upper arms of the screw were bent in accordance with the lower anterior archform and fitted into the first mandibular premolar braces, which were welded to the band in a horizontal position. The footplates were fixed to the tips of the lower arms and adjusted according to the symphysis formation.

Surgical technique

The surgical procedure was performed under local anesthesia and intravenous sedation. An incision of four to six cm was made in the mandibular vestibule, through the orbicularis oris muscle.
The upper arms of the device were fitted to the first premolars, the lower arms and footplates were adjusted to the bone, and guidance screw holes were drilled with a Lindemann bur. A vertical osteotomy was made through the symphyseal area with an oscillating saw blade, starting at the inferior border of the mandible and extending to the interdental space between the apices of the mandibular central incisors. Then, with a straight handpiece, the cut was continued on the labial cortical plate of the mandible until the alveolar crest was reached. The final sectioning was done manually with a mallet and spatula osteotome. Once the vertical osteotomy and sectioning of the mandible had been completed, the distraction device was fixed to the bone and teeth and then activated three mm (Figure 2). After confirming the complete osteotomy, the distraction device was turned back to its initial position. Care was taken to ensure that the wounds were sutured in the proper tissue planes.

Distraction protocol

After a latency period of seven days, the distraction device was activated by the patient once in the morning and once in the evening, for a total of one mm per day for $10.1 \pm 2.8$ days (distraction period); the mean opening of the device was $8.1 \pm 1.7$ mm. The device remained in position for $94.9 \pm 5.8$ days (consolidation period) after surgery for maturation of the newly developed bone.

Orthodontic movement

After the completion of the distraction period, the lower anterior teeth were bonded and tooth movement into the distraction site with its new alveolar bone was initiated using light orthodontic forces (25–30 g). Dental crowding was resolved by movement of the anterior teeth into the distraction gap with fixed-appliance orthodontic treatment. This orthodontic tooth movement began only after the completion of the distraction process.
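The protocol above amounts to a simple schedule: a seven-day latency, activation at a nominal 1 mm per day (two 0.5-mm turns), and a consolidation period once the target opening is reached. A minimal sketch of that arithmetic (the function name and structure are ours, for illustration only):

```python
def distraction_timeline(opening_mm, latency_days=7, rate_mm_per_day=1.0):
    """Post-surgery milestones for a simple distraction protocol.

    latency_days: healing time before the first activation (7 days here).
    rate_mm_per_day: nominal daily activation, given as two 0.5-mm turns.
    """
    distraction_days = opening_mm / rate_mm_per_day
    return {
        "activation_starts_day": latency_days,
        "distraction_days": distraction_days,
        "distraction_ends_day": latency_days + distraction_days,
    }

# Mean device opening reported in the study: 8.1 mm.
t = distraction_timeline(8.1)
print(t)  # nominally ~8 activation days, ending around day 15 post-surgery
```

Note that the reported means (an 8.1 mm opening over 10.1 distraction days) imply that the effective rate fell somewhat below the nominal 1 mm per day in some patients.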
**Microscopic analysis**

After the consolidation period, a second surgery was performed to remove the distraction devices. During the second surgery, hard tissue biopsies were taken at the apical region of the two central incisors (distracted bone tissue) and at the left-side canine (control bone tissue). Dimensions of the control and distracted biopsy samples were approximately $0.5 \times 0.5 \times 0.5$ cm. The histological samples were fixed in 10% buffered formalin (Sigma Chemical, St Louis, Mo) and decalcified in HNO$_3$ solutions (Sigma Chemical), which were diluted before use. To obtain fast but effective and safe results, this solution was refreshed every 12 hours during the decalcification process. At the end of the decalcification process, the samples were processed with classical tissue-processing techniques and embedded in 60–62°C paraffin. Then, five-$\mu$m-thick sections were taken from these blocks and stained with Hematoxylin (Cole) and Eosin (Sigma Chemical). Cole Hematoxylin is generally used for distinguishing calcified and osteoid bone tissues under light microscopy.\textsuperscript{12} In addition, Mallory aniline blue collagen stain (Sigma Chemical) was used for identifying developing collagen structures in different tissues.\textsuperscript{13} All slides were comparatively evaluated under an Olympus BH2 (Osaka, Japan) light microscope at 33× magnification.

**RESULTS**

**Distracted bone biopsy samples**

The distracted bone biopsy samples stained with Hematoxylin (Cole) and Eosin (Figure 3) showed that although there was no mature osteon construction, there were many irregular interstitial lamellas and Haversian canaliculi. There were many osteocytes and osteoblasts in the irregular lamellar structure. There were also some developing osteoblasts, but no osteoclasts, in the Haversian canaliculi. Fine and coarse vessels had invaded the entire matrix.
The tissue samples of the distracted mandible stained with Mallory aniline blue (Figure 4) showed that there were many irregular connective tissue fibrils and massive collagen accumulations in the distraction area. In addition, there were osteocytes scattered among the collagen fibrils. However, many osteoid structures, especially those evident around the Haversian canaliculi and in the matrix, were immature. The chondroid tissue had completely disappeared, and no evidence of soft tissue scarring was present in any of the sections.

**Control bone biopsy samples**

The control bone biopsy samples stained with Hematoxylin (Cole) and Eosin (Figure 5) had regular lamellar constructions, well-developed interstitial lamellas and Haversian canaliculi, and, in some places, organized osteon structures. In the bone matrix, there were many osteocytes but few osteoblasts, and almost all of the matrix was ossified. Neither osteoclasts nor developing osteoblasts were found in the Haversian canaliculi. Some medium-sized vessels were also observed in the matrix. The control bone biopsy samples stained with Mallory aniline blue collagen stain (Figure 6) had regular osteon structures. The centrally located Haversian canaliculi and the regularly distributed interstitial lamellas appeared clearly. The matrix of the bone was stained dark blue, indicating that it contained mature collagen. There was no chondroid structure in these samples.

FIGURE 6. The normal mandible biopsy samples stained with Mallory aniline blue; Hc indicates Haversian canaliculi; IL, interstitial lamellas; C, capillary; Oc, osteocytes; Ob, osteoblasts; CA, calcified area; and OA, ossified area, 33×.
DISCUSSION

Regenerate tissue mineralization and remodeling have been investigated experimentally by several authors, mainly by radiography, ultrasound, computed tomography, light microscopy, and electron microscopy.\textsuperscript{14–18} Although microscopic evaluation is limited to human biopsy material or to tissues harvested at the end point of animal experiments, it is the only method by which to directly observe all tissue components as well as their spatial relationships to one another. Moreover, direct quantification of cell and matrix types and of bone formation rates is possible.\textsuperscript{19} This study analyzed the process of bone formation and remodeling during MSDO using a tooth- and bone-borne distraction device. In our clinical study, the lower incisors and canines were moved into the newly distracted bone area immediately after the distraction period, and the quality of new bone formation was evaluated. The progressive maturation of bone regenerate has been evaluated in different histological studies.\textsuperscript{19–21} Cope and Samchukov\textsuperscript{19} documented the histomorphometric changes of bone regeneration during an eight-week consolidation period after mandibular osteodistraction. According to their results, the bone regenerate was believed to be still in the remodeling phase at the end of the consolidation period. Their results also indicate that membranous ossification was the predominant mechanism of new bone formation in the DO process. In addition, they found that although some areas of cartilage were present within the regenerated tissue, possibly indicating endochondral bone formation, no cartilage was seen within the distraction gap after the fourth week of consolidation. Similarly, in this study, we determined that the cellular construction was more irregular in the distraction sections than in the control bone sections.
However, the number of cells in the distraction area was greater than in the control area. Although the control bone biopsies had a calcified structure, the distracted bone biopsies had an osteoid structure. When exposed to stretching forces, the distraction gap undergoes new bone formation, and the newly formed bone is of a membranous (also known as woven) type. The chondroid tissue had also completely disappeared, and no evidence of soft tissue scarring or infection was present in any of the sections. These data suggest that maturation of the newly distracted area was not complete immediately after the consolidation period; rather, the newly formed bone had a membranous structure, indicating the continuing development and maturation of new bone. Although the biopsies were small and did not sample the entire distraction region, we suggest that this study will be beneficial for future studies. Zaffe et al\textsuperscript{22} treated 10 patients with ridge deformities to obtain the required ridge augmentation by ADO. Clinical and radiological (orthopantomography and computerized tomography with densitometric assay) evaluations were carried out during the subsequent 12 weeks, before implant insertion. Biopsies at 40, 60, and 88 days were studied after general, specific, and histochemical staining of slides; microradiographs were analyzed to evaluate the trabecular bone volume (TBV). Forty days after the end of distraction, soft callus indicated the start of ossification. Sixty days after the end of distraction, the soft callus was largely converted into a network of trabecular woven bone; osteogenic activity was high and TBV was approximately 50%. Eighty-eight days after the end of distraction, the amount of bone appeared reduced, with a more ordered structure. Bone formation activity and TBV were also diminished, whereas osteoclast erosion was active.
The densitometric assay showed values increasing from the end of distraction, particularly after implant insertion. The histological results of their study showed a regression in bone deposition processes 88 days after the end of distraction, culminating in a virtual steady state after a certain time. In our study, the biopsies performed approximately 90 days after the end of distraction showed several bony trabeculae displaying a more ordered structure. The osteoblasts formed parallel-fibered or lamellar bone in apposition to the preexisting woven bone. Our results agree with those obtained by Zaffe et al,\textsuperscript{22} who used a different type of distraction device and vector. Their clinical pilot study, like ours, depended mainly on clinical observation and included some histological analyses. Movement of teeth through regenerate bone is a topic of current interest. Some authors advise that tooth movement should not begin until radiologic evidence of consolidation is observed after the distraction period. They report that closure of the interdental space should be delayed until new bone is observed and that, to prevent mesial migration, an acrylic denture can be placed in the distraction space. They caution that allowing the teeth to move into the gap early can lead to periodontal defects, bony defects, and potential loss of teeth. However, some clinical reports have demonstrated that a tooth can be moved into the regenerated bone after the distraction period. In one experimental report, moderate to severe alveolar bone loss was noted in fourth premolars moved simultaneously with distraction, whereas initiating orthodontics at the end of the distraction period preserved periodontal support and produced a tooth movement rate of 1.2 mm per week. Liou et al demonstrated that orthodontic tooth movement into the newly distracted bone two weeks after the cessation of the distraction period accelerated the maturation process of this bone.
They suggested that orthodontic tooth movement into the newly distracted bone is possible and that the new alveolar bone created through orthodontic tooth movement is a mature, compact bone indistinguishable from the original mandibular bone. Their study results indicated that orthodontic tooth movement also increased the volume of bone at the distraction site by alveolar bone formation. Cope et al demonstrated that teeth can be orthodontically moved into regenerated bone tissue, but the influence of tooth movement into immature or mature regenerated bone on the periodontal ligament and tooth roots remains unknown. Nakamoto et al evaluated the influence of tooth movement on tooth roots and periodontal tissues when teeth were moved into mature, well-organized, and mineralized regenerate bone created after DO compared with immature, fibrous, and less mineralized bone. They indicated that heavy force (80–100 g) and early orthodontic tooth movement are not recommended when teeth are moved through regenerate bone created by DO, to avoid tipping and severe root resorption; in their study, waiting 12 weeks after distraction to initiate tooth movement resulted in a lower rate of tooth movement with less root resorption. The present study demonstrated that early tooth movement into the newly distracted area had no adverse effects on bone maturation. In accordance with our clinical experience, no periodontal bone loss, periapical pathology, or soft tissue recession was evident, and tooth vitality was maintained in all patients. There are no reports that address the production and retention of attached gingiva when teeth are moved into regenerated bone. Although DO offers considerable promise for orthodontics and dentofacial orthopedics, more research is needed to develop reliable clinical techniques. **CONCLUSIONS** - Bone exposed to stretching undergoes new bone formation, and the newly formed bone is of a membranous (also known as woven) type.
- This formation is generally parallel to the axis of the stretching force. Consequently, it is possible to shape the newly formed bone as required. - Early tooth movement into the newly distracted area did not affect bone maturation and regeneration. **REFERENCES** 1. McCarthy JG, Stelnicki EJ, Grayson BH. Distraction osteogenesis of the mandible: a ten-year experience. *Semin Orthod*. 1999;5:3–8. 2. Kışnisci R, İşeri H, Tüz HH, Altuğ AT. Dentoalveolar distraction osteogenesis for rapid orthodontic canine retraction. *J Oral Maxillofac Surg*. 2002;60:389–394. 3. McAllister BS, Gaffanet TE. Distraction osteogenesis for vertical bone augmentation prior to oral implant reconstruction. *Periodontol 2000*. 2003;33:54–66. 4. Guerrero CA. Rapid mandibular expansion. *Rev Venez Orthod*. 1990;48:1–9. 5. Malkoç S. *Effects of Mandibular Midline Distraction Osteogenesis on the Dentofacial Structures* [PhD thesis]. Konya, Turkey: Selcuk University; 2002. 6. Mommaerts MY. Bone anchored intraoral device for transmandibular distraction. *Br J Oral Maxillofac Surg*. 2001;39:8–12. 7. Contasti G, Guerrero CA, Rodriguez AM, Legan HL. Mandibular widening by distraction osteogenesis. *J Clin Orthod*. 2001;35:165–173. 8. Isaacson RJ, Strauss RA, Bridges-Poquis A. Moving an ankylosed central incisor using orthodontics, surgery and distraction osteogenesis. *Angle Orthod*. 2001;71:411–418. 9. Kondoh T, Hamada Y, Kamei K. Transport distraction osteogenesis following marginal resection of the mandible. *Int J Oral Maxillofac Surg*. 2002;31:675–676. 10. Dolamaz M, Karaman AI, Durmus E, Malkoç S. Management of alveolar cleft by using dento-osseous transport distraction osteogenesis. *Angle Orthod*. 2003;73:723–729. 11. Block MS, Almerico B, Crawford C, Gardiner D, Chang A. Bone response to functioning implants in dog mandibular alveolar ridges augmented with distraction osteogenesis. *Int J Oral Maxillofac Implants*. 1998;13:342–531. 12. Bancroft JD, Stevens A. Bone.
In: Stevens A, Lowe J, Bancroft JD, eds. *Theory and Practice of Histological Techniques*. 3rd ed. Edinburgh, UK: Churchill-Livingstone Company; 1992:309–341. 13. Clark G. Animal histotechnic methods for connective tissue. In: Clark G, ed. *Staining Procedures*. 4th ed. Baltimore, Md: Williams and Wilkins Company; 1981:113–129. 14. Gantous A, Phillips JH, Catton P, Holmberg D. Distraction osteogenesis in the irradiated canine mandible. *Plast Reconstr Surg*. 1994;33:164–168. 15. Eyres KS, Bell MJ, Kanis JA. Methods of assessing new bone formation during limb lengthening. Ultrasonography, dual energy X-ray absorptiometry, and radiography compared. *J Bone Joint Surg Br*. 1993;75:358–364. 16. Cope JB, Harper RP, Samchukov ML. Experimental tooth movement through regenerate alveolar bone: a pilot study. *Am J Orthod Dentofacial Orthop.* 1999;116:501–505. 17. Karaharju-Suvanto T, Peltonen J, Kahri A, Karaharju EO. Distraction osteogenesis of the mandible. An experimental study on sheep. *Int J Oral Maxillofac Surg.* 1992;21:118–121. 18. Karaharju EO, Aalto K, Kahri A, Lindberg LA, Kallio T, Karaharju-Suvanto T, Vauhkonen M, Peltonen J. Distraction bone healing. *Clin Orthop.* 1993;297:38–43. 19. Cope JB, Samchukov ML. Regenerate bone formation and remodeling during mandibular osteodistraction. *Angle Orthod.* 2000;70:99–111. 20. Cope JB, Samchukov ML, Muirhead DE. Distraction osteogenesis and histogenesis in beagle dogs: the effect of gradual mandibular osteodistraction on bone and gingiva. *J Periodontol.* 2002;73:271–282. 21. Rowe NM, Mehrara BJ, Dudziak ME, Steinbreck DS, MacKool RJ, Gittes GK. Rat mandibular distraction osteogenesis: part I. Histologic and radiographic analysis. *Plast Reconstr Surg.* 1998;102:2022–2032. 22. Zaffe D, Bertoldi C, Palumbo C, Consolo U. Morphofunctional and clinical study on mandibular alveolar distraction osteogenesis. *Clin Oral Implants Res.* 2002;13:550–557. 23. Liou EJ, Figueroa AA, Polley JW. 
Rapid orthodontic tooth movement into newly distracted bone after mandibular distraction osteogenesis in a canine model. *Am J Orthod Dentofacial Orthop.* 2000;117:391–398. 24. Liou EJW, Polley JW, Figueroa AA. Distraction osteogenesis, the effects of orthodontic tooth movement on distracted bone. *J Craniomax Surg.* 1998;9:564–571. 25. Nakamoto N, Nagasaka H, Daimaruya T, Takahashi I, Sugawara J, Mitani H. Experimental tooth movement through mature and immature bone regenerates after distraction osteogenesis in dogs. *Am J Orthod Dentofacial Orthop.* 2002;121:385–395. The Angle Orthodontist 2006;76(3):369–374. Microscopic Evaluation of Mandibular Symphyseal Distraction Osteogenesis Ismet Duran; Siddik Malkoç; Haluk İşeri; Mustafa Tunali; Murat Tosun; Hasan Küçükkolbaşı Please make the following changes: 1. The second paragraph of the “Patient Population” section should read, with the addition of the italicized text: Patients and their parents were informed about the proposed treatment plan involving the surgical phase as well as the conventional alternative option, and their consent was obtained. A detailed study design was explained to patients and their parents, and only volunteers were included in this study. The *surgical expansion of the mandible with distraction osteogenesis* was approved by the Ethics Committee of the School of Dentistry, University of Selçuk. *The study design was declared to patients orally and only volunteers were included in the study group.* 2. In the first sentence of the first paragraph of the “Microscopic Evaluation” section, the biopsy dimension is given as “cm”; it should be “mm.” 3. In the first sentence of the first paragraph of the “Surgical Technique” section, “intravenous sedation” should be changed to “intramuscular sedation.” The Angle Orthodontist 2006;76(3):400–405. Initial Effects of the Tongue Crib on Tongue Movements During Deglutition: A Cine-Magnetic Resonance Imaging Study M. 
Özgür Sayın; Erol Akin; Şeniz Karaçay; Nail Bulakbaşı The May 2006 issue carried this article and incorrectly listed the matrix size. Please note the change below. In the “Materials and Methods” section, please replace the incorrect matrix size. Correct sentence: The B-TFE images (shortest TR/TE: 2.1/1.09 ms) were obtained in the midsagittal plane by using 50° flip angle, 10-mm slice thickness, autoshim, 350 × 350 mm field of view, and 96 × 96 matrix size during swallowing of water.
Dear NAME Members: At the regular meeting of the NAME Board of Directors, several Proposed Changes to the NAME Bylaws were adopted in the ‘first reading’ and are viewable in ‘track changes’ format below. Beginning April 1, 2009, all NAME members have at least 30 days to comment on these proposed changes; the comment period ends May 12, 2009. Comments may be submitted via email to firstname.lastname@example.org or via phone to Bylaws Committee Chairperson Jane Reagan (Michigan) at 517-335-2250. Here is a very brief summary of the major changes put forth for your consideration, in addition to some editorial changes:
- **Article III, Membership Categories.** Re-orders the list so ‘limited voting’ comes before ‘non-voting’.
- **Article VII, Officers.** Updates the responsibilities of Officers, particularly the Secretary and Treasurer, to reflect the tasks they actually perform; inserts signing the Conflict of Interest agreement as a requirement for all officers.
- **Article VIII, Election of Officers.** Changes the terms of the Secretary and Treasurer from one year to two years and holds their elections in alternating years. Allows that in the first year of this change (possibly October 2009) the Treasurer may be elected for a three-year term.
- **Article IX, Board of Directors.** Re-orders some sections and inserts signing the Conflict of Interest agreement as a requirement for all Board members. Updates the dates of term expirations for all Regional Board Representatives and At-Large LEA Representatives.
- **Article X, Standing and Special Committees.** Adds “Membership Committee” and “Financial Review Committee” to the Standing Committees and inserts signing the Conflict of Interest agreement as a requirement for all Committee Chairpersons. Includes limited voting members as eligible to serve as chairpersons of committees.
- **Article XI, Finances.** Reflects changes in the Treasurer’s responsibilities and the Financial Review Committee’s tasks.
Thank you for your active participation in the review of these Bylaws changes, The NAME Bylaws Committee (Stacie Martin, John Hill, Liz Touhey, Greg Morris, Amy Edwards (ex officio), and Jane Reagan) Adopted: October 2, 2004; Amended: 7-14-05, 9-14-06, 9-13-07, 7-10-08
I. TITLE
II. PURPOSE
III. MEMBERSHIP CATEGORIES: A. Voting Membership; B. Limited Voting Membership; C. Non-Voting Membership; D. Associate Membership
IV. MEMBERSHIP YEAR
V. MEMBERSHIP DUES: A. Dues; B. Good Standing
VI. MEMBERSHIP MEETINGS: A. Annual Meeting; B. Special Meetings; C. Quorum; D. Voting
VII. OFFICERS: A. President; B. President-Elect; C. Immediate Past President; D. Secretary; E. Treasurer
VIII. ELECTION OF OFFICERS: A. Procedure; B. Term of Office; C. Nominating Committee; D. Selection; E. Vacancies
IX. BOARD OF DIRECTORS: A. General Powers; B. Membership; C. Meetings; D. Quorum; E. Voting; F. Terms
X. STANDING AND SPECIAL COMMITTEES: A. Executive Committee; B. Nominating Committee; C. Conference Committee; D. Bylaws Committee; E. Other Standing Committees; F. Special Committees
XI. FINANCES: A. Budget; B. Obligations; C. Loans; D. Commercial Paper; E. Deposits; F. Financial Report
XII. DISSOLUTION: A. Dissolution Vote; B. Funds
XIII. AMENDMENTS
XIV. AUTHORITY
I. TITLE. The title of the organization shall be the National Alliance for Medicaid in Education, Inc. (NAME). The organization was incorporated in the State of Delaware on September 27, 2004. II. PURPOSE. The purposes of the organization are to: - Provide leadership as it relates to accessing Medicaid reimbursement for School-Based Services. - Promote integrity, collaboration, and success among all stakeholders. - Facilitate a network to share information on issues pertinent to Medicaid programs in public schools. III. MEMBERSHIP CATEGORIES. A. Voting Membership. 1) One individual is designated to represent the State Medicaid agency and one individual to represent the State Education agency as a Voting Member in the organization.
Each designee shall have expertise, experience or some responsibility related to Medicaid reimbursement for Administrative Outreach or Direct Health Care Services provided by schools. If it is unclear whether the applicant for voting membership meets the qualifications listed above, the Membership Chair may request a letter from the designating State agency verifying its designation. 2) Two At-Large Local Education Authority (LEA) Representatives who have been elected to serve on the Board of Directors. Voting Members shall have the right to vote on all issues before the membership, elect officers and board members, hold office, and serve as chair of a standing or special committee. B. Limited Voting Membership. Any LEA member in good standing and in attendance at the Annual Membership Meeting is allowed to vote only for the purpose of electing the two At-Large LEA Representatives of the Board of Directors. For any other purposes or implementation of these Bylaws, LEA Members shall be considered Non-Voting Members possessing all other rights and privileges of that category. C. Non-Voting Membership. Staff who are involved with Medicaid in Education and who represent Federal or State agencies, regional education agencies or local education agencies shall be eligible for Non-Voting membership in the organization. Non-Voting Members shall have the right to attend all meetings and participate in activities of the organization, serve on standing and special committees, but shall not have the right to vote or to hold office. D. **Associate Membership.** Staff representing public or non-public organizations involved with Medicaid in Education shall be designated as Associate Members. Membership may also be extended to other persons by a vote of the membership. Associate Members shall have the right to participate in activities of the organization as Non-Voting Members and may serve on standing and special committees. IV. **MEMBERSHIP YEAR.** The NAME, Inc. 
membership year is January 1 through December 31. V. **MEMBERSHIP DUES.** A. **Dues.** The organization shall authorize and collect membership dues from Voting, Limited Voting, Non-Voting and Associate Members to be used for the operation of the organization. Dues are set by the Board and must be approved by a simple majority of the voting members at the annual meeting. Dues are payable by January first of each year. B. **Good Standing.** A member in good standing has paid the current year’s dues and any liens and/or assessments levied by the Association’s Board and Membership, and agrees to adhere to these Bylaws. VI. **MEMBERSHIP MEETINGS.** A. **Annual Meetings.** One Annual Membership Meeting of the organization shall be held in conjunction with the annual conference each year. The time and place of the meeting shall be announced at least six months prior to the meeting. B. **Special Meetings.** Additional meetings of the organization may be called, either by vote of the Board or by petition of a majority of the Voting Members. The time, agenda and place of all Special Meetings shall be announced at least thirty (30) days prior to the meeting. An alert will be sent to the membership advising them of the posting. C. **Quorum.** Those persons present at a properly called Annual Membership or Special Meeting shall be designated as a quorum and shall be entitled to take action on behalf of the organization. D. **Voting.** A simple majority vote of the Voting Members present at any meeting shall be required for any and all actions to be conducted by the organization. VII. **OFFICERS.** The officers of the organization shall be a President, President-Elect, Immediate Past President, Secretary and Treasurer. Officers must be full voting members in good standing at the time of nomination and election, and remain so throughout the term in office, including the move to the office of Immediate Past President. 
*All officers shall sign the Conflict of Interest Agreement annually.* A. **President.** The President shall be the principal executive officer of the organization, subject to the control of the Board and the direction of the membership. The duties of the President shall be, in general, to supervise and control all of the activities of the organization. The President shall be a member of the Board and, when present, shall preside at all meetings of the Board and all meetings of the membership. The President shall vote only in the case of a tie in a vote of the Board or the membership. The President shall select and appoint the chairpersons of all Standing and Special Committees and shall be an ex-officio member of all committees of the organization. The President, after having served for one year, shall automatically become the Immediate Past President. B. **President-Elect**. The President-Elect shall be a member of the Board and, in the absence of the President, shall perform the duties of the President. The President-Elect shall perform such other duties as are assigned by the President or the Board. The President-Elect, after having served for one year, shall automatically become the President of the organization. The President-Elect shall chair the Nominating Committee. C. **Immediate Past President**. The Immediate Past President shall be a member of the Board and, in the absence of the President and the President-Elect, shall perform the duties of the President. The Immediate Past President shall co-chair the Conference Committee. D. **Secretary**. The Secretary shall be a member of the Board. The Secretary shall keep and distribute the minutes of the proceedings of the Annual Membership Meeting and the Board meetings. The Secretary shall assist the President in establishing the Board meeting agendas and the distribution of meeting materials to the Board.
In addition, the Secretary shall annually collect and file the signed “Conflict of Interest Agreement” forms from Officers, Board Members, and Committee Chairpersons (these two moved to Membership committee). The Secretary shall assure all notices are duly given in accordance with these Bylaws. The Secretary shall perform such other duties as may be assigned by the President or the Board. E. **Treasurer**. The Treasurer shall be a member of the Board. The Treasurer shall have charge of and be responsible for all funds of the organization and shall collect membership dues, conference registration fees, sponsorship fees, and other monies due and payable to the organization and deposit such funds in banks or other organizations approved by the Board. The Treasurer shall make disbursements as authorized by the President, Board, or membership in accordance with the budget adopted by the membership. The Treasurer shall send notification of and collect all membership dues established by the organization. The Treasurer shall assist the Membership Committee Chairperson: 1) in maintaining a roster of current paid members; and 2) in preparing and certifying the official list of Voting Members who have paid dues. The Treasurer shall prepare and distribute written financial reports for each regular Board meeting. The Treasurer shall present, and hand out to those in attendance, an annual written financial report at the Annual Membership Meeting. The Treasurer shall perform such other duties as may be assigned by the President or the Board. **VIII. ELECTION OF OFFICERS.** A. **Procedure**. The election of officers shall take place during the Annual Membership Meeting each year. All Voting Members of the organization may participate in the election. Only full voting members of the organization are eligible to serve as officers.
At the meeting prior to the Annual Membership Meeting, the Nominating Committee shall present to the Board a slate of candidates for officer positions for discussion. At the Annual Membership Meeting, further nominations may be received from the floor. The election of the slate, if uncontested, may be by voice vote. Any contested election shall be conducted by written ballot. B. Term of Office. The term of each office except the offices of Secretary and Treasurer shall be one year, effective immediately upon election to office. The terms of office of the Secretary and Treasurer shall be two years. The Secretary shall be elected in even-numbered years and the Treasurer in odd-numbered years, and each may be elected to the same or another office for more than one term. In the initial election for the two-year terms of the Secretary and Treasurer, it may be necessary for one office term to be three years in duration. C. Nominating Committee. The Nominating Committee shall be responsible for receiving all suggestions for persons to serve as officers. The committee shall prepare a slate of officers to present for election by the membership. The committee shall contact all persons who will be nominated to confirm their willingness to serve. The committee shall ensure that all nominees are Voting Members and otherwise eligible to serve in the office. D. Selection. A majority of the votes cast by the Voting Members present at the Annual Membership Meeting shall be necessary for election. Should no person receive a majority of the votes cast, a run-off between the two (2) persons who received the largest number of votes shall immediately be held. E. Vacancies. Any vacancy in office due to death, resignation, or inability to serve shall be filled by the Board for the unexpired portion of the term. However, should a vacancy occur in the office of the President, the President-Elect shall immediately assume the office.
Should a vacancy occur in the office of President-Elect for any reason, the vacancy shall be filled by a majority vote of the Board for the unexpired portion of the term. If the President-Elect was appointed by the Board, the appointed President-Elect would have to obtain a majority vote of approval from the voting membership prior to assuming the position of President. If a majority vote is not obtained then an election would be held during the Annual Membership Meeting in accordance with the election procedures established within these Bylaws. IX. BOARD OF DIRECTORS A. Membership. The Board shall consist of the President, President-Elect, Immediate Past President, Secretary, Treasurer, ten (10) regional representatives, and two Local Education Authority (LEA) At-Large representatives. B. General Powers. 1) The Board of Directors shall manage the affairs, activities and operation of the organization. The Board shall transact necessary business between the Annual Membership Meetings and such other business as may be referred to it by the membership or these Bylaws. It may create Standing and Special Committees, approve the plans and work of standing and special committees, present reports and recommendations at the meetings of the membership, prepare and submit a budget to the membership for approval, and, in general, conduct the business and activities of the organization. 2) Each member of the Board of Directors and each Committee Chairperson shall annually sign the “Conflict of Interest Agreement” and submit it to the Secretary by January 15th. 3) The NAME will not discriminate against any member, employee or applicant for employment because of his or her religion, race, creed, color, national origin, gender, sexual orientation, age, physical or mental disability or status as a veteran, in regard to any position for which the member, employee or applicant for employment is qualified. C. Meetings. 1) Regular meetings of the Board shall be held during the year. 
The dates and times shall be established at the Annual Membership Meeting. Special meetings may be called by the President or by a majority of the Board. With the exception of the Annual Membership Meeting, Board members may participate in meetings via conference call if they are not able to travel to the meeting location. Adequate notice of all meetings shall be given to all members of the Board, in the absence of an emergency, at least seven (7) days in advance. 2) Absence. If a Board member is unable to participate in a forthcoming Board meeting, an excused absence is obtained by notifying the Secretary or another officer prior to the meeting. Three unexcused absences from regularly scheduled Board meetings by a Board member during a membership year are cause for removal from the Board. After the second unexcused absence, the Board member must be formally informed that if a third unexcused absence occurs, action will be taken by the Board to remove the individual from the Board. 3) Notice of the meetings shall be announced to all Members of the organization via the NAME website. An alert will be sent to the membership advising them of the posting. Any Voting, Non-Voting, or Associate Member of the organization may attend a meeting of the Board, but shall not be entitled to vote on matters before the Board. D. Quorum. A majority of the Board members, excluding any vacancies on the Board, shall constitute a quorum for the transaction of business. E. Voting. Any action taken by the Board requires a majority vote of the Board members present at a meeting in which a quorum has been established. Absentee Voting.
If a Board member is unable to attend a regular or special meeting, that member may provide an absentee vote on a particular issue if all the following conditions are met: 1) The issue has been provided in writing, in the form of a motion or resolution, to all Board members prior to commencement of the meeting as set forth in section IX [C] of these Bylaws; and 2) The absent Board member has an excused absence from the meeting, said excused absence having been received by the Secretary or another officer in advance of the meeting; and 3) The absent Board member registers the vote via email, phone call, or fax with any or all of the following officers of the Board, listed in order of preference: the Secretary, President-Elect, or President; and 4) There are no amendments to the motion or resolution that substantively change the intent or outcome of the issue on the table. F. Terms. 1) Regional Representatives – The ten regional Board members will be selected by the voting membership of the Centers for Medicare and Medicaid Services region they represent. The tenure for each of the ten regional Board members shall be a three-year term. Terms of the regional Board members shall begin upon election. Board members may be elected for more than one term. Terms for initial NAME regional Board members will be staggered as follows: a. Four regional board members will be elected to serve three years; b. Three regional board members will be elected to serve two years; and c. Three regional board members will be elected to serve one year. Term lengths of the initial regional board members were determined by a random drawing of the names of the elected regional members.
The expiration dates for the Board members are:

| Seat | Term expirations | Seat | Term expirations |
| --- | --- | --- | --- |
| Region I | 2010, -13, -16 | Region VII | 2009, -12, -15 |
| Region II | 2010, -13, -16 | Region VIII | 2011, -14, -17 |
| Region III | 2011, -14, -17 | Region IX | 2009, -12, -15 |
| Region IV | 2011, -14, -17 | Region X | 2009, -12, -15 |
| Region V | 2009, -12, -15 | At-Large LEA Representative I | 2010, -12, -14 |
| Region VI | 2010, -13, -16 | At-Large LEA Representative II | 2009, -11, -13 |

The Nominating Committee is responsible for submitting an official ballot for all open offices at the Annual Membership Meeting. The Nominating Committee shall assure that the Board is composed of Medicaid and Education representatives. In the event of the resignation of a regional Board member, the Nominating Committee is responsible for nominating an individual(s) from the same region to complete the term of the resigning member. The Nominating Committee will present the recommended slate of regional board member candidates to the Board for discussion prior to the Annual Membership Meeting. 2) At-Large LEA Representatives – Voting and Limited Voting members in attendance at the Annual Membership Meeting will elect the two LEA Representatives to the Board. The tenure for the At-Large LEA Representatives shall be a two-year term, with the term of one LEA Representative ending during the even-numbered years (0, 2, 4, 6, and 8) and the other LEA Representative’s term ending during the odd-numbered years (1, 3, 5, 7, and 9). The Nominating Committee will present the recommended slate of candidates for LEA Representatives to the Board for discussion prior to the Annual Membership Meeting. Vacancies. If a vacancy for an At-Large LEA Representative occurs before the term has ended, the Nominating Committee will seek candidates to fill the unexpired term, considering first those LEA staff who may have previously expressed an interest in serving.
The LEA At-Large Representative will be appointed by the Board at a regular or special Board meeting to fulfill the unexpired term. X. STANDING AND SPECIAL COMMITTEES Unless there are specific provisions stated for the method of appointing the Chairperson of a Standing or Special Committee, the President may appoint a Non-Voting or Associate Member to serve as Committee Chairperson with approval of the Board of Directors. Each Committee Chairperson shall annually sign the “Conflict of Interest Agreement” and submit it to the Secretary by January 15th. A. Executive Committee. The Executive Committee shall consist of all elected officers (President, President-Elect, Immediate Past President, Secretary and Treasurer). The Committee may convene between Board meetings to make organizational decisions, address matters that cannot wait until the next Board meeting, or address matters that should be handled outside of the Board. The Committee may authorize, without prior Board approval, expenditures not to exceed $500. Meetings may be requested by any committee member (elected officer), with concurrence of at least two other members. Meetings require at least three members present. For those Committee decisions that require Board approval, the Committee's actions shall be submitted to the Board for consideration at the next regularly scheduled Board meeting. B. Nominating Committee. The Nominating Committee shall be chaired by the President-Elect and composed of two (2) other persons who shall be selected by the Board at the beginning of each year. Any Voting, Non-Voting or Associate Member may serve as a committee member. In addition, the Immediate Past President shall be an ex-officio member of the committee. The committee shall carry out its responsibilities as specified in these Bylaws. C. Conference Committee. The Conference Committee shall be co-chaired by the Immediate Past President and one other person designated by the President. 
The committee shall be responsible for planning and organizing the Annual Conference. Any Voting, Limited Voting, Non-Voting or Associate Member may serve as a committee member. D. Bylaws Committee. The President shall appoint the chairperson of the Bylaws Committee. Only Voting Members may serve as chairperson. Any Voting, Non-Voting, Limited Voting or Associate Member may serve as a committee member. The Bylaws Committee shall prepare draft amendments to the Bylaws as recommended by: a. An approved motion by the Voting Membership at the Annual Meeting; or, b. An approved motion by the Board. E. Membership Committee. The President shall appoint the Chairperson of the Membership Committee. The Committee shall be responsible for working with the Treasurer to: 1) Maintain a roster of current paid members, 2) Prepare and certify the official list of voting members based on the list of members who have paid dues, 3) Send timely notification of dues renewal when membership has lapsed. F. Financial Review Committee. The President shall appoint the Chairperson of the Financial Review Committee. The Committee shall consist of at least three members, none of whom are current members of the Finance Committee. The Committee shall be responsible for reviewing the financial documents of NAME on an annual basis and providing a report and recommendations. G. Other Standing Committees. The Board may establish other Standing Committees, as it deems necessary and advisable. The President shall appoint the chairpersons of all Standing Committees. Only Voting Members, including Limited Voting Members, may serve as chairperson. The chairperson of each committee shall recruit the members for his or her committee. Any Voting, Limited Voting, Non-Voting or Associate Member may serve as a committee member. The Chairperson shall report the plans and activities of the committee to the Board, which must approve all such reports. H. Special Committees. 
The President and/or the Board may create Special Committees. Special Committees shall be created for a specific time and/or task and shall cease to exist when that time or task has been completed, whichever occurs first. The President shall appoint the chairpersons of all Special Committees. Only Voting Members or Limited Voting Members may serve as chairpersons. Any Voting, Limited Voting, Non-Voting or Associate Member may serve as a committee member. The Chairperson shall report the plans and activities of the committee to the Board, which must approve all such reports. XI. FINANCES A. Budget. The Board shall present to the membership at the Annual Membership Meeting a budget of anticipated revenue and expenses for the year. This budget shall be used to guide the activities of the Board during the year. The Board must approve any substantial deviation from the budget in advance. B. Obligations. The Board may authorize any officer or officers to enter into contracts or agreements for the purchase of materials or services on behalf of the organization. C. Loans. No loans shall be made by the organization. D. Commercial Paper. All checks, drafts, or other orders for the payment of money on behalf of the organization shall be signed by the Treasurer or by any other person as authorized in writing by the Board. E. Deposits. The Treasurer shall deposit all funds of NAME in such banks or other organizations approved by the Board, and shall make such disbursements as authorized by the Board in accordance with the approved Bylaws. All deposits and/or disbursements shall be made within a maximum of thirty (30) days from the receipt of the funds and/or orders of payment. F. Financial Report. The Treasurer shall present and hand out a financial report at the Annual Membership Meeting of the organization and shall prepare a final report at the close of the year. 
The Board shall have the report and the accounts examined annually by an independent outside entity and the Financial Review committee, who, if satisfied that the Treasurer's annual report is correct, shall sign a statement of that fact at the end of the report. XII. DISSOLUTION. A. Dissolution Vote. Any dissolution of NAME shall be authorized at a meeting of the Board of Directors upon the adoption of a resolution to dissolve, with a majority vote by the Board members in office. The dissolution of NAME shall proceed according to Delaware state law. B. Funds. The NAME shall use its funds only to accomplish the Purposes stated in these Bylaws. No part of its funds shall inure or be distributed to the members of the organization. On dissolution of the organization, and after paying or making provision for payment of all liabilities, all funds remaining shall be distributed to one or more regularly organized and qualified professional societies, trade associations, charitable, educational, scientific or philanthropic organizations that are also exempt from Federal income taxes under the provisions of Section 501 (c)(3) of the Internal Revenue Code of 1954, to be selected by the Board of Directors. XIII. AMENDMENTS. These Bylaws may be altered, amended or repealed by the Board in the following manner. A first reading of a “Proposed Change” will be reviewed and voted on by the Board. Upon first reading approval, the “Proposed Change” will be posted on the NAME web page for 30 days to allow membership/public input. An alert will be sent to the membership advising them of the posting. Following the 30 day input period the Board will convene to review the comments and vote on the second reading of the “Proposed Change”. If passed on a second reading, the change becomes effective immediately. XIV. AUTHORITY. 
If any part of these Bylaws shall conflict with the decisions, policies or procedures adopted by State or Federal Government, they shall be deemed null and void, and the decision of the Government shall, in all cases, control. These Bylaws were first adopted by the Steering Committee and membership of an unincorporated association by a majority vote during a meeting properly called on September 26, 2003, in Denver, Colorado, and were subsequently replaced by the Board of Directors with a majority vote during a meeting properly called on October 2, 2004, in Cambridge, Massachusetts, and shall take effect immediately. Amended: March 10, 2005 (first reading), July 14, 2005 (second reading, effective immediately), September 14, 2006 (second reading, effective immediately). July 12, 2007 (first reading), September 13, 2007 (second reading, effective immediately). April 10, 2008 (first reading), July 10, 2008 (second reading, effective immediately).
THE VICARAGE, HAMPTON HILL, October 1st, 1927. My Dear People, With this month begin the winter parochial activities. The Band of Hope, Bible Classes and Lads' Club have begun, and, with the evenings drawing in, the numbers of the first and last will gradually increase. With regard to the Bible Classes, I should like to know that a much larger number of girls and lads were joining up. In a daily paper a little while ago there was an account of a meeting of Bible Class leaders in the Diocese of Liverpool. From that account one realised what a feature of Church life the Bible Class is in the North. Why should it not have as prominent a place in the work of the Church down South? So many of our young people imagine that their religious education ceases when they reach the fourteenth year of their age, and I am afraid the parents do little to upset this false idea. In fact, most Church people, young and old, are very ignorant of their Bible, which, after all, is not only the most helpful book in life, but the most interesting book for study. I do hope, then, that parents will urge their children to join the Bible Classes. The grown-ups are also going to have a chance of further study of this wonderful book. Beginning on the third Sunday in October, there will be a half-an-hour's meeting in Church after Evensong each Sunday for the consideration of some portion of the Scriptures. I trust that many will seize this opportunity of gaining knowledge of their Bible. Our Harvest Thanksgiving Services are on the second Sunday in October. You will find the times of services and the special preachers under the Parish News. The farmers of England have suffered badly from the weather this year, and I fear that some of the smallholders have been brought to the verge of ruin. Our sympathy and prayers will go out for them; but when we think that so little of the food which we consume is grown in the country, we still have cause to thank Him Whose mercy still endures. 
I shall be glad to receive the usual gifts for decoration on Saturday morning, October 8th, at the Church between half-past nine and half-past ten. I would draw your particular attention to notices in the Parish News about meetings in connection with the Work of the Church Overseas. Especially would I remind you of our Annual Sale of Work on Wednesday, October 26th. We are prolonging the time of this effort till half-past nine, so that many who do not leave their work until six o'clock may have an opportunity of looking in to do their bit for Christ's work amongst those who as yet know Him not.

May I just add one word or two to the "appreciation" of Mrs. Thornton Coe, written in another place by one who has known the family much longer than I have. When the south aisle in the Church was dedicated, Mr. and Mrs. Coe occupied the two seats in which they have sat regularly Sunday by Sunday ever since, until they were hindered by infirmity from coming to God's House. What an example! Would that many more husbands and wives would follow it!! Our deepest sympathy goes out to Mr. and Miss Coe in their sad bereavement.

Mr. Jordan, who has been superintendent of the Boys' Sunday School for very many years, and has been a very useful and loyal worker in other Church work, relinquished his position in the Sunday School at the end of last month. I shall miss his untiring help, and I am sure the children also will miss him.

MISS DOROTHY HEAP L.R.A.M., L.T.C.L., Experienced Teacher desires Pupils—Piano, Singing, Theory and Harmony. Preparation for all Examinations. Apply 75, Hampton Rd., Upper Teddington.

The three Hampton Choirs are holding their Festival Service at St. James's this year. It will, of course, entail a certain amount of extra expense. To meet this we are having a concert in the Boys' School on Friday, October 14th, at 8 p.m. With the co-operation of Mr. 
Russe, some really good artists have been secured, and I feel sure that you will back our enterprise up so that the financial success of the concert may be assured. I remain, Your faithful friend and Vicar, FREDK. P. P. HARVEY. INTERCESSIONS. At 7.30 a.m.—Holy Communion Mondays: Sunday Schools, Day Schools. Tuesdays: District Visiting, Mothers' Union, Voluntary Workers in the Church, Cleaners, &c. Wednesdays: Choir and Services, Parochial Church Council. Thursdays: Temperance Work, Band of Hope, Crusaders and Adult Branch, C.E.T.S. Fridays: Church Missions, Home & Over-seas Saturdays: Church Lads' Brigade, Girl Guides, Girls' Friendly Society. IMPORTANT.—Will all those who are responsible for Church Work please send in a full report by the 20th of each month, by so doing it will not only be of great use to our readers, but greatly forward the work of the Church. Applications for Advertisements in the Magazine should be made to the Hon. Treasurer, Mr. H. A. SIMMONS, 7, Oxford Road, Teddington. We are grateful to our Advertisers for their support of our Magazine, and confidently hope our readers will support them. PARISH WANTS. 1. A Litany Desk. 2. A Parish Hall. 3. A Bier for use in Church at Funerals, approximate cost about £30. 4. A Piano for the Infants' School. 5. A Flag and Staff for Church Tower. The Vicar may be seen at the Vicarage on Tuesdays, Thursdays and Saturdays, between the hours of 8.45 and 10 a.m. And on any day, except Mondays, between the hours of 6 and 7 p.m. CHURCHYARD.—Contributions towards keeping the Churchyard and the graves tidy will be welcomed, and may be sent to Mr. C. H. Evans (Churchwarden), Roseneath, Edward road, Hampton Hill. PARISH NEWS. BIBLE CLASSES.—A Class for Girls is held in the Girls' School on Sundays at 10 a.m. and 3 p.m. A Class for Lads is held in the Church Room at 2.45 p.m. LADS' CLUB.—The Club is held in the Boys' School on Wednesdays from 7 p.m. to 9.30 p.m. 
MOTHERS' MEETING.—This Meeting is held on Thursdays in the Church Room, at 2.30 p.m. MOTHERS' UNION.—The monthly meeting will be held in the Church Room on Wednesday, October 5th, at 2.30 p.m. The quarterly day is fixed for October 26th. There will be Corporate Communions at 7.30 a.m. and 11 a.m. In the afternoon, service in Church will be at 3.30 p.m., and after the service Mrs. Kiddell will address the members in the Church Room. SAINTS' DAYS.—October 18th, Feast of St. Luke, Holy Communion 7.30 a.m. and 12; Intercession on behalf of Medical Missions. October 28th, Feast of St. Simon and St. Jude, Holy Communion 7.30 a.m. and 12. Harvest Thanksgiving.—The Thanksgiving Services for Harvest will be on Sunday, October 9th. There will be Celebrations of the Holy Communion at 7 a.m. and 8 a.m. At 11 o'clock the preacher will be the Rev. H. J. Beck, Missioner of St. Thomas', Hanwell, and at 7 o'clock the Rev. H. G. McLeod, Rector of Shepperton, will be the preacher. In the afternoon the usual flower, vegetable, fruit, etc., service will be held at 3 p.m. The collections will be given to local hospitals and the Surgical Aid Society. Gifts of bread, flowers, fruit, etc., for decorations should be sent to the Church on Saturday morning, October 8th, between 9.30 a.m. and 10.30 a.m. Communicants' Guild.—The Monthly Service and Meeting will be held on Wednesday, October 19th, at 8 p.m. Missionary Association.—As there are three important events in connection with the Work of the Church Overseas, the meeting on Friday, October 7th, will be a business meeting in the Vestry, at 8 p.m. On Wednesday, October 12th, a "Missionary Day" is being held in the Twickenham Town Hall, from 3 p.m. to 9.30 p.m. Full particulars may be found on the bill posted on the Church door. Admission by programme, price 6d., of Miss Jakeman, the Rev. E. R. Milton, or the Vicar. 
On Wednesday, October 19th, there is to be a meeting at Chiswick Town Hall, at 8 p.m., to receive the Fifth Report of the World Call. Each Parish has been allowed seven tickets. The Vicar will be glad to receive application for these tickets, which are free, and will give them out in order of application. The Bishop of Kensington will preside. The Annual Sale of Work for the Missionary Societies will be held in the Girls' School, on Wednesday, October 26th, from 3.30 p.m. till 9.30 p.m. Gifts of needlework, cakes, jam, pickles and oddments will be gratefully received. They should be sent to the Vicarage not later than Monday, October 24th, or to the Girls' School on Tuesday, October 25th, between 10 a.m. and 12 noon. Price of admission is 3d. Concert.—A Concert will be held in the Boys' School on Friday, October 14th, at 8 p.m. The prices of admission are half-a-crown, one shilling, and sixpence. The half-a-crown and shilling seats are numbered and reserved. The proceeds will be given to the Choir Fund. Full particulars and names of artists may be seen on the window bills. Preliminary Notice.—A Festival Service will be rendered by the combined Choirs of St. Mary's and All Saints' Church, Hampton, and St. James' Church, Hampton Hill, on Tuesday, November 15th, in St. James' Church, at 8 p.m. The principal work which the choirs will render at this service will be Mendelssohn's "Come, let us sing." The Preacher will be the Rev. Dr. J. G. Simpson, Canon and Precentor of St. Paul's Cathedral. R. I. P. Mrs. Thornton Coe. An Appreciation. From our midst, within the last few days, one has been called to Higher Service, who for more than sixty years, with her husband, has so faithfully attended the services in this Church on Sundays and weekdays. All who knew her felt she was a real friend, who spoke kindly of others, and had a cheery smile and word for each—taking interest in their troubles and joys. 
In 1866 she was one of the first "mothers" to join the "Mothers' Meeting started by Mrs. Fitz-Wygram, wife of the first Vicar of St. James'. Mrs. Thornton Coe has often spoken of those happy afternoons spent in the Vicarage, and how much she was helped in her busy and difficult life by the instruction given—when she had a large family of eight or nine children to provide for, in days too when school fees for each child had to be paid, and wages were very small. From that time Mrs. Coe has remained a most loyal and untiring member of each successive Mothers' Meeting, and unswerving in her loyalty to each Vicar of the Parish, to the time of her "passing on" in her eighty-first year, setting a wonderful example to the older and younger mothers. In later years when the Mothers' Union was formed she at once joined it, and only ill-health has prevented her for the past eighteen months from attending Church or any meetings. Sorrows and troubles have come heavily upon her, but with undying faith and trust she has met and overcome them, by always seeking for strength, grace and patience at the feet of her beloved Master. With her husband she regularly knelt at the Altar and received the Blessed Sacrament. When through infirmities of old age and bad health they have been confined to the house, with intense reverence and thankfulness they have made their Communions at home, and expressed their gratitude to the Vicar for ministering to them. Great sympathy is felt for Mr. Thornton Coe and his daughter and other members of the family. After 65 years of married life, the loneliness for him is intense, and so much increased by his deafness and his bad health. A large number of mothers, representing both the Mothers' Meeting and the Mothers' Union, were present at the funeral, when amidst many tokens of love and respect Mrs. Coe was laid to rest; and her quiet, sweet influence will remain with her many friends "until the Day dawns, and the shadows flee away." 
"Saints departed even thus Hold communion still with us; Still with us, beyond the veil, Praising, pleading without fail." **BAPTISMS.** "Made a Member of Christ." August 28th—John Raymond Westell. Sept. 11th—Roy Stanley Edgar Bishop, ,, 11th—Charles James Edgar Goodwen. **MARRIAGES.** "Those whom God hath joined together." August 27th—Leslie George Cleghorn and Doris Elizabeth Cooper. ,, 27th—Benjamin Selby Graham and Edith Mabel Marjorie Clark. Sept. 17th—Cecil Gordon Simmonds and Marjorie Kathleen Cockburn. ,, 24th—William Roy Meredith and Margaret Angelica Roberts. **BURIALS.** "I am the Resurrection and the Life." Sept. 2nd—Emily Florence Dennett, aged 43 years. ,, 15th—John Percy Godwin, aged 27 years. ,, 23rd—Sarah Ann Coe, aged 81 years. ,, 24th—Rose May Cooper, aged 33 years.
Impurity-induced double transitions for accidentally degenerate unconventional pairing states

Bastian Zinkl¹ and Manfred Sigrist¹
¹Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland
(Dated: September 23, 2020)

Non-magnetic impurities can lift the accidental degeneracy of unconventional pairing states, such as the \((d + ig)\)-wave state recently proposed for Sr\(_2\)RuO\(_4\). This type of effect would lead to a superconducting double transition upon impurity doping. In a model calculation it is shown how this behavior depends on material parameters and how it could be detected.

INTRODUCTION

The ideal proposal for the symmetry of the order parameter of an unconventional superconductor should have the ability to explain all its specific experimental signatures. In the case of Sr\(_2\)RuO\(_4\), this high standard has turned out to be most challenging. Even the candidate order parameter considered as promising over a long time, the spin-triplet chiral \(p\)-wave state [1–4], has recently been questioned by contradictory experiments indicating spin-singlet pairing [5, 6]. This has prompted new proposals for the pairing symmetries, some of which have quickly gained prominence, such as the even-parity, spin-singlet, time reversal symmetry breaking superposition of \(d_{x^2-y^2}\) and \(g_{xy(x^2-y^2)}\), the \((d + ig)\)-wave state [7–9]. In contrast to the chiral \(p\)-wave state, whose two constituents, the \(p_x\) and \(p_y\)-component, are degenerate by symmetry, the \((d + ig)\)-wave state has to rely on an accidental degeneracy, because \(d_{x^2-y^2}\) and \(g_{xy(x^2-y^2)}\) belong to different representations of the tetragonal point group. In our study, we scrutinize the \((d + ig)\)-wave state specifically for this aspect of degeneracy in view of disorder effects. 
For this purpose, we formulate a single-band tight-binding model and apply the self-consistent \(T\)-matrix approximation in order to take the effect of impurity scattering on the superconducting phase into account. In this way we examine the behavior of the two pairing channels, in particular, the splitting of their transition temperatures. In the case of a double transition, we also analyze the resulting specific heat signatures.

MODEL OF A \((d + ig)\)-WAVE SUPERCONDUCTOR

Tight-binding model

We consider a single-band tight-binding model on a two-dimensional square lattice, which includes nearest-neighbor (NN) and next-nearest-neighbor (NNN) hopping. In momentum space the Hamiltonian reads \[ \mathcal{H} = \sum_{\mathbf{k},s} \xi_{\mathbf{k}} c_{\mathbf{k},s}^\dagger c_{\mathbf{k},s} + V_{\text{pair}}, \] where \(c_{\mathbf{k},s}^\dagger\) (\(c_{\mathbf{k},s}\)) denotes the creation (annihilation) operator of an electron with spin \(s = \uparrow, \downarrow\) and momentum \(\mathbf{k} = (k_x, k_y)\). The dispersion, which is chosen to qualitatively resemble the genuinely two-dimensional \(\gamma\) band of Sr\(_2\)RuO\(_4\), is given by \[ \xi_{\mathbf{k}} = -2t(\cos k_x + \cos k_y) - 4t' \cos k_x \cos k_y - \mu, \] with \(\mu\) as the chemical potential and hopping matrix elements \(t = 1\) (unit of energy) and \(t' = 0.3\) (the lattice constant \(a\) is set to unity). In Fig. 1 we show the Fermi surface (FS) for varying chemical potentials. The pairing potential \(V_{\text{pair}}\) is restricted to the spin-singlet channel, \[ V_{\text{pair}} = \sum_{\mathbf{k},\mathbf{k}',s_1,s_2} V_{\mathbf{k}\mathbf{k}'} c_{\mathbf{k},s_1}^\dagger c_{-\mathbf{k},-s_1}^\dagger c_{-\mathbf{k}',-s_2} c_{\mathbf{k}',s_2}, \] where the orbital structure is given by \(V_{\mathbf{k}\mathbf{k}'}\). 
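As a quick numerical check of the dispersion above, the sketch below (our own illustration, not code from the paper) locates the Fermi surface along the zone diagonal for the stated parameters \(t = 1\), \(t' = 0.3\); the value \(\mu = 0.925\) is one of the chemical potentials used later in the text.

```python
import numpy as np

def xi(kx, ky, t=1.0, tp=0.3, mu=0.925):
    """Tight-binding dispersion: NN hopping t, NNN hopping t', chemical potential mu."""
    return -2.0*t*(np.cos(kx) + np.cos(ky)) - 4.0*tp*np.cos(kx)*np.cos(ky) - mu

# Along the zone diagonal k_x = k_y = k the Fermi surface sits where xi changes
# sign; solving -4 cos k - 1.2 cos^2 k - 0.925 = 0 gives cos k = -1/4 exactly.
ks = np.linspace(0.0, np.pi, 20001)
kF = ks[np.argmin(np.abs(xi(ks, ks)))]
print(f"diagonal Fermi momentum: k_F = {kF:.4f} (cos k_F = {np.cos(kF):.4f})")

# At the van Hove point (pi, 0) the band energy is 4t' - mu, so the FS touches
# the van Hove points for mu close to 1.2 -- consistent with the "largest FS"
# case mu = 1.175 discussed below.
print(f"xi(pi, 0) = {xi(np.pi, 0.0):.3f}")
```

This is only a consistency check of the band structure; the actual calculation in the paper of course involves the full self-consistent gap equation.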
With our focus on the \((d + ig)\)-wave [10], we introduce \[ V_{\mathbf{k}\mathbf{k}'} = \sum_{\alpha = d,g} V_\alpha \Phi_\alpha(\mathbf{k}) \Phi_\alpha(\mathbf{k}'), \] where the even-parity basis functions are \[ \Phi_d(\mathbf{k}) = \cos k_x - \cos k_y, \] \[ \Phi_g(\mathbf{k}) = \sin k_x \sin k_y (\cos k_x - \cos k_y). \] After the standard mean-field decoupling of the pairing potential, the minimization of the free energy leads naturally to the quasiparticle gap function \[ \Delta_{\mathbf{k}} = \Delta_d (\cos k_x - \cos k_y) \pm i \Delta_g \sin k_x \sin k_y (\cos k_x - \cos k_y), \] which breaks time-reversal symmetry. The coefficients \(\Delta_{d,g}\) are obtained by solving the self-consistency equation, \[ \begin{pmatrix} \Delta_d \\ \Delta_g \end{pmatrix} = \sum_{\mathbf{k}} C_{\mathbf{k}} \begin{pmatrix} V_d & 0 \\ 0 & V_g \sin^2 k_x \sin^2 k_y \end{pmatrix} \begin{pmatrix} \Delta_d \\ \Delta_g \end{pmatrix}. \] The factor \(C_{\mathbf{k}}\) takes the form \[ C_{\mathbf{k}} = -T \sum_n \frac{(\cos k_x - \cos k_y)^2}{\tilde{\omega}_n^2 + \xi_{\mathbf{k}}^2 + |\Delta_{\mathbf{k}}|^2}, \] where $T$ is the temperature and the renormalized Matsubara frequencies $\tilde{\omega}_n$ differ from the standard fermionic ones, $\omega_n = (2n+1)\pi k_B T$, if disorder is present, as defined in Eq. (13).

**Disorder - T-matrix approximation**

Disorder is introduced through non-magnetic impurities with a point-like potential leading exclusively to s-wave scattering. As we would like to explore the whole range of scattering potential strengths, including the unitary limit where the potential exceeds the band width, we employ a $T$-matrix approach, which includes multiple scatterings at the same impurity. The $T$-matrix is defined by $$T_{kk'}(i\omega_n) = U_{kk'} + \sum_{k''} U_{kk''} G(k'', i\omega_n) T_{k''k'}(i\omega_n),$$ (10) where $U_{kk'}$ is the impurity potential in $k$ space and $G(k, i\omega_n)$ is the (normal) electron Green's function. 
Note that we have omitted off-diagonal terms involving the anomalous Green's function, since they vanish for unconventional states. For s-wave scattering both $U_{kk'}$ and the $T$ matrix are scalar in momentum space, $$U_{kk'} = U, \quad T_{kk'}(i\omega_n) = T(i\omega_n).$$ (11) We may restrict ourselves to low impurity concentrations $c$ such that we can neglect impurity interference effects, because superconductivity is rather quickly suppressed by disorder, once the mean free path becomes comparable to the zero-temperature coherence length. Hence, the self-energy reads $$\Sigma(i\omega_n) = cT(i\omega_n),$$ (12) which renormalizes the Matsubara frequencies, $$i\tilde{\omega}_n = i\omega_n - \Sigma(i\omega_n).$$ (13) Using the renormalized frequencies $\tilde{\omega}_n$ in the self-consistent gap equation [Eqs. (8, 9)] enables us to examine the influence of disorder on the superposition of unconventional pairing states.

**Critical temperatures $T_{c,d}$ and $T_{c,g}$**

For two pairing states, which belong to different representations, such as the d- and g-wave states, the respective bare critical temperatures, $T_{c,d}$ and $T_{c,g}$, are generally different. In App. A we also discuss briefly the related case of the $(s+id)$-wave. We now assume that the critical temperatures coincide in the clean system and enforce this in our model by fine-tuning the coupling strengths $V_{d,g}$ in the pairing interaction accordingly. Focussing on the behavior of the bare critical temperatures, $T_{c,d}$ and $T_{c,g}$, under the influence of disorder, we solve the linearized gap equation [Eq. (8)], which decouples for the two channels. The ratio $T_{c,d}/T_{c,g}$ displayed in Fig. 2 (circles) reveals two regimes if we vary the chemical potential. For $\mu = 0.25$ (smallest FS) the ratio $T_{c,d}/T_{c,g}$ decreases upon growing impurity concentration $c$, while it increases for $\mu = 1.175$ (largest FS close to van Hove points). 
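In the normal state, the self-consistency of Eqs. (10)–(13) closes on its own and can be solved by direct iteration for each Matsubara frequency. The sketch below is our own illustration of this loop (the potential strength U, concentration c, temperature, and grid size are arbitrary choices, not values taken from the paper):

```python
import numpy as np

def renormalized_frequency(wn, U=5.0, c=0.02, t=1.0, tp=0.3,
                           mu=0.925, nk=200, tol=1e-12):
    """Self-consistent scalar T-matrix in the normal state, cf. Eqs. (10)-(13):
       g(iw~) = (1/N) sum_k 1/(iw~ - xi_k)   momentum-averaged Green's function
       T(iw~) = U / (1 - U g(iw~))           multiple scattering off one impurity
       iw~    = iw_n - c T(iw~)              renormalized Matsubara frequency
    Returns the complex number iw~; its imaginary part is w~_n."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    xi = -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu
    iwt = 1j*wn                          # start from the bare frequency
    for _ in range(500):
        g = np.mean(1.0/(iwt - xi))      # local normal-state Green's function
        iwt_new = 1j*wn - c*U/(1.0 - U*g)
        if abs(iwt_new - iwt) < tol:
            break
        iwt = iwt_new
    return iwt

temp = 0.05                  # temperature in units of t (arbitrary)
wn = np.pi*temp              # lowest fermionic Matsubara frequency (n = 0)
iwt = renormalized_frequency(wn)
print(f"w_n = {wn:.4f}  ->  renormalized w~_n = {iwt.imag:.4f}")
```

Impurity scattering enlarges \(\tilde{\omega}_n\) relative to \(\omega_n\); feeding these renormalized frequencies into the gap equation is what suppresses the unconventional pairing channels discussed in the text.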
No change of the ratio is seen for $\mu = 0.925$. Thus, there is a fine-tuned FS for which the "degeneracy" remains untouched. The difference in behavior is reflected in the coherence lengths of the two pairing states, which depend on the position of the FS. A simple estimate of the zero-temperature coherence length $\xi$ for a given gap function can be obtained from $$\xi^2 = \frac{\sum_k |\nabla_k \Delta_k|^2}{\sum_k |E_k|^2}. \quad (14)$$ For larger coherence lengths $T_c$ suffers a faster suppression with increasing $c$. Consistently, we find $\xi_d/\xi_g \approx 1.09$ for $\mu = 0.25$ and $\xi_d/\xi_g \approx 0.96$ for $\mu = 1.175$. Intuitively it is clear for the latter case that the $d$-wave state can profit from the enlarged density of states at the van Hove points (small Fermi velocity), while the $g$-wave state has nodes there. Hence, the $d$-wave state is more tightly bound. However, on more generic Fermi surfaces, pairing states of higher angular momentum have in general shorter coherence lengths for a given critical temperature. The splitting of the bare critical temperatures implies the occurrence of two consecutive superconducting transitions: first into the superconducting phase, and then into the phase breaking time reversal symmetry. The second transition, however, does not happen at the lower of the two bare $T_c$, but at a renormalized critical temperature, because the second order parameter has to nucleate in the presence of the first one. Thus, to determine the real onset of the second order parameter we have to solve the full self-consistency equation [Eq. (8)] for $\Delta_d$ and $\Delta_g$. The renormalization of the critical temperatures, indicated by squares in Fig. 2, yields a larger splitting of the two transitions than the ratio $T_{c,d}/T_{c,g}$ would suggest. Due to the presence of the first order parameter, large parts of the states at the FS are consumed, leaving a strongly reduced density of low-energy states available for the second order parameter. 
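The van Hove argument above is easy to verify directly from the basis functions: $\Phi_g$ inherits the diagonal $d$-wave nodes through its common factor $(\cos k_x - \cos k_y)$ and additionally vanishes on the zone axes, i.e. exactly at the van Hove points $(\pm\pi, 0)$, $(0, \pm\pi)$, where $\Phi_d$ is maximal. A minimal check (our illustration):

```python
import numpy as np

def phi_d(kx, ky):
    """d-wave basis function: cos kx - cos ky."""
    return np.cos(kx) - np.cos(ky)

def phi_g(kx, ky):
    """g-wave basis function: sin kx sin ky (cos kx - cos ky)."""
    return np.sin(kx)*np.sin(ky)*(np.cos(kx) - np.cos(ky))

# At the van Hove point (pi, 0): d-wave maximal, g-wave has a node.
assert abs(phi_d(np.pi, 0.0)) == 2.0
assert phi_g(np.pi, 0.0) == 0.0

# On the zone diagonal both components vanish, so the combined (d+ig) gap,
# |Delta_k|^2 = Delta_d^2 phi_d^2 + Delta_g^2 phi_g^2, keeps the diagonal nodes.
k = 0.9
assert abs(phi_d(k, k)) < 1e-15 and abs(phi_g(k, k)) < 1e-15
print("node structure verified")
```

This makes explicit why the $d$-wave channel benefits from the van Hove density of states while the $g$-wave channel cannot.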
**Specific heat for the double transition**

There are a few ways of observing superconducting double transitions. Traditionally, the specific heat has been the hallmark probe of such a feature in many unconventional superconductors. Thus, we would like to show here that the impurity-induced splitting of the transition could leave an observable signature in the specific heat. We use our Green's function formalism and linear response theory [11, 12], as shown in App. B. We consider here the situation $T_{c,g} > T_{c,d}$, where the first transition leads to a $g$-wave phase and the second to the time reversal symmetry breaking $(d+ig)$-wave phase. Fig. 3 depicts the temperature dependence of the specific heat, $C/T$. Clearly, a second anomaly is visible below the onset of superconductivity (see also the inset). In our calculation the second jump is roughly 20% of the first one, and both transitions are of second order. Furthermore, $C/T$ reaches a finite value in the zero-temperature limit due to the finite zero-energy density of states induced by the disorder.

**Conclusion**

Our work highlights how non-magnetic disorder influences the transition temperatures of accidentally or nearly degenerate unconventional pairing channels. Generally, the two pairing states show a different suppression of their critical temperatures under disorder, which in turn would yield a superconducting double transition. Such a double transition would be visible in the specific heat, as shown in Fig. 3. However, since time reversal symmetry breaking would only occur at the second transition, $\mu$SR zero-field relaxation and polar Kerr effect measurements would be the optimal tools to detect whether the appearance of intrinsic magnetic properties separates from the onset of superconductivity. Similarly, the renormalization of the ultrasound velocity of transverse modes would be a way to see the second transition. So far no such features have been reported; they should therefore indeed be a target of future measurements. 
The scenario based on the $(d+ig)$-wave phase for Sr$_2$RuO$_4$ relies on fine-tuning in the clean limit. Keeping the degeneracy under disorder would impose a second fine-tuning constraint. ACKNOWLEDGEMENTS We would like to thank Mark H. Fischer and Roland Willa for many useful discussions. This work was financially supported by the Swiss National Science Foundation (SNSF) through Division II (No. 184739). Appendix A: Disorder effect on the \((s + id)\)-wave phase For completeness, we also address an alternative proposed state, the superposition of the extended \(s\)-wave and \(d\)-wave states, which would not be degenerate by symmetry. The gap equations read \[ \begin{pmatrix} \Delta_s \\ \Delta_d \end{pmatrix} = \sum_k C'_k \begin{pmatrix} V_s \Phi_s(k)^2 & 0 \\ 0 & V_d \Phi_d(k)^2 \end{pmatrix} \begin{pmatrix} \Delta_s \\ \Delta_d \end{pmatrix}, \] with \[ \Phi_s(k) = \cos k_x + \cos k_y, \] \[ \Phi_d(k) = \cos k_x - \cos k_y, \] and \[ C'_k = -T \sum_n \frac{1}{\omega_n^2 + \xi_k^2 + |\Delta_k|^2}. \] As explained in the main text, we calculate the bare critical temperatures, \(T_{c,s}\) and \(T_{c,d}\), from the decoupled linearized gap equations of the two pairing channels. As a representative case we chose \(\mu = 1.175\) (largest FS in Fig. 1) and list the results for different impurity concentrations in Table I. The impurity concentration is normalized by the averaged critical concentration \(c_c = (c_{c,s} + c_{c,d})/2\). Assuming degeneracy in the clean system, we find that the ratio \(T_{c,s}/T_{c,d}\) decreases as a function of the impurity concentration, in line with the ratio of coherence lengths, \(\xi_s/\xi_d \approx 1.36\). We checked that the decrease of \(T_{c,s}/T_{c,d}\) under impurity doping is independent of \(\mu\), in contrast to the \((d + ig)\) pairing state [cf. Fig. 2].
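The decoupled linearized gap equations used for the bare critical temperatures can be sketched numerically. Using the Matsubara identity $T\sum_n (\omega_n^2 + \xi_k^2)^{-1} = \tanh(\xi_k/2T)/(2\xi_k)$, the clean-limit condition becomes $1 = g\,N^{-1}\sum_k \Phi(k)^2 \tanh(\xi_k/2T)/(2\xi_k)$ with $g = -V > 0$. The band, coupling value, and normalization below are illustrative assumptions, and disorder is not included:

```python
import math

def gap_kernel(phi, g, mu, T, t=1.0, n=100):
    """RHS of the linearized gap equation,
    g * (1/N) sum_k phi(k)^2 tanh(xi_k / 2T) / (2 xi_k);
    Tc is the temperature where this equals 1 (g = -V > 0, clean limit)."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            kx = -math.pi + 2.0 * math.pi * i / n
            ky = -math.pi + 2.0 * math.pi * j / n
            xi = -2.0 * t * (math.cos(kx) + math.cos(ky)) - mu  # assumed band
            w = 1.0 / (4.0 * T) if abs(xi) < 1e-12 else math.tanh(xi / (2.0 * T)) / (2.0 * xi)
            s += phi(kx, ky) ** 2 * w
    return g * s / n**2

def bare_tc(phi, g, mu, lo=1e-3, hi=2.0, steps=30):
    """Bisection: the kernel decreases with T, so kernel > 1 means Tc lies above."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if gap_kernel(phi, g, mu, mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

phi_d = lambda kx, ky: math.cos(kx) - math.cos(ky)   # d-wave channel
phi_s = lambda kx, ky: math.cos(kx) + math.cos(ky)   # extended s-wave channel;
# each channel needs a coupling strong enough for its kernel to cross 1 in (lo, hi)

tc_d = bare_tc(phi_d, 3.0, 1.175)
print(tc_d)
```

The same routine, run separately for each channel with and without a disorder-renormalized kernel, is the structure behind the $T_{c,s}/T_{c,d}$ ratios of Table I.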
| \(c/c_c\) | \(T_{c,s}/T_{c,d}\) | |-----------|---------------------| | 0.11 | 0.887 | | 0.16 | 0.824 | | 0.22 | 0.755 | TABLE I. The ratio of critical temperatures of the \((s + id)\)-wave at \(\mu = 1.175\) as a function of the impurity concentration \(c\). The average of the critical concentrations, \(c_{c,s} \approx 0.057\) and \(c_{c,d} \approx 0.128\), is denoted by \(c_c\). Appendix B: Calculation of the specific heat in disordered systems For the derivation of the specific heat we employ the Green’s function formalism and linear response theory [13, 14]. We start with the generalized formula for the ground-state energy of an interacting electron system by Luttinger and Ward [11, 12]. The grand potential can be written as \[ \Omega_s = -T \sum_n \sum_k \left\{ \log (\omega_n^2 + \xi_k^2 + |\Delta_k|^2) + \Delta_k F^\dagger(k, i\omega_n) \right. \\ + \Sigma(i\omega_n) G(k, i\omega_n) \right\} + \Omega', \] with \(i\tilde{\omega}_n = i\omega_n - \Sigma(i\omega_n)\) and \[ \Omega' = T \sum_\nu \sum_n \sum_k \frac{1}{\nu} \Sigma_\nu(i\omega_n) G(k, i\omega_n), \] where \(\Sigma(i\omega_n) = cT(i\omega_n) = \sum_\nu \Sigma_\nu(i\omega_n)\). By considering the difference between superconducting and normal state, \(\Omega_s - \Omega_n\), we ensure that the sum over \(n\) converges. After calculating the self-energy self-consistently it is straightforward to determine the specific heat difference through \[ \frac{C_s - C_n}{T} = -\frac{\partial^2 (\Omega_s - \Omega_n)}{\partial T^2}. \] The derivatives for the results displayed in Fig. 3 were taken numerically. References [1] G. M. Luke, Y. Fudamoto, K. Kojima, M. Larkin, J. Merrin, B. Nachumi, Y. Uemura, Y. Maeno, Z. Mao, Y. Mori, et al., Nature 394, 558 (1998). [2] A. P. Mackenzie and Y. Maeno, Rev. Mod. Phys. 75, 657 (2003). [3] J. Xia, Y. Maeno, P. T. Beyersdorf, M. M. Fejer, and A. Kapitulnik, Phys. Rev. Lett. 97, 167002 (2006). [4] Y. Maeno, S. Kittaka, T. Nomura, S. Yonezawa, and K. Ishida, J. Phys.
Soc. Jpn. 81, 011009 (2011). [5] A. Pustogow, Y. Luo, A. Chronister, Y.-S. Su, D. Sokolov, F. Jerzembeck, A. P. Mackenzie, C. Hicks, N. Kikugawa, S. Raghu, et al., Nature 574, 72 (2019). [6] K. Ishida, M. Manago, K. Kinjo, and Y. Maeno, J. Phys. Soc. Jpn. 89, 034712 (2020). [7] S. A. Kivelson, A. C. Yuan, B. Ramshaw, and R. Thomale, npj Quantum Mater. 5, 43 (2020). [8] S. Ghosh, A. Shekhter, F. Jerzembeck, N. Kikugawa, D. A. Sokolov, M. Brando, A. Mackenzie, C. W. Hicks, and B. Ramshaw, arXiv preprint arXiv:2002.06130 (2020). [9] R. Willa, arXiv preprint arXiv:2005.04124 (2020). [10] Another even-parity state is for instance the superposition of extended $s$-wave and $d$-wave, for which we simply use the basis function $\Phi_4(\mathbf{k}) = \cos k_x + \cos k_y$. [11] J. M. Luttinger and J. C. Ward, Phys. Rev. 118, 1417 (1960). [12] J. Keller, K. Scharnberg, and H. Monien, Physica C 152, 302 (1988). [13] V. P. Mineev and K. Samokhin, Introduction to unconventional superconductivity (Gordon and Breach Publisher, 1999). [14] T. Nomura, J. Phys. Soc. Jpn. 74, 1818 (2005).
Stochastic mRNA Synthesis in Mammalian Cells Arjun Raj\textsuperscript{1,2*}, Charles S. Peskin\textsuperscript{1}, Daniel Tranchina\textsuperscript{1}, Diana Y. Vargas\textsuperscript{2}, Sanjay Tyagi\textsuperscript{2*} \textsuperscript{1} Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America, \textsuperscript{2} Department of Molecular Genetics, Public Health Research Institute, Newark, New Jersey, United States of America Individual cells in genetically homogeneous populations have been found to express different numbers of molecules of specific proteins. We investigated the origins of these variations in mammalian cells by counting individual molecules of mRNA produced from a reporter gene that was stably integrated into the cell’s genome. We found that there are massive variations in the number of mRNA molecules present in each cell. These variations occur because mRNAs are synthesized in short but intense bursts of transcription, beginning when the gene transitions from an inactive to an active state and ending when it transitions back to the inactive state. We show that these transitions are intrinsically random and not due to global, extrinsic factors such as the levels of transcriptional activators. Moreover, gene activation causes burst-like expression of all genes within a wider genomic locus. We further found that natural genes are also expressed in bursts. The bursts of mRNA expression can be buffered at the protein level by slow protein degradation rates. A stochastic model of gene activation and inactivation was developed to explain the statistical properties of the bursts. The model showed that increasing the level of transcription factors increases the average size of the bursts rather than their frequency.
These results demonstrate that gene expression in mammalian cells is subject to large, intrinsically random fluctuations and raise questions about how cells are able to function in the face of such noise. Citation: Raj A, Peskin CS, Tranchina D, Vargas DY, Tyagi S (2006) Stochastic mRNA synthesis in mammalian cells. PLoS Biol 4(10): e309. DOI: 10.1371/journal.pbio.0040309 Introduction Many recent experiments show that genetically identical populations of bacteria and yeast can exhibit cell-to-cell variations in the amount of protein a gene produces [1–7]. These variations result in increased phenotypic diversity [8–14]. The variations are thought to arise from the typically small number of molecules involved in gene expression, with protein numbers often on the order of hundreds of molecules, mRNA on the order of tens of molecules, and the genes themselves often present in just one or two copies per cell. The factors leading to cell-to-cell variations can be classified as deriving from two sources: (a) variations in global, or extrinsic, factors, such as varying amounts of transcriptional activators, or (b) inherently random, or intrinsic, molecular events, such as the transcription of mRNA or translation of proteins [3,15]. While studies in bacteria have shown that variations have partially [3] or completely intrinsic [4] origins, studies in yeast suggested that variations arise mostly from extrinsic sources [1,5,16]. Recent studies with more accurate methods of analysis have, however, identified a more substantial intrinsic component to the variations in yeast [17,18]. In two of these studies [1,5], the authors developed models of gene expression postulating that the remaining intrinsic variability was due to random transitions of the gene itself between an active state, in which mRNA is transcribed at a high rate, and an inactive state, in which mRNA is transcribed at a much lower rate [19]. 
This theory predicts that the magnitude of variations in protein level (relative to the mean amount of protein) increases as the rate at which genes activate decreases. By experimentally varying the overall level of gene expression of fluorescent protein reporters, the authors obtained results consistent with this model. Other studies performed in higher eukaryotes by a variety of means, pioneered by the early work of Ko et al. [24], have indicated that significant cell-to-cell variations exist in these organisms as well [20–23]. However, direct detection of the proposed gene activation and inactivation events was not possible because new proteins from individual activation events were masked by proteins remaining from previous events as a result of the long half-lives of the fluorescent proteins used as the reporter. The use of fluorescent proteins is further limited by their low sensitivity; because individual molecules of fluorescent protein produce only small amounts of fluorescence, they are difficult to detect in the low numbers produced by many genes. This limitation is particularly troublesome in eukaryotic cells in general, and higher eukaryotes in particular, as the cellular volumes are much larger than those of bacteria, thus diluting the fluorescent protein concentration. Given these limitations, the most direct way to detect gene activation and inactivation is to directly monitor the mRNA produced from the gene at the resolution of single molecules. Because the half-life of mRNA is typically much shorter than that of fluorescent proteins, their levels reflect more accurately the state of the gene. Moreover, by detecting single molecules, one would sidestep the issue of sensitivity. Furthermore, the presence of integral molecule counts would be especially valuable in precisely evaluating models of stochastic gene expression. 
Academic Editor: Ueli Schibler, University of Geneva, Switzerland Received April 13, 2006; Accepted July 20, 2006; Published September 12, 2006 DOI: 10.1371/journal.pbio.0040309 Copyright: © 2006 Raj et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Abbreviations: CHO, Chinese hamster ovary; FISH, fluorescence in situ hybridization; GFP, green fluorescent protein; tTA, tet-transactivator * To whom correspondence should be addressed. E-mail: email@example.com (AR); firstname.lastname@example.org (ST) In this study, we explored cell-to-cell variation in gene expression in mammalian cells by accurately counting single molecules of mRNA through the use of fluorescence *in situ* hybridization (FISH). By obtaining precise measurements of these most immediate (and fast-decaying) products of gene expression, we provide direct evidence that genes transition infrequently between active and inactive states, resulting in large cell-to-cell variations in gene expression in clonal cell lines. In contrast to the mostly extrinsic variations observed in yeast, we show that these transitions are intrinsically random and not due to extrinsic factors, and that they affect the expression of entire genomic loci. Furthermore, we found that the mRNA produced by the gene encoding the large subunit of RNA polymerase II is also produced in bursts, and that these bursts are uncorrelated with those from our reporter gene, indicating that the level of RNA polymerase II is not an important extrinsic determinant of cell-to-cell variations. We also analyzed the effect that the mRNA variations had on the proteins they encode and found that a slow protein degradation rate can serve to buffer the mRNA variations.
A mathematical model of gene activation and inactivation indicates that the mean number of mRNA produced per activation event (the “average burst size”) can be controlled by varying the amount of transcriptional activators in the cell. **Results** **Detection of Individual mRNA Molecules** To directly observe random events of gene activation and inactivation, we measured cell-to-cell variations in the number of molecules of a specific mRNA. We accomplished this by integrating a reporter gene possessing a tandem array of probe binding sites into mammalian cells and utilized fluorescently labeled probes to visualize mRNA transcribed from the gene by FISH. To obtain single molecule sensitivity, we introduced 32 tandem copies of a 43–base-pair probe-binding sequence at the 3’ end of a coding sequence for a fluorescent protein (throughout this paper, we refer to this sequence array as M1). The construct was inserted into Chinese hamster ovary (CHO) cells by electroporation, and a stable cell line was isolated in which a single copy of the gene was integrated into the cells’ genome. These cells were then fixed and subjected to hybridization with a single-stranded oligodeoxynucleotide probe that was both complementary to the tandemly repeated sequences and labeled with five well-dispersed fluorophore moieties (Figure 1A). The binding of so many fluorophores to each individual mRNA molecule resulted in signals so bright that each molecule was detectable as a diffraction-limited spot in a conventional wide-field fluorescence microscope (Figure 1B). To count the total number of mRNA molecules in each cell, optical slices spanning the full three-dimensional cellular volume were acquired (Video S1). The number of mRNA molecules in each cell was then measured using custom software to identify individual fluorescent spots in three dimensions from the image stacks (Figure 1C). 
We have shown previously that each spot corresponds to an individual mRNA molecule and that there is no significant loss of mRNA molecules during the FISH procedure [25], thus establishing that the method is a valid way to count the number of mRNA molecules in individual cells. **Measurement of Cell-to-Cell Variations in Clonal Cells** To measure cell-to-cell variations in mRNA numbers in clonal cell lines, we generated stable CHO cell lines expressing our construct. The gene was placed under the control of a promoter whose expression could be controlled in mammalian cells (Figure 2A), and it was stably integrated into the genome via electroporation, resulting in the introduction of a single copy of the gene (as verified by Southern blotting; unpublished data). Observation of the mRNA synthesized in these cells showed that there were marked variations in the number of mRNA molecules from cell to cell (Figure 2B). The occasional larger bright areas that were observed are recently activated transcription sites [25,26] caused by a buildup of nascent mRNA that had not yet diffused away. The observation that these sites occur infrequently indicates that the mRNA is not being continually synthesized; rather, it is synthesized during brief periods of time when the gene is transcriptionally active. We refer to these periods as transcriptional bursts. The rest of the time, the gene is in a transcriptionally inactive state, during which no mRNA molecules are synthesized and those synthesized earlier are degraded. Quantitative evidence of the burst-like nature of transcription comes from comparing the number of mRNA in cells containing active transcription sites to those without active transcription sites.
We found that of 97 randomly selected cells from cell line E-YFP-M1-7x (details of construct discussed below), the 23 containing transcriptional foci had an average of 244 mRNA per cell, as compared to 38 mRNA per cell in the 74 without any active transcription site ($p < 10^{-5}$). Because the FISH method also gives the spatial location of the mRNA, we were also able to compare the relative numbers of mRNA in the nucleus and cytoplasm to study further the behavior of the transcriptional bursts. If transcription occurs in bursts, then one would expect to find more mRNA in the nucleus than in the cytoplasm when the gene is active, as the nuclear mRNA has not yet been exported. However, when the gene is in the inactive state, the nuclear mRNA will be exported without being replenished, resulting in a lower proportion of the total cellular mRNA being found in the nucleus. To examine such behavior, we costained the cells with DAPI after the hybridization and determined whether each mRNA was located in the cytoplasm or nucleus. Often, we found that cells without a transcriptional focus had only cytoplasmic mRNA, whereas cells with a transcription site usually had a large number of nuclear mRNA (Figure 2D). Statistically speaking, cells containing active transcription sites had a higher percentage of reporter mRNA in the nucleus (35%, 17 cells analyzed) than did cells without active transcription sites (25%, 22 cells analyzed) ($p = 0.0093$). Interestingly, the two cells depicted in Figure 2D are clearly descended from the same parent cell but seem to display different transcriptional behavior. This behavior is typical and indicates that variations in global extrinsic factors such as position in the cell cycle are not the primary source of variation in the activity of the transgene; this is analyzed more systematically in the “Relative Contributions of Intrinsic and Extrinsic Factors to Variations in mRNA Level” section of the Results.
Further evidence for transcriptional bursts comes from an analysis of the statistics of the distribution of mRNA molecules per cell over the entire cell population. If mRNA were produced at a constant rate, one would expect a Poisson distribution of mRNA per cell, in which case the mean number of mRNA molecules per cell and the variance (the square of the standard deviation) would be equal. However, we found that the mean was approximately 40 mRNA molecules per cell, while the variance was roughly 1,600 molecules squared, indicating that the mRNA is not synthesized at a constant rate, consistent with the occurrence of transcriptional bursts. **Mechanisms Controlling Transcriptional Bursts** To investigate the mechanisms controlling transcriptional bursts, we altered the overall level of transcription both by changing the amount of transcriptional activator present in the cells and by altering the number of binding sites for that activator in the promoter. To accomplish this, the gene was inserted downstream from a minimal cytomegalovirus promoter, and either one or seven copies of the tetracycline-sensitive *tet* operator sequence were present upstream from the promoter (Figure 2A). Transcription from the promoter is only possible when a protein known as the tet-transactivator (tTA) binds to the operator sequence. tTA is a protein consisting of two domains: one that binds to the *tet* operator (derived from the TetR protein), and one that promotes transcription of nearby genes (the VP16 acidic activation domain). The tetracycline-like antibiotic doxycycline binds to the DNA-binding domain of tTA, preventing it from binding to DNA. By varying the level of doxycycline in the growth medium, we were able to control the level of free tTA in the cells [27].
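The variance-to-mean ratio (the Fano factor) quoted here, roughly 1,600/40 = 40, is the quantitative fingerprint of bursting. The sketch below contrasts constant-rate (Poisson) synthesis with compound, geometric-size bursts; the burst parameters are hypothetical, chosen only to give a comparable mean of about 40:

```python
import math, random, statistics

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for the modest rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def geometric(mean):
    """Geometric burst size on {1, 2, ...} with the given mean."""
    return int(math.log(1.0 - random.random()) / math.log(1.0 - 1.0 / mean)) + 1

def fano(xs):
    """Variance-to-mean ratio; 1 for a Poisson distribution."""
    return statistics.variance(xs) / statistics.mean(xs)

# (a) constant-rate synthesis: steady-state counts are Poisson, Fano ~ 1
constant = [poisson(40) for _ in range(5000)]

# (b) bursty synthesis: ~2 surviving bursts per cell with geometric burst
#     sizes of mean 20, so the mean is still ~40 but the Fano factor is large
bursty = [sum(geometric(20) for _ in range(poisson(2))) for _ in range(5000)]

print(round(fano(constant), 2), round(fano(bursty), 2))
```

A Fano factor far above 1 at the same mean is exactly the signature reported for the measured distribution.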
Two constructs (1x-tetO and 7x-tetO) were stably integrated into CHO cells that had previously been modified to express tTA, resulting in the cell lines E-YFP-M1-1x and E-YFP-M1-7x, each containing a single copy of the respective reporter gene. Representative snapshots of clonal cell fields are shown in Figure 2B and 2C. We observed that the size of the bursts was larger in the E-YFP-M1-7x cell line than in the E-YFP-M1-1x cell line. We then varied the amount of doxycycline in the growth medium and measured the distribution of the number of mRNA molecules per cell across several fields of cells grown at each concentration of doxycycline (Figure 3A; mRNA counts given in Table S1). As expected, increasing the level of doxycycline resulted in a decrease in the mean number of mRNA molecules per cell (Figure 3B, top). However, we also found that the variability across the population (quantitatively measured by the “noise,” which we define as the standard deviation divided by the mean) remained constant over all doxycycline concentrations for the 1x-tetO construct but varied non-monotonically for the 7x-tetO construct (Figure 3B, bottom). Moreover, we found that the noise properties do not change if one considers mRNA concentration rather than absolute number (Figure S2; compare with Figure 3B, bottom). This is most likely because the primary source of variation is the activation state of the gene itself, which does not vary with the volume of the cell. Both results are inconsistent with conventional stochastic models of gene expression [3,5,15,28], which predict that noise should increase steadily as the mean level of transcription decreases.
To explain this behavior, we invoked a model of gene activation and inactivation [29] in which the gene undergoes infrequent transitions between a transcriptionally active state, during which many mRNA molecules are produced, and a transcriptionally inactive state, in which no mRNA molecules are produced (the model is analyzed in more detail in Protocol S1, where a complete formula for the mRNA distribution is presented). Using a fast numerical evaluation of the theoretical distributions resulting from this model, we were able to fit the experimentally obtained data to find expressions for the gene activation rate, $\lambda$ (to within a factor of the mRNA degradation rate, $\delta$), and the average number of mRNA molecules produced during each burst, $\mu/\gamma$, where $\mu$ is the transcription rate of the active gene and $\gamma$ its inactivation rate (henceforth referred to as the average burst size) (Figure 3C). The mRNA half-life was determined by quantitative RT-PCR to be approximately $4 \pm 1$ h (Figure S1; see Materials and Methods for further discussion). The results of the fitting procedure show that either increasing the number of transcription factor binding sites or increasing the amount of intracellular transcription factors increases the average burst size. Based on our analysis, it is impossible to say whether this is due to a decrease in the rate of gene inactivation or an increase in the rate of transcription of the activated gene. This fact does, however, point to an important difference from the bacterial case, where gene activation and inactivation have typically been associated with transcription factor association and dissociation [4]. Were that the case, decreasing the amount of transcription factors would serve to decrease the rate of activation while leaving the rates of inactivation and transcription the same.
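The two-state model described above can be simulated directly with the Gillespie algorithm, and its stationary mean obeys $\langle m \rangle = [\lambda/(\lambda+\gamma)]\,\mu/\delta$. A sketch with illustrative rates (not the fitted values from Figure 3C):

```python
import random

random.seed(2)

def telegraph_mean(lam, gam, mu, delta, t_end=20000.0):
    """Gillespie simulation of the two-state model: the gene switches
    inactive -> active (rate lam) and back (rate gam); the active gene makes
    mRNA at rate mu; mRNA decays at rate delta. Returns the time-averaged
    mRNA count. All rates here are illustrative, not the fitted values."""
    t, active, m, acc = 0.0, 0, 0, 0.0
    while t < t_end:
        a = [lam * (1 - active), gam * active, mu * active, delta * m]
        total = a[0] + a[1] + a[2] + a[3]
        dt = random.expovariate(total)
        acc += m * min(dt, t_end - t)
        t += dt
        r = random.random() * total
        if r < a[0]:
            active = 1
        elif r < a[0] + a[1]:
            active = 0
        elif r < a[0] + a[1] + a[2]:
            m += 1
        else:
            m -= 1
    return acc / t_end

# Example: bursts of mean size mu/gam = 30, active fraction lam/(lam+gam)
lam, gam, mu, delta = 0.1, 1.0, 30.0, 0.17   # delta ~ ln(2)/4 per h, i.e. the ~4 h half-life
avg = telegraph_mean(lam, gam, mu, delta)
print(avg, lam / (lam + gam) * mu / delta)   # simulated vs analytic mean
```

Because only ratios of rates enter the stationary distribution, fits of this kind determine $\lambda$, $\gamma$, and $\mu$ only up to the overall scale set by $\delta$, which is why the measured mRNA half-life is needed.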
In our data, the rate of gene activation appears to be fairly constant until the doxycycline concentration reaches a relatively high level, at which point it increases, arguing against the application of the bacterial model to our system. It is unclear why the rate of gene activation increases at the larger doxycycline concentrations, since decreasing the level of transcription factors should only decrease the rate of gene activation. This might be due to factors not included in our model, or to some physiological response of the cell induced at higher doxycycline concentrations. However, our data generally indicate that modulating the concentration of transcriptional activators affects the overall level of transcription by altering the average burst size rather than the burst frequency. **Relative Contributions of Intrinsic and Extrinsic Factors to Variations in mRNA Level** If the variation in expression levels from one cell to another were truly due to random gene activation events, then the presence of multiple independently activating copies of the gene would result in less cell-to-cell variability in mRNA numbers (i.e., the noise should decrease). Intuitively, this can be seen by considering simultaneous coin tosses: if only one coin is tossed, it is either heads or tails, but if several coins are tossed at once, the chance that the set of them is close to 50% heads and 50% tails increases with the number of coins used. To test this possibility, we integrated multiple copies of our reporter gene into one region of the genome via cationic lipid-based transfection (lipofection), which simultaneously integrates tens to hundreds of gene copies, often in tandem and at the same locus, and isolated cell line L-GFP-M1-7x.
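The coin-toss argument can be made quantitative: the sum of $N$ independent, identically distributed per-copy counts has its noise (standard deviation over mean) reduced by $1/\sqrt{N}$. A sketch using a hypothetical bursty per-copy distribution:

```python
import math, random, statistics

random.seed(3)

def poisson(lam):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def one_copy():
    """mRNA count from a single, independently bursting gene copy
    (compound Poisson with geometric burst sizes; purely illustrative)."""
    b = 20.0
    return sum(int(math.log(1.0 - random.random()) / math.log(1.0 - 1.0 / b)) + 1
               for _ in range(poisson(2)))

def noise(xs):
    """Standard deviation divided by the mean, as defined in the text."""
    return statistics.stdev(xs) / statistics.mean(xs)

single = [one_copy() for _ in range(4000)]
tandem10 = [sum(one_copy() for _ in range(10)) for _ in range(4000)]
print(round(noise(single), 2), round(noise(tandem10), 2))
```

Ten independent copies cut the noise by roughly $\sqrt{10}$; the absence of any such reduction in the multi-copy cell line is what rules out independent activation of the copies.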
Generally, the number of mRNA produced in this cell line was much larger than in E-YFP-M1-7x (which carries only one copy of the reporter gene), but the cell line still displayed massive cell-to-cell variations (Figure 4A) with a markedly skewed distribution (Figure 4B). Statistically, this is demonstrated by the fact that the noise characteristics of these two cell lines were similar; since the mean number of mRNA molecules per cell increased roughly 10-fold (at no doxycycline) over the E-YFP-M1-7x cell line, one would expect the noise to decrease by a factor of $\sqrt{10} \approx 3$ if the genes expressed independently, but no such decrease was observed (Figure 4C; compare to Figure 3B). There are two alternative explanations for this observation. One possibility is that the massive fluctuations seen in the number of mRNA molecules per cell are due to fluctuations in global factors that simultaneously affect the expression of all of the reporter genes (e.g., fluctuations in the levels of tTA or RNA polymerase II); this is usually referred to as extrinsic noise [3,15]. Alternatively, since the genes were integrated into the same genomic locus, it is possible that the genes express in a coordinated fashion in response to a random, local gene activation event (such as chromatin decondensation) that affects all nearby genes [1]. However, if local gene activation events occur at random, then genes located at distant sites would activate and deactivate independently. This type of noise, due to the random occurrence of events involved in gene expression, is usually referred to as intrinsic noise and is not dependent on global factors. To explore these two alternative explanations, we constructed another reporter gene, CFP-M2, that encoded a cyan fluorescent protein and contained a different tandem array of probe-binding sequences in its 3’-UTR, denoted M2.
This allowed its mRNA to be distinguished from the mRNA synthesized from reporter genes containing the M1 sequence array by performing FISH with an additional probe that binds to the M2 array but is conjugated to a differently colored fluorophore. In one series of experiments, this gene was integrated into a cell line (L-GFP-M1-7x) that already expressed a reporter gene containing the M1 array, resulting in the CFP-M2 reporter gene being integrated into a locus distant from the site of integration of GFP-M1 (Figure 5A, left). In a second series of experiments, the two reporter genes were integrated simultaneously via lipofection, resulting in both genes being integrated at the same locus (Figure 5A, right).

Figure 4. Cell-to-Cell Variations in mRNA Numbers in a Cell Line with Multiple Reporter Gene Integrations at the Same Gene Locus (A) Representative field from cell line L-GFP-M1-7x, generated by lipofection, where the mRNA was hybridized to FISH probe P1-TMR; the image was obtained by merging a three-dimensional stack of images. (B) Histogram showing the distribution of mRNA molecules per cell for cell line L-GFP-M1-7x when grown in media containing no doxycycline. (C) Graphs showing the population mean (top) and noise (defined as the standard deviation divided by the mean) (bottom) as a function of doxycycline concentration. Error bars were obtained by bootstrapping. DOI: 10.1371/journal.pbio.0040309.g004

If the variations in mRNA expression in each cell were due to global factors, bursts of mRNA synthesis from these distinct genes would likely occur simultaneously in the same cell regardless of their genomic location (i.e., both cell lines would show a strong correlation between the expression of both reporter mRNAs).
However, if gene expression is controlled by local gene activation events affecting individual loci independently, the first cell line, in which the distinct reporter genes are integrated at different loci, would show no correlation in gene expression, whereas the second cell line, in which the distinct reporter genes are integrated at the same locus, would show a strong correlation in gene expression. When integrated at separate loci (as evidenced by the presence of two distinct transcription sites), the two reporter mRNAs each individually displayed the large fluctuations observed previously (Figure 5B), yet the occurrence of those fluctuations was completely uncorrelated between the two ($R = 0.056$, $p = 0.57$) (Figure 5B, inset). However, when the two genes were integrated at the same locus (as evidenced by a single, dual-colored transcription site; Video S2), the genes produced both types of mRNA in simultaneous bursts (Figure 5C and inset; $R = 0.89$, $p = 1.2 \times 10^{-38}$). Taken together, the results of these experiments show that infrequent gene activation and inactivation events control the variability in mRNA levels, and that these events occur randomly and are not dependent on global, extrinsic factors. Moreover, the results imply that these gene activation events are spatially extended, in that they affect whole regions of the genome at once. **Cell-to-Cell Variations in the mRNA Encoding the Large Subunit of RNA Polymerase II** To further examine the role of global, extrinsic factors, we checked for fluctuations in a putative extrinsic factor, RNA polymerase II, to see if the level of expression of its mRNA correlated with the level of expression of the mRNA from a reporter gene. We were able to image individual molecules of the natural mRNA encoding the large subunit of RNA polymerase II by exploiting the presence of a naturally occurring 21-nucleotide-long sequence that is repeated 52 times in the mRNA.
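The $R$ values quoted in these dual-reporter comparisons are Pearson correlation coefficients computed over per-cell mRNA counts; a minimal implementation (the example counts are hypothetical, merely mimicking coincident bursts):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between per-cell counts of two mRNAs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-cell counts for two same-locus reporters: bursts coincide,
# so the two counts rise and fall together and R is close to 1.
m1 = [12, 250, 8, 90, 400, 30, 15, 220]
m2 = [10, 230, 15, 70, 380, 25, 20, 200]
print(round(pearson_r(m1, m2), 3))
```

For separate-locus reporters the per-cell bursts are out of phase, so the same statistic falls toward zero, which is the pattern reported above.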
We used a FISH probe for the repeated sequence that was labeled with a distinctively colored fluorophore, and we counted fluorescent spots similar to those observed previously (Figure 6A). We found that this mRNA also displayed bursts of synthesis and that its distribution across a field of cells was similar to those observed for the reporter genes, with a variance over 50 times the mean (Figure 6B, top). To check for a correlation between the level of this mRNA and that of a reporter gene (in cell line E-YFP-M1-7x), we also quantified the level of mRNA expression of the reporter gene in the same cells. No significant correlation was found ($R = 0.083$, $p = 0.41$) (Figure 6B, bottom). By using the model of gene activation and inactivation, we were able to estimate the rates of gene activation, inactivation, and transcription to within a factor of the mRNA half-life (see Figure 6B for parameter values and confidence intervals), indicating that the activation was indeed infrequent and burst-like. These results show two things: (a) the synthesis of mRNA from natural genes can also be burst-like, and (b) fluctuations in the number of mRNA molecules encoding the large subunit of RNA polymerase II are not a source of noise in the expression of other genes. **Propagation of mRNA Variability to Protein Levels** To investigate the effects that burst-like transcription of mRNA had upon intracellular protein levels, we simultaneously quantified the number of mRNA and the fluorescent proteins they encoded in individual cells. To assess the effects that the rate of protein degradation had upon protein variability, we performed this analysis on a cell line expressing a fluorescent protein that was actively degraded and another cell line expressing a fluorescent protein with no active degradation.
Figure 5. Dual-Reporter Experiments Showing Variations Are Intrinsically Random and Can Affect an Entire Gene Locus. (A) Schematic depicting dual-reporter integration experiments, where the two reporters are either integrated in separate locations in the genome (left) or at the same locus (right). (B) Two-color overlay showing a merged three-dimensional stack of images of the cell line in which two different reporter genes (one expressing mRNA containing the M1 sequence array and the other expressing mRNA containing the M2 sequence array) are integrated into separate loci (cell line L-GFP-M1-7x/CFP-M2-7x); green corresponds to the signal from GFP-M1 mRNA and red corresponds to the signal from CFP-M2 mRNA (using probes P1-TMR and P2-Alexa-594, respectively). The relative mRNA levels of both genes were quantified by counting the mRNA of each color in a single optical slice in each cell (inset). (C) Two-color overlay showing a merged three-dimensional stack of images of the cell line in which the two distinct reporter genes are integrated into the same locus (cell line E-YFP-M1-CFP-M2), where the same FISH probes were used as in (B). The relative mRNA levels of both genes were quantified by counting the mRNA of each color in a single optical slice in each cell (inset). The scale bars are 5 μm long. DOI: 10.1371/journal.pbio.0040309.g005

For the case of active degradation, we used a cell line stably expressing the GFP-M1 reporter gene, which encoded a green fluorescent protein (GFP) that had been tagged at the C-terminus with a short amino acid sequence rich in proline, glutamic acid, serine, and threonine (d2EGFP) that targets the protein for active degradation (with a half-life of approximately 2 h), and we examined the correlations between the mRNA and protein levels (Figure 7A). We found that the levels correlated quite well, with correlation coefficients larger than 0.78 ($p < 2.6 \times 10^{-25}$) over a range of transcriptional strengths.
Moreover, the distribution of total protein levels appeared rather similar to those of the mRNA (Figure 7A, marginal histograms on top, mRNA, and right, protein). For the case of no active protein degradation, we used another cell line, this time expressing the CFP-M2 reporter gene, in which the fluorescent protein did not contain any degradation tags. We found that the correlation between the mRNA and protein levels was significantly lower than before ($R = 0.35$, $p = 1.9 \times 10^{-4}$) (Figure 7B). Moreover, while the mRNA distribution was still heavily skewed with long tails, the protein distribution was somewhat less skewed. We also examined the distribution of proteins in live cells from cell line E-YFP-M1-7x, whose reporter gene also encodes a fluorescent protein that is not actively degraded. In this case, the single-copy integration produced too few proteins for us to detect after the fixation procedure, preventing us from simultaneously measuring the mRNA levels, but we found that the fluorescent protein levels in live cells also displayed a much less skewed distribution as compared with the actively degraded proteins (Figure 7C). To explain the differences in protein distributions and correlation between the actively degraded and nondegraded cases, we added protein dynamics to the model of mRNA dynamics and examined the behavior of this model through the use of Gillespie’s stochastic simulation algorithm [30]. (The parameters used for the simulations were those obtained from cell line E-YFP-M1-7x under conditions of no doxycycline, although the qualitative features observed do not depend heavily on the specific parameters used.) These simulations show that decreasing the rate of protein degradation results in a sharp decrease in the correlation between the mRNA and protein levels (Figure 7D). 
Also, the protein distribution changed from being heavily skewed to being more Gaussian in nature (Figure 7D, right marginal histograms), even though the mRNA distribution remained heavily skewed in all cases (Figure 7D, top marginal histogram). Intuitively, this is because proteins with fast degradation rates will be abundant only when the mRNA encoding them is abundant, resulting in a high correlation and similar distributions. However, if the proteins degrade very slowly, then proteins from earlier transcriptional bursts may still be present when new bursts occur. In this case, the transcriptional bursts merely serve to occasionally “top up” the amount of fluorescent proteins, resulting in less skewed distributions and a lower correlation between mRNA and protein numbers. Qualitatively, these predictions correspond well with our experimental observations.

**Discussion**

We have shown that the mRNA levels of both reporter genes and native genes display large cell-to-cell variations in mammalian cells due to intrinsically random, infrequent events of gene activation. These burst-like fluctuations are not restricted to engineered reporter genes, but occur in natural genes as well, as demonstrated for the mRNA encoding the large subunit of RNA polymerase II. We have further shown that these events are controlled by gene regulatory mechanisms, such as the level of activator proteins and the number of transcription factor binding sites, and can affect regions of the genome rather than just specific genes. Moreover, we have found that the variations are intrinsically random, rather than due to global extrinsic factors. This contrasts with the results of previous studies in lower eukaryotes [1,5,16], although some qualification of those studies may be required, as some extrinsic effects may simply be related to fluctuations in cell volume (see Protocol S1 for a further comparison to previous studies).
This finding is significant, because extrinsic variations are often due to fluctuations in transcription factors [1,8] and the cell cycle [16], which means they are at least partially regulated, whereas intrinsic variations are by definition uncontrollable. We have also shown that the statistics of these variations are well described by a model in which the only sources of randomness are random events of gene activation and inactivation, implying that one can safely ignore the randomness inherent in the chemical reactions describing transcription and translation.

[Figure 7. Correlation between mRNA and protein levels. (A) Active protein degradation: histograms of mRNA molecules per cell; $R = 0.78$ (no dox), $R = 0.79$ (0.08 mg/ml dox), $R = 0.84$ (0.16 mg/ml dox). (B) No active protein degradation: $R = 0.35$ (no dox). (C) No active protein degradation (live cells); histogram of GFP molecules per cell. (D) Stochastic model with variable protein half-life (mRNA half-life = 4 h); protein half-lives of 1.56, 25, and 200 h give $R = 0.92$, $R = 0.43$, and $R = 0.17$, respectively.]

This is qualitatively different from the bacterial case, where such reactions are thought to be the dominant source of variability in gene expression [3,7,28,31–33], despite some recent evidence of relatively mild burst-like behavior in *Escherichia coli* [4]. Other studies performed in higher eukaryotes have found behavior similar to what we have observed, albeit by different means [20–22,24]. In particular, the work of Chubb et al. [23] showed through temporal measurements of active transcription sites that genes do indeed undergo random transitions between transcriptionally active and inactive states, providing a powerful corroboration of our model. Their study used the MS2 method of mRNA detection, which has previously been used to monitor real-time kinetics of gene activity [26].
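The behavior corroborated here — random transitions between transcriptionally active and inactive states — can be simulated exactly with Gillespie's direct method [30]. A minimal sketch, using illustrative rate constants rather than the fitted values reported in the paper:

```python
import random

def gillespie_telegraph(t_end, k_on, k_off, k_tx, k_deg, seed=0):
    """Gillespie direct method for the two-state (telegraph) gene model:
    the gene toggles between active and inactive states; an active gene
    produces mRNA, and mRNA degrades with first-order kinetics."""
    rng = random.Random(seed)
    t, active, mrna = 0.0, 0, 0
    while t < t_end:
        rates = [k_on if not active else 0.0,   # gene activation
                 k_off if active else 0.0,      # gene inactivation
                 k_tx if active else 0.0,       # transcription
                 k_deg * mrna]                  # mRNA degradation
        total = sum(rates)
        t += rng.expovariate(total)             # exponential waiting time
        if t >= t_end:
            break
        u = rng.random() * total                # pick which reaction fires
        if u < rates[0]:
            active = 1
        elif u < rates[0] + rates[1]:
            active = 0
        elif u < rates[0] + rates[1] + rates[2]:
            mrna += 1
        else:
            mrna -= 1
    return mrna

# Infrequent activation with fast transcription yields bursty, highly
# variable mRNA counts across "cells" (independent simulation runs).
counts = [gillespie_telegraph(50.0, 0.05, 0.5, 20.0, 1.0, seed=i) for i in range(200)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var / mean)  # a Fano factor well above 1 indicates bursts
```

With infrequent activation and rapid transcription while active, the simulated snapshot across cells reproduces the hallmark of bursting: a variance far exceeding the mean.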
**Possible Physical Mechanisms for Transcriptional Bursting**

The most likely sources of the transcriptional bursts are random events of chromatin remodeling [1,5]. If this is the case, then gene activation would correspond to chromatin decondensation and gene inactivation would correspond to chromatin condensation, facilitated by the activity of histone acetyltransferases and histone deacetylases, respectively. Our experiments with two reporter genes integrated in tandem or at different locations in the genome support this idea (Figure 5). When the genes are located in distant regions of the genome, they burst independently, but when they are located near each other, they burst together. This is consistent with previous studies in which VP16-mediated decondensation was observed to extend over a region much larger than a single gene [34,35]. If the decondensation of chromatin is a prerequisite for gene activation, then the nucleation of this decondensation will be a significant rate-limiting step. The structure of chromatin at the level of nucleosome stacking suggests that the “breathing” events that permit the entry of transcriptional regulators will be infrequent [36]. However, once a transcription regulator is able to bind to its target site on the DNA exposed during a breathing event, it would attract histone acetyltransferases and thereby keep the immediate chromatin context accessible. Of course, the rate of nucleation will likely depend on the genomic location of the gene in question, with some regions exhibiting lower nucleation frequencies than others. The fact that mRNA is produced in bursts points to new means by which the cell may control transcription.
There are three apparent means by which a cell could upregulate a gene’s transcription: it could (i) increase the rate of gene activation, (ii) increase the rate of transcription when the gene is in the active state, or (iii) decrease the rate of gene inactivation (the opposite behaviors, of course, apply should a cell downregulate a gene’s transcription). These mechanisms, while all resulting in the same average increase in transcription, differ markedly in the nature of the cell-to-cell variations induced. Our data indicate that in our system, either case (ii) or (iii) applies, whereas case (i) does not; in other words, the average burst size is being modulated rather than the burst frequency. The observation that altering the level of transcriptional activator does not reduce the rate of gene activation supports this hypothesis. It also argues for the intrinsic nature of the variations observed: if the primary source of cell-to-cell variation is the infrequent events of gene activation, and those events are independent of the level of transcriptional activator, then the variations are likely due to intrinsic fluctuations in gene activation that do not depend on transcriptional activators. If gene activation does indeed correspond to chromatin remodeling, this points to the possibility that the nucleation of chromatin decondensation at a gene locus may be an inherently random event that does not require the presence of transcription factors but, once initiated, requires those factors to sustain the decondensed state.
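These distinctions can be made concrete with the standard steady-state results for the two-state model (Protocol S1; see also Peccoud and Ycart [29]). Writing $\lambda$ for the activation rate, $\gamma$ for the inactivation rate, $\mu$ for the transcription rate in the active state, and $\delta$ for the mRNA degradation rate,

$$\langle m \rangle = \frac{\mu}{\delta}\,\frac{\lambda}{\lambda+\gamma}, \qquad \frac{\sigma_m^2}{\langle m \rangle} = 1 + \frac{\mu\,\gamma}{(\lambda+\gamma)(\lambda+\gamma+\delta)}.$$

Increasing $\lambda$ (case i), increasing $\mu$ (case ii), or decreasing $\gamma$ (case iii) all raise the mean $\langle m \rangle$, but increasing $\mu$ inflates the Fano factor (larger bursts), whereas increasing $\lambda$ pushes it toward the Poisson value of 1 (more frequent bursts that average out). The form of the cell-to-cell variation therefore discriminates among mechanisms that produce the same mean.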
**Mathematical Model**

Our mathematical treatment of stochastic gene expression is rather different from methods based on moment-generating functions [28] and applications of the fluctuation-dissipation theorem [31] in that we are generally more concerned with obtaining information about the nature of the entire distribution rather than simply finding formulas for the first two moments (although we do provide alternate derivations of such formulas in Protocol S1). While moment computations are very useful in evaluating stochastic models in bacteria, we believe that information regarding the entire distribution is critical to understanding the observed burst-like events that resulted in heavily skewed distributions, since such distributions are not very well described by population means and variances. Our use of exact solutions for the complete distribution enabled us to perform rigorous statistical determinations of key model parameters. Of course, obtaining expressions for such distributions is generally difficult for most chemical master equations, and so we anticipate that the use of telegraph-like signals (as elucidated in Protocol S1) may lead to significant simplifications. The primary assumption that allows the use of such models is that the randomness associated with individual events of transcription and translation is relatively mild compared with that arising from random gene activation and inactivation. We anticipate this assumption to be generally valid in higher eukaryotes, especially given the role that chromatin dynamics plays in their expression patterns. Such methods may find particular utility in the study of the dynamics of cell signaling networks, which have been shown to exhibit the burst-like variations observed here [24,37].

**Implications for Cellular Function**

In a wider sense, the presence of such large, unpredictable fluctuations in gene expression may initially appear to be a significant impediment to the functioning of a cell.
In particular, given its essential role in cellular function, it is surprising that the gene encoding the large subunit of RNA polymerase II also displays fluctuations on the order of those seen in the reporter genes. Our analysis of protein levels yields a resolution to this apparent paradox: if the degradation rate of the proteins is sufficiently small, then the variations in protein level will be buffered, because the proteins from new bursts serve only to ‘top up’ the proteins already present from previous bursts. This suggests that essential genes whose mRNA expression is burst-like should have relatively stable proteins. Moreover, there are other ways in which protein variations may be further reduced. For instance, should two different proteins, each bursting independently, form a heteromeric complex, then the variations in the number of complexes will be somewhat buffered from the variations in each component. Conversely, there may also be situations in which burst-like expression of unstable proteins is desirable. Many examples of such situations exist in bacteria and yeast, often as a result of multistable behavior [8,10–12,14,38]. However, while such phenotypic variability may be advantageous for unicellular organisms, in which each cell is essentially identical, the same reasoning does not necessarily apply to multicellular organisms, in which the diversity of cellular function is controlled by the organism’s developmental program. It is possible, though, that higher eukaryotes might also be able to exploit this variation to achieve a multitude of cellular behaviors in otherwise homogeneous tissues and cell types, leading to, for example, mosaic phenotypes [39] or transitions between phases in the viral life cycle [12].
In multicellular organisms, however, the reasoning behind the need for phenotypic variability is somewhat different than in unicellular organisms, since the variability is not designed to take advantage of an unpredictable environment but rather to achieve varied function or behavior within a relatively constant environment. We expect that distinctions such as these will result in interesting differences in the properties of stochastic gene expression in unicellular and multicellular organisms.

**Materials and Methods**

**Multimer construction.** Construction of the DNA fragment with 32 probe-binding sites in the pGEM-FlI(-z) cloning vector (Invitrogen, Carlsbad, California, United States) was performed by the method described by Robinett et al. [40]. The oligonucleotides used to produce the M1 32-mer are M1-forward: TCGACGGCTCAGTGCGCTAAGGATTATATAGGAAACCCTTAC-CAAGCCGTCTAGGCGGAGG and M1-reverse: GATTCCTGGGCCT-GAGGGCGCTTGAGGGTTCTCATATAAAACTCTTCTAGGCCAC-CAGTCCG. The underlined portions of the sequence correspond to the SacI, XhoI, and BamHI sites used for integration into the host vector. A similar procedure was used to produce the M2 32-mer described by Vargas et al. [29]. The following oligonucleotides were used: M2-forward: TGACGGCTCAGTGCGCTAAGGATTATAGGAAACCCTTAC-CAAGCCGTCTAGGCGGAGG and M2-reverse: GATTCCTGGGCCT-GAGGGCGCTTGAGGGTTCTCATATAAAACTCTTCTAGGCCAC-CAGTCCG. The resulting plasmids were pGEM-M1-32x and pGEM-M2-32x.

**Creation of the reporter genes.** The reporter genes were constructed by adding open reading frames for yellow fluorescent protein (YFP) and cyan fluorescent protein (CFP) to the M1 and M2 multimers in the pGEM-M1-32x and pGEM-M2-32x plasmids, respectively.
The sequences encoding YFP and CFP were amplified via PCR from pBdH3 and pBdH5 (University of Washington Yeast Resource Center, Seattle, Washington, United States) and inserted upstream from the M1 and M2 multimers between the SacI and XhoI restriction sites, also introducing a BglII restriction site between the SacI site and the start codon of the open reading frame.

**Integration into expression vectors.** These reporter genes were then integrated into expression vectors enabling their expression in mammalian cells. The base vectors chosen were the pTRE2Hg, pTRE2Pur, and pTRE42EGFP vectors (Clontech, Palo Alto, California, United States). Each contains a tetracycline-responsive promoter consisting of seven copies of the tet operator followed by a minimal cytomegalovirus promoter, and a termination signal. Additionally, the pTRE2Hg and pTRE2Pur vectors enabled selection with appropriate quantities of hygromycin B (Invitrogen) or puromycin (Sigma, St. Louis, Missouri, United States). To create a vector with one copy of the tet operator, we amplified the promoter region of pTRE2Hg using a primer containing one copy of the tet operator. This was then cloned back into the pTRE2Hg promoter site, replacing the native promoter and creating the plasmid pTRE2Hg1x. The YFP-M1 construct was then extracted from the pGEM host vector with BglII and NotI and inserted into the pTRE2Hg and pTRE2Hg1x vectors between the BamHI and NotI sites. The CFP-M2 construct was similarly inserted into the pTRE2Hg and pTRE2Pur vectors. This created the plasmids pTRE2Hg-YFP-M1, pTRE2Hg1x-YFP-M1, pTRE2Hg-CFP-M2, and pTRE2Pur-CFP-M2. The M1 multimer was also inserted into the 3′-UTR of the pTRE42EGFP vector between the BamHI and EcoRI sites, creating the plasmid pTRE42EGFP-M1. All constructs were verified by sequencing.

**Creation of cell lines.**
All cell lines were derived from the CHO-A8-Tet-off cell line (Clontech), which possesses a stably integrated gene expressing the tetracycline-regulated Tet-off transactivator. Cell lines E-YFP-M1-1x and E-YFP-M1-7x, containing the 1x-tetO and 7x-tetO constructs, were generated by electroporation (Bio-Rad, Hercules, California, United States) using plasmids pTRE2Hg1x-YFP-M1 and pTRE2Hg-YFP-M1, respectively. The electroporator settings were 200 V and 900 μF, using 10 μg of DNA linearized with XmaI added to 10^6 cells in 1 ml of PBS in a 4-mm cuvette. The multiple-copy integration clones were generated using LipofectAMINE 2000 (Invitrogen) following the manufacturer's instructions. Cell line L-GFP-M1-7x was created by transfecting the CHO-A8-Tet-off cell line with the pTRE42EGFP-M1 plasmid linearized with XmaI. To create the cell line in which two reporter genes were integrated at different loci, cell line L-GFP-M1-7x was transfected using LipofectAMINE 2000 with the plasmid pTRE2Hg-CFP-M2, linearized with XmaI; the resultant cell line is L-GFP-M1-7x-L-CFP-M2-7x. To create the cell line in which the two reporter genes were integrated at the same locus, the CHO-A8-Tet-off cell line was cotransfected with equal amounts of pTRE2Hg-YFP-M1 and pTRE2Pur-CFP-M2, resulting in cell line L-YFP-M1-CFP-M2. Cell lines were isolated after transfection by either electroporation or lipofection, selected with the appropriate antibiotic (hygromycin B or puromycin), and then purified by serial dilution. That only one copy of the transgene was integrated into cell lines E-YFP-M1-1x and E-YFP-M1-7x was verified by Southern blotting upon digestion of genomic DNA with the restriction enzyme BglII. Several cell lines were isolated following transfection; all exhibit phenotypes similar to those of the cell lines described in this paper.
Cell lines obtained from lipofection with pTRE42EGFP-M1 were isolated not by antibiotic selection but instead by directly identifying fluorescent cell clusters and purifying by serial dilution. Stability of the gene was verified by DNA FISH (unpublished data).

**Cell culture.** Cells were cultured in the alpha modification of Eagle's minimum essential medium (Sigma) supplemented with 10% Tet-System-Approved fetal bovine serum (Clontech). The growth medium was supplemented with a low concentration of the selective antibiotic to ensure stability of the transfected gene. Appropriate amounts of doxycycline were added to media, and cells were grown at the desired concentration of doxycycline for 4 d to minimize any transient effects. The doxycycline concentration experiments were all performed in parallel with the same batch of media to minimize differences due to media composition.

**Probes for in situ hybridization.** The probes used for in situ hybridization were DNA oligonucleotides synthesized on an Applied Biosystems (Foster City, California, United States) 394 DNA synthesizer using mild phosphoramidites (Glen Research, Sterling, Virginia, United States). The oligonucleotide sequences were P1: 5′-CCGCRGCTTAAGGCAAACCTAARAACTTACGG-CAACA-3′; P2: 5′-RCGAGGTCCGARACCTGCTTGCTGGRCTTCTITG-RCACAACAA-3′; and P3: 5′-AGGAGGCGGAGGARACGCRGGGA-GAAGRRGGCGGAGRAGRCRRGG-3′, where P1 and P2 are complementary to the repeated sequences in M1 and M2, respectively, and P3 is complementary to the repeated sequence in the mRNA encoding the large subunit of RNA polymerase II. The "R"s represent locations where an amino-dT was introduced in place of a regular dT. The oligonucleotides were synthesized on a controlled-pore-glass column (Glen Research) that introduced an additional amino group at the 3′ end of each oligonucleotide.
The probes were then coupled to the fluorophores Cy5.5, Alexa 594, and tetramethylrhodamine (TMR; Molecular Probes, Eugene, Oregon, United States) to create the following probes: P1-TMR, P1-Cy5.5, P2-Alexa-594, and P3-TMR. The probes were purified on an HPLC column to isolate oligonucleotides having the highest degree of coupling of the fluorophore to the amino groups.

**In situ hybridization.** Cells were cultured in multichambered coverglasses (Lab-Tek, Nalge Nunc, Rochester, New York, United States) coated with gelatin. The cells were fixed with 3.7% formaldehyde for 10 min at room temperature, washed twice in PBS, and then incubated for 1 h in 70% ethanol. FISH was then performed using combinations of probes P1, P2, and P3 at a concentration of 1 ng/μl each, following the procedure outlined in Femino et al. [41]. The optimal level of formamide used during hybridization and washing for maximum signal to background was empirically determined to be 25%. For the DNA FISH experiments, two additional steps were added after the permeabilization step: (1) cells were subjected to RNase A treatment at 100 µg/ml in PBS for 30 min at 37 °C, after which (2) cells were heated to 60 °C for 10 min in a water bath in 2× SSC; the hybridization procedure described above was then followed.

**Image acquisition and analysis.** After in situ hybridization, cells were imaged using an Axiovert 200M inverted fluorescence microscope (Zeiss, Oberkochen, Germany) equipped with a 100X oil-immersion objective and a CoolSNAP HQ camera (Photometrics) cooled to −30 °C; standard filter sets were obtained from Omega Optical (Brattleboro, Vermont, United States). Openlab acquisition software (Improvision, Sheffield, United Kingdom) was used to acquire the images. For three-dimensional imaging, z-stacks were acquired by taking adjacent optical sections that were 0.5 µm apart.
The spots in the images were counted in three dimensions using custom software written in MATLAB (The Mathworks, Natick, Massachusetts, United States). The general procedure was to (i) manually select the individual cells in a field, (ii) review the acquired stack of images for each cell, (iii) run a custom linear three-dimensional filter, designed to enhance particulate signals and loosely based on the discrete Laplacian, on the stack of images, (iv) manually select a threshold for the enhanced images, and then (v) count the total number of isolated signals (i.e., connected components) in three dimensions. For each manually selected threshold, thresholds 5% above and below were also analyzed to verify that the particle count did not depend significantly on the particular threshold chosen. Our best estimate is that the number of spots counted by our algorithm is accurate to within 10% of the actual number. In cells with transcription sites, the transcription site itself was subjected to the same counting process, usually resulting in it being counted as a single molecule. This is justified, since the nascent RNAs present at the transcription site are most likely unprocessed pre-mRNA that have not yet been subjected to the various post-transcriptional modifications required for an mRNA to be considered functional [4]. In experiments where fluorophores other than TMR were used, we instead quantified the relative amount of mRNA from cell to cell by counting the number of mRNA in one optical section (chosen near the bottom of the cellular volume). This was done because the relatively low photostability of the Cy5.5 and Alexa 594 dyes meant that the particulate signal became quite weak during the acquisition of the image stacks, making the imaging and counting of individual molecules progressively more difficult and thus significantly less accurate.
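The counting pipeline can be sketched compactly. The following Python mock-up (a toy stand-in for the MATLAB software, with single-voxel "spots" and hypothetical intensities) applies a discrete-Laplacian-style enhancement filter, a threshold, and a 6-connected component count in three dimensions:

```python
def count_spots(stack, threshold):
    """Count isolated bright spots in a 3-D image stack (nested lists),
    loosely following the paper's pipeline: a discrete-Laplacian-style
    filter to enhance particle-like signals, a manual threshold, then a
    count of connected components in three dimensions."""
    nz, ny, nx = len(stack), len(stack[0]), len(stack[0][0])

    def val(z, y, x):
        if 0 <= z < nz and 0 <= y < ny and 0 <= x < nx:
            return stack[z][y][x]
        return 0.0

    # Negative discrete Laplacian: strong response at local intensity peaks.
    filt = [[[6 * val(z, y, x)
              - val(z - 1, y, x) - val(z + 1, y, x)
              - val(z, y - 1, x) - val(z, y + 1, x)
              - val(z, y, x - 1) - val(z, y, x + 1)
              for x in range(nx)] for y in range(ny)] for z in range(nz)]

    # Threshold, then count 6-connected components by flood fill.
    seen = set()
    spots = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if filt[z][y][x] > threshold and (z, y, x) not in seen:
                    spots += 1
                    frontier = [(z, y, x)]
                    while frontier:
                        cz, cy, cx = frontier.pop()
                        if (cz, cy, cx) in seen:
                            continue
                        seen.add((cz, cy, cx))
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            wz, wy, wx = cz + dz, cy + dy, cx + dx
                            if (0 <= wz < nz and 0 <= wy < ny and 0 <= wx < nx
                                    and filt[wz][wy][wx] > threshold):
                                frontier.append((wz, wy, wx))
    return spots

# Toy stack: two isolated bright voxels on a dim background.
stack = [[[1.0] * 8 for _ in range(8)] for _ in range(5)]
stack[1][2][2] = 9.0
stack[3][5][6] = 9.0
print(count_spots(stack, threshold=20.0))  # prints 2
```

Real diffraction-limited spots span several voxels, but the same enhance-threshold-label logic applies; the threshold-stability check described above corresponds to rerunning the count at thresholds 5% above and below.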
In the case of the L-GFP-M1-7x clone, the larger number of mRNA molecules per cell made it impractical to quantify mRNA using the segmentation method above, owing to overlap between the diffraction-limited spots. In this case, we quantified the mRNA by integrating the total fluorescence over the entire cellular volume. To relate this to the absolute number of mRNA in the cell, we counted the mRNA in several test cells whose mRNA could be reliably quantified using our molecule-counting algorithm and correlated the counts to the total fluorescence within the volume. The relationship was found to be linear (Figure S3), thus yielding a simple formula by which one can compute the total number of molecules per cell from its total integrated fluorescence. While this method is likely inaccurate only for low numbers of particles, it is able to yield a reasonable estimate of the number of molecules in cells with very large numbers of mRNA. The Supporting Information videos were created by deconvolving the optical sections and rendering them in three dimensions using Volocity (Improvision). The fluorescent protein levels were quantified from a single fluorescence image taken toward the lower focal plane of the cells. The total fluorescence was found by integrating the difference between the pixel intensities and the average background over the entire cell in the red or the live-cell YFP images. OptiMEM (Sigma) was used as growth medium because of its reduced autofluorescence as compared with regular MEM. All software is available upon request.

**Statistical analysis and estimation of model parameters.** The error bars for the mean and noise reported were obtained by the bootstrap method. The parameters of the model were estimated using the maximum-likelihood method based on an explicit formula derived for the complete mRNA distribution, as outlined in Protocol S1. The error bars reflect 95% confidence intervals.
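The bootstrap used for the error bars resamples cells with replacement and recomputes the statistic; a minimal sketch with hypothetical per-cell counts (not the authors' code):

```python
import random

def noise(counts):
    """Noise measure used in the paper: standard deviation divided by mean."""
    n = len(counts)
    mean = sum(counts) / n
    if mean == 0:
        return 0.0
    var = sum((c - mean) ** 2 for c in counts) / n
    return var ** 0.5 / mean

def bootstrap_ci(counts, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of per-cell
    mRNA counts: resample the cells with replacement, recompute the
    statistic, and take percentiles of the resampled values."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(counts) for _ in counts])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical bursty counts: many low cells plus a few very high ones.
counts = [0, 1, 0, 2, 150, 0, 3, 1, 120, 0, 2, 1, 0, 90, 1, 0, 2, 0, 1, 3]
lo, hi = bootstrap_ci(counts, noise)
print(noise(counts), (lo, hi))  # point estimate and 95% confidence interval
```

Because resampling is done at the level of whole cells, the heavy skew of bursty count distributions is preserved in each replicate, which is what makes the bootstrap appropriate here.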
The p-values for all the correlations given represent probabilities of finding the given data under the null hypothesis of no correlation. The p-values comparing the mRNA distributions in cells either containing or not containing transcription sites, and comparing the nuclear versus cytoplasmic fractions, were found by a permutation method and reflect the chances of obtaining the observed percentages by random chance (i.e., by randomizing which cells are labeled as transcriptionally active and inactive).

**Determination of mRNA decay rate.** The mRNA decay rate was found by performing real-time RT-PCR on RNA extracted from cell line L-GFP-M1-7x grown in medium containing 10 ng/ml doxycycline for a range of times, using the One-Step RT-PCR kit (Qiagen, Valencia, California, United States). The real-time RT-PCR was performed for both the GFP transgene and the highly expressed elongation factor 1 (EF1) gene, which served as an internal control that did not change in response to the doxycycline concentration. We used molecular beacons specific to each gene to perform the real-time PCR. The difference in threshold cycle between the GFP and EF1 signals was linearly related to the time since transcription was halted, allowing an accurate determination of the half-life of the mRNA transcript. The results are shown in Figure S1. In determining the half-life, only the time points at 2, 4, and 8 h were considered, so as not to confound the results with any transient behaviors associated with mRNA processing and export.

**Stochastic simulations.** Simulations of the stochastic mRNA and protein model described in Protocol S1 were performed by implementing Gillespie's direct method [30] in MATLAB (The Mathworks).
The parameters governing the mRNA dynamics were taken from those obtained from cell line E-YFP-M1-7x grown under conditions of no doxycycline: rate of gene activation (\(\lambda/\delta\)) = 2.44, inactivation (\(\gamma/\delta\)) = 2.49, and transcription (\(\mu/\delta\)) = 910. Since we were only interested in the steady state, the protein degradation rate was chosen as a factor by which all the other rates were multiplied. The translation rate \(\eta_0/\delta\) was set to 100, and three values of the protein degradation rate \(\delta_p/\delta\) were investigated: 0.02, 0.16, and 2.56. In Figure 7D, the values of \(\delta\) and \(\delta_p\) are reported in physical units for clarity.

**Supporting Information**

**Figure S1.** Determination of Reporter mRNA Degradation Rate

Plot shows the difference in threshold cycle between PCRs performed on the GFP reporter gene and the EF1 housekeeping gene in a real-time RT-PCR experiment performed on total mRNA extracted from cell line L-GFP-M1-7x. At time 0, the cellular medium was replaced with medium containing 10 ng/ml doxycycline, effectively shutting down transcription of the reporter gene and thus allowing a determination of the mRNA degradation time. In determining the half-life, we considered only the rightmost three points, since early time points may display non-first-order degradation due to transient effects of mRNA processing and export.

Found at DOI: 10.1371/journal.pbio.0040309.sg001 (61 KB PDF).

**Figure S2.** Effects of Cellular Volume upon mRNA Noise

Plot shows the noise (defined as the standard deviation divided by the mean) of mRNA concentrations for the 1x-tetO construct (red) and the 7x-tetO construct (blue) over a range of doxycycline concentrations. The mRNA concentration was determined by dividing the number of mRNA by the total volume of the cell, as determined by microscopy. Compare to Figure 3B, bottom.

Found at DOI: 10.1371/journal.pbio.0040309.sg002 (61 KB PDF).
**Figure S3.** Linear Correlation Used to Provide Accurate Estimates of the Number of mRNA Molecules in Heavily Expressing Cells Encountered When Analyzing mRNA Levels in the L-GFP-M1-7x Cell Line

Plot shows the correlation between the number of mRNA molecules and the total fluorescence integrated over the cellular volume (background subtracted). Cells were taken from random fields of cell line L-GFP-M1-7x and were chosen both for reasonable levels of mRNA (to allow for segmentation) and for lack of brightly fluorescent features that could potentially influence the calibration. The linear fit indicated a value of roughly 53,500 fluorescence units per molecule of mRNA.

Found at DOI: 10.1371/journal.pbio.0040309.sg003 (60 KB PDF).

**Protocol S1.** Model of Gene Activation and Inactivation, Parameter Estimation, and Comparison to Previous Studies

Mechanistic model of bursts in mRNA synthesis, basic model of gene activation and inactivation, fitting of parameters to experimental distributions, determination of protein means and variances, and relationship between the model and previous studies of intrinsic versus extrinsic noise.

Found at DOI: 10.1371/journal.pbio.0040309.s001 (106 KB PDF).

**Table S1.** Number of Reporter (YFP-M1) mRNAs per Cell Used in the Study, and Number of RNAPII Large Subunit mRNAs in Cell Line E-YFP-M1-7x (data are in the XLS file)

Found at DOI: 10.1371/journal.pbio.0040309.st001 (21 KB XLS).

**Video S1.** Three-Dimensional Flythrough of a Pair of Recently Divided Sister Cells from Cell Line E-YFP-M1-7x

Each white spot represents a molecule of mRNA. The dense white spot inside each of the cells is an active transcription site.

Found at DOI: 10.1371/journal.pbio.0040309.vv001 (6.0 MB MOV).

**Video S2. 
Three-Dimensional Flythrough of a Pair of Recently Divided Sister Cells from the Cell Line Possessing Two Reporter Genes Integrated into the Same Locus (L-YFP-M1-CFP-M2)** Green corresponds to the signal from YFP-M1 mRNA, and red corresponds to the signal from CFP-M2 mRNA. The dense yellow spot in both cells is an active site of transcription, indicating that both mRNAs are being transcribed at the same genomic locus. Found at DOI: 10.1371/journal.pbio.0040309.vv002 (4.0 MB MOV). **Acknowledgments** We thank F. Kramer for a critical reading of the manuscript, S. Marras for assistance with synthesizing the fluorescent probes, and S. Isaacs and P. Bickel for discussions. **Author contributions.** AR and ST conceived and designed the experiments. AR performed the experiments. AR, CSP, and DT analyzed the data. DVV and ST contributed reagents/materials/analysis tools. AR and ST wrote the paper. **Funding.** This work was supported by National Institutes of Health grant GM-070557. **Competing interests.** The authors have declared that no competing interests exist.
By: Timesoftrouble According to the only truth written in the Holy Bible, there are none out of over three billion believers who believe every written word of Christ being THE WORD. And humanity is in for the worst timesoftrouble ever before on humanity. Published on booksie.com/Timesoftrouble Copyright © Timesoftrouble, 2015 U.S. Preparing for Genocide Truth or Consequences; You Make the Choice Because I am only a Messenger http://ourdestinyarrived.webs.com/ Genocide is a term used to describe the deliberate and systematic destruction, in whole or in part, of an ethnic, racial, religious, or national group. There has always been mass murder worldwide from hundreds of thousands into the millions via genocide; and now nearing the beginning of humanity's coming end through the years ahead under the totally pathetic incapable rule of human beings everywhere on this planet, the once great but now rapidly collapsing nations like America through Clinton, Bush and Obama bringing destruction as part of the new world order plans shall also become consumed with massive death; and America once blessed as promised has now reached its end as also promised through complete disobedience. "And I will make of you a great nation, and I will bless you [with abundant increase of favors] and make your name famous and distinguished, and you will be a blessing [dispensing good to others]. (Genesis 12:2)-(AMP) "The Lord shall send upon thee cursing, vexation, and rebuke, in all that thou settest thine hand unto for to do, until thou be destroyed, and until thou perish quickly; because of the wickedness of thy doings, whereby thou hast forsaken me. (Deuteronomy 28:20)-(KJV) There are no conspiracies, but there is prophecy of humanity's end in ways never before taught to any human beings on this earth.
Billions and billions shall die worldwide, and in the end, those who were responsible shall suffer so badly through five months of torture they shall seek death, but not find it until God allows; and when in the second resurrection after the 1000 year millennium to stand before God, they shall be cast into the lake of fire for as long as God sees necessary to burn them clean along with all the other very wicked humans ever since the beginning; and in more untaught truth... "The Lord is not slack concerning his promise, as some men count slackness; but is longsuffering to us-ward, not willing that any should perish, but that all should come to repentance. (2 Peter 3:9) "That at the name of Jesus every knee should bow, of things in heaven, and things in earth, and things under the earth; And that every tongue should confess that Jesus Christ is Lord, to the glory of God the Father. (Philippians 2:1-11)-(KJV) God is not a god of eternal torture, but a God of love as a Father who will do whatever is required to bring human beings into repentance, and chapter 5 of my published book "Message for This Entire Human Race" uses God's own words in teaching hell is the grave and fires are punishment. Mass graves are being prepared in areas across the United States, and nobody seems to know what they are for beyond uneducated guesses, complete ignorance or just ignoring what there will be no escape from because the U.S. has been preparing for Genocide for decades in the same way other countries murdered hundreds of millions through a well devised plan of extermination all through the history of this human race. Among other data with a variety of questions, this survey was also sent to cemetery owners: "Should a prolonged mass fatality disaster or pandemic flu occur in your community would your cemetery be able to provide temporary or permanent interment space for a significant number of disaster or flu deaths in addition to your current burial services?"
Cemetery owners were also asked to detail the business structure and capacity of their facilities, including proximity to roads, train lines and airfields. The Division of Cemeteries requested data to calculate the number of acres that could be made available, at 950 graves per acre. What becomes perfectly clear to the few searching for genuine information regarding what exactly has been going on behind the scenes of all civilization for the past five decades is the worst planned darkness coming upon all in America and humanity, which is without doubt growing very near its beginning; and my source is the Father who cannot lie. What does our Lord tell all believers? And in the morning, It will be foul weather today: for the sky is red and lowering. O ye hypocrites, ye can discern the face of the sky; but can ye not discern the signs of the times? (Matthew 16:3) I am the true vine, and my Father is the husbandman. Every branch in me that beareth not fruit he taketh away: and every branch that beareth fruit, he purgeth it, that it may bring forth more fruit. Now ye are clean through the word which I have spoken unto you. Abide in me, and I in you. As the branch cannot bear fruit of itself, except it abide in the vine; no more can ye, except ye abide in me. I am the vine, ye are the branches: He that abideth in me, and I in him, the same bringeth forth much fruit: for without me ye can do nothing. If a man abide not in me, he is cast forth as a branch, and is withered; and men gather them, and cast them into the fire, and they are burned.
(John 15:1-6)-(KJV) There is no end to all the facts regarding the times we all as human beings are living within just as every word of our Lord being the only truth for direction; and the true fact is this entire human race is growing very near to the beginning of sorrows as clearly described in these following words: "Be careful that no one misleads you," returned Jesus, "for many men will come in my name saying 'I am Christ', and they will mislead many. You will hear of wars and rumours of wars, but don't be alarmed. Such things must indeed happen, but that is not the end. For one nation will rise in arms against another, and one kingdom against another, and there will be famines and earthquakes in different parts of the world. But all that is only the beginning of the birth-pangs. For then comes the time when men will hand you over to persecution, and kill you. And all nations will hate you because you bear my name. Then comes the time when many will lose their faith, and will betray and hate each other. Yes, and many false prophets will arise, and will mislead many people. Because of the spread of wickedness the love of most men will grow cold, though the man who holds out to the end will be saved. This good news of the kingdom will be proclaimed to men all over the world as a witness to all the nations, and the end will come." (Matthew 24:14)-(Phillips) There is nothing but major disasters rapidly approaching all humanity through the complete fall of America and nations rising against nations with use of biological, chemical and nuclear weapons; and just as our Lord tells all humanity, they have been misled and deceived, yet do not believe what Christ tells them all; therefore these true words apply to billions of believers... If a man abide not in me, he is cast forth as a branch, and is withered; and men gather them, and cast them into the fire, and they are burned.
(John 15:6) The fires are within the sorrows (suffering) lying ahead for a totally unprepared human race just as God describes all on this earth with over three billion believers not even believing the words of who they believe in with very limited psychological understanding in only believing they have been saved and forgiven with absolutely zero knowledge of repentance for gaining God's mercy in the times ahead while also making their salvation eternal. Those called and chosen as part of the Body of Christ who only follow their Head each have their own function making them different from other body parts, yet dependent on each other which is made clear in (1 Corinthians 12:14-31) -- With the more excellent way being chapter 13 this entire world of believers lack beyond their words. My direction from the Father whom I look to just as my Head has placed me within years of poverty only to see the church [ekklesia in Greek meaning an assembly or group, having nothing to do with a building] as disobedient hypocrites without the love to even know God according to His own words believers refuse to believe. But to those few who have followed these words; "With eyes wide open to the mercies of God, I beg you, my brothers, as an act of intelligent worship, to give him your bodies, as a living sacrifice, consecrated to him and acceptable by him. Don't let the world around you squeeze you into its own mould, but let God re-mould your minds from within, so that you may prove in practice that the plan of God for you is good, meets all his demands and moves towards the goal of true maturity." (Romans 12:1)-(Phillips) "His 'gifts to men' were varied. Some he made his messengers, some prophets, some preachers of the Gospel; to some he gave the power to guide and teach his people.
His gifts were made that Christians might be properly equipped for their service, that the whole body might be built up until the time comes when, in the unity of the common faith and common knowledge of the Son of God, we arrive at real maturity, that measure of development which is meant by the 'fullness of Christ'. We are not meant to remain as children at the mercy of every chance wind of teaching and the jockeying of men who are expert in the crafty presentation of lies. But we are meant to hold firmly to the truth in love, and to grow up in every way into Christ, the head. For it is from the head that the whole body, as a harmonious structure knit together by the joints with which it is provided, grows by the proper functioning of individual parts to its full maturity in love. (Ephesians 4:13-16)-(Phillips) Therefore with great honor I have fully accepted my position as an appointed messenger along with being hated by a world of false brethren and sisters because I am not one of them; (John 15:19) therefore I have understanding of God through following instruction as written while believing every word with no confidence in man along with just accepting hunger and difficulties while knowing my circumstance has been preparing me for teaching others prior and while living on faith during the worst timesoftrouble ever before in human history. When God speaks these following words being pure and true, I not only believe them, but have experienced exactly what they speak from a world within darkness they cannot see. "And what is the point of calling me, 'Lord, Lord', without doing what I tell you to do?" (Luke 6:46)-(Phillips) "Then Jesus said, 'My coming into this world is itself a judgment; those who cannot see have their eyes opened and those who think they can see become blind.'" (John 9:39)-(Phillips) "Staying with it, that's what God requires. Stay with it to the end. You won't be sorry, and you'll be saved.
All during this time, the good news, the Message of the kingdom, will be preached all over the world, a witness staked out in every country. And then the end will come." (Matthew 24:13-14)-(Message) Saved means having God's protection within the greatest panic and worst calamities ever before since the beginning of humanity. "Behold, the eye of the Lord is upon them that fear him, upon them that hope in his mercy; To deliver their soul from death, and to keep them alive in famine. Let thy mercy, O Lord, be upon us, according as we hope in thee. (Psalms 33:18-19, 22)-(KJV) What is God's mercy? "For he saith to Moses, I will have mercy on whom I will have mercy, and I will have compassion on whom I will have compassion. So then it is not of him that willeth, nor of him that runneth, but of God that sheweth mercy. For the scripture saith unto Pharaoh, Even for this same purpose have I raised thee up, that I might shew my power in thee, and that my name might be declared throughout all the earth. Therefore hath he mercy on whom he will have mercy, and whom he will he hardeneth. (Romans 9:15-18)-(KJV) How can we have God's mercy? "Beloved, let us love one another: for love is of God; and every one that loveth is born of God, and knoweth God. He that loveth not knoweth not God; for God is love. (1 John 4:7-8)-(KJV) Who gains God's mercy? "Whosoever hateth his brother is a murderer: and ye know that no murderer hath eternal life abiding in him. Hereby perceive we the love of God, because he laid down his life for us: and we ought to lay down our lives for the brethren. But whoso hath this world's good, and seeth his brother have need, and shutteth up his bowels of compassion from him, how dwelleth the love of God in him?
(1 John 3:15-17)-(KJV) Being I have endured my faith in God with years of being murdered by brothers and sisters everywhere through my uncountable works for God -- My purpose has become being one of God’s end-time messengers in delivering a truth never before taught; and that is because my truth is only The Word being Jesus Christ as the author of eternal salvation unto all them that obey Him just as written in Hebrews 5:9 -- which this world of believers do not show in any way, but rather take part in unwritten pagan ways and holidays while trusting in themselves accompanied with confidence in man as shepherds who feed not their sheep through their religions and beliefs worldwide. “So the priests are no different from the people. I will punish them for the things they did. I will pay them back for the wrong things they did. (Hosea 4:9)-(ERV) “Her prophets are light and treacherous persons: her priests have polluted the sanctuary, they have done violence to the law. (Zephaniah 3:4)-(KJV) “Be sure you are not led away by the teaching of those who have nothing worth saying and only plan to deceive you. That teaching is not from Christ. It is only human tradition and comes from the powers that influence this world. (Colossians 2:8)-(ERV) According to God, my time for getting out His message has its coming limit; Behold, the days come, saith the Lord God, that I will send a famine in the land, not a famine of bread, nor a thirst for water, but of hearing the words of the Lord: And they shall wander from sea to sea, and from the north even to the east, they shall run to and fro to seek the word of the Lord, and shall not find it. (Amos 8:11-12)-(KJV) Behold, the eyes of the Lord God are upon the sinful kingdom, and I will destroy it from off the face of the earth; saving that I will not utterly destroy the house of Jacob, saith the Lord. 
For, lo, I will command, and I will sift the house of Israel among all nations, like as corn is sifted in a sieve, yet shall not the least grain fall upon the earth. All the sinners of my people shall die by the sword, which say, The evil shall not overtake nor prevent us. (Amos 9:8-10)-(KJV) A world of lip loving hypocrites refuse truth, so payback time for refusing truth is coming upon all. All are being given a chance by God --- Either a world of buildings and websites with the blind leading the blind or becoming no longer of this world while learning to live on faith as the only way to survive in the guaranteed coming world chaos. Truth or consequences with the choice being only yours to make. My Published Book http://www.authorsden.com/jeffcallarman Current Writings http://www.booksie.com/Timesoftrouble 413 Blogs I Have Written http://www.true2ourselves.com/Jeffcc My Free Websites The Times of the End Have Arrived http://timesoftroubles.webs.com/ The Beginning of the End http://timesoftroubles.webs.com/ Truth You Need Beyond Words http://timesoftrouble.webs.com/ Human Ignorance http://ttimesoftrouble.webs.com/ We Are Within End Times http://timeoftroubles.webs.com/ What No One Can See http://timeoftrouble.webs.com/ Today's Reality in Truth http://ttimeoftrouble.webs.com/ Words as God Leads http://ttimeoftrouble.webs.com/ The Way Of This World http://ttimeoftrouble.webs.com/ Wisdom or Foolishness http://ttimeoftroubles.webs.com/ What All Civilization Needs to Know http://ourdestinyarrived.webs.com/ World Book You're Life Depends on http://theonlyworldbook.webs.com/
A Vision-based Tactile Sensor Kazuto Kamiyama, Hiroyuki Kajimoto, Masahiko Inami, Naoki Kawakami, Susumu Tachi 1) The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan \{ kazuto, kaji, minami, kawakami, tachi \} @star.t.u-tokyo.ac.jp Abstract Receiving tactile information from a slave-robot is a necessary component of telexistence with haptic display, but there are few tactile sensors that can measure the distribution of three-dimensional force vectors on a surface. For this reason, we developed a sensor that provides the three-dimensional force distribution by detecting movement vectors inside a transparent elastic body with a video camera. Experimental results show that this approach is effective. Key words: Telexistence, Tactile Sensor, Vision based, Force Vector 1. Introduction As humans we affirm our own existence through tactile information, and true reality is unobtainable without the sense of touch. Because of that, displaying tactile information is a critical aspect of virtual reality. There has been extensive research on various types of haptic displays, such as mechanical displays by Iwata [1], electrical stimulation by Kajimoto [2], and so on; but in telexistence, the technology that allows us to feel as if we exist in a distant place, sensing tactile information from a slave-robot is also a necessary and important aspect. While tactile displays and sensory input devices are closely related in telexistence, relatively few current tactile sensors can obtain sufficient information for tactile display. In this paper, we address tactile sensors that receive tactile information which is necessary and sufficient for haptic display. Establishing the purpose of our sensor as a tool to collect tactile information for haptic display imposes several requirements.
First, the direction of force on the surface of the sensor should be measured as well as its strength, because the fingertip can perceive both the distribution and direction of force at the same time. Second, this tactile sensor needs to have elasticity, since there exists an interaction between the sensor and the object being touched, whereas there is no such interaction in vision. When a finger touches an object, it applies force to the object and thereby influences it. Conversely, the reaction force from the object works on the finger as well, and the finger is influenced by the object in the form of deformation. If this condition of mutual influence cannot be represented, the resulting tactile information will take on a different form from that of an actual finger. Few tactile sensors fulfill both requirements: measuring the three-dimensional force vector distribution and possessing elasticity. Considering the first point, six-axis force sensors are already commercially available, as are sensors that measure the force vector at a single point and film sensors that measure the distribution of force magnitude, but few can measure the distribution of a three-dimensional force vector. A few can sense the vector distribution [3], but it is difficult for them to have arbitrary elasticity due to the complexity of their architecture. Here, we propose a new optical tactile sensor for displaying tactile information, which can measure a three-dimensional force vector distribution and also has the property of elasticity. 2. Theory The tactile sensor which we propose uses a transparent elastic body and a CCD camera. Markers placed in the interior of the elastic body are photographed by the CCD; when force is applied to the surface, the deformation of the interior is measured and used to reconstruct the force vector distribution.
Various methods can be considered to acquire this deformation information. Our current approach is to measure the horizontal movement of markers in the elastic body, which are located in an $N \times N$ array at a specified depth. To gather sufficient information for the reconstruction of the force vectors, we used two layers of markers located at different depths (Fig.1). These layers can be distinguished by the colors of the markers (red and blue). We set the $x$-$y$ plane parallel to the sensor surface and the $z$ axis extending vertically into the interior. By observing these markers from the positive $z$ direction with a CCD camera, we obtain two sets of two-dimensional movement vectors at different depths, so the amount of information is increased and the force vector distribution can be readily obtained. 2.1 Method of measuring position of markers The $N \times N$ spherical markers in the elastic body described above are arranged in two planes at different heights parallel to the surface, with red markers at height $z = a$ and blue markers at height $z = b$, and a picture is taken of the internal reflection of these markers. It is possible to separate the information of markers at different heights by storing the picture in a 24-bit bitmap file and extracting its red and blue components. We now explain the method of determining the center of the marker position. For computational purposes, the picture is expressed as a matrix, and the optical density of each pixel is expressed as a matrix element with a scalar value. A center-of-mass measurement is performed to pinpoint the marker center position. First, the picture is divided into a mesh of roughly a dozen pixels per cell (i.e., the image matrix is partitioned), and the center of mass of each mesh cell is calculated. The mesh is then re-centered on this center of mass, and the center is recalculated.
The center of mass calculated on this second pass establishes the center of the marker (Fig.2). This position, which was originally an integer value based on a pixel unit, becomes a real number with sub-pixel accuracy. Moreover, since the markers can be arranged in arbitrary positions and recalibrated with this marker-center determination method, camera alignment is no longer a strict requirement. 2.2 Measuring distribution of two dimensional movement vector of markers When force is applied to the elastic body surface, the markers move, and this movement is observed by the CCD. The picture is subdivided into partial squares, each of which is centered on the marker position measured in the previous, force-free state. By taking the difference of the center position in the picture before and after the movement, the markers' translation in the $xy$ direction can be calculated. In this case, however, it is possible to measure movement only within the divided area. If necessary, a so-called 'tracking' method can be applied, which uses only two sequential images. 2.3 Obtaining distribution of three dimensional force vector from movement vectors In order to obtain a force vector from a movement vector, we use the theory of elasticity [5], assuming that the elastic body is a uniform, linear half-space. We set the $z$-axis perpendicular to the elastic body's surface, pointing into the interior, and the $xy$-plane parallel to the surface of the elastic body. Eq.1 and Eq.2 express the movement vector $\vec{u} = (u_x, u_y)$ of an interior point $\vec{r} = (x, y, z)$ within a plane parallel to the $xy$-plane when a force vector $\vec{f} = (f_x, f_y, f_z)$ is applied to the surface of the elastic body. $\sigma$ is the Poisson ratio; it is set to 0.5 on the assumption that the elastic body, like an ideal elastic body, is incompressible.
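As an illustration of the two-pass center-of-mass localization of Sections 2.1–2.2, the following Python sketch refines the sub-pixel center of a single marker. This is a simplified stand-in, not the authors' implementation; the window half-width, function name, and single-channel image are assumptions.

```python
import numpy as np

def refine_marker_center(image, x0, y0, half=8):
    """Two-pass intensity-weighted center-of-mass refinement of one marker.

    image : 2-D array of optical densities (one color channel, e.g. red or blue).
    (x0, y0) : integer pixel guess for the marker center.
    half : half-width of the mesh window (hypothetical value).
    Returns the sub-pixel (x, y) center.
    """
    x, y = float(x0), float(y0)
    for _ in range(2):                      # first pass, then re-center and repeat
        xi, yi = int(round(x)), int(round(y))
        patch = image[yi - half:yi + half + 1, xi - half:xi + half + 1]
        ys, xs = np.mgrid[yi - half:yi + half + 1, xi - half:xi + half + 1]
        total = patch.sum()
        if total == 0:                      # empty window: keep current estimate
            break
        x = (xs * patch).sum() / total      # intensity-weighted mean column
        y = (ys * patch).sum() / total      # intensity-weighted mean row
    return x, y
```

On a synthetic symmetric blob, the recovered center is exact; real images would first be split into red and blue components as described in Section 2.1 so that the two marker depths can be processed separately.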
$E$ is Young’s modulus and must be appropriately defined according to the actual elastic body used, but from this equation it is apparent that $E$ is in effect only a multiplying constant for the whole equation, so it is set to 1 here. \[ \begin{align*} u_x &= \frac{1 + \sigma}{2\pi E} \left\{ \left[ \frac{xz}{r^3} - \frac{(1 - 2\sigma)x}{r(r + z)} \right] f_z + \frac{2(1 - \sigma)r + z}{r(r + z)} f_x + \frac{[2r(\sigma r + z) + z^2]x}{r^3(r + z)^2} (xf_x + yf_y) \right\} \quad (1) \\ u_y &= \frac{1 + \sigma}{2\pi E} \left\{ \left[ \frac{yz}{r^3} - \frac{(1 - 2\sigma)y}{r(r + z)} \right] f_z + \frac{2(1 - \sigma)r + z}{r(r + z)} f_y + \frac{[2r(\sigma r + z) + z^2]y}{r^3(r + z)^2} (xf_x + yf_y) \right\} \quad (2) \end{align*} \] From these equations, when a unit force $\vec{f} = (1, 0, 0)$, $(0, 1, 0)$, or $(0, 0, 1)$ is applied in the $x$, $y$, or $z$ direction respectively, the movement vector of a point in the plane at a certain depth $z = z_1$ can be calculated. We represent these by $\vec{u}_{fx} = (h_{xx1}, h_{yx1})$, $\vec{u}_{fy} = (h_{xy1}, h_{yy1})$, $\vec{u}_{fz} = (h_{xz1}, h_{yz1})$. These movements $h$ can be considered as the impulse responses to a unit force in each direction applied at the origin. When the force applied to the surface of the elastic body is a general vector distribution $\vec{f}(x, y) = (f_x(x, y), f_y(x, y), f_z(x, y))$, the movement vector $\vec{m}(x, y) = (m_{x1}(x, y), m_{y1}(x, y))$ at point $(x, y)$ in the plane at that depth is calculated in the form of a convolution (Eq.3). Note that we utilized our assumption of linearity (the asterisk denotes convolution).
\begin{align*} m_{x1}(x, y) &= h_{xx1} * f_x + h_{xy1} * f_y + h_{xz1} * f_z \\ m_{y1}(x, y) &= h_{yx1} * f_x + h_{yy1} * f_y + h_{yz1} * f_z \end{align*} (3) A discrete form of this can be expressed as a matrix representation. The movement vectors and force vectors are sampled at $M \times N$ points, and the $x$ and $y$ components of the movement vector are expressed as the matrices $M_{x1}$ and $M_{y1}$. Elements from $(1, 1)$ to $(M, N)$ are renumbered with a single suffix and rearranged as a vector. Force vectors are rearranged in the same fashion. By doing so, Eq.3 takes the matrix form (Eq.4). \[ \begin{bmatrix} M_{x1} \\ M_{y1} \end{bmatrix} = \begin{bmatrix} H_{xx1} & H_{xy1} & H_{xz1} \\ H_{yx1} & H_{yy1} & H_{yz1} \end{bmatrix} \begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix} \] (4) As this equation calculates the movement of interior points when a force vector distribution is applied to the surface of the elastic body, its inverse becomes a formula which calculates the force vector distribution from the measured value $M$ (Eq.5). This is the measurement principle of this research. \[ F = H^{-1} M \] (5) At this point, consider the number of elements of the vector $M$ on the left side of the equation and of the vector $F$ on the right side. Since the number of sampling points is $M \times N$, the number of elements of the movement vector is $M \times N \times 2$ and the number of elements of the force vector is $M \times N \times 3$.
This means that there are more unknowns than equations, so the unknowns cannot be determined. Thus, the technique of measuring at a second height using color information (stated in Section 2.1) is applied. As the impulse response differs at each height, we obtain the following equation. \[ \begin{bmatrix} M_{x1} \\ M_{y1} \\ M_{x2} \\ M_{y2} \end{bmatrix} = \begin{bmatrix} H_{xx1} & H_{xy1} & H_{xz1} \\ H_{yx1} & H_{yy1} & H_{yz1} \\ H_{xx2} & H_{xy2} & H_{xz2} \\ H_{yx2} & H_{yy2} & H_{yz2} \end{bmatrix} \begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix} \] (6) This becomes a situation with more equations than unknowns, so the force vector distribution $F$ can be calculated. Since the matrix $H$ is not square, it has no inverse. Therefore, the force is calculated using a pseudo-inverse matrix, which we briefly explain here in relation to the content of this paper. The pseudo-inverse matrix is obtained using the method of least squares. The prediction error is denoted $e$, and the relation between $F$ and $e$ is as follows: \[ M = HF + e \] (7) To determine the value of $F$ that minimizes the norm $\|e\|$ of the error, this is rewritten as an equation in the error (Eq.8). \begin{align*} \|e\|^2 &= \|M - HF\|^2 \\ &= \frac{1}{2} F^T Q F + cF + d \end{align*} (8) \[ Q = 2H^T H, \quad c = -2M^T H, \quad d = M^T M \] (9) $F$ should be the value that minimizes this prediction error. Hence, by differentiating Eq.8 with respect to $F$ and setting the result to zero, we obtain \begin{align*} QF + c^T &= 0 \\ F &= -Q^{-1}c^T = (H^T H)^{-1} H^T M \end{align*} (10) ### 2.4 Algorithms for improving stability Because noise is contained in the measurement of motion, we use algorithms to enhance the stability of the force vector distribution calculation in the presence of noise. These add certain constraints to the method of least squares explained above.
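As a numerical illustration of the pseudo-inverse least-squares step, the following Python sketch solves a small overdetermined system of the same shape as Eq.6 (more displacement equations than unknown force components). The matrix here is a random stand-in, not the sensor's actual impulse-response matrix, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random stand-in for the stacked impulse-response matrix H:
# more equations (marker displacements at two depths) than unknowns.
n_eq, n_unk = 8, 6
H = rng.standard_normal((n_eq, n_unk))
F_true = rng.standard_normal(n_unk)   # synthetic "true" force vector
M = H @ F_true                        # noiseless measured displacements

# Least-squares solution of M = H F via the normal equations,
# F = (H^T H)^{-1} H^T M, i.e. the pseudo-inverse of H applied to M.
F_est = np.linalg.solve(H.T @ H, H.T @ M)
```

With noiseless, consistent data the least-squares estimate recovers the synthetic force vector exactly (up to floating-point error); with measurement noise it returns the minimum-residual solution instead.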
#### 2.4.1 Constraint minimizing the norm of the difference between adjacent force vectors

Assuming that the spatial distribution of the force vectors does not change steeply, we impose a constraint that minimizes the norm of the difference between adjacent force vectors. The equation is as follows. \begin{align*} \|\Delta F\|^2 &= \|F(1) - F(2)\|^2 + \|F(2) - F(3)\|^2 + \|F(3) - F(4)\|^2 + \ldots \\ &= \left\| \begin{bmatrix} 1 & -1 & 0 & \ldots & 0 \\ 0 & 1 & -1 & \ldots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & -1 \end{bmatrix} F \right\|^2 = F^T Q_2 F \end{align*} (11) Combining this with Eq.8 under a weight $\omega$, \[ \text{minimize} \left\{ \frac{1}{2} F^T (Q + \omega Q_2) F + cF + d \right\} \] which yields \[ F = \left(H^T H + \frac{1}{2} \omega Q_2\right)^{-1} H^T M \] (12)

#### 2.4.2 Constraint that the z-direction value is positive

Unless an adhesive object is measured, the $z$-direction component of the force can be assumed to be positive, which gives the additional constraint \[ \text{minimize} \left\{ \frac{1}{2} F^T Q F + cF + d \;\middle|\; F_z \geq 0 \right\} \] (13) In this case a least-squares solution by pseudo-inverse such as Eq.8 is no longer applicable; the problem becomes a quadratic program requiring an iterative approach, so the amount of calculation increases.

#### 2.4.3 Constraint minimizing the norm of the force vectors

The force applied to the elastic body is limited, so a further constraint is added so that the calculated force vectors do not diverge. With $I$ the identity matrix, it is formulated as follows and, like the adjacent-difference constraint, can be combined with Eq.8 and written as follows.
\[ \|F\|^2 = \|F(1)\|^2 + \|F(2)\|^2 + \|F(3)\|^2 + \ldots = F^T I F \] (14) \[ \text{minimize} \left\{ \frac{1}{2} F^T (Q + \omega I) F + cF + d \right\} \] (15)

### 2.5 Camera calibration

The method proposed in this research reconstructs the force vector distribution from two-dimensional movement vectors in the plane of the elastic body surface, because the picture taken by the CCD camera is two-dimensional. The photograph, however, does not show only movement in the plane perpendicular to the axis of the CCD lens. Owing to the projection characteristics of the lens, motion in the $z$ direction, which ideally should register as no movement, is detected as movement in the $xy$ direction (Fig.3), and the picture alone cannot distinguish whether a marker moved in the $xy$ plane or in the $z$ direction. Therefore, the impulse responses used in Eq.3 must be corrected to include the projection of $z$-directional motion onto the $xy$-plane.

![Fig. 3: Necessity to calibrate image of CCD camera](image)

### 3. Experimental setup

The sensor system built for this study is shown in Fig.4. A black shading layer covers the surface of a transparent elastic body 40 mm high, 90 mm long and 100 mm wide. We use silicone rubber (Shin-Etsu Chemical Co., Ltd. KE109) as the transparent elastic body; it must be large enough to satisfy the half-space assumption, which led to this size. Blue markers are arranged at a depth of 3 mm and red markers at a depth of 6 mm, with a marker interval of about 1.5 mm, matched to the human limit of spatial resolution for discriminating two separated points. The markers need to be small enough that the assumption of a uniform elastic body is not violated and that markers at different depths overlap as little as possible.
At the same time, they need to be large enough that the centre of each marker can be located with sub-pixel accuracy, i.e. that each ball occupies several pixels of the CCD image; the marker size is adjusted to satisfy these conditions. We use plastic balls (DAICEL FINECHEM, LTD. FREEPLASTIC) as colour markers. The markers inside the elastic body are photographed through the transparent acrylic board that fixes the elastic body (Fig.5). One pixel corresponds to about 0.05 mm. The image is output in NTSC format and sent to a PC through a USB capture unit.

![Fig. 4: Image of sensor system](image)

![Fig. 5: Picture of sensor head](image)

### 4. Experiment

#### 4.1 Marker imaging

The picture taken by the CCD camera is shown in Fig.6. We extract the red and blue components of the picture so that the markers at different depths can be observed separately. For noise reduction, the picture is preprocessed by taking the difference of the red and blue components: the positive part of the difference is treated as the red component and the negative part as the blue component.

![Fig. 6: Bit-mapped image by CCD camera](image)

#### 4.2 Measuring the movement vector

First, we evaluated the accuracy of our algorithm for calculating the movement vector. A photograph taken with no force applied was used as the image before marker movement, and the same picture displaced 3 pixels to the right as the image after movement. From these images we calculated the movement vectors of the markers (Fig.9). Histograms of the $x$- and $y$-direction components of the movement vectors for the red and blue markers are shown in Fig.8, and their averages and standard deviations in Table 1.

Table 1: Test of measuring movement

| Component | Direction | Average (pixel) | SD |
|-----------|-----------|-----------------|------|
| Blue | x | 3.09 | 0.19 |
| | y | 0.05 | 0.22 |
| Red | x | 3.02 | 0.25 |
| | y | 0.06 | 0.19 |

![Calculate movement vector when given z-directional force](image) Fig.
9: Calculate movement vector when given z-directional force

#### 4.3 Reconstructing the force vector

In this subsection the force vector distribution is reconstructed in two situations: one in which only vertical force is applied, and one in which horizontal force is applied as well.

##### 4.3.1 Reconstruction when only z-direction force is applied

The following are the results of force vector reconstruction when force is applied perpendicular to the surface with a pillar 5 mm in diameter, using the three constraints explained in Section 2. The result calculated without any constraint diverges greatly: a very large value appears at a point clearly different from the point where the force is applied. When the constraint minimizing the norm of the difference between adjacent force vectors (Eq.11) is imposed, the result is Fig.10; when the constraint minimizing the norm of the force vectors (Eq.14) is imposed instead, the result is Fig.11. Because these two constraints involve no iteration, the calculation is fast, taking about 0.08 s from capture of the picture to the returned result. In contrast, when the constraint that the z-direction value be positive is imposed together with the adjacent-difference constraint (Eq.13), the result is Fig.12. This algorithm iterates while the force vector is reconstructed, so it is slower than the two methods above, taking about 0.9 s. These three results were obtained after calibrating the CCD camera data as described in Section 2.5; the result without this calibration, under the norm-minimizing constraint, is shown in Fig.13.

Fig. 10: Distribution of force vector: minimizing norm of difference

Fig. 11: Distribution of force vector: minimizing norm of vector

Fig. 12: Distribution of force vector: positive z-direction value

Fig.
13: Distribution of force vector: minimizing norm of vector, before camera calibration

Next, an experiment investigating spatial resolution was conducted. Force was applied uniformly along the y-axis using a square pillar, and the distribution of the z-component of the force vector along the x-axis was examined while pushing in the z direction. The measurement was repeated for different lengths of the side of the square pillar along the x-axis; Fig.14 shows the results.

Fig. 14: Applying force with various width of plane

##### 4.3.2 Reconstruction when three-dimensional force is applied

When a force in the z direction combined with a twist about the z-axis is applied with a pillar of radius 20 mm on the elastic body, the reconstructed force vector distribution is shown in Fig.15.

### 5. Discussion

In this section we discuss the movement vector measurement, the reconstruction of the force vector distribution and the spatial resolution.

#### 5.1 Accuracy of measuring the movement vector

Table 1 and the histograms (Fig.8) show that the obtained movement vectors spread by up to about ±0.4 pixel, while the maximum movement vector is about 10 pixels. A spread of ±0.4 pixel is by no means small, and the measurement cannot be called accurate. The movement vector must be measured with high accuracy for the following three reasons.

1. Since the quantity actually measured is movement, and the force vector we need is calculated from it, the accuracy of the force vector can never exceed that of the movement vector.

2. When force was applied from various directions, force vectors with completely incorrect direction were occasionally observed. The reason turned out to be inaccurate movement vectors around the obtained force.
Although it might be expected that correctly calculated neighbouring movement vectors would compensate for an inaccurate one, it turned out that there is almost no robustness against such calculation errors.

3. As Fig.16 shows, the sum of only two impulse responses in the x direction produces nearly the same movement as a unit force in the z direction applied at the origin. Regarding the impulse responses to unit forces in the z, x and y directions as basis functions of the convolution equation, they are not orthogonal but nearly parallel to each other. Furthermore, the impulse responses at one height and at the other height are not orthogonal basis functions either (Fig.17).

#### 5.2 Calculating the distribution of the force vector

Calculating the force vector from the movement vector is an inverse problem, on which there is a substantial literature [6]. A problem with more unknowns than equations is called underdetermined; one with fewer unknowns than equations is called overdetermined. Our approach changes the problem from underdetermined to overdetermined by using two observation layers. The constraints minimizing the norm of the difference between adjacent force vectors and minimizing the norm of the force vectors derive from the 'smoothness' and the 'simplicity' of a solution, respectively, and are normally applied to underdetermined problems. As the preceding subsection described, however, the overdetermined part of our problem is buried in noise and is very weak, so it is considered appropriate to add such constraints.

### 6. Conclusion

As discussed in Section 1, the purpose of our tactile sensor is the realization of an artificial tactile sensation. From this point of view, the spatial resolution and response speed of the sensor need to be better than those of the human fingertip. On these points we compare the human fingertip with the tactile sensor manufactured in this research.
First, the spatial resolution of the human fingertip is reported to be 2 mm. Although our sensor has not yet attained this, we expect to achieve it by raising the measurement accuracy of the movement vector and using a closer marker interval than at present. Unlike a mechanical sensor, only coloured spherical markers are needed, so a higher marker density is easily attained. The second point is response speed. Without quadratic programming, the current calculation time of our sensor on an Intel Pentium 4 1.8 GHz PC is 0.08 s; the bottlenecks are image capture and noise reduction, which take 0.03 s and 0.02 s respectively. The firing frequency of the mechanoreceptors of the human fingertip is tens of hertz for the Meissner corpuscles and Merkel cells, and 200 Hz for the Pacinian corpuscles. The Meissner corpuscle perceives low-frequency vibration and the Merkel cell perceives displacement of the skin, and both exist in high density; the Pacinian corpuscle, which detects high-frequency vibration, exists in low density. We compensate for the role of the Pacinian corpuscle with another tactile sensor, one that cannot be arranged in high density but has a quick response. The goal for our sensor is to sufficiently cover the domain in which the Meissner corpuscles and Merkel cells fire, for which a rate of 70 Hz would be preferable; we believe our sensor can meet the speed of this domain through software optimization. Future work includes overcoming these engineering hurdles to allow the development of a sensor with the capacity of the human tactile sense. In addition, we will develop a finger-shaped tactile sensor, which is our final target.

![Fig. 18: Finger-shaped tactile sensor](image)

### References

1. R. Kawamura, H. Yano, H. Iwata, "Development of surface type haptic interface for presentation of rigidity distribution", Proceedings of the Virtual Reality Society of Japan, 5, pp. 51-54, 2000.

2.
H. Kajimoto, N. Kawakami, T. Maeda, S. Tachi, "Tactile Feeling Display using Functional Electrical Stimulation", The Ninth International Conference on Artificial Reality and Telexistence, pp. 107-114, 1999.

3. M. Ohka, Y. Mituya, K. Hattori, I. Higashioka, "Data conversion capability of optical tactile sensor featuring an array of pyramidal projections", IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 573-580, 1996.

4. K. Tanie, K. Komoriya, M. Kaneko, S. Tachi, "A High Resolution Tactile Sensor", Proceedings of the 4th International Conference on Robot Vision and Sensory Controls, pp. 251-261, 1987.

5. L. D. Landau, E. M. Lifshitz, "Theory of Elasticity", Butterworth-Heinemann, 1985.

6. W. Menke, "Geophysical Data Analysis: Discrete Inverse Theory", Academic Press, Inc., 1989.

7. R. S. Fearing, "Using a Cylindrical Tactile Sensor for Determining Curvature", IEEE Trans. on Robotics and Automation, vol. 7, no. 6, pp. 806-817, 1991.
In the world of fertility, the rapid development of assisted reproductive technologies (ART) has led to pivotal advances in IVF laboratories, improving fertility outcomes and patient safety. It is anticipated that the next leap forward will involve the harnessing of technologies to drive standardization, automation and digitalization of clinics. As a global leader in delivering innovative solutions in the field of assisted reproductive technologies and genomics, CooperSurgical aims to support clinics in embracing and successfully implementing this change process as an essential element of the progression to the fertility care of the future. Through the intelligent and targeted collection of data and utilization of key performance indicators (KPIs) and data metrics, clinics can work towards the standardization of laboratory procedures and provision of individualized treatment based on specific patient needs. As well as helping patients in making better informed decisions, clinics can share their knowledge to drive improvements in fertility care worldwide. **DATA COLLECTION DRIVES QUALITY IMPROVEMENTS** For IVF laboratories, data can support standardization, ensuring that all procedures are performed consistently, thereby promoting optimized laboratory performance and positively impacting patient outcomes. Automated data collection makes this process much more manageable, especially in busy centers. “One of the key benefits of standardization is that it increases consistency of performance and predictability of laboratory outcomes,” says Bob Thompson, Director of Digital Innovation at CooperSurgical. 
“For embryologists carrying out the procedures, this standardization, coupled with the adoption of best practices, can give IVF clinics confidence that they are performing optimally and producing the best possible treatment outcomes for their patients.” **THE POWER OF DATA** Collection of data is not an end in itself; in the words of Carly Fiorina (former CEO of Hewlett-Packard), “the goal is to turn data into information and information into insight.” Useful information on the patient journey and on clinic performance is being generated continuously by IVF clinics, but data might be missed or, worse still, collected but not used to drive optimization. Through digitalization, clinics have the opportunity to make the most of this data to produce the metrics needed to improve, optimize and standardize procedures and protocols. “When we offer support to a clinic, their data not only gives us a clearer understanding of their processes and performance, but also highlights the vital data that might be missing, data that could give insights into how to strengthen the clinical services,” says Ines Mesequer, Director of Professional Education and Clinical Support at CooperSurgical. “Data is crucial – if you don’t have the data, you don’t have any KPIs.” Automation of processes and, importantly, of data management is a prerequisite for collecting the complete data sets that generate the metrics or KPIs that facilitate quality improvements. In short, data is turned into insights that can then be shared to the benefit of clinics and patients globally. **THE RIGHT KNOWLEDGE GOES A LONG WAY** Though data utilization will help the drive towards standardization and optimization, this is further enhanced when combined with knowledge sharing and high-quality training in technical skills.
Through observation and troubleshooting in many different labs, as well as bringing together a wealth of expertise, CooperSurgical seeks to support, train and educate practitioners in all disciplines to promote the highest standards and best practices. “We can use education, training and knowledge-sharing to help increase the standards of fertility treatment in the clinic,” says Rachel Chin, Clinical Applications Manager at CooperSurgical, “to help strengthen the core practices in each clinic and provide them with a solid foundation for ongoing quality improvement.” **THE FUTURE OF FERTILITY CARE IS ALREADY HERE** The fertility industry is changing with advances such as CooperSurgical’s RI Witness™ lab management system and the PGTa℠ 2.0 technology platform. For example, PGTa℠ 2.0 harnesses the power of artificial intelligence (AI) and machine learning to aid in the interpretation of PGT-A results. Both are examples of the role emerging technologies will continue to play. Delivering standardization, automation and digitalization to clinics, along with training and knowledge-sharing, is not just for the benefit of one clinic but is part of a larger commitment for the fertility industry to work more closely and more collaboratively. Knowledge shared among lab practitioners, clinicians, nurses and clinic managers has the potential to improve the quality of fertility care for IVF clinics around the world. Learn how RI Witness™ can help increase overall laboratory efficiency: [fertility.coopersurgical.com/equipment/ri-witness/](http://fertility.coopersurgical.com/equipment/ri-witness/)

Vasa Praevia: Diagnosis and Management

Green-top Guideline No. 27b

September 2018

Please cite this paper as: Jauniaux ERM, Alfirevic Z, Bhide AG, Burton GJ, Collins SL, Silver R on behalf of the Royal College of Obstetricians and Gynaecologists. Vasa praevia: diagnosis and management. Green-top Guideline No. 27b.
BJOG 2018

Vasa Praevia: Diagnosis and Management

ERM Jauniaux, Z Alfirevic, AG Bhide, GJ Burton, SL Collins, R Silver, on behalf of the Royal College of Obstetricians and Gynaecologists

Correspondence: Royal College of Obstetricians and Gynaecologists, 27 Sussex Place, Regent’s Park, London NW1 4RG. Email: firstname.lastname@example.org

This is the fourth edition of this guideline. The first, published in 2001, was entitled Placenta Praevia: Diagnosis and Management; the second, published in 2005, was entitled Placenta Praevia and Placenta Praevia Accreta: Diagnosis and Management; and the third, published in 2011, was entitled Placenta Praevia, Placenta Praevia Accreta and Vasa Praevia: Diagnosis and Management. The management and diagnosis of placenta praevia and placenta accreta is addressed in Green-top Guideline No. 27a.

**Executive summary**

**Management of women with undiagnosed vasa praevia at delivery**

Emergency caesarean delivery and neonatal resuscitation, including the use of blood transfusion if required, are essential in the management of ruptured vasa praevia diagnosed during labour.

Placental pathological examination should be performed to confirm the diagnosis of vasa praevia, in particular when stillbirth has occurred or where there has been acute fetal compromise during delivery. [New 2018]

**Can vasa praevia be diagnosed antenatally?**

The performance of ultrasound in diagnosing vasa praevia at the time of the routine fetal anomaly scan has a high diagnostic accuracy with a low false-positive rate. [New 2018]

A combination of both transabdominal and transvaginal colour Doppler imaging (CDI) ultrasonography provides the best diagnostic accuracy for vasa praevia.

**Should we screen for vasa praevia?**

There is insufficient evidence to support universal screening for vasa praevia at the time of the routine midpregnancy fetal anomaly scan in the general population.
Although targeted midpregnancy ultrasound screening of pregnancies at higher risk of vasa praevia may reduce perinatal loss, the balance of benefit versus harm remains undetermined and further research in this area is required. [New 2018]

**How should women with vasa praevia be managed?**

Because of the speed at which fetal exsanguination can occur and the high perinatal mortality rate associated with ruptured vasa praevia, delivery should not be delayed while trying to confirm the diagnosis, particularly if there is evidence that fetal wellbeing is compromised. [New 2018]

In the presence of confirmed vasa praevia in the third trimester, elective caesarean section should ideally be carried out prior to the onset of labour.

A decision for prophylactic hospitalisation from 30–32 weeks of gestation in women with confirmed vasa praevia should be individualised and based on a combination of factors, including multiple pregnancy, antenatal bleeding and threatened premature labour. [New 2018]

In cases of vasa praevia that develop premature rupture of membranes and/or labour at viable gestational ages, a caesarean section should be performed without delay.

To avoid unnecessary anxiety, admissions, prematurity and caesarean section, it is essential to confirm persistence of vasa praevia by ultrasound in the third trimester.

**At what gestation should elective delivery occur?**

The ultimate management goal of confirmed vasa praevia should be to deliver before rupture of membranes while minimising the impact of iatrogenic prematurity. Based on available data, planned caesarean delivery for a prenatal diagnosis of vasa praevia at 34–36 weeks of gestation is reasonable in asymptomatic women. [New 2018]

Administration of corticosteroids for fetal lung maturity should be recommended from 32 weeks of gestation due to the increased risk of preterm delivery.

1.
Purpose and scope The purpose of this guideline is to describe the diagnostic modalities and review the evidence-based approach to the clinical management of pregnancies complicated by vasa praevia. 2. Introduction and background epidemiology Vasa praevia occurs when the fetal vessels run through the free placental membranes. Unprotected by placental tissue or Wharton’s jelly of the umbilical cord, a vasa praevia is likely to rupture in active labour, or when amniotomy is performed to induce or augment labour; in particular when located near or over the cervix, under the fetal presenting part. Vasa praevia is classified as type I when the vessel is connected to a velamentous umbilical cord, and type II when it connects the placenta with a succenturiate or accessory lobe. Vasa praevia may be diagnosed during early labour by vaginal examination, detecting the pulsating fetal vessels inside the internal os, or by the presence of dark-red vaginal bleeding and acute fetal compromise after spontaneous or artificial rupture of the placental membranes. The fetal mortality rate in this situation is at least 60% despite urgent caesarean delivery. However, improved survival rates of over 95% have been reported where the diagnosis has been made antenatally by ultrasound followed by planned caesarean section.\textsuperscript{3} Vasa praevia is uncommon in the general population with a prevalence ranging between 1 in 1200 and 1 in 5000 pregnancies, although the condition may have been under-reported.\textsuperscript{1–6} ### 3. Identification and assessment of evidence This guideline was developed in accordance with standard methodology for producing Royal College of Obstetricians and Gynaecologists (RCOG) Green-top Guidelines. 
The Cochrane Library (including the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects [DARE]), EMBASE, Trip, MEDLINE and PubMed (electronic databases) were searched for relevant randomised controlled trials (RCTs), systematic reviews and meta-analyses. The search was restricted to articles published between May 2009 and July 2016 (the search for the previous guideline was up to May 2009). A top-up literature search was performed in March 2018. The databases were searched using the relevant Medical Subject Headings (MeSH) terms, including all subheadings, and this was combined with a keyword search. Search words included ‘vasa praevia’, ‘velamentous cord insertion’ and ‘umbilical cord anomalies’. The search was restricted to humans and the English language. The National Library for Health and the National Guideline Clearinghouse were also searched for relevant guidelines and reviews. Where possible, recommendations are based on available evidence. In the absence of published evidence, these have been annotated as ‘good practice points’. Further information about the assessment of evidence and the grading of recommendations may be found in Appendix I.

### 4. Management of women with undiagnosed vasa praevia at delivery

**Emergency caesarean delivery and neonatal resuscitation, including the use of blood transfusion if required, are essential in the management of ruptured vasa praevia diagnosed during labour.**

Placental pathological examination should be performed to confirm the diagnosis of vasa praevia, in particular when stillbirth has occurred or where there has been acute fetal compromise during delivery. \textit{[New 2018]}

The classic presentation of unexpected vasa praevia in labour is the presence of painless vaginal bleeding (also known as Benckiser’s haemorrhage).
This occurs mainly when the cervix is effaced and dilated, and the membranes rupture spontaneously or are ruptured artificially.\textsuperscript{2,3} As the total fetal blood volume at term is approximately 80–100 ml/kg, the loss of what may appear to be a relatively small amount of blood can have major implications for the fetus and is rapidly fatal.\textsuperscript{3,7–10} A systematic review and meta-analysis of the association between placental implantation abnormalities (including placenta praevia, placenta accreta, vasa praevia and velamentous cord insertion) and preterm delivery in singleton gestations found a random-effects pooled risk ratio for perinatal death of 4.52 (95% CI 2.77–7.39) for vasa praevia.\textsuperscript{5}

### 5. Can vasa praevia be diagnosed antenatally?

**The performance of ultrasound in diagnosing vasa praevia at the time of the routine fetal anomaly scan has a high diagnostic accuracy with a low false-positive rate.** [New 2018]

A combination of both transabdominal and transvaginal colour Doppler imaging (CDI) ultrasonography provides the best diagnostic accuracy for vasa praevia.

The previous version of this guideline concluded that, in the absence of vaginal bleeding during the antenatal period, there is no method of diagnosing vasa praevia clinically. Vaginal bleeding in pregnancy could be considered a possible alert symptom for vasa praevia,\textsuperscript{11} but this is likely to have a very low positive predictive value given the high prevalence of bleeding during pregnancy and the low prevalence of vasa praevia.\textsuperscript{12} Various tests can differentiate between maternal and fetal blood but are often not timely in a potentially life-threatening clinical situation.
The largest study to date on perinatal outcome is based on a cohort of 155 women with vasa praevia that reported a 97% survival rate in cases of prenatal diagnosis compared with only 44% when the diagnosis was made during delivery.\textsuperscript{13} A prospective population-based cohort study using the Australasian Maternity Outcomes Surveillance System (AMOSS) found that there were no perinatal deaths in the 58 cases diagnosed prenatally out of the 63 cases with confirmed vasa praevia at birth.\textsuperscript{14} Transvaginal CDI has improved the accuracy of greyscale imaging\textsuperscript{3,15} in diagnosing vasa praevia by demonstrating flow and fetal vascular waveforms on pulsed Doppler through at least one aberrant vessel.\textsuperscript{3,5} Vasa praevia has been defined as a vessel running in the free placental membranes within 2 cm of the cervix.\textsuperscript{16,17} The ultrasound definition of ‘within 2 cm from the internal cervical os’ was modelled after the existing definitions for low-lying placentas\textsuperscript{18} and will vary with gestational age; in particular during the third trimester when the lower segment of the uterus forms. There is limited information regarding the actual safe distance that a vasa praevia needs to be from the internal os to be confident that there is no risk for vessel rupture during labour and delivery. 
Overall, prenatal diagnosis is most effective around midpregnancy (18–24 weeks of gestation) but needs to be confirmed during the third trimester (30–32 weeks of gestation).\textsuperscript{3,15} A systematic review, including two prospective and six retrospective cohort studies of which six had poor methodology, found prenatal detection rates ranging between 53% (10/19) and 100% for a total of 442 633 women, including 138 cases of vasa praevia.\textsuperscript{15} Four out of the eight studies used transvaginal scanning (TVS) for primary assessment, while the remaining four studies used transabdominal ultrasound and only used TVS when vasa praevia was suspected on the transabdominal scan. The results of two prospective studies including a total of 33 795 women reported that TVS CDI performed during the second trimester detects all cases ($n = 11$) of vasa praevia (sensitivity, 100%) with a specificity of 99.0–99.8%. A national UK study using the UK obstetric surveillance system of births between December 2014 and December 2015 found that only 25 out of 45 (56%) cases of vasa praevia were diagnosed antenatally.\textsuperscript{6} The Society of Obstetricians and Gynecologists of Canada (SOGC) guideline based on the published literature up to 2009 also indicates that using combined abdominal and transvaginal CDI results in a high diagnostic accuracy with an extremely low false-positive rate.\textsuperscript{7} However, the SOGC guideline\textsuperscript{19} update also highlighted that many cases are not diagnosed. 6. Should we screen for vasa praevia? There is insufficient evidence to support universal screening for vasa praevia at the time of the midpregnancy routine fetal anomaly scan in the general population. Although targeted midpregnancy ultrasound assessment of pregnancies at higher risk of vasa praevia has been investigated, the balance of benefit versus harm remains undetermined and further research in this area is required. 
[New 2018] The 2017 UK National Screening Committee (UK NSC) external review of the 2013 screening policy concluded that there appears to be little benefit in attempting to identify cases of vasa praevia in the second trimester and that this strategy could be associated with a high false-positive rate.\textsuperscript{12} RCTs to investigate whether ultrasound screening for vasa praevia decreases perinatal mortality would be ethically unacceptable in view of the poor neonatal prognosis. The analysis of the literature included in the 2017 UK NSC external review of the 2013 screening policy indicates that up to 80% of vasa praevia cases have one or more identifiable prenatal risk factors.\textsuperscript{12} There are no UK data on the epidemiology of velamentous cord insertion and no studies on screening for vasa praevia have reported outcomes (benefits and harms) from identifying velamentous cord insertion in the absence of vasa praevia. Overall, the UK NSC recommendation on screening for vasa praevia is that screening for velamentous cord insertion as a means of identifying vasa praevia should not be implemented. In addition, due to the limited numbers of prospective studies, it is not possible to evaluate the benefits and harms of universal screening over and above a more limited, or targeted, approach to identify vasa praevia in currently identified risk groups, such as women with a low-lying placenta at the midpregnancy routine fetal anatomy ultrasound examination. 
A 2016 systematic review of the incidence and risk factors of vasa praevia, including 13 studies (two prospective cohort studies, 10 retrospective cohort studies and one case–control study) reporting on 569 410 women, found that 83% of the 325 cases reviewed had one or more risk factors, including placenta praevia, bilobed placenta, succenturiate placental lobes, conception by assisted reproductive technology and velamentous cord insertion.\textsuperscript{20} The 2017 prospective population-based cohort study using the AMOSS found that 55 of the 58 women diagnosed prenatally had at least one risk factor for vasa praevia, with velamentous cord insertion (62%) and low-lying placenta (60%) being the most prevalent.\textsuperscript{14} These data have also been confirmed by recent retrospective cohort studies.\textsuperscript{17,21,22} Vasa praevia diagnosed in the second trimester resolves before delivery in around 20% of cases.\textsuperscript{16,23} A follow-up ultrasound examination at 32 weeks of gestation is suggested, particularly in women with a low-lying placenta as, even if it has resolved, it is still associated with a high risk of vasa praevia.\textsuperscript{8} The American Institute of Ultrasound in Medicine has recommended that the placental cord insertion site be documented when technically possible.\textsuperscript{24} Identification of the placental cord insertion at the routine fetal anomaly scan is easy and accurate,\textsuperscript{3,9} does not add significantly to scan time and requires little additional scanning skill for a trained operator.
A questionnaire survey of obstetricians and gynaecologists in England and Wales, with a 55% response rate, found that most (80%) respondents felt that a selective screening policy for vasa praevia was not feasible, one-third could not name a single risk factor associated with vasa praevia and over one-half had no experience in diagnosing or managing the condition.\textsuperscript{25} This survey highlights the need to increase awareness of vasa praevia among healthcare professionals, and also the need to ensure skill validation and quality control across the board. A decision-analytic model estimating the lifetime incremental costs and benefits of screening found targeted screening for vasa praevia in all twin pregnancies to be cost effective in a study of approximately 132 000 pregnancies.\textsuperscript{26} Using these data and assuming an 80% detection rate, the 2014 UK NSC external review found that targeted screening of all twins and of singleton pregnancies with at least one high-risk factor could reduce the perinatal loss rate by as many as 150 cases per year.\textsuperscript{12} 7. **How should women with vasa praevia be managed?** Because of the speed at which fetal exsanguination can occur and the high perinatal mortality rate associated with ruptured vasa praevia, delivery should not be delayed while trying to confirm the diagnosis, particularly if there is evidence that fetal wellbeing is compromised. \textit{[New 2018]} In the presence of confirmed vasa praevia in the third trimester, elective caesarean section should ideally be carried out prior to the onset of labour. A decision for prophylactic hospitalisation from 30–32 weeks of gestation in women with confirmed vasa praevia should be individualised and based on a combination of factors, including multiple pregnancy, antenatal bleeding and threatened premature labour.
\textit{[New 2018]} In cases of vasa praevia that develop premature rupture of membranes and/or labour at viable gestational ages, a caesarean section should be performed without delay. To avoid unnecessary anxiety, admissions, prematurity and caesarean section, it is essential to confirm persistence of vasa praevia by ultrasound in the third trimester. Delivery by caesarean section of women with confirmed vasa praevia is intuitive and logical, but not based on RCTs.\textsuperscript{12} The objective of the management of vasa praevia diagnosed during the second trimester of pregnancy is to prolong pregnancy safely while avoiding potential complications related to rupture of membranes before or during labour. Two other national societies have existing clinical guidelines on the management of vasa praevia diagnosed during pregnancy,\textsuperscript{7,8,19} but the corresponding recommendations are also based on observational data, decision analyses and expert opinion. Antenatal hospitalisation in a unit with appropriate neonatal facilities has been proposed from 30–32 weeks of gestation, but the supporting evidence is weak and of low quality.\textsuperscript{6} The purpose of hospitalisation is to allow closer surveillance for signs of labour and timelier caesarean delivery before labour and/or before membrane rupture. The 2017 prospective population-based cohort study using the AMOSS found no difference in perinatal outcome between women with a prenatal diagnosis of vasa praevia who were hospitalised antenatally and those who were not.\textsuperscript{14} Overall, outpatient care has been associated with excellent outcomes,\textsuperscript{3} and thus the benefit of hospitalisation in asymptomatic women remains unproven.
Data on the use of TVS cervical length measurements in the management of vasa praevia are limited and the role of cervical cerclage is unknown.\textsuperscript{12} Some authors have suggested that outpatient management is possible if there is no evidence of cervical shortening on TVS and there are no symptoms of bleeding or preterm uterine activity.\textsuperscript{27} Data from the follow-up of women with placenta praevia indicate that the probability of bleeding is higher if the cervix is shorter than expected for gestational age.\textsuperscript{28–32} A 2018 retrospective case–control study of 29 singleton pregnancies with a prenatal diagnosis of vasa praevia in the second trimester found that the rate of cervical length shortening was significantly slower in women delivered by elective compared with emergency caesarean section.\textsuperscript{33} For each additional millimetre-per-week decrease in cervical length, the odds of emergency caesarean delivery increased by a factor of 6.50 (95% CI 1.02–41.20). Similarly, data from a 2017 systematic review on the management of vasa praevia in twins indicate that TVS cervical length measurements from 26–28 weeks of gestation may be useful to evaluate the individual risk of preterm birth.\textsuperscript{34} Based on these observations, as well as a lower probability of labour, asymptomatic women with stable cervical length measurements should be the best candidates for outpatient management. 8. **At what gestation should elective delivery occur?** The ultimate management goal of confirmed vasa praevia should be to deliver before rupture of membranes while minimising the impact of iatrogenic prematurity. Based on available data, planned caesarean delivery for a prenatal diagnosis of vasa praevia at 34–36 weeks of gestation is reasonable in asymptomatic women.
[New 2018] Administration of corticosteroids for fetal lung maturity should be recommended from 32 weeks of gestation because of the increased risk of preterm delivery. The optimal timing of caesarean delivery remains unknown. There is no consensus about the timing of delivery in cases of confirmed vasa praevia, and the currently low prevalence of prenatal diagnosis of this condition in the general population precludes any prospective trials to evaluate the ideal timing.\textsuperscript{3,12} Overall, vasa praevia is associated with an increased risk of preterm birth. The associated complications of prematurity are in many cases the result of iatrogenic preterm birth in an effort to prevent stillbirth. Gestational age at delivery is the only other variable associated with perinatal outcomes in the management of vasa praevia. As for other obstetric situations associated with a higher risk of late preterm delivery, the administration of corticosteroids is recommended.\textsuperscript{7,8,19} In the largest cohort study published so far, fetuses that were diagnosed prenatally had a 97% survival rate at a mean gestational age at delivery of 34.9 (±2.5) weeks.\textsuperscript{13} A decision analysis comparing 11 strategies for delivery timing in women with vasa praevia found that delivery between 34 and 36 weeks of gestation balances the risks of premature rupture of membranes, and subsequent fetal haemorrhage and death, against the risks of prematurity.\textsuperscript{35} The authors found no benefit to expectant management beyond 37 weeks of gestation and that, at any given gestational age, incorporating amniocentesis for verification of fetal lung maturity does not improve outcomes. 9. **Clinical governance** 9.1 **Debriefing** Postnatal follow-up should include debriefing, with an explanation of what happened, why it happened and any implications for future pregnancy.
9.2 **Training** Raising awareness of the clinical risk factors for vasa praevia should be pursued locally, including establishing policies or guidelines for flagging women at risk and arranging for them to see a specialist consultant when vasa praevia is suspected. There should be appropriate training for ultrasound staff in the antenatal diagnosis of vasa praevia. 9.3 **Clinical incident reporting** There should be written protocols for the identification of, and planning of further care for, women diagnosed with vasa praevia. 10. **Recommendations for future research** - National and regional epidemiological data are needed to define a relevant high-risk population and the cost-effectiveness and service implications of screening for vasa praevia. - Prospective screening studies are needed to evaluate the outcome of velamentous cord insertion in the absence of vasa praevia. - Prospective multicentre studies on the use of cervical length ultrasound examination are required to evaluate the role of this measurement in the management of vasa praevia. - Prospective quality data are needed to compare hospitalisation at 30–32 weeks of gestation with outpatient follow-up in the management of vasa praevia. - RCTs of the optimal timing of delivery for vasa praevia are needed. 11. **Auditable topics** - Appropriate delivery plan in place if an antenatal diagnosis of vasa praevia is made (100%). 12. **Useful links and support groups** - Vasa praevia raising awareness [www.vasapraevia.co.uk/the-experts/]. - The International Vasa Praevia Foundation [www.vasapraevia.org]. - Royal College of Obstetricians and Gynaecologists. *Low-lying placenta after 20 weeks (placenta praevia)*. Information for you. London: RCOG; 2018 [https://www.rcog.org.uk/en/patients/patient-leaflets/a-low-lying-placenta-after-20-weeks-placenta-praevia/]. - UK National Screening Committee. The UK NSC recommendation on Vasa praevia screening in pregnancy.
London: UK NSC; 2017. Screening for vasa praevia [legacyscreening.phe.org.uk/vasapraevia]. **References** 1. Fox H, Sebire NJ, editors. *Pathology of the Placenta*. 3rd ed. Philadelphia, PA: Saunders-Elsevier; 2007. 2. Benirschke K, Burton GJ, Baergen RN, editors. *Pathology of the Human Placenta*. 6th ed. Berlin: Springer-Verlag; 2012. 3. Silver RM. Abnormal placentation: placenta previa, vasa previa and placenta accreta. *Obstet Gynecol* 2015;126:654–68. 4. Vintzileos AM, Ananth CV, Smulian JC. Using ultrasound in the clinical management of placental implantation abnormalities. *Am J Obstet Gynecol* 2015;213:570–7. 5. Vahanian SA, Lavery JA, Ananth CV, Vintzileos A. Placental implantation site and maternal risk of preterm delivery: a systematic review and metaanalysis. *Am J Obstet Gynecol* 2015;213:578–90. 6. Attilakos G, David A, Brocklehurst P, Knight M. Vasa praevia: a national UK study using the UK Obstetric Surveillance System (UKOSS). Abstracts of the British Maternal & Fetal Medicine Society (BMFMS) 19th Annual Conference 2017. 30–31 March 2017, Amsterdam, The Netherlands. Abstract O.LD.7. *BJOG* 2017;124 Suppl 2:4–16. 7. Gagnon R, Morin L, Bly S, Butt K, Cargill YM, Denis N, et al.; Diagnosis and Management Committee; Maternal & Fetal Medicine Committee. SOGC clinical practice guideline: guidelines for the management of vasa previa. *Int J Gynaecol Obstet* 2010;108:85–9. 8. Society for Maternal-Fetal Medicine (SMFM) Publications Committee, Sinkey RG, Odibo AO, Dashe JS. #37: Diagnosis and management of vasa previa. *Am J Obstet Gynecol* 2015;212:3615–50. 9. Smulian E, Caudle MD. Vasa previa: more than 100 years in preventing unnecessary fetal deaths. *BJOG* 2016;123:1287. 10. Oyelese YO, Turner M, Lees C, Campbell S. Vasa previa: an avoidable obstetric tragedy. *Obstet Gynecol Surv* 1999;54:138–45. 11. National Institute for Health and Care Excellence. *Antenatal Care for Uncomplicated Pregnancies*. Clinical Guideline 62. Manchester: NICE; 2017. 12.
UK National Screening Committee. *Screening for Vasa Praevia in the Second Trimester of Pregnancy: External Review Against Programme Appraisal Criteria for the UK National Screening Committee (UK NSC)*. London: UK NSC; 2017. 13. Oyelese Y, Catanzarite V, Prefumo F, Lashley S, Schachter M, Tovbin Y, et al. Vasa previa: the impact of prenatal diagnosis on outcomes. *Obstet Gynecol* 2004;103:937–42. 14. Sullivan EA, Javid N, Duncombe G, Li Z, Safi N, Cincotta R, et al. Vasa previa diagnosis, clinical practice, and outcomes in Australia. *Obstet Gynecol* 2017;130:591–8. 15. Ruiter L, Kok N, Limpens J, Derks JB, de Graaf IM, Mol BW, et al. Systematic review of accuracy of ultrasound in the diagnosis of vasa previa. *Ultrasound Obstet Gynecol* 2015;45:516–22. 16. Rebarber A, Dolin C, Fox NS, Klauser CK, Saltzman DH, Roman AS. Natural history of vasa previa across gestation using a screening protocol. *J Ultrasound Med* 2014;33:141–7. 17. Catanzarite V, Cousins L, Daneshmand S, Schwendemann W, Casele H, Adamczak Z, et al. Prenatally diagnosed vasa previa: a single-institutional review of cases. *Obstet Gynecol* 2016;128:153–61. 18. Bronsteen R, Whitten A, Balasubramanian M, Lee W, Lorenz R, Redman M. Vasa previa: clinical presentations, outcomes, and implications for management. *Obstet Gynecol* 2013;122:352–7. 19. Gagnon R. No. 231-Guidelines for the management of vasa previa. *J Obstet Gynaecol Can* 2017;39:e415–21. 20. Ruiter L, Kok N, Limpens J, Derks JB, de Graaf IM, Mol B, et al. Incidence of and risk indicators for vasa previa: a systematic review. *BJOG* 2016;123:878–82. 21. Swank ML, Garite TJ, Maurel K, Das A, Perlow JH, Combs CA, et al.; Obstetrix Collaborative Research Network. Vasa previa: diagnosis and management. *Am J Obstet Gynecol* 2016;215:223.e1–6. 22. Nohuz E, Boulay E, Gallot D, Lemery D, Vendittelli F. Can we perform a prenatal diagnosis of vasa previa to improve its obstetrical and neonatal outcome? *J Gynecol Obstet Hum Reprod* 2017;46:373–7. 23.
Lee W, Lee YL, Kirk JS, Sloan CT, Smith RS, Comstock CH. Vasa previa: prenatal diagnosis, natural evolution, and clinical outcome. *Obstet Gynecol* 2000;95:572–6. 24. American Institute of Ultrasound in Medicine. AIUM practice guideline for the performance of obstetric ultrasound examinations. *J Ultrasound Med* 2013;32:1083–101. 25. Ioannou C, Wayne C. Diagnosis and management of vasa previa: a questionnaire survey. *Ultrasound Obstet Gynecol* 2010;35:205–9. 26. Cipriano LE, Barth WH Jr, Zaric GS. The cost-effectiveness of targeted or universal screening for vasa previa at 18–20 weeks of gestation in Ontario. *BJOG* 2010;117:1108–18. 27. Oyelese Y, Spong C, Fernandez MC, McLaren RA. Second trimester low-lying placenta and in-vitro fertilization? Exclude vasa previa. *J Matern Fetal Med* 2000;9:370–2. 28. Conde-Agudelo A, Romero R. Predictive accuracy of changes in transvaginal sonographic cervical length over time for preterm birth: a systematic review and metaanalysis. *Am J Obstet Gynecol* 2015;212:1789–96. 29. Ghi T, Contro E, Martina T, Piva M, Morandi R, Orsini LF, et al. Cervical length and risk of antepartum bleeding in women with complete placenta previa. *Ultrasound Obstet Gynecol* 2009;33:209–12. 30. Zaitoun MM, El Behery MM, Abd El Hameed AA, Soliman BS. Does cervical length and the lower placental edge thickness measurement correlates with clinical outcome in cases of complete placenta previa? *Arch Gynecol Obstet* 2011;284:867–73. 31. Mimura T, Hasegawa J, Nakamura N, Matsuo K, Ichizuka K, Sekiguchi A, et al. Correlation between the cervical length and the amount of bleeding during cesarean section in placenta previa. *J Obstet Gynaecol Res* 2011;37:830–5. 32. Sekiguchi A, Nakai A, Okuda N, Inde Y, Takeshita T. Consecutive cervical length measurements as a predictor of preterm cesarean section in complete placenta previa. *J Clin Ultrasound* 2015;43:17–22. 33. Maymon R, Melcer Y, Tovbin J, Pekar-Zlotin M, Smorgick N, Jauniaux E.
The rate of cervical length shortening in the management of vasa previa. *J Matern Fetal Neonatal Med* 2017;7:17–23. 34. Jauniaux E, Melcer Y, Maymon R. Prenatal diagnosis and management of vasa previa in twin pregnancies: a case series and systematic review. *Am J Obstet Gynecol* 2017;216:568–75. 35. Robinson BK, Grobman WA. Effectiveness of timing strategies for delivery of individuals with vasa previa. *Obstet Gynecol* 2011;117:542–9. Appendix I: Explanation of guidelines and evidence levels Clinical guidelines are: ‘systematically developed statements which assist clinicians and patients in making decisions about appropriate treatment for specific conditions’. Each guideline is systematically developed using a standardised methodology. Exact details of this process can be found in Clinical Governance Advice No. 1 *Development of RCOG Green-top Guidelines* (available on the RCOG website at http://www.rcog.org.uk/green-top-development). These recommendations are not intended to dictate an exclusive course of management or treatment. They must be evaluated with reference to individual patient needs, resources and limitations unique to the institution and variations in local populations. It is hoped that this process of local ownership will help to incorporate these guidelines into routine practice. Attention is drawn to areas of clinical uncertainty where further research may be indicated. The evidence used in this guideline was graded using the scheme below and the recommendations formulated in a similar fashion with a standardised grading scheme.
### Classification of evidence levels

| Level | Description |
|-------|-------------|
| 1++ | High-quality meta-analyses, systematic reviews of randomised controlled trials or randomised controlled trials with a very low risk of bias |
| 1+ | Well-conducted meta-analyses, systematic reviews of randomised controlled trials or randomised controlled trials with a low risk of bias |
| 1− | Meta-analyses, systematic reviews of randomised controlled trials or randomised controlled trials with a high risk of bias |
| 2++ | High-quality systematic reviews of case–control or cohort studies or high-quality case–control or cohort studies with a very low risk of confounding, bias or chance and a high probability that the relationship is causal |
| 2+ | Well-conducted case–control or cohort studies with a low risk of confounding, bias or chance and a moderate probability that the relationship is causal |
| 2− | Case–control or cohort studies with a high risk of confounding, bias or chance and a significant risk that the relationship is not causal |
| 3 | Non-analytical studies, e.g. case reports, case series |
| 4 | Expert opinion |

### Grades of recommendation

| Grade | Description |
|-------|-------------|
| A | At least one meta-analysis, systematic review or RCT rated as 1++, and directly applicable to the target population; or a systematic review of RCTs or a body of evidence consisting principally of studies rated as 1+, directly applicable to the target population and demonstrating overall consistency of results |
| B | A body of evidence including studies rated as 2++ directly applicable to the target population, and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 1++ or 1+ |
| C | A body of evidence including studies rated as 2+ directly applicable to the target population, and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 2++ |
| D | Evidence level 3 or 4; or extrapolated evidence from studies rated as 2+ |

### Good practice points

- Recommended best practice based on the clinical experience of the guideline development group

This guideline was produced on behalf of the Royal College of Obstetricians and Gynaecologists by: **Professor ERM Jauniaux FRCOG, London (Lead Developer); Professor Z Alfirevic FRCOG, Liverpool, UK; Mr AG Bhide FRCOG, London, UK; Professor GJ Burton, University of Cambridge, UK; Professor SL Collins MRCOG, Oxford, UK; Professor R Silver, University of Utah, Salt Lake City, Utah, USA** and peer reviewed by: Professor ML Brizot, University of São Paulo, São Paulo, Brazil; Professor J Dashe, University of Texas Southwestern Medical Center, Dallas, TX, USA; Dr D Fraser FRCOG, Norwich; Dr J Hasegawa, St Marianna University School of Medicine, Kawasaki, Kanagawa, Japan; Dr YY Hu, Sichuan University, Chengdu, Sichuan, China; Dr F Malik MRCOG, Southend; Professor P Martinelli, Università di Napoli Federico II, Naples, Italy; RCOG Women’s Network; Dr R Salim, Emek Medical Center, Afula, Israel; Dr JT Thomas FRANZCOG, CMFM, Mater
Mothers’ Hospital, Brisbane, Australia; Mr N Thomson, Society and College of Radiographers, London; Dr M Tikkanen, Women’s Clinic, Helsinki University Hospital Finland, Helsinki, Finland; UK National Screening Committee; Vasa Praevia Ireland Support and Awareness Group; and Vasa Praevia Raising Awareness for the UK and the International Vasa Previa Foundation; Dr SG Vitale, University of Catania, Catania, Italy. [Correction added on 14 March 2019, after first online publication: SG Vitale has been added to peer reviewers.] Committee lead reviewers were: Dr A McKelvey MRCOG, Norfolk; and Mr RJ Fernando FRCOG, London The chairs of the Guidelines Committee were: Dr MA Ledingham MRCOG, Glasgow\(^1\); Dr B Magowan FRCOG, Melrose\(^1\); and Dr AJ Thomson MRCOG, Paisley\(^2\). \(^1\)co-chairs from June 2018 \(^2\)until May 2018. *All RCOG guidance developers are asked to declare any conflicts of interest. A statement summarising any conflicts of interest for this guideline is available from: [https://www.rcog.org.uk/en/guidelines-research-services/guidelines/gtg27b/](https://www.rcog.org.uk/en/guidelines-research-services/guidelines/gtg27b/).* The final version is the responsibility of the Guidelines Committee of the RCOG. --- The guideline will be considered for update 3 years after publication, with an intermediate assessment of the need to update 2 years after publication. --- **DISCLAIMER** The Royal College of Obstetricians and Gynaecologists produces guidelines as an educational aid to good clinical practice. They present recognised methods and techniques of clinical practice, based on published evidence, for consideration by obstetricians and gynaecologists and other relevant health professionals. The ultimate judgement regarding a particular clinical procedure or treatment plan must be made by the doctor or other attendant in the light of clinical data presented by the patient and the diagnostic and treatment options available. 
This means that RCOG Guidelines are unlike protocols or guidelines issued by employers, as they are not intended to be prescriptive directions defining a single course of management. Departure from the local prescriptive protocols or guidelines should be fully documented in the patient’s case notes at the time the relevant decision is taken.
The Effect of Change in Population Size on DNA Polymorphism Fumio Tajima Department of Biology, Kyushu University, Fukuoka 812, Japan Manuscript received March 10, 1989 Accepted for publication July 14, 1989 ABSTRACT The expected number of segregating sites and the expectation of the average number of nucleotide differences among DNA sequences randomly sampled from a population, which is not in equilibrium, have been developed. The results obtained indicate that, in the case where the population size has changed drastically, the number of segregating sites is influenced by the size of the current population more strongly than is the average number of nucleotide differences, while the average number of nucleotide differences is affected by the size of the original population more severely than is the number of segregating sites. The results also indicate that the average number of nucleotide differences is affected by a population bottleneck more strongly than is the number of segregating sites. THE amount of genetic variation at the DNA level can be measured by the number of segregating sites among DNA sequences sampled (WATTERSON 1975) or by the average number of (pairwise) nucleotide differences between DNA sequences sampled (TAJIMA 1983). The statistical properties of these quantities have been obtained under the assumption that the size of the population is constant (WATTERSON 1975; TAJIMA 1983). The size of a population, however, often changes drastically. Although the effects of change in population size on heterozygosity and the number of alleles in a sample have already been studied by NEI, MARUYAMA and CHAKRABORTY (1975), CHAKRABORTY and NEI (1977), MARUYAMA and FUERST (1984, 1985a,b) and WATTERSON (1986), the effect of change in population size on the number of segregating sites and the average number of nucleotide differences is not yet known.
Here I examine this problem quantitatively, since the number of segregating sites and the average number of nucleotide differences are more appropriate measures for the amount of DNA polymorphism than heterozygosity and the number of alleles. THEORY Assumption: Assume that a mutant is selectively neutral (KIMURA 1968, 1983), and that the number of sites on a DNA sequence is so large that a newly arisen mutation takes place at a site different from the sites where the previous mutations have occurred (KIMURA 1969). Also assume that a population consists of diploid individuals, and consider a DNA sequence located on an autosomal chromosome. General formula: Consider a randomly mating population with discrete and nonoverlapping generations, and let $N_t$ be the effective population size in the $t$th generation. Denote by $\nu$ the mutation rate per DNA sequence per generation. Also denote the expected number of segregating sites among $n$ DNA sequences randomly chosen from a population in the $t$th generation by $S_n(t)$. The number of segregating sites is the number of sites which are segregating (or polymorphic) among $n$ DNA sequences. On the other hand, the average number of nucleotide differences between DNA sequences is given by $$\hat{k} = \sum_{i<j} k_{ij} / \binom{n}{2},$$ where $k_{ij}$ is the number of nucleotide differences between the $i$th and $j$th DNA sequences. Therefore, the expectation of the average number of nucleotide differences is equal to the expected number of nucleotide differences between two DNA sequences randomly sampled from a population. Since the number of nucleotide differences between two DNA sequences is equal to the number of segregating sites when $n$ is 2, the expectation of the average number of nucleotide differences is equal to the expected number of segregating sites for $n = 2$, namely $$E(\hat{k}) = S_2(t).$$ Incidentally, $S_1(t) = 0$ since there is no segregating site when only one DNA sequence is considered. 
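Both statistics are straightforward to compute from a sample of aligned sequences. The sketch below is an illustrative implementation (the function names and the four-site sample are my own, not from the paper):

```python
from itertools import combinations

def num_segregating_sites(seqs):
    """S: number of sites that are polymorphic among the sampled sequences (Watterson 1975)."""
    return sum(len(set(column)) > 1 for column in zip(*seqs))

def avg_pairwise_differences(seqs):
    """k-hat: number of differing sites, averaged over all C(n, 2) pairs (Tajima 1983)."""
    pairs = list(combinations(seqs, 2))
    total = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return total / len(pairs)

sample = ["AATG", "AGTG", "AATC"]        # hypothetical aligned sample, n = 3
S = num_segregating_sites(sample)        # sites 2 and 4 segregate -> 2
k = avg_pairwise_differences(sample)     # (1 + 1 + 2) over 3 pairs -> 4/3
```

For a sample of two sequences the two quantities coincide, as noted in the text.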
If we denote by $P_n(i)$ the probability that $n$ DNA sequences randomly sampled from a population in the $t$th generation are derived from $i$ DNA sequences in the previous generation, then $S_n(t)$ is given by $$S_n(t) = \sum_{i=1}^{n} S_i(t-1)P_n(i) + n\nu, \tag{1}$$ where $S_i(t-1)$ is the expected number of segregating sites among $i$ DNA sequences in the $(t-1)$th generation and the last term on the right-hand side of (1) is the effect of new mutations. When $n$ is small, $P_n(i)$ is approximately given by $$P_n(n) = 1 - \frac{\binom{n}{2}}{2N_{t-1}}, \qquad P_n(n-1) = \frac{\binom{n}{2}}{2N_{t-1}}, \qquad P_n(i) = 0 \quad \text{for} \quad i < n - 1 \tag{2}$$ (Kingman 1982; Hudson 1983; Tajima 1983). Substituting (2) into (1), we have $$S_n(t) - S_n(t-1) = \frac{\binom{n}{2}}{2N_{t-1}} [S_{n-1}(t-1) - S_n(t-1)] + n\nu, \tag{3}$$ where $S_1(t) = 0$ as mentioned earlier. Approximating (3) by a differential equation, we obtain $$\frac{dS_n(t)}{dt} = \frac{\binom{n}{2}}{2N_t} [S_{n-1}(t) - S_n(t)] + n\nu. \tag{4}$$ This formula is simpler than (3), and in this case we do not have to assume that $n$ is small, so we use (4) instead of (3) to obtain $S_n(t)$. Assume that the population size is constant ($N_t = N$ for $t > 0$). Then, integration of (4) gives $$S_n(t) = a_n \exp(-a_n t) \int S_{n-1}(t)\exp(a_n t)\, dt + \frac{M}{n - 1} + C_n\exp(-a_n t), \tag{5}$$ where $$M = 4N\nu, \qquad a_n = \frac{\binom{n}{2}}{2N},$$ and $C_n$ is a constant of integration determined from the initial conditions. Then, we have $$S_n(t) = b_{n,1} + \sum_{i=2}^{n} b_{n,i}\exp(-a_i t), \tag{6}$$ where $$b_{n,1} = b_{n-1,1} + \frac{M}{n - 1}, \qquad b_{n,i} = \frac{n(n - 1)}{(n - i)(n + i - 1)}\, b_{n-1,i} \quad \text{for} \quad 1 < i < n, \qquad b_{n,n} = S_n(0) - \sum_{i=1}^{n-1} b_{n,i}. \tag{7}$$ $b_{1,1}$ is equal to 0 since $S_1(t)$ is 0, so that we have $$b_{n,1} = M \sum_{i=1}^{n-1} \frac{1}{i}.$$ $b_{n,i}$ can be obtained by using (7) repeatedly.
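Recursion (3) can also be iterated directly, generation by generation, for an arbitrary history of population sizes. A minimal sketch (function name and parameter choices my own): at a constant size $N$ the fixed point of (3) satisfies $S_n = S_{n-1} + M/(n-1)$, which telescopes to Watterson's equilibrium value $M \sum_{i=1}^{n-1} 1/i$.

```python
from math import comb

def iterate_S(n_max, pop_sizes, nu):
    """Iterate recursion (3): returns S with S[n] ~ S_n(t), the expected number
    of segregating sites in a sample of n sequences after len(pop_sizes) generations.

    pop_sizes -- effective (diploid) population size N_t for each generation
    nu        -- mutation rate per sequence per generation
    Valid while comb(n, 2) / (2 * N_t) stays well below 1 (the regime of eq. 2).
    """
    S = [0.0] * (n_max + 1)                    # S[0], S[1] stay 0: no variation for n <= 1
    for N in pop_sizes:
        prev = S[:]
        for n in range(2, n_max + 1):
            coal = comb(n, 2) / (2 * N)        # per-generation coalescence probability, eq. (2)
            S[n] = prev[n] + coal * (prev[n - 1] - prev[n]) + n * nu
    return S

# Constant size with M = 4*N*nu = 1: S_2 relaxes to M as in eq. (8),
# and S_n approaches M * sum(1/i for i in 1..n-1).
N, nu = 500, 1 / (4 * 500)
S = iterate_S(10, [N] * (40 * N), nu)          # 40N generations, i.e. t/(2N) = 20
```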
For example, when $n$ is 2, from (7) we have $$b_{2,1} = M \quad \text{and} \quad b_{2,2} = S_2(0) - M.$$ Therefore, we obtain $$S_2(t) = M + [S_2(0) - M]\exp[-t/(2N)], \tag{8}$$ which is identical with the formula obtained by Li (1977) using a different method. Incidentally, Li (1977) obtained not only the expectation but also the variance and distribution of the number of nucleotide differences between two DNA sequences. **Starting from an equilibrium population:** When the population is in equilibrium at time 0, we can simplify (6). Since $S_n(0) = M_0 \sum_{i=1}^{n-1} (1/i)$, where $M_0 = 4N_0\nu$ (Watterson 1975), (6) becomes $$S_n(t) = M \sum_{i=1}^{n-1} \frac{1}{i} + (M_0 - M) \sum_{i=1}^{\lfloor n/2 \rfloor} c_{n,i}\exp(-a_{2i}t), \tag{9}$$ where $\lfloor n/2 \rfloor$ is the largest integer which is not greater than $n/2$, and $c_{n,i}$ is given by $$c_{n,i} = \frac{(n - 1)!\, n!\, (4i - 1)}{(n - 2i)!\, (n + 2i - 1)!\, i\, (2i - 1)}. \tag{10}$$ When $n = 2$, we have $c_{2,1} = 1$ from (10), and therefore we recover (8). **NUMERICAL EXAMPLE** **Starting from an equilibrium population:** First, we consider the case where the population is in equilibrium at time 0. Then, $S_n(t)$ is given by (9). Table 1 shows the case where $M_0 = 0$ and $M = 1$. This means that until time 0 the size of the population is so small that there is no genetic variation, but the population size becomes large afterwards. In this table the values of $S_n(t)/\sum_{i=1}^{n-1} (1/i)$ are shown, since they are equal to $M$ when the population is in equilibrium.
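Equation (9) is easy to check numerically. The sketch below (function names my own) uses the coefficient in factorial form, $c_{n,i} = (n-1)!\,n!\,(4i-1)/[(n-2i)!\,(n+2i-1)!\,i(2i-1)]$, which is the form consistent with $c_{2,1} = 1$ and with the normalisation $\sum_i c_{n,i} = \sum_{i=1}^{n-1} 1/i$ required so that $S_n(0) = M_0 \sum 1/i$:

```python
from math import comb, exp, factorial

def c(n, i):
    """Coefficient c_{n,i} of exp(-a_{2i} t) in equation (9), factorial form."""
    return (factorial(n - 1) * factorial(n) * (4 * i - 1)) / (
        factorial(n - 2 * i) * factorial(n + 2 * i - 1) * i * (2 * i - 1)
    )

def S_eq9(n, tau, M, M0):
    """S_n(t) from an equilibrium start (eq. 9); tau = t/(2N), time in units of 2N generations."""
    harmonic = sum(1 / i for i in range(1, n))
    # a_{2i} * t = comb(2i, 2)/(2N) * t = comb(2i, 2) * tau
    decay = sum(c(n, i) * exp(-comb(2 * i, 2) * tau) for i in range(1, n // 2 + 1))
    return M * harmonic + (M0 - M) * decay

# With M0 = 0 and M = 1 (the setting of Table 1), the normalised value for
# n = 2 at t = 2N is 1 - exp(-1) ~ 0.632, and for n = 5 at t = N it is ~ 0.416.
```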
**TABLE 1** Values of $S_n(t)/\sum_{i=1}^{n-1} (1/i)$ obtained by equation (9), where $4N_0v = 0$ and $4Nv = 1$ are assumed

| $\frac{t}{2N}$ | $n=2$ | $n=5$ | $n=10$ | $n=20$ | $n=50$ | $n=100$ |
|----------------|-----|-----|-----|-----|-----|-----|
| 0.0 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 0.1 | 0.095 | 0.109 | 0.146 | 0.198 | 0.283 | 0.349 |
| 0.2 | 0.181 | 0.202 | 0.253 | 0.317 | 0.407 | 0.469 |
| 0.3 | 0.259 | 0.282 | 0.337 | 0.403 | 0.488 | 0.545 |
| 0.4 | 0.330 | 0.353 | 0.407 | 0.471 | 0.550 | 0.601 |
| 0.5 | 0.393 | 0.416 | 0.468 | 0.527 | 0.599 | 0.646 |
| 0.6 | 0.451 | 0.472 | 0.521 | 0.575 | 0.641 | 0.683 |
| 0.7 | 0.503 | 0.523 | 0.567 | 0.617 | 0.677 | 0.715 |
| 0.8 | 0.551 | 0.568 | 0.609 | 0.655 | 0.709 | 0.743 |
| 0.9 | 0.593 | 0.610 | 0.647 | 0.688 | 0.737 | 0.768 |
| 1.0 | 0.632 | 0.647 | 0.681 | 0.718 | 0.763 | 0.791 |
| 1.2 | 0.699 | 0.711 | 0.739 | 0.769 | 0.806 | 0.829 |
| 1.4 | 0.753 | 0.763 | 0.786 | 0.811 | 0.841 | 0.860 |
| 1.6 | 0.798 | 0.806 | 0.825 | 0.846 | 0.870 | 0.885 |
| 1.8 | 0.835 | 0.841 | 0.857 | 0.874 | 0.894 | 0.906 |
| 2.0 | 0.865 | 0.870 | 0.883 | 0.896 | 0.913 | 0.923 |
| 2.5 | 0.918 | 0.921 | 0.929 | 0.937 | 0.947 | 0.953 |
| 3.0 | 0.950 | 0.952 | 0.957 | 0.962 | 0.968 | 0.972 |
| 3.5 | 0.970 | 0.971 | 0.974 | 0.977 | 0.981 | 0.983 |
| 4.0 | 0.982 | 0.982 | 0.984 | 0.986 | 0.988 | 0.990 |
| 4.5 | 0.989 | 0.989 | 0.990 | 0.992 | 0.993 | 0.994 |
| 5.0 | 0.993 | 0.994 | 0.994 | 0.995 | 0.996 | 0.996 |
| 6.0 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.999 |
| 7.0 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 8.0 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

$S_n(t)$ is the expected number of segregating sites among a sample of $n$ DNA sequences. In particular, $S_2(t)$ is equal to the expectation of the average number of (pairwise) nucleotide differences between DNA sequences sampled.

From this table we can see that the amount of variation increases very slowly, especially in the case of $n = 2$.
For example, it takes $1.4N$ generations until this number becomes half of the maximum value. On the other hand, in the case of $n = 100$ it takes only $0.5N$ generations. In fact, from (9) we can see that the larger the sample size, the more quickly the number of segregating sites increases. Table 2 shows the case where the size of the population suddenly becomes one hundredth at time 0. In this case the number of segregating sites declines more rapidly than the average number of nucleotide differences. Again, the larger the sample size, the more quickly the number of segregating sites decreases. **Bottleneck effect:** In this section we consider the case where the size of the population becomes small, but the population recovers its original size $T$ generations later. Figure 1 shows this process. At time 0 the population is assumed to be in equilibrium, so that $S_n(t)$ for $0 < t < T$ can be computed using (9). After that, $S_n(t)$ is computed using (6) with (7), since the population is no longer in equilibrium. It should be noted that $M$ is replaced with $M_0$ in these formulae. Figure 2 gives several examples in which the population size is assumed to become one hundredth of the original size. For the values of $T$, $0.4N$, $N$, and $2N$ are used. In all the cases examined, a larger reduction of $S_n(t)$ is observed when $n$ is larger, but the bottleneck effect continues longer when $n$ is smaller. In other words, the average number of nucleotide differences is affected by a bottleneck in population size more strongly than is the number of segregating sites.
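This two-stage computation can be sketched numerically by running the recursion (6)-(7) twice: once with the bottleneck $\theta$ starting from equilibrium initial values, and once with the recovered $\theta$ starting from the values reached at time $T$. The sketch below (our own Python, using the Figure 2 parameters; variable names are illustrative) scales time by twice the current population size in each phase:

```python
import math

def coeffs(S0, M):
    """b_{n,i} of equations (6)-(7) for initial values S0[n] and theta = M."""
    nmax = len(S0) - 1
    b = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
    for n in range(2, nmax + 1):
        b[n][1] = b[n - 1][1] + M / (n - 1)
        for i in range(2, n):
            b[n][i] = n * (n - 1) / ((n - i) * (n + i - 1)) * b[n - 1][i]
        b[n][n] = S0[n] - sum(b[n][1:n])
    return b

def S(n, tau, b):
    """Equation (6); tau is time scaled by twice the current population size."""
    return b[n][1] + sum(b[n][i] * math.exp(-i * (i - 1) / 2.0 * tau)
                         for i in range(2, n + 1))

nmax = 20
M0, M = 1.0, 0.01        # 4*N0*v = 1 before; 4*N*v = 0.01 in the bottleneck
H = [sum(1.0 / i for i in range(1, n)) for n in range(nmax + 1)]

# Phase 1: bottleneck lasting T = 0.4*N generations (tau = T/(2N) = 0.2),
# starting from equilibrium, S_n(0) = M0 * sum_{i<n} 1/i.
b_bot = coeffs([M0 * H[n] for n in range(nmax + 1)], M)
S_T = [S(n, 0.2, b_bot) for n in range(nmax + 1)]

# Phase 2: the population recovers its original size, so theta is M0 again,
# and the S_n(T) values become the new initial conditions.
b_rec = coeffs(S_T, M0)
```

Here `S_T[2]` is about 0.82, i.e. the pairwise number retains most of its value after a short bottleneck, while `S_T[20] / H[20]` is smaller (the number of segregating sites is hit harder), and `S(2, tau, b_rec)` then relaxes back toward $M_0$.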
--- **TABLE 2** Values of $S_n(t)/\sum_{i=1}^{n-1} (1/i)$ obtained by equation (9), where $4N_0v = 100$ and $4Nv = 1$ are assumed

| $\frac{t}{2N}$ | $n=2$ | $n=5$ | $n=10$ | $n=20$ | $n=50$ | $n=100$ |
|----------------|-----|-----|-----|-----|-----|-----|
| 0.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| 0.1 | 90.58 | 89.17 | 85.55 | 80.36 | 72.01 | 65.45 |
| 0.2 | 82.05 | 80.00 | 74.99 | 68.60 | 59.72 | 53.53 |
| 0.3 | 74.34 | 72.06 | 66.63 | 60.11 | 51.65 | 46.04 |
| 0.4 | 67.36 | 65.07 | 59.67 | 53.40 | 45.57 | 40.51 |
| 0.5 | 61.05 | 58.84 | 53.70 | 47.83 | 40.65 | 36.10 |
| 0.6 | 55.33 | 53.27 | 48.47 | 43.06 | 36.52 | 32.40 |
| 0.7 | 50.16 | 48.25 | 43.84 | 38.88 | 32.94 | 29.22 |
| 0.8 | 45.48 | 43.74 | 39.69 | 35.18 | 29.79 | 26.43 |
| 0.9 | 41.25 | 39.66 | 35.98 | 31.88 | 26.99 | 23.95 |
| 1.0 | 37.42 | 35.97 | 32.63 | 28.91 | 24.49 | 21.73 |
| 1.2 | 30.82 | 29.63 | 26.88 | 23.83 | 20.20 | 17.95 |
| 1.4 | 25.41 | 24.44 | 22.18 | 19.68 | 16.71 | 14.87 |
| 1.6 | 20.99 | 20.19 | 18.34 | 16.29 | 13.86 | 12.35 |
| 1.8 | 17.36 | 16.71 | 15.20 | 13.52 | 11.53 | 10.29 |
| 2.0 | 14.40 | 13.86 | 12.62 | 11.25 | 9.62 | 8.61 |
| 2.5 | 9.13 | 8.80 | 8.05 | 7.22 | 6.23 | 5.62 |
| 3.0 | 5.93 | 5.73 | 5.28 | 4.77 | 4.17 | 3.80 |
| 3.5 | 3.99 | 3.87 | 3.59 | 3.29 | 2.92 | 2.70 |
| 4.0 | 2.81 | 2.74 | 2.57 | 2.39 | 2.17 | 2.03 |
| 4.5 | 2.10 | 2.06 | 1.95 | 1.84 | 1.71 | 1.62 |
| 5.0 | 1.67 | 1.64 | 1.58 | 1.51 | 1.43 | 1.38 |
| 6.0 | 1.25 | 1.24 | 1.21 | 1.19 | 1.16 | 1.14 |
| 7.0 | 1.09 | 1.09 | 1.08 | 1.07 | 1.06 | 1.05 |
| 8.0 | 1.03 | 1.03 | 1.03 | 1.03 | 1.02 | 1.02 |
| 9.0 | 1.01 | 1.01 | 1.01 | 1.01 | 1.01 | 1.01 |
| 10.0 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

$S_n(t)$ is the expected number of segregating sites among a sample of $n$ DNA sequences. In particular, $S_2(t)$ is equal to the expectation of the average number of (pairwise) nucleotide differences between DNA sequences sampled.

--- **Figure 1.**—The bottleneck model.
**Figure 2.**—Relationship between $S_n(t)/\sum_{i=1}^{n-1} (1/i)$ and the number of generations after the recovery of population size. $S_n(t)$ is the expected number of segregating sites among a sample of $n$ DNA sequences. In particular, $S_2(t)$ is equal to the expectation of the average number of (pairwise) nucleotide differences between DNA sequences sampled. The bottleneck model is shown in Figure 1. The durations ($T$) of the bottleneck are (a) $0.4N$, (b) $N$, and (c) $2N$ generations. $4N_0v = 1$ and $4Nv = 0.01$ are assumed. When points $\bullet$ and $\Delta$ (and $\bigcirc$) are close to each other, only point $\bullet$ is plotted in order to avoid confusion. Point $\bigcirc$ is eliminated when it is close to point $\Delta$.

**DISCUSSION**

In this paper, formulae have been developed for computing the expected number of segregating sites and the expectation of the average number of nucleotide differences among DNA sequences sampled from a population that is not in equilibrium. The results obtained indicate that the number of segregating sites is influenced by the current population size more strongly than is the average number of nucleotide differences, while the average number of nucleotide differences is affected by the original population size more strongly than is the number of segregating sites. The relationship between the two numbers is quite similar to that between heterozygosity and the number of alleles. In fact, heterozygosity and the number of alleles obtained from the infinite allele model are equivalent to the average number of nucleotide differences and the number of segregating sites obtained from the infinite site model, respectively. Recently, Tajima (1989) has developed a statistical method for testing the neutral mutation hypothesis by using the average number of nucleotide differences and the number of segregating sites. This method, however, assumes that the population is in equilibrium.
As he has indicated, we must consider whether the population used is in equilibrium when we apply this method. In fact, if the population experienced a bottleneck recently, then this method may falsely reject the neutral hypothesis. This might be avoided, however, if we apply the method separately to several types of DNA polymorphism; for example, coding region vs. noncoding region, nucleotide polymorphism vs. insertion/deletion polymorphism, mitochondrial DNA vs. nuclear DNA, and so on.

I thank B. S. Weir and two anonymous reviewers for their valuable suggestions and comments.

**LITERATURE CITED**

Chakraborty, R., and M. Nei, 1977 Bottleneck effects on average heterozygosity and genetic distance with the stepwise mutation model. Evolution 31: 347–356.

Hudson, R. R., 1983 Testing the constant-rate neutral allele model with protein sequence data. Evolution 37: 203–217.

Kimura, M., 1968 Evolutionary rate at the molecular level. Nature 217: 624–626.

Kimura, M., 1969 The number of heterozygous nucleotide sites maintained in a finite population due to steady flux of mutations. Genetics 61: 893–903.

Kimura, M., 1983 The Neutral Theory of Molecular Evolution. Cambridge University Press, London.

Kingman, J. F. C., 1982 On the genealogy of large populations. J. Appl. Probab. 19A: 27–43.

Li, W.-H., 1977 Distribution of nucleotide differences between two randomly chosen cistrons of a finite population. Genetics 85: 331–337.

Maruyama, T., and P. A. Fuerst, 1984 Population bottlenecks and nonequilibrium models in population genetics. I. Allele numbers when populations evolve from zero variability. Genetics 108: 745–763.

Maruyama, T., and P. A. Fuerst, 1985a Population bottlenecks and nonequilibrium models in population genetics. II. Number of alleles in a small population that was formed by a recent bottleneck. Genetics 111: 675–689.

Maruyama, T., and P. A. Fuerst, 1985b Population bottlenecks and nonequilibrium models in population genetics. III. Genic homozygosity in populations which experience periodic bottlenecks. Genetics 111: 691–703.

Nei, M., T. Maruyama and R. Chakraborty, 1975 The bottleneck effect and genetic variability in populations. Evolution 29: 1–10.

Tajima, F., 1983 Evolutionary relationship of DNA sequences in finite populations. Genetics 105: 437–460.

Tajima, F., 1989 Statistical method for testing the neutral mutation hypothesis by DNA polymorphism. Genetics 123: 585–595.

Watterson, G. A., 1975 On the number of segregating sites in genetic models without recombination. Theor. Popul. Biol. 7: 256–276.

Watterson, G. A., 1986 The homozygosity test after a change in population size. Genetics 112: 899–907.

Communicating editor: B. S. Weir
Experience affects the outcome of agonistic contests without affecting the selective advantage of size

Michael M. Kasumovic\textsuperscript{a,b,*}, Damian O. Elias\textsuperscript{a,c,d}, David Punzalan\textsuperscript{e,f}, Andrew C. Mason\textsuperscript{a}, Maydianne C.B. Andrade\textsuperscript{a}

\textsuperscript{a}Integrative Behaviour and Neuroscience Group, University of Toronto, Scarborough, ON, Canada
\textsuperscript{b}Evolution & Ecology Research Centre, School of Biological, Earth & Environmental Sciences, Sydney, Australia
\textsuperscript{c}Department of Zoology, University of British Columbia, Vancouver, BC, Canada
\textsuperscript{d}Department of Environmental Science, Policy and Management, University of California, Berkeley, CA, USA
\textsuperscript{e}Department of Ecology and Evolutionary Biology, University of Toronto, ON, Canada
\textsuperscript{f}Department of Biology, University of Ottawa, Ottawa, ON, Canada

**Article info** **Article history:** Received 19 November 2008 Initial acceptance 22 January 2009 Final acceptance 26 February 2009 Published online 28 April 2009 MS. number: A08-00746 **Keywords:** jumping spider multiple competitions *Phidippus clarus* previous experience selection gradient tournament design

In the field, phenotypic determinants of competitive success are not always absolute. For example, contest experience may alter future competitive performance. As future contests are not determined solely by phenotypic attributes, prior experience could also potentially alter phenotype–fitness associations. In this study, we examined the influence of single and multiple experiences on contest outcomes in the jumping spider *Phidippus clarus*. We also examined whether phenotype–fitness associations changed as individuals gained more experience.
Using both size-matched contests and a tournament design, we found that both winning and losing experience affected future contest success; males with prior winning experience were more likely to win subsequent contests. Although experience was a significant determinant of success in future contests, male weight was approximately 1.3 times more important than experience in predicting contest outcomes. Despite the importance of experience in determining contest outcomes, patterns of selection did not change between rounds. Overall, our results show that experience can be an important determinant of contest outcomes, even in short-lived invertebrates, and that experience alone is unlikely to alter phenotype–fitness associations. Crown Copyright © 2009. Published by Elsevier Ltd on behalf of The Association for the Study of Animal Behaviour. All rights reserved.

In intrasexual competitions, phenotypic traits are often strong predictors of competitive success. For example, many studies have shown that males that are larger, in better condition, or bear larger weaponry most often win contests (e.g. Andersson 1994; Hack 1997; Rillich et al. 2006). While the tendency in past analyses of phenotypic selection has been to investigate the predictive relationship between static morphological traits and fitness, it is also well accepted that nonstatic traits and environmental factors are important determinants of fitness/success, and failure to account for these can lead to a distorted view of phenotypic selection and the adaptive value of traits (Lande & Arnold 1983; Mitchell-Olds & Shaw 1987; Rausher 1992). One such factor is past contest experience. Winning or losing experience can alter future competitive success (reviewed in: Hsu et al. 2006; Rutte et al. 2006); in general, prior success increases the probability of future wins, while prior failure increases the probability of future losses (Dodson & Schwaab 2001; Hsu & Wolf 2001; Stuart-Fox et al. 2006).
Experience effects, however, may not be limited to the most recent contest, as individuals are likely to encounter multiple rivals throughout a breeding season, especially if individuals mate multiply and/or are long lived. Multiple encounters will probably result in multiple winning or losing experiences, and each individual experience may contribute to a cumulative effect on future contest outcomes (e.g. Hsu & Wolf 1999; Stuart-Fox et al. 2006). In addition to the direct effect that losing and winning experience can have on contest outcomes, experience also has the potential to alter phenotype–fitness correlations, and therefore, estimates of phenotypic selection. In other words, future contest outcomes may be influenced more by experience than by phenotypic traits associated with success, resulting in a dissociation between phenotype and fitness. In this study, we quantified phenotypic selection on a suite of traits during male–male agonistic contests in a jumping spider, *Phidippus clarus*, while simultaneously evaluating the importance of prior contest experience. Male *P. clarus* engage in intense pairwise contests over access to female refuges (see below). Previous work has shown that males use a combination of self-assessment (during the assessment phase) and partial, mutual opponent assessment (during the escalated phases of contests) in determining contest outcomes, and that male weight is a strong predictor of contest success (Elias et al. 2008). As winning males are able to maintain exclusive access to female refuges (see below), success in aggressive contests is a good indicator of male fitness. There is also clear evidence of an experience effect on male behaviour in these contests; in repeated bouts with the same opponent, winning males continue to outcompete losing males and losing males dramatically reduce behaviours associated with aggression (Elias et al. 2008).
This finding suggests that experience influences subsequent contest outcomes between rivals, but whether experience also influences contest outcomes between novel individuals remains to be determined. We had four goals in this study: to determine (1) the effect of experience on competitive success with novel rivals in *P. clarus*, (2) the effect of experience on contest outcomes relative to other phenotypic traits, (3) the relative importance of the most recent experience versus past experience in determining contest outcomes, and (4) whether experience alters phenotype–fitness correlations, and therefore, selection gradients. To examine the first question, we assigned males a winning or losing experience in the first round, and then fought experienced males against naive weight-matched opponents in a second round. Weight is a strong indicator of fighting success (Elias et al. 2008); hence, by weight matching in the second interaction, we controlled for fighting ability and were thus able to isolate the effect of experience (Hsu et al. 2006; Stuart-Fox et al. 2006). However, this type of experimental procedure does not allow an examination of the relative effects of experience compared to other phenotypic traits (Stuart-Fox et al. 2006). Thus, to address the remaining questions, we used a tournament design where males were randomly paired against one another. A random tournament design allows for an examination of multiple phenotypic traits relative to experience while also allowing for an estimation of selection gradients in each round. **METHODS** **Life History** *Phidippus clarus* is abundant throughout North America during midsummer months. During the early season, both sexes build hibernacula (nests) in curled leaves, and return to these hibernacula each night (Hoefler 2006).
Males mature before females (protandry), and mature males begin searching for and defending the hibernacula of penultimate instar females (one moult from maturity) (Hoefler 2007), preferentially choosing larger females (Hoefler 2008). Males mate with females immediately after females mature, making access to hibernacula extremely important. While defending a hibernaculum, males are likely to encounter numerous potential rivals attempting to usurp them, providing individuals with multiple competitive encounters to determine their fighting ability relative to others in the population. However, males also encounter rivals while wandering and do fight in the absence of females (Hoefler 2007). Males perform a series of stereotyped behaviours during aggressive interactions that have been described elsewhere (Elias et al. 2008). Briefly, these behaviours can be divided into two phases: (1) a precontact phase, where males display towards one another and (2) a contact phase, where males physically interact with one another. The precontact phase begins when the two spiders orient towards one another, adopting a hunched posture. Males then approach or retreat from one another with their front legs outstretched horizontally. During these displays, males also produce a series of substrate-borne vibrations (Elias et al. 2008). The contact phase begins when the two spiders are close to each other and begin to leg-fence. Leg-fencing behaviour consists of the two males touching each other’s horizontally outstretched legs, whereby males attempt to push each other backwards with their front legs and bodies. Some of these interactions escalate further to grappling, where males lock chelicerae (jaws) and legs for relatively longer periods. **Housing and Competitions** We collected adult male *P. clarus* from Koffler Scientific Reserve at Joker’s Hill, King, Ontario, Canada (44°03’N, 79°29’W) for this experiment. 
We housed all males in individual clear plastic cages in the laboratory on a 12:12 h light:dark cycle and fed them small *Acheta domesticus* and *Drosophila hydei* twice weekly. We placed opaque barriers between cages for at least 4 days to allow males to acclimatize to laboratory conditions, to minimize effects of prior visual interactions between caged males (Forster 1982; Land 1985; Land & Nilsson 2002) and to control for prior fighting experience in the field. Two days before trials, we anaesthetized males using CO$_2$ and marked each individual with two spots of nontoxic fluorescent paint (Luminous paint, BioQuip Products, Inc., Rancho Dominguez, CA, U.S.A.) on the abdomen to allow individual identification during contests. We observed males during feeding intervals to ensure that males were not affected by the marking procedure. We used $5 \times 5 \times 6$ cm plastic containers as competitive arenas, which were similar in size to natural arenas (plant leaves) used by male *P. clarus*. We covered the walls of each arena with petroleum jelly to prevent individuals from escaping from the arena. We covered the base of each arena with a sheet of paper and changed the paper between fights with new individuals to ensure there were no webbing or pheromonal cues left by either the winner or loser. To start each contest, we placed an opaque divider in the centre of the arena and then placed one individual on either side of the divider. Individuals were allowed 1 min to acclimate to their surroundings, after which the divider was removed and the contest began. A contest lasted until an individual won two of three bouts or until 10 min had elapsed. In cases where the full time was reached, the winner was determined to be the individual that won the first bout. A male was considered to have won a bout when the rival male turned away and retreated more than two body lengths. There were no instances where each individual won only a single bout.
After the outcome was decided, we removed both individuals and placed them back into their individual cages. Males were not fed between rounds. We weighed individuals after each fight using an Ohaus electronic balance. After all fights were completed, we digitally photographed each individual (Nikon Digital Camera DXM 1200) using a Zeiss microscope (Stemi 2000C). We then used Act-1 software (Nikon Instruments, Inc., New York, NY, U.S.A.) to measure cephalothorax width (at its widest point) and the mean femur, patella–tibia and tarsus lengths of the first legs as measurements of size. **Size-matched Contests** We collected 156 adult males for this experiment. To determine whether experience influences contest outcome in *P. clarus*, we (1) randomly paired males in round 1, ensuring a minimum of 10% weight difference (mean weight difference = 24%), and (2) paired each winner and loser from round 1 with a weight-matched opponent (weight difference less than 5%; mean weight difference = 4%) in round 2. There was a maximum of 60 min between the two rounds. **Tournament Design** We collected 88 adult males for this experiment. Using a tournament style design, we performed three rounds of contests in a single day. In each round, males were randomly assigned opponents, with the caveat that the colour combination for the two individuals was unique, allowing individual identification during contests. All males completed contests in the current round before starting a subsequent round to ensure that all males had the same amount of experience. There was a minimum of 98 min and a maximum of 282 min between rounds (mean ± SE = 193.73 ± 3.23 min). **Statistical Analyses** We examined experience effects using three statistical analyses. First, we compared the number of winners and losers with prior winning and losing experience using a Fisher’s exact test to determine whether experience alone affected fight outcome in size-matched and tournament design contests (e.g. Hsu & Wolf 1999).
Second, we used a logistic model to determine whether the difference in size between opponents as well as prior experience of opponents affected contest outcome in size-matched contests. Third, we used a modified Bradley–Terry model (Firth 2005; e.g. Stuart-Fox et al. 2006) to examine the relative effect of the measured traits and experience in determining contest outcomes in the tournament design. The Bradley–Terry model is the appropriate method to analyse tournament data as it is explicitly aimed at partitioning the effects of past outcomes and intrinsic measures of quality in tournament designs (Firth 2005; e.g. Stuart-Fox et al. 2006). Assuming that winning a contest has a positive effect and losing a contest has a negative effect on future contests (e.g. Hsu & Wolf 1999), we quantified experience by allotting a value of 1 each time an individual won a contest and a value of −1 each time an individual lost. Since experience from immediately previous versus earlier contests can have different effects on future contest outcomes (Hsu & Wolf 1999), we coded experience in three ways: (1) most recent experience alone: we assumed that only the most recent previous experience would influence contest outcomes, and we coded experience only from the last contest; (2) cumulative experience: we assumed that each experience would have equal value in future contests, and we coded experience equally from both prior contests; (3) degrading cumulative experience: we assumed that experience only from immediately prior contests would influence contest outcomes, and we coded earlier contests with half the value of the most recent contests. We performed separate Bradley–Terry models for winning and losing and selected the most appropriate model by minimizing the Akaike Information Criterion (AIC), a measure of goodness of fit where a lower value indicates a better fit to the data (Akaike 1983; Burnham & Anderson 2002).
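As an illustration of the first and third analyses, the sketch below (our own Python on simulated contests; the sample sizes, effect sizes and names are invented, not the study's data or code) computes a two-sided Fisher's exact test from a 2×2 win/loss table, and fits a Bradley–Terry-style logistic model to differences in weight and most recent experience:

```python
import numpy as np
from math import comb

# --- Fisher's exact test on a 2x2 win/loss table --------------------------
def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for [[a, b], [c, d]]: sum every
    hypergeometric table probability no larger than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    pmf = lambda k: comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
    p_obs = pmf(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# --- Bradley-Terry-style fit with an experience covariate -----------------
# Simulated tournament: win probability is logistic in the difference of
# (standardized) weight and of most recent experience (+1 win / -1 loss).
rng = np.random.default_rng(0)
n_males = 400
weight = rng.normal(size=n_males)
last = np.zeros(n_males)                 # coding (1): most recent experience
rows, outcome = [], []
for _ in range(3):                       # three rounds of random pairing
    order = rng.permutation(n_males)
    for i, j in zip(order[::2], order[1::2]):
        x = np.array([weight[i] - weight[j], last[i] - last[j]])
        win = rng.random() < 1.0 / (1.0 + np.exp(-(1.0 * x[0] + 0.5 * x[1])))
        rows.append(x)
        outcome.append(1.0 if win else 0.0)
        w, l = (i, j) if win else (j, i)
        last[w], last[l] = 1.0, -1.0     # codings (2)/(3) would accumulate
X, y = np.array(rows), np.array(outcome)

beta = np.zeros(2)
for _ in range(5000):                    # plain gradient ascent on the
    p = 1.0 / (1.0 + np.exp(-X @ beta))  # logistic log likelihood
    beta += 0.1 * X.T @ (y - p) / len(y)
```

On this simulation both fitted coefficients come out positive, and for a round like the size-matched one reported in the Results (20/6 winners vs. 6/20 losers), `fisher_two_sided(20, 6, 6, 20)` gives a P value of about 0.0002.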
We also compared our best-fit model to a model that excluded experience to determine whether the model that included experience better explained our results. Winning males are able to maintain exclusive access to female refuges (Hoefler 2007), so success in aggressive contests is a good indicator of male fitness. We performed two selection analyses. All five morphological traits examined (weight, cephalothorax width, and mean length of femur, patella–tibia and tarsus of the first legs) were highly correlated (data not shown), and selection analysis requires use of uncorrelated traits (Lande & Arnold 1983), so in the first analysis, we performed a principal component analysis (PCA using the covariance matrix; e.g. Kraft et al. 2006), which provided a new set of five uncorrelated traits suitable for selection analyses (Lande & Arnold 1983). Although the first component explained the most variance (in this case, overall size), the other components explained variation in individual ‘shapes’. Therefore, as we originally had five traits, we kept all five PC scores in our analysis. We then standardized the PC scores to allow comparison between rounds. Although this allowed us to examine how selection influences a suite of traits, it did not allow us to examine how selection influences weight, the only phenotypic predictor of success (Elias et al. 2008). Thus, in the second analysis, we examined how selection influences weight. We fitted multiple regression models to estimate standardized selection gradients of directional, quadratic and correlation selection on the principal components (Lande & Arnold 1983) separately for each round to examine whether experience altered the strength and/or direction of selection on males between rounds. We first fitted a linear regression to estimate $\beta$. We then fitted a quadratic regression on all linear, quadratic and cross-product terms to estimate the $\gamma$ matrix (Lande & Arnold 1983). 
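The multivariate selection analysis can be sketched as follows (a minimal numpy illustration on simulated data; the traits, sample size and effect sizes are invented, and cross-product (correlational) terms are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# five correlated morphological traits driven by a latent "overall size"
size = rng.normal(size=n)
traits = size[:, None] + rng.normal(scale=0.5, size=(n, 5))

# PCA on the covariance matrix -> five uncorrelated scores
cov = np.cov(traits, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
eigvec = eigvec[:, ::-1]                 # PC1 (largest eigenvalue) first
if eigvec[:, 0].sum() < 0:               # orient PC1 toward "larger"
    eigvec[:, 0] *= -1
z = (traits - traits.mean(0)) @ eigvec
z = (z - z.mean(0)) / z.std(0)           # standardized PC scores

# binary contest success (assumed here to depend on the latent size),
# converted to relative fitness
won = (size + rng.normal(size=n)) > 0
w = won / won.mean()

# directional gradients beta: regression of relative fitness on the scores
beta = np.linalg.lstsq(np.column_stack([np.ones(n), z]), w, rcond=None)[0][1:]

# quadratic gradients: fit linear + squared terms, then double the quadratic
# regression coefficients to obtain the diagonal of the gamma matrix
Q = np.column_stack([np.ones(n), z, z ** 2])
gamma_diag = 2.0 * np.linalg.lstsq(Q, w, rcond=None)[0][6:]
```

Because the simulated fitness is driven by overall size, the largest directional gradient falls on PC1, mirroring the pattern reported in the Results.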
We doubled the values of our quadratic terms to accurately reflect how nonlinear selection functions (Stinchcombe et al. 2008). To test for differences in selection gradients between rounds, we used a sequential model-building approach whereby the effect (i.e. variance explained) of including/excluding model terms was evaluated using partial $F$ tests. Partial $F$ tests are used to calculate significance based on only a subset of predictor variables in a linear model (Draper & John 1988; Bowerman & O’Connell 1990). The application of this method for comparisons of nonlinear selection among different samples is outlined in Chenoweth & Blows (2005). For the partial $F$ test, we first fitted a model with only round as a fixed effect (model A). We then added all the linear terms as covariates (model B), and added the linear-by-round interactions (model C). To test for overall significance of linear selection, we estimated a partial $F$ for model B against model A. To test for significance of linear selection between rounds, we estimated a partial $F$ for model C against model B. We tested for significant variance in nonlinear selection between rounds by first adding all linear and nonlinear terms (model D) and then adding the nonlinear-by-round interaction terms (model E). To test for overall significance of nonlinear selection, we compared model D to model B, and to test for significant nonlinear selection between rounds, we compared model E to model D. We tested for significant selection on weight in the univariate analysis in the same manner. We performed all statistical analyses using JMP 7.0 (2007, SAS Institute, Inc., Cary, NC, U.S.A.). **RESULTS** **Size-matched Contests** There were 26 first-round contests where males were given either a winning or losing experience. Of the 26 first-round winners, 20 males won and six males lost against weight-matched opponents in round 2.
Of the 26 first-round losers, six males won and 20 males lost against weight-matched opponents in round 2. First-round winners were therefore significantly more likely to win against males with similar fighting ability, while first-round losers were significantly more likely to lose against males with similar fighting ability in subsequent contests (Fisher’s exact two-tailed test: $P = 0.0002$). Results of the logistic model were similar, where winning experience had a significant positive effect ($\chi^2_1 = 9.58$, $P = 0.002$) and weight had no effect ($\chi^2_1 = 0.6$, $P = 0.69$) on contest outcome. Tournament Design During contests, one individual died after round 1, and six individuals died during round 2. Therefore, our analysis is based on 44 first-round fights ($N = 88$ individuals), 42 second-round fights ($N = 84$ individuals) and 40 third-round fights ($N = 80$ individuals) for a total of 126 contests. Of these contests, 93 were between individuals that differed in weight by at least 10%, and 43 were between individuals that differed in size by at least 10% (Fig. 1). All traits were normally distributed. There was no significant difference in weight (mean ± SE weight difference: round 1: $10.14 \pm 0.86$ mg; round 2: $11.47 \pm 0.87$ mg; round 3: $12.77 \pm 0.90$ mg; $F_{2,249} = 2.21$, $P = 0.11$) or body size (mean ± SE cephalothorax width difference: round 1: $0.276 \pm 0.032$ mm; round 2: $0.265 \pm 0.0332$ mm; round 3: $0.296 \pm 0.0340$ mm; $F_{2,249} = 0.22$, $P = 0.80$) between contestants in each round. Male weight tended to decrease throughout the trials, but the difference in weight between trials was not significant (repeated measures ANOVA: $F_{1,78} = 2.186$, $P = 0.12$). Of the 44 males that won in round 1, 43 survived and fought in round 2. Of these, 26 males won and 17 males lost in round 2. Of the 44 males that lost in round 1, 14 males won and 29 males lost in round 2. 
Thus, first-round winners had greater success than first-round losers in the subsequent round (Fisher’s exact two-tailed test: $P = 0.006$). We examined the third-round results in the same manner. There were 39 winning males in round 2; 27 of these won and 12 lost in round 3. Of the 40 losers from round 2, 12 males won and 28 males lost in round 3. Second-round winners also won more contests in round 3, while second-round losers lost significantly more contests in round 3 (Fisher’s exact two-tailed test: $P < 0.0001$). We analysed the tournament results using a Bradley–Terry model. Of the three candidate models for predicting fight outcomes, Model 1 (incorporating only most recent experience) was the best fit (AIC: Model 1: $-216.30$; Model 2: $-214.571$; Model 3: $-215.65$). Model 1 also explained the greatest proportion of the variance in contest outcomes ($\chi^2_{121} = 49.86$, $P < 0.0001$, $R^2 = 0.2879$) even compared to the model excluding experience (AIC: $-212.18$; $\chi^2_{121} = 44.95$, $P < 0.0001$, $R^2 = 0.2595$). In Model 1, both weight and previous experience significantly predicted contest outcomes (Table 1). Weight was approximately 1.3 times more important than previous experience in determining contest outcomes (standardized coefficients, Table 1). **Estimates of Phenotypic Selection** Table 2 shows how each of the original traits contributed to the new PC scores. All traits loaded positively on PC1, and thus, PC1 can be considered a measurement of morphological size and condition. The other principal component scores (PC 2–5) reflect variation in morphological shape. Linear selection gradients are shown for each round in Table 3. There was significant positive selection on PC1 in each round, with relatively stronger selection evident in round 3. Results from partial $F$ tests showed significant overall linear selection ($F_{5,243} = 7.46$, $P < 0.001$), but no difference in selection gradients between rounds ($F_{10,233} = 0.87$, $P = 0.56$). 
Nonlinear selection gradients are shown in Table 4. There was significant positive quadratic selection on PC4; however, overall nonlinear selection was not significant (partial $F$ test: $F_{15,228} = 1.36$, $P = 0.17$), and there was no difference in nonlinear selection between rounds (partial $F$ test: $F_{45,188} = 0.52$, $P = 0.99$). Our univariate estimates showed significant positive selection on male weight in each round (round 1: $\beta = 0.15 \pm 0.05$, $F_{1,86} = 2.96$, $P = 0.004$; round 2: $\beta = 0.12 \pm 0.05$, $F_{1,82} = 2.38$, $P = 0.02$; round 3: $\beta = 0.22 \pm 0.05$, $F_{1,78} = 4.43$, $P < 0.0001$). Using a partial $F$ test, overall linear selection on male weight was significant ($F_{1,248} = 31.00$, $P < 0.0001$). There was no significant difference in the pattern of selection between rounds (partial $F$ test: $F_{2,246} = 0.90$, $P = 0.41$). **DISCUSSION** As in a previous study (Elias et al. 2008), we found that weight was the only morphological trait that strongly predicted contest outcome in male *P. clarus*. There was also strong selection, overall and in each round, on male size (multivariate analysis) and male weight (univariate analysis). Male *P. clarus* mainly use self-assessment to determine the outcome of contests (Elias et al. 2008), so an individual's weight (or condition, in the multivariate analysis) probably determines its fighting threshold/ability. Thus, even though males undergo a lengthy signalling period, weight is the only reliable cue for determining how long a male will persist in physical contests. Although weight was the most important determinant of contest outcomes in *P. clarus* in our study, all males tended to lose weight between bouts. An individual's weight probably varies not only throughout the breeding season, but also across days or hours, as it did in our study, where males fought three opponents within 8 h.
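The univariate gradients reported above follow the standard Lande–Arnold (1983) approach: relative fitness is regressed on the variance-standardized trait, and the slope is the linear selection gradient β. A minimal sketch of that calculation on simulated data (all values below are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: 88 males; heavier males win their contest more often.
weight = rng.normal(25.0, 3.0, size=88)              # trait values
p_win = 1.0 / (1.0 + np.exp(-(weight - 25.0)))       # logistic win probability
won = (rng.random(88) < p_win).astype(float)         # 1 = won, 0 = lost

# Lande-Arnold linear gradient: slope of relative fitness on the
# standardized trait.
z = (weight - weight.mean()) / weight.std(ddof=1)    # standardized trait
w_rel = won / won.mean()                             # relative fitness
X = np.column_stack([np.ones_like(z), z])
(alpha, beta), *_ = np.linalg.lstsq(X, w_rel, rcond=None)
print(f"linear selection gradient beta = {beta:.2f}")
```

Because fitness within a round is binary (win/lose), a gradient estimated this way matches the paper's round-by-round β in spirit; standard errors and the partial $F$ tests come from the same regression framework.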
**Figure 1.** The distribution of the difference in (a) size (cephalothorax width) and (b) weight between competing males.

**Table 1.**

| | $\beta$ | $\chi^2$ | $P$ | Standardized $\beta$* |
|----------------|-------------|----------|-------|------------------------|
| Weight | $-0.09 \pm 0.04$ | 4.589 | 0.03 | $-0.93 \pm 0.68$ |
| Experience | $-0.42 \pm 0.19$ | 4.94 | 0.03 | $-0.71 \pm 0.29$ |
| Cephalothorax width | $-1.70 \pm 1.30$ | 1.71 | 0.19 | $-0.89 \pm 0.57$ |
| Femur length | $-0.79 \pm 2.41$ | 0.11 | 0.74 | $-1.12 \pm 0.31$ |
| Patella–tibia length | $1.29 \pm 1.95$ | 0.44 | 0.50 | $0.40 \pm 0.55$ |
| Tarsus length | $-0.21 \pm 2.06$ | 0.01 | 0.91 | $-0.02 \pm 0.61$ |

\* Standardized coefficients allow comparison of the relative strength of the various factors in the model.

Thus, it may be difficult for an individual to ascertain its own fighting ability relative to others in the population based on weight alone. This may help explain why individual-based thresholds (self-/cumulative assessment) best explain contest dynamics in *P. clarus* (Elias et al. 2008). We also found that both winning and losing experience contributed strongly to future contest outcomes in *P. clarus*: winners were more likely to win subsequent contests, while losers were more likely to lose them. This result occurred both in the size-matched trials and in the tournament design, where opponents were chosen randomly to simulate natural contests. Furthermore, an individual's most recent prior experience explained most of the variation in male fight outcome. Although experience had a relatively strong effect on contest outcomes in *P. clarus*, there was no significant difference in estimated selection gradients between rounds in either the univariate or the multivariate analysis. Thus, we found no evidence that experience had either a reinforcing effect or a weakening effect on the strength of selection.
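The coefficients in Table 1 come from a Bradley–Terry-type model (fit in the original analysis with the R tools described by Firth 2005). Purely to illustrate the underlying model, here is a toy ability fit using the standard iterative minorization–maximization update; the contest counts are invented:

```python
import numpy as np

def bradley_terry(wins, tol=1e-10, max_iter=1000):
    """MM estimation of Bradley-Terry abilities.

    wins[i, j] = number of contests in which male i beat male j.
    Returns abilities p (summing to 1) with P(i beats j) = p[i]/(p[i]+p[j]).
    """
    n = wins.shape[0]
    p = np.ones(n) / n
    n_pair = wins + wins.T          # contests fought between each pair
    w = wins.sum(axis=1)            # total wins per male
    for _ in range(max_iter):
        denom = np.array([
            sum(n_pair[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        p_new = w / denom
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Invented round-robin: A beat B 4-1, A beat C 5-0, B beat C 3-2
wins = np.array([[0.0, 4.0, 5.0],
                 [1.0, 0.0, 3.0],
                 [0.0, 2.0, 0.0]])
abilities = bradley_terry(wins)
print("estimated abilities (A, B, C):", abilities.round(3))
```

Covariates such as weight and prior experience enter, as in Table 1, by structuring each male's log-ability as a linear predictor; the toy above fits unstructured abilities only.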
This may be because weight is relatively more important (1.3 times greater than experience) in determining contest outcomes in this species. Thus, unless individuals are relatively similar in size, it is unlikely that experience alone will alter phenotype–fitness associations. In the tournament design, weight may have had a greater influence on contest outcomes since only 19.5% of trials in the second and third rounds were between individuals that differed in size by less than 10%. Although experience influences contest outcomes, it is not itself a heritable trait. However, how an individual responds physiologically and behaviourally to positive or negative contest experience, and how long such memories last, may have a heritable basis (Hsu et al. 2006). For example, if hormone titres change either during or after contests (e.g. Earley & Hsu 2008), behavioural changes may result that allow individuals to decrease the costs associated with fights, resulting in potential fitness increases (Rutte et al. 2006). This can occur if individuals that have recently lost a contest are less likely to initiate or escalate future contests, and/or if individuals that have recently won contests are bolder (e.g. Frost et al. 2007). Additionally, if information from multiple experiences is reliable, selection may act upon the mechanisms associated with long-term memory formation (Kandel et al. 2000). It is therefore important to begin examining whether experience alters phenotype–fitness associations under different competitive circumstances, and its potential effect on the evolution of learning and memory. However, experience is not the only factor that is likely to affect patterns of selection in *P. clarus*. As male *P. clarus* defend females' hibernacula from other males, ownership is also likely to influence contest outcomes, as shown in other species (Olsson & Shine 2000; Hoefler 2002).
Males also show variation in maturation rates, which results in a significant increase in male size as the season progresses (M. M. Kasumovic & D. O. Elias, unpublished data). Although later-maturing, larger males are likely to outcompete smaller protandrous males, the smaller protandrous males would gain access to females' hibernacula, and thus gain winning experience, before the larger males mature. Together, the ownership status and previous winning experience of smaller protandrous males may outweigh any size benefits (e.g. Hoefler 2006). Further studies examining multiple factors, and the effect that such factors can have on selection in concert, may clarify how selection functions in contests and whether patterns of selection can change within a single breeding season (e.g. Kasumovic et al. 2008). **Acknowledgments** We thank J. M. Brandt, T. Peckmezian, K. Permapaladas and S. Sivilinghem for field and laboratory assistance. We also thank S. Lailvaux, M. Hall, and the Integrative Behaviour & Neuroscience Group (University of Toronto, Scarborough) for useful discussions and comments on the manuscript. This project was funded by a Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarship B, an Ontario Graduate Student Fellowship, and an Animal Behavior Society Student Grant to M.M.K.; a National Science Foundation International Research Fellowship Program award (0502239) and a National Institutes of Health National Research Service Award (1F32GM076091-01A1) to D.O.E.; NSERC Discovery Grants (229029-2004 to M.C.B.A. and 238882 241419 to A.C.M.); and grants from the Canadian Foundation for Innovation and Ontario Innovation Trust (M.C.B.A. and A.C.M.). **References** Akaike, H. 1983. Information measures and model selection. *Bulletin of the International Statistical Institute*, **44**, 277–291. Andersson, M. 1994. *Sexual Selection*. Princeton, New Jersey: Princeton University Press. Bowerman, B. L. & O'Connell, R. T. 1990.
*Linear Statistical Models: an Applied Approach*. Belmont, California: Duxbury Press. Burnham, K. P. & Anderson, D. R. 2002. *Model Selection and Multi-model Inference*. New York: Springer. Chenoweth, S. F. & Blows, M. W. 2005. Contrasting mutual sexual selection on homologous signal traits in *Drosophila serrata*. *American Naturalist*, **165**, 281–289. Dodson, G. N. & Schwaab, A. T. 2001. Body size, leg autotomy, and prior experience as factors in the fighting success of male crab spiders, *Misumenoides formosipes*. *Journal of Insect Behavior*, **14**, 841–855. Draper, N. R. & John, J. A. 1988. Response-surface designs for quantitative and qualitative variables. *Technometrics*, **30**, 423–428. Earley, R. L. & Hsu, Y. 2008. Reciprocity between endocrine state and contest behavior in the killifish, *Kryptolebias marmoratus*. *Hormones and Behavior*, **53**, 442–457. Elias, D. O., Kasumovic, M. M., Punzalan, D., Andrade, M. C. B. & Mason, A. C. 2008. Male assessment during aggressive contests in jumping spiders. *Animal Behaviour*, **76**, 901–910. Firth, D. 2005. Bradley–Terry models in R. *Journal of Statistical Software*, **12**, 1–12. Forster, L. 1982. Visual communication in jumping spiders (Salticidae). In: *Spider Communication: Mechanisms and Ecological Significance* (Ed. by P. N. Witt & J. S. Rovner), pp. 161–212. Princeton, New Jersey: Princeton University Press. Frost, A. J., Winrow-Giffen, A. & Ashley, P. J. 2007. Plasticity in animal personality traits: does prior experience alter the degree of boldness? *Proceedings of the Royal Society B: Biological Sciences*, **274**, 3335–3339. Hack, M. E. 1997. The energetic cost of fighting in the house cricket, *Acheta domesticus* L. *Behavioral Ecology*, **8**, 28–36. Hoefler, C. D. 2002. Is contest experience a trump card? The interaction of residency status, experience, and body size on fighting success in *Misumenoides formosipes* (Araneae: Thomisidae). *Journal of Insect Behavior*, **15**, 779–790.
Hoefler, C. D. 2006. Jumping spiders in space: movement patterns, nest site fidelity and the use of beacons. *Animal Behaviour*, **71**, 109–116. Hoefler, C. D. 2007. Male mate choice and size-assortative pairing in a jumping spider, *Phidippus clarus*. *Animal Behaviour*, **73**, 943–954. Hoefler, C. D. 2008. The costs of male courtship and potential benefits of male choice for large mates in *Phidippus clarus* (Araneae: Salticidae). *Journal of Arachnology*, **36**, 210–212. Hsu, Y. & Wolf, L. L. 1999. The winner and loser effect: integrating multiple experiences. *Animal Behaviour*, **57**, 903–910. Hsu, Y. & Wolf, L. L. 2001. The winner and loser effect: what fighting behaviours are influenced? *Animal Behaviour*, **61**, 777–786. Hsu, Y., Earley, R. L. & Wolf, L. L. 2006. Modulation of aggressive behaviour by fighting experience: mechanisms and contest outcomes. *Biological Reviews*, **81**, 33–74. Kandel, E. R., Schwartz, J. H. & Jessell, T. M. 2000. *Principles of Neural Science*. New York: McGraw-Hill. Kasumovic, M. M., Bruce, M. J., Andrade, M. C. B. & Herberstein, M. E. 2008. Spatial and temporal demographic variation drives within-season fluctuations in sexual selection. *Evolution*, **62**, 2316–2325. Kraft, P. G., Franklin, C. E. & Blows, M. W. 2006. Predator-induced phenotypic plasticity in tadpoles: extension or innovation? *Journal of Evolutionary Biology*, **19**, 450–458. Land, M. F. 1985. The morphology and optics of spider eyes. In: *Neurobiology of Arachnids* (Ed. by F. G. Barth), pp. 53–78. New York: Springer-Verlag. Land, M. F. & Nilsson, D. E. 2002. *Animal Eyes*. Oxford: Oxford University Press. Lande, R. & Arnold, S. J. 1983. The measurement of selection on correlated characters. *Evolution*, **37**, 1210–1226. Mitchell-Olds, T. & Shaw, R. G. 1987. Regression analysis of natural selection: statistical inference and biological interpretation. *Evolution*, **41**, 1149–1161. Olsson, M. & Shine, R. 2000.
Ownership influences the outcome of male–male contests in the scincid lizard, *Niveoscincus microlepidotus*. *Behavioral Ecology*, **11**, 587–590. Rausher, M. D. 1992. The measurement of selection on quantitative traits: biases due to environmental covariances between traits and fitness. *Evolution*, **46**, 616–626. Rillich, J., Schildberger, K. & Stevenson, P. A. 2007. Assessment strategy of fighting crickets revealed by manipulating information exchange. *Animal Behaviour*, **74**, 823–836. Rutte, C., Taborsky, M. & Brinkhof, M. W. G. 2006. What sets the odds of winning and losing? *Trends in Ecology & Evolution*, **21**, 16–21. Stinchcombe, J. R., Agrawal, A. F., Hohenlohe, P. A., Arnold, S. J. & Blows, M. W. 2008. Estimating nonlinear selection gradients using quadratic regression coefficients: double or nothing? *Evolution*, **62**, 2435–2440. Stuart-Fox, D. M., Firth, D., Moussalli, A. & Whiting, M. J. 2006. Multiple signals in chameleon contests: designing and analysing animal contests as a tournament. *Animal Behaviour*, **71**, 1263–1271.
THE BREAK WITH KAUTSKY, 1910-1911: From Mass Strike Theory to Crisis over Morocco—and Hushed-Up 'Woman Question' by Raya Dunayevskaya (A draft chapter from a new work-in-progress, Rosa Luxemburg, Women's Liberation and Marx's Philosophy of Revolution.) I SPONTANEITY AND ORGANIZATION SPONTANEITY HAD TAKEN ON THE FORM of an outright revolution. Luxemburg's usual sensitivity to the phenomenon took on the dimension of a political question in the method of her analysis. As she had written to Luise Kautsky early in 1906, soon after the Russian Revolution, "the general strike has ceased to play the role it once had. Now nothing but a direct, natural fight on the streets is left to the workers." 1 By mid-August, as she was working on The Mass Strike, the Political Party and the Trade Unions, 2 she chose for its first chapter the question of spontaneity. In the very topics of the title she was, in fact, beginning to question not only the conservative trade union leadership, but its relation to the revolutionary socialist movement. She had always been highly responsive to proletarian acts of spontaneous self-emancipation. But the Russian 1905 Revolution had disclosed a totally new relationship to the question of spontaneity. The most striking phenomenon was that the so-called backward Russian workers had shown more revolutionary activity than the workers in the technologically advanced countries, Germany particularly. Moreover, the Russian Revolution was not just a national phenomenon. As it spread, it displayed an elemental force and creativity that had not been seen before, and its implications extended to the West, Germany above all. In a word, spontaneity did not mean just instinctive action against the bourgeoisie; on the contrary, spontaneity was a driving force, not only of the masses, but of the vanguard leadership, keeping it left.
As Luxemburg wrote in the preface to her pamphlet: "The element of spontaneity, as we have seen, plays a great part in all Russian mass strikes without exception, be it as a driving force or as a restraining influence... In short, in the mass strikes in Russia the element of spontaneity plays such a predominant part, not because the Russian socialists are 'undisciplined,' but because revolutions do not allow anyone to play the schoolmaster with them." In working out the dialectic of the mass strike, Luxemburg moved from the search for "root causes" to concentrating, instead, on the interrelationship between spontaneity and organization. She traced the question of the general strike from its anarchistic, non-political form to its genuine political nature. The 1905 Revolution, she wrote, meant "the historical liquidation of anarchism"; to Marxist leaders the general strike now signified the unity of economics and politics. She traced, through the strikes in Russia from 1896 to 1903, the development of the mass strike; from the spring of 1903 into the middle of the summer the strikes formed a continuous chain of events, an uninterrupted economic strike of almost the entire proletariat of the capital. "...Nor was it only a question of the general strike of the proletariat, but of the proletariat." She was impressed with what she called "the tremendous power of the revolution," which irradiated the genius of the people, and with the revolution itself, which had "even knocked at the gates of the military barracks." Luxemburg proceeded to show the effectiveness of the strike as a weapon of struggle against the most immediate institutions, even before the outbreak of the revolution in January, 1905. The oil workers in Baku won the 8-hour day in December, 1904; the printing workers in Samara in January, 1905; the sugar workers in Kiev in May, 1905.
By the time of the October Days of 1905, she wrote, "the Russian proletariat had formed a broad background of the revolution from which the party could draw its strength." With the agitation and the external events of the revolution, there emerged "a new wave of mass strikes, explosions and now great general actions of the proletariat." 3 Naturally, the question of the soldiers' revolts in Kronstadt and elsewhere was raised by Luxemburg in showing the breadth and depth of the revolution: "Within a week the eight-hour day was introduced in every factory and workshop in Petersburg." 4 Once one recognizes that this was the essence of what Luxemburg meant by the "new genius of organization," then it is clear that, with her specific historic examples of how many mass strikes there were, how long they lasted and what their effects were, she was pointing to a general political strike which led to "a general people's movement." This was the real meaning of her theory of revolution. Moreover, she was developing it not only in terms of the Russian Revolution, but also in terms of Germany, with eyes fixed on that technologically advanced country. Clearly, it was no longer a question just of experience, but of a new theoretical understanding of a political phenomenon so little confined by national boundaries that it posed the difference between national and international, as well as the difference between the past and the future. In dwelling in detail on the mass strikes of October, November and December, Luxemburg emphasized not only the importance of the general strike as a genuine lasting gain in the rapid ebb and flow of the waves of the revolution, but also the "unprecedented enthusiasm of the proletariat." By the time Luxemburg came to the question of organization, she had already gone beyond the formation of clubs; she dealt with the question of trade unionism as something the new forces of workers had to take up.
Hers was not the old idea of the unions; the point of that new force "taking the unions in hand" was that it would deal not only with the organized but with the unorganized workers. Put differently, Luxemburg was against the trade union leadership not only because it was conservative, but because it represented only the organized workers, whereas the unorganized workers, she believed, were the real force of the revolution. And just as she included even the lumpen proletariat in her definition of the proletariat, so she drew into the totality and genius of spontaneity everything that was revolutionary, down to the artist caught up in this great whirlwind of revolution. What her strategy was, however, was not brought out to the point of making it a unitary whole. It was not until she came to the whole question of organization, be it the small trade union that became, literally overnight, a mass organization, or a totally new organization, that the question of organization became inseparable from mass activity. From 1906, and all the way until the break with Kautsky in 1910, what Luxemburg singled out was the general strike as the intellectual and political work which "formed a broad background of the revolution." The detailed examination of the history of strikes from 1896 to 1902, and the detailed examination of the 1905 revolution, led her to the conclusion that the mass strike is: "The method of motion of the proletarian mass, the phenomenal form of the proletarian struggle in the revolution. In a word, the economic struggle is the transmitter from one political centre to another; the political struggle is the periodic fertilization of the soil for the economic struggle. Cause and effect here continually change places." Finally, the events in Russia show us that the mass strike is the only weapon with which to defeat the bourgeoisie.
Finally, she approached the question of applying the lessons of the Russian Revolution to the German masses. The revolution had given the Russian proletariat that "training" which 30 years of parliamentary struggle had not given the German proletariat. No doubt she did not then (1906) know how often her words would be quoted later: "the masses will be the decisive chorus and the leaders only the 'speaking parts,' the interpreters of the will of the masses." She was preparing the ground, not alone for her usual fight with the trade union leaders, but also against the Social-Democratic, that is, Marxist, leadership. But since it is in 1910 that we will best see both the ramifications of her 1906 general strike thesis and the reach of opportunism into the highest levels of "orthodox Marxism," it is to 1910 that we now turn. II UNIFIED REVOLUTIONARY THEORY-PRACTICE VS. "TWO STRATEGIES" LUXEMBURG CONSIDERED 1910, with its intersection of revolutionary theory and practice, to be a pre-revolutionary situation. It was the year she felt it opportune to begin to apply to Germany the lessons of the mass strike she had drawn from the Russian Revolution. Not only was it a year when the bourgeoisie was preparing a counteroffensive, but on Feb. 4, when the government published the draft of a new electoral law which retained the earlier voting limitations, there was a mobilization of mass opposition. In Berlin alone, on Sunday, March 27, and throughout the battles of February through May, there were massive demonstrations for equal suffrage. At the same time, the waves of strikes were mounting, reaching their peak in April. Carl Schorske shows that no less than 370,000 workers were involved in work stoppages that year.¹ In mid-February, Luxemburg had written an article on the Russian Revolution in which she raised the principle of the General Mass Strike.
She entitled it "What Next?" and submitted it to the Party paper. It was returned to her with the comment that the "Executive" had instructed the paper not to carry on agitation for the general strike; the most important question now was the electoral campaign. Luxemburg, on the contrary, thought that the most important question was not the struggle for electoral reform but the kind of mass strikes that had made the Russian Revolution possible.² She resubmitted the article, this time to the theoretical organ of the party, Die Neue Zeit, which was edited by Kautsky. Whereas Luxemburg had always considered the prestige of the party so important that she allowed nothing to divert her from it, for the leadership this time the priority went to the electoral campaign. She left the Party School in Berlin to go lecturing throughout Germany and talk about the Russian Revolution, an event which naturally included the idea of a General Mass Strike. The opposition of the "Executive" and the top echelons of the German Social-Democracy (GSD) to her views was revealed in some curious ways. Thus, where the other papers reproduced Luxemburg's speeches as given, printing the enthusiastic approval of the participants when she advocated the General Mass Strike, Vorwärts struck that passage out of its reports.³ Luxemburg, meanwhile, was doing her own reporting to Luise Kautsky. One letter, dated March 15, 1910, described how much lecturing she had done, how much she had read, and how enthusiastically she had been met at every stop. At the end of the two months' lecture tour, Luxemburg returned to Berlin. There she found a note from Kautsky, who said that the article on the Russian strike was "important" and "very fine," but he suggested that she postpone publication until after the elections. Meanwhile, he was polemicizing against her views.
She at once saw to it that Kautsky's note was passed on to Liebknecht and others. As for the article itself, the section on the question of a republic she developed into a separate article, which was published. This, however, didn't mean that she would let Kautsky off the hook for not publishing her article, much less for his excursions into Roman history. As history, it was totally false: the great historian Mommsen had long since shown that the inventor of the strategy of attrition, far from winning any marked success with his "masterly" theory, saw it so much rejected that the Romans decided not to suffer any longer from his generalship and had him executed. As Luxemburg showed in both her "Theory and Practice" and her "Attrition or Struggle" articles, this stretching back into Roman history, as if it were more relevant to the question than her articles on the General Mass Strike, was not only irrelevant but totally unconvincing. Kautsky was likewise trying to present German history as a "century of Prussian glory." ¹ Carl Schorske, German Social Democracy 1905-1917 (New York, 1958). ² "Die Wende," Internationale Arbeiter, Vol. 2, no. 38-39 (1910), referred to here as "What Next?" ³ Liebknecht, "The Russian Struggle," in Liebknecht, The Russian Revolution, International Publishers, New York, 1930, p. 145. [Photo: Nama and Herero guerrillas resisted German imperialism. Seated: the great Nama guerrilla leader, Jacob Marengo, who was murdered by the Cape Mounted Police in the Kalahari Desert.]
As she pointed out in her "Theory and Practice": "And now let's take a look at the wars which Germany has fought in the meantime. The first was the 'Guerre de la Ligue' [the League War] in 1683. It was a war of defense against the French. The second was the even more glorious Herero war. The Herero were a warrior people who had for long centuries held their native soil and made it fertile with their sweat. They had not been conquered, nor did they spontaneously surrender themselves to the rapacious robber barons of Europe. They fought bravely and well as they defended their homeland against the foreign invaders. In the end they were defeated, but they never capitulated, even when Herr von Trotha issued the well-known general order: every Herero found within the German borders, with or without a gun, will be shot; no quarter will be given. The men were shot; women and children by the hundreds were burned alive in their huts or driven into the desert, where the wreath of their parched bones blanches in the murderous Omaheke; they too partake of German glory!" III "THE MOROCCO INCIDENT" EVER SINCE SHE HAD LANDED IN GERMANY, back in 1898, and published her first book against expansionism, the question that kept cropping up was what we now call the "Third World." What she was fighting against was plain: no matter whether it was a question of theory or of practice, her target was always the same, the growth of expansionism into imperialism. As we saw in the first chapter, she had already written about it (see "The Struggle for Power," Germania, 1902). She had pointed out in the Leipziger Volkszeitung on March 13, 1899, that a new shift in global politics was taking place, that Japan had attacked China. Moreover, it wasn't only a question of Japan's imperialist intrusion into China, but also of the U.S. intervention, the Anglo-Boer war, the U.S. intrusion into Latin America.
And here we were in 1910, and she found no one less than Kautsky, author of the "History of Prussian Democracy," as it was translated in English, raising no objection to the German soldiers' "Hun campaigns" to maintain the status quo. She had written that the Germans had given a lesson in "rightfulness" which the Chinese didn't forget; they remembered, as the popular anti-imperialist uprising in Southern China in 1898 showed. In 1900, at the very first Congress Luxemburg attended after she became a German citizen, she had already attacked the new imperialist policy. On May 15, 1900, she wrote an article in the Leipziger Volkszeitung on imperialist maneuvers worldwide, and she said that the party must take up at once all questions of anti-militarism and anti-imperialism. As we see, her preoccupation with the opportunism of Kautsky, whom she had until then considered the authoritative voice of Marxism, was not limited to the question of suffrage, but was integral to the very core of her political thinking. No doubt the GSD leadership thought they had brought her down to size when the Congress that year rejected her proposal that the fight for the right to vote "must be waged to victory only through great determined mass action in which all means must be employed to make the masses conscious of their democracy." But the 1910 battles with Kautsky and Bebel had convinced her that the question of fighting opportunism was not only a matter of domestic politics, but of world policy. On July 1, 1911, the German gunboat Panther sailed into the Moroccan port of Agadir. The correspondence Luxemburg received as a member of the International Socialist Bureau showed that the leadership was a great deal more concerned with the electoral repercussions in Germany than with Germany's imperialist adventure in Morocco.
No action against the government was proposed at the moment; not only was the question not raised in the open, but it was clear that the only thing that worried the GSD was that any opposition might harm the electoral victory they counted on for the 1912 elections. Luxemburg published the critics' letter and her own reply in the Vorwärts of July 24, 1911. When more letters and leaflets, each one more audacious than the one before, continued to flow her way, she published a series of articles on Morocco, which appeared in the Leipziger Volkszeitung from Oct. 15 to Nov. 13, 1911. Kautsky's manifesto had been published in Vorwärts of Aug. 9, 1911; Luxemburg's reply was published in the same paper on Aug. 16. She did not mistake the belles-lettres of their manifestos for any serious struggle against colonialism or imperialism. Instead of a serious Marxist analysis of a burning question, she said, they were giving "Social-Democratic political lewdness." By now the issue was the whole of international policy in general, and the Morocco affair in particular. The Social-Democratic party's critics were anxious as to how the "Morocco affair" was regarded, that is, whether it was a sign of "imperialist ... and Germany's urge for world power." 8 Rosa Luxemburg, "Our Struggle for Power," Germania, 1902, Vol. II, pp. 173-174. 9 On May 26, 1912, in an article entitled "The World's 'Leipzig' Party Congress" in Leipziger Volkszeitung, the party leadership, under Kautsky, criticized Wilhelm II and the leaders with the slogan "Don't worry be happy." Luxemburg wrote that the slogan was a sign that the party leadership was afraid that the whole world was not against "the most precious eyes of a German," because "we are not alone."
She concluded: "Let us add that in the whole of the leaflet there is not one word about the colonized nations, about their rights, interests, and sufferings because of international policy. The leaflet several times speaks of England's splendid colonial policy, of the British army's fighting cholera and typhoid in India, but not of the extermination of the Egyptian peasantry, nor of the horse whip on the backs of the Egyptian peasants." Whereupon all the furies descended upon her for "betrayal of discipline" and "treasonous criticism" for having published a letter that had been meant only for the eyes of the party leaders. By the time the Congress opened in September, the Executive Committee had tried to reduce the question of colonialism to a matter of discipline, as if it were merely a question of her making public what had been sent to the party leaders. But the great issue, for Luxemburg and for the International, was the question of imperialism, and it was from its political analysis that the leadership of the GSD had diverted the discussion to the question of "a breach of discipline." IV. TONE-DEAFNESS TO MALE CHAUVINISM IN THE PROCESS OF THEIR DEBATE on the so-called "breach of discipline," male chauvinism had raised its ugly head, as we will shortly see. That it was not only male chauvinism's ugly head, but that of imperialism, which the German Social-Democracy was not up to confronting, is made clear by the meeting of the International Socialist Bureau in Zurich, on Sept. 23, 1911, the week following Luxemburg's article. With international representatives like Lenin present, the Germans withdrew their motion to censure Luxemburg, but managed, with the support of the Swiss Social-Democrats, to cut short the discussion of the Morocco crisis. When Lenin spoke, the storm of thunder and lightning descended upon him as well. Vladimir Ilich was not spared.
When he spoke, the Swiss Fabian replied that the ears should not grow beyond the forehead. Lenin retorted: "I am not surprised that, if we had millions of members as the German Social-Democracy has, we should also be listened to. But I find it strange that you are so narrow-minded." After listening to Plekhanov, Vladimir Ilich shunned the other side of the table. The Minutes of the GSD Congress in Jena the week before tell the whole story; it was there that the male chauvinists dominated the discussion over what they called the "Morocco incident." "I know that there wasn't all that much humor in the discussion," as one delegate put it. "When the party executive asserts something," Luxemburg said, "I would never dare not to believe it, even if it seems quite absurd; I believe it precisely because it is absurd." And later she turned to Raben: "I have never seen such a picture of pathetic confusion among 'fighters'" (gesturing toward the most conservative benches, where the delegates were sitting). (Laughter.) "I am not cross with you for your accusations; I forgive you and offer you my hand. (Great amusement.) Do better in the future." Even when there were hisses at Luxemburg's attitude, she replied that she was not afraid of them and held to her anti-imperialist and anti-militarist stand. Clearly, there was a deep anti-imperialist and anti-colonialist feeling in the German Social-Democracy, as a friend of Luxemburg's put it, rising to her defense: "As I prophesied, a trap was set for Rosa Luxemburg. She was caught in the trap, and they made use of the really unjustified over-haste with which she was criticized. The trap is being used to disguise the real heart of the matter. Rosa Luxemburg has frequently come into conflict with the party leadership, and ought to have done so much more often. . . .
(but) the mass demonstrations against war and militarism, these are not the achievement of Müller and the executive, but of Rosa Luxemburg, through her critique." It wasn't that Luxemburg was unaware of the pervasive male chauvinism. But so determined was she that nothing should divert from the political analysis of the colonial question that she refused to bring up the matter, though it involved her own person. It had been her principle always to ignore any kind of personal attack, never to let the matter pass her lips. It isn't that she wasn't aware of its extreme danger, but that she felt that male chauvinism could be abolished only with the abolition of capitalism. She was determined to keep the question of colonialism and imperialism before the party, and she learned to live with what in our era has been challenged, specifically, as male chauvinism. --- 10 Quoted by Otto von Guehne and H. M. Fisher, "The Radicals and the War," in The New Republic, July 17, 1918. 11 The passages which follow were translated from the Protokoll ... June, 1912, by the author. 12 On the question of anti-Semitism as well as the whole question of the party leadership, see, in German, Karl Kautsky's Introduction to Rosa Luxemburg, 1912, and, in English, David Gordon's Introduction to Rosa Luxemburg; for an English translation of Luxemburg's article, "The Political Crisis in France," see New International, July, 1939. --- She took no issue with it, though the polemics against her disagreement with the policy of the party leadership had an extra sharp edge which no one would have dared to use against a male opponent. This is one of the letters that passed between Bebel and Adler: "... the poisonous bitch will do a lot of damage...
The whole thing is nothing but a monkey business (Bildschwein), while on the other hand her sense of responsibility is totally lacking and her only concern is to get herself a good self-justification..." (Victor Adler to August Bebel, Aug. 16, 1910) "... with all the wretched female squires of politics, I wouldn't want the party to be without her." (Bebel to Adler, Aug. 15, 1910) Male chauvinism was for Bebel no mere creeping phenomenon in the established revolutionary socialist movement; much less was it characteristic only of some rank-and-file members of the party, as the myth says. "Clara Zetkin: A Left-Wing Socialist and Feminist in Wilhelmian Germany" by Peter Hartig is a very important contribution here; it quotes a letter Bebel wrote the same day as the letter above (Aug. 16, 1910): "It is an odd thing about women. If their partialities or passions or vanities come anywhere into question and are injured, or, worse yet, are wounded, then even the most talented of them flies off the handle and becomes hostile to the point of violence. Love and hate lie side by side; a regulating reason does not exist." The virulent male chauvinism permeated the whole party leadership. It was especially Bebel, the author of Women and Socialism, who had created a myth about himself as a woman's man, and who proved among the worst enemies of the women leaders of the International. Thus, after Luxemburg's break with Kautsky, when Zetkin also supported Luxemburg's position at the Congress in 1912, Kautsky warned Bebel that "the two female leaders" were planning "an attack on all central positions." None of this changed the myth that the party leadership, and above all the author of Women and Socialism, which had gone through innumerable editions, stood behind the socialist women's movement.
The myth very nearly continues to this day; in any case, in the 1910-11 period, the authority of the GSD in general and of Bebel in particular on the women's question was unchallenged everywhere in the world, at the very time he was conducting the struggle against Luxemburg. It is high time to turn to this question again. It is not only today that the "Woman Question" has been treated as totally unimportant in the German party. Nor is it an accident that Marx's very different concept of women's oppression and the socialist revolution has become fully visible only in our own day, when, a century after the first publication of the Critique of Political Economy, his Ethnological Notebooks were first published. It is therefore only now that we can see that it wasn't only the "young Marx" who was right. Marx's main relationship, a very important pivot in that new continent of thought he was discovering, a "new world," was with the working class. In the very last years of his life, 1880-1883, he was engaged in the most intensive study of empirical sociology as well as in answering the sharpest questions relating to Russia and the concrete relationship between the economically advanced and the technologically backward countries. That this was the case is now clear from both the emergence of the Third World and the new questions of world revolution. The relationship of the party to revolution was a preoccupation of Luxemburg long before the debate leading to the break with Kautsky. As early as the 1904 Congress in Amsterdam she was confronted by Kautsky's hostility to theory even as she delivered some of the most important speeches of the Congress. In 1910 she related opportunism both to inaction and to lack of revolutionary theory. There was never a time when Luxemburg did not consider theory the lifeblood of the movement in general and of the leadership in particular.
She believed that theory was important, but she was not content to leave the question there. She decided not that the party leadership had to be reformed, but that the question had to be probed further, much further. Here is what she wrote to Konstantin Zetkin in November 1911, on the work she was then beginning: "I am following up the economic aspects of this concept; it will be a strictly scientific explanation of imperialism and its consequences." Her characteristic confidence in the masses and their spontaneity had, as we saw, only deepened with her experience in the party. Her conviction was that the leaders tended simply to be the ones who had "the speaking parts," that "the mass, once unleashed, must move forward," and that the masses would have to push the leadership forward as well. And what, in the years 1910-11, did she hold the party's most important role to be? We aren't given the answer. Only one thing is clear beyond doubt: the break with Kautsky and Bebel was irrevocable, though there was no break with the party. The party remained to her unchangeable. But she kept her distance from the leaders who practiced leadership as if they were government rulers, though they did not have state power. For a fuller analysis, see the draft chapter published in News & Letters, Jan.-Feb., 1979.
THE MEANINGS AND ADDRESSEES OF SALAM IN PRAYER ALIREZA SALEHI\(^1\) TRANSLATED BY MAHBOOBEH MORSHEDIAN ABSTRACT: *Salam (Peace)* is a Qur’anic term with a wide range of deep meanings and various practical aspects and manifestations. This divine word is a name of Allah, and many hadiths have been reported from the Holy Prophet and the Imams regarding its meanings. Its manifestations are clearly seen in many religious texts, including prayers and ziyarahs, and in social interactions. As the best deed and act of worship and the most beautiful display of servitude to Allah, prayer has several elements, one of which is Salam. This article first identifies and summarizes the most important meanings of Salam on the basis of authoritative Arabic and Persian dictionaries. Next, on the basis of these meanings, the most significant implications will be investigated, namely the philosophy of Salam in prayer. The Statement of the Problem As will be mentioned in the literal analysis of the word ‘Salam’, there are fifteen meanings for this term. On the basis of legal injunctions, Salam is an element of prayer. In this element of prayer, the praying person recites three sentences. --- \(^1\) Faculty member of Islamic Azad University, Southern Tehran Branch --- The first two sentences are recommended, and the last one is obligatory. First, Salam is said to the Holy Prophet, then to ‘us’ and the righteous servants of Allah. In the last sentence, Salam is said to ‘you’. The first question this article addresses is: Among the fifteen mentioned meanings of Salam, which one is congruent with the spirit of prayer and its addressees? The second question to be answered concerns who the addressees of the second and third sentences (‘us’ and ‘you’) are. **Introduction** Salam is a key Qur’anic term and has a wide range of meanings in Islamic culture. This blessed word is used as a name of Allah and also as a word of salutation among Muslims.
Salam is among the elements of prayer\(^1\) and the ending part of this divine obligation. Like the other elements of prayer, it has profound aspects, some of which have been pointed out, on the basis of hadiths from Prophet Muhammad and the Imams, by those who know the truths and mysteries of prayer. No doubt the way to further reflection on this issue is open, and we have a long way to go to gain perfect knowledge. As mentioned above, Salam has a wide range of meanings, most of which are materialized in the Salams mentioned in the Holy Qur’an. The main questions about Salam in prayer include: a) which of its various meanings and concepts is most closely related to the depth of prayer, b) which idea about the philosophy of Salam in prayer is most likely to be true and valid, and finally, c) who the addressees of Salam are in each of the three sentences and what message is conveyed to them. Benefiting from the support of Allah and using the hadiths of the Imams and valuable points about the philosophy and secrets of prayer, --- \(^1\) By ‘element’ is not meant the Rukn, a jurisprudential term used in the treatises of Practical Islamic Rulings. Five things are Rukn: intention, saying Allahu-Akbar to start the prayer, standing before Ruku’, Ruku’, and the two Sajdas. --- these questions will be answered as far as possible. In addition, a comparative study will be conducted on books of “Secrets of Prayer” (Asrar al-Salat) and other similar works. **The Meanings of Salam** In order to find out the main meanings of the word Salam, it is necessary to examine and analyse what is mentioned about this divine word in reliable Persian and Arabic dictionaries. Then the similar, redundant meanings should be omitted, and a precise conclusion should be reached about the meanings of Salam.
Accordingly, the following data were first gathered and classified: the numerous points raised in such well-known dictionaries as Lisan-ul-Arab,\(^1\) Majma’-ul-Bahrain,\(^2\) Kitab-ul-‘Ayn,\(^3\) and Tahdhib-ul-Lughah,\(^4\) together with whatever commentators and philologists of Arabic like Tarihi,\(^5\) Zubaidi,\(^6\) ibn Qutaibah,\(^7\) Raghib,\(^8\) Jeffry,\(^9\) Suyuti,\(^10\) Khurramshahi,\(^11\) Insafpur,\(^12\) Dehkhoda,\(^13\) Reyshahri,\(^14\) Mustafa\(^15\) and many other philologists wrote about Salam in their books; mentioning all of them is beyond the scope of this paper. Afterwards, the common points were merged, and fifteen separate meanings were obtained as follows: 1. health, being free from all scourges; healthy and pure, 2. a name of Allah, --- \(^1\) vol. 2, p. 289 \(^2\) vol. 6, p. 84 \(^3\) vol. 7, p. 256 \(^4\) p. 445 \(^5\) the entry of ‘Silm’ \(^6\) the entry of ‘Silm’ \(^7\) vol. 1, p. 239 \(^8\) the entry of ‘Silm’ \(^9\) p. 258 \(^10\) vol. 1, p. 121 \(^11\) p. 1206 \(^12\) p. 585 \(^13\) p. 13711 \(^14\) p. 2571 \(^15\) p. 446 --- 3. well-being, peace, 4. security and safety, being safe from each other, 5. reconciliation, 6. salutation, 7. farewell, 8. tranquility, 9. submission and surrender, 10. a word which is not futile, the strong and purposeful word, 11. the name of a tree which is immune from any pest, 12. the name of a hard stone which is secure from any kind of erosion, whether large, broad, or small, 13. asking for permission, 14. veneration and reverence, 15. a long stick resembling a tree branch. **Meanings of Salam in Prayer** As mentioned above, the dictionaries record fifteen meanings for Salam. These definitions will be examined to see which one conforms to the spirit of Salam in prayer. For this purpose, many references and books on the philosophy and secrets of prayer have been consulted, and their important points and discussions are mentioned in this article.
In addition to presenting the main viewpoints on its various aspects and summarizing their points, this article briefly reports the conclusions. Hence, the main viewpoints on the meanings and addressees of Salam in prayer are addressed first. In so doing, points about the philosophy and interpretation of Salam in prayer and its links with the Ascent (*Mi’raaj*) of the Holy Prophet will be presented. A. The Main Viewpoints and Ideas on the Meanings of Salam in Prayer 1. Mulla Muhsin Fayd Kashani, Imam Khomeini, Shahid Thani (the Second Martyr), Hajj Mirza Jawad Maliki Tabrizi, Ayatollah Jawadi Amuli and others deem Salam in prayer to refer to “security”. This is based on a well-known hadith attributed to Imam Sadiq in *Misbah-u-Shari’ah*. He said: Salam at the end of prayer means security. That is, whoever obeys the commands of Allah and humbly acts upon His Messenger’s Sunnah will be immune from worldly afflictions and from punishment in the hereafter. Salam is a name of Allah which He has endowed His servants with so that they use it in their transactions, in keeping what is entrusted to them, and in their relationships, keeping company and socializing with each other. And if you want to act upon the meaning of Salam, you should fear Allah, and your faith and wisdom should be safe from you; that is, you should not taint them with sins. You should not annoy your guardian angels (who record your deeds), nor drive them away with your bad deeds. Likewise, both your friends and your enemies should be safe from you and your actions. Whoever does not adhere to Salam is neither secure nor submissive; he is lying about his Salam even though he pretends to adhere to it in front of people. 2. As regards the meaning of security, Hajj Mirza Jawad Maliki Tabrizi wrote, “From these words, deduce the ruling of saying “Salam” to people. Do you say “Salam” to somebody while you do not wish him good health and all blessings, or some of them? Is it anything other than hypocrisy?
Can one expect the reward that Allah has promised in return for such a Salam? In addition, know the status of your saying Salam to the Prophet and the Imams in prayer and while visiting (doing *ziyarah*) their holy shrines.” (ibid, 368). 3. Mulla Muhsin Fayd Kashani said about the hadith of Imam Sadiq, “If the guardian angels, who record your deeds and are the closest ones to you, closer than your friend and your enemy, are not safe from you, obviously your friends and enemies are not secure from you either. And no one is secure from the one who does not adhere to Salam as referred to in the hadith. He is insincere about his Salam, even though he says “Salam” to everybody.” 4. In the book *Ma’ani al-Akhbar*, Abdullah ibn Fadl Hashemi is quoted as saying, “When I asked Imam Sadiq about the meaning of Salam, he said, ‘Salam means security and finishing the prayer.’ I asked again, ‘May I be sacrificed for you! How come?’ He responded, ‘In the past, it was customary for people to consider somebody’s coming to them and saying ‘Salam’ a sign of being secure from his harm; if he did not say ‘Salam’ when approaching them, they were not immune from him. Similarly, if they did not say ‘Salam’ in response, he was not immune from their harm either. This was customary among the Arabs. Thus, ‘Salam’ indicates the praying person’s finishing the prayer and being allowed to speak; it guarantees that nothing can enter the prayer and ruin it. Salam is a name of Allah which the praying person addresses to the two angels that Allah has assigned to watch his actions.’” 5. In *Bihar-ul-Anwar*, a hadith by Imam Ali describes the reason for saying Salam: saying “Peace be upon you and Allah’s mercy and --- 1 Fayd Kashani, 176 2 ibid --- blessings”\textsuperscript{1} in prayer is to seek the mercy of Allah, the Glorified; that is, prayer protects you from the chastisement of the Hereafter.\textsuperscript{2} 6.
In this regard, Ayatollah Jawadi Amuli wrote, “Salam can be interpreted as our being covered by the mercy of Allah.”\textsuperscript{3} --- \textsuperscript{1} السلام عليكم و رحمه الله و بركاته \textsuperscript{2} Majlisi, 254/81 \textsuperscript{3} ibid, 148 --- Similar explanations can be found in the other above-mentioned references. **B. The Addressees of Salam** Salam in the final part of prayer includes three sentences, namely 1. السلام عليكم ايها النبي و رحمه الله و بركاته (“Peace be upon you, O’ the Prophet, and Allah’s mercy and His blessing”); 2. السلام علينا و على عباد الله الصالحين (“Peace be upon us and upon the righteous servants of Allah”); 3. السلام عليكم و رحمه الله و بركاته (“Peace be upon you and Allah’s mercy and blessings”). The Shi’a jurists consider only the last sentence mandatory and regard the first two sentences as recommended. No doubt the analyses of Salam in prayer carried out here relate to the third sentence. What follows is a summary of the sayings: 1. In his book *The New Treatise*, Imam Khomeini considered the addressees of Salam in السلام عليكم و رحمه الله و بركاته to be angels or those with the qualities of angels: As in Islamic teachings, prayer is the ascent of the believer to Heaven and a spiritual journey. We establish a spiritual relationship with our leaders, the Islamic Ummah, all righteous groups, and angels in the last Rak’at after Tashahhud, see them in front of us, and say Salam to them, because ‘In the spiritual journey, distance does not matter’\(^1\): السلام عليكم ايها النبي و رحمه الله و بركاته means the Divine peace, mercy and blessings be upon you, O’ the Prophet! السلام علينا means peace be upon us (who have the same belief and form the Islamic Ummah and Hizbullah [the party of Allah]). وعلى عباد الله الصالحين means peace be upon the righteous servants of Allah. Through this Salam, we keep away from sectarianism and self-importance and send peace and Salam upon all those who tread the path of righteousness.
السلام عليكم و رحمه الله و بركاته means Allah’s peace, mercy and blessings be upon you (angels or those with the qualities of angels). This Salam takes us out of the earthly world and into the world of souls and angels. Finishing the prayer with Salam to the angels shows the result of prayer; that is, if a Muslim performs prayer humbly, he will take on the features of angels, so much so as to reach the rank of angels and say Salam to them.\(^2\) 2. According to Ayatullah Jawadi Amuli, “Salam was originally realized on the Night of Ascent. On that night, when Prophet Muhammad knelt down and recited Tashahhud\(^3\) and Salawat at the end of his prayer, he suddenly saw the lines of prophets and angels --- \(^1\) Risalah Nowin, 100 \(^2\) Bi-Azar Shirazi, 100 \(^3\) The part of prayer where Muslims kneel --- in front of him. He was told, “O’ Muhammad! Say Salam to them!” So he said, “Allah’s peace, mercy and blessings be upon you”. Then Allah revealed to him, “Surely peace, mercy and blessings are upon you and your progeny.” 3. In *Ilal-u-Sharayi’*, it is narrated that when Imam Sadiq was asked about the reason for Salam, he said in response, “Salam is a means of leaving the prayer.” Mufaddal ibn ‘Umar asked, “So why does the person praying say Salam looking at the right side and not the left side?” The Imam replied, “Because the angel recording the good deeds is on your right side, the angel recording the bad deeds is on your left side, and prayer is a good deed. Thus, Salam is recited looking at the right side.”
Then he also asked, “Why do we not say Salam with a singular grammatical object, addressing only the angel on the right side?” The sixth Imam answered, “We say Salam with a plural grammatical object so that it is addressed to both angels, but looking at the right side indicates the superiority of the angel on the right.” Finally, Mufaddal asked, “Why do we leave the state of prayer with Salam?” The Imam replied, “Because it is a Salam and salutation to these two angels.” Then he added, “Performing prayer according to its injunctions and observing its Ruku’, Sajdah and Salam guarantees security from Hellfire. On the Day of Judgment, if somebody’s prayer is accepted, his other good deeds will be accepted as well. Thus, if his prayer is perfect, his other good deeds will also be perfect; if not, they will be rejected, too.” 4. In the book “Pithy Points”, Ayatullah Bahjat is quoted as saying, “When the servant comes back from the presence of Allah, his first souvenir is His Salam.” A part of the supplication in Kufa Mosque reads as follows: --- 1 السَّلامُ علَيْكُمْ وَرَحْمَةُ اللهِ وَبَرَكَاتُهُ 2 Kulayni, 3/486 3 Jawadi Amuli, 148, reporting from *Ilal-u-Sharayi’* --- “O, Allah! You are Salam (peace), and from You is Salam, and to You Salam returns. O’ our Lord! Salute us with Your Salam!”\(^1\) 5. An excerpt from the book “The Song of Monotheism” also reads as follows: “After Tashahhud, the praying person says ‘Peace be upon you, O’ the Prophet,\(^2\) and Allah’s mercy and His blessing’ and ‘Peace be upon us and upon the righteous servants of Allah’.\(^3\) In every prayer, a Muslim inculcates in himself ties of friendship with all the righteous servants of Allah. In other words, he repeats every day his sending of peace upon the righteous and Muslim servants of Allah: peace and Salam upon the righteous servants of Allah. And once the ‘righteous’ have been mentioned, there follows, ‘Peace be upon you and Allah’s mercy and blessings’.\(^4\) 6.
In *Mi’raj-u-Sa’adah*, Mulla Ahmad Naraqi wrote about some rites and secrets of prayer: And when you start reciting Salam, you should consider yourself in the presence of the Holy Prophet, the angels close to Allah, the other prophets, the Holy Imams, and the guardian angels who record your deeds. You should remember all of them. Then, you should say “Salam” to Prophet Muhammad, who is the chief and the means of guidance and faith, saying, السلام عليك ايها النبي و رحمه الله و بركاته. Finally, you should turn to all of them and say “Salam” to them. Beware of saying Salam negligently, without remembering them. Likewise, when you lead a public prayer, address your Salam to all of --- \(^1\) اللهم انت السلام و منك السلام واليک يرجع و يعود السلام حينا رينا منك بالسلام \(^2\) السلام عليك ايها النبي و رحمه الله و بركاته \(^3\) السلام علينا و على عباد الله الصالحين \(^4\) السلام عليكم و رحمه الله و بركاته Beheshti, 27, 28 --- those praying behind you. When you put these points into practice, you can hope that your prayer will be accepted. Also, beware of praying negligently. As regards the secrets of prayer, some great scholars have said that prayer is an image of the way we will be present on the Day of Judgment in the gathering place of resurrection, so in prayer we should remember that state of being. Adhan\(^1\) recalls the second time that Israfil\(^2\) will blow the Horn at the end of Time, when the dead will be resurrected. Iqamah\(^3\) represents the call of Allah when He summons His servants, and standing while facing the Qiblah\(^4\) symbolizes our standing in the presence of Allah in order to be questioned about our deeds. These great scholars first discussed the symbolic relationship between all the elements of prayer and the presence of man in the gathering place of resurrection, and finally elaborated on Salam.\(^5\) 7.
In the book *Jami’ Abbasi*, Shaikh Baha’i wrote regarding Salam in prayer: There are seventeen acts related to Salam, five of which are mandatory and twelve of which are recommended acts based on Sunnah. The five obligatory acts are: kneeling down for reciting Salam; keeping still while doing so; saying السلام عليكم و رحمه الله و بركاته; saying this sentence after finishing the Tashahhud; and saying it in such a way that one can hear himself. The recommended acts based on Sunnah include: Tawarruk (kneeling in such a way that one leans on the left side and left thigh and puts the front of the right foot on the back of the left foot), as is done in Tashahhud; placing the hands on the lap; keeping the fingers close to each other; intending to leave the state of prayer; saying Salam to the prophets, the Imams, the angels, and believing human beings and jinn; the prayer leader’s intending to address Salam to the believers praying behind him; the latter’s intending to address Salam to the former; the prayer leader’s saying Salam audibly; those praying behind him saying Salam quietly, while the choice is left to the one who performs prayer individually; both looking to their right side when saying Salam; those praying behind the leader then looking to the left side if there is somebody on their left (some also say if there is a wall on their left side); and finally, the one who performs prayer individually looking to his right side.” --- \(^1\) Call to prayer \(^2\) The angel who will sound the trumpet on Judgment Day \(^3\) The second call to prayer, recited just before the prayer begins \(^4\) The direction Muslims face in prayer, towards the Ka’aba, the House of God in Mecca \(^5\) Naraqi, 681-682 --- 8.
As for the addressees of Salam in prayer, Shahid Thani said: When you are finished with Tashahhud, consider yourself in the presence of the Holy Prophet and the angels close to Allah, and say, السلام عليكم ايها النبي و رحمه الله و بركاته، السلام علينا و على عباد الله الصالحين. Then remember Prophet Muhammad, the other prophets, the Imams, and the guardian angels who record your deeds, and say, السلام عليكم و رحمه الله و بركاته. You should not address them while ignorant of the addressees, for then your deed would be futile and mere pretence. And how can your call be heard when there is no addressee in your mind? What would you do were it not for the bounty, sweeping mercy, and perfect compassion of Allah, Who may accept a prayer emptied of its origin and truth through inattention? Nevertheless, it may not be accepted. If you lead the public prayer, address your Salam to those who pray behind you, in addition to those mentioned above. If you act upon the above-mentioned points, --- 1 Ibid, 53-54 --- you truly observe the right of Salam in prayer and deserve the increasing generosity of Allah.\(^1\) 9. Mulla Muhsin Fayd Kashani’s opinion resembles the aforesaid words of Shahid Thani.\(^2\) **Conclusion** 1. The main meaning of Salam is safety and security from the Divine wrath on the Day of Judgment. 2. In the Salam of prayer, the prophets, the Imams, the righteous servants of Allah, and the angels are addressed by the one praying. 3. In addition to safety and security, Salam in prayer can also refer to salutation. **Bibliography** The Holy Qur’an Amuli, Sayyid Haidar, 1382 solar, *Anwar-u-Haqiqah wa Atwar-u-Tariqah wa Asrar-u-Shari’ah*, Qum: Nur ala Nur Publications. Ibn Salam, 1404 A.H. / 1984 A.D., *Lughat-ul-Qaba’il al-Waridah fil-Qur’an al-Karim*, compiled by Abdul-Hamid al-Sayyid Talab, Kuwait. Ibn Faris, Ahmad, *Mu’jam Maqayis al-Lughah*, compiled by Abdusalam Muhammad Harun, Beirut: Dar-ul-Jil, [Bita].
--- \(^1\) Cited in Jawadi Amuli, 77 \(^2\) Fayd Kashani, *al-Mahajjat-ul-Baida’*, 1/393 --- Ibn Manzhur, Muhammad ibn Mukram, 1414 A.H., *Lisan-ul-Arab*, 3rd ed., Beirut: Dar Sadir. Azhari, Muhammad ibn Muhammad, 1384 A.H. / 1964 A.D., *Tahdhib-ul-Lughah*, compiled by Abdu-Salam Muhammad Harun et al., Cairo. Insafpur, Muhammad Rida, 1373 solar, *Advanced Persian Dictionary*, Tehran. Baqizadeh, Rida, 1382 solar, *Pithy Points from Ayatullah Bahjat*, 1st ed., Tehran: Safir Subh Publications. Beheshti, Muhammad Hussain, 1386 solar, *The Song of Monotheism*, 5th ed., Tehran: The Publishing Foundation of the Works and Thoughts of Ayatullah Beheshti. Jabal Amili, Zain-u-Din, 1377 solar, *The Secrets of Prayer*, 1st ed., Tehran: Farahani Publications. Jeffry, Arthur, 1385 solar, *The Words in the Holy Qur'an*, translated by Freidun Badrei, Tehran. Jawadi Amuli, Abdullah, 1386 solar, *The Inner Secrets of Prayer*, 10th ed., Isra Publications. Khurramshahi, Baha'u-Din, 1377 A.H., *The Encyclopedia of Qur'anic Studies*, Tehran. Khomeini, Ruhullah, 1359 solar, *The New Treatise*, translated and commented on by Abdul-Karim Bi-Azar Shirazi, Tehran. Khomeini, Ruhullah, 1378 solar, *The Rites of Prayer*, 7th ed., The Institute for the Compilation and Publication of Imam Khomeini's Works. Dehkhoda, Ali Akbar, 1377 solar, *Dehkhoda Dictionary*, Tehran. Raghib Isfahani, *Mufradat Alfazh al-Qur'an al-Karim*, Damascus: Dar-ul-Qalam, [Bita]. Zubaidi, Murtada, 1306 A.H., *Taj-ul-Arus*, Cairo. Shaikh Baha'i, *Jami' Abbasi*, Tehran: Abidi Publications. Tabataba'i, Sayyid Muhammad Hussain, 1377 solar, *al-Mizan fi Tafsir al-Qur'an*, translated by Sayyid Muhammad Baqir Musawi Hamadani, Qum. Tarihi, Fakhr-u-Din, 1375 solar, *Majma'ul-Bahrain*, 3rd ed., Tehran: Murtadawi Book Store. Abdul-Baqi, Muhammad Fu'ad, 1364 A.H., *Al-Mu'jam al-Mufahras li Alfazh al-Qur'an al-Karim*, Cairo.
Ghazali, Abu Hamid Muhammad, 1372 solar, *Ihya' Ulum al-Din*, translated by Mu'ayyid-u-Din Muhammad Kharazmi, compiled by Hussain Khadiw-Jam, 3rd ed., Tehran: Scientific and Cultural Publications.
Farahidi, Khalil ibn Ahmad, 1414 A.H., *al-Ayn*, Qum.
Feid Kashani, Mulla Muhsin, 1415 A.H., *Tafsir al-Safi*, 2nd ed., Tehran: al-Sadr Publications.
Feid Kashani, Mulla Muhsin, 1415 A.H., *al-Mahajjat-ul-Baida' fi Tahdhib al-Ahya'*, 3rd ed., Qum: The Office of Dissemination of Islamic Culture.
Feid Kashani, Mulla Muhsin, 1373 solar, *Prayer: the Ascent of Man* (a translation of *al-Salat*), with the interpretation of the mystic Shaikh Hassan Ali Isfahani, compiled by Hussain Haidarkhani Mushtaq-Ali, 1st ed., Marwi Publications.
Farahidi, Khalil ibn Ahmad, 1410 A.H., *Kitab-ul-'Ayn*, 2nd ed., Qum: Hijrat Publications.
Quraishi, Sayyid Ali Akbar, 1371 solar, *The Lexicon of the Holy Qur'an*, 6th ed., Tehran: Dar-ul-Kutub al-Islamiyyah.
Qara'ati, Muhsin, 1381 solar, *A Ray of the Secrets of Prayer*, 20th ed., Tehran: The Office of Keeping up the Prayer.
Majlisi, Muhammad Baqir, 1398 A.H., *Bihar-ul-Anwar*, Tehran.
Maliki Tabrizi, Hajj Mirza Jawad, 1376 solar, *Asrar-u-Salat*, translated by Rida Rajabzadeh, 7th ed., Tehran: Payam Azadi Publications.

Salem (/ˈsɛləm/; Hebrew: שְׁלֵם Shalem; Ancient Greek: Σαλέμ) is an ancient Middle Eastern town mentioned in the Bible. Salem is referenced in the following biblical passages: "And Melchizedek king of Salem brought forth bread and wine: and he was the priest of the most high God." (Genesis 14:18). "In Salem also is his tabernacle, and his dwelling place in Zion." (Psalm 76:2)

Six steps of prayer taught in the Lord's Prayer:
1. Address God's rightful place as the Father
2. Worship and praise God for who He is and all that He has done
3. Acknowledge that God's will and plans are in control, not our own
4. Ask God for the things that we need
5. Confess our sins and repent
6. Request protection and help in overcoming

James 5:16 - Therefore confess your sins to each other and pray for each other so that you may be healed. The prayer of a righteous person is powerful and effective.
Psalm 145:18 - The Lord is near to all who call on him, to all who call on him in truth.
Proverbs 15:29 - The Lord is far from the wicked, but he hears the prayer of the righteous.
Romans 8:26 - In the same way, the Spirit helps us in our weakness.
Hilcorp Energy Company (Hilcorp) seeks to amend the current oil and gas field rules in effect for the Refugio-Fox (5800) Field ("Field"), in Refugio County, Texas. Hilcorp seeks the proposed rule changes to facilitate oil and gas production within the Field using secondary oil and gas recovery technology, primarily water and carbon dioxide (CO₂) flooding. The Field has been inactive since the last production in 2007. Hilcorp proposes to establish consistency between the oil and gas lease spacing and well spacing requirements; establish a correlative interval for the Field designated from 5,900 feet to 6,030 feet; amend the drilling unit for proration purposes to 20 acres for oil and gas wells with 10-acre optional density; and amend the allowable to be consistent with Rule 48 – capacity allowable. Hilcorp has demonstrated that the proposed field rule amendments will prevent waste and protect correlative rights. Proper notice was given, and the application was not protested. The Technical Examiner and Administrative Law Judge (collectively, "Examiners") recommend that the amended field rules be granted as proposed by Hilcorp. **DISCUSSION OF THE EVIDENCE** The Field (No. 75612672) was discovered in the 1930s. The Commission recognized the Field as a separate and distinct interval on November 1, 1963. On July 6, 1973, under Docket No. 2-63,051, a hearing was held for unitization and secondary recovery in the Refugio-Fox (5800) Field, Refugio, Texas. The unitization was approved on July 17, 1973, for the Oligocene Frio Sand, known as the Fox No. 1 Sand, found approximately 5,884 feet deep, which correlated to the Refugio-Fox (5800) Field. The Field covers about 1,800 total acres and has been primarily developed using vertical wells from the 1930s to 2007. The net effective oil pay thickness is about 8.4 feet across the Field, with up to 20% recoverable oil remaining within the distinct unit. The average gravity of oil in the reservoir is 33.4° API.
Original reservoir pressure was 2,680 psi, which has decreased by over 88% to 300 psi at present. The cumulative production estimates established by testimony are 4.66 million barrels of oil (MMBO); 9.86 billion cubic feet (BCF) of gas; and 1.73 million barrels of water (MMBW). Production history shows oil and gas decreasing and water increasing over the monitoring period from about 1931 to 2007. Testimony indicated that approximately 30 wells were completed within the field over the production history, with well density being about one well per 66 acres. Based on the proration schedule, no active oil and gas wells remain in the Field; currently, Hilcorp is the only operator, with two injection wells. Hilcorp is seeking field rule changes to utilize secondary and tertiary recovery technology such as water and CO₂ flooding to stimulate oil and gas production. Hilcorp has performed reservoir mechanics testing on the Field by completing two injection wells, identified as INJ1 (Fox-A No. 51) and INJ2 (Fox-A No. 52), to attain reservoir data and ultimately use the two injection wells as part of a potential secondary oil and gas recovery water flooding system upon approval by the Commission. The reservoir mechanics indicate the Field has about 30 percent porosity and good permeability, which will allow secondary and tertiary oil and gas recovery through water and CO₂ flooding. Faulting within the Field has minimal displacement and appears not to inhibit communication across the faulted stratigraphy. The stratigraphy slopes from east to west with a 40-foot elevation change, but any water or CO₂ flooding will be confined to the proposed correlative interval. Testimony estimates that secondary or tertiary recovery technology should recover oil from the reservoir at volumes sufficient to be economically viable. Hilcorp has utilized similar amended field rules under Oil and Gas Docket No. 02-0292677 (adopted March 24, 2015), for the West Ranch (41A & 98A Cons.)
Field in Jackson County, Texas, to utilize secondary recovery technology. Also, the Tom O'Connor (5800) Field, which is equivalent to the Refugio-Fox (5800) Field, has similar reservoir conditions. The oil and gas recovery in the Tom O'Connor (5800) Field using water flooding technology has met Hilcorp's oil and gas recovery estimates. On October 19, 2017, Hilcorp requested a hearing to amend the field rules. A Notice of Hearing was issued by the Commission on December 6, 2017, to all parties entitled to notice at least ten days prior to the date of the hearing. The hearing was held on February 1, 2018. Hilcorp proposes to amend the current field rules to develop the field using secondary and tertiary recovery technology and to reduce waste and protect correlative rights. Below are the changes being requested for the Field:

- Establish the correlative interval from 5,900 feet to 6,030 feet as shown on the log of the Hilcorp Energy Company, Fox No. 50 Well (API No. 423913307500) located in Refugio County, Texas. The correlative interval is designated as a single reservoir for proration purposes and shall be designated as the Refugio-Fox (5800) Field;
- Change lease spacing from 467 feet to 330 feet for both oil and gas wells;
- Change well spacing from 1,200 feet for oil and gas wells to no spacing requirement for oil and gas wells (0 feet);
- Change the acres per unit from 40 acres for oil and gas wells to 20 acres for oil and gas wells with 10-acre optional density to efficiently perform the secondary and tertiary recovery operations;
- Change the tolerance acreage from 20 acres to one (1) percent of the proration unit, or a maximum of 22 acres; and,
- Establish allocation per Statewide Rule 48 – capacity allowable.

Hilcorp indicates the proposed field rule amendments outlined in the hearing and itemized above will facilitate secondary and tertiary recovery of oil and gas in a field that has exhausted primary recovery operations.
Development of the field using secondary and tertiary recovery technology will reduce waste and protect correlative rights. The Examiners consider the proposed amendments and additions to be appropriate for the Field. Hilcorp has demonstrated that the amendments are necessary to continue recovery of oil and gas and prevent waste.

**FINDINGS OF FACT**

1. Notice of this hearing was given to all parties entitled to notice at least ten days prior to the date of the hearing, and no protests were received.
2. The Refugio-Fox (5800) Field (No. 75612672) is in Refugio County, Texas.
3. Oil and gas have been recovered from parts of this Field since the 1930s. The Commission recognized the Refugio-Fox (5800) Field as a separate and distinct field on November 1, 1963. On July 6, 1973, under Docket No. 2-63,051, a hearing was held for unitization and secondary recovery in the Refugio-Fox (5800) Field, Refugio, Texas. The unitization was approved on July 17, 1973, for the Oligocene Frio Sand, located at 5,855 to 5,884 feet, which correlated to the Refugio-Fox (5800) Field in the docket.
4. The cumulative production estimates for the Refugio-Fox (5800) Field established by testimony have been 4.66 MMBO; 9.86 BCF of gas; and 1.73 MMBW.
5. Production history shows oil and gas decreasing and water increasing over the monitoring period from about 1931 to 2007. Testimony indicated that approximately 30 wells were completed within the field over the production history, with well density being about one well per 66 acres. Based on the proration schedule, no active oil and gas wells remain in the Field.
6. Hilcorp is seeking field rule changes to utilize secondary and tertiary recovery technology such as water and CO₂ flooding to stimulate oil and gas production.
7. Testimony indicates that reservoir mechanics testing estimates the net effective oil pay thickness at about 8.4 feet across the Field, with up to 20% recoverable oil remaining within the distinct unit.
8.
A hearing was held on February 1, 2018.
9. Hilcorp proposes to amend the current field rules to develop the field using secondary and tertiary technology. Below are the changes being requested for the Field:
- Establish the correlative interval from 5,900 feet to 6,030 feet as shown on the log of the Hilcorp Energy Company, Fox No. 50 Well (API No. 423913307500) located in Refugio County, Texas. The correlative interval is designated as a single reservoir for proration purposes and shall be designated as the Refugio-Fox (5800) Field;
- Change lease spacing from 467 feet to 330 feet for both oil and gas wells;
- Change well spacing from 1,200 feet for oil and gas wells to no spacing requirement for oil and gas wells (0 feet);
- Change the acres per unit from 40 acres for oil and gas wells to 20 acres for oil and gas wells with 10-acre optional density to efficiently perform the secondary and tertiary recovery operations;
- Change the tolerance acreage from 20 acres to one (1) percent of the proration unit, or a maximum of 22 acres; and,
- Establish allocation per Rule 48 – capacity allowable.
10. The proposed field rule amendments will reduce waste, protect correlative rights, and promote the orderly development of the field.
11. At the hearing, Hilcorp agreed on the record that the Final Order in this case is to be effective when the Master Order relating to the Final Order is signed.

**CONCLUSIONS OF LAW**

1. Resolution of the subject application is a matter committed to the jurisdiction of the Railroad Commission of Texas. Tex. Nat. Res. Code § 81.051.
2. All notice requirements have been satisfied. 16 Tex. Admin. Code §§ 1.43 and 1.45.
3. The proposed field rules will prevent waste, protect correlative rights, and promote the orderly development of the field.
4.
Pursuant to § 2001.144(a)(4)(A) of the Texas Government Code and the agreement of the applicant, this Final Order is effective when a Master Order relating to the Final Order is presented at Commission conference and signed by the Commissioners.

**EXAMINERS' RECOMMENDATION**

Based on the above findings of fact and conclusions of law, the Examiners recommend the Commission enter an order granting the application of Hilcorp Energy Company to amend the field rules for the Refugio-Fox (5800) Field, in Refugio County, Texas.

Respectfully submitted,

Robert Musick
Technical Hearings Examiner

Clayton J. Hoover
Administrative Law Judge
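The reservoir figures recited in the discussion above are mutually consistent, which can be verified with simple arithmetic. The following is an illustrative sketch only, not part of the record; all inputs are the approximate values quoted in the testimony (2,680 psi original pressure, 300 psi current, ~1,800 acres, ~30 historical wells):

```python
# Cross-check of reservoir figures quoted in the testimony summarized above.
# Illustrative only; every input is an approximate value from the record.

original_pressure_psi = 2680   # original reservoir pressure
current_pressure_psi = 300     # present-day reservoir pressure

# "decreased over 88%": (2680 - 300) / 2680
decline_fraction = (original_pressure_psi - current_pressure_psi) / original_pressure_psi
print(f"Pressure decline: {decline_fraction:.1%}")  # → 88.8%

field_acres = 1800   # approximate total field acreage
well_count = 30      # approximate wells completed over field history

# The record states "about one well per 66 acres"; with these round
# numbers the quotient is roughly 60 acres per well, which is in the
# same range given the "approximately" qualifiers on both inputs.
acres_per_well = field_acres / well_count
print(f"Well density: one well per {acres_per_well:.0f} acres")
```

Both results sit comfortably within the hedged figures in the testimony: a pressure decline of just under 89% (consistent with "over 88%"), and a well density near the quoted one well per 66 acres.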
The following document is provided by the Law and Legislative Digital Library at the Maine State Law and Legislative Reference Library http://legislature.maine.gov/lawlib Reproduced from scanned originals with text recognition applied (searchable text may contain some errors and/or omissions) FIFTH REVISION. THE REVISED STATUTES OF THE STATE OF MAINE, PASSED SEPTEMBER 1, 1903, AND TAKING EFFECT JANUARY 1, 1904. BY THE AUTHORITY OF THE LEGISLATURE. AUGUSTA: KENNEBEC JOURNAL PRINT, 1904. CHAPTER 92. MORTGAGES OF REAL ESTATE. Sec. 1. Mortgages of real estate, mentioned in this chapter, include those made in the usual form, in which the condition is set forth in the deed, and those made by a conveyance appearing on its face to be absolute, with a separate instrument of defeasance executed at the same time or as part of the same transaction. (a) Sec. 2. A mortgagee, or person claiming under him, may enter on the premises, or recover possession thereof, before or after breach of condition, when there is no agreement to the contrary; but in such case, if the mortgage is afterwards redeemed, the amount of the clear rents and profits from the time of taking possession, shall be accounted for and deducted from the sum due on the mortgage. (b) Sec. 3. After breach of the condition, if the mortgagee, or any one claiming under him, desires to obtain possession of the premises for the purpose of foreclosure, he may proceed in either of the following ways, viz.: (c) I. He may obtain possession under a writ of possession issued on a conditional judgment, as provided in section ten, duly executed by an officer. An abstract of such writ, stating the time of obtaining possession, certified by the clerk, shall be recorded in the registry of deeds of the district in which the estate is, within thirty days after possession has been obtained. (d) II. He may enter into possession, and hold the same by consent in writing of the mortgagor, or the person holding under him. (e) III. 
He may enter peaceably and openly, if not opposed, in the presence of two witnesses, and take possession of the premises; and a certificate of the fact and time of such entry shall be made, signed and sworn to by such witnesses before a justice of the peace; and such certificate, or consent, with the affidavit of the mortgagee or his assignee to the fact and time of entry indorsed thereon, shall be recorded in each registry of deeds in which the mortgage is or by law ought to be recorded, within thirty days after the entry is made. (a)

(a) What constitutes a mortgage; 2 Me., 136; 5 Me., 88; 8 Me., 250; 10 Me., 199; 12 Me., 349; 18 Me., 105; 21 Me., 197; 23 Me., 241; 24 Me., 189; 27 Me., 533; 32 Me., 145; 36 Me., 123; 38 Me., 448; 40 Me., 382; 43 Me., 372, 566; 44 Me., 299; 47 Me., 236; 49 Me., 363, 479; 50 Me., 98, 175; 52 Me., 98; 53 Me., 11, 464; 55 Me., 388, 407; 68 Me., 488; 70 Me., 209; 71 Me., 553, 570; 75 Me., 268; 77 Me., 554; 82 Me., 556; 93 Me., 87. (b) Rights of parties; 2 Me., 136, 175, 340; 5 Me., 92; 14 Me., 132; 15 Me., 307; 18 Me., 105; 19 Me., 55, 99, 433; 20 Me., 114; 21 Me., 249, 467, 500; 24 Me., 404; 25 Me., 218, 248, 345; 27 Me., 533; 29 Me., 116, 160; 30 Me., 367; 33 Me., 42; 34 Me., 90, 189; 35 Me., 40, 220, 551; 36 Me., 123, 284, 438; 40 Me., 255; 41 Me., 116, 252; 42 Me., 188; 44 Me., 120; 45 Me., 97, 388, 414; 47 Me., 513; 49 Me., 428; 50 Me., 165, 447, 463; 51 Me., 49; 52 Me., 98, 116, 130, 185, 406; 55 Me., 495, 522; 58 Me., 367; 66 Me., 275; 67 Me., 347; 72 Me., 281; 80 Me., 460; 82 Me., 424, 457; 84 Me., 311; 92 Me., 242. Transfers of mortgages; 2 Me., 331; 5 Me., 276; 8 Me., 283; 23 Me., 346; 24 Me., 189; 27 Me., 240; 31 Me., 313; 32 Me., 203; 41 Me., 223; 44 Me., 302; 46 Me., 447; 50 Me., 177; 51 Me., 123; 52 Me., 185; 71 Me., 377. (c) See c. 79, § 8, ¶ i; 18 Me., 199; 21 Me., 128; 23 Me., 25; 24 Me., 156; 35 Me., 557; 40 Me., 523; 42 Me., 39; 48 Me., 62; 49 Me., 266, 378; 51 Me., 381. (d) 27 Me., 241; 33 Me., 198; 35 Me., 551; 45 Me., 452; 51 Me., 395; 52 Me., 469; 55 Me., 522; 78 Me., 343. (e) 28 Me., 353; 29 Me., 57; 33 Me., 364; 38 Me., 551; 41 Me., 71; 74 Me., 312.

Sec. 4. Possession obtained in either of these three modes, and continued for the three following years, forever forecloses the right of redemption. (b) Sec. 5. If after breach of the condition, the mortgagee, or any person claiming under him, is not desirous of taking and holding possession of the premises, he may proceed for the purpose of foreclosure in either of the following modes: I. He may give public notice in a newspaper published and printed in whole or in part in the county where the premises are situated, if any, or if not, in the state paper, three weeks successively, of his claim by mortgage on such real estate, describing the premises intelligibly, and naming the date of the mortgage, and that the condition in it is broken, by reason whereof he claims a foreclosure; and cause a copy of such printed notice, and the name and date of the newspaper in which it was last published to be recorded in each registry in which the mortgage deed is or by law ought to be recorded, within thirty days after such last publication. (c) II. He may cause an attested copy of such notice to be served on the mortgagor or his assignee, if he lives in the state, by the sheriff of the same county or his deputy, by delivering it to him in hand or leaving it at his place of last and usual abode; and cause the original notice and the sheriff's return thereon to be recorded within thirty days after such service as aforesaid; and in all cases the certificate of the register of deeds is prima facie evidence of the fact of such entry, notice, publication of foreclosure, and of the sheriff's return. Sec. 6.
For the foreclosure of a mortgage by either method prescribed by the preceding section, or by paragraphs two and three of section three, the mortgagee or the person claiming under him may charge an attorney's fee of five dollars which shall be a lien on the mortgaged estate, and shall be included, with the expense of publication, service and recording, in making up the sum to be tendered by the mortgagor or the person claiming under him in order to be entitled to redeem; provided, said sum has actually been paid in full or partial discharge of an attorney's fee. Sec. 7. The mortgagor, or person claiming under him, may redeem the mortgaged premises within three years after the first publication, or the service of the notice mentioned in section five, and if not so redeemed his right of redemption is forever foreclosed; provided, that the mortgagor and mortgagee may agree upon a shorter time, not less than one year, in which the mortgage shall be forever foreclosed, which agreement shall be inserted in the mortgage and be binding on the parties, their heirs and assigns, and shall apply to all the modes prescribed for the foreclosure of mortgages on real estate. (a) 4 Me., 495; 37 Me., 388; 47 Me., 296; 50 Me., 473; 52 Me., 135; 58 Me., 368; 64 Me., 161; 66 Me., 272; 82 Me., 557. (b) 3 Me., 263; 7 Me., 33; 23 Me., 25; 24 Me., 156; 37 Me., 388; 42 Me., 190; 58 Me., 308; 64 Me., 162; 66 Me., 273; 67 Me., 312. (c) 25 Me., 392; 38 Me., 258; 45 Me., 99, 452; 46 Me., 274, 497; 49 Me., 103, 376; 53 Me., 73; 55 Me., 544; 58 Me., 367; 61 Me., 54; 63 Me., 544; 66 Me., 170; 71 Me., 444; 74 Me., 75; 77 Me., 554; 84 Me., 97; 94 Me., 305; 97 Me., 223. Sec. 8. 
Whenever a mortgagee or his assignee dies, and there is no executor or administrator to receive the mortgage money, the mortgagor or person claiming under him having a right to redeem, may apply to the judge of probate of the county where the estate mortgaged is situated, for the appointment of an administrator upon such estate, and if, after due notice to all parties interested therein, they neglect or refuse to take out administration for thirty days, then the judge may commit administration to such person as he deems suitable, who may act as administrator with reference to said mortgage, as provided by law. In all such cases, however, personal notice shall first be given to the widow and heirs of the deceased known to be living in the state, either by service on them in person or by leaving such notice at their last and usual place of abode. Sec. 9. The mortgagee, or person claiming under him, in an action for possession, may declare on his own seizin, in a writ of entry, without naming the mortgage or assignment; and if it appears on default, demurrer, verdict or otherwise, that the plaintiff is entitled to possession, and that the condition had been broken when the action was commenced, the court shall, on motion of either party, award the conditional judgment, unless it appears that the tenant is not the mortgagor or a person claiming under him, or that the owner of the mortgage proceeded for foreclosure conformably to sections five and seven before the suit was commenced, the plaintiff not consenting to such judgment; and unless such judgment is awarded, judgment shall be entered as at common law. (a) Sec. 10. 
The conditional judgment shall be, that if the mortgagor, his heirs, executor or administrator, pays the sum that the court adjudges to be due and payable, with interest, within two months from the time of judgment, and also pays such other sums as the court adjudges to be thereafter payable, within two months from the time that they fall due, no writ of possession shall issue and the mortgage shall be void; otherwise it shall issue in due form of law, upon the first failure to pay according to said judgment. And if, after three years from the rendition of the judgment, the writ of possession has not been served or the judgment wholly satisfied, another conditional judgment may, on scire facias sued out in the name of the mortgagee or assignee, be rendered, and a writ of possession issued as before provided. When the condition is for doing some other act than the payment of money, the court may vary the conditional judgment as the circumstances require; and the writ of possession shall issue, if the terms of the conditional judgment are not complied with within the two months. Sec. 11. If it appears that nothing is due on the mortgage, judgment shall be rendered for the defendant and for his costs, and he shall hold the land discharged of the mortgage. (b) Sec. 12. When a mortgagee, or person claiming under him, is dead, the same proceedings to foreclose the mortgage may be had by his executor or administrator, declaring on the seizin of the deceased, as he might have had if living. Sec. 13. Lands mortgaged to secure the payment of debts, or the performance of any collateral engagement, and the debts so secured, are on (a) 2 Me., 332; 13 Me., 186; 14 Me., 299; 19 Me., 276, 366; 28 Me., 135; 42 Me., 188; 53 Me., 77; 56 Me., 10; 63 Me., 545; 64 Me., 445; 79 Me., 570; 80 Me., 460; 81 Me., 285; 95 Me., 33. (b) 2 Me., 322, 332; 31 Me., 394; 67 Me., 548; 72 Me., 202. 
the death of the mortgagee, or person claiming under him, assets in the hands of his executors or administrators; they shall have the control of them as of a personal pledge; and when they recover seizin and possession thereof, it shall be for the use of the widow and heirs, or devisees, or creditors of the deceased, as the case may be; and when redeemed, they may receive the money, and give effectual discharges therefor, and releases of the mortgaged premises. (a) Sec. 14. An action on a mortgage deed may be brought against a person in possession of the mortgaged premises; and the mortgagor, or person claiming under him, may, in all cases, be joined with him as a co-tenant, whether he then has any interest or not in the premises; but he is not liable for costs, when he has no such interest, and makes his disclaimer thereto upon the records of the court. Sec. 15. Any mortgagor, or other person having a right to redeem lands mortgaged, may demand of the mortgagee or person claiming under him a true account of the sum due on the mortgage, and of the rents and profits, and money expended in repairs and improvements, if any; and if he unreasonably refuses or neglects to render such account in writing, or, in any other way by his default prevents the plaintiff from performing or tendering performance of the condition of the mortgage, he may bring his bill in equity for the redemption of the mortgaged premises within the time limited in section seven, and therein offer to pay the sum found to be equitably due, or to perform any other condition, as the case may require; and such offer has the same force as a tender of payment or performance before the commencement of the suit; and the bill shall be sustained without such tender, and thereupon he shall be entitled to judgment for redemption and costs. (b) Sec. 16. 
When the amount due on a mortgage has been paid or tendered to the mortgagee, or person claiming under him, by the mortgagor or the person claiming under him, within the time so limited, he may have a bill in equity for the redemption of the mortgaged premises, and compel the mortgagee, or person claiming under him, by a decree of the supreme judicial court, to release to him all his right and title therein; although such mortgagee or his assignee has never had actual possession of the premises for breach of the condition; or, without having made a tender before the commencement of the suit, he may have his bill in the manner prescribed in the preceding section, and the cause shall be tried in the same manner. (c) Sec. 17. When a bill to redeem is brought before an actual entry for breach of the condition, and before payment or tender, if the mortgagee, or person claiming under him, is out of the state and has not had actual notice, the court shall order proper notice to be given him, and continue the cause as long as necessary. When a mortgage is alleged and proved (a) 20 Me., 163; 31 Me., 313; 51 Me., 124; 56 Me., 210; 78 Me., 343; 79 Me., 301; 80 Me., 138; 84 Me., 311; 92 Me., 490. (b) 8 Me., 250, 282; 18 Me., 210; 19 Me., 366; 20 Me., 271; 21 Me., 129; 23 Me., 48, 178; 24 Me., 298; 25 Me., 387; 28 Me., 352; 34 Me., 271; 35 Me., 220; 36 Me., 123; 38 Me., 329; 39 Me., 112; 41 Me., 223; 42 Me., 246; 44 Me., 300; 46 Me., 299, 443, 448, 494; 48 Me., 61; 49 Me., 564; 50 Me., 174, 240; 51 Me., 348; 52 Me., 135, 408, 544; 53 Me., 142, 246, 353, 441; 54 Me., 180, 406; 55 Me., 157; 56 Me., 159; 62 Me., 577; 65 Me., 198, 288; 66 Me., 190, 272, 470; 68 Me., 192; 69 Me., 192; 70 Me., 388; 74 Me., 314; 87 Me., 88; 95 Me., 264. (c) 7 Me., 33; 27 Me., 241; 30 Me., 360; 36 Me., 51; 40 Me., 117; 47 Me., 54; 52 Me., 408, 561; 78 Me., 445; 87 Me., 88; 95 Me., 264; 96 Me., 360. 
to be fraudulent, in whole or in part, an innocent assignee of the mortgagor, for a valuable consideration, may file his bill within the time allowed to redeem, and be allowed to redeem without a tender. Sec. 18. When a mortgagee, or person claiming under him, residing out of the state, or whose residence is unknown to the party entitled to redeem, has commenced proceedings under section five, or when such mortgagee or claimant having no tenant, agent or attorney in possession on whom service can be made, has commenced proceedings under section three, in either case the party entitled to redeem may file his bill, as prescribed in section fifteen, and pay at the same time to the clerk of the court the sum due, which payment shall have the same effect as a tender before the suit; and the court shall order such notice to be given of the pendency of the suit, as it judges proper. Sec. 19. When an amount due on a mortgage has been paid, or tendered to the mortgagee, or person claiming under him, before foreclosure of the mortgage, and the mortgagee or his assignee is out of the state, and the mortgage is undischarged on the record, the mortgagor or person claiming under him, may have his bill in equity for the redemption of the mortgaged premises, as provided in section sixteen, or for the discharge of the mortgage; and on notice of the pendency of the bill, given by publication in some newspaper in the county where said premises are situated, for three weeks successively, the last publication being thirty days before the time of hearing, or in such other way as the supreme judicial court or a justice thereof, in vacation, orders, said court may decree a discharge of such mortgage; and the record of such decree in the registry of deeds in said county is evidence of such discharge. Sec. 20. 
No bill in equity shall be brought for redemption of mortgaged premises, founded on a tender of payment or performance of the condition made before commencement of the suit, unless within one year after such tender. Sec. 21. In any suit brought for the redemption of mortgaged premises, when it is necessary to the attainment of justice that any other person, besides the defendant, claiming an interest in the premises, should be made a party with the original defendant, the court on motion, may order him to be served with an attested copy of the bill amended in such manner as it directs, and on his appearance, the cause shall proceed as though he had been originally joined. Sec. 22. The court, when a decree is made for the redemption of mortgaged lands, may award execution jointly or severally, as the case requires; and for sums found due for rents and profits over and above the sums reasonably expended in repairing and increasing the value of the estate redeemed. Sec. 23. When money is brought into court in a suit for redemption of mortgaged premises, the court may deduct therefrom such sum as the defendant is chargeable with on account of rents and profits by him received, or costs awarded against him; and the person to whom money is tendered to redeem such lands, if he receives a larger sum than he is entitled to retain, shall refund the excess. Any mortgagee or person holding under him when requested by an assignee in insolvency or trustee in bankruptcy to render a statement of the amount due on a mortgage given by the insolvent where there is an equity of redemption shall render a true statement to the assignee or trustee of the amount due on such mortgage and for any loss resulting to the insolvent estate from any misrepresentation of the amount due, the assignee or trustee shall have a right of action on the case against such person to recover such loss. Sec. 24. 
When a mortgage is made or assigned to the state, the treasurer may demand and receive the money due thereon, and discharge it by his deed of release. After breach of the condition, he may, in person or by his agent, make use of the like means for the purpose of foreclosure, which an individual mortgagee might, as prescribed in sections three and five. Sec. 25. If the treasurer of state, and the person applying to redeem any lands mortgaged to the state, disagree as to the sum due thereon, such person may bring a bill in equity against the state for the redemption thereof, in the supreme judicial court. Sec. 26. The court shall order notice to be served on the treasurer of state in the usual form, and shall hear the cause, and decide what sum is due to the state on said mortgage, and award costs as it deems equitable; and the treasurer shall accept the sum adjudged by the court to be due, and discharge the mortgage. Sec. 27. If a person, entitled to redeem a mortgaged estate, or an equity of redemption which has been sold on execution, or the right to redeem such right, or the right to redeem lands set off on execution, dies without having made a tender for that purpose, a tender may be made and a bill for redemption commenced and prosecuted by his executor or administrator, heirs or devisees; and if the plaintiff in such bill in equity dies pending the suit, it may be prosecuted to final judgment by his heirs, devisees, or his executor or administrator. When a mortgagor resides out of the state, any person may, in his behalf, tender to the holder of the mortgage the amount due thereon; and the tender shall be as effectual as if made by the mortgagor. Sec. 28. When the mortgagee, or person holding under him, is under guardianship, a tender may be made to the guardian, and he shall receive the sum due on the mortgage; and upon receiving it, or on performance of such other condition as the case requires, he shall execute a discharge of the mortgage. Sec. 29. 
In all cases where a debtor has mortgaged real and personal estate to secure the performance of a collateral agreement or undertaking, other than the payment of money, and proceedings have been commenced to foreclose said mortgage for alleged breach of the conditions thereof, but the time of redemption has not expired, any person having any claim against the mortgagor and having attached said mortgagor's interest in said estate on said claim, may file a bill in equity in the supreme judicial court in the county where such agreement has to be performed, where the owner of such mortgage resides, or where the property mortgaged is situated, alleging such facts and praying for relief; and said court may examine into the facts and ascertain whether there has been a breach of the conditions of said mortgage, and if such is found to be the fact, may assess the damages arising therefrom, and may make such orders and decrees in the premises as will secure the rights of said mortgagee or his assignee, so far as the same can be reasonably accomplished, and enable the creditor, by fulfilling such requirements as the court may impose, to hold said property, or such right or interest as may remain therein by virtue of such attachment, for the satisfaction of his claim. Such claim may include possession of the property by the mortgagee, for such time as the court deems just and equitable. Pending such proceedings, the right of redemption shall not expire by any attempted foreclosure of such mortgage. Sec. 30. A mortgage may be discharged by an entry acknowledging the satisfaction thereof, made on the margin of the record of the mortgage in the registry of deeds, and signed by the mortgagee or by his executor, administrator or assignee, and such entry shall have the same effect as a deed of release duly acknowledged and recorded. 
If a mortgagee or his executor, administrator or assignee after full performance of the condition of his mortgage, whether before or after breach of such condition, refuses or neglects for seven days after being thereto requested to make such discharge or to execute and acknowledge a deed of release of the mortgage, he shall be liable to a fine of not less than ten, nor more than fifty dollars, to be recovered in an action on the case. (a) Sec. 31. A mortgage may be discharged on the record thereof in the office of the registry of deeds by an attorney at law, authorized in writing by the mortgagee or person claiming under him; provided, however, that said writing is first recorded or filed in said office and a minute of the same is made by the register on the margin of the page in connection with said discharge. Sec. 32. If the purchaser of an equity of redemption, sold on execution, has satisfied and paid to the mortgagee, or those claiming under him, the sum due on the mortgage, the mortgagor, or those claiming under him, having redeemed the equity of redemption within one year after such sale, may redeem such mortgaged estate from such purchaser, or any person claiming under him, within the time and in the manner that he might have redeemed it of the mortgagee if there had been no such sale made, and within such time only. (b) Sec. 33. When the mortgagee or person claiming under him has taken possession of the mortgaged premises, and the debt secured by the mortgage is paid or released after condition broken and before foreclosure perfected, the mortgagor or person claiming under him may maintain a writ of entry to recover possession of said premises, the same as if paid or released before condition broken. (c) Sec. 34. 
When the record title of real estate is encumbered by an undischarged mortgage, and the mortgagor and those having his estate in the premises have been in uninterrupted possession of such real estate for twenty years after the expiration of the time limited in the mortgage for the full performance of the conditions thereof, he or they, or any person having a freehold estate, vested or contingent in possession, reversion or remainder, in the land originally subject to the mortgage or in any undivided or any aliquot part thereof, or any interest therein which may eventually become a freehold estate, or any person who has conveyed such land or any such interest therein with covenants of title or warranty, may apply to the supreme judicial court by petition, setting forth the facts, and asking for a decree as hereinafter provided; and if after notice to all persons interested as provided in section thirty-seven, no evidence is offered of any payment within said twenty years or of any other act within said time, in recognition of its existence as a valid mortgage, the court upon hearing may enter a decree setting forth such facts and its findings in relation thereto, which decree shall within thirty days be recorded in the proper registry of deeds and thereafter no action at law or proceeding in equity shall be brought by any person to enforce a title under said mortgage. (a) What constitutes a discharge: 5 Me., 275; 6 Me., 260; 17 Me., 371; 18 Me., 11; 24 Me., 335; 25 Me., 346, 402; 27 Me., 219; 31 Me., 394; 33 Me., 451; 39 Me., 22; 44 Me., 115; 45 Me., 103; 54 Me., 466. What does not: 17 Me., 371; 22 Me., 87; 23 Me., 390; 24 Me., 437; 29 Me., 451; 31 Me., 313; 34 Me., 51, 302; 37 Me., 13; 48 Me., 111; 49 Me., 416; 50 Me., 131, 176; 52 Me., 186; 56 Me., 159. (b) 2 Me., 343; 6 Me., 237; 7 Me., 103; 21 Me., 105; 46 Me., 437; 49 Me., 266; 52 Me., 407; 55 Me., 253. (c) 67 Me., 361; 75 Me., 403; 79 Me., 448. SEC. 35.
Any two or more persons owning in severalty different portions or different interests of the character above described, in the whole or in different portions thereof, may join in one petition. Two or more defects arising under different mortgages affecting one parcel of land may be set forth in the same petition; and in case of a contest the court shall make such order for separate issues as may be proper. SEC. 36. When the mortgagor of such an undischarged mortgage and those having his estate in the premises have been in uninterrupted possession of such real estate for twenty years from the date thereof, and it shall appear that such mortgage was not given to secure the payment of a sum of money or a debt, but to secure the mortgagee against some contingent liability assumed or undertaken by him, and that such contingent liability has ceased to exist and that the interests of no person will be prejudiced by the discharge of such mortgage, the mortgagor or those having his estate in the premises, or any of the persons to whom a similar remedy is granted in section thirty-four, may apply to the supreme judicial court by petition setting forth the facts and asking for a decree as hereinafter provided; and if after notice to all persons interested as provided in the following section, and upon hearing it shall appear that the liability on account of which such mortgage was given has ceased to exist and that such mortgage ought to be discharged, the court may enter a decree setting forth the facts proved and its findings in relation thereto, which decree shall within thirty days be recorded in the proper registry of deeds and thereafter no action or proceeding in equity shall be brought to enforce a title under said mortgage. SEC. 37.
When it is alleged under oath in the petition that the mortgagees or persons claiming under them are unknown or that their names are unknown, they may be described generally as claiming by, through or under some person or persons named in the petition. Personal service by copy of the petition and order of notice shall be made upon all known respondents residing in the state fourteen days before the return day; and upon all other respondents, service may be made by personal service of copy of the petition and order of notice; by publication for such length of time, in such newspapers or by posting in such public places as the court may direct; or in any or all of these ways at the discretion of the court. SEC. 38. Upon the service of such notice in accordance with the order of the court, the court shall have jurisdiction of all persons made respondents in the manner above provided, and shall upon due hearing make such decree upon the petition and as to costs as it shall deem proper. SEC. 39. The decree of the court, determining the validity, nature or extent of any such encumbrance shall operate directly on the land as a proceeding in rem, and shall be effectual to bar all the respondents from any claim thereunder contrary to such determination, and such decree so barring said respondents shall have the same force and effect as a release of such claims, executed by the respondents in due form of law. The court may, in its discretion, appoint agents or guardians ad litem, to represent minors or other respondents.
BioSecure Registered at Groupe des Ecoles des Télécommunications (GET) 46 Rue Barrault, 75634 Paris Cedex 13 Association founded pursuant to Law 1901 ARTICLES OF ASSOCIATION Pursuant to the contract number IST-2002-507634 (hereinafter “the Main Contract”) concluded with the European Commission, the undersigned in cooperation with other partners have decided to participate in a Network of Excellence named “BioSecure” within the 6th Framework Programme (hereinafter called “the Project”). The Project has the aim of producing various results. In particular, it concerns databases, software and dissemination reports in the area of biometrics. According to the provisions of Article 34 in Annex II to the Main Contract and Article X of the Consortium Agreement, particularly Article X-3-1, the joint owners of the results produced by the Project shall authorise one of them, or set up an independent legal body, in order to protect, evaluate and disseminate the said results. To this effect, the undersigned: - **GROUPE DES ECOLES DES TELECOMMUNICATIONS** A public administration entity governed by decree number 96-1177 dated 27 December 1996, registered under the SIRET number 180 092 025 00014 at 46 rue Barrault 75634 Paris cedex 13, represented by its General Administrator Mr. Jean-Claude JEANNERET, - **UNIVERSITY OF KENT** With seat in Canterbury, Kent CT2 7NZ, United Kingdom, represented by Mr. David COOMBE, Temporary Research Director, - **ARCHES** A limited liability company with capital of € 8.000 (Eight Thousand Euros), with registered office at 7, Allée de la Veissière, 38640 CLAIX, registered in the Commercial and Companies Register in Grenoble under the SIRET number 423 280 098 00017, represented by Mr. Jean-Paul LEFEVRE, Managing Director, have herewith enacted the Articles of Association of the Association they have founded to pursue the above objectives and those listed in Article 2 below.
TITLE I LEGAL FORM - OBJECT - NAME – SEAT - TERM ARTICLE ONE – LEGAL FORM The undersigned and other natural and legal persons have hereby decided to found an Association pursuant to the law of 1st July 1901, as currently in force and later amended, and the present Articles of Association, to be bound by the rules of these Articles of Association and to meet the conditions set out below. ARTICLE 2 - OBJECT The objective of the Association is to create conditions for an efficient, flexible and multifaceted co-operation amongst its members in order to: - disseminate and make available results of the Project; - manage and evaluate these results, during and after the end of the Project, by all appropriate means, and in particular: - distribution of resources (databases, software, etc.) - establishment of a research partnership in Europe and the rest of the world; - foster and encourage research in Europe and throughout the world, particularly in the area of biometrics and security, and evaluate results of the Project; - implement, subject to the laws and rules in force, any acts or actions likely to achieve these objectives. ARTICLE 3 - NAME The name of the Association shall be: BioSecure. The abbreviation shall be “BioSecure”. ARTICLE 4 – SEAT The main office of the Association shall be: Groupe des Ecoles des Télécommunications (GET) 46 Rue Barrault, 75634 Paris Cedex 13 It may be changed by a simple decision of the Board of Governors. ARTICLE 5 - TERM The Association is of unlimited duration. TITLE II MEMBERS OF THE ASSOCIATION ARTICLE 6 - MEMBERS Membership of the Association shall be composed of Founder Members, Active Members and Donors. Founder Members shall be, on a permanent basis, all legal and natural persons who have signed these Articles of Association on the day of their enactment, who pool their expertise or coordinate their activities with a purpose other than to share benefits, and who make financial contributions.
The title of Founder Member may be extended, by the General Assembly on a proposal of the Board of Governors, to those partners of the BioSecure Project who will have abided by these Articles of Association in the year following the establishment of the Association. Active Members shall be all legal and natural persons exercising a function in connection with the objective of the Association, who abide by these Articles of Association, make a contribution and are approved by the Board of Governors. The Board of Governors shall independently decide about all requests for membership as well as the status of the members. The Board need not give reasons for its decisions, nor are such decisions subject to appeal. Donors shall be persons who make a gift to the Association. ARTICLE 7 – RESOURCES OF THE ASSOCIATION Resources of the Association encompass: 1/ contributions and membership fees 2/ subsidies, which may be granted by the State, or public bodies, or all other international, national or local institutions 3/ revenue from its property 4/ interest on loans given by the Association 5/ donations 6/ all other resources permitted by law and regulations. Contributions shall be due and payable by Members of the Association during the month of their membership application and thereafter every year before the 31st January. The amount of contributions shall be determined annually by a decision of the Board of Governors. **ARTICLE 8 – RESIGNATION – EXCLUSION AND DEATH** Members may resign by addressing their resignation to the President of the Board of Governors in a letter sent by registered mail, whose receipt shall be countersigned; they shall lose their status as a Member of the Association at the end of the current calendar year, and they may not request that their contributions be returned in whole or in part.
The Board shall have the power to exclude a Member either on account of his failure to pay due contributions for six months following the deadline or for a serious reason. Prior to excluding a Member, the Board shall request the Member, if necessary, to provide a full explanation. In the event that a member (a natural person) dies, his heirs or legal successors shall not as of right acquire membership of the Association. Death, resignation or suspension of a Member shall not constitute a reason for the dissolution of the Association, which shall continue to exist amongst its other Members. Members who have resigned or been excluded, and the heirs and legal successors of deceased members, are to pay all contributions in arrears and those for the remainder of the year of their resignation, exclusion or death. **ARTICLE 9 – RESPONSIBILITIES OF MEMBERS AND GOVERNORS** The Association shall only be liable for contractual obligations entered into in its own name, and subject to the law of 25th January 1985 on judicial relief and involuntary liquidation, no Member or Governor shall incur any personal liability whatsoever. TITLE III MANAGEMENT ARTICLE 10 – BOARD OF GOVERNORS The Association shall be run by a Board of Governors made up of a minimum of three (3) and a maximum of twenty (20) Members elected amongst founder and active members and nominated in an ordinary General Assembly of the members. The Board of Governors shall choose from amongst its members an Executive Committee composed of a President, a Treasurer and a Secretary. Nevertheless, the first Board of Governors shall be composed of: - **GROUPE DES ECOLES DES TELECOMMUNICATIONS** Represented by Mrs. Bernadette DORIZZI **President** - **UNIVERSITY OF KENT** Represented by Mr. Farzin DERAVI **Vice-President** - **ARCHES** Represented by Mr.
Jean-Paul LEFEVRE **Secretary and Treasurer** Governors shall exercise their functions for a term of three (3) years, each year constituting the time period between two annual ordinary General Assemblies. However, the first Board will stay in office only until the first meeting of the annual ordinary General Assembly, which will approve accounts for the year 2007. Governors may be re-elected an indefinite number of times. Members of the Board of Governors will not receive remuneration for exercising their duties. In accordance with provisions to be later determined, which will conform with the present Articles of Association and any future internal procedure rules of the Association, various bodies may be created within the Board of Governors. The Board of Governors may appoint from within its ranks one or several Vice-Presidents. After the first tenure of three (3) years, one third of the Board shall be due for re-election every two years according to provisions stated in the internal procedure rules. Any Member of the Board who, without an excuse, has not participated in three consecutive meetings may be considered dismissed. **ARTICLE 11 – THE BOARD OF GOVERNORS – POWER TO APPOINT MEMBERS** If at any time the Board has fewer than three (3) members, it may, if it finds it to be in the interest of the Association, appoint Members, up to the maximum number stated above, by means of temporary appointment of one or several new Governors. Likewise, should the office of a Governor become vacant during the time period between two annual ordinary General Assemblies, the Board will be entitled to appoint a temporary replacement; if the number of Governors falls below three (3), the Board will have to act without any undue delay. These appointments will be submitted for the approval of members at the first meeting of the Members in an ordinary General Assembly.
However, a Governor appointed as a replacement of another will stay in office only for the remaining period of his predecessor’s mandate. Even in the absence of ratification, decisions taken and acts carried out by the Board of Governors following a temporary appointment shall remain valid. **ARTICLE 12 – MEETINGS AND DECISIONS OF THE BOARD OF GOVERNORS** The Board of Governors shall convene at least once (1) a year when convoked in writing by its President or on written request by half of its Members, as frequently as the interest of the Association shall require, or in its own right. The deliberations may also take place by way of a video or telephone conference. The agenda shall be drafted by the President or the Governors who call the meeting; the final agenda may only be set in the meeting itself. Decisions shall be taken by a relative majority of the votes of the present or represented members, every Governor having a right to one vote. In the event of a tie, the President will have the casting vote. Decisions of the Board are confirmed in minutes recorded in a special Register signed by the President and the Secretary, who will produce them either jointly or separately as a certificate or a copy. These signatures may be added afterwards if a meeting of the Board has taken place by means of a video or telephone conference. ARTICLE 13 – POWERS OF THE BOARD OF GOVERNORS The Board of Governors enjoys the widest powers to act on behalf of the Association and to authorise all permitted actions and transactions undertaken by the Association, with the exception of those which require the consent of the Members at a General Assembly.
Notably, it may appoint or dismiss all employees, set their remuneration, lease premises according to the needs of the Association, undertake all refurbishment, purchase and sell all legal titles and all goods, furnishings and real estate, use funds of the Association, and represent the Association in legal proceedings as claimant or respondent. It establishes and amends the internal procedure rules of the Association, subject to the approval of the same or their amendments at the next ordinary General Assembly. ARTICLE 14 - DELEGATION OF POWERS The following powers are vested in the Members of the Board of Governors: 14.1 Powers of the President As a rule, the President shall represent the Association in its day-to-day business. The President shall convene General Assemblies and meetings of the Board of Governors, prepare their work and agenda, and each year submit a legal and financial report of the Association. He shall represent the Association in its day-to-day business and all powers to this effect shall be vested in him. In particular, he shall have the capacity to act in legal proceedings on behalf of the Association as claimant or respondent. He may engage any expert or consultant of his choice, including on a paid basis. He may carry out his duties either through a proxy being a member of staff of the Association or delegate all his duties to third parties, legal or natural persons and private or public bodies. In the event of the President’s absence or sickness, he shall be replaced by the oldest current Vice-President, or, in the Vice-President’s absence, by the oldest Member of the Board of Governors, or, should the latter be prevented from doing so, by the Treasurer. 14.2 Powers of Vice-Presidents The Vice-Presidents enjoy powers expressly conferred on them by the President.
In the unlikely event of absence, other impediment or death of the President, the person appointed as his replacement pursuant to article 14.1 shall enjoy all powers of the President and assume all liabilities. 14.3 Powers of the Treasurer The Treasurer shall be in charge of finance and asset management of the Association. He receives revenue and effects payments under the President’s supervision. He may not dispose of assets which constitute a reserve except with consent of the Board of Governors. He either himself manages or supervises the management of regular accounts of all transactions made by him, reports about his financial management and submits annual accounts for the approval of the General Meeting. He writes, signs, accepts, endorses or acknowledges all cheques and money transfer orders for the proper functioning of accounts under the supervision of the President. 14.4 Powers of the Secretary The Secretary drafts minutes of the General Assemblies and meetings of the Board of Governors and, as a rule, all other documents concerning the functioning of the Association. He keeps a register required for this purpose by article 5 of the law of 1st July 1901 and articles 6 and 31 of the decree of 16 August 1901. He shall ensure that the formalities provided for in these articles are complied with. 14.5 Appointment of a Manager The Board of Governors may decide, if it thinks fit, to appoint a Manager. On an invitation of the President, the Manager participates in the Executive Committee and in all meetings of the Association but has no active vote. The position of a Manager of the Association may be remunerated or one of the members may exercise his functions. In the former case, his recruitment and remuneration shall be submitted for approval to the Board of Governors. The Executive Committee will determine the Manager’s functions and powers in an appointment letter, which shall be communicated at the next General Assembly. 
The Executive Committee will suggest to the Board of Governors the potential remuneration of the Manager. He shall be appointed for a duration of three (3) years, and may be re-appointed. TITLE IV GENERAL ASSEMBLIES ARTICLE 15 – STRUCTURE AND MEETINGS OF THE ASSEMBLY The Members convene in a General Assembly, either extraordinary when their decisions concern amendment of the Articles of Association, or ordinary in any other case. The General Assembly shall be made up of all Members of the Association. Only founder and active members shall have the right to vote. Donors shall only have an advisory voice. No one may be represented by a person who is not a Member of the Association, unless they have obtained consent from the members present at the beginning of the session. The ordinary General Assembly convenes at least once per year at the request of the Board of Governors, whenever it sees fit, or at the request of at least a relative majority (half plus one) of the founder and active members of the Association. The extraordinary General Assembly shall be called by the Board of Governors whenever it sees fit. ARTICLE 16 – CONVOCATION OF A MEETING AND AGENDA Meetings shall be convoked at least fifteen (15) days in advance by letter or email stating the subject of the meeting. In the case of email, the Board of Governors shall be provided with the current electronic address of every Member accepting this form of convocation. For this purpose, the stated Members shall inform the Board of Governors of all changes to their electronic address without undue delay. The agenda is set by the Board of Governors; it shall only include the Board’s proposals and those communicated to the Board in writing one month before the meeting bearing the signatures of one quarter of the founder and active Members of the Association. Meetings convene at the registered office or in any other place. ARTICLE 17 – BOARD OF THE ASSEMBLY The Board of the Assembly is composed of a President and a Secretary.
The Meeting is chaired by the President of the Board of Governors or, in his absence, by a Governor appointed for this purpose by the Board. The duties of the Secretary are fulfilled by the Secretary of the Board of Governors or, in his absence, by a Member of the Assembly nominated for that purpose. Members of the Association sign an attendance list at the beginning of the session, certified by the President and the Secretary of the Assembly. Members attending the Assembly from a distance will be counted and mentioned on the attendance list certified by the President and the Secretary of the Assembly. **ARTICLE 18 – NUMBER OF VOTES** Every founder or active Member of the Association has a right to one vote, and to as many additional votes as the number of founder or active Members they are representing. One member may, however, not represent more than three (3) Members of the Association having the right to vote. **ARTICLE 19 – ORDINARY GENERAL ASSEMBLY** The Ordinary General Assembly receives from the Board of Governors a report concerning its management and the legal and financial status of the Association; it approves or adjusts the accounts for the past fiscal period; votes on the budget for the next fiscal period; elects Governors; authorises all purchases of real estate pursuant to the object of the Association, all exchanges and sales of this real estate as well as applications for mortgages or all loans; and, as a general rule, it decides about all issues of common interest and those which are presented by the Board of Governors, with the exception of those which involve any amendment of the Articles of Association. The General Assembly will have reached a quorum if at least three (3) Members having the right to vote are present or represented. The decisions are taken by a relative majority of the votes of the founder and active Members who are either present or represented.
**ARTICLE 20 – EXTRAORDINARY GENERAL ASSEMBLY** The Extraordinary General Assembly may amend all provisions of the Articles of Association; it may, in particular, decide about the early dissolution of the Association or its cooperation with other associations. The General Assembly has reached a quorum if at least three (3) members having the right to vote are present or represented. The decisions are taken by a majority of the votes of the founder and active members who are present or represented, representing at least two-thirds (2/3) of all votes cast. ARTICLE 21 – OTHER MEANS OF CONSULTATION OF MEMBERS Notwithstanding the provisions in articles 15, 16 and 17, and subject to article 18 of the present Articles of Association, decisions of ordinary and extraordinary General Assemblies may be taken by way of written consultation. The choice of this form of consultation is made by the body authorised to do so in article 15, 5th subparagraph, and by informing the Board of Governors. All means of communication (written consultation, telephone, telex, fax, video conference, etc.) may be used independently of or simultaneously with the meetings foreseen in articles 16 and 17, in order to approve decisions of the Members, subject to all Members taking part in the approval of the decisions. In this event, an act stating the text of the resolutions and the vote cast by every Member shall be drafted and signed, as the case may be, by all Members separately. Members have a period of fifteen (15) days following receipt of this letter to instruct the initiator of the consultation about their vote on every resolution, whether by registered post or by letter delivered by hand with its receipt countersigned. Every Member who has not given any instruction concerning their vote within the period specified above will be held to have accepted the proposed resolutions, which will be mentioned in the statement written according to article 22 below.
During the consultation period, every member shall be entitled to request any additional explanations from the initiator of the consultation. ARTICLE 22 - MINUTES The decisions at the General Assembly of the members shall be stated in minutes recorded in a Special Register, which may be the same as the one containing minutes of the Board, and shall be signed by the President and the Secretary of the session. Any copies of or extracts from the minutes to be produced in court or elsewhere shall be signed by the President of the Board of Governors or two (2) Governors. TITLE V DISSOLUTION - LIQUIDATION ARTICLE 23 - DISSOLUTION - LIQUIDATION In the event of voluntary, statutory or involuntary dissolution of the Association, the Extraordinary General Assembly shall appoint one or several liquidators who will enjoy the widest powers to liquidate all assets and liabilities once potential claims on existing contributions by donors, their heirs or any known holders of these rights have been settled. The net amount of the liquidation will be returned to an Association having a similar purpose or to any public or private entities with a charitable status, and which will be nominated by the Extraordinary General Assembly of the members. ARTICLE 24 – INTERNAL PROCEDURE RULES The Board of Governors may establish internal procedure rules which have to be approved at the General Assembly. These rules may potentially have as their object the enactment of various provisions not foreseen in the Articles of Association, particularly those which govern internal administration of the Association, as well as issues concerning intellectual property. TITLE VI FORMALITIES ARTICLE 25 - DECLARATION AND PUBLICATION The Board of Governors will fulfil formalities related to declaration and publication as prescribed by the law. To this effect, all powers are hereby conferred on the holder of the present original document(s). 
Done in Paris 26 April 2007 IN FOUR ORIGINAL COUNTERPARTS GROUPE DES ECOLES DES TELECOMMUNICATIONS Mr. Jean-Claude JEANNERET UNIVERSITY OF KENT Mr. David COOMBE ARCHES Mr. Jean-Paul LEFEVRE
Bragg coherent diffraction imaging and metrics for radiation damage in protein micro-crystallography H. D. Coughlan, C. Darmanin, H. J. Kirkwood, N. W. Phillips, D. Hoxley, J. N. Clark, D. J. Vine, F. Hofmann, R. J. Harder, E. Maxey and B. Abbey ARC Centre of Advanced Molecular Imaging, Department of Chemistry and Physics, La Trobe Institute for Molecular Science, La Trobe University, Victoria 3086, Australia; CSIRO Manufacturing Flagship, Parkville 3052, Australia; Stanford PULSE Institute, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA; Center for Free-Electron Laser Science (CFEL), Deutsches Elektronen-Synchrotron (DESY), Notkestrasse 85, 22607 Hamburg, Germany; Advanced Light Source, Berkeley Lab, Berkeley, CA 94720, USA; Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK; and Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439, USA. Correspondence e-mail: email@example.com, firstname.lastname@example.org The proliferation of extremely intense synchrotron sources has enabled ever higher-resolution structures to be obtained using data collected from smaller and often more imperfect biological crystals (Helliwell, 1984). Synchrotron beamlines now exist that are capable of measuring data from single crystals that are just a few micrometres in size. This provides renewed motivation to study and understand the radiation damage behaviour of small protein crystals. Reciprocal-space mapping and Bragg coherent diffractive imaging experiments have been performed on cryo-cooled microcrystals of hen egg-white lysozyme as they undergo radiation damage. Several well established metrics, such as intensity loss and lattice expansion, are applied to the diffraction data and the results are compared with several new metrics that can be extracted from the coherent imaging experiments. Individually, some of these metrics are inconclusive.
However, when these metrics are combined, the results suggest that the radiation damage behaviour of protein micro-crystals differs from that of larger protein crystals in a way that may allow micro-crystals to continue to diffract for longer. A possible mechanism to account for these observations is proposed. 1. Introduction Over the past 50 years macromolecular crystallography has been the primary method for establishing the structure of proteins. Progress with optimizing and improving protein crystallography beamlines has been such that the collection of partial datasets from single crystals that are just a few micrometres in size is now possible. Theoretical predictions based on radiation damage behaviour suggest that a complete dataset could be collected from a protein crystal as small as 1.2 μm (Holton & Frankel, 2010). In practice though, a poor signal-to-noise ratio generally prevents data collection once the diffraction pattern intensity decays by an appreciable amount. Complete datasets from samples this small can be collected, however, provided many different crystals are measured in the beam, the quality of individual crystals is sufficient to observe diffraction, and the crystal lattices are sufficiently homogeneous that partial datasets can be merged. Consequently, following successful proof-of-principle experiments, serial synchrotron X-ray crystallography (SSX) is now rapidly becoming established as a viable route to protein structure determination (Stellato et al., 2014; Gati et al., 2014). With the introduction of cryo-cooling in protein crystallography the problem of radiation damage was greatly reduced (Hope, 1988). However, the collection of X-ray diffraction data with more intense beams and from ever smaller crystals continues to make radiation damage a topic of prime importance for protein crystallography (Murray & Garman, 2002; Garman & Owen, 2006).
Furthermore, interpreting data collected from samples which have already been damaged can be extremely challenging and has partly motivated extensive efforts in understanding the effects of radiation damage in protein crystallography. Radiation damage can alter the protein structure through bond breaking (commonly disulphide bonds or removal of side chains), heating and the creation of free radicals which interact with the protein and cause damage. Whilst it is known that bond breakage (specific chemical damage) within the crystal can occur prior to the diffraction pattern being visibly altered, structural damage in the crystal eventually leads to spot fading and a modification of the measured Bragg peak intensity distribution. Higher-resolution spots generally fade faster than low-resolution peaks, implying that small spatial features are destroyed first by radiation damage, though this may also be caused by small displacements of the protein within the unit cell or by unit cell expansion. At 100 K, the rate of overall loss of resolution, i.e. global radiation damage, is essentially the same for every protein and depends on the specific reflection under consideration and its associated $d$-spacing (Holton, 2009). For macroscopic crystals the collection of a complete dataset from a single crystal opens up a range of possibilities for analysis of both specific and global radiation damage. Two of the most frequently used parameters for monitoring radiation damage are: total scattered intensity (Teng & Moffat, 2000; Leal et al., 2013) and change in the full width at half-maximum (FWHM) of the peak (Hu et al., 2004). In the present case, however, extremely small crystals (<2 μm) are being measured and real-space information recovered from the diffracting micrometre-sized crystal as it undergoes radiation damage. This places several constraints on the type of data that can be collected.
For example, in order to determine the unit cell volume, normally only a couple of images, generally 90° apart, are required. However, due to the setup used for Bragg coherent diffractive imaging (BCDI) in which only one reflection is measured at a time, combined with the rapid damage of the micrometre-sized crystals measured, determining the unit cell volume was not possible in the current experiment. Three possible metrics for radiation damage that can be extracted from coherent reciprocal-space map data are: integrated single Bragg peak intensity, change in $d$-spacing and the widths of any rocking curves. Fig. 1 shows a typical example of the data collected at 100 K in a coherent imaging experiment from a protein crystal undergoing radiation damage. The intensity of individual Bragg peaks depends on a number of parameters including diffracting crystal volume, crystal packing, crystal structure and quality, the time for data collection and the geometry of diffraction (Holton, 2009). Therefore the exact behaviour of peak intensity with radiation damage varies between different crystals and reflections. The reliability of any conclusions drawn from Bragg intensity data will therefore depend on the knowledge of these factors. In the experiment performed here, the full spot intensity is obtained by integrating over the Bragg peak as it moves through the Ewald sphere during rotation of the crystal in and out of the diffraction condition. The reflections studied all appear at similar Bragg spacings such that, for relative measurements of the integrated intensity of individual Bragg reflections, the dependence on the geometry of diffraction should be similar for all datasets. Any significant differences in the data are therefore likely to be attributed to variations in the individual crystal parameters rather than to differences in the measurement. 
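The three reciprocal-space metrics named above (integrated single-peak intensity, centre of mass for tracking the $d$-spacing, and occupied RSM volume) can be sketched in a few lines of numpy. The snippet below uses a synthetic 3D Gaussian peak and a hypothetical 1-photon background threshold; it is an illustration of the kind of extraction described, not the analysis code used in this work:

```python
import numpy as np

def rsm_metrics(rsm, background=1.0):
    """Integrated intensity, intensity-weighted centre of mass and
    occupied volume (voxels above background) of a 3D reciprocal-space
    map, mirroring the thresholding described in the text."""
    mask = rsm > background
    integrated = rsm[mask].sum()
    volume = int(mask.sum())
    coords = np.indices(rsm.shape).reshape(3, -1)
    weights = np.where(mask, rsm, 0.0).ravel()
    com = (coords * weights).sum(axis=1) / weights.sum()
    return integrated, com, volume

# Synthetic 3D Gaussian peak standing in for a measured Bragg RSM
z, y, x = np.mgrid[0:32, 0:32, 0:32]
rsm = 100.0 * np.exp(-((z - 16)**2 + (y - 16)**2 + (x - 16)**2) / 8.0)
intensity, com, volume = rsm_metrics(rsm)
```

A shift of the centre of mass between successive maps is what is later converted into a change in $d$-spacing.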
Although a detailed understanding of the underlying mechanisms for radiation damage is still an active area of research, on the basis of experimental observations, models for average spot intensity tend to find that intensity at a given resolution fades exponentially at all temperatures (Holton, 2009). Unit cell expansion has also been discussed extensively in the literature in the context of radiation damage; however, a number of authors have pointed out that the unit cell is an unreliable metric for radiation damage. Both room-temperature (Southworth-Davies et al., 2007) and cryo-crystallography studies conducted at 100 K (Murray & Garman, 2002; Ravelli et al., 2002) have shown that the rate of change of the unit cell varies for different proteins. In the present case, the behaviour of the lattice spacing, $d$, as a function of absorbed dose can be examined.

![Figure 1](image.png)
**Figure 1** Example dataset collected from a single micrometre-sized HEWL protein crystal at 100 K; (a) 3D rendering of the reciprocal-space map as a function of dose and (b) the corresponding $\theta$ rocking curves.

Although not a direct measure of the change in unit cell volume, $d$-spacing can be used to qualitatively compare the behaviour of these hen egg-white lysozyme (HEWL) micro-crystals with similar studies reporting unit cell volume changes in the literature. The FWHM of the Bragg peak rocking curve is another quantity analysed here that has previously been examined as a function of radiation damage. Although the rocking curve widths are a convolution of lattice change and non-uniform illumination, in the experiments described here it is assumed that the crystal illumination is constant as the crystal undergoes radiation damage. Therefore the change in the rocking curve width is interpreted as a relative change, which can be directly related to the radiation damage.
Through measurements of the rocking curve profile, mosaicity and elastic strain have been characterized in both room-temperature and cryo-cooled protein crystals of HEWL (Hu et al., 2004; Lovelace et al., 2006). For example, broadening of a few percent in the rocking curves collected from tetragonal HEWL crystals was observed by Hu et al. (2004) during illumination by X-rays. Although the field of radiation damage in protein crystallography is very well established, the extension of many of these ideas to micrometre-sized crystals is still an area of active research (Ziaja et al., 2012). For example, exponential decay trends have been established in the behaviour of Bragg peak intensity as a function of dose for large crystals (Wang & Ealick, 2004; Diederichs, 2006; Diederichs et al., 2003; Ravelli et al., 2003; Holton, 2009), but it is unclear whether the same trends would persist for micrometre-sized crystals. Hence, radiation damage in protein micro-crystallography is an issue which needs to be addressed not only in order to plan experiments but also to correctly interpret the data. In this work, high-resolution Bragg peak images are recorded on an area detector whilst rocking the crystal in and out of the Bragg condition. The rocking curve profiles can be studied along any direction in reciprocal space, and the overall reciprocal-space map (RSM) (3D Bragg intensity distribution) volume changes during radiation damage can also be examined. A critical difference here compared with previous work in analysing rocking curves to study crystal perfection and radiation damage is that the evidence from the present experiments suggests that the rocking curve width in the present case is dominated by the shape function of the crystal. Previous reports in the literature have looked at crystals orders of magnitude larger than the ones studied here, for which convolution with the crystal's shape function causes minimal peak broadening.
Previous radiation damage investigations for micrometre-sized protein crystals have also indicated that, for an isolated crystal, the radiation damage behaviour may be different compared with larger crystals (Sanishvili et al., 2011). One explanation for this is that, once the dimensions of the crystal become smaller than or comparable with the primary photo-electron mean free path, radiation damage may be mitigated (Stern et al., 2009; Nave & Hill, 2005; Moukhametzianov et al., 2008). A similar line of reasoning is given in published reports showing that the use of micrometre-sized beams with large crystals reduces the rate of radiation damage (Sanishvili et al., 2011; Finfrock et al., 2010, 2013). For example, Finfrock et al. have specifically discussed the spatial dependence of dose using a line-focused microbeam showing that, for the 18.6 keV X-rays used in their experiment, at around 1 µm there is a region of higher deposited energy. Although the data in that paper [specifically Fig. 5 in Finfrock et al. (2010)] need to be adjusted to account for the lower X-ray energy and actual beam size used here, this offers a possible route to interpreting data obtained in the present experiments. The same reduction in radiation damage is not necessarily expected, however, if the volume illuminated includes the surrounding cryo-protected solvent. This is because the photoelectrons generated in this solvent deposit energy in the crystal via secondary effects. In this present study the BCDI technique is used to study radiation damage effects in micrometre-sized protein crystals. BCDI involves iteratively recovering the phase of the scattered intensity associated with individual Bragg reflections, enabling a direct transform and recovery of an image of the crystal (Robinson & Vartanyants, 2001). 
For a coherently illuminated crystal, the continuous diffracted intensity around each Bragg reflection is described by (Robinson & Vartanyants, 2001; Abbey, 2013) $$I(\mathbf{q}) \propto \left| \int \rho_1(\mathbf{r})\, s(\mathbf{r}) \exp(i\mathbf{q}\cdot\mathbf{r}) \exp[i\mathbf{q}\cdot\mathbf{u}(\mathbf{r})]\, \mathrm{d}\mathbf{r} \right|^2, \quad (1)$$ where $\rho_1(\mathbf{r})$ is the electron density of the crystal, $s(\mathbf{r})$ the shape function describing the diffracting volume, $\mathbf{u}(\mathbf{r})$ the relative displacements of the atoms from their ideal lattice positions, and $\mathbf{r}$ and $\mathbf{q}$ are position vectors in real and reciprocal space, respectively. The use of CDI in the Bragg geometry (BCDI) is now a well established method for characterizing micro- and nanocrystals of small-molecule materials science samples (Newton et al., 2010; Pfeifer et al., 2006). However, largely due to the difficulties with applying this technique to radiation-sensitive samples, there have been relatively few reports of BCDI being applied to protein crystals. The first such report was by Boutet et al., who used the technique to study the collapse of the holoferritin crystal lattice due to radiation damage (Boutet & Robinson, 2008). This initial demonstration was followed up much later by Coughlan et al., who used BCDI to study a single HEWL crystal in both 2D and 3D (Coughlan et al., 2015, 2016). A detailed analysis of the coherent diffraction patterns collected from polyhedrin crystals was recently performed by Nave et al., who also examined the phase information retrieved using BCDI to look at the crystal imperfections at sub-micrometre spatial resolution (Nave et al., 2016). Although the BCDI approach provides no measurement of the evolution of the molecular structure, the technique allows deconvolution of real- and reciprocal-space information. This provides a new window into the global damage effects, allowing the evolution of the average crystal properties to be seen directly at nanometre resolution whilst radiation damage is occurring.
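Equation (1) can be explored numerically. The 1D toy model below (all values invented) shows how a finite shape function $s(r)$ produces fringes around the Bragg peak, while a displacement field $u(r)$ modulates the phase of the exit field; the intensity is the squared modulus of the Fourier transform:

```python
import numpy as np

# 1D toy version of equation (1): uniform density rho, a box-shaped
# crystal s(r), and a small quadratic displacement field u(r).
# All numbers are illustrative, not experimental values.
n = 256
r = np.arange(n)
s = ((r > 96) & (r < 160)).astype(float)    # finite crystal support
rho = np.ones(n)                            # uniform electron density
u = 0.02 * (r - 128)**2 / 128.0             # lattice displacement field
q_bragg = 2 * np.pi * 0.25                  # Bragg vector magnitude

field = rho * s * np.exp(1j * q_bragg * u)  # integrand of equation (1)
intensity = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
```

With $u = 0$ the pattern is the familiar squared-sinc envelope of the box; a non-zero displacement field redistributes intensity between the fringes, and this redistribution is the information that BCDI phase retrieval recovers.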
In the current work, images of the crystal's real-space electron density have been examined as a function of radiation damage and this information has been compared with more well established radiation damage metrics. It should be noted that the discussion here is mainly restricted to the electron density (amplitude) image information rather than the disorder (phase) information. This is mainly because for the six HEWL crystals studied here the phase showed little internal structure. This could be because the crystal quality was very good, or perhaps because the image resolution (tens of nanometres) was insufficient to resolve small internal variations in the lattice quality. As demonstrated by Nave et al. (2016), though, for some samples a great deal of additional useful information about defect density, mosaicity etc. can be obtained from the phase information provided by CDI. In the present set of experiments both micrometre-sized protein crystals and a beam that was slightly larger than, but comparable with, the size of the crystal were used. The results suggest that in this case radiation damage is reduced, even though the crystal is embedded in cryo-protectant and fully illuminated by the micro-focused beam. The key elements of the present study in comparison with previous radiation damage studies in this area may be summarized as follows: (i) For six micrometre-sized, cryo-cooled HEWL crystals the variation in the intensity, relative $d$-spacing and rocking curve width are compared; these are established metrics for radiation damage within the literature. (ii) The volume of the RSM and real-space images associated with these crystals obtained through coherent imaging are investigated as possible additional radiation damage metrics for micrometre-sized crystals.
Based on the collective observations, the radiation damage behaviour of micrometre-sized (in the present case < 2 μm) protein crystals is compared and contrasted with that of larger, macroscopic crystals reported in the literature. 2. Materials and methods 2.1. Sample preparation and characterization Oversampled BCDI data were collected at the Advanced Photon Source (APS) on beamline 34-ID-C; procedures for sample preparation and data collection are described in a previous paper applying BCDI to HEWL crystals (Coughlan et al., 2016). Briefly, tetragonal ($a = b = 79$ Å, $c = 37$ Å, $\alpha = \beta = \gamma = 90^\circ$, $P4_32_12$) micrometre-sized HEWL crystals were grown using the batch method. 40 mg ml$^{-1}$ of protein in 0.5 M acetic acid buffer (pH 4) was mixed with precipitant buffer [6% PEG 6K and 18%(w/v) NaCl at pH 4] at a ratio of 1:3.5 protein to precipitant. The preparation was optimized to produce micrometre-sized crystals of HEWL with a narrow size distribution. Samples were characterized following previously published protocols (Coughlan et al., 2016, 2015; Darmanin et al., 2016). An Olympus BX optical microscope was used to image the crystals. Although the size of the crystals was at the resolution limit, the optical microscope enables a large number of crystals to be imaged quickly, providing a good sampling of the overall size distribution. Electron microscopy was used to obtain a more accurate estimate for the size of a small number of representative crystals. Transmission electron microscope (TEM) measurements were made using a JEOL JEM-2010 TEM operated at 100 kV. Images were taken using a Veleta 4 MP CCD camera. TEM images were analysed using IMAGEJ (Schneider et al., 2012), an open source scientific image analysis software package. The average and standard deviation for the crystal size was calculated as 1.27 ± 0.44 μm by 1.16 ± 0.40 μm.
Optical micrographs and TEM images of lysozyme microcrystals prepared under identical conditions to those described here are given by Coughlan et al. (2016) and Darmanin et al. (2016). Using this protocol no aggregation of crystals was observed. 2.2. Data collection and RSM analysis For data collection, the samples were mounted onto MiTeGen MicroMesh crystallography loops (400/10 mesh) containing either 50% polyethylene glycol (PEG) 400 or glycerol cryo-protectant. Prior to mounting, the samples were plunge cryo-cooled in liquid nitrogen. An Oxford Instruments Cryojet, which produced 100 K gaseous nitrogen, was set up above the sample stage. The loop was mounted on a goniometer stage and data were collected using 9 keV (0.1378 nm) X-rays focused to a spot size measuring 1.7 μm by 1.3 μm FWHM with a total flux of $5 \times 10^9$ photons s$^{-1}$. The Medipix2 photon-counting detector which was used had 256 × 256 square pixels of 55 μm side length and was mounted on a diffractometer arm perpendicular to the scattering vector; an evacuated flight tube was installed to minimize air scatter along the flight path. Here three rotation angles are defined: $\theta$ corresponds to rotation about the vertical axis, $\chi$ to rotations about the incident beam and $\varphi$ to rotations in the direction of the incident beam vector, perpendicular to the other two rotation axes. For the RSM data, rocking curves were measured by placing the detector 1.7 m from the sample at the centre of the Bragg peak and rocking the sample in the $\theta$ direction through a total angular range of approximately 0.4°. The step size for the rocking curve was 0.01° with an exposure time of 5 s per step. This measurement was repeated until the Bragg peak intensity dropped below the background threshold of 1 photon measured on the Medipix2 detector. A detailed schematic of the experimental set-up including the coordinate system is given by Coughlan et al. (2015).
Due to the high doses imparted to the sample, matrix deformation leading to parasitic crystal rotations can occur, which causes instabilities in the Bragg reflection. For example, whilst aligned to the peak of the rocking curve, the reflection might move out of the Bragg condition or the centre of mass of the reflection on the detector may change. In practice, the effects of matrix deformation are quite obvious and the associated data can readily be excluded from any further analysis. In addition to rejecting data on the basis of instabilities introduced by matrix deformation, data were selected according to the quality and reliability of the BCDI reconstructions. From a random start, multiple reconstructions were repeated and only those that showed reliable and consistent convergence to the same reconstructed image were retained. The process of rejecting datasets on the basis of instabilities and lack of convergence of the BCDI reconstruction left the six datasets which are analysed in this paper. The intensity was calculated for each Bragg peak by integrating across the entire 3D peak. As the radiation damage rate has a resolution dependence, the $d$-spacings were checked for each reflection to ensure that they remained within approximately the same volume of reciprocal space. To within the angular resolution of the diffractometer, all the data, which could potentially originate from different $hkl$ values, were collected between 13 Å and 17 Å resolution. Due to the geometrical constraints of the BCDI set-up and the limited volume of reciprocal space measured, it was problematic to determine the exact position of the incident beam vector. Hence for many reflections it was not possible to measure the $d$-spacing to sufficient accuracy for precise indexing. However, the $2\theta$ values can be estimated for the reflections measured. The FWHM of the Bragg peak was calculated along the three orthogonal directions in reciprocal space ($q_x$, $q_y$ and $q_z$). 
This was achieved by summing the intensity in 2D slices and plotting this as a function of their respective angular values. Typically, only the rocking curve as a function of $\theta$ is analysed in the literature (Lübbert et al., 2004). The spacing of the fringes, which were present in all of the RSM data presented here, can be used to estimate the crystal size for a limited number of projections through the crystal. Measurements of the fringe spacing were used to check that the sizes of the crystals (at the start of the damage measurements) were consistent with the size distribution obtained via TEM analysis (Coughlan et al., 2016). To estimate the total volume of the RSM, which is sensitive to any changes in the crystal shape function as well as to the formation of crystal mosaic blocks, elastic strain and defect density, the number of voxels with a value above 1 photon was calculated. The change in $d$-spacing for the micrometre-sized crystals was determined by measuring the shift of the centre of mass of the peak in reciprocal space for each 3D Bragg peak measurement. This shift was then interpreted as a change in the lattice spacing of the crystal due to radiation damage. The conversion between the detector pixel size and the reciprocal-space pixel size was determined by $$q_x = q_y = \frac{2\pi x_d}{\lambda z}, \qquad q_z = \frac{4\pi}{\lambda} \Delta\theta \sin \theta, \quad (2)$$ where $x_d$ is the detector pixel size, $\Delta\theta$ is the rocking curve angular increment (0.01°) and $z$ is the detector-to-sample distance. The equivalent change in $d$-spacing is then determined from (Müller et al., 2002) $$q + \Delta q = \frac{2\pi}{d + \Delta d}. \quad (3)$$ 2.3. Bragg coherent diffractive imaging reconstructions and analysis In principle, once the phases have been correctly assigned to the measured diffracted intensities it is possible to perform a 3D Fourier transform to recover a 3D image of the crystal.
However, in practice the 3D reconstruction was only successful in a limited number of cases where the time to collect a complete rocking curve was much shorter than the time taken for significant radiation damage to occur. The 3D reconstruction of a protein crystal using BCDI is discussed by Coughlan et al. (2016). Reconstruction of 2D projections of the protein micro-crystal from single points on the rocking curve is, however, much easier to achieve since individual reconstructions are from data collected in seconds rather than minutes. Another key point is that the crystal sizes used for the present experiments are comparable with the coherence length at beamline 34-ID-C (Huang et al., 2012; Leake et al., 2009). To characterize and account for the effects of partial coherence the fringe visibility (which was typically >80%) was checked and an algorithm developed by Clark et al. (2012) was used, which incorporates partial coherence in the image reconstruction process (Chen et al., 2012; Clark et al., 2012). For the 2D real-space images presented and analysed in this paper, data collected at the peak of the rocking curve in the $\theta$ direction were used. To reconstruct these images, a combination of error reduction (ER) and hybrid input–output (HIO) algorithms was used. Each reconstruction is the result of averaging 30 independent reconstructions of 4000 iterations generated from a random phase starting guess. Each independent reconstruction included 210 HIO iterations, applied in blocks of 30 after 100, 500, 800, 1000, 1500, 2000 and 3000 ER iterations. The 2D support for the crystal was fixed to be a square of 2.2 μm × 2.2 μm; no refinement of the support during the reconstruction was required. In spite of careful alignment of each crystal to the beam position, there is a risk that the crystal could move out of the illumination volume during the rocking curve.
In the few cases where this happened the results were obvious: the intensity of the reflection would fall off immediately (rather than more slowly due to damage). In addition, any fringes around the central Bragg peak would rapidly disappear and the peak intensity would appear at a significantly different $2\theta$. Another indication of beam–sample misalignment is that the BCDI reconstructions fail to produce an image in these cases due to the soft edges of the beam profile. In this paper data are only presented for crystals which, based on these observations, were fully contained within the beam at all times. Consequently, BCDI could successfully be applied to all six of the datasets analysed. 2.4. Dose calculations Two methods were used to estimate the absorbed dose for the BCDI experiments. For the first method the program RADDOSE-3D was used with a default crystal density of 1.2 g ml$^{-1}$ (Murray et al., 2004; Zeldin et al., 2013). RADDOSE-3D used as inputs the incident flux ($5 \times 10^9$ photons s$^{-1}$) (http://www.aps.anl.gov/Beamlines/Directory/showbeamline.php?beamline_id=42), the X-ray energy (9 keV), the beam size, the wedge and the crystal size. The calculated dose rate for a $1.27 \times 1.27 \times 1.16$ μm crystal was 0.417 MGy s$^{-1}$. However, it should be noted that there are experimental uncertainties associated with the incident beamline flux as well as uncertainties in the size of the crystal which will impact the calculated dose. Full details of how the dose was estimated are given by Coughlan et al. (2016). In calculating the dose, the diffracting volume changes due to damage have been neglected since even attempting to estimate the dose for the part of the crystal which still diffracts would be extremely difficult.
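As a rough sanity check on the quoted dose rate, a back-of-envelope estimate can be made from the beam and crystal parameters alone. The flux, beam size, crystal size and density below are taken from the text; the protein absorption length at 9 keV is an assumed round number, and effects that RADDOSE-3D models properly (the Gaussian beam profile, photoelectron escape) are ignored, so only the order of magnitude is meaningful:

```python
import numpy as np

flux = 5e9                       # incident photons per second (from text)
e_photon = 9e3 * 1.602e-19       # 9 keV in joules
beam_area = 1.7e-6 * 1.3e-6      # FWHM footprint, m^2
cx, cy, cz = 1.27e-6, 1.27e-6, 1.16e-6   # crystal dimensions, m
density = 1.2e3                  # kg m^-3 (the 1.2 g ml^-1 default)

mu_abs = 1.0 / 2.0e-3            # ASSUMED 2 mm absorption length at 9 keV
frac_on_crystal = (cx * cy) / beam_area       # geometric beam overlap
frac_absorbed = 1.0 - np.exp(-mu_abs * cz)    # thin-sample absorption
mass = cx * cy * cz * density

# Absorbed energy per second divided by mass, in MGy/s
dose_rate_MGy = flux * frac_on_crystal * e_photon * frac_absorbed / mass / 1e6
```

This crude estimate lands within a factor of a few of the RADDOSE-3D value of 0.417 MGy s$^{-1}$, which is as close as such a calculation can be expected to get.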
To model the integrated intensity for individual Bragg peaks as a function of dose, the following exponential decay formula (Holton, 2009) was used, $$\frac{I}{I_{\text{max}}} = \exp\left[-\ln(2)\frac{D}{Hd}\right], \quad (4)$$ where $I$ is the integrated intensity of the RSM after absorbing a dose $D$ (MGy), $I_{\text{max}}$ is the maximum integrated RSM intensity, $H$ is the Howells criterion which is usually given as 10 MGy Å$^{-1}$ (i.e. loss of 1 Å diffraction resolution for every 10 MGy absorbed dose) and $d$ the lattice spacing in Å. Using this formula and the calculated dose rate (RADDOSE-3D), the value of $H$ which gave the best fit to the measured intensity data for each crystal was determined. 3. Results and discussion Radiation damage in micrometre-sized protein crystals illuminated by a beam with a FWHM only slightly larger than the crystal was investigated using RSM and BCDI. The relative change in intensity, $d$-spacing, FWHM of the rocking curves and change in RSM volume were recorded as a function of the absorbed dose for six micrometre-sized HEWL crystals. In addition, from the Bragg CDI, complementary real-space information is obtained about the area of the crystal contributing to the diffracted signal. 3.1. Integrated RSM intensity The results for the integrated RSM intensity (Fig. 2) show very good consistency between datasets collected from different HEWL micro-crystals. The exponential decay of the single Bragg reflection which is observed for every crystal describes the average intensity loss of the Bragg peak which characterizes the global damage as discussed by Holton, who observed the same behaviour at all temperatures (Holton, 2009). The relative intensity, defined as the ratio of the current integrated intensity to the initial integrated intensity from the first RSM in the time series ($I_{\text{max}}$), could be matched extremely well using the exponential decay curve of equation (4).
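Determining $H$ from a measured decay series reduces, after taking logarithms of equation (4), to a linear least-squares fit. A minimal sketch with synthetic data (the $d$-spacing and the "true" $H$ are invented for the demonstration):

```python
import numpy as np

d = 15.0                          # lattice spacing, Angstrom (illustrative)
H_true = 30.0                     # Howells criterion, MGy/Angstrom (invented)
dose = np.linspace(20, 800, 25)   # absorbed dose axis, MGy

# Synthetic decay following equation (4), with 1% multiplicative noise
rng = np.random.default_rng(1)
I_rel = np.exp(-np.log(2) * dose / (H_true * d))
I_rel = I_rel * (1 + 0.01 * rng.standard_normal(dose.size))

# ln(I / Imax) = -(ln 2 / (H d)) * D, so the fitted slope gives H directly
slope = np.polyfit(dose, np.log(I_rel), 1)[0]
H_fit = -np.log(2) / (slope * d)
```

The log-linear form makes the fit robust even over the very wide dose range probed here, provided the intensity stays well above the background.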
This same general trend is also observed in macroscopic crystals at room temperature (Southworth-Davies et al., 2007) and has been discussed by Holton & Frankel (2010), who modelled radiation damage in a number of different crystals at cryogenic temperatures and showed the same exponential decay behaviour for each sample. Each data point in Fig. 2(a) requires that a full RSM is collected, which takes a finite amount of time; thus the first data points do not start at 0 s. Data are plotted on the x-axis at the times at which the peak of the rocking curve is reached, and the horizontal error bars indicate the total time taken to complete each rocking curve. A least-squares fit of equation (4) to the measured data was used to determine the Howells parameter $H$ (which characterizes the sensitivity). Since data for the different crystals were measured under nominally identical conditions it is assumed that the dose rate is the same in each case. However, it should be noted that variations of the crystal size within the beam and the neglect of the influence of photoelectron escape in the calculations mean that experimentally some differences in the actual dose may occur even though the setup did not change. The dose rate used for determining $H$ was 0.42 MGy s$^{-1}$, calculated in RADDOSE-3D using the 'Gaussian' model option. The values for $H$ determined from least-squares fitting to the experimental data are shown in Fig. 2(b). Due to the scatter in the data it is difficult to draw firm conclusions about whether there is any relationship between $H$ and the $d$-spacing, though it does appear that lower values of $H$ may occur at higher $d$-spacing. Note that in general the values determined for $H$ are above the nominal value of 10 MGy Å$^{-1}$, indicating that the intensity drop-off is slower than would normally be predicted. One possible reason for this is the escape of the photoelectrons, which is expected to reduce the rate of intensity decay. Note also that, for the very high doses used here, many of the processes associated with radiation damage will saturate, which may alter the value for $H$.

![Figure 2](image)
**Figure 2** (a) Ratio of integrated RSM intensity ($I$) to integrated intensity in the first measured RSM ($I_{\text{max}}$) as a function of dose. (b) Howells criterion determined from least-squares fitting using equation (4), keeping the dose rate fixed to the value calculated from RADDOSE-3D (0.42 MGy s$^{-1}$), plotted against the experimentally determined $d$-spacing.

From Fig. 2(a) the summed intensity for the six HEWL micro-crystals dropped to 0.7$I_{\text{max}}$ at a dose of 84 ± 11 MGy (where linear interpolation between nearest-neighbour data points was used and the error quoted is the standard deviation for the six values). This is significantly larger than the absorbed dose limit of 30 MGy (Garman, 2010; Owen et al., 2006), though it is important to note that 30 MGy is an experimental limit and not all crystals will tolerate this. However, our data extend well beyond this, up to doses in excess of 800 MGy (reached after 1900 s), a damage regime which has not been well studied in the literature. 3.2. Relative $d$-spacing The relative $d$-spacing determined by tracking the centre of mass of the RSM is shown in Fig. 3 for the individual Bragg peaks moving in 3D reciprocal space. Movement of the centre of mass of the single measured Bragg reflection for each crystal was converted to a change in $d$-spacing according to equations (2) and (3). The variability of the $d$-spacing between crystals was larger than for the corresponding intensity data; however, for five out of the six crystals the $d$-spacing is observed to increase with increasing time/absorbed dose.
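The conversion from a centre-of-mass shift to a relative $d$-spacing change, equations (2) and (3), can be checked with representative numbers. The geometry values below follow the setup described in Section 2.2 (9 keV, 55 μm pixels, 1.7 m camera length, 0.01° rocking step); the half-pixel peak shift and the Bragg angle are invented for illustration:

```python
import numpy as np

wavelength = 0.1378e-9           # 9 keV X-rays, metres
x_d = 55e-6                      # Medipix2 pixel size, metres
z = 1.7                          # sample-to-detector distance, metres
dtheta = np.deg2rad(0.01)        # rocking-curve step
theta = np.deg2rad(3.0)          # illustrative low-resolution Bragg angle

# Equation (2): reciprocal-space size of one detector pixel / one step
dq_xy = 2 * np.pi * x_d / (wavelength * z)
dq_z = (4 * np.pi / wavelength) * dtheta * np.sin(theta)

# Equation (3): a peak shift to smaller q implies a larger d-spacing
d0 = 15e-10                      # ~15 Angstrom initial spacing
q0 = 2 * np.pi / d0
shift_pixels = 0.5               # hypothetical centre-of-mass shift
q1 = q0 - shift_pixels * dq_xy
d1 = 2 * np.pi / q1
expansion = (d1 - d0) / d0       # fractional lattice expansion
```

Even a sub-pixel centre-of-mass shift thus resolves fractional lattice changes of order 10$^{-4}$, which is why centre-of-mass tracking is a sensitive probe of lattice expansion.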
For crystals 1 to 5, the average increase in $d$-spacing was 0.89% before the intensity of the reflection dropped below the background threshold value, and 0.39% at the point where the intensity dropped to half its maximum value ($I_{0.5}$). However, for crystal 6, after an initial small increase of 0.01% at $I_{0.5}$, the $d$-spacing actually decreased by a total of 0.14%. In general, the behaviour of the relative $d$-spacing as a function of dose appears less consistent between crystals than the integrated intensity loss. Interestingly, between 0 and 500 s (210 MGy) exposure time, the variation of $d$-spacing for crystals 1 to 5 is linear, in line with results reported for unit-cell expansion in the literature (Müller et al., 2002). However, the majority of previous studies have not investigated the much higher doses examined here for micrometre-sized crystals. In the data, for the majority of crystals (with the exception of crystal 6), the general behaviour is best described by a logarithmic curve. On the basis of the expansion of the $d$-spacing, the data indicate that the rate of damage in fact slows after a certain dose, although it should be emphasized that the dependence of $d$-spacing on radiation damage is highly complex and that the observed trends may well be different for different reflections. In the context of protein micro-crystallography this type of behaviour does not appear to have been reported previously. In general, in radiation studies on macroscopic protein crystals the expansion of $d$-spacing with dose has been found to follow either a linear relationship (Müller et al., 2002; Ravelli et al., 2002; Ravelli & McSweeney, 2000; Murray & Garman, 2002) or an exponential relationship (Shimizu et al., 2007). The variability of the $d$-spacing expansion between crystals makes this finding inconclusive on the basis of the $d$-spacing alone; nonetheless, the results are intriguing.

3.3.
Relative FWHM and RSM volume change

The FWHM results from the rocking curve data, along with the total volume of the RSM as a function of time, are shown in Fig. 4. In all cases, the width of the rocking curve is observed to increase. Two key factors influencing rocking widths are unit-cell variation, which can occur for example through lattice strain or extended lattice defects, and the size of the crystal. A final factor to consider is that non-uniform illumination, particularly in conjunction with a dose-dependent lattice change, could lead to broadening of the rocking curve width. One of the major benefits of having access to both reciprocal-space and complex real-space data from the crystals is in the deconvolution of some of these factors. To assess the origin of the increase in the FWHM of the rocking curve data, the reconstructed phases of the crystals were also examined (Fig. 5). In the case of the six crystals studied, the reconstructed phase was slowly varying across the crystal. Between subsequent reconstructions of the same crystal as a function of dose there was little or no variation in this phase structure. This implies that the significant changes observed in the reciprocal-space information are unlikely to be driven by variations induced in the unit cell or an increase in lattice disorder, since both these effects should manifest in the phase information. Also, the slowly varying phase structure across the reconstructions is a good indication that the incident illumination at the KB focus was relatively uniform. This is consistent with the earlier findings of Huang et al. (2012), who used scanning diffraction measurements of a ZnO crystal to recover the focused illumination profile at the same beamline under similar experimental conditions. It is worth noting that the small phase gradient observed here (particularly at the edges of the crystals) may be an artefact of beam curvature. In terms of the crystal size and shape, however, the reconstructions show that in this case the effect of a non-uniform beam structure is minimal. In all cases, although the phase information does not appear to undergo any significant changes, the apparent size of the crystal is significantly reduced with radiation damage. From this it is concluded that the increase in rocking curve widths and RSM volume, which can be tracked as a function of dose (Fig. 4d), is likely to be dominated by changes in the apparent size of the diffracting crystal. Although the data were collected beyond 1000 s, only results for which a reliable FWHM estimate could be obtained are presented; beyond 1000 s the fluctuation in intensity was too large for a quality Gaussian fit to be performed. Reports in the literature have shown that for both macroscopic and micrometre-sized crystals (Boutet & Robinson, 2006) the FWHM of the rocking curve increases as a function of dose (Hu et al., 2004); this trend is also confirmed here.

Figure 4. The relative FWHM determined from a Gaussian fit to the experimental RSM data in (a) the $q_z$ direction, with initial FWHM values for crystals 1 to 6 of 0.08°, 0.06°, 0.15°, 0.29°, 0.31° and 0.12°, respectively, (b) the $q_x$ direction, with initial FWHM values for crystals 1 to 6 of 0.09°, 0.19°, 0.11°, 0.08°, 0.07° and 0.05°, respectively, and (c) the $q_y$ direction, with initial FWHM values for crystals 1 to 6 of 0.12°, 0.16°, 0.25°, 0.10°, 0.23° and 0.06°, respectively. (d) The relative volume expansion of the 3D RSM, calculated as the total number of non-zero counts in the 3D array containing the Bragg reflection, with initial RSM volumes for crystals 1 to 6 of 1838, 864, 2569, 3217, 1198 and 1005 μm³, respectively. The dose rate determined from *RADDOSE-3D* was 0.42 MGy s⁻¹.
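The Gaussian-fit FWHM metric plotted in Fig. 4 can be illustrated with a short sketch. Rather than a full nonlinear fit, the snippet below estimates sigma from the intensity-weighted second moment of a rocking curve, which is equivalent for an ideal (noise-free) Gaussian peak via FWHM = 2·sqrt(2 ln 2)·sigma; the angular grid and peak width are invented for illustration and are not taken from the experimental data:

```python
import math

def fwhm_from_rocking_curve(angles, counts):
    """Estimate rocking-curve FWHM assuming an approximately Gaussian peak:
    sigma from the intensity-weighted second moment, FWHM = 2*sqrt(2 ln2)*sigma."""
    total = sum(counts)
    mean = sum(a * c for a, c in zip(angles, counts)) / total
    var = sum(c * (a - mean) ** 2 for a, c in zip(angles, counts)) / total
    return 2.0 * math.sqrt(2.0 * math.log(2.0) * var)

# Synthetic noise-free rocking curve with a known FWHM of 0.08 deg
# (comparable to the initial q_z width quoted for crystal 1).
fwhm_true = 0.08
sigma = fwhm_true / (2.0 * math.sqrt(2.0 * math.log(2.0)))
angles = [i * 0.002 - 0.3 for i in range(301)]               # degrees
counts = [math.exp(-0.5 * (a / sigma) ** 2) for a in angles]
fwhm_est = fwhm_from_rocking_curve(angles, counts)
```

For noisy experimental profiles a least-squares Gaussian fit, as used in the paper, is more robust than the moment estimate, which is sensitive to background in the tails.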
However, it is important to note that very few studies have looked at the variation in FWHM for more than two data points, especially for micrometre-sized crystals. In previous work, the increase in FWHM has been attributed to a corresponding increase in disorder/mosaicity within the crystal at room temperature (Hu et al., 2004). One of the primary drivers identified for this reduction in crystal quality is dehydration, which has been observed in room-temperature studies. In the present case, access to real-space images of the crystal during radiation damage via BCDI provides a significant advantage: it allows an assessment to be made of the effect of morphological changes in the diffracting volume of the crystal on the data. The final metric, the diffracting crystal area obtained through these coherent imaging studies, is discussed in the final results section.

3.4. Relative crystal area

For the work presented here, a real-space analysis of 2D projections of the six HEWL crystals during radiation damage has been performed and compared with the RSM results, in order to draw conclusions about the crystal morphology in three dimensions. The real-space area was calculated as the total area occupied by pixels having an amplitude value greater than 50% of the maximum. Regions of the reconstructed crystal images in which the amplitude dropped below 50% of the maximum value were considered to be partially disordered and were not included in calculations of the diffracting crystal area. The results of the area analysis are summarized in Fig. 5. As with the slightly larger single HEWL micro-crystal previously analysed by Coughlan et al. (2015), for each of the six crystals here there is an overall decrease in diffracting area.
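The 50% amplitude criterion used to define the diffracting area can be expressed compactly. In the sketch below, the function name and the toy 4×4 amplitude array are invented for illustration only; the thresholding logic mirrors the cut described above:

```python
def diffracting_area(amplitude, pixel_area, threshold=0.5):
    """Area occupied by pixels whose amplitude exceeds `threshold` * maximum,
    mirroring the 50% cut used to exclude partially disordered regions."""
    peak = max(max(row) for row in amplitude)
    n_pixels = sum(1 for row in amplitude for v in row if v > threshold * peak)
    return n_pixels * pixel_area

# Toy 2D reconstruction: a bright, well-ordered core surrounded by a weak
# (partially disordered) rim that falls below the 50% cut.
img = [
    [0.1, 0.2, 0.2, 0.1],
    [0.2, 0.9, 1.0, 0.2],
    [0.2, 0.8, 0.9, 0.1],
    [0.1, 0.2, 0.2, 0.1],
]
area = diffracting_area(img, pixel_area=1.0)  # only the 4 core pixels count
```

Applying the same cut to reconstructions at successive doses, as done in the paper, turns the shrinking bright core into a dose-dependent area curve.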
Since this area directly contributes to the measured Bragg peak, it can be assumed that regions which apparently 'switch off' during radiation damage must become so disordered that they no longer contribute coherently to the measured signal. Given that the crystal is surrounded by a cryo-protectant and held at $\sim 100$ K, it should be emphasized that the reduction in the real-space volume contributing to the formation of the diffraction pattern does not mean that there is actual mass loss from the sample. Rather, it indicates that parts of the crystal become so disordered that they no longer coherently diffract X-rays and instead simply contribute to a diffuse background. It is also important to note that each 2D reconstruction was performed using data collected at the peak of the rocking curve. Since each scan starts at the beginning of the rocking curve, the crystal has already received some dose before the first images are collected. The significant change in the apparent area of the micro-crystal during measurements is an important distinction from studies conducted on macroscopic crystals, which tend to show peak broadening due to the formation of crystal mosaics and defects rather than as a result of a change in diffraction volume. For large crystals (hundreds of micrometres or even millimetres across) enclosed within the incident beam, the influence of the crystal shape function is small in comparison with the overall crystal quality and mosaicity. In the case of micrometre-sized crystals, changes in the diffracting crystal volume during radiation damage clearly have a significant, even dominant, influence on the Bragg peak shape and intensity. In every case examined here, an ever-smaller area continues to diffract X-rays whilst the rest of the crystal becomes damaged or is destroyed entirely. The BCDI images allow the interior and exterior parts of the crystal at any single dose to be distinguished.
However, when trying to compare images from the same crystal measured at different doses, the translational invariance of the reconstruction makes spatially correlating the different reconstructions problematic. This means, for example, that caution needs to be applied in drawing conclusions about whether the crystal damage really occurs at the surface. Exact placement of coherent diffraction images in terms of their spatial location relative to one another would require a scanning diffraction microscopy approach, such as ptychography, to be used (Peterson et al., 2012; Vine et al., 2009). The exact mechanism resulting in some parts of the crystal preferentially suffering the effects of radiation damage has not yet been established. However, an explanation for this observation has been developed, as discussed in the next section.

3.5. Discussion summary

In summary, the key observations from the experimental data are:

(1) The integrated intensity data from the single Bragg reflections are very consistent and can be modelled using the exponential decay curve of equation (4). However, to match the experimental data, in the majority of cases the Howells parameter $H$ needed to be increased above 10 MGy Å$^{-1}$.

(2) The $d$-spacing varies linearly at lower doses but seems to reach a plateau at higher doses for some crystals (e.g. crystals 2, 3 and 4). With the exception of crystal 6, the $d$-spacing always increases compared with the starting value. An assumption often made in the literature is that an increase in $d$-spacing is directly a result of radiation damage. The $d$-spacing results imply that at lower doses the radiation damage behaviour is approximately linear, but that at higher doses there may be a 'saturation limit' at which the radiation damage behaviour changes.

(3) The FWHM along all three reciprocal-space directions $q_x$, $q_y$ and $q_z$, as well as the RSM volume, increases as a function of dose for all six crystals.
Together with the real-space images of the micro-crystals, this is taken as strong evidence that the ordered part of the diffracting crystal volume shrinks with increasing dose. Although there are a number of factors that can contribute to rocking-curve broadening (unit-cell variation, lattice strain and defects, non-uniform illumination etc.), the BCDI results suggest that the evolving shape function of the crystal dominates.

(4) From the real-space images, radiation damage appears to occur preferentially in particular regions of the crystal. Looking at the data it is tempting to suggest that the surface of the crystal is damaged more quickly than the inner parts; however, complementary characterization (e.g. optical micrographs) is required to confirm this.

A number of studies have highlighted the importance of crystal size in radiation damage at micrometre length scales. For typical crystallography experiments the primary photoelectron kinetic energy results in a mean free path which is generally of the order of 2–3 μm (Ziaja et al., 2001, 2002). This mean free path is normally comparable with, or smaller than, the diameter of the crystals being measured. In the context of XFEL experiments, it has been argued that radiation-induced damage may be reduced because the primary photoelectrons can escape through the crystal surface prior to giving up their energy in initiating secondary damage processes (Caleman et al., 2011). Similar size effects have been observed at the synchrotron, where the use of micrometre-sized beams has been shown to result in crystallographic data with a reduced damage signature because the primary photoelectron ranges are larger than the beam footprint on the sample (Sanishvili et al., 2011). In the experiments described here a small, micrometre-sized beam was incident on an even smaller sample.
This configuration has been less well studied, and appears less well understood from a radiation damage perspective, than the case of a micrometre-sized beam incident on a larger crystal. The BCDI reconstructions show that the crystal size reduces with dose. If the crystal is shrinking via surface damage, the process is likely driven by increasing photoelectron escape. In this scenario, energy from the primary photoelectrons is deposited outside of the diffracting volume. Since the crystal is even smaller than the beam footprint, the secondary electrons originating from primary events outside of the interaction volume have a reduced chance of depositing their energy inside the crystal. This line of reasoning follows arguments put forward by Sanishvili et al. (2011) and Holton & Frankel (2010), in which lower radiation damage rates are explained in the context of micro-focus crystallography experiments performed on larger crystals. For example, Sanishvili et al. studied the effect of beam size on damage rates in cryo-cooled protein crystals and found that by reducing the beam size from 15.6 μm to 0.84 μm they were able to reduce the radiation damage by a factor of three (Sanishvili et al., 2011). In their experiment, damage was greatest at the beam centre, where the photoelectric interaction rate was highest. In the present experiment, the exact opposite behaviour is apparently observed, i.e. damage to the surface prior to damage in the centre of the crystal. An important difference between the experiment reported here and that of Sanishvili et al. is that the crystals in the present case are fully contained within the beam. Hence the dose received at the edges is not expected to vary as significantly when moving towards the centre as in the Sanishvili et al. case.
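The photoelectron-escape argument can be made concrete with a deliberately simplified toy model, which is not taken from the paper: treat the crystal as a sphere, launch straight photoelectron tracks of fixed length from points distributed uniformly inside it, and take the fraction of track length remaining inside the sphere as a crude proxy for the retained dose. All numbers and the geometry are illustrative assumptions; real photoelectron transport (scattering, energy-loss profiles, the surrounding cryo-protectant) is far more complex:

```python
import math
import random

def retained_fraction(radius_um, track_um, n=20000, seed=1):
    """Toy Monte Carlo estimate of the fraction of photoelectron track length
    deposited inside a spherical crystal of radius `radius_um`, for straight
    tracks of length `track_um` launched isotropically from uniform start
    points inside the sphere. Illustrative only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Uniform start point inside the unit sphere (rejection sampling).
        while True:
            x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        x, y, z = radius_um * x, radius_um * y, radius_um * z
        # Isotropic track direction.
        cos_t = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        # Distance along the track to the sphere surface.
        b = x * dx + y * dy + z * dz
        c = x * x + y * y + z * z - radius_um ** 2
        t_exit = -b + math.sqrt(b * b - c)
        total += min(t_exit, track_um) / track_um
    return total / n

# A 2.5 um track (within the 2-3 um mean free path quoted above), first in a
# crystal much smaller than the photoelectron range, then in a macroscopic one.
small = retained_fraction(radius_um=0.5, track_um=2.5)
large = retained_fraction(radius_um=50.0, track_um=2.5)
```

Under these toy assumptions the sub-micrometre crystal retains only a small fraction of the track energy while the macroscopic crystal retains nearly all of it, consistent with the qualitative argument that crystals smaller than the photoelectron range see a reduced effective dose.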
In addition, it is worth noting that, if the current interpretation is correct, the effect observed will become more pronounced as the extremities of the crystal become more disordered and the effective area contributing to the measured diffraction decreases. This will lead to increased primary photoelectron escape from the crystal and a more even dose delivered to the surface compared with the interior of the diffracting crystal. It is also expected that not all protein crystals will behave in the same way and that radiation damage will proceed at different rates depending on the protein system studied. This was clearly identified in a previous study in which analysis of a series of diffraction data sets, measured from four native and four nicotinic acid-soaked crystals of trypsin at 100 K, showed high variability in radiation sensitivity among individual crystals for both the nicotinic acid-soaked and the native crystals (Nowak et al., 2009). Two factors not discussed so far in this paper are the density of the cryo-protectant and the beam profile. Briefly, it was considered whether the choice of cryo-protectant might influence the radiation damage behaviour of the crystal owing to small differences in density between it and the sample itself. For example, if the density of the cryo-protectant is less than that of the crystal, one might expect the ejected photoelectrons to travel further once outside the crystal, leading to a reduced amount of radiation damage. To investigate this, the same series of measurements was made on crystals embedded in cryo-solutions of varying density. It was found (not shown here) that there was no evidence of a systematic difference in the radiation damage metrics presented here for the different cryo-protectants. This was interpreted as indicating that the differences in density between the cryo-protectant and the sample were simply too small to have a measurable influence.
The second factor not taken into consideration here is the beam profile. This was characterized using knife-edge scans conducted during the experiment, together with modelling of the beamline optics, and it was found that the beam has a Gaussian profile. The beam–sample alignment was confirmed using a combination of an X-ray scintillator and a microscope to align the beam with the centre of rotation, ensuring that the crystal stayed central to the beam during data collection. However, the influence of the beam profile on the damage rates within the crystals cannot be completely excluded. If the beam profile has a strong gradient, or if there are hotspots within it, this could explain at least some of the observations made during this experiment. From previously published experiments by Huang et al. (2012), it is known that at the KB focus the beam profile resembles a Gaussian. In the present case the sample is comparable with, or smaller than, the beam FWHM. A follow-up study examining the effect of varying the beam profile and beam size whilst keeping the crystal size constant could help to determine whether these effects were a significant factor in the present case.

4. Conclusion

The results presented here summarize a series of experiments investigating radiation damage in micrometre-sized crystals illuminated with micrometre-sized beams. The aim of these studies has been to shed some light on the key scientific question of whether the radiation damage behaviour observed under these conditions matches that seen in macroscopic crystals. The coherent imaging and RSM results confirm that the diffracting volume shrinks rapidly with increasing radiation damage. This has a significant effect on the diffraction data which would not be the case for macroscopic crystals.
However, it is important to note that the dose in these micro-focus experiments (hundreds of MGy) is much higher than that typically used for conventional crystallography (tens of MGy) and, judging from the literature, is much less well understood. For the first time, real-space images of micro-crystals undergoing radiation damage can be interpreted. The results from these studies suggest that smaller crystals may have longer lifetimes in micro-focus experiments than would be predicted for macroscopic crystals. The proposed model for this is that the combination of both the beam and the crystal being smaller than the primary photoelectron escape depth leads to an ever-increasing fraction of the cascade energy being deposited in material not contributing to the diffraction signal. This model is able to explain the majority of the observations, but further studies varying beam size and crystal size are required to support or contradict this hypothesis. One open question is that of a quantitative dose for these micrometre-sized samples. Dose was calculated using details of the physical and chemical properties of the sample as well as the size and shape of the X-ray beam, but neglecting any effects of photoelectron escape. Although this calculated dose is used in the text, how accurate the estimate actually is for micrometre-sized protein crystals remains uncertain. In addition, the intensity data between crystals are remarkably consistent and can be modelled extremely well using equation (4). However, the usual value of 10 MGy Å$^{-1}$ for the Howells constant does not, in most cases, yield a good match. Why this discrepancy between the expected and 'best fit' values for $H$ exists in these experiments remains unclear at present.
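The least-squares determination of $H$ and the interpolated dose at a given intensity fraction can both be sketched numerically. Equation (4) is not reproduced in this excerpt; the sketch below assumes the commonly used exponential decay form $I(D)/I_{\text{max}} = \exp[-\ln 2 \cdot D/(H d)]$, in which $H d$ is the half-dose for a reflection at resolution $d$, and recovers $H$ by log-linear least squares. Function names and the synthetic decay values are illustrative assumptions, not the experimental data:

```python
import math

def fit_howells(doses, rel_intensities, d_spacing):
    """Recover H from I/I_max = exp(-ln2 * D / (H * d)) by log-linear
    least squares (slope of ln(I/I_max) versus dose D)."""
    ys = [math.log(i) for i in rel_intensities]
    n = len(doses)
    xbar = sum(doses) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(doses, ys))
             / sum((x - xbar) ** 2 for x in doses))
    return -math.log(2) / (slope * d_spacing)

def dose_at_fraction(doses, rel_intensities, target):
    """Dose at which I/I_max falls to `target`, by linear interpolation
    between nearest-neighbour data points (as used for the 0.7*I_max dose)."""
    pairs = list(zip(doses, rel_intensities))
    for (d0, f0), (d1, f1) in zip(pairs, pairs[1:]):
        if f1 <= target <= f0:
            return d0 + (d1 - d0) * (f0 - target) / (f0 - f1)
    raise ValueError("target fraction not bracketed by the data")

# Synthetic decay for a hypothetical H = 20 MGy/A reflection at d = 2 A,
# i.e. a half-dose of H*d = 40 MGy.
H_true, d = 20.0, 2.0
doses = [10.0 * k for k in range(1, 10)]
rel_I = [0.5 ** (D / (H_true * d)) for D in doses]
H_fit = fit_howells(doses, rel_I, d)
D_half = dose_at_fraction(doses, rel_I, 0.5)
```

With noisy experimental data a weighted nonlinear fit would be preferable to log-linearization, which distorts the error distribution at low intensities.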
In summary, the results from these combined imaging and reciprocal-space mapping experiments indicate that, for the conditions reported here, the global damage behaviour of micro-crystals is different from that of their macroscale counterparts. This finding, combined with the new insights into coherent imaging metrics, suggests the need for a new and wide-ranging series of studies to investigate the radiation damage behaviours that may be unique to protein micro-crystallography experiments.

Acknowledgements

Use of the Advanced Photon Source was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. Part of this research was undertaken on the MX2 beamline at the Australian Synchrotron, Victoria, Australia. This work was partly funded by the CSIRO Manufacturing Flagship and the International Synchrotron Access Program (ISAP) of the Australian Synchrotron. This work was supported by the Australian Research Council Centre of Excellence in Advanced Molecular Imaging (CE140100011) (http://www.imagingcoe.org/). JNC gratefully acknowledges financial support from the Volkswagen Foundation. DH gratefully acknowledges the Physical Sciences Disciplinary Research Program (DRP) of La Trobe University for financial support.

References

Abbey, B. (2013). *JOM*, **65**, 1183–1201.
Boutet, S. & Robinson, I. K. (2006). *J. Synchrotron Rad.*, **13**, 1–7.
Boutet, S. & Robinson, I. K. (2008). *J. Synchrotron Rad.*, **15**, 576–583.
Caleman, C., Huldt, G., Maia, F. R. N. C., Ortiz, C., Parak, F. G., Hajdu, J., van der Spoel, D., Chapman, H. N. & Timneanu, N. (2011). *ACS Nano*, **5**, 139–146.
Chen, B., Abbey, B., Dilanian, R., Balaur, E., van Riessen, G., Junker, M., Truscott, Q. J., Jones, M. W. M., Peele, A. G., McNulty, I., Vine, D. J., Pilkington, C. T., Quiney, H. M. & Nugent, K. A. (2012). *Phys. Rev. E*, **86**, 235401.
Clark, J. N., Huang, X., Harder, R. & Robinson, I. K. (2012). *Nat. Commun.*, **3**, 993.
Coughlan, H. D., Darmanin, C., Kirkwood, H. J., Phillips, N. W., Hoxley, D., Clark, J. N., Harder, R. J., Maxey, E. & Abbey, B. (2016). *J. Opt.*, **18**, 054003.
Coughlan, H. D., Darmanin, C., Phillips, N. W., Hofmann, F., Clark, J. N., Harder, R. J., Vine, D. J. & Abbey, B. (2015). *Struct. Dyn.*, **2**, 041704.
Darmanin, C., Strachan, J., Adda, C. G., Ve, T., Kobe, B. & Abbey, B. (2016). *Sci. Rep.*, **6**, 25345.
Diederichs, K. (2006). *Acta Cryst.*, **D62**, 96–101.
Diederichs, K., McSweeney, S. & Ravelli, R. B. G. (2003). *Acta Cryst.*, **D59**, 903–909.
Finfrock, Y. Z., Stern, E. A., Alkire, R. W., Kas, J. J., Evans-Lutterodt, K., Stein, A., Duke, N., Lazarski, K. & Joachimiak, A. (2013). *Acta Cryst.*, **D69**, 1463–1469.
Finfrock, Y. Z., Stern, E. A., Yacoby, Y., Alkire, R. W., Evans-Lutterodt, K., Stein, A., Isakovskaya, F., Kas, J. J. & Joachimiak, A. (2010). *Acta Cryst.*, **D66**, 1287–1294.
Garman, E. F. (2010). *Acta Cryst.*, **D66**, 339–351.
Garman, E. F. & Sayre, D. R. (2006). *Acta Cryst.*, **D62**, 32–47.
Gati, C., Bourenkov, G., Klinge, M., Rehders, D., Stellato, F., Oberthür, D., Yefanov, O., Sommer, B. P., Mogk, S., Duszenko, M., Betzel, C., Schneider, T. R., Chapman, H. N. & Redecke, L. (2014). *IUCrJ*, **1**, 87–94.
Helliwell, J. R. (1984). *Rep. Prog. Phys.*, **47**, 1403–1497.
Holton, J. M. (2009). *J. Synchrotron Rad.*, **16**, 133–142.
Holton, J. M. & Frankel, K. A. (2010). *Acta Cryst.*, **D66**, 393–408.
Hope, H. (1988). *Acta Cryst.*, **B44**, 22–26.
Hu, Z. W., Chu, Y. S., Lai, B., Thomas, B. R. & Chernov, A. A. (2004). *Acta Cryst.*, **D60**, 621–629.
Huang, X., Harder, R., Leake, S., Clark, J. & Robinson, I. (2012). *J. Appl. Cryst.*, **45**, 772–778.
Leake, S. J., Newton, M. C., Harder, R. & Robinson, I. K. (2009). *Opt. Express*, **17**, 15853–15859.
Leal, R. M. F., Bourenkov, G., Russi, S. & Popov, A. N. (2013). *J. Synchrotron Rad.*, **20**, 14–22.
Lovelace, J. J., Murphy, C. R., Pahl, R., Brister, K. & Borgstahl, G. E. O. (2006). *J. Appl.
Cryst.*, **39**, 425–432.
Lübbert, D., Meents, A. & Weckert, E. (2004). *Acta Cryst.*, **D60**, 987–998.
Moukhametzianov, R., Burghammer, M., Edwards, P. C., Petitdemange, S., Popov, D., Fransen, M., McMullan, G., Schertler, G. F. X. & Riekel, C. (2008). *Acta Cryst.*, **D64**, 158–166.
Müller, R., Weckert, E., Zellner, J. & Drakopoulos, M. (2002). *J. Synchrotron Rad.*, **9**, 368–374.
Murray, J. & Garman, E. (2002). *J. Synchrotron Rad.*, **9**, 347–354.
Murray, J. W., Garman, E. F. & Ravelli, R. B. G. (2004). *J. Appl. Cryst.*, **37**, 513–522.
Nave, C. & Hill, M. A. (2005). *J. Synchrotron Rad.*, **12**, 299–303.
Nave, C., Sutton, G., Evans, G., Owen, R., Rau, C., Robinson, I. & Stuart, D. I. (2016). *J. Synchrotron Rad.*, **23**, 228–237.
Newton, M. C., Leake, S. J., Harder, R. & Robinson, I. K. (2010). *Nat. Methods*, **7**, 100–101.
Nowak, E., Brzuszkiewicz, A., Dauter, M., Dauter, Z. & Rosenbaum, G. (2009). *Acta Cryst.*, **D65**, 1004–1006.
Owen, R. L., Rudiño-Piñera, E. & Garman, E. F. (2006). *Proc. Natl Acad. Sci. USA*, **103**, 4912–4917.
Peterson, I., Abbey, B., Putkunz, C. T., Vine, D. J., van Riessen, G. A., Cadenazzi, G. A., Balaur, E., Ryan, R., Quiney, H. M., McNulty, I., Peele, A. G. & Nugent, K. A. (2012). *Opt. Express*, **20**, 24678–24685.
Pfeifer, M. A., Williams, G. J., Vartanyants, I. A., Harder, R. & Robinson, I. K. (2006). *Nature (London)*, **442**, 63–66.
Ravelli, R. B. G., Leiros, H. S., Pan, B. C., Caffrey, M. & McSweeney, S. (2003). *Structure*, **11**, 217–224.
Ravelli, R. B. G. & McSweeney, S. M. (2000). *Structure*, **8**, 315–328.
Ravelli, R. B. G., Theveneau, P., McSweeney, S. & Caffrey, M. (2002). *J. Synchrotron Rad.*, **9**, 355–360.
Robinson, I. K. & Vartanyants, I. A. (2001). *Appl. Surf. Sci.*, **182**, 186–191.
Sanishvili, R., Yoder, D. W., Pothineni, S. B., Rosenbaum, G., Xu, S. L., Vogt, S., Stepanov, S., Makarov, O. A., Corcoran, S., Benn, R., Nagarajan, V., Smith, J. L. & Fischetti, R. F. (2011). *Proc. Natl Acad. Sci.
USA*, **108**, 6127–6132.
Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. (2012). *Nat. Methods*, **9**, 671–675.
Shimizu, N., Hirata, K., Hasegawa, K., Ueno, G. & Yamamoto, M. (2007). *J. Synchrotron Rad.*, **14**, 4–10.
Southworth-Davies, R. J., Medina, M. A., Carmichael, I. & Garman, E. F. (2007). *Structure*, **15**, 1531–1541.
Stellato, F., Oberthür, D., Liang, M., Bean, R., Gati, C., Yefanov, O., Barty, A., Burkhardt, A., Fischer, P., Galli, L., Kirian, R. A., Meyer, J. J., Pfeiffer, M. A., S. Olson, C. H., Chervinski, F., Speller, E., White, T. A., Betzel, C., Meents, A. & Chapman, H. N. (2014). *IUCrJ*, **1**, 204–212.
Stern, E. A., Yacoby, Y., Seidler, G. T., Nagle, K. P., Prange, M. P., Sorini, A. P., Rehr, J. J. & Joachimiak, A. (2009). *Acta Cryst.*, **D65**, 366–374.
Teng, T. & Moffat, K. (2000). *J. Synchrotron Rad.*, **7**, 313–317.
Vine, D. J., Williams, G. J., Abbey, B., Pfeifer, M. A., Clark, J. N., de Jonge, M. D., McNulty, I., Peele, A. G. & Nugent, K. A. (2009). *Phys. Rev. A*, **80**, 063823.
Wang, J. & Ealick, S. E. (2004). *Acta Cryst.*, **D60**, 1579–1585.
Zeldin, O. B., Gerstel, M. & Garman, E. F. (2013). *J. Appl. Cryst.*, **46**, 1225–1230.
Ziaja, B., Chapman, H. N., Faustlin, R., Hau-Riege, S., Jurek, Z., Martin, A. V., Toleikis, S., Wang, F., Weckert, E. & Santra, R. (2012). *New J. Phys.*, **14**, 115015.
Ziaja, B., Szoke, A., van der Spoel, D. & Hajdu, J. (2002). *Phys. Rev. B*, **66**, 024116.
Ziaja, B., van der Spoel, D., Szoke, A. & Hajdu, J. (2001). *Phys. Rev. B*, **64**, 214104.
COSTA RICA, PANAMA & NICARAGUA

Travel guides · Planning your trip · Tailor-made holidays · Small group holidays · Touring holidays · Wildlife experiences · Self-drive holidays · Where to stay · Beaches · Specialist birdwatching

Planning your holiday

If you enjoy planning your holiday in detail, there is plenty to help you in this brochure. Browse the early pages on each country for inspiration. Then choose whether you prefer to travel independently on a private tailor-made trip, or as part of a small group.

Tailor-made holidays

Our tailor-made service is just that: we design your trip just for you, to reflect your tastes and budget, matched against what is available in each country. The designs in this brochure can be taken off-the-peg, or you can pick and choose from them as a starting point for your own unique holiday. Often you can choose both how you would like to travel and the level of accommodation you prefer. Call or email us with your choices and questions and we will discuss them with you and prepare a full written proposal. We can modify this as often as necessary to create your perfect trip. Meals can be included or left for you to decide during your holiday. For each day of the sample itineraries shown here, BLD (breakfast, lunch, dinner) indicates the meals that are included in the prices given in the Booking Information insert. If you would like a guide, we will arrange for a trained and experienced English-speaking guide appropriate to your interests. When you are happy with a proposal, send us your booking form.

Small group holidays

To join a convivial small group, led by a knowledgeable local guide, please see our popular Costa Rican Odyssey on p22.

Add-ons

Whichever style of holiday you choose, you can always add time at the beach or special extensions for wildlife viewing, walking, etc.

Making a booking

The Booking Information insert included with this brochure covers dates, prices, and how to book.
(If yours is missing or has become out of date, please call us for a replacement or download it from our website.) It's good to know that when you book your holiday with Geodyssey you not only get the benefit of our in-depth knowledge of our destinations, built over many years of making travel arrangements to Latin America, and our up-to-date knowledge of the best places, old and new. You also get our experience in designing holidays for different tastes and budgets, the confidence that your money is fully protected, and the reassurance that if anything goes wrong while you are away you have a network of helpful, knowledgeable and resourceful people, locally and back in the UK, to support you. We're just a phone call away when you are planning your trip, preparing to leave, or out in your destination. When you get back we will send you a short questionnaire to make sure everything went well and to gather your comments about the places you visited. We will also ask what you think of us. More than 95% of our customers describe their overall level of satisfaction with their holiday as "Excellent" or "Good", with over 90% rating it as "Excellent". A staggering 99% rate the service that our office provides as "Excellent".

We protect ALL our customers

The air holiday packages in this brochure are ATOL protected by the Civil Aviation Authority. Our ATOL number is 5292. ATOL protection extends primarily to holiday arrangements that include air travel for customers who book and pay in the UK. Geodyssey also provides equivalent financial protection for customers who do not buy flights from us and/or who book and pay from outside the UK. Please see the Booking Information insert for more information.

Sustainable travel

We try to support local economies, minimise any harmful impact on the natural environment, and encourage conservation wherever we can. Your holiday will not only benefit you: it will also benefit local people and their communities.
We see our relationships with local hotels, guides and organisers as long-term partnerships that benefit our customers and them. To help hotels adopt sustainable practices Geodyssey has formed a partnership with the Rainforest Alliance, described below. Air travel accounts for 3-4% of global carbon emissions, but the destruction of forests has been estimated to amount to 20-30% of the total - up to ten times more. When fuel is burned, that's pretty much the end of the story, but when a forest is cut down it also reduces the planet's ability to absorb carbon from the atmosphere, and has a major effect on biodiversity with the loss of many animal and plant species. By choosing a holiday that values the environments of the tropics you are doing a great deal to support the planet too - perhaps much more than the impact of the fuel used to take you there (which you may choose to CO2 offset as well). Costa Rica, Panama and Nicaragua all protect large areas in national parks and reserves. By visiting them you are supporting their efforts in a very positive way. This brochure is printed on paper from responsible sources by a printer who follows the Chain of Custody system. Geodyssey and The Rainforest Alliance Back in 2007 we formed a partnership with The Rainforest Alliance to work towards best management practices in sustainable tourism in Costa Rica. We are very proud that we were the first travel company in the UK to form such a partnership with them for any country. It has worked so well that it has now been extended to Nicaragua and to Ecuador. The Rainforest Alliance also helps promote sustainable production of timber, and sustainable farming of coffee (look for their symbol on coffee jars in your supermarket), so it is exciting to see a similarly professional approach being applied to travel. One of the things we like best about this initiative is that it is locally based and in tune with how things work in each country. 
Hoteliers receive training and technical assistance, including workshops and seminars on labour management, health and safety, and sustainability. They are encouraged to seek certification with an appropriate body. It is a remarkable, locally-driven effort which we encourage you to support by choosing hotels which have already received accreditation, from level 1 to level 5 (the highest). Note that most accreditation schemes do not relate to the hotel's structure but to how it is operated, and some of the most sustainably-run hotels are not yet accredited. Tribal communities Meeting tribal people and other indigenous communities on their terms as an invited and welcome guest can be a wonderful and enriching experience. Their ways of life can be under great pressure, however, and it is vitally important that every member of their community is treated with great consideration, politeness and respect. We strongly encourage you to make the effort to experience the lives of different cultures in the country you visit, and to make the sort of contribution to their lives that they themselves would most welcome - personally through the respect you pay them, perhaps with a willingness to acknowledge your own society's shortcomings, as well as materially in ways they may suggest - perhaps by buying handicrafts made for visitors or with useful and appropriate gifts where needed. Hotels When describing hotels we use the following to indicate relative prices: MID-RANGE A good standard option which we think is comfortable and pleasant but without frills, at a price to suit the typical traveller. Guest bedrooms all have private bathrooms of course. UPPER RANGE Something superior marks these hotels out, such as particularly nice décor and furnishings, above average food or an enviable location, with a price to match. TOP RANGE At the upper end of what is available. A special place to stay, but at the top of the market price-wise. 
High prices do not always mean luxury facilities, but may reflect the remoteness of the location. Our personal favourites are marked with the Geodyssey logo in gold. Guides Our guides are all local people - the best person to introduce you to a country is someone who lives there. A good guide turns a successful trip into a truly memorable one with insights that foreign guides struggle to match. They are typically well educated, fluent English speakers and very experienced. They know how to make things happen locally and how to put things back on the rails if there are last minute hitches. Specialist naturalist and birdwatching guides are also available. Drivers may speak only serviceable English at best – you will have plenty of opportunity to practise your Spanish or your sign language with them! A note of caution Costa Rica, Panama and Nicaragua are all developing countries. Allowance must be made for occasional inadequacies and shortcomings; a corresponding degree of caution, flexibility, and patience will also help. Nicaragua is far less developed than Costa Rica or Panama but has its own rewards. While Nicaragua's infrastructure improves we strongly recommend travelling with an experienced English-speaking driver/guide. Welcome This brochure is part of our growing series of in-depth travel brochures for selected countries in Latin America and the Caribbean. Our aim is to provide you with a wide choice of travel and holiday ideas that bring out the best in each destination, so that you can pick the holiday that suits you the best. In each country we focus on travel experiences rather than just staying put at the beach. We highlight the distinctive places to visit, the best opportunities to see wildlife, ways to gain insights into local cultures and communities, and characterful hotels. 
There are different ways to get around too, from joining a small group with a knowledgeable local guide, to hiring a car and setting off on your own, catching special tourist buses, or having a private guide or driver all to yourself. Beaches are not forgotten—how could they be when there is such a fabulous choice for winding down at the start or end of a trip? Also included in the mix are special options like birdwatching at all levels, leg-stretching day walks, and adventurous treks. We also offer rafting, surfing and diving for beginners and intermediates, so you can blend these in as well. We bring all this together for you in a well-organised trip that makes the best use of your precious time and the budget you decide on. About Geodyssey Geodyssey is not an ordinary travel company. We started life in 1993 as a travel specialist for Venezuela, an extraordinary country for which we developed our own dedicated and personal style that many seem to like. We have grown, but we are still a small team and we really care about each and every customer. We aim to provide the best choices, excellent service, and excellent value in each country we offer. Travel is our passion, and we want to share that with you. Each of us has travelled widely in our destinations (and beyond), so if one of us happens not to have visited a particular place we offer, the chances are that someone else on our team will have been there, probably several times. At the last count we had between us visited Costa Rica about twenty times, and as this goes to press I’m setting off to Panama again to explore new ideas and revisit old favourites. It seems to work. Our customer satisfaction scores are phenomenally high, and many clients travel with us again and again. When you are deciding where to go for your next holiday you’ll want to turn to someone who really knows the area you’d like to visit. For Costa Rica, Panama or Nicaragua, we hope you’ll choose us. 
Gillian Howe, Managing Director

| Costa Rica | | Panama & Nicaragua | |
|------------|---|--------------------|---|
| Around Costa Rica | 6 | Around Panama | 32 |
| Where to see Costa Rica’s wildlife | 8 | Perfect beaches | 35 |
| Mountains of Fire | 12 | Tailor-made holidays | 44 |
| Active Costa Rica | 13 | * Panama Odyssey | 36 |
| Tailor-made holidays | 14 | * Self-drive Panama | 37 |
| * Coast to Coast | 14 | Just a week in Panama | 37 |
| * Costa Rica Nature Explorer | 15 | Panama Chill-out | 38 |
| * Just a Week in Costa Rica | 15 | Panama Adventures | 38 |
| * Creature Comforts | 16 | ‘Camino Real’ Trek | 39 |
| * Costa Rica Chill-out | 16 | Where to stay | 40 |
| * Secret Costa Rica | 17 | * Hotels for touring | 46 |
| * Costa Rican Adventures | 18 | Beaches: Slow, slower, stop | 46 |
| * Bribrí & Chira Island Communities | 18 | * Classy Chill-out | 47 |
| * Costa Rica & Nicaragua Off the Beaten Track | 19 | * Corn Islands | 47 |
| Self-drive | 5 | * Beach hotels | 47 |
| * Pre-booked Self-drive | 20 | Combining countries | 21 |
| * Freedom Self-drive | 21 | Honeymoons | 26 |
| Add-ons | 21 | Birdwatching in Panama | 40 |
| Small group holidays | 22 | * Birds of Panama | 41 |
| * Costa Rican Odyssey | 22 | * Birding the Darién | 41 |
| Where to stay | 23 | * Easy birding in Panama | 41 |
| * Wildlife lodges | 23 | * Hotels for touring, beach hotels | 39 |
| * Hotels for touring | 24 | | |
| Life’s a Beach | 26 | | |
| * Beach hotels | 27 | | |
| Birdwatching in Costa Rica | 28 | | |
| * The Birds of Costa Rica | 29 | | |

The admirable sloth No animal could be better adapted to life in the tree-tops, or seem more admirably content with its existence, than the sloth. Sloths are cousins, not of monkeys, but of anteaters and armadillos, and all three are found only in the New World. 
Hanging upside down high in a tree, they reach out with sinuous arms to tear slowly but firmly at their favourite leaves. Like cows and sheep, they have several stomachs where the long process of digesting all that greenery can take place. Even so, there is not much energy to be had from such a diet, so the sloth doesn’t waste any. Their movements are preternaturally slow, of course, and they like nothing better than sunbathing, especially in the morning, to warm their tummies and help speed their digestion. Amazingly, they are confident swimmers. Once a week they clamber down to the forest floor, poke a hole in the earth, make a discreet deposit, and slowly make their way back up. A howler monkey, whose diet is similar, would accomplish the equivalent task on the move, high in the trees (and will deliberately do so on your head if he doesn’t like you). Why does the sloth go to all that bother? The answer seems to be that the sloth is doing some gardening. It doesn’t roam far, spending its life on perhaps 40 individual trees, but takes about 10% of their output of leaves - a huge proportion. Mineral nutrients are hard to come by in the forest so by returning a proportion (perhaps as much as half) accurately to the roots the next leaf crop is given a boost. Yet another reason for admiring the sloth. Costa Rica is a jewel of a country. Within a small area it has abundant wildlife, wonderful scenery, fine beaches and much more. It is easy to get around and there are some excellent places to stay. Most people visit Costa Rica for the experience of nature that it offers. It is incredibly rich in biodiversity, with over 5% of all the species on earth to be found in an area a fifth the size of the UK. Wildlife reserves and national parks cover 25% of the country, helping to maintain these precious natural wonders. Getting to see the natural side of Costa Rica has been made easy with numerous park trails, elevated walkways in the canopy and river boat trips. 
Within this small space there is also a great variety of scenery: mountains tipped with misty cloud forest, lowlands swathed in rich rainforest, dry and dusty ranch lands, long beaches and rocky coves. There are impressive volcanoes, including one that spews red lava on an almost daily basis, wild rivers tumbling through narrow gorges, and country roads that wind through sleepy villages whose farming families grow coffee, flowers, or fruits. Costa Rica is a great place for touring, with a local guide, in a group or on your own, or for just taking off in a hire car and going at your own pace. If you want a lively time then adrenaline is available in plenty, with zip-lines high through the forest, surfing on Pacific rollers, whitewater rafting, trekking and horse riding all easy to find. If you prefer to slow things down, there are plenty of lovely beaches of all kinds, and boutique hotels with spas to pamper you and swimming pools to laze by. Often called the Switzerland of Latin America, Costa Rica is peaceful and well organised. It has no army, educational and health standards are relatively high, and English is widely spoken. There is a good choice of accommodation, ranging from well-kept small guesthouses and eco-lodges to ultra-stylish boutique hotels. It’s a great destination for first-timers to Latin America, superb for wildlife enthusiasts, wonderful for families with older children, and excellent for adventurous honeymooners. Pura vida! From top to bottom, Costa Ricans call themselves ‘Ticos’ — a nickname that instantly conveys the friendly simplicity and open-heartedness that you will find throughout Costa Rica as you travel. Wherever you go you’ll find a love for life and nature and a determination to make the most of whatever lady luck brings. It’s summed up in the phrase ‘Pura vida!’ — literally ‘pure life’. You can say it when someone asks how you are, when you hear good news, or just any time you’d like to say something positive. Pura vida! 
You’ll sometimes hear a slang called pachuco, largely unintelligible to the outsider. If you don’t mind not understanding the answer, you might try ‘Pura vida mae!’ — mae being the equivalent of ‘mate’ or ‘dude’ in pachuco. You’re sure to get a surprised laugh and a cheerful welcome in return. Around Costa Rica Mountain sierras strung with volcanoes run the length of Costa Rica, creating some of its most dramatic scenery between the long shorelines of the Caribbean to the east and the varied coastline of the Pacific to the west. CENTRAL VALLEY The hub of Costa Rica is the Central Valley, a wide plateau ringed by mountains and volcanoes and home to 70% of Costa Rica's population. Its spring-like climate is perfect for the many coffee plantations, market gardens and fruit fields that chequer the landscape. There is lots to see and do in and around the Central Valley and it is well worth spending a few days there. Some of the many sights are shown in the panel on this page. As well as San José, three of Costa Rica's larger towns, Alajuela, Cartago and Heredia, are also in the Central Valley, along with many smaller towns and villages. It's a busy place, in contrast to almost the whole of the rest of the country. SAN JOSÉ Fully a third of the country's population lives in San José itself, a bustling, congested modern city in the middle of the Central Valley, where most international flights arrive. It is worth a short visit for a taste of local life and to take in some of the principal sights: - the National Theatre is the most lavish building in the capital. Funded by a coffee tax in the late 19th century, its baroque interior parades the wealth of the coffee planters in neocolonial style. - the Pre-Columbian Gold Museum is an underground museum containing thousands of gold artefacts from as early as 500 BC. 
On show are body ornaments, bracelets, earrings, chest plates, little bells, intricately worked representations of local animals, and delicate figurines. A gallery in the foyer shows the work of contemporary Costa Rican artists. - the Jade Museum holds the largest collection of jade carvings in the Americas and has displays of pre-Columbian art, pottery and sculpture. The museum is on the top floor of an office building and so has good views over the capital. - the Museum of Costa Rican Art contains a small collection of 19th and 20th century painting and sculpture by national and international artists with changing exhibitions. It is located in a large, well-planted park near the city centre. There is a good choice of hotels in the city and the surrounding countryside—see p24 for examples. CARIBBEAN SLOPES East from San José, the road climbs into the mountains, passing volcanoes left and right, and descends through lush rainforest on the Caribbean slopes of the Cordillera Central. These days it's a short drive to the coast—just a couple of hours or so—but it was practically inaccessible from San José until the arrival of the railway at Puerto Limón in 1890. This heralded Costa Rica's banana boom, with plantations replacing swathes of forest behind the long sweeps of Costa Rica's wild Caribbean shores. Jamaican workers brought a West Indian flavour and today Puerto Limón and beach communities in the south have an easy-going African-Caribbean feel. To the north of San José and the Cordillera Central a wide triangle of lowland runs across to the Caribbean and up into Nicaragua. There are some very special places for wildlife in this region, including Sarapiquí (p9), Caño Negro, Maquenque and the flooded forest of Tortuguero (p11). PACIFIC SLOPES & OSA PENINSULA Westwards, it is an even shorter journey from San José to the Pacific ocean, reaching the sea close to the mouth of the Gulf of Nicoya. 
Into the Central Valley There are characterful small towns and villages dotted around the Central Valley, and other sites easily reached on day trips from San José. - Grecia's church is made entirely of iron—a novel approach inspired by the fate of its predecessor, which burned down. The parts were forged in Belgium and bolted into position in the 1890s. - Neighbouring Sarchí is a centre of folk art, chiefly colourful kaleidoscopic designs that were traditionally painted on ox-carts. The most elaborate convey the bride and groom from church. - Zarcero's town square is jam-packed with topiary clipped into arches—a surreal photo stop. - La Paz waterfall & butterfly garden A popular attraction with 5 waterfalls, large butterfly enclosures and hummingbird gardens. - Lankester Botanical Gardens holds an internationally renowned collection including 800 orchid species. Peak bloom is between February and May. Excellent planting of heliconia, bromeliads, palm, ferns, cacti and bamboo in extensive gardens. - Irazú and Poás Volcanoes: see p12. - The Orosi Valley, shielded by steep-sided mountains, is a very pretty place with two colonial sites of interest: the church of Orosi, one of the few to have survived Costa Rica's earthquakes, and the ruins of Ujarrás, the country's oldest church from 1575. - Coffee plantations: Costa Rica produces some of the best coffee in the world (for the national economy it is truly the grano de oro—the golden grain). The most famous plantations are Britt and Doka and both offer popularised tours of their fields and the process of coffee production. Several smaller coffee fincas provide their own, more down-to-earth, experience. Going south from here, good beaches begin almost immediately but are at first rather busy, being the closest to San José. Beyond Jacó things start to ease, and soon you reach Quepos and the beautiful Manuel Antonio National Park, where verdant tropical forest opens onto picturesque white sand beaches. 
Wildlife is good here too: see p9. Southwards beyond Manuel Antonio lie quiet natural beaches and the sleepy villages of Dominical and Uvita. Some stylish boutique lodges have recently appeared on this part of the coast, while the good-hearted Hacienda Barú is also a national wildlife refuge rising from beach to mountain ridge. Further still lies the Osa Peninsula, jutting into the Pacific Ocean. Thanks to its remoteness, heavy rainfall and dense jungle it ranks as one of the most biodiverse places in the world. More on p9. **CENTRAL HIGHLANDS & THE SOUTH** The long Pan American Highway heading south from San José ascends the highest pass in Costa Rica, Cerro de la Muerte, a cold and windy place of Andean-type páramo vegetation and low oak trees. The tranquil cloud forests of San Gerardo de Dota are tucked away close by below, see p10. Descending through the Talamanca mountains, the road continues south to the border with Panama. This little-visited region is home to the Wilson Botanical Gardens, where the Organisation for Tropical Studies has a research centre, with accommodation for scientists and visitors, in well-planted grounds with beautiful flowers and plantings of gingers, lilies, heliconias, bromeliads, agaves and bamboo. The star of the show is the garden’s collection of over 700 palms. A large part of the Talamanca range is protected by La Amistad International Park which continues into Panama. **NORTHWEST & NICOYA** Northwest from the Central Valley, the Central Cordillera gives way to lower ranges that continue into Nicaragua. A morning’s drive from the capital brings you to the cloud forests of Monteverde or, on the other side of the mountains, the lava-spewing Arenal Volcano, see p12. Beyond them are the ranch lands of Guanacaste, the cultural soul of Costa Rica, and the dwelling place of the country’s *sabaneros* (cowboys). 
The seasonally parched landscape with its herds of grazing Brahma cows seems a far cry from the lush forests of the south and east. Here you can experience a slice of an authentic rural Costa Rica, an area that retains its quintessential Tico feel. In fact the national dish, gallo pinto, consisting of rice, beans and herbs, originates from the Guanacaste region. At weekends it is not unusual to come across a village fiesta with be-hatted riders on their proud-stepping horses kicking up the dust. Bull-fighting Latino style (where the bull is harried but unharmed) is as popular here as baseball. Liberia, the regional capital, is a very pleasant country town worth a stop. Near its centre there are atmospheric colonial streets, partly restored, and a great many small shops providing everything from groceries, haberdashery and haircuts to complicated brightly painted agricultural machinery. Beach resorts speckle the coast of the Nicoya Peninsula in the far west of the region, see p26. --- **When to visit Costa Rica** - **Dry season** Between December and April there are clear blue skies and sunshine, particularly in the Central Valley, the highlands and the beaches of the north and central Pacific coast (Tamarindo, Nicoya, Jacó and Quepos). This is the most popular season, with Christmas, Holy Week and Easter being particularly busy. Book well in advance at these times, as the best hotels can fill early. - **Green season** In Costa Rica’s May to November ‘green’ season, mornings are typically clear, while afternoons grow cloudy and may bring rain – a short sharp burst or a couple of hours. Skies usually clear for a magnificent sunset before more rain at night. Travel is less popular, but the scenery is greener, prices can be lower, and there can be other bonuses, including wildlife events such as turtles coming to lay their eggs. September and October are the wettest months, when conditions can be tiresome. 
- **Temperatures** Costa Rica is in the tropics, so temperatures are fairly constant all year, just varying with altitude. At sea level, a tropical 30–35°C is typical, tempered by sea breezes. The Central Valley and San José at around 3800ft average a very pleasant 26°C. In the highlands temperatures can sometimes hover at around a chilly 10–13°C. Where to see Costa Rica’s wildlife To see the greatest variety of Costa Rica’s fabulous wildlife you should visit as many different habitats as you can. In many areas of Costa Rica your morning alarm is more likely to be the call of a howler monkey than the revving of a car engine. Among a wide choice of places to see wildlife, two areas really stand out: Tortuguero in the north of the Caribbean coast and the Osa Peninsula, on the southernmost part of the Pacific coast. Visits to either can be added to the beginning or end of your trip, see p21. The wildlife you will see in any area depends on the habitats found there. Costa Rica offers several good opportunities to experience three of the most significant tropical life zones: lowland rainforest, cloud forest and tropical dry forest. If you would like to see the widest range of wildlife then choose examples from each life zone rather than all the same. FLOODED FOREST Tortuguero National Park The flooded forests of Tortuguero National Park on the north Caribbean coast provide a unique experience. Boats take the place of cars, gliding along the narrow river channels between the trees. From the water, the forest presents a lush wall of green, a dense tangle of palms, mimosa, wild almond and morning glory. Sloths hang motionless in the trees by the river, warming their bellies in the sun to activate the digestion of their latest meal of leaves. Family troupes of mantled howler monkeys exchange throaty roars, while white-headed capuchin monkeys pick delicately at fruiting trees above branches where large iguanas lie motionless in the sun. 
Their cousins, iridescent emerald green Basilisk or ‘Jesus Christ’ lizards, their long crests raised, prepare to skip and dash across the water’s surface, while tree frogs tuck in their blue legs and close their bright red eyes so all that remains visible is their leaf-green skin. Stalking the water’s edge, tiger herons hunt for fish among tree roots and lianas. In the rivers, caimans lose themselves in the tangle of branches along the shore and play a waiting game. When the coast seems clear, young river otters cavort in playful groups, their parents keeping watchful guard. Among the more extraordinary creatures found here are garfish, ancient creatures with crocodilian snouts, and greater bulldog bats that glide across the water at night to pluck dreaming fish in their strong claws. There are several lodges at Tortuguero where guests stay on a full board basis and are taken out each day on shared excursions by resident naturalist guides, mostly by boat. Access to Tortuguero is by motorboat or by plane from San José. Though there is plenty of sunshine, Tortuguero’s rainfall is tremendously high all year round. The wildlife is prepared for this, as are the lodges which are well stocked with rubber boots, waterproof ponchos, and covered boats. Monkeys Among the most memorable of Costa Rica’s mammals are its 4 species of monkey. At 12-14in plus tail, the squirrel monkey is the smallest. Slender and agile, they roam the forest looking for insects, fruit and nectar from the ground right up to the highest branches. They travel in small groups, making so many squawks, whistles and chirps that they are impossible to miss. They are found so patchily that it is thought they may have been introduced from South America by man. Manuel Antonio NP is a good place to find them. White-headed capuchin monkeys are the most commonly seen, sometimes together with squirrel monkeys. 
Mid-sized (14-22in plus tail) they move in similar groups, gracefully, agilely, not calling much, but there is always plenty of movement in the branches to give away their presence. Their diet is similar, with wasps being a special favourite. Black-handed spider monkeys (aka Geoffroy’s spider monkeys) are seen occasionally; they are larger, generally black monkeys, usually seen swinging by their arms from branch to branch high up. They need large forest areas and are considered threatened. Mantled howler monkeys (pictured) are truly wonderful. They are entirely black, with a pale frosted fringe of hair on their sides or lower back. The long throaty roars made by the lead males to coordinate their groups echo for miles. You will usually find them sitting around or moving slowly (upright rather than hanging); their diet of fruit and leaves making for a comparatively stolid life. Turtles Of the world’s 7 species of turtle, 5 nest in Costa Rica. Green Turtles arrive in huge numbers at Tortuguero from June to October. Local conservation volunteers lead nightly small groups to see them. Hawksbills and Loggerheads have also been known to nest here at this time although sightings are very rare. Around the same months Olive Ridley turtles, the smallest of the 5, nest on the Pacific at Playas Ostional and Nancite. Their arrivals are timed according to the moon, with each arribada generally lasting about a week around the last and first quarter of the moon. Leatherback Turtles have been seen at Playa Grande on the Nicoya Peninsula between October and mid-March; however, numbers have dwindled massively and the chances of seeing them are now extremely small. Maquenque Costa Rica’s newest national park, Maquenque lies in the deep lowlands of the San Carlos river which seeps slowly northwards to join the Rio San Juan. The amount and variety of wildlife here easily rivals the more famous Tortuguero, although more effort and time are required to see it. 
The emblem species of this area is the endangered, almost legendary, Great Green Macaw. Maquenque Ecolodge (see p23) is the place to stay here. Maquenque is a 4 hour drive north from San José. LOWLAND RAINFOREST At Sarapiqui and other parts of the Caribbean slopes northeast of the Cordilleras, and in the far south west, conditions suit very dense tropical rainforest—very wet, essentially non-seasonal, lowland forest. Parts of the mid-Pacific and south east coast support a less drenched but still rich rainforest. Inside an undisturbed rainforest it is dark. The upper ‘canopy’ layer of foliage of mighty buttress-rooted trees blocks out the intense tropical sun. Creepers and climbers wind around their trunks in search of any light that penetrates the canopy, while twisted lianas hang down like ropes. Where the sun reaches the forest floor, fast growing species spring up in a dash for the light. Mantled howler monkey, black-handed spider monkey, white-headed capuchin, white-nosed coati, sloth, agouti, white-lipped peccaries and white-tailed deer all inhabit the rainforest along with more furtive creatures like jaguar, puma, ocelot, jaguarundi and tapir. But sight lines are short and the canopy is high, so it will be harder than you might imagine to see the animals; they will be aware of you and most will be keen to stay out of sight. It is more likely to be the small things—tree frogs, morpho butterflies, columns of leaf-cutter ants, extraordinary fungi and the plants’ often cunning and intricate defence mechanisms—that will keep you enthralled. Sarapiqui area The Organisation for Tropical Studies’ La Selva research station is widely recognised as one of the world’s leading centres for the study of lowland tropical rainforest. It offers a well thought-out series of paved trails through part of its extensive reserve. 
It is easily accessed from lodges in Sarapiqui and from San José—even in a day trip, as is nearby Braulio Carrillo NP which protects an area of similar forest. A popular attraction in this general area is the Rainforest Aerial Tram, in which four-person open cable cars soar almost silently through the canopy on the slopes of a private rainforest reserve, passing an arm’s length from epiphytes and ferns and offering a monkey’s-eye view of life in the tree-tops. The ride lasts about 1½hr and suits all ages. South Caribbean On the Caribbean coast in the southeast, near the Panamanian border, is Gandoca-Manzanillo National Wildlife Refuge which protects an area of lowland rainforest and wetlands. The lovely Almonds and Corals lodge (p25) lies within the park area. Mid-Pacific On the mid-Pacific coast, Carara Biological Reserve covers a transitional area between dry forest to the north and primary evergreen rainforest to the south. The nearby floodplains of the Río Tárcoles have wetlands rich in water birds and waders, amphibians and reptiles. An oxbow lake beneath the main road bridge over the Tárcoles is home to large American crocodiles of up to 4m. Pause here at dusk and you may be rewarded with the magnificent sight of scarlet macaw, a threatened species, flying to their roosts from feeding grounds in the forest. Villa Lapas Lodge and Cerro Lodge (both p23) are convenient for the reserve. A little further south, Manuel Antonio National Park spreads over a series of bays and headlands where breakers wash up to a pocket of rainforest teeming with wildlife. There are pristine white sand beaches, coral reefs and hiking trails. Cathedral Point is a classic tombolo: an island linked to the land by a sand spit. Most visitors take the 1km forest path to the beach. Even on this short trail there are great wildlife viewing opportunities—four species of monkey (including squirrel monkey), coati, raccoons, sloths, iguanas, toucans and parrots are regularly seen. 
There are several hotel options close to the park (an area that is becoming overbuilt, and busy in high season) and around the small but growing town of Quepos; many are on cliff tops above the Pacific, but the upscale hotel Arenas del Mar and mid-range Espadilla are by the beach (both p27). South Pacific—Golfo Dulce & Osa Peninsula At the southernmost end of the Pacific coast, the Osa Peninsula wraps around the waters of the Golfo Dulce. The rainforest grows tall here, thriving on the heavy rainfall which averages 5.5m a year and sustains an incredible variety of flora and fauna, 4% of which are endemic species. The dense forest is home to over 400 species of bird and 114 species of mammal, including such elusive ‘spectaculars’ as jaguars, ocelots and tapirs that stalk its green shadows. Much more common are troops of mantled howler monkeys bellowing from their leafy perches, capuchin and spider monkeys peering out from breaks in the treetop foliage, and sloths hanging semi-camouflaged against the verdant background. The rustle of dry leaves gives away the presence of peccaries scurrying about in the undergrowth. Iridescent blue morpho butterflies, the size of small dinner plates, dance in the sunlight filtering through the leaves. Brightly coloured pairs of scarlet macaws squawk loudly to each other in mid-air as they cross the jungle canopy. Frogs Perhaps nothing evokes Costa Rica more readily than the Red-eyed Tree Frog, although they are found in lowland forests throughout Central America and into South America. They are mostly nocturnal, preferring to spend the day tucked away beneath a leaf, where they appear completely green. Their gaudy colours suddenly flash into life when they move, an off-putting surprise to a predator. Costa Rica has many more frogs to offer, some just as colourful and astonishing – look for the Blue Jeans Frog for example (you’ll win no prizes for working out how it got its name). 
The easiest places to see the commoner species are in captivity in ‘frog gardens’ around the country. Most of the peninsula, where the forest is at its richest and least disturbed, is protected by Corcovado NP, while Piedras Blancas NP protects a good portion of the remaining forest on the mainland side of the gulf. The leading lodge on the peninsula is Casa Corcovado Lodge (p21 and 23), on the shores of the Pacific deep within the park itself, surrounded by the forest, and accessible only by boat—a spirit-lifting journey through mangroves, along the forested shores of Drake Bay, to a wet landing at the lodge’s jungle-backed beach. Danta Corcovado Lodge and Lapa Rios Lodge face inland across the gulf, and although they are accessible by road, their experience of the forest is less intense. Some of the lodges in the Dominical area offer day trips by boat into Corcovado NP from the north. You can explore the lowland rainforest of Piedras Blancas NP on the mainland from Esquinas Rainforest Lodge. Further down the coast, on the wild Burica peninsula and almost in Panama, Tiskita Lodge offers a rustic forest experience. Some of the lodges on the Pacific, including Casa Corcovado, offer boat trips to Caño Island, where a pair of perfect stone spheres mark a traditional burial ground of the Diquis Indians. As you approach the island there is a good chance of seeing bottle-nosed dolphin, bull shark, and perhaps one of three species of whale. Snorkelling and diving are also possible, though visibility can be poor. **CLOUD FOREST** On the higher slopes of the cordilleras the forests are cool and moist. Mists shroud the trees for at least part of each day, creating the conditions for orchids, bromeliads, mosses and lichens to festoon the branches. Tree ferns are common. Such ‘cloud forests’ support many of the plants and animals that are found in lowland rainforest, but they also harbour some unique species. 
Among the birds, many of the most colourful tanagers only inhabit cloud forests, along with the most spectacular bird of the neotropics, the Resplendent Quetzal (see panel). Costa Rica has four key places where you can experience cloud forest. Monteverde is the most famous and has grown very popular. If you prefer your nature in tranquillity we recommend San Gerardo de Dota, Bajos del Toro or Los Angeles. **Monteverde** Monteverde was founded in the late 1950s by Quakers from Alabama escaping the draft. It is perched high on the Atlantic slope of the Cordillera de Tilarán, northwest of San José, sheltered from constant strong winds. A bone-shaking, deliberately unpaved road winds up to it from the Pan American Highway. The settlers cleared the forest on the lower slopes for grazing, but visiting biologists found the cloud forest above the community rich in flora and fauna, and in 1972 a private reserve was created to protect the watershed and its remaining habitat; contiguous reserves have been added protecting the Santa Elena Cloud Forest and, most recently, a Children’s Eternal Rainforest. There are several lodges and small hotels to choose from, see p25. The settlement of Santa Elena, which serves the Monteverde Reserve, is a disorderly assortment of lodges and ‘eco experiences’. Access to the forest is highly commercialised, with marketing that draws tens of thousands of visitors each year. There are guided nature walks into the Monteverde Reserve and canopy walks along networks of high suspension bridges and trails with fascinating opportunities to see the forest at different levels. There is a hummingbird gallery, a butterfly garden, a serpentarium, an orchid garden, and a number of adrenalin-rush ‘canopy tours’ on zip-lines—see p13. **San Gerardo de Dota** Compared to the full-on experience of Monteverde, San Gerardo de Dota is the Garden of Eden—a quiet forested valley alive with streams that tumble down from the mountains. 
If you are lucky enough to visit when the sun is shining the valley seems truly charmed, as though you have stepped into the pages of a fairy tale. This is the most reliable place in Costa Rica to see the Resplendent Quetzal. As sightings are easiest in the morning when the weather is finest it is best to stay overnight. There are three nice lodges that also offer horse-riding, hiking and fishing. San Gerardo de Dota lies just below the highlands of Cerro de la Muerte, the highest point on the Pan-American Highway, so it can be cold at night. **Bajos del Toro** The small farming village of Bajos del Toro is noteworthy for the delightful cloud forest lodge of Bosque de Paz (p23), within a 700ha private reserve of cloud forest rising up behind it on the Cordillera Central. Alternatively you can stay at El Silencio (p24), an upmarket mountain retreat and spa near the village. **Los Angeles** The quiet cloud forest of Los Angeles on the edge of the Central Valley is not far from the pleasant mountain town of San Ramón. The place to stay here is Villa Blanca (p24), with a short trail near the lodge that provides a taste of the cloud forest. During March the lodge provides transport to a neighbouring reserve for the chance to see quetzals. **DRY FOREST** West of Costa Rica’s cordilleras the climate becomes progressively drier as one travels north through Costa Rica and into Nicaragua. The dry season in the northwest corner of Costa Rica is very pronounced. To minimise water loss during this period of drought, woodland trees such as the *guanacaste* (the national tree of Costa Rica), the startlingly red-flowered *flamboyán* and the ‘naked Indian’ or *gumbo-limbo* shed their leaves. Such deciduous dry forests are scarce in the tropics and can be very good for wildlife viewing, particularly when the leaves are off the trees. Black-handed spider monkey, white-headed capuchin monkey, coati, tamandua, agouti, magpie-jay, toucan and long-tailed manakin can all be seen quite readily. 
The principal dry forests in Guanacaste, now protected against clearance for ranching, are found in the three national parks of Santa Rosa, Guanacaste and Rincón de la Vieja. There are also good opportunities to see tropical dry forest in Nicaragua, the private Domitila reserve (p45), just across the border, being a good example. **Santa Rosa NP** Santa Rosa NP is home to several mammals including armadillo and white-tailed deer, as well as 253 species of bird and some 3,140 species of butterflies and moths. Trails wind through the park, including the Quebrada Duende trail which passes petroglyphs carved by indigenous peoples. National parks and reserves 25% of Costa Rica’s land area lies within national parks and other state reserves, of which about two-thirds receive active protection. Additionally all mangroves have been put under state ownership and protection. National parks charge a modest entry fee. There are also a good many private reserves, like the excellent cloud forest reserve at Bosque de Paz. The Resplendent Quetzal The spectacular iridescent plumage of the Resplendent Quetzal, a bird of the cloud forest (see p10), made it the sacred bird of the Maya. The male has a crested head, glittering green back, maroon lower breast shading to a bright crimson belly and dramatic 25” streamers that trail as it flies. Like other trogons, the quetzal is a secretive forest bird, but often stays on a favoured perch for a long time—giving good opportunities for lengthy views. It is easiest to see in the March–June nesting season. Rincón de la Vieja NP A handful of lodges near Rincón de la Vieja offer rustic yet comfortable bases from which to explore the area. Hacienda Guachipelín (p25) is an adventure lodge and cattle ranch, where you might start the day watching the cows being milked before embarking on a day of adventures. 
Optional activities on offer include guided nature walks, horse riding in the national park, ranching cattle, natural mud baths, and full day hikes up the volcano itself. WETLANDS Costa Rica has seven RAMSAR Sites (wetlands of international importance), whose wildlife typically includes aquatic and wading birds such as anhinga, roseate spoonbill and the threatened jabiru, plus mantled howler monkey, white-faced capuchins, sloth and caiman. They include the flooded forests of Tortuguero and Maquenque (p8) and two others of special interest. Palo Verde Long known to birders, Palo Verde National Park is most productive in the dry season, when this seasonally flooded wetland, set amid the arid dry forests of the north Pacific province of Guanacaste, becomes an oasis for migrant and resident birds. Caño Negro Also popular with birdwatchers, Caño Negro Wildlife Refuge, north of Arenal near Los Chiles, consists of a seasonal lake and surrounding marsh. Costa Rica’s life zones This map, based on WWF data, shows in broad terms where Costa Rica’s life zones are to be found—when nature has been left to itself. Farming and other human activities have diminished the areas that retain their natural ecosystems. Lowland forest swathes both sides of Costa Rica’s central mountain ranges, often right down to the sea. Shown in green on the map, the darker shades correspond to lower elevations, with the flooded forests of Tortuguero in the northeast among those shaded darkest. Much depends on rainfall, so lowland forest on the drier Nicoya Peninsula is in reality much less dense than on the wetter Osa Peninsula, with many gradations between the two. Cloud forest, shown in blue shades, occurs at higher elevations—notably at Monteverde, Bajos del Toro and San Gerardo de Dota. Higher still, the vegetation turns to páramo (a high altitude moorland), in areas of the palest blue on the map. Tropical dry forest is shown in brown. It occurs in the northwest. Mangrove dots the coast and is shown in pink. 
Within each area there may be many pockets where habitats are different through local influences. Photos 01 Massive forest giants like this one, towering through the canopy of other trees, are a feature of Costa Rica’s lowland rainforests. 02 Flooded forests, of which Costa Rica’s Tortuguero National Park is a great example, are especially rich in wildlife that is easy to see. 03 Canopy walkways are an excellent way to experience the forest. There are good ones at Monteverde and at Arenal. 04 Cloud forests swathe the higher elevations of Costa Rica’s mountain ranges, thriving in the moisture-bearing air from the Pacific or the Caribbean. Costa Rica’s 200 volcanoes include some of the most accessible and dramatic active volcanoes in the Americas. IRAZÚ AND POÁS Irazú and Poás volcanoes can each be visited easily on day trips from San José, with roads that take you to within a few hundred metres of their craters. Although both are classed as active, they seem content with the occasional burst of steam and gas from their vivid crater lakes. You’ll need a prompt start to arrive at the craters before the clouds roll in for the day, usually around 10am. Irazú looms above the city of Cartago and at 3432m is the tallest volcano in Costa Rica. It has four dramatic lagoon-filled craters. The main crater is just over 1km wide, with vertiginous walls 300m deep and a sulphurous green lake at the bottom. Be prepared to be blasted by cold winds at the top, where the bare pumice creates a moonscape effect. On a very clear morning it is possible to see both the Atlantic and the Pacific from here. 37km north of Alajuela, Poás is a strombolian volcano—a conical shape created by a long succession of non-catastrophic eruptions. Its vast crater is 1320m wide and 300m deep. At the bottom is a circular hot lake. It is reached by a scenic drive from San José, first passing a region of coffee cultivation, then through cloud forest. 
The final walk to the crater is in a stunted elfin forest and areas with little or no vegetation apart from arrayán, a bush with very leathery leaves, and occasional large-leafed ‘poor man’s umbrella’, *Gunnera insignis*. TURRIALBA The easternmost of Costa Rica’s active volcanoes and one of the largest, Turrialba stands at 3340m and is covered in cloud forest vegetation. This stratovolcano has three craters at the upper end of a broad, wide summit depression. It is possible to hike or drive up to the rim, where you can walk some of the way around the craters’ edge. Since a series of eruptions in 1866, Turrialba has been quiet, though its steaming craters hint at its explosive potential. Warm rain gear is recommended at the summit, which can often be damp and chilly. ARENAL Arenal Volcano rises in a perfectly symmetrical cone above the town of La Fortuna. Arenal is one of the region’s most consistently active volcanoes, though even it has periods of relative slumber. When active it spews almost daily outpourings of incandescent lava and mushroom clouds of gas and steam, and ejects hot boulders that bounce hundreds of metres down its slopes—all helping to ease the pressure deep below the volcano, where the Cocos plate is being driven under the Caribbean plate at a rate of 9cm a year. In its active periods Arenal’s performances are most impressive on a clear night, when red-hot lava can be seen flowing from the top of the cone. In the day, ash clouds billow up from the crater and there are dull thuds and rumblings from deep within. RINCÓN DE LA VIEJA Sulphurous vents and bubbling mud pots spatter the dry forests of Rincón de la Vieja National Park, evidence of the volcanic activity deep underground. Above it all rises Rincón de la Vieja, a 1816m stratovolcano whose 400km² bulk includes nine eruption points, one of which is still active. South of the active crater is a large freshwater lagoon, Los Jilgueros—a good place for Black-faced Solitaire and Baird’s Tapir. 
Around Arenal The Arenal area is worth stopping in for a night or two; there is a good selection of things to see and do: - **Hot springs** Arenal itself can be frustratingly obscured by cloud, but even when visibility is poor you can enjoy these attractive, popular, open-air thermal baths a short distance from the foot of the volcano, relaxing in warm sulphurous waters until well into the night, often with the volcano’s rumblings as a soundtrack. - **Hanging Bridges** The lowlands near to Arenal Lake are swathed in forests that can be explored on a series of paths and suspended walkways known as ‘The Hanging Bridges’. Gently sloping paved trails meander through the shaded forest, opening out at set intervals on to footbridges suspended over the forest canopy, giving great views across the valley to the volcano. It is not uncommon to see families of howler monkeys resting in the tree branches on a hot day, or toucans surveying their forest domain. - **Arenal’s eruptions** Arenal’s bubbling activity is characteristic of a strombolian volcano, and should make Arenal safe from catastrophic eruptions. Active phases can last years, with occasional peaks such as in August 2000, when in a day of thunderous explosions Arenal ejected 20 outflows of gas and rock and a 1km high column of ash. Even in its quieter periods the volcano is seldom silent. When active the show can be seen very well from safe distances, but real dangers confront those who venture off-limits. Poisonous gas and incandescent avalanches claim the lives of the foolhardy. Active Costa Rica Fidgety after just one morning on the beach? Help is round the corner. Ziplines, rafting, surfing, hiking – Costa Rica is a huge adventure playground for grown-ups. EXPLORING THE CANOPY Getting high up into the canopy is a fascinating way to experience the life of the forest and there are several ways to do it in Costa Rica. 
One good option is to walk on hanging bridges suspended on cables through the forest. There is a lovely suspension bridge at Sarapiquí and aerial tours at Monteverde, Braulio Carrillo National Park and Arenal, to name only some. Cable cars, such as the Rainforest Tram, are an even easier alternative. For a high adrenaline experience, zip wires are the only way to fly. Strapped in a light harness with pulley attached, you are sent whizzing along a succession of cables strung between platforms set on trees or metal pylons. Usually the lines are among or below the tree tops, but the final stages of Sky Trek at Monteverde are very long, fast and high – you can be in the cloud as you zoom along. Also called ‘canopy tours’, ziplines are available in many parts of the country. WALKING AND HIKING Many national parks have good trails for general walkers, especially those in the cooler air of mountains and volcanoes. Volcán Irazú, Monteverde to Arenal, Bajos del Toro and Rincón de la Vieja National Park are also great for longer day walks. See the Cerro Chirripó panel for a stretching and rewarding 4-day trek. WHITEWATER RAFTING There is some wonderful rafting in Costa Rica, most notably on the Pacuare and Sarapiquí rivers. Grades I and II are suitable for beginners; grades III to IV are for the slightly more experienced (and can usually be undertaken on your second day of rafting); above that you need to be fully trained and experienced, which is beyond our scope. Safety briefings and basic training are given on the spot, to which you must pay close attention. Rafting is not only great fun but can take you through incredible scenery in locations only accessible by river. In the less hectic moments you float down the river in perfect bliss. RIDING AND RANCHING Horse riding is available in many parts of Costa Rica, so we can fit as little or as much as you want into your trip. Latin Americans are less precious about riding than the British, making it so much more fun, though riskier. 
Rounding up cattle in Rincón de la Vieja, galloping along the beach at Nosara or pottering around on the slopes of Turrialba are some of our favourite riding experiences. SURFING & SUP Costa Rica’s Pacific coast has plenty of great surfing beaches, and most resort towns offer surfing lessons to get you going. Try it! Tamarindo is the place most associated with surfing in Costa Rica, but for a laid back natural experience head for Nosara or Sámara. There are good places on the south Caribbean too. Stand-up paddle boarding (SUP) is really catching on. You’ll find it in most of Costa Rica’s established surfing spots, with easy flatwater SUP on Lake Arenal and mangrove areas at the coast. You may even get to try SUP yoga! DIVING There is a great scuba location at Isla del Caño (sharks almost guaranteed), and diveable reefs at Cahuita and Gandoca-Manzanillo on the Caribbean. Beware currents. Mainland Costa Rica doesn’t offer enough for a purely diving holiday, unless you can include the world-class Cocos Island (see panel). Cerro Chirripó trek Cerro Chirripó in the Cordillera de Talamanca has the highest peak in Costa Rica at 3819m. Protected in a national park, it’s a rugged landscape with great views, lakes and cirques. We arrange 4-day treks to the summit also taking in Mount Toribo, impressive rock formations known as the Crestones, and Lion’s Valley savannah. You need to be fit already, of course, but this is a great way to stretch yourself during your holiday in quite different scenery. Cocos Island diving Cocos Island, 500km off the Pacific coast, is an uninhabited rainforested island. Its waters explode with life, including innumerable white tip reef sharks, schooling hammerheads, dolphin, manta and marbled rays, giant moray eel, sailfish, occasional whale shark, large schools of jacks and tuna, marlin, and more. A UNESCO World Heritage Site, the island was described by Jacques Cousteau as “the most beautiful island in the world”. 
It was nominated as one of the seven wonders of the world and is among the top ten ocean dive locations. Its remoteness means few have dived it, but we offer a week-long live-aboard trip that makes for an amazing experience. Costa Rica has the most options for the traveller of any country in Central America. The suggestions on these pages are a starting point and can be adapted in many ways. Browse through them to see which appeal to you the most, then contact us to talk through your ideas. Where to stay There is a wide choice of hotels and lodges, from stylish to rustic, from urban to wilderness, with well-kept family-run hotels being the most plentiful. See pages 23-25 and 27 for a small selection of the nearly 200 hotels we offer. Getting around There are four sensible options for getting about in Costa Rica: - **Private guided touring** An English-speaking guide, who will usually also be your driver, accompanies you between destinations and on excursions in each place. You can sit back and relax while you travel, gain some real local insight, and make the most of your time. - **Private transfers** An experienced local driver (not necessarily English-speaking) collects you from your hotel and transfers you to your next destination. There are no travel worries, it is a private service, and you have your independence in each location. Pick-up times can be adjusted to suit you. - **Shuttle-bus** This is the most cost effective option, making use of a well-developed minibus transfer network between popular locations. A minibus seating about 12 people collects you from your hotel and drops you at the door of your next hotel. You share the journey with other visitors, not necessarily from the UK. Departure times are fixed and there may be some waiting. This option works well when travelling between the main tourist regions, but private transfers are required for the more remote destinations. - **Self-drive** A hire car is a great way of enjoying Costa Rica. 
Distances are relatively short, but many minor roads are unpaved, so you will need a relaxed approach and a sense of adventure. SatNav/GPS is a great help. Whatever your mode of transport, we can pre-book your local excursions for you to help make best use of your time in an area. Food and drink Hearty wholesome food is the order of the day. Usually a little plain for European tastes, menus tend to rely on national favourites or standards like pizza and pasta. Few would choose Costa Rica for a gourmet experience, but that’s not to say that you won’t find delicious dinners here and there. Small group holidays As an alternative to a tailor-made holiday, our Costa Rican Odyssey (p22) is an excellent way to see the country with everything taken care of, in the company of like-minded travelling companions, and escorted throughout by a naturalist guide. Coast to Coast One of the best all-round tours of Costa Rica, visiting the Caribbean side and the Pacific, Arenal volcano, the flooded forest of Tortuguero, rainforest at La Selva, and the cloud forests of Monteverde. San José Day 1 You are met on arrival at the airport and driven to your hotel in the San José area, which we helped you choose in the price category you preferred. Caribbean (Gandoca-Manzanillo) Day 2 In the morning you are driven eastwards to the south Caribbean coast (4hr), passing through the lowland rainforest of the Braulio Carrillo NP, where you stay for 2 nights at a lodge near the Gandoca-Manzanillo Wildlife Refuge. Day 3 Today you are free to relax on a pristine local beach or take up one of the many optional activities available locally. There are walking trails through the rainforest, you can hire bicycles, go snorkelling on the coral reefs or try an aerial zip-line through the jungle. Visits to local indigenous community projects are possible if booked in advance. 
Tortuguero Day 4 Early this morning you are driven north along the Caribbean shore to a dock where you board a motorboat for the journey to your lodge in the flooded forests of Tortuguero, see p8. After lunch you are taken to visit Tortuguero village and the local beach, where your guide will introduce you to the history of the area and habitats of the national park. Day 5 A full day based at Tortuguero, including an excursion by boat through the narrow channels of the flooded forest for wildlife viewing, shared with others from the lodge. An English-speaking local naturalist guide provided by the lodge will help point out the creatures. Sarapiquí Day 6 After breakfast at the lodge you are transferred by boat back to the dock, then onwards by road to a local restaurant for lunch. If you have chosen to have a hire car it will have been brought here for you to collect. Travel to Puerto Viejo de Sarapiquí (1hr) where you stay for 2 nights. Day 7 Today is free to take advantage of one of the local activities. You could take a nature walk at La Selva Biological Station, whitewater raft down the Sarapiquí River or take a more leisurely ‘river float’ to enjoy the forest scenery at the water’s edge. You could choose to end the day with a night hike at La Tirimbina in search of nocturnal species. Arenal Day 8 Make a reasonably early start this morning to travel to the Arenal area, where you stay for 2 nights. Day 9 A free day for your pick of the many local attractions. You could try one of the canopy tours, such as ‘Hanging Bridges’ for excellent views across the treetops to the valley stretched out below, or visit a choice of hot springs where you can relax in thermal waters. Monteverde cloud forests Day 10 Travel along the north shore of Lake Arenal then up the winding country roads to Monteverde cloud forest, your base for the next 2 nights. In the afternoon you could visit the hummingbird gallery or the local cheese farm. 
Day 11 You are free today for your own choice of activities in the Monteverde area. There are guided nature walks through the Santa Elena and Monteverde cloud forests, a choice of two very good canopy walks, zip wires and horse-riding. Pacific coast Day 12 Travel westwards this morning to the Pacific coast, where you spend 3 nights relaxing at the beach at Tamarindo (or your choice of beach destination – see p26-7). Days 13-14 Free to relax on the beach. San José Day 15 A final free morning at the beach before you travel back to San José for a final night. Day 16 If you are using a hire car, drop it off at the international airport before checking in for your chosen flight home. Alternatively you will have a private transfer from your hotel to the airport to co-ordinate with your departure time. Costa Rica Nature Explorer A wonderful trip visiting Costa Rica’s main locations for wildlife, staying in good quality mid-range lodges where nature is the focus. San José Day 1 You are met at the airport on arrival and driven to a mid-range hotel in San José. Tortuguero Day 2 BLD Early this morning you are picked up from your hotel by shuttle-bus, and taken by road and boat into the flooded forest of Tortuguero on the north Caribbean coast. Here you stay for 2 nights on a full board basis at a lodge in the national park. Regular tours with naturalist guides are included, with great opportunities to see sloth, monkeys, basilisk lizards, tree frogs and other amphibians, waterbirds, and turtles in season. Day 3 BLD A day of wildlife safaris in Tortuguero national park. South Pacific jungle lodge Day 4 BLD An early flight from Tortuguero to San José then onwards by air to Golfito on the south Pacific. You are met and driven 30min to Esquinas Rainforest Lodge, your base for 3 nights. Set within the jungle of Piedras Blancas NP, Esquinas is part of a project combining conservation, research and community development. 
It makes a peaceful setting to enjoy the sights and sounds of the tropical rainforest. The biodiversity is tremendously high with over 140 species of tree per hectare, around 2500 species of plants and more than 360 species of birds. Day 5 BLD A free day for nature viewing. The lodge has a good series of jungle trails with opportunities to see colourful birds, butterflies, bizarre insects, frogs, and possibly monkeys, agoutis, pacas, peccaries, and coatis. The understorey of the secondary forest has heliconias, ferns, and other more light-hungry plants, and the primary forest has walking palms, buttress-rooted forest giants, orchids and bromeliads, passion flowers, and endless lianas. Day 6 BLD Second free day at the lodge, which also offers optional mangrove tours, village excursions, horse riding, kayaking and dolphin watching boat trips in Golfo Dulce, a bay separating Piedras Blancas NP and the Osa Peninsula. Three species of dolphin live and breed here all year round. Humpback whales can be seen at times. There is a 90% chance of seeing bottlenose dolphins playing and swimming by the boat. Sarapiqui Day 7 BLD Fly to San José, to be met at the airport for the drive to Sarapiqui to stay 3 nights at a mid-range hotel. If you have chosen the self-drive option you are taken to collect your car at the airport for the drive to Sarapiqui, otherwise your own private driver will take you. The afternoon is free to settle in and enjoy the grounds. In the early evening you are collected for a private tour of La Tirimbina reserve for nocturnal animals such as porcupines, frogs, opossums and kinkajous, and returned to your hotel at around 9pm. OTS La Selva Day 8 BLD A full day at La Selva Reserve—widely considered one of the world’s foremost sites for tropical forest research. One of the reserve’s guides shows you the forest and gives insights into current projects. There are great birding and wildlife opportunities. 
Day 9 BLD A free day either to relax and enjoy the grounds of your lodge, or to take an optional excursion, booked in advance or locally, such as a nature trip on the Sarapiqui river. Maquenque Ecolodge Day 10 BLD Today you travel 2hr by private transfer, or driving yourself, from the foothills of the sierra across the plains for 3 nights at Maquenque Lodge (p23), a super lowland rainforest retreat for every aspect of natural history, with knowledgeable biologist staff. Days 11-12 BLD The lodge offers a selection of excellent nature safaris by boat or on trails (including night walks in the forest), which you choose and pay for at the lodge at quite modest cost. Bosque de Paz cloud forest Day 13 BLD By road with either a private driver or self-drive to Bosque de Paz (p23), one of our favourite lodges in Costa Rica, set in its own 700ha private cloud forest reserve, to stay 2 nights. Day 14 BLD A day to enjoy the gardens and cloud forest trails at Bosque de Paz. There is great birding on the little approach road. San José Day 15 B After a cloud forest dawn, return by road (self-drive or private driver) to San José in good time for afternoon flights home. Just a week in Costa Rica It’s amazing what you can see in just a week. Wildlife, volcanoes, and cloud forest make the most of a short visit. San José Day 1 You are met on arrival at San José airport and driven to a favourite mid-range hotel. Tortuguero Day 2 BLD Early today you are collected from your hotel by a tourist minibus shuttle service, shared with others, and driven to the Caribbean coast, then taken by boat into Tortuguero NP (p8) to stay 2 nights full board at a wildlife lodge by the flooded forest. Breakfast is en route. A daily programme of wildlife safaris in the national park is included, led by the lodge’s resident English-speaking naturalist guides and shared with others. Day 3 BLD Wildlife safaris in the flooded forest. 
Arenal Volcano Day 4 BL Return by boat then road to the Arenal Volcano region (p12) for 2 nights at a mid-range hotel with a view of the volcano. Day 5 B A free day in Arenal. Several tours are available locally at extra cost, eg Arenal Volcano reserve and the hot springs, Caño Negro Wildlife Refuge, a choice of two very good canopy walks, zip wires, or horse riding to the beautiful La Fortuna waterfall. Monteverde Day 6 B You are collected from your hotel for a boat journey across Arenal lake, with super views. You are then shuttle-bussed into the mountains of Monteverde (p10) for a 2 night stay at a mid-range lodge. Day 7 B Free in Monteverde. Excursions available locally at extra cost include walking or birdwatching in the Monteverde or Santa Elena cloud forest reserves, canopy walkways, ziplines, horse riding, a butterfly garden, and a cheese factory. Day 8 B This morning you are shuttle-bussed back to San José (4hr), or to your next destination – perhaps a beach hotel on the Nicoya Peninsula or near Manuel Antonio NP. You could also combine this itinerary with a visit to Nicaragua or Panama, perhaps our ‘Just a week in Panama’ (p37) or ‘Just a week in Nicaragua’ (p45). **Costa Rica Chill-out** A relaxing getaway with a difference: a spa hotel, jungle rafting, some wonderful wildlife, and a beautiful Pacific beach. **Countryside spa** Day 1 You are met on arrival at the airport in San José and taken to Xandari Spa (p24) set in a 40 acre coffee and fruit plantation overlooking Costa Rica’s Central Valley. The rooms have original art and custom designed furniture. There are 2 lap pools in the gardens with sundecks and sun beds. Stay for 2 nights to take advantage of the hotel’s numerous spa facilities and relaxing atmosphere. Day 2 A free day to relax at the hotel. 
Surrounded by delightful gardens and picturesque waterfalls, the Spa Village offers a range of optional pampering treatments in your own private Jalapa (palm-roofed hut) with Jacuzzi and stunning views of the valley. The outdoor restaurant sources its fruits, vegetables and herbs from the plantation’s own gardens. **Jungle boutique lodge** Day 3 You are transferred early in the morning to the start of a whitewater rafting ride to Pacuare Lodge, set beside the river in a forested gorge, where you stay for 2 nights. Your guides introduce you to the essentials of rafting, and the safety measures you need to know, then it’s on to the river for an exhilarating hour’s ride on class II and III rapids to reach the lodge. After lunch you can relax and enjoy this gorgeous setting. Day 4 Today is free to relax or take part in some of the lodge’s activities (at additional cost), which include rafting, mountain biking and horse riding. You can also visit a nearby village of the Cabecar community, or be pampered in the lodge’s spa. Day 5 You could leave by road after crossing the river by cable gondola, but it’s far nicer to raft out on class III-IV rapids. You are met by your private driver for the return by road to San José. **Wildlife at Tortuguero** Day 6 This morning you are taken to the airport to catch a flight to the flooded forests of Tortuguero on the Caribbean coast. You are collected on arrival at the airstrip and transferred the short distance by boat to your lodge on the bank of the river near the national park. Stay for 2 nights on a full board basis. Day 7 Take a boat ride through the narrow river channels of the flooded forest for the chance of seeing a host of wildlife. --- **Creature Comforts** First-rate wildlife and nature experiences while staying in some of Costa Rica’s best boutique and spa hotels and wildlife lodges.
**San José** Day 1 You are met on arrival at San José and driven to the lovely Xandari Spa (p24), set among coffee and fruit plantations overlooking the Central Valley, where you stay for 2 nights. **Poás Volcano and Doka Coffee Estate** Day 2 Your guide collects you in the morning for a scenic drive through the Central Valley to Poás Volcano (p12), reaching the rim of its steaming crater. Continue to the Doka Coffee estate farm for an insight into high quality coffee growing and preparation, with lunch and a visit to their butterfly garden. Back at Xandari Spa there is time to relax or take one of the optional treatments. **Bajos del Toro** Day 3 Part of the morning is free for relaxation or a spa session, then it’s on to Bajos del Toro for a 2 night stay at El Silencio (p24), a boutique lodge and spa in a private reserve of cloud forest filled with bromeliads, orchids, ferns and clear mountain streams. You travel either by private transfer, or if on a self-drive option, your hire car is delivered to your hotel mid-morning in good time for the journey; you return it on day 10. Day 4 An ‘eco-concierge’ is available to accompany you on a walk to experience the rich life of the cloud forest (with good birding opportunities). There is time to enjoy the private Jacuzzi on your terrace, or the hotel’s spa facilities at an extra charge. **Arenal** Day 5 You have the morning free to explore and enjoy the cloud forest before travelling on to the Arenal area. Stay for 2 nights at the delightful Lost Iguana hotel (p25), which has a good view of the volcano. Rest of the day free to explore the grounds or relax by the hotel’s pool. Day 6 This morning you visit the Hanging Bridges with a private English-speaking local guide, for spectacular views across the trees to Arenal Volcano. You return to Lost Iguana for free time then in mid-afternoon walk with your guide in Arenal Reserve. 
This is followed by a visit to the hot springs to lounge luxuriantly in thermal waters amid landscaped gardens, taking dinner at its restaurant—with the volcano’s dramatic rumblings in the distance. **Manuel Antonio** Day 7 Travel south to the Central Pacific coast to stay 3 nights at Arenas del Mar (p27), one of the few beachside hotels in the Manuel Antonio/Quepos area. Day 8 A free day. We suggest a visit to the nearby Manuel Antonio NP, the most scenically beautiful region in the country. Take the popular trail through the trees for the chance of seeing sloths, monkeys, agoutis and a variety of birds. At the path’s end you are rewarded with a stunning view across the bay and out to sea, with pristine white sand beaches nearby. At low tide it is even possible to stroll back to your hotel along the shore. Day 9 A free day to relax at the beach or pool. You might take advantage of one of the options offered by the hotel, such as sea kayaking, horse riding or a mangrove boat tour, or indulge in one of their spa treatments. Trim your hold luggage down to 14kg per person, leaving the excess to go by road to your hotel in San José to await your arrival on day 13. **Osa Peninsula** Day 10 A choice of two ways to experience the Osa Peninsula, one of the most biodiverse areas in the world, in a 3 night stay. For the remote Casa Corcovado Lodge (p23) catch a flight from Quepos to Palmar Sur to be met and transferred to Sierpe dock for the wonderful boat journey through mangroves, across Drake Bay and along the Pacific shore (can be rough), arriving with a wet beach landing. For the award-winning Lapa Rios (also p23), return to San José airport and fly to Pto Jimenez to be driven 20min to the lodge. Self-drivers drop off their hire cars at Quepos or San José airports. Days 11-12 At Casa Corcovado your visit includes two full days of wildlife and other excursions; at Lapa Rios two wildlife excursions are included, with others available locally at extra cost.
**San José** Day 13 This morning you are taken to the airstrip for your flight to San José, to be met and taken to Hotel Grano de Oro (p24). Day 14 A private transfer from your hotel to the airport for your flight home. Or you might extend your stay with time at the beach (p26) or a visit to Tortuguero (p8 and 21), where wildlife includes monkeys, tiger herons, basilisk lizards, sloths, river otters and caimans. Tortuguero is one of the best places in Costa Rica to see wildlife and you will be delighted at how easy it is to get really close encounters. You could also visit the wild local beach, where sea turtles nest between July and October. **Pacific coast selections** **Day 8** After an early breakfast you are taken by boat back to the airstrip for your flight to San José. Change planes and fly onwards to the Pacific coast for 6 nights at your choice of beach hotel. There are many to choose from depending on the style of hotel and type of beach you prefer. For a sophisticated retreat we suggest **Punta Islita**, one of the ‘Small Luxury Hotels of the World’. Alternatively, **Kurá Design Villas** is an away-from-it-all hip hotel looking over the Pacific near Uvita. For an all-style ‘barefoot’ hideaway that’s right on the beach, consider **Ylang Ylang** on the south coast of Nicoya Peninsula. The upper range **Alma del Pacífico** is stylishly designed and has relatively easy access to a number of attractions. If bars and nightlife are more your thing then Tamarindo may suit you; here **Capitán Suizo** and **Cala Luna** are good options. **Harmony** is a holistic natural retreat at Nosara with a spa, yoga and a laid-back vibe. **Days 9-13** Free to relax on the beach. **San José mansion** **Day 14** In the morning fly back to San José where you stay overnight at Grano de Oro, a Victorian mansion converted into an upper-range hotel. It combines a fin de siècle style with art and furnishings by Costa Rican artists.
The dining room here is considered to be one of the top spots to eat in the city, a lovely option for your final night in Costa Rica. To top it off we suggest you stay in the garden suite with its wrought-iron king-sized bed, small patio terrace and private Jacuzzi. **Day 15** A private transfer is arranged today to connect with your chosen international flight home. **San José** **Day 1** You are met on arrival in San José and driven to a hotel in your preferred price category for a 2 night stay. **Day 2** A free day to relax and acclimatise at your hotel, or take an optional tour to explore the Central Valley, perhaps to see a volcano or visit a coffee farm. **San Gerardo de Dota** **Day 3** This morning your hire car is delivered to your hotel. From here you drive yourselves to San Gerardo de Dota. On the way you might choose to visit Lankester Botanical Gardens and the Orosi Valley (p6). Stay 2 nights at a mid-range hotel in the lovely valley of San Gerardo (p10). **Day 4** You are taken on an early morning birdwatching walk in the cloud forest, with others, in the most reliable part of the country for Resplendent Quetzal. This peaceful setting is also home to a wide variety of flora and fauna. **South Pacific (Dominical)** **Day 5** Today you continue southwards to the Pacific coast near Dominical, arriving in time to relax at an upper range hotel nestling among trees on a hillside with fantastic views over the ocean, or a similar mid-range hotel nearby. **Day 6** You are free today to take advantage of the optional local excursions. You might take a boat trip to visit Corcovado NP on the Osa Peninsula, or to Caño Island for the chance of seeing dolphins and whales (best in July-October and January-April). **Day 7** A second free day for you to do as you please.
There are horse riding options, nature walks, canopy tours in the forest, or you might visit Hacienda Barú, a national wildlife refuge with habitats from wetland and secondary forest in the lowlands to primary forest on the highland coastal ridge. The refuge has 7km of walking trails and an orchid and butterfly garden. **Playa Esterillos de Este** **Day 8** After breakfast drive north along the coast to Playa Esterillos de Este, a long undeveloped stretch of broad sands. The sea here is too rough for swimming, but it is a popular spot for surfing – fun to watch. If you would like to do some nature viewing or sightseeing, you are within a 45min drive of both the Carara Reserve and Manuel Antonio NP. Stay a night at either a mid-range or an upper range hotel by the beach at Esterillos de Este. **Guanacaste ranch** **Day 9** After breakfast drive north to the characterful Guanacaste region (p7), whose dry forest life zone stands in contrast to the lush rainforests of the south Pacific and the cool cloud forest of San Gerardo de Dota. Stay 2 nights at La Ensenada, a homely ranch with a swimming pool and views across to the Gulf of Nicoya. **Day 10** A variety of options today. Perhaps saddle up and go out on horseback with the cowboys, if your riding skills are up to it, or hike through the surrounding countryside. In the dry season there are excellent birdwatching opportunities at the ranch, in habitats similar to Palo Verde NP (p29). **Nosara** **Day 11** After breakfast at the ranch, drive to Nosara on the Nicoya Peninsula. Here the shoreline has been set aside as a wildlife reserve, so it’s a short walk over the foreshore to the sands. Nearby Ostional Wildlife Refuge is an important Olive Ridley Turtle nesting ground; the turtles nest from July to November, with their largest numbers from August to October. Humpback and Grey whales are seen in winter. Stay 3 nights at either a mid-range or an upper range lodge.
**Day 12** Free day for optional excursions in the Nosara area, eg guided tours in the Nosara Biological Reserve, guided boat rides up the Nosara river, or relaxing on pristine natural beaches. **Day 13** A second free day in Nosara. Spectacular sunsets blaze over the ocean on a clear evening. **Day 14** You have most of the morning free in the Nosara area before making your way back to your chosen hotel in the San José area. **San José area** **Day 15** Today you drive to San José airport to drop off your car in time for your flight home. Costa Rican Adventures Get ready for two weeks of action-packed adventure, all different, all over Costa Rica. It’s a heady mix that delivers lots of excitement, but with comfortable accommodation too. Ideal for lively couples, friends and families. San José Day 1 You are met on arrival off your chosen international flight and driven to a mid-range hotel in the San José area. Zip-lining through the treetops Day 2 B You are collected from your hotel for a zip-lining adventure. You’ll whiz between 21 platforms suspended high in the tree-tops below Poás volcano, with an impressive 600m long cable as the finale. Whitewater rafting Day 3 BL With an early start, drive to the Pacuare River to meet your rafting guides for safety training and an exhilarating day rafting class II-IV rapids. You are then transferred by road to Arenal for 4 nights. Canopy walk, SUP and hot springs Day 4 BD Early today you join a group with a naturalist guide for the Hanging Bridges—a system of trails and suspended walkways with views into and over the forest and to the valley below. Wildlife sightings will include a variety of birds, possibly a troop of howler monkeys, and much else. In the afternoon you have a go at stand-up paddle boarding, the fast-growing watersport, on Lake Arenal. An evening at the hot springs brings relaxation in thermal pools as the volcano rumbles in the distance; dinner is included.
Volcano hike and waterfall swim Day 5 BL A demanding 8hr hike to the summit of Cerro Chato volcano. From the top there are amazing views across to the smoking summit of neighbouring Arenal and down into the emerald green lagoon that fills Chato’s long-dormant crater, which you can walk down to and bathe in, with care. The first half of the hike up is easy; the second is a strenuous, often muddy ascent through virgin rainforest. You descend to arrive at La Fortuna waterfall with the option of a refreshing swim. Canoe through a nature reserve Day 6 BL An early start to travel to the wildlife refuge of Caño Negro where you paddle by canoe along narrow waterways through flooded forest, rich in wildlife. You’ll see some of the refuge’s 350 species of birds, plus basilisk lizards, iguanas, turtles and caimans, and maybe much more. Return to overnight at Arenal. Cloud forest by night at Monteverde Day 7 B Travel on to the mountains of Monteverde for 2 nights in the cloud forest at a characterful mid-range lodge. Afternoon free to sample the eco-experiences of Santa Elena before an evening tour (shared) to experience the cloud forest’s nocturnal wildlife. Canopy adventures Day 8 B Today you ride a gondola-tram high into the cloud-forest canopy for a high-adrenaline ride on an awesome circuit of zip lines—some very long and high. You are collected from, and returned to, your hotel, with the afternoon free to relax or try some of the other eco-experiences on hand locally. Adventure ranch Day 9 B Down the mountains to Rincón de la Vieja for 2 nights at an adventure lodge and cattle ranch. In the afternoon you can pick an optional adventure activity: hiking, biking, and riding are usually available, to waterfalls, hot springs and streams. If you’re competent in the saddle you might even join the cowboys to round up cattle or horses (check your travel insurance covers ‘ranching’). Day 10 BD We’ve included a one-day ‘adventure pass’ for you.
Start with a zip-line, then a horse-ride to a river-tubing adventure. Later relax in hot springs and take an open-air bath in volcanic mud (better than it sounds!). Pacific beach at Tamarindo Day 11 B Beach time! Travel to the lively beach town of Tamarindo for 3 nights, with an attractive wide white-sand beach that’s a favourite with surfers, plus beach cafes, bars and restaurants. You should arrive at lunchtime, leaving the afternoon for the beach. Be mindful of sea currents, which can be dangerous in some areas. Days 12-13 BD Two days to bask on the beach or enjoy the ocean. You might take a 4hr surfing lesson to cover the basics or to hone your technique. Or try SUP (quietwater or surf), scuba, kayaking or sailing. You can book any of these locally at reasonable extra cost. San José Day 14 B After a morning free in Tamarindo, return by road to San José to the mid- or upper range hotel of your choice. Day 15 B You are driven to the airport for your flight home. Bribri and Chira Island community A really wonderful opportunity to be welcomed by, learn from and work with two special communities. Memorable experiences engage you with the Bribri, and with the life of the Chira islanders in the beautiful Gulf of Nicoya. The Bribri people have lived in the forests of southern Costa Rica for millennia, with their own language, culture and way of life, which they are keen to share and protect. Meanwhile ecotourism brings the people of Chira Island an alternative income to complement their livelihood from artisanal fishing. San José Day 1 You are met at the airport on arrival, and driven to your choice of hotel, El Rodeo or Jade y Oro. Both of these hotels have joined the Rainforest Alliance’s Sustainable Management Practices programme and both are among Costa Rica’s proud holders of level 3 certification.
Caribbean coast Day 2 B You are collected from your hotel by shuttle-bus for the 7hr journey to the Caribbean coast to stay 3 nights at your choice of accommodation near Puerto Viejo. For a good mid-range option we suggest Cariblue (see p27), which also has level 3 certification. It’s a small hotel in lovely mature grounds with tall trees and a swimming pool, just across a quiet lane from Playa Cocalés, a wild, unspoilt beach popular with surfers. The dramatic Almonds & Corals Lodge (p25), which has achieved level 5 (the highest possible sustainability certification), is an upper range option, tucked away in the forest close to the sea within the Gandoca-Manzanillo Wildlife Refuge. Bribri community Day 3 BL Today you visit the indigenous Bribri people, travelling into their lands by dugout with guide and boatmen. After a boat journey on the Yorkin river, which separates Costa Rica and Panama, you arrive at their village. Here you are received at the Casa de las Mujeres with a welcoming snack and presentation by way of introduction to the life of the community. To see things for yourself, you will be shown around parts of the village and into the forest on which it relies. Trails, including one across a hanging bridge, introduce you to the community’s medicinal plants and food crops. Lunch is Bribri style (bananas in fresh chocolate sauce is a favourite dessert). There is much to discover, and visitors are made very welcome amid great fun. You can return to your hotel for the night if you wish, but with prior arrangement you are invited to stay a night in the community. Conditions are rustic but this is an exceptional experience. A simple dinner may be followed by stories told around the fire. Next morning there’s a choice of walks, followed by lunch, returning to your hotel in the afternoon—with a host of memories. Jungle at the beach Day 4 B Wherever you choose to stay in this part of Costa Rica be prepared to be woken at dawn by the call...
Costa Rica & Nicaragua off the beaten track An adventurous and very varied trip between Costa Rica and Nicaragua, mostly well off the beaten track. Rich wildlife experiences, wonderful scenery, evocative history, isolated river settlements and vibrant art communities make this very special indeed. Caño Negro, Costa Rica Day 1 LD You are collected from your hotel in San José or Arenal and driven to Caño Negro Wildlife Refuge, still in Costa Rica, for 2 nights at a comfortable lodge with pool. Caño Negro is an important wetland for migratory birds and 350 resident bird species, as well as emerald basilisk, iguanas, river turtle and caiman. Afternoon free to relax and explore. Day 2 BLD A morning excursion by boat through the water channels of the reserve. There are 310 species of plants and many species of fish including the ‘living fossil’ garfish. The afternoon is free to relax by the pool or take an optional excursion, such as mountain biking, fishing, kayaking or further nature viewing. Solentiname archipelago, Lake Nicaragua Day 3 BLD Transfer to the border at Los Chiles where your Nicaraguan guide will be waiting on the other side to take you by road and boat to the Solentiname archipelago—a cluster of 36 islets in Lake Nicaragua. Here you stay 2 nights on San Fernando island at a simple family-run guest-house (with fans and private bathrooms). In the 1970s, inspired by Ernesto Cardenal, priest, poet and Minister of Culture in the Sandinista government, a group of fishermen and locals developed a contemplative community and a painting school—the Escuela Primitivista de Solentiname. Today, more than 50 painters and artisans are working in the islands. Their vibrant naïve art is strongly linked to their tropical surroundings. After lunch, stop at a local museum before visiting the artists in their home studios.
Los Guatuzos Wildlife Reserve Day 4 BLD Morning boat ride to the southern shore of Lake Nicaragua to visit the forests of Refugio de Vida Silvestre Los Guatuzos. Its superbly rich bird life includes parrots, trogons, roseate spoonbill, jabiru, osprey, herons and egrets, while howler monkey, sloth, caiman, iguanas and agouti are also common. We return to our lodge for lunch, then the afternoon is free to relax or explore locally. The archipelago often enjoys magnificent sunsets. San Juan river to El Castillo & local farm Day 5 BLD Today your boatman will navigate along the Río San Juan to El Diablo Rapids and the waterfront village of El Castillo, a happily isolated community of around 1000 souls within earshot of the rapids. The San Juan river was a route for the export of Incan gold to Spain, known in those days as the ‘dubious passage’. Small wooden houses built on stilts on the river front lie beneath the impressive black stone Fortress of the Immaculate Conception, built by the Spanish in 1675 against marauding pirates and foreign navies (usually British). In 1780, Horatio Nelson, then 22 and already captain of his first frigate, took part in an expedition which circled the fortress, captured it from landward and occupied it for 9 months. The fort has a museum and library, and there are evocative views from the ramparts. El Castillo’s residents work farms in the surrounding hills and fish the river, and in recent years have started to get involved in sustainable tourism. You visit a farm, walk their trails, help harvest the vegetables, hear about their relationship with the rainforest and its protection. Stay 1 night in El Castillo in a small hotel with private bathrooms, a/c and a riverfront deck. Indio-Maíz Biological Reserve & Sábalos Day 6 BLD We travel on the Bartola river to visit the Indio-Maíz Biological Reserve, which protects part of the largest area of primary rainforest in Central America. Inside this pristine forest some trees reach 50m in height. 
Here we walk for 2hr on the Bartola Trail then travel along the river to a lodge at Sábalos with a private reserve where guests can enjoy relaxing in a hammock, birdwatching, kayaking, horse riding and artisanal fishing. Sport fishing for Tarpon is also possible here but at extra cost. Rooms at the lodge have fans and private bathrooms. Managua Day 7 B Morning free to enjoy the lodge’s range of activities. In the middle of the day you travel by river to San Carlos for the early afternoon flight to Managua, where the tour ends. We can arrange an extension of your stay in Nicaragua, perhaps visiting León and Granada or the beach. Selfdrive in Costa Rica Selfdrive is a good way to see Costa Rica. Distances are quite short, and road conditions are reasonable for the purpose of getting around on holiday with no great speed or urgency. There are two popular possibilities for a selfdrive holiday in Costa Rica. If you are travelling at a busy time of year then your accommodation must be pre-booked on a fixed Pre-booked Selfdrive itinerary. At other times, we can either pre-book everything for you, or you can take advantage of our Freedom Selfdrive scheme which gives plenty of flexibility. Either way, apart from a few dual carriageways around the capital, the few major routes tend to be equivalent to minor country A roads in the UK, but bumpy enough that you rarely feel comfortable going faster than 50mph. Small country roads are the norm everywhere else. Side roads in country areas are often unpaved. You have to be patient and not mind getting lost from time to time (though a GPS/SatNav helps a lot). The Booking Information insert shows cars, rates, insurance details etc, and the dates that the Freedom Selfdrive scheme operates.
Tailor-made holidays with self-drive options Several of the tailor-made holiday suggestions on pages 14-19 would suit a selfdrive trip: Coast to Coast p14 Costa Rica Nature Explorer p15 Creature Comforts p16 Secret Costa Rica p17 Costa Rican Adventures p18 Pre-booked Selfdrive For a selfdrive holiday in the busy times, or where your heart is set on particular hotels or lodges, choose a holiday where everything is booked well in advance. On a pre-booked selfdrive holiday all your accommodation is confirmed for you in advance. As well as the car, we provide you with good maps, a mobile phone if you’d like one, and 24hr local support. You might choose a tailor-made holiday that has a selfdrive option (see the list on the left) or one of the routes below. There are many hotel choices, see p23-25 and 27 for examples. Starting from San José Route 1: Classic Costa Rica 11 days/10 nights: volcanoes, cloud forest, jungle at the beach Day 1 arrive San José. Day 2 Visit Poás Volcano then to La Fortuna for 1 night near Arenal Volcano, perhaps visit hot springs this evening. Day 3 North shore of Lake Arenal via Tilarán to Monteverde 3 nights. Days 4-5 Free at Monteverde e.g. cloud forest walk, canopy tour. Day 6 Drive to mid-Pacific area, 1 night at Carara / Jacó. Day 7 Visit Carara NP and/or Jacó beach, continue to Manuel Antonio/Quepos 3 nights. Days 8-9 Free at Quepos e.g. beaches and jungle of Manuel Antonio NP. Day 10 Drive back to San José or Alajuela for final night. Day 11 Drop car at airport, fly home. Route 2: South Pacific Explorer 15 days/14 nights: S Pacific coast, Osa Peninsula, cloud forest, páramo Day 1 arrive San José, 2 nights. Day 2 explore Central Valley. Day 3 to mid Pacific coast, stay in e.g. Carara/Jacó/Esterillos Este 2 nights. Day 4 e.g. visit Carara reserve or relax at beach. Day 5 visit Manuel Antonio NP then to Dominical 2 nights. Day 6 relax or e.g. visit Hacienda Barú, whale watch (seasonal), Caño Island or Corcovado NP.
Day 7 to Osa Peninsula (e.g. Danta Corcovado) or Golfo Dulce (Esquinas) 3 nights. Days 8-9 exploring locally. Day 10 to San Gerardo de Dota 2 nights. Day 11 e.g. quetzal tour, forest walk, páramo. Day 12 to Turrialba region via Lankester Botanical Garden, 2 nights. Day 13 e.g. Guayabo Monument, Irazú region, CATIE, rafting. Day 14 Orosí Valley, return San José. Day 15 Drop car at airport, fly home. Starting from Liberia Route 3: The Northwest 15 days/14 nights: cowboys, volcanoes, wetland, cloud forest, beach Day 1 arrive Liberia. Day 2 Drive to ranch at Rincón de la Vieja 2 nights. Day 3 Free at ranch, e.g. riding, hike up volcano, walk in dry forest. Day 4 Drive to La Fortuna near Arenal Volcano 4 nights. Days 5 & 6 Free in Arenal area e.g. hot springs, walking, riding, windsurfing. Day 7 Caño Negro wildlife river trip. Day 8 Drive to Monteverde 2 nights. Day 9 Free in Monteverde e.g. zip line, canopy walk, night hike, nature walk. Day 10 Drive to beach on Nicoya Peninsula 5 nights e.g. Tamarindo, Potrero, Nosara, Sámara. Day 15 Drop car at Liberia airport, fly home. Route 4: Off the beaten track 15 days/14 nights: dry forest, volcanoes, cloud forest, beach, wetland Day 1 arrive Liberia 2 nights. Day 2 e.g. visit Rincón de la Vieja (e.g. riding, hike up volcano, walk in dry forest). Day 3 to Bijagua area 2 nights. Day 4 free e.g. walk to Rio Celeste. Day 5 to La Fortuna near Arenal Volcano 2 nights. Day 6 Free in Arenal area e.g. hot springs, walking, riding, windsurfing. Day 7 to Los Angeles cloud forest 2 nights. Day 8 free e.g. cloud forest nature walk & spa. Day 9 Ferry to Paquera, stay at e.g. Playa Tambor/Montezuma/Santa Teresa 4 nights. Day 13 to Abangaritos 2 nights. Day 14 free at Hacienda La Ensenada e.g. horse riding or birdwatching. Day 15 Drop car at Liberia airport, fly home. Freedom Selfdrive In the quieter seasons, Freedom Selfdrive has all you need for a great holiday in Costa Rica, going wherever and whenever you please with a wide selection of choices.
Freedom Selfdrive Freedom Selfdrive is a special scheme that allows you to benefit from the discounts that hotels are prepared to give when they would otherwise have empty rooms, while giving you the freedom to travel around Costa Rica without a fixed schedule. It operates for most of the year, but not the busiest seasons. What’s included We’ve put it all together at a very low price in one simple package that includes: - **choice of good quality recent model 4WD car** with unlimited mileage, and insurance with CDW. - **wide choice of accommodation** Book as you go throughout Costa Rica staying at the many hotels and lodges in the scheme in a choice of categories (see brochure insert for details or check our website). - **easy arrival and departure** When you arrive you are met at the airport and driven to your hotel. On your return you can drop your car by the airport. - **easy-to-use Geodyssey Travel Planner** Travel advice, recommended routes, driving times, and a detailed guide to each hotel in the scheme. - **latest guide book** Choose either the Rough Guide to Costa Rica or Costa Rica Handbook. - **road maps** We include a good road map of Costa Rica, and the Tourist Board’s general map, both designed for visitors to the country. - **mobile telephone with 60min free calls** Book hotels, ask for advice, or stay in touch (outgoing national calls only). Good coverage throughout most of Costa Rica. Deposit required. - **on-the-spot support** Personal briefing when you arrive. 24hr local help line for advice, information and emergencies during your stay. With Freedom Selfdrive you book your accommodation as you go and use vouchers for payment. Our Freedom Selfdrive scheme is always being updated with new hotels and vehicles. The Booking Information insert shows cars, rates, insurance details etc, and the dates that the Freedom Selfdrive scheme operates. 
How Freedom Selfdrive works
First, make your choice of:
- **car**
- **hotel category**

When you book, you let us know your choice of hotel in or near San José for your first night. Flights from Europe arrive at Alajuela, near San José, in the early evening. You are met at the airport and driven to your hotel. At about 8.30am the next day you have a briefing meeting at your hotel with our local organiser, who will provide your hotel vouchers, answer questions about driving in Costa Rica and discuss your travel plans. At around 9am your car is delivered to the hotel by the rental company. With it you can also receive a local mobile phone and a satnav (at a small extra charge). Then you go as you please, choosing hotels and lodges from the category you selected. There are around 100 in the scheme as a whole, covering almost the entire country. The only condition is that you book as you go, no sooner than the morning of the day before. If you are planning to stay more than a night or two at a certain place we recommend pre-booking that section before you travel, especially at beach hotels during local holidays. In each main area there is a choice of hotels and lodges, so if one hotel is full there is a very good chance that another will have space. The scheme is very flexible, and you can upgrade to a higher category hotel at any time by paying an appropriate supplement. If no hotel in your category has space in the area where you want to stay (which is unlikely), just call the 24hr local helpline, who guarantee to find you a reasonable alternative at no extra cost. At the end of your trip, simply drop the car at the rental company's office at the airport. Simple.

Combining countries
**Costa Rica and Nicaragua** The easiest land route between these two countries is by road at Peñas Blancas in northwest Costa Rica, but for a more adventurous option see also our 'Costa Rica and Nicaragua off the beaten track' (p19), which uses the San Juan River to connect with Costa Rica.
There are several flights a day between San José and Managua, and international airlines that operate to each country offer open-jaw tickets that allow you to fly into one and out from the other. **Costa Rica and Panama** It is possible to cross between Costa Rica and Panama at Paso Canoas on the Pan-American Highway. Daily flights between the two countries make the route less arduous for those wanting to hop across the border for a few days of beach time. There are three flights a week between San José and Bocas del Toro.

Planning your trip
Extra wildlife
For excellent wildlife, Tortuguero (p8) in the north-east and the Osa Peninsula (p9) in the far south-west can easily be added to the start or end of almost any itinerary, and complement self-drive trips very well.

**Tortuguero add-on** Day 1 BLD Early in the morning you are collected from your hotel in San José and transferred by shuttle-bus to the dock for Tortuguero, with a stop for breakfast en route. You board a motor boat for the journey to your mid-range lodge. Stay 2 nights full board including daily wildlife viewing excursions with resident naturalist guides, shared with other guests. Day 2 BLD A full day at Tortuguero with an excursion by boat through the quiet river channels of the national park. Day 3 BL This morning you take the boat back to the dock, where a shuttle-bus collects you and returns you to San José, stopping for lunch en route at a restaurant near Guápiles. To combine with a selfdrive option on p20-21, your hire car can be delivered to you here (for onward travel to Sarapiquí or Arenal), or in San José in the late afternoon.

**Osa Peninsula add-ons:**
**Casa Corcovado** Day 1 LD You are picked up at your hotel in the San José area and driven to the airport for a morning flight to Palmar Sur. A short drive brings you to the dock at Sierpe for a boat trip through the mangroves and out to open sea, along the coast to the Osa Peninsula.
Wet beach landing at Casa Corcovado (p23) for 3 nights full board with 2 shared tours led by resident naturalist guides. Day 2 BLD Visit Corcovado NP, home to over 400 species of birds and 114 species of amphibians and mammals. Day 3 BLD Today you might take the boat trip to Caño Island (p9). Day 4 B By boat to Sierpe, then fly from Palmar Sur to San José.

**Lapa Rios** Day 1 LD From your hotel you are driven to San José's domestic airport for a morning flight to Puerto Jiménez, and then driven (45min) to Lapa Rios (p23) for 3 nights full board with 2 shared tours led by naturalist guides. Days 2-3 BLD Choose from early birding walks, a wild waterfall hike, the medicine trail, a strenuous ridge walk and a night hike, led by resident guides. Optional excursions at extra cost include riding, surf lessons, kayaking, and dolphin trips in the Golfo Dulce. Day 4 B To Puerto Jiménez for the morning flight to San José.

Extra adventure
This short stay at the wonderful Pacuare Lodge fits easily at the start or end of most itineraries.

**Whitewater rafting add-on** Day 1 LD Early morning start from San José to drive to the Pacuare river, where you are met by your rafting guides for basic training and safety instruction. Raft down a tumbling mountain river through steep canyons and class II-III rapids to arrive eventually at Pacuare Lodge, nestling in a forested gorge. You stay 2 nights here on a full-board basis. After a hearty lunch, you are free to relax and enjoy the lodge and its surroundings, or perhaps choose from the lodge's excellent range of excursions at additional cost. Day 2 BLD Free to relax, explore or perhaps take an optional excursion, payable locally. Day 3 BL You leave Pacuare by raft for an even more exhilarating day of whitewater rafting, approximately 3½hrs over class III to IV rapids, with a break for a picnic lunch on the river bank. Arriving at the landing point you travel onward by road, e.g. back to San José, to the Caribbean coast, or to Arenal.
Discover the wildlife of flooded forest, rainforest, cloud forest and dry forest, including the world-famous sites of Monteverde and La Selva, in a convivial small group. We also see two of the country's largest and most active volcanoes, visit tropical dry forest in Guanacaste, and explore both coasts: the Caribbean and the beautiful central Pacific coast. We stay at good quality, comfortable mid-range hotels and lodges and are escorted throughout by an experienced and knowledgeable professional naturalist guide with excellent English. An excellent way to experience Costa Rica.

San José
Day 1 B/L We meet in San José in the early evening at our comfortable mid-range hotel, where we stay for 2 nights. You will be met at the international airport from any flight arriving that day and driven to the hotel (½hr).

Poás volcano and the Doka Coffee Estate
Day 2 B/L This morning we drive through scenic farmlands and coffee plantations to the slopes of Volcán Poás (p12). Ascending through cloud forest we reach the windswept summit, where we walk to the crater's lip, looking down to the steaming aquamarine lake below (clouds permitting!). We descend to the lush Doka Coffee Estate. Here we tour the plantation, learning the secrets of growing and harvesting top-quality coffee and roasting the beans to produce the best flavours.

Tortuguero
Day 3 B/LD This morning we descend (3hr) through Braulio Carrillo National Park, and cross banana plantations, to a dock just inland from the Caribbean coast. From here a 1½hr boat ride takes us along the waterways of the flooded forest into Tortuguero NP (p8). We stay 2 nights at a cabin-style lodge, with a swimming pool and gardens. Day 4 B/LD A full day at Tortuguero. We explore the smaller channels of the flooded forest by boat for wildlife viewing with the lodge's naturalist guide.
We hope to see three-toed sloths, large iridescent blue morpho butterflies, howler monkeys, capuchin and spider monkeys, toucans, poison dart frogs and the 'Jesus Christ' or basilisk lizard. Day 5 B/L After breakfast we return by boat and then 2hr by road to Sarapiquí to stay 2 nights. The afternoon is free to enjoy the lodge's facilities, relax by the pool or in the gardens, walk or birdwatch.

La Selva
Day 6 B/L Today we visit the nearby OTS La Selva Biological Station, one of the world's most important centres for research into tropical rainforest (p9). One of La Selva's bilingual naturalists joins us for a guided walk and provides an overview of their research, education and conservation programmes. Showy birds such as toucans, parrots, trogons and hummingbirds, and mammals such as monkeys, peccaries, agoutis and coatis, are seen frequently. We return in the afternoon with time for activities around the lodge.

Arenal Volcano
Day 7 B/D This morning we drive 2½hr to La Fortuna near Arenal Volcano (p12). After some free time in the afternoon we visit hot springs, where hot pools, streams and waterfalls are laid out for open-air bathing. We stay at a mid-range lodge outside La Fortuna with good views of the volcano, weather permitting.

Rincón de la Vieja
Day 8 B/LD Rounding the northern shore of Lake Arenal, we drive 4hr north on the Panamerican Highway to Guanacaste, to a lodge in open country below the volcanic peaks of Rincón de la Vieja (1816m) and Santa María (1916m), where we stay 1 night. We walk in Rincón de la Vieja NP (p11) through dry forest to see hot springs, mud pots and sulphurous vents spawned by the volcanic activity underground.

Monteverde
Day 9 B Today we make the 3½hr drive to Monteverde. The scenery en route is reminiscent of the low Alps, and dairy farming is a main occupation. We stay 2 nights at a mid-range hotel not far from the Monteverde Reserve.
Day 10 B Early this morning we take a guided walk in the Monteverde Cloud Forest Reserve (p10), home of the Resplendent Quetzal, which is best seen January–May when the aguacatillo, or little avocado tree, is fruiting. We visit a hummingbird gallery where a string of feeders is kept constantly replenished at the edge of the cloud forest, to the delight of the local hummingbirds. We then experience the cloud forest from the Sky Walk, a wonderful trail via a series of suspended walkways at canopy height, giving a rare view of the orchids, bromeliads, mosses and lichens that weigh down every bough. The walkways are sturdily built with strong wire mesh from the floor to the handrails, which are set high for security.

Central Pacific coast
Day 11 B We descend from the mountains to the Pacific coast to visit Carara Reserve, a transitional mix of 'dry' and 'rain' forest ecosystems. Mammals include monkeys, armadillos, agoutis and most of the large felines—though, of course, the latter rarely allow themselves to be seen. Birds include toucans, trogons, guans, and macaws. If the tide is high we visit the mouth of the Río Tárcoles to see the mangroves and watch sea and water birds such as boat-billed heron and black-necked stilt, stopping at a bridge over a spot favoured by crocodiles. Here at dusk we may also witness the wonderful sight of scarlet macaws returning from their forest feeding grounds to their roosts in the mangroves (particularly reliable between January and March). We then stay for 2 nights by the Pacific Ocean at a mid-range hotel on the beach.

Manuel Antonio NP
Day 12 B This morning we visit Manuel Antonio NP for some of the most beautiful scenery in Costa Rica, along a series of headlands including Cathedral Point, a classic tombolo, or island connected to the land by a sand spit. Breakers wash up to a rainforest teeming with wildlife: monkeys, coatimundis, racoons, sloths, iguanas, toucans and parrots.
There are sparkling white sand beaches (the nearest is 1km from the entrance), coral reefs and forest hiking trails. Afternoon free to relax at our hotel and enjoy the pool.

San José
Day 13 B/L This morning we visit the Else Kientzler Garden and the folk art of Sarchí (p6) on our way back to San José for our final night in Costa Rica. Day 14 B You will be transferred to the airport for your chosen international flight home: most depart in the morning. Optional add-ons can be arranged, e.g. to the Osa Peninsula (p21) or one of the beach hotels on p27.

Join a small group holiday
This small group trip (maximum 12 participants) runs during Costa Rica's dry season. It has been carefully designed to make the most of your time away, with attention to what makes great travel and to what is best for everyone on the trip. Our clientele is predominantly from the UK; we offer fair pricing with no 'local payments', and no single supplements for those willing to share. For dates, prices and further details of this tour and other Geodyssey small group holidays, please see the insert that accompanies this brochure or check our website.

Where to stay in Costa Rica
We travel extensively in Costa Rica, usually once or twice a year, to keep an eye on which hotels currently best suit the different tastes of our clients. We particularly look for hotels in great locations or with character, with good standards to suit their style, often run by wonderful owners with a real passion. The examples on these pages have been chosen to illustrate what is available; the full range of hotels that we offer in Costa Rica runs to well over 150, in many different styles covering practically the whole of the country. We have stayed in or visited most of them ourselves. All this groundwork and experience means that when we design your holiday we can offer the places to stay that are most likely to suit you best.
HOTEL EXAMPLES
- Wildlife lodges – this page
- Hotels for touring – page 24
- Beach hotels – page 27
- Key to symbols – page 2

Natural suggestions: Wildlife lodges

Bosque de Paz Lodge Bajos del Toro
This delightful family-run lodge has 9 comfortable bedrooms and great home cooking. It is set in a 700ha private cloud forest reserve, and seven well-maintained trails of varying length and steepness run from the lodge through the forest. Nectar feeders have been hung in the garden, attracting good numbers of hummingbirds, while butterflies are drawn to the pretty flowering plants. A lovely option for those wanting to relax and experience the true nature of a cloud forest.

Mawamba Lodge Tortuguero
Set in 15 acres of tropical gardens within walking distance of Tortuguero village, this lodge's 54 rooms are basic wooden cabins that blend well with the surrounding forest, from which strange jungle sounds emanate as you lie abed. Private nature trails and a butterfly farm introduce the flora and fauna of the forest. There is a red-eyed tree frog project by the swimming pool, plus a bar and buffet-style restaurant. Stays include all meals with morning and afternoon boat trips through the canals to see the abundant wildlife. We like Mawamba's moderate size and good location, but it can be busy and there are alternatives that are worth considering.

Casa Corcovado Osa Peninsula
Tucked away in the rainforest by the Pacific, Casa Corcovado Jungle Lodge is a 170 acre private reserve bordering Corcovado NP (p9). Designed and built by a US naturalist with over 25 years of local experience, this unique lodge blends in with its jungle environment and is an ideal base for an in-depth rainforest experience. The 14 individual bungalows offer unpretentious comfort, with beautiful stained glass doors and handmade wooden shutters. There is a restaurant, bar and spring-fed pool.
It is reached by motorboat, first through mangroves, then on the open sea, arriving at a beach for a wet landing—a journey that's an experience in its own right.

Lapa Rios Osa Peninsula
Set in a private nature reserve spread over 1000 acres of lowland tropical rainforest, Lapa Rios overlooks the point where the Golfo Dulce meets the wild Pacific Ocean. The eco-lodge has won many awards for its sustainable practice and is built in harmony with the surrounding forest and beach environment. The main building and 16 bungalows line three ridges connected by walking paths and steps. Built with local materials on a steep hillside over 350ft above the sea, Lapa Rios catches the cooling tropical breezes. This lodge is legendary in Costa Rica. A great eco-luxury option and ideal for honeymoons.

Maquenque Ecolodge Boca Tapada, San Carlos
A wonderful family-run eco-lodge set in its own private 60ha reserve within the Maquenque Wildlife Refuge. Its lowland rainforest is especially good for birdwatchers and for people with a strong interest in Costa Rica's wildlife, from beetles and butterflies to bats and basilisks. The lodge's conservation-minded team, all biologists, guide many of the nature tours (on foot or by boat), which include very productive night walks. The 14 comfortable bungalows sleep up to 4 and have private bathrooms with solar hot water, fan, and deck. Restaurant, bar, swimming pool.

Hacienda Barú Dominical
Hacienda Barú is a 300ha National Wildlife Refuge with a broad range of habitats for nature viewing: pristine beach, wetland, secondary forest and primary forest up on the highland of the coastal ridge. There are two types of accommodation, all with private bathrooms, hot water showers, and fans. There is a swimming pool and an open-air restaurant serving typical Costa Rican cooking.
There is a canopy observation platform, zip line and tree climbing, while escorted birding hikes, a night walk and a mangrove walk are all available locally.

Worth a mention
Selva Verde Sarapiquí MID-RANGE This lodge is a haven for birdwatchers and nature enthusiasts. Built on stilts over the Sarapiquí River, with 40 simple, comfortable rooms and an à la carte and buffet restaurant.
Villa Lapas nr Jacó MID-RANGE Popular with birdwatchers and others due to its proximity to Carara NP. There are 55 rooms in lush tropical gardens. Most guests stay on a full board basis. There are trails, a swimming pool, a canopy walk and a zip line in the grounds.
Rancho Naturalista nr Turrialba MID-RANGE 15-room family-run birders' lodge. Full board basis includes the resident bird guide's services from first light until dark. Good views of Turrialba and Irazú volcanoes.
Savegre Lodge San Gerardo de Dota MID-RANGE Owned by the Chacón family since 1954, this is a superb family-run hotel with 31 comfortable rooms and home-cooked food, including fruit from their own orchard and trout from their own stream-fed pools.
OTS Las Cruces San Vito MID-RANGE A 12 room lodge in the Wilson Botanical Gardens, geared to visiting scientists.
La Ensenada nr Palo Verde NP, Gulf of Nicoya MID-RANGE A family-run working farm with simple wooden cabins, swimming pool and lawns looking out to the Gulf. They offer horse riding, mangrove trips, and forest trails. A beautiful location in which to relax, with great birding around.
Esquinas Rainforest Lodge Golfito MID-RANGE Set in primary rainforest of Piedras Blancas NP in the remote southern zone. Walking, birding, boat and kayak trips. Landscaped gardens, 10 miles of marked trails. Rustic rooms, 'haute cuisine' restaurant, stream-fed pool, small lake.
Cerro Lodge Carara MID-RANGE Birders' lodge on a farm in Tárcoles 10 minutes from Carara NP. Bungalow-style rooms with private bathrooms. Swimming pool, dining room and reception area.
Tiskita Lodge Pavones UPPER RANGE A long-established family-run lodge close to the Panama border, in simple style with dramatically natural rustic rooms.

Where to stay in Costa Rica
Inn keeping: Characterful hotels for touring

**Grano de Oro** San José
Our favourite amongst the more upmarket downtown hotels in San José, Grano de Oro is an elegant place to stay. It is located in the west of the city near the Parque Metropolitano La Sabana and the Museum of Costa Rican Art. Formerly a private Victorian mansion, it has been extended over the years and now has 40 tastefully decorated rooms in three price categories. The hotel's impressive dining room has been lovingly crafted in fin de siècle style and is considered one of the city's top places to dine. There is a small internal patio filled with tropical plants, and a rooftop Jacuzzi with a couple of sun loungers if you tire of sightseeing.

**Finca Rosa Blanca** Alajuela, Central Valley
A lovely place to begin or end your visit to Costa Rica: a small, luxury-priced, eco-award-winning lodge in easy reach of the airport on a cool coffee-growing plateau with a wonderful vista of the Central Valley. Originally built as a fine private home in an eclectic style inspired by Gaudí, each room is unique, with original paintings and sculpture decorating the common areas. There is a spring-fed natural swimming pool, an organic vegetable garden and stables. The 10 acres of grounds are filled with tropical plants, fruit trees and impressive 300-year-old higuerón trees. The honeymoon suite is on two levels with panoramic views.

**Xandari Spa** Alajuela
Xandari was created by architect Sherri Broudy and artist Charlene Broudy to reflect the natural beauty of Costa Rica. There are 22 spacious villas decorated with original art and custom furniture, set apart on a 40 acre coffee and fruit plantation overlooking the Central Valley.
All villas have a private terrace with garden, a walled-in sun area, a bar kitchen, and either a king or two queen beds. There are two swimming pools, a heated outdoor Jacuzzi and a dining room with a panoramic view of the Central Valley. With 4km of scenic trails within its grounds, this is a great place to start or end your trip, and perhaps take a treatment in the lovely spa.

**Vista del Valle** Alajuela, Central Valley
This peaceful hotel is a good option for relaxing final nights before the flight home. Located 25min from the airport and set on a hillside among beautifully planted gardens, the hotel enjoys lovely views across the Central Valley. Accommodation is in self-contained wooden cottages inspired by Japanese design, dotted amongst trees and plants. Two rooms have outdoor showers and private patios. The open-air restaurant is perched on a cliff overlooking the green valley below. There is a swimming pool and an on-site equestrian centre. In the grounds you can enjoy the birds and butterflies, or perhaps follow a self-guided trail to a 300ft waterfall.

**Hotel Bougainvillea** Heredia, Central Valley
Here you'll easily forget that you're just 15min from both downtown San José and the airport. Set amidst the coffee farms of Santo Domingo de Heredia, this family-owned and run hotel has extensive grounds laid out with tall trees and brightened by plentiful flowers. Rooms are a little bland but large, with two double beds, a sitting area and a full range of facilities, most with free wi-fi. Each has a balcony giving views of the mountains on one side and the gardens on the other. There is a pleasant dining room, a swimming pool and tennis courts. A good option for those who prefer to be convenient for San José but outside the city centre.
**Villa Blanca Cloud Forest Hotel** San Ramón
A unique countryside hotel overlooking dairy pastures and pristine cloud forests that support an inventory of flora and fauna similar to that at Monteverde, yet within reasonable driving time of San José airport, so it is possible to stay here on your first night. The hotel has 34 secluded and well-appointed casita rooms, made very cosy by their fireplaces. On the hilltop, the Hacienda, the main house, serves fresh-baked breads, and its buffet menus offer some of the best campesino-style cuisine in the region. The hotel has high sustainable management credentials.

**La Quinta** Sarapiquí
A good choice for visitors interested in birdwatching, hiking, river rafting and a peaceful stay in a homely setting. It has received high accolades and certification for sustainable tourism. On the doorstep are Braulio Carrillo National Park and extensive primary tropical rainforest, the OTS La Selva reserve, world class river rafting, and mountain biking trails. The hotel is run by a Costa Rican family who do everything possible to make each guest's stay a very pleasant one. They have transformed a working farm into a haven of secluded cabins, with gardens, frog and butterfly areas, a freshwater pool and extensive trails (wheelchair accessible). Rooms are basic but comfortable, and the cooking is very good. It's a child-friendly option, too.

**El Silencio** Bajos del Toro
Costa Rica's only up-market hotel in a cloud forest setting. It comprises 16 very spacious, stylish cabins with large picture windows, an L-shaped sofa area, a gas fire and a whirlpool tub on the deck. The emphasis is on 'well being', and the chef prides himself on locally-sourced, in-season food from which he creates exciting recipes. With 500 acres of private reserve there are several trails to follow into the cloud forest, some passing beautiful waterfalls (an eco-concierge will help you decide). Other activities include horse-riding, coffee tours and birdwatching.
There is a spa using El Silencio's own natural products, and a yoga studio.

**Monte Azul** Chirripó
Monte Azul is a mountain hideaway boutique lodge in the middle of a lush 125 acre private nature reserve just outside Chirripó National Park. The lodge's grounds are set alongside 1km of the Chirripó river. The 7 suites and spacious rooms have stylish décor, including items from the collection of original contemporary art which marks out this lodge as something special. The level of equipment is high, with fine-cotton bed linen, designer kitchens, and modern bathrooms. Monte Azul also has an emphasis on sustainable ecotourism, and offers plenty of on-site activities and tours, many offering an insight into the local way of life.

---

**Arenal Observatory Lodge** **MID-RANGE** *Arenal*
The reward for a little bone-shaking on a winding, stony track through conifer forest to reach this hotel is its proximity to Arenal volcano and its beautiful views of Arenal Lake. Originally a Smithsonian Institution research station, the lodge provides modest, comfortable standard rooms and more spacious 'Smithsonian' rooms with excellent views of the live volcano and the chance of thrilling displays. There is a swimming pool reached over a short hanging bridge, and a sunken Jacuzzi in a glass gazebo. The hotel is in a private reserve where volcanic earth supports excellent forest. A well marked trail system provides easy access for walks; the lodge area is good for hummingbirds. Other activities include mountain biking, rafting, canopy tours, and riding.

---

**Lost Iguana** **UPPER RANGE** *Arenal*
The Lost Iguana, one of Arenal's boutique hotels, offers luxury in a natural setting. It is an upscale retreat nestling in the jungle on its own 100 acre property. Each of its spacious rooms has fantastic views of Arenal Volcano and the gorge of the Arenal River, and all have private balconies (suites have their own private Jacuzzi on the balcony).
The tasteful décor has a Balinese influence, with colourful artwork, wall hangings and furnishings. You might spend the day walking on trails that guide you through the surrounding forest, relaxing at the double pool with swim-up bar, dining at the open air restaurant, or enjoying a massage at the Dos Rios Spa.

---

**Hotel Belmar** **MID-RANGE** *Monteverde*
This Swiss chalet-style lodge perched high on a hill is a good example of the many functional, family-run, mid-priced, wood-built lodges found in and around Monteverde. This particular lodge has especially high ratings for sustainable tourism. The guest rooms are mostly functional, wood-walled and homely, each with a private bathroom. Recently renovated suite-style rooms have balconies and bathrooms with jacuzzi-style baths. There are great views from its mountainside position down to the far distant Gulf of Nicoya, with breathtaking orange-red sunsets on clear nights. The simple restaurant offers good meals for the hungry traveller.

---

**Hacienda Guachipelín** **MID-RANGE** *Rincón de la Vieja*
An adventure lodge on a working ranch dating back over a hundred years, at the foot of Rincón de la Vieja volcano in an important area of tropical dry forest. The ranch continues to raise cattle and breed horses on a third of its large area, with the remainder set aside for conservation and replanting. The guest experience is kept plain and simple, with down-to-earth but comfortable cabins, and a country-style bar and eating area. There is an outdoor pool and pleasant views. Service is often fairly rough and ready, especially in high season when the ranch gets busy with day-trippers. Choose quiet dates if you can. The main reasons for staying are the dry forest (p11), the volcano with its fumaroles, hot springs, etc (p12), and the opportunity to experience cowboy life first hand (p13). Walking, riding, ranching, and various adventure activities are available locally.
---

**Pacuare Lodge** **TOP RANGE** *Turrialba*
One of our favourite lodges in Costa Rica: a unique jungle getaway deep inside enchanting tropical forest on the edge of the Pacuare river. It is an award-winning ecolodge committed to sustainability, completely surrounded by nature in its purest state. You really do feel wrapped in the heart of the jungle here. There are 2 'jungle' rooms, 12 'river view' suites and 5 'linda vista' suites. A honeymoon suite set high in the canopy has a private plunge-pool and a unique hanging bridge. There is no electricity; everything is lit by candlelight, which is very romantic. Getting there is an adventure in itself, either by whitewater rafting or by 4WD followed by a river crossing by gondola. The restaurant, which looks over the river, is magical at night and the food can be superb. For something very special ask to dine at 'El Nido', a platform in the treetops accessed by zipline! There is a riverside spa offering massages and treatments.

---

**Oxygen Jungle Villas** **UPPER RANGE** *Uvita*
Hot-listed contemporary boutique hotel, for couples only, in hills above lovely natural beaches on the hip part of the Pacific coast beyond Manuel Antonio NP. 12 gorgeous Balinese style villas, each with private terrace, king-size bed, a/c, and wi-fi. Great views from most parts of the property over the distant Ballena marine national park – good for whale-watching in season. Stylish restaurant, bar, lounge, lively infinity pool, large sun decks, gardens, jungle trails; spa, yoga. Surfing, riding, whale-watching, and wildlife trips to the Osa Peninsula & Hacienda Barú are all bookable with the lodge.

---

**Almonds and Corals** **UPPER RANGE** *South Caribbean*
This is a very special ecolodge where you are both in comfort and very close to nature. In dense rainforest behind Caribbean beaches, the lodge's elegant tented guest rooms are raised on stilted platforms linked by winding boardwalks.
With solar powered electricity, a romantic atmosphere is created in each room, with a four poster bed, Jacuzzi, fan and a separate area with toilet and shower. It is the creation of Aurora and Marco, who came to camp in the area 25 years ago and dreamed of their perfect lodge for lovers of nature: a superb example of sustainability and comfort working hand in hand. You won't need an alarm call: howler monkeys, bush chickens and a variety of birds will ensure that you are awake as dawn breaks.

---

**Lagarta Lodge** **MID-RANGE** *Nosara, Nicoya*
Set on a hill 40 metres above sea level, Lagarta Lodge offers a wonderful view of the coastline of Ostional, the mountains, the mouth of the Río Nosara and the forest of the Reserva Biológica Nosara. The lodge has just 6 rooms, which can sleep up to 4 people each. The rooms are basic but have exceptional views, and each has either a balcony or a terrace. There is a small swimming pool, and the restaurant and bar are among the best places in the area to watch the spectacular sunsets. The Swiss owners are extremely friendly. This is a haven for nature lovers, with access to the private reserve on the doorstep. Agoutis, howler monkeys and many species of birds are all spotted (or heard) regularly.

---

**Worth a mention**
**Makanda by the Sea Quepos** **TOP RANGE** Perfect for honeymooners or a romantic break. There are spectacular views from the 11 rooms and the swimming pool towards Manuel Antonio NP and the open ocean. All rooms are tastefully decorated, with king size beds, sofas and kitchenettes. We recommend this hotel for privacy and a personal touch. No children.
**Cristal Ballena Dominical** **UPPER RANGE** Mediterranean-style small resort hotel on a hillside between forest and Pacific, near Uvita and Marino Ballena Marine NP. All 19 rooms have wonderful ocean views. Large swimming pool among lawns attractively edged by traveller palms.
**Danta Corcovado Osa Peninsula** **MID-RANGE** Sustainably run lodge on a small farm with optional tours into Corcovado NP and the Guaymi indigenous reserve; good for nature lovers, not for the bug-phobic. **Hacienda La Isla Sarapiquí** **MID-RANGE** Small colonial-style lodge set in gardens in the foothills of Braulio Carrillo NP. 14 deluxe rooms furnished in antique style. Swimming pool, al fresco restaurant. Riding and canopy tours are bookable at the hotel. **Rio Celeste Hideaway Tenorio Volcano NP** **UPPER RANGE** Comfortable small hotel set within the rainforests of Tenorio NP. 26 casitas with private balconies, pool, restaurant. Riding, biking and walking options, including the Rio Celeste falls. **Silencio del Campo Arenal** **MID-RANGE** Small resort-style hotel close to La Fortuna with 20 spacious stand-alone villa-style rooms, each with views of the volcano. Two swimming pools (one for adults, one for children), on-site hot springs and spa, and a restaurant specialising in traditional Costa Rican cooking such as casado and gallo pinto. Life’s a beach Costa Rica has a wide choice of pleasant beaches that are just right for a few days’ stay. NICOYA PENINSULA The Nicoya Peninsula on the north Pacific coast enjoys Guanacaste’s long dependable dry season from December to May, and has a good choice of yellow-sand beaches. Impressive sunsets over the Pacific are a feature almost everywhere. The good beaches have all attracted foreign visitors, including expatriates in search of paradise, giving a cosmopolitan atmosphere, but most nonetheless remain agreeably low-key and relaxed. Tamarindo is a fishing village that has grown into a busy beach resort with a choice of restaurants, bars and shops. Its attractive long, wide, yellow-sand beach is a favourite with surfers and windsurfers. Leatherback turtles nest at Playa Grande from October to March (but have become rare lately); wildlife trips go into nearby mangroves and wetlands.
North of Tamarindo, upmarket Playa Ocotal and neighbouring budget Playa El Coco are good for divers: both are in reach of the top offshore dive sites of the Bat Islands (bull shark) and Catalinas (manta ray in April). Playa Hermosa has a hide-away feeling but there’s a choice of restaurants and bars, and the sea is good for swimming rather than surfing. To the south lie the dark-sand beaches of Playa Potrero, a peaceful area with a handful of beach-front hotels at the end of a long bumpy road off the main highway, worthwhile if you stay a few days or more. At Playas Nosara three wild beaches separated by hilltops form a spread-out community of mainly ex-pats with an off-beat ‘end of the road’ feel. Nearby a private reserve is home to howler monkeys, coati and racoon. Olive Ridley turtles lay here between August and December, with mass nestings or arribadas typically during the last quarter of the moon. Playa Sámara is set in a deep horseshoe bay with a wide sandy beach protected by a reef. It is a fishing village that has grown into a beach resort popular with swimmers, windsurfers, backpackers, and young Costa Ricans. The village supports a few beachside snack bars and a handful of cafes and restaurants. Playa Carrillo, 15min south of Sámara, is a quiet, attractive, beige-sand beach in a semi-circular bay of calm water protected by a reef, backed by a boulevard of shady palms, and is a good spot to watch the sunset. By Nicoya’s southern tip, Montezuma is a friendly, laid-back village with an expat eco-consciousness, with boutiques, bars, and a limited selection of restaurants (lots of veggie options). Beyond rocky coves lie wonderful wild beaches backed by forest. There are walks and horse rides on the beach to waterfalls. Further on, Santa Teresa and Malpais attract surfers and young travellers plus some upscale glitterati, with a mix of lively bars, eateries and luxury villas dotted along a bumpy road set back from the ocean.
In the other direction, Tambor is a secluded getaway, with calmer sea and a pristine palm-backed beach. CENTRAL PACIFIC Beautiful Manuel Antonio National Park (p9) has verdant forest opening on to pristine white sands (closed Mondays—go to small local beaches), and is deservedly much visited. Most hotels are between Quepos and the park along 7km of road that runs through the forested hills; there’s a mini real-estate boom underway here. Quepos itself offers restaurants, cybercafés and lively bars. Esterillos Este, a little further from the park, is much quieter: a stunning undeveloped stretch of sand with a few hotels and restaurants but not much more. Jacó, with its discos and nightlife, is the closest to San José and is popular with surfers, backpackers and weekenders; rip tides make swimming inadvisable. SOUTH PACIFIC Dominical has a number of attractive forest-backed beaches. Those further south at Uvita and Playa Tortuga are more secluded and situated close to Ballena National Park, a good area for snorkelling and birdwatching. There are strong breaks at Dominical, making it popular with surfers, but swimming is not recommended. SOUTH CARIBBEAN The wild beaches between Manzanillo and Cahuita, around Puerto Viejo, are certainly beautiful though not safe for swimming due to strong currents. Coral reefs offer good snorkelling when sea conditions are right. Nearby, one of the world’s top surfing beaches produces a wave called the ‘Salsa Brava’, featured in the movie Endless Summer II. Whales and dolphins Costa Rica’s waters see around 40 species of dolphin, porpoise and whale—nearly half the world’s total. The Osa Peninsula is particularly favoured by a number of species including Bottlenose, Spinner, Spotted and Common Dolphin. If you are lucky you will be joined by dolphin pods playing at the front of your boat across Golfo Dulce or out to Caño Island. Orcas, with their distinctive black and white markings, can be seen here too, as can Sperm, Blue and Pilot Whales.
Humpbacks migrate to Costa Rica from colder northerly waters, arriving between November and December to mate. They remain until March around the southern Pacific coast to give birth. Where to stay in Costa Rica Beach hotels to fall in love with There are many options for hotels by the sea in Costa Rica. Which is right for you will depend on your preferences for style of hotel and type of beach. Many beaches are wild or not suitable for swimming because of currents. The hotel’s own pool is then especially important if you plan to just relax, and there are some wonderful examples to choose from. The hotels shown below are all on the sands or within a few steps; many others are not, either because the beach is protected or because a higher location brings superb views. We have selected them from the many that we have visited ourselves. **Sugar Beach** **Playa Pan de Azucar, Nicoya** This away-from-it-all hotel is on Playa Pan de Azucar, a secluded pristine beach north of Potrero Bay, down a very bumpy road. It suits those looking for seclusion and privacy; perfect for honeymooners and couples. Perched above the ocean, it is in a great spot to catch soft sea breezes and to watch the sunsets. There are 25 rooms of different configurations, many with sea views and large terraces. There is a small pool and a lovely open restaurant serving great local fish. Guests report seeing plenty of wildlife within the grounds, especially iguanas and exotic birds, but mainly they rave about the beach, which is virtually empty all year round. **Capitán Suizo** **Tamarindo, Nicoya** Over many years our clients have given excellent feedback about this hotel, which is one of the best options in Tamarindo, and wonderful for families. It is small and attractive, right on the beach, with a beautiful free-form pool. Within the gardens you will see birds, monkeys and iguanas. There are 8 thatch-roof bungalows and 22 suites.
The ground-floor rooms have air-conditioning and ceiling fans. The open-air restaurant serves international cuisine with a Swiss twist. The hotel is within walking distance of central Tamarindo but in a tranquil area. Early booking is advised, especially for Christmas, Easter, July and August. **Punta Islita** **Punta Islita, Nicoya** In a remote spot with magnificent views over the Pacific, Punta Islita is one of the Small Luxury Hotels of the World, and certified as a Sustainable Standard Setter by the Rainforest Alliance. It has a long-standing reputation as one of the most upscale hotels in the country. Its design is elegant yet natural, with tiled floors, wooden furniture and vibrantly coloured textiles. Guest rooms are dotted around the property, many with ocean views. There is a spa and a picture-perfect infinity pool with stunning views out to sea. Activities arranged by the hotel include golf, riding and nature walks. A popular choice with honeymooners. **Ylang Ylang** **Montezuma, Nicoya** Something special. A 10min walk along the sands from the village of Montezuma leads you to this secluded paradise on a forest-backed beach. There is no road access, but porters from the lodge are on hand to carry your bags. A winding path fringed with exquisite heliconia and ginger plants takes you from the restaurant, past the sweet pool, to the cluster of thatch bungalows nestled amongst the foliage. Each of the 7 private bungalows has a terrace and glass-free windows plus a carefully screened outdoor shower. Rooms with views of the beach and fully enclosed private bathrooms are also available. **Tango Mar** **Tambor, Nicoya** Set in extensive grounds on Nicoya’s southern shore, Tango Mar has an enviable beachfront setting.
18 rooms are within yards of the sands, 5 suites modelled on Polynesian-style wooden cabanas are just 20m from the beach, while 12 individual villas are set apart on a hillside away from the main buildings and reached by self-drive golf buggies – the view from the cliff near these suites, looking down the palm-fringed coast and out to sea, is quite breathtaking. Overlooking the beach are two small swimming pools and an open-air bar, while the main restaurant faces the water’s edge. Golf, spa facilities and riding are available on site. **Bosque del Mar** **Playa Hermosa, North Pacific** A family-run beach-front boutique hotel at the quieter end of Playa Hermosa, about 30min from Liberia airport. There are 32 well-appointed suites in 4 categories. Each has a telephone, bathroom with hot water, living room, mini bar, a/c, ceiling fan, flat-screen cable TV, and terrace. On-site facilities include a beach-front lounge bar and restaurant, swimming pool and jacuzzi, spa services, dive centre and locally bookable tours. The hotel’s à la carte restaurant Nirumi serves a fusion of Costa Rican and international cuisine with plenty of fresh seafood. **Alma del Pacifico**, Esterillos Este Formerly Xandari Pacifico. Set directly on the beach. Spacious rooms in a colourful Mexican theme. The al fresco restaurant has views to garden and beach, and there is a swimming pool and spa. A good choice for honeymooners. **Cala Luna**, Tamarindo Beautiful boutique hotel 10min from Tamarindo's busy beachfront, 2min from a quieter beach. Lively pool, al fresco dining. 20 rooms plus villas with private pools. **Clandestino**, Parrita Small 'barrel' boutique hotel on the sands of the beautiful Rancho Palo Seco beach. 12 rooms, pool, bar, restaurant, wifi. Carara and Manuel Antonio are in reach. **Playa Espadilla**, Manuel Antonio Close to Manuel Antonio, one of the few mid-range properties on the beach. En suite rooms with a/c in a 2-storey building. Restaurant, pool, bar, tennis court.
**Tropico Latino**, Santa Teresa Directly on the beach, with palms shading an ample pool. Rustic cottages with sea or garden views, all en suite, with a/c, fan and porch area. Italian-leaning restaurant, spa. **Bahia del Sol**, Potrero, Nicoya Quiet area looking over a bay, one of the few beaches in Costa Rica suitable for swimmers (though the sand is seal-coloured). Lovely freshwater swimming pool. **Cariblue** near Puerto Viejo, Caribbean coast Between the jungle and the beach, colourful wooden bungalows with verandahs and hammocks set around a large tropical garden. Bar and restaurant with seafood specialties. Good for a 2-3 night stopover. Birdwatching in Costa Rica Costa Rica attracts birdwatchers at all levels, from beginners new to the neotropics to experts chasing rarities and endemics. COSTA RICA’S BIRDS Costa Rica can boast more than 850 species of birds (including a high number of regional endemics) in an extremely small area, approximately the same size as Wales. It is one of the most biodiverse places on the planet thanks to its position between North and South America, its tropical climate, and differences in altitude and habitat. Many of Costa Rica’s year-round avian inhabitants are colourful, tropical varieties, such as hummingbirds, parrots, toucans and trogons. Others are drab, shy and secretive like the antbirds and woodcreepers. From December to April you can add winter-resident migrants from North America to your list. There is an excellent field guide and a good site guide with sketch maps of birding trails and local species lists. Lodges are good, travel times short, and local guides can usually find most of their birds. Birdwatching with Geodyssey We have organised trips for birdwatchers to the neotropics for over 20 years, with many leading neotropical specialists using our unrivalled services.
Planning your birdwatching trip to Costa Rica - **Local driver, self-guided** a local driver takes you from site to site, you spot and identify your own birds - **Self-drive, self-guided** you drive yourself from site to site, spot and identify your own birds You can also do it yourself! With a flight, hire-car, accommodation vouchers, a guide book and a field guide, you are all set for your own birding trip. Take advantage of our very economical Freedom Self-drive scheme (p21) and make up your itinerary as you go, or choose Prebooked Self-drive for a fixed itinerary designed for you in advance. Key birding sites - **NE Lowlands** La Selva is a 1500ha reserve run by the Organisation for Tropical Studies. 60% is primary rainforest, the rest a mix of secondary rainforest, abandoned pasture, swamp, and old cacao, laurel and peach palm plantations. Elevations range from 35m to 200m. 480 species are listed and it is one of the best places to pick up those that are hard to find elsewhere. The old-growth forest is good for tinamous, antbirds, wrens and woodcreepers; forest edges bring tanagers, orioles, woodpeckers, etc. Notable specialties include Red-fronted Parrotlet, Tawny-chested Flycatcher, and Stripe-breasted and Black-throated Wren. Higher, Virgen del Socorro is good for warblers, flycatchers, honeycreepers and hummingbirds; its specialties include Black-crowned Antpitta, Blue-and-gold Tanager, Red-headed and Prong-billed Barbet, Emerald Tanager, Tawny-capped Euphonia, Black-faced Antthrush, Ocellated Antbird, Brown-billed Scythebill, Black-crested Coquette and Green Thorntail. In Braulio Carrillo NP at Quebrada Gonzalez, Zeledon’s Tyrannulet, Tawny-capped Euphonia and Sooty-faced Finch can be found. - **Arenal Volcano** Arenal Observatory Lodge has good birding in protected forest.
Costa Rican specialties here include the endemic Coppery-headed Emerald, White-bellied Mountain-gem, Lattice-tailed and Orange-bellied Trogon, Black-thighed Grosbeak, Zeledon’s Tyrannulet, Bare-necked Umbrellabird and Sooty-faced Finch. **Mid Pacific Lowlands** Carara Biological Reserve occupies a transition zone between primary tropical dry forest and primary evergreen forest, so it is possible to find White-throated Magpie-jay, Rufous-naped Wren, Hoffman’s Woodpecker, Rose-throated Becard, Fiery-billed Aracari, Riverside Wren, Black-bellied Wren and Black-headed Antshrike all within the same area. Highlights of Carara are the Scarlet Macaw and an active Orange-collared Manakin lek. It also boasts Plain Xenops, White-whiskered Puffbird and five species of trogon. Nearby, at the mouth of the River Tárcoles, the marshes are rich in waterfowl and wading birds. **Talamancan mountains** San Gerardo de Dota is year-round the most reliable place in Costa Rica for Resplendent Quetzal. Between 1500 and 2500m the forest is mainly oak, and above the tree line (at 3000m) on Cerro de la Muerte lies páramo, with shrubs, bamboo and tree ferns. Clouds and fog are common, usually developing in the afternoon. This is the place for the near-endemics Volcano Junco and Yellow-winged Vireo plus Hairy and Acorn Woodpecker, Long-tailed and Black-and-yellow Silky-flycatcher, Black-throated Green Warbler, Ochraceous and Timberline Wrens, Sooty Robin, Sooty-capped Bush-tanager, Flame-throated Warbler, Flame-coloured and Summer Tanager, Golden-browed Chlorophonia, Blue-banded Euphonia, Blue Seedeater, Black-billed Nightingale-thrush, Black-faced Solitaire, Collared Trogon, Green-fronted Lancebill, Black Phoebe, Black-capped Flycatcher, Silvery-throated Jay, Large-footed, Yellow-thighed and Peg-billed Finch and Zeledonia. Its many species of hummingbird include Magnificent, Volcano, Green Violet-ear and Fiery-throated Hummingbird.
**Turrialba region** Rancho Naturalista is a birders’ lodge at 1000m offering a mix of mountain and lowland species and access to Tapantí NP. Notables include Black Guan, Chiriquí Quail-dove, Prong-billed Barbet, Streak-breasted Treehunter, Golden-bellied Flycatcher, Black-faced Solitaire and Golden-browed Chlorophonia. Other special birds in the region are Grey-tailed Mountain-gem, Blue-and-gold and Spangle-cheeked Tanager, Bare-shanked Screech-owl, Sulphur-winged Parakeet and Black-bellied Hummingbird. The lodge’s resident ornithologists regularly see difficult species such as Chestnut-headed Oropendola, Purplish-backed Quail-dove, Black-crested Coquette, Green Thorntail, Snowcap, Tawny-throated Leaftosser and Dull-mantled Antbird. **Monteverde and Santa Elena** 452 species have been recorded in the Monteverde area. Resplendent Quetzal move about in the forest reserve during the year in search of food. There is a Hummingbird Gallery where typically 7 species of hummingbird can be seen at the feeders, most notably the endemic Coppery-headed Emerald. Nearby Santa Elena Reserve gives the option of a Canopy Walk on a network of 7 suspension bridges and trails allowing different observational levels in the cloud forest. Costa Rican specialties found here include Black-breasted Wood-quail, Black Guan, Buff-fronted Quail-dove, Fiery-throated Hummingbird, Magenta-throated Woodstar, Scintillant Hummingbird, Orange-bellied Trogon, Prong-billed Barbet, Ruddy Treerunner, Streak-breasted Treehunter, Golden-bellied Flycatcher, Dark Pewee, Zeledonia, Golden-browed Chlorophonia, Spangle-cheeked Tanager, Black-faced Solitaire, Sooty-capped Bush-tanager, Black-thighed Grosbeak and Slaty Flowerpiercer.
It is also a good location for Grey-breasted Wood-wren, Long-tailed Manakin, Slaty-backed Nightingale-thrush, Three-striped Warbler, Spotted Barbtail, Three-wattled Bellbird, Yellow-bellied Elaenia, Sulphur-bellied Flycatcher, Emerald Toucanet, White-eared Ground-sparrow and Rufous-browed Peppershrike. **NW Lowlands** Palo Verde NP comprises seasonally dry forest and extensive wetland vegetation bordering the Tempisque River near its estuary in the Golfo de Nicoya. From September to March, several thousand herons, storks, egrets, grebes, ibis, ducks and Northern Jacanas flock to the lagoons and surrounding areas to feed and mate. This is the only area in Costa Rica to see Jabiru, Glossy Ibis, Fulvous Whistling-ducks, Bay-winged Hawks and North American waterfowl. Nearby La Ensenada has similar access. **South Pacific** Close to the Panama border this is the region where, at mid-high elevations, most of the birds with very restricted distributions are found. Good birding locations include Térraba, Las Cruces and Las Tablas. Look for Riverside and Black-bellied Wren, Red-breasted Blackbird, Thick-billed and Spot-crowned Euphonia, Black-headed Brush-finch, Crested Bobwhite, Band-rumped Swift, White-crested Coquette, Beryl-crowned and Snowy-bellied Hummingbird, White-tailed Emerald, Baird’s Trogon, Golden-naped and Red-crowned Woodpecker and Tawny-winged Woodcreeper. Locally at Goto Bay, Ruddy Foliage-gleaner, Rosy Thrush-tanager and Lance-tailed Manakin occur. In the lowlands around Golfito and the River Esquinas the endemics Black-cheeked Ant-tanager and Mangrove Hummingbird are to be found along with other specialties of these lowlands such as Red-throated Caracara, Marbled Wood-quail, Fiery-billed Aracari, Turquoise Cotinga and Pale-breasted Spinetail. --- **The Birds of Costa Rica** This is a near-perfect itinerary for a holiday dedicated to birdwatching, designed for first-timers to Costa Rican birds.
It features a combination of key habitats that produces long lists, comfortable accommodation in enjoyable locations, and minimum travelling. **San José** Day 1 Met on arrival, you are driven to a mid-range hotel. **Carara Reserve** Day 2 BL Dawn birding in the hotel’s lovely and productive grounds, then drive to the Tárcoles River on the mid-Pacific coast for 3 nights at Cerro Lodge or Villa Lapas, both birders’ lodges close to the mangroves and the Carara Reserve—a boundary between tropical dry and humid forest offering a mix of species including White-throated Magpie-Jay, Stripe-headed Sparrow, Fiery-billed Aracari and Black-bellied Wren. This is one of two sites in Costa Rica for Scarlet Macaw, reliably seen from the bridge over the Tárcoles at dusk or dawn. Days 3-4 BL Two full days’ birding in the Carara Reserve and Tárcoles. The reserve’s notable birds also include Hoffman’s Woodpecker, Orange-collared Manakin, Panama Flycatcher and Black-headed Antshrike. Look for Zone-tailed Hawk, Gray-chested Dove, Long-billed Hermit, Purple-crowned Fairy, Blue-throated Goldentail, Baird’s Trogon, Long-tailed and Tawny-winged Woodcreepers, Dusky and Chestnut-backed Antbird, Dot-winged Antwren, Spectacled Antpitta, Black-faced Antthrush, Golden-crowned Spadebill, Greenish Elaenia, Slate-headed Tody-flycatcher, Northern Bentbill, Rose-throated Becard, Rufous-breasted, Black-bellied and Riverside Wrens, and Western Tanager. The Tárcoles river banks bring Collared Plover, Spotted and Western Sandpipers, and the mangroves near its mouth the endemic Mangrove Hummingbird plus Brown Pelican, numerous egrets and herons, White Ibis, Osprey, Plumbeous Kite, Mangrove Black-hawk, Rufous-browed Peppershrike and Mangrove Vireo. **Palo Verde area** Day 5 BL After a final early morning in the Carara area drive up the coast to La Ensenada Refuge by the Gulf of Nicoya, for 2 nights.
La Ensenada is a 1000-acre cattle and horse ranch whose birds are similar to those of the nearby Palo Verde NP but with better access and accommodation. An afternoon on the ranch’s trails by a variety of aquatic habitats—freshwater lagoon, saltwater lagoon, mangrove and river, plus some forest habitats—should produce pelicans, herons, parrots, parakeets, bellbirds, trogons, kingfishers, White Ibis, Great Egret, Montezuma Oropendola, Double-striped Thick-knee and possibly Jabiru. Migrant shore birds are also seen. Day 6 BL Full day birding the trails at La Ensenada. This entire region is most productive in the December-April dry season. At other times substitute 2 nights at Tortuguero, which we would place at the start or end of this itinerary. **Tapantí and to San Gerardo de Dota** Day 13 BL Birding in the Tapantí montane forest where a key target is Rufous-rumped Antwren, then drive onwards to San Gerardo de Dota to either Trogon or Savegre Lodge. **San Gerardo de Dota & Cerro de la Muerte** Day 14 BL A memorable morning’s birding for Resplendent Quetzal. Ascend to Cerro de la Muerte for prime species: Volcano Hummingbird, Black-capped Flycatcher, Ochraceous Pewee, Red-fronted Parrotlet, Barred Parakeet, Timberline and Ochraceous Wrens, Yellow-winged Vireo, Wrenthrush, Volcano Junco, Blue Seedeater and Peg-billed Finch. Day 15 BL A morning’s birding at San Gerardo de Dota, then return to San José for a final night in a mid-range hotel. **San José** Day 16 BL Driven to the airport for your flight home.
Panama offers some of the most amazing experiences in Central America. For us, it’s close behind Costa Rica for wildlife and nature you can see easily, coupled with strong tribal cultures, a vivid history that’s full of surprises, and a variety of beaches. Travelling through the Panama Canal on a ‘partial transit’ day trip brings its amazing technical achievement to life. With all this on offer, plus better and better facilities and real enthusiasm from Panamanians to show off the very best of their country, Panama is a fabulous choice for adventurous travellers. Nearly 30% of Panama’s land is protected in national parks, forest reserves and wildlife sanctuaries, which provide great opportunities to see the country’s great wealth of flora and fauna. Panama’s contorted shape and its location at the southernmost range of many North American species and the northernmost range of many South American species create a melting pot rich in animal and plant life. There are around 950 species of birds (more than in North America and Europe combined), plus 220 mammals, 354 reptiles and amphibians, and more than 10,000 species of plants. But it’s not only the nature that pulls you in. Panama has a fascinating history as a Spanish colony and transit route for the conquistadors’ riches from Peru, often plundered by pirates, with some evocative sites to visit. Panama City combines a colonial past with a brash modernity that springs from its new standing as Latin America’s leading trading centre. Next to the city, the Panama Canal, the key to this success, is a truly amazing engineering achievement, and the trials and tribulations of building it make a remarkable story that lives long in the mind.
There are few countries where tribal communities of indigenous peoples survive with such fortitude, struggling against the odds to preserve thousand-year-old cultures and a future for their children. The largest, the Kuna, have a degree of autonomy over their homelands, which include the beautiful islands of the Kuna Yala (San Blas) archipelago. They warmly welcome visitors for a taste of their paradise at delightful simple lodges purpose-built by the communities. With 1500 islands and 1000 miles of Caribbean and Pacific coast it’s easy to find a white-sand beach to get away from it all, or some great places for snorkelling, diving or surfing. Molas The traditional costume worn every day by most Kuna women is very striking. The most important element is the colourful blouse, or mola, sewn with reverse-appliqué designs in bright contrasting colours. Patterns often depict geometric shapes, Kuna symbols, stylised birds, fish, etc. Generally 3–5 layers of fabric are used and very fine stitching is employed for the best garments, worn by the older ladies with magnificent authority and charm. More often than not, the appliqué work is reduced to a square panel worn on the front and back of the blouse. These mola panels are also sold separately to visitors. You can find them in the villages of the Kuna Yala and now and again in craft markets and shops in other parts of Panama. Many are designed with visitors in mind, with especially eye-catching designs that may be quite loosely stitched. You can buy single squares, or panels made up into wall hangings, tablecloths, etc. Buying them is an excellent way to support Kuna women without diluting the community’s stock of ‘real’ molas. They are easy to pack, and a great way of bringing back home a lively taste of Panama. Around Panama Step beyond Panama City and the narrow Canal zone and you are quickly in two very different Panamas.
To the west low mountains, rolling landscapes and some great beaches echo Costa Rica, while to the east lie the magical islands of the Kuna Yala/San Blas and the intense wilderness of the Darién. PANAMA CITY One in every three Panamanians lives in Panama City. It’s a sleek metropolis that curls impressively around a wide bay facing the Pacific ocean, thriving on its role as one of the world’s great trading gateways: Latin America’s Hong Kong or Singapore. The west of the city presses against the Panama Canal itself, where the spectacular Bridge of the Americas brings the Panamerican Highway from the rest of Central and North America to docks that busily load and unload freight for the Canal. Towards its eastern outskirts lie the ruins of the original city, Panama la Vieja, where the riches of the Incan empire first arrived by ship from Peru and were carried by the ‘royal road’ to the Caribbean for onward shipment to the Spanish court—until, that is, the city was comprehensively sacked in 1671 by the Welsh pirate Sir Henry Morgan. After this onslaught the city was rebuilt at Casco Viejo, quite near the mouth of today’s Canal, this time surrounded by a high stone wall and moat. Thus protected, Casco Viejo flourished unscathed for centuries, with wonderful colonial buildings in Spanish, French, and Italian styles crowding narrow streets and small plazas that echo its contemporary, Old Havana. As in Havana, the ravages of time have taken their toll since Casco Viejo’s heyday, but restoration projects have brought new life (and some boutique hotels), and the area has received UNESCO World Heritage status. All great cities must have a great park, and Panama City’s outshines all comers.
Its Metropolitan Natural Park brings tropical forest to within 10min of the heart of downtown, with well-maintained trails, 250 bird species, iguanas, tortoises, sloth and anteater, and the Smithsonian Tropical Research Institute, from whose canopy crane you can survey life in the tree-tops. The Smithsonian is also involved in Panama City’s most notable new building, the Biodiversity Museum: Panama Bridge of Life (or just ‘BioMuseo’). Designed by Frank Gehry (Guggenheim Bilbao), and much delayed in construction, this remarkable building is sited at Amador on the Pacific end of the Canal looking back to the City. Eight galleries describe the origin of the isthmus, its impact on evolution, and the huge biodiversity that has emerged. THE PANAMA CANAL Formally handed back to Panama at midnight on 31 December 1999, the Panama Canal’s revenues now benefit the Panamanian economy—these days one of the strongest in Latin America. Today around 14,000 ships (about 5% of world shipping) pass through the Canal’s 80km each year, with a double lock at Miraflores on the Pacific side, and a single lock at Pedro Miguel, to reach Gatún Lake, the highest point, before the triple Gatún lock that connects to the Caribbean. A new ‘third’ set of locks by-passes the old locks at both ends of the canal to accommodate even larger ships. The story of the Canal is so vivid (see box), and the engineering feat so awesome, that a ‘transit’ through the Canal, or part of it on a day trip, is a must for anyone. AROUND THE CANAL National parks Among the Canal’s many surprises are its natural surroundings of rigorously protected forest. Deforestation would reduce the rainfall flowing into the rivers that feed the Canal and its locks, and the consequent erosion would quickly block its channels with silt. National parks protect much of the watershed that feeds the Canal. Soberania NP covers the forested hillsides east of the Canal.
It is a paradise for birdwatchers, with a record 525 species listed in a single 24 hour period in 1996. The magnificent forest of cotton, cupu and oak trees is also home to 100 mammal species, 55 amphibian and 79 species of reptile, including agouti, cotton-topped tamarin monkey, caiman, collared peccaries, night monkey, jaguar and white-tailed deer. A former USAF radar tower, now converted to a birders’ lodge, perches on a hill with fantastic 360° views into and over the forest canopy. The gently rolling landscape of Las Cruces National Park bordering Soberania to the south boasts its fair share of flora and fauna and is renowned for palm and cotton trees that burst into colour in April and May. The recently established San Lorenzo National Park to the west of the Caribbean end of the Canal protects a mix of habitats, mostly wet lowland forest, and is similarly bird-rich but harder to access. Gatún Lake, created by the damming of the mighty Chagres river to form the central section of the Canal, is now teeming with wildlife and well worth a visit. Barro Colorado Island within the lake is world famous for the study of tropical nature. A little further afield, Chagres NP protects the river’s headwaters, and offers a preview of the great forests of the Darién that lie beyond. Impressive species such as the harpy eagle and tapir inhabit its rugged landscape, accessible by road from Panama City. Lush valleys and boisterous rivers stand in contrast to towering craggy peaks, the highest of which is Cerro Jefe, at 1007m. Colon and Portobelo There are three ways to cross to Colon at the Caribbean end of the Canal: by the Canal itself, by a single oft-clogged highway, or by train—a memorable hour-long journey closely following much of the Canal. It crosses Gatún Lake on causeways, and has impressive views at many stages, including all the locks. It was originally built to bear massive amounts of spoil from the Canal’s excavations to construct harbours at either end. 
Colon itself is chaotic and best avoided, but a pleasant drive along the coast brings you to the pretty harbour town of Portobelo which stood at the Caribbean end of the Royal Road that brought Incan gold across the isthmus from Panama Vieja. Under repeated attack by pirates in the 17th and 18th centuries, the Spanish built a series of forts along this stretch of coast to protect the area, the remains of which still stand sentinel today. The most imposing is San Lorenzo Fort, positioned at the mouth of the Chagres. Jungle rich with birdlife surrounds its well-preserved ramparts with their original cannon—some still on their mounts, others lying scattered. Sir Francis Drake attacked the area three times, the first unprofitably; the second massively successfully (plundering a year’s shipment of silver destined for the Spanish throne), and the third fatally. He lies in a lead coffin out to sea 20 fathoms deep. Portobelo is home of the Black Christ, a wooden statue found by fishermen in the 17th century which is said to have seen off an outbreak of the plague. Amid great festivities the statue is borne through the town on 21 October, with many purple-robed pilgrims walking the roads, some on their knees, in the preceding days and weeks. **CENTRAL PANAMA** **El Valle** Three million years ago a large volcano destroyed itself in a massive eruption that left a crater 5km across, the second largest in the world. Nestling inside the crater is **El Valle**, a tranquil mountain town with a pleasant year-round spring climate: a favourite retreat for wealthy Panamanians being just 2 hours by road from Panama City. It’s a good place to relax and explore, and a delight for walkers with trails leading across flower-strewn mountain slopes, lush cool forests with babbling mountain streams and ancient burial grounds. Attractions include canopy tours, the 80ft El Chorro El Macho waterfall and Pozos Termales hot springs, where you can soak in thermal waters or take a dip in mineral-rich volcanic mud baths. 
Traders from indigenous communities arrive from far afield to set up stall at the vibrant Sunday market, offering traditional handicrafts including the distinctive woven basketry of the Embera people who live in the forests of Darién, and the carved woods of the Ngöbe Buglé from the western provinces of Chiriquí and Bocas del Toro. You can also find the beautiful hand-stitched ‘molas’ of the Kuna community (see panel, p31). **Santiago and the Azuero Peninsula** The Panamerican Highway leads onwards via the bustling city of Santiago between the mountains of the Central Cordillera and the Azuero Peninsula. The peninsula is a bastion of country life and folk culture—small farms, cowboys, fiestas, and ladies in ruffled *pollera* dresses, with a coast that’s dotted with surf beaches. --- **When to visit Panama** - **December-April** Panama’s drier season is generally the favoured time of year for visitors, but usually only Christmas and the week before Easter are very busy. - **May-November** Travel outside the dry season is less popular, but the scenery is greener and prices can be lower. If you are planning to dive or snorkel, water visibility is better during the rainy season as there is less wind. Panama is outside of the hurricane belt. September is generally the wettest month in Panama City. It is not uncommon for Bocas del Toro to receive at least one torrential downpour on a weekly basis throughout the year. - **Temperatures** Panama lies in the tropics just north of the Equator, so temperatures are fairly constant all year round, just varying with altitude. At sea level, temperatures are a tropical 30–35°C, tempered by sea breezes. Up in the highlands, temperatures generally hover around 15–19°C. --- **How the Canal was built** Finally completed in 1914, the Panama Canal stands as a testament not only to engineering ingenuity but also the power of big ideas. 
The French began construction work in 1882 under Ferdinand de Lesseps, the bombastic developer of the Suez Canal, on a similar ‘sea-level’ canal without locks. The huge task of cutting through the low mountains of the isthmus, coupled with ignorance of yellow fever and malaria, brought the French scheme to collapse in 8 years, after the loss of over 20,000 lives and the best part of a billion francs raised from French banks and small private investors. Ten years later, a wily US government bought out the remnants of the French project, manoeuvred the independence of Panama from Colombia, and took a hefty strip of land either side of the Canal under their effective sovereignty. ‘Tough politics,’ Theodore Roosevelt admitted, ‘but it meant the Canal got built.’ WESTERN PANAMA Eventually the highway reaches the city of David below the Chiriquí Highlands—some of Panama’s greatest scenic pleasures with coffee plantations, fields filled with orange trees or dotted with dairy herds, and misty cloud forests with tumbling mountain streams. The towering Volcán Barú, the highest point in Panama at 3475m, offers spectacular views of both the Pacific and Atlantic on clear days. To the east, amid the greenery, the picturesque town of Boquete has an alpine feel harking back to its European heritage. A handful of charming lodges make this a good base for exploring the area, which has some great walking and birdwatching trails. It is one of the most reliable places in the country to see the Resplendent Quetzal. Straddling the border with Costa Rica, La Amistad International Park boasts an extraordinary level of biodiversity and a wildlife population that includes black-handed spider monkeys, tapir and five species of cat. EASTERN PANAMA The Kuna Yala The Kuna Yala (‘land of the Kuna’) extends from the mountains to the Caribbean along most of eastern Panama, and includes the beautiful coral islands of the San Blas (or Kuna Yala) archipelago. 
The Kuna are the largest of Panama’s indigenous groups, numbering about 50,000. Their oral history traces their origins to mainland Colombia, though some ethnographers link them to Costa Rica, and some Kunas claim they came to earth by UFO. Apparently forest dwellers originally, they may have been forced by war and disease to migrate, finding haven in the early 19th century along the Caribbean coast and islands of Panama. They established densely-housed defensive villages on some islands, from which they fish or commute to the mainland to hunt and farm the jungle. Their loyalties to Colombia after Panama’s independence led to an uprising in the 1920s; protracted negotiations brought limited autonomy in 1952. Faced with the modernities of today’s Panama, the Kuna’s sense of identity and culture remain very strong. With effective leadership based on frequent community meetings, they strive to safeguard their society while developing on their own path. Men dress in western style, but most Kuna women follow their community’s lively fashions, with strings of colourful beads wound around forearm and calf, beaded necklaces over printed cotton tops in mola designs (see panel p31), and cotton skirts. A good number of young Kuna attend college and pursue careers beyond their community, while continuing to support their home. Most communities are keen to benefit from visitors to their islands, which though small are among the most beautiful in the Caribbean. The Darién Fully half of Panama lies east of the Canal, but beyond the Canal, Panama City, and the San Blas islands, the isthmus is almost entirely wild. The forests of Darién and its proximity to Colombia have ensured it remains one of the few true wildernesses in Central America. Low mountain ranges run behind the Caribbean coast, while ranges on the Pacific side are broken only by the Golfo de San Miguel which leads into Darién’s densely forested lowlands. 
Darién is brimming with wildlife and has some of the world’s best birding, but its inhospitable terrain, poor communications, and proximity to Colombia mean opportunities to experience it safely are limited. At the time of writing, we only advocate flying to well-established locations, and remaining with reputable local guides throughout. With such safeguards, Darién is a remarkable destination for the adventurous traveller and wildlife enthusiast. The ‘Darién Venture’ and how Great Britain was born In the 1690s, William Paterson, a Scot from Dumfriesshire who had travelled the Caribbean in his youth and made good in England on his return (rising to become a director of the Bank of England), put together an investment scheme to capture the imagination of the times. Hearing reports from coastal raiders of a location on the Panama isthmus where valleys led through to the Pacific, he proposed to establish a trading post halfway across between the oceans. His plan failed to interest investors in Europe and England, and despite the fact that the area was claimed by Spain, he finally persuaded the parliament of Scotland to back the idea. In 1695 a company was established and funds poured in from patriotic Scots, about half the nation’s savings. Five ships set off from Edinburgh in July 1698, and landed at the promptly named New Caledonia Bay on the Darién. Early success was reported back to Scotland and more money raised, but meanwhile the valleys to the Pacific turned out to be a fiction, starvation set in, and the settlement was abandoned. Unaware of this, a second expedition arrived and tried to restart the abandoned colony, but failed again. A third arrived at the ghostly scene, set to work, but was soon blockaded by a Spanish fleet and made to leave in April 1700. The venture cost over 2,000 lives and the savings of a nation. Scotland’s economy was in tatters and its political strength sapped. 
Soon the only solution was for Scotland to merge into Great Britain through the Act of Union in 1707. As part of the deal England repaid the entire Darién debt with 5% interest—through a new bank that was later named the Royal Bank of Scotland. What little remains of the settlement has been excavated by archaeologists at Punta Escoces, or ‘Scottish Point’, close to the Colombian border, sadly out of reach to most present-day travellers. Perfect beaches Panama has the most amazing variety of beautiful beaches and coral islands to enjoy, in every style—sophisticated sun worship, alt-chill, snorkelling coral reefs, scuba diving, surfing Pacific rollers, and simple, close-to-nature, life by the sea. CARIBBEAN San Blas (Kuna Yala) In the San Blas or Kuna Yala archipelago the Kuna people build small simple lodges in truly beautiful locations, some on special islets a short distance from their own island villages. The turquoise water is a delight for snorkelling and kayaking, with beautiful white-sand beaches providing the backdrop. Fresh fish and seafood are the order of the day, simply but often deliciously prepared. The welcome is warm; in return careful and enlightened respect for Kuna sensibilities is essential. If you yearn for somewhere simple and beautiful, a true escape from the modern world, touching lives lived away from the humdrum, then we wholeheartedly urge you to go. Bocas del Toro Six islands and more than 200 islets swathed in forest and surrounded by waters bursting with marine life make up the archipelago of Bocas del Toro. Colón is the largest island, home to Bocas del Toro town: a laid-back jumble of small hotels, bars, and restaurants (including a few excellent choices). There are quieter places to stay out among the islands, some with wooden cabins on stilts over shallow blue waters. Getting to your chosen beach for the day is part of the fun, perhaps involving a water-taxi and a forest hike, or paddling by sea kayak. 
The islands receive a fair share of rain, mostly at night, and a drenching tropical downpour can punctuate a sunny day at any time of year. Four species of turtle lay their eggs between March and September in Isla Bastimentos NP, whose coral reefs offer good diving. The Ngöbe Buglé indigenous community live on some of the remoter islands. Portobelo Pretty Portobelo’s sleepy charms (see p 32) are an easy hop from Panama City. Stay at a luxury retreat a short boat ride from the dock, with pristine sea views against a background of tropical rainforest. PACIFIC The Pearl Islands (Las Perlas) The 103 small islands of the Las Perlas archipelago are easily reached by 2 hour ferry or 30 minute flight from Panama City. Isla Contadora is the most developed island. Here you can laze, swim, snorkel over coral gardens (or see them from a glass-bottomed boat), or walk forest trails. Beaches are secluded and natural, and the style is subdued, with mostly 3* accommodation and menus based around the day’s catch. Isla Taboga 17km into the Pacific from Panama City lies the ‘Island of Flowers’, where Paul Gauguin stayed after his stint with the French Canal project, and the island is said to have influenced his later use of colour. Today there’s an air of serenity, with sandy coves, forest trails and pineapple groves fanned by trade winds. Playa Blanca 90 min by road from Panama City, all-inclusive resorts beside calm Pacific waters offer a US-style holiday experience, with golf courses, tennis courts, spas, Jacuzzis and water sports. The sands are bright white during the sunny season (November-April), while at other times black volcanic sands are washed down from El Valle. Playa Santa Clara Well past Playa Blanca, Playa Santa Clara’s sandy beaches provide an alluring alternative if you’ve a hire car. There’s a handful of nice self-catering beachside cottages and laid-back beach bars. 
Azuero Peninsula Westwards again, the Azuero Peninsula’s long wide beaches, fringed by coral reefs, are a reminder that paradise still exists. Playa Venado offers secluded beaches, riding, fishing, surfing, hiking, nature walks, whale watching, and famous sunsets over the Pacific, while Pedasi’s waters are filled with ocean game fish. Isla Cañas is an important nesting spot for tens of thousands of sea turtles. Gulf of Chiriquí Hundreds of miles of beautiful unspoiled beaches and islands, mostly with national park protection. Relatively unknown outside Panama, the Gulf of Chiriquí’s indisputable attractions are slowly being revealed. The Boca Chica area has some classy boutique lodges that make a perfect spot to mix adventure and relaxation. Some of the islands here have white sand beaches, hiking trails, snorkelling, kayaking and good wildlife viewing: the park has resident populations of monkeys, nesting sea turtles and 280 species of bird. Santa Catalina The closest access point to Isla Coiba (see box). Its small hotels and cabins are simple but neat and clean. Coiba NP In the Gulf of Chiriquí to the south of the mainland, Coiba is Panama’s largest island. A penal colony until the late 1990s, Coiba is largely uninhabited, with only a biological station and a few rangers’ huts punctuating its thick virgin forest. The island is within one of the largest marine national parks in the world, and its waters are home to an astonishing amount of marine life including six shark species, manta rays and migratory humpback whales. Extensive coral reefs offer excellent diving opportunities. The land-based natural wonders are equally impressive, with botanists citing 1450 species of plant, along with two species of crocodile and turtle, 21 endemic birds, 6 species of iguana and a large nesting population of scarlet macaw. The island is usually reached by a boat journey of 2-3 hours. Accommodation is rudimentary. 
Planning your trip Tailor-made holidays Panama is a small country with good infrastructure, and you can cover most of it in a 2 week touring holiday, with a choice of ways to travel. Sample itineraries The sample itineraries shown here indicate what works best in Panama in various styles. They give you a starting point. Pick one or two that most appeal to you and talk things through with one of our specialists. Where to stay There is a growing number of hotels and lodges in Panama with good quality mid-range places on offer in most of the main areas. Boutique style lodges are springing up throughout the country. Accommodation on the Kuna Yala/San Blas archipelago is generally basic, in keeping with the simple life of the islands. Getting around Panama is reckoned to have the best air network of any country in Latin America–most parts of the country can be reached easily from Panama City. Roads are generally pretty good too, though the shape of the country can make for some long journeys. There are three sensible options for travelling by road, which we can arrange for you: - **Private guided touring** An English-speaking guide, who will usually also be your driver, accompanies you between destinations and on excursions in each place. You can sit back and relax while you travel, gain some real local insight, and make the most of your time. - **Private transfers** An experienced local driver (not necessarily English-speaking) collects you from your hotel and transfers you to your next destination. It is a private service, and you have your independence at each location. Pick up times can be adjusted to suit you. - **Self-drive** A hire car is a great way of enjoying Panama. The roads are very good but navigation is not always straightforward and English is not commonly spoken, so you need to feel comfortable if you choose this option. Whatever your mode of transport, we can pre-book local excursions (private or shared) to maximise your time in the area. 
Food and drink In Panama City you will find something from every corner of the world, including French, Japanese, Italian, Thai, Middle Eastern, and Chinese food. In regional areas, traditional Panamanian cuisine is a mix of Afro-Caribbean, indigenous, and Spanish cooking influences, often incorporating a variety of tropical fruits and vegetables. US influence has led to burger joints and the like, and fast-food chains are plentiful in Panama City. Driving | Route | Distance | Time | |-----------------------|----------|------| | Panama City - El Valle | 126km | 2h | | El Valle - Pedasi | 372km | 4.5h | | El Valle - David | 370km | 4h | | David - Boquete | 37km | 1h | | Boquete - Almirante | 195km | 3h | Distances are approximate, times may vary significantly. --- **Panama Odyssey** This private tour is ideal for active people who would like to discover something of the real Panama and its wildlife as well as seeing the main sights. **Panama City** **Day 1** BL You are met at the airport on arrival and driven to your preferred hotel for a 4 night stay. **Day 2** B Although Panama has much, much else to discover and explore, no first visit could miss seeing the Canal. Your guide picks you up from your hotel to visit the Canal’s Miraflores Lock, where massive ships squeeze through with inches to spare, and a small museum neatly sums up the history of the Canal’s construction and its engineering feats. It will usually be possible to explore the ruins of Old Panama, founded in 1519 and sacked in 1671 by Welsh buccaneer Sir Henry Morgan. The city relocated to Casco Viejo, formerly with sturdy walls against pirates, where you’ll stroll streets redolent of Old Havana. You’ll visit the Canal’s Pacific entrance at Amador: also the location of the brand new Gehry BioMuseo, bringing you bang up to date. 
**On the Panama Canal** **Day 3** BL Today you go on the Panama Canal itself, making a ‘partial transit’ through the Miraflores and Pedro Miguel locks: so much large scale engineering, so vital for a century of the world’s commerce, yet set in a near Garden of Eden with lush tropical rainforests descending to its sides. **Portobelo & Embera** **Day 4** BL From the Pacific to the Caribbean on an everyday commuter train, then to the sleepy harbour of Portobelo where the vast majority of the Incas’ gold was shipped to Spain. The line runs alongside the Canal at many points, crossing Gatún Lake on causeways with superb views. You might just spot crocodiles, monkeys and toucans on the way. From Portobelo your guide will take you to a small dock to travel by dug-out boat to a village of the tribal Embera community for an insight into their ways of life. **San Blas Islands and the Kuna people** **Day 5** BLD You leave Panama City with an early morning start for the short flight to the San Blas islands in the Kuna Yala. You stay at a charming lodge owned and run by the Kuna community: simple and rustic but with bathrooms en suite. You will have the opportunity of visiting their nearby village and experiencing the Kuna way of life (little English or even Spanish is spoken). Lunch and dinner feature fresh seafood. **Day 6** BLD Visit other islands in the Kuna community in the morning and later have the chance to relax on one of the beautiful white sand beaches, or snorkel. **Chiriquí Highlands and Boquete** **Day 7** B A morning flight back to Panama City, with an onward connection to David in western Panama from where you are driven to the mountain town of Cerro Punta for 2 nights. Stop en route to see pre-Columbian petroglyphs, with dozens of carved stones and boulders among flower-filled gardens. **Day 8** B A free day to relax or explore your mountain setting. 
The lodge arranges a guided morning walk for guests, often into beautiful cloud forests alive with birds in La Amistad NP. After lunch you might choose to visit one of Latin America’s finest orchid nurseries with over 2000 varieties from all over the world. **Day 9** BL Your guide takes you for a leg-stretching walk on the first section of the Los Quetzales Trail, through lovely cloud forest which opens out here and there to give fine views. Among the birds and wildlife, there is a good chance between January and May of seeing the beautiful Resplendent Quetzal. Then travel by road around the volcano to the thriving little town of Boquete where you stay for the next 2 nights. **Day 10** B Visit a coffee farm today. Coffee from this region is among the world’s best and you will see how it is harvested, selected, dried, and roasted. **Wildlife around the Canal** **Day 11** B A free morning to relax and enjoy the hotel’s pretty gardens or potter around Boquete. Later return to Panama City by air and on by road to Gamboa Rainforest Resort for 3 nights. **Day 12** B After breakfast take a wildlife boat trip on Gatún Lake. There are good chances of seeing spectacular caiman, green iguana, land turtle, capuchin monkey, two-toed sloth, capybara, and many different birds. Spend the afternoon at leisure. **Day 13** B Together with your guide you take a stroll on the ‘Pipeline Road’ to the Rainforest Discovery Centre, a birdwatching centre with an observation tower, on the border of Soberania NP. **Day 14** B Morning free at the Gamboa resort. Transfer back to Panama City and either embark on additional journeys in Panama, perhaps to the beach, or fly home. Self-drive Panama With reasonable roads and much to see along the way, a self-drive holiday in western and central Panama makes good sense for the independent-minded traveller. This varied route would even suit an adventurous family. 
Panama City Day 1 You are met on arrival and driven to your preferred hotel in Panama City for a 3 night stay. Day 2 A brief introduction to Panama: the Canal, Old Panama, Casco Viejo, and Amador as for Day 2 of ‘Panama Odyssey’ opposite. Day 3 You are driven to the station for the excellent train journey carrying commuters to the Caribbean, running mostly beside the Canal, and across Gatún Lake on a causeway. You visit the sleepy harbour of Portobelo, where Incan gold was loaded onto Spanish ships in the 16th century. Then continue to a village of the indigenous Embera community, arriving by dug-out boat for insights into their tribal way of life. El Valle Day 4 Pick up your hire car this morning. It’s about 2hr to your first stop at El Valle where you spend 2 nights at a charming family-run guest house in lush gardens. Day 5 A free day to explore El Valle, ringed by the lip of an extinct volcano. There are hiking, horse riding, biking and birdwatching options; flowering plants bloom in its ‘eternal spring’ micro-climate. Small scale attractions include waterfalls, thermal waters, mud baths, and petroglyphs. The excellent, totally chi-chi, Los Mandarino restaurant is a must. Santiago Day 6 On the Panamerican Highway for 2-3 hours to Santiago, a bustling country town. A small side trip leads to the church of San Francisco de la Montaña. Built in 1727, this pilgrimage site has elaborate carvings and vibrant frescos mixing Catholic images with indigenous folklore. Stay the night just outside Santiago. Boquete Day 7 A mostly picturesque drive towards David. You pass grazing horses and cattle in the fields, sparkling rice paddies, coffee plantations and rivers tumbling down from the central Cordillera. You continue towards the cloud forests of Barú volcano and the mountain resort of Boquete for 2 nights. Day 8 A free day. There is plenty to do: hiking, bird-watching, whitewater rafting, zip-lines and hot springs and some extraordinary gardens. 
Coffee estates run good tours. Volcán Day 9 Drive via David up the western flank of Barú, where the scenery above Volcán is most appealing. Pottering through rural villages, where neighbours vie with each other as if in a ‘Panama in Bloom’ competition, you might pause at Sitio Barriles to see petroglyphs and gardens planted by the family who live there. Higher up at Cerro Punta, Finca Dracula orchid farm is an excellent stop. Gulf of Chiriquí Day 10 In the morning drive down to the Pacific coast and the lovely Gulf of Chiriquí for 3 nights at Bocas del Mar—a stylish and welcoming boutique hotel. There’s a choice of infinity pools, steps down to a narrow beach with a jetty for boat trips around the islands and coral reefs of the Gulf of Chiriquí marine national park, which has many secluded white sand beaches. Its forested islands are home to howler monkey, ocelot, margay, jaguarundi, raccoon, ant-eater, and coyote; in the bays and ocean there are leatherback and hawksbill turtles, dolphins and whales. Day 11-12 Relaxing at Bocas del Mar. Boat trips to explore the archipelago are a good option, at an extra charge. Azuero peninsula Day 13 Returning eastwards you turn onto the Peninsula de Azuero, a cowboy ranching region fringed by Pacific beaches: the heart of traditional Panama. Stay 2 nights at an attractively furnished guest house by Playa Venado: a 1.5km crescent of soft dark sand. Day 14 Simply relax on the beach (a good surfing spot), or explore the area. Good options include Isla Iguana Wildlife Refuge off the coast at Pedasi, and Isla Cañas Wildlife Preserve—a sand spit with mangroves and turtle nesting site. Day 15 An early start to drive back to Panama City, dropping your hire car downtown. A driver will take you from the rental office to the international airport for your flight home. Just a week in Panama A week of touring in Panama that combines well with time at a beach, or with a visit to Costa Rica. 
Panama City Day 1 You will be met on arrival at Panama City and transferred to your preferred hotel for a 3 night stay. Day 2 A brief introduction to Panama: the Canal, Old Panama, Casco Viejo, and Amador as for Day 2 of ‘Panama Odyssey’ opposite. On the Canal Day 3 Today you embark on a ‘partial transit’ of the Panama Canal. For a fuller description see Day 3 of ‘Panama Odyssey’. Boquete & the Chiriquí Highlands Day 4 A morning flight to David in the western province of Chiriquí, where you are met and driven to the mountain town of Boquete on the flank of Barú Volcano, a lively place with lots of activities for outdoorsy weekenders from Panama City. Day 5 Out and about in the highlands. You are collected from your hotel and driven to a waterfall trail for country walking to experience cloud forest nature, birds and lovely views. Then it’s on to an exuberant garden of tropical plants. You next visit a coffee plantation where you learn about coffee cultivation and have the chance to sample different bean varieties and roasts. The day ends at Caldera Hot Springs; take a bathing costume to join locals in thermal waters set in woods and open countryside. Bocas del Toro Day 6 You are collected from your hotel in the morning and driven over the mountains through forest to the Caribbean coast and by boat to the wonderful Bocas del Toro Archipelago. Day 7 Day at leisure with plenty of options for things to do. You might take a boat trip around the islands for swimming and snorkelling, or out to one of the beaches for a day’s relaxing. Day 8 Fly back to Panama City. Alternatively, you could take a direct flight from Bocas del Toro to San José and continue your holiday with a tailor-made itinerary in Costa Rica. Panama Chill-out Beautiful locations make this a memorable getaway, an unusual honeymoon, a chilled-out break. Add more beach days to suit. 
Panama City Day 1: Arrive in Panama City and transfer to your hotel in Casco Viejo, the atmospheric colonial part of the city, recently restored. Day 2: A free day to recover from your flight and enjoy the sights and sounds of the area. Until quite recently very run-down, Casco Viejo’s streets are quietly becoming some of the most hip in Central America. There are some stunning renovations, and a selection of great cafes, bars and restaurants. Pacific Coast Retreat Day 3: By air to David in western Panama where you are met and driven down to the ocean and the calm waters of the Gulf of Chiriquí, with beautiful white sand beaches and 25 islands and coral reefs protected within a marine national park. Lovely as it is, the gulf is only just beginning to become known outside Panama. A slowly expanding choice of places to stay includes a good hotel on the mainland with fine views across the gulf, and a more upmarket resort secluded from the world on an island in the gulf. Day 4 - 5: Two free days to relax. If you like to be active there are optional tours visiting nearby islands, beaches and mangroves. Day 6: Return by air from David to Panama City to stay the night. Enjoy a night out in the city’s cosmopolitan downtown or atmospheric Casco Viejo. Caribbean Coast Retreat Day 7: A morning’s drive to the pretty harbour town of Portobelo, where you are taken by boat to a private retreat with pristine sea views against a backdrop of tropical rainforest. Day 8 - 10: Three free days to relax by the Caribbean. You can laze by the hotel’s infinity pool, take a boat to nearby deserted beaches or sign up for activities such as snorkelling, kayaking, guided hikes, or art workshops. Day 11: You are returned by road to Panama City for your final night in the city. Day 12: Transfer to the airport for flights home. Panama Adventures Step beyond the ordinary with up-for-it experiences face-to-face with nature in jungles, mountains and coral islands. 
Sinew-stretching treks, exciting rafting, riding, biking, zip-lines, snorkelling—all in one exhilarating fortnight. Panama City Day 1: On arrival in Panama City you will be met at the airport and driven to your preferred hotel for a 3 night stay. Canopy Crane and Panama City Day 2: With an early start to see the forest come to life, your guide will collect you to experience the Smithsonian Canopy Crane for a bird's-eye view into and over the forest of the remarkable Metropolitan Natural Park. Later you explore Panama City, your guide taking you to see Panama La Vieja, Casco Viejo, and the Canal at Miraflores Lock. Las Cruces Jungle Trail and Gatún Lake Day 3: Just 45 minutes from Panama City in Soberania NP is the Las Cruces trail, a 400-year-old stone road built through the jungle to connect Panama City on the Pacific with the town of Cruces, from where the Spaniards continued their passage by small boat down the Chagres River to Fort San Lorenzo and the Caribbean Sea. A 6 mile hike along this mostly flat but muddy and uneven trail (5 hrs) takes you into rainforest and past historic landmarks. Be prepared to get your feet wet fording creeks along the way. On arrival at the Chagres River you have a picnic lunch on a small island, then travel by expedition boat across Gatún Lake in the Panama Canal, where you pass gigantic cargo ships in sharp contrast with the natural surroundings. Along the way, there are opportunities to spot green iguana and three-toed sloth resting on tree branches, crocodile, osprey hunting for peacock bass, snail kite and keel-billed toucan among much else. On islands within the lake there are capuchin, howler monkey, spider monkey, and Geoffroy's tamarin. Multi-active in the Chiriquí Highlands Day 4: Leisurely start for a flight to David below the Chiriquí Highlands. You are met and driven to Volcán on the west of Barú Volcano for 2 nights at Los Quetzales Lodge at a cool 2200m.
The lodge's 400ha reserve extends into Barú NP and La Amistad reserve. There is time to stretch your legs on a walk near the lodge. Day 5: Lots to do around here. The lodge's options include riding in nearby farming scenery or walking in their cloud forest reserve. Mountain bikes are available independently. Spa therapies include massage, hot stones and reflexology. It is a 10 min walk to an impressive orchid farm. Los Quetzales Trail Day 6: The Los Quetzales Trail runs for 10km through fabulous cloud forest on the slopes of Volcán Barú. It's a very scenic walk of about 3hr (sometimes up and down, sometimes muddy), with good chances of seeing a number of birds and mammals. Locals put the chance of seeing the Resplendent Quetzal here at nearly 100% from January to May; 25% at other times. As you emerge at the rangers' station at the end of the trail a vehicle will be waiting with your luggage to take you to your lodge in Boquete for a 3 night stay. Zip-lines and hot springs Day 7: First today is a 'canopy adventure' that has you gliding through the forest for 3km on 11 different zip-lines 30-60m above ground. It's quite a buzz, and ostensibly you can spot orchids, ferns and bromeliads high in the trees as you whizz by, feet first, crash-helmeted and harnessed, hanging from a pulley. Later, relax in the caldera pools: open-air mineral-rich waters, heated geothermally. Whitewater rafting Day 8: Several good rafting rivers tumble down the volcano, carving their way through the cloud forest. There's a gentle 2½hr class II introduction, several class III trips, and an exhilarating world-class 4hr class IV run. For pristine nature and waterfalls the 3hr mostly class I and III rapids on the Río Gariche are recommended. Choose sensibly, according to season and your experience. Rest of the day free at the lodge. Island-hopping in Bocas del Toro archipelago Day 9: Crossing to the Caribbean by road, you take a boat to the reefs and islands of Bocas del Toro.
You stay 3 nights at a 2-3* lodge in the wacky little town of Bocas del Toro, mingling with locals, expats and backpackers in its bars and cafes. (Or upgrade to a quieter superior over-the-water lodge on a remote island.) Day 10: Today you go island-hopping in the marine national park, on a shared trip that starts with a short boat ride from Bocas pier to Dolphin Bay where there is an excellent chance that dolphin will play around the boat. On to Coral Keys for some great snorkelling, and stop at a simple restaurant nearby for lunch (not included). Continue by boat to the aptly named Red Frog Beach for a spot of relaxation (and frog spotting). Boat back to Bocas pier in the late afternoon. Bocas pier is a short walk from your hotel. Day 11: A second day’s shared island-hopping, beginning with a short boat ride from Bocas pier to Playa de las Estrellas, where snorkellers can see the many starfish usually found in the clear waters here. On to the white sands of Boca del Drago for some beach time and swimming in calm turquoise sea. There’s a simple restaurant here for lunch (not included). In the afternoon it’s onwards by boat to visit Bird Island, home to large colonies of sea birds, and then return to Bocas pier. Panama City Day 12: A leisurely start for the mid-morning flight back to Panama City. You are met on arrival and driven to your hotel. There’s free time for shopping, city sightseeing, etc. Day 13: Transfer to the airport for your flight back to the UK or perhaps add a 2 night trip to the Kuna Yala (San Blas) archipelago - see days 4 and 5 of our ‘Panama Odyssey’ itinerary on p36. Where to stay in Panama Country, city and beach **Panamonte Inn** **TOP RANGE** **Boquete** The oldest and most characterful property in Boquete, dating from 1914. Built in a welcoming New England clapboard style, it has lovely gardens, a bar with comfortable sofas and log fire, and a fine restaurant: an ideal base for visiting the Chiriqui Highlands. 
3 luxurious garden junior suites and new garden terrace rooms have been added to the original inn's chintzily furnished standard rooms, which are light but rather small. All are air-conditioned, with ceiling fans, television, internet service and telephone. Honeymooners might choose a garden suite or the 'Ingrid Bergman' suite—her favourite room at the hotel. There is a small spa with a selection of treatments. The hotel is a 5 min drive from the centre of Boquete village. It's a great location for walkers and birdwatchers. There's a wide assortment of activities in the area, including visits to gourmet coffee plantations, rafting, riding, mountain biking and canopy walks. **Villa Camilla** **TOP RANGE** **Pedasi, Azuero Peninsula** A lovely beach option, especially as part of a self-drive trip around western Panama. The stylish 7 room Villa Camilla is part of a condo development on a secluded stretch of land on the Azuero Peninsula, spread over 800 acres of rolling hills overlooking a coastline of hidden coves, cliffs and sandy beaches. The hotel itself is 300m from the beach, so you wake to the sound of distant surf and perhaps the squawks of visiting parakeets. The beach here is darkish volcanic sand (not white or gold), but this is a lovely and relaxing place to stay, much superior to other options in the area. Rooms are very civilised and comfortable with private facilities and a/c, complemented by a good restaurant and a lovely swimming pool with a stand of palm trees. Activities that can be booked locally include surfing lessons, riding, biking, and hiking. There is wi-fi in the public areas. **JW Marriott Panama Golf & Beach Resort** **TOP RANGE** **Playa Blanca, near Panama City** An alternative to downtown hotels, this recently upgraded luxury hotel has 109 rooms and 9 suites over 4 floors within a new resort development with upmarket holiday homes and a shopping centre, set on the beautiful white sand beach of Playa Blanca, about 2hrs drive from Panama City.
Rooms are very well appointed with luxurious bedding, cable TV, wi-fi, pull-out sofa bed, walk-in closet, and spacious bathrooms with separate tub and shower. All have balconies overlooking the pool, the lake or the Pacific Ocean. There are 3 restaurants, 2 bars, a large pool area with bar and food service, spa, fitness centre, equestrian centre and beach club. A new 18 hole Nicklaus-designed par-72 championship golf course bordering the ocean has its own clubhouse and restaurant. **Uaguinega and Akwadup Lodges** **UPPER RANGE** **Kuna Yala (San Blas)** The stuff of dreams. Two small and very simple lodges built and run by a Kuna community. You are their guests, invited to live beside them on their own very amenable terms, and to visit their village and their fields on the mainland (a few hundred metres away) with a guide they provide. Uaguinega Lodge is on an islet just across the water from their own small village, where closely packed family houses fill every inch of available space. Its thatched wooden roofs show green beneath coconut palms, ten steps of pure white sand from the turquoise shallows. Akwadup (see photo) is newer, not so close to the village island, a little smarter, and built over the water. Each has a pleasant bar-cum-dining room, with variable, sometimes excellent, food. Cabins are rustic with fan, en suite bathrooms with cold shower, and a deck with hammocks. Full board. No children under 14 years please. **Worth a mention** **Las Clementinas** Panama City (historic quarter) **UPPER RANGE** Characterful boutique hotel of 5 spacious apartments behind a grand façade in historic Casco Viejo. All modern comforts, art, books, and some great views. Lots of steps (no lift). Nice roof terrace with bar/restaurant. **Country Inn & Suites** Panama City (Amador) **MID-RANGE** US style hotel with 159 balconied rooms saved by its great location by the entrance to the Canal. The only restaurant is TGI
Fridays; there's a wide choice a taxi ride away. **Los Quetzales** Volcán **MID-RANGE** High in cloud forest at 2200m, a timber-built lodge in gardens by a river. Simple spacious rooms. Small spa. Friendly atmosphere, hearty food. Nights around 7°C. **Finca Lerida** Boquete **MID-RANGE** Outside Boquete, a working coffee farm with cabin cottages, each with living room with open fire, 2 bedrooms and a bathroom. There's a more modern lodge too. **American Trade Hotel** Panama City (historic quarter) **UPPER RANGE** Much-praised 2014 Ace Hotel conversion of a landmark former US department store and apartments from 1917. 50 rooms, rooftop pool, restaurant, cafe, jazz bar. Hip. **Bocas del Mar** Chiriquí Coast **TOP RANGE** 16 contemporary cabanas by the waterfront 2km from Boca Chica. Stunning white sand beaches reached by water taxi. **Canopy Tower/Lodge/Camp** **UPPER RANGE** Three lodges primarily for birdwatchers. See page 41. --- **Camino Real Trek** A satisfying, fairly rugged short jungle trek to follow the gold of the Incas from the mountains to the Caribbean. **Panama City** **Day 1** On arrival at the airport you are driven to your hotel in Panama City to meet your guide and the rest of the group. **Old Panama and an Embera village** **Day 2** BLD We visit the ruins of Old Panama where Spanish ships from Peru landed with their cargoes of Incan gold and silver. The Camino Real, or 'Royal Road', began here: we drive inland to join it in the mountains of the Serrania de San Blas in Chagres National Park. Boarding a dug-out at Madden Lake we are taken to the starting point of our trek, near a village of the Embera people. We are their welcome guests: their rich culture, music, dances and crafts make this a special day. We camp in tents in the village - a real privilege.
**Crossing the continent** **Day 3-5** BLD Our trek on the Camino Real is mostly through primary rainforest: home to jaguar, howler monkeys, anteaters and more than 560 species of bird, including the harpy eagle – the largest in the world. We experience the forest at first hand while reliving the trail’s evocative history. We walk 5-7 hours each day, sleeping in tents in the forest. Original cobbled sections of the trail date back to the time when this was the principal route between the Caribbean and the Pacific. Conditions underfoot can be muddy, steep, and slippery. At first the Camino Real follows the Boquerón River ascending close to its source in the mountains before cresting the ridge and descending to the Caribbean. Along the route we will also find remains of manganese mines from the late 1800s, complete with railway tracks and the relics of old locomotives abandoned to the forest. **Nombre de Dios** **Day 6** BLD On our fourth day of trekking we reach Nombre de Dios on the Caribbean, where the Incan silver and gold arrived from the Camino Real until the town was sacked by Sir Francis Drake in 1596. A short drive along the coast takes us to Portobelo which replaced Nombre de Dios (high in the hills we will have passed the fork in the Camino Real that led down to Portobelo). We visit the Spanish forts and some of its old colonial churches. Overnight in a waterfront lodge. **By train to Panama City** **Day 7** BLD There is free time today to relax by the sea or go snorkelling or diving (extra cost) - perhaps to look for Drake's lead coffin which was lowered into the waters here. In the afternoon we catch the train for Panama City on the first railway to cross the American continents, with great views of Gatun Lake and the canal. We stay at our original hotel with a farewell dinner by the ocean. **Day 8** B If not extending your stay, you can be taken to the airport for your flight home. 
Birdwatching in Panama Birds, birds, birds in great variety, moderate distances and comfortable lodges make Panama around and west of the canal area a compelling target for any birdwatcher. East of the canal, unveiling the Darién's astonishingly rich bird life takes care and a little intrepidity, but brings great rewards. Panama offers over 970 species of birds in a broad range of habitats, many of which can be visited quite easily. A first-class field guide, an up-to-date site guide, and some very proficient local two-legged guides make for a very pleasant and productive birding holiday while staying in reasonably good accommodation in some very enjoyable locations. After all, the world record for the most species seen in one day was set in Panama! Panama also offers expedition birding for the dedicated enthusiast, to find birds that few others will ever see. Our 'Birding the Darién' is a fine example to set the pulse of any neotropical specialist racing. Whether you are a relative newcomer to the region, or have already developed a taste for seeing so many different species in such a short space of time, we can help design the perfect trip for you. We'll do our best to match your birding ambitions, the amount of effort you enjoy putting in, the time you have available, and your budget, with the fabulous birding opportunities that Panama offers. Planning your birdwatching trip to Panama Because our expertise is in travel and logistics (backed by a good understanding of what birders need and current conditions for birders on the ground) we are able to design and support tailor-made birdwatching trips to suit a wide range of interests, styles and budgets. Whether you are travelling solo, as a couple, or with a group of fellow birders, we provide all you need for a well-organised, successful and enjoyable tailor-made birdwatching holiday in Panama.
We can provide experienced English-speaking local birdwatching guides to escort you throughout, or just in those areas where you feel you might need support. We design itineraries to suit all levels, from newcomers to neotropical birding to seasoned hands out to boost their life lists with hard-to-find endemics, and in all styles, from dawn-to-dusk birders to those who prefer an easier time or like to combine their birding with sightseeing or general wildlife viewing. Self-drive is possible in the canal area and western Panama. We can also arrange birding days as part of general tours - an increasingly popular choice. Key birding sites in Panama - **Central/Canal** Soberanía NP, in the southern canal area, is home to 520 species of bird; trails here include Pipeline Road, Semaphore Hill Road, Plantation Road and Old Gamboa Road. Nearby are the Miraflores Ponds and Camino de Cruces. On the northern Atlantic/Caribbean side of the canal area is the famous Achiote Road in San Lorenzo NP and, near Colón, the Sierra Chiriquí reserve. Metropolitan Park in Panama City is surprisingly productive. Within a day's birding of the capital are Chagres NP, Cerro Jefe, Cerro Azul and Boquerón Marsh. A 2hr drive west of Panama City are El Valle de Antón and El Copé (Omar Torrijos NP) – the most easterly places where the foothill endemics of the Talamanca Range can be found. - **West** A short flight takes you to David, gateway to the Chiriquí Highlands near Costa Rica, with birding in La Amistad reserve, Barú Volcano NP, Volcán Lakes, Cerro Punta, Boquete, Fortuna Forest Reserve and Palo Seco Protection Forest. - **East** 2½ hours' drive east of Panama City are Burbayar Lodge and the Nusagandi Reserve in the Serranía de San Blas/Kuna Yala, where some Darién species can be seen without rugged travel or security concerns.
For the more intrepid, who are prepared to share bathrooms, sleep a night in a tent, and who take account of Foreign Office warnings but are not deterred, we arrange visits to Canopy Camp, Cana and Cerro Pirre in the Darién NP bordering Colombia. Birds of Panama Expect a very long list from this trip, even though you are staying in comfort. A wide variety of habitats, some very productive locations, some rarities, some spectaculars—a perfect combination! An optional, more rustic, extension adds hard-to-find species and Darién endemics. Panama City Day 1 You are met at the airport on arrival and driven to a comfortable hotel in an interesting location: a former US base just across from the Miraflores Lock on the Panama Canal. Its grounds are great for birds. Pipeline Road Day 2 A full day on the very productive Pipeline Road with your local specialist birdwatching guide. Amongst a very long list, look for Slaty-tailed Trogon, Cinnamon Woodpecker, Chestnut-backed Antbird, Black-faced Antthrush, Fasciated Antshrike, White-flanked and Checker-throated Antwrens, and Black-breasted and White-necked Puffbirds. Metropolitan Natural Park Day 3 A full day birding Metropolitan Park, mostly semi-deciduous forest, with a list of 200 species. Look for Rosy Thrush-tanager, Lance-tailed Manakin, Slaty Antwren, Pheasant Cuckoo, and the Panama endemic Yellow-green Tyrannulet. Soberania NP Day 4 Early pick-up for a morning's birding in Soberania NP. Onwards later for 2 nights at Sierra Llorona Lodge on the Caribbean side of the isthmus at 1000ft. Dusk birding in the lodge's grounds. Achiote Road Day 5 Full day birding the lovely Achiote Road to Fort San Lorenzo for Caribbean lowland birds. The local Christmas Bird Count often reports over 340 species in a 24hr period here.
The many opportunities include Crested Oropendola, Collared and Slaty-backed Forest-Falcons, Plumbeous and Semiplumbeous Hawks, Hook-billed Kite, Band-tailed Barbthroat, Rufous-breasted Hermit, Ocellated Antbird, Olivaceous Flatbill, Black-tailed Trogon, Speckled Mourner, Purple-throated Fruitcrow, Chestnut-mandibled Toucan and Spot-crowned Barbet. Sierra Llorona & Miraflores Lock Day 6 Birding the forest around the lodge. Good for raptors, Pied Puffbird, White-headed Wren, Great Tinamou, Grey Hawk, Barred Forest-Falcon, Gray-headed Chachalaca, Blue-headed Parrot, Brown-hooded Parrot, Shining Honeycreeper, Indigo Bunting, Blue Cotinga and Long-billed Starthroat. Afternoon visit to Miraflores Lock to see the Canal at work on huge ships that barely fit the double locks' massive chambers, then to Amador to a hotel beside the Canal's Pacific entrance. The Gehry BioMuseo is close by. Chiriquí Highlands Day 7 Morning flight to David below the Chiriquí Highlands. Bird mangroves near David for Yellow-billed Cotinga and the Macho de Monte lakes area near Volcán for Masked Duck, Northern Jacana and other water birds. On to Los Quetzales Lodge at Guadalupe (6300ft), your base for 2 nights. La Amistad National Park–Cerro Punta Day 8 Birding the Cerro Punta area with excellent chances of Resplendent Quetzal in the Jan-May nesting season, 25% otherwise. Apart from 10 species of hummingbird, look for Blue-throated Toucanet, Great Curassow, Black Guan, Collared Redstart and Yellow-thighed Finch. Boquete Day 9 Day of birding on Barú Volcano in La Amistad Park, switching to the eastern side of the volcano to stay at Finca Lerida, a coffee hacienda in Boquete, for 3 nights. Volcán Barú NP Day 10 Day of birding the eastern slopes of Barú Volcano from Boquete for further mid and high elevation species. Palo Seco Forest Reserve Day 11 Early start, dropping down to the Palo Seco Forest Reserve in the Caribbean lowlands for the day.
This is a good place for Rufous-winged Woodpecker, Black-capped Pygmy-Tyrant, Ashy-throated Bush-Tanager, Immaculate Antbird, and Olive-backed Euphonia, as well as many warblers. Day 12 Birding in the grounds of your lodge in Boquete. Transfer to David for a mid-afternoon flight to Panama City for 2 nights. Engineering or Nature Day 13 Decisions, decisions. You could visit the Gatún Locks, continue across the isthmus to Colón and catch an afternoon train back alongside the Canal with views of the lake, the forest and the engineering. Or, at a supplementary cost (and depending on the day of the week), you could either make a partial transit of the Canal itself, or visit the Smithsonian Institution's tropical research centre at Barro Colorado Island. Panama City Day 14 Transfer to the airport for your international flight home, or extend your trip to experience the birds of Burbayar. Burbayar extension Day 14 From Panama City drive 2-3hr for 2 nights at Burbayar Lodge in the Serranía de Kuna Yala, near Nusagandi, for many Darién endemics. The lodge is attractively rustic, built of recycled wood with cane walls; it has private bathrooms, some solar power, and candles at night. Staff are from local Kuna communities. Day 15 Full day birding at Burbayar. The many highlights include Speckled Antshrike, Black-crowned, Thicket and Streak-chested Antpittas, Green Manakin, Scaly-breasted and Stripe-throated Wrens, Black-striped and Wedge-billed Woodcreepers, Red-throated Caracara, and King Vulture. Day 16 After a final dawn's birding at Burbayar, transfer to the airport for international flights departing in the afternoon. Easy birding in Panama Canopy Tower, Soberania NP Day 1 You are met on arrival in Panama City and driven to Canopy Tower in Soberania NP for 7 nights full board with daily birding excursions led by experienced English-speaking local bird guides, in groups of up to 8 birders. In a week you can reasonably hope to see 275-300 species.
The lodge is a converted US military metal-skinned radar tower on a hilltop, with an upper level dining area and wide top viewing platform that are excellent for canopy birds. Rooms have fans and windows looking into the trees; some have private bathrooms, others share. Plumbing limitations mean guests are urged to have rapid military-style showers. The tower is very popular in the raptor migration season (mid-Oct - mid-Nov). Day 2 Semaphore Hill and Plantation Trail. Day 3 Summit Pond and Old Gamboa Road. Day 4 First half of Pipeline Road. Summit Garden, Harpy Eagle exhibit. Day 5 Ammo Dump Pond and Chagres River. Evening owling. Day 6 Second half of Pipeline Road. Day 7 Round-up day, or visit Miraflores Lock, Panama Canal. Canopy Lodge, El Valle Day 8 Some final birding before a 2hr drive to Canopy Lodge for 4 nights full board with shared birding excursions. Canopy Lodge has stylish rooms in bird-friendly gardens by a stream, next to the protected area of Cerro Gaital. Rest of day birding the Cara Iguana trail. Day 9 Cerro Gaital trail and a local waterfall. Day 10 Chorro Macho trails and private gardens with well-attended feeders. Day 11 El Churu forest and La Zamia trail. Day 12 Return by road to Panama City for your flight home. Nicaragua is a very, very special country, blessed with wonderful scenery, a rich colonial heritage, a taste for the arts, and a strong sense of community and humanity. The impressive scenery of Nicaragua's Pacific basin and the rich colonial architecture of its two historic cities, Granada and León, are a perfect complement to neighbouring Costa Rica. There are some excellent places for time at the beach too, great wildlife experiences, and plenty of off-the-beaten-track places to engage a traveller's curiosity. Now ranked as one of the safest countries in Central and South America, Nicaragua is peaceful, democratic and welcoming.
Cursed by dictators until their overthrow in 1979, and then by the seven-year struggle against the Contras, Nicaragua has since enjoyed twenty-five years of peace that have allowed its people to rebuild their lives and establish a peaceful, natural Nicaragua of their own choosing. Although Nicaragua remains one of the poorest countries in the world, with a GDP of just $1,200 per head, Nicaraguans are working hard to rebuild their economy, and the positive warmth and genuineness of their welcome is remarkable. A proportion of international aid has been directed to the restoration of Nicaragua's architectural heritage and to encouraging tourism. A sprinkling of well-run, characterful hotels is being added to each year, helping to make Nicaragua a very attractive, unusual and satisfying place to discover and explore. We thoroughly recommend it. **GRANADA** As one of the oldest European settlements in the Americas, this beautiful city is filled with nostalgia and romance. Wander through its brightly painted streets (many in the centre have recently been spruced up in ice-cream shades) and behind the impressive colonial frontages you'll catch glimpses of quiet inner courtyard gardens where poets dream. The Convent of San Francisco displays pre-Columbian treasures from Isla Zapatera, including large basalt Chorotega figures. Horse-drawn carriages—the city's taxis since 1524—allow you to explore the city at a leisurely pace. Five minutes from the city lie Las Isletas de Granada, a freshwater archipelago of 354 rocky islands in Lake Nicaragua. This water-bound community is a mix of ritzy weekend villas on private islands, interspersed with some very humble homes and, lately, a boutique hotel. Locals are mainly fishermen or caretakers of the luxury properties. The area is great for birdwatchers, with osprey, cormorants, kingfishers, oropendolas, gnatcatchers, egrets, parrots and parakeets all to be seen.
**LEÓN** The university city of León, twinned with Oxford, has some of the best preserved classic colonial architecture in Central America. The iconic Cathedral of León, one of the region's grandest, took 113 years to build and is among León's dozen or so impressive churches from the colonial era. Other sights include the Museum of Art Ortiz-Gurdian, set in a lovely colonial home, and the museum of Rubén Darío, one of the greatest poets in the history of the Spanish language, in the house where he lived. The old city, León Viejo, was Nicaragua's first capital, founded by the Spanish in 1524 below looming Momotombo volcano. In 1610 the entire city was evacuated to its current location. Six months later Momotombo erupted and León Viejo was smothered in ash. It was not until 1967 that its ruins were found. It is now a UNESCO World Heritage site. The foundations have been uncovered of homes, the country's first mint, a church, a convent, a brothel, and the cathedral where the conquistador Cordoba is buried. **MASAYA NP** Masaya National Park includes two volcanic cones and five craters, all within one enormous crater. The main cone, Masaya Volcano itself, is one of just four in the world that maintain a constant pool of lava. A road takes you to the lip of its deep vertigo-inducing crater, crossing solidified lava from a flow in 1772. The Chorotega people called Masaya 'burning mountain' and made sacrifices of young women and boys to appease its goddess of fire. The Spanish wondered whether the volcano's magma might be a door to Hell and erected a precautionary cross at the crater's edge; a replacement can be seen today. Around the park are whitewashed villages known as Los Pueblos Blancos, considered the cradle of Nicaraguan tradition and folklore, whose residents take much pride in their Chorotegan ancestry. Each village specialises in its own craft: pottery, hammock-making, leatherwork, basket-making and so on. All their wares are for sale at Masaya's market.
On Thursday nights folk musicians and dancers give enthusiastic performances here—a great chance to spend time among local families on an evening out. LAKE NICARAGUA Lake Nicaragua is the largest lake in Central America. It connects to the Caribbean via the Río San Juan, which forms part of the border with Costa Rica. The river was navigable by small vessels, making the city of Granada on the far side of the lake an Atlantic port, though only 50 miles from the Pacific. Plans to build a canal to the Pacific once rivalled the prospect of a canal through Panama. As well as the islets close to Granada, described above, there are two principal islands in the lake, Ometepe and Zapatera, and the Solentiname archipelago. Ometepe Island Ometepe’s dramatic outline of twin volcanoes rising from the lake, linked by a narrow isthmus, instantly marks it as a special place. Concepción is the slightly larger of the two cones—still active and one of the most symmetrical in the world. Its sibling, Maderas, considered extinct, is swathed in dense tropical forest and has a cold, misty crater lake. A gorgeous waterfall pours from its western face and small coffee fincas flourish on its lower slopes. Ometepe is scattered with large carved basalt idols and numerous petroglyphs, dating from at least 1500BC, and probably much earlier—testimony to this inspirational setting. Solentiname archipelago This cluster of 36 islets at the southern end of Lake Nicaragua is home to a remarkable artistic community of more than 50 painters and artisans whose vibrant naïve art is strongly linked to their tropical surroundings. The islands are studded with carved stones, sacred caves and burial grounds of the Chorotega, Nahuatl and Guatuso peoples. RIO SAN JUAN Few places are as isolated and evocative as the jungle settlements along the San Juan river, which flows from Lake Nicaragua into the Caribbean. Things might have been so different. 
The Spanish considered this a major trading route connecting Granada with the Caribbean and defended it with a castle (below which now lies the small town of El Castillo) and several forts. The English tussled with them for it more than once; the young Horatio Nelson had a prominent role in taking the castle in 1780 as part of a short-lived British venture to capture all Spain's colonies in Central America. Very little has troubled the river and its villages since those distant times, and daily life potters along quietly and harmoniously. It's a great area to visit for a flavour of how river communities live, and to see three excellent wildlife reserves: the easily-accessed Bartola Reserve close to El Castillo, the large and remote Indio-Maíz reserve further down river, and the excellent Los Guatuzos wetland reserve on the southern edge of Lake Nicaragua, accessed from San Carlos where the river leaves the lake. NORTHERN HIGHLANDS Driving north from the lowlands around Lake Managua and Lake Nicaragua you soon enter the attractive scenery of Nicaragua's lush Northern Highlands, where cool cloud forests, small fincas, and sleepy villages stand in sharp contrast to the dry plains and bustling cities of the south. It's a very peaceful and welcoming region. Most of the nation's coffee, one of Nicaragua's great gifts to the world, is grown here. Some of the coffee farms now operate as agrotourism lodges (see our website for details), though accommodation everywhere in the highlands is spartan. The reason to visit is to explore the cloud-forested hills, home to agouti, howler monkeys, deer, sloth, puma and ocelot, toucans, trogons, hummingbirds and the Resplendent Quetzal. Birding is in its infancy in the highlands, but the auguries are good. The cities of Matagalpa and Jinotega are centres for coffee.
Small-scale attractions include the tiny village of San Rafael del Norte, from where Sandino coordinated his troops in 1927; it is now home to a rickety but memorable museum in his honour. MANAGUA Nicaragua’s capital city, Managua, the usual arrival point for visitors, is an easy-going place spread out by the shores of Lake Managua below Volcán Tiscapa. Its future is plain to see as the smart offices of newly-arrived multinationals spring up along roads jostling with packed buses, well-worn Ladas and shiny 4x4s. The city’s plusher tree-lined avenues and a scattering of small office blocks soon give way to networks of busy streets and markets. The revolutionary period is marked by a striking 59ft silhouette of Sandino placed on the ruins of the hated Somoza’s presidential palace high on Volcán Tiscapa, while an eternal flame burns at the memorial to Carlos Fonseca, second only to Che Guevara in the pantheon of 1960s revolutionaries. Nature Reserves 18% of Nicaragua is protected in reserves, but ecotourism and wildlife conservation fall far short of their potential and are a story waiting to unfold. Each and every visitor makes a difference. Here are three examples. Juan Venado The Juan Venado reserve protects part of an important coastal wetland corridor and can be visited in an easy day trip from León. A 1hr drive brings you to a local beach where you board a waiting motor boat to explore the reserve on an enthralling 3hr trip. Isla Juan Venado is a narrow barrier island 22km long separating the ocean from a river estuary. Beyond the sandbar are important mangrove areas, and the estuary is home to an abundance of life including caiman, crocodiles, iguanas, crustaceans and other marine creatures. The reserve includes a nesting beach for endangered Leatherback turtles. Olive Ridley turtles nest on the sand spit. 
Domitila Reserve Domitila is a private reserve conserving tropical dry forest, where in the dry season the trees shed their leaves to minimise water loss. Domitila is a haven for birds, butterflies, howler monkeys and other mammals including jaguarundi, puma and sloth. There are 15km of trails for hiking, birdwatching and riding, 9 freshwater lagoons, and thermal waters to bathe in. Montibelli Reserve A 160 ha private reserve for day visitors. Mostly dry forest and former coffee plantations on 360–720m slopes, just 30 mins from Managua. Visitor trails give good birding (over 100 species listed). When to visit Nicaragua Nicaragua has a broadly similar climate to Costa Rica, its neighbour to the south. The Pacific basin in western Nicaragua, where the historic towns are situated, has a more pronounced dry season (from December to mid-May); at its height it can be too hot for some. Here the ‘wet’ season (mid-May to mid-November) is more moderate than in Costa Rica and generally good for travel outside the wettest months of September and October. Planning your trip Tailor-made holidays A holiday in Nicaragua might sometimes be more challenging than one in Costa Rica or Panama, but you will be well rewarded by the memorable places you visit. With good flight connections, and a simple road crossing at Peñas Blancas, it is easy to combine Nicaragua and Costa Rica on the same trip. Sample itineraries The sample itineraries shown here indicate what works well in Nicaragua. They give you a starting point. Pick the ideas that most appeal to you and talk things through with one of our specialists. Where to stay Nicaragua has a small but growing number of great places to stay. New arrivals often feature stylish design in natural materials, are strong on conservation and sustainability, and are run by owners with a great passion. There are good options in each of the principal towns, often newly-updated or in well-restored colonial buildings. 
The Rainforest Alliance has been working hard to promote sustainable development among hoteliers, to match its impressive results with foresters and coffee growers. Within our full range we feature all the relatively upscale hotels that have so far been verified by Rainforest Alliance; see examples on p46-47 marked with an [R]. Getting around The best way to see Nicaragua is to be driven by an English-speaking local guide, which is often a real delight. It means you can hear about Nicaragua from someone who understands the country well and is likely to have lived through its struggles. You’ll get to know Nicaragua as it is now, with your guide to show you the more unusual things as well as the sights listed in the surprisingly few guidebooks to the country. Food and drink Nicaraguan food is simple but good. Among local dishes, *gallo pinto* (fried rice and beans) is as popular as in Costa Rica. Maize is the staple in *nacatamales* (like Mexican tamales), tortillas and much else. There are ‘international’ choices for visitors, with steak, pork, chicken, pastas and pizzas prepared in standard ways or with a local twist. By the coast fish and seafood are the norm, often super fresh. Tropical fruits abound: mango, papaya, jocote, bananas, pipian and avocado, and others you will not have heard of. Nicaraguan beer is excellent, and there are imported options too. Chilean or Argentinian wines are usual when eating out. To start the evening try the delicious *el Macuá*, Nicaragua’s favourite tipple, made with rum and a guava and lemon juice mix. Flor de Caña is the leading local brand of rum, and very good it is too. Some recent history Nicaragua’s capital, Managua, still bears scars from the 1972 earthquake that destroyed its downtown centre. The misuse of relief funds was one factor that brought on the 1979 Sandinista revolution that overthrew General Somoza, the dictator whose corrupt dynasty, long supported by the US, had siphoned off over half the country’s wealth. 
Well aware of the General’s father’s crimes during his presidency, Franklin Roosevelt had said “He’s a bastard, but he’s our bastard”. The Sandinistas won elections in 1984, only to be opposed by the US-backed ‘Contras’ who resented the left-wingers’ associations with Cuba and the Soviets. US policy collapsed when the Contras were found to be covertly funded by US sales of arms to Iran, allowing a peace agreement to be reached. The subsequent 1990 elections were won by a woman, Doña Violeta Chamorro, whose sons had fought on opposite sides. She consolidated the peace, and brought together both her country and her family. The US financial support that helped elect her wilted thereafter. Nicaraguan Odyssey A hugely enjoyable and varied holiday, touching each era from prehistory to the Revolution, visiting glorious colonial cities, lively villages and isolated settlements, in a stunning array of landscapes, with wildlife, creative arts and community projects along the way. Managua Day 1 You are met on arrival at Managua airport and driven to your characterful hotel in a quiet suburb for a 2 night stay. Day 2 BL After a leisurely start to recover from your journey you meet your guide this morning to begin a tour of the capital, visiting the old downtown area and National Museum, the hilltop Sandino memorial, the new centre with its ultra-modern cathedral, and the evocative Footprints of Acahualinca made 6,000 years ago. León Day 3 BL Today you travel to the colonial city of León, visiting the ruins of León Viejo on the way. Your guide will show you León, the intellectual heart of the country, with its colonial churches and impressive cathedral, the Ortiz-Gurdian Museum, one of the finest contemporary art galleries in Central America, and the museum of Rubén Darío, one of the greatest Spanish language poets. You stay 2 nights in the heart of the historic quarter. 
Rural life Day 4 BL Today you’ll learn something of rural Nicaragua’s culture and traditions in the villages northwest of León. A small field of boiling mud fumaroles is a curiosity worth a stop on your way, with a glimpse of village life in tiny San Jacinto beside it. Travelling on you call at the rustic town of El Viejo, known for its doughnuts and its C17 Basilica of the Immaculate Conception. Norwich has the distinction of being twinned with this lovely little town. By midday you reach the town of Chinandega, in one of the most fertile valleys in Central America, which has an impressive colonial church of its own. (One of the area’s crops is sugarcane, some of which ends up in Flor de Caña, the nation’s favourite rum.) A foundation was created here in response to the intense child poverty that arose in the 1980s; today it cares for 400 lively kids—if you’d like to drop by and say hello you can practise your Spanish amid a bombardment of chatter and big-eyed grins, and valiant attempts to play impossibly large musical instruments. A Chorotega museum holds 1500 pre-Columbian artefacts, and a young trainee from the foundation may join your guide to show you round. Nicaragua’s past, present and future all in a few hours. Juan Venado reserve Day 5 BLD This morning you visit the Juan Venado reserve (p43), an hour’s drive from León. You are met by your private boat to explore the estuary, mangroves and wetlands behind the 22km barrier island: a quiet natural world teeming with life. Returning to modern life you take the Pan American to the village of Masatepe, where you stay 2 nights at the lovely Puerto del Cielo (p46). Pueblos Blancos craft villages Day 6 BL Today you visit the evocative ‘white villages’ below the massive Masaya volcano. Each village specialises in a particular handicraft—from pottery to hammocks, baskets, leather or woodwork. They are busy and attractive little places. 
You return to your lodge with time to relax and enjoy its spectacular views. Ometepe Island Day 7 BLD The ferry from San Jorge takes you across Lake Nicaragua to Ometepe. You explore the island with your guide, including Laguna Charco Verde, a mysterious pool said to cover an enchanted city, and the hamlet of Altagracia with Nahuatl idols beside the church. Stay 2 nights at the wonderful Totoco Ecolodge, or at the lakeside Villa Paraíso, with the rest of the day free. Day 8 BD A free day to unwind in this great location, with a short country walk to San Ramón Cascade, a lovely 110m waterfall. More strenuously, you could hike with a guide up to the crater of Volcán Concepción; kayaking and horse riding are also available locally. Granada Day 9 BL Catch a late morning ferry off the island to be met at the dock for the drive to Granada for 2 nights. Day 10 BL A tour of the splendid city of Granada, including the Cathedral, Convent of San Francisco, Casa de los Leones and Museo de Granada—a private museum within a grand home. There is free time to wander and enjoy the city by yourself. In the late afternoon take a boat tour on the lake through Las Isletas de Granada. Mombacho Reserve and Masaya at night Day 11 BL Driving north from Granada you visit the cloud forests of Mombacho Reserve. Orchids and bromeliads deck the stunted trees, and you will hear about the reserve’s endemic species, colourful frogs, howler monkeys and white-faced capuchins. The conservation group that looks after the reserve helps coffee growers use sustainable methods, which you will see at a Rainforest Alliance certified plantation. Schoolchildren are taught about the environment: volunteer pupils work as park rangers and guides, and help in the tree nurseries. After some free time back in Granada, you are driven south at dusk to Masaya Volcano. 
Parakeets fly in to roost as the sun sets, when you await the exodus of thousands of bats from a lava tube on the volcano’s flank, on their nightly foraging. The intrepid can don... Just a week in Nicaragua A week of touring Nicaragua that combines well with a trip to Costa Rica, or time at the beach on the Pacific coast or the Corn Islands. Granada Day 1 You are met in the morning on your arrival at Managua airport (or the Costa Rican border at Peñas Blancas) by your experienced English-speaking local guide, who drives you to Granada where you stay 3 nights at a comfortable central hotel. Day 2 BL You explore the colonial city of Granada with your guide to show you its churches, archaeological museums, quiet side streets and attractive main plaza, ending with a boat ride among the Isletas de Granada—the small islands described on p43. Mombacho reserve and Masaya at night Day 3 BL Today you visit the Mombacho reserve and see Masaya Volcano at night, as Day 11 of our ‘Nicaraguan Odyssey’ opposite. Pueblos Blancos craft villages Day 4 BLD Leaving Granada you visit the villages known as Los Pueblos Blancos described in Day 6 of our Nicaraguan Odyssey. Stay overnight at a lodge with a spectacular hillside view. León Viejo UNESCO World Heritage Site Day 5 BL A morning to relax in a hammock, by the hotel pool or take an optional spa treatment. After lunch you are driven north along the shores of Lake Managua, where ox carts plough rich soil and mud bricks bake in the sun, to the ruins of Old León, buried by an eruption in 1610 and now an impressive archaeological UNESCO site. Onwards to stay at a comfortable hotel in León for 2 nights. León Day 6 BL A full day to explore León with your guide, including visits to Subtiava church, Ortiz-Gurdian and Rubén Darío museums and Cathedral. Take your time and wear a hat in its dry season heat. You will also be driven out of town to visit San Jacinto’s hot springs. 
Managua Day 7 BL Driving to Managua, you see the sights of this most unusual capital. There is much living history to hear, tales from the 1972 earthquake and the revolution which followed, the National Museum, the modernist cathedral and the Footprints of Acahualinca. Stay a night in the city at a comfortable hotel. Day 8 B Transfer to the airport for the flight home, or flights to the Corn Islands, or the San Juan River, or by road to Pacific beaches. Joining in It seems to be in the soul of Nicaraguans to support each other. Call it socialism, liberation theology, charity, volunteering, community spirit, or just a proper way to live, you’ll find this whole-hearted spirit bubbling up a lot—sometimes organised through unions, cooperatives, councils, churches or schools, other times just done that way. From the traveller’s point of view, helping your neighbour rather than preying on them accounts for the fact that Nicaragua is a relatively safe place to visit. It also means that there is an enlivening human warmth to be experienced and shared as you travel about. What better way could there be than to join in? There are several community development projects that welcome visitors to experience and share local ways of life under the ‘ecotourism’ banner. For example, a group of villages in the province of Granada have joined together, each offering visitors something different: country walks, cycling, riding on an ox cart, donkey or horse, seeing how crops are grown, or recounting tales from the 1980s. You stay in spare rooms (some with private bathroom), dorm rooms, or hammocks. There’s a similar sustainable tourism project in Jinotega with a focus on coffee, history, culture and religion. Let us know if you might like to include experiences like these in your tailor-made holiday. (You only pay the charge made by the community, we add nothing.) You could also stay at an ecotourism lodge such as the award-winning Finca Esperanza Verde, see p46. 
Nicaragua is the poorest country in Latin America after Haiti, with 40% of the population struggling to get by on little more than a dollar a day. Any visit to Nicaragua will have a positive impact, and by joining in you are making your contribution where it is needed most. Where to stay in Nicaragua Hotels for touring **El Convento** *León* A lovely hotel created from a convent founded in 1639, set around a formal courtyard garden with fountain and palms, in the heart of León within walking distance of the Central Plaza. 31 spacious guest rooms with 2 queen beds, all with private bathroom and a/c, in an austere décor with thick walls giving a quiet, cloistered atmosphere that feels rich in history. There is one suite for an extra special stay. The rooms open off a broad tiled corridor decorated with artefacts with a religious theme. Public areas are very elegant with beautiful antiques from the Spanish colonial era and art on the walls. There is a fine restaurant and a patio café. **Jicaro Island Ecolodge** *Granada* A boutique eco-retreat on a wooded rocky islet in Lake Nicaragua among the 300 islands of Las Isletas; a 15min boat ride from Granada, with views across the water to the dramatic outline of Mombacho Volcano. The style is understated Zen, with a focus on wellness, romance, relaxation and good food. Each of the 9 light and airy casitas is on two levels with bedroom above and living area and bathroom below. Both levels give onto private decks with lake views and over-sized hammocks. There is a small infinity pool. Yoga and massage sessions are available. There is no space on the island to stretch your legs, so take a book. The restaurant’s menu is founded on organic local ingredients including lake fish. Romantic dinners can be taken on a private floating deck, though not out of view. 
**Totoco Lodge** *Ometepe Island* Another superb new ecolodge, this time in the magical surroundings of Ometepe Island, Totoco Lodge is set on the slopes of Maderas Volcano with a commanding view across the island to Concepción, its twin. The lodge was built by hand, literally, and is run with great attention to detail by a committed young team who have put the best sustainable technology into practice in a labour of love. The result is both homely and spectacular. The lodge opened with 4 thatched casitas with private bathrooms and terrace with hammock. Lovely home cooking is served in the open-sided thatched dining room. **Hotel Plaza Colón** *Granada* Overlooking Granada’s colourful main square and directly opposite the cathedral, Hotel Plaza Colón is a standard choice for visitors to the city. Rooms in the front, historic part of the hotel are very spacious with lovely polished floorboards, and some have wonderful balconies onto the square. Further back the rooms are neat and modern, set around an attractive series of patio courtyards, one with pool—they don’t have the view but they are quieter. There’s a pleasant bar, a restaurant linked to the hotel, and others to choose from nearby. Hotel Plaza Colón has achieved ‘verified’ status with Rainforest Alliance. **Hotel Los Robles** *Managua* A small hotel in a quiet residential area in central Managua not far from shops and nightlife. The hotel is in colonial style around a courtyard garden with fountain, and decorated with many interesting antique pieces. All rooms have a/c and en suite facilities. A comfortable base from which to explore Managua or to overnight at the start or end of a trip. **Montecristo River Lodge** *Sábalos, San Juan River* A river-front lodge backed by rainforest, 1hr by boat from San Carlos, near the historic town of El Castillo. Simple wooden cabins, each with queen and single bed, fan and private facilities. It’s not much, but good for the area. 
Walks on forest trails, kayaks, canoes, horses, and artisan fishing are included, or just relax in a hammock and watch life on the river go by. Guests must climb 65 steps to arrive at the lodge. All meals and standard drinks are included. **Worth a mention** **La Gran Francia** *Granada* Dating from 1524, restored in the ’90s, and in the heart of the colonial centre, this is one of a small handful of good choices for a visit to Granada. **Villa Paraiso** *Ometepe Island* A simple lodge with wonderful views on a sandy lake shore between Ometepe’s two volcanoes. 1hr from the port at Moyogalpa. Can be windy. **Contempo** *Managua* A boutique hotel in uber-urban style with the feel of a private residence. Each room is uniquely decorated. In house upscale restaurant. Convenient for city and south. **Hacienda Puerto del Cielo** *Masaya* Small new eco lodge and spa in a stunning location high on a hill with commanding views over Lake Catania to Masaya National Park. Just 9 guest rooms so far. THE CORN ISLANDS Twice daily flights from Managua deliver a few handfuls of astute beach lovers to the Corn Islands, 70km off Nicaragua’s Caribbean coast. Life on Big Corn and Little Corn is about as simple as life can get. Forget the luxuries normally associated with the Caribbean, these islands are so far off the tourist map that you’ll have trouble paying more than a few dollars for a lobster dinner. English is the lingua franca, thanks to the islands’ long history as a British protectorate from 1655 to 1894. Accommodation is simple, relaxed and as basic as can be, and that’s the secret of the Corn Islands. You’re not here to be pampered or look good, you’re here to escape from all that, to draw breath, and to revel in the sparse beauty of the beach, the sky and the blue-green sea. Take your snorkel, a powerful sunscreen, and a moderate mosquito repellent. Leave the rest behind. 
Classy Chill-out This upmarket itinerary works well on its own or combined with touring Nicaragua or Costa Rica. Managua Day 1 You are met on arrival at the airport and driven to your chosen hotel in the capital. Jicaro Island Days 2-4 BLD You are collected from your hotel and privately transferred to the colonial city of Granada, with a short side-trip to see Masaya Volcano, driving up to the very rim of its huge crater. After lunch and a walking tour of the city you are taken by boat out to Jicaro Island Ecolodge in Lake Nicaragua for 3 nights full board. Pacific beach Days 5-7 BLD After a leisurely start, you return by boat to Granada and are privately driven to your chosen Pacific beach setting: Punta Teonoste, Morgan’s Rock, or Aqua Spa yoga and wellness retreat. Day 8 You are collected in the mid or late morning and driven to Managua (or other onward destination). Corn Islands Step down to the simple life. Pure sea, pure sky, pure beach and very little else to clog your mind. Big Corn Island Day 1 You are transferred to Managua airport for your flight to Big Corn Island to be met by the hotel’s shared taxi service and driven to Arenas Beach Hotel (this page) for 3 nights. Days 2-3 B Two free days on the islands (but we can extend this to as many days as you can spare). You can simply relax, of course, which is the whole idea, or take a cab to good snorkelling spots. Your stay is on a B&B basis so you can sample the various eateries around the island. To explore further afield, take the ferry to the even more basic Little Corn Island. (It may be a bumpy crossing, but usually only in January and February. The ferry returns early, around 1pm.) There is good diving around Big Corn if you bring your PADI certificate. Day 4 B Depending on your onward travel arrangements you are transferred to the airport to catch the early morning or afternoon flight to Managua where you are met and helped on your way. 
Punta Teonoste Punta Teonoste is a boutique-style eco-resort along a long pristine beach. The atmosphere is chic, relaxed and stylish. There’s a large pool, a spacious open-plan reception, bar and restaurant, and a small gym and spa. The 16 whitewashed adobe-style bungalows have private decks with hammocks, a living area connecting with an alfresco shower and wc, and a four-poster bed upstairs under a steep thatched roof. Active options include trips to the Chacocente Wildlife Refuge, notable for the turtles that nest on its beaches in large numbers. Surf breaks near the lodge attract expert surfers. Punta Teonoste has good sustainability credentials (including no a/c in the rooms). Getting there is an insight into rural life on dirt roads that follow streams. Morgan’s Rock San Juan del Sur An internationally known high-end ecolodge in a private nature reserve, with a focus on conservation, community development and reforestation—one of the pioneers of ecotourism in Central America. 15 stylish, spacious wooden bungalows are spread on a cliff above a private bay where turtles nest (August-November). They are connected to a main lodge by lots of steps and a sturdy 110m suspension bridge over a forested canyon. Each bungalow has a king size bed, a sofa bed, and a private deck with ocean view and outdoor shower. There is an infinity swimming pool. The private 800ha dry forest reserve has howler monkey, sloth and many birds. Arenas Beach Big Corn Island Our favourite on the Corn Islands. Set on the best beach (white sands, water usually calm, clear and blue-green), with truly great sunsets. Mid-range in quality but ticks the boxes for clean spacious beach accommodation with a/c and fan, modern en-suite facilities, etc and even wi-fi when it works. Step across a sandy lane and you’re on the beach where there are sun-loungers and gazebos to relax under if the sun is too strong or a shower passes over. 
Lobster, ceviche, fish etc (plus steaks and pizzas) are served in the relaxed restaurant. That’s all you’ll need. Worth a mention Mukul Guacalito de la Isla, Emerald Coast Stylish cottages and villas with plunge pools set in 4 miles of sweeping coast. An eco-sensitive upscale resort with 2 restaurants, beaches, surfing, hiking, spa, golf. Pelican Eyes / Piedras y Olas San Juan del Sur Mid-size resort on a hillside facing the ocean, with a nice bar and restaurant, varied accommodation, choice of pools. Lots of steps and winding paths. Good excursions. Hotel Victoriano San Juan del Sur Neat rooms with shutters and balconies in cream and white around a small pool. A good beachfront choice for a seaside stay. This travel brochure is part of a series prepared by Geodyssey on some of our destinations in Latin America and the Caribbean. For others in the series please call us or visit www.geodyssey.co.uk Geodyssey and Rainforest Alliance have established an alliance to support Best Management Practices in Sustainable Tourism since 2007. The copyright of all written material, maps and layouts in this brochure is held by Geodyssey Ltd. The copyright in photographs is either held by Geodyssey Ltd or retained by the photographer. No part of this brochure may be reproduced, stored, introduced to a retrieval system, or transmitted in any form without the prior written permission of the copyright holder. Photographs: Gilberto Alemancia, Nigel Harcourt Brown, Gillian Howe, Judy Kingston, Julie Middleton, Tenille Moore, Ference Spooner, John Thurtle, David Tipling. Tel: 020 7281 7788 Fax: 020 7281 7878 www.geodyssey.co.uk firstname.lastname@example.org 116 Tollington Park, London N4 3RB, England
The following is the proposed agenda for the Auditor Selection Committee meeting of the Silverleaf Community Development District, scheduled for **Wednesday, February 14, 2018 at 1:30 p.m.** at 8141 Lakewood Main Street, Bradenton, FL 34202 (immediately following the adjournment of the Board of Supervisors’ meeting). Call in number: 1-877-864-6450 Participant Code: 974058 **AUDITOR SELECTION COMMITTEE MEETING AGENDA** - Roll Call to Confirm a Quorum - Review and Approval of Audit Documents - Audit RFP Notice - Instructions to Proposers - Evaluation Criteria – with and without price - Adjournment The Board of Supervisors ("Board") of the Silverleaf Community Development District ("District") will hold an Audit Committee meeting and regular meeting of the Board of Supervisors on February 14, 2018 at 1:00 p.m. at 8141 Lakewood Main Street, Suite 209, Bradenton, FL 34202. The Audit Committee will review, discuss and establish the minimum qualifications and evaluation criteria that the District will use to solicit audit services. The regular Board meeting will take place prior to the Audit Committee meeting, where the Board may consider any other business that may properly come before it. A copy of the agendas may be obtained at the offices of the District Manager, Fishkind & Associates, Inc., located at 12051 Corporate Boulevard, Orlando, Florida 32817, (407) 382-3256 ("District Manager’s Office"), during normal business hours. The meetings are open to the public and will be conducted in accordance with the provisions of Florida law. The meetings may be continued to a date, time, and place to be specified on the record at the meeting. There may be occasions when Board Supervisors or District Staff may participate by speaker telephone. Any person requiring special accommodations at the meetings because of a disability or physical impairment should contact the District Manager’s Office at least forty-eight (48) hours prior to the meeting. 
If you are hearing or speech impaired, please contact the Florida Relay Service by dialing 7-1-1, or 1-800-955-8771 (TTY) / 1-800-955-8770 (Voice), for aid in contacting the District Manager’s Office. Any person who decides to appeal any decision made by the Board or the Committee with respect to any matter considered at the meetings is advised that person will need a record of proceedings and that accordingly, the person may need to ensure that a verbatim record of the proceedings is made, including the testimony and evidence upon which such appeal is to be based. Jill Burns District Manager RUN DATE: ________________ The Board of Supervisors of the Silverleaf Community Development District will hold an Audit Committee meeting and regular meeting of the Board of Supervisors on March 14, 2018 at 1:00 p.m. at 8141 Lakewood Main Street, Suite 209, Bradenton, FL 34202. The regular meeting will take place immediately following the adjournment of the Audit Committee meeting where the Board may consider any other business that may properly come before it. The Audit Committee will review, discuss and recommend an auditor to provide audit services to the District for Fiscal Year 2017. A copy of the agendas may be obtained at the offices of the District Manager, Fishkind & Associates, Inc., located at 12051 Corporate Boulevard, Orlando, Florida 32817, (407) 382-3256, during normal business hours. The meetings are open to the public and will be conducted in accordance with the provisions of Florida law. The meetings may be continued to a date, time, and place to be specified on the record at the meetings. There may be occasions when Board Supervisors or District Staff may participate by speaker telephone. Pursuant to the Americans with Disabilities Act, any person requiring special accommodations to participate in these meetings is asked to advise the District Office at (407) 382-3256 at least forty-eight (48) hours prior to the meetings. 
If you are hearing or speech impaired, please contact the Florida Relay Service by dialing 7-1-1, or 1-800-955-8771 (TTY) / 1-800-955-8770 (Voice), for aid in contacting the District Office. Any person who decides to appeal any decision made by the Board or the Committee with respect to any matter considered at the meetings is advised that this same person will need a record of the proceedings and that accordingly, the person may need to ensure that a verbatim record of the proceedings is made, including the testimony and evidence upon which such appeal is to be based. Jill Burns District Manager RUN DATE: ________________ SILVERLEAF COMMUNITY DEVELOPMENT DISTRICT REQUEST FOR PROPOSALS FOR ANNUAL AUDIT SERVICES The Silverleaf Community Development District hereby requests proposals for annual financial auditing services. The proposal must provide for the auditing of the District’s financial records for the fiscal year ending September 30, 2017, with an option for two (2) additional annual renewals. The District is a local unit of special-purpose government created under Chapter 190, Florida Statutes, for the purpose of financing, constructing, and maintaining public infrastructure. The District is located in Manatee County and has an operating budget of approximately $128,540. The final contract will require that, among other things, the audit for Fiscal Year 2017 be completed no later than June 1, 2018. Each auditing entity submitting a proposal must be authorized to do business in Florida; hold all applicable state and federal professional licenses in good standing, including but not limited to a license under Chapter 473, Florida Statutes; and be qualified to conduct audits in accordance with “Government Auditing Standards,” as adopted by the Florida Board of Accountancy. Audits shall be conducted in accordance with Florida law and particularly Section 218.39, Florida Statutes, and the rules of the Florida Auditor General. 
Proposal packages, which include additional qualification requirements, evaluation criteria and instructions to proposers, are available from the District Manager at the address and telephone number listed below. Proposers must provide three (3) hard copies of their proposal and one (1) electronic copy (CD or flash drive) to Jill Burns, District Manager, located at 12051 Corporate Boulevard, Orlando, Florida 32817, in an envelope marked on the outside “Auditing Services - Silverleaf Community Development District.” Proposals must be received by March 7, 2018, at 3:00 p.m., at the office of the District Manager. Please direct all questions regarding this Request for Proposals to the District Manager, who can be reached at (407) 382-3256. Any protest regarding the terms of this Notice, or the proposal packages on file with the District Manager, must be filed in writing at the offices of the District Manager within seventy-two (72) calendar hours (excluding weekends) after publication of this Notice. The formal protest setting forth with particularity the facts and law upon which the protest is based shall be filed within seven (7) calendar days after the initial notice of protest was filed. Failure to timely file a notice of protest or failure to timely file a formal written protest shall constitute a waiver of any right to object or protest with respect to aforesaid Notice or proposal package provisions. Silverleaf Community Development District Jill Burns, District Manager RUN DATE: ________ SILVERLEAF COMMUNITY DEVELOPMENT DISTRICT REQUEST FOR PROPOSALS District Auditing Services for Fiscal Year 2017 Manatee County, Florida INSTRUCTIONS TO PROPOSERS SECTION 1. DUE DATE. Sealed proposals must be received no later than March 7, 2018, at 3:00 p.m., at the offices of the District Manager, Fishkind & Associates, Inc., located at 12051 Corporate Boulevard, Orlando, Florida 32817. SECTION 2. FAMILIARITY WITH THE LAW. 
By submitting a proposal, the Proposer is assumed to be familiar with all federal, state, and local laws, ordinances, rules and regulations that in any manner affect the work. Ignorance on the part of the Proposer will in no way relieve it from responsibility to perform the work covered by the proposal in compliance with all such laws, ordinances and regulations. SECTION 3. QUALIFICATIONS OF PROPOSER. The contract, if awarded, will only be awarded to a responsible Proposer who is qualified by experience and licensing to do the work specified herein. The Proposer shall submit with its proposal satisfactory evidence of experience in similar work and show that it is fully prepared to complete the work to the satisfaction of the District. SECTION 4. SUBMISSION OF ONLY ONE PROPOSAL. Proposers shall be disqualified and their proposals rejected if the District has reason to believe that collusion may exist among the Proposers, the Proposer has defaulted on any previous contract or is in arrears on any previous or existing contract, or for failure to demonstrate proper licensure and business organization. SECTION 5. SUBMISSION OF PROPOSAL. Each Proposer shall submit three (3) hard copies and one (1) electronic copy of the Proposal Documents (defined below), and other requested attachments at the time and place indicated herein, which shall be enclosed in an opaque sealed envelope, marked with the title “Auditing Services – Silverleaf Community Development District” on the face of it. SECTION 6. MODIFICATION AND WITHDRAWAL. Proposals may be modified or withdrawn by an appropriate document duly executed and delivered to the place where proposals are to be submitted at any time prior to the time and date the proposals are due. No proposal may be withdrawn after opening for a period of ninety (90) days. SECTION 7. PROPOSAL DOCUMENTS. 
The proposal documents shall consist of the notice announcing the request for proposals, these instructions, the evaluation criteria and a proposal with all required documentation pursuant to Section 12 of these instructions (the “Proposal Documents”). SECTION 8. PROPOSAL. In making its proposal, each Proposer represents that it has read and understands the Proposal Documents and that the proposal is made in accordance therewith. SECTION 9. BASIS OF AWARD/RIGHT TO REJECT. The District reserves the right to reject any and all proposals, make modifications to the work, and waive any informalities or irregularities in proposals as it is deemed in the best interests of the District. SECTION 10. CONTRACT AWARD. Within fourteen (14) days of receipt of the Notice of Award from the District, the Proposer shall enter into and execute a contract or engagement letter with the District. SECTION 11. LIMITATION OF LIABILITY. Nothing herein shall be construed as or constitute a waiver of District’s limited waiver of liability contained in section 768.28, Florida Statutes, or any other statute or law. SECTION 12. CONTENTS OF PROPOSALS. All proposals shall include the following information in addition to any other requirements of the Proposal Documents. A. List position or title of all personnel to perform work on the District audit. Include resumes for each person listed; list years of experience in present position for each party listed and years of related experience. B. Describe proposed staffing levels, including resumes with applicable certifications. C. Provide three (3) references from projects of similar size and scope. The Proposer should include information relating to the work it conducted for each reference as well as a name, address and phone number of a contact person. Identify any work previously conducted for other community development districts. D. The lump sum cost of the provision of the services under the proposal, plus the cost of two (2) annual renewals. 
SECTION 13. PROTESTS. In accordance with the District’s Rules of Procedure, any protest regarding the Proposal Documents must be filed in writing, at the offices of the District Manager, within seventy-two (72) hours after the receipt of the proposed contract documents. The formal protest setting forth with particularity the facts and law upon which the protest is based shall be filed within seven (7) calendar days after the initial notice of protest was filed. Failure to timely file a notice of protest or failure to timely file a formal written protest shall constitute a waiver of any right to object or protest with respect to aforesaid contract award. SECTION 14. EVALUATION OF PROPOSALS. The criteria to be used in the evaluation of proposals are presented in the evaluation criteria, contained within the Proposal Documents.

1. **Ability of Personnel.** (20 Points) This includes the geographic locations of the firm’s headquarters or permanent office in relation to the project; capabilities and experience of key personnel; present ability to manage this project; evaluation of existing work load; proposed staffing levels, etc.
2. **Proposer’s Experience.** (20 Points) This includes the past record and experience of the Proposer in similar projects; volume of work previously performed by the firm; past performance for other community development districts in other contracts; character, integrity, reputation of respondent, etc.
3. **Understanding of Scope of Work.** (20 Points) Extent to which the proposal demonstrates an understanding of the District’s needs for the services requested.
4. **Ability to Furnish the Required Services.** (20 Points) Extent to which the proposal demonstrates the adequacy of the Proposer’s financial resources and stability as a business entity necessary to complete the services required.
5. **Price.** (20 Points) Points will be awarded based upon the lowest total bid for rendering the services and the reasonableness of the proposal.
AUDITOR SELECTION EVALUATION CRITERIA (WITHOUT PRICE)

1. Ability of Personnel. (25 Points) This includes the geographic locations of the firm’s headquarters or permanent office in relation to the project; capabilities and experience of key personnel; present ability to manage this project; evaluation of existing work load; proposed staffing levels, etc.
2. Proposer’s Experience. (25 Points) This includes the past record and experience of the Proposer in similar projects; volume of work previously performed by the firm; past performance for other community development districts in other contracts; character, integrity, reputation of respondent, etc.
3. Understanding of Scope of Work. (25 Points) Extent to which the proposal demonstrates an understanding of the District’s needs for the services requested.
4. Ability to Furnish the Required Services. (25 Points) Extent to which the proposal demonstrates the adequacy of the Proposer’s financial resources and stability as a business entity necessary to complete the services required.
ODD PERFECT NUMBERS ARE GREATER THAN $10^{1500}$ PASCAL OCHEM AND MICHAËL RAO Abstract. Brent, Cohen, and te Riele proved in 1991 that an odd perfect number $N$ is greater than $10^{300}$. We modify their method to obtain $N > 10^{1500}$. We also obtain that $N$ has at least 101 not necessarily distinct prime factors and that its largest component (i.e. divisor $p^a$ with $p$ prime) is greater than $10^{62}$. 1. Introduction A natural number $N$ is said to be perfect if it is equal to the sum of its positive divisors (excluding $N$ itself). It is well known that an even natural number $N$ is perfect if and only if $N = 2^{k-1}(2^k - 1)$ for an integer $k$ such that $2^k - 1$ is a Mersenne prime. On the other hand, it is a long-standing open question whether an odd perfect number exists. In order to investigate this question, several authors gave necessary conditions for the existence of an odd perfect number $N$. Euler proved that $N = p^e m^2$ for a prime $p$, with $p \equiv e \equiv 1 \pmod{4}$ and $\gcd(p, m) = 1$. More recent results show that $N$ must be greater than $10^{300}$ [1], that it must have at least 75 prime factors (counting multiplicities) [4], and that it must have at least 9 distinct prime factors [5]. Moreover, the largest prime factor of $N$ must be greater than $10^8$ [3], and $N$ must have a component greater than $10^{20}$ [2] (i.e. $N$ must have a divisor $p^a$ with $p$ prime and $p^a > 10^{20}$). In this paper we improve some of these results. In Section 3 we show that $N$ must be greater than $10^{1500}$. We use for this the approach of Brent et al. [1], with a method to by-pass deadlocks similar to the method used by Hare [4]. With a slight modification of the approach, we show in Section 4 that $N$ must have at least 101 prime factors, and in Section 5 that $N$ must have a component greater than $10^{62}$. These results are outcomes of some improvements in the techniques used, and of factorization efforts. We discuss this in Section 6. 2.
Preliminaries Let $n$ be a natural number. Let $\sigma(n)$ denote the sum of the positive divisors of $n$, and let $\sigma_{-1}(n) = \frac{\sigma(n)}{n}$ be the abundancy of $n$. Clearly, $n$ is perfect if and only if $\sigma_{-1}(n) = 2$. We first recall some easy results on the functions $\sigma$ and $\sigma_{-1}$. If $p$ is prime, $\sigma(p^q) = \frac{p^{q+1}-1}{p-1}$, and $\sigma_{-1}(p^\infty) = \lim_{q \to +\infty} \sigma_{-1}(p^q) = \frac{p}{p-1}$. If $\gcd(a, b) = 1$, then $\sigma(ab) = \sigma(a)\sigma(b)$ and $\sigma_{-1}(ab) = \sigma_{-1}(a)\sigma_{-1}(b)$. Euler proved that if an odd perfect number $N$ exists, then it is of the form $N = p^e m^2$ where $p \equiv e \equiv 1 \pmod{4}$ and $\gcd(p, m) = 1$. The prime $p$ is said to be the special prime. (Received by the editor March 27, 2011 and, in revised form, April 14, 2011. 2010 Mathematics Subject Classification. Primary 11A25, 11A51.) Many results on odd perfect numbers are obtained using the following argument. Suppose that $N$ is an odd perfect number, and that $p$ is a prime factor of $N$. If $p^q \parallel N$ for a $q > 0$, then $\sigma(p^q) \mid 2N$. Thus if we have a prime factor $p' > 2$ of $\sigma(p^q)$, we can recurse on the factor $p'$. We make such suppositions for every admissible $q$ until we reach a contradiction (e.g. $p^q$ is greater than the limit we want to prove). Moreover, since $\sigma(p^a) \mid \sigma(p^b)$ whenever $a + 1 \mid b + 1$, it suffices to suppose that $p^q \parallel N$ for exponents $q$ such that $q + 1$ is prime. The main differences between the approaches used to obtain the theorems are the suppositions made on the hypothetical odd perfect number, the order of exploration of prime factors, and the contradictions used. 3. Size of an odd perfect number **Theorem 1.** An odd perfect number is greater than $10^{1500}$. We use factor chains as described in [1] to forbid the factors in $S = \{127, 19, 7, 11, 331, 31, 97, 61, 13, 398581, 1093, 3, 5, 307, 17\}$, in this order. These chains are constructed using *branchings*.
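The facts recalled in the preliminaries are easy to check numerically. Below is a minimal Python sketch (illustrative only; the authors' program is C++ with GMP, and the names `sigma` and `abundancy` are ours):

```python
from fractions import Fraction

def sigma(n: int) -> int:
    """Sum of the positive divisors of n (naive trial division)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def abundancy(n: int) -> Fraction:
    """sigma_{-1}(n) = sigma(n)/n; n is perfect iff this equals 2."""
    return Fraction(sigma(n), n)

# sigma is multiplicative on coprime arguments
assert sigma(9 * 13) == sigma(9) * sigma(13)
# 28 = 2^2 * 7 is perfect
assert abundancy(28) == 2
# sigma(p^a) divides sigma(p^b) whenever a+1 divides b+1 (p = 3, a = 1, b = 5)
assert sigma(3**5) % sigma(3**1) == 0
# the "abundancy > 2" contradiction: 945 = 3^3 * 5 * 7 is abundant
assert abundancy(945) > 2
```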
To branch on a prime $p$ means that we sequentially branch on all possible components $p^a$. To branch on a component $p^a$ for $p$ prime means that we suppose $p^a \parallel N$, and thus $p^a \times \sigma(p^a) \mid 2N$ since $\gcd(p^a, \sigma(p^a)) = 1$. Then, if we do not reach a contradiction at this point, we recursively branch on a prime factor of $N$ that has not yet been branched on. If there is no known other factor of $N$, we have a situation called *roadblock*, which is discussed below. Two types of the latter branching are also discussed below. In this section, we branch on the overall largest available prime factor and use the following contradictions: - The abundancy of the current number is strictly greater than 2. - The current number is greater than $10^{1500}$. When branching on a prime $p$, we have to consider various cases depending on the multiplicity of $p$ in $N$. We stop when the multiplicity $a$ of $p$ is such that $p^a > 10^{1500}$ and, except in the cases described below, we consider only the multiplicities $a$ such that $a + 1$ is prime. This is because $\sigma(p^a) \mid \sigma(p^{(a+1)t-1})$, so any contradiction obtained thanks to the factors of $\sigma(p^a)$ when supposing $p^a \parallel N$ also gives a contradiction in the case $p^{(a+1)t-1} \parallel N$. So $p^a$ is a representative for all $p^{(a+1)t-1}$, and to compute lower bounds on the abundancy or the size to test for contradictions, we suppose that the multiplicity of $p$ is exactly $a$. **By-passing roadblocks.** A *roadblock* is a situation such that there is no contradiction and no possibility to branch on a prime. This happens when we have already made suppositions for the multiplicity of all the known primes and the other numbers are composites. We use a method to circumvent roadblocks similar to the one used by Hare [4]. This method requires us to know an upper bound on the abundancy of the current number that is strictly smaller than 2. 
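The restriction to representative exponents $a$ with $a + 1$ prime can be sketched as follows (a hypothetical helper, not taken from the authors' code; `branching_exponents` is a name invented here):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (adequate for the small n used here)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def branching_exponents(p: int, bound: int) -> list:
    """Exponents a with p^a <= bound and a + 1 prime.  Since
    sigma(p^a) | sigma(p^((a+1)t - 1)), each such a represents the whole
    family of exponents (a+1)t - 1 in a standard branching on p."""
    exps, a = [], 1
    while p ** a <= bound:
        if is_prime(a + 1):
            exps.append(a)
        a += 1
    return exps
```

For instance, `branching_exponents(3, 10**4)` returns `[1, 2, 4, 6]`: a branching on 3 below $10^4$ only needs the components $3^1$, $3^2$, $3^4$, $3^6$.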
An obvious upper bound on the contribution of the component $p^a$ to the abundancy is $\sigma_{-1}(p^\infty) = \frac{p}{p-1}$, but it might not always ensure that the bound on the abundancy of the current number is strictly smaller than 2. In order to obtain good enough upper bounds on the abundancy, we distinguish between *exact branchings* and *standard branchings*. Exact branchings concern the special component $p^1$, as well as $3^2$, $3^4$, and $7^2$. Standard branchings concern everything else. In the case of an exact branching on $p^a$, we suppose that $p^a \parallel N$, we use $\sigma_{-1}(p^a)$ for the abundancy, and we use an additional contradiction, occurring when $p$ appears at least $a + 1$ times in the factors of $\prod_{i=1}^{k} \sigma(p_i^{q_i})$, where $(p_1^{q_1}, \ldots, p_k^{q_k})$ is the sequence of considered branchings. In the case of a standard branching on $p^a$, we suppose that $p^{(a+1)t-1} \parallel N$ for a $t \geq 1$, and we use $\sigma_{-1}(p^\infty) = \frac{p}{p-1}$ as an upper bound on the abundancy. Due to these exact branchings, we have to add standard branchings on $3^8$, $3^{14}$, $3^{24}$, and $7^8$ in order to cover all possible exponents for 3 and 7. Let us detail this for the base 3: we make exact branchings on $3^2$ and $3^4$, and standard branchings on $3^8$, $3^{14}$, $3^{24}$, and $3^{p-1}$ for every prime $p \geq 7$. Then the case $3^{m-1} \parallel N$ for $m$ odd is handled by $3^2$ if $m = 3$, by $3^4$ if $m = 5$, by $3^8$ if $3^2 \mid m$, by $3^{14}$ if $3 \times 5 \mid m$, by $3^{24}$ if $5^2 \mid m$, and by $3^{p-1}$ if $p \mid m$. Note that we suppose that the branching for the special prime $p^1$ is always an exact branching, since if $p^{2k+1} \parallel N$ with $k \geq 1$, then this case will be handled by the standard branching $p^{q-1}$, where $q$ is a factor of $2k + 1$. Finally, we have to consider abundancy of nonfactored composites. 
We check that the composite $C$ has no factors less than $\alpha$ (we used $\alpha = 10^8$ for our computations), thus $C$ has at most $\left\lfloor \frac{\ln(C)}{\ln(\alpha)} \right\rfloor$ different prime factors, each greater than $\alpha$. Thus the abundancy contributed by $C$ is at most $\left( \frac{\alpha}{\alpha-1} \right)^{\left\lfloor \frac{\ln(C)}{\ln(\alpha)} \right\rfloor}$. Given a roadblock $M$, we compute an upper bound $a$ on the abundancy. Our method to by-pass the roadblock only works if $a < 2$. That is why the exact branchings were suitably chosen to ensure that $a < 2$ for every roadblock. Suppose that $a < 2$ and that there is an odd perfect number $N$ divisible by $M$. Let $p$ be the smallest prime which divides $N$ and not $M$. Then $N$ has at least $t_a(p) := \left\lceil \frac{\ln \left( \frac{2}{a} \right)}{\ln \left( \frac{p}{p-1} \right)} \right\rceil$ distinct prime factors which do not divide $M$. Each of these factors has multiplicity at least 2, except for at most one (special) prime with multiplicity at least one. Thus, if $p^{2t_a(p)-1}$ is greater than $\frac{10^{1500}}{M}$, then $N$ is clearly greater than $10^{1500}$. Let $b = \max \left\{ p : p^{2t_a(p)-1} \leq \frac{10^{1500}}{M} \right\}$, which is well defined since $p \mapsto p^{2t_a(p)-1}$ is strictly increasing. To prove that there is no odd perfect number $N < 10^{1500}$ such that $M$ divides $N$, we branch on every prime factor up to $b$ to rule them out. We first branch on the primes in $S$, since we already have good factor chains for these numbers. We do not branch on a prime that divides $M$ or that is already forbidden. When applying this method, we might encounter other roadblocks, because of composite numbers or because every “produced” prime already divides $M$. So we have to apply the method recursively. **Example.** An example of by-passing two nested roadblocks is shown in Figure 1.
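The quantities $t_a(p)$ and $b$ can be computed directly. A sketch (the function names are ours, not the authors'; `target` defaults to the bound of Theorem 1):

```python
import math

def t(a: float, p: int) -> int:
    """t_a(p) = ceil(ln(2/a) / ln(p/(p-1))): how many distinct primes >= p
    are needed to raise the abundancy from the bound a up to 2."""
    return math.ceil(math.log(2 / a) / math.log(p / (p - 1)))

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def bypass_bound(a: float, M: int, target: int = 10**1500) -> int:
    """b = max{p prime : p^(2 t_a(p) - 1) <= target/M}; every prime up to b
    must then be branched on to rule it out."""
    limit = target // M
    b, p = 0, 2
    while p ** (2 * t(a, p) - 1) <= limit:  # p -> p^(2t-1) is increasing
        if is_prime(p):
            b = p
        p += 1
    return b
```

Taking for illustration $a < 1.008$ and $M = 7 \times 10^{807}$ (the values of the first roadblock in Figure 1), `bypass_bound(1.008, 7 * 10**807)` yields $b = 211$, i.e. every prime below 220 has to be branched on.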
We first try to rule out 127 as a factor and encounter as a first roadblock $\sigma(127^{192})$, which is a composite number with no known factors and no factors less than $10^8$. Here, $M = 127^{192} \times \sigma(127^{192}) > 7 \times 10^{807}$. This composite number has at most $\left\lfloor \frac{\ln(\sigma(127^{192}))}{\ln(10^8)} \right\rfloor = 50$ factors, which contribute to the abundancy at most $C = \left( \frac{10^8}{10^8 - 1} \right)^{50} < 1 + 6 \times 10^{-7}$. As an upper bound on the abundancy, we thus have $a = \sigma_{-1}(127^\infty) \times (1 + 6 \times 10^{-7}) < 1.008$. We try every number until we get $t_a(220) = 151$ and $220^{301} > 10^{705} > \frac{10^{1500}}{M}$. So, to get around this roadblock, we have to branch on every prime $p < 220$ except 127. We start with 19, which is the next number in $S$, and then we get stuck with another roadblock (“Roadblock 2”).

\[ 127^{192} \implies \sigma(127^{192}) \quad \text{(Roadblock 1)} \]
\[ 19^2 \implies 3 \times 127 \]
\[ 3^2 \implies 13 \]
\[ 13^1 \implies 2 \times 7 \]
\[ 7^2 \implies 3 \times 19 \quad \text{(Roadblock 2)} \]

**Figure 1.** Example of two nested roadblock circumventions.

Here, $M' = 3^2 \times 7^2 \times 13^1 \times 19^2 \times 127^{192} \times \sigma(127^{192}) > 10^{814}$. As an upper bound on the abundancy, we have $a' = \sigma_{-1}(3^2 \times 7^2 \times 13^1 \times 19^\infty \times 127^\infty) \times C$. We thus have an upper bound $a' < 1.92522$. We try every number until we get $t_{a'}(2625) = 101$ and $2625^{201} > 10^{687} > \frac{10^{1500}}{M'}$. So, to get around this roadblock, we have to branch on every prime $p$ such that $p < 2625$, except 3, 7, 13, 19 and 127. We continue to branch on the other primes in $S$, and then on all other primes smaller than 2625. This last example shows that the exact branchings on $3^2$ and $7^2$ are necessary, since $\sigma_{-1}(3^\infty \times 7^\infty \times 13^1 \times 19^\infty \times 127^\infty) > 2$.
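The numerical claims in this example can be reproduced directly (a sketch; `t` implements $t_a(p)$ as defined above):

```python
from fractions import Fraction
import math

def t(a: float, p: int) -> int:
    """t_a(p) = ceil(ln(2/a) / ln(p/(p-1)))."""
    return math.ceil(math.log(2 / a) / math.log(p / (p - 1)))

# Roadblock 1: with a < 1.008 we need 151 new primes above 220,
# and 220^301 overshoots 10^705.
assert t(1.008, 220) == 151 and 220**301 > 10**705

# Roadblock 2: with a' < 1.92522 we need 101 new primes above 2625.
assert t(1.92522, 2625) == 101 and 2625**201 > 10**687

# Why 3^2 and 7^2 must be exact branchings: replacing them by the upper
# bound sigma_{-1}(p^infinity) = p/(p-1) would push the bound past 2.
inf = lambda p: Fraction(p, p - 1)
assert inf(3) * inf(7) * Fraction(14, 13) * inf(19) * inf(127) > 2
```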
Notice also the exact branching on the special prime 13. **When $N$ has no factors in $S$.** Finally, we have to show that if $N$ has no divisor in $S$, then $N > 10^{1500}$. We use the following argument, which is an improved version of the argument in [1]. For a prime $p$ and an integer $a$, we define the *efficiency* $f(p,a)$ of the component $p^a$ as $f(p,a) = \frac{\ln(\sigma_{-1}(p^a))}{\ln(p^a)}$. The efficiency is the ratio between the contribution in abundancy and the contribution in size of the component $p^a$. Both contributions are multiplicative and increasing, which explains the logarithms. **Remark.** - $a < b \implies f(p,a) > f(p,b)$. - $p < q \implies f(p,a) > f(q,a)$. Notice that the best way to reach abundancy 2 while keeping $N$ small is to take the components with highest efficiency $f$: - For each allowed prime $p$, we find the smallest exponent $a$ such that $\sigma(p^a)$ is divisible neither by 4 nor by a factor in $S$. Example: Consider $p = 23$. $\sigma(23^1)$, $\sigma(23^2)$, $\sigma(23^3)$ are divisible by 4, 7, and 4, respectively. So the exponent of 23 is at least 4. - We sort these components $p^a$ by decreasing efficiency $f$ to get an ordering $p_1, p_2, p_3, \ldots$ such that $f(p_1, a_1) \geq f(p_2, a_2) \geq f(p_3, a_3) \geq \ldots$. - The product $\Pi_{i=1}^{200} \frac{p_i}{p_i - 1} = 1.99785 \ldots$ is smaller than 2, whereas the product $\Pi_{i=1}^{200} p_i^{a_i}$ is greater than $10^{1735}$. 4. Total number of prime factors of an odd perfect number Hare proved that an odd perfect number has at least 75 prime factors (counting multiplicities) [4]. **Theorem 2.** The total number of prime factors of an odd perfect number is at least 101. We use the following contradictions: - The abundancy of the current number is strictly greater than 2. - The current number has at least 101 prime factors.
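Returning to the final step of the proof of Theorem 1, the smallest-allowed-exponent computation and the efficiency ordering can be sketched as follows (the names are ours; an analogous computation with $f'$ applies below):

```python
import math

S = {127, 19, 7, 11, 331, 31, 97, 61, 13, 398581, 1093, 3, 5, 307, 17}

def sigma_pp(p: int, a: int) -> int:
    """sigma(p^a) = (p^(a+1) - 1) / (p - 1)."""
    return (p ** (a + 1) - 1) // (p - 1)

def smallest_allowed_exponent(p: int) -> int:
    """Smallest a with sigma(p^a) divisible neither by 4 nor by a prime of S."""
    a = 1
    while True:
        s = sigma_pp(p, a)
        if s % 4 != 0 and all(s % q for q in S):
            return a
        a += 1

def efficiency(p: int, a: int) -> float:
    """f(p, a) = ln(sigma_{-1}(p^a)) / ln(p^a)."""
    return math.log(sigma_pp(p, a) / p ** a) / (a * math.log(p))

# The worked example: 23 cannot appear with exponent below 4.
assert smallest_allowed_exponent(23) == 4
# Efficiency decreases in the exponent and in the prime.
assert efficiency(23, 1) > efficiency(23, 2)
assert efficiency(23, 1) > efficiency(29, 1)
```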
We forbid the factors in $S' = \{3, 5, 7, 11\}$, in this order. We branch on the smallest available prime. We still use a combination of exact branchings (for $p^1$, $3^2$, and $3^4$) and standard branchings, as in the previous section. **By-passing roadblocks.** Given a roadblock $M$ with at least $g$ not necessarily distinct prime factors, we compute an upper bound $a$ on the abundancy, as described in the previous section. Suppose that $a < 2$ and that there is an odd perfect number $N$ divisible by $M$. Let $p$ be the smallest prime which divides $N$ and not $M$. Thus $N$ has at least $t_a(p)$ distinct prime factors which do not divide $M$. Each of these factors has multiplicity at least 2, except for at most one (special) prime with multiplicity at least one. Thus, if $2t_a(p) - 1$ is greater than $101 - g$, then $N$ has more than 101 not necessarily distinct prime factors, and we have a contradiction. For the lower bound $g$ on the number of not necessarily distinct prime factors, we compute the sum $g_p$ of the exponents of the primes that have been branched on, and we add four times the number $g_c$ of composites. Since we have checked that a composite is not a perfect power, it must be divisible by two different primes, each having multiplicity at least two, except for at most one (the special prime). So we take $g = g_p + 4g_c$ or $g = g_p + 4g_c - 1$, depending on whether we have already branched on the special prime. By the above, we can compute an upper bound on the smallest prime dividing $N$ but not $M$. So, to prove that there is no odd perfect number with fewer than 101 not necessarily distinct prime factors such that $M$ divides $N$, we branch on every prime factor up to this bound to rule them out. We do not branch on a prime that divides $M$ or that is already forbidden. We have to resort to exact branchings as in the previous section, but this time only on $3^2$ and $3^4$.
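A sketch of the counting used in this section (function and parameter names are ours; the direction of the $-1$ adjustment is our reading of the text):

```python
import math

def t(a: float, p: int) -> int:
    """t_a(p), as in Section 3."""
    return math.ceil(math.log(2 / a) / math.log(p / (p - 1)))

def factor_count_bound(exponents, n_composites, special_branched):
    """g = g_p + 4 g_c: sum of branched exponents, plus 4 per unfactored
    composite (two distinct primes, each squared), minus 1 when the special
    prime may still be hiding in a composite with multiplicity one."""
    g = sum(exponents) + 4 * n_composites
    return g if special_branched else g - 1

def roadblock_contradiction(a, p, g, total=101):
    """True when the >= 2 t_a(p) - 1 further factors exceed the budget."""
    return 2 * t(a, p) - 1 > total - g
```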
**When $N$ has no factors in $S'$.** We use a suitable notion of efficiency defined as $f'(p, a) = \frac{\ln(\sigma_{-1}(p^a))}{a}$. It is the ratio between the multiplicative contribution in abundancy and the additive contribution to the number of primes of the component $p^a$. **Remark.** - $a < b \implies f'(p, a) > f'(p, b)$. - $p < q \implies f'(p, a) > f'(q, a)$. Notice that the best way to reach abundancy 2 with the fewest primes is to take components with highest efficiency $f'$: - For each allowed prime $p$, we find the smallest exponent $a$ such that $\sigma(p^a)$ is divisible neither by 4 nor by a factor in $S'$. - We sort these components $p^a$ by decreasing efficiency $f'$ to get an ordering $p_1, p_2, p_3, \ldots$ such that $f'(p_1, a_1) \geq f'(p_2, a_2) \geq f'(p_3, a_3) \geq \ldots$. - The product $\Pi_{i=1}^{49} \frac{p_i}{p_i-1} = 1.99601\ldots$ is smaller than 2, whereas $\Sigma_{i=1}^{49} a_i = 118$. 5. Largest component of an odd perfect number Cohen [2] proved in 1987 that an odd perfect number has a component greater than $10^{20}$. **Theorem 3.** The largest component of an odd perfect number is greater than $10^{62}$. We use the same algorithm as in the previous section to forbid every prime less than $10^8$, using the following contradictions: - The abundancy of the current number is strictly greater than 2. - The current number has a component greater than $10^{62}$. Since we want to quickly reach a large component, we branch on the largest available prime. There is no unfactored composite here, and thus no roadblock, since every number involved is less than $10^{62}$ and has thus been easily factored. Suppose now that $N$ is an odd perfect number with no prime factor less than $10^8$ and no component $p^e > 10^{62}$. First, the exponent $e$ of any prime factor $p$ is less than 8, since otherwise $p^e > (10^8)^8 > 10^{62}$. The exponent of the special prime $p_1$ is thus 1, because $3 \mid \sigma(p^5)$ and $3 \nmid N$.
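The two divisibility facts just invoked are quick to spot-check (illustrative, not a proof):

```python
def sigma_pp(p: int, a: int) -> int:
    """sigma(p^a) = 1 + p + ... + p^a."""
    return (p ** (a + 1) - 1) // (p - 1)

# 3 | sigma(p^5) whenever 3 does not divide p: the six terms sum to
# 6 = 0 (mod 3) if p = 1 (mod 3), and to 9 = 0 (mod 3) if p = 2 (mod 3).
assert all(sigma_pp(p, 5) % 3 == 0 for p in range(2, 10**4) if p % 3)

# A prime factor p > 10^8 with exponent e >= 8 already breaks the bound:
assert (10**8) ** 8 > 10**62
```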
So $N$ has a prime decomposition $N = p_1 \prod_{i=1}^{n_2} p_{i,2}^2 \prod_{i=1}^{n_4} p_{i,4}^4 \prod_{i=1}^{n_6} p_{i,6}^6$. Let $\pi(x)$ denote the number of primes less than or equal to $x$. In the following, we will use these known values of $\pi(x)$ [8]: - $\pi(10^8) = 5761455$, - $\pi(3 \times 10^{10}) = 1300005926$, - $\pi(32 \times 10^{14}) = 92295556538011$, - $\pi(98 \times 10^{14}) = 273808176380030$. It is well known (see [6]) that for primes $q$, $r$, and $s$ such that $q \mid \sigma(r^{s-1})$, either $q = s$ or $q \equiv 1 \mod s$. So if $p_{j,e'} \mid \sigma(p_{i,e}^e)$, then $p_{j,e'} \equiv 1 \mod (e+1)$, since $(e+1) \nmid N$. We thus have $e' \neq e$, since otherwise $e + 1$ would divide $\sigma(p_{j,e'}^{e'})$ (because $p_{j,e'} \equiv 1 \pmod{e+1}$), but $e + 1$ does not divide $N$. Moreover, $\sigma(p_{i,e}^e)$ cannot be prime unless it is the special prime $p_1$. Suppose to the contrary that $\sigma(p_{i,e}^e) = p_{j,e'}$. Then $p_{j,e'}^{e'}$ is a component of $N$. Since $e' \neq e$, we have that $ee' \geq 8$, so that $p_{j,e'}^{e'} = (\sigma(p_{i,e}^e))^{e'} > (p_{i,e}^e)^{e'} = (p_{i,e})^{ee'} > (10^8)^8 > 10^{62}$. So each $\sigma(p_{i,e}^e)$ produces at least two prime factors, or the special prime. Let $n_{2,2}$ be the number of primes $p_{i,2}$ such that $\sigma(p_{i,2}^2) = q \times r$ where $q < r$, $q$ and $r$ primes. Let $n_{2,3}$ be the number of primes $p_{i,2}$ such that $\sigma(p_{i,2}^2)$ factors into at least three not necessarily distinct primes. By the above, we have $$n_2 \leq n_{2,2} + n_{2,3} + 1.$$ (1) By counting the number of primes produced by the factors $\sigma(p_{i,2}^2)$, we obtain $$2n_{2,2} + 3n_{2,3} \leq 4n_4 + 6n_6 + 1.$$ (2) For $e \in \{4, 6\}$, we have $p_{i,e} < 32 \times 10^{14}$, since otherwise $p_{i,e}^e > (32 \times 10^{14})^4 > 10^{62}$. Suppose that a prime $p_{i,2}$ is such that $\sigma(p_{i,2}^2) = q \times r$ where $q < r$, $q$ and $r$ primes.
Then we have that \( r > p_{i,2} \), and by the previous discussion, either \( r = p_1 \) or \( r = p_{i',e} \) for \( e \in \{4,6\} \). This implies that at least \((n_{2,2} - 1)\) primes \( p_{i,2} \) are smaller than the largest prime \( p_{i,e} \) for \( e \in \{4,6\} \). So, \( n_{2,2} - 1 + n_4 + n_6 \leq \pi(32 \times 10^{14}) - \pi(10^8) = 92295550776556 \), which gives \[ n_{2,2} + n_4 + n_6 \leq 92295550776557. \] (3) Similarly, \( p_{i,6} < 3 \times 10^{10} \), since otherwise \( p_{i,6}^6 > 10^{62} \). So, \( n_6 \leq \pi(3 \times 10^{10}) - \pi(10^8) \), which gives \[ n_6 \leq 1294244471. \] (4) Now, we consider an upper bound on the abundancy of primes greater than \( 10^8 \). We use equation (3.29) in [7], \[ \prod_{p < x \atop p \text{ prime}} \frac{p}{p-1} < e^\gamma \ln(x) \left(1 + \frac{1}{2 \ln^2(x)}\right) \] where \( \gamma = 0.5772156649\ldots \) is Euler’s constant. We compute that \[ \prod_{p < 10^8 \atop p \text{ prime}} \frac{p}{p-1} > c_1 = 32.80869860873870116 \] and we obtain \[ \prod_{10^8 < p < 98 \times 10^{14} \atop p \text{ prime}} \frac{p}{p-1} < e^\gamma \ln(98 \times 10^{14}) \left(1 + \frac{1}{2 \ln^2(98 \times 10^{14})}\right) / c_1 < 2. \] By the above, we have \( 1 + n_2 + n_4 + n_6 > \pi(98 \times 10^{14}) - \pi(10^8) = 273808170618575 \), which gives \[ 273808170618575 \leq n_2 + n_4 + n_6. \] (5) The combination \( 3 \times (1) + 1 \times (2) + 7 \times (3) + 2 \times (4) + 3 \times (5) \) gives \( 6n_{2,2} + 175353067930880 \leq 0 \), a contradiction. 6. Improvements over previous methods This paper provides a unified framework to obtain lower bounds on three parameters of an odd perfect number: the OPN itself, the total number of prime factors, and the largest component. These parameters are well-suited because a bound on the parameter implies an obvious and reasonable bound on the exponent of a prime factor of an OPN.
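Returning to the system of inequalities that closes the proof of Theorem 3: the stated linear combination can be verified mechanically. In the sketch below each inequality is stored as (coefficients of $(n_2, n_{2,2}, n_{2,3}, n_4, n_6)$, constant), with the sense $\text{coeffs} \cdot x \le \text{const}$ and inequality (5) negated to match:

```python
# (1) n2 <= n22 + n23 + 1            (2) 2 n22 + 3 n23 <= 4 n4 + 6 n6 + 1
# (3) n22 + n4 + n6 <= 92295550776557    (4) n6 <= 1294244471
# (5) 273808170618575 <= n2 + n4 + n6    (stored negated)
ineqs = [
    ((1, -1, -1, 0, 0), 1),
    ((0, 2, 3, -4, -6), 1),
    ((0, 1, 0, 1, 1), 92295550776557),
    ((0, 0, 0, 0, 1), 1294244471),
    ((-1, 0, 0, -1, -1), -273808170618575),
]
mult = (3, 1, 7, 2, 3)  # the combination 3*(1) + 1*(2) + 7*(3) + 2*(4) + 3*(5)
coeffs = [sum(m * c[i] for m, (c, _) in zip(mult, ineqs)) for i in range(5)]
const = sum(m * k for m, (_, k) in zip(mult, ineqs))
# Every variable except n22 cancels and the combination reads
# 6 n22 <= -175353067930880, impossible for n22 >= 0.
assert coeffs == [0, 6, 0, 0, 0] and const == -175353067930880
```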
That is not the case for other parameters of interest, such as the largest prime factor or the number of distinct prime factors. The most useful new tool is the way to get around roadblocks in the proof of Theorem 1. The argument to obtain a bound on the smallest not-yet-considered prime is an adaptation of the one in [4]. In both cases it implies a bound \( b \), an exponent \( t \), an inequality related to the abundancy, and an inequality related to the corresponding parameter. The argument is more sophisticated in the context of a bound on the size rather than on the total number of primes, because both \( b \) and \( t \) are involved in both inequalities. Brent et al. [1] used standard branchings and Hare [4] used exact branchings. We introduce the use of a combination of standard and exact branchings to reduce the size of the proof tree. Standard branchings are economical, but exact branchings are sometimes unavoidable when we have to by-pass a roadblock.

| Theorem | # Branch. | # Branch. (circ.) | Approx. time |
|---------|-----------|------------------|--------------|
| 1 | 22 514 255 | 10 406 935 | 12 hours |
| 2 | 447 019 005 | 444 022 | 93 hours |
| 3 | 6 574 758 | 0 | 30 minutes |

**Figure 2.** Total number of branchings, number of branchings in roadblock circumvention, and approximate time.

In the final phase of the proof of Theorems 1 and 2, we have to argue that an odd perfect number with no factors in a set of small forbidden primes necessarily violates the corresponding bound. When the bound increases, the set of forbidden primes must get larger. Suitable notions of efficiency of a component are introduced in order to restrain the growth of this set. They allow a better use of the fact that some primes are forbidden, by considering the exponent of the remaining potential prime factors. Finally, we give a proof of Theorem 3 using a system of inequalities. The idea behind it is as follows.
If all primes up to $B$ are forbidden, then the largest prime factor must be at least $B^2$ in order to reach abundancy 2. Then we use various arguments and inequalities in order to show that a not too small proportion $C$ of the prime factors have exponent at least 4. Then we conclude that a component of size at least $(C \times B^2)^4 = C' \times B^8$ exists. We would like to point out the importance of separating the search for factors, with efficient dedicated software, from the generation of the proof tree. In particular, this accounts for most of the improvement in Theorem 2. Credit and acknowledgements The program was written in C++ and uses GMP. The program and the factors are available at http://www.lri.fr/~ochem/opn/. We present in Figure 2 the number of branchings on prime factors (overall, and those needed to circumvent roadblocks), and the time needed on an AMD Phenom(tm) II X4 945 to process the tree of suppositions for each theorem. Of course, this does not take into account the time needed to find the factors. Various software and algorithms were used for the factorizations:
- GMP-ECM for P-1, P+1 and ECM,
- msieve and yafu for MPQS,
- msieve combined with GGNFS for NFS (both general and special).

We thank the people who contributed to the factorizations at the Mersenne forum, yoyo@home, RSALS, and elsewhere, or provided helpful comments on preliminary versions of the paper. In particular, William Lipp, who is obtaining and gathering useful factorizations via his website http://www.odperfect.org; Tom Womack, who obtained the factorizations of $\sigma(191^{102})$ and $\sigma(2801^{78})$; Warut Roonguthai, Carlos Pinho, Chris K., Serge Batalov, Pace Nielsen, Lionel Debroux, Greg Childers, Alexander Kruppa, Jeff Gilchrist, and Rich Dickerson.
Experiments presented in this paper were partially carried out using the PlaFRIM experimental testbed, being developed under the INRIA PlaFRIM development action with support from LABRI and IMB and other entities: Conseil Régional d’Aquitaine, FeDER, Université de Bordeaux and CNRS (see https://plafrim.bordeaux.inria.fr/).

References

[1] R.P. Brent, G.L. Cohen, H.J.J. te Riele. Improved techniques for lower bounds for odd perfect numbers, *Math. Comp.* **57** (1991), no. 196, pp 857–868. MR1094940 (92c:11004)
[2] G.L. Cohen. On the largest component of an odd perfect number, *J. Austral. Math. Soc. Ser. A* **42** (1987), pp 280–286. MR869751 (87m:11005)
[3] T. Goto, Y. Ohno. Odd perfect numbers have a prime factor exceeding $10^8$, *Math. Comp.* **77** (2008), no. 263, pp 1859–1868. MR2398799 (2009b:11008)
[4] K.G. Hare. New techniques for bounds on the total number of prime factors of an odd perfect number, *Math. Comp.* **76** (2007), no. 260, pp 2241–2248. MR2336293 (2008g:11006)
[5] P.P. Nielsen. Odd perfect numbers have at least nine different prime factors, *Math. Comp.* **76** (2007), no. 260, pp 2109–2126. MR2336286 (2008g:11153)
[6] T. Nagell. Introduction to Number Theory, *John Wiley & Sons Inc.*, New York, 1951. MR0043111 (13:207b)
[7] J.B. Rosser, L. Schoenfeld. Approximate formulas for some functions of prime numbers, *Illinois J. Math.* **6** (1962), pp 64–94. MR0137689 (25:1139)
[8] http://www.trnicely.net/pi/pix_0000.htm

LRI, CNRS, Bât 490 Université Paris-Sud 11, 91405 Orsay cedex, France *E-mail address*: firstname.lastname@example.org

CNRS, Lab J.V. Poncelet, Moscow, Russia. LaBRI, 351 cours de la Libération, 33405 Talence cedex, France *E-mail address*: email@example.com
Graph Plotting and Data Analysis using Mathematica

The purpose of these notes is to show how *Mathematica* can be used to analyze laboratory data. The notes are not complete, since there are many commands that are not discussed here. For further information you should consult the online Help menu or the *Mathematica* Book. It is good practice to reset everything before you begin a Mathematica session:
```
In[1]:= Clear["Global`*"]
```
**Data Lists** *Mathematica* has some powerful functions for manipulating lists of data. Consider a list of numbers (which we’ll call `list1`):
```
In[2]:= list1 = {26, 13, 4, 0.3, 3, -2, 0.08, 19.3}
Out[2]= {26, 13, 4, 0.3, 3, -2, 0.08, 19.3}
```
You can add, subtract, multiply or divide by a constant very easily. For example, we can create a new `list2` by adding 3 to each term:
```
In[3]:= list2 = list1 + 3
Out[3]= {29, 16, 7, 3.3, 6, 1, 3.08, 22.3}
```
If you want to multiply each term in `list1` by $1/4\pi \epsilon_0$, first define $\epsilon_0$ and then divide each element in `list1` by $4\pi \epsilon_0$:
```
In[4]:= \[Epsilon]0 = 8.85*10^-12;

In[5]:= list3 = list1/(4 Pi \[Epsilon]0)
Out[5]= {2.33787*10^11, 1.16893*10^11, 3.59672*10^10, 2.69754*10^9,
         2.69754*10^10, -1.79836*10^10, 7.19344*10^8, 1.73542*10^11}
```
Other operations can be applied similarly. For example, you can obtain the natural logarithm of each term with the `Log` function. Note that the name of the function starts with a capital letter and that the argument appears in [square brackets]:
```
In[6]:= Log[list2]
Out[6]= {Log[29], Log[16], Log[7], 1.19392, Log[6], 0, 1.12493, 3.10459}
```
For exact arguments, *Mathematica* gives exact values: the exact value of the natural logarithm of 29 is simply Log[29].
An approximate numerical value is obtained using `N` in one of the following two forms:
```
In[7]:= N[Log[29]]
Out[7]= 3.3673
```
or
```
In[8]:= N[Log[29], 5]
Out[8]= 3.3673
```
You can display a result to any precision you like. Here’s $\pi$.
```mathematica
In[9]:= Pi (* exact *)
Out[9]= Pi

In[10]:= N[Pi] (* approximate *)
Out[10]= 3.14159

In[11]:= N[Pi, 100] (* 100 significant digits *)
Out[11]= 3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
```
**Other operations on lists** To add together all the elements in `list1`:
```mathematica
In[12]:= Plus @@ list1
Out[12]= 63.68
```
To multiply together all the elements in `list1`:
```mathematica
In[13]:= Times @@ list1
Out[13]= -3757.48
```
**Editing Lists of Data** You can select points from a list of data using the commands **Drop**, **Take** and **Part**. Drop the first 2 points from `list1`:
```mathematica
In[14]:= Drop[list1, 2]
Out[14]= {4, 0.3, 3, -2, 0.08, 19.3}
```
Drop the last 2 points:
```mathematica
In[15]:= Drop[list1, -2]
Out[15]= {26, 13, 4, 0.3, 3, -2}
```
Drop the second through fifth points:
```mathematica
In[16]:= Drop[list1, {2, 5}]
Out[16]= {26, -2, 0.08, 19.3}
```
Drop alternate points between the first and eighth points:
```mathematica
In[17]:= Drop[list1, {1, 8, 2}]
Out[17]= {13, 0.3, -2, 19.3}
```
**Take** is used similarly. For example, to keep the first two points:
```
In[18]:= Take[list1, 2]
Out[18]= {26, 13}
```
**Part** also lets you extract one or more data points from the list. It is used in almost the same way as **Take** and **Drop**.
These three statements do the same thing:
```
In[19]:= Drop[list1, -3];
         Take[list1, 5];
         Part[list1, {1, 2, 3, 4, 5}];
```
Note also that
```
In[20]:= Part[list1, 3];
```
does the same as
```
In[21]:= list1[[3]];
```
### Reading data from a laboratory experiment
Most of the data that you obtain in the laboratory will consist of pairs of (x,y) values, for example:
```
In[22]:= data = {{0, 6.62}, {1, 6.73}, {2, 6.86}, {3, 6.98}, {4, 7.03}};
```
One problem with this method of data entry is that it becomes laborious to type many curly brackets and commas, as well as increasing the possibility of making mistakes. An alternate method is to first create a data file using a text editor. A file which consists of two columns of x and y values might look like this, with a space between each column of numbers:
```
.5 8.1
1 9.2
1.5 10.5
2 13.1
2.5 15.4
3 18
3.5 20.4
4 22.9
4.5 24.5
5 26.3
```
Save the file under a meaningful name, such as “labdata.dat”. The “.dat” file extension tells you that this is a data file, as opposed to any other kind of file, like text (.txt), a picture (.jpg, .gif, .bmp), or a program (.exe). There are several methods for telling *Mathematica* how to read a set of data. The simplest of these is probably the **Import** command to read a data file. If the file is not already in your default working directory, you will need to use **SetDirectory** to make sure that *Mathematica* reads the file from the correct directory. For example (the exact syntax will depend on your operating system - Windows, Macintosh or Linux/Unix; note that backslashes must be doubled inside a *Mathematica* string):
```mathematica
In[23]:= SetDirectory["c:\\win98\\desktop"];
```
Let’s read in the data file “labdata.dat”.
```mathematica
In[24]:= labdata = Import["labdata.dat"]
Out[24]= {{0.5, 8.1}, {1, 9.2}, {1.5, 10.5}, {2, 13.1}, {2.5, 15.4},
          {3, 18}, {3.5, 20.4}, {4, 22.9}, {4.5, 24.5}, {5, 26.3}}
```
You can also use `ReadList`. This syntax tells *Mathematica* to read two columns of data. Use whichever form you like.
```mathematica
In[25]:= data = ReadList["labdata.dat", {Number, Number}]
Out[25]= {{0.5, 8.1}, {1, 9.2}, {1.5, 10.5}, {2, 13.1}, {2.5, 15.4},
          {3, 18}, {3.5, 20.4}, {4, 22.9}, {4.5, 24.5}, {5, 26.3}}
```
See for yourself what happens when you use only one `Number` or omit the `Number` specifications completely.
### Simple Graphs, Fit and Regression
Plot the imported data:
```mathematica
In[26]:= rawdata = ListPlot[labdata]
Out[26]= -Graphics-
```
`ListPlot` is probably the most convenient method for displaying raw data. You can add extra parameters if you like. Use whichever is most appropriate for your situation. For example:
```mathematica
In[27]:= ListPlot[labdata, PlotStyle -> PointSize[0.02]]
```
Fit these points to a straight line:
```mathematica
In[28]:= result = Fit[labdata, {1, x}, x]
Out[28]= 4.92667 + 4.33212 x
```
Obtain the line of best fit without plotting it (that's what `DisplayFunction -> Identity` does):
```mathematica
In[31]:= bestline = Plot[result, {x, 0, 5}, DisplayFunction -> Identity]
```
Plot the data and line of best fit on the same axes. Add a title and axis labels:
```mathematica
In[32]:= Show[rawdata, bestline, AxesLabel -> {"x values", "y values"},
          PlotLabel -> "Mathematica Graph"]
```
**A Shortcut** The above examples have shown how to draw a graph of your laboratory data using the sequence of commands: `ListPlot`, `Fit` and `Show`. You can combine commands to produce a plot from a single line of input:
```mathematica
In[33]:= Plot[Fit[labdata, {1, x}, x], {x, 0, 5},
          Epilog -> {PointSize[0.02], Map[Point, labdata]}]
```
For a more detailed statistical analysis, including the errors in the slope and intercept, use the Regress command. You will need to load the LinearRegression package first.
```
In[34]:= <<Statistics`LinearRegression`

In[35]:= Regress[labdata, {1, x}, x]
Out[35]= {ParameterTable ->
              Estimate  SE        TStat    PValue
          1   4.92667   0.407664  12.0851  2.03141*10^-6
          x   4.33212   0.131402  32.9685  7.81561*10^-10,
          RSquared -> 0.992694, AdjustedRSquared -> 0.99178,
          EstimatedVariance -> 0.356121,
          ANOVATable ->
                  DF  SumOfSq  MeanSq    FRatio   PValue
          Model   1   387.075  387.075   1086.92  7.81561*10^-10
          Error   8   2.84897  0.356121
          Total   9   389.924}
```
You don’t need to display all these numbers if you don’t want to. To display the parameters and their errors only, use
```
In[36]:= Regress[labdata, {1, x}, x, RegressionReport -> ParameterCITable]
Out[36]= {ParameterCITable ->
              Estimate  SE        CI
          1   4.92667   0.407664  {3.98659, 5.86674}
          x   4.33212   0.131402  {4.02911, 4.63513}}
```
You can fit data to any polynomial by including as many terms as you need inside the curly brackets. Thus to fit data to a quadratic, type
```
In[37]:= Fit[labdata, {1, x, x^2}, x]
Out[37]= 5.45167 + 3.80712 x + 0.0954545 x^2
```
The use of Fit and Regress is not limited to polynomials. You can fit data to any linear combination of the parameters that you specify, such as
```
In[38]:= Fit[labdata, {1, Sin[x], Cos[x]}, x]
Out[38]= 16.9071 - 2.70799 Cos[x] - 7.28491 Sin[x]
```
**Removing erroneous data points** Before using **Fit** or **Regress**, you need to be sure that the data is correct. Look at the data below:
```mathematica
In[39]:= listr = {{0.5, 8.1}, {1, 9.2}, {1.5, 10.5}, {2, 23.1}, {2.5, 15.4},
          {3, 18}, {3.5, 20.4}, {4, 22.9}, {4.5, 24.5}, {5, 26.3}};
```
As usual, you can get a rough idea of what the graph looks like using **ListPlot**. To make it easier to see, we’ll increase the size of the points using **PointSize**.
```mathematica
In[40]:= firstplot = ListPlot[listr, PlotStyle -> PointSize[0.02]]
```
If you attempt to fit this data to a straight line, you will get a meaningless result because the fourth data point lies a long way from the line of best fit.
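As an aside, the Estimate and SE columns of the Regress output above can be checked by hand from the usual least-squares formulas. A minimal sketch in Python (included here purely as an independent cross-check; it is not part of *Mathematica*):

```python
import math

# labdata from the notes: (x, y) pairs
labdata = [(0.5, 8.1), (1, 9.2), (1.5, 10.5), (2, 13.1), (2.5, 15.4),
           (3, 18), (3.5, 20.4), (4, 22.9), (4.5, 24.5), (5, 26.3)]

n = len(labdata)
xbar = sum(x for x, _ in labdata) / n
ybar = sum(y for _, y in labdata) / n
sxx = sum((x - xbar) ** 2 for x, _ in labdata)
sxy = sum((x - xbar) * (y - ybar) for x, y in labdata)

slope = sxy / sxx                  # "x" Estimate
intercept = ybar - slope * xbar    # "1" Estimate

# residual variance with n - 2 degrees of freedom (EstimatedVariance)
sse = sum((y - intercept - slope * x) ** 2 for x, y in labdata)
s2 = sse / (n - 2)

se_slope = math.sqrt(s2 / sxx)
se_intercept = math.sqrt(s2 * (1 / n + xbar ** 2 / sxx))

print(intercept, se_intercept)  # ~4.92667, ~0.407664
print(slope, se_slope)          # ~4.33212, ~0.131402
```

The standard error of the slope is $s/\sqrt{S_{xx}}$ and that of the intercept is $s\sqrt{1/n + \bar{x}^2/S_{xx}}$, where $s^2$ is the residual variance; these reproduce the SE column of the ParameterTable.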
You can remove this point from the fit using **Drop**:
```mathematica
In[41]:= dropdata = Drop[listr, {4}]
Out[41]= {{0.5, 8.1}, {1, 9.2}, {1.5, 10.5}, {2.5, 15.4}, {3, 18},
          {3.5, 20.4}, {4, 22.9}, {4.5, 24.5}, {5, 26.3}}
```
Now you can fit the data to a straight line:
```mathematica
In[42]:= corrfit = Fit[dropdata, {1, x}, x]
Out[42]= 5.03917 + 4.31167 x
```
Plot the data points (including the wrong one) and the corrected line of best fit:
```mathematica
In[43]:= corrplot = Plot[corrfit, {x, 0, 5}, DisplayFunction -> Identity]

In[44]:= g = Show[firstplot, corrplot]
```
Obviously, if this were experimental data, you should go back and check the suspicious point before continuing. Consider this data:
```mathematica
In[45]:= twoslpes = {{0.5, 1.3}, {1, 2.14}, {1.5, 2.65}, {2, 3.5}, {2.5, 4},
          {3, 4.7}, {3.5, 6.35}, {4, 8}, {4.5, 9.25}, {5, 11.01}};
```
Now plot it:
```mathematica
In[46]:= rawdat = ListPlot[twoslpes]
```
This data does not follow a single straight line because the slope changes at larger values of x. We’ll use Take and Drop to draw the lines of best fit for low and high x. First obtain the line of best fit for the low range data:
```mathematica
In[47]:= lofit = Fit[Take[twoslpes, 5], {1, x}, x]
Out[47]= 0.69 + 1.352 x
```
Now generate the plot of the line of best fit (but don’t display it yet):
```mathematica
In[48]:= loline = Plot[lofit, {x, 0, 3}, DisplayFunction -> Identity]
```
Similarly, for the high range data. (You can also combine commands):
```mathematica
In[49]:= hiline = Plot[Fit[Drop[twoslpes, 5], {1, x}, x], {x, 2, 5},
          DisplayFunction -> Identity]

In[50]:= Show[rawdat, loline, hiline]
```
**Example: Object in Free-Fall** Consider an object falling freely under gravity, where we measure the distance fallen at one-second intervals:
```mathematica
In[51]:= d = {{1, 5.}, {2, 20.}, {3, 42.}, {4, 80.}, {5, 110.}, {6, 170.},
          {7, 246.}, {8, 310.}, {9, 400.}, {10, 475.}};
```
Plot it:
```mathematica
In[52]:= falldata = ListPlot[d, PlotStyle -> PointSize[0.02]]
```
Use **Fit** to obtain the coefficient of $x^2$ only.
It should be numerically equal to $g/2$ (i.e., 4.9 m/s$^2$):
```
In[53]:= Fit[d, {x^2}, x]
Out[53]= 4.83192 x^2
```
To obtain the error in the acceleration we use **Regress**. Note how Regress includes the constant term by default.
```
In[54]:= Regress[d, {x^2}, x]
Out[54]= {ParameterTable ->
                Estimate   SE       TStat      PValue
          1     -0.551648  3.32996  -0.165662  0.872533
          x^2   4.8403     0.06616  73.1606    1.35714*10^-12,
          RSquared -> 0.998508, AdjustedRSquared -> 0.998321,
          EstimatedVariance -> 46.006,
          ANOVATable ->
                  DF  SumOfSq  MeanSq   FRatio   PValue
          Model   1   246246.  246246.  5352.47  1.35714*10^-12
          Error   8   368.048  46.006
          Total   9   246614.}
```
We can also use the fact that data which follow a simple power law will appear as a straight line when the logarithm of each term is plotted. The slope of the line gives the expected power law.
```
In[55]:= ListPlot[Log[d], AxesLabel -> {"log time", "log distance"}]
```
Fit $\log[d]$ to a straight line:
```
In[56]:= logfit = Fit[Log[d], {1, x}, x]
Out[56]= 1.59544 + 1.9864 x
```
or better still,
```
In[57]:= Regress[Log[d], {1, x}, x]
Out[57]= {ParameterTable ->
              Estimate  SE         TStat    PValue
          1   1.59544   0.0331279  48.1602  3.82236*10^-11
          x   1.9864    0.0199225  99.7062  1.14353*10^-13,
          RSquared -> 0.999196, AdjustedRSquared -> 0.999095,
          EstimatedVariance -> 0.00191941,
          ANOVATable ->
                  DF  SumOfSq    MeanSq      FRatio   PValue
          Model   1   19.0815    19.0815     9941.32  1.14353*10^-13
          Error   8   0.0153553  0.00191941
          Total   9   19.0968}
```
Thus the power law is $1.98 \pm 0.02$ (as expected) and the intercept gives the natural logarithm of the acceleration. Hence
```
In[58]:= accln = Exp[logfit[[1]]]
Out[58]= 4.93052
```
**Nonlinear Curve Fitting** If your data does not follow a straight line or simple polynomial, you will need to use Mathematica’s NonlinearFit functions:
```
In[59]:= <<Statistics`NonlinearFit`
```
**Example: Charging a Capacitor** Here we measure the voltage across the capacitor as a function of time.
As usual, we’ll read the data from a file:
```mathematica
In[60]:= chargedata = Import["capch.dat"]
Out[60]= {{15, 0.7}, {30, 1.2}, {45, 1.71}, {60, 2.13}, {75, 2.48},
          {90, 2.78}, {120, 3.29}, {150, 3.66}, {180, 3.96}, {210, 4.19},
          {240, 4.29}, {270, 4.49}, {300, 4.6}}
```
Plot the data in the usual way:
```mathematica
In[61]:= cdata = ListPlot[chargedata]
```
Define the function and ask Mathematica to solve for \(a\) and \(b\):
```mathematica
In[62]:= chrgft = NonlinearFit[chargedata, a (1 - Exp[-x/b]), x, {a, b}]
Out[62]= 4.83133 (1 - e^{-0.00957528 x})
```
Plot it, adding a few extra features:
```mathematica
In[63]:= Plot[chrgft, {x, 0, 300}, AxesLabel -> {"Time (s)", "Voltage"},
          PlotLabel -> "Capacitor Charging Up",
          PlotStyle -> {{Dashing[{0.03}], Thickness[0.005]}},
          Epilog -> {PointSize[0.02], Map[Point, chargedata]}]
```
The NonlinearRegress function gives output similar to Regress. You may not want to display all the information.
```
In[64]:= chrgft = NonlinearRegress[chargedata, a (1 - Exp[-x/b]), x, {a, b}]
Out[64]= {BestFitParameters -> {a -> 4.83133, b -> 104.436},
          ParameterCITable ->
              Estimate  AsymptoticSE  CI
          a   4.83133   0.0275347     {4.77073, 4.89194}
          b   104.436   1.43567       {101.276, 107.595},
          EstimatedVariance -> 0.000897664,
          ANOVATable ->
                              DF  SumOfSq    MeanSq
          Model               2   140.442    70.2208
          Error               11  0.0098743  0.000897664
          Uncorrected Total   13  140.451
          Corrected Total     12  20.5537,
          AsymptoticCorrelationMatrix -> {{1., 0.896223}, {0.896223, 1.}},
          FitCurvatureTable ->
                                    Curvature
          Max Intrinsic             0.00800664
          Max Parameter-Effects     0.0288556
          95.% Confidence Region    0.50111}
```
Extracting the coefficients for use in future calculations requires the `/.` operator:
```
In[65]:= values = BestFitParameters /. chrgft
Out[65]= {a -> 4.83133, b -> 104.436}
```
Apply the `/.` operator a second time:
```
In[66]:= a2 = a /. values
Out[66]= 4.83133

In[67]:= b2 = b /. values
Out[67]= 104.436
```
**Example: Resistance of a Thermistor** This example shows that you need to be careful when using **NonlinearFit**.
The resistance of a thermistor varies with temperature according to $R = A \exp(-BT)$, where $A$ and $B$ are constants. Temperature is in degrees Celsius and resistance is in ohms. In this case we have entered temperature and resistance as two separate lists and used the **Table** command to combine them.
```
In[68]:= temp = {22.3, 27.3, 29.7, 33.2, 39.7, 44.7, 49.6, 62.1, 67, 74.5,
          84.4, 94.9, 99.3};
         ohms = {1501, 1298, 1054, 987, 905, 824, 643, 581, 555, 505, 398,
          344, 327, 257};
         tdata = Table[{temp[[i]], ohms[[i]]}, {i, 1, Length[temp]}];

In[69]:= thermplot = ListPlot[tdata]
```
First try the fit without specifying starting values:
```
In[70]:= fitonly = NonlinearFit[tdata, a Exp[-b x], x, {a, b}]
Out[70]= 26.7508 e^{-0.421807 x}

In[71]:= bestline = Plot[fitonly, {x, 0, 100}, PlotRange -> All]
```
This fit is no good. We need to choose new starting values and increase the number of iterations.
```mathematica
In[72]:= result = NonlinearRegress[tdata, a Exp[-b x], x,
          {{a, 100}, {b, -0.05}},
          RegressionReport -> {StartingParameters, BestFitParameters, BestFit},
          MaxIterations -> 200]
Out[72]= {StartingParameters -> {a -> 100, b -> -0.05},
          BestFitParameters -> {a -> 2180.84, b -> 0.0211993},
          BestFit -> 2180.84 e^{-0.0211993 x}}
```
Plot the data and line of best fit. Note how we extract the equation of the line using the `/.` operator.
```mathematica
In[73]:= Show[thermplot, Plot[BestFit /. result, {x, 0, 300}],
          AxesLabel -> {"x values", "y values"}]
```
The values of $a$ and $b$ can be extracted from `BestFitParameters` for use in further calculations as follows:
```mathematica
In[74]:= aandb = BestFitParameters /. result
Out[74]= {a -> 2180.84, b -> 0.0211993}

In[75]:= aa = a /. aandb
Out[75]= 2180.84

In[76]:= bb = b /. aandb
Out[76]= 0.0211993
```
**Extracting x and y values from data** You can separate the x-values from the y-values using the `Map` command.
Since the x-values are contained in the first column, then for the data set `labdata`, we write
```mathematica
In[77]:= xvals = Map[First, labdata]
Out[77]= {0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
```
Similarly for the y-values,
```mathematica
In[78]:= yvals = Map[Last, labdata]
Out[78]= {8.1, 9.2, 10.5, 13.1, 15.4, 18, 20.4, 22.9, 24.5, 26.3}
```
**Changing x and y values** If you want to do the same operation on x and y values, the procedure for transforming the data is identical to that for a one-dimensional list. Thus, to take the natural logarithm of all the values, type:
```mathematica
In[79]:= logdata = Log[labdata]
```
More often, you will want to transform the x and y data separately. Perhaps one column of data will remain unchanged while the other is multiplied by a constant. Or you might take the reciprocal of one column, or you might want to swap the x and y values because you realised that you have plotted the wrong quantity along each axis of the graph.
```mathematica
In[80]:= new = labdata /. {x_, y_} -> {x, 1/y}
Out[80]= {{0.5, 0.123457}, {1, 0.108696}, {1.5, 0.0952381}, {2, 0.0763359},
          {2.5, 0.0649351}, {3, 1/18}, {3.5, 0.0490196}, {4, 0.0436681},
          {4.5, 0.0408163}, {5, 0.0380228}}
```
Note the use of the replacement operator `/.`. To swap the x and y data points, we write:
```mathematica
In[81]:= new1 = labdata /. {x_, y_} -> {y, x}
Out[81]= {{8.1, 0.5}, {9.2, 1}, {10.5, 1.5}, {13.1, 2}, {15.4, 2.5}, {18, 3},
          {20.4, 3.5}, {22.9, 4}, {24.5, 4.5}, {26.3, 5}}
```
Another method uses the `&` and `/@` operators. The following will take the reciprocal of the y-values while leaving the x-values unchanged:
```mathematica
In[82]:= datanew = {#[[1]], 1/#[[2]]} & /@ labdata;
```
Think of `#[[1]]` as meaning "the first column of the data" and `#[[2]]` as "the second column of the data".
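For comparison with other tools you may use in the laboratory, the column operations above have direct analogues in Python list comprehensions. A minimal sketch (Python, included purely for comparison; the variable names mirror the Mathematica examples):

```python
# labdata from the notes: (x, y) pairs
labdata = [(0.5, 8.1), (1, 9.2), (1.5, 10.5), (2, 13.1), (2.5, 15.4),
           (3, 18), (3.5, 20.4), (4, 22.9), (4.5, 24.5), (5, 26.3)]

# Map[First, labdata] / Map[Last, labdata]
xvals = [x for x, _ in labdata]
yvals = [y for _, y in labdata]

# reciprocal of the y-values, x-values unchanged
datanew = [(x, 1 / y) for x, y in labdata]

# swap the x and y columns
dataswap = [(y, x) for x, y in labdata]
```

The comprehension plays the role of both the `/.` replacement rule and the `& /@` pure-function map.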
Swapping the x and y columns is very easy: you just swap the `#[[1]]` and `#[[2]]` parts:
```mathematica
In[83]:= dataswap = {#[[2]], #[[1]]} & /@ labdata;
```
Do not attempt to transform your data or fit your data to a straight line or curve without first knowing what you are trying to achieve. The Mathematica software is very powerful, but it will give meaningless numbers if you cannot assess the physical significance of your results.
**Histograms** To generate a histogram, you need to load the standard Mathematica package to set up additional graphics functions:
```mathematica
In[84]:= <<Graphics`
```
Now we’ll import some raw data from the “Statistics of Nuclear Counting” experiment and use the **Frequencies** command to draw the histogram.
```
In[85]:= bardat = Import["stats.dat"];

In[86]:= BarChart[Frequencies[bardat]]
```
**BinCounts[xdat, {xmin, xmax, dx}]** lists the number of elements in *xdat* that lie in bins from *xmin* to *xmax* in steps of *dx*. Increasing the bin width to 2 produces the following histogram:
```
In[87]:= abc = BinCounts[bardat, {2, 11, 2}];

In[88]:= BarChart[abc]
```
**Text and Legends** You can insert text anywhere on a graph using a combination of **Show**, **Graphics** and **Text**. For example, to put the words “Some Text” (centered at {2, 20}) on the graph which we called *rawdata*, above, type:
```
In[89]:= Show[rawdata, Graphics[Text["Some Text", {2, 20}]]]
```
You can also change the orientation and font of the text. See the *Mathematica* Help menu for further details. Legends can be included by loading the appropriate package:
```
In[90]:= <<Graphics`Legend`
```
You can place a legend in a graph as an option with `Plot`. Simply specify the text for each curve. For example,
```
In[91]:= Plot[{Sin[x], Cos[x]}, {x, -2Pi, 2Pi},
          PlotStyle -> {GrayLevel[0], Dashing[{0.03}]},
          PlotLegend -> {"Sine", "Cosine"}]
```
You can include more options to change the appearance of the legend.
**Saving your Graph** Usually, you will want to print out the entire *Mathematica* worksheet that you create.
This is so that you can show how you performed calculations and obtained the line of best fit to a graph. Sometimes you will want to print out a graph or image separately, to include in a formal report, for example. *Mathematica* supports many graphics formats, including encapsulated postscript (.eps), Adobe Acrobat portable document format (.pdf), GIF (.gif) and JPEG (.jpg). Suppose you want to save the graph which we called "rawdata". This is achieved using the **Display** command. Decide on a name for the graph file (say, "myfile") and the format you want to save it in. Thus you would type one of the following commands to save the file in your default directory.
```
In[93]:= Display["myfile.eps", rawdata, "EPS"];
         Display["myfile.pdf", rawdata, "PDF"];
         Display["myfile.gif", rawdata, "GIF"];
         Display["myfile.jpg", rawdata, "JPEG"];
```
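Finally, as an independent cross-check of the free-fall example earlier in these notes, the log-log power-law fit can be reproduced outside *Mathematica*. A minimal sketch in Python (for comparison only; the data are copied from the notes, and a plain least-squares line is fitted to the log-log points):

```python
import math

# free-fall data from the notes: (time in s, distance in m)
d = [(1, 5.0), (2, 20.0), (3, 42.0), (4, 80.0), (5, 110.0),
     (6, 170.0), (7, 246.0), (8, 310.0), (9, 400.0), (10, 475.0)]

# take logs of both columns, as Log[d] does in Mathematica
logd = [(math.log(t), math.log(s)) for t, s in d]

# least-squares straight line through the log-log data
n = len(logd)
xbar = sum(x for x, _ in logd) / n
ybar = sum(y for _, y in logd) / n
sxx = sum((x - xbar) ** 2 for x, _ in logd)
sxy = sum((x - xbar) * (y - ybar) for x, y in logd)

power = sxy / sxx            # slope = exponent of the power law
lna = ybar - power * xbar    # intercept = ln of the coefficient (g/2)
accln = math.exp(lna)

print(power)   # ~1.9864, close to the expected 2
print(accln)   # ~4.93, close to g/2 = 4.9 m/s^2
```

This matches the slope, intercept and acceleration obtained with `Fit` and `Regress` in the free-fall section.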
January 12, 2007 Christine L. Harwell, Hearing Officer Office of the Director – Legal Unit 320 West 4th Street, Suite 600 Los Angeles, CA 90013 Re: Public Works Case No. 2005-037 Off-site Testing and Inspection Services Jurupa Unified School District — Glen Avon High School Dear Ms. Harwell: This constitutes the determination of the Director of Industrial Relations regarding coverage of the above-referenced project under California’s prevailing wage laws, and is made pursuant to California Code of Regulations, title 8, section 16001(a). Based on my review of the facts of this case and an analysis of the applicable law, it is my determination that the off-site testing and inspection services performed by The Twining Laboratory, Inc. (“Twining”) are not subject to prevailing wage requirements. **Facts** On September 4, 2002, Kern Steel Fabrication, Inc. (“KSF”) entered into a contract with the Jurupa Unified School District (“District”) to provide structural steel for the construction of Glen Avon High School (“Project”) in Riverside. It is undisputed that the Project is a public work. Under the terms of paragraph 2 of its contract, KSF agreed to: [Provide and furnish all the labor, materials, necessary tools, expendable equipment, and all utility and transportation services as described in the complete contract and required to complete all work for: Bid#3/03L—Jurupa High School#3 Phase 1; Category 2—structural steel including, if so desired and ordered by the District, through major change orders requiring the performance of any or all Phase(s) of the Project as identified in the contract documents . . .] KSF has been in business as a supplier of structural steel since 1978. Its sole facility is a steel fabrication shop at 627 Williams Street in Bakersfield. KSF supplies structural steel to private and public entities for use in the construction of a variety of structures.
It has recently supplied structural steel for commercial building projects such as the Crossroads Business Center in Irvine, the Kaiser Permanente Phase II Project in Bakersfield, the United Airlines hangar in Oakland and the Brisbane Technology Park in Brisbane. On February 3, 2003, Twining entered into a contract with District to provide testing and inspection services at KSF’s facility. As described in its proposal, Twining’s services included the following: --- 1 Tilden-Coil Constructors, Inc. served as construction manager for the Project, apparently in lieu of a general contractor. The DSA [Division of the State Architect] approved plans and specifications and references therein will be thoroughly reviewed prior [to] and during the structural steel fabrication. Mill certifications will be used to identify all structural steel in accordance with the requirements of the project plans and specifications. All welders proposed for the project will have their qualifications and welding procedures (prequalified and qualified) reviewed prior to steel fabrication. Twining will provide daily handwritten reports upon leaving the project each day. These reports will be followed by a formal typed report every two weeks. The daily reports will indicate the work performed on a particular day by preferably a piece mark number of the structural steel member and the individuals performing the work. Partial and complete penetration welds will be non-destructively tested as required by the applicable code standards. All tests will be documented and reported under separate reports. All accepted work will be marked in an acceptable manner so as to permit the project inspector-of-record and the field inspection firm verification that the fabricated structural steel member is acceptable for field erection. In the event of a deficiency or discrepancy, Kern Steel will be immediately notified so that the deficiency or discrepancy may be properly addressed or corrected.
If the deficiency or discrepancy is not properly addressed or corrected before the steel member in question is to be shipped, the project inspector, the architect, the structural engineer, the owner and ultimately DSA will be notified. All deficiencies and discrepancies will be documented on the daily report as a matter of record. None of Twining’s services was performed at the Project site. All of the above tasks were performed at KSF’s Bakersfield facility, which is located more than 100 miles from the Project site in Riverside. On April 18, 2003, Twining sent a letter to District memorializing certain agreements regarding billing rates. The letter stated in part: During several conversations ... leading to our revised proposal dated December 3, 2002, our special inspection services were estimated on a tentative schedule provided by Kern Steel based upon a non-prevailing hourly billing rate. The basis of our rate was that our special inspection services were to be provided off-site in a fabrication shop apart from the project site. [O]ur services will continue to be compensated on a non-prevailing hourly billing rate as indicated in the contract as Exhibit C. However, based upon new or recent judicial decisions, the Jurupa Unified School District is seeking a legal opinion on whether off-site special inspection services are subject to prevailing wage requirements established by the Director of Industrial Relations. [If] in the event the legal opinion indicates that our off-site special inspectors are subject to prevailing wage requirements, our current non-prevailing wage hourly billing rate for a special inspector would be renegotiated to a prevailing hourly billing rate. ... All previous billings would be then adjusted for prevailing wage rate with our employees being compensated for prevailing wage. 
Whether prevailing wage obligations attach to the testing and inspection services performed by Twining at KSF’s Bakersfield facility is the subject matter of this determination. **Discussion** Labor Code section 1720(a)(1)\(^2\) defines public works to include: Construction, alteration, demolition, installation, or repair work done under contract and paid for in whole or in part out of public funds . . . . For purposes of this paragraph, “construction” includes work performed during the design and preconstruction phases of construction including, but not limited to, inspection and land surveying work. Section 1771 generally requires the payment of prevailing wages to workers employed on public work. Section 1772 provides that: “Workers employed by contractors or subcontractors in the execution of any contract for public work are deemed to be employed upon public work.” Finally, under section 1774 such contractors or subcontractors “shall pay not less than the specified prevailing rates of wages to all work[ers] employed in the execution of the contract.” Work falls within the scope of sections 1771, 1772 and 1774 when it is “functionally related to the process of construction” and “an integrated aspect of the ‘flow’ process of construction.” *See O. G. Sansone Co. v. Dept. of Transportation* (1976) 55 Cal.App.3d 434, 444, quoting *Green v. Jones* (1964) 23 Wis.2d 551, 128 N.W.2d 1, 7. It is undisputed that the Project, the construction of a high school in Riverside done under contract and paid for in whole or in part out of public funds, is a public work. The question presented here is whether the testing and inspection services performed by Twining employees at the KSF facility\(^3\) in Bakersfield were “functionally related to the process of construction” and “an integrated aspect of the ‘flow’ process of construction” within the meaning of sections 1771, 1772 and 1774.
Twining employees performed their work independent of the construction activities at the Project site. They worked entirely in the fabrication shop and never at the construction site. They had no interaction with the construction workers, and they inspected and tested the structural steel at an entirely different place and time than the steel was erected. Once they determined a member to be satisfactory, that member could not be immediately incorporated in the construction project because it first had to be transported a distance of more than 100 miles. Under these circumstances, the off-site testing and inspection services performed by Twining employees were not an integrated aspect of the flow process of construction, and were not sufficiently functionally related to that process as to be done in the execution of the public work. It would be more accurate to say that this work is functionally related to the process of material fabrication. Based on the foregoing and consistent with the analysis and outcome of past precedential public works coverage decisions applying the same Code sections, Twining employees performing off-site testing and inspection services were not employed in the execution of a contract for public work within the meaning of sections 1771, 1772 and 1774, and therefore Twining was not required to pay prevailing wages. I hope this determination satisfactorily answers your inquiry.

---

\(^2\)Subsequent statutory references are to the Labor Code unless otherwise indicated.

\(^3\)The KSF facility is a general use facility. Therefore, unlike a dedicated yard or secondary public works site, the fabrication work performed at the KSF facility is not subject to prevailing wage requirements. *See O.G. Sansone Co. v. Dept. of Transportation, supra*, 55 Cal.App.3d 434.

Sincerely, John M.
Rea, Acting Director

---

*Decisions in which the work in question was found not to be in the execution of a contract for public work include PW 2002-096, Request for Proposals: Planting, Operation, Maintenance and Monitoring of Owens Lake Southern Zones Managed Vegetation Project (December 16, 2005) (inspection, testing and monitoring work that occurs after the completion of the public work was not directly related to the prosecution of the public work and necessary for its completion); and PW 99-037, Alameda Corridor Project, A&A Ready Mix Concrete and Robertson’s Ready Mix Concrete (April 10, 2000) (delivery of concrete mix was not an integrated aspect of and functionally related to construction work on the project). Decisions in which the work in question was found to be in the execution of a contract for public work include PW 2003-026, Advisory Opinion on DSA Project Inspectors (October 7, 2003) (project inspectors actively and continuously monitoring contractor’s work through on-site physical presence whenever there was construction activity were a vital and integral part of construction projects); PW 2004-013, Dry Creek Joint Elementary School District, Coyote Ridge Elementary School, On-site Heavy Equipment Upkeep (December 16, 2005) (on-site heavy equipment upkeep by contractor’s shop employees was directly related to the prosecution of the public work and necessary for its completion); and PW 2005-018, Installation and Removal of Temporary Fencing and Power and Communications Facilities, Eastside High School, Antelope Valley Union High School District (February 28, 2006) (removal of temporary fencing and power and communications facilities was performed as part of the construction process).
See also PW 2004-023, Prevailing Wage Rates, Richmond-San Rafael Bridge/Benicia-Martinez Bridge/San Francisco-Oakland Bay Bridge, California Department of Transportation, and PW 2003-046, Public Works Coverage, West Mission Bay Drive Bridge Retrofit Project, City of San Diego (January 23, 2006) (only towboat operators who haul materials from dedicated sites or who are involved in the immediate incorporation of materials into bridge projects were performing work functionally related to and integrated with the process of construction).*
**Elementary Differential Equations 9780470458327**

Related video reviews and lectures:

- Differential Equations Book Review
- Differential Equations Book You've Never Heard Of
- Differential equation introduction | First order differential equations | Khan Academy
- Elementary Differential Equations Lecture 1
- Solving Elementary Differential Equations
- The THICKEST Differential Equations Book I Own 📚
- Elementary Differential Equations with Boundary Value Problems 6th Edition
- Three Good Differential Equations Books for Beginners
- This is what a differential equations book from the 1800s looks like
- This is the Differential Equations Book That...
- Differential equations by MD Raisinghania book review | best book for differential equations?
- Books for Learning Mathematics
- SMBR: Intro to Topology
- Differential Equations—Introduction—Part 1
- The Most Famous Calculus Book in Existence “Calculus by Michael Spivak”
- Calculus Book for Beginners
- My (Portable) Math Book Collection [Math Books]
- Overview of Differential Equations
- 10 Best Calculus Textbooks 2019
- Introduction to Linear Differential Equations and Integrating Factors (Differential Equations 15)
- Differential Equations: Final Exam Review
- Lesson 2 - Solving Elementary Differential Equations
- Solution Manual for Elementary Differential Equations – Richard DiPrima, William Boyce
- Elementary Differential Equations and Boundary Value Problems by Boyce and DiPrima #shorts
- Partial Differential Equations Book Better Than This One?
- Elementary Differential Equations and Boundary Value Problems by Boyce/DiPrima #shorts
- Introduction to Differential Equations (Differential Equations 2)
- Elementary Differential Equations Lecture 4

AbeBooks.com: Elementary Differential Equations (9780470458327) by Boyce, William E.; DiPrima, Richard C., and a great selection of similar new, used and collectible books available now at great prices. Elementary Differential Equations, 10th Edition (978-0470458327) is an in-depth treatment of differential equations and related topics. Elementary Differential Equations, William E. Boyce, Richard C. DiPrima.
Boyce/DiPrima is the best-seller in its market and extremely popular. The format remains unchanged, but exercises and examples have been updated to reflect the most current scenarios and topics. ... 9780470458327. File: PDF, 4.21 MB ... Find 9780470458327 Elementary Differential Equations 10th Edition by William Boyce et al at over 30 bookstores. Buy, rent or sell. Elementary Differential Equations (10th Edition) Edit edition. Problem 12P from Chapter 3.5: Find the general solution of the given differential equation. Get solutions. Elementary Differential Equations Pg. 48 Ex. 10 solutions. Elementary Differential Equations, 10th Edition | ISBN: 9780470458327 / 0470458321. 1,290 expert-verified solutions in this book. Buy on Amazon.com. **Elementary Differential Equations: Boyce, William E ...** Elementary differential equations and boundary value problems / William E. Boyce, Richard C. DiPrima – 7th ed. p. cm. Includes index. ISBN 0-471-31999-6 (cloth : alk. paper) 1. Differential equations. 2. Boundary value problems. I. DiPrima, Richard C. II. Title QA371 .B773 2000 515’.35–dc21 00-023752 Printed in the United States of ... **Mathematics - Elementary Differential Equations** Elementary Differential Equations with Boundary Value Problems is written for students in science, engineering, and mathematics who have completed calculus through partial differentiation.
If your syllabus includes Chapter 10 (Linear Systems of Differential Equations), your students should have some preparation in linear algebra. **ELEMENTARY DIFFERENTIAL EQUATIONS** Textbook solutions for Elementary Differential Equations 10th Edition William E. Boyce and others in this series. View step-by-step homework solutions for your homework. Ask our subject experts for help answering any of your homework questions! **Elementary Differential Equations 10th Edition Textbook ...** Elementary Differential Equations by DiPrima, Richard C., Boyce, William E., and a great selection of related books, art and collectibles available now at AbeBooks.com. **9780470458327 - Elementary Differential Equations by Boyce ...** Elementary Differential Equations and Boundary Value Problems, 11e WileyPLUS Registration Card + Loose-leaf Print Companion William E. Boyce. 4.3 out of 5 stars 19. Ring-bound. $122.30. Elementary Differential Equations Boyce. 4.0 out of 5 stars 82. Hardcover. $44.55. **Elementary Differential Equations: William E. Boyce ...** Our cheapest price for Elementary Differential Equations is $49.07. Free shipping on all orders over $35.00. Elementary differential equations : Kells, Lyman M. (Lyman ... Buy Elementary Differential Equations 10th Edition by Boyce, William E., DiPrima, Richard C.
(ISBN: 9780470458327) from Amazon's Book Store. Everyday low prices and free delivery on eligible orders. Elementary Differential Equations: Amazon.co.uk: Boyce ... Buy Elementary Differential Equations 10th edition (9780470458327) by NA for up to 90% off at Textbooks.com. Elementary Differential Equations 10th edition ... Elementary Differential Equations 10E by Richard C. DiPrima, 9780470458327, available at Book Depository with free delivery worldwide. Elementary Differential Equations, 10th Edition is written from the viewpoint of the applied mathematician, whose interest in differential equations may sometimes be quite theoretical and sometimes intensely practical. The authors have sought to combine a sound and accurate exposition of the elementary theory of differential equations with considerable material on methods of solution, analysis, and approximation that have proved useful in a wide variety of applications. While the general structure of the book remains unchanged, some notable changes have been made to improve the clarity and readability of basic material about differential equations and their applications. In addition to expanded explanations, the 10th edition includes new problems, updated figures and examples to help motivate students. Elementary Differential Equations and Boundary Value Problems 11e, like its predecessors, is written from the viewpoint of the applied mathematician, whose interest in differential equations may sometimes be quite theoretical, sometimes intensely practical, and often somewhere in between. The authors have sought to combine a sound and accurate (but not abstract) exposition of the elementary theory of differential equations with considerable material on methods of solution, analysis, and approximation that have proved useful in a wide variety of applications. 
While the general structure of the book remains unchanged, some notable changes have been made to improve the clarity and readability of basic material about differential equations and their applications. In addition to expanded explanations, the 11th edition includes new problems, updated figures and examples to help motivate students. The program is primarily intended for undergraduate students of mathematics, science, or engineering, who typically take a course on differential equations during their first or second year of study. The main prerequisite for engaging with the program is a working knowledge of calculus, gained from a normal two- or three-semester course sequence or its equivalent. Some familiarity with matrices will also be helpful in the chapters on systems of differential equations. This package includes a copy of ISBN 9780470458327 and a registration code for the WileyPLUS course associated with the text. Before you purchase, check with your instructor or review your course syllabus to ensure that your instructor requires WileyPLUS. For customer technical support, please visit http://www.wileyplus.com/support. WileyPLUS registration cards are only included with new products. Used and rental products may not include WileyPLUS registration cards. With Wiley’s Enhanced E-Text, you get all the benefits of a downloadable, reflowable eBook with added resources to make your study time more effective, including:

- Embedded and searchable equations, figures and tables
- Math XML
- Index with linked page numbers for easy reference
- Redrawn full-color figures to allow for easier identification
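The Boyce/DiPrima listings above revolve around exercises of the form "find the general solution of the given differential equation." As a minimal, dependency-free sketch (the equation below is our own illustration, not any particular exercise from the book), a first-order linear equation can be integrated numerically with forward Euler and checked against its closed-form solution:

```python
import math

def euler(f, y0, t0, t1, n):
    """Fixed-step forward Euler for the initial value problem y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Illustrative first-order linear equation (our own choice, not a book
# exercise): y' = -2*y + 1 with y(0) = 0.  The general solution is
# y(t) = 1/2 + C*exp(-2*t); the initial condition gives C = -1/2.
f = lambda t, y: -2.0 * y + 1.0
exact = lambda t: 0.5 * (1.0 - math.exp(-2.0 * t))

approx = euler(f, 0.0, 0.0, 1.0, 10000)
print(abs(approx - exact(1.0)))  # discretization error, O(1/n)
```

With 10,000 steps the Euler error is on the order of the step size; a higher-order method such as classical RK4 would shrink it much faster for the same cost per step.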
Straightforward and easy to read, DIFFERENTIAL EQUATIONS WITH BOUNDARY-VALUE PROBLEMS, 9th Edition, gives you a thorough overview of the topics typically taught in a first course in Differential Equations as well as an introduction to boundary-value problems and partial Differential Equations. Your study will be supported by a bounty of pedagogical aids, including an abundance of examples, explanations, Remarks boxes, definitions, and more. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version. Incorporating an innovative modeling approach, this book for a one-semester differential equations course emphasizes conceptual understanding to help users relate information taught in the classroom to real-world experiences. Certain models reappear throughout the book as running themes to synthesize different concepts from multiple angles, and a dynamical systems focus emphasizes predicting the long-term behavior of these recurring models. Users will discover how to identify and harness the mathematics they will use in their careers, and apply it effectively outside the classroom. Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version. Details the methods for solving ordinary and partial differential equations. New material on limit cycles, the Lorenz equations and chaos has been added along with nearly 300 new problems. Also features expanded discussions of competing species and predator-prey problems plus extended treatment of phase plane analysis, qualitative methods and stability. Master the finite element method with this masterful and practical volume *An Introduction to the Finite Element Method (FEM) for Differential Equations* provides readers with a practical and approachable examination of the use of the finite element method in mathematics. 
Author Mohammad Asadzadeh covers basic FEM theory, both in one-dimensional and higher dimensional cases. The book is filled with concrete strategies and useful methods to simplify its complex mathematical contents. Practically written and carefully detailed, *An Introduction to the Finite Element Method* covers topics including: An introduction to basic ordinary and partial differential equations The concept of fundamental solutions using Green's function approaches Polynomial approximations and interpolations, quadrature rules, and iterative numerical methods to solve linear systems of equations Higher-dimensional interpolation procedures Stability and convergence analysis of FEM for differential equations This book is ideal for upper-level undergraduate and graduate students in natural science and engineering. It belongs on the shelf of anyone seeking to improve their understanding of differential equations.
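The FEM topics listed in the blurb above can be made concrete with a short sketch (our own construction under simplifying assumptions, not code from the book): piecewise-linear hat elements for the model problem -u'' = f on [0, 1] with homogeneous Dirichlet data yield a tridiagonal stiffness matrix, solved here by the Thomas algorithm.

```python
def solve_poisson_fem(f, n):
    """Piecewise-linear FEM for -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.

    Hat-function basis on a uniform mesh with n interior nodes; the
    stiffness matrix is tridiagonal with 2/h on the diagonal and -1/h
    off it.  Returns the n interior nodal values.
    """
    h = 1.0 / (n + 1)
    sub = [-1.0 / h] * (n - 1)      # sub-diagonal
    diag = [2.0 / h] * n            # main diagonal
    sup = [-1.0 / h] * (n - 1)      # super-diagonal
    # Lumped load vector: the integral of f against hat_i is roughly h * f(x_i).
    rhs = [h * f((i + 1) * h) for i in range(n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return u

# For f = 1 the exact solution is u(x) = x * (1 - x) / 2; because it is
# quadratic, the nodal values of this scheme match it up to roundoff.
u = solve_poisson_fem(lambda x: 1.0, 99)
print(abs(u[49] - 0.125))  # error at the node x = 0.5
```

The same assembly pattern generalizes to variable coefficients and higher dimensions, where the system is no longer tridiagonal and the iterative solvers mentioned in the blurb become relevant.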
The Fourth International Congress of the International Association for the Study of Traditional Asian Medicine will be held in Tokyo from the 19th through the 21st of August, 1994. Among the program's highlights will be the awarding of the 1994 A. L. Basham Medals. The recipients are Professors Patricia and Roger Jeffery (University of Edinburgh) and Professor Shigehisa Kuriyama (Emory University). Following presentation of the medals, the recipients will lecture on their special subjects. Twelve individual workshops have been planned, on topics ranging from research on pre-modern Asian medicine to the application of modern techniques in traditional medical practices. In addition, there will be special lectures by Charles Leslie, K. Yamada, K. Nishino, and Yasuo Otsuka. The Congress schedule appears elsewhere in this newsletter.

In this issue:
- The 4th International Congress in Tokyo ........... 6-10
- The Basham Medal Recipients ...................... 2
- Book Reviews ..................................... 11
- Conference Proposal by Claire Cassidy ............ 15

The International Association for the Study of Traditional Asian Medicine in 1989 decided to establish an Arthur L. Basham Medal in honor of the great Indologist and co-founder of IASTAM. Two medals are awarded every five years, on the occasion of the International Congress, to outstanding scholars in the study of traditional Asian medicine, one of the recipients being an Asian and the other a Westerner. It is the goal of IASTAM to encourage scholarly work in any of the subdisciplines of the field, on the social and intellectual history of Asian medicine, the social and cultural anthropology of medicine in Asia, personality and culture of practices and practitioners, and other related topics. The A.L. Basham Medal was awarded for the first time in 1990 to Professor Yamada Keiji of the Research Institute for Humanistic Studies of Kyoto University in Japan, and Professor G.
Jan Meulenbeld, M.D., retired professor of Indology at the University of Groningen, Netherlands. The award took place in Bombay on the occasion of the third ICTAM. This year, the A.L. Basham Medal Award Committee, consisting of Charles Leslie, Paul Unschuld, and F. Zimmermann, has selected Professors Patricia and Roger Jeffery (University of Edinburgh) and Professor Shigehisa Kuriyama (Emory University) to receive the medals in 1994. The award will be presented to Patricia Jeffery, Roger Jeffery, and Shigehisa Kuriyama this summer in Tokyo. The ceremony will be part of the fourth International Congress on Traditional Asian Medicine, August 19-21. The recipients have been invited to Tokyo by Professor Yasuo Otsuka, Chairman of the ICTAM IV Organizing Committee, and they will deliver special lectures on this occasion. One of the two A.L. Basham Medals in 1994 has been awarded jointly to Patricia Margaret Jeffery and Roger Jeffery, both teaching in the Department of Sociology, University of Edinburgh. Patricia Jeffery received her Ph.D. in the Social Sciences from the University of Bristol in 1973. She is currently a Reader in Sociology at the University of Edinburgh and the author of two books, including *Migrants and Refugees: Muslim and Christian Pakistani Families in Bristol*, Cambridge: Cambridge University Press, 1976. However, a good number of her publications have been authored jointly with her husband, Roger, including their 1989 book reviewed below, *Labour Pains and Labour Power*, for which they were nominated for the Basham Medal. Roger Jeffery received his Ph.D. from the University of Edinburgh in 1985. Currently a Senior Lecturer at Edinburgh University, he is the author of *The Politics of Health in India*, published by the University of California Press in 1988. This book was reviewed in the IASTAM newsletter in March, 1989. Since 1974, Roger Jeffery has published numerous important articles in the sociology of medical policy-making in India.
For more than ten years now, Patricia and Roger Jeffery have been collaborating with each other on the study of women, childbirth, midwifery, and *nasbandi* (sterilization) in rural North India. Their first joint publication on this topic is a most important research note on "Female Infanticide and Amniocentesis," *Economic and Political Weekly* (Bombay), XVIII nos. 16-17, 1983; revised and extended version in *Social Science and Medicine* Vol. 19 no. 11, 1984: 1207-1212. They have two jointly-authored books forthcoming, one on women's lives and life stories, and another on gender, class, and ethnicity in rural North India. Their current research interests include deforestation and environmental issues. Muni worked all day, although the labour pains were beginning to hurt. The contractions then became stronger. Her mother-in-law came with the *dai* (traditional birth attendant), who delivered the baby: another girl. "Aah!" "Well, girls are all right, too, you know." "Muni's fate is bad! That 'prostitute-widow' of a dai has produced another girl." The issues raised in passing in this story are dealt with in more detail later. For example, Muni's mother-in-law reluctantly gives the dai ten rupees. To the complaining midwife, she says: "It would be different for a boy" (p. 6). As we shall learn later, "a dai getting Rs. 25 plus about five kilograms of grain for delivering a boy would probably be given Rs. 15-20 plus the grain for a girl" (p. 141), and in some cases only Rs. 10. The sociological analysis is embedded in ethnography. A number of line-drawings by Catherine Robin, which are evocative of everyday life and of the maternal bond, tend to reinforce the literary impact of the ethnographic descriptions and of the villagers' emotional comments. The authors skillfully interweave their analysis with the women's own voices.
In that respect, the book is like *Death Without Weeping* (Berkeley: 1992), a book by Nancy Scheper-Hughes on infant deaths among very poor Brazilians. The whole book is constructed on a series of gentle shifts to and fro between private settings and wider social and economic contexts. "We began this book with Muni giving birth in the apparent privacy of her small village home. But the childbearing careers we described in the chapters that followed are tied into much wider social relationships, most obviously connected with domestic structures, class position and ethnicity" (p. 215). Agrarian relations in north India cannot be understood if women and their activities, including biological reproduction and the "private" act of childbearing, are divorced from the wider household politics. This is why the very personal act of childbearing is both described in great ethnographic detail and placed within the wider contexts of kinship, the household economy, the provision of health services and the class system. Two particular qualities should recommend this book to IASTAM members. First, the traditional art of midwifery is extremely well documented, with a wealth of information on the bodily techniques, ethnophysiology, the management of the body's wastes (the cord, the placenta, etc.), diet, and food classification. Second, the modest and committed attitude of the researchers is worth mentioning. It is a dense monograph, packed with facts and ideas, well-researched, well-written, but at the same time it is the account of a personal experience and a testimony written out of friendship and care. Distancing themselves from the classic format of "village studies," in which local settings were objectified in holistic tableaux, the Jefferys have been at great pains to clarify their research strategy and let the people in Bijnor District (UP) speak for themselves.
**AAA Meetings -- Atlanta** The annual meeting of the American Anthropological Association will be held in Atlanta, Georgia, November 17-21. As usual, there will be several panels relevant to the interests of IASTAM members. Of some interest may be the roundtable discussion to be chaired by Charles Nuckolls on psychiatry, culture, and DSM (the official text of the American psychiatric establishment). The roundtable will be sponsored by the Society for Medical Anthropology. Those interested in participating may sign up in advance or register at the meetings. There is no particular agenda, although some of the discussion will focus on the recently published fourth edition of the *Diagnostic and Statistical Manual of Mental Disorders*. For future reference, please consider the possibility of organizing panels on Asian medicine for later AAA conferences. The IASTAM newsletter will be happy to publish your panel proposals. **Basham Medal Winner** **Shigehisa Kuriyama** One of the two Basham Medals this year has been awarded to Dr. Shigehisa Kuriyama, Assistant Professor in the Institute for Liberal Arts (Emory University, USA) and Visiting Associate Professor at the International Research Center for Japanese Studies in Kyoto. Shigehisa Kuriyama obtained his Ph.D. in the History of Science from Harvard University in 1986, with a dissertation on *The Varieties of Haptic Experience: A Comparative Study of Greek and Chinese Pulse Diagnosis*. Since 1989 he has been teaching at Emory University in Atlanta. His book, *Embodied Differences: A Comparative Study of Conceptions of the Body in Classical Greek and Chinese Medicine*, is to appear from Zone Books. Since 1983 he has published on the history of Chinese and Japanese medicine. Kuriyama participated in the 9th International Symposium on the Comparative History of Medicine East and West, 1984.
His contribution, published in the proceedings, prefigures his dissertation and gives the gist of his argument on "Pulse Diagnosis in the Greek and Chinese Traditions," in Y. Kawakita, ed., *History of Diagnosis*, Osaka: The Taniguchi Foundation, 1987. The enigma that motivates this essay is that of a contradiction between historical relativism and the existence of universals of medicine. In Kuriyama's opinion, it is not enough to say that the various medical traditions reflect cultural differences when the human body is just one. "In what sense do medical traditions diverge? The habitual dichotomy of biology and culture would have us situate the divergence in culture, in different ways of thinking about the unique and universal body... We take the body as a given fact" (p. 64). This given fact, however, is an artifact. By scrutinizing in what sense Chinese and Greek sphygmology (pulse diagnosis) diverged, Kuriyama hopes to obtain some insight into the fabric of apparently given facts in medicine. Even in those medical traditions that extol the truth value of perception, statements of fact are constructed on the basis of perceptual education. Different medical traditions have followed different possible "paths into the body" (p. 60). For example, the *Nan ching* does not reveal any connection between pulsation, the movement of the blood, and the beating heart, while it points to the connection with breath. "While the *Nan ching* is by no means a text of Taoist yoga, its analysis of inspiration and expiration, and the primacy of the so-called inter-renal pulse (*shen chien chih tung*) all evidence the unmistakable imprint of techniques of breath control on the development of medical theory. The path of embodying change (adopted from the Chinese) led away from an anatomic, cardio-centric interpretation of the pulse (adopted by the Greeks)" (p. 60). Another important paper is "Between Mind and Eye: Japanese Anatomy in the Eighteenth Century," in C. Leslie and A.
Young, eds., *Paths to Asian Medical Knowledge*. This is in a sense the very same argument as before, on the intimate relationship between visual perception and technical education. Seeing is a learned skill. Kuriyama describes the role of European medical illustrations and the new representational techniques, introduced through the translation of Dutch anatomical books, in the transformation of Japanese medicine. The new style of perspective drawing and chiaroscuro enabled Gempaku, the Japanese physician and translator, to see anatomical features in dissection that he had not seen before. This kind of ethno-epistemological approach to medical history, which entails studying the relationship between texts and practice, the Word and the Eye, would also be a valuable path to other Asian medical traditions, and their encounters with anatomy. One can observe a similar integration of Western anatomical charts into modern Ayurveda, and there is a fundamental polarity between the disciple's own Perception and his guru's authoritative Word in the Sanskrit *nyaya* texts. The last paper I would like to mention, "Visual Knowledge in Classical Chinese Medicine," was presented at a symposium on the comparative epistemological study of scholarly traditions in medicine (Montreal, 1992). It will appear in D. Bates, ed., *Epistemology and the Scholarly Medical Traditions*, Cambridge, forthcoming. The theoretical stance of classical medicine is put into perspective. Kuriyama plays once again on the polarity of the Word and the Eye, on the two sides of medical knowledge. The Greek anatomical vision was shaped by the assumptions and concerns of Greek physiognomy. By contrast, the Chinese developed a mystique of insight and physiognomy, so to say, which the author reveals through elaborate analysis of the theory of colors and complexions. In both traditions, visual knowledge is directed toward intentionality.
But the differences in how Greek and Chinese physicians looked at the body, as an external object, derived in large measure from differences in how they conceived and experienced themselves from within, as persons. The core issue, in the epistemology of scholarly medical traditions, is not that of anatomy or physiology, but that of intentionality and the construction of the person. *F. Zimmermann* **Election of New Officers** The current president, Francis Zimmermann, and the other officers of the association will soon be leaving office. *Please submit your nominations for their successors to Francis Zimmermann or to Charles Nuckolls, editor of the newsletter.* If you wish to be involved in the management of the association, or if you have suggestions, please do let us know. Your participation is welcome. The 4th International Congress on Traditional Asian Medicine (ICTAM IV) CONGRESS SCHEDULE I. Special Lectures Professor Charles Leslie (University of Delaware) Professors Patricia and Roger Jeffery (University of Edinburgh) Dr. Shigehisa Kuriyama (Emory University) Professor Keiji Yamada (Int'l Research Center for Japanese Studies) Mr. Kozo Nishino II. Presidential Lecture Dr. Yasuo Otsuka (Oriental Medicine Research Center of the Kitasato Institute) III. Scientific Program A. Topics for Oral and Poster Sessions 1. History of Traditional Asian Medicine a. Middle East b. Southeast Asia c. East Asia From the Editor Charles W. Nuckolls Over the past few months questions have been raised about the editorial policy of this newsletter. Some feel that this policy is too liberal and that it leads to the appearance of articles which might not meet the scholarly standards of a peer-reviewed journal. Others feel that as a newsletter, and not a journal, it is the obligation of the editor to publish whatever might be considered newsworthy to the general membership or one of its constituencies.
It is time for me to set the record straight and state as a matter of policy that the newsletter will publish anything and everything that could be of interest to students of Asian medicine, without bias and without the application of rigorous rules of acceptability. In this way we ensure maximum access and participation. At the same time, the editor reserves the right to reject submissions or require revision when articles do not meet this criterion, or when such articles are obviously erroneous, insulting, or badly written. We encourage debate and welcome controversy. Our goal is to stimulate interest in Asian medicine and to provide a forum for cooperation between students. There will be moments when the reader feels that standards have been imposed too stringently or not stringently enough, but this, I submit, is indicative of the health of the newsletter. When readers lose interest, they will not bother to voice either their objections or their praise. For a Debate on Asian Medicine A perennial debate within Asian medicine is not so much how to define the subject but whose definition should be considered most important. For a long time the participants in this debate have been classicists and social scientists, those who study texts and those who study people. The first group is protective of its domain because it was first onto the field, long before the other existed or had developed the methods appropriate to its task. The second group, as the new kid on the block, has always felt that texts play a secondary role in the study of culture, and that the primary focus should be on practice. That the debate is futile and misconceived is as true as it is unfortunate, but that does not get rid of it. The real question is: Do we permit the situation to continue or do we address it openly and try to resolve it? Of course the issue is no longer as simple as the debate between "classicists" and "ethnographers."
Applied scholars, for whom classicists and scientists are alike in their detachment from the real world, now demand recognition. Claire Cassidy's remarks elsewhere in this newsletter put the issue in the foreground. I suggest we take it seriously and address it openly, even if in the short run it means dispensing with our attempt to create a unified image of ourselves. In fact, in the long run, such a debate might help us to create such unity. I urge the readers to make their views known. Who are YOU? In future issues, the newsletter will resume publication of short biographies of its members. Please send some information about yourself, your interests, and your research program. In a future issue we shall publish a directory of members according to their interests. Congress Schedule, con't. A. Traditional Asian Medicine as a Socio-Cultural Phenomenon (The Anthropological Perspective) 1. Religion, Philosophy, and Traditional Asian Medicine 2. Figurative Language in Traditional Asian Medicine 3. Socio-Cultural Specificity of Traditional Asian Med. B. Traditional Asian Medicine in Contemporary Context 1. The Current Situation of Traditional Medicine in Asian Countries (legislation; economic situation; institutional training; manpower) 2. Traditional Asian Medicine and Primary Health Care (political directives; health delivery systems) 3. The Spread of Traditional Asian Medicine to Other Parts of the World (the socio-political context) 4. Individual Therapeutic Methods of Traditional Asian Medicine 5. Traditional Asian Pharmaceuticals (drug herbs; cultivation and production; clinical research) 6. Traditional Asian Medicine and Public Health, Environmental Issues 7. Traditional Asian Medicine and Health Care Workshops W1 "Research on pre-modern Asian Medicine" Professor Paul Unschuld (University of Munich); Dr. Donald Harper (University of Arizona); Dr. Hermann Tessenow; Dr.
Sheng Jinsheng (China Ins't for the History of Medicine & Literature, Academy for Traditional Chinese Medicine); Dr. Barbara Volkmar (University of Munich); Professor Michio Yano (Internat'l Research Center for Japanese Studies); Keiji Yamada (Int'l Research Center for Japanese Studies) W2 "Social sciences in traditional Asian medicine" Kyoichi Sonoda (Toyo University) W3 "Classical history of Asian Medical systems" Dr. Rahul Peter Das (Hamburg University); Professor Christian Oberlander (University of Tokyo); Professor Wataru Miki (Shizuoka Seika Junior College); Dr. Makoto Mayanagi (Kitasato Institute) W4 "Modern politics" Professor Roger Jeffery (University of Edinburgh); Dr. Do-Ya Chang (Dept. of Health, Executive Yuan, Republic of China); Professor Kiichiro Tsutani (Medical Research Ins't, Tokyo Medical and Dental University) W5 "Acupuncture and modern civilization" Professor Yoshiro Yase W6 and W7 "Study of traditional Asian pharmaceuticals" and "Acupuncture II": these will be organized by Dr. Jong-Chol Cyong, with the papers presented by the participants of ICTAM IV. W8 "Health care and life breath" Dr. Beema Bhatta (Holy Family Hospital); Dr. Tsutomu Hatai; Dr. U.K. Krishna W9 "Lifestyle and traditional medicine" Dr. Shigehisa Kuriyama (Emory University); Dr. H. R. Nagendra (Vivekananda Kendra Yoga Research Foundation); Dr. Keishin Kumura W10 "Medical anthropology" Dr. Shin-ichi Takemura W11 "Asian medicine and terminal care" Dr. Kei-ichi Ueno W12 "Modern techniques and medicine" Dr. Kazuo Kodama IV. Satellite Symposium (sponsored by MOA Health Sciences Foundation) Date: 18 August; Venue: MOA, Atami Participants: Lectures: Professor Tetsuo Yamaori (International Research Center for Japanese Studies); Professor Francis Zimmermann (EHESS) Speakers: Dr. E. Ohnuki-Tierney (Univ. of Wisconsin); Dr. M. Picone (EHESS); Dr. J. Berton (C.N.R.S.); Professor Noboru Miyata (Ins't of Psychiatry); Professor Kazuhiko Komatsu (Osaka Univ.)
Professor Tamotsu Aoki (Osaka University)

Congress Timings

Friday, August 19

| Time | Event |
|---------------|-----------------------------------------------------------------------|
| 9 - 10 AM | Registration |
| 10 - 10:30 | Opening Ceremony |
| 10:30 - 12:00 | Presentation of the A. L. Basham Medal; Lectures by the Winners of the Medal |
| 12:30 - 1:30 | Lunch |
| 1:30 - 3:30 | Workshops 1 & 2 |
| 3:30 - 5:30 | Workshops 3 & 4 |

Saturday, August 20

| Time | Event |
|---------------|-----------------------------------------------------------------------|
| 9:00 - 11:00 | Workshops 5 & 6 |
| 11:00 - 12:30 | Lecture by Charles Leslie |
| 12:30 - 1:30 | Lunch |
| 1:30 - 3:30 | Workshops 7 & 8 |
| 3:30 - 5:00 | Special Lecture by Professor K. Yamada |

Sunday, August 21

| Time | Event |
|---------------|-----------------------------------------------------------------------|
| 9:00 - 11:00 | Workshops 9 & 10 |
| 11:00 - 12:30 | Special Lecture by Mr. K. Nishino |
| 12:30 - 1:30 | Lunch |
| 1:30 - 3:30 | Workshops 11 & 12 |
| 3:30 - 4:30 | Special Lecture by Dr. Yasuo Otsuka |

Book Reviews C. Leslie & A. Young eds. *Paths to Asian Medical Knowledge*. Berkeley: University of California Press, 1992 Sidney Greenfield, Professor, Department of Anthropology, University of Wisconsin-Milwaukee I read *Paths to Asian Medical Knowledge* while in Brazil collecting data on the brain functioning of Spiritist healer-mediums and the patients they were treating, and of Afro-Brazilian religious mediums receiving their deities or spirit guides while in trance. My colleague Norman Don and I were measuring changes in brain wave patterns prior to and during treatment and possession. This, of course, indicates that I am a non-specialist in Asian medicine and my comments here are those of an outsider looking in. My own research has been done primarily in Brazil, most recently on popular religions and their diverse systems of healing. I read the book looking for comparative materials to use in teaching about the relationship between religion and healing.
The collection of conference papers, which in spite of the efforts and claims of the editors has resulted in a classic non-book, brought to my mind the old adage: "The more things change, the more they remain the same." As someone who knows little about Asian medicine, it is obvious that I was not thinking about the subject matter of the book, Asian medicine, or any other aspect of Asian cultures. Instead my reaction related to the discipline of anthropology that provided the theoretical framework for the conference and in terms of which the papers were written. The discipline of anthropology has gone through a number of theoretical and paradigmatic changes since its establishment in the early years of the twentieth century. As every graduate student learns, the field was carved out as a distinct academic discipline under the direction of Franz Boas who, with his students, formulated its theoretical and conceptual framework. Boas was reacting against earlier theories of universal evolution and the use of biology and/or geography to explain human behavior. With his students he elaborated the concept of culture, which was to become the primary conceptual unit of the discipline. What Boas advocated, and many of his students did in their research, was the reconstruction of specific histories focussing on themes in delimited geographical areas. This, he maintained, would fill in our knowledge about issues evolutionists had glossed over to reach their generalizations. Boasian anthropology emphasized the culture of specific social groups, with emphasis on how cultures changed over time and diffused from place to place, to account for the diverse behaviors of specific peoples. Of special interest was the coming together of different cultures and the mixture of their constituent units, conceptualized as culture traits and patterns.
This coming together and mixture was referred to as acculturation, while the mixing and intertwining of specific elements to form new patterns was referred to as syncretism. *Paths to Asian Medical Knowledge* is about acculturation in Asian healing practices, with emphasis on the syncretism of Asian healing and European medical patterns to produce the behavior healers and patients can be observed practicing in the present day. Most of the papers are specific histories of acculturation and syncretism in specific places. It is a book Franz Boas and his students would have appreciated and enjoyed reading, as I did. And it shows that in spite of all the verbiage that passes for theory, good anthropology is still good anthropology. But I was frustrated. Perhaps in response to the new concepts and terms introduced in the literature on anthropological theory, most of the authors did not make explicit whose acculturation they were examining, and the syncretism of different traits and elements was mentioned only in passing. This made it difficult for the outsider to comprehend much of the rich detail presented. Furthermore, the conceptual categories that dominated Boasian anthropology were not discussed explicitly, making cross-cultural comparison difficult. Religion is an example. It is quite obvious that traditional Asian medical systems were rooted in and based on diverse religious beliefs and practices and their respective world views. This was left implicit in the papers, however, with no explicit relationship elaborated. As I mentioned at the outset, my own work has been in another part of the world, but on the same subject about which the authors of the papers in this volume were writing. I read the book looking for comparative insights from an area in which these issues take different cultural forms.
But by choosing to speak of medicine, as if traditional healing practices were medical systems comparable to those of the West, the authors in this volume have minimized the more general relationship between religion and healing found in other world areas. Had they made explicit the religious contexts and their respective world views, the patterns of acculturation and syncretism discussed might have been easier to compare with data from other parts of the world. But the authors instead zeroed in on the specifics of Asian practice and behavior. This may be understandable, given the goals of the conference for which the papers were written. The result, however, is a book that appears to me to be most useful to Asian specialists. In my case it did not lend itself to what I take to have been the goal of Boasian anthropology. It is not very useful for cross-cultural comparison. This may, as I have noted, also be the result of theoretical changes in the discipline that have led us ever deeper into the analysis of single symbolic systems. As a final note, I was surprised that the editors explicitly eschewed any interest in the efficacy of treatment. I had hoped to find at least some references to studies of how patients fare under traditional Asian medical systems and how this would compare with Western medicine and other healing systems. Perhaps in a future conference questions of efficacy and a greater concern with cross-cultural comparison might be placed on the agenda. **Medical Anthropology:** **The Journal** As an associate editor of the journal *Medical Anthropology*, I invite readers of the IASTAM Newsletter to consider publishing their scholarly articles in one of the two American journals devoted to research in cross-cultural health practices. *Medical Anthropology* publishes papers that explore the relationships among health, disease, illness, treatment, and human social life.
Emphasis is on the cross-cultural similarities and differences in the way people cope with health problems. The journal welcomes papers based on empirical research as well as those which deal with significant methodological and/or theoretical issues. The journal publishes papers on a wide range of topics, including: ethnomedical studies; studies evaluating the impact of modernization on indigenous medical systems; studies of sexual and reproductive behavior; and studies of health care providers, services, and policy. *Medical Anthropology* provides important biocultural and cross-cultural perspectives on health, disease, illness, and treatment for nurses, physicians, biological and social scientists, and other professionals in health-related fields, as well as for anthropologists. Students of Asian medicine are invited to submit their work. *Charles W. Nuckolls, Associate Editor* Book Review C. Leslie & A. Young eds. *Paths to Asian Medical Knowledge*. Berkeley: University of California Press, 1992 Claire M. Cassidy, Ph.D. This scholarly text contains five chapters on Chinese and Japanese medicine, five more on Ayurveda and medical thought in India, and two on Islamic humoral thought. The chapters take very different approaches to the issues, including exegesis of historical and classic texts, analysis of modern texts, and symbolic and ethnographic approaches. It is difficult to link such diversity into a whole, yet the farther one progresses in reading the text, the more a subtle rhythm emerges, and it is this rhythm which leads the reader to sense a unity of underlying themes. In this review I will examine two of these themes that interest me particularly in my role as an applied medical anthropologist working with Asian medical systems in the U.S.
One major theme — expressed by several of the authors as a correction of misapprehensions that exist in the literature — is that Asian medical systems are complex, heterogeneous, permeable and evolving (that is, that they are not homogeneous or unchanging). In a sense, this point ought not to have to be made. Yet realistically there remains a tendency for "outsiders" to interpret a system that has been "named" not only as having specific and specifiable boundaries, but also as staying within those boundaries and being predictable. That this is not so — that "purity" does not exist, and that syncretism is commonplace and possibly normative among both practitioners and clients — is a point that seemingly must be argued and demonstrated time after time. In this text this theme is considered in ten of the twelve papers. Five chapters examine plurality in non-western systems, including Mark Nichter (the many evolving causative models for Kyasanur forest disease among the Tuluva in India), Margaret Trawick (how four different systems co-exist in India and prove to be linked at a deep symbolic level), Gary Seaman (two statuses of Chinese geomancers claim different expertise but privately compete for overlapping functions), Byron Good and Mary-Jo Good (how Islamic humoral medicine achieves cultural authority through links with the sacred), and Carol Laderman (the successful though sometimes fragile fusion of aboriginal Malay medical ideas with Islamic medical ideas). Five others examine the relationship of humoral systems with biomedicine. This contact characteristically leads to awareness of the intense paradigmatic foreignness of biomedicine. Some Asians have responded by arguing that biomedicine is superior to their Asian systems, others want to borrow from biomedicine or imagine "new medicines" that will meld two pre-existing systems, and others are driven to claim superiority for their own system, especially in its "classical" form.
The chapter by Shigehisa Kuriyama takes up the first case, showing how a Dutch anatomy translated into Japanese in the 18th century caused a major break with tradition: part of the resulting rhetoric dealt with the issue of "purity." (His chapter makes a nice parallel with work by Barbara Duden on the effects of changing anatomies in Germany in the 17th century.) Margaret Lock's eerie chapter (I see so many parallels in North American rhetoric) examines how the western medicalized concept of "menopause" is being used to address a social issue, the perceived breakdown of the family in modern Japan. Paul Unschuld considers previously ignored parallelisms of western and Chinese medicine, including the use of invasion and war metaphors in classical Chinese texts. Francis Zimmermann shows how Ayurveda is redefining itself in terms of the western concept of "naturalness" as overlapping with "gentleness," such that classical "violent" techniques are becoming redefined or rejected. While several of the papers consider the political and legitimizing implications of their findings, this subject is the core of the paper by Charles Leslie. He examines why efforts to create Ayurveda as "complementary" with biomedicine in Sri Lanka foundered under pressures for "classicism" from a uniquely well-placed spokesman for conservatism. This case history closely resembles the history of chiropractic in the early twentieth century and the currently expanding schism in the practice of TCM/acupuncture in the U.S. The second theme is closely related to the first. The complexity of Asian medical systems and their users appears to be, at least partly, a result of their ability to expand, absorb and use conflicting explanatory models. Anthropologists, at least, cannot be surprised by this (though nevertheless fascinated by the details of how people do it), but Western thinkers who seek unitary "truth" must be.
Six of the papers — by Judith Farquhar (a study of practical virtuosity in the practice of Chinese medicine), Gananath Obeyesekere (practical virtuosity in prescribing in an Ayurvedic setting), Carol Laderman, Margaret Trawick, Mark Nichter, and Paul Unschuld — deal with this theme. I think it has significant implications for the design of research, particularly relating to the issue of communicating the complexity and creative tension of Asian systems to Western-trained policy makers and funders. Specifically, it seems that humoral systems are by nature overtly metaphorical, or poetical, and this leads to an active awareness of the negotiated nature of medical reality. In practice, this includes the demand to individualize diagnosis as well as to customize prescriptions. But how are these characteristics to be factored into a western style of scientific research, which is often unaware of its use of metaphor (or of the effects of the use of metaphor), believes in the fundamental similarity of patients, and standardizes both diagnosis and prescriptions? The assumption that care can be standardized is a core assumption of western scientific research, where it has achieved the status of "oughtness." This "epistemological break" carries sociopolitical implications for all who hope to do clinical or experimental research with Asian medical systems. I have chosen to discuss these themes because they relate to the issue of addressing the world beyond scholarship, that is, the world of applied researchers, practitioners, and policy makers. It is my sense that we have not, recently, been particularly successful at doing this — and I am well aware that some of us don't want to. This text, for example, despite discussing themes of central importance to clinical research, makes no attempt to address such an audience. I rather wish it had, perhaps in a final "implications" chapter.
My concern relates to a sense of confidence expressed by Charles Leslie and Allan Young in the Introduction that four points about Asian medicine are no longer remarkable. These points are that Asian medical systems are intellectually coherent, that they are embedded in distinctive cultural premises and symbols, that they can't be fully understood outside the stream of history, and that they have a conflicted relationship with biomedicine (cosmopolitan medicine, allopathy). Of these four points I judge that the last is widely known, but the previous three remain little understood outside of social science specialists. I believe this situation needs to change. In short, we need to address our colleagues outside academe. This text cannot do this because it was designed to address specialists. Yet I think much of the interpretive material in the book would be useful to others, and it has policy implications that none of us can safely ignore. When medicine is cosmopolitan, Asia is not far from America. In sum, the present text is careful, detailed, fascinating and scholarly. Though it cannot reach non-specialists, it will, I feel sure, shine as a star in the specialists' galaxy for many years to come. Conference Proposal Claire Cassidy Shall we convene a conference on medical anthropology and alternative medicine in North America? There are several ways we could go. The choice depends on whom we want to address: ourselves primarily, or the wider world. Here are some choices I perceive: — to do a "small focus" conference on how Asian medical systems are re-creating themselves in North America. A slightly larger focus would include Europe as well. This focus would be of most interest to social scientists and humanists, plus some practitioners. We hope to offer a symposium on this subject at the 1994 AAA meeting. — to do a "wider focus" conference on what the social sciences have to contribute to the current discussion of "alternative medicine" in the U.S. particularly.
The targeted audience in this case would probably be conventional researchers in bioscience and the social sciences. — to do a "wide focus" conference on what alternative medicine is. The audience would be practitioners and conventional researchers in bioscience and the social sciences. In my present dual position as both researcher on alternative medicine (especially acupuncture) and "cultural broker" to conventional bioscience researchers involved with researching "alternative" practices, I know that the latter topic needs addressing. The other topics do as well, but I will concentrate on the third since I don't feel the same urgency about the first two. Why urgency? Because NIH is presently deciding if and how it will fund research on alternative medicine. In this process boundaries are being created that will define what it will consider "appropriate" approaches and topics for research. Will qualitative and mixed qualitative-quantitative approaches be honored? We need to make our voices heard. In my experience with biomedical researchers and a wide variety of practitioners, I have found that few have social science concepts to help them contextualize their actions. Some ideas that my colleagues have found helpful include: the concept that much medical care is organized into systems; the concepts of naturalistic and personalistic approaches, and of reductionistic and holistic approaches; the concept that different systems define the body, disease/sickness, and health in deeply different ways; the significance of dyadic vs polyphonic treatment settings; . . . and many other issues that are core to medical anthropology and sociology but have not ventured far from our disciplinary borders. The point here is not to teach Med Anthro 101, but to link these concepts to the proper performance of research.
For example, since scientific medical research has been primarily developed for use in biomedicine, which assumes that "humans are fundamentally very much alike," a demand for "standardization" has become normative in research design. The conference could be organized into a sequence of sessions: Grand Rounds (in which practitioners use a "grand round" format to demonstrate their diagnostic techniques and interpretive models and link these to the conceptual schemata already presented); User Profiles and Models (in which information on user populations and explanatory models is presented); Research Issues (in which the preceding is connected in such a way as to define the problematics of scientific research on systems other than biomedicine); and a final section on Providing Service, in which the earlier information is tied to a universal concern, that of trying to help those who suffer. The conference would be small and select so that a seminar-like energy could prevail. A few speakers would have an hour or so to speak, and the audience would repeatedly break into small groups to discuss and develop the ideas the speaker presented. Small groups would present their thoughts in some sort of round-table sharing. A proceedings — in fact, a trade book with wide dissemination — would emerge from the conference, hugely expanding the audience and the effectiveness of the integrative experience. I like this scenario, though I also know it is idealistic. But shall we aim for it? A first step is to respond to this idea. A second is to accumulate funds. Who would wish to contribute funds to such a venture? Do we, in this organization, have the drive to develop such a conference? Respond in this newsletter, to the IASTAM North American officers, and/or directly to me at 6201 Winnebago Road, Bethesda MD 20816, 301-229-7718. Looking forward to your ideas! Farewell to Mark Nichter Mark Nichter recently resigned as President of the North American Chapter of the International Association for the Study of Traditional Asian Medicine.
He had served in that office for the past several years, and now relinquishes the post to Professor Vincanne Adams (Princeton University). Mark Nichter is one of the premier medical anthropologists in the United States, a scholar whose work is recognized as among the very best in the field. During his tenure in office many important developments took place, including the reinvigoration of this newsletter after a period of neglect. We wish Mark well in his future work, and we look forward to seeing him at future meetings of the chapter. Dues are Due!!! 1994 All IASTAM members are required to pay annual dues to their regional chapters. If you wish to continue receiving this newsletter, you must send in your dues immediately to one of the following offices: North America $20 Dr. Steve Ferzacca Department of Anthropology University of Wisconsin Madison, Wisconsin 53706 Other Areas $20 Dr. Ken Zysk Department of Near Eastern Languages and Literature New York University 50 Washington Square South New York, NY 10003 USA Call for papers The IASTAM Newsletter encourages you to submit brief reports concerning your current research. These will be published in the newsletter, to encourage scholarly debate and initiate exchanges among interested readers. The Newsletter is especially eager to continue debate on topics that may be of particular concern, such as the meaning of "traditional" in traditional Asian medicine. If there are other controversial topics you would like to propose for discussion and debate, please inform the editor. One possibility, certainly, would be the meaning of "medical"—just where do we draw the boundaries? We are committed to making this publication a lively vehicle for the expression of diverse points of view. European Chapter of IASTAM Once again, unfortunately, no news is available from the European Chapter, whose President is Lawrence Conrad of the Wellcome Institute.
We invite the European members of the organization to send in their news and comments, and to volunteer for book reviews. Your participation is essential to the organization and we look forward to hearing from you.
RAILWAYS OF AUSTRALIA Vehicle/Track Studies: Study No 2. VOLUME TWO: DRAFT TECHNICAL REPORT Section 2: Test Section Performance December, 1993 ## Table of Contents | Section | Page | |------------------------------------------------------------------------|------| | 2.1 SUMMARY OF TRACK GEOMETRICAL PERFORMANCE | 2.1 | | 2.1.1 Track Geometry Data | 2.2 | | 2.1.2 Track Maintenance Requirements | 2.2 | | 2.1.3 Discussion on Observations | 2.5 | | 2.2 SUMMARY OF BALLAST PERFORMANCE | 2.6 | | 2.2.1 General Considerations | 2.6 | | 2.2.2 In-Situ Ballast Density Measurements | 2.6 | | 2.2.3 Test Section Ballast Geotechnical Data | 2.7 | | 2.2.4 Brief Descriptions of Geotechnical Tests | 2.7 | | 2.2.5 Comparison of Geotechnical Results | 2.8 | | 2.3 FORMATION PENETRATION CHARACTERISTICS | 2.10 | | 2.4 SUMMARY OF TEST SECTION INSPECTIONS | 2.10 | | 2.4.1 Control Group: 1A, 1B, 1C (60 Kg rail) & 6B (68Kg rail). | 2.11 | | 2.4.2 Alternate Ballast Depth: 2A, 2B, 2C & 2D (SF-1 Sleepers). | 2.11 | | 2.4.3 Alternate Ballast Grade: 3A & 3B (SF-1 Sleepers). | 2.12 | | 2.4.4 Alternate Sleeper Spacing: 4A, 4B & 4C (SF-1 Sleepers). | 2.12 | | 2.4.5 Steel Sleepers: Section 5A. | 2.12 | | 2.4.6 Timber Sleeper Group: Sections 5B & 6B. | 2.13 | | 2.4.7 CR-2 Sleepers: Section 5C. | 2.13 | | 2.4.8 Alternate Fastenings Group: Section 7 Section 7A: Hambo Fastening | 2.15 | | Section 7B: Sidewinder 2 Fastening Section 7C: Sidewinder 1 Fastening | | | Section 7D: Vossloh Fastening Section 7E: Springlock Fastening | 2.15 | | 2.4.9 Rail Asymmetry and Straightness: Section 8. 
| 2.17 | | 2.5 CONCLUSIONS ON TEST SECTION PERFORMANCE | 2.18 | | 2.5.1 Track Geometry | 2.18 | | 2.5.2 Maintenance | 2.18 | | 2.5.3 Ballast | 2.18 | | 2.5.4 Sleepers | 2.19 | | 2.5.5 Formation | 2.19 | 2.1 SUMMARY OF TRACK GEOMETRICAL PERFORMANCE 2.1.1 Track Geometry Data Unfortunately, the instrumentation used to record track geometry, located in WESTRAIL’s Matissa PV 6 track recording car, became unusable in the early 1980s. Some early work was partially successful using hand digitisation techniques, see Section 4.3 of WESTRAIL report TS 403.1/20, Makin 1988. Subsequent reliability tests, however, highlighted other deficiencies causing variation in recorded track geometry that appeared to depend significantly on the car’s direction of travel and the ambient temperature at the time. As a result, the decision was made to continue with Study No 2 without PV 6 track geometry information. In December 1992, Australian National’s EM 80 track recording vehicle became available and a full east to west recording run was made on the Perth to Kalgoorlie Standard Gauge line. This was the first of numerous recordings subsequently made by AN for WESTRAIL. Appendix 8 provides listings and graphs of the December 1992 results for the following track quality indicators in the Test Sections. 1. Gauge (negative magnitudes are tight). 2. Line Left (alignment of the UP rail). 3. Line Right (alignment of the DOWN rail). 4. Top Left (UP rail dip). 5. Top Right (DOWN rail dip). 6. Cross Level (the difference between Top Right and Top Left). In each case, the quality indicator is a cumulative standard deviation of the data at 500 mm spacing in the central 500 metres of each Test Section. This provides an indicator of deterioration, or departure from design.
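The indicator computation can be illustrated with a short sketch (illustrative only: the function name, the sample readings, and the use of mean ± 2 standard deviations for the 97.7% UCL/LCL shown in the figures are assumptions, not the actual EM 80 processing):

```python
import statistics

def gauge_quality(readings_mm):
    """Variance indicator for gauge readings taken at 500 mm spacing
    over the central 500 metres of a Test Section.

    Negative readings are tight gauge.  The 97.7% UCL/LCL are taken
    here as mean +/- 2 standard deviations (an assumption: 97.7% is
    the one-sided normal probability at 2 sigma).
    """
    mean = statistics.fmean(readings_mm)   # average departure from nominal gauge
    sd = statistics.pstdev(readings_mm)    # dispersion: the variance indicator
    return {"mean": mean, "sd": sd, "ucl": mean + 2 * sd, "lcl": mean - 2 * sd}

# Hypothetical readings (mm relative to nominal; the track was built ~3 mm tight).
indices = gauge_quality([-3.0, -3.5, -2.5, -4.0, -3.0])
```

A higher variance indicator then corresponds to a greater departure from design, as described above.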
With the gauge quality indicator, for example, see Figures 2.1.1(i)&(ii) (which are duplicated in Appendix Figures A8.1(i)&(ii)), the higher the magnitude, the greater the departure from nominal gauge (negative magnitudes are tight) on the date the recording run was made. 2.1.2 Track Maintenance Requirements Since early 1980, records have been maintained of direct expenditure against all track maintenance work conducted on each Test Section over the period, see Figures 2.1.2(i)&(ii). The relevant data are listed in Appendix 3. Expenditure associated with maintenance administration and other associated overhead costs has not been collected. Figures 2.1.2(i)&(ii) provide total and specific track geometry cumulative maintenance expenditure for the Test Sections up to December 1992. It appears that more work has been done in the West Cunderdin sections, particularly Sections 5A and 5B (Figure 2.1.2(i): 163 to 173 Km), compared to the East Cunderdin sections (Figure 2.1.2(ii): 180 to 193 Km). Costs associated with travelling or waiting time are not included. [Figures 2.1.1(i)&(ii) Gauge variance: 500 metre mid Test Section averages, December 1992. Each plot shows the variance indicator (mm) against location (kilometres), with mean and 97.7% UCL/LCL, for Sections 1C, 2B, 2A, 6A, 6B, 5B, 5A and for Sections 1A, 4A, 4B, 4C, 7, 5C, 3B, 3A, 2C, 2D, 1B respectively.]
[Figures 2.1.2(i)&(ii) Maintenance Costs Up to 31 December 1992 (1992 Dollars). Each plot shows unit kilometre cost ($ thousands), total and track geometry components, against location (kilometres) for the two groups of Test Sections.] 2.1.3 Discussion on Observations 2.1.3.1 Track Geometry Several reports not part of Study No 2 (for example, Szito 1992) have been prepared addressing the cracking performance of standard SF-1 concrete sleepers used on the Kwinana to Koolyanobbing standard gauge line upgrade. Similar sleeper behaviour was noted for the SF-1 sleepers in the Test Sections. By December 1992, physical inspections confirmed that West Cunderdin Test Sections 1C (standard), 2B (ballast depth 300 mm) and 2A (ballast depth 150 mm) contained a greater proportion of cracked sleepers. The average was 80% cracked, compared to East sections 1A (standard), 4A (sleeper spacing 700 mm), 4B (sleeper spacing 640 mm) and 4C (sleeper spacing 610 mm), where the average was less than 20%. There appears to be a definite tightening of gauge, from less than 3 mm to more than 4 mm, accompanying more cracked sleepers. However, more data is needed for a rigorous statistical confirmation. It has been separately observed that concrete sleepers exhibit weather shrinkage. The test track was also constructed with approximately 3 mm tight gauge, see WESTRAIL report TS 403.1/1, Duncan 1984. This is reflected in Figures 2.1.1(i)&(ii). The tightening of gauge in the timber Sections 6B and 5B is apparent and not typical of timber sleepered track. 2.1.3.2 Track Maintenance The on-rail instrumentation was located in the centre of each Test Section and prohibited through passage of working tampers and related track maintenance machinery.
Sometimes work was conducted up to the monitoring point and machines disengaged to a point beyond before continuing. The records available therefore reflect the maintenance work done for the bulk of each Test Section and not necessarily at the actual instrumentation point. From Figures 2.1.2(i)&(ii), track geometry maintenance expenditure for the control Test Sections 1A and 1C was approximately 50% of the total expenditure reported. For most of the other sections, the proportion was above this. In contrast with the concrete sleeper sections, track maintenance expenditure for the timber sleeper section, Test Section 5B with 60 Kg rail, was excessive. As noted in Section 2.4.6, this was reflected in the greater level of structural deterioration of karri sleepers and track structure visually observed during physical inspections. Similar expenditure was not evident for the only other timber sleeper section, Test Section 6B with 68 Kg rail. Track modulus evaluations, see Section 7, in Test Section 5B were less than in most of the concrete sleepered sections. This reflected a "softer" structure accompanied by greater structural deterioration and increased maintenance requirements. 2.2 SUMMARY OF BALLAST PERFORMANCE 2.2.1 General Considerations In terms of track/rail structural requirements, the ballast layer performs various functions (Selig and Waters, 1992). The most important of these are: (1) To resist vertical, lateral and longitudinal forces applied to the sleepers so as to retain the track in its required position. (2) To provide some of the resiliency and energy absorption for the track. (3) To provide large voids for the storage of fouling material in the ballast. (4) To facilitate track geometry maintenance operations by allowing the rearrangement of ballast material. (5) To provide immediate drainage of water falling on to the track. (6) To reduce the pressure from the sleeper bearing area to levels acceptable for the underlying material.
A discussion of the current WESTRAIL ballast research work has been previously prepared (Duncan, 1982). Duncan provided a thorough introduction to and assessment of the work done as part of ROA Study No 2, where alternate ballast types and depths have been introduced into the Test Sections. Ballast depth (Standard Ballast Type): Sections 2A and 2D: Ballast depth 150 mm. Sections 2B and 2C: Ballast depth 300 mm. For the remainder, the standard ballast depth is 230 mm. Ballast type (Standard Ballast Depth): Section 3A: New ballast from Hampton Quarry at Kalgoorlie, W.A. (with LAA < 20%). Section 3B: New ballast from Meckering Quarry (with LAA > 30%). For the remainder, the existing ballast was used with make-up ballast from Meckering. The contact pressure distribution between the sleeper and the ballast is mainly dependent upon the degree of voiding in the ballast under the sleeper. Voiding is induced by traffic loading and results in gradual settlement and other changes in the structure of the ballast and subgrade. The difficulty of determining the in-track pressure distribution for a sleeper has been noted (for example, Jeffs & Tew, 1991; Volume 2, Section 3). In Study No 2, this task was attempted using specially developed analysis techniques, see Section 6. 2.2.2 In-Situ Ballast Density Measurements In early 1982, a trial air permeability technique was introduced to measure the density of in-situ ballast without the need to remove sleepers or disturb ballast from the track structure, see WESTRAIL report TS 403.1/9, Duncan 1982. Unfortunately, the technique was still under development and the validation status of the calibration curves used was never confirmed. The effects of ambient wind and other geometrical considerations significantly limited confidence in ballast densities measured using air permeability techniques.
Although there was little variation between Test Sections, measurements taken appeared to be significantly higher than the maximum obtained from separate laboratory density measurements, see WESTRAIL report TS 403.1/9, Duncan 1982. At the time it was suggested that significant ballast crushing might have occurred and that a significant concentration of in-situ fines was present. The air system for ballast density measurement was not used again during the course of Study No 2. ### 2.2.3 Test Section Ballast Geotechnical Data As an integral part of assessing and monitoring the performance of ballast in each Test Section, three full sets of ballast samples were collected and analysed. On each occasion, ballast samples were collected from below the 45% loaded zone in previously undisturbed cribs without disturbing the track. The timing for sample collection was as follows: 1. June 1982 (at approximately 20 MGT). 2. February 1984 (at approximately 33 MGT). 3. May 1987 (at approximately 56 MGT). For each sample the following industry standard soils laboratory testing was performed: 1. Ballast Particle Gradation. 2. Crushing Value (ACV %). 3. Cement Value (CV MPa: Test Sections 1C, 3A and 3B only). 4. Toughness: Los Angeles Abrasion (LAA %). 5. Hardness: Mill Abrasion (MA %). Tests on the 1982 samples were conducted by a different soils laboratory from the 1984 and 1987 samples. ### 2.2.4 Brief Descriptions of Geotechnical Tests #### 2.2.4.1 Particle Gradation Ballast particle size gradation involves washing and mechanical sieving procedures to develop a particle size/frequency distribution. Results presented in Appendix 9 include cumulative frequency distributions of WESTRAIL Grade A ballast. #### 2.2.4.2 Crushing Value The aggregate crushing value (ACV) gives a relative measure of the resistance of ballast material to crushing under a gradually applied load.
#### 2.2.4.3 Cement Value The cement value (CV) test measures the mechanical compressive strength of cemented ballast material. The testing process involves crushing, sieving, moulding and mould strength testing. High cement values indicate ballast fines which bond strongly when cemented. Selig and Waters (1992) have suggested that specimen preparation methods do not simulate field conditions and cementation values, therefore, may not measure the actual tendency for cementation to be a problem in-situ. #### 2.2.4.4 Abrasion Tests The LAA test involves dry crushing processes that measure the ballast material’s relative toughness, or its tendency to break. By contrast, the MA test involves wet crushing processes that measure relative particle hardness. The MA wet crushing process produces finer material than LAA dry crushing. In recent work (Selig and Waters, 1992), it has been suggested that the two tests are complementary and actually measure different rock characteristics: the MA test measures rock particle hardness and the LAA test measures rock particle strength or toughness. A combined Abrasion Number (AN) is suggested by Selig and Waters as an appropriate index for abrasion and has been adopted by some North American operators. \[ AN(\%) = LAA(\%) + 5\,MA(\%) \] 2.2.5 Comparison of Geotechnical Results The ballast in Test Section 3A was made from tougher material (sourced from Hampton Quarry at Kalgoorlie with LAA < 20%) than the Meckering quarry material. This difference was consistent with the LAA test results in Appendix 9. With the combined Abrasion Number (AN) introduced in Section 2.2.4.4, the apparent abrasion ranged from about 30% for Test Section 3A to nearly 60% for 3B. However, the corresponding hardness (MA index) results indicated little difference in particle hardness. The noted differences in abrasive characteristics between the two ballast types were also contrasted with cementation value results.
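The combined Abrasion Number reduces to a one-line calculation, sketched below (the function name and the input values are illustrative assumptions, not measured Test Section data):

```python
def abrasion_number(laa_percent, ma_percent):
    """Combined Abrasion Number of Selig and Waters (1992):
    AN(%) = LAA(%) + 5 * MA(%)."""
    return laa_percent + 5.0 * ma_percent

# Hypothetical index values: a tough, hard ballast versus a softer one.
an_tough = abrasion_number(18.0, 2.5)   # 30.5
an_soft = abrasion_number(32.0, 5.5)    # 59.5
```

With these hypothetical inputs, a low-LAA ballast comes out near 30% and a high-LAA ballast near 60%, the same ordering as the Test Section 3A/3B results reported below.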
In this case, the lower abrasion for 3A ballast was accompanied by higher strength cementation (above 2 MPa). The Meckering ballast (Test Section 3B) appeared to offer a compromise, with both reasonable abrasive character and reduced cementation strength. It has been argued (Selig and Waters, 1992) that cementation test methods do not simulate field conditions and cementation values may not measure the actual tendency for cementation to be a problem in-situ. The standard ballast depth was 230 mm. In contrast, Test Sections 2A & 2D were shallow ballasted (depth 150 mm) and 2B & 2C were deeper (ballast depth 300 mm). From the Appendix 9 results, no difference in ballast characteristics appeared to correlate with this depth variation. [Figures 2.2.4(i)&(ii): Ballast comparison of cementation (CV index) and abrasion (AN index) for Sections 1B (made-up, LAA > 30%), 3A (LAA < 20%) and 3B (LAA > 30%).] 2.3 FORMATION PENETRATION CHARACTERISTICS Under the dry summer conditions of late February 1993, a dynamic cone penetrometer was used in an attempt to investigate Test Section formation strength. The rate of penetration was measured as the number of blows required for the standard 9 kg weight, falling a preset distance, to advance a 20 mm diameter cone 50 mm into the formation. For each Test Section, tests were carried out in sleeper cribs adjacent to the track force monitoring instrumentation, usually down to about 350 mm below the top of formation. Unfortunately, no formation penetration measurements were made at any earlier stage of Study No 2. Results from this exercise are provided in Appendix 10. For each plot, the Top of Rail (TOR) was taken as the upper depth reference. The top of formation to top of rail distance (nominated Design Ballast Height, DBH) will depend on the structural parameters of each particular section.
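The blow-count procedure can be reduced to a simple depth profile of penetration resistance, sketched below (the function name, the depth reference and the blow counts are hypothetical; the Appendix 10 plots reference depths to Top of Rail instead):

```python
def penetration_profile(blow_counts, increment_mm=50):
    """Build a depth profile from dynamic cone penetrometer counts.

    Each entry in blow_counts is the number of blows of the standard
    9 kg weight needed to advance the 20 mm cone a further 50 mm.
    Depths here are measured down from the first increment
    (hypothetical reference, not the report's Top of Rail datum).
    """
    profile = []
    depth = 0
    for blows in blow_counts:
        depth += increment_mm
        profile.append((depth, blows))  # (depth in mm, blows per 50 mm)
    return profile

# Hypothetical counts showing a hard layer at the ballast/formation
# interface that eases with depth, down to 350 mm.
profile = penetration_profile([12, 9, 5, 4, 3, 3, 2])
```

More blows per 50 mm increment indicates harder material; a spike in the first increments would correspond to the hard layering at the ballast/formation interface discussed below.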
Based on historical Test Section structural design information, it is understood that the present track was built on the original formation. The Appendix 10 results indicate an easing of formation penetration resistance as formation depth increases. There were significant differences in penetration characteristics between the Test Sections. For most, results indicated hard layering, i.e. more blows per 50 mm of penetration, at the ballast/formation interface. To a varying degree, this confirmed the presence of formation/ballast interaction mechanisms such as compression, ballast breakdown, ballast settlement or formation mechanical compression. Section 1A (standard), for example, had a particularly hard top of formation layer, whereas for sections 1B (standard) and 4A (sleeper spacing 700 mm) it appeared to be less detectable. The last track force data acquisition was conducted in August 1985, some seven years before the penetration tests. From Appendix 10, for Test Sections 1A (standard) and 1B (standard) for example, there were noted differences in below ballast penetration characteristics. However, such differences did not appear as significant track modulus evaluation differences in the data collected in 1985 and earlier, see Section 7. A current set of track force measurements would be needed to quantify relationships between below ballast penetration characteristics and in-situ vertical load/track deflection behaviour. 2.4 SUMMARY OF TEST SECTION INSPECTIONS A total of 18 general walking inspections of Test Sections were conducted from early 1980 through to late 1992. Individual reports were prepared for each inspection and inspection dates are listed in Appendix 2. In addition, several specific inspections, looking at the performance of wooden sleepers for example, were conducted during this period. For identification purposes, all sleepers in each Test Section were numbered beginning at the western end.
Several reports not part of Study No 2 (for example, Szito 1992) have been prepared addressing the cracking performance of SF-1 concrete sleepers used on the Kwinana-Koolyanobbing standard gauge line upgrade. Similar performance has been noted for the SF-1 sleepers in the Test Sections. The problems have been identified as the result of material selection for sleeper manufacture and were not specific to the Test Sections. Walking inspections have visually confirmed: (1) distinct colouration differences for sleepers more prone to cracking, with pinker (as compared to greyer) colouring more evident for cracked sleepers; (2) batches marked 1978 are more prone to cracking than 1979 marked batches. Where surface cracks are evident on SF-1 sleepers, cracking appears to initially propagate longitudinally (perpendicular to the rail) from the sleeper ends. Cracks then extend towards the rail seat. Cracking in the gauge portion generally appears later in the life of the sleeper. Longitudinal cracking also propagates on the sides of SF-1 sleepers, initiating from the FIST fastening tube. To investigate and report on sleeper performance, specific sleepers in each Test Section were identified and marked as Crack Measurement Sleepers. This enabled specific crack length measurements to be progressively made and monitored. Ballast was removed from the two adjacent cribs as part of this investigation process. A historical summary and performance comparison is provided below for each Test Section according to track structure generic groupings. No reference is made where no abnormalities were apparent. In each case, cumulative months since installation and gross tonnages are specified. 2.4.1 Control Group: 1A, 1B, 1C (60 Kg rail) & 6B (68 Kg rail). 2.4.1.1 Inspection #8: September/October 1985 (75 Months & 43 MGT) Formation failure in Section 1B necessitated ballast make-up and geometry corrective maintenance towards the eastern end of the Test Section.
2.4.1.2 Inspection #10: October/November 1986 (90 Months & 50 MGT) Following removal of ballast from the adjacent cribs of specific crack monitoring sleepers, progressive horizontal cracking was observed in all Test Sections to initiate from the FIST fastening tube. On previous inspections, sleepers had not been thoroughly examined. 2.4.1.3 Inspection #15: May 1989 (123 Months & 71 MGT) Cracking became apparent in the gauge section of most badly cracked sleepers. 2.4.2 Alternate Ballast Depth: 2A, 2B, 2C & 2D (SF-1 Sleepers). 2.4.2.1 Inspection #1: November 1980 (12 Months & 6 MGT) In Section 2A, one sleeper was identified as having longitudinal cracks (parallel with the rail) in the gauge portion of the sleeper, not typical of the later observed cracking patterns for SF-1 sleepers. 2.4.2.2 Inspection #4: March 1983 (43 Months & 27 MGT) In Sections 2C and 2D, track maintenance machinery was reported to have damaged the fastenings of 34 sleepers since the last inspection. 2.4.3 Alternate Ballast Grade: 3A & 3B (SF-1 Sleepers). 2.4.3.1 Inspection #5: November 1983 (52 Months & 31 MGT) In Section 3B, 13 rail pads were reported to have moved. This was a greater proportion than in Section 3A or the standard control Test Sections. This did not appear to have resulted from track machinery, as no prior maintenance operations were reported in either 3A or 3B. 2.4.3.2 Inspection #11: April 1987 (96 Months & 54 MGT) In Sections 3A and 3B, Cobbler Pool ballast was reported to have been dumped on the shoulder along most of the Test Section. This occurred in error during routine maintenance operations. The foreign ballast was later removed, no tamping was done, minimal contamination occurred and maintenance costs for 3A were accordingly inflated. 2.4.4 Alternate Sleeper Spacing: 4A, 4B & 4C (SF-1 Sleepers). 2.4.4.1 Inspection #4: March 1983 (43 Months & 27 MGT) In Section 4C, 11 rail pads were reported to have moved.
This was a greater proportion than in Sections 4A or 4B. This did not appear to have resulted from track machinery, as no prior maintenance operations were reported for Test Section 4C. 2.4.5 Steel Sleepers: Section 5A. 2.4.5.1 Inspection #1: November 1980 (12 Months & 6 MGT) In Section 5A, dirty ballast was reported. 2.4.5.2 Special Inspection of Section 5A: August 1984 (61 Months & 36 MGT) In August 1984 a special inspection of Section 5A was conducted to survey the cracking damage noted to the pads used with both Pandrol and Trak-Lok fastenings, see Appendix A7.3. Trak-Lok pads appeared to be predominantly cracking horizontally whereas the Pandrol pads appeared to be cracking vertically at the rail seat hinge line. 2.4.5.3 Inspection #8: September/October 1985 (75 Months & 43 MGT) In Section 5A, 12 pads were renewed, 6 Pandrol and 6 Trak-Lok. To date in Section 5A, 16 steel sleepers had been noted to be skewed (no longer perpendicular to the rail). All occurrences were likely to have resulted from track maintenance machinery activity during the 1984/85 period. 2.4.5.4 Inspection #11: April 1987 (96 Months & 54 MGT) In Section 5A, 20 pads were replaced by a new type, 10 Pandrol and 10 Trak-Lok. 2.4.5.5 Inspection #18: December 1992 (171 Months & 103 MGT) By December 1992, a total of 9 Trak-Lok clips (out of 1400) and no Pandrol clips (out of 1400) were noted to be dislodged in Section 5A. The new type of pad installed in April 1987 had not developed visible cracks at the hinge. 2.4.6 Timber Sleeper Group: Sections 5B & 6B. 2.4.6.1 Inspection #1: November 1980 (12 Months & 6 MGT) In Section 5B, the track was reported to crunch underfoot on the shoulders of some sleepers. This observation was noted on all inspections.
2.4.6.2 Inspection #5: November 1983 (52 Months & 31 MGT) The timber sleepers in Sections 5B and 6B were subject to a detailed inspection and report (Duncan 1983), see Appendix A7.1. The sleepers were performing no worse than other treated karri sleepers in service elsewhere. Five Pandrol clips from Section 5B were forwarded to Pandrol for testing, see Appendix A7.2. No specific causes of the observed clip dislodging were found. 2.4.6.3 Inspection #7: December 1984 (66 Months & 38 MGT) In Sections 5B and 6B, exposed treated timber surfaces were beginning to look like untreated timber; the treatment was still obvious on ballast covered surfaces. This was also noted in subsequent inspections and appeared to be much worse in Section 5B. 2.4.6.4 Inspection #18: December 1992 (171 Months & 103 MGT) By December 1992, approximately 220 (out of 1627) timber sleepers were noted to be splitting or rotting in Section 5B. 2.4.7 CR-2 Sleepers: Section 5C. 2.4.7.1 Inspection #1: November 1980 (12 Months & 6 MGT) On the first inspection in Section 5C, the CR-2 sleepers were reported to have begun to develop cracks. 2.4.7.2 Inspection #3: November 1980 (34 Months & 21 MGT) In Section 5C, 9 individual CR-2 sleepers were identified and recorded as reference sleepers so as to monitor the progress of the observed cracking. 2.4.7.3 Special Inspection of Section 5C: January 1984 (54 Months & 32 MGT) In January 1984 a special inspection of Section 5C was conducted to survey cracked CR-2 sleepers. Twelve pads were replaced with a more compressible type. The pads removed were forwarded to Australian National. In subsequent inspections, no difference in sleeper performance was reported with the new pads. 2.4.7.4 Inspection #10: November 1986 (90 Months & 50 MGT) In Section 5C, the pads replaced at the time of Inspection #8 now indicated the same signs of cracking as the originals.
2.4.7.5 Inspection #11: April 1987 (96 Months & 54 MGT) A summary report was prepared detailing performance up to April 1987 of the Section 5C CR-2 sleepers (Inspection #1 through to Inspection #11), see WESTRAIL report TS 403.1/17, Page 1987. During this period the following cracking performance statistics had been recorded. **Figure 2.4.7(i)** Test Section 5C: CR-2 Sleeper Cracking v Load Statistics Following a request from Australian National, 10 CR-2 sleepers were removed from Section 5C and dispatched to AN for analysis. New CR-3 type sleepers were installed as replacements. In subsequent inspections, no cracking of the new CR-3 sleepers was observed. 2.4.7.6 Inspection #14: November 1988 (117 Months & 66 MGT) A further 9 CR-2 sleepers were removed and replaced by CR-3 sleepers in Section 5C. No cracking of CR-3 sleepers had been observed to date. 2.4.8 Alternate Fastenings Group: Section 7 Section 7A: Hambo Fastening Section 7B: Sidewinder 2 Fastening Section 7C: Sidewinder 1 Fastening Section 7D: Vossloh Fastening Section 7E: Springlock Fastening 2.4.8.1 Inspection #7: December 1984 (66 Months & 38 MGT) A summary report was prepared detailing performance up to December 1984 of the Springlock fastening system (Inspection #1 through to Inspection #7), see WESTRAIL report TS 403.1/14, O’Rourke 1985. Difficulties with broken or dislodged heel blocks, leaf springs, hoops and hoop insulators had been experienced. By early 1985 all heel blocks had been replaced. **Figure 2.4.8(i)** Test Section 7: Tunnelcrete Sleeper Cracking v Load Statistics 2.4.8.2 Inspection #9: April 1986 (83 Months & 47 MGT) A summary report was prepared detailing performance up to April 1986 of the Vossloh fastening system (Inspection #1 through to Inspection #9), see WESTRAIL report TS 403.1/15, Page 1986.
All of the Tunnelcrete sleepers had developed cracks which were still propagating. Most appeared to generate from the fastening bolt hole laterally (perpendicular to the rail) into the gauge and field portions of each sleeper. During this period the following cracking performance statistics had been recorded. 2.4.8.3 Inspection #10: November 1986 (90 Months & 50 MGT) A summary report was prepared detailing performance up to October 1986 of the Springlock fastening system (to Inspection #10), see WESTRAIL report TS 403.1/16, Page 1987. It was concluded that this particular fastener was unsatisfactory, and the Springlock fastening subsection was subsequently formally abandoned from Study No 2. 2.4.8.4 Inspection #12: October 1987 (103 Months & 58 MGT) A summary report was prepared detailing performance up to October 1987 of the Hambo fastening system (Inspection #1 through to Inspection #12), see WESTRAIL report TS 403.1/21, Makin 1988. During this period a significant portion of the Swedish Rail System (SRS) sleepers had developed cracks. Clip and insulator skewing, damage and dislodging were also common occurrences. During this period the cracking performance statistics of Figure 2.4.8(i) had been recorded. Since Inspection #1, 21 Vossloh fastening insulator pads (out of 568) had become dislodged or skewed. 2.4.8.5 Inspection #16: November 1990 (143 Months & 84 MGT) All SRS sleepers (Hambo fastening) were reported to have cracked. Since Inspection #1, 20 Sidewinder 1 clips (out of 588) and 12 Sidewinder 2 clips (out of 600) had been observed to have become dislodged or skewed. 2.4.9 Rail Asymmetry and Straightness: Section 8. In early 1984, this Test Section developed a bog hole and was formally abandoned from Study No 2. 2.5 CONCLUSIONS ON TEST SECTION PERFORMANCE 2.5.1 Track Geometry Reliability testing of track geometry data from WESTRAIL’s Matissa PV 6 track recording car indicated the data to be unsatisfactory.
The decision was made to continue Study No 2 without PV 6 historical track geometry data. In December 1992 a different vehicle was used, and the concrete sleepered test track generally exhibited tight gauge. The tightening of gauge in the timber Sections 6B and 5B was apparent and not typical of timber sleepered track. There appeared to be a definite tightening of gauge accompanying greater proportions of in-situ cracked sleepers. 2.5.2 Maintenance For Test Sections 1A and 1C, maintenance expenditure was about 50% of the total section requirement. For the majority of the other sections, this proportion was greater. There was excessive maintenance expenditure associated with the timber sleepered Test Section 5B. In 5B there were greater levels of visual deterioration of karri sleepers and track structure observed during physical inspections. Track modulus evaluations in the timber sleepered Test Sections were less than in most of the concrete sleepered sections. This reflected a "softer" overall track structure accompanying the greater degree of visual deterioration. 2.5.3 Ballast Wind and other effects limited confidence in in-situ ballast density measurements made using the air permeability technique. Measurements appeared to be significantly higher than the maximum obtained from laboratory density measurements on WESTRAIL standard Grade A ballast. The ballast in Test Section 3A was sourced with LAA < 20%. In 3B, Meckering LAA > 30% ballast was used. With the combined Abrasion Number (AN), the apparent abrasion ranged from about 30% for Test Section 3A to nearly 60% for 3B. The Meckering ballast (Test Section 3B) appeared to offer a compromise, with both reasonable abrasive character and reduced cementation strength. The standard ballast depth was 230 mm. Test Sections 2A & 2D were shallow ballasted (depth 150 mm) and 2B & 2C were deeper (ballast depth 300 mm). No difference in ballast characteristics appeared to correlate with this depth variation.
2.5.4 Sleepers The sleeper cracking performance of the standard SF-1 concrete sleeper used on the Kwinana to Koolyanobbing line has been the subject of a separate study. The problems were not specific to the Test Sections and were the result of material selection during manufacture. Where surface cracks were evident, cracking appeared to initially propagate longitudinally (perpendicular to the rail) from the sleeper ends. Cracks then extended towards the rail seat. Cracking in the gauge portion generally appeared later in the life of the sleeper. Longitudinal cracking also propagated on the sides of SF-1 sleepers, initiating from the FIST fastening tube. Walking inspections visually confirmed distinct colouration differences for sleepers more prone to cracking, and that batches of 1978 manufacture were more prone to cracking. 2.5.5 Formation High track quality, especially near the instrumented Test Sections, was the result of well bedded foundations, a high standard of construction and lower than anticipated gross tonnages. High axle load iron ore trains ceased running on the track in mid-1983, resulting in reduced overall tonnages and reduced average axle loads, with consequently reduced track structural and geometrical deterioration. There were significant differences in formation penetration resistance characteristics. For most sections, results indicated hard layering at the ballast/formation interface and the presence of formation/ballast interaction mechanisms such as compression, ballast breakdown, ballast settlement or formation mechanical compression. More data is necessary to confirm relationships between below ballast penetration characteristics and in-situ vertical load/deflection behaviour.
UNDER THE INQUIRIES ACT 2013 IN THE MATTER OF A GOVERNMENT INQUIRY INTO OPERATION BURNHAM AND RELATED MATTERS MEMORANDUM OF COUNSEL FOR THE CROWN AGENCIES 16 August 2019 CROWN LAW TE TARI TURE O TE KARAUNA PO Box 2858 WELLINGTON 6140 Tel: 04 472 1719 Fax: 04 473 3482 Contact Person: Aaron Martin / Ian Auld Counsel acting: Paul Rishworth QC MAY IT PLEASE THE INQUIRY: 1. These submissions are filed at the invitation of the Inquiry, on behalf of the Crown Agencies participating in the Inquiry. They address a number of issues arising from the public hearing for module 3, held on 29 and 30 July 2019. Jurisdiction is the basis of legal obligation in International Human Rights Law 2. In the submissions in response to issues arising out of module 2, the Crown Agencies noted the fundamental importance of jurisdiction when determining the applicability of the relevant human rights instruments,\(^1\) and when assessing whether any non-refoulement obligation could apply. In particular, the Crown Agencies noted: 2.1 That the application of the International Covenant on Civil and Political Rights (ICCPR) is dependent on jurisdiction: Art 2 requires states to “ensure to all individuals within its territory and subject to its jurisdiction the rights recognised in [the] covenant” [emphasis added]; 2.2 that any non-refoulement obligation, whether derived from the ICCPR or the Convention Against Torture (CAT), is premised on the transfer of an individual from the jurisdiction of one State to the jurisdiction of another State. 3. The Crown Agencies’ submissions following module 2 address the issue of jurisdiction in detail. 
The submissions below are not intended to repeat those submissions, but instead address two issues that arose in module 3: 3.1 First, they respond to Professor Akande’s discussion on the issue of whether jurisdiction can be established by a State’s ability to apply force alone, as is suggested by the United Nations Human Rights Committee (UNHRC) in General Comment 36 on Article 6 of the ICCPR (General Comment 36).\(^2\) 3.2 Secondly, they address the Chairperson’s question regarding the distinction between the obligations arising in respect of people --- \(^1\) International Covenant on Civil and Political Rights (ICCPR), United Nations Convention Against Torture (CAT), New Zealand Bill of Rights Act 1990 (BORA). \(^2\) United Nations Human Rights Committee, General comment No. 36 (2018) on article 6 of the International Covenant on Civil and Political Rights, on the right to life, CCPR/C/GC/36. detained by the NZDF and people detained by Afghan authorities with the assistance of the NZDF.\textsuperscript{3} \textit{Jurisdiction in International Human Rights Law is not established by the application of force alone} 4. In his presentation at the hearing for module 3, Professor Akande elaborated on his written opinion on the question of whether the fact that a State has the ability to take an individual’s life means it has sufficient control to establish jurisdiction and thereby trigger its obligations under the ICCPR, as suggested in General Comment 36.\textsuperscript{4} In particular, Professor Akande expanded on his discussion of the decision of the European Court of Human Rights in \textit{Al-Skeini v United Kingdom},\textsuperscript{5} and concluded that, although the ratio of that case does not support the impact-based approach to jurisdiction proposed in General Comment 36, the logic espoused by the Court in the decision might support such an approach.\textsuperscript{6} 5. 
The Crown Agencies respectfully disagree with Professor Akande’s analysis on this point. 6. In \textit{Al-Skeini}, the applicants’ argument on jurisdiction was that, due to the fact that the British armed forces had responsibility for public order in Iraq, there was a particular relationship of authority and control between the soldiers and civilians killed. Accordingly, the applicants submitted that “to find that the individuals fell within the authority of the United Kingdom armed forces would not require the acceptance of the impact-based analysis which was rejected by the Court in \textit{Bankovic},\textsuperscript{7} but would instead rest on a particular relationship of authority and control”.\textsuperscript{8} It was this reasoning that was accepted by the Court in deciding that the United Kingdom had jurisdiction on the basis that it exercised some of the public powers normally exercised by a sovereign \begin{footnotesize} \textsuperscript{3} Transcript of hearing for module 3, day 1 at p. 73 to 74. \textsuperscript{4} For completeness, the Crown Agencies also again note that the relevant paragraphs of General Comment 36 have not enjoyed support from States (see footnote 32 of the Crown Agencies’ presentation for module 3). \textsuperscript{5} \textit{Al-Skeini v United Kingdom} (2011) 53 EHRR 18. \textsuperscript{6} Transcript of hearing for module 3, day 2, from p. 173. \textsuperscript{7} \textit{Bankovic & Others v Belgium & Ors} (App No. 52207/99) (2001) 44 EHRR SE5. \textsuperscript{8} \textit{Al-Skeini} at [124]. \end{footnotesize} government, and assumed authority and responsibility for the maintenance of security in South-East Iraq.\textsuperscript{9} 7. Accordingly, it is wrong to conclude that the Court intended to adopt the impact-based approach to jurisdiction that it had specifically rejected in \textit{Bankovic}, and which is proposed in General Comment 36. In fact, it is submitted the Court \textit{specifically did not} adopt that approach. 8. 
In support of his analysis that the Court’s logic suggests support for an impact-based approach to jurisdiction, Professor Akande points to the part of the judgment where the Court noted that its jurisprudence had established that “in certain circumstances, the use of force by a State’s agents operating outside its territory may bring the individual thereby brought under the control of the State’s authorities into the State’s Article 1 jurisdiction”.\textsuperscript{10} However, in discussing this jurisprudence, the Court cited only cases where the relevant states had exercised personal authority and control over the relevant individuals by detaining them, or exercised control over the place they were detained (e.g. a prison, ship or aircraft). 9. One commentator has summarised the \textit{Al-Skeini} decision as follows:\textsuperscript{11} \begin{quote} The Court applied a \textit{personal} model of jurisdiction to the killing of all six applicants, but it did so only \textit{exceptionally}, because the UK exercised public powers in Iraq. But, \textit{a contrario}, had the UK not exercised such public powers, the personal model of jurisdiction would not have applied. In other words, \textit{Bankovic} is, according to the Court, still perfectly correct in its result. While the ability to kill is ‘authority and control’ over the individual if the state has public powers, killing is not authority and control if the state is merely firing missiles from an aircraft. \end{quote} 10. This was also essentially the conclusion reached by the Court of Appeal of England and Wales in \textit{Al-Saadoon}, as noted in the presentation filed prior to module 3.\textsuperscript{12} With respect, the Crown Agencies submit that the Court of Appeal’s conclusion should be preferred over Professor Akande’s analysis on this issue. \textsuperscript{9} At [149]. \textsuperscript{10} At [136]. 
\textsuperscript{11} M. Milanovic “\textit{Al-Skeini} and \textit{Al-Jedda} in Strasbourg” EJIL (2012), Vol. 23 No. 1, 121–139. \textsuperscript{12} \textit{Al Saadoon & Ors v Secretary of State for Defence and Anor} [2017] 2 All ER 453; [2016] WLR(D) 491 at [73]. See Crown Agencies’ presentation for Module 3 at [123]. Refoulement requires that the person was subject to the jurisdiction of the transferring State 11. The fundamental difference between the obligations of the Crown in respect of people detained by the NZDF, pursuant to authority under the United Nations Security Council (UNSC) mandate, and people detained by the Afghan authorities in operations involving the NZDF, pursuant to Afghan criminal law, is that non-refoulement obligations applied to the former, but not the latter. 12. As was also noted in the submissions following module 2, the Crown Agencies accept that the non-refoulement obligation under Art 3 of the CAT and derived from Arts 6 and 7 of the ICCPR is engaged when a detainee subject to New Zealand's jurisdiction is transferred to the jurisdiction of another State even where that transfer takes place exclusively within the territory of the other State. This is supported by the United Nations High Commissioner for Refugees (UNHCR) position that the principle of non-refoulement “applies wherever a State exercises jurisdiction, including at the frontier, on the high seas or on the territory of another State” and that the decisive criterion is whether the person comes within the effective control and authority of the State.\(^{13}\) 13. However, non-refoulement obligations are only engaged once a person has come within the jurisdiction of the “transferring” State. 
To respond to the Chairperson’s question at the module 3 hearing,\(^{14}\) the distinction between detentions conducted by New Zealand forces and detentions conducted by Afghan forces in partnered operations is not that in the latter case a person would come within Afghanistan’s jurisdiction earlier, but rather that they would always be in Afghanistan’s jurisdiction and would not come within New Zealand’s jurisdiction at all (as discussed in the submissions filed following module 2). While there can be reasonable debate over whether this makes any moral or ethical difference, it clearly has legal significance: non-refoulement obligations do not apply. 14. However, as discussed below, the Crown Agencies accept New Zealand had other international legal obligations in respect of partnered operations. \(^{13}\) *Advisory Opinion on the Extraterritorial Application of Non-Refoulement Obligations under the 1951 Convention relating to the Status of Refugees and its 1967 Protocol*, UNHCR, January 2007. \(^{14}\) Transcript of hearing for module 3, day 1, at p.74. The New Zealand State did not aid or assist the Afghan State to commit torture 15. As was noted in the Solicitor-General’s advice to the NZDF, dated 2 November 2010 (Solicitor-General’s Opinion), while New Zealand’s non-refoulement obligations were not engaged in respect of detainees taken by Afghan authorities in New Zealand partnered operations, New Zealand was subject to an obligation to ensure that assistance provided to Afghan authorities did not amount to aiding or assisting any internationally wrongful act. Accordingly, the Crown Agencies agree with Sir Kenneth that the international law relating to State complicity may be the most relevant to partnering operations.\(^{15}\) 16. 
Although paragraph 7.8 of the terms of reference highlights the judgment in *Maya Evans* as a significant factor in assessing whether the “transfer or transportation” of Qari Miraj in January 2011 was “proper”, it is worth noting that that judgment does not address the question of complicity in internationally wrongful acts: 16.1 the *Maya Evans* case concerned the lawfulness of the application of the United Kingdom’s detainee transfer policy. That policy was concerned with ensuring that individuals detained by the United Kingdom were not transferred to the Afghan authorities when there was a “real risk of torture”. That legal test (“real risk of torture”) is the test applicable under Article 3 ECHR to the issue of non-refoulement. 16.2 the United Kingdom’s detainee transfer policy did not address the question of the United Kingdom’s complicity in torture when individuals arrested by the Afghan authorities in partnered operations were subsequently tortured. As a result, *Maya Evans* is silent on this question. As will be discussed, the test for State complicity in international law is different from the test for a breach of the non-refoulement principle. 17. While it is established in customary international law, reflected in Art 16 of the International Law Commission’s (ILC) Articles on State Responsibility, that a \(^{15}\) Expert Opinion of Sir Kenneth Keith at p.16; Transcript of hearing for module 3, day 1, at p.35. State can bear responsibility for assisting another State to commit an internationally wrongful act, the threshold to establish such responsibility is contested. The Crown is in the process of developing a firm position on this issue.\textsuperscript{16} 18. 
The ILC’s commentary to Art 16 of the Articles of State Responsibility indicates that, to constitute complicity, aid or assistance must be given “with a view to facilitating the commission of the wrongful act, and must actually do so” and that “this limits the application of article 16 to those cases where the aid or assistance given is clearly linked to the subsequent wrongful conduct.” It further states that “the assisting State will only be responsible to the extent that its own conduct has caused or contributed to the internationally wrongful act”. 19. In the \textit{Bosnian Genocide} case, the International Court of Justice (ICJ) confirmed these principles in noting that complicity for an internationally wrongful act required: i) a positive action to furnish aid or assistance to the perpetrator of the wrongful act; and ii) the provision of support in full knowledge of the facts relating to the wrongful act.\textsuperscript{17} 20. There are, then, three elements of complicity: i) a positive act; ii) a sufficient causal connection between the positive act and the internationally wrongful act;\textsuperscript{18} and iii) a mental element. 21. The mental element of complicity is vital. The law cannot intend that a State assisting another State to conduct a lawful activity is liable for subsequent unlawful activity conducted by the assisted State, unless there is some intention, or at least full knowledge, that the assistance given will facilitate that unlawful activity. Such a result would not accord with the philosophical basis \textsuperscript{16} Independently of this Inquiry, the Ministry of Foreign Affairs and Trade is currently considering this issue as part of a broader suite of advice to government on the question of complicity for internationally wrongful acts in international law. \textsuperscript{17} \textit{Case Concerning Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia-Herzegovina v. 
Yugoslavia)}, International Court of Justice (ICJ), 26 February 2007, at [432]. \textsuperscript{18} While not directly applicable, the approach to complicity in International Criminal Law may be relevant by analogy in interpreting state responsibility for assistance. The leading decision of the International Criminal Tribunal for the former Yugoslavia regarding complicity in torture indicates that, to constitute aid and assistance under international criminal law, the assistance rendered must have a “substantial effect” on the commission of the crime (\textit{Prosecutor v Furundžija} (Trial Judgement), IT-95-17/1-T, 10 December 1998 at [234]). This has also been indirectly endorsed by the ICC (\textit{Prosecutor v. Lubanga} ICC-01/04-01/06, 14 March 2012). It would make sense for the same, or a similar, standard to apply in respect of State responsibility. of the legal doctrine of complicity,\textsuperscript{19} and would have an undesirable chilling effect on inter-State cooperation.\textsuperscript{20} 22. The importance of the mental element is highlighted by the \textit{Bosnian Genocide} case where, despite finding that the crimes committed in Srebrenica were committed with resources provided as part of a general policy of aid and assistance by the Federal Republic of Yugoslavia (FRY) towards the Republika Srpska, the ICJ held that complicity could not be made out because it could not be proved that the FRY supplied aid to the perpetrators of the genocide “in full awareness that the aid supplied would be used to commit genocide”.\textsuperscript{21} 23. There is some debate as to the mental element requirement for a State to be liable, via complicity, for an internationally wrongful act under Art 16 of the Articles of State Responsibility, as was discussed in detail in the presentations for module 3. The Crown notes that the considerable weight of international legal opinion supports the view that an element of intention is required. 
The ILC’s commentary (“with a view to facilitating the commission of the wrongful act”) supports that view. A number of academic commentators also support that view.\textsuperscript{22} This approach is also consistent with that proposed by both the United States and United Kingdom during the drafting of the Articles of State Responsibility,\textsuperscript{23} which appears to have influenced the ILC’s commentary to Art 16 discussed above.\textsuperscript{24} 24. Although the ICJ, in its articulation of the requirements for complicity in the \textit{Bosnian Genocide} case, did not need to determine whether an intention to assist is required in addition to knowledge of the essential facts, its framing of the issue (a positive act to furnish aid and assistance in full knowledge of the facts \textsuperscript{19} See J. Crawford \textit{State Responsibility: the General Part} (Cambridge University Press, 2013) at 404, quoting commentary by another of the International Law Commissioners, Roberto Ago, during the drafting of Art 16 that “the very ‘idea’ of complicity in the internationally wrongful act of another presupposes an intent to collaborate”. Although not directly relevant, this is also reflected in the simple formulation in our own law that “the essence of aiding and abetting is intentional help”. \textsuperscript{20} For a helpful discussion of this point, see H. Aust \textit{Complicity and the Law of State Responsibility} (Cambridge University Press, 2011) at 238 – 241. \textsuperscript{21} At [421]-[422]. \textsuperscript{22} As discussed in the Crown Agencies’ presentation for module 3 at [98] to [112]. \textsuperscript{23} International Law Commission, “State Responsibility – Comments and Observations Received from Governments” (2001) UN Doc A/CN.4/515 at p.52. 
It is interesting that the position of the UK and USA reflects the standard approach in common law that aid and assistance requires both knowledge of the essential facts of the unlawful act and an intention to assist. This is also the position in our own criminal law. \textsuperscript{24} It is also relevant that the ILC subsequently adopted a test of intention for aiding and assisting in international criminal law. See Art 25(3)(c) of the Rome Statute. relating to the wrongful act) is consistent with an intention requirement. Full knowledge that a positive action would inevitably assist the commission of an internationally wrongful act could, we submit, permit intention to be imputed.\footnote{While knowledge and intent are distinct elements in law, proof of a sufficient degree of knowledge may provide an inference of intention: see for example, in the domestic law context, \textit{R v Jogee} [2016] UKSC 8; \textit{Ruddock v R} [2016] UKPC 7. See also Crawford, op.cit, at 408: “Additionally, as the first reading commentary may be taken as indicating, if aid is given with certain or near certain knowledge as to the outcome, intent may be imputed.”} 25. In any event, as will be discussed below, the knowledge element is not fulfilled in the current case and so, as in the \textit{Bosnian Genocide} case, there is no need to determine whether an element of intent is required. 26. What is clear from the \textit{Bosnian Genocide} case, and generally accepted in the literature, is that to establish complicity there must be proof of actual knowledge of the circumstances of the internationally wrongful act.\footnote{Again, see Crown Agencies’ presentation for module 3 at [98] to [112].} 27. 
A threshold of constructive knowledge is not consistent with the ICJ’s formulation\footnote{See also [432], where the Court specifically distinguished complicity from a duty to prevent on the basis that complicity requires that support be given “in full knowledge of the facts”, whereas a standard of constructive knowledge (“aware, or should normally have been aware”) and recklessness (“of a serious danger”) is sufficient to establish a breach of a duty to prevent.} and is unlikely to be sufficient. This is supported by the fact that the ILC did not accept a proposal from the Netherlands that the wording of the knowledge element of Art 16 be changed to read: “the state does so when it knows \textit{or should have known} the circumstances of the internationally wrongful act.”\footnote{International Law Commission, “State Responsibility – Comments and Observations Received from Governments” op.cit. at p.52. See also M. Jackson \textit{State Complicity in International Law} (Oxford University Press, 2015), at p161. Again, incidentally, this is consistent with the approach to knowledge in cases of complicity in our own law: \textit{Commerce Commission v New Zealand Bus Ltd} (2006) 11 TCLR 679, 8 NZBLC 101, 774 (HC) at [231].} 28. Similarly, recklessness, in the sense of knowledge of a “real risk” that the aided state will commit an unlawful act, is also insufficient. A leading commentator has noted as follows:\footnote{Jackson op. cit at pp.161 – 2.} \begin{quote} In practice, the standard of knowing participation means awareness with something approaching practical certainty as to the circumstances of the principal wrongful act. Dilution from that standard – the slide into reckless assistance – starts to become inconsistent with the essential derivative nature of complicity and may indeed undermine valuable international cooperation. 
\end{quote} 29. While the Crown Agencies acknowledge that some academics have argued that wilful blindness may be sufficient, there is no State practice to support this and the weight of international legal opinion, including the \textit{Bosnian Genocide} case, currently supports the view that actual knowledge of the circumstances of the internationally wrongful act is required. 30. As the above discussion demonstrates, the test applicable to non-refoulement (“real risk”) is not the same as that for complicity. It appears that Sir Kenneth agrees with this position.\(^{30}\) In the Crown’s submission, merging these two legal frameworks by applying a test for complicity that is met simply by providing aid or assistance to another State with the knowledge that there was a “real risk” the other State might commit an internationally wrongful act would require a significant departure from current international law. Its practical consequences would be felt across the full suite of areas of international cooperation and would have a substantial chilling effect on that cooperation. **Applying the Art 16 test to the circumstances relevant to paragraph 7.8 of the terms of reference** 31. Even proceeding on the hypothetical basis that Qari Miraj were, in fact, tortured by the Afghan authorities, New Zealand’s participation in a partnered operation to arrest him would not amount to aid and assistance in his torture.\(^{31}\) 32. The Crown Agencies accept that the first element of the test for complicity is made out: New Zealand’s agents took a positive action to assist the Afghan authorities. However, it was assistance in the arrest and transportation of Qari Miraj. It is not alleged that New Zealand State agents directly aided or assisted the Afghan authorities to commit torture. As such, the second element of the test for complicity (a sufficient causal connection between the positive act and the internationally wrongful act) is less clear-cut. 
The question is whether \(^{30}\) Transcript of the hearing for module 3, day 1, at page 35. A leading commentator on State complicity, Miles Jackson, also appears to agree that this is the current state of the law (although he argues for an extension of the law): M. Jackson “Freeing Soering: The ECtHR, State Complicity in Torture and Jurisdiction” *EJIL* (2016), Vol. 27 No. 3, 817–830. See also Aust op. cit. at 239: “A requirement of intent is also the only possible conceptual means to distinguish the situation of complicity in the sense of Article 16 ASR from the typical situation of non-refoulement.” \(^{31}\) The Crown Agencies note that the Inquiry’s Terms of Reference do not direct it to determine whether the New Zealand State was in fact complicit in torture conducted by Afghan authorities. First, the Inquiry has no jurisdiction to make determinations about the actions of forces or officials other than NZDF forces or New Zealand officials. A finding of complicity would first require a determination that Afghan officials tortured a particular person detained with the assistance of the NZDF. Secondly, the Inquiry, in common with all inquiries under the Inquiries Act, has no power to determine the civil, criminal, or disciplinary liability of any person. This includes the civil liability of the New Zealand State at international law. providing aid and assistance to the Afghan authorities to conduct a lawful arrest could have a substantial effect on the commission of any subsequent torture. That is a matter for the Inquiry to determine. 33. In any case, it is clear that the third element of the test for complicity is not made out. New Zealand’s agents had no intention to assist in the torture of Qari Miraj, nor did they have actual knowledge that his detention by the Afghan authorities would lead to his torture. 
To the extent that the Inquiry considers constructive knowledge or wilful blindness to be appropriate standards to establish complicity (which is not accepted), New Zealand was neither wilfully blind to the fact that Qari Miraj would be tortured in detention, nor should it have known that he would be tortured. 34. As set out in the Solicitor-General’s Opinion, the Crown Agencies accept that, had the Government known that there was a systemic practice amongst the relevant Afghan authorities of torturing detainees, then it would have been required to restrict or withdraw its cooperation, as necessary, until that risk was addressed, in order to avoid the risk of complicity.\(^{32}\) 35. As paragraph 7.8 of the terms of reference highlights the *Maya Evans* judgment as significant to an assessment of whether the transfer of Qari Miraj was “proper”, it is worth noting that that judgment did not support a prohibition on transfers of prisoners to all detention facilities in Afghanistan. While serious concerns were raised about ill-treatment of detainees, the Court did not conclude that there was a real risk of torture (let alone knowledge or wilful blindness that torture would occur) in all Afghan facilities. 36. In light of the *Maya Evans* judgment, the Government took a number of steps to inform itself of the overall conduct of Afghan authorities. In particular, the Government took the following steps: 36.1 During a visit to Afghanistan in August 2010, Dr Mapp reiterated New Zealand’s concerns on the treatment of detainees and sought assurances of the humane treatment of detainees apprehended by the Afghan National Security Forces (ANSF), especially when operating \(^{32}\) Noting that, on the ICJ’s formulation of the test for complicity, the practice of torture in detention would have to be so widespread and systemic as to make torture a near certainty. with the support of the NZSAS. 
Dr Mapp also received updates on the progress of improved surveillance of NDS facilities. He was briefed on improvements within Afghan prisons, particularly where international assistance had helped the NDS improve its investigative, forensic and evidence-based methodology and support to modernise detention facilities in Kabul. 36.2 New Zealand joined with a number of international partners in a detainee working group to assist the Afghan Government to upgrade detention facilities, systems and practices, including within the NDS. 36.3 New Zealand informally liaised with Afghan authorities and other ISAF troop contributing nations to discuss detainee issues. 37. Moreover, New Zealand was aware that the International Committee of the Red Cross (ICRC), the Afghan Independent Human Rights Commission (AIHRC), and the United Nations Assistance Mission in Afghanistan (UNAMA) carried out monitoring of Afghan detention centres, together with like-minded troop contributing nations\(^{33}\) who transferred detainees (i.e. detainees captured by ISAF) directly to the Afghan authorities, including the NDS. Given these nations were also conscious of their non-refoulement obligations, it was reasonable to assume that any information indicating that the relevant Afghan authorities routinely committed torture would have been identified by those nations, and this information shared with New Zealand, through ISAF, and particularly through the detainee working group. 38. In addition, in parallel with partnered law enforcement operations, New Zealand was involved in other activities specifically intended to reduce risks of torture, and other human rights breaches in Afghan facilities, for example providing training to the Crisis Response Unit on the professional and humane conduct of their duties. 
New Zealand was also cooperating with other nations and contributing, within its means, to a wider international effort to address human rights issues in Afghanistan at a systemic level, including by providing funding to UNAMA and the AIHRC. The Crown Agencies submit that this \(^{33}\) Nations that New Zealand considers share its commitment to human rights. evidences the fact that New Zealand’s assistance to the Afghan authorities was not provided “with a view to facilitating torture”. Instead the opposite is true. 39. The Crown Agencies submit that New Zealand’s conduct post-*Maya Evans* does not support a contention that, when participating in the operation to arrest Qari Miraj: i) it had knowledge that Qari Miraj *would be tortured* in detention; or ii) it was wilfully blind to the fact that Qari Miraj would be tortured in detention. Moreover, there is no evidence at all to suggest an intent to assist the Afghan authorities in torture. 40. As a result, the Crown Agencies reject any suggestion that it is arguable in law that New Zealand aided or assisted an internationally wrongful act by Afghanistan. 41. However, the Crown Agencies accept, given the subsequent findings of UNAMA in its report of October 2011 on Treatment of Conflict-Related Detainees in Afghan Custody (*UNAMA Report*), that the Government might reflect critically on its policy position post-*Maya Evans*. That is, while the legal position set a baseline for New Zealand’s conduct, it did not set a ceiling on the protections New Zealand might have sought as a matter of *policy*. Subject to resourcing considerations and the consent of the Afghan authorities, it might have been open to New Zealand to explore the possibility of detainee monitoring as a component of any partnering arrangement. 42. The reasoning behind the decision not to pursue this policy at the time includes that set out in the Solicitor-General’s Opinion at [68] to [72]. 
Monitoring of detention facilities would have required the consent of the Afghan authorities (which may not have been forthcoming for those arrested in partnered operations). It would also have required either a larger deployment with more resources, or a reduction in capacity to undertake the tasks for which NZDF was deployed and specialised to undertake. However, in hindsight, the Crown Agencies accept that it may have allowed New Zealand to identify and respond to the practices identified in the UNAMA Report earlier. 43. To address directly the Chairperson’s question from the module 3 hearing as to the reason why New Zealand could not have pursued a monitoring regime for detainees arrested by Afghan authorities in partnered operations: 43.1 As a matter of *policy*, and subject to resourcing considerations and the consent of the Afghan authorities, it may have been open to New Zealand to pursue such a regime. Although the decision not to pursue such a policy may be reflected on critically in hindsight, there were legitimate reasons for not doing so at the time. 43.2 As a matter of *international law*, it is not accepted that there was any obligation to do so. 44. It is also important to note that when New Zealand did become aware of a widespread risk of torture shortly before the UNAMA Report was released, steps were taken, in accordance with ISAF policies, to address this issue. **The Government complied with Article 1 of the Geneva Conventions** 45. It is not contentious that under Common Article 1 of the 1949 Geneva Conventions a State has positive and negative obligations in respect of its own troops and their actions. However, the Crown Agencies note that the existence of any positive obligations from Common Article 1 for a State in respect of the troops and actions of another State has been the subject of much academic debate, which has not been settled by any competent court or tribunal, or through a consistent approach in state practice. 46. 
The Crown Agencies accept that Common Article 1 imposes an obligation not to provide aid, assistance or encouragement to another State’s commission of a breach of International Humanitarian Law (IHL). However, the extent to which it imposes a positive obligation, and the nature of any such obligation, is very much open for debate. This is a developing area of international law, upon which the Crown has not yet formed a comprehensive view. --- 34 Transcript of hearing for module 3, day 1 at p 74. 35 *Military and Paramilitary Activities in and Against Nicaragua (Nicaragua v. United States of America)*, Merits [1986] ICJ Rep 14 at [220]. 36 For example, see the differing views of commentators in the following commentary: ICRC, Commentary on the First Geneva Convention: Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, 2nd edition, 2016; C. Focarelli “Common Article 1 of the 1949 Geneva Conventions: A Soap Bubble?” *EJIL* (2010), Vol. 21 No. 1, 125–171; K. Dörmann and J. Serralvo “Common Article 1 to the Geneva Conventions and the obligation to prevent international humanitarian law violations” IRRC (2014) 96 (895/896), 707-736. 47. Even if Common Article 1 did impose a positive obligation for a State in respect of the troops and actions of another State, the New Zealand State met this obligation through its various efforts to promote compliance by the Afghan authorities with IHL and IHRL, as discussed in the Crown Agencies’ presentation for module 3.\textsuperscript{37} The distinction between “security operations” and “active hostilities” paradigms 48. There has been some suggestion that Operation Burnham be seen as domestic “law enforcement” (of Afghan arrest warrants by Afghan authorities) rather than a situation of “armed hostilities”. This may be an allusion to the framework suggested by the authors of the \textit{Practitioners’ Guide to Human Rights Law in Armed Conflict}\textsuperscript{38} as helpful for determining the interaction of IHL and IHRL in cases where both apply.\textsuperscript{39} 49. 
The Crown Agencies say in response: 49.1 The situation in Afghanistan at the relevant time constituted a non-international armed conflict. Accordingly, IHL applied throughout Afghan territory.\textsuperscript{40} 49.2 As submitted at the hearing, the interaction of IHL and IHRL falls for resolution only when both apply. New Zealand’s IHRL obligations did \textit{not} apply to the main aspects of the Operation Burnham mission, which was undertaken in an area over which New Zealand exercised no jurisdiction, and in circumstances where New Zealand did not exercise state agent authority and control over any person. 49.3 Even if IHRL did apply, given the facts on the ground, the operative framework within which to consider Operation Burnham (using the tools of analysis suggested by the authors) was plainly one of “active hostilities” throughout. \textsuperscript{38} Wilmshurst, Hampson, Garraway, Lubell, Akande (eds), Oxford 2016. \textsuperscript{39} As the authors explain (at 4.34), “[t]he term ‘security operations’ denotes activities which are largely of the nature of law enforcement but, since they are carried out within armed conflict, including situations of occupation, the term ‘law enforcement’ was not thought appropriate”. \textsuperscript{40} \textit{Prosecutor v Tadić} (decision on the defence motion for interlocutory appeal on jurisdiction) IT-94-1 2 Oct 1995 at [70]. 
49.4 But even if that were not the case, and the operation were cast as a “security operation”, the following observations of the authors are germane:41 Under the ‘security operations’ framework, in circumstances connected to a non-international armed conflict or other significant disturbances, the standard of scrutiny applied to the State’s actions will be determined with a certain degree of flexibility, in order to ensure that an unrealistic burden is not placed on the State or its agents, including in a manner that could be detrimental to the protection of life. 50. The overall result – even if it be assumed, contrary to the Crown Agencies’ submission, that IHRL applied to New Zealand in relation to events in Tirgiran Valley along with IHL – is in accord with the ICJ’s articulation of the relationship between the two bodies of law: a loss of life in circumstances permitted by IHL is not arbitrary under IHRL. 51. For completeness, the Crown Agencies observe that Operation Yamaha was, in contrast, a security operation conducted by Afghan authorities. As such, IHRL applied to the Afghan authorities, alongside IHL. The principle of distinction is concerned with attacks intentionally directed against civilians 52. As noted in the presentation for module 3, the Crown Agencies generally agree with the characterisation of the relevant principles of IHL described by Sir Kenneth Keith and Professor Akande in their papers. However, there is one issue that arose during the hearing that the Crown Agencies consider requires response. 53. In answer to questioning by the Inquiry, Professor Akande stated that a State can be legally liable for a breach of the principle of distinction where civilians or civilian objects are unintentionally targeted by its forces in the course of armed conflict.42 Professor Akande distinguished individual international criminal liability for directing attacks against civilians or civilian objects, which --- 41 *Practitioners’ Guide*, paragraph 5.12.
42 The Crown Agencies interpreted the exchange in question as relating to *direct* attacks. Accordingly, the Crown Agencies do not understand Professor Akande to have suggested that the State can be liable for a breach of distinction where civilian casualties have occurred as an incidental result of an attack on a military objective. clearly requires intention,\textsuperscript{43} from State responsibility for the same conduct, which he stated could be established on the basis of strict liability. With respect, the Crown Agencies disagree with this approach. 54. Leading commentators have posited that it is the intent to attack civilians that is the \textit{sine qua non} of the principle of distinction, not the fact that civilians are actually harmed. It is on this basis that the prohibition on attacking civilians applies even if the intended attack proves unsuccessful.\textsuperscript{44} 55. Further, the principle of distinction must be considered alongside other principles of IHL, notably the obligation to take precautions to avoid civilian casualties in attack. In effect, the principle of distinction prohibits intentional targeting of civilians and civilian objects, whereas the obligation to take precautions and verify targets aims to prevent the accidental targeting of civilians and civilian objects under the apprehension that they are military objectives. Neither the principle of precaution nor the principle of distinction imposes strict liability for attacks on civilians or civilian objects. Accordingly, as the Crown Agencies noted in the presentation for module 3, an unintentional attack on a civilian or civilian object, for example based on a genuine mistake as to their status, will not be a breach of the principle of distinction. 56.
A State may be liable for breaches of IHL by its agents.\textsuperscript{45} However, the principle of distinction is concerned with intentional targeting of civilians, and, accordingly, accidental targeting, based on a mistake as to status, is not a breach of IHL. Accordingly, as there is no underlying breach of IHL, there is no basis upon which \textit{either} an individual or the State could be liable.\textsuperscript{46} \begin{footnotesize} \begin{itemize} \item[43] Rome Statute of the International Criminal Court (\textbf{ICC Statute}), Arts 8(2)(b)(i) and (ii) and 8(2)(c)(i) and (ii). \item[44] T. Gill and D. Fleck, ed. \textit{The Handbook of the International Law of Military Operations}, 2\textsuperscript{nd} ed (Oxford University Press, Oxford, 2007) at [16.02.07]. \item[45] See ICRC CIHL Rules, Rule 149 and the commentary that follows. See also M. Sassoli “State responsibility for violations of international humanitarian law” IRRC Vol 84 No 846 (June 2002) at p.401. \item[46] See also B. Bonafe \textit{The Relationship Between State and Individual Responsibility for International Crimes} (Leiden, Martinus Nijhoff, 2009) at p.245, where the author notes as follows: “First, positing the unity of state and individual responsibility for international crimes at the level of primary norms is essential to determine the actual content of the relationship between these two regimes, as revealed by the analysis of international practice. A crucial element of this relationship is the correspondence of the conduct amounting to an international crime and giving rise to both state and individual responsibility. Therefore, conduct triggering a dual responsibility under international law cannot be qualified differently (i.e., as lawful or unlawful) according to whether that conduct is assessed from the perspective of state versus individual responsibility.” \end{itemize} \end{footnotesize} Obligation to gather information to verify target and assess proportionality 57.
At the hearing for module 3, in the context of a discussion on the principle of proportionality in attack, the Chairperson asked a question as to whether a military planner is under an obligation to attempt to gather information in order to assess the expected harm to civilians (on the one hand) and anticipated military advantage (on the other hand) resulting from an attack.\footnote{Transcript of hearing for module 3, day 1, at p.59. Brigadier Ferris also provided an answer to this question during her presentation – see Transcript of hearing for module 3, day 2, at p.192.} 58. The planner of an attack must do everything feasible to confirm that targets are military objectives and to assess the proportionality of a planned attack. The intent of this requirement is to provide sufficient information to permit an attack to be conducted with reasonable certainty that the target is a military objective (i.e. to comply with the principle of distinction) and that the attack will comply with the principle of proportionality. As noted in the presentation for module 3, the obligation to take all "feasible" precautions has been interpreted by many States (including New Zealand) as being all those precautions which are practicable or practically possible, taking into account all circumstances at the time, including humanitarian and military considerations. Feasible measures include timely collection, analysis and dissemination of intelligence. The Crown Agencies note that what is “feasible” will depend on the circumstances of the case and is ultimately a matter of common sense and good faith.\footnote{See \textit{Gill and Fleck} at [16.07.03]. See also commentary to ICRC CIHL Rule 15.} Command responsibility requires knowledge 59. 
In the presentation for module 3, the Crown Agencies set out the test for command responsibility for war crimes committed by subordinates, by reference to the elements established in the ICC decision in \textit{Bemba}.\footnote{At [71].} One of those elements is that the commander must have known, or should have known, that their subordinates were committing or about to commit a war crime. 60. At the hearing, the Chairperson questioned whether, if it is assumed that coalition air assets breached IHL (an assumption which we understood to be hypothetical given the absence of evidence to support that conclusion),\textsuperscript{50} the Ground Force Commander could be responsible for such a breach, if, when clearing the aircraft to engage, he did not ask whether the target had been positively identified or inquire about the possibility of collateral damage. In other words, could the Ground Force Commander be responsible for a breach of IHL by a pilot of a non-New Zealand air asset if he failed to confirm with the pilot that the engagement would comply with IHL before giving clearance? Is a failure to inquire sufficient to establish knowledge that a breach of IHL will occur, as required for command responsibility? 61. In the absence of any information to the contrary, the Ground Force Commander would be entitled to assume that the pilot operating in a coalition of allies would comply with IHL (and their Rules of Engagement) in any attack. In the absence of any information to indicate that the pilot would breach IHL, there can be no factual finding that the Ground Force Commander “knew or should have known” that this would occur. Accordingly, a failure to inquire is insufficient to establish the knowledge required for command responsibility. 62. In any event, based on the command relationship as described in the memorandum of counsel for the NZDF dated 19 July 2019, “clearance” to engage did not constitute an order to engage.
Neither would a refusal of clearance have constituted an \textit{order} not to engage. Accordingly, the Ground Force Commander did not have sufficient command or control over the coalition pilots for their actions to be attributable to him on the basis of command responsibility. \textsuperscript{50} And in relation to which there has been, and can be, no finding as the Inquiry has no jurisdiction to make determinations about the actions of forces or officials other than NZDF forces or New Zealand officials. 63. Accordingly, on the hypothetical assumption that a coalition pilot breached IHL, the Ground Force Commander could not be liable on the basis of command responsibility even if the requisite knowledge element was met. 16 August 2019 ______________________________________ Paul Rishworth QC / Ian Auld Counsel for the Crown Agencies
The electron runaway mechanism in dense gases and the production of high-power subnanosecond electron beams V F Tarasenko, S I Yakovlenko DOI: 10.1070/PU2004v047n09ABEH001790

Contents
1. Introduction
2. Electron multiplication and runaway electrons: 2.1 Local electron runaway criterion; 2.2 Electron multiplication; 2.3 Nonlocal electron runaway criterion; 2.4 On the ignition criterion of a self-sustained discharge
3. Electron-beam production in dense gases: 3.1 Experiments on electron-beam production in dense gases; 3.2 On the mechanism of electron-beam production at atmospheric pressure
4. Production of a nanosecond discharge at atmospheric pressure: 4.1 Experiments on volume-discharge production at atmospheric pressure without a supplementary preionization source; 4.2 On preionization mechanisms; 4.3 Background multiplication front in a nonuniform field
5. Conclusions
References

Abstract. New insight is provided into how runaway electrons are generated in gases. It is shown that the Townsend mechanism of electron multiplication works even for strong fields, when the ionization friction of electrons can be neglected. The nonlocal electron runaway criterion proposed in the work determines the critical voltage $U_{\text{cr}}$–$pd$ relationship as a two-valued function universal for a given gas ($p$ being the gas pressure, and $d$ the electrode spacing). This relationship exhibits an additional upper branch in contrast to the familiar Paschen curves and divides the discharge gap into two regions: one where electrons multiply effectively, and the other which they leave without having enough time to multiply. Experiments on the production of electron beams with subnanosecond pulse duration and an amplitude of tens to hundreds of amperes at atmospheric pressure in various gases are addressed, and the creation of a nanosecond volume discharge with a high excitation power density and without preionization of the gap by a supplementary source is discussed.
1. Introduction In the present review, we discuss three aspects of the physics of discharges at pressures on the order of atmospheric pressure, which are related to the production of high-power subnanosecond electron beams. First, we give a review of works that have formed the basis for the new understanding of the mechanism of production of a beam of runaway electrons in gases (Section 2). Second, we present the results of experiments on the production of electron beams of subnanosecond duration with a record amplitude of the current in gas-filled atmospheric-pressure diodes (Section 3). Third, we analyze some properties of volume discharges of nanosecond duration, properties that have helped produce such powerful beams (Section 4). As is well known, the runaway of electrons (the whistler mode) occurs in fully ionized plasma placed in a strong electric field, when the majority of electrons acquire on their mean free path more energy from the field than they lose in elastic collisions, with the result that the electrons are continuously accelerated. The phenomenon of runaway of electrons in plasma was predicted long ago by Giovanelli [1]; the numerical calculations of this effect were done by Dreicer [2] and Kulsrud et al. [3], and an analytical investigation for the case of weak fields was carried out by Gurevich [4]. The phenomenon has proved important in the diagnostics and energy balance of impurities in tokamak plasmas [5]. Runaway of electrons can also be observed in gases (see the references cited in the review [6] and in the monographs [7, 8]). It triggers what is known as open discharges in gases of moderate density [9–13] (such discharges are used, among other things, to pump lasers [9, 14, 15]). 
It is customary to assume that runaway of electrons in a gas is described in approximately the same way as in a fully ionized plasma, i.e., the assumption is that runaway of electrons sets in when the force with which the electric field acts on an electron exceeds the decelerating force [6–8]. However, computer simulations and analytical treatment suggest that most electrons in a gas occupying a fairly large volume are not continuously accelerated [16–19]. No matter how high the electric field strength is, at certain distances from the cathode the Townsend ionization mode sets in for the vast majority of electrons, and this mode is determined by two factors. First, the number of ionization acts grows exponentially with the distance from the cathode. Second, the mean velocity and energy of the electrons do not depend on this distance. The common approach [6–8] leads to a local criterion for the electric field strength that determines the condition in which, as is ordinarily assumed, many runaway electrons appear. The criterion is that the electric field strength must exceed the value at which the energy that an electron acquires on the mean free path is balanced by the maximum energy loss through ionization of the gas. In Section 2 we discuss the reasons why the ordinary approach cannot be applied to the majority of electrons in conditions of electron multiplication, with the result that the local criterion cannot be used for describing the conditions of production of a powerful beam of runaway electrons in a gas. We propose a nonlocal electron runaway criterion in the form of a universal two-valued dependence for the ‘critical’ voltage $U_{\text{cr}}$ on $p d$ for the given gas, where $p$ is the gas pressure, and $d$ is the distance between the flat electrodes. These curves separate the region of effective electron multiplication from the region where the electrons leave the discharge gap before they have any time to multiply. 
We present the results of simulation of the Townsend coefficients for helium, neon, xenon, nitrogen, and sulfur hexafluoride. The study covering the mechanism of runaway electron production in a gas has become especially important in connection with the problem of generating electron beams of subnanosecond duration with record current amplitudes ($\sim 70$ A in air, and $\sim 200$ A in helium [20–25]) at atmospheric pressure. In Section 3 we discuss the results of experiments on generating high-power subnanosecond electron beams at atmospheric pressure and examine some aspects of the mechanism of electron-beam production when the nonlocal electron runaway criterion is met. It has been found that an electron beam is produced at the stage when the plasma being formed at the cathode closes in on the anode, as if the plasma brings the cathode closer to the anode, which leads to a condition where the nonlocal electron runaway criterion is met. Here, the voltage $U$ and the parameter $p d$ are to be found near the upper branches of the curves characterizing the criterion of electron escape without multiplication. The stage of formation of a plasma cathode approaching the anode is discussed in Section 4. We show that a self-sustained nanosecond discharge in a dense gas is possible when the gap voltage is at its maximum at the quasi-steady stage of the discharge. In this case, the input power density may exceed 400 MW cm$^{-3}$. We also examine the role of preionization by fast electrons emitted by a plasma spike at the cathode and the electron multiplication wave traveling from the anode to the cathode. ## 2. Electron multiplication and runaway electrons ### 2.1 Local electron runaway criterion #### 2.1.1 The traditional approach Let us briefly discuss the main aspects of deriving the local electron runaway criterion (a detailed derivation can be found, e.g., in Refs [6, p. 53; 7, p. 71; 8, p. 74]). 
It is assumed that in the stable flow of electrons from the cathode to the anode their distribution is close to monoenergetic [8]. For the energy $\varepsilon$ of an electron in an electric field of strength $E$, the following energy balance equation is used in the simplest case: $$\frac{d\varepsilon}{dx} = eE - F(\varepsilon), \tag{1}$$ where $x$ is the distance measured from the cathode, and $F(\varepsilon)$ is the force of friction caused by the collisions of the electron with the gas atoms. The friction force in the nonrelativistic case is often represented in the simple form based on the Bethe approximation [7]: $$F(\varepsilon) = \frac{2\pi e^4 ZN}{\varepsilon} \ln \left( \frac{2\varepsilon}{I} \right). \tag{2}$$ Here, $Z$ is the number of electrons in the atom or molecule of the neutral gas, $N$ is the particle number density in the neutral gas, and $I$ is the mean inelastic energy loss. Despite the rough nature of approximation (2), it yields (as do more precise calculations) a maximum in the dependence of the friction force on the electron energy, $F_{\text{max}} = F(\varepsilon_{\text{max}})$ (Fig. 1), attained at $\varepsilon_{\text{max}} = 2.72 I/2$. For helium, $I = 44$ eV and $\varepsilon_{\text{max}} = 2.72 I/2 = 60$ eV; a more exact calculation yields $\varepsilon_{\text{max}} \approx 100$ eV. For nitrogen, $I = 80$ eV and $\varepsilon_{\text{max}} = 2.72 I/2 = 109$ eV, whereas a more exact calculation produces $\varepsilon_{\text{max}} \approx 103$ eV. According to the traditional approach [6–8], the condition for a runaway electron to appear in a gas is the requirement that the electric field strength be high, $E > E_{\text{cr1}}$, where the critical field strength $E_{\text{cr1}}$ is determined by the maximum value of the decelerating force, $E_{\text{cr1}} = F_{\text{max}}/e$.
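As a numerical aside (our sketch, not part of the review), the location of the friction-force maximum in approximation (2) and the reduced critical field of formula (3) can be checked directly. The function names and the brute-force scan are ours; the constants $I = 44$ eV (He) and $I = 80$ eV (N$_2$) come from the text.

```python
import math

def friction_shape(eps, I):
    """Energy dependence of the Bethe-type friction force (2): F ∝ ln(2*eps/I)/eps."""
    return math.log(2.0 * eps / I) / eps

def eps_at_max(I, n=100000):
    """Energy (eV) at which F(eps) peaks, found by a fine scan over [0.6*I, 3*I]."""
    lo, hi = 0.6 * I, 3.0 * I
    return max((lo + k * (hi - lo) / n for k in range(n + 1)),
               key=lambda e: friction_shape(e, I))

def e_cr1_over_p(Z, I):
    """Reduced critical field from formula (3): 3e3 * Z / I, in V cm^-1 Torr^-1."""
    return 3.0e3 * Z / I

# Helium: Z = 2, I = 44 eV; nitrogen: Z = 14, I = 80 eV (values from the text)
print(eps_at_max(44.0))        # ≈ 60 eV, i.e. (2.72/2)*I, with 2.72 ≈ Euler's number
print(eps_at_max(80.0))        # ≈ 109 eV
print(e_cr1_over_p(2, 44.0))   # ≈ 136 V cm^-1 Torr^-1, close to the quoted ~140
```

The scan confirms that the maximum of (2) sits at $\varepsilon_{\text{max}} = 2.72 I/2$, the value used in the text.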
For instance, when expression (2) is used for the critical field, we have [7] $$E_{\text{cr1}} = \frac{4\pi e^3 ZN}{2.72 I} \quad \text{or} \quad \frac{E_{\text{cr1}}}{p} = 3 \times 10^3 \frac{Z}{I} [\text{V cm}^{-1} \text{Torr}^{-1}], \tag{3}$$ where \( p \) is the gas pressure at 300 K, and \( I \) is measured in electron-volts. For example, in helium \( E_{\text{cr1}}/p \approx 140 \) V cm\(^{-1}\) Torr\(^{-1} \), while for nitrogen \( E_{\text{cr1}}/p \approx 590 \) V cm\(^{-1}\) Torr\(^{-1} \). The criterion \( E > E_{\text{cr1}} \) is local in the sense that the critical field \( E_{\text{cr1}} \) is determined solely by the properties of the neutral particles and the gas density at the given point in space. Following are some simple ideas explaining why the whistler mode, i.e., continuous acceleration of the majority of electrons in gases, is not actually realized even when \( E > E_{\text{cr1}} \) if the distance to the cathode is sufficiently large.

Figure 1. Dependence of the ionization-induced decelerating force per electron charge on the electron energy for helium at atmospheric pressure.

### 2.1.2 Limitation on the mean energy due to electron multiplication

An important fact must be taken into account here: even when \( E > E_{\text{cr1}} \), the mean electron energy will not increase without limit with a rise in \( x \), even if we completely ignore the friction force. The point is that the reasoning in Section 2.1.1 leaves out the multiplication of electrons. Note that although electron multiplication is a well-known fact [6–8], the importance of its effect on the electron runaway criterion is usually overlooked. To determine the mean electron energy \( \varepsilon^* \), we must turn not to equation (1) but to an equation that allows for a change in the number of electrons.
In the simplest case, on the level of approximation (1), the law of conservation of energy can be written in the form \[ \frac{\mathrm{d}(N_e\varepsilon^*)}{\mathrm{d}x} = eEN_e - F(\varepsilon^*)N_e, \] (4) where \( N_e(x) \) is the electron number density at point \( x \). Since \( \mathrm{d}N_e/\mathrm{d}x = z_iN_e \), where \( z_i \) is the Townsend electron multiplication coefficient, equation (4) yields the following equation for the mean electron energy \( \varepsilon^* \): \[ \frac{\mathrm{d}\varepsilon^*}{\mathrm{d}x} = eE - F(\varepsilon^*) - z_i\varepsilon^*. \] (5) In contrast to Eqn (1), equation (5) contains a negative term \( z_i\varepsilon^* \) on the right-hand side, which describes the ‘smearing’ of the energy acquired by the electrons from the electric field over all electrons, including secondary ones. Hence, even if we ignore electron deceleration in the gaseous medium [i.e., \( F(\varepsilon) = 0 \)], the mean electron energy is limited: \( \varepsilon^* < \varepsilon^*_{\text{max}} = eE/z_i \). Therefore, it is impossible to regard Eqn (1) as an equation for the mean electron energy or to assume that the electron distribution is monoenergetic. When for some reason ionization is absent (\( z_i = 0 \)), the traditional approach based on equation (1) is valid. For instance, electron deceleration in a fully ionized plasma is caused by elastic Coulomb collisions, and a detailed study of this problem yields an expression for the critical field, \( E_{\text{cr1}} = 4\pi e^3AN/T \) (e.g., see Ref. [3]), which is qualitatively similar to formula (3). Here, \( A \) is what is known as the Coulomb logarithm, \( N \) is the number density of the charged particles, and \( T \) is the plasma temperature. Of course, equation (5) is approximate, just as Eqn (1) is. In particular, it is one-dimensional and does not account for the fact that secondary electrons also have some energy.
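The capping of the mean energy by the $z_i\varepsilon^*$ term can be seen by integrating Eqn (5) with the friction term dropped. The sketch below is ours; the field strength and Townsend coefficient are arbitrary illustrative numbers, not data from the review.

```python
def mean_energy_profile(E, z_i, x_max, n=100000, eps0=0.2):
    """Forward-Euler solution of Eqn (5) with F = 0:
    d(eps*)/dx = E - z_i * eps*   (eps* in eV, E in V/cm, z_i in 1/cm)."""
    dx = x_max / n
    eps = eps0
    for _ in range(n):
        eps += (E - z_i * eps) * dx
    return eps

E_FIELD = 1000.0   # field strength, V/cm (illustrative)
Z_I = 5.0          # Townsend coefficient, 1/cm (illustrative)

# Predicted cap: eps*_max = E / z_i = 200 eV, no matter how strong the field
# looks locally -- multiplication alone halts the growth of the mean energy.
print(mean_energy_profile(E_FIELD, Z_I, x_max=0.2))  # still climbing toward the cap
print(mean_energy_profile(E_FIELD, Z_I, x_max=5.0))  # ≈ 200 eV: saturated
```

The saturation level depends only on the ratio $eE/z_i$, which is the point of the argument above: exponential multiplication, not friction, limits the mean energy.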
In contrast to the traditional approach, however, equation (5) clearly illustrates that in the event of exponential electron multiplication the mean energy of the electrons (and hence the mean velocity) cannot increase progressively with \( x \). At a certain distance (see Section 2.3), we can put \( \mathrm{d}\varepsilon^*/\mathrm{d}x = 0 \), and \( \varepsilon^* = \text{const} \). The aforesaid implies that for the majority of electrons the Townsend multiplication mode (in which the fraction of continuously accelerated electrons is small) is implemented even in high fields \( E > E_{\text{cr1}} \), when, according to the common viewpoint, all the electrons are accelerated steadily. Of course, some of the fast electrons are indeed accelerated at all times. Moreover, such electrons can play an important role in the gas preionization (see Section 4.2). However, at a certain distance from the cathode the fraction of these continuously accelerated electrons is small compared to the overall number of electrons, since the mean electron energy ceases to increase over this distance. In this connection, we turn to the results of numerical simulation.

### 2.2 Electron multiplication

#### 2.2.1 The model

To confirm the assumption that the Townsend coefficient does not lose its meaning even when \( E > E_{\text{cr1}} \), mathematical modeling of electron multiplication and runaway of electrons has been carried out for He [16], Ne,\(^1\) Xe [17], N\(_2\) [19], and SF\(_6\) [18] on the basis of a modification of the particle method [26]. The electrons produced at the cathode had randomly directed velocities and initial energies distributed according to the Poisson law with the mean value \( \varepsilon_0 = 0.2 \) eV. Multiple and stepwise ionization, the electron–electron interaction, and the screening of the external electric field were not taken into account.
The equations of motion of all the electrons were solved on a mesh with small temporal steps, while elastic and inelastic collisions were modelled with probabilities determined by the cross sections of the elementary processes. Below, we present the results for flat electrodes separated by a distance \( d \) with a voltage \( U \) applied to them. Although plasma inhomogeneity is essential in real experiments (see Sections 3 and 4), the two-dimensional model makes it possible to expose many important aspects of the physics of gas breakdown (the manner of applying the method to the case of coaxial cylinders is described in Ref. [26]). For helium, the excitation of the states \(2^1S\), \(2^3S\), \(2^1P\), and \(2^3P\) was taken into account, and for xenon the excitation of the state \(5p^5 6s\) (\( J = 1 \)) with the energy \( E = 8.44 \) eV. The results given in Refs [27–29] were used in the case of helium, and those in Refs [26, 30, 31] in the case of xenon. The electron–nitrogen-molecule scattering cross section was determined on the basis of the data listed in Refs [32–43]. The excitation of the 10 lowest electronic states of the nitrogen molecule and the 8 lowest vibrational states was taken into account. In modeling the processes in SF\(_6\), the data on the cross sections and energy depositions were taken from Refs [44–46].

#### 2.2.2 The Townsend ionization mode

The results of calculations described in Refs [16–19] show that for all values of the reduced field strength and large enough distances \( d \) between the electrodes, the Townsend ionization mode is truly present and runaway electrons are practically absent. That the ionization mode is indeed the Townsend one is confirmed by the fact that, as the distance \( x \) from the cathode is increased, the number of acts of excitation and electron production increases exponentially at all times, beginning from some value of \( x \).
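The particle method is easy to caricature. The toy Monte Carlo below is our illustration only, not the authors' code: a fixed mean free path and a single ionization channel with a crude 50% probability above threshold stand in for the real cross sections, and all numerical values are illustrative. Even so, it reproduces the Townsend signature described above: the electron number grows sharply across the gap while arrival energies stay far below $eU$.

```python
import random

random.seed(1)

E_FIELD = 500.0   # field strength, V/cm      (illustrative, not from the review)
LAMBDA = 0.01     # mean free path, cm
I_ION = 24.6      # ionization threshold, eV (helium)
D_GAP = 0.5       # electrode spacing, cm

def run(n_seed=50):
    """Track electrons from cathode to anode; return (arrivals, mean arrival energy in eV)."""
    stack = [(0.0, 0.2) for _ in range(n_seed)]   # (position, energy) of live electrons
    arrived = []
    while stack:
        x, eps = stack.pop()
        while x < D_GAP:
            step = random.expovariate(1.0 / LAMBDA)    # exponential free path
            x += step
            eps += E_FIELD * step                      # energy gained from the field, eV
            if x >= D_GAP:
                break
            if eps > I_ION and random.random() < 0.5:  # crude ionizing collision
                eps = (eps - I_ION) / 2.0              # share the remainder ...
                stack.append((x, eps))                 # ... with the secondary electron
        arrived.append(eps)
    return len(arrived), sum(arrived) / len(arrived)

n_anode, mean_eps = run()
# Many more electrons arrive than the 50 launched, yet their mean energy stays
# a small fraction of e*U = 250 eV: multiplication, not continuous acceleration.
print(n_anode, mean_eps)
```

Each ionization resets the primary to a low energy, which is exactly why the mean energy saturates even though every electron sits in a field well above the local criterion.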
In this case, the mean electron energy \( \varepsilon^* \) and the mean projection of the electron velocity on the \( x \)-axis do not depend on \( x \) at sufficiently large distances from the cathode. The maximum of the distribution function of the electrons that reach the anode lies in the region of low mean electron energies, $\varepsilon^* \ll eU$. As noted earlier, all these indications of a Townsend ionization mode occur both for $E/p \leq E_{cr1}/p$ and for $E/p > E_{cr1}/p$. What is important is that the distance $d$ between the electrodes be large (see below and Ref. [16]). The calculations that were performed for helium, neon, xenon, nitrogen, and sulfur hexafluoride (Figs 2–6) show that the multiplication factor $z_i = (1/N_e)\, \mathrm{d}N_e/\mathrm{d}x$, as is ordinarily assumed, is proportional to the density (pressure) of the gas and can be written in the form $z_i(E,p) = p\xi(E/p)$, where $\xi(E/p)$ is a function typical for the given gas.

--- \(^1\) The calculations for neon were done by A N Tkachev, A A Fedenev, and S I Yakovlenko.

Figure 2. Dependence of the ionization and drift characteristics on the reduced field strength $E/p$ in helium. The points were obtained for different values of electric field strength. Everywhere, if not stated otherwise, $N = 3.22 \times 10^{18}$ cm$^{-3}$ ($p = 100$ Torr). (a) Pressure-normalized values of the Townsend coefficient $z_i/p$ (black circles) and of the ionization rate $v_i/p$ (open squares). The heavy solid curve corresponds to approximation (7), and the dot-and-dash curve to approximation (6). (b) Mean projection $u_x$ (open circles) of the electron velocity on the $x$-axis directed along the electric field, and the mean absolute value of the velocity $u_\perp$ (open squares) in the plane perpendicular to the $x$-axis. The dashed straight line corresponds to the linear dependence $u_\perp = 10^6(E/p)$ cm s$^{-1}$, where the unit of measurement of $E/p$ is 1 V cm$^{-1}$ Torr$^{-1}$. (c) Mean electron energy (obtained by simulation for different values of electric field strength). The dashed curve corresponds to the dependence $\varepsilon^* = 5.5 \exp\left(\sqrt{E/(40p)}\right)$ eV, where the unit of measurement of $E/p$ is 1 V cm$^{-1}$ Torr$^{-1}$.

Figure 3. Dependence of the ionization and drift characteristics on the reduced field strength $E/p$ for nitrogen N$_2$. The points were obtained for different electric field strengths at $p = 100$ Torr ($N = 3.22 \times 10^{18}$ cm$^{-3}$). (a) Pressure-normalized values of the Townsend coefficient $z_i/p$ (open circles) obtained as a result of simulation for different values of the field strength. The heavy solid curve corresponds to approximation (6), and the dashed curve to the results of simulation [27]. (b) Mean projection $u_x$ (open circles) of the electron velocity on the $x$-axis directed along the electric field, with the dashed curve corresponding to the results of simulation [27]. (c) Mean electron energy.

For helium and xenon, the following approximation based on the experimental data is known [8, 47]: $$\xi\left(\frac{E}{p}\right) = A \exp\left[-B\left(\frac{p}{E}\right)^{1/2}\right],$$ (6) where $A = 4.4$ cm$^{-1}$ Torr$^{-1}$ and $B = 14$ V$^{1/2}$ cm$^{-1/2}$ Torr$^{-1/2}$ for helium, while $A = 65.3$ cm$^{-1}$ Torr$^{-1}$ and $B = 36.1$ V$^{1/2}$ cm$^{-1/2}$ Torr$^{-1/2}$ for xenon. However, the calculations have shown (see Figs 2 and 6) that this approximation holds only if the reduced electric field strength does not exceed a certain value, $E/p < (E/p)_{\text{max}}$; for helium, $(E/p)_{\text{max}} = 200$ V cm$^{-1}$ Torr$^{-1}$, and for xenon $(E/p)_{\text{max}} = 1500$ V cm$^{-1}$ Torr$^{-1}$. At higher values of $E/p$, the multiplication factor $z_i$ begins to decrease. The drop in $z_i$ with increasing $E/p$ is related to the decrease in the ionization cross section at high energies.
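To make approximation (6) concrete, the sketch below (ours) evaluates $\xi(E/p)$ with the helium and xenon constants quoted above and converts the result into the electron-number amplification $\exp(z_i d)$ across a gap; the $d = 0.1$ cm gap is an arbitrary illustration, and the formula is only valid below $(E/p)_{\text{max}}$.

```python
import math

def xi6(e_over_p, A, B):
    """Approximation (6): reduced Townsend coefficient xi, in cm^-1 Torr^-1."""
    return A * math.exp(-B * math.sqrt(1.0 / e_over_p))

HE = {"A": 4.4, "B": 14.0}    # helium constants quoted for Eqn (6)
XE = {"A": 65.3, "B": 36.1}   # xenon constants quoted for Eqn (6)

# Helium at E/p = 100 V cm^-1 Torr^-1 and p = 100 Torr (the conditions of Fig. 2):
xi_he = xi6(100.0, **HE)      # ≈ 1.1 cm^-1 Torr^-1
z_i = xi_he * 100.0           # Townsend coefficient z_i = p * xi, cm^-1
gain = math.exp(z_i * 0.1)    # amplification N_e(d)/N_e(0) over d = 0.1 cm
print(xi_he, z_i, gain)       # the gap multiplies the electron number ~10^4-fold
```

Because $z_i$ enters an exponent, even the modest $\xi \approx 1$ cm$^{-1}$ Torr$^{-1}$ at 100 Torr produces enormous amplification over millimetre-scale gaps.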
In this connection, Tkachev and Yakovlenko [16, 17] proposed an approximation of the Townsend coefficient that describes its decrease: $$\xi\left(\frac{E}{p}\right) = A \exp\left[-B\left(\frac{p}{E}\right)^{1/2} - C\frac{E}{p}\right],$$ (7) where $A = 5.4$ cm$^{-1}$ Torr$^{-1}$, $B = 14$ (V cm$^{-1}$ Torr$^{-1}$)$^{1/2}$, and $C = 0.0017$ cm Torr V$^{-1}$ for helium; $A = 7.2$ cm$^{-1}$ Torr$^{-1}$,

Figure 4. Dependence of the ionization and drift characteristics on the reduced field strength $E/p$ for SF$_6$. The points were obtained for different electric field strengths at $p = 100$ Torr. (a) Pressure-normalized values of the absolute value of the Townsend coefficient $|z_i|/p$ (solid curve) and of the ionization rate $|v_i|/p$ (dashed curve). (b) Mean projection $u_x$ (solid curve) of the electron velocity on the $x$-axis directed along the electric field, and the mean absolute value of the velocity $|u_\perp|$ (dashed curve) in the plane perpendicular to the $x$-axis. (c) Mean electron energy.

In the case of molecular nitrogen, the following approximation [8, 47] based on the experimental data is used: $$\xi \left( \frac{E}{p} \right) = A \exp \left( -B \frac{p}{E} \right),$$ where $A = 12$ cm$^{-1}$ Torr$^{-1}$ and $B = 342$ V cm$^{-1}$ Torr$^{-1}$ for $E/p = 100–600$ V cm$^{-1}$ Torr$^{-1}$, while $A = 8.8$ cm$^{-1}$ Torr$^{-1}$ and $B = 275$ V cm$^{-1}$ Torr$^{-1}$ for $E/p = 27–200$ V cm$^{-1}$ Torr$^{-1}$. The calculations for nitrogen yielded $(E/p)_{\text{max}} = 1500$ V cm$^{-1}$ Torr$^{-1}$ (see Fig. 3). Note that the value of the peak field strength $E_{\text{max}}$ agrees rather well with the estimate of $E_{\text{cr1}}$ made in Section 2.1.1, especially if one takes into account the rough nature of formula (2). Actually, the value of $E_{\text{cr1}}$ determines not the condition for continuous acceleration of the majority of electrons with increasing $x$, but rather the condition for the drop in the Townsend multiplication coefficient at $E > E_{\text{cr1}}$.
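Because approximation (7) is an explicit function, the position of its maximum follows in closed form: setting $\mathrm{d}(\ln\xi)/\mathrm{d}x = 0$ with $x = E/p$ gives $x_{\max} = \left(B/2C\right)^{2/3}$. A quick check with the helium constants (our own arithmetic; the text's simulated value is $(E/p)_{\text{max}} \approx 200$ V cm$^{-1}$ Torr$^{-1}$, so the fit places the peak at the same order of magnitude):

```python
# Maximum of approximation (7): xi(x) = A*exp(-B/sqrt(x) - C*x), x = E/p.
# d(ln xi)/dx = B/(2 x^(3/2)) - C = 0  =>  x_max = (B/(2C))^(2/3).
B, C = 14.0, 0.0017          # helium constants from the text
x_max = (B / (2.0 * C)) ** (2.0 / 3.0)
print(f"(E/p)_max ≈ {x_max:.0f} V cm^-1 Torr^-1")   # ≈ 257
```

That the analytic peak (≈ 257) does not coincide exactly with the simulated 200 simply reflects that (7) is a fit to the simulated data, not an exact law.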
In this sense, the above values of $E_{\text{max}}$ are simply more accurate values of $E_{\text{cr1}}$.

#### 2.2.3 The Townsend coefficient in an electronegative gas

It is interesting to examine the mechanism of electron multiplication in an electronegative gas, i.e., a gas with a large cross section for electron attachment to its molecules. In view of the competition between electron attachment and multiplication, it is not clear from the outset to what extent the concept of a Townsend coefficient can be applied to an electronegative gas. At the same time, electronegative gases are widely employed in various discharges, in particular, in pumping exciplex and chemical lasers. Boichenko et al. [18] examined electron multiplication and runaway in SF$_6$, since for this gas the characteristics of electron–molecule collisions are known best. Calculations have shown (see Figs 4 and 6) that for high values of the reduced electric field strength, $E/p > 94$ V cm$^{-1}$ Torr$^{-1}$, and large enough electrode spacing, $d > z_i^{-1}$, the Townsend ionization mode is indeed realized. Qualitatively, the main characteristics of this mode appear to be the same as in the cases of helium, neon, xenon, and nitrogen (see Figs 2, 3, and 6). The function $\xi(E/p)$ has its maximum at $(E/p)_{\text{max}} \approx 5$ kV cm$^{-1}$ Torr$^{-1}$. An essential feature of an electronegative gas is that in an electric field whose strength is below a certain value ($E/p < 94$ V cm$^{-1}$ Torr$^{-1}$ for SF$_6$) the electrons emitted by the cathode do not multiply: instead, they largely attach to the molecules. The attachment mode that sets in is, in a certain sense, opposite to the multiplication mode but, like the multiplication mode, is characterized by an exponential dependence of the electron current and of the number of inelastic collision events on the coordinate. At the same time, the mean electron energy $\varepsilon^*$ and the velocities $u_x$ and $u_\perp$ are independent of $x$.
However, the multiplication factor is negative in this mode (the Townsend coefficient proves to be negative). Electron attachment predominates in electric fields $E/p < 94$ V cm$^{-1}$ Torr$^{-1}$, in which the mean electron energy $\varepsilon^* < 10$ eV becomes much lower than the first ionization threshold (20 eV) of the gas. In the region where it changes sign, the multiplication factor is a linear function of the reduced field strength (see Fig. 5). Note that, according to the available experimental data (see Ref. [8]), the breakdown of SF$_6$ occurs for $E/p > 117$ V cm$^{-1}$ Torr$^{-1}$, when $\xi(E/p) > 0$ (see Fig. 6). In their experiments, Panchenko et al. [48] found that at $E/p \approx 117$ V cm$^{-1}$ Torr$^{-1}$ a volume discharge in SF$_6$ likewise ceases to develop.

#### 2.2.4 Runaway of electrons at relativistic velocities

Since generators can now be devised in which megavolt voltages are reached within a nanosecond, it is natural to ask whether the concept of the Townsend coefficient still works at relativistic electron velocities. Helium was taken as an example.\footnote{The calculations with allowance for relativistic effects were done by A N Tkachev and S I Yakovlenko.} When relativistic effects are taken into account (see Fig. 6), the curve representing the dependence of the Townsend coefficient $z_i$ on $E/p$ drops off very rapidly after passing its maximum at $E/p \approx 263$ kV cm$^{-1}$ Torr$^{-1}$; however, because the mean electron velocity is bounded, the curve then becomes almost flat and eventually begins to slope slowly upward. This happens at $E/p \approx 6.6$ MV cm$^{-1}$ Torr$^{-1}$, when $\varepsilon^* \approx 0.5$ MeV and $u_x \approx 2.3 \times 10^{10}$ cm s$^{-1}$.

### 2.3 Nonlocal electron runaway criterion

#### 2.3.1 Critical voltage

The Townsend ionization mode sets in at a certain distance $x \sim z_i^{-1}$ from the cathode, which corresponds to the characteristic multiplication length.
If the electrode spacing is small, $d < z_i^{-1}$, the electron multiplication pattern differs dramatically from the Townsend one (for details, see Ref. [16]). A substantial fraction of the electrons is accelerated steadily: as the distance $x$ from the cathode increases, both the $x$-component of the electron velocity and the mean electron energy $\varepsilon^*$ grow. In this case, the peak in the energy distribution function of the electrons that reach the anode occurs at the maximum value of the energy $eU = eEd$ acquired by the electrons in their flight from the cathode to the anode. In Refs [16–19], in contrast to the traditional approach [6–8], it was assumed that runaway electrons begin to dominate when the distance $d$ between the electrodes becomes comparable to the characteristic multiplication length, i.e., to the reciprocal Townsend coefficient $z_i^{-1}$. For $z_i d < 1$, the runaway electrons are also predominant in the energy spectrum of the electrons that have reached the anode. Accordingly, the criterion determining the limiting value $E_{cr}$ of the electric field strength has the form $$z_i(E_{cr}, p)d = 1.$$ In the Townsend coefficient, we isolate the pressure (or gas density) as a factor and use the fact that the second factor depends only on the reduced electric field strength $E/p$, i.e., we write $z_i(E, p) = p\xi(E/p)$. For flat electrodes, $E = U/d$, with $E_{cr} = U_{cr}/d$. Then, the criterion of electron escape from the gap between two flat electrodes becomes $$pd\,\xi\left(\frac{E_{cr}}{p}\right) = 1 \quad \text{or} \quad pd\,\xi\left(\frac{U_{cr}}{pd}\right) = 1.$$ (9) This expression gives an implicit dependence of the critical voltage $U_{cr}(pd)$ on the product $pd$ of the pressure and the electrode spacing (Figs 7 and 8).
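Criterion (9) is easy to apply numerically. The sketch below classifies a gap as multiplying or escaping using the helium approximation (7); the function names and the sample operating points are our own illustrative assumptions:

```python
import math

# Nonlocal criterion: Townsend multiplication requires z_i*d = p*d*xi(E/p) >= 1;
# otherwise the electrons escape the gap before multiplying.
# Helium constants of approximation (7), quoted in the text.
A, B, C = 5.4, 14.0, 0.0017

def xi_he(x):
    """Reduced Townsend coefficient for He; x = E/p in V cm^-1 Torr^-1."""
    return A * math.exp(-B / math.sqrt(x) - C * x)

def multiplies(u_volts, p_torr, d_cm):
    """True if the gap supports Townsend multiplication (z_i*d >= 1)."""
    pd = p_torr * d_cm
    return pd * xi_he(u_volts / pd) >= 1.0

# 30 kV across a 1-cm helium gap at 100 Torr: deep in the multiplication region.
print(multiplies(30e3, 100.0, 1.0))
# The same voltage across 0.1 cm at 1 Torr: the electrons run away.
print(multiplies(30e3, 1.0, 0.1))
```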
The curve $U_{cr}(pd)$ separates the region of effective electron multiplication from the region in which the electrons leave the discharge gap before they have time to multiply, and it is a universal curve for the given gas. We will call it the electron escape curve. Note that the value of $E_{cr}/p$ depends on $pd$, in contrast to $E_{cr1}/p$, which is determined by the local criterion and depends only on the characteristics of the neutral particles. Hence, by their very nature, these quantities are distinctly different: $E_{cr1}/p \approx (E/p)_{\text{max}}$, as noted in Section 2.2, corresponds to the peak in the Townsend coefficient, while $E_{cr}/p$ determines the electron escape criterion.

#### 2.3.2 The lower and upper branches of the escape curve

The presence of a maximum in the $\xi(E/p)$ function determines the horseshoe shape of the function $U_{cr}(pd)$ for a broad spectrum of gases (see Fig. 7). Note that, mathematically speaking, it would be more convenient to swap the horizontal $pd$-axis and the vertical $U_{cr}$-axis, i.e., to think of $pd$ as a function of $U_{cr}$. However, we will not do this here, so as not to depart from the tradition related to Paschen curves (see Section 2.4).

The electron escape curve $U_{cr}(pd)$ has two branches: the upper and the lower. We take the boundary point between these two branches to be the turning point, i.e., the point where $pd$ reaches its minimum value $(pd)_{\text{min}}$. We will show that this point corresponds to the maximum of the function $\xi(E/p)$.

Figure 7. Universal curves characterizing the escape, runaway, and multiplication of electrons. Curves $U_{cr}(pd)$ separate the electron escape and multiplication regions (heavy solid curves) for helium (a), xenon (b), and nitrogen (c); curves $U_{\text{ign}}(pd)$ characterize the discharge ignition criterion (light solid curves); equal-efficiency curves and runaway curves are shown for helium (d) and neon (e). In figures a and b, the black circles represent the results of Dikidzhi and Klyarfel'd's experiments [51]; for the $U_{\text{ign}}(pd)$ curves, it is assumed that $L = \ln(1 + 1/\gamma) = 2.45$. In figure a, the dashed curve represents the experimental data from Ref. [8], and the open circles represent the results of Penning's experiments [50]. The large open circle in the upper right corner of figure a corresponds to the maximum value of the voltage in the experiments described in Ref. [24], which were conducted at atmospheric pressure with a distance $d = 28$ mm between the electrodes, while the large open square corresponds to a situation in which the 'plasma cathode' is at a smaller distance $d = 0.7$ mm from the anode. In figure c, it is assumed that $L = \ln(1 + 1/\gamma) = 4.0$ for the curve $U_{\text{ign}}(pd)$; the dotted curve represents the experimental data from Ref. [8], and the dashed curve the results of the calculations by Campbell et al. [40]. In figures d and e: 1–3, the runaway curves at $z_i d = 1$, $1.5$, and $0.2$, respectively; 4–6, the equal-efficiency curves at $\eta = 20$, $50$, and $80\%$, respectively. The efficiency $\eta$ is defined as the fraction of electrons landing on the anode with an energy exceeding two-thirds of the energy acquired by electrons in free motion, namely $\varepsilon > 2eU/3$.

Let us think of $pd$ as a function of $U_{cr}$. The condition $\mathrm{d}(pd)/\mathrm{d}U_{cr} = 0$, corresponding to the minimum in the $pd$ vs. $U_{cr}$ dependence, when applied to expression (9) yields $\xi'(x) = 0$, i.e., the maximum of $\xi(x)$. Thus, the boundary point, defined as the minimum value of $pd$ on the $U_{cr}(pd)$ curve, corresponds exactly to the value of the reduced field strength $E/p = (E/p)_{\text{max}}$ at which the reduced Townsend coefficient $z_i/p = \xi(E/p)$ passes through its maximum.

The existence of an upper branch in the escape curve $U_{cr}(pd)$ is due to the drop in the Townsend coefficient with increasing $E/p$. In turn, the drop in the Townsend coefficient is caused by the decrease in the ionization cross section as the energy of the incident electrons increases and by the rise in the energy of the electrons being multiplied as $E/p$ grows. The region above the upper branch corresponds to the situation in which electrons, while acquiring a large amount of energy over a mean free path, leave the discharge gap without having enough time to multiply effectively because of the small ionization cross sections at high energies. Therefore, it is natural to call the region above the upper branch of the escape curve the electron runaway region (the whistler region), and the upper part of the curve the runaway curve.

The lower branch of the curve corresponds to the increasing part of the dependence of the reduced Townsend coefficient $z_i/p$ on $E/p$. In this region, the electrons acquire a relatively small amount of energy over a mean free path, an amount corresponding to the increasing part of the dependence of the ionization cross section on the electron energy. The region under the lower branch of the curve $U_{\text{cr}}(pd)$ corresponds to the situation where the electrons drift from the cathode to the anode without acquiring enough energy for effective multiplication. Therefore, it is natural to call the region under the lower branch of the escape curve the electron drift region, and the lower part of the curve the drift curve.

Figure 8. Universal curves $U_{cr}(pd)$ separating the electron escape and multiplication regions for He, Ne, Xe, N$_2$, and SF$_6$. In calculating the helium characteristics, relativistic effects were taken into account.
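The horseshoe shape of the escape curve can be reproduced from approximation (7): for a fixed $pd$ above $(pd)_{\text{min}}$, the equation $pd\,\xi(U_{cr}/pd) = 1$ has one root below $(E/p)_{\text{max}}$ (drift branch) and one above it (runaway branch). The bisection solver below, written for helium, is our own sketch; the bracket values and iteration count are arbitrary choices:

```python
import math

# Both branches of the escape curve U_cr(pd) for helium, from
# pd * xi(U_cr/pd) = 1 with approximation (7).  Valid only for
# pd above (pd)_min, where both roots exist.
A, B, C = 5.4, 14.0, 0.0017
X_MAX = (B / (2.0 * C)) ** (2.0 / 3.0)   # peak of xi, ~257 V cm^-1 Torr^-1

def xi_he(x):
    return A * math.exp(-B / math.sqrt(x) - C * x)

def branch(pd, lo, hi, rising):
    """Bisect pd*xi(x) - 1 = 0 for x in (lo, hi); returns U_cr = x*pd."""
    f = lambda x: pd * xi_he(x) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0.0) == rising:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) * pd

def escape_curve(pd):
    """(U_lower, U_upper) in volts for a given pd in Torr cm."""
    lower = branch(pd, 1e-3, X_MAX, rising=True)    # drift branch
    upper = branch(pd, X_MAX, 1e7, rising=False)    # runaway branch
    return lower, upper

lo_u, up_u = escape_curve(10.0)   # pd = 10 Torr cm
print(lo_u, up_u)
```

Sweeping `pd` and plotting the two returned voltages against it traces out the horseshoe of Fig. 8.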
If relativistic effects are taken into account, three branches can be identified in the electron escape curve $U_{\text{cr}}(pd)$ (see Fig. 8). The appearance of one more turning point in the runaway curve and, hence, of an additional third branch (compared to the nonrelativistic case) is due to the increase in the ionization cross section at high energies as a consequence of relativistic effects. In contrast to the nonrelativistic case, for $pd > 230$ Torr cm electron multiplication occurs at any voltage across the discharge gap that exceeds the threshold value determined by the drift curve.

#### 2.3.3 Electron-beam production efficiency curves

The definition of the runaway curve implies that it only qualitatively characterizes the fraction of runaway electrons. Generally speaking, there is a certain arbitrariness in the choice of the right-hand side of expression (9): it can be set equal not to unity but, say, to $\pi$ or $1/\pi$. However, it is clear that this choice is not of vital importance. Assuming, for instance, that $z_i d = A = \text{const}$, we get for the new quantity $U'_{\text{cr}}$ the equation $pd\, \xi(U'_{\text{cr}}/pd) = A$. This leads to a simple relationship between the two curves: $U'_{\text{cr}}(pd) = A\, U_{\text{cr}}(pd/A)$. On the log–log scale, the curve $U'_{\text{cr}}(pd)$ is therefore obtained from the curve $U_{\text{cr}}(pd)$ by a simple shift along the coordinate axes. Although the escape curve qualitatively characterizes the boundary separating the regions of electron multiplication and electron runaway, it does not directly determine the fraction of the runaway electrons. To establish quantitative characteristics, the fraction $\eta$ of runaway electrons was calculated directly as a function of $U$ and $pd$.
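The scaling between curves defined with different right-hand sides can be checked numerically. The snippet below verifies, on the lower branch of the helium approximation (7), that the curve defined by $pd\,\xi(U'/pd) = A_0$ equals $A_0\,U_{cr}(pd/A_0)$; the solver details and the sample values $A_0 = 1.5$, $pd = 30$ Torr cm are our own:

```python
import math

# Check: if pd*xi(U/pd) = 1 defines U_cr and pd*xi(U'/pd) = A0 defines
# U'_cr, then U'_cr(pd) = A0 * U_cr(pd/A0).  Helium approximation (7).
A, B, C = 5.4, 14.0, 0.0017

def xi_he(x):
    return A * math.exp(-B / math.sqrt(x) - C * x)

def u_lower(pd, rhs=1.0):
    """Lower-branch solution of pd*xi(U/pd) = rhs, by bisection in x = E/p."""
    x_max = (B / (2.0 * C)) ** (2.0 / 3.0)
    lo, hi = 1e-3, x_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pd * xi_he(mid) > rhs:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) * pd

A0, pd = 1.5, 30.0
lhs = u_lower(pd, rhs=A0)         # U'_cr(pd)
rhs_val = A0 * u_lower(pd / A0)   # A0 * U_cr(pd/A0)
print(lhs, rhs_val)               # the two agree
```

The agreement is exact because both sides reduce to the same root in the variable $x = E/p$.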
Here, the efficiency was defined as the fraction of electrons arriving at the anode with an energy exceeding two-thirds of the energy that the electrons would acquire in free motion, namely $\varepsilon > 2eU/3$. The results of these calculations are presented in the form of equal-efficiency curves in the $(U,pd)$-plane (Figs 7d and e). These calculations showed that for low enough efficiencies, $\eta \leq 20\%$, the runaway curves practically coincide with the equal-efficiency curves. At higher efficiencies, the runaway curves coincide with the equal-efficiency curves only at large values of $U$ and $pd$.

### 2.4 On the ignition criterion of a self-sustained discharge

#### 2.4.1 The upper branch of the self-sustained discharge ignition curve

The curve that determines the discharge ignition criterion is usually found from the requirement that each electron create a sufficient number of ions for one more electron to be produced at the cathode by secondary electron emission. Accordingly, the discharge ignition potential $U_{\text{ign}}(pd)$ is determined by the following condition (e.g., see Ref. [8]): $$z_i(E,p)d = \ln \left(1 + \frac{1}{\gamma}\right) \quad \text{or} \quad pd\, \xi \left(\frac{U_{\text{ign}}}{pd}\right) = L,$$ (10) where $L \equiv \ln (1 + 1/\gamma)$, with $\gamma$ being the secondary electron emission coefficient. Comparing the discharge ignition criterion (10) with the electron runaway criterion (9), we arrive at a relationship between the escape and ignition curves: $U_{\text{ign}}(pd) = L\, U_{\text{cr}}(pd/L)$. This relationship was used in constructing the curves $U_{\text{ign}}(pd)$ in Fig. 7. Compared to the well-known Paschen ignition curve, the resulting dependence $U_{\text{ign}}(pd)$ carries entirely new information. As is well known, the Paschen curves have right and left branches that extend from the minimum of $U_{\text{ign}}(pd)$ toward large and small values of $pd$, respectively.
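Criterion (10) differs from (9) only through the constant $L = \ln(1 + 1/\gamma)$, so $U_{\text{ign}}$ is the escape curve rescaled by $L$. The values $L = 2.45$ and $L = 4.0$ used for Fig. 7 can be translated back into secondary-emission coefficients (a small check of ours, not from the text):

```python
import math

# L = ln(1 + 1/gamma) and its inverse; back-solving the two values of L
# quoted for Fig. 7 gives the implied secondary-emission coefficients.
def L_of_gamma(gamma):
    return math.log(1.0 + 1.0 / gamma)

def gamma_of_L(L):
    return 1.0 / (math.exp(L) - 1.0)

for L in (2.45, 4.0):
    print(f"L = {L}  ->  gamma ≈ {gamma_of_L(L):.3f}")
```

The resulting γ ≈ 0.094 and γ ≈ 0.019 are typical orders of magnitude for secondary electron emission from metal cathodes.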
However, according to Refs [16–19], the self-sustained discharge ignition curve must also contain an upper branch resulting from the drop in $z_i$ as $E/p$ increases. An important consequence of the above reasoning is the existence of a minimum value $(pd)_{\text{min}}$ below which ignition of a self-sustained discharge through electron multiplication by gas ionization in the discharge gap is no longer possible. Notice that Kolbychev [49] pointed out the possibility of discharge disruption at high voltages.

#### 2.4.2 Comparison with experiments at low pressures

It must be noted, however, that the ignition curve $U_{\text{ign}}(pd)$ is not as general in nature as the escape curve $U_{\text{cr}}(pd)$. The escape curve $U_{\text{cr}}(pd)$ represents a universal characteristic of the given gas, while the ignition curve $U_{\text{ign}}(pd)$ depends on the model describing discharge ignition, in particular, on the properties of the electrodes. For instance, the upper branch is masked by electrode phenomena and can be observed only in open discharges or for very short pulses. This becomes obvious if we compare the results of our calculations with the experimental data for ordinary cathodes (see Fig. 7). As far back as 1932, Penning [50] showed that the Paschen curve for helium has a loop with a turning point at $pd \approx 1.5$ Torr cm (Fig. 7a). This turning point agrees well with the results of the calculations by Tkachev and Yakovlenko [16] described in Section 2.4.1. Penning rightly assumed that this loop reflects the presence of a maximum in the electron-energy dependence of the ionization cross section. However, his viewpoint did not receive broad support, probably because no such loop was detected in other inert gases (e.g., see Ref. [51]), although the ionization cross sections of all elements possess a maximum. Apart from helium, such a loop has been observed only in mercury [52].
Keep in mind that the portion of the Paschen curve to the left of the point $(pd)_{\text{min}}$ reflects another mechanism of discharge ignition, one only slightly related to electron multiplication in the gas. This conclusion is supported by the fact that the Paschen curves in this region depend not only on the cathode material but also on the anode material [51]. The mechanism behind the left branch of the Paschen curves for helium was studied by Ul'yanov and Chulkov [53]. They found that the three-valued nature of the curve $U_{\text{ign}}(pd)$ in the region around $(pd)_{\text{min}}$ is caused by competition between different mechanisms of electron production in the discharge volume and at the electrodes, namely, Townsend ionization; secondary electron emission from the cathode triggered by fast ions and atoms formed as a result of ion charge exchange; and electron scattering from the anode. The results of experiments with gases at atmospheric pressure are discussed in Sections 3.1 and 4.1. Modern nanosecond techniques have made it possible to 'get through' the lower branch of the Paschen curve and land near the runaway curve before the gas-discharge plasma has time to completely short-circuit the interelectrode gap.

## 3. Electron-beam production in dense gases

### 3.1 Experiments on electron-beam production in dense gases

#### 3.1.1 History of the problem

Stankevich and Kalinin [54] and Noggle et al. [55] were the first to detect X-ray radiation at atmospheric pressure (in air [54] and in helium [55]), which suggested the presence of accelerated electrons. The discovery of accelerated electrons at atmospheric pressure provided new impetus to the study of the conditions in which such electrons are produced and X-ray radiation is formed in gas-filled diodes at elevated pressures (see the review [6] and references cited therein).
However, until quite recently the amplitudes of the electron-beam currents generated in molecular gases at atmospheric pressure did not exceed fractions of an ampere [6]. In 2002, Alekseev et al. [20–22] found that the amplitude of the electron beam generated in a gas-filled diode at atmospheric pressure can be substantially increased. The experiments involved molecular gases (air and nitrogen), CO$_2$–N$_2$–He mixtures, and helium. High-current electron beams have been generated at mean values of the parameter $E/p$ both much higher than the critical value, $E/p \gg E_{\text{cr}}/p$ [21], and much lower than it, $E/p \ll E_{\text{cr}}/p$ [20–23]. The latter result required explanation. To explain the production of electron beams at $E/p \ll E_{\text{cr}}/p$, it was assumed (see Refs [20, 22, 23]) that, as the plasma formed near the cathode moves toward the anode, the electric field along the discharge gap undergoes a redistribution, and in the part of the gap between the plasma and the foil the field strength becomes higher than $E_{\text{cr}}$. In other words, in studying the mechanism of beam production in Refs [20, 22, 23], the researchers compared the mean values of $E/p$ attained in the experiments with the critical values $E_{\text{cr}}/p$ calculated by Korolev and Mesyats [7] on the basis of the traditional local electron runaway criterion (see Section 2.1). After the publication of Tkachev and Yakovlenko's paper [16], the parameter $E_{\text{cr}}/p$ determined by the nonlocal criterion (9) was adopted as the critical field [24]. Accordingly, it was assumed that the main current pulse emerges when the value of the parameter $E/p$ approaches $E_{\text{cr}}/p$ and the point in the $(U, pd)$-plane approaches the runaway curve (see Section 3.2).
We begin by presenting the results of the experiments [20–25] in which an electron beam is produced in a gas-filled diode in a mode where the maximum beam-current amplitudes are attained behind the anode.

#### 3.1.2 Experimental facility and methods

The research made use of three nanosecond pulse generators with capacitive energy storage (of the SINUS and RADAN brands), described in detail in Refs [56–58], and a fourth generator with inductive energy storage [59]. A schematic diagram of the experimental facility based on the first three generators is shown in Fig. 9. The first pulse generator, described in detail in Ref. [56], produced in a 30-$\Omega$ matched load a $\sim 200$-kV pulse with a half-height duration of roughly 3 ns and a voltage pulse front of roughly 1 ns. This voltage pulse was fed to cathode 2, while the pressure in the gas gap was varied from $10^{-2}$ to 760 Torr. Two different types of cathodes were utilized. The first was a set of three cylinders (12, 22, and 30 mm in diameter) made from 50-$\mu$m-thick Ti foil. The cylinders were inserted into one another and attached to a duralumin base 36 mm in diameter; all three cylinders had a common axis. The height of the cylinders decreased by 2 mm from the cylinder with the smallest diameter to that with the largest one. The second cathode was made from graphite in the form of a tablet 29 mm in diameter with rounded edges; the convex side of the tablet, with a 10-cm radius of curvature, faced the foil. The graphite cathode was attached to a copper holder 30 mm in diameter. The beam was extracted through the 45-$\mu$m AlBe foil 3. The gas gap could be varied from 10 to 28 mm.
The second generator (RADAN-303) had a wave impedance of 45 $\Omega$ and produced in a matched load voltage pulses in the 50–170-kV range (with an open-circuit voltage of up to 340 kV), with a half-height duration of roughly 5 ns and a voltage pulse front of about 1 ns [57]. The voltage across the gas gap could be varied continuously by changing the gap of the main spark-gap switch. The third generator (RADAN-220) had a wave impedance of 20 $\Omega$ and produced in the discharge gap a voltage pulse with an amplitude of up to 220 kV, a half-height duration of roughly 2 ns, and a voltage pulse front of about 0.3 ns [58]. The research involved a flat anode and a small-sized cathode (as in most works devoted to the study of X-ray radiation and fast electrons in gas-filled diodes), which made possible additional enhancement of the electric field at the cathode. The gas-filled diodes for both RADAN generators had the same design and were similar to those used in Refs [20, 22–25]. The cathode was fabricated either from a steel tube 6 mm in diameter with a 50-$\mu$m wall thickness, attached to a metal rod of the same diameter, or from a graphite rod 6 mm in diameter with rounded or sharp ends. The flat anode (through which the electron beam was extracted) was fabricated from 40-$\mu$m-thick AlBe foil, from 10-$\mu$m-thick Al foil, or from a grid with a light transparency in the 20–70% range. The cathode–anode spacing could be varied from 13 to 20 mm. In some experiments, the discharge gap was placed in a gas chamber with windows, which made it possible to evacuate the chamber and to change the composition and pressure of the gases in the discharge gap. The fourth generator was equipped with an inductive energy store comprising a current interrupter based on semiconductor opening switch (SOS) diodes connected in parallel with the load [59, 60]. It generated in the discharge gap
voltage pulses with an amplitude of roughly 50 kV, a half-height duration of about 15 ns, and a voltage pulse front of 7–15 ns. Two types of discharge gaps were used with this generator. The anode was flat and fabricated from foil, while the cathode had a small radius of curvature and was fabricated from 50-µm-thick steel foil in the shape of a tube 6 mm in diameter or of a blade 8 cm long with rounded edges. The cathode–anode spacing could be varied between 8 and 30 mm. To record the signals from the capacitive divider, the Faraday cups, and the shunts, the researchers used a TDS-684B oscilloscope with a 1-GHz bandwidth (5 samples per nanosecond) or a TDS-334 oscilloscope with a 0.3-GHz bandwidth (2.5 samples per nanosecond). The discharge glow was photographed with a digital camera. The integrated signal of the discharge emission was recorded with a vacuum FEK-22 photodiode, whose output signal was fed to the TDS-334 oscilloscope.

#### 3.1.3 Results of measurements

The main experimental findings are as follows. As shown in Refs [20–25], with a nonuniform electric field, a small-sized cathode, and a short voltage front, a discharge mode is realized in which the gas-filled diode produces, at atmospheric pressure, an electron beam with a current amplitude of tens or even hundreds of amperes. The electron beam appears at the front of the voltage pulse and has a current-pulse half-height duration of $\sim 0.3$ ns (Fig. 10a) or even shorter. At an air pressure of 1 atm in the diode, the beam current amplitude may be as high as 35 A when the beam is extracted through the 40-µm-thick AlBe foil with the second generator, and 75 A with the third. The beam current amplitude becomes higher when air is replaced with helium: for helium at atmospheric pressure, the electron beam current beyond the AlBe foil, obtained with the second generator under optimal conditions, exceeded 200 A.
As the voltage amplitude increases, the maximum of the beam current shifts toward the beginning of the voltage pulse and, at the highest voltages, occurs at the pulse front (Fig. 10b). As the voltage drops, the time lag of the electron beam increases to roughly 1 ns, and the beam is recorded at the beginning of the quasi-stationary phase of the voltage pulse; in this case, the beam current amplitude is drastically reduced. Analysis of the oscillograms of the beam current beyond the foil, conducted for various foil thicknesses, shows that as the foil gets thicker, the maximum of the beam current shifts toward the beginning of the voltage pulse. With a 10-µm-thick foil, the maximum of the beam current was recorded after the first maximum of the voltage pulse, whereas for the thickest foil it shifted toward the first maximum of the voltage oscillogram. If the electrode spacing, the length of the voltage pulse front, the type of gas, and the gas pressure (in the given case, air at 1 atm) are fixed, there exists only a fairly narrow range of optimal open-circuit voltages of the generator within which the maximum amplitudes of the electron-beam current beyond the foil are observed (Fig. 11). The amplitude of the beam current beyond the foil with the second generator attained its maximum at a voltage of about 210 kV. However, under these conditions the amplitudes of the voltage pulse across the discharge gap and of the discharge current are practically linear functions of the amplitude of the open-circuit voltage of the second generator (see curves 2 and 3 in Fig. 11). We comment on this result in Section 4.3. Note that the voltage amplitude of the third generator was about 220 kV, which satisfies the optimal conditions for generating a beam current in a gas-filled diode (see Fig. 11).
Here, the wave impedance of the third generator was less than half that of the second one.

Figure 10. (a) Oscillograms of the voltage across the gas-filled diode (curve 1) and of the electron beam current (curve 2) [23]; the horizontal axis is marked in 1 ns per division, and the vertical axes in 27 A and 1 kV per division. (b) Oscillograms of the current pulses (curves 1 and 3) in the electron beam beyond the 40-µm-thick AlBe foil and of the voltage pulses (curves 2 and 4) across the gas-filled diode, obtained with the second generator in air at atmospheric pressure. The gap in the diode is $d = 16$ mm, and the open-circuit voltages are 260 kV (curves 1 and 2) and 155 kV (curves 3 and 4). (c, d) Oscillograms of the current pulses in the electron beam beyond the 40-µm-thick AlBe foil, obtained with the third generator. The gap in the diode is $d = 16$ mm, and the collector diameters in figures c and d are 20 and 50 mm, respectively. (Taken from Ref. [24].)

Figure 11. Dependences of the electron beam current beyond the 40-µm-thick AlBe foil (1), the voltage across the discharge gap (2), and the discharge current (3) on the open-circuit voltage of the second generator. (Taken from Ref. [24].)

Figure 12. Energy distribution of the beam electrons at an air pressure of 1 atm in the gas-filled diode. The distribution was recorded by the foil method at a 270-kV open-circuit voltage of the first generator. The gap in the diode was $d = 17$ mm. (Taken from Refs [20, 25].)

Figure 13. Dependence of the discharge current amplitude (1) and the beam current beyond the 40-µm-thick AlBe foil (2) on the cathode–anode separation (the third generator, a cathode of the first type, and an air pressure of 1 atm). The time resolution of the recording system was 1 ns.
The electrons in the beam in the optimum mode for the air-filled diode have a mean energy that amounts to approximately 60% of the energy corresponding to the maximum voltage across the discharge gap (for the second generator, the mean energy was about 65 keV). A typical energy distribution of the electrons of a beam produced is depicted in Fig. 12. Clearly, the distribution possesses a large half-width. The half-height of this distribution corresponds to electron energies from 40 to 100 keV, i.e., the electrons in the beam are produced at different voltages across the discharge gap. Similar dependences of the current amplitudes on the open-circuit voltage of the second generator were obtained when the foil was replaced by a grid, with the beam current amplitude usually decreasing with decreasing grid transparency. Studies of the effect of the electrode spacing on the beam current in air, carried out with the second and third generators and a cathode in the form of a steel tube, have shown that reducing the gap below 16–17 mm leads to a decrease in the beam current beyond the foil. The same effect showed itself when the gap was widened to 18 mm. For a gap wider than 18 mm, partial breakdown to the metal side wall of the diode was observed in the gas-filled diode of the second and third generators. The dependence of the beam current amplitude in air on the gap size, obtained at accelerator 3 (see Fig. 9), is plotted in Fig. 13. Clearly, the maximum current amplitude is attained at a gap size of 16 mm. Important information may be drawn from the oscillogram of the current pulse (Fig. 10c), which was recorded with a small-sized collector and the highest possible time resolution of the recording system. First, the duration of the beam current pulse does not exceed 0.3 ns, even at the highest possible time resolution of the recording system.
Second, the time it takes for the beam current amplitude to fall off is in the subnanosecond range. Thus, after the beam current reaches its maximum, the conditions needed for the production of an electron beam in the gas-filled diode deteriorate very rapidly, even though the voltage across the diode has not changed significantly. Figure 14a displays photographs of the discharge glow in air: the end view taken through a grid with 50% transparency, and another view at an angle. The discharge was of the volume type, and bright spots were visible only at the cathode. The photographs in Fig. 14b (side view) show that in the discharge gap there is a glow in the form of diffusion jets with an overall diameter at the anode of no less than 12 mm. The diameter of the luminous spot on the luminescent screen, which is formed by the electron beam produced with the third generator in air at a pressure of 1 atm at a distance of 1 cm from the foil, reached 4 cm. Thus, the maximum current amplitudes of a beam produced in a gas-filled diode are attained at a subnanosecond front of the voltage pulse, certain voltages of the generator, and a volume discharge in the form of ‘jets’. As the plasma formed at the cathode moves toward the anode, the conditions in the gap can become close to those for the runaway curve (see Section 2.3), even if at the beginning of the motion toward the anode the plasma parameters corresponded to the region of Townsend electron multiplication. For instance, if we assume that $d = 28$ mm and $U = 200$ kV for the experiments described in Refs [21, 24], then at $p = 1$ atm we get $pd = 2 \times 10^3$ Torr cm. In Fig. 7a, the respective point in the $(U, pd)$-plane is marked by a large open circle in the upper right corner. Clearly, for the electron runaway criterion to be met, the value of $pd$ must be smaller than the experimental value by a factor of approximately 30.
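These $pd$ estimates are easy to verify with back-of-the-envelope arithmetic (a sketch; only the values quoted in the text are used):

```python
# Check of the pd estimates quoted in the text for d = 28 mm, U = 200 kV.
p_torr = 760.0                     # 1 atm in Torr

# Experimental point of Refs [21, 24]
pd_exp = p_torr * 2.8              # Torr cm
print(f"experimental pd = {pd_exp:.0f} Torr cm")        # ~2.1e3 Torr cm

# pd at which the runaway criterion is met at U = 200 kV (from the text)
pd_run = 55.0                      # Torr cm
print(f"pd must shrink by a factor of ~{pd_exp / pd_run:.0f}")

# Equivalent gap at atmospheric pressure
print(f"d = {pd_run / p_torr * 10:.1f} mm at 1 atm")    # ~0.7 mm
```

With $pd$ rounded to $2 \times 10^3$ Torr cm the ratio is about 36, which the text quotes as a factor of approximately 30; the unrounded value is closer to 40.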
The electron runaway criterion may be met at instants of time when the plasma traveling from the cathode has moved closer to the anode: at $U = 200$ kV this happens when $pd = 55$ Torr cm, which means the criterion is met, for instance, at $d = 0.7$ mm (in Fig. 7a, this point is marked by a large open square). The discharge production stage in a gas-filled diode is discussed below in Section 4.2. ### 4. Production of a nanosecond discharge at atmospheric pressure #### 4.1 Experiments on volume-discharge production at atmospheric pressure without a supplementary preionization source The usual approach to creating a volume discharge in atomic and molecular gases and their mixtures at elevated pressures is to preionize the discharge gap by using various sources of ionizing radiation [7]. The plasma of such a discharge is widely used in pulsed dense-gas lasers [63]. There is also another way to produce a diffusive discharge without preionizing the gas at atmospheric pressure in a nonuniform electric field with nanosecond excitation pulses [6]. In this case, short voltage pulses with a steep edge (from several nanoseconds to fractions of a nanosecond) are applied to the discharge gap. However, the reasons why, and the conditions in which, a volume discharge forms in a nonuniform nanosecond electric field have not been studied, and the input power density has not exceeded 100 MW cm$^{-3}$. In this section we examine the results of studies concerning volume-discharge formation in a nonuniform electric field with voltage pulses of nanosecond duration [23, 24, 59]. The nanosecond-pulse generators used in the experiments were those described in Section 3.1. Volume discharges were ignited in air, nitrogen, helium, neon, argon, and krypton. The following facts were established from measurements of the voltage pulses across the gas-filled diode and the discharge current and from observations of the shape of the discharge in the discharge gap. 
Within a broad range of experimental conditions and for all the investigated gases, between the tube cathode with a sharp-pointed edge and the anode there emerges a volume discharge in the form of diffusion cones or ‘jets’ (see Fig. 14). The discharge remains diffusive at different pressures, and only at the cathode are there bright spots which appear at the front of the voltage pulse. Note that the volume discharge can be obtained in conditions corresponding to the production of an intense electron beam with electron energies in the tens and hundreds of kiloelectron-volts (see Section 3.1), as well as in conditions where no electron beam is detected beyond the foil. As the interelectrode gap is made smaller, the design of the cathode is changed, or the pressure is varied, individual channels may be observed against the background of the diffusive discharge (see photograph 2 in Fig. 14b), while in nonoptimal conditions (e.g., a small gap) the discharge may transform into the spark stage. As the generator voltage increases (in optimal conditions for the gap), against the background of the volume discharge there also appear brighter filamentary channels, and a drop in voltage is registered on the voltage oscillogram (oscillogram 2 in Fig. 10b). Figures 14a and b display photographs of the discharge glow in air, taken from a side view in the case of a foil anode, and in end and slant views in the case of a grid anode. As noted earlier, the discharge appears in the form of volume jets which originate from the bright spots at the cathode. For a blade-shaped cathode, the discharge also appears in the form of diffusion jets (see Fig. 14c). The discharge current is recorded with a very small time lag (fractions of a nanosecond) relative to the instant the voltage is applied to the discharge gap. The amplitude and duration of the discharge current in a volume discharge depend on the generator parameters, the interelectrode gap, and the type and pressure of the gas.
For instance, for the first generator with an open-circuit voltage of roughly 270 kV, the discharge current amplitude amounted to about 2400 A. When the discharge remained volumetric for 3 ns, with the first generator the current density at the anode reached 3 kA cm$^{-2}$, the specific energy deposited in the gas amounted to 1 J cm$^{-3}$, and the input power density was more than 400 MW cm$^{-3}$. When the volume stage was maintained for 5 ns, the current density at the anode reached 1.5 kA cm$^{-2}$, the input power density was roughly 200 MW cm$^{-3}$, and the specific energy deposited in the gas again reached $\sim 1$ J cm$^{-3}$. In the given experiment, the discharge was maintained in the self-sustained regime, in which the voltage across the discharge gap reaches its maximum in the quasi-stationary discharge stage; this implies that a high electron concentration is already created in the discharge gap at the front of the voltage pulse. Note that, usually, when the voltage pulse applied to the discharge gap has a steep edge, there appears an overvoltage peak, even if UV or X-ray preionization has been used, and only then does the quasi-stationary discharge stage set in [63, 64]. ### 4.2 On preionization mechanisms #### 4.2.1 Fast electrons. Let us discuss in detail the first phase, i.e., the formation mechanism of the plasma that approaches the anode. As noted earlier, a nanosecond discharge at atmospheric pressure appears diffusive in photographs, i.e., it contains no spark channels (see Fig. 14). Only near the cathode are there small bright regions of plasma glow. It is well known that a volume discharge at atmospheric pressure is formed, even in the pulsed mode, provided there has been effective preionization. It is only natural to assume that such preionization is provided by fast electrons.
In this connection, it would be interesting to follow the changes in the various characteristics of the electron beam being produced in the range of $pd$ values fitting the electron runaway curve. The range of values represented in Fig. 15 corresponds to the straight line $U = \text{const}$ (see Fig. 7a) connecting the points marked by an open square and a large open circle. As one would expect, the fraction of fast electrons begins to drop rapidly (see Fig. 15, curve 1) at $pd$ values such that for a given value of $U$ the magnitude of $z_i d = pd\, \xi(U/pd)$, which characterizes runaway of electrons, becomes comparable to unity (curve 3). The sharp bend in curves 1 and 2 corresponds to the vicinity of the point $pd \approx 55$ Torr cm, where $pd\, \xi(U/pd) \approx 1$. Note that within the range of parameters represented in Fig. 15, the magnitude of $E/p$ everywhere exceeds $(E/p)_{\text{max}} = 0.2$ kV cm$^{-1}$ Torr$^{-1}$, at which value the Townsend coefficient begins to decrease and, according to the commonly adopted notion, the majority of electrons should run away. However, it is seen from Figs 7 and 8 that the electron runaway mode is implemented only at small $pd$ values. This is an additional argument in favor of the substantial difference between the criteria $E/p = (E/p)_{\text{max}} = E_{\text{cr1}}/p$ and $z_i(E_{\text{cr1}}, p)d = 1$ discussed in Sections 2.1 and 2.3. What may seem somewhat unexpected is that, despite the drop in the fraction of beam electrons at $U_{\text{cr}}(pd) > U$, their current near the same point $pd \approx 55$ Torr cm increases dramatically (curve 2). This is quite natural, however. With the number of ionization acts on the rise, the number of fast electrons must also increase. At the same time, as $pd$ grows, there is an ever increasing number of low-energy electrons compared to the number of runaway electrons. Hence, the fraction of runaway electrons drops.
The rise in the electron beam current beyond the anode with increasing $p$, as well as with increasing $d$, has been verified in experiments [21]. In view of the multiplication of runaway electrons, it can be expected that they are the cause of strong preionization at relatively low voltages, $U < U_{\text{cr}}(pd)$. Here, the plasma inhomogeneity may be of importance. #### 4.2.2 Field concentration. It is natural to relate the mechanism of plasma formation in the volume between the cathode and the anode to the appearance of fast electrons emitted by the above-mentioned small plasma protuberances at the cathode. Fast kilovolt electrons ensure preionization of the gas between the cathode and the anode. Hence, the discharge at atmospheric pressure is relatively uniform. The production of fast electrons near a tip is related to the field concentration at the ends of the conducting plasma protuberances near the cathode. To clarify this point, we examine the results of the well-known electrostatic problem on the distribution of the electric field potential, when the cathode has a conducting asperity in the form of one-half of an elongated ellipsoid of revolution, whose axis is perpendicular to the planes of the plates [65] (Fig. 16a). 
The distribution of the potential is given by the formula \[ \varphi(\xi, \zeta) = -\frac{U_0}{d} x(\xi, \zeta) \left\{ 1 - \left[ \ln \left( \frac{1 + \varepsilon}{1 - \varepsilon} \right) - 2\varepsilon \right]^{-1} \times \left[ \ln \left( \frac{\sqrt{1 + \xi/a^2} + \varepsilon}{\sqrt{1 + \xi/a^2} - \varepsilon} \right) - \frac{2\varepsilon}{\sqrt{1 + \xi/a^2}} \right] \right\}, \] where \( \xi \) and \( \zeta \) are the prolate spheroidal coordinates, \( U_0 \) is the potential difference between the flat electrodes, \( x(\xi, \zeta) = (a/\varepsilon)\sqrt{(1 + \xi/a^2)(1 + \zeta/a^2)} \) is the coordinate along the field direction, \( a \) and \( b \) are the major and minor semiaxes of the ellipsoid, respectively, and \( \varepsilon = \left[ 1 - (b/a)^2 \right]^{1/2} \) is the eccentricity of the ellipsoid. This solution is valid if \( d - a \gg b \). We can draw the following conclusion from the exact solution (Fig. 16b). As expected, the voltage drop near the tip occurs over a distance on the order of the tip’s radius of curvature (\( \sim b \)). However, the magnitude of this drop is determined not by the tip’s curvature but by the distance \( a \) from the tip’s end to the cathode: \( \varphi(a + b) = -U_0(a + b)/d \). Indeed, letting the tip’s radius of curvature go to zero, we get an infinitely large field strength but a finite drop in the potential: \( \varphi(a) = -U_0 a/d \). The magnitude of this drop is determined by how far from the cathode the end of the inhomogeneity tip is. Of course, these general conclusions about the size of the region of the potential drop and about its magnitude are valid not only in the case of an elliptic protuberance but also for a ‘needle’ of any shape. Here are some estimates. From the photographs in Fig. 14 it follows that the size of the bright regions near the cathode’s surface is approximately 1 mm.
We assume that the electron number density in the cathode spot is high and that the protuberance is a good conductor. We also assume that the conductivity of the plasma surrounding the protuberance is moderate. In such conditions, the distribution of the potential that forms is close to the one considered in the model electrostatic problem. The electrons that have been emitted by the tip and have travelled a distance \( \sim (2-3)b \) acquire an energy \( \varepsilon_e \approx eU_0 a/d \). The energy of the fast electrons 0.5–1 ns after the voltage pulse has been applied to the discharge gap, with the peak \( U_0 \approx 100 \) kV, will amount to roughly 1–4 keV. The range of these electrons, \( R = (\varepsilon_e/\varepsilon_i)l_i \), increases roughly in proportion to the square of \( \varepsilon_e \) (Fig. 17) and amounts to \( R \sim 0.1-1 \) cm at \( \varepsilon_e \sim 1-4 \) keV. Here, \( \varepsilon_i = 46 \) eV is the energy spent on a single ionization act (the energy of formation of an ion pair), \( l_i = 1/\sigma_iN \sim 0.1 \) mm is the mean free path of the electron between successive ionization acts, \( \sigma_i(\varepsilon_e) \) is the ionization cross section, and \( N \approx 2.4 \times 10^{19} \) cm\(^{-3}\) is the helium density. #### 4.2.3 On preionization by accelerated electrons. Let us take a qualitative look at the possible role that externally injected accelerated electrons play in the formation of an ionization wave [66]. When an electron is injected into the gas with a sufficiently high velocity, it is accelerated continuously. According to Fig. 1, for instance, for an injected 1-keV electron to remain accelerated at all times at atmospheric pressure, an electric field strength of 25 kV cm\(^{-1}\) is sufficient, and this is much lower than the peak field strength in the experiments in which electron beams are produced (see Section 3.1). Moving in the medium, this electron leaves behind a trace of secondary electrons.
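The energy and range estimates above can be reproduced in a few lines (a sketch; the protuberance height implied by the quoted energies is computed only as an illustration, and $l_i$ is held fixed, whereas in reality it grows with energy):

```python
# Fast-electron energy and range estimates from the text.
# All input numbers (U0, d, eps_i, l_i) are quoted in the passage.
U0 = 100e3       # peak voltage, V
d = 1.6          # interelectrode gap, cm
eps_i = 46.0     # energy per ion pair, eV
l_i = 0.01       # cm (~0.1 mm), mean path between ionization acts

for eps_e in (1e3, 4e3):              # eV, the quoted 1-4 keV range
    a = eps_e * d / U0                # implied protuberance height (eps_e = e*U0*a/d), cm
    R = (eps_e / eps_i) * l_i         # range, cm, with l_i held fixed
    print(f"eps_e = {eps_e/1e3:.0f} keV: a ~ {a*10:.2f} mm, R ~ {R:.2f} cm")
```

Both ranges (about 0.2 cm and 0.9 cm) fall inside the $R \sim 0.1{-}1$ cm interval quoted above, and the implied protuberance heights (fractions of a millimeter) are consistent with the $\sim 1$ mm bright regions seen in Fig. 14.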
It is these secondary electrons, left along the fast electron’s trace, that form avalanches.\(^3\) Accordingly, for the spatial–temporal distribution of the number of electrons produced by a fast electron we have the result \[ n_e = \exp \left\{ v_i \left[ t - \tau(x) \right] \right\}, \] where \( v_i = z_i u_d \) is the rate of ionization in the avalanche, \( u_d \) is the electron drift velocity, and \( \tau(x) \) is the time that it takes the fast electron to travel from the cathode (\( x = 0 \)) to the point \( x \) in question. To calculate the electron number density distribution on the basis of \( n_e \), the latter must be multiplied by the number density of ionization acts along the electron’s path and by the path density. Calculations of the path density require a separate study. Ignoring the friction force for fast electrons and assuming that they move along the field, we get \[ n_e(v_i t, z_i x) = \exp \left[ v_i t - b \left( \sqrt{1 + a z_i x} - 1 \right) \right], \] \[ v_i \tau(x) = b \left( \sqrt{1 + a z_i x} - 1 \right), \] where \[ a = 2 \frac{eE}{m_e} \frac{1}{z_i v_0^2}, \quad b = \frac{m_e}{eE} z_i v_0 u_d, \] with \( v_0 \) being the initial velocity of the externally injected electron, and \( E \) the external electric field strength. \(^3\) The shape of a solitary electron avalanche in helium has been examined by Tkachev and Yakovlenko [67]. An ionization wave is generated when the avalanche build-up time \( 1/v_i \) becomes shorter than the time \( \tau_d = \tau(d) \) it takes the fast electrons to travel the length of the discharge gap, i.e., when \( v_i \tau_d > 1 \). At such ionization rates, the electron number density behind the fast electron grows rapidly. If the ionization rate is low, the entire discharge gap becomes ionized nearly simultaneously, since a uniform seed ionization is provided, and no ionization wave is generated.
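The closed-form expression above follows from the elementary kinematics of a uniformly accelerated electron, $x = v_0\tau + (eE/m_e)\tau^2/2$; a minimal numerical cross-check (the field, drift velocity, and Townsend coefficient below are illustrative placeholders, not experimental values):

```python
import math

# Cross-check: the closed form b*(sqrt(1 + a*z_i*x) - 1) equals v_i * tau(x),
# where tau(x) is the transit time of a freely accelerated electron.
e_over_m = 1.76e11      # C/kg
E = 1.0e7               # V/m (100 kV/cm, illustrative)
v0 = 1.87e7             # m/s (~1 keV injection energy)
z_i = 1.0e3             # 1/m (illustrative Townsend coefficient)
u_d = 1.0e5             # m/s (illustrative drift velocity)

accel = e_over_m * E
a = 2.0 * accel / (z_i * v0**2)     # dimensionless combination from the text
b = z_i * v0 * u_d / accel
v_i = z_i * u_d                     # ionization rate in the avalanche

def tau_kinematic(x):
    # positive root of x = v0*t + accel*t^2/2 (friction ignored)
    return (math.sqrt(v0**2 + 2.0 * accel * x) - v0) / accel

for x in (0.001, 0.005, 0.016):     # m
    closed = b * (math.sqrt(1.0 + a * z_i * x) - 1.0)
    direct = v_i * tau_kinematic(x)
    assert abs(closed - direct) < 1e-9 * max(1.0, abs(direct))
    print(f"x = {x*100:.1f} cm: v_i*tau = {direct:.3e}")
```

The two expressions agree identically, since $a z_i x = 2(eE/m_e)x/v_0^2$ and $b = v_i\, m_e v_0/(eE)$.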
When electron multiplication is rapid, the plasma layer in the region extending from the cathode to a point \( x_{cr} < x_c \) (\( x_c \) is the coordinate of the fast electron) screens the external electric field, and the cathode appears to close in on the anode. This screening occurs in a sizable part of the volume when, due to avalanche multiplication, the electron number density reaches the value \( N_{cr} = U_0/(4\pi e d^2) \sim 10^{10} \text{ cm}^{-3} \) (in Gaussian units). When the ionization wave approaches the anode, \( x_{cr} \approx d \), the field strength rapidly increases, while the Townsend coefficient decreases. Then, as noted earlier, in a narrow layer between the plasma formed in the volume and the anode the nonlocal electron runaway criterion (9) is met, and a high-power electron beam is produced. ### 4.3 Background multiplication front in a nonuniform field #### 4.3.1 The simplest model Let us discuss in detail the mechanism for the propagation of ionization, originating in the exponential multiplication of background electrons with a low number density in a nonuniform electric field [68–70]. At the points in space where the field strength is higher, the multiplication is more intense, while in regions with a low field strength the multiplication is less intense. The field is concentrated at the cathode spot. Hence, near the cathode’s surface the electrons multiply faster. As the electron number density grows, the electric field is screened and the plasma boundary moves on. To explain the mechanism of the multiplication wave, we take the simplest model possible. We ignore electron drift and place the boundary between plasma and gas at the points where the plasma density reaches a certain critical value \( N_{cr} \) at which the field is completely screened.
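The order of magnitude of the screening density is easy to verify (a sketch in Gaussian units; the representative values of $U_0$ and $d$ are assumed, since the text quotes the estimate only as $\sim 10^{10}$ cm$^{-3}$):

```python
import math

# Order-of-magnitude check of N_cr = U0 / (4*pi*e*d^2) in Gaussian units.
# U0 and d are assumed representative values for these experiments.
U0_volts = 200e3
d_cm = 1.6
e_esu = 4.803e-10                  # electron charge, esu
U0_statvolt = U0_volts / 299.79    # 1 statvolt ~ 299.79 V

N_cr = U0_statvolt / (4.0 * math.pi * e_esu * d_cm**2)   # cm^-3
print(f"N_cr ~ {N_cr:.1e} cm^-3")   # a few times 1e10, i.e. ~1e10 as quoted
```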
In this case, the dependence of the electron number density on the radius vector \( r \) of the point in question and on time \( t \) is given by the expression \[ N_e(r, t) = \begin{cases} N_0 \exp \left[ v_i(E(r))t \right] & \text{for } N_0 \exp \left[ v_i(E(r))t \right] < N_{cr}, \\ N_{cr} & \text{for } N_0 \exp \left[ v_i(E(r))t \right] \geq N_{cr}, \end{cases} \] (11) where \( N_0 \) is the background plasma density. Clearly, in model (11) the direction in which the ionization wave propagates does not depend on the sign of the field’s projection onto this direction, since the ionization rate is determined by the absolute value of the electric field strength. On this basis, Tkachev and Yakovlenko [67] and Yakovlenko [68, 69] proposed a photonless model of a streamer, based on equation (11). #### 4.3.2 Velocity of the multiplication front The coordinates of the multiplication wave front are determined by the points at which the electron number density reaches its critical value. Let us examine the time dependence of the coordinate \( z(t) \) of one of the points of the wave front along the normal to the front. Implicitly, \( z(t) \) is determined by the expression \[ v_i(E_0(z(t)))t = \text{Ln}, \quad \text{Ln} \equiv \ln \frac{N_{cr}}{N_0}, \] (12) where \( E_0(z) \) is the electric field strength at the surface of the front. Generally speaking, both Ln and \( N_{cr} \) are functions of \( E_0 \). However, we ignore this dependence in view of its logarithmic nature. Taking the time derivative of formula (12), we get \[ u_{fr} = \frac{dz}{dt} = v_i \left[ \left( \frac{\text{d} \ln v_i}{\text{d} \ln E} \right)_{E=E_0} \left| \frac{\nabla E}{E} \right| \, \text{Ln} \right]^{-1}. \] (13) If we approximate a section of the surface near the wave front by a sphere of radius \( r_0 \), then \( |\nabla E/E|_{E=E_0} = 2/r_0 \). Accordingly, one has \[ u_{fr} = v_i r_0 \left[ \left( \frac{\text{d} \ln v_i}{\text{d} \ln E} \right)_{E=E_0} 2\, \text{Ln} \right]^{-1}.
\] (14) The ionization rate \( v_i = z_i u_{de} \) can be written as the product of the Townsend coefficient \( z_i(E, p) = p\, \xi(E/p) \) and the electron drift velocity \( u_{de}(E/p) \). Thus, the velocity of the ionization front is expressed in terms of functions of \( E_0/p \) that are universal for the given gas: \[ u_{fr} = \frac{v_i r_0}{\zeta(E_0/p)}, \] (15) \[ \zeta\left(\frac{E_0}{p}\right) = 2\, \text{Ln} \left\{ \frac{\text{d} \ln \left[ u_{de}(E/p)\, \xi(E/p) \right]}{\text{d} \ln (E/p)} \right\}_{E/p=E_0/p}. \] #### 4.3.3 Front velocity in helium and xenon Let us discuss in detail the velocity of the ionization front in helium and xenon, since for these gases the ionization and drift characteristics have been thoroughly described (see Section 2.2). For helium [16], one finds \[ \xi(x) = 5.4 \exp \left[ -\left( \frac{14}{x} \right)^{1/2} - 1.5 \times 10^{-3} x \right] [\text{Torr}^{-1}\,\text{cm}^{-1}], \] (16) \[ u_{de} = 10^6 x \ [\text{cm s}^{-1}]. \] Substituting formula (16) into expressions (15), we get \[ u_{fr} = \frac{v_i r_0}{\zeta(x)}, \quad \zeta(x) = 2\, \text{Ln} \left(1 + 1.87 x^{-1/2} - 1.5 \times 10^{-3} x\right). \] (17) Here, \( x = E_0/p \) is measured in V cm\(^{-1}\) Torr\(^{-1}\). For helium, one has \((E_0/p)_{\text{cr}} \approx 720 \text{ V cm}^{-1} \text{ Torr}^{-1}\). In the case of xenon, the following approximations were used in modeling (see Refs [17, 26]): \[ \xi(x) = 45 \exp \left[-31.1 \left(\frac{1}{x}\right)^{1/2} - 1.7 \times 10^{-4} x\right] [\text{Torr}^{-1}\,\text{cm}^{-1}], \] (18) \[ u_{de} = \frac{1.3 x + 1.3 x^6}{1 + 7.31 \times 10^{10} x^{5.8}} + 1.3 \times 10^5 x \exp \left(-\frac{2.2}{x}\right) [\text{cm s}^{-1}]. \] (19) For xenon, it follows that \((E_0/p)_{\text{cr}} \approx 7 \text{ kV cm}^{-1} \text{ Torr}^{-1}\). Figure 18 displays the dependence of the wave front velocity in helium and xenon on the reduced electric field strength.
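The helium formulas are straightforward to evaluate numerically. In the sketch below, Ln and the front radius $r_0$ are assumed illustrative values, while $\xi(x)$ and $u_{de}(x)$ are the approximations (16); the logarithmic derivative entering (17) is also checked against (16) by finite differences:

```python
import math

# Ionization-front velocity in helium from Eqns (15)-(17).
Ln = 20.0          # ln(N_cr/N_0), assumed illustrative value
r0 = 0.1           # cm, assumed front curvature radius
p = 760.0          # Torr

def xi(x):         # reduced Townsend coefficient, Torr^-1 cm^-1, Eqn (16)
    return 5.4 * math.exp(-math.sqrt(14.0 / x) - 1.5e-3 * x)

def u_de(x):       # electron drift velocity, cm/s, Eqn (16)
    return 1.0e6 * x

def nu_i(x):       # ionization rate, s^-1
    return p * xi(x) * u_de(x)

def zeta(x):       # denominator of Eqn (17)
    return 2.0 * Ln * (1.0 + 1.87 / math.sqrt(x) - 1.5e-3 * x)

x = 100.0          # E0/p in V cm^-1 Torr^-1, illustrative
u_fr = nu_i(x) * r0 / zeta(x)
print(f"nu_i = {nu_i(x):.2e} s^-1, u_fr = {u_fr:.2e} cm/s")

# Finite-difference check of the logarithmic derivative in Eqn (17):
# d ln[u_de * xi] / d ln x should equal zeta(x) / (2*Ln).
h = 1e-4
num = (math.log(u_de(x * (1 + h)) * xi(x * (1 + h))) -
       math.log(u_de(x * (1 - h)) * xi(x * (1 - h)))) / (2.0 * h)
assert abs(num - zeta(x) / (2.0 * Ln)) < 1e-3   # 1.87 is a rounded coefficient
```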
Equation (17) was verified directly by numerical calculations for the case of a spherically symmetric bunch [68–70]. The distribution of the electron number density at different moments in time was calculated by formula (11). The results were then used to calculate the values of the front radius \(r_{\text{fr}}\) at different moments in time, which were approximated by a linear dependence determining the wave front velocity. Several points obtained in this way are shown in Fig. 18a. #### 4.3.4 Front velocity in N\(_2\) and SF\(_6\) To analyze the velocity of the background multiplication front in N\(_2\) and SF\(_6\) (Fig. 18b, c), we used the quantities \(z_i\) and \(u_{\text{de}}\) tabulated in Refs [19] and [18], respectively. The nonmonotonic character of the velocity of the background multiplication front in SF\(_6\) can be related to the nonmonotonic nature of the derivative of the ionization rate. This nonmonotonicity is caused by the fact that SF\(_6\) possesses three threshold values of the ionization energy (20, 40, and 50 eV). #### 4.3.5 Simulating the multiplication wave According to the experimental data, ionization of the discharge gap occurs in the form of ‘jets’. Such a jet can be qualitatively represented as a sector of a circle in cylindrical geometry [71]. Hence, to establish the mechanisms of the breakdown of the interelectrode gap, we used a one-dimensional diffusion–drift model described in detail in Ref. [62]. This model describes the development of ionization between coaxial cylindrical electrodes with \(r_0 < r < r_1\), where \(r_0\) and \(r_1\) are the radii of the inner and outer electrodes, respectively. The processes of plasma formation and electric-field screening were described by the equations of momentum transfer and continuity for the electrons and ions, as well as the Poisson equation for the electric field.
The dependences of the various quantities present in the equations of the diffusion–drift model (ionization rates, drift velocities, and diffusion coefficients) on the field strength were specified by the approximations obtained in Ref. [26]. Calculations have shown that in the case of almost flat electrodes \((d = r_1 - r_0 \ll r_1)\), propagation of the ionization wave is possible only at low voltages and, accordingly, at small Townsend multiplication factors \(z_i\) \((z_i d \ll 1)\). The ionization wave propagating from the cathode to the anode in the case of almost flat electrodes was produced only when there was initially a region of excessive ionization near the cathode. Notice that near the cathode we indeed saw bright plasma formations (see Fig. 14) in all discharge regimes. The condition needed for an ionization wave to be generated corresponds to a situation in which the electrons leave the discharge gap without having time to multiply substantially. In the opposite case \((z_i d \gg 1)\), volume ionization occurs faster than the electron drift, so that the wave does not have enough time to propagate significantly during ionization. When the electrodes are coaxial cylinders and the cathode has a small radius \((d = r_1 - r_0 \gg r_0)\), the ionization wave forms both at low and at high voltages. It propagates not because of electron drift but because of the nonuniformity of the electric field. At the points where the field is stronger, the ionization is more intense. In this case, the plasma density grows faster to values at which the field is screened, and all further increase in ionization stops.

**Figure 19.** Radial distributions of the electron number density (a) and the electric field strength (b) at moments in time when the ionization wave approaches the anode. The curves correspond to different moments in time: 1, 1.3 ns; 2, 1.4 ns; and 3, 1.5 ns. Curve 4 fits the field distribution \(E(r) = U/[r \ln(r_1/r_0)]\) in empty space at \(U = 100\) kV, \(r_0 = 0.25\) mm, and \(r_1 = 8\) mm. The time dependence of the voltage across the electrodes corresponds to the one depicted in Fig. 15.

Figure 19 shows an ionization wave in a nonuniform field. Note that the wave of elevated plasma density is preceded by a wave of elevated electric field strength in the near-anode region. A similar calculation was then carried out for the same conditions, but with the voltage $U(t)$ at the instants of time $t > 1$ ns increased two-fold. Qualitatively, the results agree with those shown in Fig. 19 for the lower voltage. Despite the increase in voltage, the electric field strength in the near-anode region at the instant of time when the ionization wave approaches that region does not increase significantly. However, the time it took the ionization wave to cover the distance to the anode decreased. #### 4.3.6 On the optimal voltage The results of the calculations make it possible to interpret the beam production mechanism in the following manner. According to the discussion in Section 3.2, we assume that the beam electrons are formed near the anode in a layer with a thickness of roughly $1/z_i$. This happens at those moments in time when the ionization wave approaches the anode and the electric field strength in this layer increases. But when the ionization wave touches the anode, the field strength in the near-anode region drops dramatically, even though the voltage across the electrodes is sustained at its previous level. Clearly, the conditions for beam production in this case deteriorate. The following must be said about the reasons for this decrease in the beam current (see also Fig. 11). We should probably speak not of a decrease in current but of a decrease in the charge transferred by the beam as the peak voltage across the electrodes increased. Indeed, in the experiments discussed here it was impossible to resolve current durations shorter than 0.3 ns.
Hence, if the beam current was high but the charge transferred by the beam was small because of the short duration of the current ($< 0.3$ ns), then due to the limited time resolution this would appear as a decrease in the beam current. The decrease in the amount of charge transferred by the beam with increasing voltage can be explained by the reduction of the time the ionization wave spends in traveling through the region where the electron beam is produced, i.e., within a layer that is roughly $1/z_i$ thick and is located near the anode. The validity of this qualitative explanation has been corroborated by the calculated results presented above. ### 5. Conclusions In the present review we have summarized recent studies on the physics of discharges in gases at pressures on the order of atmospheric pressure, which are related to the production of high-power subnanosecond electron beams. Using a simple equation that allows for electron multiplication, we found that at a certain distance from the cathode a specific value of the mean electron energy, independent of the spatial coordinate, sets in, even if the electric field strength is so high that electron friction in the gas can be ignored. This implies that the local electron runaway criterion (i.e., the free-flight, or ‘run-through’, mode) is not sufficiently general, since it does not take electron multiplication into account. We presented results of numerical simulations of electron multiplication and transport in helium, neon, xenon, nitrogen, and sulfur hexafluoride which support this viewpoint. We also showed that the Townsend ionization mechanism (characterized by constant velocity and energy of the electrons combined with an exponential increase in the number of electrons) operates even at field strengths at which electron friction can be ignored. What is important is that the electrode spacing be much larger than the multiplication length (the reciprocal Townsend coefficient).
We examined the nonlocal electron runaway criterion, according to which a large number of electrons in the interelectrode gap become runaway electrons when the electrode spacing becomes comparable to the reciprocal Townsend coefficient. The nonlocal criterion differs very significantly from the local criterion in common use today. In particular, the two criteria lead to different recommendations for electron beam production in gases. The nonlocal criterion leads to a universal (for the given gas) dependence of the critical voltage $U_{cr}(pd)$ across the interelectrode gap (at which the runaway electrons constitute a substantial fraction of the total number of electrons) on the product $pd$ of the gas pressure and the electrode spacing. The curve $U_{cr}(pd)$ separates the region of effective electron multiplication from the region where the electrons leave the discharge gap without having time to multiply. The curve has an upper and a lower branch: the upper branch characterizes the runaway of electrons, and the lower branch, their escape by drift. The minimum value of $pd$ in the curve $U_{cr}(pd)$ corresponds to the maximum in the dependence of the Townsend coefficient on $E/p$. Calculations of the $U_{cr}(pd)$ curves have been carried out for helium, neon, xenon, nitrogen, and sulfur hexafluoride. The $U_{cr}(pd)$ curves were used to construct analogs of the Paschen curves $U_{ign}(pd)$, which characterize the ignition of a self-sustained discharge. These curves differ from the known Paschen curves in that they have an upper branch. The ideas developed in the course of the study have been used to explain the results of experiments on the production of high-current subnanosecond beams of runaway electrons. Electron beams with currents amounting to tens or even hundreds of amperes have been produced in gas-filled diodes with different gases at atmospheric pressure.
The experimental investigations have shown that to achieve a maximum beam current in a gas-filled diode, the discharge must be of the volume type, and the rise of the voltage across the discharge gap must be stopped just before the beam current reaches its maximum. The electron beam produced in such gas-filled diodes has been used to initiate a discharge in an atmospheric-pressure carbon dioxide laser [72]. Volume discharges have been studied in a nonuniform electric field as well. The findings suggest that a volume discharge forms in a nonuniform electric field because of preionization by fast electrons: fast (kiloelectronvolt) electrons are produced near the cathode, accelerated in the strong electric field near cathode plasma formations. The parameters of a volume discharge produced at elevated pressures without preionization from supplementary sources are as follows: input power density higher than 400 MW cm\(^{-3}\), discharge current density in the near-anode region up to 3 kA cm\(^{-2}\), and specific energy deposition of about 1 J cm\(^{-3}\) over 3–5 ns. The quasi-stationary stage of a volume discharge forms at lower initial voltages, which also supports the formation of a large number of fast electrons. Estimates and model calculations show that the main pulse of the electron beam is formed at the instant when the plasma in the discharge gap closes in on the anode and the nonlocal electron runaway criterion is met. We studied the possibility that the plasma formations at the cathode emit fast electrons. We also examined the wave of multiplication of seed electrons, created by preionization of the discharge gap by runaway electrons in a nonuniform electric field, a wave that starts at the cathode spots. We believe that subnanosecond electron beams produced in gas-filled diodes will find wide application in various fields of physics and technology.
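The volume-discharge parameters quoted above are mutually consistent, as a quick order-of-magnitude check shows:

```latex
% Specific energy deposition = power density x pulse duration:
\[
  w \;\approx\; P\,\Delta t \;\approx\;
  \left(400\ \mathrm{MW\,cm^{-3}}\right)\left(3\ \mathrm{ns}\right)
  \;\approx\; 1.2\ \mathrm{J\,cm^{-3}} ,
\]
% in agreement with the stated specific energy deposition of about
% 1 J cm^{-3} over 3-5 ns.
```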
It is quite possible that this method of producing subnanosecond electron beams can compete with the traditional approach [73]. Here are two possible areas of application. First, short high-current electron beams are needed in studies of the properties of insulators and semiconductors: in solids, the relaxation times of many processes after excitation amount to fractions of a nanosecond or even less, which calls for electron beams of short duration with a fast fall time. Second, accelerators that employ the simple design of a gas-filled diode can be used in the mining of precious stones. As is well known, many crystals (e.g., diamonds), when excited by an electron beam, luminesce in the visible part of the spectrum, so that by exciting samples of previously prepared rock with an electron beam one can discover precious fragments in it [74]. We are grateful to our co-authors in Refs [16–26, 59, 66] for their valuable contributions to the results presented in this review. This work was made possible by the financial support of the International Science and Technology Center (grants 1270 and 2706).

References

1. Giovanelli R G *Philos. Mag.* **40** 206 (1949)
2. Dreicer H *Phys. Rev.* **115** 238 (1959); **117** 329 (1960)
3. Kulsrud R M et al. *Phys. Rev. Lett.* **31** 690 (1973)
4. Gurevich A V *Zh. Eksp. Teor. Fiz.* **39** 1296 (1960) [*Sov. Phys. JETP* **12** 904 (1961)]
5. Marchenko V S, Yakovlenko S I *Fiz. Plazmy* **5** 590 (1979) [*Sov. J. Plasma Phys.* **5** 331 (1979)]
6. Babich L P, Loiko T V, Tsukerman V A *Usp. Fiz. Nauk* **160** (7) 49 (1990) [*Sov. Phys. Usp.* **33** 521 (1990)]
7. Korolev Yu D, Mesyats G A *Fizika Impul'snogo Proboya Gazov* (Physics of Pulsed Breakdown of Gases) (Moscow: Nauka, 1991)
8. Raizer Yu P *Fizika Gazovogo Razryada* (Gas Discharge Physics) 2nd ed. (Moscow: Nauka, 1992) [Translated into English (Berlin: Springer-Verlag, 1997)]
9. Bokhan P A, Sorokin A R *Zh. Tekh. Fiz.* **55** (1) 88 (1985) [*Sov. Phys. Tech. Phys.* **30** 50 (1985)]
10. Kolbychev G V, Kolbycheva P D, Ptashnik I V *Zh. Tekh. Fiz.* **66** (2) 59 (1996) [*Tech. Phys.* **41** 144 (1996)]
11. Sorokin A R *Zh. Tekh. Fiz.* **68** (3) 33 (1998) [*Tech. Phys.* **43** 296 (1998)]
12. Sorokin A R *Pis'ma Zh. Tekh. Fiz.* **28** (9) 14 (2002) [*Tech. Phys. Lett.* **28** 361 (2002)]
13. Bokhan P A, Zakrevsky D E *Pis'ma Zh. Tekh. Fiz.* **28** (11) 21 (2002) [*Tech. Phys. Lett.* **28** 454 (2002)]
14. Derzhiev V I et al., in *Plazmennye Lazery Vidimogo i Blizhnego UF Diapazonov* (Plasma Lasers of the Visible and Near-UV Ranges) [Trudy IOFAN (Proc. General Physics Institute), Vol. 21, Ed. S I Yakovlenko] (Moscow: Nauka, 1989) p. 5
15. Yakovlenko S I "Gazovye i plazmennye lazery" ("Gas and plasma lasers"), in *Entsiklopediya Nizkotemperaturnoi Plazmy* (Encyclopedia of Low-Temperature Plasma) (Ed. V E Fortov) Introductory Vol. IV (Moscow: Nauka, 2000) p. 262
16. Tkachev A N, Yakovlenko S I *Pis'ma Zh. Eksp. Teor. Fiz.* **77** 264 (2003) [*JETP Lett.* **77** 221 (2003)]
17. Tkachev A N, Yakovlenko S I *Pis'ma Zh. Tekh. Fiz.* **29** (16) 54 (2003) [*Tech. Phys. Lett.* **29** 683 (2003)]
18. Boichenko A M, Tkachev A N, Yakovlenko S I *Pis'ma Zh. Eksp. Teor. Fiz.* **78** 1223 (2003) [*JETP Lett.* **78** 709 (2003)]
19. Tkachev A N, Yakovlenko S I *Pis'ma Zh. Tekh. Fiz.* **30** (7) 14 (2004) [*Tech. Phys. Lett.* **30** 265 (2004)]
20. Alekseev S B, Orlovskii V M, Tarasenko V F *Pis'ma Zh. Tekh. Fiz.* **29** (10) 29 (2003) [*Tech. Phys. Lett.* **29** 411 (2003)]
21. Alekseev S B et al. *Pis'ma Zh. Tekh. Fiz.* **29** (16) 45 (2003) [*Tech. Phys. Lett.* **29** 679 (2003)]
22. Alekseev S B et al. *Prib. Tekh. Eksp.* (4) 81 (2003) [*Instrum. Exp. Tech.* **46** 505 (2003)]
23. Tarasenko V F, Orlovskii V M, Shunailov S A *Izv. Vyssh. Uchebn. Zaved. Ser. Fiz.* **46** (3) 94 (2003) [*Russ. Phys. J.* **46** 325 (2003)]
24. Tarasenko V F et al. *Pis'ma Zh. Eksp. Teor. Fiz.* **77** 737 (2003) [*JETP Lett.* **77** 611 (2003)]
25. Tarasenko V F et al. *Pis'ma Zh. Tekh. Fiz.* **29** (21) 1 (2003) [*Tech. Phys. Lett.* **29** 879 (2003)]
26. Tkachev A N, Yakovlenko S I *Proc. SPIE* **4747** 271 (2002); *Laser Phys.* **12** 1022 (2002)
27. Krishnakumar E, Srivastava S K *J. Phys. B: At. Mol. Opt. Phys.* **21** 1055 (1988)
28. Fursa D V, Bray I *Phys. Rev. A* **52** 1279 (1995)
29. Nickel J C et al. *J. Phys. B: At. Mol. Phys.* **18** 125 (1985)
30. Krishnakumar E, Srivastava S K *J. Phys. B: At. Mol. Opt. Phys.* **21** 1055 (1988)
31. Eletskii A V, Smirnov B M *Fizicheskie Protsessy v Gazovykh Lazerakh* (Physical Processes in Gas Lasers) (Moscow: Energoatomizdat, 1985) p. 44, Table 3.4
32. Engelhardt A G, Phelps A V, Risk C G *Phys. Rev.* **135** A1566 (1964)
33. Golden D E *Phys. Rev. Lett.* **17** 847 (1966)
34. Blaauw H J et al. *J. Phys. B: At. Mol. Phys.* **13** 359 (1980)
35. Dalba G et al. *J. Phys. B: At. Mol. Phys.* **13** 4695 (1980)
36. Krishnakumar E, Srivastava S K *J. Phys. B: At. Mol. Opt. Phys.* **23** 1893 (1990)
37. Tian C, Vidal C R *J. Phys. B: At. Mol. Opt. Phys.* **31** 5369 (1998)
38. Rapp D, Englander-Golden P, Briglia D D *J. Chem. Phys.* **42** 4081 (1965)
39. Schram B L et al. *Physica* **31** 94 (1965)
40. Campbell L et al. *J. Phys. B: At. Mol. Opt. Phys.* **34** 1185 (2001)
41. Cartwright D C et al. *Phys. Rev. A* **16** 1041 (1977)
42. Schulz G J *Rev. Mod. Phys.* **45** 423 (1973)
43. Vicic M, Poparic G, Belic D S *J. Phys. B: At. Mol. Opt. Phys.* **29** 1273 (1996)
44. Stanski T, Adamczyk B *Int. J. Mass Spectrom. Ion Phys.* **46** 31 (1983)
45. Novak J P, Fréchette M F *J. Appl. Phys.* **55** 107 (1984)
46. Kline L E et al. *J. Appl. Phys.* **50** 6789 (1979)
47. Ward A L *J. Appl. Phys.* **33** 2789 (1962)
48. Panchenko A N et al. *Kvantovaya Elektron.* **33** 401 (2003) [*Quantum Electron.* **33** 401 (2003)]
49. Kolbychev G V *Zh. Tekh. Fiz.* **52** 511 (1982) [*Sov. Phys. Tech. Phys.* **27** 326 (1982)]
50. Penning F M *Physica* **12** (4) 65 (1932)
51. Dikidzhi A N, Klyarfel'd B N *Zh. Tekh. Fiz.* **25** 1038 (1955)
52. Guseva L G, Klyarfel'd B N *Zh. Tekh. Fiz.* **24** 1169 (1955)
53. Ulyanov K N, Chulkov V V *Zh. Tekh. Fiz.* **58** 328 (1988) [*Sov. Phys. Tech. Phys.* **33** 201 (1988)]
54. Stankevich Yu L, Kalinin V G *Dokl. Akad. Nauk SSSR* **177** 72 (1967)
55. Noggle R C, Krider E P, Wayland J R *J. Appl. Phys.* **39** 4746 (1968)
56. Gubanov V P et al. *Izv. Vyssh. Uchebn. Zaved. Ser. Fiz.* **39** (12) 110 (1996)
57. Yalandin M I, Shpak V G *Prib. Tekh. Eksp.* (3) 5 (2001)
58. Zagulov F Ya *Prib. Tekh. Eksp.* (2) 146 (1989)
59. Tarasenko V F et al. *Izv. Vyssh. Uchebn. Zaved. Ser. Fiz.* **47** (2) 96 (2004)
60. Kostyrya I D, Tarasenko V F *Opt. Atmos. Okeana* **14** 722 (2001)
61. Arnold E et al. *Laser Phys.* **12** 1227 (2002)
62. Tkachev A N, Yakovlenko S I *Laser Phys.* **13** 1345 (2003)
63. Mesyats G A, Osipov V V, Tarasenko V F *Pulsed Gas Lasers* (Bellingham, Wash.: SPIE Opt. Eng. Press, 1995)
64. Savin V V, Tarasenko V F, Bychkov Yu I *Zh. Tekh. Fiz.* **46** (1) 198 (1976) [*Sov. Phys. Tech. Phys.* **21** 113 (1976)]
65. Batygin V V, Toptygin I N *Sbornik Zadach po Elektrodinamike* (Problems in Electrodynamics) (Moscow: GIFML, 1962) [Translated into English (London: Academic Press, 1964)]
66. Kostyrya I D et al. *Pis'ma Zh. Tekh. Fiz.* **30** (10) 31 (2004) [*Tech. Phys. Lett.* **30** 411 (2004)]
67. Tkachev A N, Yakovlenko S I *Zh. Tekh. Fiz.* **74** (3) 91 (2004) [*Tech. Phys.* **49** 371 (2004)]
68. Yakovlenko S I *Elektron. Zh. "Issledovano v Rossii"* (9) 86 (2004); http://zhurnal.ape.relarn.ru/articles/2004/009.pdf
69. Yakovlenko S I *Kratk. Soobshch. Fiz. FIAN* (10) 27 (2003)
70. Yakovlenko S I *Pis'ma Zh. Tekh. Fiz.* **30** (9) 12 (2004) [*Tech. Phys. Lett.* **30** 354 (2004)]
71. Tarasenko V F et al. *Pis'ma Zh. Tekh. Fiz.* **30** (8) 68 (2004) [*Tech. Phys. Lett.* **30** 335 (2004)]
72. Alekseev S B, Orlovskii V M, Tarasenko V F *Kvantovaya Elektron.* **33** 1059 (2003) [*Quantum Electron.* **33** 1059 (2003)]
73. Zheltov K A *Pikosekundnye Sil'notochnye Elektronnye Uskoriteli* (Picosecond High-Current Electron Accelerators) (Moscow: Energoatomizdat, 1991)
74. Bakhteev V V, Osipov V V, Solomonov V I *Geofizika* (6) 37 (1994)
Authentic Learning Experiences: Investigating How Teachers Can Lead Their Students to Intrinsic Motivation in Meaningful Work

Rhonda Van Donge

Recommended Citation: Van Donge, Rhonda, "Authentic Learning Experiences: Investigating How Teachers Can Lead Their Students to Intrinsic Motivation in Meaningful Work" (2018). Master of Education Program Theses. 119. https://digitalcollections.dordt.edu/med_theses/119

Abstract

This action research study investigated how an authentic learning experience impacted the motivation and engagement of students toward finding intrinsic value in meaningful work in a sophomore English classroom at a private Christian high school in the Midwest. The participants were 57 sophomores at the high school taking required English 10. The students participated in an authentic learning experience (ALE) designed by their teacher in which they were split into 10 teams, each team writing and designing one issue of the sophomore class’s newspaper. The 57 students completed an anonymous survey at the conclusion of the authentic learning experience. Eight students were randomly chosen to be interviewed about their experiences in the ALE. The results of the study suggested that authentic learning experiences do contribute to the overall motivation and engagement of students to find intrinsic value in their work.
Document Type: Thesis
Degree Name: Master of Education (MEd)
Department: Graduate Education
First Advisor: Patricia C. Kornelis
Keywords: Master of Education, thesis, authentic learning, motivation, engagement, high school, Christian education
Subject Categories: Curriculum and Instruction | Education
Comments: Action Research Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Education

Authentic Learning Experiences: Investigating How Teachers Can Lead Their Students to Intrinsic Motivation in Meaningful Work

By Rhonda Van Donge, B.A. Dordt College, 1999

Action Research Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Education, Department of Education, Dordt College, Sioux Center, Iowa, May 2018

Approved: Pat Kornelis, Ed.D., Faculty Advisor, 04/30/2018
Approved: Stephen Holtrop, Ph.D., Director of Graduate Education, 04/30/2018

Acknowledgements

I would like to thank Dr. Tim Van Soelen and Dr. Pat Kornelis for their encouragement and guidance throughout this project. They were instrumental in helping me clarify my purpose, research, and writing. I also need to thank Mr. Nathan Ryder for his patience in helping me with the statistical analysis of my data. He has patience beyond measure. I never would have begun this journey without the support of my husband, Benj. He helped me stay focused and motivated, even when that meant attention taken from my family and my job as a wife and mother. I also need to thank my four boys, Micah, Jamin, Eli, and Isaac, because even though they may not have realized it, they sacrificed summer activities and time with their mom so that I could pursue this goal.
# Table of Contents

| Section | Page |
|--------------------------------|------|
| Title Page | i |
| Approval | ii |
| Acknowledgements | iii |
| Table of Contents | iv |
| List of Figures | v |
| Abstract | vi |
| Introduction | 1 |
| Review of the Literature | 7 |
| Methods | 19 |
| Results | 22 |
| Discussion | 30 |
| References | 35 |
| Appendixes | |
| Appendix A | 40 |
| Appendix B | 42 |

List of Figures

1. Figure of Berger’s Hierarchy of Audience .......................................................... 8
2. Linear Graph of Regression Line of Real World/Audience ................................ 23
3. Linear Graph of Regression Line of Critical Thinking ....................................... 24
4. Linear Graph of Regression Line of Community of Learners ............................ 24
5. Linear Graph of Regression Line of Student Choice ......................................... 25

Introduction

The needs of today’s students are changing. “No pupil in the history of education is like today’s modern learner.
This is a complex, energetic, and tech-savvy individual” (The Critical, 2017). Students need skills that will allow them to be successful in an ever-changing and expanding workforce. In the early 1900s, 95% of jobs in the United States called for low-skilled workers (Barron & Darling-Hammond, 2008), mainly production workers and laborers (Fisk, 2003). By 2008, the workforce instead called for workers with specialized knowledge and skills (Barron & Darling-Hammond, 2008). The share of workers in service industries jumped from 31% in 1900 to 78% in 1999 (Fisk, 2003). Our global economy and expanding technology “have redefined what it takes . . . to prosper” as working members of our shrinking world (Hale, 1999, p. 9). Students today have very different needs to prepare them for the workforce than students did earlier in our nation’s history. It is the responsibility of our educational system to equip students with the skills that will prepare them for their future as working members of a constantly evolving society. When students graduate, they need to be prepared to join a global economy and workforce. This workforce wants people with analytical skills and the initiative to problem-solve. Workers need creativity to find new solutions by looking from different angles in order to synthesize information. Collaboration and communication are essential, as students will find themselves working and communicating with people from all over the world. They need to be able to communicate their values and beliefs effectively with other people. Finally, businesses want employees with ethical standards who want to be held accountable and responsible for how they handle situations in their jobs (The Critical, 2017). In short, our students need to graduate from our schools prepared to join a workforce that calls for skills in communication and collaboration, as well as skills in researching, collecting, analyzing, synthesizing, and applying knowledge.
Because of this, schools need to equip and enable students to do more than memorize and regurgitate information. Students need to be able to think critically, to transfer knowledge to new situations, and to adapt in different environments and with many people (Barron & Darling-Hammond, 2008). Students need to take an active and independent role in their education to be prepared for what lies ahead outside of the school building. The key to preparing our students in these skills starts with motivation. Teachers need to motivate students to become engaged in the classroom so that they can participate in their own learning. Motivation gives students the “direction, intensity, quality, and persistence of [their] energies” (Fredricks & McColskey, 2012). Motivation happens when teachers create learning that challenges students and allows them to show what they have discovered in a product with greater purpose than a classroom assignment, giving them the confidence to master the next problem or task set before them. As teachers equip them to grow into responsible individuals motivated to achieve for the intrinsic value of their learning (Beesley, Clark, Barker, Germeroth, & Apthorp, 2010), students will feel prepared to join a workforce that demands communication, collaboration, researching, collecting, analyzing, synthesizing, and application of knowledge (Barron & Darling-Hammond, 2008). The challenge of designing curriculum laced with motivation falls, then, on the teachers tasked with preparing our students for this future. Students are motivated by real world learning. “The more we focus on students’ ability to devise effective solutions to real world problems, the more successful those students will become” (The Critical, 2017). Students feel disengaged when they do not feel that what they are learning is relevant to their own lives (Certo, Cauley, Moxley, & Chafin, 2008).
They need opportunities in learning that show them what it means to be a productive member of society (Cronin, 1993). Beesley et al. (2010) stated that research has shown that students involved in their community are more likely to excel and thrive in all areas of their lives. Community service opportunities increase students’ future involvement and behavior in their communities; introducing service into the curriculum led to better social behavior and future involvement in the community. Choice in learning also motivates students to engage in the classroom. When teachers simply pass on information, students do not have as great a chance to connect personally with the knowledge, with each other, with the teacher, and with the real world (Kalantzis & Cope, 2004). Choice allows students to self-regulate: to make goals, to make a plan, to make a commitment, and then to reflect on what they have done. When given choices, students feel a sense of control over their own learning. Self-efficacy allows students to take on a task believing that they can do it. Teachers then have the responsibility of giving feedback to their students in order to raise the students’ self-efficacy and to guide them in their learning process while allowing them to use trial and error (Beesley et al., 2010). Teachers motivate students by creating student-directed learning balanced well with the teacher as coach and facilitator in the classroom. Critical thinking and problem solving also motivate students. If a teacher stands in front of a classroom of students who are disengaged from what she is teaching, little hope remains that any deep learning and critical thinking are taking place. A teacher needs to create a classroom in which disengagement is not an option, where learning demands the students’ full attention, and where what happens in class creates the challenge and rigor most students ultimately crave (Kalantzis & Cope, 2004).
When students are engaged both cognitively and behaviorally, their effort and concentration are high, and they choose tasks that challenge them and initiate action. Without motivation to engage in critical thinking, students become passive, defensive, and bored; they give up easily (Beesley et al., 2010). Further, being part of a community of learners motivates students. Cooperative learning results in higher achievement than competitive or individual learning does (Beesley et al., 2010). Working in community leads to students who are more willing to take on difficult tasks that involve higher-level reasoning, more creativity, positive attitudes, more time spent on task, higher motivation, and thus higher satisfaction (Beesley et al., 2010). Students feel connected in caring, supportive classrooms (Fredricks & McColskey, 2012). According to Kalantzis and Cope (2004), “learning happens by design” (p. 39). Classroom motivation happens when students are “psychologically engaged, active participants in school, who also value and enjoy the experiences of learning at school” (Quin, 2016, p. 345). By designing a classroom setting in which students are involved in real world problems with an authentic audience, in the need for deeper critical thinking skills, and in defining the problem and the direction for the solution (Rule, 2006), teachers develop motivated students who recognize the “intrinsic fulfillment of meaningful work” (Romano, 2009, p. 36). These students become equipped with the skills and attitudes to be successful after their formal education is completed. Authentic learning experiences (ALEs) are the “learning by design” (Kalantzis & Cope, 2004) students need to develop the motivation that engages them in the classroom. When they understand the meaning behind learning, they become engaged. Instead of giving students a math equation to figure out, the teacher can ask them how much it is going to cost for the school to pave the entire parking lot.
Instead of having them write a fake letter in order to learn proper letter formatting, they can write a letter to a family member or friend about the last book they read. Instead of researching a recent war, they can interview a war veteran for firsthand information. Instead of studying various websites to understand how they are made, students can work directly with local businesses to create websites for the business’s actual use (O’Hanlon, 2008). Teachers then give their students meaning in their classroom work and the rigor that students ultimately want (Romano, 2009). Students want to be challenged with high expectations for achievement, knowing that their teacher does in fact believe they all can achieve success (Varuzza, Eschenauer, & Blake, 2014; Vetter, 2010). The teacher needs to help the students feel they are competent to accomplish real world work (Vetter, 2010). With clear expectations, time to delve into the work, and freedom to explore, students find motivation to learn (Lawrence & Harrison, 2009). They find intrinsic value in what they learn, as well as a sense of accomplishment and satisfaction in a job well done (Romano, 2009). The teacher becomes the facilitator rather than the director (Vetter, 2010). Teachers no longer stand at the front of the room lecturing; rather, they coach their students through the learning process. Teachers can guide students to this kind of learning through ALEs. **Purpose of the Study** Authentic learning experiences have the power to pull students toward that “intrinsic value of meaningful work.” Students will have work that allows them to interact, to take ownership of their learning, and to work outside the classroom (Varuzza et al., 2014). This study sought to answer the question: Do authentic learning experiences in secondary English classrooms lead to “the intrinsic fulfillment” of secondary students?
In other words, do authentic learning experiences lead to greater levels of motivation, and thus to greater engagement, as students realize the importance of the work they are doing for their future lives? **Definitions** For the purpose of this study, the following definitions will be used. Unless otherwise noted, the definitions are those of the author. **Authentic Learning Experiences**: classroom activities with a real world/real audience focus that incorporate critical thinking skills, that center around a community of learners, and that are student-directed rather than teacher-directed. **Motivation**: direction and energy in a student’s behavior that empowers them to take on a challenge, to do quality work, and to persist until they have accomplished a meaningful goal (Beesley et al., 2010; Fredricks & McColskey, 2012). **Engagement**: cognitive or behavioral action that results from a high level of motivation and leads to strong effort, concentration, enthusiasm, and curiosity (Beesley et al., 2010). **Real World Experiences**: classroom activities that tie directly to situations in the world outside the classroom that students may encounter in their daily lives now or in the future. **Real World Audience**: an audience for classroom work other than the teacher, such as parents, the school community, a public audience beyond the school, anyone capable of critiquing student work, and recipients of service done by the students (Wagner, 2017). **Critical Thinking Skills**: the ability to think clearly and rationally, to engage in reflection, to synthesize and analyze, and to think independently, creatively, and with vision. **Community of Learners**: multiple students or the class as a whole engaged together in the learning process, working collaboratively rather than in competition. **Student-Directed Learning**: students taking responsibility and ownership in their learning while the teacher becomes more of a facilitator and coach.
**Intrinsic Value of Meaningful Work**: when students feel personal satisfaction, enjoyment, curiosity, and focus in the activity itself, not from an outside force. **Summary** Because of our changing workforce, our global economy, and the changing skills required of our graduates, authentic learning experiences have become essential for our students. We need students to step out of the classroom ready to problem-solve, to find solutions, to think critically and analytically, to collaborate, to communicate effectively, and to be ethical and accountable in the workforce. To be successful in their future, they need authentic learning experiences now to get them actively involved in their learning so that what they gain from their education is the “intrinsic fulfillment of meaningful work” which will “develop a productive, tenacious attitude toward such work” that they can “take . . . with them throughout their lives” (Romano, 2009, p. 30). **Literature Review** **Four Characteristics of an Authentic Learning Experience** When teachers plan an authentic learning experience, four characteristics make those plans authentic: a real world problem, the use of inquiry and critical thinking skills, a community of learners working together, and student choice in learning. ALEs use real world problems with impact outside of the classroom to motivate and teach students (Rule, 2006). For example, an English teacher can connect her students with pen pals from another country so that rather than writing letters only for the sake of learning the format, they can learn the format while writing letters to these pen pals. Part of a real world problem, as in this example, means a real world audience. Berger (2017) has implemented what he calls the “hierarchy of audience.” According to Berger (2017), as the authenticity of the audience increases, so does the motivation and engagement of the students.
At the bottom of the hierarchy is the audience of the teacher, followed by parents, the school community, a public audience beyond the school, and people capable of critiquing the students’ work; at the top of Berger’s hierarchy is authentic work done as a service to the world (Wagner, 2017).

Figure 1. The hierarchy of audience for whom students can present their work, shown as a pyramid from teacher at the base up through parents, the school community, a public audience beyond the school, and people who can critique, to service in the outside world at the top; student motivation and engagement increase toward the top of the hierarchy (Wagner, 2017).

By incorporating both real world and real need elements, students’ view of the world broadens as the world is brought into the scope of their learning environment (Kalantzis & Cope, 2004). The use of inquiry and critical thinking skills is another characteristic of authentic learning experiences. The teacher creates problems that the students can use to discover, inquire, and deduce (Rule, 2006). Teachers push students to think outside of the box as they connect the learning to the real world. This critical thinking may happen through hands-on activities, through debate, or through problem solving (Certo et al., 2003). For example, at Silverton School in Silverton, Colorado, students used critical thinking skills as they discovered what it means to be “rich” or “poor.” The students looked at personal finances, national economic problems, and then global issues of wealth and poverty to come to an understanding that being rich or poor is not measured only by money (Expeditions, n.d.). ALEs also share the characteristic of being formed within a community of learners. Even if students are working individually to find a solution to a real world problem, they are all in a community of inquiry, striving for answers within an environment created by the need for discovery. Students may collaborate in problem solving, creating, or presenting.
They talk, argue, and discuss with their peers while searching for solutions. They become actively involved in making meaning (Kukral & Spector, 2012). For example, they may collaborate with their fellow students by writing a website together (Mak & Coniam, 2008), with the community by working hand in hand on a community project or by offering valuable services to businesses (O’Hanlon, 2008), or with a real audience through a newspaper or bulletin (Mak & Coniam, 2008). Finally, ALEs allow students to direct their own learning. They have ownership of and responsibility for the problem at hand. Teachers give choice to allow the students both to define the problem and to design how to find the solution (Rule, 2006). Teachers may use mini-lessons to guide students through the decision-making process and to lead them to real life skills, but as students are equipped, they become the primary directors of their learning (Huntley-Johnston, Merritt, & Huffman, 1997). Teachers may have created the opportunity, the equity, and the participation, but the students must engage with the learning to make it their own (Kalantzis & Cope, 2004). At High Tech High in San Diego, California, through a collaborative project between the humanities and Spanish classes, teachers tasked the students with doing a project related to the U.S./Mexico border. That was the only parameter given. Students decided for themselves what topic or area they wanted to research, and then they decided how they wanted to display their research for an audience of the school community as well as for Mexican students they had been conversing with since the start of the unit. Their work, though given an overarching theme, was completely student-driven, and much learning took place (Schwartz, 2018).
No teacher wants to hear, “How much does this count for?” or “How long does this have to be?” or “Does this have to be typed?” These questions show that students see learning as a task done for the teacher, not as a way to gain life skills needed in the real world or to reach an authentic audience. Teachers need to deliberately connect students to the real world to help them understand the why behind what they do in the classroom. When teachers have created authentic learning experiences well, learning becomes meaningful to the student (Barron & Darling-Hammond, 2008). Students are committed with a sense of belonging within the learning environment. The opportunity to step out of the classroom either physically or through their mental attitude toward the task gives the students a sense of control over their own learning. This sense of control in turn creates positivity (Shernoff, Csikszentmihalyi, Schneider, & Shernoff, 2003). Students gain factual information in the process of problem-solving and can transfer that knowledge to different situations and contexts. They are able to explore and apply their learning as they discover solutions. In the discovery, they learn to define problems and find solutions without being teacher-directed (Barron & Darling-Hammond, 2008). The teacher gives appropriate help as needed, but students rise to the challenge by increasing the skills they need to reach a solution (Shernoff et al, 2014). Not only can the students find solutions, they are able to give reasons and support for those solutions. In doing this, the students increase their motivation and form work habits to use beyond the classroom. They learn to collaborate and become experts with confidence (Barron & Darling-Hammond, 2008). In other words, they become motivated and engaged students learning life skills needed after they graduate from high school.
As teachers design work to motivate and engage their students through authentic learning experiences, students realize the importance of what they are doing. With real tasks and real audience, the need to think critically, collaboration and community, and self-directed learning, students feel accomplishment and success knowing they have worked for their own learning purpose, not just for a grade. Often they have shared what they have learned with an audience outside of simply the teacher (Huntley-Johnston et al, 1997). By careful design, teachers have created the “intrinsic fulfillment of meaningful work” for their students through authentic learning experiences. **Misconceptions of Authentic Learning Experiences** As teachers work toward authentic classrooms, they may feel intimidated by certain misconceptions of what ALE’s must look like. One misconception is that an ALE has to be all or nothing. Teachers can work toward authenticity in their classroom as a progression. Creating experiences in a daily lesson can be just as beneficial as creating a semester-long authentic project. Teachers need permission to start small and to use other teaching methods besides ALEs as well (Cronin, 1993). Another misconception about ALE’s is that a teacher’s lesson plans need to be completely redone to include the authentic experience, but ALE’s may be designed from already-created lesson plans. Many teachers subconsciously know that their students need to feel that what they are doing is tied to the real world in some way (Cronin, 1993). Teachers may have already created opportunities for collaboration, critical thinking, differentiation, and student choice. A final myth about ALE’s is that they must always be fun, creative, and original. Students may not enjoy the task, the task may have been done by another teacher already, or it may feel ordinary to the teacher, but that does not mean it is not authentic. 
If it is tied to an authentic task or has an authentic audience, if critical thinking skills are in full play, if the classroom has become a community of learners working together, and if students have choice in their own learning, then it has the potential of pulling students into a real world situation with intrinsic, meaningful work (Cronin, 1993). Educators and students must understand that “our main task together in the classroom is to attend to learning - not just to learn but to attend to learning, to understand how we learn, and get good at it, and talk about it, perhaps differently than we might other places” (Whitney, 2011, p. 58). When teachers design ALEs and students are motivated to engage, intrinsic learning can take place and break through the stereotype of school as boring and rigid. Authentic learning experiences may not take students out of the actual school setting. Even in the most well-designed ALE, teachers must admit to their students that what they do in the classroom may not perfectly mirror the real world, but that does not mean what they learn is not connected to life skills and assets they will need both now and in the future. An English teacher asks students to read and write because the teacher needs to help the students learn to be “self conscious about those practices” (Whitney, 2011, p. 57). This is a student choosing to learn. Teaching students to be discerning readers or effective writers also teaches them to become better “users” of these skills (Whitney, 2011). This is a student thinking critically. Creating peer groups so that students can give each other feedback on writing allows them to collaborate and communicate. This is a community of learners. Teachers can use ALE’s to motivate students at a deeper level, to create an atmosphere of authenticity in which learning is attached to life skills needed in the real world.
Teachers want students who are not just surviving school by counting seconds, goofing around, or staring out the window; teachers want students who feel motivated to engage in meaningful work. Students should not feel disconnected from their learning (Shernoff et al, 2014). Instead, teachers can use authentic learning experiences to create connections between the students and their life outside of the school building. When teachers work to “attend to learning,” they can position their students to find that intrinsic value in learning through authenticity in the classroom. ALE’s become useful tools for learning when students and teachers find their place of identity and understanding together in the classroom, through interaction and relevance. Teachers understand that each student comes from an individual context that teachers can use to empower each student to make choices and connections for their own learning. Teachers become facilitators and guides within the classroom, empowering students to be competent decision-makers. Teachers also create empowerment and motivation by setting high expectations for accomplishment within an ALE (Vetter, 2010). **Creating Motivation with Authentic Learning Experiences** Teachers design many experiences in which students move into the intrinsically meaningful work of ALE’s. The best way to clearly understand how ALE’s create motivation and engagement is to see authentic learning at work. O’Hanlon (2008) shared how he connected his students with local businesses to create content for websites that the businesses actually used. Students received real world experience for a real audience. Another teacher created a real audience by having her students publish an anthology of their work that they sold to local businesses. The writing became specifically for an audience, causing the students to choose topics that made more sense for that broader audience.
The editing and proofreading the students had to do took on significant meaning because they knew mistakes would show carelessness and laziness as writers. The class even learned about marketing and letter writing as they got word out that their anthology was for sale. Not only did the students benefit, but so did the community (Putnam, 2001). Another teacher organized her journalism class like an actual newspaper, which led the students to take responsibility for all parts of brainstorming, researching, writing, editing, and publishing. The students never worried about their grade because they were too focused on putting out an excellent newspaper for a real audience. These students had a sense of ownership, accomplishment, and pride in their work (Denman, 1995). Another example of an authentic learning experience happened in an English classroom in which the teacher led her students through the process of writing how-to books. Students were able to share their expertise and saw how that expertise helped others learn something new (Huntley-Johnston et al, 1997). In a research project, Powers (2009) explained how he saw students go above and beyond research requirements as they took ownership of their topic and became personally involved. One student was invited to a private dinner for a Nobel Peace Prize winner through her research project. This student’s research led to an extracurricular club at her school that allowed students to meet people making a difference in the world, and to realize how they themselves could make a difference. All of these examples increased student motivation because they incorporated a real problem with a real audience, they allowed the students to use critical thinking and problem solving skills, they took place as a community of learners, and the students had choice in the direction their learning took.
**Authentic Learning Experiences in the English Classroom** The English curriculum is designed to focus on skills in discussing, reading, researching, and writing (Kahn, 2007; Powers, 2009; Speaker & Speaker, 1991; Vetter, 2010). In any of these skill areas, ALE’s can be used to motivate and engage students toward intrinsic learning in meaningful work. Students will find meaning in discussing, reading, researching, and writing when that learning is tied to real world/real audience work, to the need for critical thinking, and to student-directed learning within the context of a community of learners. Discussion is a skill area in the English curriculum that can be designed as an ALE. To create an authentic learning experience using discussion, the discussion becomes open-ended, not a question-and-answer recitation. Teachers create an ALE in discussion when they introduce conflict or controversy and allow students to defend or analyze without implying a right or wrong answer. Instead, students use discussion to analyze and assess their information and experiences. Discussions take on the medium that best suits the students and situation; for example, a blog post creates authentic commenting or an online forum allows students to speak openly with people outside of their own classroom (Kahn, 2007). In one study, a group of students in inner city Chicago began a discussion with local leaders, police, families, and clergy about gun violence that led to service within their community (More Than You, n.d.). Students can be motivated to feel meaningfully engaged as they become personally involved in the contributions they bring to any classroom and to a greater audience. The discussion becomes a sharing of ideas with others through critical thinking, which in turn leads to a stronger sense of community with whomever the discussion takes place. Right or wrong no longer becomes the focus; instead, the process of discussing becomes the focus.
Reading is another area in which ALE’s can be incorporated. Students become authentic readers when they engage with the words they read and incorporate the new knowledge into a real problem or audience, into the need for critical thinking skills, into work as a community of learners, and into the desire to direct their own learning. What the students do with what they have read can lead to a meaningful authentic learning experience. For those students in inner city Chicago who began a discussion on gun violence, that discussion began after they had read information on the United States Constitution. This led them to a connection between “We, the people . . .” and themselves as those very people of whom the Constitution spoke. Reading led to authenticity through relationship (More Than You, n.d.). Teachers can lead their students to notice vocabulary or themes or conflicts they have found in their everyday reading that trigger authentic conversations such as the one these students had regarding the Constitution. These conversations can then lead to a heightened awareness of what makes good writing (Speaker & Speaker, 1991) as well as heightened awareness of the needs of others (More Than You, n.d.). An authentic learning experience can then find a fertile place to grow. Another example of authentic reading is in the Reading Workshop format. Students connect with books because they have choice in what they read, they learn to read critically through mini-lessons and use of mentor texts by the teacher, they use their community in the classroom to share about their books, and reading becomes more real world because students are no longer being forced to read one certain book. They become the directors of what they get to read, hopefully also as lifelong readers well after graduation day (Brunow, n.d.). Reading leads students to critical thinking, interaction, and self-confidence--important life skills needed in the real world.
Researching in an authentic context allows students to have choice in order to develop ownership toward their work. Students feel that ownership as they direct their own learning with the guidance of their teacher. The students in inner city Chicago took ownership of their learning by addressing a need that they were personally connected to in their neighborhood. Their research moved from a textbook on the American Constitution to interviews and personal experience with people of their community (More Than You, n.d.). Instead of using a magazine article as research to satisfy a requirement for a research paper, students realized that the deepest research comes from face-to-face contact, telephone interviews, or travel to historical sites for hands-on research. Learning becomes personal as the students become authorities and confident experts (Powers, 2009). No longer is researching necessary only for a paper for their teacher; researching becomes a part of discovery, teamwork, and critical thinking toward a solution to a real world problem for a real audience. Writing becomes authentic when it is done for an authentic audience with a real need and a real purpose that leads students to an intrinsic need to use precise wording, details, revisions, and proofreading (Powers, 2009). In one teacher’s classroom, the teacher created an authentic writing experience when her students took their study of Benjamin Franklin’s aphorisms in *Poor Richard’s Almanack* and each wrote a children’s book. The students used one of the aphorisms as a basis for their book, explaining it in the form of a digital story for local kindergarteners. The real audience gave the students a real need to critically analyze the aphorism of their choice and to write about it in a way that the kindergarteners would be able to understand (Sztabnik, 2015).
In another example of authentic writing, a teacher had his students research writing contests, choose one, read and understand the manuscript guidelines for submission, adapt one of their own pieces of writing to the contest, and submit it to the contest they had found. The students then learned to use proper MLA citation for their own piece in order to include it in a resume. Many of his students became published writers from this authentic learning experience (Sztabnik, 2015). Authentic writing also happens when students write about their personal passions in order to share with the school community as a whole or students write a script for a public service announcement that they turn into a video (Sztabnik, 2015). Students understand the need to be effective and responsible communicators when what they write is for an audience outside of their classroom walls. They see the meaningful value of writing as the prerequisite to becoming active members of the world beyond the classroom. In all of these examples, students find themselves a part of a real world problem or working for a real audience. They are defining a problem or asking a question, searching for solutions or designing a product, using critical thinking and inquiry skills, working as a community of learners toward similar goals, and taking ownership and responsibility in their own learning. In these experiences, students find their voice, find their purpose, and find confidence in hard work. New skills are learned, new interests created, new doors opened that they would not have thought possible had the teacher not designed learning for them to step into. Students leave school knowing the value of intrinsic fulfillment in meaningful work because their teacher valued authenticity in the classroom.
By designing ALE’s in the classroom that focus on real problems and audiences, on critical thinking skills, on student-directed learning, and on learning in community, teachers prepare their students for life outside the classroom walls. They give their students skills in communication, collaboration, researching, collecting, analyzing, synthesizing, and applying knowledge. These are the skills that will lead them to become successful working members of their local and global communities (Barron & Darling-Hammond, 2008). As one student stated, “We work together to get smart for a purpose, to make our community and our world a better place” (More Than You, n.d.). **Methods** **Participants** The participants of this research study were 10th grade students at a small private high school in the Midwest made up of 261 ninth through twelfth grade students. The majority of these participants were from white, middle-class families living in rural communities surrounding the high school. There were 30 females and 27 males in the study. All 10th grade students take the required English 10 class in their sophomore year. This research study took place in an English 10 course that split the students into three sections: one section with 21 students, one with 16, and the third with 20. All sections participated in the same authentic learning experience with the same teacher. **Materials** The material used in this research was a survey given to the students at the end of the authentic learning experience. The anonymous survey was created by the researcher using SurveyMonkey.com. The survey, located in Appendix A, used a five-level Likert-type scale ranging from strongly disagree to strongly agree. The survey was used to determine the intrinsic engagement and value of the ALE for each student through the four characteristics of an ALE. The researcher also conducted semi-structured interviews of eight students selected randomly through a random number generator.
See Appendix B for interview questions. **Design** A descriptive research design was used for this study. An anonymous survey was given to all 57 students at the end of their authentic learning experience. In order to describe the relationship between each of the characteristics of an ALE and overall student motivation in an ALE, the survey statements focused on the four characteristics of an authentic learning experience. Five statements focused on real world problem/audience, five on the use of inquiry and critical thinking skills, five on being a part of a community of learners, and five on student-directed learning. The researcher also used a semi-structured interview process to interview eight randomly selected students at the end of the ALE. These interviews used open-ended questions to allow for more than yes or no answers. The purpose of these interviews was to understand more deeply how students were motivated intrinsically within the ALE. The responses to each interview were recorded and then analyzed and sorted according to different themes and categories. **Procedure** The 57 students all participated in the same authentic learning experience. The students were divided into ten different teams of six to eight students each. Within their teams, the students worked together to write and lay out a newspaper issue to be distributed to the school’s student body. Each student was responsible for interviewing someone, focusing the story around the theme of joy in the interviewee’s life. In order to put out their issue of the newspaper, each team chose various jobs for each member. The jobs included editor-in-chief, revisers, word choosers, proofreaders, picture editors, and layout editors. The teams had autonomy over which roles each person played in their newspaper team. Together they had two weeks to write and design their issue of the sophomore class newspaper that they titled *20/20 Vision*.
After the ALE was completed, the researcher gave all 57 students the survey through SurveyMonkey.com. The survey received a 100% rate of return because it was administered during class time. The researcher was present when the students took the survey; anonymity was preserved because no names were associated with answers on the surveys. The semi-structured interviews took place the day after the teams turned in their final newspapers. Interviews took place within this class period while other students had silent reading time. The researcher interviewed each of the eight students to gather a deeper understanding of the feeling of intrinsic motivation and engagement in the work they did for their authentic learning experiences. The answers to the interviews were coded and analyzed immediately following the interviews according to similar words, phrases, and beliefs common in all of their answers. **Results** After the students completed the authentic learning experience, they anonymously took the survey to determine the extent to which they felt intrinsically motivated by the characteristics of an authentic learning experience. The survey focused questions around the four tenets of an ALE: real world/audience, critical thinking, community of learners, and student-directed learning. Eight randomly selected students were also interviewed in order to further clarify the students’ level of motivation after the ALE was completed. Their answers were coded and analyzed according to the themes and trends that their answers revealed. **Survey** In order to answer whether ALE’s lead to greater motivation and thus greater engagement for students, the survey was used to show the individual relationship of the four characteristics of an authentic learning experience to the ALE as a whole. The researcher assigned a value of 5 to each survey answer that showed the best attitude toward an ALE.
So if the best attitude answer for a question was “Strongly Agree,” then that answer received a 5, if “Mildly Agree” then a 4, if “Neutral” a 3, if “Mildly Disagree” a 2, and if “Strongly Disagree” a 1. These assigned scores of each survey were then added together to get a total number of points for that student’s survey. The total possible points available for the 20-question survey was 100. The researcher then collated the answers into the four characteristics of an ALE. Each of those sections of five questions was also totaled for each student. The researcher then had a total number for each characteristic as well as a total number for each survey. These data were used to calculate a regression, that is, the relationship of each characteristic of an ALE to the ALE as a whole. Figures 2 through 5 show the regression lines for each of the four characteristics. The regression is measured using R-squared. The R-squared values for the characteristics are as follows: Real World/Audience: 48.4%; Community of Learners: 38.7%; Critical Thinking: 63.3%; Student Choice: 15.1%. The results of this analysis show how each of the characteristics of an ALE falls in relationship to the ALE as a whole. Figure 2. Linear graph showing the correlation between Real World/Audience and the total sum of the survey. The R-squared value of 48.4% shows that having a real problem and/or a real audience was motivating for the students. It was the second highest correlation of the four characteristics. Figure 3. Linear graph showing the correlation between Critical Thinking and the total sum of the survey. Critical thinking had the highest R-squared value of 63.3%. This strong correlation shows that students felt motivated when they could use this skill while working on their ALE. Figure 4. Linear graph showing the correlation between Community of Learners and the total sum of the survey.
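The scoring and regression procedure described above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical responses, not the study's actual data; the answer labels, the block-of-five item ordering, and the function names are assumptions based on the description above.

```python
# Sketch of the survey analysis described above: each of 20 Likert items
# earns 1-5 points, items are assumed grouped in blocks of five per ALE
# characteristic, and each characteristic subtotal is regressed against
# the whole-survey total. The respondents below are hypothetical.

LIKERT = {"Strongly Agree": 5, "Mildly Agree": 4, "Neutral": 3,
          "Mildly Disagree": 2, "Strongly Disagree": 1}

CHARACTERISTICS = ["Real World/Audience", "Critical Thinking",
                   "Community of Learners", "Student Choice"]

def score_survey(answers):
    """answers: 20 Likert labels, assumed ordered in four blocks of five.
    Returns (subtotals per characteristic, overall total out of 100)."""
    points = [LIKERT[a] for a in answers]
    subtotals = {c: sum(points[i * 5:(i + 1) * 5])
                 for i, c in enumerate(CHARACTERISTICS)}
    return subtotals, sum(points)

def r_squared(xs, ys):
    """R-squared of a simple linear regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Three hypothetical respondents; their totals work out to [100, 70, 60].
surveys = [
    ["Strongly Agree"] * 20,
    ["Mildly Agree"] * 5 + ["Neutral"] * 5
        + ["Strongly Agree"] * 5 + ["Mildly Disagree"] * 5,
    ["Neutral"] * 20,
]
scored = [score_survey(s) for s in surveys]
totals = [total for _, total in scored]
for c in CHARACTERISTICS:
    subs = [sub[c] for sub, _ in scored]
    print(f"{c}: R-squared = {r_squared(subs, totals):.1%}")
```

Run over the study's 57 actual surveys instead of these toy responses, the same computation would produce one R-squared value per characteristic, as reported above.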
Though the R-squared value for Community of Learners was third highest with a value of 38.7%, it does show a correlation between the motivation of the ALE as a whole and being able to work in community with their classmates. Figure 5. Linear graph showing the correlation between Student Choice and the total sum of the survey. Student choice in their learning had the lowest R-squared value. The 15.1% is much lower than the other three characteristics, indicating that this was the least motivating factor in how the students felt about the ALE. Even as a lower score, 15.1% does show that students were motivated by being able to have choice in their learning, but the lower score suggests that having choice in their work was not as motivating to the students as the other three characteristics. **Interviews** This study sought to answer whether authentic learning experiences lead to greater levels of motivation, thus leading to greater engagement, as students realize the importance of the work they are doing for their future lives. The interview responses of the eight randomly selected students were overwhelmingly positive in regard to answering this research question. Their answers reflected their attitudes in the four basic characteristics of an ALE. **Real world/real audience.** The interviews showed that the students enjoyed connecting with a real audience through the newspaper unit. Student C said that reading the articles written by other students “helped me find joy when I’m busy or find joy when life isn’t really going my way” (Student C interview, March 1, 2018). Student H said that they received reassurance from reading other newspaper articles from fellow classmates because they felt that “my life is kind of hard . . . but it made me get reassured that life will get better” (Student H interview, March 1, 2018).
This student also said that publishing the newspaper allowed them “to show people reading it that joy comes in many different ways and it’s not the same for everybody” (Student H interview, March 1, 2018). Having a real audience changed all of the students’ perspectives on how they wrote their articles. Student A said that it “changed the way I write when it’s meant to go to everyone instead of just the teacher” (Student A interview, March 1, 2018). Student B said, “I tried harder to make sure I represented myself and the class well” (Student B interview, March 1, 2018). Having a connection to the real world and real audience changed the amount of effort students put into their work. One hundred percent of the students commented in their own words that the real audience made them work harder to publish a well-written article. Student D said, “I wanted more people to see that I can do better than what I probably have done in the past” (Student D interview, March 1, 2018). Student G responded, “I knew that people I knew were going to read it and it had to be good because I had to put my name on it” (Student G interview, March 1, 2018). Student B shared that she hoped “that people would know that the sophomore class was a great class” because of their newspaper (Student B interview, March 1, 2018). On the negative side, only one student, 12.5%, found a downside to having a real audience. Student C stated, “I don’t want people to know it’s from me” (Student C interview, March 1, 2018). **Community of learners.** Eighty-eight percent of the interviewed students found benefits in working as a community to accomplish their project. Student A said that it was “fun to read other people’s stories, where other people find joy in their lives” (Student A interview, March 1, 2018). Student B “loved seeing the creative ideas that the rest of the class did” (Student B interview, March 1, 2018). Student G enjoyed connecting with the greater school community through the newspaper.
This student stated, “We got to interview different people and find out about their stories of joy . . . that was really cool” (Student G interview, March 1, 2018). Student F said that he felt “like I put a good amount of effort in for my team” (Student F interview, March 1, 2018), and Student D said, “We each did our part and we got it done” (Student D interview, March 1, 2018). Student H stated “It was nice to have people to hold me accountable” (Student H interview, March 1, 2018). Two of the students agreed that they did the work because they knew that their team was depending on them. Student F said that he “didn’t want to be the weak link that drags everyone else down so you do your job, so I felt responsible for that” (Student F interview, March 1, 2018) while Student E said she knew that “people were counting on me” (Student E interview, March 1, 2018). Student B said that “Everyone did what we assigned them to do, on time, and if someone didn’t get something done, we always helped them. Yeah, I think we really did well together” (Student B interview, March 1, 2018). There were negative feelings toward working as a team in 37% of those interviewed. Student C said that she didn’t feel like her team worked that well together “because half the people on our team don’t care,” and when asked her least favorite part of the project she simply stated, “Some of my team members” (Student C interview, March 1, 2018). Student G said that “there was some people who didn’t really do a lot and some people who did like all of it so it was a mix of people who didn’t think they had to do anything and people who knew they had to do everything” (Student G interview, March 1, 2018). Student A shared, “Depending on others, I’m not always sure that they will do their best work and I wonder how that will affect how well my final project will be” (Student A interview, March 1, 2018). 
**Critical thinking.** Many of the responses showed that through the process of interviewing people, students critically processed the true meaning of joy. They also had to use their critical thinking and analyzing skills to work through the writing process on their articles. Overall, 87% of the students commented on the need to think critically on this project. The students wanted to use their critical thinking skills to submit a well-written article to their newspapers. Student C said that she “just enjoyed learning about joy . . . because I need to work on that” (Student C interview, March 1, 2018). Student D liked “learning about other people and their stories” (Student D interview, March 1, 2018). Some of the interviewees made specific applications to their own learning needs. Student B said that she “grew from it as a writer, learning how to write more concise how to see things clearer, like grammatically, how to set up things, so yes, think I grew from it” (Student B interview, March 1, 2018). Student H shared that “I don’t say I’m very good at school but when I was correcting my paper I realized . . . it’s not that bad actually” (Student H interview, March 1, 2018). Student D said that “if you don’t do it right, just don’t do it at all. So I have to intentionally do as good as possible” (Student D interview, March 1, 2018). And because of this project, Student H said, “I feel like I can do school a lot better than I am” (Student H interview, March 1, 2018). Student D said that “At the beginning it was a lot of work to do and at the end it wasn’t too hard.” Student D also stated that he felt he needed to “do it right so you don’t get ridiculed for your specific article” (Student D interview, March 1, 2018). Although Student B said that “The least thing I enjoyed would be probably all the revisions we had to do,” she also said, “I know it is necessary” (Student B interview, March 1, 2018). 
Student F shared that “I’m not a very good speller or with grammar, so when I have to do something with a lot of spelling and grammar, it’s not my favorite because I have to do a lot of correcting” (Student F interview, March 1, 2018).

**Student-directed learning.** The students had mixed reviews of being the directors of their own learning. With regard to their ability to choose their own topic, Student G said, “I got to know that part of their family and got to know them a lot more” because of whom she interviewed for her article (Student G interview, March 1, 2018). Student F said, “I don’t know my stepmom that well yet and I got to know her better” (Student F interview, March 1, 2018). Eighty-seven percent of students said they felt personal satisfaction in their project. Student F said, “I’m happy with my final project” (Student F interview, March 1, 2018), and Student B said, “I can express myself through it” (Student B interview, March 1, 2018). Student E said that he’d “never done anything like this before” (Student E interview, March 1, 2018). Only one of the students interviewed said that he didn’t connect with his topic. Student D said that he didn’t find personal meaning in the project because of “just maybe the story I picked” (Student D interview, March 1, 2018). Three of the students mentioned that the grade played a part in how they worked on their project, and one mentioned that he made sure to do a good job so he could keep playing basketball.

**Discussion**

**Overview of the Study**

This study looked at whether authentic learning experiences increased the motivation and thus the engagement of students, leading to a higher intrinsic value for the students in the work that they did. Eight randomly selected students were interviewed, and all 57 students involved in the ALE took the anonymous survey after they completed the ALE.
**Summary of Findings**

Combining the survey results with the interview results revealed the students’ attitudes toward what makes an authentic learning experience motivating. Together, the interviews and the survey showed that having a real audience for which to do real work, being able to use critical thinking skills, and working within a community of learners motivated the students while doing the project. The students interviewed shared that the newspaper project gave them feelings of satisfaction, accountability, responsibility, and improvement of skills. Students’ positive comments about being able to direct their own learning showed that they enjoyed choosing topics connected with people they knew and had interest in. Although they stated that their ability to direct their learning let them get to know other people better and express themselves, 38% of those interviewed also commented that the grade remained an important motivator for them in doing well on the project. So rather than being motivated by an intrinsic value in the work they did, these students needed the extrinsic reward of a grade to ensure higher quality of work. This seemed to be reiterated in the survey through the low R-squared value of 15% for Student Choice.

**Recommendations**

Based on the results of this study, the researcher believes that creating authentic learning experiences in the classroom is very beneficial to students: it strengthens critical thinking skills, helps students work well with others and take responsibility for their own learning, and shows students that the work they do has an audience and purpose outside of the classroom. Through this project, the majority of the students involved remained motivated and engaged in their work, individually and as a team, to put out their own issue of the newspaper.
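The R-squared value of 15% for Student Choice refers to a coefficient of determination from a simple regression of one survey subscale against an overall outcome. The sketch below is purely illustrative: the data values and variable names are invented for this example and are not the study's actual data or analysis code. It only shows how such a figure is computed from Likert-scale scores (1 = Strongly Disagree ... 5 = Strongly Agree).

```python
# Hypothetical illustration of computing an R-squared value from survey data.
# The numbers below are made up; they are NOT the study's data.

def r_squared(x, y):
    """Coefficient of determination for a least-squares fit of y on x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Least-squares slope and intercept.
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R^2 = 1 - (residual sum of squares / total sum of squares).
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented subscale averages for ten students: Student Choice score vs.
# an overall ALE engagement score.
choice = [3.2, 4.0, 2.8, 3.6, 4.4, 3.0, 2.6, 4.8, 3.4, 3.8]
overall = [3.9, 3.1, 3.5, 4.2, 3.8, 2.9, 3.3, 4.0, 3.6, 3.4]

print(round(r_squared(choice, overall), 2))
```

A low R-squared, as with the 15% reported for Student Choice, means the subscale explains little of the variation in the overall measure; with a sample of 57 students, such a value is also quite sensitive to a few atypical responses, which the study's limitations section acknowledges.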
Although the researcher suggests that authentic learning experiences do increase student motivation and thus engagement in the task for intrinsic meaning, some students, for a number of reasons, may still remain somewhat focused on working for a grade or other extrinsic rewards. A well-designed ALE is essential for motivating and engaging all students, especially those who do not enjoy school at all. Without a well-designed authentic learning experience, those students who dislike school and who struggle academically will still resist engaging in the activity. Motivational needs for all students include autonomy, competence, and relatedness (Fredricks & McColskey, 2012). These students need clear connections to a purpose outside of the classroom walls in order to find their intrinsic value in learning, because they have completely lacked connection to school in the past. Their connection to a purpose must allow these students to see themselves fitting into the world outside of the school walls, so that they can begin to believe that they can achieve. Then they will take up the challenge in the classroom and feel the satisfaction of accomplishment in learning (Beesley et al., 2010). The researcher also suggests ensuring that all students choose a topic with personal meaning in order to maintain the motivation of student choice in their own learning. Unless students connect personally to their topic, it will remain nothing more than an assignment for their teacher. These unmotivated students must be able to choose learning that matters to them outside of school. Students need to understand that the framework of an ALE still stands within the context of the school setting. Because some students have never found a true connection to school, this researcher believes it is the teacher who needs to work closely with each student to help each personally connect to the project.
Unmotivated students need to be led to their intrinsic value at a slower, more deliberate pace than other students who already feel the purpose of school in their lives. When teachers provide opportunities for active involvement and give appropriate support in problem solving (Shernoff et al., 2014), students feel a sense of commitment and belonging in the classroom instead of passivity, boredom, or anxiety (Beesley et al., 2010). The teacher must commit to acting as a guide to all of the students in the classroom. The researcher believes that having a strong community of learners can help pull these unmotivated students into the project and into the intrinsic value of working as a team, but they must also have a purpose within the community that fits their personality and gifts. If students believe they won’t achieve well, they won’t take on challenges for fear of another failure (Beesley et al., 2010). As stated by Reeves (n.d.), students “are more engaged and learn better when they are challenged, exercise choice, feel significant, receive accurate and timely feedback, and know that they are competent” (p. 10). Students today need skills in communication, collaboration, and research, and in collecting, analyzing, synthesizing, and applying knowledge. This research study affirms that authentic learning experiences do have the power to prepare our students for the world outside the classroom walls, as long as the design is well thought out and the teacher walks intentionally beside each student to guide them toward their intrinsic value in meaningful work.

**Limitations of the Study**

One limitation of this study was in the design of the authentic learning experience. While the researcher incorporated each characteristic of an ALE into the newspaper project, not all students found the real audience of the school’s student body motivating.
Approximately 10% of the students were not motivated by school or grades in general, so they did not find the audience of the student body a strong enough motivator to increase their engagement or to make the work personally meaningful. Additionally, further research through multiple ALEs throughout the school year would have yielded more results for this study. More research and data would give multiple R-squared values that could be used to analyze more accurately the correlation of the four characteristics of an ALE to the ALE as a whole. Another limitation was the small sample of students in the study. This action research took place with 57 students, 30 girls and 27 boys, in a small high school in the Midwest, the majority from white, middle-class families living in rural communities surrounding the high school. With a larger, more diverse sample of students, a broader range of data would have been available to analyze for more accurate regression lines using the R-squared values. Finally, the bias of the teacher was a limitation. The researcher was closely tied to the design and implementation of the project, to the students personally, and to this research study. The researcher also gave the survey in her classroom as the teacher. These circumstances could have led to bias in how the researcher carried out the study, how she interacted with her students as both students and research participants, how the students interacted with her as both teacher and researcher, and how the researcher perceived the results of the study.

References

Barron, B., & Darling-Hammond, L. (2008). Teaching for meaningful learning: A review of research on inquiry-based and cooperative learning. Book excerpt. In Furger, R. (Ed.), *Powerful Learning: What We Know About Teaching for Understanding*. Retrieved from https://search.proquest.com/docview/1314330466?accountid=27065
Beesley, A., Clark, T., Barker, J., Germeroth, C., & Apthorp, H. (2010). Expeditionary learning schools: Theory of action and literature review of motivation, character, and engagement. *Mid-continent Research for Education and Learning* (McREL). Retrieved from https://search.proquest.com/docview/864941500?accountid=27065
Brunow, V. (n.d.). Authentic literacy experiences in the secondary classroom. *The Language and Literacy Spectrum, 26*, 60-74. Retrieved February 22, 2018, from https://www.nysreading.org/sites/default/files/regional/Brunow.pdf
Certo, J. L., Cauley, K. M., & Chafin, C. (2003). Students' perspectives on their high school experience. *Adolescence, 38*(152), 705+. Retrieved from http://link.galegroup.com.ezproxy.dordt.edu:8080/apps/doc/A114740932/AONE?u=dordt&sid=AONE&xid=cf7fc9fc
Certo, J. L., Cauley, K. M., Moxley, K. D., & Chafin, C. (2008). An argument for authenticity: Adolescents' perspectives on standards-based reform. *High School Journal, 91*(4), 26+. Retrieved from http://link.galegroup.com.ezproxy.dordt.edu:8080/apps/doc/A178674145/AONE?u=dordt&sid=AONE&xid=ebae2ff2
Cronin, J. C. (1993). Four misconceptions about authentic learning. *Educational Leadership, 50*(7), 78+. Retrieved from http://link.galegroup.com.ezproxy.dordt.edu:8080/apps/doc/A13976846/AONE?u=dordt&sid=AONE&xid=20b2b852
Denman, C. (1995). Writers, editors, and readers: Authentic assessment in the newspaper class. *The English Journal, 84*(7), 55-57. Retrieved November 27, 2017, from http://www.jstor.org.ezproxy.dordt.edu:8080/stable/pdf/820585.pdf
Expeditions. (n.d.). Retrieved February 27, 2018, from http://www.silvertonschool.org/expeditions2.html
Fisk, D. M. (2003). American labor in the 20th century. U.S. Bureau of Labor Statistics. Retrieved February 15, 2018, from http://www.bls.gov/opub/mlr/2004/02/art1full.pdf
Fredricks, J. A., & McColskey, W. (2012). The measurement of student engagement: A comparative analysis of various methods and student self-report instruments. In Christenson, S., Reschly, A., & Wylie, C. (Eds.), *Handbook of Research on Student Engagement*. Boston, MA: Springer.
Hale, R. (1999). *From jobs for workers to workers for jobs: Better workforce training for Minnesota. A Citizens League research report.* Minneapolis, MN: Citizens League.
Huntley-Johnston, L., Merritt, S., & Huffman, L. (1997). How to do how-to books: Real-life writing in the classroom. *Journal of Adolescent & Adult Literacy, 41*(3), 172-179. Retrieved from http://www.jstor.org/stable/40027135
Kahn, E. (2007). From the secondary section: Building fires: Raising achievement through class discussion. *The English Journal, 96*(4), 16-18. doi:10.2307/30047157
Kalantzis, M., & Cope, B. (2004). Designs for learning. *E-Learning, 1*(1), 38-93. Retrieved November 13, 2017, from http://journals.sagepub.com.ezproxy.dordt.edu:8080/doi/pdf/10.2304/elea.2004.1.1.7
Kukral, N., & Spector, S. (2012). Authentic to the core. *Leadership, 41*(5), 8-10. Retrieved November 27, 2017, from https://files.eric.ed.gov/fulltext/EJ971416.pdf
Lawrence, S. A., & Harrison, M. (2009). Using writing projects in a high school classroom to support students' literacy development and foster student engagement. *Language and Literacy Spectrum, 19*, 56-74. Retrieved from https://search.proquest.com/docview/1697497155?accountid=27065
Mac, B., & Coniam, D. (2008). Using wikis to enhance and develop writing skills among secondary students in Hong Kong. *System, 36*(3), 437-455. Retrieved November 29, 2017.
More than you think possible. (n.d.). Retrieved February 22, 2018, from https://eleducation.org/resources/more-than-you-think-possible
O'Hanlon, C. (2008). Designs on the future: Hired to create websites for local businesses, high school students are building up their online portfolios while gaining a glimpse of the world that awaits them (e-learning). *T.H.E. Journal [Technological Horizons in Education], 35*(9), 28+. Retrieved from http://link.galegroup.com.ezproxy.dordt.edu:8080/apps/doc/A187765362/AONE?u=dordt&sid=AONE&xid=2268706b
Powers, B. H. (2009). From National History Day to PeaceJam: Research leads to authentic learning. *The English Journal, 98*(5), 48-53. Retrieved November 27, 2017, from http://www.jstor.org.ezproxy.dordt.edu:8080/stable/pdf/40503297.pdf
Putnam, D. (2001). Selling our words to the community. *The English Journal, 90*(5), 102-106. Retrieved November 21, 2017, from http://www.jstor.org/stable/821862
Quin, D. (2016). Longitudinal and contextual associations between teacher–student relationships and student engagement: A systematic review. *Review of Educational Research, 87*(2), 345-387. doi:10.3102/0034654316669434
Reeves, D. B. (n.d.). Motivating unmotivated students. Retrieved March 28, 2018, from http://www.ascd.org/ascd-express/vol5/504-reeves.aspx
Romano, T. (2009). Defining fun and seeking flow in English Language Arts. *The English Journal, 98*(6), 30-37. Retrieved November 4, 2017, from http://www.jstor.org.ezproxy.dordt.edu:8080/stable/pdf/40503454.pdf
Rule, A. (2006). Editorial: The components of authentic learning. *Journal of Authentic Learning, 3*(1), 1-10. Retrieved November 15, 2017, from https://www.ernweb.com/educational-research-articles/the-four-characteristics-of-authentic-learning/
Schwartz, K. (2018). Education Writers Association. Retrieved February 27, 2018, from https://www.ewa.org/blog-educated-reporter/high-tech-high-focus-goes-beyond-classroom
Shernoff, D. J., Csikszentmihalyi, M., Schneider, B., & Shernoff, E. S. (2014). Student engagement in high school classrooms from the perspective of flow theory. *School Psychology Quarterly*, 475-494. doi:10.1007/978-94-017-9094-9_24
Speaker, R. B., Jr., & Speaker, P. R. (1991). Sentence collecting: Authentic literacy events in the classroom. *Journal of Reading, 35*(2), 92-95. Retrieved November 27, 2017, from http://www.jstor.org.ezproxy.dordt.edu:8080/stable/pdf/40033116.pdf
Sztabnik, B. (2015). Authentic writing: What it means and how to do it. Retrieved February 22, 2018, from http://talkswithteachers.com/authenticwriting/
The critical 21st century skills every student needs and why. (2017). Retrieved February 13, 2018, from https://globaldigitalcitizen.org/21st-century-skills-every-student-needs
Varuzza, M., R. S., Eschenauer, R., & Blake, B. E. (2014). The relationship between English Language Arts teachers’ use of instructional strategies and young adolescents’ reading motivation, engagement, and preference. *Journal of Education and Learning, 3*(2), 108-119. doi:10.5539/jel.v3n2p108
Vetter, A. (2010). Positioning students as readers and writers through talk in a high school English classroom. *English Education, 43*(1), 33-64. Retrieved from http://www.jstor.org/stable/2301708
Wagner, K. (2017). Kindling, campfires, or candles. Retrieved February 15, 2018, from http://www.transformschool.com/single-post/2017/09/05/Kindling-Campfires-or-Candles
Whitney, A. E. (2011). In search of the authentic English classroom: Facing the schoolishness of school. *English Education, 44*(1), 51-62. Retrieved November 27, 2017, from http://www.jstor.org.ezproxy.dordt.edu:8080/stable/pdf/23238722.pdf

Appendix A

Survey of All Students at Completion of Authentic Learning Experience

The survey is grouped to show which questions correlated to which characteristic of the ALE. Multiple choice answers were: Strongly Disagree, Mildly Disagree, Neutral, Mildly Agree, Strongly Agree.

Real World/Audience
1. I am more likely to work hard in class for a project with a real world focus than for a paper and pen test.
2. I have a hard time connecting classwork with the real world.
3. Being assigned a project that mirrors a real world problem/scenario connected to class lessons makes me more likely to do the work required for completion.
4. I am more likely to do more than is required if the audience for my completed work is a person/people other than the teacher.
5. I am more likely to do work in class that only the teacher will see.

Critical Thinking
6. I am more likely to memorize information for a test than to work hard on a final project.
7. I get a sense of accomplishment from putting a lot of work into a project or solution.
8. I get energized when my teacher gives me a chance to discover for myself rather than giving me the answer.
9. I dislike when the teacher makes me find an answer myself.
10. I am more likely to remember information if I have to find the answer or solution myself.

Community of Learners
11. I am more likely to slack off if I’m working in a group.
12. I am more likely to work hard on a project if I feel like my project matters to my community.
13. I am more likely to complete a project if others are depending on me to do my part.
14. I am more likely to strive to find answers if my classmates are working to find answers too.
15. Working with others on a project does not help me learn at all.

Student Choice
16. Having a choice in the topic of my project makes me more likely to do the work involved in completing the project.
17. The most important factor in determining if I will complete a project is if it is personally meaningful and relevant to my life.
18. It is part of my teacher’s job as an instructor to provide motivation for me to want to do assignments for class.
19. I consider doing activities in class a waste of time unless I can make some personal connection with or learn a lesson from the activity.
20. I am more likely to do my best work on a project if the teacher assigns the topic to me.

Appendix B

Semi-structured Interview Questions of Eight Students at the End of the Authentic Learning Experience

1. What did you enjoy the most about this project?
Follow Up / Expanding Questions:
a. Do you feel like what you have done in class has personal meaning for you? Explain.
b. Did how you did your work change because of the audience/reason you were doing it for? Explain.
c. Were you proud of the work you did? Why/Why not?
d. Did you feel like your team worked well together to accomplish the newspaper?
e. Did you feel a sense of responsibility to put out the paper?
2. Looking back at the project, what was your main motivation in completing it?
3. What did you enjoy the least about this project?
Do the People Matter in Policymaking in Ghana? A Reflection on the E-Levy and Debt Exchange Programs

Edward Brenya,* Samuel Adu-Gyamfi,† Philip Nii Noi Nortey,‡ Dennis Apau,§ Kwabena Opoku Dapaah**

Abstract

The extent to which the masses have a say in matters concerning their lives is crucial in governance. It is common knowledge that people vote for elected policymakers to make policies that will make their lives better off, not worse. However, in the making of policies, the views of the people who either benefit from or suffer the ramifications of those policies are often not taken into consideration. This study therefore employs content analysis to systematically examine secondary sources on the recent adoption of the E-Levy policy and the Debt Exchange Program, to ascertain whether the people mattered in adopting and implementing these policies. The adoption of these policies has raised a lot of controversy, with the public agitating and calling for their termination. That the government of Ghana remains keen on implementing these policies as the only way out of the country’s economic hardship raises many questions. After a systematic analysis of the literature, the paper argues that both policies were passed without the involvement of the people. The government’s failure to adopt a participatory policymaking approach accounts for the citizenry’s loss of trust in the government.

Keywords: Social Contract, Elitism, Participatory Policymaking, E-Levy, Debt Exchange Program

Introduction

Now and then, citizens are heard on the print and electronic media requesting the government’s aid. Problems such as bad roads, collapsed bridges, and a lack of social amenities to support basic human needs, among others, are among the interventions the citizens of Ghana anticipate the government will provide to mitigate the challenges they face daily in the country.
While some of these issues are addressed and others overlooked, scholars such as Thomas Birkland and John Kingdon make us understand that, as a result of limited resources, governments cannot always pay attention to all the problems citizens of a country face; only the relatively few that reach the decision agenda are tackled (Birkland, 2015; Kingdon, 2014). Dye (2013, p. 3) defines public policy as whatever governments choose to do or not to do. In his Second Treatise, John Locke maintains that sovereign power resides with the people and that the people surrender this power to a higher authority for the preservation of their property; the higher authority is expected to act for the common good of the commonwealth (Locke, 1980). Inferring from Locke’s Second Treatise, it is evident that the people vote for elected policymakers to make policies that will benefit them. But more often than not, one finds the citizenry protesting against one policy or the other. Sometimes, policymakers pay heed to protests and make the necessary changes. Other times, policymakers proceed with the intended policy amidst public outcry. Tisdall and Davis (2004, p. 131) observe that children and youth in the UK are regularly consulted in policymaking. This step, taken by governmental departments in Westminster, makes a solid case that the people, regardless of age and class in society, should have (if not a role to play) at least a say in the policies made concerning their very lives. The IMF explains debt restructuring, or domestic debt restructuring, as modifications to the contractual payment terms of public domestic debt (including amortization, coupons, and any contingent or other payments) that are made at the expense of the creditors, either through legislative or executive acts, through agreements with creditors, or both (International Monetary Fund [IMF], 2021).
Also, according to the Electronic Transfer Levy Act of 2022, electronic transfers are levied to generate income for the country. The implementation of this government electronic transaction levy of 1.5% (E-Levy), which started in May 2022, ensured that transfers to and from mobile money accounts or bank accounts were levied. The adoption of these two programs in Ghana raised a lot of controversy, with the public agitating and calling for their termination and the government keen on continuing with their implementation, as it perceives them as the only way out of the economic hardship the country finds itself in. Relying on secondary data, this study seeks to draw a distinction between participatory and elitist policy approaches, unearth the consequences of each of the two approaches, and ascertain the extent to which the interests and opinions of the people count in making policies that affect their own lives in Ghana. Books such as Anderson (2011), Birkland (2015), Dye (2013), and Locke (1980), and articles authored by Edelenbos (1999), Hiller, Landenburger, and Natowitz (1997), Kpessa (2011), and Mohammed (2015) have been handy in the writing of this piece. Other relevant sources were also helpful for the writing, including Adams (2022), Agyeiwaa-Afrane et al. (2022), Anyidoho, Gallien, Rogan, and Boogaard (2022), Arhinful (2022), and Ahinsah-Wobil (2022). The study begins with an exposition of the social contract theory as a theoretical framework. It then describes public policymaking and the contrast between elitism and participatory policymaking. Public policymaking in Ghana follows. Next, the two cases of the E-Levy and the Debt Exchange Program are presented. Finally, the study ends with a discussion and conclusion.
**Literature Review**

**Theoretical Framework**

**The Social Contract Theory, Elitism, and Participatory Policymaking**

According to the social contract theory, man first existed in a state of nature without the benefit of a government or any kind of laws governing him. The different segments of society experienced hardship and oppression (Laskar, 2013). They made two agreements, known as "Pactum Unionis" and "Pactum Subjectionis," in an effort to get rid of these difficulties (Laskar, 2013). Through the first agreement, of union, people sought to safeguard their lives and property by creating a society in which they agreed to respect one another and to exist in peace and harmony. The second agreement, of subjection, brought the people together in order to guarantee the protection of the life, property, and, to some extent, liberty of all citizens. Under it, they vowed to obey an authority and to surrender all or part of their freedoms and rights to that authority. Thus, the authority, government, sovereign, or state came into being because of the two agreements. In simple terms, the power of the government emanates from the people; hence, the government has been given authority by the people to make policies for the benefit of all in society. The social contract theory manifests itself very well in a democracy. The theory postulates that the people get to have a say, through their representatives, in whatever decision is being taken concerning their lives (Locke, 1980). Reflecting on the above, the members of a society under the pristine form of a social contract do not repudiate their natural rights just for the sake of it; rather, it is to ensure harmony or proper co-existence superintended by the ruler(s). The government or leadership ought to be mindful not to lose its legitimacy through the people's loss of faith in its governance.
This danger has always been present within a community of people or a state governed by a government that cannot deliver the public good and yet does not shudder to push through or implement policies that do not sit well with the masses. Notwithstanding the above, public policy, like other terms and concepts in social science, is heavily contested. Several scholars have defined the concept differently. For instance, James Anderson, in his book *Public Policymaking*, defines public policy as a relatively stable, purposive course of action or inaction followed by an actor or set of actors in dealing with a problem or matter of concern (Anderson, 2011). Birkland (2015) defines public policy as a statement by the government, at whatever level, of what it intends to do about a public problem. He maintains that the study of public policy is the study of how we translate the popular will into practice. As the social contract theory opines, the translation of the popular will into practice could be seen as the fulfillment of the contract between the people and the government. The popular will is the will or interests of the majority in society, although Birkland argues that the popular will is debatable. Dye (2013, p. 3) opines that public policy is whatever governments choose to do or not to do. Without dispute, we infer that a government could choose what it wants to do or not to do, but it must do so within the confines of satisfying a certain social contract. If a government could choose what it wants to do or not to do with such ease, we would not be discussing a liberal democracy; such a government could have illiberal tendencies even if it were nominally democratic. Even then, the discerning masses would have their turn in voting that government out of office. Indeed, public policy must be geared toward honoring the social contract that the government has signed with the governed.
Kingdon (2014), however, conceives of public policymaking as a set of processes, including the setting of the agenda, the specification of alternatives from which a choice is to be made, an authoritative choice among those specified alternatives, as in a legislative vote or a presidential decision, and the implementation of the chosen alternative. These definitions or conceptualizations should suffice for a broader or micro-level discourse on policymaking, including our current preoccupation. There is a direct aberration from the expectations and norms of a discerning civil society when the rulers subtly or glaringly ignore its role and representation in the policymaking agenda.

**The Contrast between Elitism and Participatory Policymaking**

As asserted by various scholars, public policy is made to meet societal demands, but in reality it reflects the governing elite's preferences and values. According to Dye (2013), policies flow downward from elites to the masses because the people the policies affect are viewed as apathetic and ill-informed about public policies, and elites shape public opinion on policy questions. He maintains that changes in public policy will, hence, be incremental rather than revolutionary. Scholars who support the elite approach have propounded several reasons why the masses do not dominate the policy process. Some assert that a lack of adequate financial resources and time plays a role (Kpessa, 2011; Rietbergen-McCracken, 2020). Others maintain that the masses do not possess the requisite knowledge and expertise necessary for policymaking (Hiller et al., 1997; Dye, 2013). However, are these reasons enough to deny the people the privilege of participating in the decisions concerning their very lives?
According to the social contract theory, the government exists as a result of the people's decision to give up some power to one entity to make decisions for the betterment of the entire society (Locke, 1980). How, then, will the government know what is in the interest of the entire society if it fails to engage the people? Other factors include the tendency of conflict to arise among the various stakeholder groups due to their opposing views, and the possibility of raising the masses' expectations that their views will be taken into account, which is not always possible in realpolitik. Yet the tendency for conflict to arise is what politics is about, and it is an accurate indication of the presence of democracy. The duty of the government is to settle such conflict by ensuring that all stakeholders reach a consensus. If policies are made on behalf of and in the interests of the people, why are they treated as passive agents when policies concerning their very lives are being made? Participatory policymaking is a process that approaches citizens as a group to share in decision-making, in which there is an explicit connection between citizens' input and policy decisions (Mohammed, 2013). Peters and Pierre define participatory policymaking as the engagement of ordinary citizens in formulating and implementing public programs (cited in Mohammed, 2013). Edelenbos (1999) opines that participatory policymaking refers to the process of making more room for the contributions of those people and organizations who are affected by policy plans. From the above definitions, it can be deduced that participatory policymaking is the exact opposite of elitism, or what Mohammed (2013) calls a bureaucratic or exclusionary approach to policymaking. Participatory policymaking reflects the central idea of the social contract theory by ensuring that people have the opportunity to have a say in the decisions being made concerning their lives.
Countries such as Austria, Italy, and Ireland have adopted ICT-based systems, in what they term e-government, to bring government to the doorsteps of their citizens and thereby provide the opportunity for citizens to participate in the policymaking process (OECD, 2001). In countries such as the United Kingdom (UK), the opportunity to participate in policymaking has been extended to children and the youth (Tisdall & Davis, 2004). Participatory policymaking may take the form of contribution, information sharing, consultation, cooperation and consensus building, partnership, and empowerment (Rietbergen-McCracken, 2020). Countries have adopted several instruments to enhance public participation in policymaking. For instance, the city of Eindhoven in the Netherlands adopted the "digipanel", a citizens' panel on the internet that allows a permanent group of citizens to be regularly consulted on different policy issues (Michels & De Graaf, 2010). Other participatory instruments include information campaigns and consultative techniques such as interest group meetings, town hall meetings, workshops, and the circulation of proposals (Mohammed, 2015). According to Michels & De Graaf (2010), participatory democrats argue that delegating decision-making powers alienates citizens from politics. The benefits of participatory policymaking include the adoption and implementation of better-informed policies (Michels & De Graaf, 2010; Rietbergen-McCracken, 2020; Edelenbos, 1999), more equitable policies, strengthened transparency and accountability, strengthened ownership, enhanced capacity and inclusion of marginalized groups, and shared understanding of otherwise contentious issues (Rietbergen-McCracken, 2020; OECD, 2001). This ensures that the people are not alienated from the policy process.
Michels & De Graaf (2010), like Edelenbos (1999), assert that citizens may become more competent if they participate in policymaking because they will learn about policy issues and may acquire civic skills, such as debating public issues. Furthermore, the participatory approach lends legitimacy to the adopted policy (Michels & De Graaf, 2010; OECD, 2001). Above all, participatory policymaking strengthens and consolidates democracy (Gramberger, 2001). Elitism, on the other hand, is likely to hamper democracy. How so? We believe it does so by disregarding the consent of the people, from whom the power of the government emanates, even though in a constitutional democracy a constitution may guide the actions of the rulers and the ruled. In several instances in Africa and elsewhere, when the government reneged on or failed to provide the public good, and sometimes did not listen to the complaints and expectations of civil society, the result was insurrections and sometimes military interventions. It is critical to note that several uprisings on the African continent have amounted to a vote of no confidence in the existing government, owing to woeful economic conditions caused by mismanagement and a lack of faith in elections as a democratic means of changing government. Elitism disregards the masses; hence, the masses take back their power. In a democratic state, it is always envisaged that power will be taken away from the government through the ballot box, that is, democratic elections. The governed, however, may instead revolt or resort to undemocratic means. The reverse also holds: when an undemocratic regime is put in place, it further entrenches exclusionary bureaucratic tendencies in decision-making and in the introduction and implementation of policies.
**Public Policymaking and Ghana’s Debt Restructuring Program** Mohammed (2013) categorizes public policymaking in Ghana into two eras: the democratic era and the undemocratic era. He maintains that the undemocratic era extended from Nkrumah’s administration after the country’s independence to Rawlings’ PNDC regime. The democratic era, on the other hand, began in 1993 and continues to date (Mohammed, 2013). He further describes policymaking during the undemocratic era as an exclusionary bureaucratic approach in which the people had no say in the policies adopted and implemented. For example, Nkrumah's government superintended the passing of the Preventive Detention Act (PDA) and the Act that made Ghana a one-party state. Also, the implementation of the Economic Recovery Program under Rawlings took an elitist approach (Gyimah-Boadi, 1990). Such policies disregarded citizens' inputs, and to some extent, the governments did not solicit support from the people. By extension, these governments violated the contract they had with the people, as the social contract theory postulates. Therefore, the policies they made were not directly in the interest of the people from whom they acquired their sovereign power. Military leaders, it can be said without equivocation, acquire their power through force; as a result, such an exclusionary approach to policymaking is to be expected under their rule. The constitution is set aside, the people are ruled by decree, and the might of the military leader and his apparatchiks prevails. The above notwithstanding, under the Fourth Republican dispensation in Ghana, opportunities have been presented to Ghanaian citizens to participate in the making of policies that affect their very lives.
For instance, the development and implementation of the Ghana Poverty Reduction Strategy I (GPRS I, 2003-2005) and the Growth and Poverty Reduction Strategy II (GPRS II, 2006-2009) utilized the participatory approach to policymaking (Mohammed, 2013). Also, the review of the 1992 Constitution and the Reform of the Social Security System (RSSS) utilized the participatory policymaking approach (Mohammed, 2015; Kpessa, 2011). Such extensive engagements demanded much work and deployment of resources, but they also produced good policies that received the people's acclamation, whatever their limitations. The ruled are well disposed toward policies whose promulgation they have contributed to and are willing to make sacrifices, if need be, to see them come to fruition and succeed. Debt exchange, debt restructuring, or debt swap has recently become one of the most used phrases in Ghana's media space. As used by Lazard (2021, p. 2), the term refers to agreements between a creditor and a debtor in which the old debt is exchanged for a new one, providing some financial respite for the debtor and reallocating cash flow to specific goals. Debt exchange, as used by Ghanaian government officials, politicians, and experts, is similar to Lazard's definition above and refers to how the government of Ghana will handle the enormous debt it has accumulated over the years, which many believe is the root of the country's economic difficulties (GhanaWeb, 2023). According to the IMF, debt restructuring or domestic debt restructuring refers to modifications to the contractual payment terms of public domestic debt (including amortization, coupons, and any contingent or other payments) that are made at the expense of the creditors, either through legislative or executive acts, through agreements with creditors, or both (International Monetary Fund (IMF), 2021).
The International Monetary Fund (IMF) has warned that COVID-19-related policy changes and economic shocks may make domestic debt restructuring more common. On December 5, 2022, the Ghanaian government unveiled the Domestic Debt Exchange Program (DDEP) (Financial Stability Council, 2022). Under the program, the government offered holders of about GHS 137 billion in domestic notes and bonds, including the Energy Sector Levy Act (ESLA) and Daakye bonds, a voluntary opportunity to exchange them for a package of new bonds the country would be issuing (Financial Stability Council, 2022). Treasury bills were excluded in their entirety, and notes and bonds held by individuals (natural persons) were initially not subject to the exchange (ibid.); individual bonds were added after changes to the program. The news caused some controversy among the general public, the labor community, and other interested parties; in response to strike threats, the government modified some DDEP specifics (GhanaWeb, 2023). The controversy arose because the government failed to engage the relevant stakeholders, as the participatory policymaking approach postulates it should. According to the government, the goal of the initiative was to reduce debt swiftly, efficiently, and transparently. In this regard, the Government of Ghana has been trying hard to reduce the effects of the domestic debt exchange on investors holding government bonds through the use of an exchange offer (thebftonline, 2022; Ministry of Finance, 2022). The government appears, however, to have taken an elitist approach to this policy. With the DDEP, the government is attempting to change the interest it promised the Ghanaians who lent to it (the bondholders) and the time frame within which those lenders are to receive their interest and principal (GhanaWeb, 2023).
This defies the very idea of the social contract theory: the government is doing whatever it takes to pursue what it perceives as correct and in its interests, but to the disadvantage of the people involved. The government's approach toward participants in the domestic debt exchange program seeks to minimize the impact on individual bondholders and reassure them that their investments will not be impaired. The government promises not to apply a principal haircut to qualified bonds. Individual bondholders will thus be able to exchange their current bonds for new ones with extended durations and stepped-up interest rates (thebftonline, 2022). Under the domestic debt exchange, local bonds with 2027, 2029, 2032, and 2037 maturities are being exchanged for new ones with annual coupon rates of 0% in 2023, 5% in 2024, and 10% from 2025 until maturity (Akorlie & Inveen, 2022). **Reactions toward the Domestic Debt Exchange (DDE) Program** As noted earlier, the unveiling of the DDE program caused some uproar, particularly in the labor sector. The government received threats of strike actions and opposition to the program, forcing it to change several program details (GhanaWeb, 2023). Because the government failed to consult bondholders and other stakeholders prior to the program's launch, some individuals and groups believed that, even though participation in the DDE program was optional, the government was imposing it on bondholders (Adams, 2022; Business Ghana, 2022). It has been argued that only in an autocratic state can a government impose its will on the people; Ghana is a democracy, and the approach its government ought to use is participatory policymaking, not the reverse.
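The coupon step-down described above can be illustrated with a minimal arithmetic sketch. The rates (0% in 2023, 5% in 2024, 10% from 2025 until maturity) follow the figures reported above; the principal holding and the pre-exchange coupon rate are hypothetical assumptions, included only to show the scale of the change for a bondholder.

```python
# Illustrative sketch of a bondholder's annual coupon income under the
# new DDEP bonds. The rate schedule follows the figures reported in the
# text; the principal and the old coupon rate below are hypothetical
# assumptions, not official figures.
def new_coupon_rate(year):
    """Coupon rate on a new DDEP bond in a given calendar year."""
    if year == 2023:
        return 0.00
    if year == 2024:
        return 0.05
    return 0.10  # from 2025 until maturity

principal = 100_000  # GHS, hypothetical holding
old_rate = 0.19      # hypothetical pre-exchange coupon, for comparison only

for year in (2023, 2024, 2025):
    new_income = principal * new_coupon_rate(year)
    old_income = principal * old_rate
    print(f"{year}: new coupon GHS {new_income:,.0f} vs old GHS {old_income:,.0f}")
```

The sketch makes the bondholders' grievance concrete: under the hypothetical figures, a holder receives no coupon at all in the first year of the new bonds.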
The Trade Union Congress (TUC) and the University Teachers Association of Ghana (UTAG) are two major organizations that reacted to the DDEP and voiced concerns about how it would harm their interests. As a labor group, the Trade Union Congress was deeply concerned about the DDEP and how it might harm the interests of its members. Given that a sizable portion of employee pensions is invested in government bonds, the Congress emphasized that there had been no prior engagement with the labor sector. The Congress also reassured its members that it would continue to advance their interests and that not a single "pesewa" of their pension funds would be lost to the Debt Exchange Program (Business Ghana, 2022). This defies the social contract theory, since the government is expected to act in the interests of the people and not otherwise. The University Teachers Association of Ghana (UTAG) stated that it took significant issue with any intervention that would worsen the position of already struggling university teachers (Modern Ghana, 2022). Its worries centered on the potential harm the DDEP might do to the Ghana Universities Salary Superannuation Scheme (GUSSS) and its Tier Two and Tier Three Pension Funds (ibid.). In a statement, UTAG declared, "We reiterate our vehement opposition to any strait-jacket implementation of the announced debt exchange program. It should not in any way affect... returns of the hardworking Ghanaian" (Arhinful, 2022; Modern Ghana, 2022). UTAG further stated that it was ready to brainstorm and assist the government with lasting solutions to the current economic crisis (Modern Ghana, 2022). This clearly shows how willing the relevant stakeholders are to cooperate with the government and suggest ways to help it develop the best policies. The essence of participatory policymaking is for the government to obtain all the necessary information concerning any policy it wishes to implement.
It must do so effectively by reaching out to the people and crafting the best policies in their interests, as the social contract theory postulates. The government's domestic debt exchange program was also opposed by the Ghana Federation of Labor (GFL), representing all recognized trade unions and workers' associations, because of the program's potential negative impacts on employees and retirees (Business Ghana, 2022). Mr. Abraham Koomson, Secretary General of the GFL, pointed out that before choosing IMF assistance to deal with Ghana's economic problems, the government had not shown good faith in negotiations with organized labor. To prevent any labor disturbance, the Secretary General urged the government to respect labor union positions (Business Ghana, 2022). Financial, political, and economic professionals and think tanks also offered their opinions and expectations of the program in light of the political, social, and economic circumstances surrounding the DDEP. The government faces this opposition because it has taken an elitist approach to the DDE program. Interacting with the Ghanaian Times, Professor John Gatsi, Dean of the University of Cape Coast School of Business, criticized the government for launching a debt exchange scheme without first engaging creditors in negotiations. He emphasized that debt restructuring was impossible without the participation of creditors (Adams, 2022). He claimed that the government had broken its agreement with creditors by setting its own interest rate and coupon maturity date, arguing that this might undermine investor trust in Ghana's financial market and even portend its collapse (Adams, 2022). Rev. Dr. Samuel Worlanyo Mensah, a Chartered Economist, claimed that the government's proposed debt exchange program did not include practical solutions to the economy's problems.
He asserted that the economy urgently requires proactive measures and initiatives to increase investor confidence (Annang, 2022). In an interview with GBC News, Rev. Dr. Mensah remarked that to relieve the investor community's concerns, the government should stop stretching the truth (Annang, 2022). It is prudent of the government to be truthful to the people, especially in times of crisis, as that is the only way for the people to support the government and trust in its capability and credibility to produce the right policies to take the country out of such a crisis. The social contract between the people and the government requires trust between the two parties for the contract to hold; without it, there is no reason for the people to give power to the government in the first place. According to another economist, Peter Tekper, the government's debt restructuring initiative is accompanied by financial uncertainty. He advised the administration to proceed with caution, since the action would only fuel the panic in the system. He predicted that 2023 would be a difficult year for the nation, adding that, regrettably, the problems extend beyond the scope of governments (Annang, 2022). The government came under pressure to change several DDE details because of the responses and actions of interest groups and individuals. On January 31, 2023, the government issued a statement through the Ministry of Finance offering new bonds to citizens and retirees (Ministry of Finance, 2023). Evidently, if the government had consulted the people at the outset, it would not have needed to change so many DDE details. A participatory policymaking approach would have prevented all the bottlenecks the government has faced over the DDE program.
**Regulatory Tools to Mitigate Financial Stability Risks from the Domestic Debt Exchange Program** Two days after the DDE was announced, the Financial Stability Council issued a statement evaluating the program's viability and how the government would guarantee the protection of bondholders' and investors' money. The Financial Stability Council is chaired by the Governor of the Bank of Ghana and has members from the Bank of Ghana, Ministry of Finance, Securities and Exchange Commission, National Insurance Commission, National Pensions Regulatory Authority, and Ghana Deposit Protection Corporation. Established in 2018, the Council is mandated by an Executive Instrument to identify and assess threats, vulnerabilities, and risks to the financial sector's stability (Financial Stability Council, 2022). In its statement, the Council described the various steps the government has taken or is taking to reduce the risks from the DDE. They include regulatory forbearance on liquidity and solvency, the Ghana Financial Stability Fund, and accounting treatment. • **Regulatory Forbearance on Liquidity and Solvency** First, regulated companies and schemes that voluntarily participate in the debt operation will temporarily have their regulatory capital and liquidity requirements reduced by the financial sector regulators. Any new regulations that may negatively affect liquidity or solvency will also be suspended or delayed by regulators. Each regulator will in due course inform its regulated firms/schemes of more detailed reliefs (Financial Stability Council, 2022). • **Ghana Financial Stability Fund (GFSF)** The Ghanaian government and its development partners will contribute GHS 15 billion to the GFSF, which is currently being constituted. Financial institutions that participate fully in the Debt Exchange will receive liquidity from the Fund.
With effect from the date the Debt Exchange is completed, all financial institutions (banks, SDIs, pension schemes, collective investment plans, fund managers, broker/dealers, and insurance companies) that fully participate in the Debt Exchange are eligible to access the GFSF for increased liquidity support. The Financial Stability Council is creating special operating rules for the Bank of Ghana to operate the Fund, and the use of the GFSF will be continuously supervised and advised by the Council (Financial Stability Council, 2022). • **Accounting Treatment** To establish a uniform approach to the accounting treatment of the Debt Exchange, regulators are already in contact with the external auditors of financial institutions and will offer guidelines (Financial Stability Council, 2022). The Council further stated that it will carefully monitor the results of the above-mentioned measures and the effects of the Debt Exchange on financial institutions and the overall financial system. The measures will be regularly assessed and adjusted when necessary to ensure that they are as successful as possible in preserving the stability of the financial system and safeguarding deposits, pensions, policyholder funds, and investor funds and assets (Financial Stability Council, 2022). **Ghana’s E-Levy Policy and its Ramifications** Over the years, Ghana has been saddled with severe financial constraints in running the affairs of the state. Governments have come into power relying on internally generated revenue and on funds from international organizations to undertake developmental projects. Ghana generates revenue through taxes on income and property, taxes on domestic goods and services, international trade, and value-added tax; a major source of domestic revenue mobilization has been the collection of road tolls.
Governments do this to improve the people's livelihood, with the aim of upholding and fulfilling their part of the social contract. The Ministry of Roads and Highways directed the discontinuation of road tolls following the Ministry of Finance's announcement that tolls on all public roads would be scrapped. According to the finance ministry, the government observed that the collection of tolls posed more harm than good, hence the decision to abolish collection at all 37 toll points in the country. The minister further stated that the traffic generated at the various toll booths across the country informed the decision (Ahinsah-Wobil, 2022). More recently, under the NPP government, an E-levy bill was proposed to provide a more efficient way of raising state revenue. According to the Electronic Transfer Levy Act 2022, electronic transfers are levied to enhance revenue mobilization; there is also the argument for broadening the government's tax base to provide for the needs of the citizens and other related matters (Parliament of the Republic of Ghana, 2022). Against the backdrop of an increasing debt-to-GDP ratio and budget deficit, the government announced this new revenue item in the 2022 Budget and policy statement and set the electronic levy at a rate of 1.75%. The transactions levied include mobile money transfers, mobile money merchant payments, in-store payments using point-of-sale (POS) devices or QR codes, e-commerce/online payments, and bank-to-mobile money transfers (Agyeiwaa-Afrane et al., 2022). The Ghana Revenue Authority was mandated to collect and account for the e-levy. Taxes are meant to build strong and great states and are required for a good society, as far as the social contract theory is concerned. The people, as part of honoring the contract, are expected to pay taxes, and the government does its part by using such taxes to implement policies in the interests of the people.
Could it mean that governments simply make policies with no consultation? Or can they refuse to engage the citizenry on the same? Proponents of the electronic transfer levy believed it would present an opportunity to tap into the rapidly expanding user base of mobile money services and the profits of their providers while representing a relatively simple and transparent means of collection. The proposed bill was met with great opposition from the opposition party and ordinary Ghanaians, but it was eventually enacted by the Parliament of Ghana. According to Ackah and Opoku (2021) and Nyabor (2022), as cited in Anyidoho et al. (2022), the tax was immediately challenged on several grounds: that it violated principles of taxation by potentially placing a double burden on taxpayers; that it would push lower-income people and small business owners out of the digital economy; and that it might roll back progress on the digitalization of the Ghanaian economy and increase the hardship of workers in an informal economy already hard-hit by the COVID-19 pandemic. Appiahene et al. (2022), analyzing the sentiments some Ghanaians expressed on Twitter, concluded that a considerable percentage of Ghanaians were neither delighted nor unhappy (neutral) with the policy because the average person had little or no awareness of it. The vehemence of the NDC Members of Parliament (MPs) and the insistence of the ruling party, the New Patriotic Party (NPP), resulted in a brawl between MPs on both sides during voting on the Bill. Again, the government opted for an elitist approach to the E-Levy policy. Appiahene et al.'s research suggests that if the government had utilized a participatory approach, there would have been minimal to no opposition to the policy, because the government would have had at its disposal several policy options from which to choose the best. The then Minority Leader in Parliament, Hon.
Haruna Iddrisu, expressed his dissent in parliament as follows: "The financial institutions of this country should not be subject to this punitive, insensitive tax. It would be a disincentive to the private sector of Ghana." The Bill was subsequently withdrawn by the government. In the face of widespread public opposition to the tax and its inability to immediately marshal the numbers to pass it in parliament, the government of the day applied the process Kingdon (2014) would call 'softening up', embarking on a nationwide campaign to sell this form of tax collection to the Ghanaian public (Government of Ghana, 2019). Evidently, the government had no intention of engaging the people and did so only after it realized it needed their support to pass the E-Levy bill into law. In a report by Cooper (2022), the ruling party's Members of Parliament reintroduced the Bill on a Tuesday when many opposition MPs were absent, a surprise move that analysts had previously said would be one of the only ways for the tax to be passed. There would have been no need for such a move if the government had adopted a participatory policymaking approach. The implementation of the government's electronic transaction levy of 1.5% (e-levy), which started in May 2022, ensured that transfers to and from mobile money accounts or bank accounts were levied. The e-levy affected mobile money transactions in the following ways: mobile money transfers completed on wallets from the same electronic money provider, mobile money transfers completed across different electronic money providers, bank accounts to mobile money wallets, mobile money wallets to bank accounts, and the like. Though raising money through a levy to promote development is a good thing, the Ghanaian populace was of the view that the e-levy on mobile money transactions would undermine the broader diffusion of the mobile money industry in Ghana.
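What the levy means for an individual transfer can be shown with a minimal sketch. The two rates (1.75% as announced in the 2022 Budget; 1.5% at implementation in May 2022) come from the figures above; the GHS 1,000 transfer is a hypothetical example, and the sketch applies the rate flat to the whole amount, ignoring any exemption thresholds.

```python
# Illustrative e-levy computation. The two rates come from the text;
# the GHS 1,000 transfer is hypothetical, and the levy is applied flat
# to the full amount (exemption thresholds are ignored in this sketch).
ANNOUNCED_RATE = 0.0175    # rate announced in the 2022 Budget statement
IMPLEMENTED_RATE = 0.015   # rate at implementation in May 2022

def e_levy(amount_ghs, rate=IMPLEMENTED_RATE):
    """Levy charged on a covered electronic transfer, as a flat percentage."""
    return round(amount_ghs * rate, 2)

print(e_levy(1000))                  # 15.0  (implemented 1.5% rate)
print(e_levy(1000, ANNOUNCED_RATE))  # 17.5  (originally announced 1.75% rate)
```

Small as these amounts appear per transfer, they recur on every covered transaction, which is what fed the concern about the levy's cumulative burden on frequent, low-value mobile money users.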
The argument was that the levy would discourage people from using mobile money, since it taxes the meager sums held in mobile money systems, thereby disrupting the inclusion of the unbanked population in the financial and banking industry. The E-levy was imposed on the people, especially considering all the events that accounted for the passing of the bill into law: the people were denied the privilege of making inputs on the E-Levy policy, and the government resorted to a more elitist approach. According to the government of Ghana, the e-levy, which covers mobile money payments, bank transfers, merchant payments, and inward remittances, was estimated to raise up to 6.9 billion Ghanaian cedis in 2022 (Cooper, 2022). **Materials and Methods** With the prime motive of exploring the extent of public influence on policies in Ghana, this paper employed a content analysis approach to systematically analyze the content of appropriate secondary sources such as books, academic journals, reports, and reputable websites pertinent to the topic under review. Some of these materials were sourced from major publishers' websites, including Wiley, Springer, Taylor and Francis, and Elsevier. Books such as Anderson (2011), Birkland (2015), Dye (2013), and Locke (1980), and articles authored by Edelenbos (1999), Hiller, Landenburger & Natowitz (1997), Kpessa (2011), and Mohammed (2015) have been instrumental in the writing of this work. Other relevant sources consulted include Adams (2022), Agyeiwaa-Afrane et al. (2022), Anyidoho, Gallien, Rogan, and Boogaard (2022), Arhinful (2022), and Ahinsah-Wobil (2022). Using secondary sources through document analysis enables researchers to verify information, identify trends, and draw grounded conclusions, strengthening the overall validity and credibility of the work.
Therefore, through this method, we have taken great pains to systematically examine existing documents in crafting this contribution, in order to draw reliable conclusions that are useful for policy, among other things. **Results and Discussion** **Did the People Matter in the Domestic Debt Exchange Program and the E-Levy?** In this section, we turn our attention to the question: did the people of Ghana matter in the domestic debt exchange program and the E-Levy policy? As previously stated, because of the economic crisis the Ghanaian economy was experiencing, the government decided to restructure or swap its bonds for new ones with modified interest rates and durations in order to adjust the interest and principal payments stipulated in the DDEP. When the initiative was announced, there was some uproar in the finance and labor sectors, as worried interest and labor groups opposed the program to defend the "interests" of their members. Although the government repeatedly stated that participation, particularly by private bondholders, was entirely voluntary, the actions of these groups led it to exempt pension funds and include individual bonds (Ministry of Finance, 2023). The government claimed it could close the DDEP with over 80% participation of eligible bonds as of the time of writing (Ministry of Finance, 2023). As a democratic nation, Ghana's politics require transparency, accountability, and participation. Transparency is required in the sense that all government actions should be carried out openly, for everyone to see and know. Accountability is essential so that the government takes full responsibility for its actions and responds honestly to the public and other stakeholders. The participation of the citizenry is equally crucial.
When the government engages the public or civil society and other stakeholders in its actions, it can gain more support from the populace and make a meaningful impact through the right policies, implemented to better the lives of the citizens of Ghana. There was no prior consultation with the financial industry, individual bondholders, stakeholders, or the general public, despite the fact that the government made participation in the DDEP voluntary, especially for individual bondholders. There was no 'softening up', as Kingdon (2014, pp. 127-131) terms it, to educate the people and create awareness of the DDEP so as to ensure favorable passage with little or no opposition. Likewise, with the E-Levy policy, the government only felt the need to engage the people through town hall meetings once there was considerable agitation over the policy, and the aim of those meetings was to explain things to the public and win its support, not to solicit its views and other inputs per se. Answering the question "did the people matter in the domestic debt exchange program?" is more complicated than threading a camel through the eye of a needle. The government claims the DDEP is essential to help protect the economy and enhance Ghana's capacity to service its public debts effectively. Significantly, the protection of the economy is and should be done in consultation with the various stakeholders, and such consultation should result in the satisfaction of both parties (the government and the bondholders). As Prof. Gatsi noted earlier, the government's unilateral choice of strategy and determination of its own interest rates and coupon maturity dates may dampen investors' confidence. As previously stated, decreased investment portends terrible news for the banking sector, which significantly impacts the people.
According to the IMF (2021), such domestic exchange programs are carried out at the expense of the creditor, in this case, the typical Ghanaian. Given the lack of prior engagement with the people, it is difficult to maintain that the people mattered in the adoption, design, and implementation of the DDEP. Regarding the E-Levy, the issue of whether the presiding Speaker could vote had to be settled in court. Facing stiff opposition from the NDC in a hung parliament, a situation compounded by the absence of Hon. Adwoa Sarfo, the majority New Patriotic Party (NPP) pushed the Bill through without the government seeking compromise or building consensus. This amplifies the government's lack of interest in the desires and expectations of the masses concerning the E-Levy. Both policies under discussion were pushed through and passed by the government as a result of the country's financial crisis, and others have questioned whether they were the only ways out of Ghana's economic and fiscal challenges. Indeed, it can be argued that the DDEP and the E-Levy were passed and/or engineered without consulting the people and giving them the opportunity to have a say in determining their own destiny. The government adopted an elitist approach to both policies, where a participatory approach would have been much better.

**Conclusion and Recommendations**

This research discussed the art of policymaking within the theoretical framework of Social Contract Theory. It also shed light on the contrast between elitist and participatory policymaking, with a focus on policymaking in Ghana. The paper then analyzed the Domestic Debt Exchange Program and the E-Levy. Among other things, it found that both policies were passed without the involvement of the people; in other words, the people had no real say in either policy.
The public and relevant stakeholders were consulted only once the passage of the E-Levy, in particular, had become difficult. The struggle the government went through to get both policies passed only underlines the lack of support for them. Both policies were imposed on the very people from whom the government derives its power and sovereignty to make policies on their behalf and in their interest. This defies the spirit and soul of social contract theory. The implication of the government's failure to adopt a participatory policymaking approach to both policies is the citizenry's loss of trust in the government: some people now find ways and means to avoid paying the E-Levy. The citizenry would have been more willing to adhere to both policies had the government made them part of the process. Notwithstanding the new policies, Ghana is still struggling economically. The country remains in economic decline; the government had to introduce three new taxes, all in the name of generating revenue to ease the economic hardship the country faces. Elsewhere, citizens have revolted against governments that favored an elitist approach over a participatory one, especially on critical and crucial policies. We recommend that Ghanaians learn lessons from this quandary, and we hope that subsequent governments will do better in the foreseeable future. Given the magnitude of the opposition to such policies, the participatory approach, which centers on involving citizens in the policymaking process, should be given priority in Ghana, and governments must follow suit. Further research should explore how participatory policymaking can enhance good governance. The literature reviewed here suggests that participatory policymaking is "democracy in action."
Further academic investigation into the implications of citizen engagement, transparency, and the overall democratic health of the country would therefore add a great deal to our knowledge. It would, in no small way, illuminate the challenges and successes associated with participatory approaches to policymaking in Ghana. It would also be important to explore the challenges associated with elitist policy approaches and how they can hamper democracy. That notwithstanding, additional research may explore the circumstances in which elitism would be suitable.

**References**

Adams, C. N. (2022, December 6). *Mixed feelings greet Domestic Debt Exchange Programme*. Ghanaian Times. https://www.ghanaiantimes.com.gh/mixed-feelings-greet-domestic-debt-exchange-programme/

Agyeiwaa-Afrane, A., Agyei-Henaku, K., Badu-Prah, C., Srofenyoh, F., Gidiglo, F. K., Amezi, J., & Djokoto, J. G. (2022). Drivers of Ghanaians’ Approval of the Electronic Levy. *SN Business & Economics*, 2–3.

Ahinsah-Wobil, I. (2022). *Ghana’s Road Toll And E-Levy: The Consideration For a Good*. https://ssrn.com/abstract=4171809

Akorlie, C., & Inveen, C. (2022, December 5). *Ghana to swap domestic debt in fight to regain economic stability*. Reuters. https://www.reuters.com/world/africa/ghana-swap-domestic-debt-fight-regain-economic-stability-2022-12-04/

Anderson, J. E. (2011). *Public Policymaking*. Texas: Elm Street Publishing Services.

Annang, M. (2022, December 6). *Reactions to Finance Minister’s Debt Exchange Programme*. gbeghanaonline. https://www.gbeghanaonline.com/news/business/reactions-to-finance-ministers-debt-exchange-programme/2022/

Anyidoho, N. A., Gallien, M., Rogan, M., & Boogaard, V. V. (2022). Mobile Money Taxation and Informal Workers: Evidence from Ghana’s E-Levy. *ICTD Working Paper 146*, 7–8.

Appiahene, P., Afrifa, S., Akwah Kyei, E., Kofi Nii, I., Adu, K., & Kwabena Mensah, P. (2022). Analyzing Sentiments Towards E-Levy Policy Implementation in Ghana Using Twitter Data. 2–3.

Arhinful, E. (2022, December 8). *Debt exchange: Our money shouldn’t be touched – UTAG tells government*. Myjoyonline. https://www.myjoyonline.com/debt-exchange-our-money-shouldnt-be-touched-utag-tells-government/

Birkland, T. A. (2015). *An Introduction to the Policy Process: Theories, Concepts, and Models of Public Policy Making* (3rd ed.). London and New York: Routledge.

Business Ghana. (2022, December). *GFL kicks against domestic debt exchange programme*. https://www.businessghana.com/site/news/business/276924/GFL-kicks-against-domestic-debt-exchange-programme

Business Ghana. (2022, December). *TUC expresses ‘grave concern’ about Domestic Debt Exchange and its impact on pensions*. http://www.businessghana.com/site/news/General/276021/TUC-expresses-grave-concern-about-Domestic-Debt-Exchange-and-its-impact-on-pensions

Cooper, I. (2022, April 1). Reuters. https://www.reuters.com/world/africa/ghana-approves-tax-electronic-payments-despiteopposition-protest-2022-03-29/#:~:text=Finance%20Minister%20Ken%20Ofori%2DAtta,Ghana's%20raft%20of%20financial%20woes

Dye, T. R. (2013). *Understanding Public Policy* (14th ed.). Pearson Education, Inc.

Edelenbos, J. (1999). Design and Management of Participatory Public Policy Making. *Public Management: An International Journal of Research and Theory*, 569–576.

Financial Stability Council. (2022, December 7). *Government of Ghana Domestic Debt Exchange: Potential Financial Sector Impacts and Mitigating Safeguards*. Bank of Ghana. https://www.bog.gov.gh/news/government-of-ghana-domestic-debt-exchange-potential-financial-sector-impacts-and-mitigating-safeguards/

GhanaWeb. (2023, January 13). *What really is the debt exchange program? – An explainer*. https://www.ghanaweb.com/GhanaHomePage/NewsArchive/What-really-is-the-debt-exchange-programme-An-explainer-1694678

Government of Ghana. (2019). *Ghana Beyond Aid Charter and Strategy Document*.

Gramberger, M. (2001). *Citizens as Partners*. Paris Cedex: OECD Publications Service.

Hiller, E. L. (1997). Public Participation in Medical Policymaking and the Status of Consumer Autonomy. *American Journal of Public Health*, 87(8), 1280–1288.

International Monetary Fund (IMF). (2021). *Issues in Restructuring of Sovereign Domestic Debt*. Washington, D.C.: International Monetary Fund (IMF).

Kingdon, J. W. (2014). *Agendas, Alternatives, and Public Policies* (2nd ed.). Harlow: Pearson Education Limited.

Kpessa, M. W. (2011). The Politics of Public Policy in Ghana: From Closed Circuit Bureaucrats to Citizenry Engagement. *Journal of Developing Societies*, 27(1), 29–56.

Laskar, M. (2013, April 4). Summary of Social Contract Theory by Hobbes, Locke and Rousseau, 1–7. https://dx.doi.org/10.2139/ssrn.2410525

Lazard. (2021). *Debts-for-SDGs swaps in indebted countries: The right instrument to meet the funding gap? A review of past implementation and challenges lying ahead*. Lazard.

Locke, J. (1980). *Second Treatise of Government*. London: Hackett Publishing Company.

Michels, A., & De Graaf, L. (2010). Examining Citizen Participation: Local Participatory Policy Making and Democracy. *Local Government Studies*, 36(4), 477–491.

Ministry of Finance. (2022, December 5). *The Launch of Ghana’s Domestic Debt Exchange Programme*. https://mofep.gov.gh/sites/default/files/basic-page/Domestic-Debt-Exchange-Launch.pdf

Ministry of Finance. (2023, February 14). *Participation in the Domestic Debt Exchange Programme*. https://mofep.gov.gh/press-release/2023-02-14/participation-in-the-domestic-debt-exchange-programme

Modern Ghana. (2022, December 8). *Debt exchange programme shouldn’t affect pensions or investment returns – UTAG rejects the programme*. https://www.modernghana.com/news/1199735/debt-exchange-programme-shouldnt-affect-pensions.html

Mohammed, A. K. (2013). Civic Engagement in Public Policy Making: Fad or Reality in Ghana? *Politics & Policy*, 41(1), 117–152.

Mohammed, A. K. (2015). Ghana’s Policy Making: From Elitism and Exclusion to Participation and Inclusion? *International Public Management Review*, 16(1), 43–66.

OECD. (2001, September). Citizens as Partners in Policymaking. *Focus*, pp. 1–8.

Parliament of the Republic of Ghana. (2022). *Electronic Transfer Levy Act, 2022*. Accra: Ghana Publishing Company Ltd., Assembly Press.

Rietbergen-McCracken, J. (2020). *Participatory Policy Making*. CIVICUS PG Exchange. https://civicus.org/documents/toolkits/PGX_F_ParticipatoryPolicy%20Making.pdf

Savage, R. (2022, December 15). *Analysis: Ghana begins tackling debt restructuring pain as it secures IMF deal*. Reuters. https://www.reuters.com/world/africa/ghana-begins-tackling-debt-restructuring-pain-it-secures-imf-deal-2022-12-14/

thebftonline. (2022, December 9). *A close-up of Ghana’s Debt Exchange Programme*. https://thebftonline.com/2022/12/19/a-close-up-of-ghanas-debt-exchange-programme/

Tisdall, K. E., & Davis, J. (2004). Making a Difference? Bringing Children's and Young People's Views into Policymaking. *Children & Society*, 18, 131–142.