Braille is a tactile writing system used by people who are visually impaired. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser. Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era. Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English Braille there are three levels: uncontracted braille, a letter-by-letter transcription used for basic literacy; contracted braille, an addition of abbreviations and contractions used as a space-saving mechanism; and grade 3, various non-standardized personal stenographies that are less commonly used. In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word space. Dot configurations can be used to represent a letter, digit, punctuation mark, or even a word. Early braille education is crucial to literacy, education and employment among the blind. Despite the evolution of new technologies, including screen-reader software that reads information aloud, braille provides blind people with access to spelling, punctuation and other aspects of written language that are less accessible through audio alone. While some have suggested that audio-based technologies will decrease the need for braille, technological advancements such as braille displays have continued to make braille more accessible and available. Braille users point out that braille remains as essential to them as print is to the sighted.

History

Braille was based on a tactile code, now known as night writing, developed by Charles Barbier. (The name "night writing" was later given to it when it was considered as a means for soldiers to communicate silently at night and without a light source, but Barbier's writings do not use this term and suggest that it was originally designed as a simpler form of writing and for the visually impaired.) In Barbier's system, sets of 12 embossed dots were used to encode 36 different sounds. Braille identified three major defects of the code: first, the symbols represented phonetic sounds rather than letters of the alphabet, so the code was unable to render the orthography of the words. Second, the 12-dot symbols could not easily fit beneath the pad of the reading finger.
This required the reading finger to move in order to perceive the whole symbol, which slowed the reading process. (This was because Barbier's system was based only on the number of dots in each of two 6-dot columns but not the pattern of the dots.) Third, the code did not include symbols for numerals or punctuation. Braille's solution was to use 6-dot cells and to assign a specific pattern to each letter of the alphabet. Braille also developed symbols for representing numerals and punctuation. At first, Braille was a one-to-one transliteration of the French alphabet, but soon various abbreviations (contractions) and even logograms were developed, creating a system much more like shorthand. Today, there are braille codes for over 133 languages. In English, some variations in the braille codes have traditionally existed among English-speaking countries. In 1991, work to standardize the braille codes used in the English-speaking world began. Unified English Braille (UEB) has been adopted in all seven member countries of the International Council on English Braille (ICEB) as well as Nigeria. For blind readers, Braille is an independent writing system, rather than a code of printed orthography.

Derivation

Braille is derived from the Latin alphabet, albeit indirectly. In Braille's original system, the dot patterns were assigned to letters according to their position within the alphabetic order of the French alphabet of the time, with accented letters and w sorted at the end. Unlike print, which consists of mostly arbitrary symbols, the braille alphabet follows a logical sequence. The first ten letters of the alphabet, a–j, use the upper four dot positions (black dots in the table below). These stand for the ten digits 1–9 and 0 in an alphabetic numeral system similar to Greek numerals (as well as derivations of it, including Hebrew numerals, Cyrillic numerals, Abjad numerals, also Hebrew gematria and Greek isopsephy). Though the dots are assigned in no obvious order, the cells with the fewest dots are assigned to the first three letters (and lowest digits), abc = 123 (⠁⠃⠉), and to the three vowels in this part of the alphabet, aei (⠁⠑⠊), whereas the even digits, 4, 6, 8, 0 (⠙⠋⠓⠚), are corners/right angles. The next ten letters, k–t, are identical to a–j respectively, apart from the addition of a dot at position 3 (red dots in the bottom left corner of the cell in the table below):

{| class="wikitable" style="text-align:center"
|+ Derivation (colored dots) of the 26 braille letters of the basic Latin alphabet from the 10 numeric digits (black dots)
|-
|⠁||⠃||⠉||⠙||⠑||⠋||⠛||⠓||⠊||⠚
|-
|a/1||b/2||c/3||d/4||e/5||f/6||g/7||h/8||i/9||j/0
|-
|⠅||⠇||⠍||⠝||⠕||⠏||⠟||⠗||⠎||⠞
|-
|k||l||m||n||o||p||q||r||s||t
|-
|⠥||⠧||⠭||⠽||⠵||colspan="4" rowspan="2"| ||⠺
|-
|u||v||x||y||z||w
|}

The next ten letters (the next "decade") are the same again, but with dots also at both position 3 and position 6 (green dots in the bottom row of the cell in the table above). Here w was initially left out as not being a part of the official French alphabet at the time of Braille's life; the French braille order is u v x y z ç é à è ù (⠥ ⠧ ⠭ ⠽ ⠵ ⠯ ⠿ ⠷ ⠮ ⠾). The next ten letters, ending in w, are the same again, except that for this series position 6 (purple dot in the bottom right corner of the cell in the table above) is used without a dot at position 3. In French braille these are the letters â ê î ô û ë ï ü œ w (⠡ ⠣ ⠩ ⠹ ⠱ ⠫ ⠻ ⠳ ⠪ ⠺). (W had been tacked onto the end of the 39-letter French alphabet to accommodate English.) The a–j series shifted down by one dot space (⠂ ⠆ ⠒ ⠲ ⠢ ⠖ ⠶ ⠦ ⠔ ⠴) is used for punctuation.
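This decade structure is regular enough to generate mechanically. The short Python sketch below (the helper names are invented for this illustration) builds the later decades from the first-decade dot numbers by adding the decade dots described above, or by shifting a pattern down one row, and renders the results as Unicode braille cells purely for display.

```python
# Illustrative sketch: deriving the later decades from the first decade a-j
# by adding decade dots or shifting patterns down one row. Helper names are
# invented for this example; Unicode braille cells are used only for display.
DOT = {n: 1 << (n - 1) for n in range(1, 7)}     # dot n -> bit n-1 in the U+2800 block

FIRST_DECADE = ["1", "12", "14", "145", "15", "124", "1245", "125", "24", "245"]  # a-j

def cell(dots):
    """Render a string of dot numbers as a Unicode braille character."""
    return chr(0x2800 + sum(DOT[int(d)] for d in dots))

def add_dots(dots, extra):
    """Add decade dots (e.g. '3', or '36') to a base pattern."""
    return "".join(sorted(set(dots) | set(extra), key=int))

def shift_down(dots):
    """Shift a first-decade pattern down one row (1->2, 2->3, 4->5, 5->6)."""
    return "".join(str({1: 2, 2: 3, 4: 5, 5: 6}[int(d)]) for d in dots)

second = [add_dots(d, "3") for d in FIRST_DECADE]    # k-t: add dot 3
third  = [add_dots(d, "36") for d in FIRST_DECADE]   # u v x y z, then accented letters: dots 3 and 6
fourth = [add_dots(d, "6") for d in FIRST_DECADE]    # accented letters and w: dot 6 only
fifth  = [shift_down(d) for d in FIRST_DECADE]       # punctuation: shifted down one row

print("".join(cell(d) for d in FIRST_DECADE))   # ⠁⠃⠉⠙⠑⠋⠛⠓⠊⠚
print("".join(cell(d) for d in second))         # ⠅⠇⠍⠝⠕⠏⠟⠗⠎⠞
print("".join(cell(d) for d in fifth))          # ⠂⠆⠒⠲⠢⠖⠶⠦⠔⠴
```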
Letters a and c , which only use dots in the top row, were shifted two places for the apostrophe and hyphen: . (These are also the decade diacritics, at left in the table below, of the second and third decade.) In addition, there are ten patterns that are based on the first two letters () with their dots shifted to the right; these were assigned to non-French letters (ì ä ò ), or serve non-letter functions: (superscript; in English the accent mark), (currency prefix), (capital, in English the decimal point), (number sign), (emphasis mark), (symbol prefix). {| class="wikitable noresize" styel="text-align:center" |+ The 64 modern braille cells !colspan=2| decade || ||colspan=10| numeric sequence || ||colspan=2| shift right |- !1st | || | | | | | | | | | | || | | |- !2nd | || | | | | | | | | | | || | | |- !3rd | || | | | | | | | | | | || | | |- !4th | || | | | | | | | | | | || | | |- !5th ! shiftdown | | | | | | | | | | | || | | |} The first four decades are similar in respect that in those decades the decade dots are applied to the numeric sequence as a logical "inclusive OR" operation whereas the fifth decade applies a "shift down" operation to the numeric sequence. Originally there had been nine decades. The fifth through ninth used dashes as well as dots, but proved to be impractical and were soon abandoned. These could be replaced with what we now know as the number sign (), though that only caught on for the digits (old 5th decade → modern 1st decade). The dash occupying the top row of the original sixth decade was simply dropped, producing the modern fifth decade. (See 1829 braille.) Assignment Historically, there have been three principles in assigning the values of a linear script (print) to Braille: Using Louis Braille's original French letter values; reassigning the braille letters according to the sort order of the print alphabet being transcribed; and reassigning the letters to improve the efficiency of writing in braille. Under international consensus, most braille alphabets follow the French sorting order for the 26 letters of the basic Latin alphabet, and there have been attempts at unifying the letters beyond these 26 (see international braille), though differences remain, for example, in German Braille. This unification avoids the chaos of each nation reordering the braille code to match the sorting order of its print alphabet, as happened in Algerian Braille, where braille codes were numerically reassigned to match the order of the Arabic alphabet and bear little relation to the values used in other countries (compare modern Arabic Braille, which uses the French sorting order), and as happened in an early American version of English Braille, where the letters w, x, y, z were reassigned to match English alphabetical order. A convention sometimes seen for letters beyond the basic 26 is to exploit the physical symmetry of braille patterns iconically, for example, by assigning a reversed n to ñ or an inverted s to sh. (See Hungarian Braille and Bharati Braille, which do this to some extent.) A third principle was to assign braille codes according to frequency, with the simplest patterns (quickest ones to write with a stylus) assigned to the most frequent letters of the alphabet. 
Such frequency-based alphabets were used in Germany and the United States in the 19th century (see American Braille), but with the invention of the braille typewriter their advantage disappeared, and none are attested in modern use. They had the disadvantage that the resulting small number of dots in a text interfered with following the alignment of the letters, and consequently made texts more difficult to read than Braille's more arbitrary letter assignment. Finally, there are braille scripts that do not order the codes numerically at all, such as Japanese Braille and Korean Braille, which are based on more abstract principles of syllable composition. Texts are sometimes written in a script of eight dots per cell rather than six, enabling them to encode a greater number of symbols. (See Gardner–Salinas braille codes.) Luxembourgish Braille has adopted eight-dot cells for general use; for example, it adds a dot below each letter to derive its capital variant.

Form

Braille was the first writing system with binary encoding. The system as devised by Braille consists of two parts: a character encoding that mapped characters of the French alphabet to tuples of six bits (the dots), and the physical representation of those six-bit characters with raised dots in a braille cell. Within an individual cell, the dot positions are arranged in two columns of three positions. A raised dot can appear in any of the six positions, producing 64 (2⁶) possible patterns, including one in which there are no raised dots. For reference purposes, a pattern is commonly described by listing the positions where dots are raised, the positions being universally numbered, from top to bottom, as 1 to 3 on the left and 4 to 6 on the right. For example, dot pattern 1-3-4 describes a cell with three dots raised, at the top and bottom in the left column and at the top of the right column: that is, the letter m. The lines of horizontal braille text are separated by a space, much like visible printed text, so that the dots of one line can be differentiated from the braille text above and below. Different assignments of braille codes (or code pages) are used to map the character sets of different printed scripts to the six-bit cells. Braille assignments have also been created for mathematical and musical notation. However, because the six-dot braille cell allows only 64 (2⁶) patterns, including space, the characters of a braille script commonly have multiple values, depending on their context. That is, character mapping between print and braille is not one-to-one. For example, the character ⠙ corresponds in print to both the letter d and the digit 4. In addition to simple encoding, many braille alphabets use contractions to reduce the size of braille texts and to increase reading speed. (See Contracted braille.)

Writing braille

Braille may be produced by hand using a slate and stylus, in which each dot is created from the back of the page, writing in mirror image, or it may be produced on a braille typewriter or Perkins Brailler, or an electronic Brailler or braille notetaker. Braille users with access to smartphones may also activate the on-screen braille input keyboard to type braille symbols on their device by placing their fingers on the screen according to the dot configuration of the symbols they wish to form. These symbols are automatically translated into print on the screen.
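To make the cell encoding and this kind of dot-entry translation concrete, here is a minimal Python sketch. It is illustrative only (the function and table names are invented for this example, not the API of any actual braille input product): it converts a chorded set of dot numbers into a cell, rendered with the Unicode braille block described later in this article, and looks up the print letter in a deliberately partial table built from dot numbers quoted above.

```python
def dots_to_cell(dots):
    """Render a set of dot numbers (1-6) as a Unicode braille cell.

    The Unicode braille block starts at U+2800, with dot n stored in bit n-1
    (see the Unicode section of this article).
    """
    bits = 0
    for d in dots:
        bits |= 1 << (d - 1)           # dot 1 -> bit 0, ..., dot 6 -> bit 5
    return chr(0x2800 + bits)

# Partial uncontracted letter table, keyed by dot pattern (illustrative only).
LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 3, 4}): "m",         # "dot pattern 1-3-4 ... the letter m"
}

def read_chord(dots):
    """Translate one chorded dot entry into (cell, print letter)."""
    return dots_to_cell(dots), LETTERS.get(frozenset(dots), "?")

print(read_chord({1, 3, 4}))           # ('⠍', 'm')
print(2 ** 6)                          # 64 possible patterns, including the empty cell
```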
The different tools that exist for writing braille allow the braille user to select the method that is best for a given task. For example, the slate and stylus is a portable writing tool, much like the pen and paper for the sighted. Errors can be erased using a braille eraser or can be overwritten with all six dots (⠿). Interpoint refers to braille printing that is offset, so that the paper can be embossed on both sides, with the dots on one side appearing between the divots that form the dots on the other. Using a computer or other electronic device, Braille may be produced with a braille embosser (printer) or a refreshable braille display (screen).

Eight-dot braille

Braille has been extended to an 8-dot code, particularly for use with braille embossers and refreshable braille displays. In 8-dot braille the additional dots are added at the bottom of the cell, giving a matrix 4 dots high by 2 dots wide. The additional dots are given the numbers 7 (for the lower-left dot) and 8 (for the lower-right dot). Eight-dot braille has the advantages that the case of an individual letter is directly coded in the cell containing the letter and that all the printable ASCII characters can be represented in a single cell. All 256 (2⁸) possible combinations of 8 dots are encoded by the Unicode standard. Braille with six dots is frequently stored as Braille ASCII.

Letters

The first 25 braille letters, up through the first half of the 3rd decade, transcribe a–z (skipping w). In English Braille, the rest of that decade is rounded out with the ligatures and, for, of, the, and with. Omitting dot 3 from these forms the 4th decade, the ligatures ch, gh, sh, th, wh, ed, er, ou, ow and the letter w. (See English Braille.)

Formatting

Various formatting marks affect the values of the letters that follow them. They have no direct equivalent in print. The most important in English Braille are the capital sign (dot 6) and the number sign (dots 3-4-5-6): that is, ⠠⠁ is read as capital 'A', and ⠼⠁ as the digit '1'.

Punctuation

Basic punctuation marks in English Braille include a cell that serves as both the question mark and the opening quotation mark; its reading depends on whether it occurs before a word or after. A single cell is likewise used for both opening and closing parentheses; its placement relative to spaces and other characters determines its interpretation. Punctuation varies from language to language. For example, French Braille uses a different cell for its question mark and swaps the values of the quotation marks and parentheses; it uses the period for the decimal point, as in print, and the cell that English Braille uses for the decimal point to mark capitalization.

Contractions

Braille contractions are words and affixes that are shortened so that they take up fewer cells. In English Braille, for example, the word afternoon is written with just three letters (a-f-n), much like stenoscript. There are also several abbreviation marks that create what are effectively logograms. The most common of these is dot 5, which combines with the first letter of words. With the letter m, the resulting word is mother. There are also ligatures ("contracted" letters), which are single letters in braille but correspond to more than one letter in print. The letter and, for example, is used to write words with the sequence a-n-d in them, such as hand.

Page dimensions

Most braille embossers support between 34 and 40 cells per line, and 25 lines per page. A manually operated Perkins braille typewriter supports a maximum of 42 cells per line (its margins are adjustable), and typical paper allows 25 lines per page. A large interlining Stainsby has 36 cells per line and 18 lines per page.
An A4-sized Marburg braille frame, which allows interpoint braille (dots on both sides of the page, offset so they do not interfere with each other), has 30 cells per line and 27 lines per page. Braille writing machine A Braille writing machine is a typewriter with six keys that allows the user to write braille on a regular hard copy page. The first Braille typewriter to gain general acceptance was invented by Frank Haven Hall (Superintendent of the Illinois School for the Blind), and was presented to the public in 1892. The Stainsby Brailler, developed by Henry Stainsby in 1903, is a mechanical writer with a sliding carriage that moves over an aluminium plate as it embosses Braille characters. An improved version was introduced around 1933. In 1951 David Abraham, a woodworking teacher at the Perkins School for the Blind, produced a more advanced Braille typewriter, the Perkins Brailler. Braille printers or embosser were produced in the 1950s. In 1960 Robert Mann, a teacher in MIT, wrote DOTSYS, a software that allowed automatic braille translation, and another group created an embossing device called "M.I.T. Braillemboss". The Mitre Corporation team of Robert Gildea, Jonathan Millen, Reid Gerhart and Joseph Sullivan (now president of Duxbury Systems) developed DOTSYS III, the first braille translator written in a portable programming language. DOTSYS III was developed for the Atlanta Public Schools as a public domain program. In 1991 Ernest Bate developed the Mountbatten Brailler, an electronic machine used to type braille on braille paper, giving it a number of additional features such as word processing, audio feedback and embossing. This version was improved in 2008 with a quiet writer that had an erase key. In 2011 David S. Morgan produced the first SMART Brailler machine, with added text to speech function and allowed digital capture of data entered. Braille reading Braille is traditionally read in hardcopy form, such as with paper books written in braille, documents produced in paper braille (such as restaurant menus), and braille labels or public signage. It can also be read on a refreshable braille display either as a stand-alone electronic device or connected to a computer or smartphone. Refreshable braille displays convert what is visually shown on a computer or smartphone screen into braille through a series of pins that rise and fall to form braille symbols. Currently more than 1% of all printed books have been translated into hardcopy braille. The fastest braille readers apply a light touch and read braille with two hands, although reading braille with one hand is also possible. Although the finger can read only one braille character at a time, the brain chunks braille at a higher level, processing words a digraph, root or suffix at a time. The processing largely takes place in the visual cortex. Literacy Children who are blind miss out on fundamental parts of early and advanced education if not provided with the necessary tools, such as access to educational materials in braille. Children who are blind or visually impaired can begin learning foundational braille skills from a very young age to become fluent braille readers as they get older. Sighted children are naturally exposed to written language on signs, on TV and in the books they see. Blind children require the same early exposure to literacy, through access to braille rich environments and opportunities to explore the world around them. 
Print-braille books, for example, present text in both print and braille and can be read by sighted parents to blind children (and vice versa), allowing blind children to develop an early love for reading even before formal reading instruction begins. Adults who experience sight loss later in life or who did not have the opportunity to learn it when they were younger can also learn braille. In most cases, adults who learn braille were already literate in print before vision loss and so instruction focuses more on developing the tactile and motor skills needed to read braille. While different countries publish statistics on how many readers in a given organization request braille, these numbers only provide a partial picture of braille literacy statistics. For example, this data does not survey the entire population of braille readers or always include readers who are no longer in the school system (adults) or readers who request electronic braille materials. Therefore, there are currently no reliable statistics on braille literacy rates, as described in a publication in the Journal of Visual Impairment and Blindness. Regardless of the precise percentage of braille readers, there is consensus that braille should be provided to all those who benefit from it. Numerous factors influence access to braille literacy, including school budget constraints, technology advancements such as screen-reader software, access to qualified instruction, and different philosophical views over how blind children should be educated. In the USA, a key turning point for braille literacy was the passage of the Rehabilitation Act of 1973, an act of Congress that moved thousands of children from specialized schools for the blind into mainstream public schools. Because only a small percentage of public schools could afford to train and hire braille-qualified teachers, braille literacy has declined since the law took effect. Braille literacy rates have improved slightly since the bill was passed, in part because of pressure from consumers and advocacy groups that has led 27 states to pass legislation mandating that children who are legally blind be given the opportunity to learn braille. In 1998 there were 57,425 legally blind students registered in the United States, but only 10% (5,461) of them used braille as their primary reading medium. Early Braille education is crucial to literacy for a blind or low-vision child. A study conducted in the state of Washington found that people who learned braille at an early age did just as well, if not better than their sighted peers in several areas, including vocabulary and comprehension. In the preliminary adult study, while evaluating the correlation between adult literacy skills and employment, it was found that 44% of the participants who had learned to read in braille were unemployed, compared to the 77% unemployment rate of those who had learned to read using print. Currently, among the estimated 85,000 blind adults in the United States, 90% of those who are braille-literate are employed. Among adults who do not know braille, only 33% are employed. Statistically, history has proven that braille reading proficiency provides an essential skill set that allows blind or low-vision children to compete with their sighted peers in a school environment and later in life as they enter the workforce. Regardless of the specific percentage of braille readers, proponents point out the importance of increasing access to braille for all those who can benefit from it. 
Braille transcription Although it is possible to transcribe print by simply substituting the equivalent braille character for its printed equivalent, in English such a character-by-character transcription (known as uncontracted braille) is typically used by beginners or those who only engage in short reading tasks (such as reading household labels). Braille characters are much larger than their printed equivalents, and the standard 11" by 11.5" (28 cm × 30 cm) page has room for only 25 lines of 43 characters. To reduce space and increase reading speed, most braille alphabets and orthographies use ligatures, abbreviations, and contractions. Virtually all English braille books in hardcopy (paper) format are transcribed in contracted braille: The Library of Congress's Instruction Manual for Braille Transcribing runs to over 300 pages, and braille transcribers must pass certification tests. Uncontracted braille was previously known as grade 1 braille, and contracted braille was previously known as grade 2 braille. Uncontracted braille is a direct transliteration of print words (one-to-one correspondence); hence, the word "about" would contain all the same letters in uncontracted braille as it does in inkprint. Contracted braille includes short forms to save space; hence, for example, the letters "ab" when standing alone represent the word "about" in English contracted braille. In English, some braille users only learn uncontracted braille, particularly if braille is being used for shorter reading tasks such as reading household labels. However, those who plan to use braille for educational and employment purposes and longer reading texts often go on to contracted braille. The system of contractions in English Braille begins with a set of 23 words contracted to single characters. Thus the word but is contracted to the single letter b, can to c, do to d, and so on. Even this simple rule creates issues requiring special cases; for example, d is, specifically, an abbreviation of the verb do; the noun do representing the note of the musical scale is a different word and must be spelled out. Portions of words may be contracted, and many rules govern this process. For example, the character with dots 2-3-5 (the letter "f" lowered in the Braille cell) stands for "ff" when used in the middle of a word. At the beginning of a word, this same character stands for the word "to"; the character is written in braille with no space following it. (This contraction was removed in the Unified English Braille Code.) At the end of a word, the same character represents an exclamation point. Some contractions are more similar than their print equivalents. For example, the contraction , meaning "letter", differs from , meaning "little", only by one dot in the second letter: little, letter. This causes greater confusion between the braille spellings of these words and can hinder the learning process of contracted braille. The contraction rules take into account the linguistic structure of the word; thus, contractions are generally not to be used when their use would alter the usual braille form of a base word to which a prefix or suffix has been added. Some portions of the transcription rules are not fully codified and rely on the judgment of the transcriber. Thus, when the contraction rules permit the same word in more than one way, preference is given to "the contraction that more nearly approximates correct pronunciation". 
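Because contracted braille is rule-driven, the basic idea can be illustrated with a few lines of code. The Python sketch below is a deliberately simplified toy (the word table and helper names are mine, and it ignores the context-sensitive restrictions discussed above, such as the musical-note "do" having to be spelled out); it only shows how standalone wordsigns and short forms shrink a text, not how a real UEB transcription is produced.

```python
import re

# Toy table of standalone wordsigns and short forms mentioned in the article.
WORDSIGNS = {"but": "b", "can": "c", "do": "d", "about": "ab"}

def contract(text):
    """Replace whole words that have a wordsign or short form; leave the rest unchanged."""
    def repl(match):
        word = match.group(0)
        return WORDSIGNS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", repl, text)

print(contract("but we can do that about now"))   # -> "b we c d that ab now"
```

A real transcriber or translation program would additionally apply part-word contractions (such as the "ff" sign described above) and the precedence and pronunciation rules that the certification manuals codify.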
"Grade 3 braille" is a variety of non-standardized systems that include many additional shorthand-like contractions. They are not used for publication, but by individuals for their personal convenience. Braille translation software When people produce braille, this is called braille transcription. When computer software produces braille, this is called a braille translator. Braille translation software exists to handle almost all of the common languages of the world, and many technical areas, such as mathematics (mathematical notation), for example WIMATS, music (musical notation), and tactile graphics. Braille reading techniques Since Braille is one of the few writing systems where tactile perception is used, as opposed to visual perception, a braille reader must develop new skills. One skill important for Braille readers is the ability to create smooth and even pressures when running one's fingers along the words. There are many different styles and techniques used for the understanding and development of braille, even though a study by B. F. Holland suggests that there is no specific technique that is superior to any other. Another study by Lowenfield & Abel shows that braille can be read "the fastest and best... by students who read using the index fingers of both hands". Another important reading skill emphasized in this study is to finish reading the end of a line with the right hand and to find the beginning of the next line with the left hand simultaneously. International uniformity When Braille was first adapted to languages other than French, many schemes were adopted, including mapping the native alphabet to the alphabetical order of French – e.g. in English W, which was not in the French alphabet at the time, is mapped to braille X, X to Y, Y to Z, and Z to the first French-accented letter – or completely rearranging the alphabet such that common letters are represented by the simplest braille patterns. Consequently, mutual intelligibility was greatly hindered by this state of affairs. In 1878, the International Congress on Work for the Blind, held in Paris, proposed an international braille standard, where braille codes for different languages and scripts would be based, not on the order of a particular alphabet, but on phonetic correspondence and transliteration to Latin. This unified braille has been applied to the languages of India and Africa, Arabic, Vietnamese, Hebrew, Russian, and Armenian, as well as nearly all Latin-script languages. In Greek, for example, γ (g) is written as Latin g, despite the fact that it has the alphabetic position of c; Hebrew ב (b), the second letter of the alphabet and cognate with the Latin letter b, is sometimes pronounced /b/ and sometimes /v/, and is written b or v accordingly; Russian ц (ts) is written as c, which is the usual letter for /ts/ in those Slavic languages that use the Latin alphabet; and Arabic ف (f) is written as f, despite being historically p and occurring in that part of the Arabic alphabet (between historic o and q). Other braille conventions Other systems for assigning values to braille patterns are also followed beside the simple mapping of the alphabetical order onto the original French order. Some braille alphabets start with unified braille, and then diverge significantly based on the phonology of the target languages, while others diverge even further. In the various Chinese systems, traditional braille values are used for initial consonants and the simple vowels. 
In both Mandarin and Cantonese Braille, however, characters have different readings depending on whether they are placed in syllable-initial (onset) or syllable-final (rime) position. For instance, the cell for Latin k, , represents Cantonese k (g in Yale and other modern romanizations) when initial, but aak when final, while Latin j, , represents Cantonese initial j but final oei. Novel systems of braille mapping include Korean, which adopts separate syllable-initial and syllable-final forms for its consonants, explicitly grouping braille cells into syllabic groups in the same way as hangul. Japanese, meanwhile, combines independent vowel dot patterns and modifier consonant dot patterns into a single braille cell – an abugida representation of each Japanese mora. Uses Braille is read by people who are blind, deafblind or who have low vision, and by both those born with a visual impairment and those who experience sight loss later in life. Braille may also be used by print impaired people, who although may be fully sighted, due to a physical disability are unable to read print. Even individuals with low vision will find that they benefit from braille, depending on level of vision or context (for example, when lighting or colour contrast is poor). Braille is used for both short and long reading tasks. Examples of short reading tasks include braille labels for identifying household items (or cards in a wallet), reading elevator buttons, accessing phone numbers, recipes, grocery lists and other personal notes. Examples of longer reading tasks include using braille to access educational materials, novels and magazines. People with access to a refreshable braille display can also use braille for reading email and ebooks, browsing the internet and accessing other electronic documents. It is also possible to adapt or purchase playing cards and board games in braille. In India there are instances where the parliament acts have been published in braille, such as The Right to Information Act. Sylheti Braille is used in Northeast India. In Canada, passenger safety information in braille and tactile seat row markers are required aboard planes, trains, large ferries, and interprovincial busses pursuant to the Canadian Transportation Agency's regulations. In the United States, the Americans with Disabilities Act of 1990 requires various building signage to be in braille. In the United Kingdom, it is required that medicines have the name of the medicine in Braille on the labeling. Currency The current series of Canadian banknotes has a tactile feature consisting of raised dots that indicate the denomination, allowing bills to be easily identified by blind or low vision people. It does not use standard braille numbers to identify the value. Instead, the number of full braille cells, which can be simply counted by both braille readers and non-braille readers alike, is an indicator of the value of the bill. Mexican bank notes, Australian bank notes, Indian rupee notes, Israeli new shekel notes and Russian ruble notes also have special raised symbols to make them identifiable by persons who are blind or have low vision. Euro coins were designed in cooperation with organisations representing blind people, and as a result they incorporate many features allowing them to be distinguished by touch alone. In addition, their visual appearance is designed to make them easy to tell apart for persons who cannot read the inscriptions on the coins. 
"A good design for the blind and partially sighted is a good design for everybody" was the principle behind the cooperation of the European Central Bank and the European Blind Union during the design phase of the first series Euro banknotes in the 1990s. As a result, the design of the first euro banknotes included several characteristics which aid both the blind and partially sighted to confidently use the notes. Australia introduced the tactile feature onto their five-dollar banknote in 2016 In the United Kingdom, the front of the £10 polymer note (the side with raised print), has two clusters of raised dots in the top left hand corner, and the £20 note has three. This tactile feature helps blind and partially sighted people identify the value of the note. In 2003 the US Mint introduced the commemorative Alabama State Quarter, which recognized State Daughter Helen Keller on the Obverse, including the name Helen Keller in both English script and Braille inscription. This appears to be the first known use of Braille on US Coin Currency, though not standard on all coins of this type. Unicode The Braille set was added to the Unicode Standard in version 3.0 (1999). Most braille embossers and refreshable braille displays do not use the Unicode code points, but instead reuse the 8-bit code points that are assigned to standard ASCII for braille ASCII. (Thus, for simple material, the same bitstream may be interpreted equally as visual letter forms for sighted readers or their exact semantic equivalent in tactile patterns for blind readers. However some codes have quite different tactile versus visual interpretations and most are not even defined in Braille ASCII.) Some embossers have proprietary control codes for 8-dot braille or for full graphics mode, where dots may be placed anywhere on the page without leaving any space between braille cells so that continuous lines can be drawn in diagrams, but these are rarely used and are not standard. The Unicode standard encodes 6-dot and 8-dot braille glyphs according to their binary appearance, rather than following their assigned numeric order. Dot 1 corresponds to the least significant bit of the low byte of the Unicode scalar value, and dot 8 to the high bit of that byte. The Unicode block for braille is U+2800 ... U+28FF. The mapping of patterns to characters etc. is language dependent: even for English for example, see American Braille and English Braille. Observation Every year on 4 January, World Braille Day is observed internationally to commemorate the birth of Louis Braille and to recognize his efforts. Although the event is not considered a public holiday, it has been recognized by the United Nations as an official day of celebration since 2019. Braille devices There is a variety of contemporary electronic devices that serve the needs of blind people that operate in Braille, such as refreshable braille displays and Braille e-book that use different technologies for transmitting graphic information of different types (pictures, maps, graphs, texts, etc.). See also ("the Braille man of India") List of binary codes List of international common standards Notes References External links L'association Valentin Haüy (in French) Acting for the autonomy of blind and partially sighted persons (Corporate brochure) (Microsoft Word file, in English) Alternate Text Production Center of the California Community Colleges. 
Braille Part 1: Text To Speech For The Visually Impaired (YouTube)
Braille information and advice – Sense UK
Braille at Omniglot
Blue Velvet is a 1986 American neo-noir mystery thriller film written and directed by David Lynch. Blending psychological horror with film noir, the film stars Kyle MacLachlan, Isabella Rossellini, Dennis Hopper, and Laura Dern, and is named after the 1951 song of the same name. The film concerns a young college student who, returning home to visit his ill father, discovers a severed human ear in a field. The ear then leads him to uncover a vast criminal conspiracy, and enter into a romantic relationship with a troubled lounge singer. The screenplay of Blue Velvet had been passed around multiple times in the late 1970s and early 1980s, with several major studios declining it due to its strong sexual and violent content. After the failure of his 1984 film Dune, Lynch made attempts at developing a more "personal story", somewhat characteristic of the surrealist style displayed in his first film Eraserhead (1977). The independent studio De Laurentiis Entertainment Group, owned at the time by Italian film producer Dino De Laurentiis, agreed to finance and produce the film. Blue Velvet initially received a divided critical response, with many stating that its explicit content served little artistic purpose. Nevertheless, the film earned Lynch his second nomination for the Academy Award for Best Director, and received the year's Best Film and Best Director prizes from the National Society of Film Critics. It came to achieve cult status. As an example of a director casting against the norm, it was credited for revitalizing Hopper's career and for providing Rossellini with a dramatic outlet beyond her previous work as a fashion model and a cosmetics spokeswoman. In the years since, the film has been re-evaluated, and it is now widely regarded as one of Lynch's major works and one of the greatest films of the 1980s. Publications including Sight & Sound, Time, Entertainment Weekly and BBC Magazine have ranked it among the greatest American films ever. In 2008, it was chosen by the American Film Institute as one of the greatest mystery films ever made. Plot College student Jeffrey Beaumont returns to his hometown of Lumberton, North Carolina after his father, Tom, has a near-fatal attack from a medical condition. Walking home from the hospital, Jeffrey cuts through a vacant lot and discovers a severed human ear, which he takes to police detective John Williams. Williams' daughter Sandy tells Jeffrey that the ear somehow relates to a lounge singer named Dorothy Vallens. Intrigued, Jeffrey, posing as a pest exterminator, enters her apartment. While there, he steals a spare key while she is distracted by a man in a distinctive yellow sport coat, whom Jeffrey nicknames the "Yellow Man". Jeffrey and Sandy attend Dorothy's nightclub act, in which she sings "Blue Velvet", and leave early so Jeffrey can infiltrate her apartment. Dorothy returns home, stripping naked, but when she hears Jeffrey, she finds him hiding in a closet and forces him to undress at knifepoint. She attempts to rape Jeffrey, but he retreats to the closet when Frank Booth, an aggressively psychopathic gangster and drug lord, arrives and interrupts their encounter. Frank then proceeds to beat and rape Dorothy while inhaling narcotic gas from a canister and alternating between fits of sobbing and violent rage. After Frank leaves, Jeffrey sneaks away and seeks comfort from Sandy. 
After surmising that Frank has abducted Dorothy's husband, Don, and son, Donnie, to force her into sex slavery, Jeffrey suspects Frank cut off Don's ear to intimidate her into submission. While continuing to see Sandy, Jeffrey enters into a sadomasochistic sexual relationship in which Dorothy encourages him to hit her. Jeffrey sees Frank attending Dorothy's show and later observes him selling drugs and meeting with the Yellow Man. Jeffrey then sees the Yellow Man meeting with a "well-dressed man." When Frank catches Jeffrey leaving Dorothy's apartment, he abducts them and takes them to the lair of Ben, a criminal associate holding both Don and Donnie hostage. Frank permits Dorothy to see her family and forces Jeffrey to watch Ben perform an impromptu lip-sync of Roy Orbison's "In Dreams", which moves Frank to tears. Afterwards, he and his gang take Jeffrey and Dorothy on a high-speed joyride to a sawmill yard, where he again attempts to sexually abuse Dorothy. When Jeffrey intervenes and punches him in the face, an enraged Frank and his gang pull him out of the car. Replaying the tape of "In Dreams", Frank smears lipstick on his face and violently kisses Jeffrey, likewise smearing him with red lipstick, before savagely beating him unconscious, while Dorothy pleads for Frank to stop. Jeffrey awakens the next morning, bruised and bloodied. Visiting the police station, Jeffrey discovers that Detective Williams' partner, Tom Gordon, is the Yellow Man, who has been murdering Frank's rival drug dealers and stealing confiscated narcotics from the evidence room for Frank to sell. After Jeffrey and Sandy declare their love for each other at a party, they are pursued by a car which they assume belongs to Frank. As they arrive at Jeffrey's home, Sandy realizes the driver is her ex-boyfriend, Mike. After Mike threatens to beat Jeffrey for stealing his girlfriend, Dorothy appears on Jeffrey's porch naked, beaten, and confused. Mike backs down as Jeffrey and Sandy whisk Dorothy to Sandy's house to summon medical attention. When Dorothy calls Jeffrey "my secret lover", a distraught Sandy slaps him for cheating on her. Jeffrey asks Sandy to tell her father everything, and Detective Williams then leads a police raid on Frank's headquarters, killing his men and crippling his criminal empire. Jeffrey returns alone to Dorothy's apartment, where he discovers Don dead and Gordon mortally wounded. As Jeffrey leaves the apartment, the "Well-Dressed Man" arrives, sees Jeffrey in the stairs, and chases him back inside. Jeffrey, realizing that the "Well-Dressed Man" is actually Frank, uses Gordon's walkie-talkie to say he is in the bedroom (remembering Frank's own police radio) before hiding in a closet. When Frank arrives, he starts shooting around Dorothy's apartment attempting to kill Jeffrey in a state of paranoia, in the process killing Gordon. Frank deduces where Jeffrey is hiding, and Jeffrey kills Frank with Gordon's gun upon Frank opening the door, moments before Sandy and Detective Williams arrive to help. In the epilogue, it is revealed that Jeffrey and Sandy have continued their relationship, Tom has recovered from being hospitalized, and Dorothy has been reunited with her son. Cast Production Origin The film's story originated from three ideas that crystallized in the filmmaker's mind over a period of time starting as early as 1973. The first idea was only "a feeling" and the title Blue Velvet, Lynch told Cineaste in 1987. The second idea was an image of a severed, human ear lying in a field. 
"I don't know why it had to be an ear. Except it needed to be an opening of a part of the body, a hole into something else ... The ear sits on the head and goes right into the mind so it felt perfect," Lynch remarked in a 1986 interview to The New York Times. The third idea was Bobby Vinton's classic rendition of the song "Blue Velvet" and "the mood that came with that song a mood, a time, and things that were of that time." The scene in which Dorothy appears naked outside was inspired by a real-life experience Lynch had during childhood when he and his brother saw a naked woman walking down a neighborhood street at night. The experience was so traumatic to the young Lynch that it made him cry, and he had never forgotten it. After completing The Elephant Man (1980), Lynch met producer Richard Roth over coffee. Roth had read and enjoyed Lynch's Ronnie Rocket script, but did not think it was something he wanted to produce. He asked Lynch if the filmmaker had any other scripts, but the director only had ideas. "I told him I had always wanted to sneak into a girl's room to watch her into the night and that, maybe, at one point or another, I would see something that would be the clue to a murder mystery. Roth loved the idea and asked me to write a treatment. I went home and thought of the ear in the field." Production was announced in August 1984. Lynch wrote two more drafts before he was satisfied with the script of the film. The problem with them, Lynch has said, was that "there was maybe all the unpleasantness in the film but nothing else. A lot was not there. And so it went away for a while." Conditions at this point were ideal for Lynch's film: he had made a deal with Dino De Laurentiis that gave him complete artistic freedom and final cut privileges, with the stipulation that the filmmaker take a cut in his salary and work with a budget of only $6 million. This deal meant that Blue Velvet was the smallest film on De Laurentiis's slate. Consequently, Lynch would be left mostly unsupervised during production. "After Dune I was down so far that anything was up! So it was just a euphoria. And when you work with that kind of feeling, you can take chances. You can experiment." Casting The cast of Blue Velvet included several then-relatively unknown actors. Lynch met Isabella Rossellini at a restaurant, and offered her the role of Dorothy Vallens. Helen Mirren had been Lynch's first choice for the role. Rossellini had gained some exposure before the film for her Lancôme ads in the early 1980s and for being the daughter of actress Ingrid Bergman and Italian film director Roberto Rossellini. After completion of the film, during test screenings, ICM Partners—the agency representing Rossellini—immediately dropped her as a client. Furthermore, the nuns at the school in Rome that Rossellini attended in her youth called to say they were praying for her. Kyle MacLachlan had played the central role in Lynch's critical and commercial failure Dune (1984), a science fiction epic based on the novel of the same name. MacLachlan later became a recurring collaborator with Lynch, who remarked: "Kyle plays innocents who are interested in the mysteries of life. He's the person you trust enough to go into a strange world with." Dennis Hopper was the biggest "name" in the film, having starred in Easy Rider (1969). Hopper—said to be Lynch's third choice (Michael Ironside has stated that Frank was written with him in mind)—accepted the role, reportedly having exclaimed, "I've got to play Frank! I am Frank!" 
as Hopper confirmed in the Blue Velvet "making-of" documentary The Mysteries of Love, produced for the 2002 special edition. Harry Dean Stanton and Steven Berkoff both turned down the role of Frank because of the violent content in the film. Laura Dern (then 18 years old) was cast, after various already successful actresses had turned it down; among these had been Molly Ringwald. Shooting Principal photography of Blue Velvet began in August 1985 and completed in November. The film was shot at EUE/Screen Gems studio in Wilmington, North Carolina, which also provided the exterior scenes of Lumberton. The scene with a raped and battered Dorothy proved to be particularly challenging. Several townspeople arrived to watch the filming with picnic baskets and rugs, against the wishes of Rossellini and Lynch. However, they continued filming as normal, and when Lynch yelled cut, the townspeople had left. As a result, police told Lynch they were no longer permitted to shoot in any public areas of Wilmington. The Carolina Apartments on 5th and Market St in downtown Wilmington served as the location central to the story, with the adjacent Kenan fountain featured prominently in many shots. The building is also the birth place and death place of noted artist Claude Howell. The apartment building stands today, and the Kenan fountain was refurbished in 2020 after sustaining heavy damage during Hurricane Florence. Editing Lynch's original rough cut ran for approximately four hours. He was contractually obligated to deliver a two-hour movie by De Laurentiis and cut many small subplots and character scenes. He also made cuts at the request of the MPAA. For example, when Frank slaps Dorothy after the first rape scene, the audience was supposed to see Frank actually hitting her. Instead, the film cuts away to Jeffrey in the closet, wincing at what he has just seen. This cut was made to satisfy the MPAA's concerns about violence. Lynch thought that the change only made the scene more disturbing. In 2011, Lynch announced that footage from the deleted scenes, long thought lost, had been discovered. The material was subsequently included on the Blu-ray Disc release of the film. Among the deleted footage was Megan Mullally as Jeffrey's college sweetheart Louise Wertham, whose entire role was cut from the theatrical release. The final cut of the film runs at just over two hours. Distribution Because the material was completely different from anything that would be considered mainstream at the time, De Laurentiis Entertainment Group's marketing employees were unsure of how to promote the film, or even if it would be promoted at all; it wasn't until the positive reception the film received at various film festivals that they began to promote it. Interpretations Despite Blue Velvets initial appearance as a mystery, the film operates on a number of thematic levels. The film owes a large debt to 1950s film noir, containing and exploring such conventions as the femme fatale (Dorothy Vallens), a seemingly unstoppable villain (Frank Booth), and the questionable moral outlook of the hero (Jeffrey Beaumont), as well as its unusual use of shadowy, sometimes dark cinematography. Blue Velvet represents and establishes Lynch's famous "askew vision", and introduces several common elements of Lynch's work, some of which would later become his trademarks, including distorted characters, a polarized world, and debilitating damage to the skull or brain. 
Perhaps the most significant Lynchian trademark in the film is the depiction of unearthing a dark underbelly in a seemingly idealized small town; Jeffrey even proclaims in the film that he is "seeing something that was always hidden", alluding to the plot's central idea. Lynch's characterization of films, symbols, and motifs have become well known, and his particular style, characterised largely in Blue Velvet for the first time, has been written about extensively using descriptions like "dreamlike", "ultraweird", "dark", and "oddball". Red curtains also show up in key scenes, specifically in Dorothy's apartment, which have since become a Lynch trademark. The film has been compared to Alfred Hitchcock's Psycho (1960) because of its stark treatment of evil and mental illness. The premise of both films is curiosity, leading to an investigation that draws the lead characters into a hidden, voyeuristic underworld of crime. The film's thematic framework harks back to Edgar Allan Poe, Henry James, and early gothic fiction, as well as films such as Shadow of a Doubt (1943) and The Night of the Hunter (1955) and the entire notion of film noir. Lynch has called it a "film about things that are hidden—within a small city and within people." Feminist psychoanalytic film theorist Laura Mulvey argues that Blue Velvet establishes a metaphorical Oedipal family—"the child", Jeffrey Beaumont, and his "parents", Frank Booth and Dorothy Vallens—through deliberate references to film noir and its underlying Oedipal theme. Michael Atkinson claims that the resulting violence in the film can be read as symbolic of domestic violence within real families. For instance, Frank's violent acts can be seen to reflect the different types of abuse within families, and the control he has over Dorothy might represent the hold an abusive husband has over his wife. He reads Jeffrey as an innocent youth who is both horrified by the violence inflicted by Frank, but also tempted by it as the means of possessing Dorothy for himself. Atkinson takes a Freudian approach to the film; considering it to be an expression of the traumatised innocence which characterises Lynch's work. He states, "Dorothy represents the sexual force of the mother [figure] because she is forbidden and because she becomes the object of the unhealthy, infantile impulses at work in Jeffrey's subconscious." Symbolism Symbolism is used heavily in Blue Velvet. The most consistent symbolism in the film is an insect motif introduced at the end of the first scene, when the camera zooms in on a well-kept suburban lawn until it unearths a swarming underground nest of bugs. This is generally recognized as a metaphor for the seedy underworld that Jeffrey will soon discover under the surface of his own suburban, Reaganesque paradise. The severed ear he finds is being overrun by black ants. The bug motif is recurrent throughout the film, most notably in the bug-like gas mask that Frank wears, but also the excuse that Jeffrey uses to gain access to Dorothy's apartment: he claims to be an insect exterminator. One of Frank's sinister accomplices is also consistently identified through the yellow jacket he wears, possibly reminiscent of the name of a type of wasp. Finally, a robin eating a bug on a fence becomes a topic of discussion in the last scene of the film. The severed ear that Jeffrey discovers is also a key symbolic element, leading Jeffrey into danger. 
Indeed, just as Jeffrey's troubles begin, the audience is treated to a nightmarish sequence in which the camera zooms into the canal of the severed, decomposing ear. Soundtrack The Blue Velvet soundtrack was supervised by Angelo Badalamenti (who makes a brief cameo appearance as the pianist at the Slow Club where Dorothy performs). The soundtrack makes heavy usage of vintage pop songs, such as Bobby Vinton's "Blue Velvet" and Roy Orbison's "In Dreams", juxtaposed with an orchestral score inspired by Shostakovich. During filming, Lynch placed speakers on set and in streets and played Shostakovich to set the mood he wanted to convey. The score alludes to Shostakovich's 15th Symphony, which Lynch had been listening to regularly while writing the screenplay. Lynch had originally opted to use "Song to the Siren" by This Mortal Coil during the scene in which Sandy and Jeffrey share a dance; however, he could not obtain the rights for the song at the time. He would go on to use this song in Lost Highway eleven years later. Entertainment Weekly ranked Blue Velvet soundtrack on its list of the 100 Greatest Film Soundtracks, at the 100th position. Critic John Alexander wrote, "the haunting soundtrack accompanies the title credits, then weaves through the narrative, accentuating the noir mood of the film." Lynch worked with music composer Angelo Badalamenti for the first time in this film and asked him to write a score that had to be "like Shostakovich, be very Russian, but make it the most beautiful thing but make it dark and a little bit scary." Badalamenti's success with Blue Velvet would lead him to contribute to all of Lynch's future full-length films until Inland Empire as well as the cult television program Twin Peaks. Also included in the sound team was long-time Lynch collaborator Alan Splet, a sound editor and designer who had won an Academy Award for his work on The Black Stallion (1979), and been nominated for Never Cry Wolf (1983). Reception Box office Blue Velvet premiered in competition at the Montréal World Film Festival in August 1986, and at the Toronto Festival of Festivals on September 12, 1986, and a few days later in the United States. It debuted commercially in both countries on September 19, 1986, in 98 theatres across the United States. In its opening weekend, the film grossed a total of $789,409. It eventually expanded to another 15 theatres, and in the US and Canada grossed a total of $8,551,228. Blue Velvet was met with uproar during its audience reception, with lines formed around city blocks in New York City and Los Angeles. There were reports of mass walkouts and refund demands during its opening week. At a Chicago screening, a man fainted and had to have his pacemaker checked. Upon completion, he returned to the cinema to see the ending. At a Los Angeles cinema, two strangers became engaged in a heated disagreement, but decided to resolve the disagreement to return to the theatre. Critical reception Blue Velvet was released to a very polarized reception in the United States. The critics who did praise the film were often vociferous. The New York Times critic Janet Maslin directed much praise toward the performances of Hopper and Rossellini: "Mr. Hopper and Miss Rossellini are so far outside the bounds of ordinary acting here that their performances are best understood in terms of sheer lack of inhibition; both give themselves entirely over to the material, which seems to be exactly what's called for." She called it "an instant cult classic". 
Maslin concluded by saying that Blue Velvet "is as fascinating as it is freakish. It confirms Mr. Lynch's stature as an innovator, a superb technician, and someone best not encountered in a dark alley." Sheila Benson of the Los Angeles Times called the film "the most brilliantly disturbing film ever to have its roots in small-town American life," describing it as "shocking, visionary, rapturously controlled". Film critic Gene Siskel included Blue Velvet on his list of the best films of 1986, at the fifth spot. Peter Travers, film critic for Rolling Stone, named it the best film of the 1980s and referred to it as an "American masterpiece". Upon its initial release, both Woody Allen and Martin Scorsese called Blue Velvet the best film of the year. On the other hand, Paul Attanasio of The Washington Post said "the film showcases a visual stylist utterly in command of his talents" and that Angelo Badalamenti "contributes an extraordinary score, slipping seamlessly from slinky jazz to violin figures to the romantic sweep of a classic Hollywood score," but stated that Lynch "isn't interested in communicating, he's interested in parading his personality. The movie doesn't progress or deepen, it just gets weirder, and to no good end." A general criticism from US critics was Blue Velvet's approach to sexuality and violence. They asserted that this detracted from the film's seriousness as a work of art, and some condemned the film as pornographic. One of its detractors, Roger Ebert, praised Isabella Rossellini's performance as "convincing and courageous" but criticized how she was depicted in the film, even accusing David Lynch of misogyny for presenting her "degraded, slapped around, humiliated and undressed in front of the camera. And when you ask an actress to endure those experiences, you should keep your side of the bargain by putting her in an important film." While Ebert in later years came to consider Lynch a great filmmaker, his negative view of Blue Velvet remained unchanged after he revisited it in the 21st century. The film is now widely considered a masterpiece and has a score of 95% on Rotten Tomatoes based on 80 reviews with an average rating of 8.8/10. The website's critical consensus states: "If audiences walk away from this subversive, surreal shocker not fully understanding the story, they might also walk away with a deeper perception of the potential of film storytelling." The film also has a score of 76 out of 100 on Metacritic based on 15 critics, indicating "generally favorable reviews". Looking back in his Guardian/Observer review, critic Philip French wrote, "The film is wearing well and has attained a classic status without becoming respectable or losing its sense of danger." Mark Kermode walked out on the film and gave it a poor review upon its release, but revised his view over time. In 2016, he remarked, "as a film critic, it taught me that when a film really gets under your skin and really provokes a visceral reaction, you have to be very careful about assessing it ... I didn't walk out on Blue Velvet because it was a bad film. I walked out on it because it was a really good film. The point was at the time I wasn't good enough for it." Accolades Lynch was nominated for a Best Director Oscar for the film. Dennis Hopper was nominated for a Golden Globe for his performance. Isabella Rossellini won an Independent Spirit Award for Best Female Lead in 1987. 
David Lynch and Dennis Hopper won Los Angeles Film Critics Association awards in 1987 for Blue Velvet, in the categories of Best Director (Lynch) and Best Supporting Actor (Hopper). In 1987, the National Society of Film Critics gave the film its awards for Best Film, Best Director (David Lynch), Best Cinematography (Frederick Elmes), and Best Supporting Actor (Dennis Hopper). Home media Blue Velvet was released on VHS by Karl-Lorimar Home Video in 1987 and re-issued by Warner Home Video in 1992. It was subsequently released on DVD in 1999 and 2002 by MGM Home Entertainment. The film made its Blu-ray debut on November 8, 2011, with a special 25th-anniversary edition featuring never-before-seen deleted scenes. On May 28, 2019, the film was re-released on Blu-ray by the Criterion Collection, featuring a 4K digital restoration, the original stereo soundtrack and other special features, including a behind-the-scenes documentary titled Blue Velvet Revisited. Legacy Although it initially gained a relatively small theatrical audience in North America and was met with controversy over its artistic merit, Blue Velvet soon became the center of a "national firestorm" in 1986, and over time became an American classic. In the late 1980s and early 1990s, after its release on videotape, the film became a widely recognized cult film for its dark depiction of suburban America. With its many VHS, LaserDisc and DVD releases, the film reached broader American audiences. It marked David Lynch's entry into the Hollywood mainstream and Dennis Hopper's comeback. Hopper's performance as Frank Booth has itself left an imprint on popular culture, with countless tributes, cultural references and parodies. The film's success also helped Hollywood address previously censored issues, as Psycho (1960) had. Blue Velvet has been frequently compared to that ground-breaking film. It has become one of the most significant, well-recognized films of its era, spawning countless imitations and parodies in media. The film's dark, stylish and erotic production design has served as a benchmark for a number of films, parodies and even Lynch's own later work, notably Twin Peaks (1990–91) and Mulholland Drive (2001). Peter Travers of Rolling Stone magazine cited it as one of the most "influential American films", as did Michael Atkinson, who dedicated a book to the film's themes and motifs. Blue Velvet now frequently appears in critical assessments of all-time great films, and has been ranked as one of the greatest films of the 1980s, one of the best examples of American surrealism, and one of the finest examples of David Lynch's work. In a poll of 54 American critics ranking the "most outstanding films of the decade", Blue Velvet was placed fourth, behind Raging Bull (1980), E.T. the Extra-Terrestrial (1982) and the German film Wings of Desire (1987). An Entertainment Weekly book special released in 1999 ranked Blue Velvet 37th among the greatest films of all time. The film was ranked by The Guardian in its list of the 100 Greatest Films. Film Four ranked it on their list of 100 Greatest Films. In a 2007 poll of the online film community held by Variety, Blue Velvet came in as the 95th-greatest film of all time. Total Film ranked Blue Velvet as one of the all-time best films in both a critics' list and a public poll, in 2006 and 2007, respectively. In December 2002, a UK film critics' poll in Sight & Sound ranked the film fifth on its list of the 10 Best Films of the Last 25 Years. 
In a special Entertainment Weekly issue, 100 new film classics were chosen from 1983 to 2008: Blue Velvet was ranked fourth. In addition to Blue Velvet's placement in various "all-time greatest films" rankings, the American Film Institute has honored the film in its own lists: it was ranked 96th on 100 Years ... 100 Thrills (2001), which selected cinema's most thrilling films, and Frank Booth was ranked 36th among the 50 greatest villains on 100 Years ... 100 Heroes and Villains (2003). In June 2008, the AFI revealed its "Ten Top Ten"—the best ten films in ten "classic" American film genres—after polling over 1,500 people from the creative community. Blue Velvet was acknowledged as the eighth best film in the mystery genre. Premiere magazine listed Frank Booth, played by Dennis Hopper, as 54th on its list of 'The 100 Greatest Movie Characters of All Time', calling him one of "the most monstrously funny creations in cinema history". The film was ranked 84th on Bravo Television's four-hour program 100 Scariest Movie Moments (2004). It is frequently sampled musically, and an array of bands and solo artists have taken their names and inspiration from the film. In August 2012, Sight & Sound unveiled their latest list of the 250 greatest films of all time, with Blue Velvet ranking at 69th. Blue Velvet was also nominated for the following AFI lists: AFI's 100 Years...100 Movies; AFI's 100 Years...100 Heroes & Villains (Frank Booth, ranked 36th-greatest film villain); AFI's 100 Years...100 Songs ("In Dreams", nominated song); and AFI's 100 Years...100 Movies (10th Anniversary Edition). Inspired by the film, pop singer Lana Del Rey recorded a cover version of Bobby Vinton's classic rendition of the song "Blue Velvet" in 2012. The recording was used to promote the clothing retailer H&M, and an accompanying music video aired as a television commercial. Set in post-war America, the video drew influence from Lynch and Blue Velvet. In the video, Del Rey plays the role of Dorothy Vallens, performing a private concert similar to the scene where Ben (Dean Stockwell) pantomimes "In Dreams" for Frank Booth. Del Rey's version, however, has her lip-syncing "Blue Velvet" when a little person dressed as Frank Sinatra approaches and unplugs a hidden Victrola, revealing Del Rey as a fraud. When Lynch heard of the music video, he praised it, telling Artinfo: "Lana Del Rey, she's got some fantastic charisma and—this is a very interesting thing—it's like she's born out of another time. She's got something that's very appealing to people. And I didn't know she was influenced by me!" "Now It's Dark", a song by American heavy metal band Anthrax on their 1988 album State of Euphoria, was directly inspired by the film, and specifically the character of Frank Booth. The same phrase appeared in the liner notes of Rush's album Roll the Bones, and drummer Neil Peart later explained: "The phrase occurs in David Lynch's comedy classic Blue Velvet." The sludge metal band Acid Bath sampled the movie on the song "Cassie Eats Cockroaches" from their first album When the Kite String Pops, and industrial metal band Ministry sampled the movie in their song "Jesus Built My Hotrod". The experimental rock band Mr. Bungle also sampled the movie on the songs "Squeeze Me Macaroni", "Stubb - A Dub", and "My Ass Is On Fire" from their debut self-titled album. References Further reading Atkinson, Michael (1997). Blue Velvet. Long Island, New York: British Film Institute. Drazin, Charles (2001). Blue Velvet: Bloomsbury Pocket Movie Guide 3. Britain: Bloomsbury Publishing. 
Lynch, David and Rodley, Chris (2005). Lynch on Lynch. New York: Faber and Faber.
Bagpipes are a woodwind instrument using enclosed reeds fed from a constant reservoir of air in the form of a bag. The Great Highland bagpipes are well known, but people have played bagpipes for centuries throughout large parts of Europe, Northern Africa, Western Asia, around the Persian Gulf and northern parts of South Asia. The term bagpipe is equally correct in the singular or the plural, though pipers usually refer to the bagpipes as "the pipes", "a set of pipes" or "a stand of pipes". Construction A set of bagpipes minimally consists of an air supply, a bag, a chanter, and usually at least one drone. Many bagpipes have more than one drone (and, sometimes, more than one chanter) in various combinations, held in place in stocks—sockets that fasten the various pipes to the bag. Air supply The most common method of supplying air to the bag is through blowing into a blowpipe or blowstick. In some pipes the player must cover the tip of the blowpipe with the tongue while inhaling, in order to prevent unwanted deflation of the bag, but most blowpipes have a non-return valve that eliminates this need. In recent times, various devices have been developed that assist in creating a clean airflow to the pipes and in collecting condensation. The use of a bellows to supply air is an innovation dating from the 16th or 17th century. In these pipes, sometimes called "cauld wind pipes," air is not heated or moistened by the player's breathing, so bellows-driven bagpipes can use more refined or delicate reeds. Such pipes include the Irish uilleann pipes; the border or Lowland pipes, Scottish smallpipes, Northumbrian smallpipes and pastoral pipes in Britain; the musette de cour, the musette bechonnet and the cabrette in France; and the koziol bialy and koziol czarny in Poland. Bag The bag is an airtight reservoir that holds air and regulates its flow via arm pressure, allowing the player to maintain continuous, even sound. The player keeps the bag inflated by blowing air into it through a blowpipe or by pumping air into it with a bellows. Materials used for bags vary widely, but the most common are the skins of local animals such as goats, dogs, sheep, and cows. More recently, bags made of synthetic materials including Gore-Tex have become much more common. Some synthetic bags have zips that allow the player to fit a more effective moisture trap to the inside of the bag. However, synthetic bags carry a risk of colonisation by fungal spores, and the associated danger of lung infection, because they require less cleaning than do bags made from natural substances. Bags cut from larger materials are usually saddle-stitched with an extra strip folded over the seam and stitched (for skin bags) or glued (for synthetic bags) to reduce leaks. Holes are then cut to accommodate the stocks. In the case of bags made from largely intact animal skins, the stocks are typically tied into the points where the limbs and the head joined the body of the whole animal, a construction technique common in Central Europe. Different regions have different ways of treating the hide. The simplest methods involve just the use of salt, while more complex treatments involve milk, flour, and the removal of fur. The hide is normally turned inside out so that the fur is on the inside of the bag, as this helps to reduce the effect of moisture buildup within the bag. Chanter The chanter is the melody pipe, played with two hands. 
All bagpipes have at least one chanter; some pipes have two chanters, particularly those in North Africa, in the Balkans, and in Southwest Asia. A chanter can be bored internally so that the inside walls are parallel (or "cylindrical") for its full length, or it can be bored in a conical shape. Popular woods include boxwood, cornel, plum or other fruit wood. The chanter is usually open-ended, so there is no easy way for the player to stop the pipe from sounding. Thus most bagpipes share a constant legato sound with no rests in the music. Primarily because of this inability to stop playing, technical movements are made to break up notes and to create the illusion of articulation and accents. Because of their importance, these embellishments (or "ornaments") are often highly technical systems specific to each bagpipe, and take many years of study to master. A few bagpipes (such as the musette de cour, the uilleann pipes, the Northumbrian smallpipes, the piva and the left chanter of the surdelina) have closed ends or stop the end on the player's leg, so that when the player "closes" (covers all the holes), the chanter becomes silent. A practice chanter is a chanter without bag or drones and has a much quieter reed, allowing a player to practice the instrument quietly and with no variables other than playing the chanter. The term chanter is derived from the Latin cantare, or "to sing", much like the modern French verb meaning "to sing", chanter. A distinctive feature of the gaida's chanter (which it shares with a number of other Eastern European bagpipes) is the "flea-hole" (also known as a mumbler or voicer, marmorka) which is covered by the index finger of the left hand. The flea-hole is smaller than the rest and usually consists of a small tube that is made out of metal or a chicken or duck feather. Uncovering the flea-hole raises any note played by a half step, and it is used in creating the musical ornamentation that gives Balkan music its unique character. Some types of gaida can have a double bored chanter, such as the Serbian three-voiced gajde. It has eight fingerholes: the top four are covered by the thumb and the first three fingers of the left hand, then the four fingers of the right hand cover the remaining four holes. Chanter reed The note from the chanter is produced by a reed installed at its top. The reed may be a single (a reed with one vibrating tongue) or double reed (of two pieces that vibrate against each other). Double reeds are used with both conical- and parallel-bored chanters while single reeds are generally (although not exclusively) limited to parallel-bored chanters. In general, double-reed chanters are found in pipes of Western Europe while single-reed chanters appear in most other regions. They are made from reed (arundo donax or Phragmites), bamboo, or elder. A more modern variant for the reed is a combination of a cotton phenolic (Hgw2082) material from which the body of the reed is made and a clarinet reed cut to size in order to fit the body. These type of reeds produce a louder sound and are not so sensitive to humidity and temperature changes. Drone Most bagpipes have at least one drone, a pipe that generally is not fingered but rather produces a constant harmonizing note throughout play (usually the tonic note of the chanter). Exceptions are generally those pipes that have a double-chanter instead. A drone is most commonly a cylindrically bored tube with a single reed, although drones with double reeds exist. 
The drone is generally designed in two or more parts with a sliding joint so that the pitch of the drone can be adjusted. Depending on the type of pipes, the drones may lie over the shoulder, across the arm opposite the bag, or may run parallel to the chanter. Some drones have a tuning screw, which effectively alters the length of the drone by opening a hole, allowing the drone to be tuned to two or more distinct pitches. The tuning screw may also shut off the drone altogether. In most types of pipes with one drone, it is pitched two octaves below the tonic of the chanter. Additional drones often add the octave below and then a drone consonant with the fifth of the chanter. History Possible ancient origins The evidence for bagpipes prior to the 13th century AD is still uncertain, but several textual and visual clues have been suggested. The Oxford History of Music posits that a sculpture of bagpipes has been found on a Hittite slab at Euyuk in Anatolia, dated to 1000 BC. Another interpretation of this sculpture suggests that it instead depicts a pan flute played along with a friction drum. Several authors identify the ancient Greek (ἀσκός askos – wine-skin, αὐλός aulos – reed pipe) with the bagpipe. In the 2nd century AD, Suetonius described the Roman emperor Nero as a player of the tibia utricularis. Dio Chrysostom wrote in the 1st century of a contemporary sovereign (possibly Nero) who could play a pipe (tibia, Roman reedpipes similar to Greek and Etruscan instruments) with his mouth as well as by tucking a bladder beneath his armpit. Vereno suggests that such instruments, rather than being seen as an independent class, were understood as variants on mouth-blown instruments that used a bag as an alternative blowing aid and that it was not until drones were added in the European Medieval era that bagpipes were seen as a distinct class. Spread and development in Europe In the early part of the second millennium, representation of bagpipes began to appear with frequency in Western European art and iconography. The Cantigas de Santa Maria, written in Galician-Portuguese and compiled in Castile in the mid-13th century, depicts several types of bagpipes. Several illustrations of bagpipes also appear in the Chronique dite de Baudoin d’Avesnes, a 13th-century manuscript of northern French origin. Although evidence of bagpipes in the British Isles prior to the 14th century is contested, they are explicitly mentioned in The Canterbury Tales (written around 1380): Bagpipes were also frequent subjects for carvers of wooden choir stalls in the late 15th and early 16th century throughout Europe, sometimes with animal musicians. Actual specimens of bagpipes from before the 18th century are extremely rare; however, a substantial number of paintings, carvings, engravings, and manuscript illuminations survive. These artefacts are clear evidence that bagpipes varied widely throughout Europe, and even within individual regions. Many examples of early folk bagpipes in continental Europe can be found in the paintings of Brueghel, Teniers, Jordaens, and Durer. The earliest known artefact identified as a part of a bagpipe is a chanter found in 1985 at Rostock, Germany, that has been dated to the late 14th century or the first quarter of the 15th century. The first clear reference to the use of the Scottish Highland bagpipes is from a French history that mentions their use at the Battle of Pinkie in 1547. George Buchanan (1506–82) claimed that bagpipes had replaced the trumpet on the battlefield. 
This period saw the creation of the ceòl mór (great music) of the bagpipe, which reflected its martial origins, with battle tunes, marches, gatherings, salutes and laments. In the early 17th century, the Highlands saw the development of piping families, including the MacCrimmons, MacArthurs, MacGregors, and the Mackays of Gairloch. The earliest Irish mention of the bagpipe is in 1206, approximately thirty years after the Anglo-Norman invasion; another mention attributes their use to Irish troops in Henry VIII's siege of Boulogne. Illustrations in the 1581 book The Image of Irelande by John Derricke clearly depict a bagpiper. Derricke's illustrations are considered to be reasonably faithful depictions of the attire and equipment of the English and Irish population of the 16th century. The "Battell" sequence from My Ladye Nevells Booke (1591) by William Byrd, which probably alludes to the Irish wars of 1578, contains a piece entitled The bagpipe: & the drone. In 1760, the first serious study of the Scottish Highland bagpipe and its music was attempted in Joseph MacDonald's Compleat Theory. A manuscript from the 1730s by a William Dixon of Northumberland contains music that fits the border pipes, a nine-note bellows-blown bagpipe with a chanter similar to that of the modern Great Highland bagpipe. However, the music in Dixon's manuscript varied greatly from modern Highland bagpipe tunes, consisting mostly of extended variation sets of common dance tunes. Some of the tunes in the Dixon manuscript correspond to those found in the early 19th century manuscript sources of Northumbrian smallpipe tunes, notably the rare book of 50 tunes, many with variations, by John Peacock. As Western classical music developed, both in terms of musical sophistication and instrumental technology, bagpipes in many regions fell out of favour because of their limited range and function. This triggered a long, slow decline that continued, in most cases, into the 20th century. Extensive and documented collections of traditional bagpipes may be found at the Metropolitan Museum of Art in New York City, the International Bagpipe Museum in Gijón, Spain, the Pitt Rivers Museum in Oxford, England, the Morpeth Chantry Bagpipe Museum in Northumberland, and the Musical Instrument Museum in Phoenix, Arizona. An international bagpipe festival is held every two years in Strakonice, Czech Republic. Recent history During the expansion of the British Empire, spearheaded by British military forces that included Highland regiments, the Scottish Great Highland bagpipe became well known worldwide. This surge in popularity was boosted by large numbers of pipers trained for military service in World War I and World War II. This coincided with a decline in the popularity of many traditional forms of bagpipe throughout Europe, which began to be displaced by instruments from the classical tradition and later by gramophone and radio. As pipers were easily identifiable, combat losses were high, estimated at one thousand in World War I. A front-line role was prohibited following high losses in the Second Battle of El Alamein in 1942, though a few later instances occurred. In the United Kingdom and Commonwealth Nations such as Canada, New Zealand and Australia, the Great Highland bagpipe is commonly used in the military and is often played during formal ceremonies. Foreign militaries patterned after the British army have also adopted the Highland bagpipe, including those of Uganda, Sudan, India, Pakistan, Sri Lanka, Jordan, and Oman. 
Many police and fire services in Scotland, Canada, Australia, New Zealand, Hong Kong, and the United States have also adopted the tradition of fielding pipe bands. In recent years, often driven by revivals of native folk music and dance, many types of bagpipes have enjoyed a resurgence in popularity and, in many cases, instruments that had fallen into obscurity have become extremely popular. In Brittany, the Great Highland bagpipe and concept of the pipe band were appropriated to create a Breton interpretation known as the bagad. The pipe-band idiom has also been adopted and applied to the Galician gaita as well. Bagpipes have often been used in various films depicting moments from Scottish and Irish history; the film Braveheart and the theatrical show Riverdance have served to make the uilleann pipes more commonly known. Bagpipes are sometimes played at formal events at Commonwealth universities, particularly in Canada. Because of Scottish influences on the sport of curling, bagpipes are also the official instrument of the World Curling Federation and are commonly played during a ceremonial procession of teams before major curling championships. Bagpipe making was once a craft that produced instruments in many distinctive, local and traditional styles. Today, the world's largest producer of the instrument is Pakistan, where the industry was worth $6.8 million in 2010. In the late 20th century, various models of electronic bagpipes were invented. The first custom-built MIDI bagpipes were developed by the Asturian piper known as Hevia (José Ángel Hevia Velasco). Astronaut Kjell N. Lindgren is thought to be the first person to play the bagpipes in outer space, having played "Amazing Grace" in tribute to late research scientist Victor Hurst aboard the International Space Station in November 2015. Traditionally, one of the purposes of the bagpipe was to provide music for dancing. This has declined with the growth of dance bands, recordings, and the decline of traditional dance. In turn, this has led to many types of pipes developing a performance-led tradition, and indeed much modern music based on the dance music tradition played on bagpipes is suitable for use as dance music. Modern usage Types of bagpipes Numerous types of bagpipes today are widely spread across Europe and the Middle East, as well as through much of the former British Empire. The name bagpipe has almost become synonymous with its best-known form, the Great Highland bagpipe, overshadowing the great number and variety of traditional forms of bagpipe. Despite the decline of these other types of pipes over the last few centuries, in recent years many of these pipes have seen a resurgence or revival as musicians have sought them out; for example, the Irish piping tradition, which by the mid 20th century had declined to a handful of master players is today alive, well, and flourishing, a situation similar to that of the Asturian gaita, the Galician gaita, the Portuguese gaita transmontana, the Aragonese gaita de boto, Northumbrian smallpipes, the Breton biniou, the Balkan gaida, the Romanian cimpoi, the Black Sea tulum, the Scottish smallpipes and pastoral pipes, as well as other varieties. Bulgaria has the Kaba gaida, a large bagpipe of the Rhodope mountains with a hexagonal and rounded drone, often described as a deep-sounding gaida and the Dzhura gaida with a straight conical drone and of a higher pitch. The Macedonian gaida is structurally between a kaba and dzhura gaida and described as a medium pitched gaida. 
Throughout Southeastern and Eastern Europe, bagpipes known by local variants of the name gaida are played in many countries and regions. Usage in non-traditional music Since the 1960s, bagpipes have also made appearances in other forms of music, including rock, metal, jazz, hip-hop, punk, and classical music, for example with Paul McCartney's "Mull of Kintyre", AC/DC's "It's a Long Way to the Top (If You Wanna Rock 'n' Roll)", and Peter Maxwell Davies's composition An Orkney Wedding, with Sunrise. Publications Periodicals covering specific types of bagpipes are addressed in the article for that bagpipe. See also List of bagpipes List of bagpipers List of pipe makers List of pipe bands Glossary of bagpipe terms Practice chanter References Bibliography Lommel, Arle. "The Hungarian Duda and Contra-Chanter Bagpipes of the Carpathian Basin." The Galpin Society Journal (2008): 305–321. External links Bagpipe iconography – Paintings and images of the pipes. Musiconis Database of Medieval Musical Iconography: Bagpipe. A demonstration of rare instruments including bagpipes (archived 12 November 2009) The Concise History of the Bagpipe by Frank J. Timoney The Bagpipe Society, dedicated to promoting the study, playing, and making of bagpipes and pipes from around the world Bagpipes from Polish collections (Polish folk musical instruments) Bagpipes (local Polish name "Koza") played by Jan Karpiel-Bułecka (English subtitles) Official site of Baghet (bagpipe from North Italy) players (archived 9 July 2017) Celtic Music: Scottish Military Bagpipes. The presence of the gaida in Greece
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry became successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems, in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology. History By its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argue that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others consider Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point, as its beginning, to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th-century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. 
Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term (Biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry), where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg, however, is often cited as having coined the word in 1903, while some credit it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy, as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression. Starting materials: the chemical elements of life Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts). Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). 
In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more. Biomolecules The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity. Carbohydrates Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell-to-cell interactions and communications. The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between an acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a seven-atom ring are rare. Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance. When a few (around three to six) monosaccharides are joined, the result is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. 
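The composition arithmetic described above lends itself to a quick check. The short Python sketch below is a minimal illustration only, not drawn from any biochemistry library: it tests whether a formula fits the generalized CnH2nOn pattern and verifies that joining glucose and fructose with the loss of one water molecule gives the formula of sucrose (C12H22O11). The formulas and the dehydration reaction are those described in the text; the function and variable names are invented for the example.

```python
# Minimal sketch (assumptions noted above): elemental bookkeeping for
# the sugars and the dehydration reaction described in the text.

GLUCOSE = {"C": 6, "H": 12, "O": 6}
FRUCTOSE = {"C": 6, "H": 12, "O": 6}
DEOXYRIBOSE = {"C": 5, "H": 10, "O": 4}
WATER = {"C": 0, "H": 2, "O": 1}

def fits_general_formula(f):
    """True if the composition fits the generalized CnH2nOn pattern (n >= 3)."""
    n = f["C"]
    return n >= 3 and f["H"] == 2 * n and f["O"] == n

def condense(a, b):
    """Join two units by a dehydration reaction: combine atoms, lose one H2O."""
    return {el: a[el] + b[el] - WATER[el] for el in ("C", "H", "O")}

print(fits_general_formula(GLUCOSE))      # True: exact 1:2:1 ratio
print(fits_general_formula(DEOXYRIBOSE))  # False: hence "mostly" a 1:2:1 ratio
print(condense(GLUCOSE, FRUCTOSE))        # {'C': 12, 'H': 22, 'O': 11} -> sucrose
```

Running the sketch confirms that glucose and fructose obey the 1:2:1 ratio exactly, that deoxyribose does not, and that the glucose–fructose condensation yields the atom counts of sucrose.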
Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant cell walls, and glycogen is used as a form of energy storage in animals. Sugars can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2). Lipids Lipids comprise a diverse range of molecules and are to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid-derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid. Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain). Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below. Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating, such as butter, cheese, and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, which are the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilisers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome). Proteins Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". 
The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that made each amino acid different, and the properties of the side-chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues. Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain. The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a rate of 1011 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole. The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The alpha chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. 
Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit. Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids. If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein. A similar process is used to break down proteins: a protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release the ammonia into the water where it is quickly diluted. In general, mammals convert the ammonia into urea, via the urea cycle. In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function. Nucleic acids "Nucleic acid", so called because of the molecules' prevalence in cellular nuclei, is the generic name for a family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). 
The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid (similar to a zipper). Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. Adenine–thymine and adenine–uracil pairs each contain two hydrogen bonds, while cytosine–guanine pairs contain three. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA. Metabolism Carbohydrates as an energy source Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides. Glycolysis (anaerobic) Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents, converting NAD+ (nicotinamide adenine dinucleotide, oxidized form) to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway. Aerobic In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide and generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. 
This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling 32 molecules of ATP conserved per molecule of glucose degraded (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen. Gluconeogenesis In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. Glucose can also be formed from non-carbohydrate origins, such as fat and protein; this only happens when glycogen supplies in the liver are worn out. The pathway is a crucial reversal of glycolysis from pyruvate to glucose and can draw on many sources, such as amino acids, glycerol and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs when an organism suffers from starvation or certain endocrine disorders. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, lactate's transport via the bloodstream to the liver, subsequent gluconeogenesis, and release of glucose into the bloodstream is called the Cori cycle. Relationship to other "molecular-scale" biological sciences Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules; molecular biology studies their biological activity; genetics studies their heredity, which happens to be carried by their genome. One possible view of the relationships between the fields is the following: Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level. Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene), through the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies. Molecular biology is the study of the molecular underpinnings of biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. 
The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA. Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).

See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
Fundamental Concepts And Processes In Biochemistry
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle

Notes
a. Fructose is not the only sugar found in fruits. Glucose and sucrose are also found in varying quantities in various fruits, and sometimes exceed the fructose present. For example, 32% of the edible portion of a date is glucose, compared with 24% fructose and 8% sucrose. However, peaches contain more sucrose (6.66%) than they do fructose (0.93%) or glucose (1.47%).

Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Roberts, Keith; Raff, Martin; Alberts, Bruce; Walter, Peter; Lewis, Julian; Johnson, Alexander. Molecular Biology of the Cell, 4th edition. Routledge, March 2002, hardcover, 1616 pp. (3rd edition, Garland, 1994; 2nd edition, Garland, 1989.)
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.

External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. – full text of Berg, Tymoczko, and Stryer, courtesy of NCBI
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook
Badminton is a racquet sport played using racquets to hit a shuttlecock across a net. Although it may be played with larger teams, the most common forms of the game are "singles" (with one player per side) and "doubles" (with two players per side). Badminton is often played as a casual outdoor activity in a yard or on a beach; formal games are played on a rectangular indoor court. Points are scored by striking the shuttlecock with the racquet and landing it within the other team's half of the court. Each side may only strike the shuttlecock once before it passes over the net. Play ends once the shuttlecock has struck the floor or ground, or if a fault has been called by the umpire, service judge, or (in their absence) the opposing side. The shuttlecock is a feathered or (in informal matches) plastic projectile that flies differently from the balls used in many other sports. In particular, the feathers create much higher drag, causing the shuttlecock to decelerate more rapidly. Shuttlecocks also have a high top speed compared to the balls in other racquet sports. The flight of the shuttlecock gives the sport its distinctive nature. The game developed in British India from the earlier game of battledore and shuttlecock. European play came to be dominated by Denmark but the game has become very popular in Asia, with recent competitions dominated by China. In 1992, badminton debuted as a Summer Olympic sport with four events: men's singles, women's singles, men's doubles, and women's doubles; mixed doubles was added four years later. At high levels of play, the sport demands excellent fitness: players require aerobic stamina, agility, strength, speed, and precision. It is also a technical sport, requiring good motor coordination and the development of sophisticated racquet movements. History Games employing shuttlecocks have been played for centuries across Eurasia, but the modern game of badminton developed in the mid-19th century among the expatriate officers of British India as a variant of the earlier game of battledore and shuttlecock. ("Battledore" was an older term for "racquet".) Its exact origin remains obscure. The name derives from the Duke of Beaufort's Badminton House in Gloucestershire, but why or when remains unclear. As early as 1860, a London toy dealer named Isaac Spratt published a booklet entitled Badminton Battledore – A New Game, but no copy is known to have survived. An 1863 article in The Cornhill Magazine describes badminton as "battledore and shuttlecock played with sides, across a string suspended some five feet from the ground". The game originally developed in India among the British expatriates, where it was very popular by the 1870s. Ball badminton, a form of the game played with a wool ball instead of a shuttlecock, was being played in Thanjavur as early as the 1850s and was at first played interchangeably with badminton by the British, the woollen ball being preferred in windy or wet weather. Early on, the game was also known as Poona or Poonah after the garrison town of Poona, where it was particularly popular and where the first rules for the game were drawn up in 1873. By 1875, officers returning home had started a badminton club in Folkestone. Initially, the sport was played with sides ranging from 1 to 4 players, but it was quickly established that games between two or four competitors worked the best. The shuttlecocks were coated with India rubber and, in outdoor play, sometimes weighted with lead. 
Although the depth of the net was of no consequence, it was preferred that it should reach the ground. The sport was played under the Pune rules until 1887, when J. H. E. Hart of the Bath Badminton Club drew up revised regulations. In 1890, Hart and Bagnel Wild again revised the rules. The Badminton Association of England (BAE) published these rules in 1893 and officially launched the sport at a house called "Dunbar" in Portsmouth on 13 September. The BAE started the first badminton competition, the All England Open Badminton Championships for gentlemen's doubles, ladies' doubles, and mixed doubles, in 1899. Singles competitions were added in 1900 and an England–Ireland championship match appeared in 1904. England, Scotland, Wales, Canada, Denmark, France, Ireland, the Netherlands, and New Zealand were the founding members of the International Badminton Federation in 1934, now known as the Badminton World Federation. India joined as an affiliate in 1936. The BWF now governs international badminton. Although initiated in England, competitive men's badminton has traditionally been dominated in Europe by Denmark. Worldwide, Asian nations have become dominant in international competition. China, Denmark, Indonesia, Malaysia, India, South Korea, Taiwan (playing as 'Chinese Taipei') and Japan are the nations which have consistently produced world-class players in the past few decades, with China being the greatest force in men's and women's competition recently. Great Britain, where the rules of the modern game were codified, is not among the top powers in the sport, but has had significant Olympic and World success in doubles play, especially mixed doubles. The game has also become a popular backyard sport in the United States. Rules The following information is a simplified summary of badminton rules based on the BWF Statutes publication, Laws of Badminton. Court The court is rectangular and divided into halves by a net. Courts are usually marked for both singles and doubles play, although badminton rules permit a court to be marked for singles only. The doubles court is wider than the singles court, but both are of the same length. The exception, which often causes confusion to newer players, is that the doubles court has a shorter serve-length dimension. The full width of the court is , and in singles this width is reduced to . The full length of the court is . The service courts are marked by a centre line dividing the width of the court, by a short service line at a distance of from the net, and by the outer side and back boundaries. In doubles, the service court is also marked by a long service line, which is from the back boundary. The net is high at the edges and high in the centre. The net posts are placed over the doubles sidelines, even when singles is played. The minimum height for the ceiling above the court is not mentioned in the Laws of Badminton. Nonetheless, a badminton court will not be suitable if the ceiling is likely to be hit on a high serve. Serving When the server serves, the shuttlecock must pass over the short service line on the opponents' court or it will count as a fault. The server and receiver must remain within their service courts, without touching the boundary lines, until the server strikes the shuttlecock. The other two players may stand wherever they wish, so long as they do not block the vision of the server or receiver. At the start of the rally, the server and receiver stand in diagonally opposite service courts (see court dimensions). 
The server hits the shuttlecock so that it will land in the receiver's service court. This is similar to tennis, except that in badminton the whole shuttle must be below 1.15 metres from the surface of the court at the instant it is struck by the server's racket, the shuttlecock is not allowed to bounce, and the players stand inside their service courts, unlike in tennis. When the serving side loses a rally, the serve immediately passes to their opponent(s) (this differs from the old system, where the serve sometimes passed to the doubles partner for what was known as a "second serve"). In singles, the server stands in their right service court when their score is even, and in their left service court when their score is odd. In doubles, if the serving side wins a rally, the same player continues to serve, but they change service courts so that they serve to a different opponent each time. If the opponents win the rally and their new score is even, the player in the right service court serves; if odd, the player in the left service court serves. The players' service courts are determined by their positions at the start of the previous rally, not by where they were standing at the end of the rally. A consequence of this system is that each time a side regains the service, the server will be the player who did not serve last time. Scoring Each game is played to 21 points, with players scoring a point whenever they win a rally regardless of whether they served (this differs from the old system, where players could only win a point on their serve and each game was played to 15 points). A match is the best of three games. If the score reaches 20–20, then the game continues until one side gains a two-point lead (such as 24–22), except when there is a tie at 29–29, in which case the game goes to a golden point at 30. Whoever scores this point wins the game; this endgame rule, together with the service-court rule above, is restated in the short sketch below. At the start of a match, the shuttlecock is cast and the side towards which the shuttlecock is pointing serves first. Alternatively, a coin may be tossed, with the winners choosing whether to serve or receive first, or choosing which end of the court to occupy first, and their opponents making the remaining choice. In subsequent games, the winners of the previous game serve first. Matches are best out of three: a player or pair must win two games (of 21 points each) to win the match. For the first rally of any doubles game, the serving pair may decide who serves and the receiving pair may decide who receives. The players change ends at the start of the second game; if the match reaches a third game, they change ends both at the start of the game and when the leading player's or pair's score reaches 11 points. Lets If a let is called, the rally is stopped and replayed with no change to the score. Lets may occur because of some unexpected disturbance, such as a shuttlecock landing on the court (having been hit there by players on an adjacent court) or, in small halls, the shuttle touching an overhead rail, which can be classed as a let. If the receiver is not ready when the service is delivered, a let shall be called; yet, if the receiver attempts to return the shuttlecock, the receiver shall be judged to have been ready. Equipment Badminton rules restrict the design and size of racquets and shuttlecocks. Racquets Badminton racquets are lightweight, with top quality racquets weighing between not including grip or strings. 
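As noted above, the rally-point scoring and the even/odd service-court rules can be restated compactly in code. The following Python sketch is an unofficial illustration only, assuming the simplified rules as described in this summary; the function names are invented for the example, and lets, faults, and doubles rotation are ignored.

```python
def game_over(score_a: int, score_b: int) -> bool:
    """A game ends at 21 with a two-point lead, or at the 30-point golden point."""
    leader, trailer = max(score_a, score_b), min(score_a, score_b)
    if leader == 30:                       # 29-29 is decided by a golden point at 30
        return True
    return leader >= 21 and leader - trailer >= 2

def singles_service_court(server_score: int) -> str:
    """In singles, the server serves from the right court on an even score, left on odd."""
    return "right" if server_score % 2 == 0 else "left"

# Examples following the rules described above.
assert not game_over(20, 20)
assert game_over(24, 22)
assert not game_over(29, 29)
assert game_over(30, 29)
assert singles_service_court(0) == "right"
assert singles_service_court(7) == "left"
```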
They are composed of many different materials ranging from carbon fibre composite (graphite reinforced plastic) to solid steel, which may be augmented by a variety of materials. Carbon fibre has an excellent strength to weight ratio, is stiff, and gives excellent kinetic energy transfer. Before the adoption of carbon fibre composite, racquets were made of light metals such as aluminium. Earlier still, racquets were made of wood. Cheap racquets are still often made of metals such as steel, but wooden racquets are no longer manufactured for the ordinary market, because of their excessive mass and cost. Nowadays, nanomaterials such as carbon nanotubes and fullerene are added to racquets giving them greater durability. There is a wide variety of racquet designs, although the laws limit the racquet size and shape. Different racquets have playing characteristics that appeal to different players. The traditional oval head shape is still available, but an isometric head shape is increasingly common in new racquets. Strings Badminton strings for racquets are thin, high-performing strings with thicknesses ranging from about 0.62 to 0.73 mm. Thicker strings are more durable, but many players prefer the feel of thinner strings. String tension is normally in the range of 80 to 160 N (18 to 36 lbf). Recreational players generally string at lower tensions than professionals, typically between 80 and 110 N (18 and 25 lbf). Professionals string between about 110 and 160 N (25 and 36 lbf). Some string manufacturers measure the thickness of their strings under tension so they are actually thicker than specified when slack. Ashaway Micropower is actually 0.7mm but Yonex BG-66 is about 0.72mm. It is often argued that high string tensions improve control, whereas low string tensions increase power. The arguments for this generally rely on crude mechanical reasoning, such as claiming that a lower tension string bed is more bouncy and therefore provides more power. This is, in fact, incorrect, for a higher string tension can cause the shuttle to slide off the racquet and hence make it harder to hit a shot accurately. An alternative view suggests that the optimum tension for power depends on the player: the faster and more accurately a player can swing their racquet, the higher the tension for maximum power. Neither view has been subjected to a rigorous mechanical analysis, nor is there clear evidence in favour of one or the other. The most effective way for a player to find a good string tension is to experiment. Grip The choice of grip allows a player to increase the thickness of their racquet handle and choose a comfortable surface to hold. A player may build up the handle with one or several grips before applying the final layer. Players may choose between a variety of grip materials. The most common choices are PU synthetic grips or towelling grips. Grip choice is a matter of personal preference. Players often find that sweat becomes a problem; in this case, a drying agent may be applied to the grip or hands, sweatbands may be used, the player may choose another grip material or change their grip more frequently. There are two main types of grip: replacement grips and overgrips. Replacement grips are thicker and are often used to increase the size of the handle. Overgrips are thinner (less than 1 mm), and are often used as the final layer. Many players, however, prefer to use replacement grips as the final layer. Towelling grips are always replacement grips. 
Replacement grips have an adhesive backing, whereas overgrips have only a small patch of adhesive at the start of the tape and must be applied under tension; overgrips are more convenient for players who change grips frequently, because they may be removed more rapidly without damaging the underlying material. Shuttlecock A shuttlecock (often abbreviated to shuttle; also called a birdie) is a high-drag projectile, with an open conical shape: the cone is formed from sixteen overlapping feathers embedded into a rounded cork base. The cork is covered with thin leather or synthetic material. Synthetic shuttles are often used by recreational players to reduce their costs as feathered shuttles break easily. These nylon shuttles may be constructed with either natural cork or synthetic foam base and a plastic skirt. Badminton rules also provide for testing a shuttlecock for the correct speed: Shoes Badminton shoes are lightweight with soles of rubber or similar high-grip, non-marking materials. Compared to running shoes, badminton shoes have little lateral support. High levels of lateral support are useful for activities where lateral motion is undesirable and unexpected. Badminton, however, requires powerful lateral movements. A highly built-up lateral support will not be able to protect the foot in badminton; instead, it will encourage catastrophic collapse at the point where the shoe's support fails, and the player's ankles are not ready for the sudden loading, which can cause sprains. For this reason, players should choose badminton shoes rather than general trainers or running shoes, because proper badminton shoes will have a very thin sole, lower a person's centre of gravity, and therefore result in fewer injuries. Players should also ensure that they learn safe and proper footwork, with the knee and foot in alignment on all lunges. This is more than just a safety concern: proper footwork is also critical in order to move effectively around the court. Technique Strokes Badminton offers a wide variety of basic strokes, and players require a high level of skill to perform all of them effectively. All strokes can be played either forehand or backhand. A player's forehand side is the same side as their playing hand: for a right-handed player, the forehand side is their right side and the backhand side is their left side. Forehand strokes are hit with the front of the hand leading (like hitting with the palm), whereas backhand strokes are hit with the back of the hand leading (like hitting with the knuckles). Players frequently play certain strokes on the forehand side with a backhand hitting action, and vice versa. In the forecourt and midcourt, most strokes can be played equally effectively on either the forehand or backhand side; but in the rear court, players will attempt to play as many strokes as possible on their forehands, often preferring to play a round-the-head forehand overhead (a forehand "on the backhand side") rather than attempt a backhand overhead. Playing a backhand overhead has two main disadvantages. First, the player must turn their back to their opponents, restricting their view of them and the court. Second, backhand overheads cannot be hit with as much power as forehands: the hitting action is limited by the shoulder joint, which permits a much greater range of movement for a forehand overhead than for a backhand. 
The backhand clear is considered by most players and coaches to be the most difficult basic stroke in the game, since the precise technique is needed in order to muster enough power for the shuttlecock to travel the full length of the court. For the same reason, backhand smashes tend to be weak. Position of the shuttlecock and receiving player The choice of stroke depends on how near the shuttlecock is to the net, whether it is above net height, and where an opponent is currently positioned: players have much better attacking options if they can reach the shuttlecock well above net height, especially if it is also close to the net. In the forecourt, a high shuttlecock will be met with a net kill, hitting it steeply downwards and attempting to win the rally immediately. This is why it is best to drop the shuttlecock just over the net in this situation. In the midcourt, a high shuttlecock will usually be met with a powerful smash, also hitting downwards and hoping for an outright winner or a weak reply. Athletic jump smashes, where players jump upwards for a steeper smash angle, are a common and spectacular element of elite men's doubles play. In the rearcourt, players strive to hit the shuttlecock while it is still above them, rather than allowing it to drop lower. This overhead hitting allows them to play smashes, clears (hitting the shuttlecock high and to the back of the opponents' court), and drop shots (hitting the shuttlecock softly so that it falls sharply downwards into the opponents' forecourt). If the shuttlecock has dropped lower, then a smash is impossible and a full-length, high clear is difficult. Vertical position of the shuttlecock When the shuttlecock is well below net height, players have no choice but to hit upwards. Lifts, where the shuttlecock is hit upwards to the back of the opponents' court, can be played from all parts of the court. If a player does not lift, their only remaining option is to push the shuttlecock softly back to the net: in the forecourt, this is called a net shot; in the midcourt or rear court, it is often called a push or block. When the shuttlecock is near to net height, players can hit drives, which travel flat and rapidly over the net into the opponents' rear midcourt and rear court. Pushes may also be hit flatter, placing the shuttlecock into the front midcourt. Drives and pushes may be played from the midcourt or forecourt, and are most often used in doubles: they are an attempt to regain the attack, rather than choosing to lift the shuttlecock and defend against smashes. After a successful drive or push, the opponents will often be forced to lift the shuttlecock. Spin Balls may be spun to alter their bounce (for example, topspin and backspin in tennis) or trajectory, and players may slice the ball (strike it with an angled racquet face) to produce such spin. The shuttlecock is not allowed to bounce, but slicing the shuttlecock does have applications in badminton. (See Basic strokes for an explanation of technical terms.) Slicing the shuttlecock from the side may cause it to travel in a different direction from the direction suggested by the player's racquet or body movement. This is used to deceive opponents. Slicing the shuttlecock from the side may cause it to follow a slightly curved path (as seen from above), and the deceleration imparted by the spin causes sliced strokes to slow down more suddenly towards the end of their flight path. This can be used to create drop shots and smashes that dip more steeply after they pass the net. 
When playing a net shot, slicing underneath the shuttlecock may cause it to turn over itself (tumble) several times as it passes the net. This is called a spinning net shot or tumbling net shot. The opponent will be unwilling to address the shuttlecock until it has corrected its orientation. Due to the way that its feathers overlap, a shuttlecock also has a slight natural spin about its axis of rotational symmetry. The spin is in a counter-clockwise direction as seen from above when dropping a shuttlecock. This natural spin affects certain strokes: a tumbling net shot is more effective if the slicing action is from right to left, rather than from left to right. Biomechanics Badminton biomechanics have not been the subject of extensive scientific study, but some studies confirm the minor role of the wrist in power generation and indicate that the major contributions to power come from internal and external rotations of the upper and lower arm. Recent guides to the sport thus emphasize forearm rotation rather than wrist movements. The feathers impart substantial drag, causing the shuttlecock to decelerate greatly over distance. The shuttlecock is also extremely aerodynamically stable: regardless of initial orientation, it will turn to fly cork-first and remain in the cork-first orientation. One consequence of the shuttlecock's drag is that it requires considerable power to hit it the full length of the court, which is not the case for most racquet sports. The drag also influences the flight path of a lifted (lobbed) shuttlecock: the parabola of its flight is heavily skewed so that it falls at a steeper angle than it rises. With very high serves, the shuttlecock may even fall vertically. Other factors When defending against a smash, players have three basic options: lift, block, or drive. In singles, a block to the net is the most common reply. In doubles, a lift is the safest option but it usually allows the opponents to continue smashing; blocks and drives are counter-attacking strokes but may be intercepted by the smasher's partner. Many players use a backhand hitting action for returning smashes on both the forehand and backhand sides because backhands are more effective than forehands at covering smashes directed to the body. Hard shots directed towards the body are difficult to defend. The service is restricted by the Laws and presents its own array of stroke choices. Unlike in tennis, the server's racquet must be pointing in a downward direction to deliver the serve, so normally the shuttle must be hit upwards to pass over the net. The server can choose a low serve into the forecourt (like a push), or a lift to the back of the service court, or a flat drive serve. Lifted serves may be either high serves, where the shuttlecock is lifted so high that it falls almost vertically at the back of the court, or flick serves, where the shuttlecock is lifted to a lesser height but falls sooner. Deception Once players have mastered these basic strokes, they can hit the shuttlecock from and to any part of the court, powerfully and softly as required. Beyond the basics, however, badminton offers rich potential for advanced stroke skills that provide a competitive advantage. Because badminton players have to cover a short distance as quickly as possible, the purpose of many advanced strokes is to deceive the opponent, so that they are either tricked into believing that a different stroke is being played, or forced to delay their movement until they actually see the shuttle's direction. 
"Deception" in badminton is often used in both of these senses. When a player is genuinely deceived, they will often lose the point immediately because they cannot change their direction quickly enough to reach the shuttlecock. Experienced players will be aware of the trick and cautious not to move too early, but the attempted deception is still useful because it forces the opponent to delay their movement slightly. Against weaker players whose intended strokes are obvious, an experienced player may move before the shuttlecock has been hit, anticipating the stroke to gain an advantage. Slicing and using a shortened hitting action are the two main technical devices that facilitate deception. Slicing involves hitting the shuttlecock with an angled racquet face, causing it to travel in a different direction than suggested by the body or arm movement. Slicing also causes the shuttlecock to travel more slowly than the arm movement suggests. For example, a good crosscourt sliced drop shot will use a hitting action that suggests a straight clear or a smash, deceiving the opponent about both the power and direction of the shuttlecock. A more sophisticated slicing action involves brushing the strings around the shuttlecock during the hit, in order to make the shuttlecock spin. This can be used to improve the shuttle's trajectory, by making it dip more rapidly as it passes the net; for example, a sliced low serve can travel slightly faster than a normal low serve, yet land on the same spot. Spinning the shuttlecock is also used to create spinning net shots (also called tumbling net shots), in which the shuttlecock turns over itself several times (tumbles) before stabilizing; sometimes the shuttlecock remains inverted instead of tumbling. The main advantage of a spinning net shot is that the opponent will be unwilling to address the shuttlecock until it has stopped tumbling, since hitting the feathers will result in an unpredictable stroke. Spinning net shots are especially important for high-level singles players. The lightness of modern racquets allows players to use a very short hitting action for many strokes, thereby maintaining the option to hit a powerful or a soft stroke until the last possible moment. For example, a singles player may hold their racquet ready for a net shot, but then flick the shuttlecock to the back instead with a shallow lift when they notice the opponent has moved before the actual shot was played. A shallow lift takes less time to reach the ground and as mentioned above a rally is over when the shuttlecock touches the ground. This makes the opponent's task of covering the whole court much more difficult than if the lift was hit higher and with a bigger, obvious swing. A short hitting action is not only useful for deception: it also allows the player to hit powerful strokes when they have no time for a big arm swing. A big arm swing is also usually not advised in badminton because bigger swings make it more difficult to recover for the next shot in fast exchanges. The use of grip tightening is crucial to these techniques, and is often described as finger power. Elite players develop finger power to the extent that they can hit some power strokes, such as net kills, with less than a racquet swing. It is also possible to reverse this style of deception, by suggesting a powerful stroke before slowing down the hitting action to play a soft stroke. 
In general, this latter style of deception is more common in the rear court (for example, drop shots disguised as smashes), whereas the former style is more common in the forecourt and midcourt (for example, lifts disguised as net shots). Deception is not limited to slicing and short hitting actions. Players may also use double motion, where they make an initial racquet movement in one direction before withdrawing the racquet to hit in another direction. Players will often do this to send opponents in the wrong direction. The racquet movement is typically used to suggest a straight angle but then play the stroke crosscourt, or vice versa. Triple motion is also possible, but this is very rare in actual play. An alternative to double motion is to use a racquet head fake, where the initial motion is continued but the racquet is turned during the hit. This produces a smaller change in direction but does not require as much time. Strategy To win in badminton, players need to employ a wide variety of strokes in the right situations. These range from powerful jumping smashes to delicate tumbling net returns. Often rallies finish with a smash, but setting up the smash requires subtler strokes. For example, a net shot can force the opponent to lift the shuttlecock, which gives an opportunity to smash. If the net shot is tight and tumbling, then the opponent's lift will not reach the back of the court, which makes the subsequent smash much harder to return. Deception is also important. Expert players prepare for many different strokes that look identical and use slicing to deceive their opponents about the speed or direction of the stroke. If an opponent tries to anticipate the stroke, they may move in the wrong direction and may be unable to change their body momentum in time to reach the shuttlecock. Singles Since one person needs to cover the entire court, singles tactics are based on forcing the opponent to move as much as possible; this means that singles strokes are normally directed to the corners of the court. Players exploit the length of the court by combining lifts and clears with drop shots and net shots. Smashing tends to be less prominent in singles than in doubles because the smasher has no partner to follow up their effort and is thus vulnerable to a skillfully placed return. Moreover, frequent smashing can be exhausting in singles where the conservation of a player's energy is at a premium. However, players with strong smashes will sometimes use the shot to create openings, and players commonly smash weak returns to try to end rallies. In singles, players will often start the rally with a forehand high serve or with a flick serve. Low serves are also used frequently, either forehand or backhand. Drive serves are rare. At high levels of play, singles demand extraordinary fitness. Singles is a game of patient positional manoeuvring, unlike the all-out aggression of doubles. Doubles Both pairs will try to gain and maintain the attack, smashing downwards when the opportunity arises. Whenever possible, a pair will adopt an ideal attacking formation with one player hitting down from the rear court, and their partner in the midcourt intercepting all smash returns except the lift. If the rear court attacker plays a drop shot, their partner will move into the forecourt to threaten the net reply. If a pair cannot hit downwards, they will use flat strokes in an attempt to gain the attack. 
If a pair is forced to lift or clear the shuttlecock, then they must defend: they will adopt a side-by-side position in the rear midcourt, to cover the full width of their court against the opponents' smashes. In doubles, players generally smash to the middle ground between two players in order to take advantage of confusion and clashes. At high levels of play, the backhand serve has become so popular that forehand serves are now fairly rare. The straight low serve is used most frequently, in an attempt to prevent the opponents gaining the attack immediately. Flick serves are used to prevent the opponent from anticipating the low serve and attacking it decisively. At high levels of play, doubles rallies are extremely fast. Men's doubles are the most aggressive form of badminton, with a high proportion of powerful jump smashes and very quick reflex exchanges. Because of this, spectator interest is sometimes greater for men's doubles than for singles. Mixed doubles In mixed doubles, both pairs typically try to maintain an attacking formation with the woman at the front and the man at the back. This is because the male players are usually substantially stronger, and can, therefore, produce smashes that are more powerful. As a result, mixed doubles require greater tactical awareness and subtler positional play. Clever opponents will try to reverse the ideal position, by forcing the woman towards the back or the man towards the front. In order to protect against this danger, mixed players must be careful and systematic in their shot selection. At high levels of play, the formations will generally be more flexible: the top women players are capable of playing powerfully from the back-court, and will happily do so if required. When the opportunity arises, however, the pair will switch back to the standard mixed attacking position, with the woman in front and the man in the back. Organization Governing bodies The Badminton World Federation (BWF) is the internationally recognized governing body of the sport, responsible for the regulation of tournaments and the promotion of fair play. Five regional confederations are associated with the BWF:
Asia: Badminton Asia Confederation (BAC)
Africa: Badminton Confederation of Africa (BCA)
Americas: Badminton Pan Am (North America and South America belong to the same confederation; BPA)
Europe: Badminton Europe (BE)
Oceania: Badminton Oceania (BO)
Competitions The BWF organizes several international competitions, including the Thomas Cup, the premier men's international team event first held in 1948–1949, and the Uber Cup, the women's equivalent first held in 1956–1957. The competitions now take place once every two years. More than 50 national teams compete in qualifying tournaments within continental confederations for a place in the finals. The final tournament involves 12 teams, following an increase from eight teams in 2004. It was further increased to 16 teams in 2012. The Sudirman Cup, a gender-mixed international team event held once every two years, began in 1989. Teams are divided into seven levels based on the performance of each country. To win the tournament, a country must perform well across all five disciplines (men's doubles and singles, women's doubles and singles, and mixed doubles). Like association football (soccer), it features a promotion and relegation system at every level. However, the system was last used in 2009 and teams competing will now be grouped by world rankings. 
Badminton was a demonstration event at the 1972 and 1988 Summer Olympics. It became an official Summer Olympic sport at the Barcelona Olympics in 1992 and its gold medals now generally rate as the sport's most coveted prizes for individual players. In the BWF World Championships, first held in 1977, currently only the highest-ranked 64 players in the world, and a maximum of four from each country, can participate in any category; it is therefore not an "open" format. In both the BWF World and the Olympic competitions, restrictions on the number of participants from any one country have caused some controversy, because they result in excluding some world elite level players from the strongest badminton nations. The Thomas, Uber, and Sudirman Cups, the Olympics, and the BWF World (and World Junior Championships) are all categorized as level one tournaments. At the start of 2007, the BWF introduced a new tournament structure for the highest level tournaments aside from those in level one: the BWF Super Series. This "level two" tournament series is a circuit for the world's elite players, staging twelve open tournaments around the world with 32 players (half the previous limit). The players collect points that determine whether they can play in the Super Series Finals held at the year-end. Among the tournaments in this series is the venerable All-England Championships, first held in 1900, which was once considered the unofficial world championships of the sport. Level three tournaments consist of Grand Prix Gold and Grand Prix events. Top players can collect world ranking points that enable them to play in the BWF Super Series open tournaments. These include the regional competitions in Asia (Badminton Asia Championships) and Europe (European Badminton Championships), which produce the world's best players, as well as the Pan America Badminton Championships. The level four tournaments, known as International Challenge, International Series, and Future Series, encourage participation by junior players. Comparison with tennis Badminton is frequently compared to tennis due to several qualities. The following is a list of manifest differences:
Scoring: In badminton, a match is played best two of three games, with each game played up to 21 points. In tennis, a match is played best of three or five sets, each set consisting of six games; each game ends when one player wins four points or wins two consecutive points at deuce. In badminton, if both sides are tied at "game point", they must play until one side achieves a two-point advantage; however, at 29–29, whoever scores the golden point will win. In tennis, if the score is tied at 6–6 in a set, a tiebreaker will be played, which ends once a player reaches 7 points or when one player has a two-point advantage.
In tennis, the ball may bounce once before the point ends; in badminton, the rally ends once the shuttlecock touches the floor.
In tennis, the serve is dominant to the extent that the server is expected to win most of their service games (at advanced level and onwards); a break of service, where the server loses the game, is of major importance in a match. In badminton, the server has far less of an advantage and is unlikely to score an ace (unreturnable serve).
In tennis, the server has two chances to hit a serve into the service box; in badminton, the server is allowed only one attempt.
A tennis court is approximately twice the length and width of a badminton court.
Tennis racquets are about four times as heavy as badminton racquets, versus . 
Tennis balls are more than eleven times heavier than shuttlecocks, versus .
The fastest recorded tennis stroke is Samuel Groth's serve, whereas the fastest badminton stroke during gameplay was Mads Pieler Kolding's recorded smash at a Badminton Premier League match.
Statistics such as the smash speed, above, prompt badminton enthusiasts to make other comparisons that are more contentious. For example, it is often claimed that badminton is the fastest racquet sport. Although badminton holds the record for the fastest initial speed of a racquet sports projectile, the shuttlecock decelerates substantially faster than other projectiles such as tennis balls. This claim, in turn, must be qualified by consideration of the distance over which the shuttlecock travels: a smashed shuttlecock travels a shorter distance than a tennis ball during a serve. While fans of badminton and tennis often claim that their sport is the more physically demanding, such comparisons are difficult to make objectively because of the differing demands of the games. No formal study currently exists evaluating the physical condition of the players or demands during gameplay. Badminton and tennis techniques differ substantially. The lightness of the shuttlecock and of badminton racquets allows badminton players to make use of the wrist and fingers much more than tennis players; in tennis, the wrist is normally held stable, and playing with a mobile wrist may lead to injury. For the same reasons, badminton players can generate power from a short racquet swing: for some strokes such as net kills, an elite player's swing may be less than . For strokes that require more power, a longer swing will typically be used, but the badminton racquet swing will rarely be as long as a typical tennis swing.

See also
Ball badminton
Hanetsuki
List of racquet sports
Crossminton

External links
Badminton World Federation
Laws of Badminton
Simplified Rules
Badminton Asia Confederation
Badminton Pan Am
Badminton Oceania
Badminton Europe
Badminton Confederation of Africa
The Baroque, or Baroquism, is a Western style of architecture, music, dance, painting, sculpture, poetry, and other arts that flourished from the early 17th century until the 1750s. It followed Renaissance art and Mannerism and preceded the Rococo (in the past often referred to as "late Baroque") and Neoclassical styles. It was encouraged by the Catholic Church as a means to counter the simplicity and austerity of Protestant architecture, art, and music, though Lutheran Baroque art developed in parts of Europe as well. The Baroque style used contrast, movement, exuberant detail, deep color, grandeur, and surprise to achieve a sense of awe. The style began at the start of the 17th century in Rome, then spread rapidly to the rest of Italy, France, Spain, and Portugal, then to Austria, southern Germany, and Poland. By the 1730s, it had evolved into an even more flamboyant style, called rocaille or Rococo, which appeared in France and Central Europe until the mid to late 18th century. In the territories of the Spanish and Portuguese Empires, including the Iberian Peninsula, it continued, together with new styles, until the first decade of the 19th century. In the decorative arts, the style employs plentiful and intricate ornamentation. The departure from Renaissance classicism took its own form in each country, but a general feature is that everywhere the starting point is the ornamental elements introduced by the Renaissance. The classical repertoire is crowded, dense, overlapping and loaded, in order to provoke shock effects. New motifs introduced by the Baroque are: the cartouche, trophies and weapons, baskets of fruit or flowers, and others, made in marquetry, stucco, or carved. Origin of the word The English word baroque comes directly from the French. Some scholars state that the French word originated from the Portuguese term barroco 'a flawed pearl', pointing to the Latin verruca 'wart', or to a word with the Romance suffix -ǒccu (common in pre-Roman Iberia). Other sources suggest a Medieval Latin term used in logic, baroco, as the most likely source. In the 16th century, the Medieval Latin word baroco moved beyond scholastic logic and came into use to characterise anything that seemed absurdly complex. The French philosopher Michel de Montaigne (1533–1592) helped to give the term baroco (spelled Barroco by him) the meaning 'bizarre, uselessly complicated'. Other early sources associate baroco with magic, complexity, confusion, and excess. The word baroque was also associated with irregular pearls before the 18th century. The French baroque and Portuguese barroco were terms often associated with jewelry. An example from 1531 uses the term to describe pearls in an inventory of Charles V of France's treasures. Later, the word appears in a 1694 French dictionary edition, which describes baroque as "only used for pearls that are imperfectly round." A 1728 Portuguese dictionary similarly describes barroco as relating to a "coarse and uneven pearl". An alternative derivation of the word baroque points to the name of the Italian painter Federico Barocci (1528–1612). In the 18th century, the term began to be used to describe music, and not in a flattering way. 
In an anonymous satirical review of the première of Jean-Philippe Rameau's Hippolyte et Aricie in October 1733, which was printed in the Mercure de France in May 1734, the critic wrote that the novelty in this opera was "du barocque", complaining that the music lacked coherent melody, was unsparing with dissonances, constantly changed key and meter, and speedily ran through every compositional device. In 1762, the term was recorded as figuratively describing something "irregular, bizarre or unequal". Jean-Jacques Rousseau, who was a musician and composer as well as a philosopher, wrote in the Encyclopédie in 1768: "Baroque music is that in which the harmony is confused, and loaded with modulations and dissonances. The singing is harsh and unnatural, the intonation difficult, and the movement limited. It appears that the term comes from the word 'baroco' used by logicians." In 1788, Quatremère de Quincy defined the term in the Encyclopédie Méthodique as "an architectural style that is highly adorned and tormented". The French terms style baroque and musique baroque appeared in 1835. By the mid-19th century, art critics and historians had adopted the term "baroque" as a way to ridicule post-Renaissance art. This was the sense of the word as used in 1855 by the leading art historian Jacob Burckhardt, who wrote that baroque artists "despised and abused detail" because they lacked "respect for tradition". In 1888, the art historian Heinrich Wölfflin published the first serious academic work on the style, Renaissance und Barock, which described the differences between the painting, sculpture, and architecture of the Renaissance and the Baroque. Architecture: origins and characteristics The Baroque style of architecture was a result of doctrines adopted by the Catholic Church at the Council of Trent in 1545–1563, in response to the Protestant Reformation. The first phase of the Counter-Reformation had imposed a severe, academic style on religious architecture, which had appealed to intellectuals but not the mass of churchgoers. The Council of Trent decided instead to appeal to a more popular audience, and declared that the arts should communicate religious themes with direct and emotional involvement. Similarly, Lutheran Baroque art developed as a confessional marker of identity, in response to the Great Iconoclasm of Calvinists. Baroque churches were designed with a large central space, where the worshippers could be close to the altar, with a dome or cupola high overhead, allowing light to illuminate the church below. The dome was one of the central symbolic features of Baroque architecture, illustrating the union between the heavens and the earth. The inside of the cupola was lavishly decorated with paintings of angels and saints, and with stucco statuettes of angels, giving the impression to those below of looking up at heaven. Another feature of Baroque churches is the quadratura: trompe-l'œil paintings on the ceiling in stucco frames, either real or painted, crowded with paintings of saints and angels and connected by architectural details with the balustrades and consoles. Quadratura paintings of Atlantes below the cornices appear to be supporting the ceiling of the church. 
Unlike the painted ceilings of Michelangelo in the Sistine Chapel, which combined different scenes, each with its own perspective, to be looked at one at a time, the Baroque ceiling paintings were carefully created so that the viewer on the floor of the church would see the entire ceiling in correct perspective, as if the figures were real. The interiors of Baroque churches became more and more ornate in the High Baroque, and focused around the altar, usually placed under the dome. The most celebrated decorative works of the High Baroque are the Chair of Saint Peter (1647–1653) and the Baldachino of St. Peter (1623–1634), both by Gian Lorenzo Bernini, in St. Peter's Basilica in Rome. The Baldachino of St. Peter is an example of the balance of opposites in Baroque art: the gigantic proportions of the piece, with the apparent lightness of the canopy, and the contrast between the solid twisted columns, bronze, gold and marble of the piece and the flowing draperies of the angels on the canopy. The Dresden Frauenkirche serves as a prominent example of Lutheran Baroque art; it was completed in 1743 after being commissioned by the Lutheran city council of Dresden and was "compared by eighteenth-century observers to St Peter's in Rome". The twisted column in the interior of churches is one of the signature features of the Baroque. It gives both a sense of motion and also a dramatic new way of reflecting light. The cartouche was another characteristic feature of Baroque decoration. These were large plaques carved of marble or stone, usually oval and with a rounded surface, which carried images or text in gilded letters, and were placed as interior decoration or above the doorways of buildings, delivering messages to those below. They showed a wide variety of invention, and were found in all types of buildings, from cathedrals and palaces to small chapels. Baroque architects sometimes used forced perspective to create illusions. For the Palazzo Spada in Rome, Borromini used columns of diminishing size, a narrowing floor and a miniature statue in the garden beyond to create the illusion that a passageway was thirty meters long, when it was actually only seven meters long. A statue at the end of the passage appears to be life-size, though it is only sixty centimeters high. Borromini designed the illusion with the assistance of a mathematician. Italian Baroque The first building in Rome to have a Baroque facade was the Church of the Gesù in 1584; it was plain by later Baroque standards, but marked a break with the traditional Renaissance facades that preceded it. The interior of this church remained very austere until the high Baroque, when it was lavishly ornamented. In Rome in 1605, Paul V became the first of a series of popes who commissioned basilicas and church buildings designed to inspire emotion and awe through a proliferation of forms, and a richness of colours and dramatic effects. Among the most influential monuments of the Early Baroque were the facade of St. Peter's Basilica (1606–1619), and the new nave and loggia which connected the facade to Michelangelo's dome in the earlier church. The new design created a dramatic contrast between the soaring dome and the disproportionately wide facade, and the contrast on the facade itself between the Doric columns and the great mass of the portico. In the mid to late 17th century the style reached its peak, later termed the High Baroque. Many monumental works were commissioned by Popes Urban VIII and Alexander VII. 
The sculptor and architect Gian Lorenzo Bernini designed a new quadruple colonnade around St. Peter's Square (1656 to 1667). The three galleries of columns in a giant ellipse balance the oversize dome and give the Church and square a unity and the feeling of a giant theatre. Another major innovator of the Italian High Baroque was Francesco Borromini, whose major work was the Church of San Carlo alle Quattro Fontane or Saint Charles of the Four Fountains (1634–1646). The sense of movement is given not by the decoration, but by the walls themselves, which undulate, and by concave and convex elements, including an oval tower and balcony inserted into a concave traverse. The interior was equally revolutionary; the main space of the church was oval, beneath an oval dome. Painted ceilings, crowded with angels and saints and trompe-l'œil architectural effects, were an important feature of the Italian High Baroque. Major works included The Entry of Saint Ignatius into Paradise by Andrea Pozzo (1685–1695) in the Church of Saint Ignatius in Rome, and The Triumph of the Name of Jesus by Giovanni Battista Gaulli in the Church of the Gesù in Rome (1669–1683), which featured figures spilling out of the picture frame and dramatic oblique lighting and light-dark contrasts. The style spread quickly from Rome to other regions of Italy: it appeared in Venice in the church of Santa Maria della Salute (1631–1687) by Baldassare Longhena, a highly original octagonal form crowned with an enormous cupola. It appeared also in Turin, notably in the Chapel of the Holy Shroud (1668–1694) by Guarino Guarini. The style also began to be used in palaces; Guarini designed the Palazzo Carignano in Turin, while Longhena designed the Ca' Rezzonico on the Grand Canal (1657), finished by Giorgio Massari and decorated with paintings by Giovanni Battista Tiepolo. A series of massive earthquakes in Sicily required the rebuilding of many of the island's churches and towns, several of which were rebuilt in the exuberant late Baroque or Rococo style. Spanish Baroque The Catholic Church in Spain, and particularly the Jesuits, were the driving force of Spanish Baroque architecture. The first major work in this style was the San Isidro Chapel in Madrid, begun in 1643 by Pedro de la Torre. It contrasted an extreme richness of ornament on the exterior with simplicity in the interior, divided into multiple spaces and using effects of light to create a sense of mystery. The Cathedral in Santiago de Compostela was modernized with a series of Baroque additions beginning at the end of the 17th century, starting with a highly ornate bell tower (1680), then flanked by two even taller and more ornate towers, called the Obradoiro, added between 1738 and 1750 by Fernando de Casas Novoa. Another landmark of the Spanish Baroque is the chapel tower of the Palace of San Telmo in Seville by Leonardo de Figueroa. Granada had only been conquered from the Moors in the 15th century, and had its own distinct variety of Baroque. The painter, sculptor and architect Alonso Cano designed the Baroque interior of Granada Cathedral between 1652 and his death in 1657. It features dramatic contrasts between the massive white columns and the gold decor. The most ornamental and lavishly decorated architecture of the Spanish Baroque is called the Churrigueresque style, named after the brothers Churriguera, who worked primarily in Salamanca and Madrid. Their works include the buildings on the city's main square, the Plaza Mayor of Salamanca (1729). 
This highly ornamental Baroque style was influential in many churches and cathedrals built by the Spanish in the Americas. Other notable Spanish baroque architects of the late Baroque include Pedro de Ribera, a pupil of Churriguera, who designed the Royal Hospice of San Fernando in Madrid, and Narciso Tomé, who designed the celebrated El Transparente altarpiece at Toledo Cathedral (1729–1732), which gives the illusion, in certain light, of floating upwards. The architects of the Spanish Baroque had an effect far beyond Spain; their work was highly influential in the churches built in the Spanish colonies in Latin America and the Philippines. The church built by the Jesuits for a college in Tepotzotlán, with its ornate Baroque facade and tower, is a good example. Central Europe From 1680 to 1750, many highly ornate cathedrals, abbeys, and pilgrimage churches were built in Central Europe, in Bavaria, Austria, Bohemia and southwestern Poland. Some were in the Rococo style, a distinct, more flamboyant and asymmetric style which emerged from the Baroque, then replaced it in Central Europe in the first half of the 18th century, until it was replaced in turn by classicism. The princes of the multitude of states in that region also chose Baroque or Rococo for their palaces and residences, and often used Italian-trained architects to construct them. Notable architects included Johann Fischer von Erlach, Lukas von Hildebrandt and Dominikus Zimmermann in Bavaria, Balthasar Neumann in Brühl, and Matthäus Daniel Pöppelmann in Dresden. In Prussia, Frederick II of Prussia was inspired by the Grand Trianon of the Palace of Versailles, and used it as the model for his summer residence, Sanssouci, in Potsdam, designed for him by Georg Wenzeslaus von Knobelsdorff (1745–1747). Another work of Baroque palace architecture is the Zwinger in Dresden, the former orangerie of the palace of the Dukes of Saxony in the 18th century. One of the best examples of a Rococo church is the Basilika Vierzehnheiligen, or Basilica of the Fourteen Holy Helpers, a pilgrimage church located near the town of Bad Staffelstein near Bamberg, in Bavaria, southern Germany. The Basilica was designed by Balthasar Neumann and was constructed between 1743 and 1772, its plan a series of interlocking circles around a central oval with the altar placed in the exact centre of the church. The interior of this church illustrates the summit of Rococo decoration. Another notable example of the style is the Pilgrimage Church of Wies. It was designed by the brothers J. B. and Dominikus Zimmermann. It is located in the foothills of the Alps, in the municipality of Steingaden in the Weilheim-Schongau district, Bavaria, Germany. Construction took place between 1745 and 1754, and the interior was decorated with frescoes and with stuccowork in the tradition of the Wessobrunner School. It is now a UNESCO World Heritage Site. Another notable example is the St. Nicholas Church (Malá Strana) in Prague (1704–1755), built by Christoph Dientzenhofer and his son Kilian Ignaz Dientzenhofer. Decoration covers all of the walls of the interior of the church. The altar is placed in the nave beneath the central dome and surrounded by chapels; light comes down from the dome above and from the surrounding chapels. The altar is entirely surrounded by arches, columns, curved balustrades and pilasters of coloured stone, which are richly decorated with statuary, creating a deliberate confusion between the real architecture and the decoration. 
The architecture is transformed into a theatre of light, colour and movement. In Poland, the Italian-inspired Polish Baroque lasted from the early 17th to the mid-18th century and emphasised richness of detail and colour. The first Baroque building in present-day Poland, and probably one of the most recognizable, is the Church of Saints Peter and Paul in Kraków, designed by Giovanni Battista Trevano. Sigismund's Column in Warsaw, erected in 1644, was the world's first secular Baroque monument built in the form of a column. The palatial residence style was exemplified by the Wilanów Palace, constructed between 1677 and 1696. The most renowned Baroque architect active in Poland was the Dutchman Tylman van Gameren; his notable works include Warsaw's St. Kazimierz Church and Krasiński Palace, St. Anne's in Kraków and the Branicki Palace in Białystok. However, the most celebrated work of Polish Baroque is the Fara Church in Poznań, with details by Pompeo Ferrari. After the Thirty Years' War, under the agreements of the Peace of Westphalia, two unique Baroque wattle-and-daub structures were built: the Church of Peace in Jawor and the Holy Trinity Church of Peace in Świdnica, the largest wooden Baroque temple in Europe.
French Baroque
Baroque in France developed quite differently from the ornate and dramatic local versions of Baroque in Italy, Spain and the rest of Europe. It appears more severe, detached and restrained by comparison, anticipating Neoclassicism and the architecture of the Enlightenment. Unlike Italian buildings, French Baroque buildings have no broken pediments or curvilinear façades. Even religious buildings avoided the intense spatial drama one finds in the work of Borromini. The style is closely associated with the works built for Louis XIV (reign 1643–1715), and because of this, it is also known as the Louis XIV style. Louis XIV invited the master of the Baroque, Bernini, to submit a design for the new wing of the Louvre, but rejected it in favor of a more classical design by Claude Perrault and Louis Le Vau. The main architects of the style included François Mansart (1598–1666), Pierre Le Muet (Church of the Val-de-Grâce, 1645–1665) and Louis Le Vau (Vaux-le-Vicomte, 1657–1661). Mansart was the first architect to introduce Baroque styling, principally the frequent use of an applied order and heavy rustication, into the French architectural vocabulary. The mansard roof was not invented by Mansart, but it has become associated with him, as he used it frequently. The major royal project of the period was the expansion of the Palace of Versailles, begun in 1661 by Le Vau with decoration by the painter Charles Le Brun. The gardens were designed by André Le Nôtre specifically to complement and amplify the architecture. The Galerie des Glaces (Hall of Mirrors), the centerpiece of the château, with paintings by Le Brun, was constructed between 1678 and 1686. Jules Hardouin-Mansart completed the Grand Trianon in 1687. The chapel, designed by de Cotte, was finished in 1710. Following the death of Louis XIV, Louis XV added the more intimate Petit Trianon and the highly ornate theatre. The fountains in the gardens were designed to be seen from the interior, and to add to the dramatic effect. The palace was admired and copied by other monarchs of Europe, particularly Peter the Great of Russia, who visited Versailles early in the reign of Louis XV and built his own version at Peterhof Palace near Saint Petersburg between 1705 and 1725.
Portuguese Baroque
Baroque architecture in Portugal lasted about two centuries (the late seventeenth and the eighteenth century). The reigns of John V and Joseph I saw increased imports of gold and diamonds, in a period called Royal Absolutism, which allowed the Portuguese Baroque to flourish. Baroque architecture in Portugal occupies a special situation, with a different timeline from the rest of Europe. It was conditioned by several political, artistic and economic factors that gave rise to several phases and different kinds of outside influence, resulting in a unique blend, often misunderstood by those looking for Italian art, who find instead specific forms and a character that give it a uniquely Portuguese variety. Another key factor is the existence of Jesuit architecture, also called the "plain style" (Estilo Chão or Estilo Plano), which, as the name suggests, is plainer and appears somewhat austere. The buildings are single-nave basilicas with a deep main chapel, lateral chapels (with small doors for communication), little interior or exterior decoration, and a simple portal and windows. It is a practical type of building, which could be erected throughout the empire with minor adjustments and prepared to be decorated later, when economic resources became available. In fact, early Portuguese Baroque does not lack buildings, because the "plain style" is easy to transform by means of decoration (painting, tiling, etc.), turning empty areas into sumptuous, elaborate Baroque settings. The same could be applied to the exterior. It was therefore easy to adapt a building to the taste of the time and place and to add new features and details: practical and economical. With more inhabitants and better economic resources, the north, particularly the areas of Porto and Braga, witnessed an architectural renewal, visible in the long list of churches, convents and palaces built by the aristocracy. Porto is the city of the Baroque in Portugal; its historical centre is part of the UNESCO World Heritage List. Many of the Baroque works in the historical area of the city and beyond belong to Nicolau Nasoni, an Italian architect living in Portugal, who designed original buildings with scenographic placement, such as the church and tower of the Clérigos, the loggia of Porto Cathedral, the church of Misericórdia, the Palace of São João Novo, the Palace of Freixo and the Episcopal Palace (Portuguese: Paço Episcopal do Porto), along with many others.
Russian Baroque
The debut of Russian Baroque, or Petrine Baroque, followed a long visit by Peter the Great to western Europe in 1697–1698, during which he visited the Châteaux of Fontainebleau and Versailles as well as other architectural monuments. He decided, on his return to Russia, to construct similar monuments in St. Petersburg, which became the new capital of Russia in 1712. Early major monuments of the Petrine Baroque include the Peter and Paul Cathedral and the Menshikov Palace. During the reigns of Empress Anna and Elizaveta Petrovna, Russian architecture was dominated by the luxurious Baroque style of the Italian-born Bartolomeo Rastrelli, which developed into Elizabethan Baroque. Rastrelli's signature buildings include the Winter Palace, the Catherine Palace and the Smolny Cathedral. Other distinctive monuments of the Elizabethan Baroque are the bell tower of the Troitse-Sergiyeva Lavra and the Red Gate. In Moscow, Naryshkin Baroque became widespread, especially in the architecture of Eastern Orthodox churches in the late 17th century.
It was a combination of western European Baroque with traditional Russian folk styles.
Baroque in the Spanish and Portuguese Colonial Americas
Due to the colonization of the Americas by European countries, the Baroque naturally moved to the New World, finding especially favorable ground in the regions dominated by Spain and Portugal, both of them centralized and staunchly Catholic monarchies, by extension subject to Rome and adherents of the most typical Counter-Reformation Baroque. European artists migrated to America and founded schools there, and, along with the widespread penetration of Catholic missionaries, many of whom were skilled artists, created a multiform Baroque often influenced by popular taste. Criollo and indigenous craftsmen did much to give this Baroque its unique features. The main centres of American Baroque that are still standing are (in this order) Mexico, Peru, Brazil, Ecuador, Cuba, Colombia, Bolivia, Guatemala, Nicaragua, Panama and Puerto Rico. Of particular note is the so-called "Missionary Baroque", developed in the framework of the Spanish reductions, indigenous settlements organized by Spanish Catholic missionaries in areas extending from Mexico and the southwestern portions of the present-day United States as far south as Argentina and Chile, in order to convert the inhabitants to the Christian faith and acculturate them to Western life. It formed a hybrid Baroque influenced by Native culture, in which Criollos and many indigenous artisans and musicians flourished, some of them literate and of great ability and talent of their own. Missionaries' accounts often repeat that Western art, especially music, had a hypnotic impact on forest dwellers, and the images of saints were viewed as having great powers. Many natives were converted, and a new form of devotion was created, of passionate intensity, laden with mysticism, superstition and theatricality, which delighted in festive masses, sacred concerts and mysteries. Colonial Baroque architecture in Spanish America is characterized by profuse decoration (the portal of La Profesa Church, Mexico City; facades covered with Puebla-style azulejos, as in the Church of San Francisco Acatepec in San Andrés Cholula and the Convent Church of San Francisco of Puebla), which was intensified in the so-called Churrigueresque style (the facade of the Tabernacle of the Mexico City Cathedral, by Lorenzo Rodríguez; the Church of San Francisco Javier, Tepotzotlán; the Church of Santa Prisca of Taxco). In Peru, the constructions, mostly developed in the cities of Lima, Cusco, Arequipa and Trujillo from 1650 onwards, show original characteristics that go beyond even the European Baroque, as in the use of cushioned walls and Solomonic columns (the Church of la Compañía de Jesús, Cusco; the Basilica and Convent of San Francisco, Lima). Notable examples in other countries include the Metropolitan Cathedral of Sucre in Bolivia; the Cathedral Basilica of Esquipulas in Guatemala; Tegucigalpa Cathedral in Honduras; León Cathedral in Nicaragua; the Church of la Compañía de Jesús in Quito, Ecuador; the Church of San Ignacio in Bogotá, Colombia; the Caracas Cathedral in Venezuela; the Cabildo of Buenos Aires in Argentina; the Church of Santo Domingo in Santiago, Chile; and Havana Cathedral in Cuba. Also worth remembering is the quality of the churches of the Spanish Jesuit missions in Bolivia, the Spanish Jesuit missions in Paraguay, the Spanish missions in Mexico and the Spanish Franciscan missions in California.
In Brazil, as in the mother country, Portugal, the architecture shows a certain Italian influence, usually of a Borrominesque type, as can be seen in the Co-Cathedral of Recife (1784) and the Church of Nossa Senhora da Glória do Outeiro in Rio de Janeiro (1739). In the region of Minas Gerais, the work of Aleijadinho stands out: he was the author of a group of churches notable for their curved planimetry, facades with concave-convex dynamic effects and a plastic treatment of all architectural elements (Church of São Francisco de Assis in Ouro Preto, 1765–1788).
Baroque in Spanish and Portuguese Colonial Asia
In the Portuguese colonies of India (Goa, Daman and Diu) an architectural style of Baroque forms mixed with Hindu elements flourished, as at the Goa Cathedral and the Basilica of Bom Jesus of Goa, which houses the tomb of St. Francis Xavier. The set of churches and convents of Goa was declared a World Heritage Site in 1986. In the Philippines, which was a Spanish colony for over three centuries, a large number of Baroque constructions are preserved. Four of these churches, as well as the Baroque and Neoclassical city of Vigan, are UNESCO World Heritage Sites; and although they lack formal classification, the Walled City of Manila and the city of Tayabas both contain a significant amount of Baroque-era architecture.
Echoes in Wallachia and Moldavia
As we have seen, the Baroque is a Western style, born in Italy. Through the commercial and cultural relationships of Italians with countries of the Balkan Peninsula, including Moldavia and Wallachia, Baroque influences arrived in Eastern Europe. These influences were not very strong, since they appear mainly in architecture and stone-carved ornament, and are mixed heavily with details taken from Byzantine and Islamic art. Before and after the fall of the Byzantine Empire, the art of Wallachia and Moldavia was primarily influenced by that of Constantinople. Until the end of the 16th century, with little modification, the plans of churches and monasteries, the murals, and the ornaments carved in stone remained the same as before. From a period starting with the reigns of Matei Basarab (1632–1654) and Vasile Lupu (1634–1653), which coincides with the popularization of the Italian Baroque, new ornaments were added and the style of religious furniture changed. This is not random at all: decorative elements and principles were brought from Italy, through Venice or through the Dalmatian regions, and were adopted by architects and craftsmen from the east. The window and door frames, the pisanie with dedication, the tombstones, the columns and railings, and a part of the bronze, silver or wooden furniture received a more important role than they had had before. They had existed before too, inspired by the Byzantine tradition, but they now have a more realist look, showing delicate floral motifs. The relief, which also existed before, becomes more accentuated, now having volume and consistency. Before this period, reliefs from Wallachia and Moldavia, like those from the East, had only two levels, at a small distance from one another, one at the surface and the other in depth. Large flowers, perhaps roses, peonies or thistles, and thick leaves of acanthus or a similar plant twist around columns or surround doors and windows. One place where the Baroque had a strong influence is the columns and railings. Capitals are more decorated with foliage than before. Columns often have twisting shafts, a local reinterpretation of the Solomonic column.
Maximalist railings, decorated with rinceaux, are placed between these columns. Some of the ones at the Mogoșoaia Palace are also decorated with dolphins. Cartouches are also used sometimes, mostly on tombstones, like the one of Constantin Brâncoveanu. This movement is known as the Brâncovenesc style, after Constantin Brâncoveanu, a ruler of Wallachia whose reign (1688–1714) is strongly associated with this kind of architecture and design. The style is also present during the 18th century, and in part of the 19th. Many of the churches and residences erected by boyars and voivodes of these periods are Brâncovenesc. Although Baroque influences can clearly be seen, the Brâncovenesc style takes much more of its inspiration from the local tradition. As the 18th century passed, with the reigns of the Phanariots (members of prominent Greek families from Phanar, Istanbul) in Wallachia and Moldavia, Baroque influences came from Istanbul too. They had come before as well, during the 17th century, but with the Phanariots more Western Baroque motifs that had reached the Ottoman Empire found their final destination in present-day Romania. In Moldavia, Baroque elements came from Russia too, where the influence of Italian art was strong.
Painting
Baroque painters worked deliberately to set themselves apart from the painters of the Renaissance and the Mannerist period that followed it. In their palette, they used intense and warm colours, and particularly made use of the primary colours red, blue and yellow, frequently putting all three in close proximity. They avoided the even lighting of Renaissance painting and used strong contrasts of light and darkness on certain parts of the picture to direct attention to the central actions or figures. In their composition, they avoided the tranquil scenes of Renaissance paintings and chose the moments of the greatest movement and drama. Unlike the tranquil faces of Renaissance paintings, the faces in Baroque paintings clearly expressed their emotions. They often used asymmetry, with action occurring away from the centre of the picture, and created axes that were neither vertical nor horizontal, but slanting to the left or right, giving a sense of instability and movement. They enhanced this impression of movement by having the costumes of the personages blown by the wind, or moved by their own gestures. The overall impressions were movement, emotion and drama. Another essential element of Baroque painting was allegory; every painting told a story and had a message, often encrypted in symbols and allegorical characters, which an educated viewer was expected to know and read. Early evidence of Italian Baroque ideas in painting occurred in Bologna, where Annibale Carracci, Agostino Carracci and Ludovico Carracci sought to return the visual arts to the ordered Classicism of the Renaissance. Their art, however, also incorporated ideas central to the Counter-Reformation; these included intense emotion and religious imagery that appealed more to the heart than to the intellect. Another influential painter of the Baroque era was Michelangelo Merisi da Caravaggio. His realistic approach to the human figure, painted directly from life and dramatically spotlit against a dark background, shocked his contemporaries and opened a new chapter in the history of painting.
Other major painters associated closely with the Baroque style include Artemisia Gentileschi, Elisabetta Sirani, Giovanna Garzoni, Guido Reni, Domenichino, Andrea Pozzo, and Paolo de Matteis in Italy; Francisco de Zurbarán, Bartolomé Esteban Murillo and Diego Velázquez in Spain; Adam Elsheimer in Germany; and Nicolas Poussin and Georges de La Tour in France (though Poussin spent most of his working life in Italy). Poussin and La Tour adopted a "classical" Baroque style with less focus on emotion and greater attention to the line of the figures in the painting than to colour. Peter Paul Rubens was the most important painter of the Flemish Baroque style. Rubens' highly charged compositions reference erudite aspects of classical and Christian history. His unique and immensely popular Baroque style emphasised movement, colour, and sensuality, which followed the immediate, dramatic artistic style promoted in the Counter-Reformation. Rubens specialized in making altarpieces, portraits, landscapes, and history paintings of mythological and allegorical subjects. One important domain of Baroque painting was Quadratura, or paintings in trompe-l'œil, which literally "fooled the eye". These were usually painted on the stucco of ceilings or upper walls and balustrades, and gave those on the ground looking up the impression that they were seeing the heavens populated with crowds of angels, saints and other heavenly figures, set against painted skies and imaginary architecture. In Italy, artists often collaborated with architects on interior decoration; Pietro da Cortona was one of the painters of the 17th century who employed this illusionist way of painting. Among his most important commissions were the frescoes he painted for the Palace of the Barberini family (1633–39), to glorify the reign of Pope Urban VIII. Pietro da Cortona's compositions were the largest decorative frescoes executed in Rome since the work of Michelangelo at the Sistine Chapel. François Boucher was an important figure in the more delicate French Rococo style, which appeared during the late Baroque period. He designed tapestries, carpets and theatre decoration as well as painting. His work was extremely popular with Madame de Pompadour, the mistress of King Louis XV. His paintings featured mythological, romantic, and mildly erotic themes.
Hispanic Americas
In the Hispanic Americas, the first influences were from Sevillian Tenebrism, mainly from Zurbarán (some of whose works are still preserved in Mexico and Peru), as can be seen in the work of the Mexicans José Juárez and Sebastián López de Arteaga, and the Bolivian Melchor Pérez de Holguín. The Cusco School of painting arose after the arrival of the Italian painter Bernardo Bitti in 1583, who introduced Mannerism to the Americas. Notable within it was the work of Luis de Riaño, a disciple of the Italian Angelino Medoro and author of the murals of the Church of San Pedro of Andahuaylillas. Also notable were the Indian (Quechua) painters Diego Quispe Tito and Basilio Santa Cruz Pumacallao, as well as Marcos Zapata, author of the fifty large canvases that cover the high arches of the Cathedral of Cusco. In Ecuador, the Quito School was formed, mainly represented by the mestizo Miguel de Santiago and the criollo Nicolás Javier de Goríbar. In the 18th century, sculptural altarpieces began to be replaced by paintings, notably advancing the development of Baroque painting in the Americas.
Similarly, the demand for civil works, mainly portraits of the aristocratic classes and the ecclesiastical hierarchy, grew. The main influence was the Murillesque, and in some cases – as in the criollo Cristóbal de Villalpando – that of Valdés Leal. The painting of this era has a more sentimental tone, with sweeter and softer forms. Notable exponents include Gregorio Vásquez de Arce in Colombia, and Juan Rodríguez Juárez and Miguel Cabrera in Mexico.
Sculpture
The dominant figure in Baroque sculpture was Gian Lorenzo Bernini. Under the patronage of Pope Urban VIII, he made a remarkable series of monumental statues of saints and figures whose faces and gestures vividly expressed their emotions, as well as portrait busts of exceptional realism, and highly decorative works for the Vatican such as the imposing Chair of St. Peter beneath the dome in St. Peter's Basilica. In addition, he designed fountains with monumental groups of sculpture to decorate the major squares of Rome. Baroque sculpture was inspired by ancient Roman statuary, particularly by the famous first-century CE statue of Laocoön, which was unearthed in 1506 and put on display in the gallery of the Vatican. When he visited Paris in 1665, Bernini addressed the students at the academy of painting and sculpture. He advised the students to work from classical models, rather than from nature. He told the students, "When I had trouble with my first statue, I consulted the Antinous like an oracle." That Antinous statue is known today as the Hermes of the Museo Pio-Clementino. Notable late French Baroque sculptors included Étienne Maurice Falconet and Jean Baptiste Pigalle. Pigalle was commissioned by Frederick the Great to make statues for Frederick's own version of Versailles at Sanssouci in Potsdam, Germany. Falconet also received an important foreign commission, creating the famous statue of Peter the Great on horseback found in St. Petersburg. In Spain, the sculptor Francisco Salzillo worked exclusively on religious themes, using polychromed wood. Some of the finest Baroque sculptural craftsmanship was found in the gilded stucco altars of churches of the Spanish colonies of the New World, made by local craftsmen; examples include the Rosary Chapel of the Church of Santo Domingo in Oaxaca (Mexico), 1724–1731.
Furniture
The main motifs used are: horns of plenty, festoons, baby angels, lion heads holding a metal ring in their mouths, female faces surrounded by garlands, oval cartouches, acanthus leaves, classical columns, caryatids, pediments, and other elements of Classical architecture sculpted on some parts of pieces of furniture, baskets with fruits or flowers, shells, armour and trophies, heads of Apollo or Bacchus, and C-shaped volutes. During the first period of the reign of Louis XIV, furniture followed the previous style of Louis XIII and was massive and profusely decorated with sculpture and gilding. After 1680, thanks in large part to the furniture designer André Charles Boulle, a more original and delicate style appeared, sometimes known as Boulle work. It was based on the inlay of ebony and other rare woods, a technique first used in Florence in the 15th century, which was refined and developed by Boulle and others working for Louis XIV. Furniture was inlaid with plaques of ebony, copper, and exotic woods of different colors. New and often enduring types of furniture appeared; the commode, with two to four drawers, replaced the old coffre, or chest.
The canapé, or sofa, appeared, in the form of a combination of two or three armchairs. New kinds of armchairs appeared, including the fauteuil en confessionale or "confessional armchair", which had padded cushions on either side of the back of the chair. The console table also made its first appearance; it was designed to be placed against a wall. Another new type of furniture was the table à gibier, a marble-topped table for holding dishes. Early varieties of the desk appeared; the Mazarin desk had a central section set back, placed between two columns of drawers, with four feet on each column.
Music
The term Baroque is also used to designate the style of music composed during a period that overlaps with that of Baroque art. The first uses of the term 'baroque' for music were criticisms. In an anonymous, satirical review of the première in October 1733 of Rameau's Hippolyte et Aricie, printed in the Mercure de France in May 1734, the critic implied that the novelty of this opera was "du barocque," complaining that the music lacked coherent melody, was filled with unremitting dissonances, constantly changed key and meter, and speedily ran through every compositional device. Jean-Jacques Rousseau, who was a musician and noted composer as well as philosopher, made a very similar observation in 1768 in the famous Encyclopédie of Denis Diderot: "Baroque music is that in which the harmony is confused, and loaded with modulations and dissonances. The singing is harsh and unnatural, the intonation difficult, and the movement limited. It appears that the term comes from the word 'baroco' used by logicians." Common use of the term for the music of the period began only in 1919, by Curt Sachs, and it was not until 1940 that it was first used in English in an article published by Manfred Bukofzer. The Baroque was a period of musical experimentation and innovation, which explains the amount of ornamentation and improvisation performed by the musicians. New forms were invented, including the concerto and sinfonia. Opera was born in Italy at the end of the 16th century (with Jacopo Peri's mostly lost Dafne, produced in Florence in 1598) and soon spread through the rest of Europe. Louis XIV created the first Royal Academy of Music; in 1669, the poet Pierre Perrin opened an academy of opera in Paris, the first opera theatre in France open to the public, and premiered Pomone, the first grand opera in French, with music by Robert Cambert, five acts, elaborate stage machinery, and a ballet. Heinrich Schütz in Germany, Jean-Baptiste Lully in France, and Henry Purcell in England all helped to establish their national traditions in the 17th century. Several new instruments, including the piano, were introduced during this period. The invention of the piano is credited to Bartolomeo Cristofori (1655–1731) of Padua, Italy, who was employed by Ferdinando de' Medici, Grand Prince of Tuscany, as the Keeper of the Instruments. Cristofori named the instrument un cimbalo di cipresso di piano e forte ("a keyboard of cypress with soft and loud"), abbreviated over time as pianoforte, fortepiano, and later, simply, piano.
Composers and examples
Giovanni Gabrieli (c. 1557–1612), Sonata pian' e forte (1597), In Ecclesiis (from Symphoniae sacrae book 2, 1615)
Giovanni Girolamo Kapsperger (c. 1580–1651), Libro primo di villanelle (1610)
Claudio Monteverdi (1567–1643), L'Orfeo, favola in musica (1610)
Heinrich Schütz (1585–1672), Musikalische Exequien (1629, 1647, 1650)
Francesco Cavalli (1602–1676), L'Egisto (1643), Ercole amante (1662), Scipione affricano (1664)
Johann Jacob Froberger (1616–1667), Complete Music for Harpsichord and Organ
Jean-Baptiste Lully (1632–1687), Armide (1686)
Marc-Antoine Charpentier (1643–1704), Te Deum (1688–1698)
Heinrich Ignaz Franz Biber (1644–1704), Mystery Sonatas (1681)
John Blow (1649–1708), Venus and Adonis (1680–1687)
Johann Pachelbel (1653–1706), Canon in D (1680)
Arcangelo Corelli (1653–1713), 12 concerti grossi, Op. 6 (1714)
Marin Marais (1656–1728), Sonnerie de Ste-Geneviève du Mont-de-Paris (1723)
Henry Purcell (1659–1695), Dido and Aeneas (1688)
Alessandro Scarlatti (1660–1725), L'honestà negli amori (1680), Il Pompeo (1683), Mitridate Eupatore (1707)
François Couperin (1668–1733), Les barricades mystérieuses (1717)
Tomaso Albinoni (1671–1751), Didone abbandonata (1724)
Antonio Vivaldi (1678–1741), The Four Seasons (1725)
Jan Dismas Zelenka (1679–1745), Il Serpente di Bronzo (1730), Missa Sanctissimae Trinitatis (1736)
Georg Philipp Telemann (1681–1767), Der Tag des Gerichts (1762)
Johann David Heinichen (1683–1729)
Jean-Philippe Rameau (1683–1764), Dardanus (1739)
George Frideric Handel (1685–1759), Water Music (1717), Messiah (1741)
Domenico Scarlatti (1685–1757), Sonatas for harpsichord
Johann Sebastian Bach (1685–1750), Toccata and Fugue in D minor (1703–1707), Brandenburg Concertos (1721), St Matthew Passion (1727)
Nicola Porpora (1686–1768), Semiramide riconosciuta (1729)
Giovanni Battista Pergolesi (1710–1736), Stabat Mater (1736)
Dance
The classical ballet also originated in the Baroque era. The style of court dance was brought to France by Marie de Medici, and in the beginning the members of the court themselves were the dancers. Louis XIV himself performed in public in several ballets. In March 1662, the Académie Royale de Danse was founded by the King. It was the first professional dance school and company, and set the standards and vocabulary for ballet throughout Europe during the period.
Literary theory
Heinrich Wölfflin was the first to transfer the term Baroque to literature. The key concepts of Baroque literary theory, such as "conceit" (concetto), "wit" (acutezza, ingegno), and "wonder" (meraviglia), were not fully developed in literary theory until the publication of Emanuele Tesauro's Il Cannocchiale aristotelico (The Aristotelian Telescope) in 1654. This seminal treatise, inspired by Giambattista Marino's epic Adone and the work of the Spanish Jesuit philosopher Baltasar Gracián, developed a theory of metaphor as a universal language of images and as a supreme intellectual act, at once an artifice and an epistemologically privileged mode of access to truth.
Theatre
The Baroque period was a golden age for theatre in France and Spain; playwrights included Corneille, Racine and Molière in France, and Lope de Vega and Pedro Calderón de la Barca in Spain. During the Baroque period, the art and style of the theatre evolved rapidly, alongside the development of opera and of ballet. The design of newer and larger theatres, the invention and use of more elaborate machinery, and the wider use of the proscenium arch, which framed the stage and hid the machinery from the audience, encouraged more scenic effects and spectacle.
In Spain, the Baroque had a Catholic and conservative character, following an Italian literary model during the Renaissance. The Hispanic Baroque theatre aimed to offer the public an ideal reality expressing three fundamental sentiments: the Catholic religion, monarchist and national pride, and honour originating from the chivalric, knightly world. Two periods are known in the Baroque Spanish theatre, with the division occurring in 1630. The first period is represented chiefly by Lope de Vega, but also by Tirso de Molina, Gaspar Aguilar, Guillén de Castro, Antonio Mira de Amescua, Luis Vélez de Guevara, Juan Ruiz de Alarcón, Diego Jiménez de Enciso, Luis Belmonte Bermúdez, Felipe Godínez, Luis Quiñones de Benavente and Juan Pérez de Montalbán. The second period is represented by Pedro Calderón de la Barca and fellow dramatists Antonio Hurtado de Mendoza, Álvaro Cubillo de Aragón, Jerónimo de Cáncer, Francisco de Rojas Zorrilla, Juan de Matos Fragoso, Antonio Coello y Ochoa, Agustín Moreto, and Francisco Bances Candamo. These classifications are loose, because each author had his own way and only occasionally adhered to the formula established by Lope. It may even be that Lope's "manner" was more liberal and less structured than Calderón's. Lope de Vega introduced the new comedy through his Arte nuevo de hacer comedias en este tiempo (1609). He established a new dramatic formula that broke with the three Aristotelian unities of the Italian school of poetry (action, time and place) and with a fourth Aristotelian unity concerning style, mixing tragic and comic elements and using different types of verse and stanza according to what was represented. Although Lope had a great knowledge of the plastic arts, he did not use it during the major part of his career, either in the theatre or in scenography. Lope's comedy granted only a secondary role to the visual aspects of the theatrical representation. Tirso de Molina, Lope de Vega and Calderón were the most important playwrights of Golden Age Spain. Their works, known for their subtle intelligence and profound comprehension of a person's humanity, could be considered a bridge between Lope's primitive comedy and the more elaborate comedy of Calderón. Tirso de Molina is best known for two works, The Convicted Suspicions and The Trickster of Seville, one of the first versions of the Don Juan myth. Upon his arrival in Madrid, Cosimo Lotti brought to the Spanish court the most advanced theatrical techniques of Europe. His techniques and mechanical knowledge were applied in palace exhibitions called "Fiestas" and in lavish spectacles of rivers or artificial fountains called "Naumaquias". He was in charge of styling the Gardens of the Buen Retiro, of the Zarzuela and of Aranjuez, and of the construction of the theatrical building of the Coliseo del Buen Retiro. Lope's formulas gave way with the foundation of the palace theatre and the birth of new concepts, which launched the careers of playwrights such as Calderón de la Barca. Building on the principal innovations of the new Lopesian comedy, Calderón's style marked many differences, with a great deal of constructive care and attention to internal structure. Calderón's work is characterized by formal perfection and a very lyrical and symbolic language. The liberty, vitality and openness of Lope gave way to Calderón's intellectual reflection and formal precision. In his comedy he placed his ideological and doctrinal intentions above passion and action, and his work in the autos sacramentales achieved the highest rank.
The genre of the Comedia is political, multi-artistic and in a sense hybrid. The poetic text was interwoven with media and resources originating from architecture, music and painting, freeing it from the deception of the Lopesian comedy, which had been built on the lack of scenery and on dialogue that carried the action. The best known German playwright was Andreas Gryphius, who used the Jesuit model of the Dutch Joost van den Vondel and Pierre Corneille. There was also Johannes Velten, who combined the traditions of the English comedians and the commedia dell'arte with the classic theatre of Corneille and Molière. His touring company was perhaps the most significant and important of the 17th century. The foremost Italian Baroque tragedian was Federico Della Valle. His literary activity is summed up by the four plays that he wrote for the courtly theater: the tragicomedy Adelonda di Frigia (1595) and especially his three tragedies, Judith (1627), Esther (1627) and La reina di Scotia (1628). Della Valle had many imitators and followers who combined in their works Baroque taste and the didactic aims of the Jesuits (Pallavicino, Graziani, etc.).
Spanish colonial Americas
Following the development marked out in Spain, at the end of the 16th century the companies of comedians, essentially itinerant, began to professionalize. With professionalization came regulation and censorship: as in Europe, the theatre oscillated between tolerance, and even government protection, and rejection (with exceptions) or persecution by the Church. The theatre was useful to the authorities as an instrument for disseminating desired behaviour and models, respect for the social order and the monarchy, and the teaching of religious dogma. The corrales were administered for the benefit of hospitals, which shared in the proceeds of the performances. The itinerant companies (or companies "of the league"), which carried the theatre on improvised open-air stages through regions that had no fixed venues, required a viceregal licence to work, whose price, or pinción, was destined for alms and pious works. For companies that worked permanently in the capitals and major cities, one of their main sources of income was participation in the festivities of Corpus Christi, which provided them not only with economic benefits but also with recognition and social prestige. Performances in the viceregal palace and in the mansions of the aristocracy, where they presented both the comedies of their repertoire and special productions with great lighting effects, scenery and staging, were also an important source of well-paid and prestigious work. Born in the Viceroyalty of New Spain but later settled in Spain, Juan Ruiz de Alarcón is the most prominent figure in the Baroque theatre of New Spain. Despite his accommodation to Lope de Vega's new comedy, his "marked secularism", his discretion and restraint, and a keen capacity for "psychological penetration" have been noted as distinctive features of Alarcón compared with his Spanish contemporaries. Noteworthy among his works is La verdad sospechosa, a comedy of characters that reflected his constant moralizing purpose. The dramatic production of Sor Juana Inés de la Cruz places her as the second figure of Spanish-American Baroque theatre. Worth mentioning among her works are the auto sacramental El divino Narciso and the comedy Los empeños de una casa.
Gardens
The Baroque garden, also known as the jardin à la française or French formal garden, first appeared in Rome in the 16th century, and then most famously in France in the 17th century in the gardens of Vaux-le-Vicomte and the Palace of Versailles. Baroque gardens were built by kings and princes in Germany, the Netherlands, Austria, Spain, Poland, Italy and Russia until the mid-18th century, when they began to be remade into the more natural English landscape garden. The purpose of the Baroque garden was to illustrate the power of man over nature, and the glory of its builder. Baroque gardens were laid out in geometric patterns, like the rooms of a house. They were usually best seen from outside and from above, either from a château or a terrace. The elements of a Baroque garden included parterres of flower beds or low hedges trimmed into ornate Baroque designs, and straight lanes and alleys of gravel which divided and crisscrossed the garden. Terraces, ramps, staircases and cascades were placed where there were differences of elevation, and provided viewing points. Circular or rectangular ponds or basins of water were the settings for fountains and statues. Bosquets, carefully trimmed groves or lines of identical trees, gave the appearance of walls of greenery and were backdrops for statues. On the edges, the gardens usually had pavilions, orangeries and other structures where visitors could take shelter from the sun or rain. Baroque gardens required enormous numbers of gardeners, continual trimming, and abundant water. In the later part of the Baroque period, the formal elements began to be replaced with more natural features, including winding paths, groves of varied trees left to grow untrimmed, rustic architecture and picturesque structures, such as Roman temples or Chinese pagodas, as well as "secret gardens" on the edges of the main garden, filled with greenery, where visitors could read or have quiet conversations. By the mid-18th century most of the Baroque gardens were partially or entirely transformed into variations of the English landscape garden. Besides Versailles and Vaux-le-Vicomte, celebrated Baroque gardens still retaining much of their original appearance include the Royal Palace of Caserta near Naples; Nymphenburg Palace and the Augustusburg and Falkenlust Palaces, Brühl, in Germany; Het Loo Palace in the Netherlands; the Belvedere Palace in Vienna; the Royal Palace of La Granja de San Ildefonso in Spain; and Peterhof Palace in St. Petersburg, Russia.
Posterity
Transition to Rococo
The Rococo is the final stage of the Baroque, and in many ways took the Baroque's fundamental qualities of illusion and drama to their logical extremes. Beginning in France as a reaction against the heavy Baroque grandeur of Louis XIV's court at the Palace of Versailles, the Rococo movement became associated particularly with the powerful Madame de Pompadour (1721–1764), the mistress of the new king Louis XV (1710–1774). Because of this, the style was also known as 'Pompadour'. Although it is highly associated with the reign of Louis XV, the style did not first appear in that period: multiple works from the last years of Louis XIV's reign are examples of early Rococo. The name of the movement derives from the French 'rocaille', or pebble, and refers to the stones and shells that decorate the interiors of caves, as similar shell forms became a common feature in Rococo design. It began as a design and decorative arts style, and was characterized by elegant flowing shapes.
Architecture followed, and then painting and sculpture. The French painter with whom the term Rococo is most often associated is Jean-Antoine Watteau, whose pastoral scenes, or fêtes galantes, dominate the early part of the 18th century. There are multiple similarities between Rococo and Baroque. Both styles insist on monumental forms, and so use continuous spaces, double columns or pilasters, and luxurious materials (including gilded elements). There are also noticeable differences. Rococo designers freed themselves from the adherence to symmetry that had dominated architecture and design since the Renaissance. Many small objects, like ink pots or porcelain figures, but also some ornaments, are often asymmetrical. This goes hand in hand with the fact that most ornamentation consisted of interpretations of foliage and sea shells, rather than the many Classical ornaments inherited from the Renaissance that characterize the Baroque. Another key difference is the fact that, since the Baroque is the main cultural manifestation of the spirit of the Counter-Reformation, it is most often associated with ecclesiastical architecture. In contrast, the Rococo is mainly associated with palaces and domestic architecture. In Paris, the popularity of the Rococo coincided with the emergence of the salon as a new type of social gathering, the venues for which were often decorated in this style. Rococo rooms were typically smaller than their Baroque counterparts, reflecting a movement towards domestic intimacy. Colours also reflect this change, from the earthy tones of Caravaggio's paintings and the interiors of red marble and gilded mounts of the reign of Louis XIV, to the relaxed pastel pale blue, Pompadour pink, and white of the France of Louis XV and Madame de Pompadour. Similarly to colours, there was also a transition from serious, dramatic and moralistic subjects in painting and sculpture to lighthearted and joyful themes. One last difference between Baroque and Rococo is the interest that 18th-century aristocrats had in East Asia. Chinoiserie was a style in fine art, architecture and design, popular during the 18th century, that was heavily inspired by Chinese art, but also by Rococo at the same time. Because travelling to China or other Far Eastern countries was difficult at that time, these lands remained mysterious to most Westerners; the European imagination was fuelled by perceptions of Asia as a place of wealth and luxury, and consequently patrons from emperors to merchants vied with each other in adorning their living quarters with Asian goods and decorating them in Asian styles. Where Asian objects were hard to obtain, European craftsmen and painters stepped up to fill the demand, creating a blend of Rococo forms and Asian figures, motifs and techniques. Aside from European recreations of objects in East Asian style, Chinese lacquerware was reused in multiple ways. European aristocrats fully decorated a handful of palace rooms with Chinese lacquer panels used as wall panelling. Due to its appearance, black lacquer was popular for Western men's studies. The panels used were usually glossy and black, made in the Henan province of China. They were made of multiple layers of lacquer, then incised with motifs in-filled with colour and gold. Chinese, but also Japanese, lacquer panels were also used by some 18th-century European carpenters for making furniture: Asian screens were dismantled and used to veneer European-made furniture.
Complete abandonment with Neoclassicism
In 1750, Madame de Pompadour sent her nephew, Abel-François Poisson de Vandières, on a two-year mission to study artistic and archeological developments in Italy. He was accompanied by several artists, including the engraver Nicolas Cochin and the architect Soufflot. They returned to Paris with a passion for classical art. Vandières became the Marquis of Marigny, and was named Royal Director of Buildings in 1754. He turned official French architecture toward Neoclassicism, a movement that took its inspiration heavily from, and tried to revive, the art of Ancient Greece and Rome. Cochin became an important art critic; he denounced the petit style of François Boucher (one of the main Rococo painters), and called for a grand style with a new emphasis on antiquity and nobility in the academies of painting and architecture. The transition from Rococo to Neoclassicism was not very abrupt. Some of the biggest patrons of Rococo art also commissioned early Neoclassical works. Madame de Pompadour, one of the main figures of the Rococo, commissioned the Petit Trianon, one of the most important examples of French Neoclassical architecture. Similarly, Louis XV, the king at whose court the Rococo flourished, founded the Panthéon, another iconic Neoclassical monument. Besides this, in France there was the Louis XVI style, which uses shapes and motifs taken from ancient Greek, Etruscan and Roman antiquity, but still has the sweet, delicate and fanciful feel of the Rococo. In the UK, Robert Adam's Greco-Roman inspired interior of the Eating Room at Osterley Park, near London, despite being Neoclassical, is painted mainly in white and pastel green and pink, reminiscent of the Rococo. It must be mentioned that Neoclassicism was not about copying: artists did not try to remain frozen in the past, but to use Antiquity and its ideals in a way that was relevant to contemporary society.
Condemnation and academic rediscovery
The pioneering German art historian and archeologist Johann Joachim Winckelmann also condemned the Baroque style, and praised the superior values of classical art and architecture. By the 19th century, the Baroque was a target for ridicule and criticism. The Neoclassical critic Francesco Milizia wrote: "Borromini in architecture, Bernini in sculpture, Pietro da Cortona in painting...are a plague on good taste, which infected a large number of artists." In the 19th century, criticism went even further; the British critic John Ruskin declared that Baroque sculpture was not only bad, but also morally corrupt. The Swiss-born art historian Heinrich Wölfflin (1864–1945) started the rehabilitation of the word Baroque in his Renaissance und Barock (1888); Wölfflin identified the Baroque as "movement imported into mass", an art antithetic to Renaissance art. He did not make the distinctions between Mannerism and Baroque that modern writers do, and he ignored the later phase, the academic Baroque that lasted into the 18th century. Baroque art and architecture became fashionable in the interwar period, and have largely remained in critical favor. The term "Baroque" may still be used, often pejoratively, to describe works of art, craft, or design that are thought to have excessive ornamentation or complexity of line. At the same time, "baroque" has become an accepted term for various trends in Roman art and Roman architecture in the 2nd and 3rd centuries AD, which display some of the same characteristics as the later Baroque.
Revivals and influence through eclecticism
Although highly criticized, the Baroque would later become a source of inspiration for artists, architects and designers during the 19th century, through Romanticism, a movement that developed in the 18th century and reached its peak in the 19th. It was characterized by its emphasis on emotion and individualism, as well as the glorification of the past and of nature, preferring the medieval to the classical. A mix of literary, religious, and political factors prompted late-18th and 19th century British architects and designers to look back to the Middle Ages for inspiration. Romanticism is the reason the 19th century is best known as the century of revivals. In France, Romanticism was not the key factor that led to the revival of Gothic architecture and design. Vandalism of monuments and buildings associated with the Ancien Régime (Old Regime) happened during the French Revolution. Because of this, an archaeologist, Alexandre Lenoir, was appointed curator of the Petits-Augustins depot, where sculptures, statues and tombs removed from churches, abbeys and convents had been transported. He organized the Museum of French Monuments (1795-1816), and was the first to bring back a taste for the art of the Middle Ages, which grew slowly and came to flourish a quarter of a century later. This taste for and revival of medieval art led to the revival of other periods, including the Baroque and Rococo. Revivalism started with themes first from the Middle Ages, then, towards the end of the reign of Louis Philippe (1830-1848), from the Renaissance. Baroque and Rococo inspiration was more popular during the reign of Napoleon III (1852-1870), and continued later, after the fall of the Second French Empire. Whereas in England architects and designers saw the Gothic as a national style, in France the Rococo was seen as one of the most representative movements. The French felt much more connected to the styles of the Ancien Régime and Napoleon's Empire than to the medieval or Renaissance past, although Gothic architecture had appeared in France, not in England. The revivalism of the 19th century led in time to eclecticism (the mixing of elements of different styles). Because architects often revived Classical styles, most Eclectic buildings and designs have a distinctive look. Besides pure revivals, the Baroque was also one of the main sources of inspiration for eclecticism. The coupled column and the giant order, two elements widely used in the Baroque, are often present in this kind of 19th and early 20th century building. Eclecticism was not limited only to architecture. Many designs from the Second Empire style (1848-1870) have elements taken from different styles. Little furniture from the period escaped its three most prevalent historicist influences, which are sometimes kept distinct and sometimes combined: the Renaissance, Louis XV (Rococo), and Louis XVI styles. Revivals and inspiration also came sometimes from the Baroque, as in the case of remakes and arabesques that imitate Boulle marquetry, and from other styles, like Gothic, Renaissance, or English Regency. The Belle Époque was a period that began around 1871–1880 and ended with the outbreak of World War I in 1914. It was characterized by optimism, regional peace, economic prosperity, colonial expansion, and technological, scientific, and cultural innovations. Eclecticism reached its peak in this period, with Beaux-Arts architecture.
The style takes its name from the École des Beaux-Arts in Paris, where it developed and where many of the main exponents of the style studied. Buildings in this style often feature Ionic columns with their volutes set at the corners (like those found in French Baroque), a rusticated basement level, overall simplicity with some highly detailed parts, arched doors, and an arch above the entrance like that of the Petit Palais in Paris. The style aimed for a Baroque opulence through lavishly decorated monumental structures that evoked Louis XIV's Versailles. In the design of the Belle Époque, all furniture from the past was admired, including, perhaps contrary to expectations, the Second Empire style (the style of the preceding period), which remained popular until 1900. In the years around 1900, there was a gigantic recapitulation of styles of all countries in all preceding periods. Everything from Chinese to Spanish models, from Boulle to Gothic, found its way into furniture production, but some styles were more appreciated than others. The High Middle Ages and the early Renaissance were especially prized. Exoticism of every stripe and exuberant Rococo designs were also favoured. Revivals and influence of the Baroque faded and disappeared with Art Deco, a style created around 1910 as a collective effort by multiple French designers to make a new modern style. It was obscure before World War I, but became very popular during the interwar period, being heavily associated with the 1920s and the 1930s. The movement was a blend of multiple characteristics taken from Modernist currents of the 1900s and the 1910s, like the Vienna Secession, Cubism, Fauvism, Primitivism, Suprematism, Constructivism, Futurism, De Stijl, and Expressionism. Besides Modernism, elements taken from styles popular during the Belle Époque, like the Rococo Revival, Neoclassicism, or the neo-Louis XVI style, are also present in Art Deco. The proportions, volumes and structure of pre-World War I Beaux-Arts architecture are present in early Art Deco buildings of the 1910s and 1920s. Elements taken from the Baroque are quite rare, architects and designers preferring the Louis XVI style. At the end of the interwar period, the rise in popularity of the International Style, characterized by the complete lack of ornamentation, led to the complete abandonment of Baroque influence and revivals. Multiple International Style architects and designers, as well as Modernist artists, criticized the Baroque for its extravagance and what they saw as "excess". Ironically, this was just at the same time that critical appreciation of the original Baroque was reviving strongly.
Postmodern appreciation and reinterpretations
Appreciation for the Baroque reappeared with the rise of Postmodernism, a movement that questioned Modernism (the status quo after World War II), and which promoted the inclusion of elements of historic styles in new designs, and appreciation for the pre-Modernist past.
See also
List of Baroque architecture
Baroque in Brazil
Czech Baroque architecture
Dutch Baroque architecture
Earthquake Baroque
English Baroque
French Baroque architecture
Italian Baroque
Sicilian Baroque
New Spanish Baroque
Mexican Baroque
Neoclassicism (music)
Andean Baroque
Baroque in Poland
Baroque architecture in Portugal
Naryshkin Baroque
Siberian Baroque
Spanish Baroque literature
Ukrainian Baroque
Pasquale Bellonio
The Bank of Italy (Italian: Banca d'Italia, informally referred to as Bankitalia) is the central bank of Italy and part of the European System of Central Banks. It is located in Palazzo Koch, via Nazionale, Rome. The bank's current governor is Ignazio Visco, who took office on 1 November 2011. Until January 1999, when Italy adopted the euro, the bank was responsible for the national currency, the Italian lira. From then until euro notes and coins were issued on 1 January 2002, lira coins and notes continued as denominations of the euro.
Functions
After responsibility for monetary and exchange rate policy was shifted in 1998 to the European Central Bank, within the European institutional framework, the bank implements its decisions, issues euro banknotes and withdraws and destroys worn pieces. Its main function has thus become banking and financial supervision. The objective is to ensure the stability and efficiency of the system and compliance with rules and regulations; the bank pursues it through secondary legislation, controls and cooperation with governmental authorities. Following a reform in 2005, which was prompted by takeover scandals, the bank has lost exclusive antitrust authority in the credit sector, which is now shared with the Italian Competition Authority. Other functions include market supervision, oversight of the payment system and provision of settlement services, the State treasury service, the Central Credit Register, economic analysis and institutional consultancy. As of 2021, the Bank of Italy owned 2,451.8 tonnes of gold, the third-largest gold reserve in the world.
History
The institution was established in 1893 from the combination of three major banks in Italy (after the Banca Romana scandal). The new central bank first issued banknotes during 1926. Until 1928 it was directed by a general manager; after that time it was instead led by a governor, elected by an internal commission of managers and appointed by decree of the President of the Italian Republic for a term of seven years. In 1863 a crisis in the world money market created panic and a rush to the counters to exchange banknotes for metallic currency. The Italian government responded in 1866 by making paper money fiat and legal tender. The government was accused in this way of favouring the issuing banks, and a long debate called the "banking question" arose about the advisability of having one or more issuers. The Minghetti–Finali law of 1873 established a mandatory consortium among the six existing issuing institutions, the Banca Nazionale nel Regno d'Italia, the Banca Nazionale Toscana, the Banca Toscana di Credito, the Banca Romana, the Banco di Napoli, and the Banco di Sicilia; but the measure proved insufficient. Following the Banca Romana scandal, the reorganization of the issuing institutions became necessary.
Establishment
Law no. 449 of 10 August 1893 of the Giolitti I government established the Bank of Italy through the merger of four banks: the National Bank in the Kingdom of Italy (formerly Banca Nazionale in the Sardinian States), the Banca Nazionale Toscana, the Banca Toscana di Credito for the Industries and Commerce of Italy, together with the liquidation management of the Banca Romana. With a complex series of mergers between these banks, the current Bank of Italy was formed. Several families of bankers who were historical partners, such as the Bombrini, Bastogi and Balduino, supported the operation.
The institute enjoyed (together with the Banks of Naples and Sicily) the privilege of issue; it also acted as a "bank of banks" through the rediscount of bills, but did not have supervisory powers over other banks. The bank remained a private limited company and was headed by a director. From 1900 to 1928 the director was Bonaldo Stringher, who gave the Bank the role of manager of Italian monetary policy and lender of last resort, bringing it closer to a modern central bank. In particular, he understood that a central bank cannot aim at maximizing profit (which would be achieved by printing as much paper money as possible) but must instead aim at price stability. In 1907, the Bank of Italy coordinated the rescue of the Italian Banking Company, a major lender to FIAT, an operation that ended with the absorption of the troubled bank into the Italian Discount Bank. In 1911 the central bank organized a consortium to rescue the steel companies (Acciaierie di Terni, Ilva and others) of which the Bank of Italy was a direct creditor, financing the operation also through the issue of banknotes. In 1912 the credit institute for cooperation, with social purposes, was established, led by the Bank of Italy and with the participation of public bodies, savings banks, Monte dei Paschi di Siena, the Cassa di Previdenza, and the Credit Institution for the Cooperatives of Milan. In 1929 the institute was transformed by its director Arturo Osio into the Banca Nazionale del Lavoro. In 1913 the Subsidy Consortium was established, led by the Bank of Italy and with the participation of the Banks of Naples and Sicily, some savings banks, Monte dei Paschi di Siena and the San Paolo Bank of Turin. In 1922 the Consortium saved Ansaldo and took control of it, and in 1923 it did the same with Banco di Roma. In the same year, 1913, Francesco Saverio Nitti drew up a bill that entrusted the Bank of Italy with the supervision of other banks, but the private banks managed to avoid its approval. In 1914 the Bank of Italy assisted the Banco di Roma, which had to devalue its capital due to losses reported in its activities in the eastern Mediterranean. After the First World War, in 1921, it was again the Bank of Italy that led the consortium that managed the liquidation of the Italian Discount Bank and saved the Banco di Roma once again from crisis.
United States and Europe in the 20th century
In the United States, the gradual absorption of local credit institutions, oriented towards balanced budgets and small businesses, by the large national investment banks, which had diametrically opposed budget structures and objectives, began in this period. Even a formal corporate separation (though not one of ownership or group structure) was not enough to block the rise of the political and economic interests of the large financial-industrial trusts; indeed, simply scrolling through the names of the latest buyers shows that this process continued uninterruptedly after the war and up to the 2000s. Similarly, antitrust and European Union law in the post-war period never endowed themselves with strong legislative instruments capable of preventing undemocratic drifts inclined to the interests of the military–industrial complex, declaring, on the contrary, the legitimate existence of situations of "dominant position" in all sectors of the economy, where only the abuse of such a position could constitute a violation of the law, punished with pecuniary sanctions, but without any power to intervene in the management, organization or financial engineering of large groups.
The Banking Law of 1926
Even with these strong regulatory and intervention powers, the fascist state allowed the crisis of the banks headed by the National Credit, the Popular Party's bank, to worsen. In this way, fascism, which equally aimed at the political control of monetary issuance, intended to strike at one of the electoral strengths of the Catholic world and at the business system that orbited around its industrial policy, supported by credit institutions. With R.D.L. 812 of 6 May 1926, the Bank of Italy obtained the exclusive right to issue the currency (the royal decree of 28 April 1910, no. 204, which had confirmed the prerogative also for the Bank of Naples and the Bank of Sicily, was thus repealed). The subsequent R.D.L. of 6 November 1926, no. 1830, entrusted the Bank of Italy with the task of supervising the savings banks. In 1928 the Bank was reorganized: the general manager was joined by a governor with greater powers. Meanwhile, in 1926 the Subsidy Consortium had been transformed into a Liquidation Institute, still under the control of the central bank. In 1933 it was absorbed by the new Institute for Industrial Reconstruction (IRI), autonomous from the Bank of Italy. While all the banks were in very poor condition, the Banca Nazionale del Lavoro of the self-styled socialist Arturo Osio took over eleven Catholic banks in 1929, and in 1932 the Banca Agricola Italiana, which had financed Gualino's SNIA Viscosa.
Banks and the economy of the 1930s
Italy in the 1930s had an agricultural economy and a small number of industrial families who relied on the subcontracting of local suppliers, a myriad of small family-run businesses that were not international and whose survival depended on the large industrial groups, in turn linked to the commercial banks. The savings from agriculture flowed into the rural banks, the popular banks and the cooperative credit institutions, which financed provincial crafts, small businesses and construction. The job of the banks was to match the customers' short-term investment horizon with the long-term investments of large groups (rediscount). The national banks turned to local banks, which held large stocks of deposits, for smaller, low-risk loans. The Cassa Depositi e Prestiti channelled postal savings in favour of local authorities, public institutions and infrastructure, which were a way of absorbing mass unemployment through a vast program of public works. The ideological basis of the law was that savings are a matter of national interest and must be protected by the State, a principle also enshrined in the Republican Constitution and concretized first of all in the law establishing the interbank guarantee fund and in the policy of public bailouts. With other decrees of the same year, the supervisory task was extended to all Italian banks and the monopoly on issuing the currency was confirmed. The bank no longer had the right to give credit to individuals but only to other banks, as a lender of last resort. Finally, it had the power to require other banks to deposit a portion of their available funds with the central bank; by varying this share, the Bank of Italy could tighten or loosen credit. The law established certain minimum capital and management requirements necessary to guarantee risk management, stability and operational continuity: minimum capital, a minimum ratio between loans and deposits, credit limits, and provisions for the compulsory reserve.
IRI and the war
After the "defenestration" of Bonaldo Stringher, Alberto Beneduce took over; he was forced to retire in 1936 after a "heart attack" during a meeting at the Bank for International Settlements in Basel. They conceived of the banks' duty towards the public interest of the country as that of collecting savings in order to lend them to entrepreneurs, as a tool for development and growth. The process was to be led by a "circulation bank", which would increase the speed of circulation of money in the real economy. The central bank supported the fascist monetary policy of defending the stability of the Italian lira (known as "Quota 90") through the reduction of discounts and advances, and financed the enormous expenses of the wars of the 1930s and 1940s through the unlimited issuing of money (and the "inflation tax", which is not progressive with income), as Hjalmar Schacht did in Germany under Hitler. Operationally, the government issued and sold debt securities to finance military spending, and the military industry reinvested its government profits in the purchase of such bonds as a de facto advance on future orders, fuelling a closed financial circuit. In simple terms, this was something like the ECB issuing money and lending it to private banks who keep it in their current accounts with the ECB. This mechanism was called the "capital circuit". The printing of banknotes and the scarcity of consumer goods created an overabundance of money that poured into bank deposits, allowing a new expansion of credit, which was directed in favour of the same economic sectors, given that the state paid the banks a higher interest on the BOTs than it paid savers. The absorption of savings into investments in fixed capital had already taken place in the First World War, and industries were working with existing production capacity. Without consumption and investment, only the public spending of the state remained. The war could therefore start with a modest tax levy and with inflation within normal limits in the first months, before the black market and ration cards. The situation reflected the conflict of interest between the state as entrepreneur and the state as banker, albeit in the name of a higher ideological purpose. In 1938, the government decreed the power to directly appoint the presidents and vice-presidents of the boards of directors of banks. Beneduce planned to have a public bank take over the long-term credit of large companies, financed with bonds of equal duration, for public works, energy, and industry. After them, the central bank maintained a low-profile monetary policy, consistent with the directives of fascism. IRI operated differently, in agreement with the Italian banks and industries that supported fascism. The banks renounced exercising an option to "convert" the debts into shares (or a law in this regard), preferring not to enter directly into the ownership of the industrial groups. The groups transferred their bank debts to IRI, which became the new owner in exchange for shares (at book value, not always the same as market value), until it held control of the property and therefore of management. The debt of IRI rose to nine and a half billion lire at the time, two-thirds of which was paid off during the war, because it was drastically diluted by inflation, which has the effect of lowering the real weight of debts until the accounting entries are cancelled, but also of halving the purchasing power of small savers. The remaining debt was paid by 1953.
The IRI in turn had debts towards the Bank of Italy for five billion lire: the State issued bonds for IRI for one and a half billion, "sterilizing" the debt, which should have been repaid with "annuity" interest accrued until 1971. The change of constitutional order and of currency (the conversion exchange rate), together with inflation, meant that IRI (and the industries) paid the Bank of Italy less than a third of the sum. After the armistice of 8 September 1943, the German authorities demanded the delivery of the gold reserve: 173 tons of gold were first transferred to the Milan office, and then to Fortezza. Traces of it were subsequently lost. In the 1960s, the public debt increased and so did inflation. Governor Guido Carli pursued a credit-crunch policy to stop inflation, particularly in 1964. In general, the Bank of Italy played an important political role under his governorship. Other credit crunches were implemented between 1969 and 1970, due to the flight of capital abroad, and in 1974 as a result of the oil crisis. In March 1979 the governor of the Bank of Italy, Paolo Baffi, and the deputy director in charge of supervision, Mario Sarcinelli, were accused by the Rome public prosecutor of private interest in official acts and personal aiding and abetting. Sarcinelli was arrested, and released from prison only after being suspended from duties relating to supervision, while Baffi avoided prison due to his age. In 1981 the two were completely acquitted. Subsequently, the suspicion emerged that the indictment had been sought by P2 to prevent the Bank of Italy from supervising Roberto Calvi's Banco Ambrosiano.
The postwar period
The post-war inflation, also due to the AM-lire, was fought with the credit crunch desired by governor Luigi Einaudi, which was obtained through the compulsory reserve on deposits. In particular, the instrument of compulsory reserves of banks at the central bank was used, introduced in 1926 but never really applied. In 1948 the governor was given the task of regulating the money supply and deciding the discount rate. The universal banks were the ones that had gained the most from war and inflation (under the authorization regime of the Interministerial Credit Committee), with the greatest growth in deposits. Along with the recovery, speculative stocks and capital flight abroad appeared. Credit limits were no longer tied to equity, as equity figures had been completely distorted by inflation. The squeeze on lending, the liquidity crisis and the Einaudian deflation pushed operators to finance themselves by placing stocks on the market and repatriating capital, thus blocking the rise in prices, and by resorting to self-financing (even without distributing profits), aided by the fact that inflation had made it possible to quickly amortize fixed assets whose book value was now nominal. During the years of the Reconstruction, governor Donato Menichella governed the issue in a gradual and balanced way: he did not implement expansionary manoeuvres to encourage growth but was careful to avoid creating credit crunches. In this, he was helped by the low public debt. His monetary policy program was stability for development. A part of the available bank savings was channelled annually to the Treasury to cover the budget deficit (in the current year), while during his tenure the public debt of the state never rose above 1% of GDP until 1964.
In July 1981, a "divorce" between the State (Ministry of the Treasury) and its central bank was initiated by the decision of the then Treasury Minister Beniamino Andreatta. From that moment on, the institute was no longer required to purchase the bonds that the government was unable to place on the market, thus ceasing the monetization of the Italian public debt that it had carried out since the Second World War up to that moment. This decision was opposed by the Minister of Finance Rino Formica, who would have liked the Bank of Italy to be required to repay at least a portion of these securities, and from the summer of 1982 a series of intra-government verbal clashes between the two ministers known as the wives' quarrel, which was followed by the fall of the second Spadolini government a few months later. The divorce between the Ministry of the Treasury and the Bank of Italy is still considered by economic doctrine as a factor of great stabilization of inflation (which went from over 20% in 1980 to less than 5% in the following years) and a central prerequisite for guarantee the full independence of the technical monetary policy body (central bank) from the choices related to fiscal policy (under the responsibility of the government), but also a factor of considerable incidence of growth of the Italian public debt. The law of 7 February 1992 n. 82, proposed by the then Minister of the Treasury Guido Carli, clarifies that the decision on the discount rate is the exclusive competence of the governor and must no longer be agreed in concert with the Minister of the Treasury (the previous decree of the President of the Republic is modified in relation to the new law with the Presidential Decree of 18 July). The euro and the 2006 reform The Legislative Decree 10 March 1998 n. 43 removes the Bank of Italy from management by the Italian government, sanctioning its belonging to the European system of central banks. From this date, therefore, the quantity of currency in circulation is decided autonomously by the Central Bank. With the introduction of the Euro on 1 January 1999, the Bank thus loses the function of presiding over national monetary policy. This function has since been exercised collectively by the Governing Council of the European Central Bank, which also includes the Governor of the Bank of Italy. On 13 June 1999 the Senate of the Republic, during the XIII Legislature, discussed bill no. 4083 "Rules on the ownership of the Bank of Italy and on the criteria for appointing the Board of Governors of the Bank of Italy". This bill would like the state to acquire all the shares of the institute, but it is never approved. On January 4, 2004, the weekly "Famiglia Cristiana" reports, for the first time in history, the list of participants in the capital of the Bank of Italy with the relative shares. The source is a Mediobanca Research & Studies dossier, directed by the researcher Fulvio Coltorti, who, by investigating backwards on the balance sheets of banks, insurance companies and institutions, and gradually noting the shares that indicated a shareholding in the capital of the Bank of Italia managed to reconstruct a large part of the list of participants of the highest Italian financial institution. On September 20, 2005, the list of shareholders was officially made available by the Bank of Italy; until now it was considered confidential. On December 19, 2005, after intense press campaigns and criticism of his actions in the context of the Bancopoli scandal, Governor Antonio Fazio resigned. 
A few days later Mario Draghi was appointed in his place, taking office on 16 January 2006. Law no. 262 of 28 December 2005, as part of various measures to protect savings, introduced for the first time a term limit for the mandate of the governor and the members of the directorate. It also dealt with (article 19, paragraph 10) the issue of ownership of the capital of the Bank of Italy, providing for the redefinition of the Bank's shareholding structure by means of a government regulation to be issued within three years of the law's entry into force. This regulation should have governed the methods of transferring the shares held by "subjects other than the State or other public bodies". The delegation made by law 262/2005 therefore expired without the regulation being issued, but the right of ownership of the shares of the current participants is in any case safeguarded by a provision of the Bank's Statute. On the basis of law 262/2005, Mario Draghi became the first governor to have a term of six years, renewable once for a further six years.
Organization
Governing bodies
The bank's governing bodies are the General Meeting of Shareholders, the board of directors, the governor, the director general and three deputy directors-general; the last five constitute the directorate. The general meeting takes place yearly, with the purpose of approving the accounts and appointing the auditors. The board of directors has administrative powers and is chaired by the governor (or by the director general in his absence). Following a reform in 2005, the governor lost exclusive responsibility for decisions of external relevance (i.e. banking and financial supervision), which has been transferred to the directorate (by majority vote). The director general is responsible for the day-to-day administration of the bank and acts as governor when the latter is absent. The board of auditors assesses the bank's administration and its compliance with the law, regulations and statute.
Appointment
The directorate's term of office lasts six years and is renewable once. The appointment of the governor is the responsibility of the government, after hearing the board of directors, with the approval of the president (formally by a decree of the president). The board of directors is elected by the shareholders according to the bank's statute. On 25 October 2011, Silvio Berlusconi nominated Ignazio Visco to be the bank's new governor, replacing Mario Draghi when he left to become president of the European Central Bank in November.
Shareholders
Banca d'Italia had 300,000 shares with a nominal value of €25,000 each. Originally scattered among the banks of Italy, the shares have since become concentrated as a result of the bank mergers of the 1990s. The statute of the bank states that a minimum of 54% of profits goes to the Italian government, and only a maximum of 6% of profits may be distributed as dividends according to share ratios.
See also
Banking in Italy
Commissione Nazionale per le Società e la Borsa
Economy of Italy
Istituto Poligrafico e Zecca dello Stato
Italian lira
Euro
Governor of the Bank of Italy
References
External links
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz and, depending on context, may specifically refer to passband bandwidth or baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy, and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller.
Overview
Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition will refer to the frequencies beyond which performance is degraded. In the case of frequency response, degradation could, for example, mean more than 3 dB below the maximum value or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes. In the context of, for example, the sampling theorem and Nyquist sampling rate, bandwidth typically refers to baseband bandwidth. In the context of Nyquist symbol rate or Shannon–Hartley channel capacity for communication systems, it refers to passband bandwidth. The Rayleigh bandwidth of a simple radar pulse is defined as the inverse of its duration. For example, a one-microsecond pulse has a Rayleigh bandwidth of one megahertz. The essential bandwidth is defined as the portion of a signal spectrum in the frequency domain which contains most of the energy of the signal.
x dB bandwidth
In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density (in W/Hz or V²/Hz) is nonzero or above a small threshold value. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is the point where the spectral density is half its maximum value (or the spectral amplitude is 70.7% of its maximum). This figure, with a lower threshold value, can be used in calculations of the lowest sampling rate that will satisfy the sampling theorem.
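The half-power convention and the Rayleigh-bandwidth example above can be stated compactly. The following is a minimal formalization under assumed notation (S for the power spectral density, τ for the pulse duration); it is a sketch added for clarity, not a formula taken from the original text.

```latex
% Half-power (3 dB) threshold on a power spectral density S(f), and the
% Rayleigh bandwidth of a simple pulse of duration \tau.
S(f_{3\,\mathrm{dB}}) = \tfrac{1}{2}\, S_{\max}
\quad\Longleftrightarrow\quad
10 \log_{10}\!\frac{S(f_{3\,\mathrm{dB}})}{S_{\max}} \approx -3\ \mathrm{dB},
\qquad
B_{\mathrm{Rayleigh}} = \frac{1}{\tau}, \qquad
\tau = 1\ \mu\mathrm{s} \;\Rightarrow\; B_{\mathrm{Rayleigh}} = 1\ \mathrm{MHz}.
```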
The bandwidth is also used to denote system bandwidth, for example in filter or communication channel systems. To say that a system has a certain bandwidth means that the system can process signals with that range of frequencies, or that the system reduces the bandwidth of a white noise input to that bandwidth. The 3 dB bandwidth of an electronic filter or communication channel is the part of the system's frequency response that lies within 3 dB of the response at its peak, which, in the passband filter case, is typically at or near its center frequency, and in the low-pass filter case is at or near its cutoff frequency. If the maximum gain is 0 dB, the 3 dB bandwidth is the frequency range where attenuation is less than 3 dB. 3 dB attenuation is also where power is half its maximum. This same half-power gain convention is also used in spectral width, and more generally for the extent of functions as full width at half maximum (FWHM). In electronic filter design, a filter specification may require that within the filter passband, the gain is nominally 0 dB with a small variation, for example within the ±1 dB interval. In the stopband(s), the required attenuation in decibels is above a certain level, for example >100 dB. In a transition band the gain is not specified. In this case, the filter bandwidth corresponds to the passband width, which in this example is the 1 dB bandwidth. If the filter shows amplitude ripple within the passband, the x dB point refers to the point where the gain is x dB below the nominal passband gain rather than x dB below the maximum gain. In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops 3 dB below peak. In communication systems, in calculations of the Shannon–Hartley channel capacity, bandwidth refers to the 3 dB bandwidth. In calculations of the maximum symbol rate, the Nyquist sampling rate, and the maximum bit rate according to Hartley's law, the bandwidth refers to the frequency range within which the gain is non-zero. The fact that in equivalent baseband models of communication systems the signal spectrum consists of both negative and positive frequencies can lead to confusion about bandwidth, since the bandwidth is sometimes referred to only by its positive half, and one will occasionally see expressions such as B = 2W, where B is the total bandwidth (i.e. the maximum passband bandwidth of the carrier-modulated RF signal and the minimum passband bandwidth of the physical passband channel), and W is the positive bandwidth (the baseband bandwidth of the equivalent channel model). For instance, the baseband model of the signal would require a low-pass filter with a cutoff frequency of at least W to stay intact, and the physical passband channel would require a passband filter of at least B to stay intact.
Relative bandwidth
The absolute bandwidth is not always the most appropriate or useful measure of bandwidth. For instance, in the field of antennas it is easier to construct an antenna meeting a specified absolute bandwidth at a higher frequency than at a lower frequency. For this reason, bandwidth is often quoted relative to the frequency of operation, which gives a better indication of the structure and sophistication needed for the circuit or device under consideration. There are two different measures of relative bandwidth in common use: fractional bandwidth (B_F) and ratio bandwidth (B_R).
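As a concrete illustration of the half-power convention described above, the following sketch numerically estimates the 3 dB bandwidth of a first-order RC low-pass filter. The component values, grid, and variable names are illustrative assumptions added here, not part of the original article.

```python
import numpy as np

# Illustrative sketch (not from the article): numerically estimate the 3 dB
# (half-power) bandwidth of a first-order RC low-pass filter. Its analytic
# cutoff frequency is f_c = 1 / (2*pi*R*C), so the estimate should land there.
R, C = 1.0e3, 1.0e-6                          # 1 kOhm, 1 uF -> f_c ~ 159.2 Hz
f = np.linspace(0.0, 2000.0, 200_001)         # frequency grid in Hz
gain = 1.0 / np.sqrt(1.0 + (2 * np.pi * f * R * C) ** 2)   # |H(f)|, peak = 1

within_3db = gain >= gain.max() / np.sqrt(2)  # points within 3 dB of the peak
bandwidth_3db = f[within_3db].max() - f[within_3db].min()

print(f"estimated 3 dB bandwidth: {bandwidth_3db:.1f} Hz, "
      f"analytic cutoff: {1 / (2 * np.pi * R * C):.1f} Hz")
```

Because this is a baseband (low-pass) response, the 3 dB bandwidth found this way coincides with the cutoff frequency, as the text notes.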
In the following, the absolute bandwidth is defined as B = Δf = f_H − f_L, where f_H and f_L are the upper and lower frequency limits respectively of the band in question.
Fractional bandwidth
Fractional bandwidth is defined as the absolute bandwidth divided by the center frequency (f_C): B_F = Δf / f_C. The center frequency is usually defined as the arithmetic mean of the upper and lower frequencies, so that f_C = (f_H + f_L) / 2 and B_F = 2(f_H − f_L) / (f_H + f_L). However, the center frequency is sometimes defined as the geometric mean of the upper and lower frequencies, f_C = √(f_H f_L) and B_F = (f_H − f_L) / √(f_H f_L). While the geometric mean is more rarely used than the arithmetic mean (and the latter can be assumed if not stated explicitly), the former is considered more mathematically rigorous. It more properly reflects the logarithmic relationship of fractional bandwidth with increasing frequency. For narrowband applications, there is only a marginal difference between the two definitions. The geometric mean version is inconsequentially larger. For wideband applications they diverge substantially, with the arithmetic mean version approaching 2 in the limit and the geometric mean version approaching infinity. Fractional bandwidth is sometimes expressed as a percentage of the center frequency (percent bandwidth, %B): %B = 100 Δf / f_C.
Ratio bandwidth
Ratio bandwidth is defined as the ratio of the upper and lower limits of the band: B_R = f_H / f_L. Ratio bandwidth may be notated as B_R : 1. The relationship between ratio bandwidth and fractional bandwidth is given by B_F = 2(B_R − 1) / (B_R + 1) and B_R = (2 + B_F) / (2 − B_F). Percent bandwidth is a less meaningful measure in wideband applications. A percent bandwidth of 100% corresponds to a ratio bandwidth of 3:1. All higher ratios up to infinity are compressed into the range 100–200%. Ratio bandwidth is often expressed in octaves for wideband applications. An octave is a frequency ratio of 2:1, leading to this expression for the number of octaves: log₂(B_R).
Photonics
In photonics, the term bandwidth carries a variety of meanings:
the bandwidth of the output of some light source, e.g., an ASE source or a laser; the bandwidth of ultrashort optical pulses can be particularly large
the width of the frequency range that can be transmitted by some element, e.g. an optical fiber
the gain bandwidth of an optical amplifier
the width of the range of some other phenomenon, e.g., a reflection, the phase matching of a nonlinear process, or some resonance
the maximum modulation frequency (or range of modulation frequencies) of an optical modulator
the range of frequencies in which some measurement apparatus (e.g., a power meter) can operate
the data rate (e.g., in Gbit/s) achieved in an optical communication system; see bandwidth (computing).
A related concept is the spectral linewidth of the radiation emitted by excited atoms.
See also
Bandwidth extension
Broadband
Noise bandwidth
Rise time
Spectral efficiency
Notes
References
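The relative-bandwidth definitions above translate directly into a small calculation. The following sketch uses an assumed helper name (bandwidth_measures) introduced here for illustration; it is not taken from the article.

```python
import math

# Illustrative sketch (assumed helper name, not from the article): compute the
# relative-bandwidth figures defined above for a band from f_low to f_high.
def bandwidth_measures(f_low: float, f_high: float) -> dict:
    delta_f = f_high - f_low                    # absolute bandwidth
    fc_arithmetic = (f_high + f_low) / 2        # arithmetic-mean center frequency
    fc_geometric = math.sqrt(f_high * f_low)    # geometric-mean center frequency
    ratio = f_high / f_low                      # ratio bandwidth B_R
    return {
        "absolute_hz": delta_f,
        "fractional_arithmetic": delta_f / fc_arithmetic,  # B_F
        "fractional_geometric": delta_f / fc_geometric,
        "percent": 100 * delta_f / fc_arithmetic,          # %B
        "ratio": ratio,
        "octaves": math.log2(ratio),
    }

# A band from 1 GHz to 3 GHz: 100% percent bandwidth, 3:1 ratio bandwidth and
# log2(3) ~ 1.58 octaves, matching the correspondence noted in the text above.
print(bandwidth_measures(1e9, 3e9))
```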
In Buddhism, a bodhisattva or bodhisatva is a person who is on the path towards bodhi ('awakening') or Buddhahood. In the Early Buddhist schools, as well as modern Theravāda Buddhism, bodhisattva (Pāli: bodhisatta) refers to someone who has made a resolution to become a Buddha and has also received a confirmation or prediction from a living Buddha that this will be so. In Mahāyāna Buddhism, a bodhisattva refers to anyone who has generated bodhicitta, a spontaneous wish and compassionate mind to attain Buddhahood for the benefit of all sentient beings. Mahayana bodhisattvas are spiritually heroic persons who work to attain awakening and are driven by great compassion (mahākaruṇā). These beings are exemplified by important spiritual qualities such as the "four divine abodes" (brahmavihāras) of loving-kindness (maitrī), compassion (karuṇā), empathetic joy (muditā) and equanimity (upekṣā), as well as the various bodhisattva "perfections" (pāramitās), which include prajñāpāramitā ("transcendent knowledge" or "perfection of wisdom") and skillful means (upāya). In Theravāda Buddhism, the bodhisattva is mainly seen as an exceptional and rare individual. Only a few select individuals are ultimately able to become bodhisattvas, such as Maitreya. Mahāyāna Buddhism generally understands the bodhisattva path as being open to everyone, and Mahāyāna Buddhists encourage all individuals to become bodhisattvas. Spiritually advanced bodhisattvas such as Avalokiteshvara, Maitreya, and Manjushri are also widely venerated across the Mahāyāna Buddhist world and are believed to possess great magical power which they employ to help all living beings.
In Early Buddhism
In pre-sectarian Buddhism, the term bodhisatta is used in the early texts to refer to Gautama Buddha in his previous lives and as a young man in his last life, when he was working towards liberation. In the early Buddhist discourses, the Buddha regularly uses the phrase "when I was an unawakened Bodhisatta" to describe his experiences before his attainment of awakening. The early texts which discuss the period before the Buddha's awakening mainly focus on his spiritual development. According to Bhikkhu Analayo, most of these passages focus on three main themes: "the bodhisattva's overcoming of unwholesome states of mind, his development of mental tranquillity, and the growth of his insight." Other early sources like the Acchariyabbhutadhamma-sutta (MN 123, and its Chinese parallel in Madhyama-āgama 32) discuss the marvelous qualities of the bodhisattva Gautama in his previous life in Tuṣita heaven. The Pali text focuses on how the bodhisattva was endowed with mindfulness and clear comprehension while living in Tuṣita, while the Chinese source states that his lifespan, appearance, and glory were greater than those of all the devas (gods). These sources also discuss various miracles which accompanied the bodhisattva's conception and birth, most famously his taking seven steps and proclaiming that this was his last life. The Chinese source (titled Discourse on Marvellous Qualities) also states that while living as a monk under the Buddha Kāśyapa he "made his initial vow to [realize] Buddhahood [while] practicing the holy life." Another early source that discusses the qualities of bodhisattvas is the Mahāpadāna sutta. This text discusses bodhisattva qualities in the context of six previous Buddhas who lived long ago, such as Buddha Vipaśyī.
Yet another important element of the bodhisattva doctrine, the idea of a prediction of someone's future Buddhahood, is found in another Chinese early Buddhist text, the Discourse on an Explanation about the Past (MĀ 66). In this discourse, a monk named Maitreya aspires to become a Buddha in the future, and the Buddha then predicts that Maitreya will become a Buddha in the future. Other discourses found in the Ekottarika-āgama present the "bodhisattva Maitreya" as an example figure (EĀ 20.6 and EĀ 42.6), and one sutra in this collection also discusses how the Buddha taught the bodhisattva path of the six perfections to Maitreya (EĀ 27.5). 'Bodhisatta' may also connote a being who is "bound for enlightenment", in other words, a person whose aim is to become fully enlightened. In the Pāli canon, the Bodhisatta (bodhisattva) is also described as someone who is still subject to birth, illness, death, sorrow, defilement, and delusion. According to the Theravāda monk Bhikkhu Bodhi, while all the Buddhist traditions agree that to attain Buddhahood one must "make a deliberate resolution" and fulfill the spiritual perfections (pāramīs or pāramitās) as a bodhisattva, the actual bodhisattva path is not taught in the earliest strata of Buddhist texts such as the Pali Nikayas (and their counterparts such as the Chinese Āgamas), which instead focus on the ideal of the arahant. The oldest known story about how Gautama Buddha becomes a bodhisattva is the story of his encounter with the previous Buddha, Dīpankara. During this encounter, a previous incarnation of Gautama, variously named Sumedha, Megha, or Sumati, offers five blue lotuses and spreads out his hair or entire body for Dīpankara to walk on, resolving to one day become a Buddha. Dīpankara then confirms that they will attain Buddhahood. Early Buddhist authors saw this story as indicating that the making of a resolution (abhinīhāra) in the presence of a living Buddha and his prediction/confirmation (vyākaraṇa) of one's future Buddhahood was necessary to become a bodhisattva. According to Drewes, "all known models of the path to Buddhahood developed from this basic understanding." Stories and teachings on the bodhisattva ideal are found in the various Jataka tale sources, which mainly focus on stories of the past lives of Sakyamuni. Among the non-Mahayana Nikaya schools, the Jataka literature was likely the main genre that contained bodhisattva teachings. These stories had certainly become an important part of popular Buddhism by the time of the carving of the Bharhut Stupa railings (c. 125–100 BCE), which contain depictions of around thirty Jataka tales. Thus, it is possible that the bodhisattva ideal was popularized through the telling of Jatakas. Jataka tales contain numerous stories which focus on the past-life deeds of Sakyamuni when he was a bodhisattva. These deeds generally express bodhisattva qualities and practices (such as compassion, the six perfections, and supernatural power) in dramatic ways, and include numerous acts of self-sacrifice. Apart from Jataka stories related to Sakyamuni, the idea that Metteya (Maitreya), who currently resides in Tuṣita, would become the future Buddha and that this had been predicted by the Buddha Sakyamuni was also an early doctrine related to the bodhisattva ideal. It first appears in the Cakkavattisihanadasutta. According to A.L. Basham, it is also possible that some of the Ashokan edicts reveal knowledge of the bodhisattva ideal.
Basham even argues that Ashoka may have considered himself a bodhisattva, as one edict states that he "set out for sambodhi."
In the Nikāya schools
By the time that the Buddhist tradition had developed into various competing sects, the idea of the bodhisattva vehicle (Sanskrit: bodhisattvayana) as a distinct (and superior) path from that of the arhat and solitary buddha was widespread among all the major non-Mahayana Buddhist traditions or Nikaya schools, including Theravāda, Sarvāstivāda and Mahāsāṃghika. The doctrine is found, for example, in 2nd-century CE sources like the Avadānaśataka and the Divyāvadāna. The bodhisattvayana was referred to by other names such as "vehicle of the perfections" (pāramitāyāna), "bodhisatva dharma", "bodhisatva training", and "vehicle of perfect Buddhahood". According to various sources, some of the Nikaya schools (such as the Dharmaguptaka and some of the Mahasamghika sects) transmitted a collection of texts on bodhisattvas alongside the Tripitaka, which they termed "Bodhisattva Piṭaka" or "Vaipulya (Extensive) Piṭaka". None of these have survived. Har Dayal attributes the historical development of the bodhisattva ideal to "the growth of bhakti (devotion, faith, love) and the idealisation and spiritualisation of the Buddha." The North Indian Sarvāstivāda school held that it took Gautama three "incalculable aeons" (asaṃkhyeyas) and ninety-one aeons (kalpas) to become a Buddha after his resolution (praṇidhāna) in front of a past Buddha. During the first incalculable aeon he is said to have encountered and served 75,000 Buddhas, and 76,000 in the second, after which he received his first prediction (vyākaraṇa) of future Buddhahood from Dīpankara, meaning that he could no longer fall back from the path to Buddhahood. For Sarvāstivāda, the first two incalculable aeons are a period of time in which a bodhisattva may still fall away and regress from the path. At the end of the second incalculable aeon, they encounter a buddha and receive their prediction, at which point they are certain to achieve Buddhahood. Thus, the presence of a living Buddha is also necessary for Sarvāstivāda. The Mahāvibhāṣā explains that its discussion of the bodhisattva path is partly meant "to stop those who are in fact not bodhisattvas from giving rise to the self-conceit that they are." However, for Sarvāstivāda, one is not technically a bodhisattva until the end of the third incalculable aeon, after which one begins to perform the actions which lead to the manifestation of the marks of a great person. The Mahāvastu of the Mahāsāṃghika-Lokottaravādins presents various ideas regarding the school's conception of the bodhisattva ideal. According to this text, the bodhisattva Gautama had already reached a level of dispassion at the time of Buddha Dīpaṃkara many aeons ago, and he is also said to have attained the perfection of wisdom countless aeons ago. The Mahāvastu also presents four stages or courses (caryās) of the bodhisattva path without giving specific time frames (though it is said to take various incalculable aeons). This set of four phases of the path is also found in other sources, including the Gandhari "Many-Buddhas Sūtra" (*Bahubudha gasutra) and the Chinese Fó běnxíng jí jīng (佛本行集經, Taisho vol. 3, no. 190, pp. 669a1–672a11). The four caryās (Gandhari: caria) are the following:
Natural (Sanskrit: prakṛti-caryā, Gandhari: pragidi, Chinese: 自性行 zì xìng xíng), one first plants the roots of merit in front of a Buddha to attain Buddhahood.
Resolution (praṇidhāna-caryā, G: praṇisi, C: 願性行 yuàn xìng xíng), one makes their first resolution to attain Buddhahood in the presence of a Buddha.
Continuing (anuloma-caryā, C: 順性行 shùn xìng xíng) or "development" (vivartana, G: vivaṭaṇa), in which one continues to practice until one meets a Buddha who confirms one's future Buddhahood.
Irreversible (anivartana-caryā, C: 轉性行 zhuǎn xìng xíng) or "course of purity" (G: śukracaria), this is the stage at which one cannot fall back and is assured of future Buddhahood.
In Theravāda
The bodhisattva ideal is also found in southern Buddhist sources, like the Theravāda school's Buddhavaṃsa (1st–2nd century BCE), which explains how Gautama, after making a resolution (abhinīhāra) and receiving his prediction (vyākaraṇa) of future Buddhahood from the past Buddha Dīpaṃkara, became certain (dhuva) to attain Buddhahood. Gautama then took four incalculable aeons and a hundred thousand shorter kalpas (aeons) to reach Buddhahood. Several sources in the Pali Canon depict the idea that there are multiple Buddhas and that there will be many future Buddhas, all of which must train as bodhisattas. Non-canonical Theravada Jataka literature also teaches about bodhisattvas and the bodhisattva path. The worship of bodhisattvas like Metteya, Saman and Natha (Avalokiteśvara) can also be found in Theravada Buddhism. By the time of the great scholar Buddhaghosa (5th century CE), orthodox Theravāda held the standard Indian Buddhist view that there were three main spiritual paths within Buddhism: the way of the Buddhas (buddhayāna), i.e. the bodhisatta path; the way of the individual Buddhas (paccekabuddhayāna); and the way of the disciples (sāvakayāna). The Sri Lankan commentator Dhammapāla (6th century CE) wrote a commentary on the Cariyāpiṭaka, a text which focuses on the bodhisattva path and on the ten perfections of a bodhisatta. Dhammapāla's commentary notes that to become a bodhisattva one must make a valid resolution in front of a living Buddha. The Buddha then must provide a prediction (vyākaraṇa) which confirms that one is irreversible (anivattana) from the attainment of Buddhahood. The Nidānakathā, as well as the Buddhavaṃsa and Cariyāpiṭaka commentaries, makes this explicit by stating that one cannot use a substitute (such as a Bodhi tree, Buddha statue or Stupa) for the presence of a living Buddha, since only a Buddha has the knowledge for making a reliable prediction. This is the generally accepted view maintained in orthodox Theravada today. According to Theravāda commentators like Dhammapāla, as well as the Suttanipāta commentary, there are three types of bodhisattvas:
Bodhisattvas "preponderant in wisdom" (paññādhika), like Gautama, who reach Buddhahood in four incalculable aeons (asaṃkheyyas) and a hundred thousand kalpas.
Bodhisattvas "preponderant in faith" (saddhādhika), who take twice as long as paññādhika bodhisattvas.
Bodhisattvas "preponderant in vigor" (vīriyādhika), who take four times as long as paññādhika bodhisattvas.
According to modern Theravada authors, meeting a Buddha is needed to truly make someone a bodhisattva, because any other resolution to attain Buddhahood may easily be forgotten or abandoned during the aeons ahead. The Burmese monk Ledi Sayadaw (1846–1923) explains that though it is easy to make vows for future Buddhahood by oneself, it is very difficult to maintain the necessary conduct and views during periods when the Dharma has disappeared from the world.
One will easily fall back during such periods, and this is why one is not truly a full bodhisattva until one receives recognition from a living Buddha. Because of this, it was and remains a common practice in Theravada to attempt to establish the necessary conditions to meet the future Buddha Maitreya and thus receive a prediction from him. Medieval Theravada literature and inscriptions report the aspirations of monks, kings and ministers to meet Maitreya for this purpose. Modern figures such as Anagarika Dharmapala (1864–1933) and U Nu (1907–1995) both sought to receive a prediction from a Buddha in the future and believed meritorious actions done for the good of Buddhism would help in their endeavor to become bodhisattvas in the future. Over time the term came to be applied to other figures besides Gautama Buddha in Theravada lands, possibly due to the influence of Mahayana. The Theravada Abhayagiri tradition of Sri Lanka practiced Mahayana Buddhism and was very influential until the 12th century. Kings of Sri Lanka were often described as bodhisattvas, starting at least as early as Sirisanghabodhi (r. 247–249), who was renowned for his compassion, took vows for the welfare of the citizens, and was regarded as a mahāsatta (Sanskrit: mahāsattva), an epithet used almost exclusively in Mahayana Buddhism. Many other Sri Lankan kings from the 3rd until the 15th century were also described as bodhisattas, and their royal duties were sometimes clearly associated with the practice of the ten pāramitās. In some cases, they explicitly claimed to have received predictions of Buddhahood in past lives. Popular Buddhist figures have also been seen as bodhisattvas in Theravada Buddhist lands. Shanta Ratnayaka notes that Anagarika Dharmapala, Asarapasarana Saranarikara Sangharaja, and Hikkaduwe Sri Sumamgala "are often called bodhisattvas". Buddhaghosa was also traditionally considered to be a reincarnation of Maitreya. Paul Williams writes that some modern Theravada meditation masters in Thailand are popularly regarded as bodhisattvas. Various modern figures of esoteric Theravada traditions (such as the weizzās of Burma) have also claimed to be bodhisattvas. Theravada bhikkhu and scholar Walpola Rahula writes that the bodhisattva ideal has traditionally been held to be higher than the state of a śrāvaka not only in Mahayana but also in Theravada. Rahula writes "the fact is that both the Theravada and the Mahayana unanimously accept the Bodhisattva ideal as the highest...Although the Theravada holds that anybody can be a Bodhisattva, it does not stipulate or insist that all must be Bodhisattva which is considered not practical." He also quotes the 10th-century king of Sri Lanka, Mahinda IV (956–972 CE), who had the words inscribed "none but the bodhisattvas will become kings of a prosperous Lanka," among other examples. Jeffrey Samuels echoes this perspective, noting that while in Mahayana Buddhism the bodhisattva path is held to be universal and for everyone, in Theravada it is "reserved for and appropriated by certain exceptional people."
In Mahāyāna
Early Mahāyāna
Mahāyāna Buddhism (often also called Bodhisattvayāna, "Bodhisattva Vehicle") is based principally upon the path of a bodhisattva. This path was seen as higher and nobler than becoming an arhat or a solitary Buddha. Dayal notes that Sanskrit sources generally depict the bodhisattva path as reaching a higher goal (i.e. anuttara-samyak-sambodhi) than the goal of the path of the "disciples" (śrāvakas), which is the nirvana attained by arhats.
For example, the Lotus Sutra states: "To the sravakas, he preached the doctrine which is associated with the four Noble Truths and leads to Dependent Origination. It aims at transcending birth, old age, disease, death, sorrow, lamentation, pain, distress of mind and weariness; and it ends in nirvana. But, to the great being, the bodhisattva, he preached the doctrine, which is associated with the six perfections and which ends in the Knowledge of the Omniscient One after the attainment of the supreme and perfect bodhi." According to Peter Skilling, the Mahayana movement began when "at an uncertain point, let us say in the first century BCE, groups of monks, nuns, and lay-followers began to devote themselves exclusively to the Bodhisatva vehicle." These Mahayanists universalized the bodhisattvayana as a path which was open to everyone and which was taught for all beings to follow. This was in contrast to the Nikaya schools, which held that the bodhisattva path was only for a rare set of individuals. Indian Mahayanists preserved and promoted a set of texts called Vaipulya ("Extensive") sutras (later called Mahayana sutras). Mahayana sources like the Lotus Sutra also claim that arhats that have reached nirvana have not truly finished their spiritual quest, for they still have not attained the superior goal of sambodhi (Buddhahood) and thus must continue to strive until they reach this goal. The Aṣṭasāhasrikā Prajñāpāramitā, one of the earliest known Mahayana texts, contains a simple and brief definition for the term bodhisattva, which is also the earliest known Mahāyāna definition. This definition is given as the following: "Because he has bodhi as his aim, a bodhisattva-mahāsattva is so called." Mahayana sutras also depict the bodhisattva as a being which, because it wants to reach Buddhahood for the sake of all beings, is more loving and compassionate than the sravaka (who only wishes to end their own suffering). Thus, another major difference between the bodhisattva and the arhat is that the bodhisattva practices the path for the good of others (par-ārtha), due to their bodhicitta, while the sravakas do so for their own good (sv-ārtha) and thus do not have bodhicitta (which is compassionately focused on others). Mahayana bodhisattvas were not just abstract models for Buddhist practice, but also developed as distinct figures which were venerated by Indian Buddhists. These included figures like Manjushri and Avalokiteshvara, which are personifications of the basic virtues of wisdom and compassion respectively and are the two most important bodhisattvas in Mahayana. The development of bodhisattva devotion parallels the development of the Hindu bhakti movement. Indeed, Dayal sees the development of Indian bodhisattva cults as a Buddhist reaction to the growth of bhakti-centered religion in India which helped to popularize and reinvigorate Indian Buddhism. Some Mahayana sutras promoted another revolutionary doctrinal turn, claiming that the three vehicles of the Śrāvakayāna, Pratyekabuddhayāna and the Bodhisattvayāna were really just one vehicle (ekayana). This is most famously promoted in the Lotus Sūtra, which claims that the very idea of three separate vehicles is just an upaya, a skillful device invented by the Buddha to get beings of various abilities on the path. But ultimately, it will be revealed to them that there is only one vehicle, the ekayana, which ends in Buddhahood.
Mature scholastic Mahāyāna
Classical Indian Mahayanists held that the only sutras which teach the bodhisattva vehicle are the Mahayana sutras.
Thus, Nagarjuna writes "the subjects based on the deeds of Bodhisattvas were not mentioned in [non-Mahāyāna] sūtras." They also held that the bodhisattva path was superior to the śrāvaka vehicle, and so the bodhisattva vehicle is the "great vehicle" (mahayana), due to its greater aspiration to save others, while the śrāvaka vehicle is the "small" or "inferior" vehicle (hinayana). Thus, Asanga argues in his Mahāyānasūtrālaṃkāra that the two vehicles differ in numerous ways, such as intention, teaching, employment (i.e., means), support, and the time that it takes to reach the goal. Over time, Mahayana Buddhists developed mature systematized doctrines about the bodhisattva. The authors of the various Madhyamaka treatises often presented the view of the ekayana, and thus held that all beings can become bodhisattvas. The texts and sutras associated with the Yogacara school developed a different theory of three separate gotras (families, lineages) that inherently predisposed a person to either the vehicle of the arhat, pratyekabuddha or samyak-saṃbuddha (fully self-awakened one). For the Yogacarins, then, only some beings (those who have the "bodhisattva lineage") can enter the bodhisattva path. In East Asian Buddhism, the view of the one vehicle (ekayana), which holds that all Buddhist teachings are really part of a single path, is the standard view. The term bodhisattva was also used in a broader sense by later authors. According to the eighth-century Mahāyāna philosopher Haribhadra, the term "bodhisattva" can refer to those who follow any of the three vehicles, since all are working towards bodhi. Therefore, the specific term for a Mahāyāna bodhisattva is a mahāsattva (great being) bodhisattva. According to Atiśa's 11th-century Bodhipathapradīpa, the central defining feature of a Mahāyāna bodhisattva is the universal aspiration to end suffering for all sentient beings, which is termed bodhicitta (the mind set on awakening). The bodhisattva doctrine went through a significant transformation during the development of Buddhist tantra, also known as Vajrayana. This movement developed new ideas and texts which introduced new bodhisattvas and re-interpreted old ones in new forms, developed elaborate mandalas for them, and introduced new practices which made use of mantras, mudras and other tantric elements.
Entering the bodhisattva path
According to David Drewes, "Mahayana sutras unanimously depict the path beginning with the first arising of the thought of becoming a Buddha (prathamacittotpāda), or the initial arising of bodhicitta, typically aeons before one first receives a Buddha's prediction, and apply the term bodhisattva from this point." The Ten Stages Sutra, for example, explains that the arising of bodhicitta is the first step in the bodhisattva's career. Thus, the arising of bodhicitta, the compassionate mind aimed at awakening for the sake of all beings, is a central defining element of the bodhisattva path. Another key element of the bodhisattva path is the concept of a bodhisattva's praṇidhāna, which can mean a resolution, resolve, vow, prayer, wish, aspiration and determination. This more general idea of an earnest wish or solemn resolve, which is closely connected with bodhicitta (and is the cause and result of bodhicitta), eventually developed into the idea that bodhisattvas take certain formulaic "bodhisattva vows." One of the earliest of these formulas states: We having crossed (the stream of samsara), may we help living beings to cross!
We being liberated, may we liberate others! We being comforted, may we comfort others! We being finally released, may we release others!" Other sutras contain longer and more complex formulas, such as the ten vows found in the Ten Stages Sutra. Mahayana sources also discuss the importance of a Buddha's prediction (vyākaraṇa) of a bodhisattva's future Buddhahood. This is seen as an important step along the bodhisattva path. Later Mahayana Buddhists also developed specific rituals and devotional acts which helped to develop various preliminary qualities, such as faith, worship, prayer, and confession, that lead to the arising of bodhicitta. These elements, which constitute a kind of preliminary preparation for bodhicitta, are found in the "seven part worship" (saptāṅgapūjā or saptavidhā anuttarapūjā). This ritual form is visible in the works of Shantideva (8th century) and includes: Vandana (obeisance, bowing down); Puja (worship of the Buddhas); Sarana-gamana (going for refuge); Papadesana (confession of bad deeds); Punyanumodana (rejoicing in the merit of the good deeds of oneself and others); Adhyesana (prayer, entreaty) and yacana (supplication), the request to Buddhas and Bodhisattvas to continue preaching the Dharma; and Atmabhavadi-parityagah (surrender) and pariṇāmanā (the transfer of one's merit to the welfare of others). After these preliminaries have been accomplished, the aspirant is seen as being ready to give rise to bodhicitta, often through the recitation of a bodhisattva vow. Contemporary Mahāyāna Buddhism encourages everyone to give rise to bodhicitta and ceremonially take bodhisattva vows. With these vows and precepts, one makes the promise to work for the complete enlightenment of all sentient beings by practicing the transcendent virtues or paramitas. In Mahāyāna, bodhisattvas are often not Buddhist monks and are former lay practitioners.

The practice of the bodhisattva

After a being has entered the path by giving rise to bodhicitta, they must make effort in the practice or conduct (caryā) of the bodhisattvas, which includes all the duties, virtues and practices that bodhisattvas must accomplish to attain Buddhahood. An important early Mahayana source for the practice of the bodhisattva is the Bodhisattvapiṭaka sūtra, a major sutra found in the Mahāratnakūṭa collection which was widely cited by various sources. According to Ulrich Pagel, this text is "one of the longest works on the bodhisattva in Mahayana literature" and thus provides extensive information on the topic of bodhisattva training, especially the perfections (pāramitā). Pagel also argues that this text was quite influential on later Mahayana writings which discuss the bodhisattva and thus was "of fundamental importance to the evolution of the bodhisattva doctrine." Other sutras in the Mahāratnakūṭa collection are also important sources for the bodhisattva path. According to Pagel, the basic outline of the bodhisattva practice in the Bodhisattvapiṭaka is given in a passage which states that "the path to enlightenment comprises benevolence towards all sentient beings, striving after the perfections and compliance with the means of conversion." This path begins with contemplating the failures of samsara, developing faith in the Buddha, giving rise to bodhicitta and practicing the four immeasurables. It then proceeds through all six perfections and finally discusses the four means of converting sentient beings (saṃgrahavastu).
The path is presented through prose exposition, mnemonic lists (matrka) and also through Jataka narratives. Using this general framework, the Bodhisattvapiṭaka incorporates discussions related to other practices including super knowledge (abhijñā), learning, 'skill' (kauśalya), accumulation of merit (puṇyasaṃbhāra), the thirty-seven factors of awakening (bodhipakṣadharmas), perfect mental quietude (śamatha) and insight (vipaśyanā). Later Mahayana treatises (śāstras) like the Bodhisattvabhumi and the Mahāyānasūtrālamkāra provide the following schema of bodhisattva practices: Bodhipakṣa-caryā, the practice of the 37 bodhipakṣadharmas (the principles conducive to bodhi) which are: the four applications of mindfulness, the four right efforts, the four bases of spiritual power, the five spiritual faculties, the five strengths, the seven factors of awakening and the noble eightfold path. Abhijñā-caryā, the practice of the super-knowledges (which are mainly developed in order to convert, help and guide others). Pāramitā-caryā, the practice of the perfections, which are: Dāna (generosity), Śīla (virtue, ethics), Kṣānti (patient endurance), Vīrya (heroic energy), Dhyāna (meditation), Prajñā (wisdom), Upāya (skillful means), Praṇidhāna (vow, resolve), Bala (spiritual power), and Jñāna (knowledge). Sattvaparipāka-caryā, the practice of maturing the living beings, i.e. preaching and teaching others. The first six perfections (pāramitās) are the most significant and popular set of bodhisattva virtues and thus they serve as a central framework for bodhisattva practice. They are the most widely taught and commented upon virtues throughout the history of Mahayana Buddhist literature and feature prominently in major Sanskrit sources such as the Bodhisattvabhumi, the Mahāyānasūtrālamkāra, the King of Samadhis Sutra and the Ten Stages Sutra. They are extolled and praised by these sources as "the great oceans of all the bright virtues and auspicious principles" (Bodhisattvabhumi) and "the Teacher, the Way and the Light...the Refuge and the Shelter, the Support and the Sanctuary" (Aṣṭasāhasrikā). While many Mahayana sources discuss the bodhisattva's training in ethical discipline (śīla) in classic Buddhist terms, over time, there also developed specific sets of ethical precepts for bodhisattvas (Skt. bodhisattva-śīla). These various sets of precepts are usually taken by bodhisattva aspirants (lay and ordained monastics) along with classic Buddhist pratimoksha precepts. However, in some Japanese Buddhist traditions, monastics rely solely on the bodhisattva precepts. The perfection of wisdom (prajñāpāramitā) is generally seen as the most important and primary of the perfections, without which all the others fall short. Thus, the Madhyamakavatara (6:2) states that wisdom leads the other perfections as a man with eyes leads the blind. This perfect or transcendent wisdom has various qualities, such as being non-attached (asakti), non-conceptual and non-dual (advaya) and signless (animitta). It is generally understood as a kind of insight into the true nature of all phenomena (dharmas) which in Mahayana sutras is widely described as emptiness (shunyatā). Another key virtue which the bodhisattva must develop is great compassion (mahā-karuṇā), a vast sense of care aimed at ending the suffering of all sentient beings. This great compassion is the ethical foundation of the bodhisattva, and it is also an applied aspect of their bodhicitta. 
Great compassion must also be closely joined with the perfection of wisdom, which reveals that all the beings that the bodhisattva strives to save are ultimately empty of self (anātman) and lack inherent existence (niḥsvabhāva). Due to the bodhisattva's compassionate wish to save all beings, they develop innumerable skillful means or strategies (upaya) with which to teach and guide different kinds of beings with all sorts of different inclinations and tendencies. Another key virtue for the bodhisattva is mindfulness (smṛti), which Dayal calls "the sine qua non of moral progress for a bodhisattva." Mindfulness is widely emphasized by Buddhist authors and Sanskrit sources, and it appears four times in the list of 37 bodhipakṣadharmas. According to the Aṣṭasāhasrikā, a bodhisattva must never lose mindfulness so as not to be confused or distracted. The Mahāyānasūtrālamkāra states that mindfulness is the principal asset of a bodhisattva, while both Asvaghosa and Shantideva state that without mindfulness, a bodhisattva will be helpless and uncontrolled (like a mad elephant) and will not succeed in conquering the mental afflictions.

The length and nature of the path

Just as with non-Mahayana sources, Mahayana sutras generally depict the bodhisattva path as a long path that takes many lifetimes across many aeons. Some sutras state that a beginner bodhisattva could take anywhere from 3 to 22 countless aeons (mahāsaṃkhyeya kalpas) to become a Buddha. The Mahāyānasaṃgraha of Asanga states that the bodhisattva must cultivate the six paramitas for three incalculable aeons (kalpāsaṃkhyeya). Shantideva, meanwhile, states that bodhisattvas must practice each perfection for sixty aeons or kalpas, and also declares that a bodhisattva must practice the path for an "inconceivable" (acintya) number of kalpas. Thus, the bodhisattva path could take many billions upon billions of years to complete. Later developments in Indian and Asian Mahayana Buddhism (especially in Vajrayana or tantric Buddhism) led to the idea that certain methods and practices could substantially shorten the path (and even lead to Buddhahood in a single lifetime). In Pure Land Buddhism, an aspirant might go to a Buddha's pure land or buddha-field (buddhakṣetra), like Sukhavati, where they can study the path directly with a Buddha. This could significantly shorten the length of the path, or at least make it more bearable. East Asian Pure Land Buddhist traditions, such as Jōdo-shū and Jōdo Shinshū, hold the view that realizing Buddhahood through the long bodhisattva path of the perfections is no longer practical in the current age (which is understood as a degenerate age called mappo). Thus, they rely on the salvific power of Amitabha to bring Buddhist practitioners to the pure land of Sukhavati, where they will better be able to practice the path. This view is rejected by other schools such as Tendai, Shingon and Zen. The founders of Tendai and Shingon, Saicho and Kukai, held that anyone who practiced the path properly could reach awakening in this very lifetime. Buddhist schools like Tiantai, Huayan, Chan and the various Vajrayāna traditions maintain that they teach ways to attain Buddhahood within one lifetime. Some early depictions of the bodhisattva path in texts such as the Ugraparipṛcchā Sūtra describe it as an arduous, difficult monastic path suited only for the few, which is nevertheless the most glorious path one can take.
Three kinds of bodhisattvas are mentioned: the forest, city, and monastery bodhisattvas, with forest dwelling being promoted as a superior, even necessary path in sutras such as the Ugraparipṛcchā and the Samadhiraja sutras. The early Rastrapalapariprccha sutra also promotes a solitary life of meditation in the forests, far away from the distractions of the householder life. The Rastrapala is also highly critical of monks living in monasteries and in cities, who are seen as not practicing meditation and morality. The Ratnagunasamcayagatha also says the bodhisattva should undertake ascetic practices (dhūtaguṇa), "wander freely without a home", practice the paramitas and train under a guru in order to perfect his meditation practice and realization of prajñaparamita. The twelve dhūtaguṇas are also promoted by the King of Samadhis Sutra, the Ten Stages Sutra and Shantideva. Some scholars have used these texts to argue for "the forest hypothesis", the theory that the initial Bodhisattva ideal was associated with a strict forest asceticism. But other scholars point out that many other Mahayana sutras do not promote this ideal, and instead teach "easy" practices like memorizing, reciting, teaching and copying Mahayana sutras, as well as meditating on Buddhas and bodhisattvas (and reciting or chanting their names). Ulrich Pagel also notes that in numerous sutras found in the Mahāratnakūṭa collection, the bodhisattva ideal is placed "firmly within the reach of non-celibate layfolk."

Bodhisattvas and Nirvana

Related to the different views on the different types of yanas or vehicles is the question of a bodhisattva's relationship to nirvāṇa. In the various Mahāyāna texts, two theories can be discerned. One view is the idea that a bodhisattva must postpone their awakening until full Buddhahood is attained (at which point one ceases to be reborn, which is the classical view of nirvāṇa). This view is promoted in some sutras like the Pañcavimsatisahasrika-prajñaparamita-sutra. The idea is also found in the Laṅkāvatāra Sūtra, which mentions that bodhisattvas take the following vow: "I shall not enter into final nirvana before all beings have been liberated." Likewise, the Śikṣāsamuccaya states: "I must lead all beings to Liberation. I will stay here till the end, even for the sake of one living soul." The second theory is the idea that there are two kinds of nirvāṇa, the nirvāṇa of an arhat and a superior type of nirvāṇa called apratiṣṭhita (non-abiding) that allows a Buddha to remain engaged in the samsaric realms without being affected by them. This attainment was understood as a kind of non-dual state in which one is neither limited to samsara nor nirvana. A being who has reached this kind of nirvana is not restricted from manifesting in the samsaric realms, and yet they remain fully detached from the defilements found in these realms (and thus they can help others). This doctrine of non-abiding nirvana developed in the Yogacara school. As noted by Paul Williams, the idea of apratiṣṭhita nirvāṇa may have taken some time to develop and is not obvious in some of the early Mahāyāna literature; therefore, while earlier sutras may sometimes speak of "postponement", later texts saw no need to postpone the "superior" apratiṣṭhita nirvāṇa.
In this Yogacara model, the bodhisattva definitely rejects and avoids the liberation of the śravaka and pratyekabuddha, described in Mahāyāna literature as either inferior or "hina" (as in Asaṅga's fourth century Yogācārabhūmi) or as ultimately false or illusory (as in the Lotus Sūtra). That a bodhisattva has the option to pursue such a lesser path, but instead chooses the long path towards Buddhahood is one of the five criteria for one to be considered a bodhisattva. The other four are: being human, being a man, making a vow to become a Buddha in the presence of a previous Buddha, and receiving a prophecy from that Buddha. Over time, a more varied analysis of bodhisattva careers developed focused on one's motivation. This can be seen in the Tibetan Buddhist teaching on three types of motivation for generating bodhicitta. According to Patrul Rinpoche's 19th century Words of My Perfect Teacher (Kun bzang bla ma'i gzhal lung), a bodhisattva might be motivated in one of three ways. They are: King-like bodhicitta – To aspire to become a Buddha first in order to then help sentient beings. Boatman-like bodhicitta – To aspire to become a Buddha at the same time as other sentient beings. Shepherd-like bodhicitta – To aspire to become a Buddha only after all other sentient beings have done so. These three are not types of people, but rather types of motivation. According to Patrul Rinpoche, the third quality of intention is most noble though the mode by which Buddhahood occurs is the first; that is, it is only possible to teach others the path to enlightenment once one has attained enlightenment oneself. Bodhisattva stages According to James B. Apple, if one studies the earliest textual materials which discuss the bodhisattva path (which includes the translations of Lokakshema and the Gandharan manuscripts), "one finds four key stages that are demarcated throughout this early textual material that constitute the most basic elements in the path of a bodhisattva". These main elements are: "The arising of the thought of awakening (bodhicittotpāda), when a person first aspires to attain the state of Buddhahood and thereby becomes a bodhisattva" "Endurance towards the fact that things are not produced" (anutpattikadharma-kṣānti) "The attainment of the status of irreversibility" or non-retrogression (avaivartika) from Buddhahood, which means one is close to Buddhahood and that one can no longer turn back or regress from that attainment. They are exemplary monks, with cognitive powers equal to arhats. They practice the four dhyanas, have a deep knowledge of perfect wisdom and teach it to others. In the Lokakshema's Chinese translation of the Aṣṭasāhasrikā, the Daoxing Banruo Jing, this stage is closely related to a concentration (samadhi) that "does not grasp at anything at all" (sarvadharmāparigṛhīta). The prediction (vyākaraṇa), "the event when a Buddha predicts the time and place of a bodhisattva's subsequent awakening." The prediction is directly associated with the status of irreversibility. The Daoxing Banruo Jing states: "all the bodhisattvas who have realized the irreversible stage have obtained their prediction to Buddhahood from the Buddhas in the past." According to Drewes, the Aṣṭasāhasrikā Prajñāpāramitā Sūtra divides the bodhisattva path into three main stages. 
The first stage is that of bodhisattvas who "first set out in the vehicle" (prathamayānasaṃprasthita); then there is the "irreversible" (avinivartanīya) stage; and finally the third stage, "bound by one more birth" (ekajātipratibaddha), as in, destined to become a Buddha in the next life. Lamotte also mentions four similar stages of the bodhisattva career which are found in the Dazhidulun translated by Kumarajiva: (1) Prathamacittotpādika ("who produces the mind of Bodhi for the first time"), (2) Ṣaṭpāramitācaryāpratipanna ("devoted to the practice of the six perfections"), (3) Avinivartanīya (non-regression), (4) Ekajātipratibaddha ("separated by only one lifetime from buddhahood"). Drewes notes that Mahāyāna sūtras mainly depict a bodhisattva's first arising of bodhicitta as occurring in the presence of a Buddha. Furthermore, according to Drewes, most Mahāyāna sūtras "never encourage anyone to become a bodhisattva or present any ritual or other means of doing so." In a similar manner to the nikāya sources, Mahāyāna sūtras also see new bodhisattvas as likely to regress, while irreversible bodhisattvas are seen as quite rare. Thus, according to Drewes, "the Aṣṭasāhasrikā, for instance, states that as many bodhisattvas as there are grains of sand in the Ganges turn back from the pursuit of Buddhahood and that out of innumerable beings who give rise to bodhicitta and progress toward Buddhahood, only one or two will reach the point of becoming irreversible." Drewes also adds that early texts like the Aṣṭasāhasrikā treat bodhisattvas who are beginners (ādikarmika) or "not long set out in the [great] vehicle" with scorn, describing them as "blind", "unintelligent", "lazy" and "weak". Early Mahayana works identify them with those who reject or abandon Mahayana, and they are seen as likely to become śrāvakas (those on the arhat path). Rather than encouraging them to become bodhisattvas, what early Mahayana sutras like the Aṣṭa do is to help individuals determine if they have already received a prediction in a past life, or if they are close to this point. The Aṣṭa provides a variety of methods, including forms of ritual or divination, methods dealing with dreams and various tests, especially tests based on one's reaction to hearing the content of the Aṣṭasāhasrikā itself. The text states that encountering and accepting its teachings means one is close to being given a prediction, and that if one does not "shrink back, cower or despair" from the text, but "firmly believes it", one is either irreversible or close to this stage. Many other Mahayana sutras, such as the Akṣobhyavyūha, Vimalakīrtinirdeśa, Sukhāvatīvyūha, and the Śūraṃgamasamādhi Sūtra, present textual approaches to determine one's status as an advanced bodhisattva. These mainly depend on a person's attitude towards listening to, believing, preaching, proclaiming, copying or memorizing and reciting the sutra, as well as practicing the sutra's teachings. According to Drewes, this claim that merely having faith in Mahāyāna sūtras meant that one was an advanced bodhisattva was a departure from previous Nikaya views about bodhisattvas. It created new groups of Buddhists who accepted each other's bodhisattva status. Some Mahayana texts are more open with their bodhisattva doctrine. The Lotus Sutra famously assures large numbers of people that they will certainly achieve Buddhahood, with few requirements (other than hearing and accepting the Lotus Sutra itself).
The bodhisattva grounds (bhūmis)

According to various Mahāyāna sources, on the way to becoming a Buddha, a bodhisattva proceeds through various stages (bhūmis) of spiritual progress. The term bhūmi means "earth" or "place" and figuratively can mean "ground, plane, stage, level; state of consciousness". There are various lists of bhumis, the most common being a list of ten found in the Daśabhūmikasūtra (but there are also lists of seven stages as well as lists with more than ten stages). The Daśabhūmikasūtra lists the following ten stages:
Great Joy: It is said that being close to enlightenment and seeing the benefit for all sentient beings, one achieves great joy, hence the name. In this bhūmi the bodhisattvas practice all perfections (pāramitās), especially emphasizing generosity (dāna).
Stainless: In accomplishing the second bhūmi, the bodhisattva is free from the stains of immorality; therefore, this bhūmi is named "stainless". The emphasized perfection is moral discipline (śīla).
Luminous: The light of Dharma is said to radiate for others from the bodhisattva who accomplishes the third bhūmi. The emphasized perfection is patience (kṣānti).
Radiant: This bhūmi is said to be like a radiating light that fully burns that which opposes enlightenment. The emphasized perfection is vigor (vīrya).
Very difficult to train: Bodhisattvas who attain this ground strive to help sentient beings attain maturity, and do not become emotionally involved when such beings respond negatively, both of which are difficult to do. The emphasized perfection is meditative concentration (dhyāna).
Obviously Transcendent: By depending on the perfection of wisdom, [the bodhisattva] does not abide in either samsara or nirvana, so this state is "obviously transcendent". The emphasized perfection is wisdom (prajñā).
Gone afar: Particular emphasis is on the perfection of skillful means (upāya), to help others.
Immovable: The emphasized virtue is aspiration. This "immovable" bhūmi is where one becomes able to choose one's place of rebirth.
Good Discriminating Wisdom: The emphasized virtue is the understanding of self and non-self.
Cloud of Dharma: The emphasized virtue is the practice of primordial wisdom. After this bhūmi, one attains full Buddhahood.
In some sources, these ten stages are correlated with a different schema of the Buddhist path called the five paths, which is derived from Vaibhasika Abhidharma sources. The Śūraṅgama Sūtra recognizes 57 stages. Various Vajrayāna schools recognize additional grounds (varying from 3 to 10 further stages), mostly 6 more grounds with variant descriptions. A bodhisattva above the 7th ground is called a mahāsattva. Some bodhisattvas such as Samantabhadra are also said to have already attained Buddhahood.

Sōtō Zen

As part of the Sōtō Zen school of Mahāyāna, Dōgen Zenji described Four Exemplary Acts of a Bodhisattva:
Offering Alms: Not being covetous or greedy.
Kind Speech: Feeling genuine affection for other sentient beings and offering words that are neither harsh nor rude.
Benevolence: Working out skillful methods to benefit sentient beings, be they of low or high station.
Manifesting Sympathy: Not making differences, not treating yourself as different and not treating others as different.

Important Bodhisattvas

Buddhists (especially Mahayanists) venerate several bodhisattvas (such as Maitreya, Manjushri and Avalokiteshvara) who are seen as highly spiritually advanced (having attained the tenth bhumi) and thus possessing immense magical power.
According to Lewis Lancaster, these "celestial" or "heavenly" bodhisattvas are seen as "either the manifestations of a Buddha or they are beings who possess the power of producing many bodies through great feats of magical transformation." The religious devotion to these bodhisattvas probably first developed in north India, and they are widely depicted in Gandharan and Kashmiri art. In Asian art, they are typically depicted as princes and princesses, with royal robes and jewellery (since they are the princes of the Dharma). In Buddhist art, a bodhisattva is often described as a beautiful figure with a serene expression and graceful manner. This is probably in accordance to the description of Prince Siddhārtha Gautama as a bodhisattva. The depiction of bodhisattva in Buddhist art around the world aspires to express the bodhisattva's qualities such as loving-kindness (metta), compassion (karuna), empathetic joy (mudita) and equanimity (upekkha). Literature which glorifies such bodhisattvas and recounts their various miracles remains very popular in Asia. One example of such a work of literature is More Records of Kuan-shih-yin's Responsive Manifestations by Lu Kao (459-532) which was very influential in China. In Tibetan Buddhism, the Maṇi Kambum is a similarly influential text (a revealed text, or terma) which focuses on Chenrezig (Avalokiteshvara, who is seen as the country's patron bodhisattva) and his miraculous activities in Tibet. These celestial bodhisattvas like Avalokiteshvara (Guanyin) are also seen as compassionate savior figures, constantly working for the good of all beings. The Avalokiteshvara chapter of the Lotus Sutra even states that calling Avalokiteshvara to mind can help save someone from natural disasters, demons, and other calamities. It is also supposed to protect one from the afflictions (lust, anger and ignorance). Bodhisattvas can also transform themselves into whatever physical form is useful for helping sentient beings (a god, a bird, a male or female, even a Buddha). Because of this, bodhisattvas are seen as beings that one can pray to for aid and consolation from the sufferings of everyday life as well as for guidance in the path to enlightenment. Thus, the great translator Xuanzang is said to have constantly prayed to Avalokiteshvara for protection on his long journey to India. Eight Main Bodhisattvas In the Tibetan tradition, there are eight bodhisattvas known as the "Eight Great Bodhisattvas", or "Eight Close Sons" (Skt. aṣṭa utaputra; Tib. nyewé sé gyé) and are seen as the main bodhisattvas of Shakyamuni Buddha. These same "Eight Great Bodhisattvas" (Chn. Bādà Púsà, Jp. Hachi Daibosatsu) also appear in East Asian Esoteric Buddhist sources, such as The Sutra on the Maṇḍalas of the Eight Great Bodhisattvas (八大菩薩曼荼羅經), translated by Amoghavajra in the 8th century and Faxian (10th century). 
The Eight Great Bodhisattvas are the following:
Mañjuśrī ("Gentle Glory") Kumarabhuta ("Young Prince"), the main bodhisattva of wisdom
Avalokiteśvara ("Lord who gazes down at the world"), the savior bodhisattva of great compassion
Vajrapāṇi ("Vajra in hand"), the bodhisattva of protection and the protector of the Buddha (in East Asian sources, this figure appears as Mahāsthāmaprāpta)
Maitreya ("Friendly One"), who will become the Buddha of our world in the future
Kṣitigarbha ("Earth Source")
Ākāśagarbha ("Space Source"), also known as Gaganagañja
Sarvanivāraṇaviṣkambhin ("He who blocks the hindrances")
Samantabhadra ("Universal Worthy", or "All Good")

In Theravada

While the veneration of bodhisattvas is much more widespread and popular in the Mahayana Buddhist world, it is also found in Theravada Buddhist regions. Bodhisattvas which are venerated in Theravada lands include Natha Deviyo (Avalokiteshvara), Metteya (Maitreya), Upulvan (i.e. Vishnu), Saman (Samantabhadra) and Pattini. The veneration of some of these figures may have been influenced by Mahayana Buddhism. These figures are also understood as devas that have converted to Buddhism and have sworn to protect it. The recounting of Jataka tales, which discuss the bodhisattva deeds of Gautama before his awakening, also remains a popular practice.

Female Bodhisattvas

The bodhisattva Prajñāpāramitā is a female personification of the perfection of wisdom and the Prajñāpāramitā sutras. She became an important figure, widely depicted in Indian Buddhist art. Bodhisattva is a Sanskrit masculine noun. Female Bodhisattvas do not exist in Indian Buddhist literature, but exist in Tibetan Buddhist literature. Thus only in Tibetan Buddhism does Tara become a female Bodhisattva. Guanyin (Jp: Kannon), a female form of Avalokiteshvara, is the most widely revered bodhisattva in East Asian Buddhism, generally depicted as a motherly figure. Guanyin is venerated in various other forms and manifestations, including Cundī, Cintāmaṇicakra, Hayagriva, Eleven-Headed Thousand-Armed Guanyin and Guanyin of the Southern Seas, among others. Gender-variant representations of some bodhisattvas, most notably Avalokiteśvara, have prompted conversation regarding the nature of a bodhisattva's appearance. Chan master Sheng Yen has stated that Mahāsattvas such as Avalokiteśvara (known as Guanyin in Chinese) are androgynous (Ch. 中性; pinyin: "zhōngxìng"), which accounts for their ability to manifest in masculine and feminine forms of various degrees. In Tibetan Buddhism, Tara or Jetsun Dölma (rje btsun sgrol ma) is the most important female bodhisattva. Numerous Mahayana sutras feature female bodhisattvas as main characters and discuss their life, teachings and future Buddhahood. These include The Questions of the Girl Vimalaśraddhā (Tohoku Kangyur, Toh 84), The Questions of Vimaladattā (Toh 77), The Lion's Roar of Śrīmālādevī (Toh 92), The Inquiry of Lokadhara (Toh 174), The Sūtra of Aśokadattā's Prophecy (Toh 76), The Questions of Vimalaprabhā (Toh 168), The Sūtra of Kṣemavatī's Prophecy (Toh 192), The Questions of the Girl Sumati (Toh 74), The Questions of Gaṅgottara (Toh 75), The Questions of an Old Lady (Toh 171), The Miraculous Play of Mañjuśrī (Toh 96), and The Sūtra of the Girl Candrottarā's Prophecy (Toh 191).

Popular Figures

Over time, numerous historical Buddhist figures also came to be seen as bodhisattvas in their own right, deserving of devotion.
For example, an extensive hagiography developed around Nagarjuna, the Indian founder of the Madhyamaka school of philosophy. Followers of Tibetan Buddhism consider the Dalai Lamas and the Karmapas to be emanations of Chenrezig, the Bodhisattva of Compassion. Various Japanese Buddhist schools consider their founding figures, like Kukai and Nichiren, to be bodhisattvas. In Chinese Buddhism, various historical figures have been called bodhisattvas. Furthermore, various Hindu deities are considered to be bodhisattvas in Mahayana Buddhist sources. For example, in the Kāraṇḍavyūhasūtra, Vishnu, Shiva, Brahma and Saraswati are said to be bodhisattvas, all emanations of Avalokiteshvara. Deities like Saraswati (Chinese: Biàncáitiān, 辯才天, Japanese: Benzaiten) and Shiva (C: Dàzìzàitiān, 大自在天; J: Daikokuten) are still venerated as bodhisattva devas and dharmapalas (guardian deities) in East Asian Buddhism. Both figures are closely connected with Avalokiteshvara. In a similar manner, the Hindu deity Harihara is called a bodhisattva in the famed Nīlakaṇṭha Dhāraṇī, which states: "O Effulgence, World-Transcendent, come, oh Hari, the great bodhisattva." The empress Wu Zetian of the Tang dynasty was the only female ruler of China. She used the growing popularity of Esoteric Buddhism in China for her own political ends. Though she was not the only ruler to have made such a claim, the political utility of her claims, coupled with her apparent sincerity, makes her a notable example. She built several temples, contributed to the finishing of the Longmen Caves, and went on to patronise Buddhism over Confucianism and Daoism. She ruled under the title of "Holy Emperor" and claimed to be a bodhisattva as well. She became one of China's most influential rulers.

Others

Other important bodhisattvas in Mahayana Buddhism include:
Vajrasattva, an important figure in Vajrayana Buddhism
Vimalakirti, the famous lay bodhisattva of the Vimalakīrti Nirdeśa
Akṣayamati, the main character in the influential Akṣayamatinirdeśa Sūtra
Sadāprarudita, a major bodhisattva in the Prajñāpāramitā sutras
Sudhana, the main character of the Gaṇḍavyūha Sutra
The Four Bodhisattvas of the Earth from the Lotus Sutra
Bhaiṣajyarāja or "Medicine King"
Candraprabha ("Moon Light")
Sūryaprabha ("Solar Light")
Jambhala, a bodhisattva of wealth
Mahāsthāmaprāpta, the second attendant bodhisattva to Amitabha (after Avalokiteshvara)
Sitatapatra, who is contemplated as a protector against supernatural danger and is worshipped in both Mahayana and Vajrayana traditions

Fierce bodhisattvas

While bodhisattvas tend to be depicted as conventionally beautiful, there are instances of their manifestation as fierce and monstrous-looking beings. A notable example is Guanyin's manifestation as a preta named "Flaming Face" (面燃大士). This trope is commonly employed among the Wisdom Kings, among whom Mahāmāyūrī Vidyārājñī stands out with a feminine title and benevolent expression. In some depictions, her mount takes on a wrathful appearance. This variation is also found among images of Vajrapani. In Tibetan Buddhism, fierce manifestations (Tibetan: trowo) of the major bodhisattvas are quite common, and they often act as protector deities.

Sacred places

The place of a bodhisattva's earthly deeds, such as the achievement of enlightenment or the acts of Dharma, is known as a bodhimaṇḍa (place of awakening), and may be a site of pilgrimage. Many temples and monasteries are famous as bodhimaṇḍas. Perhaps the most famous bodhimaṇḍa of all is the Bodhi Tree under which Śākyamuni achieved Buddhahood.
There are also sacred places of awakening for bodhisattvas located throughout the Buddhist world. Mount Potalaka, a sacred mountain in India, is traditionally held to be Avalokiteshvara's bodhimaṇḍa. In Chinese Buddhism, there are four mountains that are regarded as bodhimaṇḍas for bodhisattvas, with each site having major monasteries and being popular for pilgrimages by both monastics and laypeople. These four sacred places are:
Mount Putuo for Guanyin (Avalokiteśvara), the bodhisattva of compassion
Mount Emei for Samantabhadra, the bodhisattva of practice
Mount Wutai for Mañjuśrī, the bodhisattva of wisdom
Mount Jiuhua for Kṣitigarbha, the bodhisattva of the great vow

Etymology

The etymology of the Indic terms bodhisattva and bodhisatta is not fully understood. The term bodhi is uncontroversial and means "awakening" or "enlightenment" (from the root budh-). The second part of the compound has many possible meanings or derivations, including: Sattva and satta commonly mean "living being", "sentient being" or "person", and many modern scholars adopt an interpretation based on this etymology. Examples include: "a sentient or reasonable being, possessing bodhi" (H. Kern), "a bodhi-being, i.e. a being destined to attain fullest Enlightenment" (T. W. Rhys Davids and W. Stede), "A being seeking for bodhi" (M. Anesaki), "Erleuchtungswesen" (Enlightenment Being) (M. Winternitz), "Weisheitswesen" ("Wisdom Being") (M. Walleser). This etymology is also supported by the Mahayana Samādhirāja Sūtra, which, however, explains the meaning of the term bodhisattva as "one who admonishes or exhorts all beings." According to Har Dayal, the term bodhi-satta may correspond with the Sanskrit bodhi-sakta, which means "one who is devoted to bodhi" or "attached to bodhi". Later, the term may have been wrongly sanskritized to bodhi-satva. Dayal notes that the Sanskrit term sakta (from sañj) means "clung, stuck or attached to, joined or connected with, addicted or devoted to, fond of, intent on". This etymology for satta is supported by some passages in the Early Buddhist Texts (such as at SN 23.2, parallel at SĀ 122). The etymology is also supported by the Pāli commentaries, Jain sources and other modern scholars like Tillman Vetter and Neumann. Another related possibility, pointed out by K.R. Norman and others, is that satta carries the meaning of śakta, and so bodhisatta means "capable of enlightenment." The Sanskrit term sattva may mean "strength, energy, vigour, power, courage", and therefore bodhisattva could also mean "one whose energy and power is directed towards bodhi". This reading of sattva is found in Ksemendra's Avadanakalpalata. Har Dayal supports this reading, noting that the term sattva is "almost certainly related to the Vedic word satvan, which means 'a strong or valiant man, hero, warrior'", and thus the term bodhisatta should be interpreted as "heroic being, spiritual warrior." Sattva may also mean spirit, mind, sense, consciousness, or geist. Various Indian commentators like Prajñakaramati interpret the term as a synonym for citta (mind, thought) or vyavasāya (decision, determination). Thus, the term bodhisattva could also mean "one whose mind, intentions, thoughts or wishes are fixed on bodhi". In this sense, this meaning of sattva is similar to the meaning it has in the Yoga-sutras, where it means mind. Tibetan lexicographers translate bodhisattva as byang chub (bodhi) sems dpa (sattva). In this compound, sems means mind, while dpa means "hero, strong man" (Skt. vīra).
Thus, this translation combines two possible etymologies of sattva explained above: as "mind" and as "courageous, hero". Chinese Buddhists generally use the term pusa (菩薩), a phonetic transcription of the Sanskrit term. However, early Chinese translators sometimes used a meaning translation of the term bodhisattva, which they rendered as mingshi (明士), which means "a person who understands", reading sattva as "man" or "person" (shi, 士). In Sanskrit, sattva can mean "essence, nature, true essence", and the Pali satta can mean "substance". Some modern scholars interpret bodhisattva in this light, such as Monier-Williams, who translates the term as "one who has bodhi or perfect wisdom as his essence." Gallery See also Bodhicharyavatara (A Guide to the Bodhisattva Way of Life) Bodhisattvas of the Earth Bodhisattva vows Buddhist holidays Junzi Karuna (compassion in Sanskrit) List of bodhisattvas Vegetarianism in Buddhism Concept Of Bodhisattva Citations General references Analayo, The Genesis of the Bodhisattva Ideal, Hamburg Buddhist Studies 1, Hamburg University Press 2010 Dayal, Har (1970). The Bodhisattva Doctrine in Buddhist Sanskrit Literature, Motilal Banarsidass Publ. Gampopa; The Jewel Ornament of Liberation; Snow Lion Publications; Gyatso, Geshe Kelsang Gyatso, The Bodhisattva Vow: A Practical Guide to Helping Others, Tharpa Publications (2nd. ed., 1995) Kawamura, Leslie S. (ed) (1981) The Bodhisattva Doctrine in Buddhism, Wilfrid Laurier University Press, Wilfrid Laurier University, Waterloo, Ontario. Canada. Lampert, K.; Traditions of Compassion: From Religious Duty to Social Activism. Palgrave-Macmillan; Pagel, Ulrich (1992). The Bodhisattvapiṭaka: Its Doctrines, Practices and Their Position in Mahāyāna Literature. Institute of Buddhist Studies. Shantideva: Guide to the Bodhisattva's Way of Life: How to Enjoy a Life of Great Meaning and Altruism, a translation of Shantideva's Bodhisattvacharyavatara with Neil Elliott, Tharpa Publications (2002) Werner, Karel; Samuels, Jeffrey; Bhikkhu Bodhi; Skilling, Peter; Bhikkhu Anālayo, McMahan, David (2013) The Bodhisattva Ideal: Essays on the Emergence of Mahayana. Buddhist Publication Society. ISBN 978-955-24-0396-5 White, Kenneth R.; The Role of Bodhicitta in Buddhist Enlightenment: Including a Translation into English of Bodhicitta-sastra, Benkemmitsu-nikyoron, and Sammaya-kaijo; Lewiston, New York: Edwin Mellen Press, 2005; Williams, Paul (2008). Mahayana Buddhism: The Doctrinal Foundations, Routledge. ; External links The Ethical Discipline of Bodhisattvas, by Geshe Sonam Rinchen (Tibetan Gelug Tradition) Bodhisattva, probably Avalokiteshvara (Guanyin), Northern Qi dynasty, c. 550--60, video, Smarthistory. Archived at ghostarchive.org on 24 May 2022. The 37 Practices of Bodhisattvas online with commentaries. The Thirty-Seven Practices of Bodhisattvas, all-in-one page with memory aids & collection of different versions. Audio recitation of 'The 37 Practices of Bodhisattvas' in MP3 format (Paul & Lee voices). What A Bodhisattva Does: Thirty-Seven Practices by Ngulchu Thogme with slide show format. 
Access to Insight Library: Bodhi's Wheel 409, Arahants, Buddhas and Bodhisattvas by Bhikkhu Bodhi
The Bodhisattva Ideal in Theravāda Theory and Practice by Jeffrey Samuels
Online exhibition analyzing a Korean Bodhisattva sculpture
Buddhanet.net Ksitigarbha Bodhisattva
Sacred Visions: Early Paintings from Central Tibet, fully digitized text from The Metropolitan Museum of Art libraries
British Airways (BA) is the flag carrier of the United Kingdom. It is headquartered in London, England, near its main hub at Heathrow Airport. The airline is the second largest UK-based carrier, based on fleet size and passengers carried, behind easyJet. In January 2011 BA merged with Iberia, creating the International Airlines Group (IAG), a holding company registered in Madrid, Spain. IAG is the world's third-largest airline group in terms of annual revenue and the second-largest in Europe. It is listed on the London Stock Exchange and in the FTSE 100 Index. British Airways is the first passenger airline to have generated more than US$1 billion on a single air route in a year (from 1 April 2017, to 31 March 2018, on the New York-JFK - London-Heathrow route). BA was created in 1974 after a British Airways Board was established by the British government to manage the two nationalised airline corporations, British Overseas Airways Corporation and British European Airways, and two regional airlines, Cambrian Airways and Northeast Airlines. On 31 March 1974, all four companies were merged to form British Airways. However, it marked 2019 as its centenary based on predecessor companies. After almost 13 years as a state company, BA was privatised in February 1987 as part of a wider privatisation plan by the Conservative government. The carrier expanded with the acquisition of British Caledonian in 1987, Dan-Air in 1992, and British Midland International in 2012. It is a founding member of the Oneworld airline alliance, along with American Airlines, the now-defunct Canadian Airlines, Cathay Pacific, and Qantas. The alliance has since grown to become the third-largest, after SkyTeam and Star Alliance. History Proposals to establish a joint British airline, combining the assets of the British Overseas Airways Corporation (BOAC) and British European Airways (BEA), were first raised in 1953 as a result of difficulties in attempts by BOAC and BEA to negotiate air rights through the British colony of Cyprus. Increasingly BOAC was protesting that BEA was using its subsidiary Cyprus Airways to circumvent an agreement that BEA would not fly routes further east than Cyprus, particularly to the increasingly important oil regions in the Middle East. The chairman of BOAC, Miles Thomas, was in favour of a merger as a potential solution to this disagreement and had backing for the idea from the Chancellor of the Exchequer at the time, Rab Butler. However, opposition from the Treasury blocked the proposal. Consequently, it was only following the recommendations of the 1969 Edwards Report that a new British Airways Board, managing both BEA and BOAC, and the two regional British airlines Cambrian Airways based at Cardiff, and Northeast Airlines based at Newcastle upon Tyne, was constituted on 1 April 1972. Although each airline's branding was maintained initially, two years later the British Airways Board unified its branding, effectively establishing British Airways as an airline on 31 March 1974. Following two years of fierce competition with British Caledonian, the second-largest airline in the United Kingdom at the time, the Government changed its aviation policy in 1976 so that the two carriers would no longer compete on long-haul routes. British Airways and Air France operated the supersonic Concorde airliner, and the world's first supersonic passenger service flew on 21 January 1976 from Heathrow Airport to Bahrain International Airport. Services to the U.S. 
began on 24 May 1976 with a flight to Washington Dulles airport, and flights to New York JFK airport followed on 22 September 1977. Service to Singapore was established in co-operation with Singapore Airlines as a continuation of the flight to Bahrain. Following the Air France Concorde crash in Paris and a slump in air travel following the 11 September attacks in New York in 2001, it was decided to cease Concorde operations in 2003 after 27 years of service. The final commercial Concorde flight was BA002 from New York-JFK to London-Heathrow on 24 October 2003. In 1981 the airline was instructed to prepare for privatisation by the Conservative Thatcher government. Sir John King, later Lord King, was appointed chairman, charged with bringing the airline back into profitability. While many other large airlines struggled, King was credited with transforming British Airways into one of the most profitable air carriers in the world. The flag carrier was privatised and was floated on the London Stock Exchange in February 1987. British Airways effected the takeover of the UK's "second" airline, British Caledonian, in July of that same year. The formation of Richard Branson's Virgin Atlantic in 1984 created a competitor for BA. The intense rivalry between British Airways and Virgin Atlantic culminated in the former being sued for libel in 1993, arising from claims and counterclaims over a "dirty tricks" campaign against Virgin. This campaign included allegations of poaching Virgin Atlantic customers, tampering with private files belonging to Virgin, and undermining Virgin's reputation in the city. As a result of the case BA management apologised "unreservedly", and the company agreed to pay £110,000 in damages to Virgin, £500,000 to Branson personally and £3 million legal costs. Lord King stepped down as chairman in 1993 and was replaced by his deputy, Colin Marshall, while Bob Ayling took over as CEO. Virgin filed a separate action in the U.S. that same year regarding BA's domination of the trans-Atlantic routes, but it was thrown out in 1999. In 1992 British Airways expanded through the acquisition of the financially troubled Dan-Air, giving BA a much larger presence at Gatwick Airport. British Asia Airways, a subsidiary based in Taiwan, was formed in March 1993 to operate between London and Taipei. That same month BA purchased a 25% stake in the Australian airline Qantas and, with the acquisition of Brymon Airways in May, formed British Airways Citiexpress (later BA Connect). In September 1998, British Airways, along with American Airlines, Cathay Pacific, Qantas, and Canadian Airlines, formed the Oneworld airline alliance. Oneworld began operations on 1 February 1999, and is the third-largest airline alliance in the world, behind SkyTeam and Star Alliance. Bob Ayling's leadership led to a cost savings of £750m and the establishment of a budget airline, Go, in 1998. The next year, however, British Airways reported an 84% drop in profits in its first quarter alone, its worst in seven years. In March 2000, Ayling was removed from his position and British Airways announced Rod Eddington as his successor. That year, British Airways and KLM conducted talks on a potential merger, reaching a decision in July to file an official merger plan with the European Commission. The plan fell through in September 2000. British Asia Airways ceased operations in 2001 after BA suspended flights to Taipei. Go was sold to its management and the private equity firm 3i in June 2001. 
Eddington would make further workforce cuts due to reduced demand following the 11 September attacks in 2001, and BA sold its stake in Qantas in September 2004. In 2005 Willie Walsh, managing director of Aer Lingus and a former pilot, became the chief executive officer of British Airways. BA unveiled its new subsidiary OpenSkies in January 2008, taking advantage of the liberalisation of transatlantic traffic rights between Europe and the United States. OpenSkies flies non-stop from Paris to New York's JFK and Newark airports. In July 2008, British Airways announced a merger plan with Iberia, another flag carrier airline in the Oneworld alliance, wherein each airline would retain its original brand. The agreement was confirmed in April 2010, and in July the European Commission and US Department of Transportation permitted the merger and allowed the airlines to co-ordinate transatlantic routes with American Airlines. On 6 October 2010 the alliance between British Airways, American Airlines and Iberia formally began operations. The alliance generates an estimated £230 million in annual cost savings for BA, in addition to the £330 million which would be saved by the merger with Iberia. This merger was finalised on 21 January 2011, resulting in the International Airlines Group (IAG), the world's third-largest airline group in terms of annual revenue and the second-largest airline group in Europe. Prior to merging, British Airways owned a 13.5% stake in Iberia, and thus received ownership of 55% of the combined International Airlines Group; Iberia's other shareholders received the remaining 45%. As a part of the merger, British Airways ceased trading independently on the London Stock Exchange after 23 years as a constituent of the FTSE 100 Index. In September 2010 Willie Walsh, now CEO of IAG, announced that the group was considering acquiring other airlines and had drawn up a shortlist of twelve possible acquisitions. In November 2011 IAG announced an agreement in principle to purchase British Midland International from Lufthansa. A contract to purchase the airline was agreed the next month, and the sale was completed for £172.5 million on 30 March 2012. The airline established a new subsidiary based at London City Airport operating Airbus A318s. British Airways was the official airline partner of the London 2012 Olympic Games. On 18 May 2012 it flew the Olympic flame from Athens International Airport to RNAS Culdrose while carrying various dignitaries, including Lord Sebastian Coe, Princess Anne, the Olympics minister Hugh Robertson and the London Mayor Boris Johnson, along with the footballer David Beckham. On 27 May 2017, British Airways suffered a computer power failure. All flights were cancelled and thousands of passengers were affected. By the following day, the company had not succeeded in re-establishing the normal function of its computer systems. When asked by reporters for more information on the ongoing problems, British Airways stated "The root cause was a power supply issue which affected our IT systems – we continue to investigate this" and declined to comment further. Willie Walsh later attributed the crash to an electrical engineer disconnecting the UPS and said there would be an independent investigation. Amidst the decline in the value of Iranian currency due to the reintroduction of U.S. sanctions on Iran, BA announced that its Iranian route was "not commercially viable". As a result, BA decided to stop its services to Iran, effective 22 September 2018.
In 2018, British Airways partnered with British tailor and designer Ozwald Boateng to redesign the company's historic uniforms in honour of its approaching centenary, creating a new look for BA while adhering to its traditional style. The new collection, "A British Original", was launched in 2023. This design initiative also included English bone china manufactured by William Edwards and cutlery by Studio William for the company's first class service. In 2019, as part of the celebrations of the centenary of airline operations in the United Kingdom, British Airways announced that four aircraft would receive retro liveries. The first of these was a Boeing 747-400 (G-BYGC), which was repainted in the former BOAC livery and retained it until its retirement. Two more Boeing 747-400s were repainted with former British Airways liveries: one (G-BNLY) wore the "Landor" livery until its retirement in 2020, and the other (G-CIVB) wore the original "Union Jack" livery until its retirement, also in 2020. An Airbus A319, which is still flying as G-EUPJ, was repainted in the British European Airways livery. On 28 April 2020, the company set out plans to make up to 12,000 staff redundant because of the global collapse of air traffic due to the COVID-19 pandemic, and said that it might not reopen its operations at Gatwick Airport. In July 2020, British Airways announced the immediate retirement of its entire 747-400 fleet, having originally intended to phase out the remaining 747s in 2024. The airline stated that its decision to bring forward the date was in part due to the downturn in air travel following the COVID-19 pandemic and to focus on incorporating more modern and fuel-efficient aircraft such as the Airbus A350 and Boeing 787. At the same time, British Airways also announced its intention to eliminate carbon emissions by 2050. On 28 July 2020, the company's cabin crew union issued an "industrial action" warning in order to prevent the 12,000 job cuts and pay cuts. On 12 October 2020, it was announced that Sean Doyle, CEO of Aer Lingus (also part of the IAG airline group), would succeed Álex Cruz as CEO.

Corporate affairs

Operations

British Airways is the largest airline based in the United Kingdom in terms of fleet size, international flights, and international destinations and was, until 2008, the largest airline by passenger numbers. The airline carried 34.6 million passengers in 2008, but rival carrier easyJet transported 44.5 million passengers that year, passing British Airways for the first time. British Airways holds a United Kingdom Civil Aviation Authority Type A Operating Licence, which permits it to carry passengers, cargo, and mail on aircraft with 20 or more seats. The airline's head office, Waterside, stands in Harmondsworth, a village near Heathrow Airport. Waterside was completed in June 1998 to replace British Airways' previous head office, Speedbird House, located in Technical Block C on the grounds of Heathrow. British Airways' main base is at Heathrow Airport, but it also has a major presence at Gatwick Airport. It also has a base at London City Airport, where its subsidiary BA Cityflyer is the largest operator. BA had previously operated a significant hub at Manchester Airport. Manchester to New York (JFK) services were withdrawn; later, all international services outside London ceased when the subsidiary BA Connect was sold. Passengers wishing to travel internationally with BA either to or from regional UK destinations must now transfer in London.
Heathrow Airport is dominated by British Airways, which owns 50% of the slots available at the airport as of 2019, up from 40% in 2004. The majority of BA services operate from Terminal 5, with the exception of some flights at Terminal 3 owing to insufficient capacity at Terminal 5. At London City Airport, the company owns 52% of the slots as of 2019. In August 2014, Willie Walsh advised that the airline would continue to use flight paths over Iraq despite the hostilities there. A few days earlier Qantas had announced it would avoid Iraqi airspace, and other airlines did likewise. The issue arose following the downing of Malaysia Airlines Flight 17 over Ukraine, and a temporary suspension of flights to and from Ben Gurion Airport during the 2014 Israel–Gaza conflict.

Subsidiaries

Over its history, BA has had many subsidiaries. In addition to the below, British Airways also owned Airways Aero Association, the operator of the British Airways flying club based at Wycombe Air Park in High Wycombe, until it was sold to Surinder Arora in 2007.

Franchises

Shareholdings

British Airways obtained a 15% stake in UK regional airline Flybe from the sale of BA Connect in March 2007. It sold the stake in 2014. BA also owned a 10% stake in InterCapital and Regional Rail (ICRR), the company that managed the operations of Eurostar (UK) Ltd from 1998 to 2010, when the management of Eurostar was restructured.

Business trends

The key trends for the British Airways PLC Group are shown below. On the merger with Iberia, the accounting reference date was changed from 31 March to 31 December; figures are therefore for the years to 31 March up to 2010, for the nine months to 31 December 2010, and for the years to 31 December thereafter. In 2020, due to the crisis caused by the COVID-19 pandemic, British Airways had to reduce its 42,000-strong workforce by 12,000 jobs. According to an estimate by IAG, its parent company, it would take the air travel industry several years to return to previous performance and profitability levels. However, 2022 saw a dramatic increase in travel, and the company then faced a worker shortage, forcing it to cancel more than 1,500 flights. In February 2023, International Airlines Group, the owner of British Airways, announced that the group had returned to profit for the first time since the pandemic, posting an annual profit of €1.3 billion following a €2.8 billion loss in 2021. The company warned that the surge in demand for flying could lead to more disruption.

Industrial relations

Staff working for British Airways are represented by a number of trade unions: pilots are represented by the British Air Line Pilots' Association, cabin crew by the British Airlines Stewards and Stewardesses Association (a branch of Unite the Union), and other employees by other branches of Unite the Union and the GMB Union. Bob Ayling's management faced strike action by cabin crew over a £1 billion cost-cutting drive to return BA to profitability in 1997; this was the last time BA cabin crew would strike until 2009, although staff morale has reportedly been unstable since that incident. In an effort to increase interaction between management, employees, and the unions, various conferences and workshops have taken place, often with thousands in attendance.
In 2005, wildcat action was taken by union members over a decision by Gate Gourmet not to renew the contracts of 670 workers and to replace them with agency staff; it is estimated that the strike cost British Airways £30 million and caused disruption to 100,000 passengers. In October 2006, BA became involved in a civil rights dispute when a Christian employee was forbidden to wear a necklace bearing the cross, a religious symbol. BA's practice of forbidding such symbols has been publicly questioned by British politicians such as the former Home Secretary John Reid and the former Foreign Secretary Jack Straw. Relations have been turbulent between BA and Unite. In 2007, cabin crew threatened strike action over salary changes to be imposed by BA management. The strike was called off at the last minute, with British Airways losing £80 million. In December 2009, a ballot for strike action over Christmas received a high level of support, but the action was blocked by a court injunction that deemed the ballot illegal. Negotiations failed to stop strike action in March, and BA withdrew perks from strike participants. Allegations were made by The Guardian newspaper that BA had consulted outside firms on methods to undermine the unions; the story was later withdrawn. A strike was announced for May 2010, and British Airways again sought an injunction. Negotiations between BA management and Unite aimed at preventing industrial action were disrupted by members of the Socialist Workers Party. Further disruption struck when Derek Simpson, a Unite co-leader, was discovered to have leaked details of confidential negotiations online via Twitter. Industrial action re-emerged in 2017, this time by BA's Mixed Fleet flight attendants, who were employed on much less favourable pay and terms and conditions than cabin staff who had joined prior to 2010. A ballot for industrial action was distributed to Mixed Fleet crew in November 2016 and resulted in an overwhelming majority in favour of industrial action. Unite described Mixed Fleet crew as being on "poverty pay", with many Mixed Fleet flight attendants sleeping in their cars between shifts because they could not afford the fuel to drive home, or operating while sick because they could not afford to call in sick and lose their pay for the shift. Unite also criticised BA for removing staff travel concessions, bonus payments and other benefits from all cabin crew who undertook industrial action, as well as for strike-breaking tactics such as wet-leasing aircraft from other airlines and offering financial incentives for cabin crew not to strike. The first dates of strikes during Christmas 2016 were cancelled due to pay negotiations. Industrial action by Mixed Fleet commenced in January 2017 after the crew rejected a pay offer. Strike action continued throughout 2017 in numerous discontinuous periods, resulting in one of the longest-running disputes in aviation history. On 31 October 2017, after 85 days of discontinuous industrial action, Mixed Fleet accepted a new pay deal from BA, which ended the dispute. Senior Leadership Chairman: Sean Doyle (since April 2021) Chief Executive: Sean Doyle (since October 2020) List of Former Chairmen Sir David Nicolson (1972–1975) The Lord McFadzean (1976–1979) Sir Ross Stainton (1979–1980) The Lord King (1981–1993) The Lord Marshall (1993–2004) Sir Martin Broughton (2004–2013) Keith Williams (2013–2016) Álex Cruz (2016–2021) List of Former Chief Executives The position was formed in 1977. 
Sir Ross Stainton (1977–1979) Sir Roy Watts (1979–1983) The Lord Marshall (1983–1995) Bob Ayling (1996–2000) Sir Rod Eddington (2000–2005) Willie Walsh (2005–2010) Keith Williams (2011–2016) Álex Cruz (2016–2020) Destinations British Airways serves over 170 destinations in 70 countries, including eight domestic destinations and 27 in the United States. Alliances British Airways co-founded the airline alliance Oneworld in 1999 with American Airlines, Cathay Pacific and Qantas, and remains a member of the alliance. Codeshare agreements British Airways codeshares with the following airlines: Aer Lingus airBaltic Alaska Airlines American Airlines Bangkok Airways Cathay Pacific China Eastern Airlines China Southern Airlines Finnair Iberia Japan Airlines Kenya Airways LATAM Brasil LATAM Chile Loganair Malaysia Airlines Qantas Qatar Airways Royal Jordanian S7 Airlines TAAG Angola Airlines Vueling Fleet British Airways operates a fleet of 253 aircraft, with 47 on order. BA operates a mix of Airbus narrow-body and wide-body aircraft, and Boeing wide-body aircraft, specifically the 777 and 787. In October 2020, British Airways retired its fleet of 747-400 aircraft. It was one of the largest operators of the 747, having previously operated the -100, -200, and -400 variants from 1974 (1969 with BOAC). British Airways Engineering The airline has its own engineering branch to maintain its aircraft fleet; this includes line maintenance at over 70 airports around the world. As well as hangar facilities at Heathrow and Gatwick airports, it has two major maintenance centres at Glasgow and Cardiff airports. Marketing Branding The musical theme predominantly used on British Airways advertising has been "The Flower Duet" by Léo Delibes. This was first used in a 1984 advertisement directed by Tony Scott, in an arrangement by Howard Blake. It was reworked by Malcolm McLaren and Yanni for 1989's iconic "Face" advertisement, and subsequently appeared in many different arrangements between 1990 and 2010. The slogan "the world's favourite airline", first used in 1983, was dropped in 2001 after Lufthansa overtook BA in terms of passenger numbers. Other advertising slogans have included "The World's Best Airline", "We'll Take More Care of You", "Fly the Flag", and "To Fly, To Serve". BA had an account for 23 years with Saatchi & Saatchi, an agency that created many of its most famous advertisements, including "The World's Biggest Offer" and the influential "Face" campaign. Saatchi & Saatchi later imitated this advert for Silverjet, a rival of BA, after BA discontinued their business activities. Since 2007, BA has used Bartle Bogle Hegarty as its advertising agency. In October 2022, BA launched a brand new ad campaign, titled "A British Original", produced by London-based Uncommon Creative Studio. This was to be another record-breaking campaign for its use of 500 unique executions along with a series of 32 short films, coinciding with the launch of Ozwald Boateng's new collection of uniforms. British Airways purchased the internet domain ba.com in 2002 from previous owner Bell Atlantic, "BA" being the company's acronym and its IATA airline code. British Airways is the official airline of the Wimbledon Championships tennis tournament, and was the official airline and tier one partner of the 2012 Summer Olympics and Paralympics. BA was also the official airline of England's bid to host the 2018 Football World Cup. High Life, founded in 1973, is the official in-flight magazine of the airline. 
Safety video The airline used a cartoon safety video from circa 2005 until 2017. Beginning on 1 September 2017, the airline introduced a new Comic Relief live-action safety video hosted by Chabuddy G, with appearances by British celebrities Gillian Anderson, Rowan Atkinson, Jim Broadbent, Rob Brydon, Warwick Davis, Chiwetel Ejiofor, Ian McKellen, Thandie Newton, and Gordon Ramsay. A "sequel" video, also hosted by Chabuddy G, was released in 2018, with Michael Caine, Olivia Colman, Jourdan Dunn, Naomie Harris, Joanna Lumley, and David Walliams. The two videos are part of Comic Relief's charity programme. On 17 April 2023, the airline launched a new safety video as part of the "A British Original" campaign, featuring Emma Raducanu, Robert Peston, Little Simz, and Steven Bartlett. Liveries, logos, and tail fins The aeroplanes that British Airways inherited from the four-way merger between BOAC, BEA, Cambrian, and Northeast were temporarily given the text logo "British airways" but retained the original airline's livery. With its formation in 1974, British Airways' aeroplanes were given a new white, blue, and red colour scheme with a cropped Union Jack painted on their tail fins, designed by Negus & Negus. In 1984, a new livery designed by Landor Associates updated the airline's look as it prepared for privatisation. To celebrate its centenary, BA announced four retro liveries: three on Boeing 747-400 aircraft (one in each of the BOAC, Negus & Negus, and Landor Associates liveries), and one A319 in BEA livery. In 1997, there was a controversial change to a new Project Utopia livery; all aircraft used the corporate colours consistently on the fuselage, but tailfins bore one of multiple designs. Several people spoke out against the change, including the former prime minister Margaret Thatcher, who famously covered the tail of a model 747 at an event with a handkerchief to show her displeasure. BA's traditional rival, Virgin Atlantic, took advantage of the negative press coverage by applying the Union flag to the winglets of its aircraft along with the slogan "Britain's national flagcarrier". In 1999, the CEO of British Airways, Bob Ayling, announced that all BA planes would adopt the Chatham Dockyard Union Flag tailfin design, based on the Union Flag and originally intended to be used only on Concorde. All BA aircraft have since borne the Chatham Dockyard Union Flag variant of the Project Utopia livery, except for the four retro aircraft. Arms In 2011, British Airways undertook a brand relaunch project, in which it introduced a stylised, metallised version of its arms by For People Design to be used alongside its Speedmarque logo. This is used exclusively on aircraft, the First Wing lounge and advertisements. Loyalty programmes British Airways' tiered loyalty programme, called the Executive Club, includes access to special lounges and dedicated "fast" queues. The programme consists of six tiers: Blue, Bronze, Silver, Gold, Gold guest list and Premier. BA invites its top corporate accounts to join the "Premier" incentive programme. The programme incentivises its members to fly with BA by awarding them Avios and Tier Points. Avios is the spending currency, which can be redeemed. Tier Points are a score used to determine the member's tier, and cannot be redeemed. Tier Points are reset at the end of each membership year, while Avios are retained for three years. British Airways operates airside lounges for passengers travelling in premium cabins, and these are available to certain tiers of Executive Club members. 
First class passengers, as well as Gold Executive Club members, are entitled to use First Class lounges. Business class passengers (called Club World or Club Europe in BA terms), as well as Silver Executive Club members, may use Business lounges. At airports where BA does not operate a departure lounge, a third-party lounge is often provided for premium or status passengers. Members of the programme were also granted status within the Oneworld alliance, which permitted similar benefits when flying with Oneworld member airlines. The level of benefits was determined by the member's tier. In 2011, due to the merger with Iberia, British Airways announced changes to the Executive Club to maximise integration between the airlines. This included the combination and rebranding of Air Miles, BA Miles and Iberia Plus points as the IAG-operated loyalty programme Avios. Inflight magazines High Life magazine is British Airways' complimentary inflight magazine. It is available to all customers across all cabins and aircraft types. High Life Shop magazine is British Airways' inflight shopping magazine. It is available to all customers on all aircraft where the inflight shopping range can be carried. First Life is a complimentary magazine offered to all customers travelling in the First cabin. It has a range of articles including fashion, trends and technology, aimed at an upmarket target audience. Business Life is a complimentary magazine targeted at business travellers and frequent flyers. The magazine can be found in all short-haul aircraft seat pockets, in the magazine selection for Club World customers and in lounges operated by British Airways. Cabins and services Short haul Economy class Euro Traveller is British Airways' economy class cabin on all short-haul flights within Europe, including domestic flights within the UK. Heathrow- and Gatwick-based flights are operated by Airbus A320-series aircraft. Standard seat pitch varies from 29" to 34" depending on aircraft type and location of the seat. All flights from Heathrow and Gatwick have a buy on board system with a range of food designed by Tom Kerridge. Food can be pre-ordered through the British Airways mobile application. Alternatively, a limited selection can be purchased on board using a credit or debit card or by using Frequent Flyer Avios points. British Airways is rolling out Wi-Fi across its fleet of aircraft, with 90% expected to be Wi-Fi enabled by 2020. Scheduled services operated by BA Cityflyer currently offer complimentary onboard catering. The service will switch to buy on board in the future. Business class Club Europe is the short-haul business class available on all short-haul flights. This class allows for access to business lounges at most airports and complimentary onboard catering. The middle seat of the standard Airbus-configured cabin is left free; instead, a cocktail table folds up from under the middle seat on refurbished aircraft. Pillows and blankets are available on longer flights. Mid-haul and long haul First class First is offered on all Airbus A380s, Boeing 777-300ERs, Boeing 787-9/10s and on some Boeing 777-200ERs. There are between eight and fourteen private suites depending on the aircraft type. Each First suite comes with a bed, a wide entertainment screen, in-seat power and complimentary Wi-Fi access on select aircraft. The exclusive Concorde Room lounge at Heathrow Terminal 5 offers pre-flight dining with waiter service and a more intimate space. 
Dedicated British Airways 'Galleries First' lounges are available at some airports, and Business lounges are used where these are not available. Some feature a 'First Dining' section where passengers holding a first class ticket can access a pre-flight dining service. Club World Club World is the long-haul business class cabin. It is offered on all long-haul aircraft. The cabin features fully convertible flat bed seats. In March 2019, BA unveiled its new business-class seats on the new A350 aircraft, which feature a suite with a door. Since the unveiling, Club Suite has been installed on the Boeing 787-10 and retrofitted on some Boeing 777 cabins. The remaining aircraft are due to have their seats re-fitted over the coming years and they currently feature an older seat type, initially released in 2006. World Traveller Plus World Traveller Plus is the premium economy class cabin provided on all BA long haul aircraft. This cabin offers wider seats, extended leg-room, additional seat comforts such as larger IFE screen, a foot rest and power sockets. A complimentary 'World Traveller' bar is offered along with an upgraded main meal. World Traveller World Traveller is the mid-haul and long-haul economy class cabin. It offers seat-back entertainment, complimentary food and drink, pillows, and blankets. While the in-flight entertainment screens are available on all long-haul aircraft, international power outlets are available on the aircraft based at Heathrow. Wifi is also available on selected aircraft at an extra fee. Incidents and accidents British Airways is known to have a strong reputation for safety and has been consistently ranked within the top 20 safest airlines globally according to Business Insider and AirlineRatings.com. Since BA's inception in 1974, it has been involved in three hull-loss incidents (British Airways Flight 149 was destroyed on the ground at Kuwait International Airport as a result of military action during the First Gulf War with no one on board) and two hijacking attempts. To date, the only fatal accident experienced by a BA aircraft occurred in 1976 with British Airways Flight 476 which was involved in a midair collision later attributed to an error made by air traffic control. On 22 November 1974, British Airways Flight 870 was hijacked shortly after take-off from Dubai International Airport for London-Heathrow. The Vickers VC10 landed at Tripoli for refuelling before flying on to Tunis. The captain, Jim Futcher, returned to the aircraft to fly it knowing the hijackers were on board. A hostage, 43-year-old German banker Werner Gustav Kehl, was shot in the back. The hijackers eventually surrendered after 84 hours. Futcher was awarded the Queen's Gallantry Medal, the Guild of Air Pilots and Air Navigators Founders Medal, the British Air Line Pilots Association Gold Medal and a Certificate of Commendation from British Airways for his actions during the hijacking. On 10 September 1976, a Trident 3B on British Airways Flight 476 departed from London-Heathrow to Istanbul. It collided in mid-air with an Inex Adria DC9-31 near Zagreb. All 54 passengers and 9 crew members on the BA aircraft died. This is the only fatal accident to a British Airways aircraft since the company's formation in 1974. On 24 June 1982, British Airways Flight 9, a Boeing 747-200 registration G-BDXH, flew through a cloud of volcanic ash and dust from the eruption of Mount Galunggung. The ash and dust caused extensive damage to the aircraft, including the failure of all four engines. 
The crew managed to glide the plane out of the dust cloud and restart all four of its engines, although one later had to be shut down again. The volcanic ash scratched the cockpit windows to such an extent that it was difficult for the pilots to see out of the plane. However, the aircraft made a successful emergency landing at Halim Perdanakusuma International Airport just outside Jakarta. There were no fatalities or injuries. On 10 June 1990, British Airways Flight 5390, a BAC One-Eleven flight between Birmingham and Málaga, suffered a windscreen blowout due to the fitting of incorrect bolts the previous day. The captain sustained major injuries after being partially blown out of the aircraft, but the co-pilot landed the plane safely at Southampton Airport. On 2 August 1990, British Airways Flight 149 landed at Kuwait International Airport four hours after the Iraqi invasion of Kuwait. The aircraft, a Boeing 747-100 registered G-AWND, was destroyed, and all passengers and crew were captured. Two of the landing gear legs were salvaged and are on display at Waterside, BA's headquarters, in London. On 29 December 2000, British Airways Flight 2069 was en route from London to Nairobi when a mentally ill passenger entered the cockpit and grabbed the controls. As the pilots struggled to remove the intruder, the Boeing 747-400 stalled twice and banked to 94 degrees. Several people on board were injured by the violent manoeuvres, which briefly caused the aircraft to descend at 30,000 ft per minute. The man was finally restrained with the help of several passengers, and the co-pilot regained control of the aircraft. The flight landed safely in Nairobi. On 17 January 2008, British Airways Flight 38, a Boeing 777-200ER registered G-YMMM, from Beijing to London, crash-landed short of Heathrow Airport's runway 27L and slid onto the runway's displaced threshold. The aircraft sustained damage to its landing gear, wing roots, and engines, resulting in the first hull loss of a Boeing 777. There were no fatalities, but there was one serious injury and 12 minor injuries. The accident was caused by icing in the fuel system, resulting in a loss of power. On 24 May 2013, British Airways Flight 762, using an Airbus A319-131 registered G-EUOE, returned to Heathrow Airport after fan cowl doors detached from both engines shortly after takeoff. During the approach, a fire broke out in the right engine and persisted after the engine was shut down. The aircraft landed safely with no injuries to the 80 people on board. The accident report revealed that the cowlings had been left unlatched following overnight maintenance. The separation of the doors caused airframe damage, and the right-hand engine fire resulted from a ruptured fuel pipe. On 22 December 2013, British Airways Flight 34, a Boeing 747-436 registered G-BNLL, hit a building at O. R. Tambo International Airport in Johannesburg after missing a turning on a taxiway. The starboard wing was severely damaged but there were no injuries amongst the crew or 189 passengers; however, four members of ground staff were injured when the wing smashed into the building. The aircraft was officially withdrawn from service in February 2014. On 8 September 2015, British Airways Flight 2276, a Boeing 777-236ER registered G-VIIO, aborted its takeoff at Las Vegas' McCarran International Airport due to an uncontained failure of its left (#1) General Electric GE90 engine, which led to a substantial fire. The aircraft was evacuated on the main runway. 
All 157 passengers and 13 crew escaped the aircraft, with at least 14 people sustaining minor injuries. Between 21 August 2018 and 5 September 2018, hackers carried out a "sophisticated, malicious criminal attack" on the website of the airline. Around 380,000 transactions were affected by this web skimming attack. The company was subsequently fined £183 million (1.5% of turnover) in July 2019 by the Information Commissioner's Office, the highest fine handed down by the ICO up to that time. On 18 June 2021, a British Airways Boeing 787-8, G-ZBJB, had a nose landing gear collapse while on the tarmac at Heathrow Airport. A British Airways spokesperson confirmed that no passengers were on board the plane when the incident occurred. On 6 July 2022, British Airways Flight 820, an Airbus A320-232, caught fire as it was landing at Copenhagen Airport. Airport firefighters put out the fire, using foam, and people in the terminal buildings recorded footage of the incident. The plane was ferried back to London Heathrow Airport on 9 July. See also Air transport in the United Kingdom Plane Saver Credit Union Transport in the United Kingdom List of airlines of the United Kingdom External links British Airways Heritage Collection
A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle with two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist. Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles in existence. These numbers far exceed the number of cars, both in total and ranked by the number of individual models produced. They are the principal means of transportation in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys, general fitness, military and police applications, courier services, bicycle racing, and bicycle stunts. The basic shape and configuration of a typical upright or "safety bicycle" has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular. The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets and tension-spoked wheels. Etymology The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time. Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is 0x1F6B2; the HTML entity &#x1F6B2; produces 🚲. Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power, whereas the term bike describes a two-wheeler powered by an internal combustion engine or electric motor, in place of motorcycle/motorbike. History The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle, and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel. The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings. 
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle design to be mass-produced. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory. The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry, is usually described as the first recognizably modern bicycle. Soon the seat tube was added, creating the modern bike's double-triangle diamond frame. Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders. The Svea Velocipede, with vertical pedal arrangement and locking hubs, was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units. In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles, such as the horse and buggy or the horsecar. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. 
By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year. Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motorized, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has had more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting. Uses Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries. They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX. Technical aspects The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort. Types Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles. Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles". Dynamics A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself. The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle. Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. 
The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie. Performance The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation. A human traveling on a bicycle at low to medium speeds uses only about the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is . In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy-efficient motorcars. Parts Frame The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends. Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames. Step-throughs were popular partly for practical reasons and partly for social mores of the day. 
For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle. Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle, but this type was banned from competition in 1934 by the Union Cycliste Internationale. Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength-to-weight ratio. A typical modern carbon fiber frame can weigh less than . Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness, has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models. Drivetrain and gearing The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex. Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 11 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer; a short illustrative calculation is sketched below. An alternative to chaindrive is to use a synchronous belt. These are toothed and work much the same as a chain—popular with commuters and long-distance cyclists, they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear. 
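To make the gearing arithmetic above concrete, the short Python sketch below multiplies chainring positions by sprocket positions to get the theoretical gear count and then lists every ratio. The specific tooth counts (a 50/34 "compact" double paired with an 11–28, nine-sprocket cassette) are illustrative assumptions, not figures taken from this article.

```python
# Illustrative derailleur gearing arithmetic; all tooth counts are assumptions.
chainrings = [50, 34]                                # front chainrings (assumed compact double)
sprockets = [11, 13, 15, 17, 19, 21, 23, 25, 28]     # rear cassette (assumed 9-speed)

# Theoretical gear count: front positions multiplied by rear positions.
print("Theoretical gears:", len(chainrings) * len(sprockets))   # 2 x 9 = 18

# Gear ratio = chainring teeth / sprocket teeth; a higher ratio is a "harder", faster gear.
ratios = sorted((front / rear, front, rear)
                for front in chainrings
                for rear in sprockets)
for ratio, front, rear in ratios:
    print(f"{front}x{rear}: ratio {ratio:.2f}")

# Several combinations coincide or nearly coincide (e.g. 50x25 and 34x17 are both 2.00),
# which is why the number of usefully distinct gears is smaller than the theoretical count.
print("Roughly distinct ratios:", len({round(r, 1) for r, _, _ in ratios}))
```

On this assumed setup the 18 theoretical combinations collapse to roughly fifteen usefully distinct ratios, and the extreme combinations would also run the chain diagonally, illustrating the point above that usable gears are fewer than the headline figure.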
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals. With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: two-speed hub gear integrated with chain ring, up to 3 chain rings, up to 11 sprockets, hub gear built into rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common). Steering The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of sweep backward and centimeters rise upwards, as well as wider widths which can provide better handling due to increased leverage against the wheel. Seating Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle. A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering. 
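Returning to the gearing discussion above, the relationship between pedalling cadence, gear selection and road speed can also be sketched in a few lines of Python. The wheel size and cadence used here are assumed round numbers chosen only for illustration, not values taken from this article.

```python
# Road speed from cadence and gear selection; all numbers are illustrative assumptions.
import math

WHEEL_DIAMETER_M = 0.7                        # roughly a 700c road wheel with tyre (assumption)
CIRCUMFERENCE_M = math.pi * WHEEL_DIAMETER_M

def speed_kmh(cadence_rpm: float, chainring: int, sprocket: int) -> float:
    """Speed = cadence x (chainring / sprocket) x wheel circumference."""
    wheel_rpm = cadence_rpm * (chainring / sprocket)
    return wheel_rpm * CIRCUMFERENCE_M * 60 / 1000

cadence = 90  # a common comfortable cadence in revolutions per minute (assumption)
print(f"Low gear  (34x28): {speed_kmh(cadence, 34, 28):.1f} km/h")   # ~14 km/h
print(f"Mid gear  (50x19): {speed_kmh(cadence, 50, 19):.1f} km/h")   # ~31 km/h
print(f"High gear (50x11): {speed_kmh(cadence, 50, 11):.1f} km/h")   # ~54 km/h

# At the same cadence a higher ratio covers more ground per pedal turn,
# which is why it demands more effort per turn, as described above.
```

The three printed speeds show the trade-off described in the gearing paragraph: in the low gear each pedal turn moves the bicycle a short distance with little effort, while the high gear trades more effort per turn for much higher speed at the same cadence.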
Brakes Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub, or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disk brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity. With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s. Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake. Suspension Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort. Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot. Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension. Wheels and tires The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels. Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between wide, and have treads for gripping in muddy conditions or metal studs for ice. Groupset Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars. Accessories Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. 
Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both. Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bell. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form fitting and high visibility clothing. Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively. Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing. Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable. Standards A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety. The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability". The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry. Maintenance and repair Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools. 
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket). There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps. Full service is available from bicycle mechanics at a local bike shop. In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association. Maintenance The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of , whereas bicycle tires are normally in the range of . Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease. The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components. Over the longer term, tires do wear out, after ; a rash of punctures is often the most visible sign of a worn tire. Repair Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice. The most common roadside problem is a puncture. After removing the offending nail/tack/thorn/glass shard/etc., there are two approaches: either mend the puncture by the roadside, or replace the inner tube and then mend the puncture in the comfort of home. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove. Tools There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughing the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. 
More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer. Social and historical aspects The bicycle has had a considerable effect on human society, in both the cultural and industrial realms. In daily life Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast. In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting. In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that the parking capacity may be exceeded, while in some places such as Delft the capacity is usually exceeded. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front. There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheel chairs. Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker". Bicycles offer an important mode of transport in many developing countries. 
Until recently, bicycles have been a staple of everyday life throughout Asian countries. They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used. They also offer a degree of exercise to keep individuals healthy. Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world. Poverty alleviation Female emancipation The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers. The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action. In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach. Economic implications Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft. Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s. They also served to teach the industrial models later adopted, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was by bicycle makers), lobbying for better roads (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful. Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. 
In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll. Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap. They also presaged a move away from public transit that would explode with the introduction of the automobile. J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and was then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started out manufacturing bicycles, in Glasgow in March 1885. In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.). In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames. Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become wealthier, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China. Amid the European financial crisis, in Italy in 2011 the number of bicycle sales (1.75 million) just passed the number of new car sales. Environmental impact One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption (Ballantine, 1972). The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any common mode of travel. Manufacturing The global bicycle market was worth $61 billion in 2011. Some 130 million bicycles were sold every year globally and 66% of them were made in China. Legal requirements Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists. 
The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition. In some countries, bicycles must have functioning front and rear lights when ridden after dark. Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety in numbers effect). Theft Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify as a large number of crimes are not reported. Around 50% of participants in a survey of Montreal cyclists published in the International Journal of Sustainable Transportation reported having had a bicycle stolen at some point in their lives as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft. See also Bicycle and motorcycle geometry Bicycle drum brake Bicycle fender Bicycle parking station Bicycle-sharing system Cyclability Danish bicycle VIN-system List of bicycle types List of films about bicycles and cycling Outline of bicycles Outline of cycling rattleCAD (software for bicycle design) Twike Velomobile World Bicycle Day Notes References Citations Sources General Further reading External links A History of Bicycles and Other Cycles at the Canada Science and Technology Museum 19th-century inventions Appropriate technology Articles containing video clips German inventions Sustainable technologies Sustainable transport
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. The Polynucleotides, RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs). In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering. Biopolymers versus synthetic polymers A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most in vivo systems, all biopolymers of a type (say one specific protein) are all alike: they all contain similar sequences and numbers of monomers and thus all have the same mass. This phenomenon is called monodispersity in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1. Biopolymers versus biobased polymers “Biopolymers” are usually not equal to “biobased polymers”. Biobased polymers are polymers chemically or biologically synthesized (fully or partially) from biomass monomers, such as polyesters (e.g., polyhydroxyalkanoates (PHAs) and polylactic acid (PLA)). In this respect, the only polymers that can be regarded as both biopolymers and biobased polymers are those that are biologically produced (by microbes) from biomass carbon sources (e.g., sugars and lipids), and examples of these include PHAs, bacterial cellulose, gellan gum, xanthan gum, and curdlan. Conventions and nomenclature Polypeptides The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. 
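Picking up the monodispersity point from the comparison with synthetic polymers above: dispersity is the ratio of the weight-average molar mass to the number-average molar mass, so a sample in which every chain has the same mass has a dispersity of exactly 1. In the standard notation (a textbook definition, not drawn from this text):

\[ \text{Đ} = \frac{M_w}{M_n} \]

where M_w and M_n are the weight-average and number-average molar masses; Đ = 1 for a monodisperse, template-synthesized biopolymer, while typical synthetic polymers have Đ > 1.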
Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids. Nucleic acids The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer. Polysaccharides Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds. The exact placement of the linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins. Structural characterization There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometry techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners. Common biopolymers Collagen: Collagen is the primary structural protein of vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. Therefore, it has been used for many medical applications such as in treatment for tissue infection, drug delivery systems, and gene therapy. Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry worm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but has strong adhesive properties due to its insoluble and fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulation and platelet-adhesion properties. Silk fibroin has additionally been found to support stem cell proliferation in vitro. Gelatin: Gelatin is obtained from type I collagen consisting of cysteine, and produced by the partial hydrolysis of collagen from the bones, tissues and skin of animals. There are two types of gelatin, Type A and Type B. Type A gelatin is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis and contains 18% nitrogen and no amide groups. Elevated temperatures cause the gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups like NH2, SH, and COOH which allow gelatin to be modified using nanoparticles and biomolecules. 
Gelatin is an extracellular matrix protein, which allows it to be used in applications such as wound dressings, drug delivery and gene transfection. Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to improve the mechanical properties of starch, increasing its elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Starch, being biodegradable and renewable, is used for many applications including plastics and pharmaceutical tablets. Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. The strength and stability come from the straighter shape of cellulose, caused by glucose monomers joined together by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in application due to its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is widely used in the form of nano-fibrils called nano-cellulose. Nano-cellulose at low concentrations produces a transparent gel material. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field. Alginate: Alginate is the most copious marine natural polymer, derived from brown seaweed. Alginate biopolymer applications range from the packaging, textile and food industries to biomedical and chemical engineering. The first ever application of alginate was in the form of wound dressing, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration, and keeps a stable temperature environment. Additionally, there have been developments with alginate as a drug delivery medium, as the drug release rate can easily be manipulated due to the variety of alginate densities and fibrous compositions. Biopolymer applications The applications of biopolymers can be categorized under two main fields, which differ in their biomedical and industrial uses. Biomedical Because one of the main purposes of biomedical engineering is to mimic body parts in order to sustain normal body functions, biopolymers, with their biocompatible properties, are used widely for tissue engineering, medical devices and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and overall medical applications due to their mechanical properties. They provide characteristics like wound healing, catalysis of bioactivity, and non-toxicity. Compared to synthetic polymers, which can present various disadvantages like immunogenic rejection and toxicity after degradation, many biopolymers integrate better with the body, as they also possess more complex structures, similar to those of the human body. More specifically, polypeptides like collagen and silk are biocompatible materials that are being used in ground-breaking research, as these are inexpensive and easily attainable materials. Gelatin is often used in dressing wounds, where it acts as an adhesive. Scaffolds and films made with gelatin allow the scaffolds to hold drugs and other nutrients that can be supplied to a wound for healing. 
As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of its use: Collagen-based drug delivery systems: collagen films act like a barrier membrane and are used to treat tissue infections like infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers which can promote bone formation. Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen-based implants are used for cultured skin cells or as drug carriers for burn wounds and skin replacement. Collagen as haemostat: When collagen interacts with platelets it causes a rapid coagulation of blood. This rapid coagulation produces a temporary framework so the fibrous stroma can be regenerated by host cells. A collagen-based haemostat reduces blood loss in tissues and helps manage bleeding in organs such as the liver and spleen. Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component in the exoskeleton of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science. Chitosan is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it can biodegrade, which can eliminate the need for a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan. Chitosan as drug delivery: Chitosan is used mainly for drug targeting because it has the potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can also produce better anticancer effects by causing gradual release of free drug into cancerous tissue. Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It performs antimicrobial functions against microorganisms such as algae, fungi and bacteria, including gram-positive bacteria, and against different yeast species. Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment which aids in the healing process. This wound dressing is also biodegradable and has porous structures that allow cells to grow into the dressing. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds, forming stable three-dimensional networks. Industrial Food: Biopolymers are being used in the food industry for things like packaging, edible encapsulation films and food coatings. Polylactic acid (PLA) is very common in the food industry due to its clear color and resistance to water. However, most polymers have a hydrophilic nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry things like antioxidants, enzymes, probiotics, minerals, and vitamins. When food encapsulated with the biopolymer film is consumed, it can supply these substances to the body. Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. 
However, their barrier properties (either moisture-barrier or gas-barrier properties) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to get through the packaging which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer that has great barrier characteristics and is now being used to correct the barrier obstacles from PLA and starch. Water purification: Chitosan has been used for water purification. It is used as a flocculant that only takes a few weeks or months rather than years to degrade in the environment. Chitosan purifies water by chelation. This is the process in which binding sites along the polymer chain bind with the metal ions in the water forming chelates. Chitosan has been shown to be an excellent candidate for use in storm and wastewater treatment. As materials Some biopolymers- such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate can be used as plastics, replacing the need for polystyrene or polyethylene based plastics. Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting. Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat: when used to produce biopolymers, these are classified as non food crops. These can be converted in the following pathways: Sugar beet > Glyconic acid > Polyglyconic acid Starch > (fermentation) > Lactic acid > Polylactic acid (PLA) Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, thin films for wrapping. Environmental impacts Biopolymers can be sustainable, carbon neutral and are always renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: this is because the CO2 released when they degrade can be reabsorbed by crops grown to replace them: this makes them close to carbon neutral. Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. These biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap. 
See also Biomaterials Bioplastic Biopolymers & Cell (journal) Condensation polymers Condensed tannins DNA sequence Melanin Non food crops Phosphoramidite Polymer chemistry Sequence-controlled polymers Sequencing Small molecules Worm-like chain References External links NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials Bioplastics Magazine Biopolymer group What’s Stopping Bioplastic? Biomolecules Polymers Molecular biology Molecular genetics Biotechnology products Bioplastics Biomaterials
The 2001 United Kingdom general election was held on Thursday 7 June 2001, four years after the previous election on 1 May 1997, to elect 659 members to the House of Commons. The governing Labour Party was re-elected to serve a second term in government with another landslide victory with a 167 majority, returning 412 members of Parliament versus 418 from the 1997 general election, a net loss of six seats, though with a significantly lower turnout than before—59.4%, compared to 71.6% at the previous election. The number of votes Labour received fell by nearly three million. Tony Blair went on to become the only Labour Prime Minister to serve two consecutive full terms in office. As Labour retained almost all of their seats won in the 1997 landslide victory, the media dubbed the 2001 election "the quiet landslide". There was little change outside Northern Ireland, with 620 out of the 641 seats in Great Britain electing candidates from the same party as they did in 1997. Factors contributing to the Labour victory included a strong economy, falling unemployment, and public perception that the Labour government had delivered on many key election pledges that it had made in 1997. The opposition Conservative Party, under William Hague's leadership, was still deeply divided on the issue of Europe and the party's policy platform had drifted considerably to the right. The party put the issue of European monetary union (and in particular, the prospect of the UK joining the Eurozone) at the centre of its campaign, but it failed to resonate with the electorate. The Tories briefly had a narrow lead in the polls during the 2000 fuel strikes, but Labour successfully resolved them by year end. Furthermore, a series of publicity stunts that backfired also harmed Hague, and he immediately announced his resignation as party leader when the election result was clear, formally stepping down three months later, therefore becoming the first leader of the Conservative and Unionist Party in the House of Commons since Austen Chamberlain nearly eighty years prior not to serve as prime minister. The election was largely a repeat of the 1997 general election, with Labour losing only six seats overall and the Conservatives making a net gain of one seat (gaining nine seats but losing eight). The Conservatives gained a seat in Scotland, which ended the party's status as an "England-only" party in the prior parliament, but failed again to win any seats in Wales. Although they did not gain many seats, three of the few new MPs elected were future Conservative Prime Ministers David Cameron and Boris Johnson and future Conservative Chancellor of the Exchequer George Osborne; Osborne would serve in the same Cabinet as Cameron from 2010 to 2016. The Liberal Democrats made a net gain of six seats. The 2001 general election is the last to date in which any government has held an overall majority of more than 100 seats in the House of Commons, and the second of only two since the Second World War (the other being 1997) in which a single party won over 400 MPs. Notable departing MPs included former Prime Ministers Edward Heath (also Father of the House) and John Major, former Deputy Prime Minister Michael Heseltine, former Liberal Democrat leader Paddy Ashdown, former Cabinet ministers Tony Benn, Tom King, John Morris, Mo Mowlam, John MacGregor and Peter Brooke, Teresa Gorman, and then Mayor of London Ken Livingstone. 
Change was seen in Northern Ireland, with the moderate unionist Ulster Unionist Party (UUP) losing four seats to the more hardline Democratic Unionist Party (DUP). A similar transition appeared in the nationalist community, with the moderate Social Democratic and Labour Party (SDLP) losing votes to the more staunchly republican and abstentionist Sinn Féin. Exceptionally low voter turnout, which fell below 60% for the first (and so far, only) time since 1918, also marked this election. The election was broadcast live on BBC One and presented by David Dimbleby, Jeremy Paxman, Andrew Marr, Peter Snow, and Tony King. The 2001 general election was notable for being the first in which pictures of the party logos appeared on the ballot paper. Prior to this, the ballot paper had only displayed the candidate's name, address, and party name. Overview The election had been expected on 3 May, to coincide with local elections, but on 2 April 2001, both were postponed to 7 June because of rural movement restrictions imposed in response to the foot-and-mouth outbreak that had started in February. The elections were marked by voter apathy, with turnout falling to 59.4%, the lowest (and first under 70%) since the Coupon Election of 1918. Throughout the election the Labour Party had maintained a significant lead in the opinion polls and the result was deemed to be so certain that some bookmakers paid out for a Labour majority before election day. However, the opinion polls the previous autumn had shown the first Tory lead (though only by a narrow margin) in the opinion polls for eight years as they benefited from the public anger towards the government over the fuel protests which had led to a severe shortage of motor fuel. By the end of 2000, however, the dispute had been resolved and Labour were firmly back in the lead of the opinion polls. In total, a mere 29 parliamentary seats changed hands at the 2001 Election. 2001 also saw the rare election of an independent. Richard Taylor of Independent Kidderminster Hospital and Health Concern (usually now known simply as "Health Concern") unseated a government MP, David Lock, in Wyre Forest. There was also a high vote for British National Party leader Nick Griffin in Oldham West and Royton, in the wake of recent race riots in the town of Oldham. In Northern Ireland, the election was far more dramatic and marked a move by unionists away from support for the Good Friday Agreement, with the moderate unionist Ulster Unionist Party (UUP) losing to the more hardline Democratic Unionist Party (DUP). This polarisation was also seen in the nationalist community, with the Social Democratic and Labour Party (SDLP) vote losing out to more left-wing and republican Sinn Féin. It also saw a tightening of the parties as the small UK Unionist Party lost its only seat. Campaign For Labour, the last four years had run relatively smoothly. The party had successfully defended all their by election seats, and many suspected a Labour win was inevitable from the start. Many in the party, however, were afraid of voter apathy, which was epitomised in a poster of "Hague with Margaret Thatcher's hair", captioned "Get out and vote. Or they get in." Despite recessions in mainland Europe and the United States, due to the bursting of global tech bubbles, Britain was notably unaffected and Labour however could rely on a strong economy as unemployment continued to decline toward election day, putting to rest any fears of a Labour government putting the economic situation at risk. 
For William Hague, however, the Conservative Party had still not fully recovered from the loss in 1997. The party was still divided over Europe, and talk of a referendum on joining the Eurozone was rife, and as a result "Save The Pound" was one of the key slogans deployed in the Conservatives' campaign. As Labour remained at the political centre, the Tories moved to the right. A policy gaffe by Oliver Letwin over public spending cuts left the party with an own goal that Labour soon exploited. Thatcher gave a speech to the Conservative Election Rally in Plymouth on May 22, 2001, calling New Labour "rootless, empty, and artificial." She also added to Hague's troubles when speaking out strongly against the Euro to applause. Hague himself, although a witty performer at Prime Minister's Questions, was dogged in the press and reminded of his speech, given at the age of 16, at the 1977 Conservative Conference. The Sun newspaper only added to the Conservatives' woes by backing Labour for a second consecutive election, calling Hague a "dead parrot" during the Conservative Party's conference in October 1998. The Tories campaigned on a strongly right-wing platform, emphasising the issues of Europe, immigration and tax, the fabled "Tebbit Trinity". They also released a poster showing a heavily pregnant Tony Blair, stating "Four years of Labour and he still hasn't delivered". However, Labour countered by asking where the proposed tax cuts were going to come from, and decried the Tory policy as "cut here, cut there, cut everywhere", in reference to the widespread belief that the Conservatives would make major cuts to public services in order to fund tax cuts. Labour also capitalised on the strong economic conditions of the time, and another major line of attack (primarily directed towards Michael Portillo, now Shadow Chancellor after returning to Parliament via a by-election) was to warn of a return to "Tory Boom and Bust" under a Conservative administration. Charles Kennedy contested his first election as leader of the Liberal Democrats. Controversy During the election Sharron Storer, a resident of Birmingham, criticised Prime Minister Tony Blair in front of television cameras about conditions in the National Health Service. The widely televised incident happened on 16 May during a campaign visit by Blair to the Queen Elizabeth Hospital in Birmingham. Sharron Storer's partner, Keith Sedgewick, a cancer patient with non-Hodgkin's lymphoma and therefore highly susceptible to infection, was being treated at the time in the bone marrow unit, but no bed could be found for him and he was transferred to the casualty unit for his first 24 hours. On the evening of the same day Deputy Prime Minister John Prescott punched a protestor after being hit by an egg on his way to an election rally in Rhyl, North Wales. Endorsements Labour received endorsements from The Sun, The Daily Express, The Times (for the first time in its history), The Daily Mirror, The Financial Times, The Economist, and The Guardian. The Independent endorsed Labour and/or the Liberal Democrats. The Conservatives were endorsed by the Daily Mail and The Daily Telegraph. Opinion polling Results The election result was effectively a repeat of 1997, as the Labour Party retained an overwhelming majority, with the BBC announcing the victory at 02:58 on the early morning of 8 June. 
Labour had presided over relatively serene political, economic and social conditions, the feeling of prosperity in the United Kingdom had been maintained into the new millennium, and the party would have a free hand to assert its ideals in the subsequent parliament. Despite the victory, voter apathy was a major issue, as turnout fell below 60%, 12 percentage points down on 1997. All three of the main parties saw their total votes fall, with Labour's total vote dropping by 2.8 million on 1997, the Conservatives' by 1.3 million, and the Liberal Democrats' by 428,000. Some suggested this dramatic fall was a sign of the general acceptance of the status quo and the likelihood of Labour's majority remaining unassailable. For the Conservatives, the huge loss they had sustained in 1997 was repeated. Despite gaining nine seats, the Tories lost seven to the Liberal Democrats, and one even to Labour. William Hague was quick to announce his resignation, doing so at 07:44 outside the Conservative Party headquarters. Some believed that Hague had been unlucky; although most considered him to be a talented orator and an intelligent statesman, he had come up against the charismatic Tony Blair at the peak of his political career, and it was no surprise that little progress was made in reducing Labour's majority after a relatively smooth parliament. Staying at what they considered rock bottom, however, showed that the Conservatives had failed to improve their negative public image, had remained somewhat disunited over Europe, and had not regained the trust that they had lost in the 1990s. Hague's focus on the "Save The Pound" campaign narrative had failed to gain any traction; Labour's successful countertactic was to be repeatedly vague over the issue of future monetary union, saying that the UK would only consider joining the Eurozone "when conditions were right". In Scotland, although the Conservatives flipped one seat from the Scottish National Party, the collapse in their vote continued. They failed to retake former strongholds in Scotland as the Nationalists consolidated their grip on the north-eastern portion of the country. The Liberal Democrats could point to steady progress under their new leader, Charles Kennedy, gaining more seats than the two main parties (albeit only six overall) and maintaining the performance of a pleasing 1997 election, where the party had doubled its number of seats from 20 to 46. While they had yet to become electable as a government, they underlined their growing reputation as a worthwhile alternative to Labour and the Conservatives, offering plenty of debate in Parliament and representing more than a mere protest vote. The SNP failed to gain any new seats and lost a seat to the Conservatives by just 79 votes. In Wales, Plaid Cymru both gained a seat from Labour and lost one to them. In Northern Ireland the Ulster Unionists, despite gaining North Down, lost five other seats. All parties with more than 500 votes shown. The seat gains reflect changes relative to the 1997 general election result. Two seats had changed hands in by-elections in the intervening period. These were as follows: Romsey from Conservative to Liberal Democrats. The Liberal Democrats held this seat in 2001. South Antrim from Ulster Unionists to Democratic Unionists. The Ulster Unionists won this seat back in 2001. The results of the election give a Gallagher index of disproportionality of 17.74. 
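The Gallagher (least squares) index quoted above summarizes how far each party's seat share diverged from its vote share. A minimal sketch in Python of the standard formula; the function name and argument layout are assumptions for illustration, and the 2001 figure of 17.74 would only be reproduced by feeding in the full official vote and seat percentages:

import math

def gallagher_index(vote_shares, seat_shares):
    # Least-squares index: sqrt of half the sum of squared (vote% - seat%) differences,
    # taken over all parties, with both sequences in the same party order.
    return math.sqrt(0.5 * sum((v - s) ** 2 for v, s in zip(vote_shares, seat_shares)))

# Usage: gallagher_index(vote_percentages_by_party, seat_percentages_by_party)

Higher values indicate a less proportional outcome; single-member plurality elections such as this one typically score far higher than elections held under proportional systems.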
Results by constituent country Seats changing hands MPs who lost their seats Voter Demographics MORI interviewed 18,657 adults in Great Britain after the election which suggested the following demographic breakdown... Manifestos Labour (Ambitions for Britain) Conservative (Time for Common Sense) Liberal Democrat (Freedom, Justice, Honesty) UK Independence Party British National Party (Where we stand!) Green Party of England and Wales Ulster Unionist Party Progressive Unionist Party Social Democratic and Labour Party (It's working – let's keep building) Plaid Cymru Scottish National Party (Heart of the Manifesto 2001) ProLife Alliance The Democratic Party (The will of the people NOT the party) Kidderminster Health Concern Monster Raving Loony Party (Vote for insanity – you know it makes sense) The Stuckist Party Scottish Socialist Party Left Alliance Communist Party of Britain (People's need before corporate profit greed) Revolutionary Communist Party of Britain (Marxist-Leninist) See also List of MPs elected in the 2001 United Kingdom general election List of MPs for constituencies in Wales (2001–2005) List of MPs for constituencies in Scotland (2001–2005) 2001 United Kingdom foot-and-mouth outbreak 2001 United Kingdom general election in Northern Ireland 2001 United Kingdom general election in England 2001 United Kingdom general election in Scotland 2001 United Kingdom general election in Wales 2001 United Kingdom local elections References Bibliography Butler, David and Dennis Kavanagh. The British General Election of 2001 (2002), the standard scholarly study External links BBC News: Vote 2001 – in depth coverage. Catalogue of 2001 general election ephemera at the Archives Division of the London School of Economics. 2001 elections in the United Kingdom 2001 June 2001 events in the United Kingdom Tony Blair
The Book of Mormon is a religious text of the Latter Day Saint movement, which, according to Latter Day Saint theology, contains writings of ancient prophets who lived on the American continent from 600 BC to AD 421 and during an interlude dated by the text to the unspecified time of the Tower of Babel. It was first published in March 1830 by Joseph Smith as The Book of Mormon: An Account Written by the Hand of Mormon upon Plates Taken from the Plates of Nephi. The Book of Mormon is one of the earliest and best-known unique writings of the Latter Day Saint movement. The denominations of the Latter Day Saint movement typically regard the text primarily as scripture (sometimes as one of four standard works) and secondarily as a record of God's dealings with ancient inhabitants of the Americas. The majority of Latter Day Saints believe the book to be a record of real-world history, with Latter Day Saint denominations viewing it variously, from an inspired record of scripture to the lynchpin or "keystone" of their religion. Some Latter Day Saint academics and apologetic organizations strive to affirm the book as historically authentic through their scholarship and research, but mainstream archaeological, historical, and scientific communities have discovered little to support the existence of the civilizations described therein, and do not consider it to be an actual record of historical events. According to Smith's account and the book's narrative, the Book of Mormon was originally written in otherwise unknown characters referred to as "reformed Egyptian", engraved on golden plates. Smith said that the last prophet to contribute to the book, a man named Moroni, buried it in the Hill Cumorah in present-day Manchester, New York, before his death, and then appeared in a vision to Smith in 1827 as an angel, revealing the location of the plates and instructing him to translate the plates into English. Most naturalistic views on the origins of the Book of Mormon hold that Smith authored it, drawing, whether consciously or subconsciously, on material and ideas from his contemporary 19th-century environment, rather than translating an ancient record. The Book of Mormon has a number of doctrinal discussions on subjects such as the fall of Adam and Eve, the nature of the Christian atonement, eschatology, agency, priesthood authority, redemption from physical and spiritual death, the nature and conduct of baptism, the age of accountability, the purpose and practice of communion, personalized revelation, economic justice, the anthropomorphic and personal nature of God, the nature of spirits and angels, and the organization of the latter day church. The pivotal event of the book is an appearance of Jesus Christ in the Americas shortly after his resurrection. Common teachings of the Latter Day Saint movement hold that the Book of Mormon fulfills numerous biblical prophecies by ending a global apostasy and signaling a restoration of the Christian gospel. The book is also a critique of Western society, condemning immorality, individualism, social inequality, ethnic injustice, nationalism, and the rejection of God, revelation, and miraculous religion. The Book of Mormon is divided into smaller books, titled after individuals named as primary authors or other caretakers of the ancient record that the Book of Mormon describes itself as being, and, in most versions, it is divided into chapters and verses. 
Its English text imitates the style of the King James Version of the Bible, and its grammar and word choice reflect Early Modern English. The Book of Mormon has been fully or partially translated into at least 112 languages. Origin Conceptual emergence According to Joseph Smith, in 1823, when he was seventeen years old, an angel of God named Moroni appeared to him and said that a collection of ancient writings was buried in a nearby hill in present-day Wayne County, New York, engraved on golden plates by ancient prophets. The writings were said to describe a people whom God had led from Jerusalem to the Western hemisphere 600 years before Jesus's birth. (This "angel Moroni" figure also appears in the Book of Mormon as the last prophet among these people and had buried the record, which God had promised to bring forth in the latter days.) Smith said this vision occurred on the evening of September 21, 1823, and that on the following day, via divine guidance, he located the burial location of the plates on this hill and was instructed by Moroni to meet him at the same hill on September 22 of the following year to receive further instructions, which repeated annually for the next three years. Smith told his entire immediate family about this angelic encounter by the next night, and his brother William reported that the family "believed all he [Joseph Smith] said" about the angel and plates. Smith and his family reminisced that as part of what Smith believed was angelic instruction, Moroni provided Smith with a "brief sketch" of the "origin, progress, civilization, laws, governments... righteousness and iniquity" of the "aboriginal inhabitants of the country" (referring to the Nephites and Lamanites who figure in the Book of Mormon's primary narrative). Smith sometimes shared what he believed he had learned through such angelic encounters with his family in what his mother Lucy Mack Smith called "most amusing recitals". In Smith's account, Moroni allowed him, accompanied by his wife Emma Hale Smith, to take the plates on September 22, 1827, four years after his initial visit to the hill, and directed him to translate them into English. Smith said the angel Moroni strictly instructed him to not let anyone else see the plates without divine permission. Neighbors, some of whom had collaborated with Smith in earlier treasure-hunting enterprises, several times tried to steal the plates from Smith while he and his family guarded them. Dictation As Smith and contemporaries reported, the English manuscript of the Book of Mormon was produced as scribes wrote down Smith's dictation in multiple sessions between 1828 and 1829. The dictation of the extant Book of Mormon was completed in 1829 in between 53 and 74 working days. Descriptions of the way in which Smith dictated the Book of Mormon vary. Smith himself called the Book of Mormon a translated work, but in public he generally described the process itself only in vague terms, saying he translated by a miraculous gift from God. According to some accounts from his family and friends at the time, early on, Smith copied characters off the plates as part of a process of learning to translate an initial corpus. For the majority of the process, Smith dictated the text by voicing strings of words which a scribe would write down; after the scribe confirmed they had finished writing, Smith would continue. 
Many accounts describe Smith dictating by reading a text as it appeared either on seer stones he already possessed or on a set of spectacles that accompanied the plates, prepared by the Lord for the purpose of translating. The spectacles, often called the "Nephite interpreters," or the "Urim and Thummim," after the Biblical divination stones, were described as two clear seer stones which Smith said he could look through in order to translate, bound together by a metal rim and attached to a breastplate. Beginning around 1832, both the interpreters and Smith's own seer stone were at times referred to as the "Urim and Thummim", and Smith sometimes used the term interchangeably with "spectacles". Emma Smith's and David Whitmer's accounts describe Smith using the interpreters while dictating for Martin Harris's scribing and switching to only using his seer stone(s) in subsequent translation. Grant Hardy summarizes Smith's known dictation process as follows: "Smith looked at a seer stone placed in his hat and then dictated the text of the Book of Mormon to scribes". Early on, Smith sometimes separated himself from his scribe with a blanket between them, as he did while Martin Harris, a neighbor, scribed his dictation in 1828. At other points in the process, such as when Oliver Cowdery or Emma Smith scribed, the plates were left covered up but in the open. During some dictation sessions the plates were entirely absent. In 1828, while scribing for Smith, Harris, at the prompting of his wife Lucy Harris, repeatedly asked Smith to loan him the manuscript pages of the dictation thus far. Smith reluctantly acceded to Harris's requests. Within weeks, Harris lost the manuscript, most likely stolen by a member of his extended family. After the loss, Smith recorded that he lost the ability to translate and that Moroni had taken back the plates, to be returned only after Smith repented. Smith later stated that God allowed him to resume translation, but directed that he begin where he left off (in what is now called the Book of Mosiah), without retranslating what had been in the lost manuscript. Smith recommenced some Book of Mormon dictation between September 1828 and April 1829, with his wife Emma Smith scribing and with occasional help from his brother Samuel Smith, though the transcription accomplished was limited. In April 1829, Oliver Cowdery met Smith and, believing Smith's account of the plates, began scribing for Smith in what became a "burst of rapid-fire translation". In May, Joseph and Emma Smith along with Cowdery moved in with the Whitmer family, sympathetic neighbors, in an effort to avoid interruptions as they proceeded with producing the manuscript. While living with the Whitmers, Smith said he received permission to allow eleven specific others to see the uncovered golden plates and, in some cases, handle them. Their written testimonies are known as the Testimony of Three Witnesses, who described seeing the plates in a visionary encounter with an angel, and the Testimony of Eight Witnesses, who described handling the plates as displayed by Smith. Statements signed by them have been published in most editions of the Book of Mormon. In addition to Smith and these eleven, several others described encountering the plates by holding or moving them wrapped in cloth, although without seeing the plates themselves. Their accounts of the plates' appearance tend to describe a golden-colored compilation of thin metal sheets (the "plates") bound together by wires in the shape of a book. 
The manuscript was completed in June 1829. E. B. Grandin published the Book of Mormon in Palmyra, New York, and it went on sale in his bookstore on March 26, 1830. Smith said he returned the plates to Moroni upon the publication of the book. Views on composition No single theory has consistently dominated naturalistic views on Book of Mormon composition. In the twenty-first century, leading naturalistic interpretations of Book of Mormon origins hold that Smith authored it himself, whether consciously or subconsciously, and simultaneously sincerely believed the Book of Mormon was an authentic sacred history. Most adherents of the Latter Day Saint movement consider the Book of Mormon an authentic historical record, translated by Smith from actual ancient plates through divine revelation. The Church of Jesus Christ of Latter-day Saints (LDS Church), the largest Latter Day Saint denomination, maintains this as its official position. Methods The Book of Mormon as a written text is the transcription of what scholars Grant Hardy and William L. Davis call an "extended oral performance", one which Davis considers "comparable in length and magnitude to the classic oral epics, such as Homer’s Iliad and Odyssey". Eyewitnesses said Smith never referred to notes or other documents while dictating, and Smith's followers and those close to him insisted he lacked the writing and narrative skills necessary to consciously produce a text like the Book of Mormon. Some naturalistic interpretations have therefore compared Smith's dictation to automatic writing arising from the subconscious. However, Ann Taves considers this description problematic for overemphasizing "lack of control" when historical and comparative study instead suggests Smith "had a highly focused awareness" and "a considerable degree of control over the experience" of dictation. Independent scholar William L. Davis posits that after believing he had encountered an angel in 1823, Smith "carefully developed his ideas about the narratives" of the Book of Mormon for several years by making outlines, whether mental or on private notes, until he began dictating in 1828. Smith's oral recitations about Nephites to his family could have been an opportunity to work out ideas and practice oratory, and he received some formal education as a lay Methodist exhorter. In this interpretation, Smith believed the dictation he produced reflected an ancient history, but he assembled the narrative in his own words. Inspirations Early observers, presuming Smith incapable of writing something as long or as complex as the Book of Mormon, often searched for a possible source he might have plagiarized. In the nineteenth century, a popular hypothesis was that Smith collaborated with Sidney Rigdon (a convert to the early movement whom Smith did not actually meet until after the Book of Mormon was published) to plagiarize an unpublished manuscript written by Solomon Spalding and turn into the Book of Mormon. Historians have considered the Spalding manuscript source hypothesis debunked since 1945, when Fawn M. Brodie thoroughly disproved it in her critical biography of Smith. Historians since the early-twentieth century have suggested Smith was inspired by View of the Hebrews, an 1823 book which propounded the Hebraic Indian theory, since both associate American Indians with ancient Israel and describe clashes between two dualistically opposed civilizations (View as speculation about American Indian history and the Book of Mormon as its narrative). 
Whether or not View influenced the Book of Mormon is the subject of debate. A pseudo-anthropological treatise, View presented allegedly empirical evidence in support of its hypothesis. The Book of Mormon is written as a narrative, and Christian themes predominate rather than supposedly indigenous parallels. Additionally, while View supposes that indigenous American peoples descended from the Ten Lost Tribes, the Book of Mormon actively rejects the hypothesis; the peoples in its narrative have an "ancient Hebrew" origin but do not descend from the lost tribes, and the perceived mystery of which the book preserves and escalates. The book ultimately heavily revises, rather than borrows, the Hebraic Indian theory. The Book of Mormon may creatively reconfigure, without plagiarizing, parts of the popular 1678 Christian allegory Pilgrim's Progress written by John Bunyan. For example, the martyr narrative of Abinadi in the Book of Mormon shares a complex matrix of descriptive language with Faithful's martyr narrative in Progress. Some other Book of Mormon narratives, such as the dream Lehi has in the book's opening, also resemble creative reworkings of Progress story arcs as well as elements of other works by Bunyan, such as The Holy War and Grace Abounding. Historical scholarship also suggests it's plausible for Smith to have produced the Book of Mormon himself, based on his knowledge of the Bible and enabled by a democratizing religious culture. Content Presentation The English text of the Book of Mormon resembles the style of the King James Version of the Bible, though its rendering can sometimes be repetitive and difficult to read. Narratively and structurally the book is complex with multiple arcs that diverge and converge in the story while contributing to the book's overarching plot and themes. Historian Daniel Walker Howe concluded in his own appraisal that the Book of Mormon "is a powerful epic written on a grand scale" and "should rank among the great achievements of American literature". The Book of Mormon presents its text through multiple narrators explicitly identified as figures within the book's own narrative. Narrators describe reading, redacting, writing, and exchanging records. The book also embeds sermons, given by figures from the narrative, throughout the text, and these internal orations make up just over 40 percent of the Book of Mormon. Periodically, the book's primary narrators reflexively describe themselves creating the book in a move that is "almost postmodern" in its self-consciousness. In an essay written to introduce the Book of Mormon, historian Laurie Maffly-Kipp explains that "the mechanics of editing and transmitting thereby become an important feature of the text". Organization The Book of Mormon is organized as a compilation of smaller books, each named after its main named narrator or a prominent leader, beginning with the First Book of Nephi (1 Nephi) and ending with the Book of Moroni. The book's sequence is primarily chronological based on the narrative content of the book. Exceptions include the Words of Mormon and the Book of Ether. The Words of Mormon contains editorial commentary by Mormon. The Book of Ether is presented as the narrative of an earlier group of people who had come to the American continent before the immigration described in 1 Nephi. First Nephi through Omni are written in first-person narrative, as are Mormon and Moroni. 
The remainder of the Book of Mormon is written in third-person historical narrative, said to be compiled and abridged by Mormon (with Moroni abridging the Book of Ether and writing the latter part of Mormon and the Book of Moroni). Most modern editions of the book have been divided into chapters and verses. Most editions of the book also contain supplementary material, including the "Testimony of Three Witnesses" and the "Testimony of Eight Witnesses", which appeared in the original 1830 edition and every official Latter-day Saint edition thereafter.
Narrative
The books from First Nephi to Omni are described as being from "the small plates of Nephi". This account begins in ancient Jerusalem around 600 BC, telling the story of a man named Lehi, his family, and several others as they are led by God from Jerusalem shortly before the fall of that city to the Babylonians. The book describes their journey across the Arabian peninsula, and then to a "promised land", presumably an unspecified location in the Americas, by ship. These books recount the group's dealings from approximately 600 BC to about 130 BC, during which time the community grows and splits into two main groups, called Nephites and Lamanites, that frequently war with each other throughout the rest of the narrative. Following this section is the Words of Mormon, a small book that introduces Mormon, the principal narrator for the remainder of the text. The narration describes the succeeding content (the Book of Mosiah through chapter 7 of the internal Book of Mormon) as being Mormon's abridgment of "the large plates of Nephi", existing records that detailed the people's history up to Mormon's own life. Part of this portion is the Book of Third Nephi, which describes a visit by Jesus to the people of the Book of Mormon sometime after his resurrection and ascension; historian John Turner calls this episode "the climax of the entire scripture". After this visit, the Nephites and Lamanites unite in a harmonious, peaceful society which endures for several generations before breaking into warring factions again, and in this conflict the Nephites are destroyed while the Lamanites emerge victorious. In the narrative, Mormon, a Nephite, lives during this period of war, and he dies before finishing his book. His son Moroni takes over as narrator, describing himself taking his father's record into his charge and finishing its writing. Before the very end of the book, Moroni describes making an abridgment (called the Book of Ether) of a record from a much earlier people. There is a subsequent subplot describing a group of families whom God leads away from the Tower of Babel after it falls. Led by a man named Jared and his brother, described as a prophet of God, these Jaredites travel to the "promised land" and establish a society there. After successive violent reversals between rival monarchs and factions, their society collapses before Lehi's family arrives in the promised land. The narrative returns to Moroni's present (the Book of Moroni), in which he transcribes a few short documents, meditates on and addresses the book's audience, finishes the record, and buries the plates upon which the record is said to be inscribed, before implicitly dying as his father did, in what allegedly would have been the early 400s CE.
Teachings
Jesus
On its title page, the Book of Mormon describes its central purpose as being the "convincing of the Jew and Gentile that Jesus is the Christ, the Eternal God, manifesting himself unto all nations."
On average, Jesus is mentioned once every 1.7 verses. Although much of the Book of Mormon's internal chronology takes place prior to the birth of Jesus, prophets in the book frequently see him in vision and preach about him, and the people in the narrative worship Jesus as "pre-Christian Christians." For example, the book's first narrator Nephi describes having a vision of the birth, ministry, and death of Jesus, said to have taken place nearly 600 years prior to Jesus' birth. Late in the book, a narrator refers to converted peoples as "children of Christ". By depicting ancient prophets and peoples as familiar with Jesus as a Savior, the Book of Mormon universalizes Christian salvation as being accessible across all time and places. By implying that even more ancient peoples were familiar with Jesus Christ, the book also presents a "polygenist Christian history" in which Christianity has multiple origins. In what is often called the climax of the book, Jesus visits some early inhabitants of the Americas after his resurrection in an extended bodily theophany. During this ministry, he reiterates many teachings from the New Testament, re-emphasizes salvific baptism, and introduces the ritual consumption of bread and water "in remembrance of [his] body", a teaching that became the basis for modern Latter-day Saints' "memorialist" view of their sacrament ordinance (analogous to communion). Jesus's ministry in the Book of Mormon resembles his portrayal in the Gospel of John, as Jesus similarly teaches without parables and preaches faith and obedience as a central message. The Book of Mormon depicts Jesus with "a twist" on Christian trinitarianism. Jesus in the Book of Mormon is distinct from God the Father, much as he is in the New Testament, as he prays to God during a post-resurrection visit with the Nephites. However, the Book of Mormon also emphasizes that Jesus and God have "divine unity," and other parts of the book call Jesus "the Father and the Son" or describe the Father, the Son, and the Holy Ghost as "one." As a result, beliefs among the churches of the Latter Day Saint movement range between social trinitarianism (such as among Latter-day Saints) and traditional trinitarianism (such as in Community of Christ). Distinctively, the Book of Mormon describes Jesus as having, prior to his birth, a spiritual "body" "without flesh and blood" that looked similar to how he would appear during his physical life. According to the book, the Brother of Jared lived before Jesus and saw him manifest in this spiritual "body" thousands of years prior to his birth.
Plan of salvation
The Christian concept of God's plan of salvation for humanity is a frequently recurring theme of the Book of Mormon. While the Bible does not directly outline a plan of salvation, the Book of Mormon explicitly refers to the concept thirty times, using a variety of terms such as plan of salvation, plan of happiness, and plan of redemption. The Book of Mormon's plan of salvation doctrine describes life as a probationary time for people to learn the gospel of Christ through revelation given to prophets and have the opportunity to choose whether or not to obey God. Jesus' atonement then makes repentance possible, enabling the righteous to enter a heavenly state after a final judgment.
Although most of Christianity traditionally considers the fall of man a negative development for humanity, the Book of Mormon instead portrays the fall as a foreordained step in God's plan of salvation, necessary to securing human agency, eventual righteousness, and bodily joy through physical experience. This positive interpretation of the Adam and Eve story contributes to the Book of Mormon's emphasis "on the importance of human freedom and responsibility" to choose salvation. Dialogic revelation In the Book of Mormon, revelation from God typically manifests as "personalized, dialogic exchange" between God and persons, "rooted in a radically anthropomorphic theology" that personifies deity as a being who hears prayers and provides direct answers to questions. Multiple narratives in the book portray revelation as a dialogue in which petitioners and deity engage one another in a mutual exchange in which God's contributions originate from outside the mortal recipient. The Book of Mormon also emphasizes regular prayer as a significant component of devotional life, depicting it as a central means through which such dialogic revelation can take place. Distinctively, the Book of Mormon's portrayal democratizes revelation by extending it beyond the "Old Testament paradigms" of prophetic authority. In the Book of Mormon, dialogic revelation from God is not the purview of prophets alone but is instead the right of every person. Figures such as Nephi and Ammon receive visions and revelatory direction prior to or without ever becoming prophets, and Laman and Lemuel are rebuked for hesitating to pray for revelation. In the Book of Mormon, God and the divine are directly knowable through revelation and spiritual experience. Also in contrast with traditional Christian conceptions of revelations is the Book of Mormon's broader range of revelatory content. In the Book of Mormon, revelatory topics include not only the expected "exegesis of existence" but also questions that are "pragmatic, and at times almost banal in their mundane specificity". Figures petition God for revelatory answers to doctrinal questions and ecclesiastical crises as well as for inspiration to guide hunts, military campaigns, and sociopolitical decisions, and the Book of Mormon portrays God providing answers to these inquiries. The Book of Mormon depicts revelation as an active and sometimes laborious experience. For example, the Book of Mormon's Brother of Jared learns to act not merely as a petitioner with questions but moreover as an interlocutor with "a specific proposal" for God to consider as part of a guided process of miraculous assistance. Also in the Book of Mormon, Enos describes his revelatory experience as a "wrestle which I had before God" that spanned hours of intense prayer. Apocalyptic reversal and indigenous or nonwhite liberation The Book of Mormon's "eschatological content" lends to a "theology of Native and/or nonwhite liberation", in the words of American studies scholar Jared Hickman. The Book of Mormon's narrative content includes prophecies describing how although Gentiles (generally interpreted as being whites of European descent) would conquer the indigenous residents of the Americas (imagined in the Book of Mormon as being a remnant of descendants of the Lamanites), this conquest would only precede the Native Americans' revival and resurgence as a God-empowered people. 
The Book of Mormon narrative's prophecies envision a Christian eschaton in which indigenous people are destined to rise up as the true leaders of the continent, manifesting in a new utopia to be called "Zion". White Gentiles would have an opportunity to repent of their sins and join themselves to the indigenous remnant, but if white Gentile society failed to do so, the Book of Mormon's content foretells a future "apocalyptic reversal" in which Native Americans will destroy white American society and replace it with a godly, Zionic society. This prophecy commanding whites to repent and become supporters of American Indians even bears "special authority as an utterance of Jesus" Christ himself during a messianic appearance at the book's climax. Furthermore, the Book of Mormon's "formal logic" criticizes the theological supports for racism and white supremacy prevalent in the antebellum United States by enacting a textual apocalypse. The book's apparently white Nephite narrators fail to recognize and repent of their own sinful, hubristic prejudices against the seemingly darker-skinned Lamanites in the narrative. In their pride, the Nephites repeatedly backslide into producing oppressive social orders, such that the book's narrative performs a sustained critique of colonialist racism. The book concludes with its own narrative implosion in which the Lamanites suddenly prevail over and destroy the Nephites in a literary turn seemingly designed to jar the average antebellum white American reader into recognizing the "utter inadequacy of his or her rac(ial)ist common sense".
Religious significance
Early Mormonism
Adherents of the early Latter Day Saint movement frequently read the Book of Mormon as a corroboration of and supplement to the Bible, persuaded by its resemblance to the King James Version's form and language. For these early readers, the Book of Mormon confirmed the Bible's scriptural veracity and resolved then-contemporary theological controversies the Bible did not seem to adequately address, such as the appropriate mode of baptism, the role of prayer, and the nature of the Christian atonement. Early church administrative design also drew inspiration from the Book of Mormon. Oliver Cowdery and Joseph Smith, respectively, used the depiction of the Christian church in the Book of Mormon as a template for their Articles of the Church and Articles and Covenants of the Church. The Book of Mormon was also significant in the early movement as a sign, proving Joseph Smith's claimed prophetic calling, signalling the "restoration of all things", and ending what was believed to have been an apostasy from true Christianity. Early Latter Day Saints additionally tended to interpret the Book of Mormon through a millenarian lens and consequently believed the book portended Christ's imminent Second Coming. During the movement's first years, observers identified converts with the new scripture they propounded, nicknaming them "Mormons". Early Mormons also cultivated their own individual relationships with the Book of Mormon. Reading the book became an ordinary habit for some, and some would reference passages by page number in correspondence with friends and family. Historian Janiece Johnson explains that early converts' "depth of Book of Mormon usage is illustrated most thoroughly through intertextuality—the pervasive echoes, allusions, and expansions on the Book of Mormon text that appear in the early converts' own writings."
Early Latter Day Saints alluded to Book of Mormon narratives, incorporated Book of Mormon turns of phrase into their writing styles, and even gave their children Book of Mormon names. Joseph Smith Like many other early adherents of the Latter Day Saint movement, Smith referenced Book of Mormon scriptures in his preaching relatively infrequently and cited the Bible more often, likely because he was more familiar with the Bible, which he had grown up with. In 1832, Smith dictated a revelation that condemned the "whole church" for treating the Book of Mormon lightly, although even after doing so Smith still referenced the Book of Mormon less often than the Bible. Nevertheless, in 1841 Joseph Smith characterized the Book of Mormon as "the most correct of any book on earth, and the keystone of [the] religion". Although Smith quoted the book infrequently, he accepted the Book of Mormon narrative world as his own and conceived of his prophetic identity within the framework of the Book of Mormon's portrayal of a world history full of sacred records of God's dealings with humanity and description of him as a revelatory translator. While they were held in Carthage Jail together, shortly before being killed in a mob attack, Joseph's brother Hyrum Smith read aloud from the Book of Mormon, and Joseph told the jail guards present that the Book of Mormon was divinely authentic. The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (LDS Church) accepts the Book of Mormon as one of the four sacred texts in its scriptural canon called the standard works. Church leaders and publications have "strongly affirm[ed]" Smith's claims of the book's significance to the faith. According to the church's "Articles of Faith"—a document written by Joseph Smith in 1842 and canonized by the church as scripture in 1880—members "believe the Bible to be the word of God as far as it is translated correctly," and they "believe the Book of Mormon to be the word of God," without the translation qualification. In their evangelism, Latter-day Saint leaders and missionaries have long emphasized the book's place in a causal chain which held that if the Book of Mormon was "verifiably true revelation of God," then it justified Smith's claims to prophetic authority to restore the New Testament church. Latter-day Saints have also long believed the Book of Mormon's contents confirm and fulfill biblical prophecies. For example, "many Latter-day Saints" consider the biblical patriarch Jacob's description of his son Joseph as "a fruitful bough... whose branches run over a wall" a prophecy of Lehi's posterity—described as descendants of Joseph—overflowing into the New World. Latter-day Saints also believe the Bible prophesies of the Book of Mormon as an additional testament to God's dealings with humanity, such as in their interpretation of Ezekiel 37's injunction to "take thee one stick... For Judah, and... take another stick... For Joseph" as referring to the Bible as the "stick of Judah" and the Book of Mormon as "the stick of Joseph". In the 1980s, the church placed greater emphasis on the Book of Mormon as a central text of the faith and on studying and reading it as a means for devotional communion with Jesus Christ. In 1982, it added the subtitle "Another Testament of Jesus Christ" to its official editions of the Book of Mormon. Ezra Taft Benson, the church's thirteenth president (1985–1994), especially emphasized the Book of Mormon. 
Referencing Smith's 1832 revelation, Benson said the church remained under condemnation for treating the Book of Mormon lightly. Since the late 1980s, Latter-day Saint leaders have encouraged church members to read from the Book of Mormon daily, and in the twenty-first century, many Latter-day Saints use the book in private devotions and family worship. Literary scholar Terryl Givens observes that for Latter-day Saints, the Book of Mormon is "the principal scriptural focus", a "cultural touchstone", and "absolutely central" to worship, including in weekly services, Sunday School, youth seminaries, and more. The church encourages those considering joining the faith to follow the suggestion in the Book of Mormon's final chapter to study the book, ponder it, and pray to God about it. Latter-day Saints believe that sincerely doing so will provide the reader with a spiritual witness confirming it as true scripture. The relevant passage in the chapter is sometimes referred to as "Moroni's Promise." Approximately 90 to 95% of all Book of Mormon printings have been affiliated with the church. As of October 2020, it had published more than 192 million copies of the Book of Mormon.
Community of Christ
The Community of Christ (formerly the Reorganized Church of Jesus Christ of Latter Day Saints or RLDS Church) views the Book of Mormon as scripture which provides an additional witness of Jesus Christ in support of the Bible. The Community of Christ publishes two versions of the book. The first is the Authorized Edition, first published by the then-RLDS Church in 1908, whose text is based on comparing the original printer's manuscript and the 1837 Second Edition (or "Kirtland Edition") of the Book of Mormon. Its content is similar to the Latter-day Saint edition of the Book of Mormon, but the versification is different. The Community of Christ also publishes a "New Authorized Version" (also called a "reader's edition"), first released in 1966, which attempts to modernize the language of the text by removing archaisms and standardizing punctuation. Use of the Book of Mormon varies among Community of Christ membership. The church describes it as scripture and includes references to the Book of Mormon in its official lectionary. In 2010, representatives told the National Council of Churches that "the Book of Mormon is in our DNA". The book remains a symbol of the denomination's belief in continuing revelation from God. Nevertheless, its usage in North American congregations declined between the mid-twentieth and twenty-first centuries. Community of Christ theologian Anthony Chvala-Smith describes the Book of Mormon as being akin to a "subordinate standard" relative to the Bible, giving the Bible priority over the Book of Mormon, and the denomination does not emphasize the book as part of its self-conceived identity. Book of Mormon use varies across what David Howlett calls "Mormon heritage regions": North America, Western Europe, and French Polynesia. Outside these regions, where the church has tens of thousands of members, congregations almost never use the Book of Mormon in their worship, and they may be entirely unfamiliar with it. Some in Community of Christ remain interested in prioritizing the Book of Mormon in religious practice and have variously responded to these developments by leaving the denomination or by striving to re-emphasize the book. Over the same period, the Community of Christ moved away from emphasizing the Book of Mormon as an authentic record of a historical past.
By the late-twentieth century, church president W. Grant McMurray left open the possibility that the book was not historical. McMurray reiterated this ambivalence in 2001, reflecting, "The proper use of the Book of Mormon as sacred scripture has been under wide discussion in the 1970s and beyond, in part because of long-standing questions about its historical authenticity and in part because of perceived theological inadequacies, including matters of race and ethnicity." When a resolution was submitted at the 2007 Community of Christ World Conference to "reaffirm the Book of Mormon as a divinely inspired record", church president Stephen M. Veazey ruled it out of order. He stated, "while the Church affirms the Book of Mormon as scripture, and makes it available for study and use in various languages, we do not attempt to mandate the degree of belief or use. This position is in keeping with our longstanding tradition that belief in the Book of Mormon is not to be used as a test of fellowship or membership in the church."
Greater Latter Day Saint movement
Since the death of Joseph Smith in 1844, there have been approximately seventy different churches that have been part of the Latter Day Saint movement, fifty of which were extant as of 2012. Religious studies scholar Paul Gutjahr explains that "each of these sects developed its own special relationship with the Book of Mormon". For example, James Strang, who led a denomination in the nineteenth century, reenacted Smith's production of the Book of Mormon by claiming in the 1840s and 1850s to receive and translate new scriptures engraved on metal plates, which became the Voree Plates and the Book of the Law of the Lord. William Bickerton led another denomination, The Church of Jesus Christ of Latter Day Saints (today called The Church of Jesus Christ), which accepted the Book of Mormon as scripture alongside the Bible although it did not canonize other Latter Day Saint religious texts like the Doctrine and Covenants and Pearl of Great Price. The contemporary Church of Jesus Christ continues to consider the "Bible and Book of Mormon together" to be "the foundation of [their] faith and the building blocks of" their church. Nahua-Mexican Latter-day Saint Margarito Bautista believed the Book of Mormon told an indigenous history of Mexico before European contact, and he identified himself as a "descendant of Father Lehi", a prophet in the Book of Mormon. Bautista believed the Book of Mormon revealed that indigenous Mexicans were a chosen remnant of biblical Israel and therefore had a sacred destiny to someday lead the church spiritually and the world politically. To promote this belief, he wrote a theological treatise synthesizing Mexican nationalism and Book of Mormon content, published in 1935. Anglo-American LDS Church leadership suppressed the book and eventually excommunicated Bautista, and he went on to found a new Mormon denomination. Officially named El Reino de Dios en su Plenitud, the denomination continues to exist in Colonia Industrial, Ozumba, Mexico as a church with several hundred members who call themselves Mormons. Separate editions of the Book of Mormon have been published by a number of churches in the Latter Day Saint movement, along with private individuals and organizations not endorsed by any specific denomination.
Views on historical authenticity
Mainstream archaeological, historical, and scientific communities do not consider the Book of Mormon an ancient record of actual historical events.
Principally, the content of the Book of Mormon does not correlate with archaeological, paleontological, and historical evidence about the past of the Americas. There is no accepted correlation between locations described in the Book of Mormon and known American archaeological sites. There is also no evidence in Mesoamerican societies of cultural influence from anything described in the Book of Mormon. Additionally, the Book of Mormon's narrative refers to the presence of animals, plants, metals, and technologies of which archaeological and scientific studies have found little or no evidence in post-Pleistocene, pre-Columbian America. Such anachronistic references include crops such as barley and wheat; silk; livestock like sheep and horses; and metals and technology such as brass, steel, the wheel, and chariots. The Book of Mormon also includes excerpts from and demonstrates intertextuality with portions of the biblical Book of Isaiah whose widely accepted dates of composition postdate the alleged departure of Lehi's family from Jerusalem circa 600 BCE. Until the late-twentieth century, most adherents of the Latter Day Saint movement who affirmed Book of Mormon historicity believed the people described in the Book of Mormon text were the exclusive ancestors of all indigenous peoples in the Americas. However, linguistic and genetic evidence proved that impossible. There are no widely accepted linguistic connections between any Native American languages and Near Eastern languages, and "the diversity of Native American languages could not have developed from a single origin in the time frame" that would be necessary to validate such a view of Book of Mormon historicity. Finally, there is no DNA evidence linking any Native American group to ancestry from the ancient Near East, as a belief in Book of Mormon peoples as the exclusive ancestors of indigenous Americans would require. Instead, geneticists find that indigenous Americans' ancestry traces back to Asia. Despite this, most adherents of the Latter Day Saint movement consider the Book of Mormon to generally be historically authentic. Within the Latter Day Saint movement there are several individuals and apologetic organizations, most of them lay Latter-day Saints, that seek to answer challenges to or advocate for Book of Mormon historicity. For example, in response to linguistics and genetics rendering long-popular hemispheric models of Book of Mormon geography impossible, many apologists posit that Book of Mormon peoples could have dwelled in a limited geographical region while indigenous peoples of other descents occupied the rest of the Americas. To account for anachronisms, apologists often suggest Smith's translation assigned familiar terms to unfamiliar ideas. In the context of a miraculously translated Book of Mormon, anachronistic intertextuality may also have miraculous explanations. Advocating for their interpretation of the book's historicity, some apologists strive to identify parallels between the Book of Mormon and biblical antiquity, such as the presence of several complex chiasmi resembling a literary form used in ancient Hebrew poetry and in the Old Testament. Others, such as John L. Sorenson, attempt to identify parallels between Mesoamerican archaeological sites and locations described in the Book of Mormon; according to Sorenson, the Santa Rosa archaeological site resembles the city of Zarahemla in the Book of Mormon.
When mainstream, non-Mormon scholars examine alleged parallels between the Book of Mormon and the ancient world, however, they typically deem them "chance based upon only superficial similarities" or "parallelomania", the result of having predetermined ideas about the subject. Despite the popularity and influence among Latter-day Saints of literature propounding Book of Mormon historicity, not all Mormons who affirm Book of Mormon historicity are persuaded by apologetic work. Some claim historicity more modestly, such as Richard Bushman's statement that "I read the Book of Mormon as informed Christians read the Bible. As I read, I know the arguments against the book's historicity, but I can't help feeling that the words are true and the events happened. I believe it in the face of many questions." Some denominations and adherents of the Latter Day Saint movement consider the Book of Mormon a work of inspired fiction akin to pseudepigrapha or biblical midrash that constitutes scripture by revealing true doctrine about God, similar to a common interpretation of the biblical Book of Job. Many in Community of Christ hold this view, and the leadership takes no official position on Book of Mormon historicity; among lay members, views vary. Some Latter-day Saints consider the Book of Mormon fictional, although this view is marginal in the community at large. Influenced by continental philosophy, a handful of academics argue for understanding the Book of Mormon not as historical or unhistorical (either factual or fictional) but as nonhistorical (existing outside history). In this view, both skeptical and affirmative approaches to Book of Mormon historicity make the same Enlightenment-derived assumptions about scriptures being representations of external reality, whereas a premodern understanding would accept scripture as capable of divinely ordering, rather than simply depicting, reality.
Historical context
American Indian origins
Contact with the indigenous peoples of the Americas prompted intellectual and theological controversy among many Europeans and European Americans who wondered how biblical narratives of world history could account for hitherto unrecognized indigenous societies. From the seventeenth century through the early nineteenth, numerous European and U.S. American writers proposed that ancient Jews, perhaps through the Lost Ten Tribes, were the ancestors of Native Americans. One of the first books to suggest that Native Americans descended from Jews was written by Jewish-Dutch rabbi and scholar Manasseh ben Israel in 1650. Such curiosity and speculation about indigenous origins persisted in the United States into the antebellum period when the Book of Mormon was published, as archaeologist Stephen Williams explains: "relating the American Indians to the Lost Tribes of Israel was supported by many at" the time of the book's production and publication. Although the Book of Mormon did not explicitly identify Native Americans as descendants of the diasporic Israelites in its narrative, nineteenth-century readers consistently drew that conclusion and considered the book theological support for believing American Indians were of Israelite descent. Additionally, European settlers viewed the impressive earthworks left behind by the Mound Builder cultures and had some difficulty believing that the Native Americans, whose numbers had been greatly reduced over the previous centuries, could have produced them.
A common theory was that a more "civilized" and "advanced" people had built them, but were overrun and destroyed by a more savage, numerous group. Some Book of Mormon content resembles this "mound-builder" genre pervasive in the nineteenth century. Historian Curtis Dahl wrote, "Undoubtedly the most famous and certainly the most influential of all Mound-Builder literature is the Book of Mormon (1830). Whether one wishes to accept it as divinely inspired or the work of Joseph Smith, it fits exactly into the tradition." However, the Book of Mormon does not comfortably fit the genre, since, as historian Richard Bushman explains, "When other writers delved into Indian origins, they were explicit about recognizable Indian practices", such as Abner Cole, who dressed characters in moccasins in his parody of the book. Meanwhile, the "Book of Mormon deposited its people on some unknown shore—not even definitely identified as America—and had them live out their history in a remote place in a distant time, using names that had no connections to modern Indians" and without including stereotypical Indian terms, practices, or tropes, suggesting disinterest in making connections to mound-builder tropes. Critique of the United States The Book of Mormon can be read as a critique of the United States during Smith's lifetime. Historian of religion Nathan O. Hatch called the Book of Mormon "a document of profound social protest", and Bushman "found the book thundering no to the state of the world in Joseph Smith's time." In the Jacksonian era of antebellum America, class inequality was a major concern as fiscal downturns and the economy's transition from guild-based artisanship to private business sharpened socioeconomic disparity. Poll taxes in New York limited access to the vote, and the culture of civil discourse and mores surrounding liberty allowed social elites to ignore and delegitimize populist participation in public discourse. Ethnic prejudices were also prominent, as Americans typically stereotyped American Indians as ferocious, lazy, and uncivilized. Meanwhile, some Americans thought antebellum disestablishment and denominational proliferation undermined religious authority through ubiquity, producing sectarian confusion that only obfuscated the path to spiritual security. Against the backdrop of these trends, the Book of Mormon "condemned social inequalities, moral abominations, rejection of revelations and miracles, disrespect for Israel (including the Jews), subjection of the Indians, and the abuse of the continent by interloping European migrants". The book's narratives critique bourgeois public discourse where rules of civil democracy silence the demands of common people, and it advocates for the poor, condemning acquisitiveness as antithetical to righteousness. Within the narrative, Lamanites, whom readers generally identified with American Indians, at times were overwhelmingly righteous, even producing a prophet who preached to backsliding Nephites, and the book declared natives to be the rightful inheritors to and leaders of the North American continent. According to the book, implicitly-European Gentiles had an obligation to serve the native people and join their remnant of covenant Israel or else face a violent downfall like the Nephites of the text. 
In the context of the nineteenth-century United States, the Book of Mormon rejects American denominational pluralism, Enlightenment hegemony, individualistic capitalism, and American nationalism, calling instead for ecclesiastical unity, miraculous religion, communitarian economics, and universal society under God's authority.
Manuscripts
Joseph Smith dictated the Book of Mormon to several scribes over a period of 13 months, resulting in three manuscripts. Upon examination of pertinent historical records, the book appears to have been dictated over the course of 57 to 63 days within that 13-month period. The 116 lost pages contained the first portion of the Book of Lehi; they were lost after Smith loaned the original, uncopied manuscript to Martin Harris. The first completed manuscript, called the original manuscript, was written by a variety of scribes. In October 1841, the entire original manuscript was placed into the cornerstone of the Nauvoo House and sealed up until nearly forty years later, when the cornerstone was reopened. It was then discovered that much of the original manuscript had been destroyed by water seepage and mold. Surviving manuscript pages were handed out to various families and individuals in the 1880s. Only 28 percent of the original manuscript now survives, including a remarkable find of fragments from 58 pages in 1991. The majority of what remains of the original manuscript is now kept in the LDS Church's archives. The second completed manuscript, called the printer's manuscript, was a copy of the original manuscript produced by Oliver Cowdery and two other scribes. It is at this point that initial copyediting of the Book of Mormon was completed. Observations of the original manuscript show little evidence of corrections to the text. Shortly before his death in 1850, Cowdery gave the printer's manuscript to David Whitmer, another of the Three Witnesses. In 1903, the manuscript was bought from Whitmer's grandson by the Reorganized Church of Jesus Christ of Latter Day Saints, now known as the Community of Christ. On September 20, 2017, the LDS Church purchased the manuscript from the Community of Christ at a reported price of $35 million. The printer's manuscript is now the earliest surviving complete copy of the Book of Mormon. The manuscript was imaged in 1923 and has been made available for viewing online. Critical comparisons between surviving portions of the manuscripts show an average of two to three changes per page from the original manuscript to the printer's manuscript, with most changes correcting scribal errors such as misspellings or standardizing grammar in ways inconsequential to the meaning of the text. The printer's manuscript was further edited, adding paragraphing and punctuation to the first third of the text. The printer's manuscript was not used fully in the typesetting of the 1830 version of the Book of Mormon; portions of the original manuscript were also used for typesetting. The original manuscript was used by Smith to further correct errors printed in the 1830 and 1837 versions of the Book of Mormon for the 1840 printing of the book.
Ownership history: Book of Mormon printer's manuscript
In the late 19th century, the extant portion of the printer's manuscript remained with the family of David Whitmer, who had been a principal founder of the Latter Day Saints and who, by the 1870s, led the Church of Christ (Whitmerite).
During the 1870s, according to the Chicago Tribune, the LDS Church unsuccessfully attempted to buy it from Whitmer for a record price. Church president Joseph F. Smith disputed this assertion in a 1901 letter, believing such a manuscript "possesses no value whatever." In 1895, Whitmer's grandson George Schweich inherited the manuscript. By 1903, Schweich had mortgaged the manuscript for $1,800 and, needing to raise at least that sum, sold a collection including 72 percent of the original printer's manuscript (John Whitmer's manuscript history, parts of Joseph Smith's translation of the Bible, manuscript copies of several revelations, and a piece of paper containing copied Book of Mormon characters) to the RLDS Church (now the Community of Christ) for $2,450, with $2,300 of this amount for the printer's manuscript. The LDS Church had not sought to purchase the manuscript. In 2015, this remaining portion was published by the Church Historian's Press in its Joseph Smith Papers series, in Volume Three of "Revelations and Translations"; and, in 2017, the church bought the printer's manuscript for a reported $35 million.
Editions
Chapter and verse notation systems
The original 1830 publication had unnumbered paragraphs (and no verses) which were divided into relatively long chapters. Just as the Bible's present chapter and verse notation system is a later addition of Bible publishers to books that were originally solid blocks of undivided text, the chapter and verse markers within the books of the Book of Mormon are conventions, not part of the original text. The format of the Book of Mormon stayed the same, with citations noted by book and page number (Book of Alma, page 262) or just the page number (page 262). As more editions were made, the references were noted by the edition. In 1852, Franklin D. Richards integrated numbered paragraphs for easier reference. In 1876, Orson Pratt revised the Book of Mormon, and while doing so, created smaller chapters comparable in length to the Bible, and added true versification. In 1908, the RLDS Church revised their edition. While doing so, they added versification similar in breaks to the 1876 edition, but opted to use the original longer chapters. Most modern editions use one of the two, based on their heritage. The editions published by the Community of Christ (1908/AV & 1966/RAV), the RCE, and the Temple Lot edition use the 1908 Authorized Version versification. The LDS Church uses the 1876 Orson Pratt versification.
Church editions
Other editions
Historic editions
The following editions, no longer in publication, marked major developments in the text or reader's helps printed in the Book of Mormon.
Textual criticism
Although some earlier unpublished studies had been prepared, not until the early 1970s was true textual criticism applied to the Book of Mormon. At that time, BYU professor Ellis Rasmussen and his associates were asked by The Church of Jesus Christ of Latter-day Saints (LDS Church) to begin preparation for a new edition of its scriptures. One aspect of that effort entailed digitizing the text and preparing appropriate footnotes; another aspect required establishing the most dependable text. To that latter end, Stanley R. Larson (a Rasmussen graduate student) set about applying modern text-critical standards to the manuscripts and early editions of the Book of Mormon as his thesis project—which he completed in 1974.
Larson carefully examined the original manuscript (the one dictated by Joseph Smith to his scribes) and the printer's manuscript (the copy Oliver Cowdery prepared for the printer in 1829–1830), and compared them with the first, second, and third editions of the Book of Mormon; this was done to determine what sort of changes had occurred over time and to make judgments as to which readings were the most original. Larson proceeded to publish a set of well-argued articles on the phenomena which he had discovered. Many of his observations were included as improvements in the church's 1981 edition of the Book of Mormon. By 1979, with the establishment of the Foundation for Ancient Research and Mormon Studies (FARMS) as a California non-profit research institution, an effort led by Robert F. Smith began to take full account of Larson's work and to publish a critical text of the Book of Mormon. Thus was born the FARMS Critical Text Project, which published the first volume of the three-volume Book of Mormon Critical Text in 1984. The third volume of that first edition was published in 1987, but was already being superseded by a second, revised edition of the entire work, greatly aided through the advice and assistance of a team that included Yale doctoral candidate Grant Hardy, Dr. Gordon C. Thomasson, Professor John W. Welch (the head of FARMS), and Professor Royal Skousen. However, these were merely preliminary steps to a far more exacting and all-encompassing project. In 1988, with that preliminary phase of the project completed, Skousen took over as editor and head of the FARMS Critical Text of the Book of Mormon Project and proceeded to gather still scattered fragments of the original manuscript of the Book of Mormon and to have advanced photographic techniques applied to obtain fine readings from otherwise unreadable pages and fragments. He also closely examined the printer's manuscript (then owned by the RLDS Church) for differences in types of ink or pencil, in order to determine when and by whom corrections were made. He also collated the various editions of the Book of Mormon down to the present to see what sorts of changes have been made through time. Skousen and the Critical Text Project have published complete transcripts of the Original and Printer's Manuscripts (volumes I and II), parts of a history of the text (volume III), and a six-part analysis of textual variants (volume IV). The remainder of the eight-part history of the text and a complete electronic collation of editions and manuscripts (volume V of the Project) remain forthcoming. In 2009, Yale University published an edition of the Book of Mormon which incorporates all aspects of Skousen's research. Differences between the original and printer's manuscript, the 1830 printed version, and modern versions of the Book of Mormon have led some critics to claim that evidence which could have proven that Smith fabricated the Book of Mormon has been systematically removed, or that the changes are attempts to hide embarrassing aspects of the church's past. Latter-day Saint scholars view the changes as superficial, done to clarify the meaning of the text.
Non-English translations
The Latter-day Saint version of the Book of Mormon has been translated into 83 languages, and selections have been translated into an additional 25 languages. In 2001, the LDS Church reported that all or part of the Book of Mormon was available in the native language of 99 percent of Latter-day Saints and 87 percent of the world's total population.
Translations into languages without a tradition of writing (e.g., Kaqchikel, Tzotzil) have been published as audio recordings and as transliterations with Latin characters. Translations into American Sign Language are available as video recordings. Typically, translators are Latter-day Saints who are employed by the church and translate the text from the original English. Each manuscript is reviewed several times before it is approved and published. In 1998, the church stopped translating selections from the Book of Mormon and announced that instead each new translation it approves would be a full edition.
Representations in media
Artists have depicted Book of Mormon scenes and figures in visual art since the beginnings of the Latter Day Saint movement. The nonprofit Book of Mormon Art Catalog documents the existence of at least 2,500 visual depictions of Book of Mormon content. According to art historian Jenny Champoux, early artwork of the Book of Mormon relied on European iconography; eventually, a distinctive "Latter-day Saint style" developed. Events of the Book of Mormon are the focus of several films produced by the LDS Church, including The Life of Nephi (1915), How Rare a Possession (1987), and The Testaments of One Fold and One Shepherd (2000). Depictions of Book of Mormon narratives in films not officially commissioned by the church (sometimes colloquially known as Mormon cinema) include The Book of Mormon Movie, Vol. 1: The Journey (2003) and Passage to Zarahemla (2007). In "one of the most complex uses of Mormonism in cinema," Alfred Hitchcock's film Family Plot portrays a funeral service in which a priest (apparently non-Mormon, by his appearance) reads Second Nephi 9:20–27, a passage describing Jesus Christ having victory over death. In 2011, a long-running religious satire musical titled The Book of Mormon, written by South Park creators Trey Parker and Matt Stone in collaboration with Robert Lopez, premiered on Broadway, winning nine Tony Awards, including Best Musical. Its London production won the Olivier Award for best musical. Although it is titled The Book of Mormon, the musical does not depict Book of Mormon content. Its plot tells an original story about Latter-day Saint missionaries in the twenty-first century. In 2019, the church began producing a series of live-action adaptations of various stories within the Book of Mormon, titled Book of Mormon Videos, which it distributed on its website and YouTube channel.
Distribution
The LDS Church distributes free copies of the Book of Mormon, and it reported in 2011 that 150 million copies of the book had been printed since its initial publication. The initial printing of the Book of Mormon in 1830 produced 5,000 copies. The 50 millionth copy was printed in 1990, the 100 millionth in 2000, and the 150 millionth in 2011. In October 2020, the church announced it had printed over 192 million copies of the Book of Mormon.
See also
Journal of Book of Mormon Studies
List of Gospels
Studies of the Book of Mormon
List of Book of Mormon places
External links
Book of Mormon (the current official edition of The Church of Jesus Christ of Latter-day Saints)
Project Gutenberg has the full text of the Book of Mormon in various formats (LDS chapters and numbering)
RLDS 1908 Book of Mormon (RLDS chapters and numbering)
The Book of Mormon; An Account Written By the Hand of Mormon Upon Plates Taken From the Plates of Nephi. From the Collections at the Library of Congress
Photographs and transcription of the printer's manuscript of the Book of Mormon by the Joseph Smith Papers
Photocopies and transcription of the 1830 edition of the Book of Mormon by the Joseph Smith Papers
Photographs and transcription of the 1840 edition of the Book of Mormon by the Joseph Smith Papers
Book of Mormon Art Catalog database of known works of visual art depicting Book of Mormon content
Buffalo is the second-most populous city in the U.S. state of New York and the seat of Erie County. It lies in Western New York, at the eastern end of Lake Erie, at the head of the Niagara River, on the United States border with Canada. With a population of 278,349 according to the 2020 census, Buffalo is the 78th-largest city in the United States. Buffalo and the city of Niagara Falls together make up the two-county Buffalo–Niagara Falls Metropolitan Statistical Area (MSA), which had an estimated population of 1.2 million in 2020, making it the 49th-largest MSA in the United States. Before the 17th century, the region was inhabited by nomadic Paleo-Indians who were succeeded by the Neutral, Erie, and Iroquois nations. In the early 17th century, the French began to explore the region. In the 18th century, Iroquois land surrounding Buffalo Creek was ceded through the Holland Land Purchase, and a small village was established at its headwaters. In 1825, after its harbor was improved, Buffalo was selected as the terminus of the Erie Canal, which led to its incorporation in 1832. The canal stimulated its growth as the primary inland port between the Great Lakes and the Atlantic Ocean. Transshipment made Buffalo the world's largest grain port of that era. After the coming of railroads greatly reduced the canal's importance, the city became the second-largest railway hub (after Chicago). During the mid-19th century, Buffalo transitioned to manufacturing, which came to be dominated by steel production. Later, deindustrialization and the opening of the St. Lawrence Seaway saw the city's economy decline and diversify. It developed its service industries, such as health care, retail, tourism, logistics, and education, while retaining some manufacturing. In 2019, the gross domestic product of the Buffalo–Niagara Falls MSA was $53 billion. The city's cultural landmarks include the oldest urban parks system in the United States, the Buffalo AKG Art Museum, the Buffalo Philharmonic Orchestra, Shea's Performing Arts Center, the Buffalo Museum of Science, and several annual festivals. Its educational institutions include the University at Buffalo, Buffalo State University, Canisius College, D'Youville University, and Medaille College. Buffalo is also known for its winter weather, Buffalo wings, and three major-league sports teams: the National Football League's Buffalo Bills, the National Hockey League's Buffalo Sabres, and the National Lacrosse League's Buffalo Bandits.
History
Pre-Columbian era to European exploration
Before the arrival of Europeans, nomadic Paleo-Indians inhabited the western New York region from the 8th millennium BCE. The Woodland period began around 1000 BC, marked by the rise of the Iroquois Confederacy and the spread of its tribes throughout the state. Seventeenth-century Jesuit missionaries were the first Europeans to visit the area. During French exploration in 1620, the region was sparsely populated and occupied by the agrarian Erie people in the south and the Wenrohronon (Wenro) of the Neutral Nation in the north. The Neutral grew tobacco and hemp to trade with the Iroquois, who traded furs with the French for European goods. The tribes used animal and war paths to travel and move goods across what today is New York State. (Centuries later, these same paths were gradually improved, then paved, then developed into major modern roads.)
During the Beaver Wars in the mid-17th century, the Senecas partly wiped out and partly absorbed the Erie and Neutrals in the region. Native Americans did not settle along Buffalo Creek permanently until 1780, when displaced Senecas were relocated from Fort Niagara. Louis Hennepin and Sieur de La Salle explored the upper Niagara and Ontario regions in the late 1670s. In 1679, La Salle's ship, Le Griffon, became the first to sail above Niagara Falls near Cayuga Creek. Baron de Lahontan visited the site of Buffalo in 1687. A small French settlement along Buffalo Creek lasted for only a year (1758). After the French and Indian War, the region was ruled by Britain. After the American Revolution, the Province of New York—now a U.S. state—began westward expansion, looking for arable land by following the Iroquois. New York and Massachusetts were vying for the territory which included Buffalo, and Massachusetts had the right to purchase all but a one-mile-wide (1,600-meter) portion of land. The rights to the Massachusetts territories were sold to Robert Morris in 1791. Despite objections from Seneca chief Red Jacket, Morris brokered a deal between fellow chief Cornplanter and the Dutch dummy corporation Holland Land Company. The Holland Land Purchase gave the Senecas three reservations, and the Holland Land Company received the remaining land for about thirty-three cents per acre. Permanent white settlers along the creek were prisoners captured during the Revolutionary War. Early landowners were Iroquois interpreter Captain William Johnston, formerly enslaved man Joseph "Black Joe" Hodges, and Cornelius Winney, a Dutch trader who arrived in 1789. As a result of the war, in which the Iroquois sided with the British Army, Iroquois territory was gradually reduced in the late 1700s by European settlers through successive statewide treaties which included the Treaty of Fort Stanwix (1784) and the First Treaty of Buffalo Creek (1788). The Iroquois were moved onto reservations, including Buffalo Creek. By the end of the 18th century, only of reservations remained. After the Treaty of Big Tree removed Iroquois title to lands west of the Genesee River in 1797, Joseph Ellicott surveyed land at the mouth of Buffalo Creek. In the middle of the village was an intersection of eight streets at present-day Niagara Square. Originally named New Amsterdam, the village was soon renamed Buffalo.
Erie Canal, grain and commerce
The village of Buffalo was named for Buffalo Creek. British military engineer John Montresor referred to "Buffalo Creek" in his 1764 journal, the earliest recorded appearance of the name. A road to Pennsylvania from Buffalo was built in 1802 for migrants traveling to the Connecticut Western Reserve in Ohio. Before an east–west turnpike across the state was completed, traveling from Albany to Buffalo would take a week; a trip from nearby Williamsville to Batavia could take over three days. British forces burned Buffalo and the northwestern village of Black Rock in 1813. The battle and subsequent fire were in response to the destruction of Niagara-on-the-Lake by American forces and other skirmishes during the War of 1812. Rebuilding was swift, completed in 1815. Residents of the remote outpost hoped that the proposed Erie Canal would bring prosperity to the area. To accomplish this, Buffalo's harbor was expanded with the help of Samuel Wilkeson, and the village was selected as the canal's terminus over the rival Black Rock. The canal opened in 1825, ushering in commerce, manufacturing and hydropower.
By the following year, the Buffalo Creek Reservation (at the western border of the village) was transferred to Buffalo. Buffalo was incorporated as a city in 1832. During the 1830s, businessman Benjamin Rathbun significantly expanded its business district. The city doubled in size from 1845 to 1855. Almost two-thirds of the city's population was foreign-born, largely a mix of unskilled (or uneducated) Irish and German Catholics. Fugitive slaves made their way north to Buffalo during the 1840s. Buffalo was a terminus of the Underground Railroad, with many free blacks crossing the Niagara River to Fort Erie, Ontario; others remained in Buffalo. During this time, Buffalo's port continued to develop. Passenger and commercial traffic expanded, leading to the creation of feeder canals and the expansion of the city's harbor. Unloading grain in Buffalo was a laborious job, and grain handlers working on lake freighters would make $1.50 a day in a six-day work week. Local inventor Joseph Dart and engineer Robert Dunbar created the grain elevator in 1843, adapting the steam-powered elevator. Dart's Elevator initially processed one thousand bushels per hour, speeding global distribution to consumers. Buffalo was the transshipment hub of the Great Lakes, and weather, maritime and political events in other Great Lakes cities had a direct impact on the city's economy. In addition to grain, Buffalo's primary imports included agricultural products from the Midwest (meat, whiskey, lumber and tobacco), and its exports included leather, ships and iron products. The mid-19th century saw the rise of new manufacturing capabilities, particularly with iron. By the 1860s, many railroads terminated in Buffalo; they included the Buffalo, Bradford and Pittsburgh Railroad, the Buffalo and Erie Railroad, the New York Central Railroad, and the Lehigh Valley Railroad. During this time, Buffalo controlled one-quarter of all shipping traffic on Lake Erie. After the Civil War, canal traffic began to drop as railroads expanded into Buffalo. Unionization began to take hold in the late 19th century, highlighted by the Great Railroad Strike of 1877 and the 1892 Buffalo switchmen's strike.
Steel, challenges, and the modern era
At the start of the 20th century, Buffalo was the world's leading grain port and a national flour-milling hub. Local mills were among the first to benefit from hydroelectricity generated by the Niagara River. Buffalo hosted the 1901 Pan-American Exposition after the Spanish–American War, showcasing the nation's advances in art, architecture, and electricity. Its centerpiece was the Electric Tower, with over two million light bulbs, but some exhibits were jingoistic and racially charged. At the exposition, President William McKinley was assassinated by anarchist Leon Czolgosz. When McKinley died, Theodore Roosevelt was sworn in at the Wilcox Mansion in Buffalo. Attorney John Milburn and local industrialists convinced the Lackawanna Iron and Steel Company to relocate from Scranton, Pennsylvania, to the town of West Seneca in 1904. Employment was competitive, with many Eastern Europeans and Scrantonians vying for jobs. From the late 19th century to the 1920s, mergers and acquisitions led to distant ownership of local companies; this had a negative effect on the city's economy. Examples include the acquisition of Lackawanna Steel by Bethlehem Steel and, later, the relocation of Curtiss-Wright in the 1940s. The Great Depression saw severe unemployment, especially among the working class.
New Deal relief programs operated in full force, and the city became a stronghold of labor unions and the Democratic Party. During World War II, Buffalo regained its manufacturing strength as military contracts enabled the city to produce steel, chemicals, aircraft, trucks and ammunition. The 15th-most-populous US city in 1950, Buffalo relied almost entirely on manufacturing; eighty percent of area jobs were in the sector. The city also had over a dozen railway terminals, as railroads remained a significant industry. The St. Lawrence Seaway was proposed in the 19th century as a faster shipping route to Europe, and later as part of a bi-national hydroelectric project with Canada. Its combination with an expanded Welland Canal led to a grim outlook for Buffalo's economy. After the seaway's 1959 opening, the city's port and barge canal became largely irrelevant. Shipbuilding in Buffalo wound down in the 1960s due to reduced waterfront activity, ending an industry which had been part of the city's economy since 1812. Downsizing of the steel mills was attributed to the threat of higher wages and unionization efforts. Racial tensions culminated in riots in 1967. Suburbanization led to the selection of the town of Amherst for the new University at Buffalo campus by 1970. Unwilling to modernize its plant, Bethlehem Steel began cutting thousands of jobs in Lackawanna during the mid-1970s before closing the plant in 1983. The region lost at least 70,000 jobs between 1970 and 1984. Like much of the Rust Belt, Buffalo has focused on recovering from the effects of late-20th-century deindustrialization. Geography Topography Buffalo is on the eastern end of Lake Erie, opposite Fort Erie, Ontario. It is at the head of the Niagara River, which flows north over Niagara Falls into Lake Ontario. The Buffalo metropolitan area is on the Erie/Ontario Lake Plain of the Eastern Great Lakes Lowlands, a narrow plain extending east to Utica, New York. The city is generally flat, except for elevation changes in the University Heights and Fruit Belt neighborhoods. The Southtowns are hillier, leading to the Cattaraugus Hills in the Appalachian Upland. Several types of shale, limestone and lagerstätten are prevalent in Buffalo and its surrounding area, lining the stream beds. According to Fox Weather, Buffalo is one of the top five snowiest large cities in the country, receiving, on average, 95 inches of snow annually. Although the city has not experienced any recent or significant earthquakes, Buffalo is in the Southern Great Lakes Seismic Zone (part of the Great Lakes tectonic zone). Buffalo has four channels within its boundaries: the Niagara River, the Buffalo River (and Creek), Scajaquada Creek, and the Black Rock Canal, adjacent to the Niagara River. The city's Bureau of Forestry maintains a database of over seventy thousand trees. According to the United States Census Bureau, 22.66 percent of the city's total area is water and the rest is land. In 2010, its population density was 6,470.6 per square mile. Cityscape Buffalo's architecture is diverse, with a collection of 19th- and 20th-century buildings. Downtown Buffalo landmarks include Louis Sullivan's Guaranty Building, an early skyscraper; the Ellicott Square Building, once one of the largest of its kind in the world; the Art Deco Buffalo City Hall; the McKinley Monument; and the Electric Tower. 
Beyond downtown, the Buffalo Central Terminal was built in the Broadway-Fillmore neighborhood in 1929; the Richardson Olmsted Complex, built in 1881, was an insane asylum until its closure in the 1970s. Urban renewal from the 1950s to the 1970s spawned the Brutalist-style Buffalo City Court Building and Seneca One Tower, the city's tallest building. In the city's Parkside neighborhood, the Darwin D. Martin House was designed by Frank Lloyd Wright in his Prairie School style. Since 2016, Washington, DC, real estate developer Douglas Jemal has been acquiring and redeveloping iconic properties throughout the city. Neighborhoods According to Mark Goldman, the city has a "tradition of separate and independent settlements". The boundaries of Buffalo's neighborhoods have changed over time. The city is divided into five districts, each containing several neighborhoods, for a total of thirty-five neighborhoods. Main Street divides Buffalo's east and west sides, and the west side was fully developed earlier. This division is seen in architectural styles, street names, neighborhood and district boundaries, demographics, and socioeconomic conditions; Buffalo's West Side is generally more affluent than its East Side. Several neighborhoods in Buffalo have had increased investment since the 1990s, beginning with the Elmwood Village. The 2002 redevelopment of the Larkin Terminal Warehouse led to the creation of Larkinville, home to several mixed-use projects and anchored by corporate offices. Downtown Buffalo and its central business district (CBD) had a 10.6-percent increase in residents from 2010 to 2017, as over 1,061 housing units became available; the Seneca One Tower was redeveloped in 2020. Other revitalized areas include Chandler Street, in the Grant-Amherst neighborhood, and Hertel Avenue in Parkside. The Buffalo Common Council adopted its Green Code in 2017, replacing zoning regulations which were over sixty years old. Its emphasis on regulations promoting pedestrian safety and mixed land use received an award at the 2019 Congress for the New Urbanism conference. Climate Buffalo has a humid continental climate (Köppen: Dfb/Dfa), and temperatures have been warming with the rest of the US. Lake-effect snow is characteristic of Buffalo winters, with snow bands (producing intense snowfall in the city and surrounding area) depending on wind direction off Lake Erie. However, Buffalo is rarely the snowiest city in the state. The Blizzard of 1977 resulted from a combination of high winds and snow which had accumulated on land and on the frozen Lake Erie. Although snow does not typically impair the city's operation, it can cause significant damage in autumn (as the October 2006 storm did). In November 2014, the region had a record-breaking storm (nicknamed "Snowvember") which produced several feet of snow. Buffalo's lowest recorded temperature occurred twice: on February 9, 1934, and February 2, 1961. Although the city's summers are drier and sunnier than those of other cities in the northeastern United States, its vegetation receives enough precipitation to remain hydrated. Buffalo summers are characterized by abundant sunshine, with moderate humidity and temperatures; the city benefits from cool, southwestern Lake Erie summer breezes which temper warmer temperatures. Unusually hot days occur an average of only three times a year. No official temperature of 100 °F (38 °C) or more has been recorded to date; the city's maximum temperature was reached on August 27, 1948. 
Rainfall is moderate, typically falling at night, and cooler lake temperatures hinder storm development in July. August is usually rainier and muggier, as the warmer lake loses its temperature-controlling ability. Demographics Several hundred Seneca, Tuscarora and other Iroquois tribal peoples were the primary residents of the Buffalo area before 1800, concentrated along Buffalo Creek. After the Revolutionary War, settlers from New England and eastern New York began to move into the area. From the 1830s to the 1850s, they were joined by Irish and German immigrants from Europe, both peasants and working class, who settled in enclaves on the city's south and east sides. At the turn of the 20th century, Polish immigrants replaced Germans on the East Side as the latter moved to newer housing; Italian immigrant families settled throughout the city, primarily on the lower West Side. During the 1830s, Buffalo residents were generally intolerant of the small groups of Black Americans who began settling on the city's East Side. In the 20th century, wartime and manufacturing jobs attracted Black Americans from the South during the First and Second Great Migrations. In the World War II and postwar years from 1940 to 1970, the city's Black population rose by 433 percent. They replaced most of the Polish community on the East Side, who were moving out to the suburbs. However, the effects of redlining, steering, social inequality, blockbusting, white flight and other racial policies resulted in the city (and region) becoming one of the most segregated in the U.S. During the 1940s and 1950s, Puerto Rican migrants arrived en masse, also seeking industrial jobs, settling on the East Side and moving westward. In the 21st century, Buffalo is classified as a majority-minority city, with a plurality of residents who are Black and Latino. Since the 1970s, Buffalo has experienced urban decay, population loss to the suburbs and Sun Belt states, and job losses from deindustrialization. The city's population peaked at 580,132 in 1950, when Buffalo was the 15th-largest city in the United States, down from the eighth-largest in 1900, after its growth rate slowed during the 1920s. Buffalo's population began declining in the second half of the 20th century, due to suburbanization and the loss of industrial jobs, and is now less than half its 1950 peak. Buffalo finally saw a population gain of 6.5% in the 2020 census, reversing a decades-long trend of population decline. The city has 278,349 residents as of the 2020 census, making it the 76th-largest city in the United States. Its metropolitan area had 1.1 million residents in 2020, the country's 49th-largest. Compared to other major US metropolitan areas, the number of foreign-born immigrants to Buffalo is low. New immigrants are primarily resettled refugees (especially from war- or disaster-affected nations) and refugees who had previously settled in other U.S. cities. During the early 2000s, most immigrants came from Canada and Yemen; this shifted in the 2010s to Burmese (Karen) refugees and Bangladeshi immigrants. Between 2008 and 2016, Burmese, Somali, Bhutanese, and Iraqi Americans were the four largest ethnic immigrant groups in Erie County. Poverty has remained an issue for the city; in 2019, it was estimated that 30.1 percent of individuals and 24.8 percent of families lived below the federal poverty line. Per capita income was $24,400 and household income was $37,354, both much less than the national averages. 
A 2008 report noted that although Buffalo lacked the food deserts seen in larger cities, the city's neighborhoods of color had access only to smaller grocery stores and lacked the supermarkets more typical of newer, white neighborhoods. A 2018 report noted that over fifty city blocks on Buffalo's East Side lacked adequate access to a supermarket. Health disparities exist compared to the rest of the state: Erie County's average life expectancy in 2019 was three years lower (78.4 years), and its 17-percent smoking and 30-percent obesity rates were slightly higher than the state averages. According to the Partnership for the Public Good, educational achievement in the city is lower than in the surrounding area; city residents are almost twice as likely as adults in the metropolitan area to lack a high-school diploma. Religion During the early 19th century, Presbyterian missionaries tried to convert the Seneca people on the Buffalo Creek Reservation to Christianity. Initially resistant, some tribal members set aside their traditions and practices to form their own sect. Later, European immigrants added other faiths. Christianity is the predominant religion in Buffalo and Western New York. Catholicism (primarily the Latin Church) has a significant presence in the region, with 161 parishes and over 570,000 adherents in the Diocese of Buffalo. A Jewish community began developing in the city with immigrants from the mid-1800s; about one thousand German and Lithuanian Jews settled in Buffalo before 1880. Buffalo's first synagogue, Temple Beth El, was established in 1847. The city's Temple Beth Zion is the region's largest synagogue. With changing demographics and an increased number of refugees from other areas on the city's East Side, Islam and Buddhism have expanded their presence. In this area, new residents have converted empty churches into mosques and temples. Hinduism maintains a small, active presence in the area, including the town of Amherst. A 2016 American Bible Society survey reported that Buffalo is the fifth-least "Bible-minded" city in the United States; 13 percent of its residents associate with the Bible. Economy The Erie Canal was the impetus for Buffalo's economic growth as a transshipment hub for grain and other agricultural products headed east from the Midwest. Later, manufacturing of steel and automotive parts became central to the city's economy. When these industries downsized in the region, Buffalo's economy became service-based. Its primary sectors include health care, business services (banking, accounting, and insurance), retail, tourism and logistics, especially with Canada. Despite the loss of large-scale manufacturing, some manufacturing of metals, chemicals, machinery, food products, and electronics remains in the region. Advanced manufacturing has increased, with an emphasis on research and development (R&D) and automation. In 2019, the U.S. Bureau of Economic Analysis valued the gross domestic product (GDP) of the Buffalo–Niagara Falls MSA at $53 billion. The civic sector is a major source of employment in the Buffalo area, and includes public, non-profit, healthcare and educational institutions. New York State, with over 19,000 employees, is the region's largest employer. In the private sector, top employers include the Kaleida Health and Catholic Health hospital networks and M&T Bank, the sole Fortune 500 company headquartered in the city. Most have been top employers in the region for several decades. 
Buffalo is home to the headquarters of Rich Products, Delaware North and New Era Cap Company; the aerospace manufacturer Moog Inc. and toy maker Fisher-Price are based in nearby East Aurora. National Fuel Gas and Life Storage are headquartered in Williamsville, New York. Buffalo weathered the Great Recession of 2007–09 well in comparison with other U.S. cities, exemplified by increased home prices during this time. The region's economy began to improve in the early 2010s, adding over 25,000 jobs from 2009 to 2017. With state aid, Tesla, Inc.'s Giga New York plant opened in South Buffalo in 2017. The effects of the COVID-19 pandemic in the United States, however, increased the local unemployment rate to 7.5 percent by December 2020. The local unemployment rate had been 4.2 percent in 2019, higher than the national average of 3.5 percent. The Buffalo area has a larger pay disparity than the rest of the U.S. The average salary ($43,580) was six percent less than the national average in 2017, with the pay gap increasing to ten percent with increased career specialization. Workforce productivity is higher and turnover lower than in other regions. Culture Performing arts and music Buffalo is home to over 20 theater companies, with many centered in the downtown Theatre District. Shea's Performing Arts Center is the city's largest theater. Built in 1926 with an interior designed by Louis Comfort Tiffany, the theater presents Broadway musicals and concerts. Shakespeare in Delaware Park has been held outdoors every summer since 1976. Stand-up comedy can be found throughout the city and is anchored by Helium Comedy Club, which hosts both local talent and national touring acts. The Nickel City Opera (NCO) was founded in 2004 by Valerian Ruminski and performs at Shea's Performing Arts Center. Matthias Manasi was music director of NCO from 2017 to 2021; his predecessor, Michael Ching, held the post from 2012 to 2017. NCO's repertoire ranges from 18th-century Baroque and 19th-century bel canto to 20th-century Minimalism and contemporary operas of the 20th and 21st centuries. The NCO has commissioned operas and has staged world premieres of notable works. The Buffalo Philharmonic Orchestra was formed in 1935 and performs at Kleinhans Music Hall, whose acoustics have been praised. Although the orchestra nearly disbanded during the late 1990s due to a lack of funding, philanthropic contributions and state aid stabilized it. Under the direction of JoAnn Falletta, the orchestra has received a number of Grammy Award nominations and won the Grammy Award for Best Contemporary Classical Composition in 2009. KeyBank Center draws national music acts year-round. Sahlen Field hosts the annual WYRK Taste of Country music festival every summer with national country music acts. Canalside regularly hosts outdoor summer concerts, a tradition that spun off from the defunct Thursday at the Square concert series. The Colored Musicians Club, an extension of what was once a separate musicians'-union chapter, maintains the city's jazz history. Rick James was born and raised in Buffalo and later lived on a ranch in the nearby Town of Aurora. James formed his Stone City Band in Buffalo and had national appeal with several crossover singles in the R&B, disco and funk genres in the late 1970s and early 1980s. Around the same time, the jazz fusion band Spyro Gyra and jazz saxophonist Grover Washington Jr. also got their start in the city. 
The Goo Goo Dolls, an alternative rock group which formed in 1986, had 19 top-ten singles. Singer-songwriter and activist Ani DiFranco has released over 20 folk and indie rock albums on Righteous Babe Records, her Buffalo-based label. Underground hip-hop acts in the city partner with Buffalo-based Griselda Records, whose artists include Westside Gunn and Conway the Machine, and occasionally refer to Buffalo culture in their lyrics. Cuisine The city's cuisine encompasses a variety of cultures and ethnicities. In 2015, the National Geographic Society ranked Buffalo third on its "World's Top Ten Food Cities" list. Teressa Bellissimo first prepared Buffalo wings (seasoned chicken wings) at the Anchor Bar in 1964. The Anchor Bar has a crosstown rivalry with Duff's Famous Wings, but Buffalo wings are served at many bars and restaurants throughout the city (some with unique cooking styles and flavor profiles). Buffalo wings are traditionally served with blue cheese dressing and celery. In 2003, the Anchor Bar received a James Beard Foundation Award in the America's Classics category. The Buffalo area has over 600 pizzerias, estimated at more per capita than New York City. Several craft breweries began opening in the 1990s, and the city's last call is 4 am. Other mainstays of Buffalo cuisine include beef on weck, butter lambs, kielbasa, pierogi, sponge candy, chicken finger subs (including the stinger - a version that also includes steak), and the fish fry (popular any time of year, but especially during Lent). With an influx of refugees and other immigrants to Buffalo, its number of ethnic restaurants (including the West Side Bazaar kitchen incubator) has increased. Some restaurants use food trucks to serve customers, and nearly fifty food trucks appeared at Larkin Square in 2019. Museums and tourism Buffalo was ranked the seventh-best city in the United States to visit in 2021 by Travel + Leisure, which noted the growth and potential of the city's cultural institutions. The Albright–Knox Art Gallery is a modern and contemporary art museum with a collection of more than 8,000 works, of which only two percent are on display. With a donation from Jeffrey Gundlach, a three-story addition designed by the Dutch architectural firm OMA is under construction and scheduled to open in 2022. Across the street, the Burchfield Penney Art Center contains paintings by Charles E. Burchfield and is operated by Buffalo State College. Buffalo is home to the Freedom Wall, a 2017 art installation commemorating civil-rights activists throughout history. Near both museums is the Buffalo History Museum, featuring artwork, literature and exhibits related to the city's history and major events, and the Buffalo Museum of Science is on the city's East Side. Canalside, Buffalo's historic business district and harbor, attracts more than 1.5 million visitors annually. It includes the Explore & More Children's Museum, the Buffalo and Erie County Naval & Military Park, LECOM Harborcenter, and a number of shops and restaurants. A restored 1924 carousel (now solar-powered) and a replica boathouse were added to Canalside in 2021. Other city attractions include the Theodore Roosevelt Inaugural National Historic Site, the Michigan Street Baptist Church, Buffalo RiverWorks, Seneca Buffalo Creek Casino, Buffalo Transportation Pierce-Arrow Museum, and the Nash House Museum. The National Buffalo Wing Festival is held every Labor Day at Highmark Stadium. 
Since 2002, it has served over 4.8 million Buffalo wings and has had a total attendance of 865,000. The Taste of Buffalo is a two-day food festival held in July at Niagara Square, attracting 450,000 visitors annually. Other events include the Allentown Art Festival, the Polish-American Dyngus Day, the Elmwood Avenue Festival of the Arts, Juneteenth in Martin Luther King Jr. Park, the World's Largest Disco in October, and the Friendship Festival in summer, which celebrates Canada–US relations. Sports Buffalo has two major professional sports teams: the Buffalo Sabres (National Hockey League) and the Buffalo Bills (National Football League). The Bills were a founding member of the American Football League in 1960, and have played at Highmark Stadium in Orchard Park since they moved from War Memorial Stadium in 1973. They are the only NFL team based in New York State. Before the Super Bowl era, the Bills won the American Football League Championship in 1964 and 1965. With mixed success throughout their history, the Bills had a close loss in Super Bowl XXV and returned to consecutive Super Bowls after the 1991, 1992, and 1993 seasons (losing each time). The Sabres, an expansion team in 1970, share KeyBank Center with the Buffalo Bandits of the National Lacrosse League. The Bandits are the most decorated of the city's professional teams, with five championships. The Bills, Sabres and Bandits are owned by Pegula Sports and Entertainment. Several colleges and universities in the area field intercollegiate sports teams; the Buffalo Bulls and the Canisius Golden Griffins compete in NCAA Division I. The Bulls have 16 varsity sports in the Mid-American Conference (MAC); the Golden Griffins field 15 teams in the Metro Atlantic Athletic Conference (MAAC), with the men's hockey team part of the Atlantic Hockey Association (AHA). The Bulls participate in the Football Bowl Subdivision, the highest level of college football. Buffalo's minor-league teams include the Buffalo Bisons (Triple-A baseball), who play at Sahlen Field, and the Buffalo eXtreme (American Basketball Association). Parks and recreation Frederick Law Olmsted described Buffalo as being "the best planned city [...] in the United States, if not the world". With encouragement from city stakeholders, he and Calvert Vaux augmented the city's grid plan by drawing inspiration from Paris and introducing landscape architecture with aspects of the countryside. Their plan would introduce a system of interconnected parks, parkways and trails, unlike the singular Central Park in New York City. The largest would be Delaware Park, located across from Forest Lawn Cemetery to amplify the amount of open space. With construction of the system finishing in 1876, it is regarded as the country's oldest; however, some of Olmsted's plans were never fully realized. Some parks later diminished and succumbed to diseases, highway construction, and weather events such as Lake Storm Aphid in 2006. The non-profit Buffalo Olmsted Park Conservancy was created in 2004 to help preserve the parkland. Olmsted's work in Buffalo inspired similar efforts in cities such as San Francisco, Chicago, and Boston. The city's Division of Parks and Recreation manages over 180 parks and facilities, seven recreational centers, twenty-one pools and splash pads, and three ice rinks. Delaware Park features the Buffalo Zoo, Hoyt Lake, a golf course, and playing fields. 
In 1970, Buffalo collaborated with its sister city Kanazawa to create the park's Japanese Garden, where cherry blossoms bloom in the spring. Opened in 1976, Tifft Nature Preserve in South Buffalo occupies remediated industrial land. The preserve is an Important Bird Area and includes marshland, fishing areas, and a meadow with trails for hiking and cross-country skiing. The Olmsted-designed Cazenovia and South Parks, the latter home to the Buffalo and Erie County Botanical Gardens, are also in South Buffalo. According to the Trust for Public Land, Buffalo's 2022 ParkScore ranking had high marks for access to parks, with 89 percent of city residents living within a ten-minute walk from a park. The city ranked lower in acreage, however; nine percent of city land is devoted to parks, compared with the national median of about fifteen percent. Efforts to convert Buffalo's former industrial waterfront into recreational space have attracted national attention, with some writers comparing its appeal to that of Niagara Falls. Redevelopment of the waterfront began in the early 2000s, with the reconstruction of historically aligned canals on the site of the former Buffalo Memorial Auditorium. Placemaking initiatives, rather than permanent buildings and attractions, would lead to the area's popularity. Under Mayor Byron Brown, Canalside was cited by the Brookings Institution as an example of waterfront revitalization for other U.S. cities to follow. Summer events have included paddle-boating and fitness classes, and the frozen canals permit ice skating, curling, and ice cycling in winter. Its success spurred the state to create Buffalo Harbor State Park in 2014; the park has trails, open recreation areas, bicycle paths and piers. The park's Gallagher Beach, the city's only public beach, has prohibited swimming due to high bacteria levels and other environmental concerns. The Shoreline Trail passes through Buffalo near the Outer Harbor, Centennial Park, and the Black Rock Canal. The North Buffalo–Tonawanda rail trail begins in Shoshone Park, near the LaSalle metro station in North Buffalo. Government Buffalo has a strong mayor–council government. As the chief executive of city government, the mayor oversees the heads of the city's departments, participates in ceremonies, boards and commissions, and serves as the liaison between the city and local cultural institutions. Some agencies, including utilities, urban renewal and public housing, are state- and federally-funded public benefit corporations semi-independent of city government. Byron Brown, the city's first African American mayor, has held the office since 2006, longer than anyone else. Brown, defeated by India Walton in the 2021 mayoral primary election, began a write-in campaign for the general election; his victory, with just under 60% of the votes, denied Walton the chance to become the first female and first socialist mayor of Buffalo. No Republican has been mayor of Buffalo since Chester A. Kowal in 1965. With its nine districts, the Buffalo Common Council enacts laws, levies taxes, and approves mayoral appointees and the city budget. Pastor Darius Pridgen has been the Common Council president since 2014. Generally reflecting the city's electorate, all nine council members belong to the Democratic Party. Buffalo is the Erie County seat, and is within five of the county's eleven legislative districts. The city is part of the Eighth Judicial District. 
Court cases handled at the city level include misdemeanors, violations, housing matters, and claims under $15,000; more severe cases are handled at the county level. Buffalo is represented by members of the New York State Assembly and New York State Senate. At the federal level, the city takes up most of its congressional district and has been represented by Democrat Brian Higgins since 2005. Federal offices in the city include the Buffalo District of the United States Army Corps of Engineers' Great Lakes and Ohio River Division, the Federal Bureau of Investigation, and the United States District Court for the Western District of New York. In 2020, the city spent $519 million on the effects of the COVID-19 pandemic, supplemented by about $50 million in federal stimulus money. The proposed budget includes a slight increase in the commercial tax and a slight decrease in the residential tax to compensate for the pandemic. Public safety Buffalo is served by the Buffalo Police Department. The police commissioner is Byron Lockwood, who was appointed by Mayor Byron Brown in 2018. Although some criminal activity in the city remains higher than the national average, total crimes have decreased since the 1990s; one reason may be the gun buyback program implemented by the Brown administration in the mid-2000s. Before this, the city was part of the nationwide crack epidemic of the 1980s and 1990s and its accompanying record-high crime levels. In 2018, the department began equipping officers with 300 body cameras. A 2021 Partnership for the Public Good report noted that the BPD, which had a 2020–21 budget of about $145.7 million, had an above-average police-to-citizen ratio of 28.9 officers per 10,000 residents in 2020, higher than peer cities Minneapolis and Toledo, Ohio. The force had a roster of 740 officers during the year, about two-thirds of whom handled emergency requests, road patrol and other non-office assignments. The department has been criticized for misconduct and brutality, including the 2004 wrongful termination of officer Cariol Horne for opposing police brutality toward a suspect and a 2020 protest-shoving incident. The Buffalo Fire Department and American Medical Response (AMR) handle fire-protection and emergency medical services (EMS) calls in the city. The fire department has about 710 firefighters and thirty-five stations, including twenty-three engine companies and twelve ladder companies. The department also operates the Edward M. Cotter, considered the world's oldest active fireboat. With vacant and abandoned homes prone to arson, squatting, prostitution and other criminal activities, the fire and police departments' resources were overburdened before the 2010s. Buffalo ranked second nationwide to St. Louis for vacant homes per capita in 2007, and the city began a five-year program to demolish five thousand vacant, damaged and abandoned homes. On May 14, 2022, there was a mass shooting at a Tops supermarket on the East Side of Buffalo in which 13 victims were shot in a racially motivated attack by a white supremacist who was not a Buffalo native. Ten victims, all of whom were Black, were murdered, and three were injured. Media Buffalo's major daily newspaper is The Buffalo News. Established in 1880 as the Buffalo Evening News, the newspaper is estimated to have a daily circulation of 87,000 and 125,000 on Sundays (down from a high of 300,000). The newspaper announced in February 2023 that it had a pending sale of its building and would move printing operations to the home of the Cleveland Plain Dealer. 
Other newspapers in the Buffalo area include The Public, the Black-focused Challenger Community News, The Record of Buffalo State College, The Spectrum of the University at Buffalo, and Buffalo Business First. Eighteen radio stations are licensed in Buffalo, including an FM station at Buffalo State College. Over ninety FM and AM radio signals can be received throughout the city. Eight full-power television outlets serve the city. Major stations include WKBW-TV (ABC), WIVB-TV (CBS), WGRZ (NBC), WUTV (Fox, received in parts of Southern Ontario), and WNED-TV (PBS); WNED reported that most of the station's members live in the Greater Toronto Area. According to Nielsen Media Research, the Buffalo television market was the 51st largest in the United States. Movies shooting significant footage in Buffalo include Hide in Plain Sight (1980), Tuck Everlasting (1981), Best Friends (1982), The Natural (1984), Vamping (1984), Canadian Bacon (1995), Buffalo '66 (1998), Manna from Heaven (2002), Bruce Almighty (2003), The Savages (2007), Slime City Massacre (2010), Henry's Crime (2011), Sharknado 2: The Second One (2014), Killer Rack (2015), Teenage Mutant Ninja Turtles: Out of the Shadows (2016), Marshall (2016), The American Side (2017), The First Purge (2018), The True Adventures of Wolfboy (2019), A Quiet Place Part II (2021) and Guns of Eden (2022). Although higher Buffalo production costs led to some films being finished elsewhere, tax credits and other economic incentives have enabled new film studios and production facilities to open. In 2021, several studio projects were in the planning stages. Education Primary and secondary education The Buffalo Public Schools have about thirty-four thousand students enrolled in their primary and secondary schools. The district administers about sixty public schools, including thirty-six primary schools, five middle high schools, fourteen high schools and three alternative schools, with a total of about 3,500 teachers. Its board of education, authorized by the state, has nine elected members who select the superintendent and oversee the budget, curriculum, personnel, and facilities. In 2020, the graduation rate was seventy-six percent. The public City Honors School was ranked the top high school in the city and 178th nationwide by U.S. News & World Report in 2021. There are twenty charter schools in Buffalo, with some oversight by the district. The city has over a dozen private schools, including Bishop Timon – St. Jude High School, Canisius High School, Mount Mercy Academy, and Nardin Academy (all Roman Catholic), as well as Darul Uloom Al-Madania and the Universal School of Buffalo (both Islamic schools); nonsectarian options include Buffalo Seminary and the Nichols School. Colleges and universities Founded by Millard Fillmore, the University at Buffalo (UB) is one of the State University of New York's two flagship universities and the state's largest public university. A Research I university, UB enrolls over 32,000 undergraduate, graduate and professional students across its thirteen schools and colleges. Two of UB's three campuses (the South and Downtown Campuses) are in the city, but most university functions take place at the large North Campus in Amherst. In 2020, U.S. News & World Report ranked UB the 34th-best public university and 88th among national universities. Buffalo State College, founded as a normal school, is one of SUNY's thirteen comprehensive colleges. 
The city's four-year private institutions include Canisius College, D'Youville University, Medaille University, Trocaire College, and Villa Maria College. SUNY Erie, the county's two-year public higher-education institution, and the for-profit Bryant & Stratton College have small downtown campuses. Libraries Established in 1835, Buffalo's main library is the Central Library of the Buffalo & Erie County Public Library system. Rebuilt in 1964, it contains an auditorium, the original manuscript of the Adventures of Huckleberry Finn (donated by Mark Twain), and a collection of about two million books. Its Grosvenor Room maintains a special-collections listing of nearly five hundred thousand resources for researchers. A pocket park funded by Southwest Airlines opened in 2020, and brought landscaping improvements and seating to Lafayette Square. The system's free library cards are valid at the city's eight branch libraries and at member libraries throughout Erie County. Infrastructure Healthcare Nine hospitals are operated in the city: Oishei Children's Hospital and Buffalo General Medical Center by Kaleida Health, Mercy Hospital and Sisters of Charity Hospital (Catholic Health), Roswell Park Comprehensive Cancer Center, the county-run Erie County Medical Center (ECMC), Buffalo VA Medical Center, BryLin (Psychiatric) Hospital and the state-operated Buffalo Psychiatric Center. John R. Oishei Children's Hospital, built in 2017, is adjacent to Buffalo General Medical Center on the Buffalo Niagara Medical Campus north of downtown; its Gates Vascular Institute specializes in acute stroke recovery. The medical campus includes the University at Buffalo Jacobs School of Medicine and Biomedical Sciences, the Hauptman-Woodward Medical Research Institute and Roswell Park Comprehensive Cancer Center, ranked the 14th-best cancer-treatment center in the United States by U.S. News & World Report. Transportation Growth and changing transportation needs altered Buffalo's grid plan, which was developed by Joseph Ellicott in 1804. His plan laid out streets like the spokes of a wheel, naming them after Dutch landowners and Native American tribes. City streets expanded outward, denser in the west and spreading out east of Main Street. Buffalo is a port of entry with Canada; the Peace Bridge crosses the Niagara River and links the Niagara Thruway (I-190) and Queen Elizabeth Way. I-190, NY 5 and NY 33 are the primary expressways serving the city, carrying a total of over 245,000 vehicles daily. NY 5 carries traffic to the Southtowns, and NY 33 carries traffic to the eastern suburbs and the Buffalo Airport. The east-west Scajacquada Expressway (NY 198) bisects Delaware Park, connecting I-190 with the Kensington Expressway (NY 33) on the city's East Side to form a partial beltway around the city center. The Scajacquada and Kensington Expressways and the Buffalo Skyway (NY 5) have been targeted for redesign or removal. Other major highways include US 62 on the city's East Side; NY 354 and a portion of NY 130, both east–west routes; and NY 265, NY 266 and NY 384, all north–south routes on the city's West Side. Buffalo has a higher-than-average percentage of households without a car: 30 percent in 2015, decreasing to 28.2 percent in 2016; the 2016 national average was 8.7 percent. Buffalo averaged 1.03 cars per household in 2016, compared to the national average of 1.8. The Niagara Frontier Transportation Authority (NFTA) operates the region's public transit, including its airport, light-rail system, buses, and harbors. 
The NFTA operates 323 buses on 61 lines throughout Western New York. Buffalo Metro Rail is a single light-rail line which runs from Canalside to the University Heights district. The line's downtown section, south of the Fountain Plaza station, runs at grade and is free of charge. The Buffalo area ranks twenty-third nationwide in transit ridership, with thirty trips per capita per year. Expansions have been proposed since Buffalo Metro Rail's inception in the 1980s, with the latest plan (in the late 2010s) reaching the town of Amherst. Buffalo Niagara International Airport in Cheektowaga has daily scheduled flights by domestic, charter and regional carriers. The airport handled nearly five million passengers in 2019. It received a J.D. Power award in 2018 for customer satisfaction at a mid-sized airport, and underwent a $50 million expansion in 2020–21. The airport, light rail, small-boat harbor and buses are monitored by the NFTA's transit police. Buffalo has an Amtrak intercity train station, Buffalo–Exchange Street station, which was rebuilt in 2020. The city's eastern suburbs are served by Amtrak's Buffalo–Depew station in Depew, which was built in 1979. Buffalo was a major stop on through routes between Chicago and New York City via the lower Ontario Peninsula; trains stopped at Buffalo Central Terminal, which operated from 1929 to 1979. Intercity buses depart and arrive from the NFTA's Metropolitan Transportation Center on Ellicott Street. Since Buffalo adopted a complete streets policy in 2008, efforts have been made to accommodate cyclists and pedestrians in new infrastructure projects. Improved corridors have bike lanes, and Niagara Street received separate bike lanes in 2020. Walk Score gave Buffalo a "somewhat walkable" rating of 68 out of 100, with Allentown and downtown considered more walkable than other areas of the city. Utilities Buffalo's water system is operated by Veolia Water, and water treatment begins at the Colonel Francis G. Ward Pumping Station. When it opened in 1915, the station's capacity was second only to that of Paris. Wastewater is treated by the Buffalo Sewer Authority, its coverage extending to the eastern suburbs. National Grid and New York State Electric & Gas (NYSEG) provide electricity, and National Fuel Gas provides natural gas. The city's primary telecommunications provider is Spectrum; Verizon Fios serves the North Park neighborhood. A 2018 report by Ookla noted that Buffalo was one of the bottom five U.S. cities in average download speeds, at 66 megabits per second. The city's Department of Public Works manages Buffalo's snow and trash removal and street cleaning. Snow removal generally operates from November 15 to April 1. A snow emergency is declared by the National Weather Service after a snowstorm, and the city's roads, major sidewalks and bridges are cleared by over seventy snowplows within 24 hours. Rock salt is the principal agent for preventing snow accumulation and melting ice. Snow removal may coincide with driving bans and parking restrictions. The area along the Outer Harbor is the most dangerous driving area during a snowstorm; when weather conditions dictate, the Buffalo Skyway is closed by the city's police department. To prevent ice jams which may impact hydroelectric plants in Niagara Falls, the New York Power Authority and Ontario Power Generation began installing an ice boom annually in 1964. The boom's installation date is temperature-dependent, and it is removed on April 1 unless a significant amount of ice remains on eastern Lake Erie. 
It stretches from the outer breakwall at the Buffalo Outer Harbor to the Canadian shore near Fort Erie. Originally made of wood, the boom now consists of steel pontoons. 
Sister cities 
Buffalo has eighteen sister cities: 
Aboadze, Ghana 
Baní, Dominican Republic 
Bursa, Turkey 
Cape Coast, Ghana (1976) 
Changzhou, China (2011) 
Dortmund, Germany (1972) 
Drohobych, Ukraine (2000) 
Horlivka, Ukraine (2007) 
Kanazawa, Japan (1962) 
Kiryat Gat, Israel (1977) 
Lille, France (2000) 
Rzeszów, Poland (1975) 
Saint Ann, Jamaica (2007) 
Siena, Italy (1961) 
Torremaggiore, Italy (2004) 
Wolverhampton, United Kingdom 
Yıldırım, Turkey (2010) 
See also 
Architecture of Buffalo, New York 
Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo 
Buffalo crime family 
Buffalo wing 
History of Buffalo, New York 
Index of New York (state)–related articles 
Inland Northern American English 
List of City of Buffalo landmarks and historic districts 
List of mayors of Buffalo, New York 
List of people from Buffalo, New York 
List of routes of City of Buffalo streetcars 
National Register of Historic Places listings in Buffalo, New York 
Sports in Buffalo 
Politics and government of Buffalo, New York 
Timeline of Buffalo, New York 
USS Buffalo, 4 ships
Benjamin Franklin (January 17, 1706 – April 17, 1790) was an American polymath who was active as a writer, scientist, inventor, statesman, diplomat, printer, publisher, and political philosopher. Among the leading intellectuals of his time, Franklin was one of the Founding Fathers of the United States, a drafter and signer of the Declaration of Independence, and the first postmaster general. Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing the Pennsylvania Gazette at age 23. He became wealthy publishing this and Poor Richard's Almanack, which he wrote under the pseudonym "Richard Saunders". After 1767, he was associated with the Pennsylvania Chronicle, a newspaper that was known for its revolutionary sentiments and criticisms of the policies of the British Parliament and the Crown. He pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769. Franklin became a national hero in America as an agent for several colonies when he spearheaded an effort in London to have the Parliament of Great Britain repeal the unpopular Stamp Act. An accomplished diplomat, he was widely admired as the first U.S. ambassador to France and was a major figure in the development of positive Franco-American relations. His efforts proved vital for the American Revolution in securing French aid. He was promoted to deputy postmaster-general for the British colonies on August 10, 1753, having been Philadelphia postmaster for many years, and this enabled him to set up the first national communications network. He was active in community affairs and colonial and state politics, as well as national and international affairs. From 1785 to 1788, he served as governor of Pennsylvania. At some points in his life, he owned slaves and ran "for sale" ads for slaves in his newspaper, but by the late 1750s, he began arguing against slavery, became an active abolitionist, and promoted the education and integration of African Americans into U.S. society. As a scientist, he was a major figure in the American Enlightenment and the history of physics for his studies of electricity, and for charting and naming the Gulf Stream current. As an inventor, he is known for the lightning rod, bifocals, and the Franklin stove, among others. He founded many civic organizations, including the Library Company, Philadelphia's first fire department, and the University of Pennsylvania. Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity. Foundational in defining the American ethos, Franklin has been called "the most accomplished American of his age and the most influential in inventing the type of society America would become." His life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen Franklin honored more than two centuries after his death: on the $100 bill, in the names of warships, towns, counties, educational institutions, and corporations, in numerous cultural references, and with a portrait in the Oval Office. 
Over his lifetime, Franklin wrote or received more than 30,000 letters and other documents, which since the 1950s have been collected in The Papers of Benjamin Franklin, published by both the American Philosophical Society and Yale University. Ancestry Benjamin Franklin's father, Josiah Franklin, was a tallow chandler, soaper, and candlemaker. Josiah Franklin was born at Ecton, Northamptonshire, England, on December 23, 1657, the son of Thomas Franklin, a blacksmith and farmer, and his wife, Jane White. Benjamin's father and all four of his grandparents were born in England. Josiah Franklin had a total of seventeen children with his two wives. He married his first wife, Anne Child, in about 1677 in Ecton and emigrated with her to Boston in 1683; they had three children before emigration and four after. Following her death, Josiah married Abiah Folger on July 9, 1689, in the Old South Meeting House by Reverend Samuel Willard, and had ten children with her. Benjamin, their eighth child, was Josiah Franklin's fifteenth child overall, and his tenth and final son. Benjamin Franklin's mother, Abiah, was born in Nantucket, Massachusetts Bay Colony, on August 15, 1667, to Peter Folger, a miller and schoolteacher, and his wife, Mary Morrell Folger, a former indentured servant. Mary Folger came from a Puritan family that was among the first Pilgrims to flee to Massachusetts for religious freedom, sailing for Boston in 1635 after King Charles I of England had begun persecuting Puritans. Her father Peter was "the sort of rebel destined to transform colonial America." As clerk of the court, he was jailed for disobeying the local magistrate in defense of middle-class shopkeepers and artisans in conflict with wealthy landowners. Early life and education Boston Franklin was born on Milk Street in Boston, Province of Massachusetts Bay on January 17, 1706, and baptized at the Old South Meeting House in Boston. As a child growing up along the Charles River, Franklin recalled that he was "generally the leader among the boys". Franklin's father wanted him to attend school with the clergy but only had enough money to send him to school for two years. He attended Boston Latin School but did not graduate; he continued his education through voracious reading. Although "his parents talked of the church as a career" for Franklin, his schooling ended when he was ten. He worked for his father for a time, and at 12 he became an apprentice to his brother James, a printer, who taught him the printing trade. When Benjamin was 15, James founded The New-England Courant, which was the third newspaper founded in Boston. When denied the chance to write a letter to the paper for publication, Franklin adopted the pseudonym of "Silence Dogood", a middle-aged widow. Mrs. Dogood's letters were published and became a subject of conversation around town. Neither James nor the Courant readers were aware of the ruse, and James was unhappy with Benjamin when he discovered the popular correspondent was his younger brother. Franklin was an advocate of free speech from an early age. When his brother was jailed for three weeks in 1722 for publishing material unflattering to the governor, young Franklin took over the newspaper and had Mrs. Dogood proclaim, quoting Cato's Letters, "Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech". Franklin left his apprenticeship without his brother's permission, and in so doing became a fugitive. 
Move to Philadelphia At age 17, Franklin ran away to Philadelphia, seeking a new start in a new city. When he first arrived, he worked in several printing shops in Philadelphia, but he was not satisfied by the immediate prospects in any of these jobs. After a few months, while he was working in one printing house, Pennsylvania governor Sir William Keith convinced him to go to London, ostensibly to acquire the equipment necessary for establishing another newspaper in Philadelphia. Discovering that Keith's promises of backing a newspaper were empty, he worked as a typesetter in a printer's shop in what is now the Church of St Bartholomew-the-Great in the Smithfield area of London. Following this, he returned to Philadelphia in 1726 with the help of Thomas Denham, a merchant who employed him as a clerk, shopkeeper, and bookkeeper in his business. Junto and library In 1727, at age 21, Franklin formed the Junto, a group of "like minded aspiring artisans and tradesmen who hoped to improve themselves while they improved their community". The Junto was a discussion group for issues of the day; it subsequently gave rise to many organizations in Philadelphia. The Junto was modeled after English coffeehouses that Franklin knew well and which had become the center of the spread of Enlightenment ideas in Britain. Reading was a great pastime of the Junto, but books were rare and expensive. The members initially created a library assembled from their own books, but this did not suffice. Franklin conceived the idea of a subscription library, which would pool the funds of the members to buy books for all to read. This was the birth of the Library Company of Philadelphia, whose charter he composed in 1731. Newspaperman Upon Denham's death, Franklin returned to his former trade. In 1728, he set up a printing house in partnership with Hugh Meredith; the following year he became the publisher of The Pennsylvania Gazette, a newspaper in Philadelphia. The Gazette gave Franklin a forum for agitation about a variety of local reforms and initiatives through printed essays and observations. Over time, his commentary, and his adroit cultivation of a positive image as an industrious and intellectual young man, earned him a great deal of social respect. But even after he achieved fame as a scientist and statesman, he habitually signed his letters with the unpretentious 'B. Franklin, Printer.' In 1732, he published the first German-language newspaper in America – Die Philadelphische Zeitung – although it failed after only one year because four other newly founded German papers quickly dominated the newspaper market. Franklin also printed Moravian religious books in German. He often visited Bethlehem, Pennsylvania, staying at the Moravian Sun Inn. In a 1751 pamphlet on demographic growth and its implications for the Thirteen Colonies, he called the Pennsylvania Germans "Palatine Boors" who could never acquire the "Complexion" of Anglo-American settlers and referred to "Blacks and Tawneys" as weakening the social structure of the colonies. Although he apparently reconsidered shortly thereafter, and the phrases were omitted from all later printings of the pamphlet, his views may have played a role in his political defeat in 1764. According to Ralph Frasca, Franklin promoted the printing press as a device to instruct colonial Americans in moral virtue. Frasca argues he saw this as a service to God, because he understood moral virtue in terms of actions; thus, doing good provides a service to God. 
Despite his own moral lapses, Franklin saw himself as uniquely qualified to instruct Americans in morality. He tried to influence American moral life through the construction of a printing network based on a chain of partnerships from the Carolinas to New England. He thereby invented the first newspaper chain. It was more than a business venture, for like many publishers he believed that the press had a public-service duty. When he established himself in Philadelphia, shortly before 1730, the town boasted two "wretched little" news sheets, Andrew Bradford's The American Weekly Mercury, and Samuel Keimer's Universal Instructor in all Arts and Sciences, and Pennsylvania Gazette. This instruction in all arts and sciences consisted of weekly extracts from Chambers's Universal Dictionary. Franklin quickly did away with all of this when he took over the Instructor and made it The Pennsylvania Gazette. The Gazette soon became his characteristic organ, which he freely used for satire, for the play of his wit, even for sheer excess of mischief or of fun. From the first, he had a way of adapting his models to his own uses. The series of essays called "The Busy-Body", which he wrote for Bradford's American Mercury in 1729, followed the general Addisonian form, already modified to suit homelier conditions. The thrifty Patience, in her busy little shop, complaining of the useless visitors who waste her valuable time, is related to the women who address Mr. Spectator. The Busy-Body himself is a true Censor Morum, as Isaac Bickerstaff had been in the Tatler. And a number of the fictitious characters, Ridentius, Eugenius, Cato, and Cretico, represent traditional 18th-century classicism. Even this Franklin could use for contemporary satire, since Cretico, the "sowre Philosopher", is evidently a portrait of his rival, Samuel Keimer. Franklin had mixed success in his plan to establish an inter-colonial network of newspapers that would produce a profit for him and disseminate virtue. Over the years he sponsored two dozen printers in Pennsylvania, South Carolina, New York, Connecticut and even the Caribbean. By 1753, 8 of the 15 English language newspapers in the colonies were published by him or his partners. He began in Charleston, South Carolina, in 1731. After his second editor died, the widow Elizabeth Timothy took over and made it a success. She was one of the colonial era's first woman printers. For three decades Franklin maintained a close business relationship with her and her son Peter Timothy, who took over the South Carolina Gazette in 1746. The Gazette was impartial in political debates, while creating the opportunity for public debate, which encouraged others to challenge authority. Timothy avoided blandness and crude bias and after 1765 increasingly took a patriotic stand in the growing crisis with Great Britain. However, Franklin's Connecticut Gazette (1755–68) proved unsuccessful. As the Revolution approached, political strife slowly tore his network apart. Freemasonry In 1730 or 1731, Franklin was initiated into the local Masonic lodge. He became a grand master in 1734, indicating his rapid rise to prominence in Pennsylvania. The same year, he edited and published the first Masonic book in the Americas, a reprint of James Anderson's Constitutions of the Free-Masons. He was the secretary of St. John's Lodge in Philadelphia from 1735 to 1738. Franklin remained a Freemason for the rest of his life. 
Common-law marriage to Deborah Read At age 17 in 1723, Franklin proposed to 15-year-old Deborah Read while a boarder in the Read home. At that time, Deborah's mother was wary of allowing her young daughter to marry Franklin, who was on his way to London at Governor Keith's request and whose finances were unstable. Her own husband had recently died, and she declined Franklin's request to marry her daughter. While Franklin was in London, his trip was extended, and there were problems with the governor's promises of support. Perhaps because of the circumstances of this delay, Deborah married a man named John Rodgers. This proved to be a regrettable decision. Rodgers soon fled to Barbados with her dowry to avoid his debts and prosecution, leaving her behind. Rodgers's fate was unknown, and because of bigamy laws, Deborah was not free to remarry. Franklin established a common-law marriage with Deborah on September 1, 1730. They took in his recently acknowledged illegitimate young son and raised him in their household. They had two children together. Their son, Francis Folger Franklin, was born in October 1732 and died of smallpox in 1736. Their daughter, Sarah "Sally" Franklin, was born in 1743 and eventually married Richard Bache. Deborah's fear of the sea meant that she never accompanied Franklin on any of his extended trips to Europe; another possible reason why they spent much time apart is that he may have blamed her for preventing their son Francis from being inoculated against the disease that subsequently killed him. Deborah wrote to him in November 1769, saying she was ill due to "dissatisfied distress" from his prolonged absence, but he did not return until his business was done. Deborah Read Franklin died of a stroke on December 14, 1774, while Franklin was on an extended mission to Great Britain; he returned in 1775. William Franklin In 1730, 24-year-old Franklin publicly acknowledged his illegitimate son William and raised him in his household. William was born on February 22, 1730, but his mother's identity is unknown. He was educated in Philadelphia and, beginning at about age 30, studied law in London in the early 1760s. William himself fathered an illegitimate son, William Temple Franklin, born on the same day and month: February 22, 1760. The boy's mother was never identified, and he was placed in foster care. In 1762, the elder William Franklin married Elizabeth Downes, daughter of a planter from Barbados, in London. In 1763, he was appointed as the last royal governor of New Jersey. A Loyalist to the king, William Franklin saw his relations with his father Benjamin eventually break down over their differences about the American Revolutionary War, as Benjamin Franklin could never accept William's position. Deposed in 1776 by the revolutionary government of New Jersey, William was placed under house arrest at his home in Perth Amboy for six months. After the Declaration of Independence, he was formally taken into custody by order of the Provincial Congress of New Jersey, an entity which he refused to recognize, regarding it as an "illegal assembly". He was incarcerated in Connecticut for two years, in Wallingford and Middletown, and, after being caught surreptitiously recruiting Americans to support the Loyalist cause, was held in solitary confinement at Litchfield for eight months. When finally released in a prisoner exchange in 1778, he moved to New York City, which was occupied by the British at the time. 
While in New York City, he became leader of the Board of Associated Loyalists, a quasi-military organization chartered by King George III and headquartered in New York City. They initiated guerrilla forays into New Jersey, southern Connecticut, and New York counties north of the city. When British troops evacuated from New York, William Franklin left with them and sailed to England. He settled in London, never to return to North America. In the preliminary peace talks in 1782 with Britain, "... Benjamin Franklin insisted that loyalists who had borne arms against the United States would be excluded from this plea (that they be given a general pardon). He was undoubtedly thinking of William Franklin." Success as an author In 1733, Franklin began to publish the noted Poor Richard's Almanack (with content both original and borrowed) under the pseudonym Richard Saunders, on which much of his popular reputation is based. He frequently wrote under pseudonyms. He had developed a distinct, signature style that was plain, pragmatic and had a sly, soft but self-deprecating tone with declarative sentences. Although it was no secret that he was the author, his Richard Saunders character repeatedly denied it. "Poor Richard's Proverbs", adages from this almanac, such as "A penny saved is twopence dear" (often misquoted as "A penny saved is a penny earned") and "Fish and visitors stink in three days", remain common quotations in the modern world. Wisdom in folk society meant the ability to provide an apt adage for any occasion, and his readers became well prepared. He sold about ten thousand copies per year—it became an institution. In 1741, Franklin began publishing The General Magazine and Historical Chronicle for all the British Plantations in America. He used the heraldic badge of the Prince of Wales as the cover illustration. In 1758, the year he ceased writing for the Almanack, he printed Father Abraham's Sermon, also known as The Way to Wealth. Franklin began writing his autobiography in 1771, but it was not published until after his death. It has become one of the classics in the genre of autobiographical non-fiction. Franklin wrote a letter, "Advice to a Friend on Choosing a Mistress", dated June 25, 1745, in which he gives advice to a young man about channeling sexual urges. Due to its licentious nature, it was not published in collections of his papers during the 19th century. Federal court rulings from the mid-to-late 20th century cited the document as a reason for overturning obscenity laws and against censorship. Public life Early steps in Pennsylvania In 1736, Franklin created the Union Fire Company, one of the first volunteer firefighting companies in America. In the same year, he printed a new currency for New Jersey based on innovative anti-counterfeiting techniques he had devised. Throughout his career, he was an advocate for paper money, publishing A Modest Enquiry into the Nature and Necessity of a Paper Currency in 1729, and his printer printed money. He was influential in the more restrained and thus successful monetary experiments in the Middle Colonies, which stopped deflation without causing excessive inflation. In 1766, he made a case for paper money to the British House of Commons. As he matured, Franklin began to concern himself more with public affairs. In 1743, he first devised a scheme for the Academy, Charity School, and College of Philadelphia. However, the person he had in mind to run the academy, Rev. 
Richard Peters, refused and Franklin put his ideas away until 1749 when he printed his own pamphlet, Proposals Relating to the Education of Youth in Pensilvania. He was appointed president of the Academy on November 13, 1749; the academy and the charity school opened in 1751. In 1743, he founded the American Philosophical Society to help scientific men discuss their discoveries and theories. He began the electrical research that, along with other scientific inquiries, would occupy him for the rest of his life, in between bouts of politics and moneymaking. During King George's War, Franklin raised a militia called the Association for General Defense because the legislators of the city had decided to take no action to defend Philadelphia "either by erecting fortifications or building Ships of War". He raised money to create earthwork defenses and buy artillery. The largest of these was the "Association Battery" or "Grand Battery" of 50 guns. In 1747, Franklin (already a very wealthy man) retired from printing and went into other businesses. He formed a partnership with his foreman, David Hall, which provided Franklin with half of the shop's profits for 18 years. This lucrative business arrangement provided leisure time for study, and in a few years he had made many new discoveries. Franklin became involved in Philadelphia politics and rapidly progressed. In October 1748, he was selected as a councilman; in June 1749, he became a justice of the peace for Philadelphia; and in 1751, he was elected to the Pennsylvania Assembly. On August 10, 1753, he was appointed deputy postmaster-general of British North America. His most notable service in domestic politics was his reform of the postal system, with mail sent out every week. In 1751, Franklin and Thomas Bond obtained a charter from the Pennsylvania legislature to establish a hospital. Pennsylvania Hospital was the first hospital in the colonies. In 1752, Franklin organized the Philadelphia Contributionship, the Colonies' first homeowner's insurance company. Between 1750 and 1753, the "educational triumvirate" of Franklin, Samuel Johnson of Stratford, Connecticut, and schoolteacher William Smith built on Franklin's initial scheme and created what Bishop James Madison, president of the College of William & Mary, called a "new-model" plan or style of American college. Franklin solicited, printed in 1752, and promoted an American textbook of moral philosophy by Samuel Johnson, titled Elementa Philosophica, to be taught in the new colleges. In June 1753, Johnson, Franklin, and Smith met in Stratford. They decided the new-model college would focus on the professions, with classes taught in English instead of Latin, have subject matter experts as professors instead of one tutor leading a class for four years, and there would be no religious test for admission. Johnson went on to found King's College (now Columbia University) in New York City in 1754, while Franklin hired Smith as provost of the College of Philadelphia, which opened in 1755. At its first commencement, on May 17, 1757, seven men graduated; six with a Bachelor of Arts and one with a Master of Arts. It was later merged with the University of the State of Pennsylvania to become the University of Pennsylvania. 
The college was to become influential in guiding the founding documents of the United States: in the Continental Congress, for example, over one-third of the men with college affiliations who contributed to the Declaration of Independence between September 4, 1774, and July 4, 1776, were affiliated with this college. In 1754, he headed the Pennsylvania delegation to the Albany Congress. This meeting of several colonies had been requested by the Board of Trade in England to improve relations with the Indians and defense against the French. Franklin proposed a broad Plan of Union for the colonies. While the plan was not adopted, elements of it found their way into the Articles of Confederation and the Constitution. In 1753, both Harvard and Yale awarded him honorary master of arts degrees. In 1756, he received an honorary Master of Arts degree from the College of William & Mary. Later in 1756, Franklin organized the Pennsylvania Militia. He used Tun Tavern as a gathering place to recruit a regiment of soldiers to go into battle against the Native American uprisings that beset the American colonies. Postmaster Well known as a printer and publisher, Franklin was appointed postmaster of Philadelphia in 1737, holding the office until 1753, when he and publisher William Hunter were named deputy postmasters-general of British North America, the first to hold the office. (Joint appointments were standard at the time, for political reasons.) He was responsible for the British colonies from Pennsylvania north and east, as far as the island of Newfoundland. A post office for local and outgoing mail had been established in Halifax, Nova Scotia, by local stationer Benjamin Leigh, on April 23, 1754, but service was irregular. Franklin opened the first post office to offer regular, monthly mail in Halifax on December 9, 1755. Meanwhile, Hunter became postal administrator in Williamsburg, Virginia, and oversaw areas south of Annapolis, Maryland. Franklin reorganized the service's accounting system and improved speed of delivery between Philadelphia, New York, and Boston. By 1761, efficiencies led to the first profits for the colonial post office. When the lands of New France were ceded to the British under the Treaty of Paris in 1763, the British province of Quebec was created among them, and Franklin saw mail service expanded between Montreal, Trois-Rivières, Quebec City, and New York. For the greater part of his appointment, he lived in England (from 1757 to 1762, and again from 1764 to 1774)—about three-quarters of his term. Eventually, his sympathies for the rebel cause in the American Revolution led to his dismissal on January 31, 1774. Franklin had been a postmaster for decades and was a natural choice to lead an American postal service. Having just returned from England in 1775, he was appointed chairman of a Committee of Investigation to establish a postal system. The committee's report, providing for the appointment of a postmaster general for the 13 American colonies, was considered by the Continental Congress on July 25 and 26, and on July 26, 1775, the Second Continental Congress established the United States Post Office and named Franklin the first United States postmaster general. His apprentice, William Goddard, felt that his ideas were mostly responsible for shaping the postal system and that the appointment should have gone to him, but he graciously conceded it to Franklin, 36 years his senior. 
Franklin, however, appointed Goddard as Surveyor of the Posts, issued him a signed pass, and directed him to investigate and inspect the various post offices and mail routes as he saw fit. The newly established postal system became the United States Post Office, a system that continues to operate today. Decades in London From the mid-1750s to the mid-1770s, Franklin spent much of his time in London. Political work In 1757, he was sent to England by the Pennsylvania Assembly as a colonial agent to protest against the political influence of the Penn family, the proprietors of the colony. He remained there for five years, striving to end the proprietors' prerogative to overturn legislation from the elected Assembly and their exemption from paying taxes on their land. His lack of influential allies in Whitehall led to the failure of this mission. At this time, many members of the Pennsylvania Assembly were feuding with William Penn's heirs, who controlled the colony as proprietors. After his return to the colony, Franklin led the "anti-proprietary party" in the struggle against the Penn family and was elected Speaker of the Pennsylvania House in May 1764. His call for a change from proprietary to royal government was a rare political miscalculation, however: Pennsylvanians worried that such a move would endanger their political and religious freedoms. Because of these fears and because of political attacks on his character, Franklin lost his seat in the October 1764 Assembly elections. The anti-proprietary party dispatched him to England again to continue the struggle against the Penn family proprietorship. During this trip, events drastically changed the nature of his mission. In London, Franklin opposed the 1765 Stamp Act. Unable to prevent its passage, he made another political miscalculation and recommended a friend to the post of stamp distributor for Pennsylvania. Pennsylvanians were outraged, believing that he had supported the measure all along, and threatened to destroy his home in Philadelphia. Franklin soon learned of the extent of colonial resistance to the Stamp Act, and he testified during the House of Commons proceedings that led to its repeal. With this, Franklin suddenly emerged as the leading spokesman for American interests in England. He wrote popular essays on behalf of the colonies. Georgia, New Jersey, and Massachusetts also appointed him as their agent to the Crown. During his lengthy missions to London between 1757 and 1775, Franklin lodged in a house on Craven Street, just off the Strand in central London. During his stays there, he developed a close friendship with his landlady, Margaret Stevenson, and her circle of friends and relations, in particular, her daughter Mary, who was more often known as Polly. The house is now a museum known as the Benjamin Franklin House. Whilst in London, Franklin became involved in radical politics. He belonged to a gentleman's club (which he called "the honest Whigs"), which held stated meetings, and included members such as Richard Price, the minister of Newington Green Unitarian Church who ignited the Revolution controversy, and Andrew Kippis. Scientific work In 1756, Franklin had become a member of the Society for the Encouragement of Arts, Manufactures & Commerce (now the Royal Society of Arts), which had been founded in 1754. After his return to the United States in 1775, he became the Society's Corresponding Member, continuing a close connection. 
The Royal Society of Arts instituted a Benjamin Franklin Medal in 1956 to commemorate the 250th anniversary of his birth and the 200th anniversary of his membership of the RSA. The study of natural philosophy (referred to today as science) drew him into overlapping circles of acquaintance. Franklin was, for example, a corresponding member of the Lunar Society of Birmingham. In 1759, the University of St Andrews awarded him an honorary doctorate in recognition of his accomplishments. In October 1759, he was granted Freedom of the Borough of St Andrews. He was also awarded an honorary doctorate by Oxford University in 1762. Because of these honors, he was often addressed as "Doctor Franklin". While living in London in 1768, he developed a phonetic alphabet in A Scheme for a new Alphabet and a Reformed Mode of Spelling. This reformed alphabet discarded six letters he regarded as redundant (c, j, q, w, x, and y), and substituted six new letters for sounds he felt lacked letters of their own. This alphabet never caught on, and he eventually lost interest. Travels around Europe Franklin used London as a base to travel. In 1771, he made short journeys through different parts of England, staying with Joseph Priestley at Leeds, Thomas Percival at Manchester and Erasmus Darwin at Lichfield. In Scotland, he spent five days with Lord Kames near Stirling and stayed for three weeks with David Hume in Edinburgh. In 1759, he visited Edinburgh with his son and later reported that he considered his six weeks in Scotland "six weeks of the densest happiness I have met with in any part of my life". In Ireland, he stayed with Lord Hillsborough. Franklin noted of him that "all the plausible behaviour I have described is meant only, by patting and stroking the horse, to make him more patient, while the reins are drawn tighter, and the spurs set deeper into his sides." In Dublin, Franklin was invited to sit with the members of the Irish Parliament rather than in the gallery. He was the first American to receive this honor. While touring Ireland, he was deeply moved by the level of poverty he witnessed. The economy of the Kingdom of Ireland was affected by the same trade regulations and laws that governed the Thirteen Colonies. He feared that the American colonies could eventually come to the same level of poverty if the regulations and laws continued to apply to them. Franklin spent two months in German lands in 1766, but his connections to the country stretched across a lifetime. He declared a debt of gratitude to German scientist Otto von Guericke for his early studies of electricity. Franklin also co-authored the first treaty of friendship between Prussia and America in 1785. In September 1767, he visited Paris with his usual traveling partner, Sir John Pringle, 1st Baronet. News of his electrical discoveries was widespread in France. His reputation meant that he was introduced to many influential scientists and politicians, and also to King Louis XV. Defending the American cause One line of argument in Parliament was that Americans should pay a share of the costs of the French and Indian War and therefore taxes should be levied on them. Franklin became the American spokesman in highly publicized testimony in Parliament in 1766. He stated that Americans already contributed heavily to the defense of the Empire. 
He said local governments had raised, outfitted and paid 25,000 soldiers to fight France—as many as Britain itself sent—and spent many millions from American treasuries doing so in the French and Indian War alone. In 1772, Franklin obtained private letters of Thomas Hutchinson and Andrew Oliver, governor and lieutenant governor of the Province of Massachusetts Bay, proving that they had encouraged the Crown to crack down on Bostonians. Franklin sent them to America, where they escalated tensions. The letters were finally leaked to the public in the Boston Gazette in mid-June 1773, causing a political firestorm in Massachusetts and raising significant questions in England. The British began to regard him as the fomenter of serious trouble. Hopes for a peaceful solution ended as he was systematically ridiculed and humiliated by Solicitor-General Alexander Wedderburn before the Privy Council on January 29, 1774. He returned to Philadelphia in March 1775, and abandoned his accommodationist stance. In 1773, Franklin published two of his most celebrated pro-American satirical essays: "Rules by Which a Great Empire May Be Reduced to a Small One", and "An Edict by the King of Prussia". Alleged British agent and Hellfire Club membership Franklin is known to have occasionally attended the Hellfire Club's meetings as a non-member during 1758, while he was in England. However, some authors and historians argue he was in fact a British spy. As no records survive (they were burned in 1774), membership is largely conjectural or inferred from letters the participants sent to each other. One early proponent of the claim that Franklin was a member of the Hellfire Club and a double agent is the historian Donald McCormick, who has a history of making controversial claims. Coming of revolution In 1763, soon after Franklin returned to Pennsylvania from England for the first time, the western frontier was engulfed in a bitter war known as Pontiac's Rebellion. The Paxton Boys, a group of settlers convinced that the Pennsylvania government was not doing enough to protect them from American Indian raids, murdered a group of peaceful Susquehannock Indians and marched on Philadelphia. Franklin helped to organize a local militia to defend the capital against the mob. He met with the Paxton leaders and persuaded them to disperse. Franklin wrote a scathing attack against the racial prejudice of the Paxton Boys. "If an Indian injures me", he asked, "does it follow that I may revenge that Injury on all Indians?" He provided an early response to British surveillance through his own network of counter-surveillance and manipulation. "He waged a public relations campaign, secured secret aid, played a role in privateering expeditions, and churned out effective and inflammatory propaganda." Declaration of Independence By the time Franklin arrived in Philadelphia on May 5, 1775, after his second mission to Great Britain, the American Revolution had begun at the Battles of Lexington and Concord the previous month, on April 19, 1775. The New England militia had forced the main British army to remain inside Boston. The Pennsylvania Assembly unanimously chose Franklin as their delegate to the Second Continental Congress. In June 1776, he was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the committee, he made several "small but important" changes to the draft sent to him by Thomas Jefferson. 
At the signing, he is quoted as having replied to a comment by John Hancock that they must all hang together, saying, "Yes, we must, indeed, all hang together, or most assuredly we shall all hang separately." Ambassador to France (1776–1785) On October 26, 1776, Franklin was dispatched to France as commissioner for the United States. He took with him as secretary his 16-year-old grandson, William Temple Franklin. They lived in a home in the Parisian suburb of Passy, donated by Jacques-Donatien Le Ray de Chaumont, who supported the United States. Franklin remained in France until 1785. He conducted the affairs of his country toward the French nation with great success, which included securing a critical military alliance in 1778 and signing the 1783 Treaty of Paris. Among his associates in France was Honoré Gabriel Riqueti, comte de Mirabeau—a French Revolutionary writer, orator and statesman who in 1791 was elected president of the National Assembly. In July 1784, Franklin met with Mirabeau and contributed anonymous materials that the Frenchman used in his first signed work: Considerations sur l'ordre de Cincinnatus. The publication was critical of the Society of the Cincinnati, established in the United States. Franklin and Mirabeau thought of it as a "noble order", inconsistent with the egalitarian ideals of the new republic. During his stay in France, he was active as a Freemason, serving as venerable master of the lodge Les Neuf Sœurs from 1779 until 1781. In 1784, when Franz Mesmer began to publicize his theory of "animal magnetism", which many considered offensive, Louis XVI appointed a commission to investigate it. Its members included the chemist Antoine Lavoisier, the physician Joseph-Ignace Guillotin, the astronomer Jean Sylvain Bailly, and Franklin. Through blind trials, the commission concluded that mesmerism seemed to work only when the subjects expected it; this discredited mesmerism and became the first major demonstration of the placebo effect, which was described at that time as "imagination". In 1781, he was elected a fellow of the American Academy of Arts and Sciences. Franklin's advocacy for religious tolerance in France contributed to arguments made by French philosophers and politicians that resulted in Louis XVI's signing of the Edict of Versailles in November 1787. This edict effectively nullified the Edict of Fontainebleau, which had denied non-Catholics civil status and the right to openly practice their faith. Franklin also served as American minister to Sweden, although he never visited that country. He negotiated a treaty that was signed in April 1783. On August 27, 1783, in Paris, he witnessed the world's first hydrogen balloon flight. Le Globe, created by professor Jacques Charles and Les Frères Robert, was watched by a vast crowd as it rose from the Champ de Mars (now the site of the Eiffel Tower). Franklin became so enthusiastic that he subscribed financially to the next project to build a manned hydrogen balloon. On December 1, 1783, Franklin was seated in the special enclosure for honored guests when it took off from the Jardin des Tuileries, piloted by Charles and Nicolas-Louis Robert. Return to America When he returned home in 1785, Franklin occupied a position second only to that of George Washington as the champion of American independence. He returned from France with an unexplained shortage of 100,000 pounds in Congressional funds. 
In response to a question from a member of Congress about this, Franklin, quoting the Bible, quipped, "Muzzle not the ox that treadeth out his master's grain." The missing funds were never again mentioned in Congress. Le Ray honored him with a commissioned portrait painted by Joseph Duplessis, which now hangs in the National Portrait Gallery of the Smithsonian Institution in Washington, D.C. After his return, Franklin became an abolitionist and freed his two slaves. He eventually became president of the Pennsylvania Abolition Society. President of Pennsylvania and delegate to the Constitutional Convention Special balloting conducted October 18, 1785, unanimously elected him the sixth president of the Supreme Executive Council of Pennsylvania, replacing John Dickinson. The office was practically that of the governor. He held that office for slightly over three years, longer than any other holder, and served the constitutional limit of three full terms. Shortly after his initial election, he was re-elected to a full term on October 29, 1785, and again in the fall of 1786 and on October 31, 1787. In that capacity, he served as host to the Constitutional Convention of 1787 in Philadelphia. He also served as a delegate to the Convention. It was primarily an honorary position and he seldom engaged in debate. Death Franklin suffered from obesity throughout his middle age and elder years, which resulted in multiple health problems, particularly gout, which worsened as he aged. In poor health during the signing of the U.S. Constitution in 1787, he was rarely seen in public from then until his death. Franklin died from a pleuritic attack at his home in Philadelphia on April 17, 1790. He was aged 84 at the time of his death. His last words were reportedly, "a dying man can do nothing easy", spoken to his daughter after she suggested that he change position in bed and lie on his side so he could breathe more easily. Franklin's death is described in the book The Life of Benjamin Franklin, quoting from the account of his physician, Dr. John Jones: Approximately 20,000 people attended Franklin's funeral, after which he was interred in Christ Church Burial Ground in Philadelphia. Upon learning of his death, the National Constituent Assembly in Revolutionary France entered into a state of mourning for a period of three days, and memorial services were conducted in honor of Franklin throughout the country. In 1728, aged 22, Franklin wrote what he hoped would be his own epitaph: Franklin's actual grave, however, as he specified in his final will, simply reads "Benjamin and Deborah Franklin". Inventions and scientific inquiries Franklin was a prodigious inventor. Among his many creations were the lightning rod, the Franklin stove, bifocal glasses and the flexible urinary catheter. He never patented his inventions; in his autobiography he wrote, "... as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously." Electricity Franklin started exploring the phenomenon of electricity in the 1740s, after he met the itinerant lecturer Archibald Spencer, who used static electricity in his demonstrations. He proposed that "vitreous" and "resinous" electricity were not different types of "electrical fluid" (as electricity was called then), but the same "fluid" under different pressures. (William Watson independently made the same proposal around the same time.) 
He was the first to label them as positive and negative respectively, and he was the first to discover the principle of conservation of charge. In 1748, he constructed a multiple-plate capacitor, which he called an "electrical battery" (not a true battery like Volta's pile), by sandwiching eleven panes of glass between lead plates, suspended with silk cords and connected by wires. In pursuit of more pragmatic uses for electricity, remarking in spring 1749 that he felt "chagrin'd a little" that his experiments had heretofore resulted in "Nothing in this Way of Use to Mankind", he planned a practical demonstration. He proposed a dinner party where a turkey was to be killed via electric shock and roasted on an electrical spit. After having prepared several turkeys this way, he noted that "the birds kill'd in this manner eat uncommonly tender." Franklin recounted that in the process of one of these experiments, he was shocked by a pair of Leyden jars, resulting in numbness in his arms that persisted for one evening, noting "I am Ashamed to have been Guilty of so Notorious a Blunder." Franklin briefly investigated electrotherapy, including the use of the electric bath. This work led to the field becoming widely known. In recognition of his work with electricity, he received the Royal Society's Copley Medal in 1753, and in 1756, he became one of the few 18th-century Americans elected a fellow of the Society. The CGS unit of electric charge has been named after him: one franklin (Fr) is equal to one statcoulomb. Franklin advised Harvard University in its acquisition of new electrical laboratory apparatus after the complete loss of its original collection, in a fire that destroyed the original Harvard Hall in 1764. The collection he assembled later became part of the Harvard Collection of Historical Scientific Instruments, now on public display in its Science Center. Kite experiment and lightning rod Franklin published a proposal for an experiment to prove that lightning is electricity by flying a kite in a storm. On May 10, 1752, Thomas-François Dalibard of France conducted Franklin's experiment using an iron rod instead of a kite, and he extracted electrical sparks from a cloud. On June 15, 1752, Franklin may have conducted his well-known kite experiment in Philadelphia, successfully extracting sparks from a cloud. He described the experiment in his newspaper, The Pennsylvania Gazette, on October 19, 1752, without mentioning that he himself had performed it. This account was read to the Royal Society on December 21 and printed as such in the Philosophical Transactions. Joseph Priestley published an account with additional details in his 1767 History and Present Status of Electricity. Franklin was careful to stand on an insulator, keeping dry under a roof to avoid the danger of electric shock. Others, such as Georg Wilhelm Richmann in Russia, were indeed electrocuted in performing lightning experiments during the months immediately following his experiment. In his writings, Franklin indicates that he was aware of the dangers and offered alternative ways to demonstrate that lightning was electrical, as shown by his use of the concept of electrical ground. He did not perform this experiment in the way that is often pictured in popular literature, flying the kite and waiting to be struck by lightning, as it would have been dangerous. Instead he used the kite to collect some electric charge from a storm cloud, showing that lightning was electrical. 
On October 19, 1752, in a letter to England with directions for repeating the experiment, he wrote: Franklin's electrical experiments led to his invention of the lightning rod. He said that conductors with a sharp rather than a smooth point could discharge silently and at a far greater distance. He surmised that this could help protect buildings from lightning by attaching "upright Rods of Iron, made sharp as a Needle and gilt to prevent Rusting, and from the Foot of those Rods a Wire down the outside of the Building into the Ground; ... Would not these pointed Rods probably draw the Electrical Fire silently out of a Cloud before it came nigh enough to strike, and thereby secure us from that most sudden and terrible Mischief!" Following a series of experiments on Franklin's own house, lightning rods were installed on the Academy of Philadelphia (later the University of Pennsylvania) and the Pennsylvania State House (later Independence Hall) in 1752. Population studies Franklin had a major influence on the emerging science of demography, or population studies. In the 1730s and 1740s, he began taking notes on population growth, finding that the American population had the fastest growth rate on Earth. Observing that population growth depended on food supplies, he emphasized the abundance of food and available farmland in America. He calculated that America's population was doubling every 20 years and would surpass that of England in a century. In 1751, he drafted Observations concerning the Increase of Mankind, Peopling of Countries, etc. Four years later, it was anonymously printed in Boston and was quickly reproduced in Britain, where it influenced the economist Adam Smith and later the demographer Thomas Malthus, who credited Franklin with discovering a rule of population growth. Franklin's prediction that British mercantilism was unsustainable alarmed British leaders who did not want to be surpassed by the colonies, so they became more willing to impose restrictions on the colonial economy. Kammen (1990) and Drake (2011) say that Franklin's Observations concerning the Increase of Mankind (1755) and Ezra Stiles' "Discourse on Christian Union" (1760) are the leading works of 18th-century Anglo-American demography; Drake credits Franklin's "wide readership and prophetic insight". Franklin was also a pioneer in the study of slave demography, as shown in his 1755 essay. In his capacity as a farmer, he wrote at least one critique about the negative consequences of price controls, trade restrictions, and subsidy of the poor. This is succinctly preserved in his letter to the London Chronicle published November 29, 1766, titled "On the Price of Corn, and Management of the poor". Oceanography As deputy postmaster, Franklin became interested in North Atlantic Ocean circulation patterns. While in England in 1768, he heard a complaint from the Colonial Board of Customs. British packet ships carrying mail had taken several weeks longer to reach New York than it took an average merchant ship to reach Newport, Rhode Island. The merchantmen had a longer and more complex voyage because they left from London, while the packets left from Falmouth in Cornwall. Franklin put the question to his cousin Timothy Folger, a Nantucket whaler captain, who told him that merchant ships routinely avoided a strong eastbound mid-ocean current. The mail packet captains sailed dead into it, thus fighting an adverse current. 
Franklin worked with Folger and other experienced ship captains, learning enough to chart the current and name it the Gulf Stream, by which it is still known today. Franklin published his Gulf Stream chart in 1770 in England, where it was ignored. Subsequent versions were printed in France in 1778 and the U.S. in 1786. The British original edition of the chart had been so thoroughly ignored that everyone assumed it was lost forever until Phil Richardson, a Woods Hole oceanographer and Gulf Stream expert, discovered it in the Bibliothèque Nationale in Paris in 1980. This find received front-page coverage in The New York Times. It took many years for British sea captains to adopt Franklin's advice on navigating the current; once they did, they were able to trim two weeks from their sailing time. In 1853, the oceanographer and cartographer Matthew Fontaine Maury noted that while Franklin charted and codified the Gulf Stream, he did not discover it: An aging Franklin accumulated all his oceanographic findings in Maritime Observations, published in the Philosophical Society's transactions in 1786. It contained ideas for sea anchors, catamaran hulls, watertight compartments, shipboard lightning rods and a soup bowl designed to stay stable in stormy weather. Theories and experiments Franklin and his contemporary Leonhard Euler were the only major scientists who supported Christiaan Huygens's wave theory of light, which was largely ignored by the rest of the scientific community. In the 18th century, Isaac Newton's corpuscular theory was held to be true; it took Thomas Young's well-known slit experiment in 1803 to persuade most scientists to believe Huygens's theory. On October 21, 1743, according to the popular myth, a storm moving from the southwest denied Franklin the opportunity of witnessing a lunar eclipse. He was said to have noted that the prevailing winds were actually from the northeast, contrary to what he had expected. In correspondence with his brother, he learned that the same storm had not reached Boston until after the eclipse, despite the fact that Boston is to the northeast of Philadelphia. He deduced that storms do not always travel in the direction of the prevailing wind, a concept that greatly influenced meteorology. After the Icelandic volcanic eruption of Laki in 1783, and the subsequent harsh European winter of 1784, Franklin made observations on the causal nature of these two seemingly separate events. He wrote about them in a lecture series. Though Franklin is famously associated with kites from his lightning experiments, he has also been noted by many for using kites to pull humans and ships across waterways. George Pocock, in the book A Treatise on The Aeropleustic Art, or Navigation in the Air, by means of Kites, or Buoyant Sails, noted being inspired by Benjamin Franklin's traction of his body by kite power across a waterway. Franklin noted a principle of refrigeration by observing that on a very hot day, he stayed cooler in a wet shirt in a breeze than he did in a dry one. To understand this phenomenon more clearly, he conducted experiments. In 1758, on a warm day in Cambridge, England, he and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually falling well below the freezing point of water, even though another thermometer showed that the room temperature remained constant. 
In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day." According to Michael Faraday, Franklin's experiments on the non-conduction of ice are worth mentioning, although the law of the general effect of liquefaction on electrolytes is not attributed to Franklin. However, as reported in 1836 by Franklin's great-grandson Alexander Dallas Bache of the University of Pennsylvania, the law of the effect of heat on the conduction of bodies otherwise non-conductors, for example, glass, could be attributed to Franklin. Franklin wrote, "... A certain quantity of heat will make some bodies good conductors, that will not otherwise conduct ..." and again, "... And water, though naturally a good conductor, will not conduct well when frozen into ice." While traveling on a ship, Franklin had observed that the wake of a ship was diminished when the cooks scuttled their greasy water. He studied the effects on a large pond in Clapham Common, London. "I fetched out a cruet of oil and dropt a little of it on the water ... though not more than a teaspoon full, produced an instant calm over a space of several yards square." He later used the trick to "calm the waters" by carrying "a little oil in the hollow joint of my cane". Decision-making In a 1772 letter to Joseph Priestley, Franklin laid out the earliest known description of the Pro & Con list, a common decision-making technique, now sometimes called a decisional balance sheet: Views on religion and morality Like the other advocates of republicanism, Franklin emphasized that the new republic could survive only if the people were virtuous. All his life, he explored the role of civic and personal virtue, as expressed in Poor Richard's aphorisms. He felt that organized religion was necessary to keep men good to their fellow men, but rarely attended religious services himself. When he met Voltaire in Paris and asked his fellow member of the Enlightenment vanguard to bless his grandson, Voltaire said in English, "God and Liberty", and added, "this is the only appropriate benediction for the grandson of Monsieur Franklin." Franklin's parents were both pious Puritans. The family attended the Old South Church, the most liberal Puritan congregation in Boston, where Benjamin Franklin was baptized in 1706. Franklin's father, a poor chandler, owned a copy of a book, Bonifacius: Essays to Do Good, by the Puritan preacher and family friend Cotton Mather, which Franklin often cited as a key influence on his life. "If I have been", Franklin wrote to Cotton Mather's son seventy years later, "a useful citizen, the public owes the advantage of it to that book". His first pen name, Silence Dogood, paid homage both to the book and to a widely known sermon by Mather. The book preached the importance of forming voluntary associations to benefit society. Franklin learned about forming do-good associations from Mather, but his organizational skills made him the most influential force in making voluntarism an enduring part of the American ethos. Franklin formulated a presentation of his beliefs and published it in 1728. He no longer accepted the key Puritan ideas regarding salvation, the divinity of Jesus, or indeed much religious dogma. He classified himself as a deist in his 1771 autobiography, although he still considered himself a Christian. 
He retained a strong faith in a God as the wellspring of morality and goodness in man, and as a Providential actor in history responsible for American independence. At a critical impasse during the Constitutional Convention in June 1787, he attempted to introduce the practice of daily common prayer with these words: The motion gained almost no support and was never brought to a vote. Franklin was an enthusiastic admirer of the evangelical minister George Whitefield during the First Great Awakening. He did not himself subscribe to Whitefield's theology, but he admired Whitefield for exhorting people to worship God through good works. He published all of Whitefield's sermons and journals, thereby earning a lot of money and boosting the Great Awakening. When he stopped attending church, Franklin wrote in his autobiography: Franklin retained a lifelong commitment to the non-religious Puritan virtues and political values he had grown up with, and through his civic work and publishing, he succeeded in passing these values into the American culture permanently. He had a "passion for virtue". These Puritan values included his devotion to egalitarianism, education, industry, thrift, honesty, temperance, charity and community spirit. Thomas Kidd states, "As an adult, Franklin touted ethical responsibility, industriousness, and benevolence, even as he jettisoned Christian orthodoxy." The classical authors read in the Enlightenment period taught an abstract ideal of republican government based on hierarchical social orders of king, aristocracy and commoners. It was widely believed that English liberties relied on their balance of power, but also hierarchal deference to the privileged class. "Puritanism ... and the epidemic evangelism of the mid-eighteenth century, had created challenges to the traditional notions of social stratification" by preaching that the Bible taught all men are equal, that the true value of a man lies in his moral behavior, not his class, and that all men can be saved. Franklin, steeped in Puritanism and an enthusiastic supporter of the evangelical movement, rejected the salvation dogma but embraced the radical notion of egalitarian democracy. Franklin's commitment to teach these values was itself something he gained from his Puritan upbringing, with its stress on "inculcating virtue and character in themselves and their communities". These Puritan values and the desire to pass them on, were one of his quintessentially American characteristics and helped shape the character of the nation. Max Weber considered Franklin's ethical writings a culmination of the Protestant ethic, which ethic created the social conditions necessary for the birth of capitalism. One of his notable characteristics was his respect, tolerance and promotion of all churches. Referring to his experience in Philadelphia, he wrote in his autobiography, "new Places of worship were continually wanted, and generally erected by voluntary Contribution, my Mite for such purpose, whatever might be the Sect, was never refused." "He helped create a new type of nation that would draw strength from its religious pluralism." The evangelical revivalists who were active mid-century, such as Whitefield, were the greatest advocates of religious freedom, "claiming liberty of conscience to be an 'inalienable right of every rational creature.'" Whitefield's supporters in Philadelphia, including Franklin, erected "a large, new hall, that ... could provide a pulpit to anyone of any belief." 
Franklin's rejection of dogma and doctrine and his stress on the God of ethics and morality and civic virtue made him the "prophet of tolerance". He composed "A Parable Against Persecution", an apocryphal 51st chapter of Genesis in which God teaches Abraham the duty of tolerance. While he was living in London in 1774, he was present at the birth of British Unitarianism, attending the inaugural session of the Essex Street Chapel, at which Theophilus Lindsey drew together the first avowedly Unitarian congregation in England; this was somewhat politically risky and pushed religious tolerance to new boundaries, as a denial of the doctrine of the Trinity was illegal until the 1813 Act. Although his parents had intended for him a career in the church, Franklin as a young man adopted the Enlightenment religious belief in deism, that God's truths can be found entirely through nature and reason, declaring, "I soon became a thorough Deist." He rejected Christian dogma in a 1725 pamphlet A Dissertation on Liberty and Necessity, Pleasure and Pain, which he later saw as an embarrassment, while simultaneously asserting that God is "all wise, all good, all powerful." He defended his rejection of religious dogma with these words: "I think opinions should be judged by their influences and effects; and if a man holds none that tend to make him less virtuous or more vicious, it may be concluded that he holds none that are dangerous, which I hope is the case with me." After the disillusioning experience of seeing the decay in his own moral standards, and those of two friends in London whom he had converted to deism, Franklin decided that deism was true but it was not as useful in promoting personal morality as were the controls imposed by organized religion. Ralph Frasca contends that in his later life he can be considered a non-denominational Christian, although he did not believe Christ was divine. In a major scholarly study of his religion, Thomas Kidd argues that Franklin believed that true religiosity was a matter of personal morality and civic virtue. Kidd says Franklin maintained his lifelong resistance to orthodox Christianity while arriving finally at a "doctrineless, moralized Christianity." According to David Morgan, Franklin was a proponent of "generic religion." He prayed to "Powerful Goodness" and referred to God as "the infinite". John Adams noted that he was a mirror in which people saw their own religion: "The Catholics thought him almost a Catholic. The Church of England claimed him as one of them. The Presbyterians thought him half a Presbyterian, and the Friends believed him a wet Quaker." Adams himself decided that Franklin best fit among the "Atheists, Deists, and Libertines." Whatever else Franklin was, concludes Morgan, "he was a true champion of generic religion". In a letter to Richard Price, Franklin states that he believes religion should support itself without help from the government, claiming, "When a Religion is good, I conceive that it will support itself; and, when it cannot support itself, and God does not take care to support, so that its Professors are oblig'd to call for the help of the Civil Power, it is a sign, I apprehend, of its being a bad one." In 1790, just about a month before he died, Franklin wrote a letter to Ezra Stiles, president of Yale University, who had asked him his views on religion: On July 4, 1776, Congress appointed a three-member committee composed of Franklin, Jefferson, and Adams to design the Great Seal of the United States. 
Franklin's proposal (which was not adopted) featured the motto: "Rebellion to Tyrants is Obedience to God" and a scene from the Book of Exodus, with Moses, the Israelites, the pillar of fire, and George III depicted as pharaoh. The design that was produced was not acted upon by Congress, and the Great Seal's design was not finalized until a third committee was appointed in 1782. Franklin strongly supported the right to freedom of speech: Thirteen Virtues Franklin sought to cultivate his character by a plan of 13 virtues, which he developed at age 20 (in 1726) and continued to practice in some form for the rest of his life. His autobiography lists his 13 virtues as:
1. Temperance. Eat not to dullness; drink not to elevation.
2. Silence. Speak not but what may benefit others or yourself; avoid trifling conversation.
3. Order. Let all your things have their places; let each part of your business have its time.
4. Resolution. Resolve to perform what you ought; perform without fail what you resolve.
5. Frugality. Make no expense but to do good to others or yourself; i.e., waste nothing.
6. Industry. Lose no time; be always employ'd in something useful; cut off all unnecessary actions.
7. Sincerity. Use no hurtful deceit; think innocently and justly, and, if you speak, speak accordingly.
8. Justice. Wrong none by doing injuries, or omitting the benefits that are your duty.
9. Moderation. Avoid extremes; forbear resenting injuries so much as you think they deserve.
10. Cleanliness. Tolerate no uncleanliness in body, clothes, or habitation.
11. Tranquility. Be not disturbed at trifles, or at accidents common or unavoidable.
12. Chastity. Rarely use venery but for health or offspring, never to dullness, weakness, or the injury of your own or another's peace or reputation.
13. Humility. Imitate Jesus and Socrates.
Franklin did not try to work on them all at once. Instead, he worked on one and only one each week "leaving all others to their ordinary chance". While he did not adhere completely to the enumerated virtues, and by his own admission he fell short of them many times, he believed the attempt made him a better man, contributing greatly to his success and happiness, which is why in his autobiography, he devoted more pages to this plan than to any other single point and wrote, "I hope, therefore, that some of my descendants may follow the example and reap the benefit." Slavery Franklin owned as many as seven slaves, including two men who worked in his household and his shop. He posted paid ads for the sale of slaves and for the capture of runaway slaves and allowed the sale of slaves in his general store. However, he later became an outspoken critic of slavery. In 1758, he advocated the opening of a school for the education of black slaves in Philadelphia. He took two slaves to England with him, Peter and King. King escaped with a woman to live in the outskirts of London, and by 1758 he was working for a household in Suffolk. After returning from England in 1762, Franklin became notably more abolitionist in nature, attacking American slavery. In the wake of Somerset v Stewart, he voiced frustration at British abolitionists: Franklin, however, refused to publicly debate the issue of slavery at the 1787 Constitutional Convention. At the time of the American founding, there were about half a million slaves in the United States, mostly in the five southernmost states, where they made up 40% of the population. 
Many of the leading American founders, most notably Thomas Jefferson, George Washington, and James Madison, owned slaves, but many others did not. Benjamin Franklin thought that slavery was "an atrocious debasement of human nature" and "a source of serious evils". He and Benjamin Rush founded the Pennsylvania Society for Promoting the Abolition of Slavery in 1774. In 1790, Quakers from New York and Pennsylvania presented their petition for abolition to Congress. Their argument against slavery was backed by the Pennsylvania Abolitionist Society. In his later years, as Congress was forced to deal with the issue of slavery, Franklin wrote several essays that stressed the importance of the abolition of slavery and of the integration of African Americans into American society. These writings included:
An Address to the Public (1789)
A Plan for Improving the Condition of the Free Blacks (1789)
Sidi Mehemet Ibrahim on the Slave Trade (1790)
Vegetarianism Franklin became a vegetarian when he was a teenager apprenticing at a print shop, after coming upon a book by the early vegetarian advocate Thomas Tryon. He would also have been familiar with the moral arguments espoused by prominent vegetarian Quakers in the colonial-era Province of Pennsylvania, including Benjamin Lay and John Woolman. His reasons for vegetarianism were based on health, ethics, and economy: Franklin also declared the consumption of meat to be "unprovoked murder". Despite his convictions, he began to eat fish after being tempted by fried cod on a boat sailing from Boston, justifying the eating of animals by observing that the fish's stomach contained other fish. Nonetheless, he recognized the faulty ethics in this argument and would continue to be a vegetarian on and off. He was "excited" by tofu, which he learned of from the writings of a Spanish missionary to South East Asia, Domingo Fernández Navarrete. Franklin sent a sample of soybeans to prominent American botanist John Bartram and had previously written to British diplomat and Chinese trade expert James Flint inquiring as to how tofu was made, with their correspondence believed to be the first documented use of the word "tofu" in the English language. Franklin's "Second Reply to Vindex Patriae", a 1766 letter advocating self-sufficiency and less dependence on England, lists various examples of the bounty of American agricultural products, and does not mention meat. Detailing new American customs, he wrote that, "[t]hey resolved last spring to eat no more lamb; and not a joint of lamb has since been seen on any of their tables ... the sweet little creatures are all alive to this day, with the prettiest fleeces on their backs imaginable." View on inoculation The concept of preventing smallpox by variolation was introduced to colonial America by an African slave named Onesimus through his owner Cotton Mather in the early eighteenth century, but the procedure was not immediately accepted. James Franklin's newspaper carried articles in 1721 that vigorously denounced the concept. However, by 1736 Benjamin Franklin, by then a prominent Philadelphia citizen, was known as a supporter of the procedure. Therefore, when four-year-old "Franky" died of smallpox, opponents of the procedure circulated rumors that the child had been inoculated, and that this was the cause of his subsequent death. 
When Franklin became aware of this gossip, he placed a notice in the Pennsylvania Gazette, stating: "I do hereby sincerely declare, that he was not inoculated, but receiv'd the Distemper in the common Way of Infection ... I intended to have my Child inoculated." The child had had a bad case of flux (diarrhea), and his parents had waited for him to get well before having him inoculated. Franklin wrote in his Autobiography: "In 1736 I lost one of my sons, a fine boy of four years old, by the small-pox, taken in the common way. I long regretted bitterly, and still regret that I had not given it to him by inoculation. This I mention for the sake of parents who omit that operation, on the supposition that they should never forgive themselves if a child died under it; my example showing that the regret may be the same either way, and that, therefore, the safer should be chosen."
Interests and activities
Musical endeavors
Franklin is known to have played the violin, the harp, and the guitar. He also composed music, notably a string quartet in early classical style. While he was in London, he developed a much-improved version of the glass harmonica, in which the glasses rotate on a shaft, with the player's fingers held steady, instead of the other way around. He worked with the London glassblower Charles James to create it, and instruments based on his mechanical version soon found their way to other parts of Europe. Joseph Haydn, an admirer of Franklin's enlightened ideas, had a glass harmonica in his instrument collection. Mozart composed for Franklin's glass harmonica, as did Beethoven. Gaetano Donizetti used the instrument in the accompaniment to Amelia's aria "Par che mi dica ancora" in the tragic opera Il castello di Kenilworth (1821), as did Camille Saint-Saëns in his 1886 The Carnival of the Animals. Richard Strauss called for the glass harmonica in his 1917 Die Frau ohne Schatten, and numerous other composers used Franklin's instrument as well.
Chess
Franklin was an avid chess player. He was playing chess by around 1733, making him the first chess player known by name in the American colonies. His essay on "The Morals of Chess" in Columbian Magazine in December 1786 is the second known writing on chess in America. This essay in praise of chess, prescribing a code of behavior for the game, has been widely reprinted and translated. He and a friend used chess as a means of learning the Italian language, which both were studying; the winner of each game between them had the right to assign a task, such as parts of the Italian grammar to be learned by heart, to be performed by the loser before their next meeting. During his many years as a civil servant and diplomat in England, where the game was far better established than in America, Franklin was able to play more frequently against stronger opposition and to raise his standard of play. He regularly attended Old Slaughter's Coffee House in London for chess and socializing, making many important personal contacts. While in Paris, both as a visitor and later as ambassador, he visited the famous Café de la Régence, which France's strongest players made their regular meeting place. No records of his games have survived, so it is not possible to ascertain his playing strength in modern terms. Franklin was inducted into the U.S. Chess Hall of Fame in 1999. The Franklin Mercantile Chess Club in Philadelphia, the second oldest chess club in the U.S., is named in his honor.
Legacy
Bequest
Franklin bequeathed £1,000 (about $4,400 at the time, or about $125,000 in 2021 dollars) each to the cities of Boston and Philadelphia, in trust to gather interest for 200 years. The idea for the trust originated in 1785, when the French mathematician Charles-Joseph Mathon de la Cour, who admired Franklin greatly, wrote a friendly parody of Franklin's Poor Richard's Almanack called Fortunate Richard. The main character leaves a smallish amount of money in his will, five lots of 100 livres, to collect interest over one, two, three, four or five full centuries, with the resulting astronomical sums to be spent on impossibly elaborate utopian projects. Franklin, who was 79 years old at the time, wrote thanking him for a great idea and telling him that he had decided to leave a bequest of 1,000 pounds each to his native Boston and his adopted Philadelphia. By 1990, more than $2,000,000 had accumulated in Franklin's Philadelphia trust, which had loaned the money to local residents. From 1940 to 1990, the money was used mostly for mortgage loans. When the trust came due, Philadelphia decided to spend it on scholarships for local high school students. Franklin's Boston trust fund accumulated almost $5,000,000 during that same time; at the end of its first 100 years a portion was allocated to help establish a trade school that became the Franklin Institute of Boston, and the entire fund was later dedicated to supporting this institute. In 1787, a group of prominent ministers in Lancaster, Pennsylvania, proposed the foundation of a new college named in Franklin's honor. Franklin donated £200 towards the development of Franklin College (now called Franklin & Marshall College).
Likeness and image
As the only person to have signed the Declaration of Independence (1776), the Treaty of Alliance with France (1778), the Treaty of Paris (1783), and the U.S. Constitution (1787), Franklin is considered one of the leading Founding Fathers of the United States. His pervasive influence in the early history of the nation has led to his being jocularly called "the only president of the United States who was never president of the United States". Franklin's likeness is ubiquitous. Since 1928, it has adorned American $100 bills. From 1948 to 1963, Franklin's portrait was on the half-dollar. He has appeared on a $50 bill and on several varieties of the $100 bill of 1914 and 1918. Franklin also appears on the $1,000 Series EE savings bond. On April 12, 1976, as part of a bicentennial celebration, Congress dedicated a tall marble statue in Philadelphia's Franklin Institute as the Benjamin Franklin National Memorial. Many of Franklin's personal possessions are on display at the institute. In London, his house at 36 Craven Street, the only surviving former residence of Franklin, was first marked with a blue plaque and has since been opened to the public as the Benjamin Franklin House. In 1998, workmen restoring the building dug up the remains of six children and four adults hidden below the home; a total of 15 bodies have since been recovered. The Friends of Benjamin Franklin House (the organization responsible for the restoration) note that the bones were likely placed there by William Hewson, who lived in the house for two years and who had built a small anatomy school at the back of the house. They note that while Franklin likely knew what Hewson was doing, he probably did not participate in any dissections because he was much more of a physicist than a medical man. He has been honored on U.S.
postage stamps many times. The image of Franklin, the first postmaster general of the United States, occurs on the face of U.S. postage more often than that of any other notable American save George Washington. He appeared on the first U.S. postage stamp, issued in 1847. From 1908 through 1923, the U.S. Post Office issued a series of postage stamps commonly referred to as the Washington–Franklin Issues, in which Washington and Franklin were depicted many times over a 14-year period, the longest run of any one series in U.S. postal history. However, he appears on only a few commemorative stamps. Some of the finest portrayals of Franklin on record can be found on the engravings inscribed on the face of U.S. postage.
Contract bridge, or simply bridge, is a trick-taking card game using a standard 52-card deck. In its basic format, it is played by four players in two competing partnerships, with partners sitting opposite each other around a table. Millions of people play bridge worldwide in clubs, tournaments, online and with friends at home, making it one of the world's most popular card games, particularly among seniors. The World Bridge Federation (WBF) is the governing body for international competitive bridge, with numerous other bodies governing it at the regional level. The game consists of a number of deals, each progressing through four phases. The cards are dealt to the players; then the players call (or bid) in an auction seeking to take the contract, specifying how many tricks the partnership receiving the contract (the declaring side) needs to take to receive points for the deal. During the auction, partners use their bids to exchange information about their hands, including overall strength and distribution of the suits; no other means of conveying or implying any information is permitted. The cards are then played, the declaring side trying to fulfill the contract and the defenders trying to stop the declaring side from achieving its goal. The deal is scored based on the number of tricks taken, the contract, and various other factors which depend to some extent on the variation of the game being played. Rubber bridge is the most popular variation for casual play, but most club and tournament play involves some variant of duplicate bridge, where the cards are not re-dealt on each occasion, but the same deal is played by two or more sets of players (or "tables") to enable comparative scoring.
History and etymology
Bridge is a member of the family of trick-taking games and is a derivative of whist, which had become the dominant such game and enjoyed a loyal following for centuries. The idea of a trick-taking 52-card game has its first documented origins in Italy and France. The French physician and author Rabelais (1493–1553) mentions a game called "La Triomphe" in one of his works. Juan Luis Vives, in his Linguae latinae exercitatio (Exercise in the Latin Language) of 1539, also has a dialogue on card games, in which the characters play "Triumphus hispanicus" (Spanish Triumph). Bridge departed from whist with the creation of "Biritch" in the 19th century and evolved through the late 19th and early 20th centuries to form the present game. The first rule book for bridge, dated 1886, is Biritch, or Russian Whist, written by John Collinson, an English financier working in Ottoman Constantinople. It and his subsequent letter to The Saturday Review, dated 28 May 1906, document the origin of Biritch as being the Russian community in Constantinople. The word biritch is thought to be a transliteration of the Russian word Бирюч (бирчий, бирич), the occupation of a diplomatic clerk or announcer. Another theory is that British soldiers invented the game bridge while serving in the Crimean War, and named it after the Galata Bridge, which they crossed on their way to a coffeehouse to play cards. Biritch had many significant bridge-like developments: the dealer chose the trump suit, or nominated his partner to do so; there was a call of "no trumps" (biritch); the dealer's partner's hand became dummy; points were scored above and below the line; game was 3NT, 4 and 5 (although 8 club odd tricks and 15 spade odd tricks were needed); the score could be doubled and redoubled; and there were slam bonuses. It has some features in common with solo whist.
This game, and variants of it known as "bridge" and "bridge whist", became popular in the United States and the United Kingdom in the 1890s despite the long-established dominance of whist. Its breakthrough was its acceptance in 1894 by Lord Brougham at London's Portland Club. In 1904 auction bridge was developed, in which the players bid in a competitive auction to decide the contract and declarer. The object became to make at least as many tricks as were contracted for, and penalties were introduced for failing to do so. In auction bridge, bidding beyond what is needed to win the auction is pointless: if a partnership takes all 13 tricks, there is no difference in score between a final bid at the one level and a final bid at the seven level, as the bonuses for rubber, small slam or grand slam depend on the number of tricks taken rather than the number of tricks bid. The modern game of contract bridge was the result of innovations to the scoring of auction bridge by Harold Stirling Vanderbilt and others. The most significant change was that only the tricks contracted for were scored below the line toward game or a slam bonus, a change that resulted in bidding becoming much more challenging and interesting. Also new was the concept of "vulnerability", which made sacrifices to protect the lead in a rubber more expensive. The various scores were adjusted to produce a more balanced and interesting game. Vanderbilt set out his rules in 1925, and within a few years contract bridge had so supplanted other forms of the game that "bridge" became synonymous with "contract bridge". The form of bridge mostly played in clubs, tournaments and online is duplicate bridge. The number of people playing contract bridge has declined since its peak in the 1940s, when a survey found it was played in 44% of US households. The game is still widely played, especially amongst retirees, and in 2005 the ACBL estimated there were 25 million players in the US.
Gameplay
Overview
Bridge is a four-player partnership trick-taking game with thirteen tricks per deal. The dominant variations of the game are rubber bridge, more common in social play, and duplicate bridge, which enables comparative scoring in tournament play. Each player is dealt thirteen cards from a standard 52-card deck. A trick starts when a player leads, i.e. plays the first card. The leader to the first trick is determined by the auction; the leader to each subsequent trick is the player who won the preceding trick. Each player, in clockwise order, plays one card on the trick. Players must play a card of the same suit as the original card led, unless they have none (said to be "void"), in which case they may play any card. The player who played the highest-ranked card wins the trick. Within a suit, the ace is ranked highest, followed by the king, queen and jack and then the ten through to the two. In a deal where the auction has determined that there is no trump suit, the trick must be won by a card of the suit led. In a deal with a trump suit, cards of that suit are superior in rank to the cards of any other suit. If one or more players plays a trump to a trick when void in the suit led, the highest trump wins. For example, if the trump suit is spades and a player is void in the suit led and plays a spade card, they win the trick if no other player plays a higher spade. If a trump suit is led, the usual rule for trick-taking applies. Unlike its predecessor, whist, the goal of bridge is not simply to take the most tricks in a deal. Instead, the goal is to successfully estimate how many tricks one's partnership can take.
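The trick-winning rule just described can be made concrete in a short sketch. The following Python fragment is purely illustrative; the card representation, function name and example cards are assumptions invented for this example rather than part of any standard bridge software.

```python
# A card is a (rank, suit) pair: ranks run 2-14 (11 = jack, 12 = queen,
# 13 = king, 14 = ace); suits are "S", "H", "D", "C". trump is None at notrump.

def trick_winner(cards_in_play_order, leader_index, trump=None):
    """Return the seat index (0-3) of the player who wins the trick.

    cards_in_play_order[0] is the card led; its suit is the suit led.
    A trump beats any non-trump; otherwise the highest card of the suit led wins,
    and a card of any other suit (a discard) can never win.
    """
    led_suit = cards_in_play_order[0][1]

    def strength(card):
        rank, suit = card
        if trump is not None and suit == trump:
            return (2, rank)      # trumps outrank everything else
        if suit == led_suit:
            return (1, rank)      # then the suit that was led
        return (0, rank)          # discards cannot win the trick

    winner_offset = max(range(4), key=lambda i: strength(cards_in_play_order[i]))
    return (leader_index + winner_offset) % 4


# Seat 1 leads the king of hearts; seat 3 is void in hearts and plays a small spade.
trick = [(13, "H"), (9, "H"), (2, "S"), (14, "H")]
assert trick_winner(trick, leader_index=1, trump="S") == 3   # the ruff wins
assert trick_winner(trick, leader_index=1, trump=None) == 0  # at notrump the ace of hearts wins
```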
To illustrate this idea of estimating tricks, the simpler partnership trick-taking game of spades has a similar mechanism: the usual trick-taking rules apply with spades as the trump suit, but at the beginning of the game players bid, or estimate, how many tricks they can win, and the tricks bid by both players in a partnership are added together. If a partnership takes at least that many tricks, it receives points for the round; otherwise, it loses penalty points. Bridge extends the concept of bidding into an auction, where partnerships compete to take a contract, specifying how many tricks they will need to take in order to receive points, and also specifying the trump suit (or no trump, meaning that there will be no trump suit). Players take turns to call in clockwise order: each player in turn either passes; doubles, which increases the penalties for not making the contract specified by the opposing partnership's last bid but also increases the reward for making it; redoubles; or states a contract that their partnership will adopt, which must be higher than the previous highest bid (if any). Eventually, the player who bid the highest contract (determined by the contract's level as well as the trump suit or no trump) wins the contract for their partnership. In one example auction, the east–west pair might secure a contract at the six level; an auction concludes when there have been three successive passes. Note that six tricks are added to contract values, so a six-level contract is a contract to take twelve tricks. In practice, establishing a contract without enough information on the partner's hand is difficult, so there exist many bidding systems assigning meanings to bids, with common ones including Standard American, Acol, and 2/1 game forcing. This contrasts with spades, where players bid only on their own hands. After the contract is decided and the first lead is made, the declarer's partner (dummy) lays their cards face up on the table, and the declarer plays the dummy's cards as well as their own. The opposing partnership is called the defenders, and their goal is to stop the declarer from fulfilling the contract. Once all the cards have been played, the hand is scored: if the declaring side makes their contract, they receive points based on the level of the contract, with some trump suits being worth more points than others and no trump being the highest, as well as any applicable bonus points. If the declarer fails to fulfill the contract, the defenders receive points depending on the declaring side's undertricks (the number of tricks short of the contract) and whether the contract was doubled by the defenders.
Setup and dealing
The four players sit in two partnerships, with players sitting opposite their partners. A cardinal direction is assigned to each seat, so that one partnership sits in North and South, while the other sits in West and East. The cards may be freshly dealt or, in duplicate bridge games, pre-dealt. All that is needed in basic games are the cards and a method of keeping score, but there is often other equipment on the table, such as a board containing the cards to be played (in duplicate bridge), bidding boxes, or screens. In rubber bridge each player draws a card at the start of the game; the player who draws the highest card deals first. The player who draws the second-highest card becomes the dealer's partner and takes the chair on the opposite side of the table. They play against the other two. The deck is shuffled and cut, usually by the player to the left of the dealer, before dealing.
Players take turns to deal, in clockwise order. The dealer deals the cards clockwise, one card at a time. Normally, rubber bridge is played with two packs of cards: whilst one pack is being dealt, the dealer's partner shuffles the other pack. After shuffling, the pack is placed on the right, ready for the next dealer. Before dealing, the next dealer passes the cards to the previous dealer, who cuts them. In duplicate bridge the cards are pre-dealt, either by hand or by a computerized dealing machine, in order to allow for comparative scoring. Once dealt, the cards are placed in a device called a "board", having slots designated for each player's cardinal direction seating position. After a deal has been played, players return their cards to the appropriate slot in the board, ready to be played by the next table.
Auction
The dealer opens the auction and can make the first call, and the auction proceeds clockwise. When it is their turn to call, a player may pass (but can enter the bidding later) or bid a contract, specifying the level of the contract and either a trump suit or no trump (the denomination), provided that it is higher than the last bid by any player, including their partner. All bids promise to take a number of tricks in excess of six, so a bid must be between one (seven tricks) and seven (thirteen tricks). A bid is higher than another bid if either the level is greater (e.g., 2 over 1NT) or the denomination is higher, with the denominations ranking in ascending (alphabetical) order: clubs, diamonds, hearts, spades, and no trump (NT). Calls may be made orally or with a bidding box. If the last bid was by the opposing partnership, one may also double the opponents' bid, increasing the penalties for undertricks but also increasing the reward for making the contract. Doubling does not carry over to future bids by the opponents unless those bids are doubled again. A player on the partnership being doubled may also redouble, which increases the penalties and rewards further. Players may not see their partner's hand during the auction, only their own. There exist many bidding conventions that assign agreed meanings to various calls to assist players in reaching an optimal contract (or to obstruct the opponents). The auction ends when, after a player bids, doubles, or redoubles, every other player has passed, in which case the action proceeds to the play; or when every player has passed and no bid has been made, in which case the round is considered to be "passed out" and not played.
Play
The player from the declaring side who first bid the denomination named in the final contract becomes declarer. The player to the left of the declarer leads to the first trick. Dummy then lays his or her cards face up on the table, organized in columns by suit. Play proceeds clockwise, with each player required to follow suit if possible. Tricks are won by the highest trump or, if none were played, by the highest card of the suit led. The player who won the previous trick leads to the next trick. The declarer has control of the dummy's cards and tells his partner which card to play at dummy's turn. There also exist conventions that communicate further information between defenders about their hands during the play. At any time, a player may claim, stating that their side will win a specific number of the remaining tricks. The claiming player lays his cards down on the table and explains the order in which he intends to play the remaining cards. The opponents can either accept the claim, in which case the round is scored accordingly, or dispute the claim.
If the claim is disputed, play continues with the claiming player's cards face up in rubber games; in duplicate games, play ceases and the tournament director is called to adjudicate the hand.
Scoring
At the end of the hand, points are awarded to the declaring side if they make the contract, or else to the defenders. Partnerships can be vulnerable, increasing the rewards for making the contract but also increasing the penalties for undertricks. In rubber bridge, if a side has won 100 contract points, they have won a game and are vulnerable for the remaining rounds, but in duplicate bridge, vulnerability is predetermined by the number of each board. If the declaring side makes their contract, they receive points for odd tricks, the tricks bid and made in excess of six. In both rubber and duplicate bridge, the declaring side is awarded 20 points per odd trick for a contract in clubs or diamonds, and 30 points per odd trick for a contract in hearts or spades. For a contract in notrump, the declaring side is awarded 40 points for the first odd trick and 30 points for each remaining odd trick. Contract points are doubled or quadrupled if the contract is respectively doubled or redoubled. In rubber bridge, a partnership wins one game once it has accumulated 100 contract points; excess contract points do not carry over to the next game. A partnership that wins two games wins the rubber, receiving a bonus of 500 points if the opponents have won a game, and 700 points if they have not. Overtricks score the same number of points per odd trick, although their doubled and redoubled values differ. Bonuses vary between the two bridge variations both in score and in type (for example, rubber bridge awards a bonus for holding a certain combination of high cards), although some are common between the two. A larger bonus is awarded if the declaring side makes a small slam or grand slam, a contract of 12 or 13 tricks respectively. If the declaring side is not vulnerable, a small slam earns 500 points and a grand slam 1,000 points; if the declaring side is vulnerable, a small slam earns 750 points and a grand slam 1,500. In rubber bridge, the rubber finishes when a partnership has won two games, but the partnership receiving the most overall points wins the rubber. Duplicate bridge is scored comparatively, meaning that the score for the hand is compared to those of other tables playing the same cards, and match points are awarded according to the comparative results: usually either "matchpoint scoring", where each partnership receives 2 points (or 1 point) for each pair that they beat and 1 point (or ½ point) for each tie, or IMP (international matchpoint) scoring, where the number of IMPs varies (but less than proportionately) with the points difference between the teams. Undertricks are scored in both variations according to a schedule that depends on the number of undertricks, the declaring side's vulnerability, and whether the contract was doubled or redoubled.
Rules
The rules of the game are referred to as the laws, as promulgated by various bridge organizations.
Duplicate
The official rules of duplicate bridge are promulgated by the WBF as "The Laws of Duplicate Bridge 2017". The Laws Committee of the WBF, composed of world experts, updates the Laws every 10 years; it also issues a Laws Commentary advising on interpretations it has rendered. In addition to the basic rules of play, there are many additional rules covering playing conditions and the rectification of irregularities, which are primarily for use by tournament directors who act as referees and have overall control of procedures during competitions.
However, various details of procedure are left to the discretion of the zonal bridge organisation for tournaments under their aegis, and some (for example, the choice of movement) to the sponsoring organisation (for example, the club). Some zonal organisations of the WBF also publish editions of the Laws. For example, the American Contract Bridge League (ACBL) publishes the Laws of Duplicate Bridge and additional documentation for club and tournament directors.
Rubber
There are no universally accepted rules for rubber bridge, but some zonal organisations have published their own. An example for those wishing to abide by a published standard is The Laws of Rubber Bridge as published by the American Contract Bridge League. The majority of rules mirror those of duplicate bridge in the bidding and play and differ primarily in procedures for dealing and scoring.
Online
In 2001, the WBF promulgated a set of laws for online play.
Tournaments
Bridge is a game of skill played with randomly dealt cards, which makes it also a game of chance, or more exactly, a tactical game with inbuilt randomness, imperfect knowledge and restricted communication. The chance element is in the deal of the cards; in duplicate bridge some of the chance element is eliminated by comparing results of multiple pairs in identical situations. This is achievable when there are eight or more players, sitting at two or more tables, and the deals from each table are preserved and passed to the next table, thereby duplicating them for the other table(s) of players. At the end of a session, the scores for each deal are compared, and the most points are awarded to the players doing the best with each particular deal. This measures relative skill (but still with an element of luck), because each pair or team is being judged only on the ability to bid with, and play, the same cards as other players. Duplicate bridge is played in clubs and tournaments, which can gather as many as several hundred players. Duplicate bridge is a mind sport, and its popularity gradually became comparable to that of chess, with which it is often compared for its complexity and the mental skills required for high-level competition. Bridge and chess are the only "mind sports" recognized by the International Olympic Committee, although they were not found eligible for the main Olympic program. In October 2017 the British High Court ruled against the English Bridge Union, finding that bridge is not a sport under a definition of sport as involving physical activity, but did not rule on the "broad, somewhat philosophical question" as to whether or not bridge is a sport. The basic premise of duplicate bridge had previously been used for whist matches as early as 1857. Initially, bridge was not thought to be suitable for duplicate competition; it was not until the 1920s that (auction) bridge tournaments became popular. In 1925, when contract bridge first evolved, bridge tournaments were becoming popular, but the rules were somewhat in flux, and several different organizing bodies were involved in tournament sponsorship: the American Bridge League (formerly the American Auction Bridge League, which changed its name in 1929), the American Whist League, and the United States Bridge Association. In 1935, the first officially recognized world championship was held. In 1958, the World Bridge Federation (WBF) was founded to promote bridge worldwide, coordinate periodic revision of the Laws (every ten years, next in 2027) and conduct world championships.
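As a concrete illustration of the trick-score values and the comparative scoring idea described above, here is a minimal Python sketch. The function names, data layout and example scores are assumptions invented for this illustration; a real scoring program would also handle doubled contracts, overtricks, undertricks and the various bonuses.

```python
MINORS, MAJORS = {"C", "D"}, {"H", "S"}

def trick_score(level, denomination):
    """Contract points for the odd tricks bid and made, undoubled:
    20 each in a minor suit, 30 each in a major suit, 40 + 30 + ... in notrump."""
    if denomination in MINORS:
        return 20 * level
    if denomination in MAJORS:
        return 30 * level
    return 40 + 30 * (level - 1)        # notrump

def matchpoints(scores):
    """Compare one board's results across tables: 2 matchpoints for every
    other pair beaten and 1 for every pair tied, as described above."""
    return [sum(2 if mine > other else 1 if mine == other else 0
                for j, other in enumerate(scores) if j != i)
            for i, mine in enumerate(scores)]

# A four-level major contract bid and made scores 4 x 30 = 120 contract points
# (enough for game); 3NT scores 40 + 2 x 30 = 100; a five-level minor scores 100.
assert trick_score(4, "H") == 120
assert trick_score(3, "NT") == 100
assert trick_score(5, "D") == 100

# Hypothetical North-South scores on the same board at four different tables.
assert matchpoints([420, 420, -50, 450]) == [3, 3, 0, 6]
```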
Bidding boxes and screens
In tournaments, "bidding boxes" are frequently used, as noted above. These avoid the possibility of players at other tables hearing any spoken bids. The bidding cards are laid out in sequence as the auction progresses. Although it is not a formal rule, many clubs adopt a protocol that the bidding cards stay revealed until the first playing card is tabled, after which point the bidding cards are put away. Bidding pads are an alternative to bidding boxes. A bidding pad is a block of 100 mm square tear-off sheets. Players write their bids on the top sheet. When the first trick is complete, the sheet is torn off and discarded. In top national and international events, "bidding screens" are used. These are placed diagonally across the table, preventing partners from seeing each other during the game; often the screen is removed after the auction is complete.
Strategy
Bidding
Much of the complexity in bridge arises from the difficulty of arriving at a good final contract in the auction (or deciding to let the opponents declare the contract). This is a difficult problem: the two players in a partnership must try to communicate enough information about their hands to arrive at a makeable contract, but the information they can exchange is restricted; information may be passed only by the calls made and later by the cards played, not by other means, and in addition, the agreed-upon meaning of each call and play must be available to the opponents. Since a partnership that has freedom to bid gradually at leisure can exchange more information, and since a partnership that can interfere with the opponents' bidding (as by raising the bidding level rapidly) can cause difficulties for their opponents, bidding systems are both informational and strategic. It is this mixture of information exchange and evaluation, deduction, and tactics that is at the heart of bidding in bridge. A number of basic rules of thumb in bridge bidding and play are summarized as bridge maxims.
Systems and conventions
A bidding system is a set of partnership agreements on the meanings of bids. A partnership's bidding system is usually made up of a core system, modified and complemented by specific conventions (optional customizations incorporated into the main system for handling specific bidding situations) which are pre-chosen between the partners prior to play. The line between a well-known convention and a part of a system is not always clear-cut: some bidding systems include specified conventions by default. Bidding systems can be divided into mainly natural systems such as Acol and Standard American, and mainly artificial systems such as the Precision Club and Polish Club. Calls are usually considered to be either natural or conventional (artificial). A natural call carries a meaning that reflects the call itself: a natural bid intuitively shows hand or suit strength based on the level or suit of the bid, and a natural double expresses that the player believes that the opposing partnership will not make their contract. By contrast, a conventional (artificial) call offers and/or asks for information by means of pre-agreed coded interpretations, in which some calls convey very specific information or requests that are not part of the natural meaning of the call.
Thus in response to 4NT, a "natural" bid of 5 would state a preference towards a diamond suit or a desire to play in five diamonds, whereas if the partners have agreed to use the common Blackwood convention, a bid of 5 in the same situation would say nothing about the diamond suit, but would tell the partner that the hand in question contains exactly one ace. Conventions are valuable in bridge because of the need to pass information beyond a simple like or dislike of a particular suit, and because the limited bidding space can be used more efficiently by adopting a conventional (artificial) meaning for a given call where a natural meaning would have less utility, either because the information it conveys is not valuable or because the desire to convey that information arises only rarely; the conventional meaning then conveys more useful (or more frequently useful) information. There are a very large number of conventions from which players can choose, and many books have been written detailing bidding conventions. Well-known conventions include Stayman (to ask the opening 1NT bidder to show any four-card major suit), Jacoby transfers (a request by (usually) the weak hand for the partner to bid a particular suit first, and therefore to become the declarer), and the Blackwood convention (to ask for information on the number of aces and kings held, used in slam bidding situations). The term preempt refers to a high-level tactical bid by a weak hand, relying upon a very long suit rather than high cards for tricks. Preemptive bids serve a double purpose: they allow players to indicate they are bidding on the basis of a long suit in an otherwise weak hand, which is important information to share, and they also consume substantial bidding space, which prevents a possibly strong opposing pair from exchanging information on their cards. Several systems include the use of opening bids or other early bids with weak hands including long (usually six to eight card) suits at the 2, 3 or even 4 or 5 levels as preempts.
Basic natural systems
As a rule, a natural suit bid indicates a holding of at least four (or more, depending on the situation and the system) cards in that suit as an opening bid, or a lesser number when supporting partner; a natural NT bid indicates a balanced hand. Most systems use a count of high card points as the basic evaluation of the strength of a hand, refining this by reference to shape and distribution if appropriate. In the most commonly used point count system, aces are counted as 4 points, kings as 3, queens as 2, and jacks as 1 point; therefore, the deck contains 40 points. In addition, the distribution of the cards in a hand into suits may also contribute to the strength of a hand and be counted as distribution points. A better than average hand, containing 12 or 13 points, is usually considered sufficient to open the bidding, i.e., to make the first bid in the auction. A combination of two such hands (i.e., 25 or 26 points shared between partners) is often sufficient for a partnership to bid, and generally to make, game in a major suit or notrump (more are usually needed for a minor suit game, as the level is higher).
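The point-count evaluation just described is easy to express in code. The following Python snippet is a minimal sketch; the hand representation and the example hand are invented for this illustration.

```python
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}   # the 4-3-2-1 high card point scale

def high_card_points(hand):
    """Sum the high card points of a 13-card hand given as a list of rank strings."""
    return sum(HCP.get(rank, 0) for rank in hand)

# A hypothetical hand holding an ace, two kings, a queen and a jack.
hand = ["A", "K", "7", "5", "2", "Q", "J", "6", "4", "K", "8", "3", "9"]
assert high_card_points(hand) == 13      # 4 + 3 + 2 + 1 + 3

# With 12-13 points or more the hand is usually worth an opening bid, and
# roughly 25-26 combined points with partner often justifies bidding a game.
```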
In natural systems, a 1NT opening bid usually reflects a hand that has a relatively balanced shape (usually between two and four, or less often five, cards in each suit) and a sharply limited number of high card points, usually somewhere between 12 and 18; the most common ranges span exactly three points (for example, 12–14, 15–17 or 16–18), but some systems use a four-point range, usually 15–18. Opening bids of three or higher are preemptive bids, i.e., bids made with weak hands that especially favor a particular suit, opened at a high level in order to define the hand's value quickly and to frustrate the opposition. For example, a hand containing a long, reasonably solid spade suit and little strength outside would be a candidate for an opening bid of 3, designed to make it difficult for the opposing team to bid and find their optimum contract even if they have the bulk of the points. Such a hand is nearly valueless unless spades are trumps, but it contains good enough spades that the penalty for being set should not be higher than the value of an opponent game. The high card weakness makes it likely that the opponents have enough strength to make game themselves. Openings at the 2 level are either unusually strong (2NT, natural, and 2, artificial) or preemptive, depending on the system. Unusually strong bids communicate an especially high number of points (normally 20 or more) or a high trick-taking potential (normally 8 or more). Using 2 as the strongest opening (by HCP, and by DP+HCP) has also become more common, perhaps especially at websites that offer duplicate bridge. Here that 2 opening is used either for hands with a good 6-card suit or longer (at most one losing card) and from 18 HCP up to 23 total points, or for a hand like a 2NT opening but with 22–23 HCP, whilst the other strong 2 opening bid takes care of all hands with 24 or more points (HCP, or with distribution points included), with the only exception of the "Gambling 3NT". Opening bids at the one level are made with hands containing 12–13 points or more that are not suitable for one of the preceding bids. Using Standard American with 5-card majors, opening hearts or spades usually promises a 5-card suit. Partnerships who agree to play 5-card majors open a minor suit with 4-card majors and then bid their major suit at the next opportunity. This means that an opening bid of 1 or 1 will sometimes be made with only 3 cards in that suit. Doubles are sometimes given conventional meanings in otherwise mostly natural systems. A natural, or penalty, double is one used to try to gain extra points when the defenders are confident of setting (defeating) the contract. The most common example of a conventional double is the takeout double of a low-level suit bid, implying support for the unbid suits or the unbid major suits and asking partner to choose one of them.
Basic variations
Bidding systems depart from these basic ideas in varying degrees. Standard American, for instance, is a collection of conventions designed to bolster the accuracy and power of these basic ideas, while Precision Club is a system that uses the 1 opening bid for all or almost all strong hands (but sets the threshold for "strong" rather lower than most other systems, usually 16 high card points) and may include other artificial calls to handle other situations (but it may contain natural calls as well). Many experts today use a system called 2/1 game forcing (enunciated as two over one game forcing), which amongst other features adds some complexity to the treatment of the one notrump response as used in Standard American.
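The shape-plus-range test for a natural 1NT opening described above can be sketched the same way. This is a simplified illustration under assumed agreements (a "weak" 12–14 notrump and a loose definition of balanced, namely two to five cards in every suit with at most one five-card suit); the function name and hand format are invented for the example.

```python
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def high_card_points(hand):
    return sum(HCP.get(rank, 0) for rank, _suit in hand)

def is_weak_notrump_opening(hand, low=12, high=14):
    """True if the hand is roughly balanced and its high card points
    fall within the agreed 1NT range (12-14 by default)."""
    lengths = [sum(1 for _rank, suit in hand if suit == s) for s in "SHDC"]
    balanced = all(2 <= n <= 5 for n in lengths) and lengths.count(5) <= 1
    return balanced and low <= high_card_points(hand) <= high

# A hypothetical 4-3-3-3 hand with 14 high card points qualifies for a weak 1NT.
hand = [("A", "S"), ("Q", "S"), ("7", "S"), ("4", "S"),
        ("K", "H"), ("9", "H"), ("5", "H"),
        ("Q", "D"), ("8", "D"), ("3", "D"),
        ("K", "C"), ("6", "C"), ("2", "C")]
assert is_weak_notrump_opening(hand)
```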
In the UK, Acol is the most common system; its main features are a weak one notrump opening with 12–14 high card points and several variations for 2-level openings. There are also a variety of advanced techniques used for hand evaluation. The most basic is the Milton Work point count (the 4-3-2-1 system detailed above), but this is sometimes modified in various ways, or augmented or replaced by other approaches such as losing trick count, honor point count, law of total tricks, or Zar Points. Common conventions and variations within natural systems include:
Blackwood (either the original version or Roman Key Card)
How the partnership's bidding practices will be varied if their opponents intervene or compete.
Point count required for a 1NT opening bid ('mini' 10–12, 'weak' 12–14, 'strong' 15–17 or 16–18)
Stayman (together with Blackwood, described as "the two most famous conventions in Bridge")
What types of cue bids (e.g. bidding the opponents' suit) the partnership will play, if any.
Whether 1 (and sometimes 1) is 'natural' or 'suspect' (also called 'phoney' or 'short'), signifying an opening hand lacking a notable heart or spade suit
Whether an opening bid of 1 and 1 requires a minimum of 4 or 5 cards in the suit (4 or 5 card majors)
Whether doubling a contract at the 1, 2 and sometimes higher levels signifies a belief that the opponents' contract will fail and a desire to raise the stakes (a penalty double), or an indication of strength but no biddable suit coupled with a request that partner bid something (a takeout double).
Whether doubling or overcalling over opponents' 1NT is natural or conventional. One common artificial agreement is Cappelletti, where 2 is a transfer to be passed or corrected to a major, 2 means both majors, and a major shows that suit plus a minor.
Whether opening bids at the two level are 'strong' (20+ points) or 'weak' (i.e., pre-emptive with a 6 card suit). (Note: an opening bid of 2 is usually played in otherwise natural systems as conventional, signifying any exceptionally strong hand.)
Whether the partnership will play Jacoby transfers (bids of 2 and 2 over 1NT or 3 and 3 over 2NT respectively require the 1NT or 2NT bidder to rebid 2 and 2 or 3 and 3), minor suit transfers (bids of 2 and either 2NT or 3 over 1NT respectively require the 1NT bidder to bid 3 and 3) and Texas transfers (bids of 4 and 4 respectively require the 1NT or 2NT bidder to rebid 4 and 4)
Which (if any) bids are forcing and require a response.
Within play, it is also commonly agreed what systems of opening leads, signals and discards will be played:
Conventions for the opening lead govern how the first card to be played will be chosen and what it will mean.
Count signals cover the situation when a defender is following suit (usually to a suit that the declarer has led). In such circumstances the order in which a defender plays his spot cards will indicate whether an even or odd number of cards was originally held in that suit. This can help the other defender count out the entire original distribution of the cards in that suit. It is sometimes critical to know this when defending.
Discards cover the situation when a defender cannot follow suit and therefore has a free choice of what card to play or throw away. In such circumstances the thrown-away card can be used to indicate some aspect of the hand, or a desire for a specific suit to be played.
Signals indicate how cards played within a suit are chosen: for example, playing a noticeably high card when this is unexpected can signal encouragement to continue playing the suit, and a low card can signal discouragement and a desire for partner to choose some other suit. (Some partnerships use "reverse" signals, meaning that a noticeably high card discourages that suit and a noticeably low card encourages that suit, thus not "wasting" a potentially useful intermediate card in the suit of interest.)
Suit preference signals cover the situation where a defender is returning a suit which will be ruffed by his partner. If he plays a high card he is showing an entry in the higher side suit, and vice versa. There are some other situations where this tool may be used.
Surrogate signals cover the situation when it is critical to show length in a side suit and it would be too late if the defenders waited until that suit is played. Then the play in the first suit led by declarer is a count signal regarding the critical suit and not the trump suit itself. In fact, any signal made about one suit while another suit is being played might be called a surrogate signal.
Advanced techniques
Every call (including "pass", also sometimes called "no bid") serves two purposes. It confirms or passes some information to a partner, and, by implication, denies any other kind of hand which would have tended to support an alternative call. For example, a bid of 2NT immediately after partner's 1NT not only shows a balanced hand of a certain point range, but also almost always denies possession of a five-card major suit (otherwise the player would have bid it) or even a four-card major suit (in that case, the player should use the Stayman convention). Likewise, in some partnerships the bid of 2 in the sequence 1NT–2–2–2 between partners (opponents passing throughout) explicitly shows five hearts but also confirms four cards in spades: the bidder must hold at least five hearts to make it worth looking for a heart fit after 2 denied a four-card major, and with at least five hearts, a Stayman bid must have been justified by having exactly four spades, the other major (since Stayman, as used by this partnership, is not useful with anything except a four-card major suit). Thus an astute partner can read much more than the surface meaning into the bidding. Alternatively, many partnerships play this same bidding sequence as "Crawling Stayman", by which the responder shows a weak hand (less than eight high card points) with shortness in diamonds but at least four hearts and four spades; the opening bidder may correct to spades if that appears to be the better contract. The situations detailed here are extremely simple examples; many instances of advanced bidding involve specific agreements related to very specific situations and subtle inferences regarding entire sequences of calls.
Play techniques Terence Reese, a prolific author of bridge books, points out that there are only four ways of taking a trick by force, two of which are very easy: establishing long suits (the last cards in a suit will take tricks if the opponents don't have the suit and are unable to trump) playing a high card that no one else can beat playing for the opponents' high cards to be in a particular position (if their ace is to the right of your king, your king may be able to take a trick, especially if, when that suit is led, the player to your right has to play their card before you do) trumping an opponent's high card Nearly all trick-taking techniques in bridge can be reduced to one of these four methods. The optimum play of the cards can require much thought and experience and is the subject of whole books on bridge. Example The cards are dealt as shown in the bridge hand diagram; North is the dealer and starts the auction, which proceeds as shown in the bidding table. As neither North nor East has sufficient strength to open the bidding, they each pass, denying such strength. South, next in turn, opens with the bid of 1, which denotes a reasonable heart suit (at least 4 or 5 cards long, depending on the bidding system) and at least 12 high card points. On this hand, South has 14 high card points. West overcalls with 1, since he has a long spade suit of reasonable quality and 10 high card points (an overcall can be made on a hand that is not quite strong enough for an opening bid). North supports partner's suit with 2, showing heart support and a limited number of points. East supports spades with 2. South inserts a game try of 3, inviting partner to bid the game of 4 with good club support and overall values. North complies, as North is at the higher end of the range for his 2 bid, has a fourth trump (the 2 bid promised only three), and holds the doubleton queen of clubs to fit with partner's strength there. (North could instead have bid 3, indicating not enough strength for game, asking South to pass and so play 3.) In the auction, North–South are trying to investigate whether their cards are sufficient to make a game (nine tricks at notrump, ten tricks in hearts or spades, eleven tricks in clubs or diamonds), which yields bonus points if bid and made. East–West are competing in spades, hoping to play a contract in spades at a low level. 4 is the final contract, ten tricks being required to make it with hearts as trump. South is the declarer, having been first to bid hearts, and the player to South's left, West, has to choose the first card in the play, known as the opening lead. West chooses the spade king because spades is the suit the partnership has shown strength in, and because they have agreed that when they hold two touching (adjacent) honors they will play the higher one first. West plays the card face down, to give their partner and the declarer (but not dummy) a chance to ask any last questions about the bidding or to object if they believe West is not the correct hand to lead. After that, North's cards are laid on the table and North becomes dummy, as both the North and South hands will be controlled by the declarer. West turns the lead card face up, and the declarer studies the two hands to make a plan for the play. On this hand, the trump ace, a spade, and a diamond trick must be lost, so declarer must not lose a trick in clubs. If the K is held by West, South will find it very hard to prevent it from making a trick (unless West leads a club). 
There is an almost equal chance that it is held by East, in which case it can be trapped against the ace and beaten, using a tactic known as a finesse. After considering the cards, the declarer directs dummy (North) to play a small spade. East plays low (a small card) and South takes the A, gaining the lead. (South may also elect to duck, but for the purpose of this example, let us assume South wins the A at trick 1.) South proceeds by drawing trump, leading the K. West decides there is no benefit to holding back, and so wins the trick with the ace and then cashes the Q. For fear of conceding a ruff and discard, West plays the 2 instead of another spade. Declarer plays low from the table, and East scores the Q. Not having anything better to do, East returns the remaining trump, taken in South's hand. With the trumps now accounted for, South can execute the finesse, perhaps trapping the king as planned. South enters the dummy (i.e. wins a trick in the dummy's hand) by leading a low diamond, using dummy's A to win the trick, and leads the Q from dummy to the next trick. East covers the queen with the king, and South takes the trick with the ace and proceeds by cashing the remaining master J. (If East doesn't play the king, then South will play a low club from South's hand and the queen will win anyway, this being the essence of the finesse.) The game is now safe: South ruffs a small club with one of dummy's trumps, then ruffs a diamond in hand for an entry back, and ruffs the last club in dummy (sometimes described as a crossruff). Finally, South claims the remaining tricks by showing his or her hand, as it now contains only high trumps and there's no need to play the hand out to prove they are all winners. (The trick-by-trick notation used above can also be expressed in tabular form, but a textual explanation is usually preferred in practice, for the reader's convenience. Plays of small cards or discards are often omitted from such a description, unless they were important for the outcome.) North–South score the required ten tricks, and their opponents take the remaining three. The contract is fulfilled, and North enters the pair numbers, the contract, and the score of +420 for the winning side (North is in charge of bookkeeping in duplicate tournaments) on the traveling sheet. North asks East to check the score entered on the traveling sheet. All players return their own cards to the board, and the next deal is played. On the hand above, it could just as easily have been West who held the K: imagine, for example, that the K and A were swapped between the defending hands. Then the 4 contract would fail by one trick (unless West had led a club early in the play). The failure of the contract would not mean that 4 was a bad contract on this hand; the contract depends on the club finesse working, or on a defensive error. The bonus points awarded for making a game contract far outweigh the penalty for going one off, so it is the best strategy in the long run to bid game contracts such as this one. Similarly, there is a minuscule chance that the K is in the West hand but that West has no other clubs. In that case, declarer can succeed by simply cashing the A, felling the K and setting up the Q as a winner. The chance of this is far lower than the chance that East started with the K, so the superior percentage play is to take the club finesse, as described above. Computers After many years of little progress, computer bridge made great progress at the end of the 20th century. 
In 1996, the ACBL initiated the official World Computer-Bridge Championship, to be held annually along with a major bridge event. The first Computer-Bridge Championship took place in 1997 at the North American Bridge Championships in Albuquerque, New Mexico. Stand-alone software Strong bridge playing programs such as Jack Bridge (World Champion in 2001, 2002, 2003, 2004, 2006, 2009, 2010, 2012, 2013 and 2015) and Wbridge5 (World Champion in 2005, 2007, 2008, 2016, 2017 and 2018) probably rank among the top few thousand human pairs worldwide. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between Jack Bridge and seven top Dutch pairs. A total of 196 boards were played. Jack Bridge lost, but by a small margin (359 versus 385 IMPs). Online play There are several free and subscription-based services available for playing bridge on the internet. For example: Bridge Base Online (BBO) is the most active online bridge club in the world, with more than 100,000 daily connections and 500,000 hands played each day, in part because it is free to play regular games and volunteer-run tournaments. Funbridge is a mobile application where users can play deals against robots. The company was started in France and is now owned by Goto-games. OKbridge is the oldest extant internet bridge service: it was established as a commercial enterprise in 1994, but the program started to be used interactively in August 1990 by players of all standards. OKbridge is a subscription-based club, with services such as customer support and ethics reviews. RealBridge was founded in November 2020. Its online platform includes built-in audio and video. It is primarily used for organised bridge, ranging from club level to national and zonal championships. Sharkbridge was founded in 2020 by Milen Milkovski (Canada), Plamen Panayotov (Canada), John Norris (Denmark) and Michael Woywode (Germany). SWAN Games was founded in April 2000. In March 2004, it announced a partnership to provide internet services to SBF members; it competes among subscription-based online bridge clubs. BridgeClubLive is a subscription-based club which was founded in 1994 with the Bridge Player Live Software for Windows. Some national contract bridge organizations now offer online bridge play to their members, including the English Bridge Union, the Dutch Bridge Federation and the Australian Bridge Federation. MSN and Yahoo! Games have several online rubber bridge rooms. In 2001, the WBF issued a special edition of the lawbook adapted for internet and other electronic forms of the game. Related card games 500 Bridgette Euchre King Lanterloo Lost Heir Nap Ombre Quadrille Rex Bridge Skat Spades Spoil Five Vint Whist See also Glossary of contract bridge terms List of bridge books List of bridge competitions and awards List of bridge magazines List of contract bridge people References Notes Citations Bibliography Further reading External links American Contract Bridge League (ACBL) World Bridge Federation (WBF) The Bridge Library Four-player card games Games of mental skill Multiplayer games French deck card games Card games introduced in 1925
A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size, shape, cargo or passenger capacity, or its ability to carry boats. Small boats are typically found on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats, such as the whaleboat, were intended for use in an offshore environment. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to move cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions. Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and inboard/outboard motors (including gasoline, diesel, and electric). History Differentiation from other prehistoric watercraft The earliest watercraft are considered to have been rafts. These would have been used for voyages such as the settlement of Australia sometime between 50,000 and 60,000 years ago. A boat differs from a raft in that it obtains its buoyancy by having most of its structure exclude water with a waterproof layer, e.g. the planks of a wooden hull, or the hide covering (or tarred canvas) of a currach. In contrast, a raft is buoyant because it joins together components that are themselves buoyant, for example, logs, bamboo poles, bundles of reeds, or floats (such as inflated hides, sealed pottery containers or, in a modern context, empty oil drums). The key difference between a raft and a boat is that the former is a "flow through" structure, with waves able to pass up through it. Consequently, except for short river crossings, a raft is not a practical means of transport in colder regions of the world, as the users would be at risk of hypothermia. Today that climatic limitation restricts rafts to between 40° north and 40° south, though in the past these boundaries have shifted as the world's climate has varied. Types The earliest boats may have been either dugouts or hide boats. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Hide boats, made by covering a framework with animal skins, could be as old as logboats, but such a structure is much less likely to survive in an archaeological context. Plank-built boats are considered, in most cases, to have developed from the logboat. There are examples of logboats that have been expanded: by deforming the hull under the influence of heat, by raising up the sides with added planks, or by splitting down the middle and adding a central plank to make it wider. (Some of these methods have been in quite recent use; there is no simple developmental sequence.) The earliest known plank-built boats are from the Nile, dating to the third millennium BC. 
Outside Egypt, the next earliest are from England. The Ferriby boats are dated to the early part of the second millennium BC and the end of the third millennium. Plank-built boats require a level of woodworking technology that first became available in the Neolithic, with more complex versions only becoming achievable in the Bronze Age. Types Boats can be categorized by their means of propulsion. These divide into: Unpowered. This involves drifting with the tide or a river current. Powered by the crew-members on board, using oars, paddles or a punting pole or quant. Powered by sail. Towed, either by humans or animals from a river or canal bank (or in very shallow water, by walking on the sea or river bed) or by another vessel. Powered by machinery, such as internal combustion engines, steam engines, or batteries and an electric motor. Any one vessel may use more than one of these methods at different times or in combination. A number of large vessels are usually referred to as boats. Submarines are a prime example. Other types of large vessels which are traditionally called boats include Great Lakes freighters, riverboats, and ferryboats. Though large enough to carry their own boats and heavy cargoes, these vessels are designed for operation on inland or protected coastal waters. Terminology The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On some boats a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or covering much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads. The forward end of a boat is called the bow, the aft end the stern. Facing forward, the right side is referred to as starboard and the left side as port. Building materials Until the mid-19th century most boats were made of natural materials, primarily wood, although bark and animal skins were also used. Early boats include the birch bark canoe, the animal hide-covered kayak and coracle, and the dugout canoe made from a single log. By the mid-19th century, some boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structure, it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode. As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s boats built entirely of steel from frames to plating were replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats. 
Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s, but it was not until the mid-20th century that aluminium gained widespread popularity. Though much more expensive than steel, there are aluminum alloys that do not corrode in salt water, allowing a load-carrying capacity similar to steel at much less weight. Around the mid-1960s, boats made of fiberglass (aka "glassfibre") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (for fiber-reinforced plastic) in the US. Fiberglass boats are strong, and do not rust, corrode, or rot. However, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam. Cold molding is a modern construction method, using wood as the structural component. In one cold molding process, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets. An alternative process uses thin sheets of plywood shaped over a disposable male mold, and coated with epoxy. Propulsion The most common means of boat propulsion are as follows: Engine Inboard motor Stern drive (Inboard/outboard) Outboard motor Paddle wheel Water jet (jetboat, personal water craft) Fan (hovercraft, air boat) Man (rowing, paddling, setting pole etc.) Wind (sailing) Buoyancy A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, to sink. As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading. European Union classification Since 1998 all new leisure boats and barges built in Europe between 2.5 m and 24 m in length must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories that define the allowable wind and wave conditions for vessels in each class: Class A - the boat may safely navigate any waters. Class B - the boat is limited to offshore navigation. (Winds up to Force 8 & waves up to 4 metres) Class C - the boat is limited to inshore (coastal) navigation. (Winds up to Force 6 & waves up to 2 metres) Class D - the boat is limited to rivers, canals and small lakes. (Winds up to Force 4 & waves up to 0.5 metres) Europe is the main producer of recreational boats (the world's second-largest producer is Poland). European brands are known all over the world; in fact, these are the brands that created the RCD and set the standard for shipyards around the world. 
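To make the category limits above easier to apply, here is a small Python sketch that returns the least demanding RCD design category covering a given wind force and wave height. The numeric thresholds are taken directly from the list above; the function name and interface are illustrative assumptions, and Category A is simply treated as unbounded.

```python
# Illustrative lookup of EU Recreational Craft Directive design categories,
# using the wind (Beaufort force) and wave height limits listed above.
# Category A is treated here as having no upper limit.
RCD_CATEGORIES = [
    ("D", 4, 0.5),                       # rivers, canals and small lakes
    ("C", 6, 2.0),                       # inshore (coastal)
    ("B", 8, 4.0),                       # offshore
    ("A", float("inf"), float("inf")),   # any waters
]

def minimum_category(wind_force, wave_height_m):
    """Return the least demanding RCD category covering the given conditions."""
    for name, max_wind, max_wave in RCD_CATEGORIES:
        if wind_force <= max_wind and wave_height_m <= max_wave:
            return name
    return None

print(minimum_category(5, 1.5))  # -> "C": Force 5 wind, 1.5 m waves
print(minimum_category(7, 3.0))  # -> "B": Force 7 wind, 3 m waves
```

For example, conditions of Force 5 wind and 1.5 m waves fall within Category C, while Force 7 and 3 m waves require at least Category B.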
See also Abora Barge Cabin cruiser Car float Dinghy Dory Flatboat Halkett boat Inflatable boat Launch (boat) Log canoe Narrowboat Naval architecture Panga (boat) Pirogue Poveiro Rescue craft Sampan Ship's boat Skiff Tour boat Traditional fishing boats Tûranor PlanetSolar Yacht References External links University of Washington Libraries Digital Collections – Freshwater and Marine Image Bank (enter search term "vessels" for images of boats and vessels) Watercraft Fishing equipment
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood. Arthropods, using hemolymph, have hemocytes as part of their immune system. Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Medical terms related to blood often begin with hemo-, hemato-, haemo- or haemato- from the Greek word () for "blood". In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen. 
Functions Blood performs many important functions within the body, including: Supply of oxygen to tissues (bound to hemoglobin, which is carried in red cells) Supply of nutrients such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins (e.g., blood lipids)) Removal of waste such as carbon dioxide, urea, and lactic acid Immunological functions, including circulation of white blood cells, and detection of foreign material by antibodies Coagulation, the response to a broken blood vessel, the conversion of blood from a liquid to a semisolid gel to stop bleeding Messenger functions, including the transport of hormones and the signaling of tissue damage Regulation of core body temperature Hydraulic functions Constituents In mammals Blood accounts for 7% of the human body weight, with an average density around 1060 kg/m3, very close to pure water's density of 1000 kg/m3. The average adult has a blood volume of roughly 5 litres (1.3 gallons), which is composed of plasma and formed elements. The formed elements are the two types of blood cell or corpuscle – the red blood cells (erythrocytes) and white blood cells (leukocytes) – and the cell fragments called platelets that are involved in clotting. By volume, the red blood cells constitute about 45% of whole blood, the plasma about 54.3%, and white cells about 0.7%. Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics. Cells One microliter of blood contains: 4.7 to 6.1 million (male), 4.2 to 5.4 million (female) erythrocytes: Red blood cells contain the blood's hemoglobin and distribute oxygen. In mammals, mature red blood cells lack a nucleus and organelles. The red blood cells (together with endothelial vessel cells and other cells) are also marked by glycoproteins that define the different blood types. The proportion of blood occupied by red blood cells is referred to as the hematocrit, and is normally about 45%. The combined surface area of all red blood cells of the human body would be roughly 2,000 times as great as the body's exterior surface. 4,000–11,000 leukocytes: White blood cells are part of the body's immune system; they destroy and remove old or aberrant cells and cellular debris, as well as attack infectious agents (pathogens) and foreign substances. The cancer of leukocytes is called leukemia. 200,000–500,000 thrombocytes: Also called platelets, they take part in blood clotting (coagulation). Fibrin from the coagulation cascade creates a mesh over the platelet plug. Plasma About 55% of blood is blood plasma, a fluid that is the blood's liquid medium, which by itself is straw-yellow in color. The blood plasma volume totals 2.7–3.0 liters (2.8–3.2 quarts) in an average human. It is essentially an aqueous solution containing 92% water, 8% blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid. Other important components include: Serum albumin Blood-clotting factors (to facilitate coagulation) Immunoglobulins (antibodies) Lipoprotein particles Various other proteins Various electrolytes (mainly sodium and chloride) The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins. 
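As a rough sense of scale for the per-microliter counts quoted above, the short Python sketch below multiplies approximate mid-range counts up to a whole adult. The 5-litre total blood volume and the chosen midpoints are assumptions for illustration, not measured values.

```python
# Rough scaling of the per-microliter cell counts quoted above to an adult's
# total blood volume. The 5-litre volume is an assumption for illustration.
MICROLITERS_PER_LITER = 1_000_000
total_blood_volume_l = 5.0

counts_per_microliter = {
    "erythrocytes": 5.0e6,   # roughly mid-range of the 4.2-6.1 million figures above
    "leukocytes":   7.5e3,   # mid-range of 4,000-11,000
    "thrombocytes": 350e3,   # mid-range of 200,000-500,000
}

for cell_type, per_ul in counts_per_microliter.items():
    total = per_ul * MICROLITERS_PER_LITER * total_blood_volume_l
    print(f"{cell_type}: ~{total:.1e} cells")
# -> erythrocytes: ~2.5e+13, leukocytes: ~3.8e+10, thrombocytes: ~1.8e+12
```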
pH values Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic (compensation). Extra-cellular fluid in blood that has a pH below 7.35 is too acidic, whereas blood pH above 7.45 is too basic. A pH below 6.9 or above 7.8 is usually lethal. Blood pH, partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2), and bicarbonate (HCO3−) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system to control the acid–base balance and respiration, which is called compensation. An arterial blood gas test measures these. Plasma also circulates hormones transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive. In non-mammalian vertebrates Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences: Red blood cells of non-mammalian vertebrates are flattened and ovoid in form, and retain their cell nuclei. There is considerable variation in the types and proportions of white blood cells; for example, acidophils are generally more common than in humans. Platelets are unique to mammals; in other vertebrates, small nucleated, spindle cells called thrombocytes are responsible for blood clotting instead. Physiology Circulatory system Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. However, one exception includes pulmonary arteries, which contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood. Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium. The blood circulation was famously described by William Harvey in 1628. Cell production and degradation In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells; and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; as adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes. The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney. 
Healthy erythrocytes have a plasma life of about 120 days before they are degraded by the spleen, and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine. Oxygen transport About 98.5% of the oxygen in a sample of arterial blood in a healthy human breathing air at sea-level pressure is chemically combined with the hemoglobin. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species (for exceptions, see below). Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O2 per gram hemoglobin, which increases the total blood oxygen capacity seventyfold, compared to if oxygen solely were carried by its solubility of 0.03 ml O2 per liter blood per mm Hg partial pressure of oxygen (about 100 mm Hg in arteries). With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart. Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98–99% saturated with oxygen, achieving an oxygen delivery between 950 and 1150 ml/min to the body. In a healthy adult at rest, oxygen consumption is approximately 200–250 ml/min, and deoxygenated blood returning to the lungs is still roughly 75% (70 to 78%) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15% in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95% or less under these conditions. Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90%), is dangerous to health, and severe hypoxia (saturations less than 30%) may be rapidly fatal. A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21% of the level found in an adult's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) to function under these conditions. Carbon dioxide transport CO2 is carried in blood in three different ways. (The exact percentages vary depending whether it is arterial or venous blood). Most of it (about 70%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells by the reaction ; about 7% is dissolved in the plasma; and about 23% is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding to carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. 
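The oxygen-carrying figures above can be combined into the standard estimate of arterial oxygen content and delivery. In the sketch below, the hemoglobin concentration, arterial oxygen tension and cardiac output are typical resting values assumed purely for illustration; only the binding capacity, the dissolved-oxygen coefficient and the saturation come from the text above.

```python
# Rough estimate of arterial oxygen content and delivery from the figures above.
# Assumed inputs (typical resting adult values, for illustration only):
hb_g_per_dl = 15.0          # hemoglobin concentration
sat = 0.98                  # arterial hemoglobin saturation (~98-99%, as above)
pao2_mm_hg = 100.0          # arterial oxygen partial pressure
cardiac_output_l_min = 5.0  # cardiac output

BIND_ML_PER_G = 1.36                   # ml O2 per gram of hemoglobin (1.36-1.40 above)
DISSOLVED_ML_PER_DL_PER_MMHG = 0.003   # equivalent to 0.03 ml O2 per liter per mm Hg

bound = BIND_ML_PER_G * hb_g_per_dl * sat               # ~20 ml O2 per dl
dissolved = DISSOLVED_ML_PER_DL_PER_MMHG * pao2_mm_hg   # ~0.3 ml O2 per dl
content_ml_per_dl = bound + dissolved

delivery_ml_min = content_ml_per_dl * cardiac_output_l_min * 10  # 10 dl per liter
print(f"O2 content ~{content_ml_per_dl:.1f} ml/dl, delivery ~{delivery_ml_min:.0f} ml/min")
# -> O2 content ~20.3 ml/dl, delivery ~1015 ml/min
```

With these assumptions the hemoglobin-bound fraction dominates, and the computed delivery of roughly 1,000 ml/min sits inside the 950–1,150 ml/min range quoted above.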
A rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Transport of hydrogen ions Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions, as it has a much greater affinity for hydrogen ions than does oxyhemoglobin. Lymphatic system In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Thermoregulation Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and the surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the important organs of the body. Rate of flow The rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply, with an approximate flow of 1350 ml/min. The kidneys and brain are the second and third most supplied organs, with 1100 ml/min and ~700 ml/min, respectively. Relative rates of blood flow per 100 g of tissue are different, with the kidneys, adrenal glands and thyroid being the first, second and third most supplied tissues, respectively. Hydraulic functions The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris. Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. Invertebrates In insects, the blood (more properly called hemolymph) is not involved in the transport of oxygen. (Openings called spiracles lead to tubes called tracheae, which allow oxygen from the air to diffuse directly to the tissues.) Insect blood moves nutrients to the tissues and removes waste products in an open system. Other invertebrates use respiratory proteins to increase their oxygen-carrying capacity. Hemoglobin is the most common respiratory protein found in nature. Hemocyanin (blue) contains copper and is found in crustaceans and mollusks. It is thought that tunicates (sea squirts) might use vanabins (proteins containing vanadium) as a respiratory pigment (bright-green, blue, or orange). In many invertebrates, these oxygen-carrying proteins are freely soluble in the blood; in vertebrates they are contained in specialized red blood cells, allowing for a higher concentration of respiratory pigments without increasing viscosity or damaging blood-filtering organs like the kidneys. Giant tube worms have unusual hemoglobins that allow them to live in extraordinary environments. These hemoglobins also carry sulfides normally fatal in other animals. Color The coloring matter of blood (hemochrome) is largely due to the protein in the blood responsible for oxygen transport. Different groups of organisms use different proteins. Hemoglobin Hemoglobin is the principal determinant of the color of blood in vertebrates. Each molecule has four heme groups, and their interaction with various molecules alters the exact color. 
In vertebrates and other hemoglobin-using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot use oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin. Hemocyanin The blood of most mollusks – including cephalopods and gastropods – as well as some arthropods, such as horseshoe crabs, is blue, as it contains the copper-containing protein hemocyanin at concentrations of about 50 grams per liter. Hemocyanin is colorless when deoxygenated and dark blue when oxygenated. The blood in the circulation of these creatures, which generally live in cold environments with low oxygen tensions, is grey-white to pale yellow, and it turns dark blue when exposed to the oxygen in the air, as seen when they bleed. This is due to change in color of hemocyanin when it is oxidized. Hemocyanin carries oxygen in extracellular fluid, which is in contrast to the intracellular oxygen transport in mammals by hemoglobin in RBCs. Chlorocruorin The blood of most annelid worms and some marine polychaetes use chlorocruorin to transport oxygen. It is green in color in dilute solutions. Hemerythrin Hemerythrin is used for oxygen transport in the marine invertebrates sipunculids, priapulids, brachiopods, and the annelid worm, magelona. Hemerythrin is violet-pink when oxygenated. Hemovanadin The blood of some species of ascidians and tunicates, also known as sea squirts, contains proteins called vanadins. These proteins are based on vanadium, and give the creatures a concentration of vanadium in their bodies 100 times higher than the surrounding seawater. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. When exposed to oxygen, however, vanadins turn a mustard yellow. Disorders General medical Disorders of volume Injury can cause blood loss through bleeding. A healthy adult can lose almost 20% of blood volume (1 L) before the first symptom, restlessness, begins, and 40% of volume (2 L) before shock sets in. Thrombocytes are important for blood coagulation and the formation of blood clots, which can stop bleeding. Trauma to the internal organs or bones can cause internal bleeding, which can sometimes be severe. Dehydration can reduce the blood volume by reducing the water content of the blood. 
This would rarely result in shock (apart from the very severe cases) but may result in orthostatic hypotension and fainting. Disorders of circulation Shock is the ineffective perfusion of tissues, and can be caused by a variety of conditions including blood loss, infection, poor cardiac output. Atherosclerosis reduces the flow of blood through arteries, because atheroma lines arteries and narrows them. Atheroma tends to increase with age, and its progression can be compounded by many causes including smoking, high blood pressure, excess circulating lipids (hyperlipidemia), and diabetes mellitus. Coagulation can form a thrombosis, which can obstruct vessels. Problems with blood composition, the pumping action of the heart, or narrowing of blood vessels can have many consequences including hypoxia (lack of oxygen) of the tissues supplied. The term ischemia refers to tissue that is inadequately perfused with blood, and infarction refers to tissue death (necrosis), which can occur when the blood supply has been blocked (or is very inadequate). Hematological Anemia Insufficient red cell mass (anemia) can be the result of bleeding, blood disorders like thalassemia, or nutritional deficiencies, and may require one or more blood transfusions. Anemia can also be due to a genetic disorder in which the red blood cells do not function effectively. Anemia can be confirmed by a blood test if the hemoglobin value is less than 13.5 gm/dl in men or less than 12.0 gm/dl in women. Several countries have blood banks to fill the demand for transfusable blood. A person receiving a blood transfusion must have a blood type compatible with that of the donor. Sickle-cell anemia Disorders of cell proliferation Leukemia is a group of cancers of the blood-forming tissues and cells. Non-cancerous overproduction of red cells (polycythemia vera) or platelets (essential thrombocytosis) may be premalignant. Myelodysplastic syndromes involve ineffective production of one or more cell lines. Disorders of coagulation Hemophilia is a genetic illness that causes dysfunction in one of the blood's clotting mechanisms. This can allow otherwise inconsequential wounds to be life-threatening, but more commonly results in hemarthrosis, or bleeding into joint spaces, which can be crippling. Ineffective or insufficient platelets can also result in coagulopathy (bleeding disorders). Hypercoagulable state (thrombophilia) results from defects in regulation of platelet or clotting factor function, and can cause thrombosis. Infectious disorders of blood Blood is an important vector of infection. HIV, the virus that causes AIDS, is transmitted through contact with blood, semen or other body secretions of an infected person. Hepatitis B and C are transmitted primarily through blood contact. Owing to blood-borne infections, bloodstained objects are treated as a biohazard. Bacterial infection of the blood is bacteremia or sepsis. Viral Infection is viremia. Malaria and trypanosomiasis are blood-borne parasitic infections. Carbon monoxide poisoning Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation insidiously. 
A fire burning in an enclosed room with poor ventilation presents a serious hazard, since it can create a build-up of carbon monoxide in the air. Some carbon monoxide also binds to hemoglobin when tobacco is smoked. Treatments Transfusion Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, with the ABO blood group system and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Intravenous administration Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2 etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. Letting In modern evidence-based medicine, bloodletting is used in the management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. Etymology English blood (Old English blod) derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German Blut, Swedish blod, Gothic blōþ). There is no accepted Indo-European etymology. History Classical Greek medicine Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). Types The ABO blood group system was discovered in 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on 27 March 1914. The Rhesus factor was discovered in 1937. 
This bears closely to bloodlines, and sayings such as "blood is thicker than water" and "bad blood", as well as "Blood brother". Blood is given particular emphasis in the Islamic, Jewish, and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact instead of being poured off. Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. Indigenous Australians In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms and magnetic fields, because iron is magnetic. European paganism Among the Germanic tribes, blood was used during their sacrifices; the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church becoming to bless and blessing. The Hittite word for blood, ishar was a cognate to words for "oath" and "bond", see Ishara. The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals. As a relic of Germanic Law, the cruentation, an ordeal where the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. Christianity In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. It is also found in the Bible that when the Angel of Death came around to the Hebrew house that the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): It was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin." (1 John 1:7), "... Unto him [God] that loved us, and washed us from our sins in his own blood." (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. 
"This cup is the new testament in my blood, which is shed for you." (). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood is present together "in, with, and under" the bread and wine of the Eucharistic feast. Judaism In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Islam Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to obtain physical and ritual status of cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way to ensure that the spine is not severed, hence the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules. Jehovah's Witnesses Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components. Vampirism Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example the 'Nosferatu' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures do consume the blood of other animals, but only bats are associated with vampires. 
This has no relation to vampire bats, which are New World creatures discovered well after the origins of the European myths. Other uses Forensic and archaeological Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains. Blood residue analysis is also a technique used in archeology. Artistic Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O'Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood. Genealogical The term blood is used in genealogical circles to refer to one's ancestry, origins, and ethnic background as in the word bloodline. Other terms where blood is used in a family history sense are blue-blood, royal blood, mixed-blood and blood relative. See also Autotransfusion Blood as food Blood pressure Blood substitutes ("artificial blood") Blood test Hemophobia Luminol, a visual test for blood left at crime scenes. Oct-1-en-3-one ("Smell" of blood) Taboo food and drink: Blood References External links Blood Groups and Red Cell Antigens. Free online book at NCBI Bookshelf ID: NBK2261 Blood Photomicrographs Hematology Tissues (biology) Articles containing video clips
Bigfoot, also commonly referred to as Sasquatch, is a large and hairy human-like mythical creature purported to inhabit forests in North America, particularly in the Pacific Northwest. Enthusiasts of the subject have offered various forms of dubious evidence to prove Bigfoot's existence, including anecdotal claims of sightings, as well as alleged photographs, video and audio recordings, hair samples, and casts of large footprints. Most of this evidence has since been identified as hoaxes or misidentification. The majority of scientists do not find any of the remaining evidence compelling, and instead generally consider it to be the result of a combination of folklore, misidentification, and hoax, rather than a living animal. Folklorists trace the phenomenon of Bigfoot to a combination of factors and sources, including indigenous cultures, the European wild man figure, and folk tales. Wishful thinking, a cultural increase in environmental concerns, and overall societal awareness of the subject have been cited as additional factors. Tales of wild, hair-covered humanoids exist throughout the world, such as the Skunk ape of the southeastern United States, the Almas, Yeren, and Yeti in Asia, the Australian Yowie, and creatures in the mythologies of indigenous people. Bigfoot is an enduring element of popular culture and an icon within the pseudoscience and subculture of cryptozoology. Description Bigfoot is often described as a large, muscular, and bipedal ape or human-like creature covered in black, dark brown, or dark reddish hair. Anecdotal descriptions estimate a height of roughly , with some descriptions having the creatures standing as tall as . Some alleged observations describe Bigfoot as more human than ape, particularly in regards to the face. In 1971, multiple people in The Dalles, Oregon, filed a police report describing an "overgrown ape", and one of the men claimed to have sighted the creature in the scope of his rifle, but could not bring himself to shoot it because, "It looked more human than animal". Common descriptions include broad shoulders, no visible neck, and long arms, which many skeptics attribute to misidentification of a bear standing upright. Some alleged nighttime sightings have stated the creature's eyes "glowed" yellow or red. However, eyeshine is not present in humans or any other known great apes, and so proposed explanations for observable eyeshine off of the ground in the forest include owls, raccoons, or opossums perched in foliage. Michael Rugg, owner of the Bigfoot Discovery Museum, claims to have smelled Bigfoot, stating, "Imagine a skunk that had rolled around in dead animals and had hung around the garbage pits." The enormous footprints for which the creature is named are claimed to be as large as long and wide. Some footprint casts have also contained claw marks, making it likely that they came from known animals such as bears, which have five toes and claws. History Indigenous and early records Many of the indigenous cultures across the North American continent include tales of mysterious hair-covered creatures living in forests, and according to anthropologist David Daegling, these legends existed long before contemporary reports of the creature described as Bigfoot. These stories differed in their details regionally and between families in the same community, and are particularly prevalent in the Pacific Northwest. 
On the Tule River Indian Reservation, petroglyphs created by a tribe of Yokuts at a site called Painted Rock are alleged by some to depict a group of Bigfoot called "the Family". The local tribespeople call the largest of the glyphs "Hairy Man", and they are estimated to be between 500 and 1,000 years old. 16th-century Spanish explorers and Mexican settlers told tales of los Vigilantes Oscuros, or "Dark Watchers", large creatures alleged to stalk their camps at night. In the region that is now Mississippi, a Jesuit priest was living with the Natchez in 1721 and reported stories of hairy creatures in the forest known to scream loudly and steal livestock. Ecologist Robert Pyle argues that most cultures have accounts of human-like giants in their folk history, expressing a need for "some larger-than-life creature". Each language had its own name for the creature featured in the local version of such legends. Many names mean something along the lines of "wild man" or "hairy man", although other names described common actions that it was said to perform, such as eating clams or shaking trees. Chief Mischelle of the Nlaka'pamux at Lytton, British Columbia, told such a story to Charles Hill-Tout in 1898. The Sts'ailes people tell stories about sasq'ets, a shapeshifting creature that protects the forest. The name "Sasquatch" is the anglicized version of sasq'ets (sas-kets), roughly translating to "hairy man" in the Halq'emeylem language. Members of the Lummi tell tales about creatures known as Ts'emekwes. The stories are similar to each other in the general descriptions of Ts'emekwes, but details differed among various family accounts concerning the creature's diet and activities. Some regional versions tell of more threatening creatures: the stiyaha or kwi-kwiyai were a nocturnal race, and children were warned against saying the names so that the "monsters" would not come and carry them off to be killed. The Iroquois tell of an aggressive, hair-covered giant with rock-hard skin known as the Ot ne yar heh or "Stone Giant", more commonly referred to as the Genoskwa. In 1847, Paul Kane reported stories by the natives about skoocooms, a race of cannibalistic wild men living on the peak of Mount St. Helens. U.S. President Theodore Roosevelt, in his 1893 book, The Wilderness Hunter, writes of a story he was told by an elderly mountain man named Bauman in which a foul-smelling, bipedal creature ransacked his beaver trapping camp, stalked him, and later became hostile when it fatally broke his companion's neck. Roosevelt notes that Bauman appeared fearful while telling the story, but suggested that the trapper's German ancestry may have influenced the tale. Less menacing versions have been recorded, such as one by Reverend Elkanah Walker in 1840. Walker was a Protestant missionary who recorded stories of giants among the natives living near Spokane, Washington. These giants were said to live on and around the peaks of the nearby mountains, stealing salmon from the fishermen's nets. Ape Canyon incident On July 16, 1924, an article in The Oregonian made national news when a story was published describing a conflict between a group of gold prospectors and a group of "ape-men" in a gorge near Mount St. Helens. The prospectors reported encountering "gorilla men" near their remote cabin. One of the men, Fred Beck, indicated that he struck one of the creatures with rifle fire. 
That night, they reported coming under attack by the creatures, who were said to have thrown large rocks at the cabin, damaging the roof and knocking Beck unconscious. The men fled the area the following morning. The U.S. Forest Service investigated the site of the alleged incident. The investigators found no compelling evidence of the event and concluded it was likely a fabrication. Stories of large, hair covered bipedal ape-men or "mountain devils" had been a persistent piece of folklore in the area for centuries prior to the alleged incident. Today, the area is known as Ape Canyon and is cemented within Bigfoot-related folklore. Origin of the "Bigfoot" name Jerry Crew and Andrew Genzoli In 1958, Jerry Crew, bulldozer operator for a logging company in Humboldt County, California, discovered a set of large, human-like footprints sunk deep within the mud in the Six Rivers National Forest. Upon informing his coworkers, many claimed to have seen similar tracks on previous job sites as well as telling of odd incidents such as an oil drum weighing having been moved without explanation. The logging company men soon began utilizing "Bigfoot" to describe the apparent culprit. Crew initially believed someone was playing a prank on them. After observing more of these massive footprints, he contacted reporter Andrew Genzoli of the Humboldt Times newspaper. Genzoli interviewed lumber workers and wrote articles about the mysterious footprints, introducing the name "Bigfoot" in relation to the tracks and the local tales of large, hairy wild men. A plaster cast was made of the footprints and Crew appeared, holding one of the casts, on the front page of the newspaper on October 6, 1958. The story spread rapidly as Genzoli began to receive correspondence from major media outlets including the New York Times and Los Angeles Times. As a result, the term Bigfoot became widespread as a reference to an apparently large, unknown creature leaving massive footprints in Northern California. As a result, Willow Creek and Humboldt County are considered by some to be the "Bigfoot Capital of the World". Ray Wallace and Rant Mullens In 2002, the family of Jerry Crew's deceased coworker Ray Wallace revealed a collection of large, carved wooden feet stored in his basement. They stated that Wallace had been secretly making the footprints and was responsible for the tracks discovered by Crew. Wallace was inspired by another hoaxer, Rant Mullens, who revealed information about his hoaxes in 1982. In the 1930s in Toledo, Washington, Mullens and a group of other foresters carved pairs of large feet made of wood and used them to create footprints in the mud to scare huckleberry pickers in the Gifford Pinchot National Forest. The group would also claim to be responsible for hoaxing the alleged Ape Canyon incident in 1924. Mullens and the group of foresters began referring to themselves as the St. Helens Apes, and would later have a cave dedicated to them. Wallace, also from Toledo, knew Mullens and stated he collaborated with him to obtain a pair of the large wooden feet and subsequently used them to create footprints on the 1958 construction site as a means to scare away potential thieves. Other historical uses of "Bigfoot" In the 1830s, a Wyandot chief was nicknamed "Big Foot" due to his significant size, strength and large feet. Potawatomi Chief Maumksuck, known as Chief "Big Foot", is today synonymous with the area of Walworth County, Wisconsin and has a state park and school named for him. William A. A. 
Wallace, a famous 19th-century Texas Ranger, was nicknamed "Bigfoot" due to his large feet and today has a town named for him: Bigfoot, Texas. Lakota leader Spotted Elk was also called "Chief Big Foot". In the late 19th and early 20th centuries, at least two enormous marauding grizzly bears were widely noted in the press and each nicknamed "Bigfoot." The first grizzly bear called "Bigfoot" was reportedly killed near Fresno, California in 1895 after killing sheep for 15 years; his weight was estimated at 2,000 pounds (900 kg). The second one was active in Idaho in the 1890s and 1900s between the Snake and Salmon rivers, and supernatural powers were attributed to it. Regional and other names Many regions have their own names for the creatures. In Canada, the name Sasquatch is widely used, often interchangeably with the name Bigfoot. The United States uses both of these names but also has numerous names and descriptions of the creatures depending on the region and area in which they are allegedly sighted. These include the Skunk ape in Florida and other southern states, Grassman in Ohio, Fouke Monster in Arkansas, Wood Booger in Virginia, the Monster of Whitehall in Whitehall, New York, Momo in Missouri, Honey Island Swamp Monster in Louisiana, Dewey Lake Monster in Michigan, Mogollon Monster in Arizona, the Big Muddy Monster in southern Illinois, and The Old Men of the Mountain in West Virginia. The term Wood Ape is also used by some in an effort to avoid the mythical connotations surrounding the name "Bigfoot". Other names include Bushman, Treeman, and Wildman. Proposed explanations Various explanations have been suggested for sightings, along with conjecture as to which existing animals have been misidentified in supposed sightings of Bigfoot. Scientists typically attribute sightings to hoaxes or misidentifications of known animals and their tracks, particularly black bears. Misidentification Bears Scientists theorize that mistaken identification of American black bears as Bigfoot is a likely explanation for most reported sightings, particularly when observers view a subject from afar, are in dense foliage, or face poor lighting conditions. Additionally, black bears have been observed and recorded walking upright, often as the result of an injury. While upright, adult black bears stand roughly , and grizzly bears roughly , both within the range of anecdotal Bigfoot reports. According to data scientist Floe Foxon, more people report seeing Bigfoot in areas with documented black bear populations. Foxon concludes, "If bigfoot is there, it may be many bears". Foxon acknowledges that alleged Bigfoot sightings have been reported in areas with minimal or no known black bear populations. She states, "Although this may be interpreted as evidence for the existence of an unknown hominid in North America, it is also explained by misidentification of other animals (including humans), among other possibilities". Escaped apes Some have proposed that sightings of Bigfoot may simply be people observing and misidentifying known great apes such as chimpanzees, gorillas, and orangutans that have escaped from captivity, such as from zoos, circuses, and private owners of exotic pets. This explanation is often proposed in relation to the Skunk ape, as some scientists argue the humid subtropical climate of the southeastern United States could potentially support a population of escaped apes. Humans Humans have been mistaken for Bigfoot, with some incidents leading to injuries. 
In 2013, a 21-year-old man in Oklahoma was arrested after he told law enforcement he accidentally shot his friend in the back while their group was allegedly hunting for Bigfoot. In 2017, a shamanist wearing clothing made of animal furs was vacationing in a North Carolina forest when local reports of alleged Bigfoot sightings flooded in. The Greenville Police Department issued a public notice not to shoot Bigfoot for fear of mistakenly injuring or killing someone in a fur suit. In 2018, a person was shot at multiple times by a hunter near Helena, Montana who claimed he mistook him for a Bigfoot. Additionally, some have attributed feral humans or hermits living in the wilderness as being another explanation for alleged Bigfoot sightings. One story, the Wild Man of the Navidad, tells of a wild ape-man who roamed the wilderness of eastern Texas in the mid-19th century, stealing food and goods from residents. A search party allegedly captured an escaped African slave attributed to the story. During the 1980s, several psychologically damaged American Vietnam veterans were stated by the state of Washington's veterans' affairs director, Randy Fisher, to have been living in remote wooded areas of the state. Pareidolia Some have proposed that pareidolia may explain Bigfoot sightings, specifically the tendency to observe human-like faces and figures within the natural environment. Photos and videos of poor quality alleged to depict Bigfoots are often attributed to this phenomenon and commonly referred to as "Blobsquatch". Misidentified vocalizations The majority of mainstream scientists maintain that the source of the sounds often attributed to Bigfoot are either hoaxes, anthropomorphization, or likely misidentified and produced by known animals such as owl, wolf, coyote, and fox. Hoaxes Both Bigfoot believers and non-believers agree that many reported sightings are hoaxes. Author Jerome Clark argues that the Jacko Affair was a hoax, involving an 1884 newspaper report of an ape-like creature captured in British Columbia. He cites research by John Green, who found that several contemporaneous British Columbia newspapers regarded the alleged capture as highly dubious. He notes that the Mainland Guardian of New Westminster, British Columbia wrote, "Absurdity is written on the face of it." Gigantopithecus Bigfoot proponents Grover Krantz and Geoffrey H. Bourne both believed that Bigfoot could be a relict population of the extinct southeast Asian ape species Gigantopithecus blacki. According to Bourne, G. blacki may have followed the many other species of animals that migrated across the Bering land bridge to the Americas. To date, no Gigantopithecus fossils have been found in the Americas. In Asia, the only recovered fossils have been of mandibles and teeth, leaving uncertainty about G. blackis locomotion. Krantz has argued that G. blacki could have been bipedal, based on his extrapolation from the shape of its mandible. However, the relevant part of the mandible is not present in any fossils. The consensus view is that G. blacki was quadrupedal, as its enormous mass would have made it difficult for it to adopt a bipedal gait. Anthropologist Matt Cartmill criticizes the G. blacki hypothesis: The trouble with this account is that Gigantopithecus was not a hominin and maybe not even a crown group hominoid; yet the physical evidence implies that Bigfoot is an upright biped with buttocks and a long, stout, permanently adducted hallux. 
These are hominin autapomorphies, not found in other mammals or other bipeds. It seems unlikely that Gigantopithecus would have evolved these uniquely hominin traits in parallel. Paleoanthropologist Bernard G. Campbell writes: "That Gigantopithecus is in fact extinct has been questioned by those who believe it survives as the Yeti of the Himalayas and the Sasquatch of the north-west American coast. But the evidence for these creatures is not convincing." Extinct hominidae Primatologist John R. Napier and anthropologist Gordon Strasenburg have suggested a species of Paranthropus as a possible candidate for Bigfoot's identity, such as Paranthropus robustus, with its gorilla-like crested skull and bipedal gait —despite the fact that fossils of Paranthropus are found only in Africa. Michael Rugg of the Bigfoot Discovery Museum presented a comparison between human, Gigantopithecus, and Meganthropus skulls (reconstructions made by Grover Krantz) in episodes 131 and 132 of the Bigfoot Discovery Museum Show. Bigfoot enthusiasts that think Bigfoot may be the "missing link" between apes and humans have promoted the idea that Bigfoot is a descendant of Gigantopithecus blacki, but that ape diverged from orangutans around 12 million years ago and is not related to humans. Some suggest Neanderthal, Homo erectus, or Homo heidelbergensis to be the creature, but, like all other great apes, no remains of any of those species have been found in the Americas. Scientific view Expert consensus is that allegations of the existence of Bigfoot are not credible. Belief in the existence of such a large, ape-like creature is more often attributed to hoaxes, confusion, or delusion rather than to sightings of a genuine creature. In a 1996 USA Today article, Washington State zoologist John Crane said, "There is no such thing as Bigfoot. No data other than material that's clearly been fabricated has ever been presented." As with other similar beings, climate and food supply issues would make such a creature's survival in reported habitats unlikely. Bigfoot is alleged to live in regions unusual for a large, nonhuman primate, i.e., temperate latitudes in the northern hemisphere; all recognized nonhuman apes are found in the tropics of Africa and Asia. Great apes have not been found in the fossil record in the Americas, and no Bigfoot remains are known to have been found. Phillips Stevens, a cultural anthropologist at the University at Buffalo, summarized the scientific consensus as follows: In the 1970s, when Bigfoot "experts" were frequently given high-profile media coverage, McLeod writes that the scientific community generally avoided lending credence to such fringe theories by refusing even to debate them. Primatologist Jane Goodall was asked for her personal opinion of Bigfoot in a 2002 interview on National Public Radio's "Science Friday". She joked, "Well, now you will be amazed when I tell you that I'm sure that they exist." She later added, chuckling, "Well, I'm a romantic, so I always wanted them to exist", and finally, "You know, why isn't there a body? I can't answer that, and maybe they don't exist, but I want them to." In 2012, when asked again by the Huffington Post, Goodall said "I'm fascinated and would actually love them to exist," adding, "Of course, it's strange that there has never been a single authentic hide or hair of the Bigfoot, but I've read all the accounts." 
Paleontologist and author Darren Naish states in a 2016 article for Scientific American that if "Bigfoot" existed, an abundance of evidence would also exist that cannot be found anywhere today, making the existence of such a creature exceedingly unlikely. Naish summarizes the evidence for "Bigfoot" that would exist if the creature itself existed: If "Bigfoot" existed, so would consistent reports of uniform vocalizations throughout North America as can be identified for any existing large animal in the region, rather than the scattered and widely varied "Bigfoot" sounds haphazardly reported; If "Bigfoot" existed, so would many tracks that would be easy for experts to find, just as they easily find tracks for other rare megafauna in North America, rather than a complete lack of such tracks alongside "tracks" that experts agree are fraudulent; Finally, if "Bigfoot" existed, an abundance of "Bigfoot" DNA would already have been found, again as it has been found for similar animals, instead of the current state of affairs, where there is no confirmed DNA for such a creature whatsoever. Researchers Ivan T. Sanderson and Bernard Heuvelmans, founders of the study of cryptozoology, spent parts of their career searching for Bigfoot. Later scientists who researched the topic included Jason Jarvis, Carleton S. Coon, George Allen Agogino and William Charles Osman Hill, though they later stopped their research due to lack of evidence for the alleged creature. John Napier asserts that the scientific community's attitude towards Bigfoot stems primarily from insufficient evidence. Other scientists who have shown varying degrees of interest in the creature are Grover Krantz, Jeffrey Meldrum, John Bindernagel, David J. Daegling, George Schaller, Russell Mittermeier, Daris Swindler, Esteban Sarmiento, and Mireya Mayor. Formal studies One study was conducted by John Napier and published in his book Bigfoot: The Yeti and Sasquatch in Myth and Reality in 1973. Napier wrote that if a conclusion is to be reached based on scant extant "'hard' evidence," science must declare "Bigfoot does not exist." However, he found it difficult to entirely reject thousands of alleged tracks, "scattered over 125,000 square miles" (325,000 km2) or to dismiss all "the many hundreds" of eyewitness accounts. Napier concluded, "I am convinced that Sasquatch exists, but whether it is all it is cracked up to be is another matter altogether. There must be something in north-west America that needs explaining, and that something leaves man-like footprints." In 1974, the National Wildlife Federation funded a field study seeking Bigfoot evidence. No formal federation members were involved and the study made no notable discoveries. Also in 1974, the now defunct North American Wildlife Research Team constructed a "Bigfoot trap" in the Rogue River–Siskiyou National Forest. It was baited with animal carcasses and captured multiple bears, but no Bigfoot. Upkeep of the trap ended in the early 1980s, but in 2006 the United States Forest Service repaired the trap, which today is a tourist destination along the Collings Mountain hiking trail. Beginning in the late 1970s, physical anthropologist Grover Krantz published several articles and four book-length treatments of Bigfoot. However, his work was found to contain multiple scientific failings including falling for hoaxes. A study published in the Journal of Biogeography in 2009 by J.D. Lozier et al. 
used ecological niche modeling on reported sightings of Bigfoot, using their locations to infer preferred ecological parameters. They found a very close match with the ecological parameters of the American black bear. They also note that an upright bear looks much like a Bigfoot's purported appearance and consider it highly improbable that two species should have very similar ecological preferences, concluding that Bigfoot sightings are likely misidentified sightings of black bears. In the first systematic genetic analysis of 30 hair samples that were suspected to be from Bigfoot-like creatures, only one was found to be primate in origin, and that was identified as human. In a joint study by the University of Oxford and Lausanne's Cantonal Museum of Zoology, published in the Proceedings of the Royal Society B in 2014, the team used a previously published cleaning method to remove all surface contamination; the ribosomal mitochondrial DNA 12S fragment of each sample was then sequenced and compared to GenBank to identify the species of origin. The samples submitted were from different parts of the world, including the United States, Russia, the Himalayas, and Sumatra. Other than one sample of human origin, all but two were from common animals. Black and brown bears accounted for most of the samples; other animals included cow, horse, dog/wolf/coyote, sheep, goat, deer, raccoon, porcupine, and tapir. The last two samples were thought to match a fossilized genetic sample of a 40,000-year-old polar bear of the Pleistocene epoch; a second test identified these hairs as being from a rare type of brown bear. In 2019, the FBI declassified an analysis it conducted on alleged Bigfoot hairs in 1976. Bigfoot researcher Peter Byrne sent the FBI 15 hairs attached to a small skin fragment and asked if the bureau could assist him in identifying them. Jay Cochran, Jr., assistant director of the FBI's Scientific and Technical Services division, responded in 1977 that the hairs were of deer family origin. Claims Claims about the origins and characteristics of Bigfoot vary. The subject of Bigfoot has crossed over with other paranormal claims, including that Bigfoot, extraterrestrials, and UFOs are related or that Bigfoot are psychic, can shapeshift, are able to cross into different dimensions, or are completely supernatural in origin. Additionally, claims regarding Bigfoot have been associated with conspiracy theories including a government cover-up. Sightings According to Live Science, there have been over 10,000 reported Bigfoot sightings in the continental United States, with reports from every state except Hawaii. About one-third of all claims of Bigfoot sightings are located in the Pacific Northwest, with the remaining reports spread throughout the rest of North America. Most reports are considered mistakes or hoaxes, even by those researchers who claim Bigfoot exists. Sightings predominantly occur in the northwestern region of Washington state, Oregon, Northern California, and British Columbia. According to data collected from the Bigfoot Field Researchers Organization's (BFRO) Bigfoot sightings database in 2019, Washington has over 2,000 reported sightings, California over 1,600, Pennsylvania over 1,300, New York and Oregon over 1,000, and Texas just over 800. The debate over the legitimacy of Bigfoot sightings reached a peak in the 1970s, and Bigfoot has been regarded as the first widely popularized example of pseudoscience in American culture.
Alleged behavior Some Bigfoot researchers allege that Bigfoot throws rocks as territorial displays and for communication. Other alleged behaviors include audible blows struck against trees or "wood knocking", further alleged to be communicative. Skeptics argue that these behaviors are easily hoaxed. Additionally, structures of broken and twisted foliage seemingly placed in specific areas have been attributed by some to Bigfoot behavior. In some reports, lodgepole pine and other small trees have been observed bent, uprooted, or stacked in woven or crisscrossed patterns, leading some to theorize that they are potential territorial markings. Some instances have also included entire deer skeletons being suspended high in trees. Some researchers and enthusiasts believe Bigfoot construct teepee-like structures out of dead trees and foliage. In Washington state, a team of amateur Bigfoot researchers called the Olympic Project claimed to have discovered a collection of nests. The group brought in primatologists to study them, with the conclusion being that they appear to have been created by a primate. Jeremiah Byron, host of the Bigfoot Society Podcast, believes Bigfoot are omnivores, stating, "They eat both plants and meat. I've seen accounts that they eat everything from berries, leaves, nuts, and fruit to salmon, rabbit, elk, and bear." Ronny Le Blanc, host of Expedition Bigfoot on the Travel Channel, indicated he has heard anecdotal reports of Bigfoot allegedly hunting and consuming deer. Some Bigfoot researchers have reported the creatures moving or taking possession of intentional "gifts" left by humans such as food and jewelry, and leaving items in their places such as rocks and twigs. Many alleged sightings are reported to occur at night, leading some cryptozoologists to hypothesize that Bigfoot may possess nocturnal tendencies. However, experts find such behavior untenable in a supposed ape- or human-like creature, as all known apes, including humans, are diurnal, with only lesser primates exhibiting nocturnality. Most anecdotal sightings of Bigfoot describe the creatures allegedly observed as solitary, although some reports have described groups being allegedly observed together. Alleged vocalizations Alleged vocalizations such as howls, screams, moans, grunts, whistles, and even a form of supposed language have been reported and allegedly recorded. Some of these alleged vocalization recordings have been analyzed by individuals such as retired U.S. Navy cryptologic linguist Scott Nelson. He analyzed audio recordings from the early 1970s said to be recorded in the Sierra Nevada mountains, dubbed the "Sierra Sounds", and stated, "It is definitely a language, it is definitely not human in origin, and it could not have been faked". Les Stroud has spoken of a strange vocalization he heard in the wilderness while filming Survivorman that he stated sounded primate in origin. A number of anecdotal reports of Bigfoot encounters have resulted in witnesses claiming to be disoriented, dizzy, and anxious. Some Bigfoot researchers, such as paranormal author Nick Redfern, have proposed that Bigfoot may produce infrasound, which could explain reports of this nature. Alleged encounters In Fouke, Arkansas, in 1971, a family reported that a large, hair-covered creature startled a woman after reaching through a window. This alleged incident caused hysteria in the Fouke area and inspired the horror movie The Legend of Boggy Creek (1972). The report was later deemed a hoax. 
In 1974, the New York Times presented the dubious tale of Albert Ostman, a Canadian prospector, who stated that he was kidnapped and held captive by a family of Bigfoot for six days in 1924. In 1994, former U.S. Forest Service ranger Paul Freeman, a Bigfoot researcher, videotaped an alleged Bigfoot he reportedly encountered in the Blue Mountains in Oregon. The tape, often referred to as the Freeman footage, continues to be scrutinized and its authenticity debated. Freeman had previously gained media recognition in the 1980s for documenting alleged Bigfoot tracks, claiming they possessed dermal ridges. On May 26, 1996, Lori Pate, who was on a camping trip near the Washington state-Canada border, videotaped a dark figure she reported seeing run across a field and claimed it was Bigfoot. The film, dubbed the Memorial Day Bigfoot footage, is often depicted in Bigfoot-related media, most notably in the 2003 documentary Sasquatch: Legend Meets Science. In his research, Daniel Perez of the Skeptical Inquirer concluded that the footage was likely a hoax perpetrated by a human in a gorilla costume. In 2018, Bigfoot researcher Claudia Ackley garnered international attention after filing a lawsuit against the California Department of Fish and Wildlife (CDFW) for failing to acknowledge the existence of Bigfoot. Ackley claimed to have encountered and filmed a Bigfoot in the San Bernardino Mountains in 2017, describing what she saw as a "Neanderthal man with a lot of hair". Ackley contacted emergency services as well as the CDFW; a state investigator concluded that she encountered a bear. Until her death in 2023, Ackley also ran an online support group for individuals claiming to experience psychological trauma as a result of alleged Bigfoot encounters. In October 2023, a woman named Shannon Parker uploaded a video of an alleged Bigfoot to Facebook. The footage went viral on social media and was shared by various news publications. Parker reported that she and others observed the subject while riding a train on the Durango and Silverton Narrow Gauge Railroad in the San Juan Mountains in Colorado. The authenticity of the video was debated across social media. Skeptics on Reddit speculated it was a publicity hoax perpetrated by an RV company located in the area, Sasquatch Expedition Campers. The company denied the allegations. In the early 1990s, 9-1-1 audio recordings were made public in which a homeowner in Kitsap County, Washington, called law enforcement for assistance with a large subject, described by him as being "all in black", having entered his backyard. He had previously reported to law enforcement that his dog had recently been killed when it was thrown over his fence. Anthropologist Jeffrey Meldrum notes that any large predatory animal is potentially dangerous, specifically if provoked, but indicates that most anecdotal accounts of Bigfoot encounters result in the creatures hiding or fleeing from people. The 2021 Hulu documentary series Sasquatch describes marijuana farmers telling stories of Bigfoots harassing and killing people within the Emerald Triangle region in the 1970s through the 1990s, and specifically the alleged murder of three migrant workers in 1993. Investigative journalist David Holthouse attributes the stories to illegal drug operations using the local Bigfoot lore to scare away competition, particularly superstitious immigrants, and argues that the area's high rate of murders and missing persons is the result of human actions.
Skeptics argue that many of these alleged encounters are easily hoaxed, the result of misidentification, or are outright fabrications. Patterson-Gimlin film The most well-known video of an alleged Bigfoot, the Patterson-Gimlin film, was recorded on October 20, 1967, by Roger Patterson and Robert "Bob" Gimlin in an area called Bluff Creek in Northern California. The 59.5-second-long video has become an iconic piece of Bigfoot lore, and continues to be a highly scrutinized, analyzed, and debated subject. Academic experts from related fields have typically judged the film as providing "no supportive data of any scientific value" with perhaps the most common proposed explanation being that it was a hoax. Evidence claims A body print taken in the year 2000 from the Gifford Pinchot National Forest in Washington state dubbed the Skookum cast is also believed by some to have been made by a Bigfoot that sat down in the mud to eat fruit left out by researchers during the filming of an episode of the Animal X television show. Skeptics believe the cast to have been made by a known animal such as an elk. Alleged Bigfoot footprints are often suggested by Bigfoot enthusiasts as evidence for the creature's existence. Anthropologist Jeffrey Meldrum, who specializes in the study of primate bipedalism, possesses over 300 footprint casts that he maintains could not be made by wood carvings or human feet based on their anatomy, but instead are evidence of a large, non-human primate present today in North America. In 2005, Matt Crowley obtained a copy of an alleged Bigfoot footprint cast, called the "Onion Mountain Cast", and was able to painstakingly recreate the dermal ridges. Michael Dennett of the Skeptical Inquirer spoke to police investigator and primate fingerprint expert Jimmy Chilcutt in 2006 for comment on the replica and he stated, "Matt has shown artifacts can be created, at least under laboratory conditions, and field researchers need to take precautions". Chilcutt had previously stated that some of the alleged Bigfoot footprint plaster casts he examined were genuine due to the presence of "unique dermal ridges". Dennett states that Chilcutt published nothing to substantiate his claims, nor had anyone else published anything on that topic, with Chilcutt making his statements solely through a posting on the Internet. Dennett states further that no reviews on Chilcutt's statements had been performed beyond those by what Dennett states to be, "other Bigfoot enthusiasts". In 2007, the Bigfoot Field Researchers Organization claimed to have photographs depicting a juvenile Bigfoot allegedly captured on a camera trap in the Allegheny National Forest. The Pennsylvania Game Commission, however, stated that the photos were of a bear with mange. The Pennsylvania Game Commission unsuccessfully attempted to locate the suspected mangey bear. Scientist Vanessa Woods, after estimating that the subject in the photo had approximately long arms and a torso, concluded it was more comparable to a chimpanzee. In 2015, Centralia College professor Michael Townsend claimed to have discovered prey bones with "human-like" bite impressions on the southside of Mount St. Helens. Townsend claimed the bites were over two times wider than a human bite, and that he and two of his students also found 16-inch footprints in the area. 
Melba Ketchum press release After what The Huffington Post described as "a five-year study of purported Bigfoot (also known as Sasquatch) DNA samples", but prior to peer review of the work, DNA Diagnostics, a veterinary laboratory headed by veterinarian Melba Ketchum issued a press release on November 24, 2012, claiming that they had found proof that the Sasquatch "is a human relative that arose approximately 15,000 years ago as a hybrid cross of modern Homo sapiens with an unknown primate species." Ketchum called for this to be recognized officially, saying that "Government at all levels must recognize them as an indigenous people and immediately protect their human and Constitutional rights against those who would see in their physical and cultural differences a 'license' to hunt, trap, or kill them." Failing to find a scientific journal that would publish their results, Ketchum announced on February 13, 2013, that their research had been published in the DeNovo Journal of Science. The title "DeNovo: Journal of Science" in which the paper was published was found to be a Web site—registered anonymously only nine days before the paper was announced—whose first and only "journal" issue contained nothing but the "Sasquatch" article. Shortly after publication, the paper was analyzed and outlined by Sharon Hill of Doubtful News for the Committee for Skeptical Inquiry. Hill reported on the questionable journal, mismanaged DNA testing and poor quality paper, stating that "The few experienced geneticists who viewed the paper reported a dismal opinion of it noting it made little sense." The Scientist magazine also analyzed the paper, reporting that: Documented hoaxes In 1968, the frozen corpse of a supposed hair-covered hominid measuring was paraded around the United States as part of a traveling exhibition. Many stories surfaced as to its origin, such as it having been killed by hunters in Minnesota or killed by American soldiers near Da Nang during the Vietnam War. It was attributed by some to be proof of Bigfoot-like creatures. Primatologist John R. Napier studied the subject and concluded it was a hoax made of latex. Others disputed this, claiming Napier did not study the original subject. As of 2013, the subject, dubbed the Minnesota Iceman, was on display at the "Museum of the Weird" in Austin, Texas. Tom Biscardi, long-time Bigfoot enthusiast and CEO of "Searching for Bigfoot, Inc.", appeared on the Coast to Coast AM paranormal radio show on July 14, 2005, and said that he was "98% sure that his group will be able to capture a Bigfoot which they had been tracking in the Happy Camp, California area." A month later, he announced on the same radio show that he had access to a captured Bigfoot and was arranging a pay-per-view event for people to see it. He appeared on Coast to Coast AM again a few days later to announce that there was no captive Bigfoot. He blamed an unnamed woman for misleading him, and said that the show's audience was gullible. On July 9, 2008, Rick Dyer and Matthew Whitton posted a video to YouTube, claiming that they had discovered the body of a dead Bigfoot in a forest in northern Georgia, which they named "Rickmat". Tom Biscardi was contacted to investigate. Dyer and Whitton received $50,000 from "Searching for Bigfoot, Inc." The story was covered by many major news networks, including BBC, CNN, ABC News, and Fox News. Soon after a press conference, the alleged Bigfoot body was delivered in a block of ice in a freezer with the Searching for Bigfoot team. 
When the contents were thawed, observers found that the hair was not real, the head was hollow, and the feet were rubber. Dyer and Whitton admitted that it was a hoax after being confronted by Steve Kulls, executive director of SquatchDetective.com. In August 2012, a man in Montana was killed by a car while perpetrating a Bigfoot hoax using a ghillie suit. In January 2014, Rick Dyer, perpetrator of a previous Bigfoot hoax, said that he had killed a Bigfoot in September 2012 outside San Antonio, Texas. He claimed to have had scientific tests conducted on the body, "from DNA tests to 3D optical scans to body scans. It is the real deal. It's Bigfoot, and Bigfoot's here, and I shot it, and now I'm proving it to the world." He said that he had kept the body in a hidden location, and he intended to take it on tour across North America in 2014. He released photos of the body and a video showing a few individuals' reactions to seeing it, but never released any of the tests or scans. He refused to disclose the test results or to provide biological samples. He said that the DNA results were done by an undisclosed lab and could not be matched to identify any known animal. Dyer said that he would reveal the body and tests on February 9, 2014, at a news conference at Washington University, but he never made the test results available. After the tour, the Bigfoot body was taken to Houston, Texas. On March 28, 2014, Dyer admitted on his Facebook page that his "Bigfoot corpse" was another hoax. He had paid Chris Russel of "Twisted Toybox" to manufacture the prop from latex, foam, and camel hair, which he nicknamed "Hank". Dyer earned approximately US$60,000 from the tour of this second fake Bigfoot corpse. He stated that he did kill a Bigfoot, but did not take the real body on tour for fear that it would be stolen. In April 2022, a man in Mobile, Alabama posted photos he claimed were of a Bigfoot to his Facebook page, indicating the Mobile County Sheriff's Office validated their authenticity and the team from Finding Bigfoot was being dispatched. The photos circulated on social media, attracting the attention of NBC 15. The man admitted the photos were an April Fools' Day hoax. On July 7, 2022, wildlife educator and media personality Coyote Peterson released a post on Facebook in which he claimed to have discovered a large primate skull in British Columbia, indicating that he had excavated and smuggled the skull into the United States for primatologist review. He further claimed to have initially hidden the discovery due to concerns that government agencies may intervene. The post went viral, quickly garnering the attention of multiple scientists who dismissed the skull as likely a replica of a gorilla skull. Darren Naish, a vertebrate paleontologist, stated, "I'm told that Coyote Peterson does this sort of thing fairly often as clickbait, and that this is a stunt done to promote an upcoming video. Maybe this is meant to be taken as harmless fun. But in an age where anti-scientific feelings and conspiracy culture are a serious problem it—again—really isn't a good look. I think this stunt has backfired". Organizations and events There are several organizations dedicated to the research and investigation of Bigfoot sightings. The oldest and largest is the Bigfoot Field Researchers Organization (BFRO). The BFRO also provides a free database to individuals and other organizations. Their website includes reports from across North America that have been investigated by researchers to determine credibility. 
Another includes the North American Wood Ape Conservancy (NAWAC), a nonprofit organization. Other similar organizations exist throughout many U.S. states and their members come from a variety of backgrounds. In 2004, David Fahrenthold of The Washington Post published an article describing a feud between Bigfoot researchers in the eastern and western United States. Fahrenthold writes, "On the one hand, East Coast Bigfooters say they have to fight discrimination from Western counterparts who think the creature does not live east of the Rocky Mountains. On the other, they have to deal with reports from a more urban population, which includes some who are unfamiliar with wildlife and apt to mistake a black bear for the missing link". Some organizations, as well as private researchers and enthusiasts own and operate Bigfoot museums. In 2019, Bigfoot researcher Cliff Barackman, notable for his role on the Animal Planet series Finding Bigfoot, opened the North American Bigfoot Center in Boring, Oregon. In 2022, The Bigfoot Crossroads of America Museum and Research Center in Hastings, Nebraska was selected for addition into the archives of the U.S. Library of Congress. Conferences and festivals dedicated to Bigfoot are attended by thousands of people. These events commonly include guest speakers, research and lore presentations, and sometimes live music, vendors, food trucks, and other activities such as costume contests and "Bigfoot howl" competitions. The Chamber of Commerce in Willow Creek, California has hosted the "Bigfoot Daze" festival annually since the 1960s, drawing on the popularity of the local lore. Some receive collaboration between local government and corporations, such as the Smoky Mountain Bigfoot Festival in Townsend, Tennessee which is sponsored by Monster Energy. The 2023 Bigfoot Festival in Marion, North Carolina saw approximately 40,000 people in attendance, resulting in a large economic boost for the small town of less than 8,000 residents. In February 2016, the University of New Mexico at Gallup held a two-day Bigfoot conference at a cost of $7,000 in university funds. In popular culture Bigfoot has a demonstrable impact in popular culture, and has been compared to Michael Jordan as a cultural icon. October 20, the anniversary of the Patterson-Gimlin film recording, is considered by some as "National Sasquatch Awareness Day". In 2018, Smithsonian magazine declared, "Interest in the existence of the creature is at an all-time high". According to a poll taken in May 2020, about 1 in 10 American adults believe that Bigfoot is a real animal. According to a May 2023 data study, the terms "Bigfoot" and "Sasquatch" are inputted via internet search engines over 200,000 times annually in the United States, and over 660,000 times worldwide. The creature has inspired the naming of a medical company, music festival, amusement park ride, monster truck, a Marvel Comics superhero and more. Two National Basketball Association teams located in the Pacific Northwest have used Bigfoot as a mascot; Squatch of the now-defunct Seattle SuperSonics from 1993 until 2008, and Douglas Fur of the Portland Trail Blazers as of 2023. Legend the Bigfoot was selected as the official mascot for the 2022 World Athletics Championships being held in Eugene, Oregon. Laws and ordinances exist regarding harming or killing a Bigfoot, specifically in the state of Washington. 
In 1969 in Skamania County, a law was passed making killing a Bigfoot punishable by a felony conviction resulting in a monetary fine up to $10,000 or five years imprisonment. In 1984, the law was amended to a misdemeanor and the entire county was declared a "Sasquatch refuge". Whatcom County followed suit in 1991, declaring the county a "Sasquatch Protection and Refuge Area". In 2022, Grays Harbor County, Washington, passed a similar resolution after a local elementary school in Hoquiam submitted a classroom project asking for a "Sasquatch Protection and Refuge Area" to be granted. In 2021, Rep. Justin Humphrey, in an effort to bolster tourism, proposed an official Bigfoot hunting season in Oklahoma, indicating that the Wildlife Conservation Commission would regulate permits and the state would offer a $3 million bounty if such a creature was captured alive and unharmed. In 2015, World Champion taxidermist Ken Walker completed what he believes to be a lifelike Bigfoot model based on the subject in the Patterson–Gimlin film. He entered it into the 2015 World Taxidermy & Fish Carving Championships in Springfield, Missouri and was the subject of Dan Wayne's 2019 documentary Big Fur. Some have been critical of Bigfoot's rise to fame, arguing that the appearance of the creatures in cartoons, reality shows, and advertisements further reduces the potential validity of serious scientific research. Others propose that society's fascination with the concept of Bigfoot stems from human interest in mystery, the paranormal, and loneliness. In a 2022 article discussing recent Bigfoot sightings, journalist John Keilman of the Chicago Tribune states, "As UFOs have gained newfound respect, becoming the subject of a Pentagon investigative panel, the alleged Bigfoot sighting is a reminder that other paranormal phenomena are still out there, entrancing true believers and amusing skeptics". In the 2018 podcast Wild Thing, creator and journalist Laura Krantz argues that the concept of Bigfoot can be an important part of environmental interest and protection, stating, "If you look at it from the angle that Bigfoot is a creature that has eluded capture or hasn't left any concrete evidence behind, then you just have a group of people who are curious about the environment and want to know more about it, which isn't that far off from what naturalists have done for centuries". Bigfoot has been used in official government environmental protection campaigns, albeit comedically, by entities such as the U.S. Forest Service in 2015. The act of searching for or researching the creatures is often referred to as "Squatching" or "Squatch'n", popularized by the Animal Planet series, Finding Bigfoot. Bigfoot researchers and believers are often called "Squatchers". During the onset of the COVID-19 pandemic, Bigfoot became a part of many North American social distancing promotion campaigns, with the creature being referred to as the "Social Distancing Champion" and as the subject of various internet memes related to the pandemic. See also Bigfoot: The Life and Times of a Legend – 2009 book published by University of Chicago Press Sasquatch: Legend Meets Science – 2003 film documentary aired on Discovery Channel Sasquatch: Legend Meets Science – 2006 book published by Forge Citations General and cited references Green, John (2004). The Best of Bigfoot/Sasquatch. Hancock House Publishers. p. 144. Green, John (2006). Sasquatch: the Apes Among Us. Hancock House Publishers. p. 492. . Wágner, Karel (2013). Bigfoot Alias Sasquatch. 
Jonathan Livingston.
Harry Lillis "Bing" Crosby Jr. (May 3, 1903 – October 14, 1977) was an American singer, actor, television producer, television and radio personality and businessman. The first multimedia star, he was one of the most popular and influential musical artists of the 20th century worldwide. He was a leader in record sales, network radio ratings, and motion picture grosses from 1926 to 1977. He was one of the first global cultural icons. He made over 70 feature films and recorded more than 1,600 songs. His early career coincided with recording innovations that allowed him to develop an intimate singing style that influenced many male singers who followed, such as Frank Sinatra, Perry Como, Dean Martin, Dick Haymes, Elvis Presley, and John Lennon. Yank magazine said that he was "the person who had done the most for the morale of overseas servicemen" during World War II. In 1948, American polls declared him the "most admired man alive", ahead of Jackie Robinson and Pope Pius XII. In 1948, Music Digest estimated that his recordings filled more than half of the 80,000 weekly hours allocated to recorded radio music in America. Crosby won the Academy Award for Best Actor for his performance in Going My Way (1944) and was nominated for its sequel, The Bells of St. Mary's (1945), opposite Ingrid Bergman, becoming the first of six actors to be nominated twice for playing the same character. He was the number one box office attraction for five consecutive years, 1944 to 1948. At his screen apex in 1946, Crosby starred in three of the year's five highest-grossing films: The Bells of St. Mary's, Blue Skies and Road to Utopia. In 1963, Crosby received the first Grammy Global Achievement Award. He is one of 33 people to have three stars on the Hollywood Walk of Fame, in the categories of motion pictures, radio, and audio recording. He was also known for his collaborations with his friend Bob Hope, starring in the Road to... films from 1940 to 1962. Crosby influenced the development of the post World War II recording industry. After seeing a demonstration of a German broadcast quality reel-to-reel tape recorder brought to the United States by John T. Mullin, he invested $50,000 in the California electronics company Ampex to build copies. He then persuaded ABC to allow him to tape his shows. He became the first performer to prerecord his radio shows and master his commercial recordings onto magnetic tape. Crosby has been associated with the Christmas season since Irving Berlin's musical film Holiday Inn, in which he starred and famously sang "White Christmas". Through audio recordings, he produced his radio programs with the same directorial tools and craftsmanship (editing, retaking, rehearsal, time shifting) used in motion picture production, a practice that became the industry standard. In addition to his work with early audio tape recording, he helped finance the development of videotape, bought television stations, bred racehorses, and co-owned the Pittsburgh Pirates baseball team, during which time the team won two World Series (1960 and 1971). Early life Crosby was born on May 3, 1903, in Tacoma, Washington, in a house his father built at 1112 North J Street. In 1906, his family moved to Spokane in Eastern Washington state, where he was raised. In 1913, his father built a house at 508 E. Sharp Avenue. The house sits on the campus of his alma mater, Gonzaga University, as a museum housing over 200 artifacts from his life and career, including his Oscar. 
He was the fourth of seven children: brothers Laurence Earl "Larry" (1895–1975), Everett Nathaniel (1896–1966), Edward John "Ted" (1900–1973), and George Robert "Bob" (1913–1993); and two sisters, Catherine Cordelia (1904–1974) and Mary Rose (1906–1990). His parents were Harry Lowe Crosby (1870–1950), a bookkeeper, and Catherine Helen "Kate" (née Harrigan; 1873–1964). His mother was a second-generation Irish-American. His father was of Scottish and English descent; an ancestor, Simon Crosby, emigrated from England to New England in the 1630s during the Puritan migration to New England. Through another line, also on his father's side, Crosby is descended from Mayflower passenger William Brewster ( 1567 – 1644). In 1917, Crosby took a summer job as property boy at Spokane's Auditorium, where he witnessed some of the acts of the day, including Al Jolson, who held him spellbound with ad-libbing and parodies of Hawaiian songs. He later described Jolson's delivery as "electric". Crosby graduated from Gonzaga High School in 1920 and enrolled at Gonzaga University. He attended Gonzaga for three years but did not earn a degree. As a freshman, he played on the university's baseball team. The university granted him an honorary doctorate in 1937. Gonzaga University houses a large collection of photographs, correspondence, and other material related to Crosby. On November 8, 1937, after Lux Radio Theatre's adaptation of She Loves Me Not, Joan Blondell asked Crosby how he got his nickname: As it happens, that story was pure whimsy for dramatic effect; the Associated Press had reported as early as February 1932—as would later be confirmed by both Bing himself and his biographer Charles Thompson—that it was in fact a neighbor—Valentine Hobart, circa 1910—who had named him "Bingo from Bingville" after a comic feature in the local paper called The Bingville Bugle which the young Harry liked. In time, Bingo got shortened to Bing. Career Early years In 1923, Crosby was invited to join a new band composed of high-school students a few years younger than himself. Al and Miles Rinker (brothers of singer Mildred Bailey), James Heaton, Claire Pritchard and Robert Pritchard, along with drummer Crosby, formed the Musicaladers, who performed at dances both for high school students and club-goers. The group performed on Spokane radio station KHQ, but disbanded after two years. Crosby and Al Rinker obtained work at the Clemmer Theatre in Spokane (now known as the Bing Crosby Theater). Crosby was initially a member of a vocal trio called The Three Harmony Aces with Al Rinker accompanying on piano from the pit, to entertain between the films. Crosby and Al continued at the Clemmer Theatre for several months often with three other men—Wee Georgie Crittenden, Frank McBride, and Lloyd Grinnell—and they were billed The Clemmer Trio or The Clemmer Entertainers depending who performed. In October 1925, Crosby and Rinker decided to seek fame in California. They traveled to Los Angeles, where Bailey introduced them to her show business contacts. The Fanchon and Marco Time Agency hired them for thirteen weeks for the revue The Syncopation Idea starting at the Boulevard Theater in Los Angeles and then on the Loew's circuit. They each earned $75 a week. As minor parts of The Syncopation Idea Crosby and Rinker started to develop as entertainers. They had a lively style that was popular with college students. After The Syncopation Idea closed, they worked in the Will Morrissey Music Hall Revue. They honed their skills with Morrissey. 
When they got a chance to present an independent act, they were spotted by a member of the Paul Whiteman organization. Whiteman needed something different to break up his musical selections, and Crosby and Rinker filled this requirement. After less than a year in show business, they were attached to one of the biggest names. Hired for $150 a week in 1926, they debuted with Whiteman on December 6 at the Tivoli Theatre in Chicago. Their first recording, in October 1926, was "I've Got the Girl" with Don Clark's Orchestra, but the Columbia-issued record was inadvertently recorded at a slow speed, which raised the singers' pitch when played at 78 rpm. Throughout his career, Crosby often credited Bailey with getting him his first important job in the entertainment business.

The Rhythm Boys
Success with Whiteman was followed by disaster when they reached New York. Whiteman considered letting them go. However, the addition of pianist and aspiring songwriter Harry Barris made the difference, and The Rhythm Boys were born. The additional voice meant they could be heard more easily in large New York theaters. Crosby gained valuable experience on tour for a year with Whiteman and by performing and recording with Bix Beiderbecke, Jack Teagarden, Tommy Dorsey, Jimmy Dorsey, Eddie Lang, and Hoagy Carmichael. He matured as a performer and was in demand as a solo singer. Crosby became the star attraction of the Rhythm Boys. In 1928, he had his first number one hit, a jazz-influenced rendition of "Ol' Man River". In 1929, the Rhythm Boys appeared in the film King of Jazz with Whiteman, but Crosby's growing dissatisfaction with Whiteman led to the Rhythm Boys leaving his organization. They joined the Gus Arnheim Orchestra, performing nightly in the Coconut Grove of the Ambassador Hotel. As Crosby sang with the Arnheim Orchestra, his solos began to steal the show while the Rhythm Boys' act gradually became redundant. Harry Barris wrote several of Crosby's hits, including "At Your Command", "I Surrender Dear", and "Wrap Your Troubles in Dreams". When Mack Sennett signed Crosby to a solo film contract in 1931, a break with the Rhythm Boys became almost inevitable. Crosby married Dixie Lee in September 1930. After a threat of divorce in March 1931, he applied himself to his career.

Success as a solo singer
15 Minutes with Bing Crosby, his nationwide solo radio debut, began broadcasting on September 2, 1931. The weekly broadcast made him a hit. Before the end of the year, he had signed with both Brunswick Records and CBS Radio. "Out of Nowhere", "Just One More Chance", "At Your Command", and "I Found a Million Dollar Baby (in a Five and Ten Cent Store)" were among the best-selling songs of 1931. Ten of the top 50 songs of 1931 included Crosby with others or as a solo act. A "Battle of the Baritones" with singer Russ Columbo proved short-lived, replaced with the slogan "Bing Was King". Crosby played the lead in a series of musical comedy short films for Mack Sennett, signed with Paramount, and starred in his first full-length film, The Big Broadcast (1932), the first of 55 films in which he received top billing. He would appear in almost 80 pictures. He signed a contract with Jack Kapp's new record company, Decca, in late 1934. His first commercial sponsor on radio was Cremo Cigars, and his fame spread nationwide. After a long run in New York, he went back to Hollywood to film The Big Broadcast. His appearances, records, and radio work substantially increased his impact. 
The success of his first film brought him a contract with Paramount, and he began a pattern of making three films a year. He led his radio show for Woodbury Soap for two seasons while his live appearances dwindled. His records produced hits during the Depression when sales were down. Audio engineer Steve Hoffman stated, "By the way, Bing actually saved the record business in 1934 when he agreed to support Decca founder Jack Kapp's crazy idea of lowering the price of singles from a dollar to 35 cents and getting a royalty for records sold instead of a flat fee. Bing's name and his artistry saved the recording industry. All the other artists signed to Decca after Bing did. Without him, Jack Kapp wouldn't have had a chance in hell of making Decca work and the Great Depression would have wiped out phonograph records for good." His social life was frantic. His first son Gary was born in 1933 with twin boys following in 1934. By 1936, he replaced his former boss, Paul Whiteman, as host of the weekly NBC radio program Kraft Music Hall, where he remained for the next ten years. "Where the Blue of the Night (Meets the Gold of the Day)", with his trademark whistling, became his theme song and signature tune. Crosby's vocal style helped take popular singing beyond the "belting" associated with Al Jolson and Billy Murray, who had been obligated to reach the back seats in New York theaters without the aid of a microphone. As music critic Henry Pleasants noted in The Great American Popular Singers, something new had entered American music, a style that might be called "singing in American" with conversational ease. This new sound led to the popular epithet crooner. Crosby admired Louis Armstrong for his musical ability, and the trumpet maestro was a formative influence on Crosby's singing style. When the two met, they became friends. In 1936, Crosby exercised an option in his Paramount contract to regularly star in an out-of-house film. Signing an agreement with Columbia for a single motion picture, Crosby wanted Armstrong to appear in a screen adaptation of The Peacock Feather that eventually became Pennies from Heaven. Crosby asked Harry Cohn, but Cohn had no desire to pay for the flight or to meet Armstrong's "crude, mob-linked but devoted manager, Joe Glaser". Crosby threatened to leave the film and refused to discuss the matter. Cohn gave in; Armstrong's musical scenes and comic dialogue extended his influence to the silver screen, creating more opportunities for him and other African Americans to appear in future films. Crosby also ensured behind the scenes that Armstrong received equal billing with his white co-stars. Armstrong appreciated Crosby's progressive attitudes on race, and often expressed gratitude for the role in later years. During World War II, Crosby made live appearances before American troops who had been fighting in the European Theater. He learned how to pronounce German from written scripts and read propaganda broadcasts intended for German forces. The nickname "Der Bingle" was common among Crosby's German listeners and came to be used by his English-speaking fans. In a poll of U.S. troops at the close of World War II, Crosby topped the list as the person who had done the most for G.I. morale, ahead of President Franklin D. Roosevelt, General Dwight Eisenhower, and Bob Hope. The June 18, 1945, issue of Life magazine stated, "America's number one star, Bing Crosby, has won more fans, made more money than any entertainer in history. Today he is a kind of national institution." 
"In all, 60,000,000 Crosby discs have been marketed since he made his first record in 1931. His biggest best seller is "White Christmas" 2,000,000 impressions of which have been sold in the U.S. and 250,000 in Great Britain." "Nine out of ten singers and bandleaders listen to Crosby's broadcasts each Thursday night and follow his lead. The day after he sings a song over the air—any song—some 50,000 copies of it are sold throughout the U.S. Time and again Crosby has taken some new or unknown ballad, has given it what is known in trade circles as the 'big goose' and made it a hit single-handed and overnight... Precisely what the future holds for Crosby neither his family nor his friends can conjecture. He has achieved greater popularity, made more money, attracted vaster audiences than any other entertainer in history. And his star is still in the ascendant. His contract with Decca runs until 1955. His contract with Paramount runs until 1954. Records which he made ten years ago are selling better than ever before. The nation's appetite for Crosby's voice and personality appears insatiable. To soldiers overseas and to foreigners he has become a kind of symbol of America, of the amiable, humorous citizen of a free land. Crosby, however, seldom bothers to contemplate his future. For one thing, he enjoys hearing himself sing, and if ever a day should dawn when the public wearies of him, he will complacently go right on singing—to himself." White Christmas The biggest hit song of Crosby's career was his recording of Irving Berlin's "White Christmas", which he introduced on a Christmas Day radio broadcast in 1941. A copy of the recording from the radio program is owned by the estate of Bing Crosby and was loaned to CBS Sunday Morning for their December 25, 2011, program. The song appeared in his films Holiday Inn (1942), and—a decade later—in White Christmas (1954). His record hit the charts on October 3, 1942, and rose to number 1 on October 31, where it stayed for 11 weeks. A holiday perennial, the song was repeatedly re-released by Decca, charting another sixteen times. It topped the charts again in 1945 and a third time in January 1947. The song remains the bestselling single of all time. His recording of "White Christmas", has sold over 50 million copies around the world. His recording was so popular that he was obliged to re-record it in 1947 using the same musicians and backup singers; the original 1942 master had become damaged due to its frequent use in pressing additional singles. In 1977, after Crosby died, the song was re-released and reached No. 5 in the UK Singles Chart. Crosby was dismissive of his role in the song's success, saying "a jackdaw with a cleft palate could have sung it successfully". Motion pictures In the wake of a solid decade of headlining mainly smash hit musical comedy films in the 1930s, Crosby starred with Bob Hope and Dorothy Lamour in six of the seven Road to musical comedies between 1940 and 1962 (Lamour was replaced with Joan Collins in The Road to Hong Kong and limited to a lengthy cameo), cementing Crosby and Hope as an on-and-off duo, despite never declaring themselves a "team" in the sense that Laurel and Hardy or Martin and Lewis (Dean Martin and Jerry Lewis) were teams. The series consists of Road to Singapore (1940), Road to Zanzibar (1941), Road to Morocco (1942), Road to Utopia (1946), Road to Rio (1947), Road to Bali (1952), and The Road to Hong Kong (1962). 
When they appeared solo, Crosby and Hope frequently made note of the other in a comically insulting fashion. They performed together countless times on stage, radio, film, and television, and made numerous brief and not so brief appearances together in movies aside from the "Road" pictures, Variety Girl (1947) being an example of lengthy scenes and songs together along with billing. In the 1949 Disney animated film The Adventures of Ichabod and Mr. Toad, Crosby provided the narration and song vocals for The Legend of Sleepy Hollow segment. In 1960, he starred in High Time, a collegiate comedy with Fabian Forte and Tuesday Weld that predicted the emerging gap between him and the new younger generation of musicians and actors who had begun their careers after World War II. The following year, Crosby and Hope reunited for one more Road movie, The Road to Hong Kong, which teamed them up with the much younger Joan Collins and Peter Sellers. Collins was used in place of their longtime partner Dorothy Lamour, whom Crosby felt was getting too old for the role, though Hope refused to do the film without her, and she instead made a lengthy and elaborate cameo appearance. Shortly before his death in 1977, he had planned another Road film in which he, Hope, and Lamour search for the Fountain of Youth. He won an Academy Award for Best Actor for Going My Way in 1944 and was nominated for the 1945 sequel, The Bells of St. Mary's. He received critical acclaim for his performance as an alcoholic entertainer in The Country Girl and received his third Academy Award nomination. Television The Fireside Theater (1950) was his first television production. The series of 26-minute shows was filmed at Hal Roach Studios rather than performed live on the air. The "telefilms" were syndicated to individual television stations. He was a frequent guest on the musical variety shows of the 1950s and 1960s, appearing on various variety shows as well as numerous late-night talk shows and his own highly rated specials. Bob Hope memorably devoted one of his monthly NBC specials to his long intermittent partnership with Crosby titled "On the Road With Bing". Crosby was associated with ABC's The Hollywood Palace as the show's first and most frequent guest host and appeared annually on its Christmas edition with his wife Kathryn and his younger children, and continued after The Hollywood Palace was eventually canceled. In the early 1970s, he made two late appearances on the Flip Wilson Show, singing duets with the comedian. His last TV appearance was a Christmas special, Merrie Olde Christmas, taped in London in September 1977 and aired weeks after his death. It was on this special that he recorded a duet of "The Little Drummer Boy" and "Peace on Earth" with rock musician David Bowie. Their duet was released in 1982 as a single 45 rpm record and reached No. 3 in the UK singles charts. It has since become a staple of holiday radio and the final popular hit of Crosby's career. At the end of the 20th century, TV Guide listed the Crosby-Bowie duet one of the 25 most memorable musical moments of 20th-century television. Bing Crosby Productions, affiliated with Desilu Studios and later CBS Television Studios, produced a number of television series, including Crosby's own unsuccessful ABC sitcom The Bing Crosby Show in the 1964–1965 season (with co-stars Beverly Garland and Frank McHugh). 
The company produced two ABC medical dramas, Ben Casey (1961–1966) and Breaking Point (1963–1964), the popular Hogan's Heroes (1965–1971) military comedy on CBS, as well as the lesser-known show Slattery's People (1964–1965). Singing style and vocal characteristics Crosby was one of the first singers to exploit the intimacy of the microphone rather than use the deep, loud vaudeville style associated with Al Jolson. He was, by his own definition, a "phraser", a singer who placed equal emphasis on both the lyrics and the music. Paul Whiteman's hiring of Crosby, with phrasing that echoed jazz, particularly his bandmate Bix Beiderbecke's trumpet, helped bring the genre to a wider audience. In the framework of the novelty-singing style of the Rhythm Boys, he bent notes and added off-tune phrasing, an approach that was rooted in jazz. He had already been introduced to Louis Armstrong and Bessie Smith before his first appearance on record. Crosby and Armstrong remained warm acquaintances for decades, occasionally singing together in later years, e.g. "Now You Has Jazz" in the film High Society (1956). In Crosby's performances, the presence of jazz phrasing, jazz rhythm and jazz improvisation varied depending on the piece of music, but those were elements that Crosby frequently used. This can be observed particularly in his straight jazz work during the late 1920s/early 1930s, his recordings with Buddy Cole and His Trio from the mid-1950s, as well as in his numerous collaborations with such jazz musicians as Louis Armstrong, Duke Ellington, Ella Fitzgerald, Joe Venuti, or Eddie Lang. However, while Crosby can be called a jazz singer, he was not strictly only a jazz singer as he modeled the style and techniques to a broad scope of music that he performed, ranging from Jazz to Country to even such material as operetta arias. During the early portion of his solo career (about 1931–1934), Crosby's emotional, often pleading style of crooning was popular. But Jack Kapp, manager of Brunswick and later Decca, talked him into dropping many of his jazzier mannerisms in favor of a clear vocal style. Crosby credited Kapp for choosing hit songs, working with many other musicians, and most important, diversifying his repertoire into several styles and genres. Kapp helped Crosby have number one hits in Christmas music, Hawaiian music, and country music, and top-thirty hits in Irish music, French music, rhythm and blues, and ballads. Crosby elaborated on an idea of Al Jolson's: phrasing, or the art of making a song's lyric ring true. "I used to tell Sinatra over and over," said Tommy Dorsey, "there's only one singer you ought to listen to and his name is Crosby. All that matters to him is the words, and that's the only thing that ought to for you, too." Critic Henry Pleasants wrote in 1985: [While] the octave B flat to B flat in Bing's voice at that time [1930s] is, to my ears, one of the loveliest I have heard in forty-five years of listening to baritones, both classical and popular, it dropped conspicuously in later years. From the mid-1950s, Bing was more comfortable in a bass range while maintaining a baritone quality, with the best octave being G to G, or even F to F. In a recording he made of 'Dardanella' with Louis Armstrong in 1960, he attacks lightly and easily on a low E flat. This is lower than most opera basses care to venture, and they tend to sound as if they were in the cellar when they get there. 
Career achievements
Crosby was among the most popular and successful musical acts of the 20th century. Billboard magazine used different methodologies during his career, but his chart success remains impressive: 396 chart singles, including roughly 41 number 1 hits. Crosby had separate charting singles every year between 1931 and 1954; the annual re-release of "White Christmas" extended that streak to 1957. He had 24 separate popular singles in 1939 alone. Statistician Joel Whitburn at Billboard determined that Crosby was America's most successful recording act of the 1930s and again in the 1940s. In 1960, Crosby was honored as "First Citizen of Record Industry" based on having sold 200 million discs. Sources differ regarding the number of copies he sold: 300 million or even 500 million. The single "White Christmas" sold over 50 million copies according to Guinness World Records. For fifteen years (1934, 1937, 1940, 1943–1954), Crosby was among the top ten acts in box-office sales, and for five of those years (1944–1948) he was the top box-office draw in the world. He sang four Academy Award-winning songs—"Sweet Leilani" (1937), "White Christmas" (1942), "Swinging on a Star" (1944), and "In the Cool, Cool, Cool of the Evening" (1951)—and won the Academy Award for Best Actor for his role in Going My Way (1944). A survey in 2000 found that with 1,077,900,000 movie tickets sold, Crosby was the third-most-popular actor of all time, behind Clark Gable (1,168,300,000) and John Wayne (1,114,000,000). The International Motion Picture Almanac lists him in a tie for second-most years at number one on the All Time Number One Stars List with Clint Eastwood, Tom Hanks, and Burt Reynolds. His most popular film, White Christmas, grossed $30 million in 1954. He received 23 gold and platinum records, according to the book Million Selling Records. The Recording Industry Association of America did not institute its gold record certification program until 1958, when Crosby's record sales were low. Before 1958, gold records were awarded by record companies. Crosby charted 23 Billboard hits from 47 recorded songs with the Andrews Sisters, whose Decca record sales were second only to Crosby's throughout the 1940s. They were his most frequent collaborators on disc from 1939 to 1952, a partnership that produced four million-selling singles: "Pistol Packin' Mama", "Jingle Bells", "Don't Fence Me In", and "South America, Take It Away". They made one film appearance together, in Road to Rio, singing "You Don't Have to Know the Language", and sang together on the radio throughout the 1940s and 1950s. They appeared as guests on each other's shows and on Armed Forces Radio Service during and after World War II. The quartet's Top-10 Billboard hits from 1943 to 1945, including "The Vict'ry Polka", "There'll Be a Hot Time in the Town of Berlin (When the Yanks Go Marching In)", and "Is You Is or Is You Ain't (Ma' Baby?)", helped boost the morale of the American public. In 1962, Crosby was given the Grammy Lifetime Achievement Award. He has been inducted into the halls of fame for both radio and popular music. In 2007, he was inducted into the Hit Parade Hall of Fame, and in 2008 into the Western Music Hall of Fame.

Popularity and influence
Crosby's popularity around the world was such that Dorothy Masuka, the best-selling African recording artist, stated, "Only Bing Crosby the famous American crooner sold more records than me in Africa." 
His great popularity throughout the continent led other African singers to emulate him, including Masuka, Dolly Rathebe, and Miriam Makeba, known locally as "The Bing Crosby of Africa." Presenter Mike Douglas commented in a 1975 interview, "During my days in the Navy in World War II, I remember walking the streets of Calcutta, India, on the coast; it was a lonely night, so far from my home and from my new wife, Gen. I needed something to lift my spirits. As I passed a Hindu sitting on the corner of a street, I heard something surprisingly familiar. I came back to see the man playing one of those old Victrolas, like those of RCA with the horn speaker. The man was listening to Bing Crosby sing, "Ac-Cent-Tchu-Ate The Positive". I stopped and smiled in grateful acknowledgment. The Hindu nodded and smiled back. The whole world knew and loved Bing Crosby." His popularity in India led many Indian singers to imitate and emulate him, notably Kishore Kumar, considered the "Bing Crosby of India". Throughout Europe and Russia, Crosby was also known as "Der Bingle", a pseudonym coined in 1944 by Bob Musel, an American journalist based in London, after Crosby had recorded three 15-minute programs with Jack Russin for broadcast to Germany from ABSIE.

Entrepreneurship
According to Shoshana Klebanoff, Crosby became one of the richest men in the history of show business. He had investments in real estate, mines, oil wells, cattle ranches, race horses, music publishing, baseball teams, and television. He made a fortune from the Minute Maid Orange Juice Corporation, in which he was a principal stockholder.

Role in early tape recording
During the Golden Age of Radio, performers had to create their shows live, sometimes even redoing the program a second time for the West Coast time zone. Crosby had to do two live radio shows on the same day, three hours apart, for the East and West Coasts. Crosby's radio career took a significant turn in 1945, when he clashed with NBC over his insistence that he be allowed to pre-record his radio shows. (The live production of radio shows was also reinforced by the musicians' union and ASCAP, which wanted to ensure continued work for their members.) In On the Air: The Encyclopedia of Old-Time Radio, John Dunning wrote about German engineers having developed a tape recorder with a near-professional broadcast quality standard. Crosby's insistence eventually factored into the further development of magnetic tape sound recording and the radio industry's widespread adoption of it. He used his clout, both professionally and financially, for innovations in audio. But NBC and CBS refused to broadcast prerecorded radio programs. Crosby left the network and remained off the air for seven months, setting off a legal battle with his sponsor Kraft that was settled out of court. He returned to broadcasting for the last 13 weeks of the 1945–1946 season. The Mutual Network, on the other hand, had pre-recorded some of its programs as early as 1938 for The Shadow with Orson Welles. ABC, formed from the sale of the NBC Blue Network in 1943 after a federal antitrust suit, was willing to join Mutual in breaking the tradition. ABC offered Crosby $30,000 per week to produce a recorded show every Wednesday that would be sponsored by Philco. He would get an additional $40,000 from 400 independent stations for the rights to broadcast the 30-minute show, which was sent to them every Monday on three lacquer discs that played ten minutes per side at rpm. 
Murdo MacKenzie of Bing Crosby Enterprises had seen a demonstration of the German Magnetophon in June 1947—the same device that Jack Mullin had brought back from Radio Frankfurt, along with 50 reels of tape, at the end of the war. It was one of the magnetic tape recorders that BASF and AEG had built in Germany starting in 1935. The 6.5 mm ferric-oxide-coated tape could record 20 minutes per reel of high-quality sound. Alexander M. Poniatoff ordered Ampex, which he had founded in 1944, to manufacture an improved version of the Magnetophon. Crosby hired Mullin to start recording his Philco Radio Time show on his German-made machine in August 1947, using the same 50 reels of I.G. Farben magnetic tape that Mullin had found at a radio station at Bad Nauheim near Frankfurt while working for the U.S. Army Signal Corps. The advantage was editing, a point Crosby made in his autobiography, and Mullin's 1976 memoir of these early days of experimental recording agrees with Crosby's account. Crosby invested $50,000 in Ampex with the intent to produce more machines. In 1948, the second season of Philco shows was recorded with the Ampex Model 200A and Scotch 111 tape from 3M. Mullin later explained how one new broadcasting technique was invented on the Crosby show with these machines. Crosby started the tape recorder revolution in America. In his 1950 film Mr. Music, he is seen singing into an Ampex tape recorder that reproduced his voice better than anything else. Also quick to adopt tape recording was his friend Bob Hope. Crosby gave one of the first Ampex Model 300 recorders to his friend, guitarist Les Paul, which led to Paul's invention of multitrack recording. His organization, the Crosby Research Foundation, held tape recording patents and developed equipment and recording techniques, such as the laugh track, that are still in use. With Frank Sinatra, Crosby was one of the principal backers for the United Western Recorders studio complex in Los Angeles.

Videotape development
Mullin continued to work for Crosby to develop a videotape recorder (VTR). Television production was mostly live television in its early years, but Crosby wanted the same ability to record that he had achieved in radio. The Fireside Theater (1950), sponsored by Procter & Gamble, was his first television production. Mullin had not yet succeeded with videotape, so Crosby filmed the series of 26-minute shows at the Hal Roach Studios, and the "telefilms" were syndicated to individual television stations. Crosby continued to finance the development of videotape. Bing Crosby Enterprises gave the world's first demonstration of videotape recording in Los Angeles on November 11, 1951. Developed by John T. Mullin and Wayne R. Johnson since 1950, the device aired what were described as "blurred and indistinct" images, using a modified Ampex 200 tape recorder and standard quarter-inch (6.3 mm) audio tape moving at per second.

Television station ownership
A Crosby-led group purchased station KCOP-TV in Los Angeles, California, in 1954. NAFI Corporation and Crosby purchased television station KPTV in Portland, Oregon, for $4 million on September 1, 1959. In 1960, NAFI purchased KCOP from Crosby's group. In the early 1950s, Crosby helped establish the CBS television affiliate in his hometown of Spokane, Washington. He partnered with Ed Craney, who owned the CBS radio affiliate KXLY (AM), and built a television studio west of Crosby's alma mater, Gonzaga University. 
After it began broadcasting, the station was sold within a year to Northern Pacific Radio and Television Corporation. Thoroughbred horse racing Crosby was a fan of thoroughbred horse racing and bought his first racehorse in 1935. In 1937, he became a founding partner of the Del Mar Thoroughbred Club and a member of its board of directors. Operating from the Del Mar Racetrack at Del Mar, California, the group included millionaire businessman Charles S. Howard, who owned a successful racing stable that included Seabiscuit. Charles' son, Lindsay C. Howard, became one of Crosby's closest friends; Crosby named his son Lindsay after him, and would purchase his 40-room Hillsborough, California estate from Lindsay in 1965. Crosby and Lindsay Howard formed Binglin Stable to race and breed thoroughbred horses at a ranch in Moorpark in Ventura County, California. They also established the Binglin Stock Farm in Argentina, where they raced horses at Hipódromo de Palermo in Palermo, Buenos Aires. A number of Argentine-bred horses were purchased and shipped to race in the United States. On August 12, 1938, the Del Mar Thoroughbred Club hosted a $25,000 winner-take-all match race won by Charles S. Howard's Seabiscuit over Binglin's horse Ligaroti. In 1943, Binglin's horse Don Bingo won the Suburban Handicap at Belmont Park in Elmont, New York. The Binglin Stable partnership came to an end in 1953 as a result of a liquidation of assets by Crosby, who needed to raise enough funds to pay the hefty federal and state inheritance taxes on his deceased wife's estate. The Bing Crosby Breeders' Cup Handicap at Del Mar Racetrack is named in his honor. Sports Crosby had a keen interest in sports. In the 1930s, his friend and former college classmate, Gonzaga head coach Mike Pecarovich, appointed Crosby as an assistant football coach. From 1946 until his death, he owned a 25% share of the Pittsburgh Pirates. Although he was passionate about the team, he was too nervous to watch the deciding game 7 of the 1960 World Series, choosing to go to Paris with Kathryn and listen to its radio broadcast. Crosby had arranged for Ampex, another of his financial investments, to record the NBC telecast on kinescope. The game was one of the most famous in baseball history, capped off by Bill Mazeroski's walk-off home run that won the game for Pittsburgh. He apparently viewed the complete film just once, and then stored it in his wine cellar, where it remained undisturbed until it was discovered in December 2009. The restored broadcast was shown on MLB Network in December 2010. Crosby was also an avid golfer. He first took up golf at age 12 as a caddy. He was already spending much time on the golf course while touring the country in a vaudeville act or with Paul Whiteman's orchestra in the mid to late 1920s. Eventually, Crosby became accomplished at the sport, at his best reaching a two handicap. He competed in both the British and U.S. Amateur championships, was a five-time club champion at Lakeside Golf Club in Hollywood, and once made a hole-in-one on the 16th hole at Cypress Point. In 1937, Crosby hosted the first 'Crosby Clambake', a pro-am tournament at Rancho Santa Fe Golf Club in Rancho Santa Fe, California, the event's location prior to World War II. After the war, the event resumed play in 1947 on golf courses in Pebble Beach, where it has been played ever since. Now the AT&T Pebble Beach Pro-Am, the tournament is a staple of the PGA Tour, having featured Hollywood stars and other celebrities. 
In 1950, Crosby became the third person to win the William D. Richardson award, which is given to a non-professional golfer "who has consistently made an outstanding contribution to golf". In 1978, he and Bob Hope received the Bob Jones Award, the highest honor given by the United States Golf Association in recognition of distinguished sportsmanship. He is a member of the World Golf Hall of Fame, having been inducted in 1978. Crosby was also a keen fisherman. In the summer of 1966, he spent a week as the guest of Lord Egremont, staying in Cockermouth and fishing on the River Derwent. His trip was filmed for The American Sportsman on ABC, although all did not go well at first, as the salmon were not running. He made up for it at the end of the week by catching a number of sea trout. In Front Royal, Virginia, a baseball stadium was named in his honor. The Front Royal Cardinals of the Valley Baseball League play their home games there, and the stadium, known as "the Bing", is also home to both of the county's high school baseball teams.

Personal life
Crosby reportedly had an alcohol problem between the late 1920s and early 1930s, but he got a handle on his drinking in 1931. Crosby told Barbara Walters in a 1977 televised interview that he thought marijuana should be legalized because he figured it would make it much easier for the authorities to exercise proper legal control over the market. In December 1999, the New York Post published an article by Bill Hoffmann and Murray Weiss called Bing Crosby's Single Life, which claimed that "recently published" FBI files revealed connections with figures in the Mafia "since his youth". However, Crosby's FBI files had already been published in 1992 and provide no indication that Crosby had ties to the Mafia, except for one major but accidental encounter in Chicago in 1929, which is not mentioned in the files but is recounted by Crosby himself in his as-told-to autobiography Call Me Lucky. Across the more than 280 pages of Crosby's FBI files, all but one of the references to organized crime or gambling dens appear in a few of the many threatening letters that Crosby received throughout his life, and the comments made by FBI investigators in the memos discredited the claims made in those letters. In all the files there is only a single reference to a person associated with the Mafia: a memorandum dated January 16, 1959, states, "The Salt Lake City Office has developed information indicating that Moe Dalitz received an invitation to join a deer hunting party at Bing Crosby's Elko, Nevada, ranch, together with the crooner, his Las Vegas dentist and several business associates." However, Crosby had already sold his Elko ranch a year earlier, in 1958, and it is doubtful how much he was really involved in that meeting.

Romantic relationships
Crosby was married twice. His first wife was actress and nightclub singer Dixie Lee, to whom he was married from 1930 until her death from ovarian cancer in 1952. They had four sons: Gary, twins Dennis and Phillip, and Lindsay. Smash-Up: The Story of a Woman (1947) is said to be based on Lee's life. The Crosby family lived at 10500 Camarillo Street in North Hollywood for more than five years. After his wife died, Crosby had relationships with model Pat Sheehan (who married his son Dennis in 1958) and actresses Inger Stevens and Grace Kelly before marrying actress Kathryn Grant, who converted to Catholicism, in 1957. 
They had three children: Harry Lillis III (who played Bill in Friday the 13th), Mary Frances (best known for portraying Kristin Shepard on TV's Dallas), and Nathaniel (the 1981 U.S. Amateur champion in golf). Particularly during the late 1930s and through the 1940s, Bing Crosby's domestic life was dominated by his wife's excessive drinking. His efforts to cure her with the help of specialists failed. Tired of Dixie's drinking, he even asked her for a divorce in January 1941. During the 1940s, Crosby was consistently pulled between time spent away from home and trying to be there as much as possible for his children. Crosby had one confirmed extramarital affair between 1945 and the late 1940s, while married to his first wife, Dixie. Actress Patricia Neal (who herself at the time was having an affair with the married Gary Cooper) wrote in her 1988 autobiography As I Am about a trip on a cruise ship to England with actress Joan Caulfield in 1948. In the most recent Crosby biography, Bing Crosby: Swinging on a Star: The War Years, 1940–1946, Gary Giddins published excerpts from an original diary kept by two sisters, Violet and Mary Barsa, who, as young women, used to stalk Crosby in New York City during December 1945 and January 1946 and who detailed their observations in the diary. The diary reveals that during that time Crosby was indeed taking Joan Caulfield out to dinner and visiting theaters and opera houses with her, and that Caulfield and a person in her company entered the Waldorf Hotel, where Crosby was staying. However, the diary also clearly indicates that a third person, in most instances Caulfield's mother, was present at their meetings. In 1954, Joan Caulfield admitted to a relationship with a "top film star", a married man with children who in the end chose his wife and children over her. Joan's sister Betty Caulfield confirmed the romantic relationship between Joan and Bing Crosby. Despite being a Catholic, Crosby seriously considered divorce in order to marry Caulfield. In either December 1945 or January 1946, Crosby approached Cardinal Francis Spellman about his difficulties in dealing with his wife's alcoholism, his love for Caulfield, and his plan to file for divorce. According to Betty Caulfield, Spellman told Crosby: "Bing, you are Father O'Malley and under no circumstances can Father O'Malley get a divorce." Around the same time, Crosby talked to his mother about his intentions, and she protested. Ultimately, Crosby chose to end the relationship and stay with his wife. Bing and Dixie reconciled, and he continued trying to help her overcome her alcohol issues. His widow, Kathryn Crosby, later dabbled intermittently in local theater productions and appeared in television tributes to her late husband.

Homes
In November 1958, Crosby purchased the 1,350-acre Rising River Ranch in Cassel, California, after renting a portion of it for several years. Attorney Ira Shadwell declined to disclose the purchase price. In October 1978, actor Clint Eastwood purchased the ranch, under the name of his business manager Roy Kaufman, for $1.5 million. Crosby and his family lived in the San Francisco area for many years. In 1963, he and his wife Kathryn moved with their three young children from Los Angeles to a $175,000 ten-bedroom Tudor estate in Hillsborough because, according to son Nathaniel, they did not want to raise their children in Hollywood. Its current owners put the house up for sale in 2021 for $13.75 million. 
In 1965, the Crosbys moved to a larger, 40-room French chateau-style house on nearby Jackling Drive, where Kathryn Crosby continued to reside after Bing's death. This house served as a setting for some of the family's Minute Maid orange juice television commercials.

Children
After Crosby's death, his eldest son, Gary, wrote a highly critical memoir, Going My Own Way (1983), depicting his father as cruel, cold, remote, and physically and psychologically abusive. While they acknowledged that corporal punishment took place, all of Gary's immediate siblings were reported to have distanced themselves from the abuse claims, either in public or in private. Crosby's younger son Phillip disputed his brother Gary's claims about their father. Around the time Gary published his claims, Phillip stated to the press that "Gary is a whining, bitching crybaby, walking around with a two-by-four on his shoulder and just daring people to nudge it off." Nevertheless, Phillip did not deny that Crosby believed in corporal punishment. In an interview with People magazine, Phillip stated that "we never got an extra whack or a cuff we didn't deserve". Shortly before Gary's book was actually published, Lindsay said, "I'm glad [Gary] did it. I hope it clears up a lot of the old lies and rumors." Unlike Gary, Lindsay stated that he preferred to remember "all the good things I did with my dad and forget the times that were rough". According to one account, "Lindsay Crosby supported his brother [Gary] at the time of its publication but had a tempered view of its revelations. 'I never expected affection from my father so it didn't bother me,' he once told an interviewer." After the book was published, Lindsay also addressed the abuse claims and what the media had made of them. Dennis Crosby reportedly said that his older brother Gary was the most severely treated of the four boys: "He got the first licking, and we got the second." Gary's first wife of 19 years, Barbara Cosentino, of whom Gary wrote in his book, "I could confide in her about Mom and Dad and my childhood", and with whom he stayed friendly after the divorce, also commented on the claims, as did Gary's adopted son, Steven Crosby, in a 2003 interview. Bing's younger brother, singer and jazz bandleader Bob Crosby, recalled at the time of Gary's revelations that Bing was a "disciplinarian", as their mother and father had been. He added, "We were brought up that way." In an interview for the same article, Gary clarified that Bing "was like a lot of fathers of that time. He was not out to be vicious, to beat children for his kicks." The author of the most recent biography of Bing Crosby, Gary Giddins, argues that Gary Crosby's memoir is unreliable in many instances and cannot be trusted on the abuse stories. Crosby's will established a blind trust in which none of the sons received an inheritance until they reached the age of 65, a condition Crosby intended to keep them out of trouble. They instead received several thousand dollars per month from a trust left in 1952 by their mother, Dixie Lee. That trust, tied to high-performing oil stocks, folded in December 1989 following the 1980s oil glut. Lindsay Crosby died in 1989 at age 51, and Dennis Crosby died in 1991 at age 56, both by suicide from self-inflicted gunshot wounds. Gary Crosby died of lung cancer in 1995 at age 62, and Phillip Crosby died of a heart attack in 2004 at age 69. Nathaniel Crosby, Crosby's younger son from his second marriage, is a former high-level golfer who won the U.S. 
Amateur in 1981 at age 19, becoming the youngest winner in the history of that event at the time. Harry Crosby is an investment banker who occasionally makes singing appearances. Denise Crosby, Dennis Crosby's daughter, is also an actress and is known for her role as Tasha Yar on Star Trek: The Next Generation and for the recurring role of the Romulan Sela after her withdrawal from the series as a regular cast member. She also appeared in the 1989 film adaptation of Stephen King's novel Pet Sematary. In 2006, Crosby's niece through his sister Mary Rose, Carolyn Schneider, published the laudatory book Me and Uncle Bing. There have been disputes between Crosby's two families beginning in the late 1990s. When Dixie died in 1952, her will provided that her share of the community property be distributed in trust to her sons. After Crosby's death in 1977, he left the residue of his estate to a marital trust for the benefit of his widow, Kathryn, and HLC Properties, Ltd., was formed for the purpose of managing his interests, including his right of publicity. In 1996, Dixie's trust sued HLC and Kathryn for declaratory relief as to the trust's entitlement to interest, dividends, royalties, and other income derived from the community property of Crosby and Dixie. In 1999, the parties settled for approximately $1.5 million. Relying on a retroactive amendment to the California Civil Code, Dixie's trust brought suit again, in 2010, alleging that Crosby's right of publicity was community property, and that Dixie's trust was entitled to a share of the revenue it produced. The trial court granted Dixie's trust's claim. The California Court of Appeals reversed it, however, holding that the 1999 settlement barred the claim. In light of the court's ruling, it was unnecessary for the court to decide whether a right of publicity can be characterized as community property under California law. Health and death Following his recovery from a life-threatening fungal infection in his right lung in January 1974, Crosby emerged from semi-retirement to start a new spate of albums and concerts. On March 20, 1977, after videotaping a CBS concert special, "Bing – 50th Anniversary Gala", at the Ambassador Auditorium with Bob Hope looking on, Crosby fell off the stage into an orchestra pit, rupturing a disc in his back requiring a month-long stay in the hospital. His first performance after the accident was his last American concert, on August 16, 1977, the day Elvis Presley died, at the Concord Pavilion in Concord, California. When the electric power failed during his performance, he continued singing without amplification. In September, Crosby, his family and singer Rosemary Clooney began a concert tour of Britain that included two weeks at the London Palladium. While in the UK, Crosby recorded his final album, Seasons, and his final TV Christmas special with guest David Bowie on September 11 (which aired a little over a month after Crosby's death). His last concert was in the Brighton Centre on October 10, four days before his death, with British entertainer Gracie Fields in attendance. The following day he made his final appearance in a recording studio and sang eight songs at the BBC's Maida Vale Studios for a radio program, which also included an interview with Alan Dell. Accompanied by the Gordon Rose Orchestra, Crosby's last recorded performance was of the song "Once in a While". Later that afternoon, he met with Chris Harding to take photographs for the Seasons album jacket. 
On October 13, 1977, Crosby flew alone to Spain to play golf and hunt partridge. On October 14, at the La Moraleja Golf Course near Madrid, Crosby played 18 holes of golf. His partner was World Cup champion Manuel Piñero; their opponents were club president César de Zulueta and Valentín Barrios. According to Barrios, Crosby was in good spirits throughout the day, and was photographed several times during the round. At the ninth hole, construction workers building a house nearby recognized him, and when asked for a song, Crosby sang "Strangers in the Night". Crosby, who had a 13 handicap, won with his partner by one stroke. At about 6:30 pm, as Crosby and his party headed back to the clubhouse, Crosby said, "That was a great game of golf, fellas. Let's go have a Coca-Cola." These were his last words. About from the clubhouse entrance, Crosby collapsed and died instantly from a massive heart attack. At the clubhouse and later in the ambulance, house physician Dr. Laiseca tried to revive him, but was unsuccessful. At Reina Victoria Hospital he was administered the last rites of the Catholic Church and was pronounced dead at the age of 74. On October 18, 1977, following a private funeral Mass at St. Paul's Catholic Church in Westwood, Crosby was buried at Holy Cross Cemetery in Culver City, California. Legacy Crosby is a member of the National Association of Broadcasters Hall of Fame in the radio division. The family created an official website on October 14, 2007, the 30th anniversary of Crosby's death. In his autobiography Don't Shoot, It's Only Me! (1990), Bob Hope wrote, "Dear old Bing, as we called him, the Economy-sized Sinatra. And what a voice. God I miss that voice. I can't even turn on the radio around Christmas time without crying anymore." Calypso musician Roaring Lion wrote a tribute song in 1939 titled "Bing Crosby", in which he wrote: "Bing has a way of singing with his very heart and soul / Which captivates the world / His millions of listeners never fail to rejoice / At his golden voice...." Bing Crosby Stadium in Front Royal, Virginia, was named after Crosby in honor of his fundraising and cash contributions for its construction from 1948 to 1950. In 2006, the former Metropolitan Theater of Performing Arts ('The Met') in Spokane, Washington, was renamed to The Bing Crosby Theater. Crosby has three stars on the Hollywood Walk of Fame. One each for radio, recording, and motion pictures. Compositions Crosby wrote or co-wrote lyrics to 22 songs. His composition "At Your Command" was number 1 for three weeks on the U.S. pop singles chart beginning on August 8, 1931. "I Don't Stand a Ghost of a Chance With You" was his most successful composition, recorded by Duke Ellington, Frank Sinatra, Thelonious Monk, Billie Holiday, and Mildred Bailey, among others. Songs co-written by Crosby include: "That's Grandma" (1927), with Harry Barris and James Cavanaugh "From Monday On" (1928), with Harry Barris and recorded with the Paul Whiteman Orchestra featuring Bix Beiderbecke on cornet, number 14 on US pop singles charts "What Price Lyrics?" (1928), with Harry Barris and Matty Malneck "Ev'rything's Agreed Upon" (1930), with Harry Barris "At Your Command" (1931), with Harry Barris and Harry Tobias, US, number 1 (3 weeks) "Believe Me" (1931), with James Cavanaugh and Frank Weldon "Where the Blue of the Night (Meets the Gold of the Day)" (1931), with Roy Turk and Fred Ahlert, US, no. 4; US, 1940 re-recording, no. 27 "You Taught Me How to Love" (1931), with H. C. 
LeBlang and Don Herman "I Don't Stand a Ghost of a Chance with You" (1932), with Victor Young and Ned Washington, US, no. 5 "My Woman" (1932), with Irving Wallman and Max Wartell "Cutesie Pie" (1932), with Red Standex and Chummy MacGregor "I Was So Alone, Suddenly You Were There (1932), with Leigh Harline, Jack Stern and George Hamilton "Love Me Tonight" (1932), with Victor Young and Ned Washington, US, no. 4 "Waltzing in a Dream" (1932), with Victor Young and Ned Washington, US, no.6 "You're Just a Beautiful Melody of Love" (1932), lyrics by Bing Crosby, music by Babe Goldberg "Where Are You, Girl of My Dreams?" (1932), written by Bing Crosby, Irving Bibo, and Paul McVey, featured in the 1932 Universal film The Cohens and Kellys in Hollywood "I Would If I Could But I Can't" (1933), with Mitchell Parish and Alan Grey "Where the Turf Meets the Surf" (1941) with Johnny Burke and James V. Monaco. "Tenderfoot" (1953) with Bob Bowen and Perry Botkin, originally issued using the pseudonym of "Bill Brill" for Bing Crosby. "Domenica" (1961) with Pietro Garinei / Gorni Kramer / Sandro Giovannini "That's What Life is All About" (1975), with Ken Barnes, Peter Dacre, and Les Reed, US, AC chart, no. 35; UK, no. 41 "Sail Away from Norway" (1977) – Crosby wrote lyrics to go with a traditional song. Grammy Hall of Fame Four performances by Bing Crosby have been inducted into the Grammy Hall of Fame, which is a special Grammy award established in 1973 to honor recordings that are at least 25 years old and that have "qualitative or historical significance". Discography Filmography Television appearances Radio 15 Minutes with Bing Crosby (1931, CBS), Unsponsored. 6 nights a week, 15 minutes. The Cremo Singer (1931–1932, CBS), 6 nights a week, 15 minutes. 15 Minutes with Bing Crosby (1932, CBS), initially 3 nights a week, then twice a week, 15 minutes. Chesterfield Cigarettes Presents Music that Satisfies (1933, CBS), broadcast two nights a week, 15 minutes. Bing Crosby Entertains (1933–1935, CBS), weekly, 30 minutes. Kraft Music Hall (1935–1946, NBC), Thursday nights, 60 minutes until January 1943, then 30 minutes. Bing Crosby on Armed Forces Radio in World War II (1941–1945; World War II). Philco Radio Time (1946–1949, ABC), 30 minutes weekly. This Is Bing Crosby (The Minute Maid Show) (1948–1950, CBS), 15 minutes each weekday morning; Bing as disc jockey. The Bing Crosby – Chesterfield Show (1949–1952, CBS), 30 minutes weekly. The Bing Crosby Show for General Electric (1952–1954, CBS), 30 minutes weekly. The Bing Crosby Show (1954–1956) (CBS), 15 minutes, 5 nights a week. A Christmas Sing with Bing (1955–1962), (CBS, VOA and AFRS), 1 hour each year, sponsored by the Insurance Company of North America. The Ford Road Show Featuring Bing Crosby (1957–1958, CBS), 5 minutes, 5 days a week. The Bing Crosby – Rosemary Clooney Show (1960–1962, CBS), 20 minutes, 5 mornings a week, with Rosemary Clooney. RIAA certification Awards and nominations References Citations Sources Fisher, J. (2012). "Bing Crosby: Through the years, volumes one-nine (1954–56)." ARSC Journal, 43(1), 127–130. Crosby interviewed 1971 July 8. Klebanoff, Shoshana. "Crosby, Bing" American National Biography (2000) online Osterholm, J. Roger. Bing Crosby: A Bio-Bibliography. Greenwood Press, 1994. Prigozy, R. & Raubicheck, W., ed. Going My Way: Bing Crosby and American Culture. The Boydell Press, 2007. Primary sources Crosby, Bing. Call Me Lucky (1953) Crosby, Bing. Bing: The Authorized Biography (1975), written with Charles Thompson. 
Further reading
Bookbinder, Robert. The Films of Bing Crosby (Lyle Stuart, 1977).
Giddins, Gary. Bing Crosby: A Pocketful of Dreams – The Early Years, 1903–1940 (Back Bay Books, 2009).
Giddins, Gary. Bing Crosby: Swinging on a Star: The War Years, 1940–1946 (Little, Brown, 2018).
Gilbert, Roger. "Beloved and Notorious: A Theory of American Stardom, with Special Reference to Bing Crosby and Frank Sinatra." Southwest Review 95.1/2 (2010): 167–184.
Morgereth, Timothy A. Bing Crosby: A Discography, Radio Program List, and Filmography (McFarland, 1987).
Pitts, Michael, et al. The Rise of the Crooners: Gene Austin, Russ Columbo, Bing Crosby, Nick Lucas, Johnny Marvin and Rudy Vallee (Scarecrow Press, 2001).
Prigozy, Ruth, and Walter Raubicheck, eds. Going My Way: Bing Crosby and American Culture (University of Rochester Press, 2007). Essays by scholars; includes a chapter on Crosby's involvement in the making of "White Christmas" and an interview with record producer Ken Barnes.
Schofield, Mary Anne. "Marketing Iron Pigs, Patriotism, and Peace: Bing Crosby and World War II—A Discourse." Journal of Popular Culture 40.5 (2007): 867–881.
Smith, Anthony B. "Entertaining Catholics: Bing Crosby, Religion and Cultural Pluralism in 1940s America." American Catholic Studies 11#4 (2003): 1–19.
Teachout, Terry. "The Swinging Star: Why Is Bing Crosby Forgotten?" Commentary (Nov 2018), Vol. 146, Issue 4, pp. 51–54. Includes an interview.
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to prevent transfer of hazardous waste from developed to less developed countries. It does not, however, address the movement of radioactive waste. The convention is also intended to minimize the rate and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist developing countries in environmentally sound management of the hazardous and other wastes they generate. The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of June 2023, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but not ratified it. Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country. History With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to poorer countries, grew rapidly. In 1990, OECD countries exported around 1.8 million tons of hazardous waste. Although most of this waste was shipped to other developed countries, a number of high-profile incidents of hazardous waste-dumping led to calls for regulation. One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times. Unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea. Another incident was a 1988 case in which five ships transported 8,000 barrels of hazardous waste from Italy to the small Nigerian town of Koko in exchange for $100 monthly rent which was paid to a Nigerian for the use of his farmland. At its meeting that took place from 27 November to 1 December 2006, the parties of the Basel Agreement focused on issues of electronic waste and the dismantling of ships. Increased trade in recyclable materials has led to an increase in a market for used products such as computers. This market is valued in billions of dollars. At issue is the distinction when used computers stop being a "commodity" and become a "waste". As of June 2023, there are 191 parties to the treaty, which includes 188 UN member states, the Cook Islands, the European Union, and the State of Palestine. 
The five UN member states that are not party to the treaty are East Timor, Fiji, Haiti, South Sudan, and the United States. Definition of hazardous waste Waste falls under the scope of the convention if it is within the category of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way that a waste may fall under the scope of the convention is if it is defined as or considered to be a hazardous waste under the laws of either the exporting country, the importing country, or any of the countries of transit. The term disposal is defined in Article 2, paragraph 4, simply by reference to Annex IV, which gives a list of operations which are understood as disposal or recovery. The notion of disposal is broad, covering recovery and recycling operations. Alternatively, to fall under the scope of the convention, it is sufficient for waste to be included in Annex II, which lists other wastes, such as household wastes and residue that comes from incinerating household waste. Radioactive waste that is covered under other international control systems and wastes from the normal operation of ships are not covered. Annex IX attempts to define wastes which are not considered hazardous wastes and which would be excluded from the scope of the Basel Convention. If, however, these wastes are contaminated with hazardous materials to an extent causing them to exhibit an Annex III characteristic, they are not excluded. Obligations In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. The convention places a general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not take away from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements for allowing the shipping of hazardous wastes to Basel Party countries. The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention. Parties to the convention must honor import bans of other parties. Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to their sources of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered waste from, non-parties to the convention. The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions. According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders. The current consensus is that, as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered. 
Basel Ban Amendment After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to developing countries. In particular, the original convention did not prohibit waste exports to any location except Antarctica but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations. Many believed a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention. Lobbying at the 1995 Basel conference by developing countries, Greenpeace and several European countries such as Denmark led to the adoption of an amendment to the convention in 1995, termed the Basel Ban Amendment. The amendment was accepted by 86 countries and the European Union but for many years did not enter into force, as that requires ratification by three-fourths of the member states to the convention. On 6 September 2019, Croatia became the 97th country to ratify the amendment, which then entered into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage (shipbreaking). The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for the entry into force of the Ban Amendment was long a matter of debate: amendments to the convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5]; for years, the parties of the Basel Convention could not agree whether this meant three-fourths of the parties that were party to the Basel Convention when the ban was adopted, or three-fourths of the current parties of the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation. In light of the long delay in the entry into force of the Ban Amendment, Switzerland and Indonesia launched a "Country-led Initiative" (CLI) to discuss in an informal manner a way forward to ensure that transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to unsound management of hazardous wastes. This discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative. 
Regulation of plastic waste In the wake of popular outcry, in May 2019 most of the world's countries, but not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating in land-based sources. The United States, which produces an annual 42 million metric tons of plastic waste, more than any other country in the world, opposed the amendment, but since it is not a party to the treaty it did not have an opportunity to vote on it to try to block it. Information about, and visual images of, wildlife, such as seabirds, ingesting plastic, and scientific findings that nanoparticles do penetrate through the blood–brain barrier were reported to have fueled public sentiment for coordinated international legally binding action. Over a million people worldwide signed a petition demanding official action. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the Basel Convention as amended in May 2019 prohibits the transportation of plastic waste to just about every other country. The Basel Convention contains three main entries on plastic wastes in Annex II, VIII and IX of the Convention. The Plastic Waste Amendments of the convention are now binding on 186 States. In addition to ensuring the trade in plastic waste is more transparent and better regulated, under the Basel Convention governments must take steps not only to ensure the environmentally sound management of plastic waste, but also to tackle plastic waste at its source. Basel watchdog The Basel Action Network (BAN) is a charitable civil society non-governmental organization that works as a consumer watchdog for implementation of the Basel Convention. BAN's principal aims is fighting exportation of toxic waste, including plastic waste, from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN works to curb trans-border trade in hazardous electronic waste, land dumping, incineration, and the use of prison labor. See also Asbestos and the law Bamako Convention Electronic waste by country Rotterdam Convention Stockholm Convention References Further reading Toxic Exports, Jennifer Clapp, Cornell University Press, 2001. Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Ted Smith, David A. Sonnenfeld, and David Naguib Pellow, eds., Temple University Press link, . "Toxic Trade: International Knowledge Networks & the Development of the Basel Convention," Jason Lloyd, International Public Policy Review, UCL. External links Text of the Convention "A Simplified Guide to the Basel Convention" Text of the regulation no.1013/2006 of the European Union on shipments of waste Flow of Waste among Basel Parties Introductory note to the Basel Convention by Dr. Katharina Kummer Peiry, Executive Secretary of the Basel Convention, UNEP on the website of the UN Audiovisual Library of International Law Basel Convention, Treaty available in ECOLEX-the gateway to environmental law (English) Organisations Basel Action Network Africa Institute for the Environmentally Sound Management of Hazardous and other Wastes a.k.a. 
Basel Convention Regional Centre Pretoria Page on the Basel Convention at Greenpeace Basel Convention Coordinating Centre for Asia and the Pacific
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist. Origin John G. Kemeny was the math department chairman at Dartmouth College. Based largely on his reputation as an innovator in math teaching, in 1959 the school won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that." Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly-formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?" Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. 
While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students. Kemeny wrote the first version of BASIC. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier to remember FOR ... TO ... statement, and the line number used in the DO was instead indicated by the matching NEXT I. Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF ... THEN. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN. The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964. Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language, and character string functionality being added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely. Wanting use of the language to become widespread, its designers made the compiler available free of charge. In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with expensive computers, usually available only for lease. They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC. New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of "the first user-friendly programming language". Spread on time-sharing services The emergence of BASIC took place as part of a wider movement towards time-sharing systems. First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies". 
General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I. It featured BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973. Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time which was typically billed at dollars per minute. Spread on minicomputers BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory. A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG). DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system. During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek. David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, 101 BASIC Computer Games, published in 1973. 
During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, Creative Computing. The book remained popular, and was re-published on several occasions. Explosive growth: the home computer era The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy. Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975 and implementations with source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known. Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly become one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a de facto standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented. Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and an MS-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit family had its own Atari BASIC that was modified in order to fit on an 8 KB ROM cartridge. 
Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features. As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect and published it from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications. In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era. IBM PC and compatibles When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition they produced the Microsoft BASIC Compiler aimed at professional programmers. Turbo Pascal-publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh. These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves; while professional programmers now had a wide range of more advanced languages available on small computers. 
C and later C++ became the languages of choice for professional "shrink wrap" application development. A niche that BASIC continued to fill was for hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for Quick Basic. QBasic maintained an active game development community, which helped later spawn the QB64 and FreeBASIC implementations. In 2013 a game written in QBasic and compiled with QB64 for modern computers entitled Black Annex was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, RCBasic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, NaaLaa, GLBasic and Basic4GL further filled this demand, right up to the modern AppGameKit, Monkey 2 and Cerberus-X. Visual Basic In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic. While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently as by that time, computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus. Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support in March 2008. Owing to its persistent remaining popularity, third-party attempts to further support it, such as Rubberduck and ModernVB, exist. On February 2, 2017 Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020 it was announced that evolution of the VB.NET language had also concluded. 
Even so, the language was still supported and the third-party Mercury extension has since been produced. Meanwhile, competitors exist such as B4X, RAD Basic, twinBASIC, VisualFBEditor, InForm, Xojo, and Gambas. Post-1990 versions and dialects Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz). Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript, such as JSBasic which re-implements Applesoft BASIC, Spider BASIC, and NS Basic. Building from earlier efforts such as Mobile Basic and CellularBASIC, many dialects are now available for smartphones and tablets. Through the Apple App Store for iOS options include Hand BASIC, Learn BASIC, Smart Basic based on Minimal BASIC, Basic! by miSoft, and BASIC by Anastasia Kovba. The Google Play store for Android meanwhile has the touchscreen focused Touch Basic, B4A, the RFO BASIC! interpreter based on Dartmouth Basic, and adaptations of SmallBasic, BBC Basic, Tiny Basic, X11-Basic, and NS Basic. On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for Nintendo Switch, which has also been supplied a version of the Fuze Code System, a BASIC variant first implemented as a custom Raspberry Pi machine. Previously BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox, with Dragon BASIC created for homebrew on the Game Boy Advance and Nintendo DS. Calculators Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others. Windows command-line QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7 which do not have them. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, they can be copied from the installation disc, which will have a set of directories for old and optional software; other missing commands like Exe2Bin and others are in these same directories. Other The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; or more recently the programme and the suite in which it is contained is programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. 
WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines along with VBScript, JScript, and the numerous proprietary or open source engines which can be installed like PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; meaning that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4Dos, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows and macOS. Legacy The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs. Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256. Dartmouth held a 50th anniversary celebration for BASIC on 1 May 2014, as did other organisations; at least one organisation of VBA programmers organised a 35th anniversary observance in 1999. Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event. Syntax Typical BASIC keywords Data manipulation LET assigns a value (which may be the result of an expression) to a variable. In most dialects of BASIC, LET is optional, and a line with no other identifiable keyword will assume the keyword to be LET. DATA holds a list of values which are assigned sequentially using the READ command. READ reads a value from a DATA statement and assigns it to a variable. An internal pointer keeps track of the last DATA element that was read and moves it one position forward with each READ. Most dialects allow multiple variables as parameters, reading several values in a single operation. RESTORE resets the internal pointer to the first DATA statement, allowing the program to begin READing from the first value. Many dialects allow an optional line number or ordinal value to allow the pointer to be reset to a selected location. DIM Sets up an array. Program flow control IF ... THEN ... {ELSE} used to perform comparisons or make decisions. Early dialects only allowed a line number after the THEN, but later versions allowed any valid statement to follow. ELSE was not widely supported, especially in earlier versions. FOR ... TO ... {STEP} ... NEXT repeat a section of code a given number of times. A variable that acts as a counter, the "index", is available within the loop. WHILE ... WEND and REPEAT ... UNTIL repeat a section of code while the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Both of these commands are found mostly in later dialects. DO ... 
LOOP {WHILE} or {UNTIL} repeat a section of code indefinitely or while/until the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Similar to WHILE, these keywords are mostly found in later dialects. GOTO jumps to a numbered or labelled line in the program. Most dialects also allowed the form . GOSUB ... RETURN jumps to a numbered or labelled line, executes the code it finds there until it reaches a RETURN command, on which it jumps back to the statement following the GOSUB, either after a colon, or on the next line. This is used to implement subroutines. ON ... GOTO/GOSUB chooses where to jump based on the specified conditions. See Switch statement for other forms. DEF FN a pair of keywords introduced in the early 1960s to define functions. The original BASIC functions were modelled on FORTRAN single-line functions. BASIC functions were one expression with variable arguments, rather than subroutines, with a syntax on the model of DEF FND(x) = x*x at the beginning of a program. Function names were originally restricted to FN, plus one letter, i.e., FNA, FNB ... Input and output LIST displays the full source code of the current program. PRINT displays a message on the screen or other output device. INPUT asks the user to enter the value of a variable. The statement may include a prompt message. TAB used with PRINT to set the position where the next character will be shown on the screen or printed on paper. AT is an alternative form. SPC prints out a number of space characters. Similar in concept to TAB but moves by a number of additional spaces from the current column rather than moving to a specified column. Mathematical functions ABS Absolute value ATN Arctangent (result in radians) COS Cosine (argument in radians) EXP Exponential function INT Integer part (typically floor function) LOG Natural logarithm RND Random number generation SIN Sine (argument in radians) SQR Square root TAN Tangent (argument in radians) Miscellaneous REM holds a programmer's comment or REMark; often used to give a title to the program and to help identify the purpose of a given section of code. USR ("User Serviceable Routine") transfers program control to a machine language subroutine, usually entered as an alphanumeric string or in a list of DATA statements. CALL alternative form of USR found in some dialects. Does not require an artificial parameter to complete the function-like syntax of USR, and has a clearly defined method of calling different routines in memory. TRON / TROFF turns on display of each line number as it is run ("TRace ON"). This was useful for debugging or correcting of problems in a program. TROFF turns it back off again. ASM some compilers such as Freebasic, Purebasic, and Powerbasic also support inline assembly language, allowing the programmer to intermix high-level and low-level code, typically prefixed with "ASM" or "!" statements. Data types and variables Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized requirements of limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. 
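Before turning to the quirks of specific implementations, the short program below pulls together several of the keywords described above (DEF FN, DATA, READ, RESTORE, LET, GOSUB/RETURN and PRINT). It is only a sketch in a generic line-numbered, Microsoft-style dialect; the line numbers, variable names and data values are illustrative and not taken from any particular manual.

10 REM Average three scores read from DATA, reporting via a subroutine
20 DEF FNA(S) = S / 3: REM single-expression user-defined function
30 READ A, B, C: REM consume the first three DATA values
40 LET T = A + B + C
50 GOSUB 100: REM jump to the reporting subroutine, then return
60 RESTORE: REM reset the DATA pointer to the first value
70 READ A
80 PRINT "FIRST VALUE AGAIN:"; A
90 END
100 PRINT "TOTAL ="; T, "AVERAGE ="; FNA(T)
110 RETURN
120 DATA 70, 85, 90

In dialects closer to the original Dartmouth BASIC, the function name would be limited to FN plus a single letter (as with FNA here), and the remarks after the colons would have to be written as separate REM lines.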
There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, thus it was possible to inadvertently write a program with variables "LOSS" and "LOAN", which would be treated as being the same; assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not be used in variables in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by having $ suffixed to their name as a sigil, and values are often identified as strings by being delimited by "double quotation marks". Arrays in BASIC could contain integers, floating point or string variables. Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements. Examples Unstructured BASIC New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program: 10 PRINT "Hello, World!" 20 END An infinite loop could be used to fill the display with the message: 10 PRINT "Hello, World!" 20 GOTO 10 Note that the END statement is optional and has no action in most dialects of BASIC. It was not always included, as is the case in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement: 10 LET N=10 20 FOR I=1 TO N 30 PRINT "Hello, World!" 40 NEXT I Most home computers BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes: 10 INPUT "What is your name: "; U$ 20 PRINT "Hello "; U$ 30 INPUT "How many stars do you want: "; N 40 S$ = "" 50 FOR I = 1 TO N 60 S$ = S$ + "*" 70 NEXT I 80 PRINT S$ 90 INPUT "Do you want more stars? "; A$ 100 IF LEN(A$) = 0 THEN GOTO 90 110 A$ = LEFT$(A$, 1) 120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30 130 PRINT "Goodbye "; U$ 140 END The resulting dialog might resemble: What is your name: Mike Hello Mike How many stars do you want: 7 ******* Do you want more stars? yes How many stars do you want: 3 *** Do you want more stars? no Goodbye Mike The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual which averages the numbers that are input: 5 LET S = 0 10 MAT INPUT V 20 LET N = NUM 30 IF N = 0 THEN 99 40 FOR I = 1 TO N 45 LET S = S + V(I) 50 NEXT I 60 PRINT S/N 70 GO TO 5 99 END Structured BASIC Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. 
Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition keywords and structures to support repetition, selection and procedures with local variables were introduced. The following example is in Microsoft QuickBASIC: REM QuickBASIC example REM Forward declaration - allows the main code to call a REM subroutine that is defined later in the source code DECLARE SUB PrintSomeStars (StarCount!) REM Main program follows INPUT "What is your name: ", UserName$ PRINT "Hello "; UserName$ DO INPUT "How many stars do you want: ", NumStars CALL PrintSomeStars(NumStars) DO INPUT "Do you want more stars? ", Answer$ LOOP UNTIL Answer$ <> "" Answer$ = LEFT$(Answer$, 1) LOOP WHILE UCASE$(Answer$) = "Y" PRINT "Goodbye "; UserName$ END REM subroutine definition SUB PrintSomeStars (StarCount) REM This procedure uses a local variable called Stars$ Stars$ = STRING$(StarCount, "*") PRINT Stars$ END SUB Object-oriented BASIC Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigm. Most built-in procedures and functions are now represented as methods of standard objects rather than operators. Also, the operating system became increasingly accessible to the BASIC language. The following example is in Visual Basic .NET: Public Module StarsProgram Private Function Ask(prompt As String) As String Console.Write(prompt) Return Console.ReadLine() End Function Public Sub Main() Dim userName = Ask("What is your name: ") Console.WriteLine("Hello {0}", userName) Dim answer As String Do Dim numStars = CInt(Ask("How many stars do you want: ")) Dim stars As New String("*"c, numStars) Console.WriteLine(stars) Do answer = Ask("Do you want more stars? ") Loop Until answer <> "" Loop While answer.StartsWith("Y", StringComparison.OrdinalIgnoreCase) Console.WriteLine("Goodbye {0}", userName) End Sub End Module Standards ANSI/ISO/IEC Standard for Minimal BASIC: ANSI X3.60-1978 "For minimal BASIC" ISO/IEC 6373:1984 "Data Processing—Programming Languages—Minimal BASIC" ECMA-55 Minimal BASIC (withdrawn, similar to ANSI X3.60-1978) ANSI/ISO/IEC Standard for Full BASIC: ANSI X3.113-1987 "Programming Languages Full BASIC" INCITS/ISO/IEC 10279-1991 (R2005) "Information Technology – Programming Languages – Full BASIC" ANSI/ISO/IEC Addendum Defining Modules: ANSI X3.113 Interpretations-1992 "BASIC Technical Information Bulletin # 1 Interpretations of ANSI 03.113-1987" ISO/IEC 10279:1991/ Amd 1:1994 "Modules and Single Character Input Enhancement" ECMA-116 BASIC (withdrawn, similar to ANSI X3.113-1987) Compilers and interpreters See also List of BASIC dialects Notes References General references External links gotBASIC.com - For all people interested in the continued usage and evolution of the BASIC programming language. The Basics' page (Since 2001) - Comprehensive listing of dialects. American inventions Articles with example BASIC code Programming languages Programming languages created in 1964 Programming languages with an ISO standard
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by French mathematician and engineer Henri Poincaré. American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping." The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" features time travel. More precisely, though, almost the exact idea and the exact phrasing —of a tiny insect's wing affecting the entire atmosphere's winds— was published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published: In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario. 
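The kind of divergence Lorenz saw can be sketched in a few lines of code. The program below is only an illustrative stand-in, not Lorenz's original twelve-variable weather model: it integrates his later three-variable 1963 system with a crude forward Euler step, and the step size, run length and initial values (including the truncation of 0.506127 to 0.506) are assumptions chosen purely for demonstration. It is written in a generic line-numbered, Microsoft-style BASIC.

10 REM Illustrative sketch: Lorenz 1963 system, dX/dt = S*(Y-X),
20 REM dY/dt = X*(R-Z) - Y, dZ/dt = X*Y - B*Z, integrated by forward Euler
30 S = 10: R = 28: B = 8 / 3: DT = .01
40 X1 = .506127: Y1 = 1: Z1 = 1: REM run with the full-precision value
50 X2 = .506: Y2 = 1: Z2 = 1: REM run with the rounded value
60 FOR I = 1 TO 3000
70 DX = S * (Y1 - X1): DY = X1 * (R - Z1) - Y1: DZ = X1 * Y1 - B * Z1
80 X1 = X1 + DT * DX: Y1 = Y1 + DT * DY: Z1 = Z1 + DT * DZ
90 DX = S * (Y2 - X2): DY = X2 * (R - Z2) - Y2: DZ = X2 * Y2 - B * Z2
100 X2 = X2 + DT * DX: Y2 = Y2 + DT * DY: Z2 = Z2 + DT * DZ
110 IF I MOD 500 = 0 THEN PRINT I * DT, ABS(X1 - X2)
120 NEXT I
130 END

The printed separation between the two runs stays small at first and then grows until it is as large as the trajectories themselves, which is qualitatively the behaviour Lorenz observed in his weather model.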
Lorenz wrote: In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated: Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing represents a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). 
This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics. In the book entitled The Essence of Chaos, published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz used the activity of skiing to develop an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC. Illustrations A typical illustration of the butterfly effect in the Lorenz attractor shows two segments of the three-dimensional evolution of two trajectories (one in blue, and the other in yellow) for the same period of time, 0 ≤ t ≤ 30, together with their z coordinates, starting at two initial points that differ by only 10^−5 in the x-coordinate. Initially, the two trajectories seem coincident, as indicated by the small difference between the z coordinate of the blue and yellow trajectories, but for t > 23 the difference is as large as the value of the trajectory itself. The final position of the cones indicates that the two trajectories are no longer coincident at t = 30. An animation of the Lorenz attractor shows the continuous evolution. Theory and mathematical definition Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather), since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows: The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances. If M is the state space for the map f^t, then f^t displays sensitive dependence to initial conditions if for any x in M and any δ > 0 there is a point y in M with distance 0 < d(x, y) < δ such that d(f^τ(x), f^τ(y)) > e^{aτ} d(x, y) for some later time τ and some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems. 
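As a concrete numerical illustration of this definition, the sketch below iterates the logistic map discussed in the next paragraph, x_{n+1} = 4x_n(1 − x_n), from two starting values that differ by one part in a million; the particular starting values and the number of iterations are arbitrary choices for demonstration, and the dialect is again a generic line-numbered BASIC.

10 REM Two nearby starting points under the logistic map X -> 4*X*(1-X)
20 X = .2
30 Y = .2 + .000001
40 FOR N = 1 TO 40
50 X = 4 * X * (1 - X)
60 Y = 4 * Y * (1 - Y)
70 PRINT N, ABS(X - Y)
80 NEXT N
90 END

On average the printed separation roughly doubles with each iteration, reflecting the map's positive Lyapunov exponent (ln 2), until it saturates at the size of the interval [0, 1]; from then on the two sequences are effectively unrelated, which is exactly the sensitive dependence defined above.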
The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: $x_{n+1} = 4 x_n (1 - x_n)$, $0 \leq x_0 \leq 1$, which, unlike most chaotic maps, has a closed-form solution: $x_n = \sin^2(2^n \theta \pi)$, where the initial condition parameter $\theta$ is given by $\theta = \tfrac{1}{\pi} \sin^{-1}(\sqrt{x_0})$. For rational $\theta$, after a finite number of iterations $x_n$ maps into a periodic sequence. But almost all $\theta$ are irrational, and, for irrational $\theta$, $x_n$ never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor $2^n$ shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps $x_n$ folded within the range [0, 1]. In physical systems In weather The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat wrong." The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statements: (A) The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere. (B) In the 1960s, the two-week predictability limit was originally estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become a consensus. Recently, a short video has been created to present Lorenz's perspective on the predictability limit. By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view of "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) end up on the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The above animation for double pendulum motion provides an analogy. 
For large angles of swing the motion of the pendulum is often chaotic. By comparison, for small angles of swing, motions are non-chaotic. A system is multistable when it (e.g., the double pendulum system) contains more than one bounded attractor, and which attractor is reached depends only on initial conditions. Multistability has been illustrated using a kayaking example (Figure 1 of the cited study), where the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC. On the other hand, when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability (a minimal numerical illustration of this contrast appears at the end of this section). By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: "The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons." In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. Random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist. Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos. 
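The distinction drawn above between orbits that share a point attractor (no SDIC) and orbits whose outcomes depend sensitively on where they start can be seen in an even simpler setting than the kayaking picture. The sketch below is a toy illustration only (assuming Python; the one-dimensional bistable system dx/dt = x − x³, with two point attractors at x = ±1, is a generic textbook example and is not the kayaking model or the generalized Lorenz models discussed above, and it contains no chaotic attractor). It shows multistability: two starts in the same basin converge to the same point attractor and show no SDIC, while two starts straddling the unstable equilibrium at x = 0 end up at different attractors despite being arbitrarily close.

```python
# Toy illustration of multistability: dx/dt = x - x^3 has point attractors at -1 and +1.
def settle(x0, dt=0.01, steps=5000):
    """Integrate dx/dt = x - x**3 with forward Euler and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

# Same basin: both orbits converge to the attractor at +1 (no sensitive dependence).
print("same basin:      ", settle(0.4), settle(0.4001))
# Straddling the unstable equilibrium at 0: nearby starts reach different attractors.
print("different basins:", settle(1e-6), settle(-1e-6))
```

Because each orbit simply falls onto a point attractor, there is no exponential stretching along the way; the sensitivity is confined to the basin boundary at x = 0, loosely analogous to the stagnant areas versus strong currents in the kayaking example.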
In popular culture See also Avalanche effect Behavioral cusp Cascading failure Catastrophe theory Causality Chain reaction Clapotis Determinism Domino effect Dynamical system Fractal Great Stirrup Controversy Innovation butterfly Kessler syndrome Norton's dome Numerical analysis Point of divergence Positive feedback Potentiality and actuality Representativeness heuristic Ripple effect Snowball effect Traffic congestion Tropical cyclogenesis Unintended consequences References Further reading James Gleick, Chaos: Making a New Science, New York: Viking, 1987. 368 pp. Bradbury, Ray. "A Sound of Thunder." Collier's. 28 June 1952 External links Weather and Chaos: The Work of Edward N. Lorenz. A short documentary that explains the "butterfly effect" in context of Lorenz's work. The Chaos Hypertextbook. An introductory primer on chaos and fractals New England Complex Systems Institute - Concepts: Butterfly Effect ChaosBook.org. Advanced graduate textbook on chaos (no fractals) Causality Chaos theory Determinism Metaphors referring to insects Physical phenomena Stability theory
Borland Software Corporation was a computer technology company founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn. Its main business was the development and sale of software development and software deployment products. Borland was first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009, the company became a full subsidiary of the British firm Micro Focus International plc. History The 1980s: Foundations Borland Ltd. was founded in August 1981 by three Danish citizens, Niels Jensen, Ole Henriksen, and Mogens Glad, to develop products like Word Index for the CP/M operating system, using an off-the-shelf company. However, the response to the company's products at the CP/M-82 show in San Francisco showed that a U.S. company would be needed to reach the American market. They met Philippe Kahn, who had just moved to Silicon Valley and had been a key developer of the Micral. The three Danes had embarked, at first successfully, on marketing software first from Denmark and later from Ireland, before running into challenges around the time they met Kahn. Kahn was chairman, president, and CEO of Borland Inc. from its beginning in 1983 until 1995. The company name "Borland" was a creation of Kahn's, taking inspiration from the name of the American astronaut and then-Eastern Air Lines chairperson Frank Borman. The main shareholders at the incorporation of Borland were Niels Jensen (250,000 shares), Ole Henriksen (160,000), Mogens Glad (100,000), and Kahn (80,000). Borland International, Inc. era Borland developed various software development tools. Its first product was Turbo Pascal in 1983, developed by Anders Hejlsberg (who later developed .NET and C# for Microsoft); before Borland acquired it, the product had been sold in Scandinavia under the name Compas Pascal. 1984 saw the launch of Borland Sidekick, a time organization, notebook, and calculator utility that was an early terminate-and-stay-resident program (TSR) for MS-DOS compatible operating systems. By the mid-1980s the company was well established: its exhibit at the 1985 West Coast Computer Faire was among the largest apart from those of IBM and AT&T. Bruce Webster reported that "the legend of Turbo Pascal has by now reached mythic proportions, as evidenced by the number of firms that, in marketing meetings, make plans to become 'the next Borland'". After Turbo Pascal and Sidekick, the company launched other applications such as SuperKey and Lightning, all developed in Denmark. While the Danes remained majority shareholders, board members included Kahn, Tim Berry, John Nash, and David Heller. With the assistance of John Nash and David Heller, both British members of the Borland board, the company was taken public on London's Unlisted Securities Market (USM) in 1986. Schroders was the lead investment banker. According to the London IPO filings, the management team was Philippe Kahn as president, Spencer Ozawa as VP of Operations, Marie Bourget as CFO, and Spencer Leyton as VP of sales and business development. All software development continued to take place in Denmark, and later in London as the Danish co-founders moved there. A first US IPO followed in 1989, after Ben Rosen joined the Borland board, with Goldman Sachs as the lead banker, and a second offering followed in 1991 with Lazard as the lead banker. In 1985, Borland acquired Analytica and its Reflex database product. 
The engineering team of Analytica, managed by Brad Silverberg and including Reflex co-founder Adam Bosworth, became the core of Borland's engineering team in the US. Brad Silverberg was VP of engineering until he left in early 1990 to head up the Personal Systems division at Microsoft. Adam Bosworth initiated and headed up the Quattro project until moving to Microsoft later in 1990 to take over the project which eventually became Access. In 1987, Borland purchased Wizard Systems and incorporated portions of the Wizard C technology into Turbo C. Bob Jervis, the author of Wizard C, became a Borland employee. Turbo C was released on May 18, 1987. This drove a wedge between Borland and Niels Jensen and the other members of his team, who had been working on a brand-new series of compilers at their London development center. They reached an agreement and spun off a company called Jensen & Partners International (JPI), later TopSpeed. JPI first launched an MS-DOS compiler named JPI Modula-2, which later became TopSpeed Modula-2, and followed up with TopSpeed C, TopSpeed C++, and TopSpeed Pascal compilers for both the MS-DOS and OS/2 operating systems. The TopSpeed compiler technology still exists as the underlying technology of the Clarion 4GL programming language, a Windows development tool. In September 1987, Borland purchased Ansa-Software, including their Paradox (version 2.0) database management tool. Richard Schwartz, a cofounder of Ansa, became Borland's CTO, and Ben Rosen joined the Borland board. The Quattro Pro spreadsheet was launched in 1989, offering improved performance and charting capabilities for its time. Lotus Development, under the leadership of Jim Manzi, sued Borland for copyright infringement (see Look and feel). The litigation, Lotus Dev. Corp. v. Borland Int'l, Inc., brought forward Borland's open standards position as opposed to Lotus' closed approach. Borland, under Kahn's leadership, took a position of principle and announced that they would defend against Lotus' legal position and "fight for programmer's rights". After a decision in favor of Borland by the First Circuit Court of Appeals, the case went to the United States Supreme Court. Because Justice John Paul Stevens had recused himself, only eight justices heard the case, and it concluded in a 4–4 tie. The First Circuit Court decision therefore remained standing: since the Supreme Court result was a tie, it did not bind any other court and set no national precedent. Additionally, Borland's approach towards software piracy and intellectual property (IP) included its "Borland no-nonsense license agreement", allowing the developer/user to utilize its products "just like a book". The user was allowed to make multiple copies of a program, as long as it was the only copy in use at any point in time. 
The internal problems that arose with the Ashton-Tate merger were a large part of the downfall. Ashton-Tate's product portfolio proved to be weak, with no provision for evolution into the GUI environment of Windows. Almost all product lines were discontinued. The consolidation of duplicate support and development offices was costly and disruptive. Worst of all, the highest revenue earner of the combined company was dBASE, which had no Windows version ready. Borland had an internal project to clone dBASE which was intended to run on Windows and was part of the strategy of the acquisition, but by late 1992 this was abandoned due to technical flaws, and the company had to constitute a replacement team (the ObjectVision team, redeployed) headed by Bill Turpin to redo the job. Borland lacked the financial strength to project its marketing and move internal resources off other products to shore up the dBASE/W effort. Layoffs occurred in 1993 to keep the company afloat, the third round of layoffs in five years. By the time dBASE for Windows eventually shipped, the developer community had moved on to other products such as Clipper or FoxBase, and dBASE never regained a significant share of Ashton-Tate's former market. This happened against the backdrop of the rise in Microsoft's combined Office product marketing. A change in market conditions also contributed to Borland's fall from prominence. In the 1980s, companies had few people who understood the growing personal computer phenomenon, and so most technical people were given free rein to purchase whatever software they thought they needed. Borland had done an excellent job marketing to those with a highly technical bent. By the mid-1990s, however, companies were beginning to ask what the return was on the investment they had made in this loosely controlled PC software buying spree. Company executives were starting to ask questions that were hard for technically minded staff to answer, and so corporate standards began to be created. This required new kinds of marketing and support materials from software vendors, but Borland remained focused on the technical side of its products. In 1993 Borland explored ties with WordPerfect as a possible way to form a suite of programs to rival Microsoft's nascent integration strategy. WordPerfect itself was struggling with a late and troubled transition to Windows. The eventual joint company effort, named Borland Office for Windows (a combination of the WordPerfect word processor, Quattro Pro spreadsheet, and Paradox database), was introduced at the 1993 Comdex computer show. Borland Office never made significant inroads against Microsoft Office. WordPerfect was then bought by Novell. In October 1994, Borland sold Quattro Pro and the rights to sell up to a million copies of Paradox to Novell for $140 million in cash, repositioning the company on its core software development tools and the InterBase database engine and shifting toward client-server scenarios in corporate applications. This later proved a good foundation for the shift to web development tools. Philippe Kahn and the Borland board disagreed on how to focus the company, and Kahn resigned as chairman, CEO and president, after 12 years, in January 1995. Kahn remained on the board until November 7, 1996. Borland named Gary Wetsel as CEO, but he resigned in July 1996. William F. Miller was interim CEO until September of that year, when Whitney G. 
Lynn became interim president and CEO (along with other executive changes), followed by a succession of CEOs including Dale Fuller and Tod Nielsen. The Delphi 1 rapid application development (RAD) environment was launched in 1995, under the leadership of Anders Hejlsberg. In 1996 Borland acquired Open Environment Corporation, a Cambridge-based company founded by John J. Donovan. On November 25, 1996, Del Yocam was hired as Borland CEO and chairman. In 1997, Borland sold Paradox to Corel, but retained all development rights for the core BDE. In November 1997, Borland acquired Visigenic, a middleware company that was focused on implementations of CORBA. Inprise Corporation Era In April 1998, Borland International, Inc. announced it had become Inprise Corporation. For several years (both before and during the Inprise name) Borland suffered from serious financial losses and poor public image. When the name was changed to Inprise, many thought Borland had gone out of business. In March 1999, dBase was sold to KSoft, Inc., which was soon renamed dBASE Inc. (In 2004 dBASE Inc. was renamed to DataBased Intelligence, Inc.). In 1999, Dale L. Fuller replaced Yocam. At this time Fuller's title was "interim president and CEO". The "interim" was dropped in December 2000. Keith Gottfried served in senior executive positions with the company from 2000 to 2004. A proposed merger between Inprise and Corel was announced in February 2000, aimed at producing Linux-based products. The scheme was abandoned when Corel's shares fell and it became clear that there was no strategic fit. InterBase 6.0 was made available as open-source software in July 2000. In November 2000, Inprise Corporation announced that the company intended to officially change its name to Borland Software Corporation. The legal name of the company would continue to be Inprise Corporation until the completion of the renaming process during the first quarter of 2001. Once the name change was completed, the company also expected to change its Nasdaq market symbol from "INPR" to "BORL". Borland Software Corporation Era On January 2, 2001, Borland Software Corporation announced it had completed its name change from Inprise Corporation. Effective at the open of trading on Nasdaq, the company's Nasdaq market symbol was also changed from "INPR" to "BORL". Under the Borland name and a new management team headed by president and CEO Dale L. Fuller, a now-smaller and profitable Borland refocused on Delphi and created a version of Delphi and C++ Builder for Linux, both under the name Kylix. This brought Borland's expertise in integrated development environments to the Linux platform for the first time. Kylix was launched in 2001. Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and went on to develop and sell new versions at a fast pace. In 2001, Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services. C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. By the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET were combined into a single IDE called "Borland Developer Studio" (though the combined IDE is still popularly known as "Delphi"). 
In late 2002 Borland purchased design tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (eventually, CaliberRM was renamed as "Caliber"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development. Former CEO Dale Fuller quit in July 2005, but remained on the board of directors. Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO. In early 2007 Borland announced new branding for its focus around open application life-cycle management. In April 2007 Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also had development centers in Singapore, Santa Ana, California, and Linz, Austria. On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. Borland was estimated to have 750 employees at the time. On April 5, 2015, Micro Focus announced the completion of the integration of the Attachmate Group of companies, which had been merged into Micro Focus on November 20, 2014. During the integration period, the affected companies were merged into a single organization. In the announced reorganization, Borland products would be part of the Micro Focus portfolio. Subsidiaries Leaders: In October 2005, Borland acquired Leaders, to add its IT management and governance suite, called Tempo, to the Borland product line. CodeGear: On February 8, 2006, Borland announced the divestiture of their IDE division, including Delphi, JBuilder, and InterBase. At the same time, they announced the planned acquisition of Segue Software, a maker of software test and quality tools, to concentrate on application life-cycle management (ALM). On March 20, 2006, Borland announced its acquisition of Gauntlet Systems, a provider of technology that screens software under development for quality and security. On November 14, 2006, Borland announced its decision to separate the developer tools group into a wholly owned subsidiary. The newly formed operation, CodeGear, was responsible for four IDE product lines. On May 7, 2008, Borland announced the sale of the CodeGear division to Embarcadero Technologies for an expected price, with CodeGear accounts receivables retained by Borland. Products Recent The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997. Other programs are: Historical products Unreleased software Turbo Modula-2: Later sold by TopSpeed as TopSpeed Modula-2. Marketing CB Magazine: an official magazine published by Borland Japan. The magazine was republished on April 3, 1997. Renaming to Inprise Corporation Along with renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise applications development. Borland hired the marketing firm Lexicon Branding to come up with a new name for the company. 
Yocam explained that the new name, Inprise, was meant to evoke "integrating the enterprise". The idea was to integrate Borland's tools (Delphi, C++ Builder, and JBuilder) with enterprise environment software, including Visigenic's implementations of CORBA, Visibroker for C++ and Java, and the new product, Application Server. Frank Borland Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and on the cover of the Borland Sidekick 1.0 manual in 1984, during the Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor - A Turbo Pascal Tutorial and in Borland JBuilder 2. A live-action version of Frank Borland was made after Micro Focus plc had acquired Borland Software Corporation. This version was created by True Agency Limited. An introductory film was also made about the mascot. See also List of file formats (alphabetical) Lotus Development Corp. v. Borland International, Inc. Citations General references External links Borland International, Inc. Inprise Corporation Borland Software Corporation Micro Focus Borland site 1983 establishments in California 2009 mergers and acquisitions American companies established in 1983 American subsidiaries of foreign companies Companies based in Austin, Texas Micro Focus International Software companies based in Texas Software companies established in 1983 Defunct software companies of the United States
Richard Buckminster Fuller (; July 12, 1895 – July 1, 1983) was an American architect, systems theorist, writer, designer, inventor, philosopher, and futurist. He styled his name as R. Buckminster Fuller in his writings, publishing more than 30 books and coining or popularizing such terms as "Spaceship Earth", "Dymaxion" (e.g., Dymaxion house, Dymaxion car, Dymaxion map), "ephemeralization", "synergetics", and "tensegrity". Fuller developed numerous inventions, mainly architectural designs, and popularized the widely known geodesic dome; carbon molecules known as fullerenes were later named by scientists for their structural and mathematical resemblance to geodesic spheres. He also served as the second World President of Mensa International from 1974 to 1983. Fuller was awarded 28 United States patents and many honorary doctorates. In 1960, he was awarded the Frank P. Brown Medal from The Franklin Institute. He was elected an honorary member of Phi Beta Kappa in 1967, on the occasion of the 50-year reunion of his Harvard class of 1917 (from which he was expelled in his first year). He was elected a Fellow of the American Academy of Arts and Sciences in 1968. The same year, he was elected into the National Academy of Design as an Associate member. He became a full Academician in 1970, and he received the Gold Medal award from the American Institute of Architects the same year. Also in 1970, Fuller received the title of Master Architect from Alpha Rho Chi (APX), the national fraternity for architecture and the allied arts. In 1976, he received the St. Louis Literary Award from the Saint Louis University Library Associates. In 1977, he received the Golden Plate Award of the American Academy of Achievement. He also received numerous other awards, including the Presidential Medal of Freedom, presented to him on February 23, 1983, by President Ronald Reagan. Life and work Fuller was born on July 12, 1895, in Milton, Massachusetts, the son of Richard Buckminster Fuller and Caroline Wolcott Andrews, and grand-nephew of Margaret Fuller, an American journalist, critic, and women's rights advocate associated with the American transcendentalism movement. The unusual middle name, Buckminster, was an ancestral family name. As a child, Richard Buckminster Fuller tried numerous variations of his name. He used to sign his name differently each year in the guest register of his family summer vacation home at Bear Island, Maine. He finally settled on R. Buckminster Fuller. Fuller spent much of his youth on Bear Island, in Penobscot Bay off the coast of Maine. He attended Froebelian Kindergarten. He was dissatisfied with the way geometry was taught in school, disagreeing with the notions that a chalk dot on the blackboard represented an "empty" mathematical point, or that a line could stretch off to infinity. To him these were illogical, and led to his work on synergetics. He often made items from materials he found in the woods, and sometimes made his own tools. He experimented with designing a new apparatus for human propulsion of small boats. By age 12, he had invented a 'push pull' system for propelling a rowboat by use of an inverted umbrella connected to the transom with a simple oar lock which allowed the user to face forward to point the boat toward its destination. Later in life, Fuller took exception to the term "invention". 
Years later, he decided that this sort of experience had provided him with not only an interest in design, but also a habit of being familiar with and knowledgeable about the materials that his later projects would require. Fuller earned a machinist's certification, and knew how to use the press brake, stretch press, and other tools and equipment used in the sheet metal trade. Education Fuller attended Milton Academy in Massachusetts, and after that began studying at Harvard College, where he was affiliated with Adams House. He was expelled from Harvard twice: first for spending all his money partying with a vaudeville troupe, and then, after having been readmitted, for his "irresponsibility and lack of interest". By his own appraisal, he was a non-conforming misfit in the fraternity environment. Wartime experience Between his sessions at Harvard, Fuller worked in Canada as a mechanic in a textile mill, and later as a laborer in the meat-packing industry. He also served in the U.S. Navy in World War I, as a shipboard radio operator, as an editor of a publication, and as commander of the crash rescue boat USS Inca. After discharge, he worked again in the meat-packing industry, acquiring management experience. In 1917, he married Anne Hewlett. During the early 1920s, he and his father-in-law developed the Stockade Building System for producing lightweight, weatherproof, and fireproof housing—although the company would ultimately fail in 1927. Depression and epiphany Fuller recalled 1927 as a pivotal year of his life. His daughter Alexandra had died in 1922 of complications from polio and spinal meningitis just before her fourth birthday. Barry Katz, a Stanford University scholar who wrote about Fuller, found signs that around this time in his life Fuller had developed depression and anxiety. Fuller dwelled on his daughter's death, suspecting that it was connected with the Fullers' damp and drafty living conditions. This provided motivation for Fuller's involvement in Stockade Building Systems, a business which aimed to provide affordable, efficient housing. In 1927, at age 32, Fuller lost his job as president of Stockade. The Fuller family had no savings, and the birth of their daughter Allegra in 1927 added to the financial challenges. Fuller drank heavily and reflected upon the solution to his family's struggles on long walks around Chicago. During the autumn of 1927, Fuller contemplated suicide by drowning in Lake Michigan, so that his family could benefit from a life insurance payment. Fuller said that he had experienced a profound incident which would provide direction and purpose for his life. He felt as though he was suspended several feet above the ground enclosed in a white sphere of light. A voice spoke directly to Fuller, and declared: Fuller stated that this experience led to a profound re-examination of his life. He ultimately chose to embark on "an experiment, to find what a single individual could contribute to changing the world and benefiting all humanity". Speaking to audiences later in life, Fuller would frequently recount the story of his Lake Michigan experience, and its transformative impact on his life. Recovery In 1927, Fuller resolved to think independently which included a commitment to "the search for the principles governing the universe and help advance the evolution of humanity in accordance with them ... finding ways of doing more with less to the end that all people everywhere can have more and more". 
By 1928, Fuller was living in Greenwich Village and spending much of his time at the popular café Romany Marie's, where he had spent an evening in conversation with Marie and Eugene O'Neill several years earlier. Fuller accepted a job decorating the interior of the café in exchange for meals, giving informal lectures several times a week, and models of the Dymaxion house were exhibited at the café. Isamu Noguchi arrived during 1929—Constantin Brâncuși, an old friend of Marie's, had directed him there—and Noguchi and Fuller were soon collaborating on several projects, including the modeling of the Dymaxion car based on recent work by Aurel Persu. It was the beginning of their lifelong friendship. Geodesic domes Fuller taught at Black Mountain College in North Carolina during the summers of 1948 and 1949, serving as its Summer Institute director in 1949. Fuller had been shy and withdrawn, but he was persuaded to participate in a theatrical performance of Erik Satie's Le piège de Méduse produced by John Cage, who was also teaching at Black Mountain. During rehearsals, under the tutelage of Arthur Penn, then a student at Black Mountain, Fuller broke through his inhibitions to become confident as a performer and speaker. At Black Mountain, with the support of a group of professors and students, he began reinventing a project that would make him famous: the geodesic dome. Although the geodesic dome had been created, built and awarded a German patent on June 19, 1925, by Dr. Walther Bauersfeld, Fuller was awarded United States patents. Fuller's patent application made no mention of Bauersfeld's self-supporting dome built some 26 years prior. Although Fuller undoubtedly popularized this type of structure he is mistakenly given credit for its design. One of his early models was first constructed in 1945 at Bennington College in Vermont, where he lectured often. Although Bauersfeld's dome could support a full skin of concrete it was not until 1949 that Fuller erected a geodesic dome building that could sustain its own weight with no practical limits. It was in diameter and constructed of aluminium aircraft tubing and a vinyl-plastic skin, in the form of an icosahedron. To prove his design, Fuller suspended from the structure's framework several students who had helped him build it. The U.S. government recognized the importance of this work, and employed his firm Geodesics, Inc. in Raleigh, North Carolina to make small domes for the Marines. Within a few years, there were thousands of such domes around the world. Fuller's first "continuous tension – discontinuous compression" geodesic dome (full sphere in this case) was constructed at the University of Oregon Architecture School in 1959 with the help of students. These continuous tension – discontinuous compression structures featured single force compression members (no flexure or bending moments) that did not touch each other and were 'suspended' by the tensional members. Dymaxion Chronofile For half of a century, Fuller developed many ideas, designs, and inventions, particularly regarding practical, inexpensive shelter and transportation. He documented his life, philosophy, and ideas scrupulously by a daily diary (later called the Dymaxion Chronofile), and by twenty-eight publications. Fuller financed some of his experiments with inherited funds, sometimes augmented by funds invested by his collaborators, one example being the Dymaxion car project. 
World stage International recognition began with the success of huge geodesic domes during the 1950s. Fuller lectured at North Carolina State University in Raleigh in 1949, where he met James Fitzgibbon, who would become a close friend and colleague. Fitzgibbon was director of Geodesics, Inc. and Synergetics, Inc. the first licensees to design geodesic domes. Thomas C. Howard was lead designer, architect, and engineer for both companies. Richard Lewontin, a new faculty member in population genetics at North Carolina State University, provided Fuller with computer calculations for the lengths of the domes' edges. Fuller began working with architect Shoji Sadao in 1954, together designing a hypothetical Dome over Manhattan in 1960, and in 1964 they co-founded the architectural firm Fuller & Sadao Inc., whose first project was to design the large geodesic dome for the U.S. Pavilion at Expo 67 in Montreal. This building is now the "Montreal Biosphère". In 1962, the artist and searcher John McHale wrote the first monograph on Fuller, published by George Braziller in New York. After employing several Southern Illinois University Carbondale (SIU) graduate students to rebuild his models following an apartment fire in the summer of 1959, Fuller was recruited by longtime friend Harold Cohen to serve as a research professor of "design science exploration" at the institution's School of Art and Design. According to SIU architecture professor Jon Davey, the position was "unlike most faculty appointments ... more a celebrity role than a teaching job" in which Fuller offered few courses and was only stipulated to spend two months per year on campus. Nevertheless, his time in Carbondale was "extremely productive", and Fuller was promoted to university professor in 1968 and distinguished university professor in 1972. Working as a designer, scientist, developer, and writer, he continued to lecture for many years around the world. He collaborated at SIU with John McHale. In 1965, they inaugurated the World Design Science Decade (1965 to 1975) at the meeting of the International Union of Architects in Paris, which was, in Fuller's own words, devoted to "applying the principles of science to solving the problems of humanity." From 1972 until retiring as university professor emeritus in 1975, Fuller held a joint appointment at Southern Illinois University Edwardsville, where he had designed the dome for the campus Religious Center in 1971. During this period, he also held a joint fellowship at a consortium of Philadelphia-area institutions, including the University of Pennsylvania, Bryn Mawr College, Haverford College, Swarthmore College, and the University City Science Center; as a result of this affiliation, the University of Pennsylvania appointed him university professor emeritus in 1975. Fuller believed human societies would soon rely mainly on renewable sources of energy, such as solar- and wind-derived electricity. He hoped for an age of "omni-successful education and sustenance of all humanity". Fuller referred to himself as "the property of universe" and during one radio interview he gave later in life, declared himself and his work "the property of all humanity". For his lifetime of work, the American Humanist Association named him the 1969 Humanist of the Year. In 1976, Fuller was a key participant at UN Habitat I, the first UN forum on human settlements. 
Last filmed appearance Fuller's last filmed interview took place on June 21, 1983, in which he spoke at Norman Foster's Royal Gold Medal for architecture ceremony. His speech can be watched in the archives of the AA School of Architecture, in which he spoke after Sir Robert Sainsbury's introductory speech and Foster's keynote address. Death In the year of his death, Fuller described himself as follows: Fuller died on July 1, 1983, 11 days before his 88th birthday. During the period leading up to his death, his wife had been lying comatose in a Los Angeles hospital, dying of cancer. It was while visiting her there that he exclaimed, at a certain point: "She is squeezing my hand!" He then stood up, had a heart attack, and died an hour later, at age 87. His wife of 66 years died 36 hours later. They are buried in Mount Auburn Cemetery in Cambridge, Massachusetts. Philosophy Buckminster Fuller was a Unitarian, and, like his grandfather Arthur Buckminster Fuller (brother of Margaret Fuller), a Unitarian minister. Fuller was also an early environmental activist, aware of Earth's finite resources, and promoted a principle he termed "ephemeralization", which, according to futurist and Fuller disciple Stewart Brand, was defined as "doing more with less". Resources and waste from crude, inefficient products could be recycled into making more valuable products, thus increasing the efficiency of the entire process. Fuller also coined the word synergetics, a catch-all term used broadly for communicating experiences using geometric concepts, and more specifically, the empirical study of systems in transformation; his focus was on total system behavior unpredicted by the behavior of any isolated components. Fuller was a pioneer in thinking globally, and explored energy and material efficiency in the fields of architecture, engineering, and design. In his book Critical Path (1981) he cited the opinion of François de Chadenèdes (1920-1999) that petroleum, from the standpoint of its replacement cost in our current energy "budget" (essentially, the net incoming solar flux), had cost nature "over a million dollars" per U.S. gallon ($300,000 per litre) to produce. From this point of view, its use as a transportation fuel by people commuting to work represents a huge net loss compared to their actual earnings. An encapsulation quotation of his views might best be summed up as: "There is no energy crisis, only a crisis of ignorance." Though Fuller was concerned about sustainability and human survival under the existing socioeconomic system, he remained optimistic about humanity's future. Defining wealth in terms of knowledge, as the "technological ability to protect, nurture, support, and accommodate all growth needs of life", his analysis of the condition of "Spaceship Earth" caused him to conclude that at a certain time during the 1970s, humanity had attained an unprecedented state. He was convinced that the accumulation of relevant knowledge, combined with the quantities of major recyclable resources that had already been extracted from the earth, had attained a critical level, such that competition for necessities had become unnecessary. Cooperation had become the optimum survival strategy. He declared: "selfishness is unnecessary and hence-forth unrationalizable ... War is obsolete." He criticized previous utopian schemes as too exclusive, and thought this was a major source of their failure. To work, he thought that a utopia needed to include everyone. 
Fuller was influenced by Alfred Korzybski's idea of general semantics. In the 1950s, Fuller attended seminars and workshops organized by the Institute of General Semantics, and he delivered the annual Alfred Korzybski Memorial Lecture in 1955. Korzybski is mentioned in the Introduction of his book Synergetics. The two shared a remarkable amount of similarity in their formulations of general semantics. In his 1970 book I Seem To Be a Verb, he wrote: "I live on Earth at present, and I don't know what I am. I know that I am not a category. I am not a thing—a noun. I seem to be a verb, an evolutionary process—an integral function of the universe." Fuller wrote that the natural analytic geometry of the universe was based on arrays of tetrahedra. He developed this in several ways, from the close-packing of spheres and the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral. He had become a guru of the design, architecture, and "alternative" communities, such as Drop City, the community of experimental artists to whom he awarded the 1966 "Dymaxion Award" for "poetically economic" domed living structures. Major design projects The geodesic dome Fuller was most famous for his lattice shell structures – geodesic domes, which have been used as parts of military radar stations, civic buildings, environmental protest camps, and exhibition attractions. An examination of the geodesic design by Walther Bauersfeld for the Zeiss-Planetarium, built some 28 years prior to Fuller's work, reveals that Fuller's Geodesic Dome patent (U.S. 2,682,235; awarded in 1954) is the same design as Bauersfeld's. Their construction is based on extending some basic principles to build simple "tensegrity" structures (tetrahedron, octahedron, and the closest packing of spheres), making them lightweight and stable. The geodesic dome was a result of Fuller's exploration of nature's constructing principles to find design solutions. The Fuller Dome is referenced in the Hugo Award-winning novel Stand on Zanzibar by John Brunner, in which a geodesic dome is said to cover the entire island of Manhattan, and it floats on air due to the hot-air balloon effect of the large air-mass under the dome (and perhaps its construction of lightweight materials). Transportation The Dymaxion car was a vehicle designed by Fuller, featured prominently at Chicago's 1933-1934 Century of Progress World's Fair. During the Great Depression, Fuller formed the Dymaxion Corporation and built three prototypes with noted naval architect Starling Burgess and a team of 27 workmen — using donated money as well as a family inheritance. Fuller associated the word Dymaxion, a blend of the words dynamic, maximum, and tension to sum up the goal of his study, "maximum gain of advantage from minimal energy input". The Dymaxion was not an automobile but rather the 'ground-taxying mode' of a vehicle that might one day be designed to fly, land and drive — an "Omni-Medium Transport" for air, land and water. Fuller focused on the landing and taxiing qualities, and noted severe limitations in its handling. The team made improvements and refinements to the platform, and Fuller noted the Dymaxion "was an invention that could not be made available to the general public without considerable improvements". 
The bodywork was aerodynamically designed for increased fuel efficiency and its platform featured a lightweight cromoly-steel hinged chassis, rear-mounted V8 engine, front-drive, and three-wheels. The vehicle was steered via the third wheel at the rear, capable of 90° steering lock. Able to steer in a tight circle, the Dymaxion often caused a sensation, bringing nearby traffic to a halt. Shortly after launch, a prototype rolled over and crashed, killing the Dymaxion's driver and seriously injuring its passengers. Fuller blamed the accident on a second car that collided with the Dymaxion. Eyewitnesses reported, however, that the other car hit the Dymaxion only after it had begun to roll over. Despite courting the interest of important figures from the auto industry, Fuller used his family inheritance to finish the second and third prototypes — eventually selling all three, dissolving Dymaxion Corporation and maintaining the Dymaxion was never intended as a commercial venture. One of the three original prototypes survives. Housing Fuller's energy-efficient and inexpensive Dymaxion house garnered much interest, but only two prototypes were ever produced. Here the term "Dymaxion" is used in effect to signify a "radically strong and light tensegrity structure". One of Fuller's Dymaxion Houses is on display as a permanent exhibit at the Henry Ford Museum in Dearborn, Michigan. Designed and developed during the mid-1940s, this prototype is a round structure (not a dome), shaped something like the flattened "bell" of certain jellyfish. It has several innovative features, including revolving dresser drawers, and a fine-mist shower that reduces water consumption. According to Fuller biographer Steve Crooks, the house was designed to be delivered in two cylindrical packages, with interior color panels available at local dealers. A circular structure at the top of the house was designed to rotate around a central mast to use natural winds for cooling and air circulation. Conceived nearly two decades earlier, and developed in Wichita, Kansas, the house was designed to be lightweight, adapted to windy climates, cheap to produce and easy to assemble. Because of its light weight and portability, the Dymaxion House was intended to be the ideal housing for individuals and families who wanted the option of easy mobility. The design included a "Go-Ahead-With-Life Room" stocked with maps, charts, and helpful tools for travel "through time and space". It was to be produced using factories, workers, and technologies that had produced World War II aircraft. It looked ultramodern at the time, built of metal, and sheathed in polished aluminum. The basic model enclosed of floor area. Due to publicity, there were many orders during the early Post-War years, but the company that Fuller and others had formed to produce the houses failed due to management problems. In 1967, Fuller developed a concept for an offshore floating city named Triton City and published a report on the design the following year. Models of the city aroused the interest of President Lyndon B. Johnson who, after leaving office, had them placed in the Lyndon Baines Johnson Library and Museum. In 1969, Fuller began the Otisco Project, named after its location in Otisco, New York. The project developed and demonstrated concrete spray with mesh-covered wireforms for producing large-scale, load-bearing spanning structures built on-site, without the use of pouring molds, other adjacent surfaces, or hoisting. 
The initial method used a circular concrete footing in which anchor posts were set. Tubes cut to length and with ends flattened were then bolted together to form a duodeca-rhombicahedron (22-sided hemisphere) geodesic structure with spans ranging to . The form was then draped with layers of ¼-inch wire mesh attached by twist ties. Concrete was sprayed onto the structure, building up a solid layer which, when cured, would support additional concrete to be added by a variety of traditional means. Fuller referred to these buildings as monolithic ferroconcrete geodesic domes. However, the tubular frame form proved problematic for setting windows and doors. It was replaced by an iron rebar set vertically in the concrete footing and then bent inward and welded in place to create the dome's wireform structure and performed satisfactorily. Domes up to three stories tall built with this method proved to be remarkably strong. Other shapes such as cones, pyramids, and arches proved equally adaptable. The project was enabled by a grant underwritten by Syracuse University and sponsored by U.S. Steel (rebar), the Johnson Wire Corp (mesh), and Portland Cement Company (concrete). The ability to build large complex load bearing concrete spanning structures in free space would open many possibilities in architecture, and is considered one of Fuller's greatest contributions. Dymaxion map and World Game Fuller, along with co-cartographer Shoji Sadao, also designed an alternative projection map, called the Dymaxion map. This was designed to show Earth's continents with minimum distortion when projected or printed on a flat surface. In the 1960s, Fuller developed the World Game, a collaborative simulation game played on a 70-by-35-foot Dymaxion map, in which players attempt to solve world problems. The object of the simulation game is, in Fuller's words, to "make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone". Appearance and style Buckminster Fuller wore thick-lensed spectacles to correct his extreme hyperopia, a condition that went undiagnosed for the first five years of his life. Fuller's hearing was damaged during his naval service in World War I and deteriorated during the 1960s. After experimenting with bullhorns as hearing aids during the mid-1960s, Fuller adopted electronic hearing aids from the 1970s onward. In public appearances, Fuller always wore dark-colored suits, appearing like "an alert little clergyman". Previously, he had experimented with unconventional clothing immediately after his 1927 epiphany, but found that breaking social fashion customs made others devalue or dismiss his ideas. Fuller learned the importance of physical appearance as part of one's credibility, and decided to become "the invisible man" by dressing in clothes that would not draw attention to himself. With self-deprecating humor, Fuller described this black-suited appearance as resembling a "second-rate bank clerk". Writer Guy Davenport met him in 1965 and described him thus: Lifestyle Following his global prominence from the 1960s onward, Fuller became a frequent flier, often crossing time zones to lecture. In the 1960s and 1970s, he wore three watches simultaneously; one for the time zone of his office at Southern Illinois University, one for the time zone of the location he would next visit, and one for the time zone he was currently in. 
In the 1970s, Fuller was only in 'homely' locations (his personal home in Carbondale, Illinois; his holiday retreat in Bear Island, Maine; and his daughter's home in Pacific Palisades, California) roughly 65 nights per year—the other 300 nights were spent in hotel beds in the locations he visited on his lecturing and consulting circuits. In the 1920s, Fuller experimented with polyphasic sleep, which he called Dymaxion sleep. Inspired by the sleep habits of animals such as dogs and cats, Fuller worked until he was tired, and then slept short naps. This generally resulted in Fuller sleeping 30-minute naps every 6 hours. This allowed him "twenty-two thinking hours a day", which aided his work productivity. Fuller reportedly kept this Dymaxion sleep habit for two years, before quitting the routine because it conflicted with his business associates' sleep habits. Despite no longer personally partaking in the habit, in 1943 Fuller suggested Dymaxion sleep as a strategy that the United States could adopt to win World War II. Despite only practicing true polyphasic sleep for a period during the 1920s, Fuller was known for his stamina throughout his life. He was described as "tireless" by Barry Farrell in Life magazine, who noted that Fuller stayed up all night replying to mail during Farrell's 1970 trip to Bear Island. In his seventies, Fuller generally slept for 5–8 hours per night. Fuller documented his life copiously from 1915 to 1983, approximately of papers in a collection called the Dymaxion Chronofile. He also kept copies of all incoming and outgoing correspondence. The enormous R. Buckminster Fuller Collection is currently housed at Stanford University. Language and neologisms Buckminster Fuller spoke and wrote in a unique style and said it was important to describe the world as accurately as possible. Fuller often created long run-on sentences and used unusual compound words (omniwell-informed, intertransformative, omni-interaccommodative, omniself-regenerative), as well as terms he himself invented. His style of speech was characterized by progressively rapid and breathless delivery and rambling digressions of thought, which Fuller described as "thinking out loud". The effect, combined with Fuller's dry voice and non-rhotic New England accent, was varyingly considered "hypnotic" or "overwhelming". Fuller used the word Universe without the definite or indefinite article (the or a) and always capitalized the word. Fuller wrote that "by Universe I mean: the aggregate of all humanity's consciously apprehended and communicated (to self or others) Experiences". The words "down" and "up", according to Fuller, are awkward in that they refer to a planar concept of direction inconsistent with human experience. The words "in" and "out" should be used instead, he argued, because they better describe an object's relation to a gravitational center, the Earth. "I suggest to audiences that they say, 'I'm going "outstairs" and "instairs."' At first that sounds strange to them; They all laugh about it. But if they try saying in and out for a few days in fun, they find themselves beginning to realize that they are indeed going inward and outward in respect to the center of Earth, which is our Spaceship Earth. And for the first time they begin to feel real 'reality.'" "World-around" is a term coined by Fuller to replace "worldwide". 
The general belief in a flat Earth died out in classical antiquity, so using "wide" is an anachronism when referring to the surface of the Earth—a spheroidal surface has area and encloses a volume but has no width. Fuller held that unthinking use of obsolete scientific ideas detracts from and misleads intuition. Other neologisms collectively invented by the Fuller family, according to Allegra Fuller Snyder, are the terms "sunsight" and "sunclipse", replacing "sunrise" and "sunset" to overturn the geocentric bias of most pre-Copernican celestial mechanics. Fuller also invented the word "livingry", as opposed to weaponry (or "killingry"), to mean that which is in support of all human, plant, and Earth life. "The architectural profession—civil, naval, aeronautical, and astronautical—has always been the place where the most competent thinking is conducted regarding livingry, as opposed to weaponry." As well as contributing significantly to the development of tensegrity technology, Fuller invented the term "tensegrity", a portmanteau of "tensional integrity". "Tensegrity describes a structural-relationship principle in which structural shape is guaranteed by the finitely closed, comprehensively continuous, tensional behaviors of the system and not by the discontinuous and exclusively local compressional member behaviors. Tensegrity provides the ability to yield increasingly without ultimately breaking or coming asunder." "Dymaxion" is a portmanteau of "dynamic maximum tension". It was invented around 1929 by two admen at Marshall Field's department store in Chicago to describe Fuller's concept house, which was shown as part of a house of the future store display. They created the term using three words that Fuller used repeatedly to describe his design – dynamic, maximum, and tension. Fuller also helped to popularize the concept of Spaceship Earth: "The most important fact about Spaceship Earth: an instruction manual didn't come with it." In the preface for his "cosmic fairy tale" Tetrascroll: Goldilocks and the Three Bears, Fuller stated that his distinctive speaking style grew out of years of embellishing the classic tale for the benefit of his daughter, allowing him to explore both his new theories and how to present them. The Tetrascroll narrative was eventually transcribed onto a set of tetrahedral lithographs (hence the name), as well as being published as a traditional book. Fuller's language posed problems for his credibility. John Julius Norwich recalled commissioning a 600-word introduction for a planned history of world architecture from Fuller, and receiving a 3,500-word proposal in response. Norwich commented: "On reflection, I asked Dr. Nikolaus Pevsner instead." Concepts and buildings His concepts and buildings include: Dymaxion house (1928) R. 
Buckminster Fuller and Anne Hewlett Dome Home Aerodynamic Dymaxion car (1933) Prefabricated compact bathroom cell (1937) Dymaxion deployment unit (1940) Dymaxion map of the world (1946) Tensegrity structures (1949) Geodesic dome for Ford Motor Company (1953) Patent on geodesic domes (1954) Tokyo Tower (1958) (unselected design) Tokyo Olympic Stadium (1958) (unselected design) The World Game (1961) and the World Game Institute (1972) Patent on octet truss (1961) Montreal Biosphere (1967), United States pavilion at Expo 67 Fly's Eye Dome Dewan Tunku Geodesic Dome, KOMTAR, Penang, Malaysia (proposed 1974, completed 1985) Comprehensive anticipatory design science Influence and legacy Among the many people who were influenced by Buckminster Fuller are: Constance Abernathy, Ruth Asawa, J. Baldwin, Michael Ben-Eli, Pierre Cabrol, John Cage, Joseph Clinton, Peter Floyd, Norman Foster, Medard Gabel, Michael Hays, Ted Nelson, David Johnston, Peter Jon Pearce, Shoji Sadao, Edwin Schlossberg, Kenneth Snelson, Robert Anton Wilson, Stewart Brand, and Jason McLennan. An allotrope of carbon, fullerene, and a particular molecule of that allotrope, C60 (buckminsterfullerene or the buckyball), have been named after him. The Buckminsterfullerene molecule, which consists of 60 carbon atoms, very closely resembles a spherical version of Fuller's geodesic dome. The 1996 Nobel Prize in Chemistry was given to Kroto, Curl, and Smalley for their discovery of the fullerene. On July 12, 2004, the United States Post Office released a new commemorative stamp honoring R. Buckminster Fuller on the 50th anniversary of his patent for the geodesic dome and on the occasion of his 109th birthday. The stamp's design replicated the January 10, 1964, cover of Time magazine. Fuller was the subject of two documentary films: The World of Buckminster Fuller (1971) and Buckminster Fuller: Thinking Out Loud (1996). Additionally, filmmaker Sam Green and the band Yo La Tengo collaborated on a 2012 "live documentary" about Fuller, The Love Song of R. Buckminster Fuller. In June 2008, the Whitney Museum of American Art presented "Buckminster Fuller: Starting with the Universe", the most comprehensive retrospective to date of his work and ideas. The exhibition traveled to the Museum of Contemporary Art, Chicago, in 2009. It presented a combination of models, sketches, and other artifacts, representing six decades of the artist's integrated approach to housing, transportation, communication, and cartography. It also featured the extensive connections with Chicago from his years spent living, teaching, and working in the city. In 2009, a number of US companies decided to repackage spherical magnets and sell them as toys. One company, Maxfield & Oberton, told The New York Times that they saw the product on YouTube and decided to repackage them as "Buckyballs", because the magnets could self-form and hold together in shapes reminiscent of the Fuller-inspired buckyballs. The buckyball toy launched at New York International Gift Fair in 2009 and sold in the hundreds of thousands, but by 2010 began to experience toy-safety problems, and the company was forced to recall the packages that were labelled as toys. In 2012, the San Francisco Museum of Modern Art hosted "The Utopian Impulse" – a show about Buckminster Fuller's influence in the Bay Area. Featured were concepts, inventions and designs for creating "free energy" from natural forces, and for sequestering carbon from the atmosphere. The show ran January through July. 
In popular culture Fuller is quoted in "The Tower of Babble" from the musical Godspell: "Man is a complex of patterns and processes." Belgian rock band dEUS released the song The Architect, inspired by Fuller, on their 2008 album Vantage Point. Indie band Driftless Pony Club titled their 2011 album Buckminster after Fuller. Each of the album's songs is based upon his life and works. The design podcast 99% Invisible (2010–present) takes its title from a Fuller quote: "Ninety-nine percent of who you are is invisible and untouchable." Fuller is briefly mentioned in X-Men: Days of Future Past (2014) when Kitty Pryde is giving a lecture to a group of students regarding utopian architecture. Robert Kiyosaki's 2015 book Second Chance concerns Kiyosaki's interactions with Fuller as well as Fuller's unusual final book, Grunch of Giants. In The House of Tomorrow (2017), based on Peter Bognanni's 2010 novel of the same name, Ellen Burstyn's character is obsessed with Fuller and provides retro-futurist tours of her geodesic home that include videos of Fuller sailing and talking with Burstyn, who had in real life befriended Fuller. Patents (from the Table of Contents of Inventions: The Patented Works of R. Buckminster Fuller (1983) ) 1927 Stockade: building structure 1927 Stockade: pneumatic forming process 1928 (Application Abandoned) 4D house 1937 Dymaxion car 1940 Dymaxion bathroom 1944 Dymaxion deployment unit (sheet) 1944 Dymaxion deployment unit (frame) 1946 Dymaxion map 1946 (No Patent) Dymaxion house (Wichita) 1954 Geodesic dome 1959 Paperboard dome 1959 Plydome 1959 Catenary (geodesic tent) 1961 Octet truss 1962 Tensegrity 1963 Submarisle (undersea island) 1964 Aspension (suspension building) 1965 Monohex (geodesic structures) 1965 Laminar dome 1965 (Filed – No Patent) Octa spinner 1967 Star tensegrity (octahedral truss) 1970 Rowing needles (watercraft) 1974 Geodesic hexa-pent 1975 Floatable breakwater 1975 Non-symmetrical tensegrity 1979 Floating breakwater 1980 Tensegrity truss 1983 Hanging storage shelf unit Bibliography 4d Timelock (1928) Nine Chains to the Moon (1938) Untitled Epic Poem on the History of Industrialization (1962) Ideas and Integrities, a Spontaneous Autobiographical Disclosure (1963) No More Secondhand God and Other Writings (1963) Education Automation: Freeing the Scholar to Return (1963) What I Have Learned: A Collection of 20 Autobiographical Essays, Chapter "How Little I Know", (1968) Operating Manual for Spaceship Earth (1968) Utopia or Oblivion (1969) Approaching the Benign Environment (1970) (with Eric A. Walker and James R. Killian, Jr.) I Seem to Be a Verb (1970) coauthors Jerome Agel, Quentin Fiore, Intuition (1970) Buckminster Fuller to Children of Earth (1972) compiled and photographed by Cam Smith, The Buckminster Fuller Reader (1972) editor James Meller, The Dymaxion World of Buckminster Fuller (1960, 1973) coauthor Robert Marks, Earth, Inc (1973) Synergetics: Explorations in the Geometry of Thinking (1975) in collaboration with E.J. Applewhite with a preface and contribution by Arthur L. Loeb, Tetrascroll: Goldilocks and the Three Bears, A Cosmic Fairy Tale (1975) And It Came to Pass — Not to Stay (1976) R. Buckminster Fuller on Education (1979) Synergetics 2: Further Explorations in the Geometry of Thinking (1979) in collaboration with E.J. Applewhite Buckminster Fuller – Autobiographical Monologue/Scenario (1980) page 54, R. Buckminster Fuller, documented and edited by Robert Snyder, St. 
Martin's Press, Inc., Buckminster Fuller Sketchbook (1981) Critical Path (1981) Grunch of Giants (1983) Inventions: The Patented Works of R. Buckminster Fuller (1983) Humans in Universe (1983) coauthor Anwar Dil, Cosmography: A Posthumous Scenario for the Future of Humanity (1992) coauthor Kiyoshi Kuromiya, See also Amundsen-Scott South Pole Station The Buckminster Fuller Challenge Bucky Ball Cloud Nine (tensegrity sphere) Design science revolution Drop City Emissions Reduction Currency System Kārlis Johansons, tensegrity innovator Kenneth Snelson, tensegrity sculptor Noosphere Old Man River's City project Space frame Spome Whole Earth Catalog Post-scarcity economy References Further reading Ward, James, ed., The Artifacts Of R. Buckminster Fuller, A Comprehensive Collection of His Designs and Drawings in Four Volumes: Volume One. The Dymaxion Experiment, 1926–1943; Volume Two. Dymaxion Deployment, 1927–1946; Volume Three. The Geodesic Revolution, Part 1, 1947–1959; Volume Four. The Geodesic Revolution, Part 2, 1960–1983: Edited with descriptions by James Ward. Garland Publishing, New York. 1984 (vol. 1, vol. 2, vol. 3, vol. 4) External links The Estate of R. Buckminster Fuller Buckminster Fuller Institute
Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without hue, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance. Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. As of September 2019, the darkest material is made by MIT engineers from vertically aligned carbon nanotubes. Etymology The word black comes from Old English blæc ("black, dark", also, "ink"), from Proto-Germanic *blakkaz ("burned"), from Proto-Indo-European *bhleg- ("to burn, gleam, shine, flash"), from base *bhel- ("to shine"), related to Old Saxon blak ("ink"), Old High German blach ("black"), Old Norse blakkr ("dark"), Dutch blaken ("to burn"), and Swedish bläck ("ink"). More distant cognates include Latin flagrare ("to blaze, glow, burn"), and Ancient Greek phlegein ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors, if they had the same intensity. Kuanos could mean both dark blue and black. The Ancient Romans had two words for black: ater was a flat, dull black, while niger was a brilliant, saturated black. Ater has vanished from the vocabulary, but niger was the source of the country name Nigeria, the English word Negro, and the word for "black" in most modern Romance languages (French: noir; Spanish and Portuguese: negro; Italian: nero; Romanian: negru). Old High German also had two words for black: swartz for dull black and blach for a luminous black. These are paralleled in Middle English by the terms swart for dull black and blaek for luminous black. Swart still survives as the word swarthy, while blaek became the modern English black. The former is cognate with the words used for black in most modern Germanic languages aside from English (German: schwarz, Dutch: zwart, Swedish: svart, Danish: sort, Icelandic: svartr). In heraldry, the word used for the black color is sable, named for the black fur of the sable, an animal. Art Prehistoric Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago. They began by using charcoal, and later achieved darker pigments by burning bones or grinding a powder of manganese oxide. 
Ancient For the ancient Egyptians, black had positive associations; being the color of fertility and the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead. To ancient Greeks, black represented the underworld, separated from the living by the river Acheron, whose water ran black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black, against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background. In the social hierarchy of ancient Rome, purple was the color reserved for the Emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white the color worn by the priests, and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were not solid or lasting, so the blacks often faded to gray or brown. In Latin, the word for black, ater and to darken, atere, were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". Black was also the Roman color of death and mourning. In the 2nd century BC Roman magistrates began to wear a dark toga, called a toga pulla, to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, exchanged the black for a white toga. In Roman poetry, death was called the hora nigra, the black hour. The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held sacred the raven. They believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening. Postclassical In the early Middle Ages, black was commonly associated with darkness and evil. In Medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair. 12th and 13th centuries In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black. Saint Bernard of Clairvaux, the founder of the Cistercians responded that black was the color of the devil, hell, "of death and sin", while white represented "purity, innocence and all the virtues". 
Black symbolized both power and secrecy in the medieval world. The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy. Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white engravings and prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens. 14th and 15th centuries In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty. In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Second, magistrates and government officials began to wear black robes as a sign of the importance and seriousness of their positions. A third reason was the passage of sumptuary laws in some parts of Europe which prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics. The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan and the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts. Modern 16th and 17th centuries While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. 
They saw the color red, worn by the Pope and his Cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white. In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray. In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called Kattenstoet, black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft. Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches. 18th and 19th centuries In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color. Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the engravings of French artist Gustave Doré. A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, typically with a white shirt and open collar and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet. 
The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America. Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, Arrangement in grey and black number one (1871), better known as Whistler's Mother. Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black." Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black." Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.) 20th and 21st centuries In the 20th century, black was the color of Italian and German fascism. (See the section political movements.) In art, black regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the Black Square in 1915, which is widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea." Black was also appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument." In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who did not accept established norms and values. In Paris, it was worn by Left-Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as The Wild One, with Marlon Brando. 
By the end of the 20th century, black was the emblematic color of the punk subculture, punk fashion, and the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian-era mourning dress. In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1960, John F. Kennedy was the last American President to be inaugurated wearing formal dress; President Lyndon Johnson and all his successors were inaugurated wearing business suits. Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in Vogue magazine. She famously said, "A woman needs just three things: a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film Breakfast at Tiffany's. The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement, which lasted from the early 1960s until the late 1980s, and the Black Lives Matter movement in the 2010s and 2020s. It also popularized the slogan "Black is Beautiful". Science Physics In the visible spectrum, black is the result of the absorption of all light wavelengths. Black can be defined as the visual impression (or color) experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye "look black". A black pigment can, however, result from a combination of several pigments that collectively absorb all colors. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called "black". This provides two superficially opposite but actually complementary descriptions of black. Black is the absorption of all colors of light, or an exhaustive combination of multiple colors of pigment. In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is by using black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce. Absorption of light is contrasted by transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector). 
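To make the "exhaustive combination of pigments" description concrete, here is a minimal sketch of idealized subtractive mixing; the reflectance numbers and the mix_pigments function are illustrative assumptions rather than measured pigment data, but they show how layering pigments drives reflected light toward zero in every channel.

# Idealized subtractive mixing: each pigment passes only part of the red,
# green and blue light striking it, and layering pigments multiplies those
# per-channel reflectance factors. The values below are assumptions.

def mix_pigments(*pigments):
    """Combine pigments by multiplying per-channel reflectance (0.0 to 1.0)."""
    r = g = b = 1.0
    for pr, pg, pb in pigments:
        r, g, b = r * pr, g * pg, b * pb
    return r, g, b

cyan = (0.10, 0.90, 0.90)      # absorbs mostly red
magenta = (0.90, 0.10, 0.90)   # absorbs mostly green
yellow = (0.90, 0.90, 0.10)    # absorbs mostly blue

print(mix_pigments(cyan, magenta, yellow))
# roughly (0.081, 0.081, 0.081): almost no light reflected in any channel,
# which the eye reads as a slightly muddy black.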
As of September 2019, the darkest material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to have a 99.995% absorption rate of any incoming light. This surpasses any former darkest materials including Vantablack, which has a peak absorption rate of 99.965% in the visible spectrum. Chemistry Pigments The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of a wood with resin. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment. Vine black was produced in Roman times by burning the cut branches of grapevines. It could also be produced by burning the remains of the crushed grapes, which were collected and dried in an oven. According to the historian Vitruvius, the deepness and richness of the black produced corresponded to the quality of the wine. The finest wines produced a black with a bluish tinge the color of indigo. The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use." Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint. Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment. Lamp black was used as a pigment for painting and frescoes, as a dye for fabrics, and in some societies for making tattoos. The 15th century Florentine painter Cennino Cennini described how it was made during the Renaissance: "... take a lamp full of linseed oil and fill the lamp with the oil and light the lamp. Then place it, lit, under a thoroughly clean pan and make sure that the flame from the lamp is two or three fingers from the bottom of the pan. The smoke that comes off the flame will hit the bottom of the pan and gather, becoming thick. Wait a bit. take the pan and brush this pigment (that is, this smoke) onto paper or into a pot with something. And it is not necessary to mull or grind it because it is a very fine pigment. Re-fill the lamp with the oil and put it under the pan like this several times and, in this way, make as much of it as is necessary." This same pigment was used by Indian artists to paint the Ajanta Caves, and as dye in ancient Japan. Ivory black, also known as bone char, was originally produced by burning ivory and mixing the resulting charcoal powder with oil. The color is still made today, but ordinary animal bones are substituted for ivory. Mars black is a black pigment made of synthetic iron oxides. It is commonly used in water-colors and oil painting. It takes its name from Mars, the god of war and patron of iron. 
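As a back-of-the-envelope illustration only, the absorption figures reported earlier in this section for the MIT nanotube coating and Vantablack can be restated as reflected fractions, which makes the gap between the two easier to see; the calculation below simply rearranges those two quoted percentages.

# Reflected (non-absorbed) fraction = 1 - reported absorption rate.
mit_nanotube = 0.99995   # reported absorption of the MIT carbon nanotube material
vantablack = 0.99965     # reported peak absorption of Vantablack in the visible

reflect_mit = 1 - mit_nanotube   # 0.00005, i.e. 0.005% of incoming light
reflect_vb = 1 - vantablack      # 0.00035, i.e. 0.035% of incoming light

print(f"Vantablack reflects about {reflect_vb / reflect_mit:.0f}x more light")
# about 7x more, even though the two absorption percentages look nearly identical.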
Dyes Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from bark, roots or fruits of different trees; usually walnuts, chestnuts, or certain oak trees. The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black. A much richer and deeper black dye was eventually found, made from the oak apple or "gall-nut". The gall-nut is a small round tumor which grows on oak and other varieties of trees. They range in size from 2–5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts was needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for clothes of the kings and princes of Europe. Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th-century English logwood logging camps. Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks. Inks The first known inks were made by the Chinese, and date back to the 23rd century B.C. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting. India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called masi. In India, the black color of the ink came from bone char, tar, pitch and other substances. The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) 
It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants. Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nut. It was the standard writing and drawing ink in Europe, from about the 12th century to the 19th century, and remained in use well into the 20th century. Astronomy A black hole is a region of spacetime where gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined boundary called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Although a black hole itself is black, infalling material forms an accretion disk, one of the brightest types of object in the universe. Black-body radiation refers to the radiation coming from a body at a given temperature where all incoming energy (light) is converted to heat. Black sky refers to the appearance of space as one emerges from Earth's atmosphere. Why the night sky and space are black – Olbers' paradox The fact that outer space is black is sometimes called Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. This contradiction was first noted in 1823 by German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black. The current accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black. The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere scattering light in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering. 
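The wavelength dependence behind that statement can be made concrete with a short, hedged calculation: Rayleigh scattering strength varies roughly as the inverse fourth power of wavelength, and the wavelengths used below are typical illustrative values, not measurements.

# Rayleigh scattering strength ~ 1 / wavelength**4, so short-wavelength blue
# light is scattered far more strongly than long-wavelength red light.
blue_nm, red_nm = 450.0, 650.0   # typical wavelengths in nanometres (assumed)

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")
# about 4.4x, which is why scattered sunlight makes the daytime sky look blue,
# while the night side of Earth, with no sunlight to scatter, stays black.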
The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury. Biology Culture In China, the color black is associated with water, one of the five fundamental elements believed to compose all things; and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first Emperor of China, Qin Shi Huang, seized power from the Zhou Dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han Dynasty appeared in 206 BC was red restored as the imperial color. In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th- and 11th-century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions. In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day. In Indonesia black is associated with depth, the subterranean world, demons, disaster, and the left hand. When black is combined with white, however, it symbolizes harmony and equilibrium. Political movements Anarchism Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it is usually represented with a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States, Russia and many other countries all around the world. The Black Army was a collection of anarchist military units which fought for a stateless society in Ukraine in the Russian Civil War. Although it initially fought against the reactionary White Army alongside the Bolshevik Red Army, it was later defeated by the Communist forces. It was officially known as the Revolutionary Insurgent Army of Ukraine, and originally founded by the anarchist Nestor Makhno. Fascism The Blackshirts were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security (Milizia Volontaria per la Sicurezza Nazionale, or MVSN). Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. 
They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the blackshirts. Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1870 to 1918. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the Schutzstaffel or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II. The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind". Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy", prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists. Black shirts were also worn by the British Union of Fascists before World War II, and members of fascist movements in the Netherlands. Patriotic resistance The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German confederation. In 1866, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today. Military Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in others, a black beret is common. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia. The black beret and the color black is also a symbol of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies. 
The silver-on-black skull and crossbones symbol, or Totenkopf, and a black uniform were used by Hussars and Black Brunswickers, the German Panzerwaffe and the Nazi Schutzstaffel, and the U.S. 400th Missile Squadron (crossed missiles), and continue in use with the Estonian Kuperjanov Battalion. Religion In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism. In Christianity, the devil is often called the "prince of darkness". The term was used in John Milton's poem Paradise Lost, published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase princeps tenebrarum, which occurs in the Acts of Pilate, written in the fourth century, in the 11th-century hymn Rhythmus de die mortis by Pietro Damiani, and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in King Lear by William Shakespeare, Act III, Scene IV, l. 14: "The prince of darkness is a gentleman." Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence. In Islam, black, along with green, plays an important symbolic role. It is the color of the Black Standard, the banner that is said to have been carried by the soldiers of Muhammad. It is also used as a symbol in Shi'a Islam (heralding the advent of the Mahdi), and on the flags of followers of Islamism and Jihadism. In Hinduism, the goddess Kali, goddess of time and change, is portrayed with black or dark blue skin, wearing a necklace adorned with severed heads and hands. Her name means "The black one". She destroys anger and passion according to Hindu mythology, and her devotees are supposed to abstain from meat or intoxication. Kali does not eat meat, but it is the śāstra's injunction that those who are unable to give up meat-eating may sacrifice one goat, a small animal rather than a cow, before the goddess Kali on the night of amāvāsya (the new moon), and then eat it. In Paganism, black represents dignity, force, stability, and protection. The color is often used to banish and release negative energies, or for binding. An athame is a ceremonial blade, often with a black handle, which is used in some forms of witchcraft. Sports The national rugby union team of New Zealand is called the All Blacks, in reference to their black outfits, and the color is also shared by other New Zealand national teams such as the Black Caps (cricket) and the Kiwis (rugby league). Association football (soccer) referees traditionally wear all-black uniforms; however, nowadays other uniform colors may also be worn. In auto racing, a black flag signals a driver to go into the pits. In baseball, "the black" refers to the batter's eye, a blacked-out area around the center-field bleachers, painted black to give hitters a decent background for pitched balls. A large number of teams have uniforms designed with black colors even when the team does not normally feature that color. Many feel the color sometimes imparts a psychological advantage to its wearers. 
Black is used by numerous professional and collegiate sports teams. Idioms and expressions In general, the Negro race of African origin is called "Black", while the Caucasian race of European origin is called "White". In the United States, "Black Friday" (the day after Thanksgiving Day, the fourth Thursday in November) is traditionally the busiest shopping day of the year. Many Americans are on holiday because of Thanksgiving, and many retailers open earlier and close later than normal, and offer special prices. The day's name originated in Philadelphia sometime before 1961, and originally was used to describe the heavy and disruptive downtown pedestrian and vehicle traffic which would occur on that day (Martin L. Apfelbaum, Philadelphia's "Black Friday," American Philatelist, vol. 69, no. 4, p. 239, January 1966). Later an alternative explanation began to be offered: that "Black Friday" indicates the point in the year that retailers begin to turn a profit, or are "in the black", because of the large volume of sales on that day. "In the black" means profitable. Accountants originally used black ink in ledgers to indicate profit, and red ink to indicate a loss. Black Friday also refers to any particularly disastrous day on financial markets. The first Black Friday, September 24, 1869, was caused by the efforts of two speculators, Jay Gould and James Fisk, to corner the gold market on the New York Gold Exchange. A blacklist is a list of undesirable persons or entities (to be placed on the list is to be "blacklisted"). Black comedy is a form of comedy dealing with morbid and serious topics. The expression is similar to black humor or black humour. A black mark against a person relates to something bad they have done. A black mood is a bad one (cf. Winston Churchill's clinical depression, which he called "my black dog"). Black market is used to denote the trade of illegal goods, or alternatively the illegal trade of otherwise legal items at considerably higher prices, e.g. to evade rationing. Black propaganda is the use of known falsehoods, partial truths, or masquerades in propaganda to confuse an opponent. Blackmail is the act of threatening to do something that would hurt someone, such as revealing sensitive information about them, in order to force the threatened party to fulfill certain demands. Ordinarily, such a threat is illegal. If the black eight-ball, in billiards, is sunk before all others are out of play, the player loses. The black sheep of the family is the ne'er-do-well. To blackball someone is to block their entry into a club or some such institution. In the traditional English gentlemen's club, members vote on the admission of a candidate by secretly placing a white or black ball in a hat. If upon the completion of voting, there was even one black ball amongst the white, the candidate would be denied membership, and he would never know who had "blackballed" him. Black tea in Western culture is known as "crimson tea" in Chinese and culturally influenced languages (紅茶, Mandarin Chinese hóngchá; Japanese kōcha; Korean hongcha). "The black" is a wildfire suppression term referring to a burned area on a wildfire capable of acting as a safety zone. Black coffee refers to coffee without sugar or cream. Associations and symbolism Mourning In the West, black is commonly associated with mourning and bereavement, and usually worn at funerals and memorial services. 
In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia such as Vietnam, white is a color of mourning. In Victorian England, the colors and fabrics of mourning were specified in an unofficial dress code: "non-reflective black paramatta and crape for the first year of deepest mourning, followed by nine months of dullish black silk, heavily trimmed with crape, and then three months when crape was discarded. Paramatta was a fabric of combined silk and wool or cotton; crape was a harsh black silk fabric with a crimped appearance produced by heat. Widows were allowed to change into the colors of half-mourning, such as gray and lavender, black and white, for the final six months." A "black day" (or week or month) usually refers to a tragic date. The Romans marked fasti days with white stones and nefasti days with black. The term is often used to remember massacres. Black months include Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, when members of the Tamil population were killed by the Sinhalese government. In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street Crash of October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday; it was preceded by Black Thursday, a downturn on October 24 the previous week. Darkness and evil In Western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic. In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. The vampire of literature and film, such as Count Dracula of the Bram Stoker novel, dresses in black and can only move at night. The Wicked Witch of the West in the 1939 film The Wizard of Oz became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dress as witches for Halloween parties and parades. Power, authority and solemnity Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1271–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and the authority of the church. Most police uniforms were black until the 20th century, when they were largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak ("blacks") after their uniform. Black today is the most common color for limousines and the official cars of government officials.
Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics. Functionality In the 19th and 20th centuries, many machines and devices, large and small, were painted black to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Airplanes were the one means of transportation that was rarely, if ever, painted black. Black house paint is becoming more popular, with Sherwin-Williams reporting that the color, Tricorn Black, was the 6th most popular exterior house paint color in Canada and the 12th most popular paint in the United States in 2018. Ethnography The term "black" is often used in the West to describe people whose skin is darker. In the United States, it is particularly used to describe African Americans. The terms for African Americans have changed over the years, as shown by the categories in the United States Census, taken every ten years. In the first U.S. Census, taken in 1790, just four categories were used: Free White males, Free White females, other free persons, and slaves. In the 1820 census the new category "colored" was added. In the 1850 census, slaves were listed by owner, and a B indicated black, while an M indicated "mulatto". In the 1890 census, the categories for race were white, black, mulatto, quadroon (a person one-quarter black), octoroon (a person one-eighth black), Chinese, Japanese, or American Indian. In the 1930 census, anyone with any black blood was supposed to be listed as "Negro". In the 1970 census, the category "Negro or black" was used for the first time. In the 2000 and 2010 censuses, the category "Black or African-American" was used, defined as "a person having their origin in any of the racial groups in Africa." In 2012, 12.1 percent of Americans identified themselves as Black or African-American. Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 2001 census. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others). In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black. In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders. In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow). In 2008, 6.8 percent of the population identified themselves as "preto". Opposite of white Black and white have often been used to describe opposites, particularly light and darkness, and good and evil.
In Medieval literature, the white knight usually represented virtue, the black knight something mysterious and sinister. In American westerns, the hero often wore a white hat, the villain a black hat. In the original game of chess, invented in Persia or India, the colors of the two sides varied; a 12th-century Iranian chess set in the New York Metropolitan Museum of Art has red and green pieces. But when the game was imported into Europe, the colors, corresponding to European culture, usually became black and white. Studies have shown that something printed in black letters on white has more authority with readers than any other color of printing. In philosophy and arguments, the issue is often described as black-and-white, meaning that the issue at hand is dichotomized (having two clear, opposing sides with no middle ground). Conspiracy Black is commonly associated with secrecy. The Black Chamber was a term given to an office which secretly opened and read diplomatic mail and broke codes. Queen Elizabeth I had such an office, headed by her Secretary, Sir Francis Walsingham, which successfully broke the Spanish codes and broke up several plots against the Queen. In France a cabinet noir was established inside the French post office by Louis XIII to open diplomatic mail. It was closed during the French Revolution but re-opened under Napoleon I. The Habsburg Empire and the Dutch Republic had similar black chambers. The United States created a secret peacetime Black Chamber, called the Cipher Bureau, in 1919. It was funded by the State Department and the Army and disguised as a commercial company in New York. It successfully broke a number of diplomatic codes, including the code of the Japanese government. It was closed down in 1929 after the State Department withdrew funding, when the new Secretary of State, Henry Stimson, stated that "Gentlemen do not read each other's mail." The Cipher Bureau was the ancestor of the U.S. National Security Agency. A black project is a secret, unacknowledged military project, such as Enigma decryption during World War II, or a secret counter-narcotics or police sting operation. Black ops are covert operations carried out by a government, government agency or military. A black budget is a government budget allocated for classified or other secret operations of a nation; it is an account of expenses and spending related to military research and covert operations, and is mostly classified for security reasons. Elegant fashion Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white. Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down to the knees. In the evening it was the custom of the men to leave the women after dinner to go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State. Thereafter the jacket became known as a tuxedo in the U.S.
The term "smoking" is still used today in Russia and other countries. The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself. For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: "Black is the color that goes with everything. If you're wearing black, you're on sure ground." Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, "elegance is a combination of distinction, naturalness, care and simplicity," and black exemplified elegance. The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché. Many performers of both popular and European classical music, including French singers Edith Piaf and Juliette Gréco, and violinist Joshua Bell have traditionally worn black on stage during performances. A black costume was usually chosen as part of their image or stage persona, or because it did not distract from the music, or sometimes for a political reason. Country-western singer Johnny Cash always wore black on stage. In 1971, Cash wrote the song "Man in Black" to explain why he dressed in that color: "We're doing mighty fine I do suppose / In our streak of lightning cars and fancy clothes / But just so we're reminded of the ones who are held back / Up front there ought to be a man in black." See also Black Rose (disambiguation) Lists of colors Rich black, which is different from using black ink alone, in printing. Shades of black References Notes and citations Bibliography Shades of gray Color Spoken articles Darkness Web colors Cultural aspects of death
Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers, most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry. The nature of the work at Bletchley remained secret until many years after the war. According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses, including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development. More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site. History The site appears in the Domesday Book of 1086 as part of the Manor of Eaton. Browne Willis built a mansion there in 1711, but after Thomas Harrison purchased the property in 1793 this was pulled down. It was first known as Bletchley Park after its purchase by the architect Samuel Lipscomb Seckham in 1877, who built a house there. The estate was bought in 1883 by Sir Herbert Samuel Leon, who expanded the then-existing house into what architect Landis Gores called a "maudlin and monstrous pile" combining Victorian Gothic, Tudor, and Dutch Baroque styles. At his Christmas family gatherings there was a fox hunting meet on Boxing Day with glasses of sloe gin from the butler, and the house was always "humming with servants". With 40 gardeners, a flower bed of yellow daffodils could become a sea of red tulips overnight. After the death of Herbert Leon in 1926, the estate continued to be occupied by his widow Fanny Leon (née Higham) until her death in 1937. In 1938, the mansion and much of the site were bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and its land for £6,000 for use by GC&CS and SIS in the event of war. He used his own money, as the Government said it did not have the budget to do so.
A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party") was Bletchley's geographical centrality. It was almost immediately adjacent to Bletchley railway station, where the "Varsity Line" between Oxford and Cambridge (whose universities were expected to supply many of the code-breakers) met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5), was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford. Bletchley Park was known as "B.P." to those who worked there. "Station X" (X = Roman numeral ten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war. The formal posting of the many "Wrens" (members of the Women's Royal Naval Service) working there was to HMS Pembroke V. Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green. The postal address that staff had to use was "Room 47, Foreign Office". After the war, the Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s. The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down and tenants were leaving. In 1990 the site was at risk of being sold for housing development. However, Milton Keynes Council made it into a conservation area. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance. The initial trustees included Roger Bristow, Ted Enever, Peter Wescombe, Dr Peter Jarvis of the Bletchley Archaeological & Historical Society, and Tony Sale, who in 1994 became the first director of the Bletchley Park Museums. Personnel Admiral Hugh Sinclair was the founder and head of GC&CS between 1919 and 1938, with Commander Alastair Denniston as operational head of the organization from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people had a variety of backgrounds: linguists and chess champions were common, and Knox's field was papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills. On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting "men of the professor type". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, The Daily Telegraph was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort".
Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; the historian Harry Hinsley; and the chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst. When seeking to recruit more suitably advanced linguists, John Tiltman turned to Patrick Wilkinson of the Italian section for advice, and he suggested asking Lord Lindsay of Birker, of Balliol College, Oxford, S. W. Grose, and Martin Charlesworth, of St John's College, Cambridge, to recommend classical scholars or applicants to their colleges. This eclectic staff of "Boffins and Debs" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society". During a morale-boosting visit on 9 September 1941, Winston Churchill reportedly remarked to Denniston or Menzies: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally." Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was: "Action this day: make sure they have all they want on extreme priority and report to me that this has been done." After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford, where it was known locally as "the Spy School"), staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German. In January 1945, at the peak of codebreaking efforts, nearly 10,000 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers, and Ruth Briggs, a German scholar, who worked within the Naval Section. The female staff in Dillwyn Knox's section were sometimes termed "Dilly's Fillies". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher. Many of the women had backgrounds in languages, particularly French, German and Italian.
Among them were Rozanne Colchester, a translator who worked mainly for the Italian air force section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals. Alan Brooke, the Chief of the Imperial General Staff (CIGS), frequently refers to "intercepts" in his secret wartime diary:
16 April 1942: Took lunch in car and went to see the organization for breaking down ciphers [Bletchley Park] – a wonderful set of professors and genii! I marvel at the work they succeed in doing.
28 June 1945: After lunch (with Andrew Cunningham (RN) and Sinclair (RAF)) we went to "The Park" … I began by addressing some 400 of the workers, who consist of all 3 services, both sexes, and civilians. They come from every sort of walk of life: professors, students, actors, dancers, mathematicians, electricians, signallers, etc. I thanked them on behalf of the Chiefs of Staff and congratulated them on the results of their work. We then toured round the establishment and had tea before returning.
For a long time, the British Government failed to acknowledge the contributions the personnel at Bletchley Park had made. Their work achieved official recognition only in 2009. Secrecy Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret", rated higher even than the normally highest classification, and security was paramount. All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ..." Nevertheless, there were security leaks. Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941 that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks "increase in number and seriousness". Without doubt, the most serious of these was that Bletchley Park had been infiltrated by John Cairncross, the notorious Soviet mole and member of the Cambridge Spy Ring, who leaked Ultra material to Moscow. Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearby Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have [still] photographs" of the park and its associated sites. Early work The first personnel of the Government Code and Cypher School (GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6.
Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections. After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence. (See 1943 BRUSA Agreement.) In contrast, the Soviet Union was never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat. The only direct enemy damage to the site was done on 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued. Intelligence reporting Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3, which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3. Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L". It also housed the Traffic Analysis Section, SIXTA. An important function that allowed the synthesis of raw messages into valuable military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field. Naval Enigma deciphering was in Hut 8, with translation in Hut 4. Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" for known-plaintext attacks on the daily naval Enigma key. Listening stations Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is the Roman numeral "ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site. Subsequently, other listening stations, the Y-stations, such as those at Chicksands in Bedfordshire, Beaumanor Hall in Leicestershire (where the headquarters of the War Office "Y" Group was located) and Beeston Hill Y Station in Norfolk, gathered raw signals for processing at Bletchley.
Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter. Additional buildings The wartime needs required the building of additional accommodation. Huts Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation.
Hut 1: The first hut, built in 1939, used to house the Wireless Station for a short time, and later administrative functions such as transport, typing, and Bombe maintenance. The first Bombe, "Victory", was initially housed here.
Hut 2: A recreational hut for "beer, tea, and relaxation".
Hut 3: Intelligence: translation and analysis of Army and Air Force decrypts.
Hut 4: Naval intelligence: analysis of Naval Enigma and Hagelin decrypts.
Hut 5: Military intelligence, including Italian, Spanish, and Portuguese ciphers and German police codes.
Hut 6: Cryptanalysis of Army and Air Force Enigma.
Hut 7: Cryptanalysis of Japanese naval codes and intelligence.
Hut 8: Cryptanalysis of Naval Enigma.
Hut 9: ISOS (Intelligence Section Oliver Strachey).
Hut 10: Secret Intelligence Service (SIS or MI6) codes, Air and Meteorological sections.
Hut 11: Bombe building.
Hut 14: Communications centre.
Hut 15: SIXTA (Signals Intelligence and Traffic Analysis).
Hut 16: ISK (Intelligence Service Knox), Abwehr ciphers.
Hut 18: ISOS (Intelligence Section Oliver Strachey).
Hut 23: Primarily used to house the engineering department. After February 1943, Hut 3 was renamed Hut 23.
Blocks In addition to the wooden huts, there were a number of brick-built "blocks".
Block A: Naval Intelligence.
Block B: Italian Air and Naval, and Japanese code breaking.
Block C: Stored the substantial punch-card indexes.
Block D: From February 1943 it housed those from Hut 3, who synthesised intelligence from multiple sources, as well as Huts 6 and 8 and SIXTA.
Block E: Incoming and outgoing Radio Transmission and TypeX.
Block F: Included the Newmanry and Testery, and the Japanese Military Air Section. It has since been demolished.
Block G: Traffic analysis and deception operations.
Block H: Tunny and Colossus (now The National Museum of Computing).
Work on specific countries' signals German signals Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine used for high command messages, known as Fish. Five weeks before the outbreak of war, Warsaw's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages. The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine weighed about a ton. At its peak, GC&CS was reading approximately 4,000 messages per day.
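Bombe runs were guided by "cribs", stretches of guessed plaintext such as those supplied by the dockyard cipher mentioned earlier. One well-documented property that narrowed the search is that an Enigma machine could never encipher a letter as itself (a consequence of its reflector), so any alignment of a crib against an intercept in which some letter would have to map to itself could be discarded before any machine time was spent on it. The short Python sketch below illustrates only that preliminary filtering idea, not the bombe's actual electromechanical logic; the sample intercept and crib are invented for the example.

```python
# A minimal sketch of crib-position filtering, not the bombe algorithm itself.
# It relies on one documented Enigma property: no letter ever enciphers to itself.
# The ciphertext and crib below are invented purely for illustration.

def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Return every offset at which the crib could legally align with the intercept."""
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        # Reject any alignment where a plaintext letter would have to
        # encipher to the identical ciphertext letter.
        if all(p != c for p, c in zip(crib, window)):
            positions.append(offset)
    return positions

if __name__ == "__main__":
    intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"   # invented ciphertext
    crib = "WETTERBERICHT"                      # "weather report", a classic crib
    print(possible_crib_positions(intercept, crib))
```

In practice, the alignments that survived this kind of elimination were worked up into "menus" of letter relationships, which the bombes then tested electrically against candidate rotor settings; that stage is beyond this sketch.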
As a hedge against enemy attack most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst. Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy Bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links. The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While not changing the events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions. Italian signals Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers. Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger; and Mavis Lever. Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory. 
Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them. On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, J. R. M. Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 per cent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes. A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS in the Heliopolis Museum, Cairo, and then in the Villa Laurens, Alexandria. Soviet signals Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections, at Wavendon (a country house near Bletchley) and at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square. Japanese signals An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then to Colombo, Ceylon, then to Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India. In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman.
By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote that "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers". Postwar Continued secrecy After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as "the geese that laid the golden eggs and never cackled". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print. With the publication of F. W. Winterbotham's The Ultra Secret (1974), public discussion of Bletchley Park's work finally became possible, although even today some former staff still consider themselves bound to silence. Professor Brian Randell was researching the history of computer science in Britain in 1975–76 for a conference on the history of computing held at the Los Alamos National Laboratory, New Mexico, on 10–15 June 1976, and received permission to present a paper on the wartime development of the Colossi at the Post Office Research Station, Dollis Hill. (In October 1975 the British Government had released a series of captioned photographs from the Public Record Office.) The interest in the "revelations" in his paper resulted in a special evening meeting at which Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article, "The First Electronic Computer", in several journals. In July 2009 the British government announced that Bletchley personnel would be recognised with a commemorative badge. Site After the war, the site passed through a succession of hands and saw a number of uses, including as a teacher-training college and local GPO headquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment. In February 1992, the Milton Keynes Borough Council declared most of the Park a conservation area, and the Bletchley Park Trust was formed to maintain the site as a museum. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994. In 1999 the land owners, the Property Advisors to the Civil Estate and BT, granted a lease to the Trust giving it control over most of the site. Heritage attraction June 2014 saw the completion of an £8 million restoration project by the museum design specialist Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess's paternal grandmother, Valerie, and Valerie's twin sister, Mary (née Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office Civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers. Exhibitions Block C Visitor Centre Secrets Revealed introduction The Road to Bletchley Park. Codebreaking in World War One. Intel Security Cybersecurity exhibition.
Online security and privacy in the 21st Century. Block B Lorenz Cipher Alan Turing Enigma machines Japanese codes Home Front exhibition. How people lived in WW2 The Mansion Office of Alistair Denniston Library. Dressed as a World War II naval intelligence office The Imitation Game exhibition Gordon Welchman: Architect of Ultra Intelligence exhibition Huts 3 and 6. Codebreaking offices as they would have looked during World War II. Hut 8. Interactive exhibitions explaining codebreaking Alan Turing's office Pigeon exhibition. The use of pigeons in World War II. Hut 11. Life as a WRNS Bombe operator Hut 12. Bletchley Park: Rescued and Restored. Items found during the restoration work. Wartime garages Hut 19. 2366 Bletchley Park Air Training Corp Squadron Learning Department The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. Visits can be booked in advance during term time, where students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Their workshops cover introductions to codebreaking, cyber security and the story of Enigma and Lorenz. Funding In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. Simon Greenish joined as Director in 2006 to lead the fund-raising effort in a post he held until 2012 when Iain Standen took over the leadership role. In July 2008, a letter to The Times from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM, and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000. In October 2011, the Bletchley Park Trust received a £4.6m Heritage Lottery Fund grant to be used "to complete the restoration of the site, and to tell its story to the highest modern standards" on the condition that £1.7m of 'match funding' is raised by the Bletchley Park Trust. Just weeks later, Google contributed £550k and by June 2012 the trust had successfully raised £2.4m to unlock the grants to restore Huts 3 and 6, as well as develop its exhibition centre in Block C. Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms. Due to the COVID-19 pandemic the Trust expected to lose more than £2m in 2020 and be required to cut a third of its workforce. Former MP John Leech asked tech giants Amazon, Apple, Google, Facebook and Microsoft to donate £400,000 each to secure the future of the Trust. Leech had led the successful campaign to pardon Alan Turing and implement Turing's Law. Other organisations sharing the campus The National Museum of Computing The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer which was used on the high-level Lorenz cipher, codenamed Tunny by the British. The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. 
Its aim is "To collect and restore computer systems particularly those developed in Britain and to enable people to explore that collection for inspiration, learning and enjoyment." Through its many exhibits, the museum displays the story of computing through the mainframes of the 1960s and 1970s, and the rise of personal computing in the 1980s. It has a policy of having as many of the exhibits as possible in full working order. Science and Innovation Centre This consists of serviced office accommodation housed in Bletchley Park's Blocks A and E, and the upper floors of the Mansion. Its aim is to foster the growth and development of dynamic knowledge-based start-ups and other businesses. Proposed National College of Cyber Security In April 2020 Bletchley Park Capital Partners, a private company run by Tim Reynolds, Deputy Chairman of the National Museum of Computing, announced plans to sell off the freehold to part of the site containing former Block G for commercial development. Offers of between £4m and £6m were reportedly being sought for the 3 acre plot, for which planning permission for employment purposes was granted in 2005. Previously, the construction of a National College of Cyber Security for students aged from 16 to 19 years old had been envisaged on the site, to be housed in Block G after renovation with funds supplied by the Bletchley Park Science and Innovation Centre. RSGB National Radio Centre The Radio Society of Great Britain's National Radio Centre (including a library, radio station, museum and bookshop) are in a newly constructed building close to the main Bletchley Park entrance. Final recognition Not until July 2009 did the British government fully acknowledge the contribution of the many people working for the Government Code and Cypher School ('G C & C S') at Bletchley. Only then was a commemorative medal struck to be presented to those involved. The gilded medal bears the inscription G C & C S 1939-1945 Bletchley Park and its Outstations. In popular culture Literature Bletchley featured heavily in Robert Harris' novel Enigma (1995). A fictionalised version of Bletchley Park is featured in Neal Stephenson's novel Cryptonomicon (1999). Bletchley Park plays a significant role in Connie Willis' novel All Clear (2010). The Agatha Christie novel N or M?, published in 1941, was about spies during the Second World War and featured a character called Major Bletchley. Christie was friends with one of the code-breakers at Bletchley Park, and MI5 thought that the character name might have been a joke indicating that she knew what was happening there. It turned out to be a coincidence. Bletchley Park is the setting of Kate Quinn's 2021 historical fiction novel, The Rose Code. Quinn used the likenesses of true veterans of Bletchley Park as inspiration for her story of three women who worked in some of the different areas at Bletchley Park. Film The film Enigma (2001), which was based upon Robert Harris's book and starred Kate Winslet, Saffron Burrows and Dougray Scott, is set in part in Bletchley Park. The film The Imitation Game (2014), starring Benedict Cumberbatch as Alan Turing, is set in Bletchley Park, and was partially filmed there. Radio The Radio Show Hut 33 is a Situation Comedy set in the fictional 33rd Hut of Bletchley Park. 
The Big Finish Productions Doctor Who audio Criss-Cross, released in September 2015, features the Sixth Doctor working undercover in Bletchley Park to decode a series of strange alien signals that have hindered his TARDIS, the audio also depicting his first meeting with his new companion Constance Clarke. The Bletchley Park Podcast began in August 2012, with new episodes being released approximately monthly. It features stories told by the codebreakers, staff and volunteers, audio from events and reports on the development of Bletchley Park. Television The 1979 ITV television serial Danger UXB featured the character Steven Mount, who was a codebreaker at Bletchley and was driven to a nervous breakdown (and eventual suicide) by the stressful and repetitive nature of the work. In Foyle's War, Adam Wainwright (Samantha Stewart's fiancé, then husband) is a former Bletchley Park codebreaker. The Second World War code-breaking sitcom pilot "Satsuma & Pumpkin" was recorded at Bletchley Park in 2003 and featured Bob Monkhouse, OBE, in his last ever screen role. The BBC declined to produce the show and develop it further, before creating effectively the same show on Radio 4 several years later, featuring some of the same cast, entitled Hut 33. Bletchley came to wider public attention with the documentary series Station X (1999). The 2012 ITV programme The Bletchley Circle is a set of murder mysteries set in 1952 and 1953. The protagonists are four female former Bletchley codebreakers, who use their skills to solve crimes. The pilot episode's opening scene was filmed on-site, and the set was asked to remain in place because of its faithful recreation of the period. The 2018 programme The Bletchley Circle: San Francisco is a spin-off of The Bletchley Circle. It takes place in San Francisco and features two characters from the original series. Ian McEwan's television play The Imitation Game (1980) concludes at Bletchley Park. Bletchley Park was featured in the sixth and final episode of the BBC TV documentary The Secret War (1977), presented and narrated by William Woodard. This episode featured interviews with Gordon Welchman, Harry Golombek, Peter Calvocoressi, F. W. Winterbotham, Max Newman, Jack Good, and Tommy Flowers. The Agent Carter season 2 episode "Smoke & Mirrors" reveals that Agent Peggy Carter worked at Bletchley Park early in the war before joining the Strategic Scientific Reserve. Theatre The play Breaking the Code (1986) is set at Bletchley Park. Location Bletchley Park is opposite Bletchley railway station. It is close to junctions 13 and 14 of the M1, northwest of London. See also: Arlington Hall; Beeston Hill Y Station; Danesfield House; Far East Combined Bureau (in Hong Kong prewar, then Singapore, Colombo (Ceylon) and Kilindini (Kenya)); List of people associated with Bletchley Park; List of women in Bletchley Park; National Cryptologic Museum; Newmanry; OP-20-G, the US Navy's cryptanalysis office in Washington, D.C.; Testery; Wireless Experimental Centre, operated by the Intelligence Corps outside Delhi; Y-stations.
Bede (672/3 – 26 May 735), also known as Saint Bede, The Venerable Bede, and Bede the Venerable, was an English monk, author, and scholar. He was one of the greatest teachers and writers during the Early Middle Ages, and his most famous work, Ecclesiastical History of the English People, gained him the title "The Father of English History". He served at the monastery of St Peter and its companion monastery of St Paul in the Kingdom of Northumbria of the Angles. Born on lands belonging to the twin monastery of Monkwearmouth–Jarrow in present-day Tyne and Wear, England, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow. Both of them survived a plague that struck in 686 and killed a majority of the population there. While Bede spent most of his life in the monastery, he travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria. His ecumenical writings were extensive and included a number of Biblical commentaries and other theological works of exegetical erudition. Another important area of study for Bede was the academic discipline of computus, otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped popularize the practice of dating forward from the birth of Christ (Anno Domini—in the year of our Lord), a practice which eventually became commonplace in medieval Europe. He is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800. In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to an impressive library, which included works by Eusebius, Orosius, and many others. Life Almost everything that is known of Bede's life is contained in the last chapter of his Ecclesiastical History of the English People, a history of the church in England. It was completed in about 731, and Bede implies that he was then in his fifty-ninth year, which would give a birth date in 672 or 673. A minor source of information is the letter by his disciple Cuthbert (not to be confused with the saint Cuthbert, who is mentioned in Bede's work), which relates Bede's death. Bede, in the Historia, gives his birthplace as "on the lands of this monastery". He is referring to the twinned monasteries of Monkwearmouth and Jarrow, in modern-day Wearside and Tyneside respectively. There is also a tradition that he was born at Monkton, two miles from the site where the monastery at Jarrow was later built. Bede says nothing of his origins, but his connections with men of noble ancestry suggest that his own family was well-to-do. Bede's first abbot was Benedict Biscop, and the names "Biscop" and "Beda" both appear in a list of the kings of Lindsey from around 800, further suggesting that Bede came from a noble family. Bede's name reflects West Saxon Bīeda (Northumbrian Bǣda, Anglian Bēda). It is an Old English short name formed on the root of bēodan "to bid, command".
The name also occurs in the Anglo-Saxon Chronicle, s.a. 501, as Bieda, one of the sons of the Saxon founder of Portsmouth. The Liber Vitae of Durham Cathedral names two priests with this name, one of whom is presumably Bede himself. Some manuscripts of the Life of Cuthbert, one of Bede's works, mention that Cuthbert's own priest was named Bede; it is possible that this priest is the other name listed in the Liber Vitae. At the age of seven, Bede was sent as a puer oblatus to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year. The dedication stone for the church has survived ; it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life it is possible that he helped in building the original church. In 686, plague broke out at Jarrow. The Life of Ceolfrith, written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14. When Bede was about 17 years old, Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked Bede's interest in the Easter dating controversy. In about 692, in Bede's nineteenth year, Bede was ordained a deacon by his diocesan bishop, John, who was bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon; but there is no record of whether Bede held any of these offices. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John. In about 701 Bede wrote his first works, the De Arte Metrica and De Schematibus et Tropis; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. A 6th-century Greek and Latin manuscript of Acts of the Apostles that is believed to have been used by Bede survives and is now in the Bodleian Library at University of Oxford. It is known as the Codex Laudianus. Bede may have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. 
It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of St Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works. In 708, some monks at Hexham accused Bede of having committed heresy in his work De Temporibus. The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation occurred in front of the bishop of Hexham, Wilfrid, who was present at a feast when some drunken monks made the accusation. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days to the monk, writing a letter setting forth his defence and asking that the letter also be read to Wilfrid. Bede had another brush with Wilfrid, for the historian says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor. In 733, Bede travelled to York to visit Ecgbert, who was then bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise unknown monastery of a monk named , a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed. It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his Historia Ecclesiastica. Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome. Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time. Bede died on the Feast of the Ascension, Thursday, 26 May 735, on the floor of his cell, singing "Glory be to the Father and to the Son and to the Holy Spirit" and was buried at Jarrow. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, "with frequent attacks of breathlessness but almost without pain", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. 
He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day. At three o'clock, according to Cuthbert, he asked for a box of his to be brought and distributed among the priests of the monastery "a few treasures" of his: "some pepper, and napkins, and some incense". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards. The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock Wednesday afternoon of 25 May, by the time of the final dictation it was considered 26 May, although it might still have been 25 May in modern usage. Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as "Bede's Death Song". It is the most-widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain—not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been transferred to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral. One further oddity in his writings is that in one of his works, the Commentary on the Seven Catholic Epistles, he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in first-person view. Bede says: "Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray." Another passage, in the Commentary on Luke, also mentions a wife in the first person: "Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ." The historian Benedicta Ward argues that these passages are Bede employing a rhetorical device. Works Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians has seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of Early Medieval scholars. Although Bede is mainly studied as a historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed. Ecclesiastical History of the English People Bede's best-known work is the , or An Ecclesiastical History of the English People, completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. 
The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC. A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons. The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex. The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter. Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility. Sources The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library. For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Solinus. He had access to two works of Eusebius: the Historia Ecclesiastica, and also the Chronicon, though he had neither in the original Greek; instead he had a Latin translation of the Historia, by Rufinus, and Jerome's translation of the Chronicon. He also knew Orosius's Adversus Paganus, and Gregory of Tours' Historia Francorum, both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius's Life of Germanus as a source for Germanus's visits to Britain. Bede's account of the Anglo-Saxon settlement of Britain is drawn largely from Gildas's De Excidio et Conquestu Britanniae. Bede would also have been familiar with more recent accounts such as Stephen of Ripon's Life of Wilfrid, and anonymous Life of Gregory the Great and Life of Cuthbert. He also drew on Josephus's Antiquities, and the works of Cassiodorus, and there was a copy of the Liber Pontificalis in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's Natural History, and his monastery also owned copies of the works of Dionysius Exiguus. He probably drew his account of Alban from a life of that saint which has not survived. 
He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith. Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's Historia Ecclesiastica was written. Bede had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent, and with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the Historia Ecclesiastica; he was in contact with Bishop Daniel of Winchester, for information about the history of the church in Wessex and also wrote to the monastery at Lastingham for information about Cedd and Chad. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey. The historian Walter Goffart argues that Bede based the structure of the Historia on three works, using them as the framework around which the three main sections of the work were structured. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used De excidio. The second section, detailing the Gregorian mission of Augustine of Canterbury was framed on Life of Gregory the Great written at Whitby. The last section, detailing events after the Gregorian mission, Goffart feels was modelled on Life of Wilfrid. Most of Bede's informants for information after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence. Models and style Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's Historia Ecclesiastica. Bede also followed Eusebius in taking the Acts of the Apostles as the model for the overall work: where Eusebius used the Acts as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times. For example, he almost always uses the terms "Australes" and "Occidentales" for the South and West Saxons respectively, but in a passage in the first book he uses "Meridiani" and "Occidui" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier History of the Franks. Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the Historia Ecclesiastica. His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date. Bede is described by Michael Lapidge as "without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period". 
His Latin has been praised for its clarity, but his style in the Historia Ecclesiastica is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the Historia Ecclesiastica, Bede's Latin is "clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style." Intent Bede's primary intention in writing the Historia Ecclesiastica was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Anglo-Saxons; by the end of the Historia the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the Historia is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote. N. J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters. Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet both reflect an inseparable integrity and regard for accuracy and truth, expressed in terms both of historical events and of a tradition of Christian faith that continues. Bede, like Gregory the Great whom Bede quotes on the subject in the Historia, felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct. Omissions and biases Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day. This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church. Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as "a scholar's dislike of the indefinite"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him. Bede was a Northumbrian, and this tinged his work with a local bias. The sources to which he had access gave him less information about the west of England than for other areas. 
He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown and of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He is also parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held imperium, or overlordship; only one king of Wessex, Ceawlin, is listed as Bretwalda, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. Historian Robin Fleming states that he was so hostile to Mercia because Northumbria had been diminished by Mercian power that he consulted no Mercian informants and included no stories about its saints. Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally "curbed any missionary impulses towards the Anglo-Saxons from the British clergy." Use of Anno Domini At the time Bede wrote the Historia Ecclesiastica, there were two common ways of referring to dates. One was to use indictions, which were 15-year cycles, counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—the reigning Roman emperor, for example, or the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the Anno Domini method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it and his promulgation of it in De Temporum Ratione, his work on chronology, is the main reason it is now so widely used. Bede's Easter table, contained in De Temporum Ratione, was developed from Dionysius Exiguus' Easter table. Assessment The Historia Ecclesiastica was copied often in the Middle Ages, and about 160 manuscripts containing it survive. About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's Historia come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg. Modern historians have studied the Historia extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the Historia, but recent scholarship has focused as much on what Bede did not write as what he did. The belief that the Historia was the culmination of Bede's works, the aim of all his scholarship, was a belief common among historians in the past but is no longer accepted by most scholars. 
Modern historians and editors of Bede have been lavish in their praise of his achievement in the Historia Ecclesiastica. Stenton regards it as one of the "small class of books which transcend all but the most fundamental conditions of time and place", and regards its quality as dependent on Bede's "astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history." Patrick Wormald describes him as "the first and greatest of England's historians". The Historia Ecclesiastica has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the Anglo-Saxon Chronicle, the medieval writers William of Malmesbury, Henry of Huntingdon, and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the Historia, and his works were used by both Protestant and Catholic sides in the wars of religion. Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, thinks that the Historia's account of the arrival of the Germanic invaders in Kent should not be considered to relate what actually happened, but rather relates myths that were current in Kent during Bede's time. It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works. Other historical works Chronicles As Chapter 66 of his On the Reckoning of Time, in 725 Bede wrote the Greater Chronicle (chronica maiora), which sometimes circulated as a separate work. For recent events the Chronicle, like his Ecclesiastical History, relied upon Gildas, upon a version of the Liber Pontificalis current at least to the papacy of Pope Sergius I (687–701), and other sources. For earlier events he drew on Eusebius's Chronikoi Kanones. The dating of events in the Chronicle is inconsistent with his other works, using the era of creation, the Anno Mundi. Hagiography His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of St Cuthbert, an adaptation of Paulinus of Nola's Life of St Felix, and a translation of the Greek Passion of St Anastasius. He also created a listing of saints, the Martyrology. Theological works In his own time, Bede was as well known for his biblical commentaries, and for his exegetical and other theological works. The majority of his writings were of this type and covered the Old Testament and the New Testament. Most survived the Middle Ages, but a few were lost. It was for his theological writings that he earned the title of Doctor Anglorum and why he was declared a saint. 
Bede synthesised and transmitted the learning from his predecessors, as well as making careful, judicious innovation in knowledge (such as recalculating the age of the earth—for which he was censured before surviving the heresy accusations and eventually having his views championed by Archbishop Ussher in the seventeenth century—see below) that had theological implications. In order to do this, he learned Greek and attempted to learn Hebrew. He spent time reading and rereading both the Old and the New Testaments. He mentions that he studied from a text of Jerome's Vulgate, which itself was from the Hebrew text. He also studied both the Latin and the Greek Fathers of the Church. In the monastic library at Jarrow were numerous books by theologians, including works by Basil, Cassian, John Chrysostom, Isidore of Seville, Origen, Gregory of Nazianzus, Augustine of Hippo, Jerome, Pope Gregory I, Ambrose of Milan, Cassiodorus, and Cyprian. He used these, in conjunction with the Biblical texts themselves, to write his commentaries and other theological works. He had a Latin translation by Evagrius of Athanasius's Life of Antony and a copy of Sulpicius Severus' Life of St Martin. He also used lesser-known writers, such as Fulgentius, Julian of Eclanum, Tyconius, and Prosper of Aquitaine. Bede was the first to refer to Jerome, Augustine, Pope Gregory and Ambrose as the four Latin Fathers of the Church. It is clear from Bede's own comments that he felt his calling was to explain to his students and readers the theology and thoughts of the Church Fathers. Bede also wrote homilies, works written to explain theology used in worship services. He wrote homilies on the major Christian seasons such as Advent, Lent, or Easter, as well as on other subjects such as anniversaries of significant events. Both types of Bede's theological works circulated widely in the Middle Ages. Several of his biblical commentaries were incorporated into the Glossa Ordinaria, an 11th-century collection of biblical commentaries. Some of Bede's homilies were collected by Paul the Deacon, and they were used in that form in the Monastic Office. Boniface used Bede's homilies in his missionary efforts on the continent. Bede sometimes included in his theological books an acknowledgement of the predecessors on whose works he drew. In two cases he left instructions that his marginal notes, which gave the details of his sources, should be preserved by the copyist, and he may have originally added marginal comments about his sources to others of his works. Where he does not specify, it is still possible to identify books to which he must have had access by quotations that he uses. A full catalogue of the library available to Bede in the monastery cannot be reconstructed, but it is possible to tell, for example, that Bede was very familiar with the works of Virgil. There is little evidence that he had access to any other of the pagan Latin writers—he quotes many of these writers, but the quotes are almost always found in the Latin grammars that were common in his day, one or more of which would certainly have been at the monastery.
Another difficulty is that manuscripts of early writers were often incomplete: it is apparent that Bede had access to Pliny's Encyclopaedia, for example, but it seems that the version he had was missing book xviii, since he did not quote from it in his De temporum ratione. Bede's works included Commentary on Revelation, Commentary on the Catholic Epistles, Commentary on Acts, Reconsideration on the Books of Acts, On the Gospel of Mark, On the Gospel of Luke, and Homilies on the Gospels. At the time of his death he was working on a translation of the Gospel of John into English. He did this for the last 40 days of his life. When the last passage had been translated he said: "All is finished." The works dealing with the Old Testament included Commentary on Samuel, Commentary on Genesis, Commentaries on Ezra and Nehemiah, On the Temple, On the Tabernacle, Commentaries on Tobit, Commentaries on Proverbs, Commentaries on the Song of Songs, and Commentaries on the Canticle of Habakkuk. The works on Ezra, the tabernacle and the temple were especially influenced by Gregory the Great's writings. Historical and astronomical chronology De temporibus, or On Time, written in about 703, provides an introduction to the principles of Easter computus. This was based on parts of Isidore of Seville's Etymologies, and Bede also included a chronology of the world which was derived from Eusebius, with some revisions based on Jerome's translation of the Bible. In about 723, Bede wrote a longer work on the same subject, On the Reckoning of Time, which was influential throughout the Middle Ages. He also wrote several shorter letters and essays discussing specific aspects of computus. On the Reckoning of Time (De temporum ratione) included an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the spherical Earth influenced the changing length of daylight, and of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight. Bede also records the effect of the moon on tides. He shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. Since the focus of his book was the computus, Bede gave instructions for computing the date of Easter from the date of the Paschal full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. He gives some information about the months of the Anglo-Saxon calendar. Any codex of Bede's Easter table is normally found together with a codex of his De temporum ratione. His Easter table, being an exact extension of Dionysius Exiguus' Paschal table and covering the time interval AD 532–1063, contains a 532-year Paschal cycle based on the so-called classical Alexandrian 19-year lunar cycle, a close variant of bishop Theophilus' 19-year lunar cycle proposed by Annianus and adopted by bishop Cyril of Alexandria around AD 425. The ultimate predecessor of this Metonic 19-year lunar cycle, similar in principle though different in detail, is the one invented by Anatolius around AD 260. For calendric purposes, Bede made a new calculation of the age of the world since the creation, which he dated as 3952 BC.
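The arithmetic behind the table can be restated compactly. The 532-year span arises because the date of Easter depends on both the 19-year lunar (Metonic) cycle and the 28-year solar cycle of the Julian calendar, and the two only realign every 19 × 28 = 532 years. The sketch below is not Bede's own tabular method; it is a modern closed-form restatement (Meeus' algorithm for the Julian calendar) of the same classical Alexandrian reckoning, offered purely as an illustration, with the function name chosen for this example.

def julian_easter(year):
    # Position of the year in the 19-year lunar cycle and in the
    # 4-year leap and 7-day week cycles of the Julian calendar.
    a = year % 4
    b = year % 7
    c = year % 19
    d = (19 * c + 15) % 30            # steps to the Paschal full moon
    e = (2 * a + 4 * b - d + 34) % 7  # steps to the following Sunday
    month = (d + e + 114) // 31       # 3 = March, 4 = April (Julian calendar)
    day = (d + e + 114) % 31 + 1
    return day, month

# The pattern of dates repeats only after lcm(19, 28) = 532 years,
# the length of Bede's Paschal cycle.
# Example consistent with the account of Bede's death given above:
# julian_easter(735) returns (17, 4), i.e. Easter on 17 April 735;
# thirty-nine days later falls Ascension Day, Thursday 26 May 735.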
Because of his innovations in computing the age of the world, he was accused of heresy at the table of Bishop Wilfrid, his chronology being contrary to accepted calculations. Once informed of the accusations of these "lewd rustics," Bede refuted them in his Letter to Plegwin. In addition to these works on astronomical timekeeping, he also wrote De natura rerum, or On the Nature of Things, modelled in part after the work of the same title by Isidore of Seville. His works were so influential that late in the ninth century Notker the Stammerer, a monk of the Monastery of St Gall in Switzerland, wrote that "God, the orderer of natures, who raised the Sun from the East on the fourth day of Creation, in the sixth day of the world has made Bede rise from the West as a new Sun to illuminate the whole Earth". Educational works Bede wrote some works designed to help teach grammar in the abbey school. One of these was De arte metrica, a discussion of the composition of Latin verse, drawing on previous grammarians' work. It was based on Donatus's De pedibus and Servius's De finalibus and used examples from Christian poets as well as Virgil. It became a standard text for the teaching of Latin verse during the next few centuries. Bede dedicated this work to Cuthbert, apparently a student, for he is named "beloved son" in the dedication, and Bede says "I have laboured to educate you in divine letters and ecclesiastical statutes." De orthographia is a work on orthography, designed to help a medieval reader of Latin with unfamiliar abbreviations and words from classical Latin works. Although it could serve as a textbook, it appears to have been mainly intended as a reference work. The date of composition for both of these works is unknown. De schematibus et tropis sacrae scripturae discusses the Bible's use of rhetoric. Bede was familiar with pagan authors such as Virgil, but it was not considered appropriate to teach biblical grammar from such texts, and Bede argues for the superiority of Christian texts in understanding Christian literature. Similarly, his text on poetic metre uses only Christian poetry for examples. Latin poetry A number of poems have been attributed to Bede. His poetic output has been systematically surveyed and edited by Michael Lapidge, who concluded that the following works belong to Bede: the Versus de die iudicii ("verses on the day of Judgement", found complete in 33 manuscripts and fragmentarily in 10); the metrical Vita Sancti Cudbercti ("Life of St Cuthbert"); and two collections of verse mentioned in the Historia ecclesiastica V.24.2. Bede names the first of these collections as "librum epigrammatum heroico metro siue elegiaco" ("a book of epigrams in the heroic or elegiac metre"), and much of its content has been reconstructed by Lapidge from scattered attestations under the title Liber epigrammatum. The second is named as "liber hymnorum diuerso metro siue rythmo" ("a book of hymns, diverse in metre or rhythm"); this has been reconstructed by Lapidge as containing ten liturgical hymns, one paraliturgical hymn (for the Feast of St Æthelthryth), and four other hymn-like compositions. Vernacular poetry According to his disciple Cuthbert, Bede was doctus in nostris carminibus ("learned in our songs"). 
Cuthbert's letter on Bede's death, the Epistola Cuthberti de obitu Bedae, moreover, is commonly understood to indicate that Bede composed a five-line vernacular poem known to modern scholars as Bede's Death Song. As Opland notes, however, it is not entirely clear that Cuthbert is attributing this text to Bede: most manuscripts of the latter do not use a finite verb to describe Bede's presentation of the song, and the theme was relatively common in Old English and Anglo-Latin literature. The fact that Cuthbert's description places the performance of the Old English poem in the context of a series of quoted passages from Sacred Scripture might be taken as evidence simply that Bede also cited analogous vernacular texts. On the other hand, the inclusion of the Old English text of the poem in Cuthbert's Latin letter, the observation that Bede "was learned in our song," and the fact that Bede composed a Latin poem on the same subject all point to the possibility of his having written it. By citing the poem directly, Cuthbert seems to imply that its particular wording was somehow important, either since it was a vernacular poem endorsed by a scholar who evidently frowned upon secular entertainment or because it is a direct quotation of Bede's last original composition. Veneration There is no evidence for a cult being paid to Bede in England in the 8th century. One reason for this may be that he died on the feast day of Augustine of Canterbury. Later, when he was venerated in England, he was either commemorated after Augustine on 26 May, or his feast was moved to 27 May. However, he was venerated outside England, mainly through the efforts of Boniface and Alcuin, both of whom promoted the cult on the continent. Boniface wrote repeatedly back to England during his missionary efforts, requesting copies of Bede's theological works. Alcuin, who was taught at the school set up in York by Bede's pupil Ecgbert, praised Bede as an example for monks to follow and was instrumental in disseminating Bede's works to all of Alcuin's friends. Bede's cult became prominent in England during the 10th-century revival of monasticism and by the 14th century had spread to many of the cathedrals of England. Wulfstan, Bishop of Worcester, was a particular devotee of Bede's, dedicating a church to him in 1062, which was Wulfstan's first undertaking after his consecration as bishop. His body was 'translated' (the ecclesiastical term for relocation of relics) from Jarrow to Durham Cathedral around 1020, where it was placed in the same tomb with St Cuthbert. Later Bede's remains were moved to a shrine in the Galilee Chapel at Durham Cathedral in 1370. The shrine was destroyed during the English Reformation, but the bones were reburied in the chapel. In 1831 the bones were dug up and then reburied in a new tomb, which is still there. Other relics were claimed by York, Glastonbury and Fulda. His scholarship and importance to Catholicism were recognised in 1899 when the Vatican declared him a Doctor of the Church. He is the only Englishman named a Doctor of the Church. He is also the only Englishman in Dante's Paradise (Paradiso X.130), mentioned among theologians and doctors of the church in the same canto as Isidore of Seville and the Scot Richard of St Victor. His feast day was included in the General Roman Calendar in 1899, for celebration on 27 May rather than on his date of death, 26 May, which was then the feast day of St Augustine of Canterbury.
He is venerated in the Catholic Church, in the Church of England and in the Episcopal Church (United States) on 25 May, and in the Eastern Orthodox Church, with a feast day on 27 May (Βεδέα του Ομολογητού). Bede became known as Venerable Bede by the 9th century because of his holiness, but this was not linked to consideration for sainthood by the Catholic Church. According to a legend, the epithet was miraculously supplied by angels, thus completing his unfinished epitaph. It is first utilised in connection with Bede in the 9th century, when Bede was grouped with others who were called "venerable" at two ecclesiastical councils held at Aachen in 816 and 836. Paul the Deacon then referred to him as venerable consistently. By the 11th and 12th centuries, it had become commonplace. Modern legacy Bede's reputation as a historian, based mostly on the Historia Ecclesiastica, remains strong. Thomas Carlyle called him "the greatest historical writer since Herodotus". Walter Goffart says of Bede that he "holds a privileged and unrivalled place among first historians of Christian Europe". He is the patron of Beda College in Rome, which prepares older men for the Roman Catholic priesthood. His life and work have been celebrated with the annual Jarrow Lecture, held at St Paul's Church, Jarrow, since 1958. Bede has been described as a progressive scholar, who made Latin and Greek teachings accessible to his fellow Anglo-Saxons. Jarrow Hall (formerly Bede's World), in Jarrow, is a museum that celebrates the history of Bede and other parts of English heritage, on the site where he lived. Bede Metro station, part of the Tyne and Wear Metro light rail network, is named after him. See also List of manuscripts of Bede's Historia Ecclesiastica List of works by Bede Medieval ecclesiastic historiography External links Dickinson College Commentaries: Historia Ecclēsiastica Bede's World: the museum of early medieval Northumbria at Jarrow The Venerable Bede from In Our Time (BBC Radio 4) Ecclesiastical History of the English People, Books 1–5, L.C. Jane's 1903 Temple Classics translation, from the Internet Medieval Sourcebook. Bede's Ecclesiastical History and the Continuation of Bede (pdf), at CCEL, edited & translated by A.M. Sellar. Saint Bede, complete works, in Latin, with historical works also in English, at The Online Library of Liberty Dionysius Exiguus' Paschal table
The Battle of Blenheim, fought on 13 August 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the reconstituted Grand Alliance. Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and to gain a favourable peace settlement. The dangers to Vienna were considerable: Maximilian II Emanuel, Elector of Bavaria, and Marshal Ferdinand de Marsin's forces in Bavaria threatened from the west, and Marshal Louis Joseph de Bourbon, duc de Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt from its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance. A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage Maximilian's and Marsin's army before Marshal Camille d'Hostun, duc de Tallard, could bring reinforcements through the Black Forest. The Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and Marlborough failed in his attempts to force an engagement. When Tallard arrived to bolster Maximilian's army, and Prince Eugene of Savoy arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived. Blenheim was one of the battles that altered the course of the war, which until then was favouring the French and Spanish Bourbons. Although the battle did not win the war, it prevented a potentially devastating loss for the Grand Alliance and shifted the war's momentum, ending French plans of knocking Emperor Leopold out of the war. The French suffered catastrophic casualties in the battle, including their commander-in-chief, Tallard, who was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle in preparation for the following year's campaign into France itself. This offensive never materialised, for the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war continued for another decade before ending in 1714. Background By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of successes for France and her allies, most particularly on the Danube, where Marshal Claude-Louis-Hector de Villars and Maximilian II Emanuel, Elector of Bavaria, had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to Villars being replaced by the less dynamic Marshal Ferdinand de Marsin. Nevertheless, the threat was still real: Rákóczi's Hungarian revolt was threatening the Empire's eastern approaches, and Marshal Louis Joseph, Duke of Vendôme's forces threatened an invasion from northern Italy.
In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the reconstituted Grand Alliance. To isolate the Danube from any Allied intervention, Marshal François de Neufville, duc de Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and British troops around Maastricht in the Low Countries, while General Robert Jean Antoine de Franquetot de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were Prince Louis of Baden's 36,000 men stationed in the Lines of Stollhofen to watch Marshal Camille d'Hostun, duc de Tallard, at Strasbourg; and 10,000 men under Prince Eugene of Savoy south of Ulm. Both the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. The Dutch were against any adventurous military operation as far south as the Danube and would not permit any major weakening of the forces in the Spanish Netherlands. Marlborough, realising the only way to reinforce the Austrians was by the use of secrecy and guile, set out to deceive his Dutch allies by pretending to move his troops to the Moselle – a plan approved of by The Hague – but once there, he would slip the Dutch leash and link up with Austrian forces in southern Germany. This does not mean that he proceeded entirely without consultation with the Dutch. Without them, the army's logistics system would have simply collapsed. Intensive consultations preceded the campaign and Anthonie Heinsius, the Dutch Grand Pensionary, was likely informed by Marlborough of his secret plan to link up with Austrian forces. Many other important Dutchmen, like Major-General Johan Wijnand van Goor, were in favour of helping the Emperor and participated in the campaign. The Dutch diplomat and field deputy Van Rechteren-Almelo also played an important role. He made sure that on their 450-kilometre-long march, the Allies would nowhere be denied passage by local rulers, nor would they need to look for provisions, horse feed or new boots. He also saw to it that sufficient stopovers were arranged along the way to ensure that the Allies arrived at their destination in good condition. This was of paramount importance, for the success of the operation depended on a quick elimination of the Bavarian elector. However, it was not possible to make the logistical arrangements in advance that would have been indispensable to supply the Allied army south of the Danube. For this, the Allies should have had access to Ulm and Augsburg, but the Bavarian elector had taken these two cities. This could have become a problem for Marlborough had the Elector avoided a battle and instead entrenched himself south of the Danube. Had Villeroi then managed to take advantage of the weakening of Allied forces in the Netherlands by recapturing Liège and besieging Maastricht, it would have validated the concerns of his Dutch adversaries. Prelude Protagonists march to the Danube Marlborough's march started on 19 May from Bedburg, northwest of Cologne. The army assembled by Marlborough's brother, General Charles Churchill, consisted of 66 squadrons of cavalry, 31 battalions of infantry and 38 guns and mortars, totalling 21,000 men, 16,000 of whom were British. This force was augmented en route, and by the time it reached the Danube it numbered 40,000 men: 47 battalions and 88 squadrons.
While Marlborough led this army south, the Dutch general, Henry Overkirk, Count of Nassau, maintained a defensive position in the Dutch Republic against the possibility of Villeroi mounting an attack. Marlborough had assured the Dutch that if the French were to launch an offensive he would return in good time, but he calculated that as he marched south, the French army would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed Marlborough with 30,000 men in 60 squadrons and 42 battalions. Marlborough wrote to Godolphin: "I am very sensible that I take a great deal upon me, but should I act otherwise, the Empire would be undone ..." In the meantime, the appointment of Henry Overkirk as Field Marshal caused significant controversy in the Dutch Republic. After the Earl of Athlone's death, the Dutch States General had put Overkirk in charge of the Dutch States Army, which led to much discontent among the other high-ranking Dutch generals. Ernst Wilhelm von Salisch, Daniël van Dopff and Menno van Coehoorn threatened to resign or go into the service of other countries, although all were eventually convinced to stay. The new infantry generals were also disgruntled — the Lord of Slangenburg because he had to serve the less experienced Overkirk; and the Count of Noyelles because he had to serve the orders of the 'insupportable' Slangenburg. Then there was the major problem of the position of the Prince of Orange. The provinces of Friesland and Groningen demanded that their 17-year-old stadtholder be appointed supreme infantry general. This divided the parties so much that a second Grand Assembly, as had existed in 1651, was considered. However, after pressure from the other provinces, Friesland and Groningen adjusted their demands and a compromise was found. The Prince of Orange would nominally be appointed infantry general, behind Slangenburg and Noyelles, but he would not really be in command until he was 20. While the Allies were making their preparations, the French were striving to maintain and re-supply Marsin. He had been operating with Maximilian II against Prince Louis, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. On 14 May, Tallard brought 8,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him. On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle his army would now have to turn west; instead it crossed to the right bank of the Rhine, and was reinforced by 5,000 waiting Hanoverians and Prussians. The French realised that there would be no campaign on the Moselle. A second possible objective now occurred to them: an Allied incursion into Alsace and an attack on Strasbourg. Marlborough furthered this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but one that ensured the French plan to march on Vienna was delayed while they waited to see what Marlborough's army would do.
Encouraged by Marlborough's promise to return to the Netherlands if a French attack developed there, transferring his troops up the Rhine on barges at a rate of a day, the Dutch States General agreed to release the Danish contingent of seven battalions and 22 squadrons as reinforcements. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt. Strategy On 10 June, Marlborough met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Prince Louis, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At this conference, it was decided that Prince Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to watch Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Prince Louis's forces would combine, totalling 80,000 men, and march on the Danube to seek out Maximilian II and Marsin before they could be reinforced. Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to construct a plan to save Bavaria. The rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from King Louis arrived on 27 June: Tallard was to reinforce Marsin and Maximilian II on the Danube via the Black Forest, with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen, or, if the Allies should move all their forces to the Danube, he was to join with Tallard; Coigny with 8,000 men would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march. On 22 June, Marlborough's forces linked up with Prince Louis' Imperial forces at Launsheim, having covered in five weeks. Thanks to a carefully planned timetable, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline: "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest." In response to Marlborough's manoeuvres, Maximilian and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube. Marlborough could not attack Dillingen because of a lack of siege guns – he had been unable to bring any from the Low Countries, and Prince Louis had failed to supply any, despite prior assurances that he would. The Allies needed a base for provisions and a good river crossing. Consequently, on 2 July Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. 
Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a fierce battle, with heavy casualties on both sides, Schellenberg fell. This forced Donauwörth to surrender shortly afterward. Maximilian, knowing his position at Dillingen was now untenable, took up a position behind the strong fortifications of Augsburg. Tallard's march presented a dilemma for Prince Eugene. If the Allies were not to be outnumbered on the Danube, he realised that he had to either try to cut Tallard off before he could get there, or to reinforce Marlborough. If he withdrew from the Rhine to the Danube, Villeroi might also make a move south to link up with Maximilian and Marsin. Prince Eugene compromised: leaving 12,000 troops behind to guard the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard. Lacking in numbers, Prince Eugene could not seriously disrupt Tallard's march but the French marshal's progress was proving slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders and the mountain passes were proving tough for the 2,000 wagonloads of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan – "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest." At Augsburg, Maximilian was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered his policy of inaction, further encouraging him to wait for the reinforcements. This reluctance to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly to put pressure on Maximilian to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue Marlborough into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of the town of Rain from 9 to 16 July, caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude "If he has to go home without having achieved his objective, he will certainly be ruined."
Final positioning
Tallard, with 34,000 men, reached Ulm, joining with Maximilian and Marsin at Augsburg on 5 August, although Maximilian had dispersed his army in response to Marlborough's campaign of ravaging the region. Also on 5 August, Prince Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube was required in case Donauwörth fell to the enemy; so on 7 August, the first of Prince Louis' 15,000 Imperial troops left Marlborough's main force to besiege the heavily defended city of Ingolstadt, farther down the Danube, with the remainder following two days later. With Prince Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and Maximilian debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to flounder in the colder autumn weather; Maximilian and Marsin, newly reinforced, were keen to push ahead.
The French and Bavarian commanders eventually agreed to attack Prince Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Prince Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth. By a series of swift marches Marlborough concentrated his forces on Donauwörth and, by noon 11 August, the link-up was complete. During 11 August, Tallard pushed forward from the river crossings at Dillingen. By 12 August, the Franco-Bavarian forces were encamped behind the small River Nebel near the village of Blenheim on the plain of Höchstädt. On the same day, Marlborough and Prince Eugene carried out a reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster – from the French camp. A French reconnaissance under Jacques Joseph Vipart, Marquis de Silly, went forward to probe the enemy but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of Lieutenant General John Wilkes and Brigadier Archibald Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers who were acquainted with the superior numbers of the enemy, and aware of their strong defensive position, remonstrated with Marlborough about the hazards of attacking; but he was resolute – partly because the Dutch officer Willem Vleertman had scouted the marshy ground before them and reported that the land was perfectly suitable for the troops.
Battle
The battlefield
The battlefield stretched for nearly . The extreme right flank of the Franco-Bavarian army rested on the Danube; the undulating pine-covered hills of the Swabian Jura lay to their left. A small stream, the Nebel, fronted the French line; the ground either side of this was marshy and only fordable intermittently. The French right rested on the village of Blenheim near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens, and meadows. Between Blenheim and the village of Oberglauheim to the north-west the fields of wheat had been cut to stubble and were now ideal for the deployment of troops. From Oberglauheim to the next hamlet of Lutzingen the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers.
Initial manoeuvres
At 02:00 on 13 August, 40 Allied cavalry squadrons were sent forward, followed at 03:00, in eight columns, by the main Allied force pushing over the River Kessel. At about 06:00 they reached Schwenningen, from Blenheim. The British and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Prince Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, while Prince Eugene's 16,000 men would attack Maximilian and Marsin's combined forces of 23,000 troops on the right. If this attack was pressed hard, it was anticipated that Maximilian and Marsin would feel unable to send troops to aid Tallard on their right.
Lieutenant-General John Cutts would attack Blenheim in concert with Prince Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French at their centre. The Allies would have to wait until Prince Eugene was in position before the general engagement could begin. Tallard was not anticipating an Allied attack; he had been deceived by intelligence gathered from prisoners taken by de Silly the previous day, and by his army's strong position. Tallard and his colleagues believed that Marlborough and Prince Eugene were about to retreat north-westwards towards Nördlingen. Tallard wrote a report to this effect to King Louis that morning. Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops drew into battle-order to face the unexpected threat. At about 08:00 the French artillery on their right wing opened fire, answered by Colonel Holcroft Blood's batteries. The guns were heard by Prince Louis in his camp before Ingolstadt. An hour later Tallard, Maximilian, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that Maximilian and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were divided as to how to utilise the Nebel. Tallard's preferred tactic was to lure the Allies across before unleashing his cavalry upon them. This was opposed by Marsin and Maximilian who felt it better to close their infantry right up to the stream itself, so that while the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. Tallard's approach was sound if all its parts were implemented, but in the event it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had planned.
Deployment
The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Alessandro de Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under César Armand, Marquis de Rozel moved into place. Between Lutzingen and Oberglauheim Maximilian placed 27 squadrons of cavalry: 14 Bavarian squadrons commanded by d'Arco, with 13 more in support nearby under Baron Veit Heinrich Moritz Freiherr von Wolframsdorf. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by de Blainville, including the effective Irish Brigade known as the "Wild Geese". Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons, 16 of which were from Marsin, supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by Philippe, Marquis de Clérambault. Four battalions stood to the rear and a further eleven were in reserve. These battalions were supported by Count Gabriel d'Hautefeuille's twelve squadrons of dismounted dragoons. By 11:00 Tallard, Maximilian, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. The Earl of Orkney later said that, "had I been asked to give my opinion, I had been against it."
Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Cutts' column – which by 10:00 had expelled the enemy from two water mills on the Nebel – had already deployed by the river against Blenheim, enduring over the next three hours severe fire from a six-gun heavy battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel William Cadogan reported that Prince Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank.
Blenheim
Cutts ordered Rowe's brigade to attack. The English infantry rose from the edge of the Nebel, and silently marched towards Blenheim, a distance of some . James Ferguson's Scottish brigade supported Rowe's left, and moved towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed to within , the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back and inflicted heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer Zurlauben, fell on the English troops, cutting at the exposed flank of Rowe's own regiment. Wilkes' Hessian brigade, nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack. Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders". Marlborough, spotting this error, now countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons.
Lutzingen
On the Allied right, Prince Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of Maximilian and Marsin. Leopold I, Prince of Anhalt-Dessau led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here, the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach.
As soon as the infantry crossed the stream, they were struck by Maffei's infantry, and salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Scholten, attempted to drive the French infantry out of the copses beyond the village. With the infantry heavily engaged, Prince Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, was pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. The exhausted French were unable to follow up their advantage, and both cavalry forces tried to regroup and reorder their ranks. Without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Prince Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Prince Eugene and Prince Maximilian of Hanover that the Imperial infantry was prevented from abandoning the field. After rallying his troops near Schwennenbach – well beyond their starting point – Prince Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous crossfire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians were almost as disordered as their opponents, and they too were in need of inspiration from their commander, Maximilian, who was seen "... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream.
Centre and Oberglauheim
Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. Hulsen's brigade of Hessians and Hanoverians and the Earl of Orkney's British brigade advanced across the stream and were supported by dismounted British dragoons and ten British cavalry squadrons. This covering force allowed Charles Churchill's Dutch, British and German infantry and further cavalry units to advance and form up on the plain beyond. Marlborough arranged his infantry battalions in a novel manner with gaps sufficient to allow the cavalry to move freely between them. Marlborough ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Henry Lumley's English cavalry who linked Cutts' column facing Blenheim with Churchill's infantry. As the elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in confusion and pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed Maximilian, "the gentlemen of France fleeing?" Palmes attempted to follow up his success but was repulsed by other French cavalry and musket fire from the edge of Blenheim.
Nevertheless, Tallard was alarmed by the repulse of the Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but Marsin, hard pressed by Prince Eugene – whose second attack was in full flood – refused. As Tallard consulted with Marsin, more of his infantry were taken into Blenheim by Clérambault. Fatally, Tallard, although aware of the situation, did nothing to rectify it, leaving him with just the nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming on Tallard's side of the stream. His front-line cavalry darted forward down the gentle slope towards the Nebel, but the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded; he died two days later. At this stage the time was just after 15:00. The Danish cavalry, under Carl Rudolf, Duke of Württemberg-Neuenstadt, had made slow work of crossing the Nebel near Oberglauheim. Harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured. Count Horn directed Anton Günther, Fürst von Holstein-Beck to take the village, but his two Dutch brigades were cut down by the French and Irish troops, who captured and badly wounded Holstein-Beck during the action. The battle was now in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Prince Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces. Seeing the opportunity, Marsin ordered his cavalry to change front from facing Prince Eugene and turn towards their right and the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough, who had crossed the Nebel on a makeshift bridge to take personal control, ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A nine-gun artillery battery and a Dutch cavalry brigade under Averock were also called forward, but the cavalry soon came under pressure from Marsin's more numerous squadrons. Marlborough now requested Prince Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own difficulties, Prince Eugene at once complied. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus preventing Marsin from striking at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, de Blainville, was numbered among the heavy casualties.
Breakthrough
By 16:00, with large parts of the Franco-Bavarian army besieged in Blenheim and Oberglau, the Allied centre of 81 squadrons (nine squadrons had been transferred from Cutts' column) supported by 18 battalions was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to attack simultaneously along the whole front, and Prince Eugene, after his second repulse, needed time to reorganise. By just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of his line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some French infantry crowded into Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the Allied first line – Lumley's English and Scots on the Allied left, and Reinhard Vincent Graf von Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, which lacked infantry support and were tired, managed to push the Allied first line back to their infantry support. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – "Sir, you are under a mistake, the enemy lies that way ..." Marlborough commanded the second Allied line to move forward, and, driving through the centre, the Allies finally routed Tallard's tired cavalry. The Prussian Life Dragoons' Colonel, Ludwig von Blumenthal, and his second in command, Lieutenant Colonel von Hacke, fell next to each other, but the charge succeeded. With their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form a square, but they were overwhelmed by Blood's close-range artillery and platoon fire. Mérode-Westerloo later wrote – "[They] died to a man where they stood, stationed right out in the open plain – supported by nobody." The majority of Tallard's retreating troops headed for Höchstädt but most did not make the safety of the town, plunging instead into the Danube where over 3,000 French horsemen drowned; others were cut down by the pursuing Allied cavalry. The Marquis de Gruignan attempted a counter-attack, but he was brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Tallard was caught up in the rout and swept towards Sonderheim. Surrounded by a squadron of Hessian troops, Tallard surrendered to Lieutenant Colonel de Boinenburg, the Prince of Hesse-Kassel's aide-de-camp, and was sent under escort to Marlborough. Marlborough welcomed the French commander – "I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard." Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Prince Eugene became exasperated with the performance of his Imperial cavalry whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Then, declaring in disgust that he wished to "fight among brave men and not among cowards", Prince Eugene went into the attack with the Prussian and Danish infantry, as did Leopold I, waving a regimental colour to inspire his troops. This time the Prussians were able to storm the great Bavarian battery, and overwhelm the guns' crews.
Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, Maximilian and Marsin decided the battle was lost; like the remnants of Tallard's army, they fled the battlefield, albeit in better order than Tallard's men. Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry, and the growing confusion in the field.
Fall of Blenheim
Marlborough now turned his attention from the fleeing enemy to direct Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard which had been prepared for defence. Lord John Hay and Charles Ross's dismounted dragoons were also sent, but suffered under a counter-charge delivered by the regiments of Artois and Provence under command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, who attacked again. The Allied progress was slow and hard, and like the defenders, they suffered many casualties. Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. "Inform Monsieur Tallard", replied Marlborough, "that, in the position in which he is now, he has no command." Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill, and Orkney. The French had repulsed every attack, but many had seen what had happened on the plain: their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – "... it came into my head to beat parley", he later wrote, "which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms." Threatened by Allied guns, other units followed their example. It was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat, and some 10,000 of France's best infantry laid down their arms. During these events Marlborough was still in the saddle organising the pursuit of the broken enemy. Pausing for a moment, he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: "I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory."
Aftermath
French losses were immense, with over 27,000 killed, wounded and captured. Moreover, the myth of French invincibility had been destroyed, and King Louis's hopes of a victorious early peace were over. Mérode-Westerloo summarised the case against Tallard's army. It was a hard-fought contest: Prince Eugene observed that "I have not a squadron or battalion which did not charge four times at least."
Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Prince Eugene had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and the remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed on 7 November, Bavaria was placed under Austrian military rule, allowing the Habsburgs to use its resources for the rest of the conflict. The remnants of Maximilian and Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria, under the strict terms of an alliance with Austria, Maximilian left his country and family in order to continue the war against the Allies from the Spanish Netherlands where he still held the post of governor-general. Tallard – who, unlike his subordinates, was not ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711. The 1704 campaign lasted longer than usual, for the Allies sought to extract the maximum advantage. Realising that France was too powerful to be forced to make peace by a single victory, Prince Eugene, Marlborough and Prince Louis met to plan their next moves. For the following year Marlborough proposed a campaign along the valley of the Moselle to carry the war deep into France. This required the capture of the major fortress of Landau which guarded the Rhine, and the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October and Landau fell on 23 November to Prince Louis and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. The planned offensive never materialised as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counteroffensive. The war raged on for another decade. Marlborough returned to England on 14 December (O.S.) to the acclamation of Queen Anne and the country. In the first days of January, the 110 cavalry standards and 128 infantry colours that had been captured during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a duke in 1702, granted him the Park of Woodstock and promised a sum of £240,000 to build a suitable house as a gift from a grateful Crown in recognition of his victory; this resulted in the construction of Blenheim Palace. The British historian Sir Edward Shepherd Creasy considered Blenheim one of the pivotal battles in history, writing: "Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability." The military historian John A. Lynn considers this claim unjustified, for King Louis never had such an objective; the campaign in Bavaria was intended only to bring a favourable peace settlement and not domination over Europe. Lake poet Robert Southey criticised the Battle of Blenheim in his anti-war poem "After Blenheim", but later praised the victory as "the greatest victory which had ever done honour to British arms".
The Battle of Ramillies, fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and suffered defeat in northern Italy. Yet despite his opponents' setbacks, Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706. The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains, Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven (Louvain) at the head of 60,000 men and marched towards Tienen (Tirlemont), as if to threaten Zoutleeuw (Léau). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the rivers Mehaigne and Petite Gette, close to the small village of Ramillies. In less than four hours Marlborough's Dutch, English, and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to fully exploit their victory. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus, the year 1706 proved, for the Allies, to be an annus mirabilis.
Background
After their disastrous defeat at Blenheim in 1704, the next year brought the French some respite. The Duke of Marlborough had intended the 1705 campaign – an invasion of France through the Moselle valley – to complete the work of Blenheim and persuade King Louis XIV to make peace, but the plan had been thwarted by friend and foe alike. The reluctance of his Dutch allies to see their frontiers denuded of troops for another gamble in Germany had denied Marlborough the initiative, but of far greater importance was the Margrave of Baden's pronouncement that he could not join the Duke in strength for the coming offensive. This was in part due to the sudden switching of troops from the Rhine to reinforce Prince Eugene in Italy and in part due to the deterioration of Baden's health brought on by the re-opening of a severe foot wound he had received at the storming of the Schellenberg the previous year. Marlborough had to cope with the death of Emperor Leopold I in May and the accession of Joseph I, which unavoidably complicated matters for the Grand Alliance.
The resilience of the French King and the efforts of his generals also added to Marlborough's problems. Marshal Villeroi, exerting considerable pressure on the Dutch commander, Count Overkirk, along the Meuse, took Huy on 10 June before pressing on towards Liège. With Marshal Villars sitting strong on the Moselle, the Allied commander – whose supplies had by now become very short – was forced to call off his campaign on 16 June. "What a disgrace for Marlborough," exulted Villeroi, "to have made false movements without any result!" With Marlborough's departure north, the French transferred troops from the Moselle valley to reinforce Villeroi in Flanders, while Villars marched off to the Rhine. The Anglo-Dutch forces gained minor compensation for the failed Moselle campaign with the success at Elixheim and the crossing of the Lines of Brabant in the Spanish Netherlands (Huy was also retaken on 11 July), but a chance to bring the French to a decisive engagement eluded Marlborough. The year 1705 proved almost entirely barren for the Duke, whose military disappointments were only partly compensated by efforts on the diplomatic front where, at the courts of Düsseldorf, Frankfurt, Vienna, Berlin and Hanover, Marlborough sought to bolster support for the Grand Alliance and extract promises of prompt assistance for the following year's campaign.
Prelude
On 11 January 1706 Marlborough finally reached London at the end of his diplomatic tour, but he had already been planning his strategy for the coming season. The first option (although it is debatable to what extent the Duke was committed to such an enterprise) was a plan to transfer his forces from the Spanish Netherlands to northern Italy; once there, he intended linking up with Prince Eugene in order to defeat the French and safeguard Savoy from being overrun. Savoy would then serve as a gateway into France by way of the mountain passes or an invasion with naval support along the Mediterranean coast via Nice and Toulon, in connection with redoubled Allied efforts in Spain. It seems that the Duke's favoured scheme was to return to the Moselle valley (where Marshal Marsin had recently taken command of French forces) and once more attempt an advance into the heart of France. But these decisions soon became academic. Shortly after Marlborough landed in the Dutch Republic on 14 April, news arrived of serious Allied setbacks in the wider war. Determined to show the Grand Alliance that France was still resolute, Louis XIV prepared to launch a double surprise in Alsace and northern Italy. On the latter front Marshal Vendôme defeated the Imperial army at Calcinato on 19 April, pushing the Imperialists back in confusion (French forces were now in a position to prepare for the long-anticipated siege of Turin). In Alsace, Marshal Villars took Baden by surprise and captured Haguenau, driving him back across the Rhine in some disorder, thus creating a threat against Landau. With these reverses, the Dutch refused to contemplate Marlborough's ambitious march to Italy or any plan that denuded their borders of the Duke and their army. In the interest of coalition harmony, Marlborough prepared to campaign in the Low Countries.
On the move
The Duke left The Hague on 9 May. "God knows I go with a heavy heart," he wrote six days later to his friend and political ally in England, Lord Godolphin, "for I have no hope of doing anything considerable, unless the French do what I am very confident they will not ..." – in other words, court battle.
On 17 May the Duke concentrated his Dutch and English troops at Tongeren, near Maastricht. The Hanoverians, Hessians and Danes, despite earlier undertakings, found, or invented, pressing reasons for withholding their support. Marlborough wrote an appeal to the Duke of Württemberg, the commander of the Danish contingent: "I send you this express to request your Highness to bring forward by a double march your cavalry so as to join us at the earliest moment..." Additionally, the King in Prussia, Frederick I, had kept his troops in quarters behind the Rhine while his personal disputes with Vienna and the States General at The Hague remained unresolved. Nevertheless, the Duke could think of no circumstances in which the French would leave their strong positions and attack his army, even if Villeroi was first reinforced by substantial transfers from Marsin's command. But in this he had miscalculated. Although Louis XIV wanted peace, he wanted it on reasonable terms; for that, he needed victory in the field and to convince the Allies that his resources were by no means exhausted. Following the successes in Italy and along the Rhine, Louis XIV was now hopeful of similar results in Flanders. Far from standing on the defensive, therefore – and unbeknown to Marlborough – Louis XIV was persistently goading his marshal into action. "[Villeroi] began to imagine," wrote St Simon, "that the King doubted his courage, and resolved to stake all at once in an effort to vindicate himself." Accordingly, on 18 May, Villeroi set off from Leuven at the head of 70 battalions, 132 squadrons and 62 cannon – comprising an overall force of some 60,000 troops – and crossed the river Dyle to seek battle with the enemy. Spurred on by his growing confidence in his ability to out-general his opponent, and by Versailles' determination to avenge Blenheim, Villeroi and his generals anticipated success. Neither opponent expected the clash at the exact moment or place where it occurred. The French moved first to Tienen (as if to threaten Zoutleeuw, abandoned by the French in October 1705), before turning southwards, heading for Jodoigne; this line of march took Villeroi's army towards the narrow aperture of dry ground between the rivers Mehaigne and Petite Gette close to the small villages of Ramillies and Taviers; but neither commander quite appreciated how far his opponent had travelled. Villeroi still believed (on 22 May) the Allies were a full day's march away when in fact they had camped near Corswaren waiting for the Danish squadrons to catch up; for his part, Marlborough deemed Villeroi still at Jodoigne when in reality he was now approaching the plateau of Mont St. André with the intention of pitching camp near Ramillies. However, the Prussian infantry was not there. Marlborough wrote to Lord Raby, the English resident at Berlin: "If it should please God to give us victory over the enemy, the Allies will be little obliged to the King [Frederick] for the success." The following day, at 01:00, Marlborough dispatched Cadogan, his Quartermaster-General, with an advanced guard to reconnoitre the same dry ground that Villeroi's army was now heading toward, country that was well known to the Duke from previous campaigns. Two hours later the Duke followed with the main body: 74 battalions, 123 squadrons, 90 pieces of artillery and 20 mortars, totalling 62,000 troops.
About 08:00, after Cadogan had just passed Merdorp, his force made brief contact with a party of French hussars gathering forage on the edge of the plateau of Jandrenouille. After a brief exchange of shots the French retired and Cadogan's dragoons pressed forward. With a short lift in the mist, Cadogan soon discovered the smartly ordered lines of Villeroi's advance guard some off; a galloper hastened back to warn Marlborough. Two hours later the Duke, accompanied by the Dutch field commander Field Marshal Overkirk, General Daniël van Dopff, and the Allied staff, rode up to Cadogan where on the horizon to the westward he could discern the massed ranks of the French army deploying for battle along the front. Marlborough later told Bishop Burnet: "The French army looked the best of any he had ever seen."
Battle
Battlefield
The battlefield of Ramillies is very similar to that of Blenheim, for here too there is an immense area of arable land unimpeded by woods or hedges. Villeroi's right rested on the villages of Franquenée and Taviers, with the river Mehaigne protecting his flank. A large open plain, about wide, lay between Taviers and Ramillies, but unlike Blenheim, there was no stream to hinder the cavalry. His centre was secured by Ramillies itself, lying on a slight eminence which gave distant views to the north and east. The French left flank was protected by broken country, and by a stream, the Petite Gheete, which runs deep between steep and slippery slopes. On the French side of the stream the ground rises to Offus, the village which, together with Autre-Eglise farther north, anchored Villeroi's left flank. To the west of the Petite Gheete rises the plateau of Mont St. André; a second plain, the plateau of Jandrenouille – upon which the Anglo-Dutch army amassed – rises to the east.
Initial dispositions
At 11:00 the Duke ordered the army to take standard battle formation. On the far right, towards Foulz, the British battalions and squadrons took up their posts in a double line near the Jeuche stream. The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – and where Marlborough thought the decisive encounter must take place – Overkirk drew the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces. Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the river Mehaigne, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St. André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies.
These guns (some of which were of the three-barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire, able to fully cover the approaches of the plateau of Jandrenouille over which the Allied infantry would have to pass. Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up on a shorter front between the 'horns' of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke's disposition facilitated the transfer of troops across his front far more easily than his foe could manage, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as they deployed on the plateau of Jandrenouille – threatening to encircle their army – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line.
Taviers
At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south, four battalions, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, were soon compelled back towards the village of Taviers. Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard's cavalry on the open plain, while at the same time, it allowed the French infantry to pose a threat to the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing mêlée, but the superiority in Dutch firepower soon told. The accomplished French officer, Colonel de la Colonie, standing on the plain nearby, remembered: "This village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond. Villeroi's right flank fell into chaos and was now open and vulnerable. Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons currently stationed in the rear. Two other battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry- and canister-fire into the advancing French troops. Colonel d'Aubigni, leading his regiment, fell mortally wounded.
As the French ranks wavered, the leading squadrons of Württemberg's Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival, all was chaos: "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight... My own fellows turned about and fled along with them." De la Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi's right flank.
Offus and Autre-Eglise
While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, anxiously watched the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV: "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice, the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry sent forward from around Offus. After concentrated volleys, exacting heavy casualties on the redcoats, the Walloons reformed back to the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry, who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke, therefore, called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to historian David Chandler it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose.
Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies.
Ramillies
Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry, Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Sparre; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies – supported by the Irish exiles of the Flight of the Wild Geese serving as infantry in Clare's Dragoons, and by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei – put up a determined defence, initially driving back the attackers with severe losses; Clare's Dragoons captured a colour from the British 3rd Regiment of Foot, an action commemorated in the song Clare's Dragoons. Seeing that Schultz and Sparre were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents they were still in their initial position. Therefore, unbeknown to the French, who remained oblivious to the Allies' real strength and intentions on the opposite side of the Petite Gheete, Marlborough was throwing his full weight against Ramillies and the open plain to the south. Villeroi, meanwhile, was still moving more reserves of infantry in the opposite direction towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions. Around 15:30 Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. Forty-eight Dutch squadrons, supported on their left by 21 Danish squadrons – led by Count Tilly and Lieutenant Generals Hompesch, d'Auvergne, Ostfriesland and Dopff – steadily advanced towards the enemy (taking care not to prematurely tire the horses), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières, writing after the battle, described the scene: "They advanced in four lines... As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." This made it nearly impossible for the French cavalry to perform flanking manoeuvres. The initial clash favoured the Dutch and Danish squadrons. The disparity of numbers – exacerbated by Villeroi stripping their ranks of infantry to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back to their third line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV's army – the Maison du Roi, supported by four elite squadrons of Bavarian Cuirassiers.
Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counterattacks. On Overkirk's right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and were scattered, riding headlong to the rear to recover their order, leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding the lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two. A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander now summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi, who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the mêlée, rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough's horse tumbled and the Duke was thrown. "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eyewitness, "...seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way." Fortunately Marlborough's newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue, mounted the Duke on his horse and made good their escape, before Murray's disciplined ranks threw back the pursuing French troopers. After a brief pause, Marlborough's equerry, Colonel Bringfield (or Bingfield), led up another of the Duke's spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General's legs before hitting the unfortunate colonel, whose torso fell at Marlborough's feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed and Overkirk and Tilly restored order among the confused squadrons and ordered them to attack again, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware.
Breakthrough
The time was about 16:30, and the two armies were in close contact across the whole front, from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain, to the fierce struggle for Ramillies at the centre, and to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete ready to renew hostilities. The arrival of the transferring squadrons now began to tip the balance in favour of the Allies. Tired and suffering a growing list of casualties, Guiscard's squadrons battling on the plain at last began to feel their numerical inferiority. After earlier failing to hold or retake Franquenée and Taviers, Guiscard's right flank had become dangerously exposed and a fatal gap had opened on the right of his line.
Taking advantage of this breach, Württemberg's Danish cavalry now swept forward, wheeling to penetrate the flank of the Maison du Roi, whose attention was almost entirely fixed on holding back the Dutch. Sweeping forwards, virtually without resistance, the 21 Danish squadrons reformed behind the French around the area of the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi's army. The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough's superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but a desperate attempt to form line facing south, stretching from Offus to Mont St André, floundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard's right flank, without proper infantry support, could no longer resist the onslaught and, turning their horses northwards, they broke and fled in complete disorder. Even the squadrons currently being scrambled together by Villeroi behind Ramillies could not withstand the attack. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words sauve qui peut went through the great part, if not the whole army, and put all to confusion." In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through. The Régiment de Picardie stood their ground but were caught between Colonel Borthwick's Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O'Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but it proved in vain. Noticing a rush of horsemen fast approaching from the south, he later recalled: "...I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter."

Pursuit

The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to once again storm Offus, where de la Guiche's infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay's 'Scots Greys' also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village... made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector, but were scattered by Lumley's cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood who, unaware of their identity, had to content himself with the seizure of two Bavarian Lieutenant-Generals. Far to the south, the remnants of de la Colonie's brigade headed in the opposite direction towards the French-held fortress of Namur. The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover.
Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the river Dyle. At last, however, Marlborough called a halt to the pursuit shortly after midnight near Meldert, some distance from the field. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "...reduced to a handful."

Aftermath

What was left of Villeroi's army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV's army (see below). In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was "the most shameful, humiliating and disastrous of routs". Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory of Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "...and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges, and on 6 June Antwerp, all subsequently fell to Marlborough's victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse. When Louis XIV learnt of the disaster he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands. As news spread of the Allies' triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply, but the Allies were making scant progress against Dendermonde, whose governor, the Marquis de Valée, was stubbornly resisting. Only later, when Cadogan and Churchill went to take charge, did the town's defences begin to fail. Vendôme formally took over command in Flanders on 4 August; Villeroi would never again receive a major command: "I cannot foresee a happy day in my life save only that of my death." Louis XIV was more forgiving to his old friend: "At our age, Marshal, we must no longer expect good fortune." In the meantime, Marlborough invested the elaborate fortress of Menin which, after a costly siege, capitulated on 22 August. Dendermonde finally succumbed on 6 September, followed by Ath, the last conquest of 1706, on 2 October. By the time Marlborough had closed down the Ramillies campaign he had denied the French most of the Spanish Netherlands west of the Meuse and north of the Sambre. It was an unsurpassed operational triumph for the English Duke, but once again it was not decisive, as these gains did not defeat France. The immediate question for the Allies was how to deal with the Spanish Netherlands, a subject on which the Austrians and the Dutch were diametrically opposed.
Emperor Joseph I, acting on behalf of his younger brother King Charles III, absent in Spain, claimed that the reconquered Brabant and Flanders should be put under the immediate possession of a governor named by himself. The Dutch, however, who had supplied the major share of the troops and money to secure the victory (the Austrians had produced nothing of either), claimed the government of the region till the war was over, and that after the peace they should continue to garrison Barrier Fortresses stronger than those which had fallen so easily to Louis XIV's forces in 1701. Marlborough mediated between the two parties but favoured the Dutch position. To sway the Duke's opinion, the Emperor offered Marlborough the governorship of the Spanish Netherlands. It was a tempting offer, but in the name of Allied unity, it was one he refused. In the end England and the Dutch Republic took control of the newly won territory for the duration of the war, after which it was to be handed over to the direct rule of Charles III, subject to the reservation of a Dutch Barrier, the extent and nature of which had yet to be settled. Meanwhile, on the Upper Rhine, Villars had been forced onto the defensive as battalion after battalion had been sent north to bolster collapsing French forces in Flanders; there was now no possibility of his undertaking the re-capture of Landau. Further good news for the Allies arrived from northern Italy where, on 7 September, Prince Eugene had routed a French army before the Piedmontese capital, Turin, driving the Franco-Spanish forces from northern Italy. Only from Spain did Louis XIV receive any good news, where Das Minas and Galway had been forced to retreat from Madrid towards Valencia, allowing Philip V to re-enter his capital on 4 October. All in all, though, the situation had changed considerably, and Louis XIV began to look for ways to end what was fast becoming a ruinous war for France. For Queen Anne also, the Ramillies campaign had one overriding significance: "Now we have God be thanked so hopeful a prospect of peace." Instead of continuing the momentum of victory, however, cracks in Allied unity would enable Louis XIV to reverse some of the major setbacks suffered at Turin and Ramillies.

Casualties

The total number of French casualties cannot be calculated precisely, so complete was the collapse of the Franco-Bavarian army that day. David G. Chandler's Marlborough as Military Commander and A Guide to the Battlefields of Europe are consistent with regard to French casualty figures, i.e. 12,000 dead and wounded plus some 7,000 taken prisoner. James Falkner, in Ramillies 1706: Year of Miracles, also notes 12,000 dead and wounded and "up to 10,000" taken prisoner. In Notes on the History of Military Medicine, Garrison puts French casualties at 13,000, including 2,000 killed, 3,000 wounded and 6,000 missing. In The Collins Encyclopaedia of Military History, Dupuy puts Villeroi's dead and wounded at 8,000, with a further 7,000 captured. Neil Litten, using French archives, suggests 7,000 killed and wounded and 6,000 captured, with a further 2,000 choosing to desert. John Millner's memoirs, Compendious Journal (1733), are more specific, recording that 12,087 of Villeroi's army were killed or wounded, with another 9,729 taken prisoner. In Marlborough, however, Correlli Barnett puts the total casualty figure as high as 30,000: 15,000 dead and wounded, with an additional 15,000 taken captive. Trevelyan estimates Villeroi's casualties at 13,000, but adds that "his losses by desertion may have doubled that number".
La Colonie omits a casualty figure in his Chronicles of an Old Campaigner, but Saint-Simon in his Memoirs states 4,000 killed, adding that "many others were wounded and many important persons were taken prisoner". Voltaire, however, in Histoire du siècle de Louis XIV, records that "the French lost there twenty thousand men". Gaston Bodart states 2,000 killed or wounded, 6,000 captured and 7,000 scattered for a total of 13,000 casualties. Périni writes that both sides lost 2,000 to 3,000 killed or wounded (the Dutch losing precisely 716 killed and 1,712 wounded), and that 5,600 French were captured.

See also

The battle was used as the name of several Royal Navy ships, HMS Ramillies.

References

Primary
La Colonie, Jean Martin de. The Chronicles of an Old Campaigner (trans. W. C. Horsley). (1904)
Goslinga, Sicco van. Mémoires relatifs à la Guerre de succession de 1706–1709 et 1711, publiés par U. A. Evertsz et G. H. M. Delprat, au nom de la Société d'histoire, d'archéologie et de linguistique de Frise. G. T. N. Suringar, (1857)
Saint-Simon. Memoirs, vol. i. Prion Books Ltd., (1999)

Secondary
Barnett, Correlli. Marlborough. Wordsworth Editions Limited, (1999)
Chandler, David G. A Guide to the Battlefields of Europe. Wordsworth Editions Limited, (1998)
Chandler, David G. Marlborough as Military Commander. Spellmount Ltd, (2003)
Falkner, James. Ramillies 1706: Year of Miracles. Pen & Sword Books Ltd, (2006)
Gregg, Edward. Queen Anne. Yale University Press, (2001)
Lynn, John A. The Wars of Louis XIV, 1667–1714. Longman, (1999)
Trevelyan, G. M. England Under Queen Anne: Ramillies and the Union with Scotland. Longmans, Green and Co., (1932)

Battle of Ramillies Battles of the War of the Spanish Succession Battles involving Bavaria Battles involving France Battles involving England Battles involving the Dutch Republic Battles involving the Spanish Netherlands 1706 in France 18th century in the Southern Netherlands Battles involving Spain
6
Brian Wilson Kernighan (; born January 30, 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language. Early life and education Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Career and research Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal, and most notably his "Ratfor" (rational FORTRAN) was put in the public domain. He has said that if stranded on an island with only one programming language it would have to be C. Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. Kernighan is also known as a coiner of the expression "What You See Is All You Get" (WYSIAYG), which is a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG). Kernighan's term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts. In 1972, Kernighan described memory management in strings using "hello" and "world", in the B programming language, which became the iconic example we know today. Kernighan's original 1978 implementation of Hello, World! was sold at The Algorithm Auction, the world's first auction of computer algorithms. In 1996, Kernighan taught CS50 which is the Harvard University introductory course in computer science. Kernighan was an influence on David J. Malan who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats. Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019. In 2022 Kernighan stated that he was actively working on improvements to the AWK programming language, which he took part in creating in 1977. 
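As noted above, Kernighan's 1972 B tutorial introduced "hello" and "world" as a teaching example, and the same program became the traditional first exercise in C. The following is a minimal sketch of that program in modern standard C rather than the 1970s dialect printed in the early tutorials and in K&R:

#include <stdio.h>

/* The canonical first program popularized by Kernighan's tutorials and by
   The C Programming Language; written here in present-day standard C. */
int main(void)
{
    printf("hello, world\n");
    return 0;
}

Compiled with any C compiler (for example, cc hello.c && ./a.out), it prints the greeting and exits.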
Books and reports

The Elements of Programming Style, with P. J. Plauger
Software Tools, a book and set of tools for Ratfor, co-created in part with P. J. Plauger
Software Tools in Pascal, a book and set of tools for Pascal, with P. J. Plauger
The C Programming Language, with C creator Dennis Ritchie, the first book on C
The Practice of Programming, with Rob Pike
The Unix Programming Environment, a tutorial book, with Rob Pike
"Why Pascal is Not My Favorite Programming Language", a popular criticism of Niklaus Wirth's Pascal; some parts of the criticism are obsolete because it was written before ISO 7185 (Programming Languages – Pascal) was created (AT&T Computing Science Technical Report #100)

Algorithms

1972: The first documented "Hello, world!" program, in Kernighan's "A Tutorial Introduction to the Language B"
1973: ditroff, or "device independent troff", which allowed troff to be used with any device
1974: The eqn typesetting language for troff, with Lorinda Cherry
1976: Ratfor
1977: The m4 macro processing language, with Dennis Ritchie
1977: The AWK programming language, with Alfred Aho and Peter J. Weinberger, and its book The AWK Programming Language
1985: The AMPL programming language
1988: The pic typesetting language for troff

Publications

The Elements of Programming Style (1974, 1978) with P. J. Plauger
Software Tools (1976) with P. J. Plauger
The C Programming Language (1978, 1988) with Dennis M. Ritchie
Software Tools in Pascal (1981) with P. J. Plauger
The Unix Programming Environment (1984) with Rob Pike
The AWK Programming Language (1988) with Alfred Aho and Peter J. Weinberger
The Practice of Programming (1999) with Rob Pike
AMPL: A Modeling Language for Mathematical Programming, 2nd ed. (2003) with Robert Fourer and David Gay
D is for Digital: What a Well-Informed Person Should Know about Computers and Communications (2011)
The Go Programming Language (2015) with Alan Donovan
Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security (2017)
Millions, Billions, Zillions: Defending Yourself in a World of Too Many Numbers (2018)
UNIX: A History and a Memoir (2019)

Programming setup

Kernighan uses a 13-inch MacBook Air as his primary machine, along with an iMac in his office from time to time. Most of the time he uses Sam as his text editor.

See also

List of pioneers in computer science

References

External links

Brian Kernighan's home page at Bell Labs
Lex Fridman Podcast #109: Brian Kernighan – UNIX, C, AWK, AMPL, and Go Programming
"Why Pascal is Not My Favorite Programming Language" – by Brian Kernighan, AT&T Bell Labs, 2 April 1981
"Leap In and Try Things" – interview with Brian Kernighan on the "Harmony at Work Blog", October 2009
An Interview with Brian Kernighan – by Mihai Budiu, for PC Report Romania, August 2000
Video interview – TechNetCast At Bell Labs: Dennis Ritchie and Brian Kernighan (1999-05-14)
Video (Princeton University, September 7, 2003) – "Assembly for the Class of 2007: 'D is for Digital and Why It Matters'"
A Descent into Limbo by Brian Kernighan
Photos of Brian Kernighan
Video interview with Brian Kernighan for Princeton Startup TV (2012-03-20)
The Setup, Brian Kernighan

1942 births Living people Canadian computer scientists Canadian computer programmers Computer programmers Inferno (operating system) people Canadian people of Irish descent Writers from Toronto Plan 9 people Princeton University School of Engineering and Applied Science alumni Princeton University faculty Programming language designers Scientists at Bell Labs Canadian technology writers University of Toronto alumni Unix people C (programming language) Members of the United States National Academy of Engineering Berkman Fellows Scientists from Toronto
9
BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. Design BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java). The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit. The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking. The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %). BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process. 
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid. BCPL was the first brace programming language, and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on the limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99. The book BCPL: The language and its compiler sets out the philosophy behind the language's design.

History

BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under the Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference. BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book. BCPL is the language in which the original "Hello, World!" program was written. The first MUD was also written in BCPL (MUD1). Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL. An early compiler, bootstrapped in 1969 by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method. By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second-generation IMPs used in the ARPANET. There was also a version produced for the BBC Micro in the mid-1980s by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England. Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C.
Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language's name. The language most widely accepted as C's successor is C++ (with ++ being C's increment operator), although a D programming language also exists. In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems. Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. It can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on automated musical score following. An informal MIME type is also in common use for BCPL.

Examples

Hello world

Richards and Whitby-Strevens provide an example of the "Hello, World!" program for BCPL using a standard system header, 'LIBHDR':

GET "LIBHDR"
LET START() BE WRITES("Hello, World")

Further examples

If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors.

Print factorials:

GET "LIBHDR"

LET START() = VALOF $(
    FOR I = 1 TO 5 DO
        WRITEF("%N! = %I4*N", I, FACT(I))
    RESULTIS 0
$)

AND FACT(N) = N = 0 -> 1, N * FACT(N - 1)

Count solutions to the N queens problem:

GET "LIBHDR"

GLOBAL $(
    COUNT: 200
    ALL: 201
$)

LET TRY(LD, ROW, RD) BE
    TEST ROW = ALL THEN
        COUNT := COUNT + 1
    ELSE $(
        LET POSS = ALL & ~(LD | ROW | RD)
        UNTIL POSS = 0 DO $(
            LET P = POSS & -POSS
            POSS := POSS - P
            TRY(LD + P << 1, ROW + P, RD + P >> 1)
        $)
    $)

LET START() = VALOF $(
    ALL := 1
    FOR I = 1 TO 12 DO $(
        COUNT := 0
        TRY(0, 0, 0)
        WRITEF("%I2-QUEENS PROBLEM HAS %I5 SOLUTIONS*N", I, COUNT)
        ALL := 2 * ALL + 1
    $)
    RESULTIS 0
$)

References

Further reading
Martin Richards, The BCPL Reference Manual (Memorandum M-352, Project MAC, Cambridge, MA, USA, July 1967)
Martin Richards, BCPL – a tool for compiler writing and systems programming (Proceedings of the Spring Joint Computer Conference, Vol. 34, pp. 557–566, 1969)
Martin Richards, Arthur Evans, Robert F. Mabee, The BCPL Reference Manual (MAC TR-141, Project MAC, Cambridge, MA, USA, 1974)
Martin Richards, Colin Whitby-Strevens, BCPL, the language and its compiler (Cambridge University Press, 1980)

External links
Martin Richards' BCPL distribution
Martin Richards' BCPL Reference Manual, 1967 by Dennis M. Ritchie
BCPL entry in the Jargon File
Nordier & Associates' x86 port
Arnor BCPL manual (1986, Amstrad PCW/CPC)
How BCPL evolved from CPL, Martin Richards
Ritchie's The Development of the C Language has commentary about BCPL's influence on C
The BCPL Cintsys and Cintpos User Guide

History of computing in the United Kingdom Procedural programming languages Programming languages created in 1967 Structured programming languages Systems programming languages University of Cambridge Computer Laboratory
7
A battleship is a large armored warship with a main battery consisting of large caliber guns. It dominated naval warfare in the late 19th and early 20th centuries. The term battleship came into use in the late 1880s to describe a type of ironclad warship, now referred to by historians as pre-dreadnought battleships. In 1906, the commissioning of HMS Dreadnought into the United Kingdom's Royal Navy heralded a revolution in the field of battleship design. Subsequent battleship designs, influenced by HMS Dreadnought, were referred to as "dreadnoughts", though the term eventually became obsolete as dreadnoughts became the only type of battleship in common use. Battleships were a symbol of naval dominance and national might, and for decades the battleship was a major factor in both diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS Dreadnought. The launch of Dreadnought in 1906 commenced a new naval arms race. Three major fleet actions between steel battleships took place: the long-range gunnery duel at the Battle of the Yellow Sea in 1904, the decisive Battle of Tsushima in 1905 (both during the Russo-Japanese War) and the inconclusive Battle of Jutland in 1916, during the First World War. Jutland was the largest naval battle and the only full-scale clash of dreadnoughts of the war, and it was the last major battle in naval history fought primarily by battleships. The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected in that conflict. The value of the battleship was questioned, even during its heyday. There were few of the decisive fleet battles that battleship proponents expected and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, with the last battleship being launched in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes and were last used in combat during the Gulf War in 1991, and then struck from the U.S. Naval Vessel Register in the 2000s. Many World War II-era battleships remain today as museum ships.

History

Ships of the line

A ship of the line was a large, unarmored wooden sailing ship mounting a battery of up to 120 smoothbore guns and carronades. The type came to prominence with the adoption of line-of-battle tactics in the early 17th century and remained dominant until the end of the sailing battleship's heyday in the 1830s. From 1794, the alternative term 'line of battle ship' was contracted (informally at first) to 'battle ship' or 'battleship'. The sheer number of guns fired broadside meant a ship of the line could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew.
However, the effective range of the guns was as little as a few hundred yards, so the battle tactics of sailing ships depended in part on the wind. Over time, ships of the line gradually became larger and carried more guns, but otherwise remained quite similar. The first major change to the ship of the line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. The French Navy introduced steam to the line of battle with the 90-gun Napoléon in 1850, the first true steam battleship. Napoléon was armed as a conventional ship-of-the-line, but her steam engines could propel her regardless of the wind. This was a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. France and the United Kingdom were the only countries to develop fleets of wooden steam screw battleships, although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1).

Ironclads

The adoption of steam power was only one of a number of technological advances which revolutionized warship design in the 19th century. The ship of the line was overtaken by the ironclad: powered by steam, protected by metal armor, and armed with guns firing high-explosive shells.

Explosive shells

Guns that fired explosive or incendiary shells were a major threat to wooden ships, and these weapons quickly became widespread after the introduction of 8-inch shell guns as part of the standard armament of French and American line-of-battle ships in 1841. In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. Later in the war, French ironclad floating batteries used similar weapons against the defenses at the Battle of Kinburn. Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where the modern Austrian steam two-decker Kaiser ranged across a confused battlefield, rammed an Italian ironclad and took 80 hits from Italian ironclads, many of which were shells, but including at least one 300-pound shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day.

Iron armor and construction

The development of high-explosive shells made the use of iron armor plate on warships necessary. In 1859 France launched Gloire, the first ocean-going ironclad warship. She had the profile of a ship of the line, cut to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, Gloire was fitted with a propeller, and her wooden hull was protected by a layer of thick iron armor. Gloire prompted further innovation from the Royal Navy, anxious to prevent France from gaining a technological lead. The superior armored frigate Warrior followed Gloire by only 14 months, and both nations embarked on a program of building new ironclads and converting existing screw ships of the line to armored frigates. Within two years, Italy, Austria, Spain and Russia had all ordered ironclad warships, and by the time of the famous clash of the Monitor and the Virginia at the Battle of Hampton Roads at least eight navies possessed ironclad ships.
Navies experimented with the positioning of guns, in turrets (like the USS Monitor), central batteries or barbettes, or with the ram as the principal weapon. As steam technology developed, masts were gradually removed from battleship designs. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's Redoutable, laid down in 1873 and launched in 1876, was a central battery and barbette warship which became the first battleship in the world to use steel as the principal building material.

Pre-dreadnought battleship

The term "battleship" was officially adopted by the Royal Navy in the re-classification of 1892. By the 1890s, there was an increasing similarity between battleship designs, and the type that later became known as the 'pre-dreadnought battleship' emerged. These were heavily armored ships, mounting a mixed battery of guns in turrets, and without sails. The typical first-class battleship of the pre-dreadnought era displaced 15,000 to 17,000 tons and carried an armament of four heavy guns in two turrets fore and aft, with a mixed-caliber secondary battery amidships around the superstructure. An early design with superficial similarity to the pre-dreadnought is the British Devastation of 1871. The slow-firing main guns were the principal weapons for battleship-to-battleship combat. The intermediate and secondary batteries had two roles. Against major ships, it was thought a 'hail of fire' from quick-firing secondary weapons could distract enemy gun crews by inflicting damage to the superstructure, and they would be more effective against smaller ships such as cruisers. Smaller guns (12-pounders and smaller) were reserved for protecting the battleship against the threat of torpedo attack from destroyers and torpedo boats. The beginning of the pre-dreadnought era coincided with Britain reasserting her naval dominance. For many years previously, Britain had taken naval supremacy for granted. Expensive naval projects were criticized by political leaders of all inclinations. However, in 1888 a war scare with France and the build-up of the Russian navy gave added impetus to naval construction, and the British Naval Defence Act of 1889 laid down a new fleet including eight new battleships. The principle that Britain's navy should be more powerful than the two next most powerful fleets combined was established. This policy was designed to deter France and Russia from building more battleships, but both nations nevertheless expanded their fleets with more and better pre-dreadnoughts in the 1890s. In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1890 and 1898 authorized a fleet of 38 battleships, a vital threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era, British supremacy at sea had markedly weakened. In 1883, the United Kingdom had 38 battleships, twice as many as France and almost as many as the rest of the world put together. In 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan. The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile and Brazil all had second-rate fleets led by armored cruisers, coastal defence ships or monitors. Pre-dreadnoughts continued the technical innovations of the ironclad.
Turrets, armor plate, and steam engines were all improved over the years, and torpedo tubes were also introduced. A small number of designs, including the American Kearsarge and Virginia classes, experimented with all or part of the 8-inch intermediate battery superimposed over the 12-inch primary. Results were poor: recoil factors and blast effects resulted in the 8-inch battery being completely unusable, and the inability to train the primary and intermediate armaments on different targets led to significant tactical limitations. Even though such innovative designs saved weight (a key reason for their inception), they proved too cumbersome in practice.

Dreadnought era

In 1906, the British Royal Navy launched the revolutionary HMS Dreadnought. Created as a result of pressure from Admiral Sir John ("Jackie") Fisher, HMS Dreadnought rendered existing battleships obsolete. Combining an "all-big-gun" armament of ten 12-inch (305 mm) guns with unprecedented speed (from steam turbine engines) and protection, she prompted navies worldwide to re-evaluate their battleship building programs. While the Japanese had laid down an all-big-gun battleship, Satsuma, in 1904 and the concept of an all-big-gun ship had been in circulation for several years, it had yet to be validated in combat. Dreadnought sparked a new arms race, principally between Britain and Germany but reflected worldwide, as the new class of warships became a crucial element of national power. Technical development continued rapidly through the dreadnought era, with steep changes in armament, armor and propulsion. Ten years after Dreadnought's commissioning, much more powerful ships, the super-dreadnoughts, were being built.

Origin

In the first years of the 20th century, several navies worldwide experimented with the idea of a new type of battleship with a uniform armament of very heavy guns. Admiral Vittorio Cuniberti, the Italian Navy's chief naval architect, articulated the concept of an all-big-gun battleship in 1903. When the Regia Marina did not pursue his ideas, Cuniberti wrote an article in Jane's proposing an "ideal" future British battleship: a large armored warship of 17,000 tons, armed solely with a single-calibre main battery of twelve 12-inch (305 mm) guns, carrying belt armor, and capable of 24 knots (44 km/h). The Russo-Japanese War provided operational experience to validate the "all-big-gun" concept. During the Battle of the Yellow Sea on August 10, 1904, Admiral Togo of the Imperial Japanese Navy commenced deliberate 12-inch gun fire at the Russian flagship Tzesarevich at 14,200 yards (13,000 meters). At the Battle of Tsushima on May 27, 1905, Russian Admiral Rozhestvensky's flagship fired the first 12-inch guns at the Japanese flagship Mikasa at 7,000 meters. It is often held that these engagements demonstrated the importance of the big gun over its smaller counterparts, though some historians take the view that secondary batteries were just as important as the larger weapons when dealing with smaller fast-moving torpedo craft. Such was the case, albeit unsuccessfully, when the Russian battleship Knyaz Suvorov at Tsushima was sent to the bottom by destroyer-launched torpedoes. The 1903–04 design also retained traditional triple-expansion steam engines. As early as 1904, Jackie Fisher had been convinced of the need for fast, powerful ships with an all-big-gun armament. If Tsushima influenced his thinking, it was to persuade him of the need to standardise on big guns.
Fisher's concerns were submarines and destroyers equipped with torpedoes, then threatening to outrange battleship guns, making speed imperative for capital ships. Fisher's preferred option was his brainchild, the battlecruiser: lightly armored but heavily armed with eight 12-inch guns and propelled by steam turbines. It was to prove this revolutionary technology that Dreadnought was designed in January 1905, laid down in October 1905 and sped to completion by 1906. She carried ten 12-inch guns, had an 11-inch armor belt, and was the first large ship powered by turbines. She mounted her guns in five turrets: three on the centerline (one forward, two aft) and two on the wings, giving her at her launch twice the broadside of any other warship. She retained a number of 12-pound (3-inch, 76 mm) quick-firing guns for use against destroyers and torpedo-boats. Her armor was heavy enough for her to go head-to-head with any other ship in a gun battle, and conceivably win. Dreadnought was to have been followed by three battlecruisers of the Invincible class, their construction delayed to allow lessons from Dreadnought to be used in their design. While Fisher may have intended Dreadnought to be the last Royal Navy battleship, the design was so successful he found little support for his plan to switch to a battlecruiser navy. Although there were some problems with the ship (the wing turrets had limited arcs of fire and strained the hull when firing a full broadside, and the top of the thickest armor belt lay below the waterline at full load), the Royal Navy promptly commissioned another six ships to a similar design in the Bellerophon and St Vincent classes. An American design, South Carolina, authorized in 1905 and laid down in December 1906, was another of the first dreadnoughts, but she and her sister, Michigan, were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with Dreadnought's wing turrets. They thus retained the same broadside, despite having two fewer guns.

Arms race

In 1897, before the revolution in design brought about by Dreadnought, the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. From the 1906 launching of Dreadnought, an arms race with major strategic consequences was prompted. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after World War II, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes, while the Ottoman Empire, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards.

World War I

By virtue of geography, the Royal Navy was able to use her imposing battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany and kept Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo-boats and submarines could be used to even the odds.
This did not happen, however, due in large part to the necessity of keeping submarines for the Atlantic campaign. Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy successfully adopted convoy tactics to combat Germany's submarine counter-blockade and eventually defeated it. This was in stark contrast to Britain's successful blockade of Germany. The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser Emden, were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falklands, December 7, 1914. The results of sweeping actions in the North Sea were battles including the Heligoland Bight and Dogger Bank and German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet in an attempt to defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet-to-fleet battle. In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagements between Russian and Ottoman battleships were restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound, at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli. In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine U-9 in less than an hour. The British super-dreadnought Audacious soon followed suit, striking a mine laid by a German U-boat in October 1914 and sinking. The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change their strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships and casualties amongst cruisers led to growing concern in the Royal Navy about the vulnerability of battleships. As the war wore on, however, it turned out that while submarines did prove to be a very dangerous threat to older pre-dreadnought battleships, as shown by the sinking of the Ottoman Mesudiye, caught in the Dardanelles by a British submarine, and of Triumph and Majestic, torpedoed by U-21, among other losses, the threat posed to dreadnought battleships proved to have been largely a false alarm.
HMS Audacious turned out to be the only dreadnought sunk by a submarine in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship: HMS Dreadnought rammed and sank the German submarine U-29 on March 18, 1915, off the Moray Firth. Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed. Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought Goliath by the Ottoman torpedo boat Muâvenet-i Millîye during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought Szent István by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought Pommern, sunk by destroyers during the night phase of the Battle of Jutland. The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war.

Inter-war period

For many years, Germany simply had no battleships. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland. The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships. The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out. While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships. These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships.
Designs like the projected British N3 class, the first American South Dakota class, and equivalent Japanese projects, all of which continued the trend to larger ships with bigger guns and thicker armor, never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships.

Rise of air power

As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921 the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled The Command of the Air, which foresaw the dominance of air power over naval units. In the 1920s, General Billy Mitchell of the United States Army Air Corps, believing that air forces had rendered navies around the world obsolete, testified in front of Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship Ostfriesland and an American pre-dreadnought. Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless and had no damage control. The sinking of Ostfriesland was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules and sank the ship within minutes in a coordinated attack. The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put proponents of the battleship against naval aviation on the defensive. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward expansion of the U.S. Navy's nascent aircraft carrier program.

Rearmament

The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were an increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", such as in Queen Elizabeth and Warspite, a form which would be used in the new conning towers of the fast battleships. External bulges were added to improve buoyancy, counteracting the weight increase, and to provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though the Hiei received a more modern bridge tower that would influence the new Yamato class. Bulges were fitted, including steel tube arrays to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships (such as West Virginia and California) were rebuilt with tower masts, for an appearance similar to their contemporaries.
Radar, effective beyond visual range and in complete darkness or adverse weather, was introduced to supplement optical fire control. Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant that the capacity of dockyards worldwide had shrunk, and the strategic position had changed. In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare supplemented by the use of battlecruisers and commerce raiding (in particular by s). In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and rearmament construction plans consisted of five ships of the . It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the and es, and the Italians four ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend limited funds on aircraft carriers until the . Japan, also prioritising aircraft carriers, nevertheless began work on three mammoth Yamatos (although the third, , was later completed as a carrier); a planned fourth was cancelled. At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, and . España (originally named Alfonso XIII), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew aboard Jaime I remained loyal to the Republic; they killed their officers, who had apparently supported Franco's attempted coup, and joined the Republican Navy. Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging other surface units directly. In April 1937, España ran into a mine laid by friendly forces, and sank with little loss of life. In May 1937, Jaime I was damaged by Nationalist air attacks and a grounding incident, and was forced to return to port for repairs. There she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the transfer she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship outside Ibiza, causing severe damage and loss of life. The Germans retaliated two days later by bombarding Almería, causing much destruction, and the resulting Deutschland incident meant the end of German and Italian participation in non-intervention. World War II The —an obsolete pre-dreadnought—fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, . Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role. Battleships played a part in major engagements in the Atlantic, Pacific and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. 
However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers. In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. and surprised and sank the aircraft carrier off western Norway in June 1940. This engagement marked the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire on the French battleships in the harbor near Oran in Algeria with their heavy guns. The fleeing French ships were then pursued by planes from aircraft carriers. The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its effectiveness against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. Swordfish torpedo bombers of the same type later played a crucial role in sinking the German battleship . On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were out to sea, however, and evaded destruction. The sinking of the British battleship and battlecruiser demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941. At many of the early crucial battles of the Pacific, for instance the Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers. Even the largest battleships ever constructed, Japan's , which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning. The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk or damaged during the attack on Pearl Harbor and subsequently raised or repaired. fired the last major-caliber salvo of this battle. In April 1945, during the battle for Okinawa, the world's most powerful battleship, the Yamato, was sent out on a suicide mission against a massive U.S. force and was overwhelmed and sunk by carrier aircraft, with nearly all hands lost. After that, the remnants of the Japanese fleet in home waters were also destroyed by U.S. naval aircraft. Cold War After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. 
It soon became apparent that they were no longer worth the considerable cost of construction and maintenance, and only one new battleship was commissioned after the war, . During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of were the exception and not the rule; with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack, as tactical missiles with a range of or more could be mounted on the Soviet and s. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, were now capable of eliminating battleships from outside the range of the ships' heavy guns. The remaining battleships met a variety of ends. and were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two ships were scrapped in 1956. The French was scrapped in 1954, in 1968, and in 1970. The United Kingdom's four surviving ships were scrapped in 1957, and followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's was scrapped in 1953, in 1957 and (back under her original name, , since 1942) in 1956–57. Brazil's was scrapped in Genoa in 1953, and her sister ship sank during a storm in the Atlantic en route to the breakers in Italy in 1951. Argentina kept its two ships until 1956 and Chile kept (formerly ) until 1959. The Turkish battlecruiser (formerly , launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, , survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new s were abandoned following the death of Joseph Stalin in 1953. The three old German battleships , , and all met similar ends. Hessen was taken over by the Soviet Union and renamed Tsel. She was scrapped in 1960. Schleswig-Holstein was renamed Borodino, and was used as a target ship until 1960. Schlesien, too, was used as a target ship. She was broken up between 1952 and 1957. The s gained a new lease of life in the U.S. Navy as fire support ships. Radar-directed, computer-controlled gunfire could be aimed at targets with pinpoint accuracy. The U.S. recommissioned all four Iowa-class battleships for the Korean War and the for the Vietnam War. These were primarily used for shore bombardment, New Jersey firing nearly 6,000 16-inch shells and over 14,000 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War. As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of Kirov by the Soviet Union, the United States recommissioned all four Iowa-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. 
These were modernized to carry Tomahawk land-attack missiles (TLAMs), with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while and fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm, firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, one of which missed while the other was intercepted by the British destroyer . End of the battleship era After was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. An extended debate accompanied the final decommissioning of the four Iowa-class ships in the early 1990s. and were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review stated that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. The Military Balance stated that the U.S. Navy listed no battleships in the reserve in 2014. When the last Iowa-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: , , , , , , , and . Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. Wisconsin now serves as a museum ship in Norfolk, Virginia. Massachusetts, which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. Texas, the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site, near Houston, but as of 2021 is closed for repairs. North Carolina is on display in Wilmington, North Carolina. Alabama is on display in Mobile, Alabama. The wreck of , sunk during the Pearl Harbor attack in 1941, is designated a historic landmark and national gravesite. The wreck of , also sunk during the attack, is a historic landmark. The only other 20th-century battleship on display is the Japanese pre-dreadnought . A replica of the ironclad battleship was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China. Battleships previously used as museum ships included , SMS Tegetthoff, and SMS Erzherzog Franz Ferdinand. Strategy and doctrine Doctrine Battleships were the embodiment of sea power. For American naval officer Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was essential for the projection of force on land and overseas. Mahan's theory, proposed in The Influence of Sea Power Upon History, 1660–1783, published in 1890, dictated that the role of the battleship was to sweep the enemy from the seas. 
While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of "potential threat" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a "fleet in being"—an idle yet mighty fleet forcing others to spend time, resources and effort to actively guard against it. Mahan went on to say that victory could only be achieved by engagements between battleships, which came to be known as the decisive battle doctrine in some navies, while targeting merchant ships (commerce raiding or guerre de course, as posited by the Jeune École) could never succeed. Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleship arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era. The "fleet in being" concept suggested that battleships, simply by existing, could tie down superior enemy resources, and that this could tip the balance of a conflict even without a battle. A battleship fleet could therefore have an important strategic effect even for an inferior naval power. Tactics While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike the ships of the line that preceded them, the battleships of the late 19th and early 20th centuries were significantly vulnerable to torpedoes and mines, weapons that had not existed in effective form in earlier eras and that could be used by relatively small and inexpensive craft. The Jeune École doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was made less effective by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers. Battleship doctrine emphasized the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. (With the development of radio, direction finding and traffic analysis would come into play, as well, so even shore stations, broadly speaking, joined the battlegroup.) Thus, for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. 
The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time. Strategic and diplomatic impact The presence of battleships had a great psychological and diplomatic impact. Much as the possession of nuclear weapons does today, the ownership of battleships served to enhance a nation's force projection. Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS Missouri was dispatched to carry home the remains of the Turkish ambassador to the United States, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS New Jersey stopped the firing. Gunfire from New Jersey later killed militia leaders. Value for money Battleships were the largest, most complex, and hence most expensive warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Étienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The Jeune École school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the Jeune École were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft became available, allowing similar ideas to be implemented effectively. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.
Former operators
: lost its two Dingyuan-class battleships Dingyuan and Zhenyuan during the Battle of Weihaiwei in 1895.
: lost its entire navy following the collapse of the Empire at the end of World War I.
: its only battleship, KB Jugoslavija, was sunk by Italian frogmen during the 1918 Raid on Pula.
: lost its entire navy upon its conquest by the Bolsheviks in 1921.
: sole surviving battleship TCG Turgut Reis was decommissioned in 1933.
: lost its two surviving s during the Spanish Civil War, both in 1937.
: lost its two s during the German bombing of Salamis in 1941.
: scuttled its two surviving s in 1945, during the closing months of World War II.
: surrendered its sole surviving battleship, Nagato, to the United States following World War II.
: decommissioned its last battleship, Minas Geraes, in 1952.
: decommissioned its two s in 1953.
: decommissioned its last two s in 1956.
: decommissioned its last battleship, ARA Rivadavia, in 1957.
: decommissioned its last battleship, Almirante Latorre, in 1958.
: decommissioned its last battleship, HMS Vanguard, in 1960.
: decommissioned its last battleship, Jean Bart, in 1970.
: decommissioned its last battleship, USS Missouri, in 1992.
She was the last active battleship of any navy.
See also
Arsenal ship
List of battleships
List of sunken battleships
List of ships of World War II
List of battleships of World War I
List of battleships of World War II
The battlecruiser (also written as battle cruiser or battle-cruiser) was a type of capital ship of the first half of the 20th century. These were similar in displacement, armament and cost to battleships, but differed in form and balance of attributes. Battlecruisers typically had thinner armour (to a varying degree) and a somewhat lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power in order to attain greater speeds. The first battlecruisers were designed in the United Kingdom, as a development of the armoured cruiser, at the same time as the dreadnought succeeded the pre-dreadnought battleship. The goal of the design was to outrun any ship with similar armament, and chase down any ship with lesser armament; they were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships. Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned by the United States, Great Britain and Japan were scrapped or converted into aircraft carriers under the terms of the treaty. Improvements in armour design and propulsion created the 1930s "fast battleship" with the speed of a battlecruiser and armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use "battlecruiser" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships. Battlecruisers were put into action again during World War II, and only one survived to the end. There was also renewed interest in large "cruiser-killer" type warships, but few were ever begun, as construction of battleships and battlecruisers was curtailed in favor of more-needed convoy escorts, aircraft carriers, and cargo ships. Near the end, and after the Cold War era, the Soviet of large guided missile cruisers have been the only active ships termed "battlecruisers". Background The battlecruiser was developed by the Royal Navy in the first years of the 20th century as an evolution of the armoured cruiser. 
The first armoured cruisers had been built in the 1870s, as an attempt to give armour protection to ships fulfilling the typical cruiser roles of patrol, trade protection and power projection. However, the results were rarely satisfactory, as the weight of armour required for any meaningful protection usually meant that the ship became almost as slow as a battleship. As a result, navies preferred to build protected cruisers with an armoured deck protecting their engines, or simply no armour at all. In the 1890s, technology began to change this balance. New Krupp steel armour meant that it was now possible to give a cruiser side armour which would protect it against the quick-firing guns of enemy battleships and cruisers alike. In 1896–97 France and Russia, who were regarded as likely allies in the event of war, started to build large, fast armoured cruisers taking advantage of this. In the event of a war between Britain and France or Russia, or both, these cruisers threatened to cause serious difficulties for the British Empire's worldwide trade. Britain, which had concluded in 1892 that it needed twice as many cruisers as any potential enemy to adequately protect its empire's sea lanes, responded to the perceived threat by laying down its own large armoured cruisers. Between 1899 and 1905, it completed or laid down seven classes of this type, a total of 35 ships. This building program, in turn, prompted the French and Russians to increase their own construction. The Imperial German Navy began to build large armoured cruisers for use on their overseas stations, laying down eight between 1897 and 1906. The cost of this cruiser arms race was significant. In the period 1889–1896, the Royal Navy spent £7.3 million on new large cruisers. From 1897 to 1904, it spent £26.9 million. Many armoured cruisers of the new kind were just as large and expensive as the equivalent battleship. The increasing size and power of the armoured cruiser led to suggestions in British naval circles that cruisers should displace battleships entirely. The battleship's main advantage was its 12-inch heavy guns, and heavier armour designed to protect from shells of similar size. However, for a few years after 1900 it seemed that those advantages were of little practical value. The torpedo now had a range of 2,000 yards, and it seemed unlikely that a battleship would engage within torpedo range. However, at ranges of more than 2,000 yards it became increasingly unlikely that the heavy guns of a battleship would score any hits, as the heavy guns relied on primitive aiming techniques. The secondary batteries of 6-inch quick-firing guns, firing more plentiful shells, were more likely to hit the enemy. As naval expert Fred T. Jane wrote in June 1902, "Is there anything outside of 2,000 yards that the big gun in its hundreds of tons of medieval castle can affect, that its weight in 6-inch guns without the castle could not affect equally well? And inside 2,000, what, in these days of gyros, is there that the torpedo cannot effect with far more certainty?" In 1904, Admiral John "Jacky" Fisher became First Sea Lord, the senior officer of the Royal Navy. He had for some time thought about the development of a new fast armoured ship. He was very fond of the "second-class battleship" , a faster, more lightly armoured battleship. As early as 1901, there is confusion in Fisher's writing about whether he saw the battleship or the cruiser as the model for future developments. 
This did not stop him from commissioning designs from naval architect W. H. Gard for an armoured cruiser with the heaviest possible armament for use with the fleet. The design Gard submitted was for a ship between , capable of , armed with four 9.2-inch and twelve guns in twin gun turrets and protected with six inches of armour along her belt and 9.2-inch turrets, on her 7.5-inch turrets, 10 inches on her conning tower and up to on her decks. However, mainstream British naval thinking between 1902 and 1904 was clearly in favour of heavily armoured battleships, rather than the fast ships that Fisher favoured. The Battle of Tsushima proved conclusively the effectiveness of heavy guns over intermediate ones and the need for a uniform main caliber on a ship for fire control. Even before this, the Royal Navy had begun to consider a shift away from the mixed-calibre armament of the 1890s pre-dreadnought to an "all-big-gun" design, and preliminary designs circulated for battleships with all 12-inch or all 10-inch guns and armoured cruisers with all 9.2-inch guns. In late 1904, not long after the Royal Navy had decided to use 12-inch guns for its next generation of battleships because of their superior performance at long range, Fisher began to argue that big-gun cruisers could replace battleships altogether. The continuing improvement of the torpedo meant that submarines and destroyers would be able to destroy battleships; this in Fisher's view heralded the end of the battleship or at least compromised the validity of heavy armour protection. Nevertheless, armoured cruisers would remain vital for commerce protection. Fisher's views were very controversial within the Royal Navy, and even given his position as First Sea Lord, he was not in a position to insist on his own approach. Thus he assembled a "Committee on Designs", consisting of a mixture of civilian and naval experts, to determine the approach to both battleship and armoured cruiser construction in the future. While the stated purpose of the committee was to investigate and report on future requirements of ships, Fisher and his associates had already made key decisions. The terms of reference for the committee were for a battleship capable of with 12-inch guns and no intermediate calibres, capable of docking in existing drydocks; and a cruiser capable of , also with 12-inch guns and no intermediate armament, armoured like , the most recent armoured cruiser, and also capable of using existing docks. First battlecruisers Under the Selborne plan of 1902, the Royal Navy intended to start three new battleships and four armoured cruisers each year. However, in late 1904 it became clear that the 1905–1906 programme would have to be considerably smaller, because of lower than expected tax revenue and the need to buy out two Chilean battleships under construction in British yards, lest they be purchased by the Russians for use against the Japanese, Britain's ally. These economies meant that the 1905–1906 programme consisted only of one battleship, but three armoured cruisers. The battleship became the revolutionary battleship , and the cruisers became the three ships of the . Fisher later claimed, however, that he had argued during the committee for the cancellation of the remaining battleship. The construction of the new class was begun in 1906 and completed in 1908, delayed perhaps to allow their designers to learn from any problems with Dreadnought. The ships fulfilled the design requirement quite closely. 
On a displacement similar to Dreadnought, the Invincibles were longer to accommodate additional boilers and more powerful turbines to propel them at . Moreover, the new ships could maintain this speed for days, whereas pre-dreadnought battleships could not generally do so for more than an hour. Armed with eight 12-inch Mk X guns, compared to ten on Dreadnought, they had of armour protecting the hull and the gun turrets. (Dreadnoughts armour, by comparison, was at its thickest.) The class had a very marked increase in speed, displacement and firepower compared to the most recent armoured cruisers but no more armour. While the Invincibles were to fill the same role as the armoured cruisers they succeeded, they were expected to do so more effectively. Specifically their roles were: Heavy reconnaissance. Because of their power, the Invincibles could sweep away the screen of enemy cruisers to close with and observe an enemy battlefleet before using their superior speed to retire. Close support for the battle fleet. They could be stationed at the ends of the battle line to stop enemy cruisers harassing the battleships, and to harass the enemy's battleships if they were busy fighting battleships. Also, the Invincibles could operate as the fast wing of the battlefleet and try to outmanoeuvre the enemy. Pursuit. If an enemy fleet ran, then the Invincibles would use their speed to pursue, and their guns to damage or slow enemy ships. Commerce protection. The new ships would hunt down enemy cruisers and commerce raiders. Confusion about how to refer to these new battleship-size armoured cruisers set in almost immediately. Even in late 1905, before work was begun on the Invincibles, a Royal Navy memorandum refers to "large armoured ships" meaning both battleships and large cruisers. In October 1906, the Admiralty began to classify all post-Dreadnought battleships and armoured cruisers as "capital ships", while Fisher used the term "dreadnought" to refer either to his new battleships or the battleships and armoured cruisers together. At the same time, the Invincible class themselves were referred to as "cruiser-battleships", "dreadnought cruisers"; the term "battlecruiser" was first used by Fisher in 1908. Finally, on 24 November 1911, Admiralty Weekly Order No. 351 laid down that "All cruisers of the "Invincible" and later types are for the future to be described and classified as "battle cruisers" to distinguish them from the armoured cruisers of earlier date." Along with questions over the new ships' nomenclature came uncertainty about their actual role due to their lack of protection. If they were primarily to act as scouts for the battle fleet and hunter-killers of enemy cruisers and commerce raiders, then the seven inches of belt armour with which they had been equipped would be adequate. If, on the other hand, they were expected to reinforce a battle line of dreadnoughts with their own heavy guns, they were too thin-skinned to be safe from an enemy's heavy guns. The Invincibles were essentially extremely large, heavily armed, fast armoured cruisers. However, the viability of the armoured cruiser was already in doubt. A cruiser that could have worked with the Fleet might have been a more viable option for taking over that role. Because of the Invincibles size and armament, naval authorities considered them capital ships almost from their inception—an assumption that might have been inevitable. 
Complicating matters further was that many naval authorities, including Lord Fisher, had made overoptimistic assessments from the Battle of Tsushima in 1905 about the armoured cruiser's ability to survive in a battle line against enemy capital ships due to their superior speed. These assumptions had been made without taking into account the Russian Baltic Fleet's inefficiency and tactical ineptitude. By the time the term "battlecruiser" had been given to the Invincibles, the idea of their parity with battleships had been fixed in many people's minds. Not everyone was so convinced. Brasseys Naval Annual, for instance, stated that with vessels as large and expensive as the Invincibles, an admiral "will be certain to put them in the line of battle where their comparatively light protection will be a disadvantage and their high speed of no value." Those in favor of the battlecruiser countered with two points—first, since all capital ships were vulnerable to new weapons such as the torpedo, armour had lost some of its validity; and second, because of its greater speed, the battlecruiser could control the range at which it engaged an enemy. Battlecruisers in the dreadnought arms race Between the launching of the Invincibles to just after the outbreak of the First World War, the battlecruiser played a junior role in the developing dreadnought arms race, as it was never wholeheartedly adopted as the key weapon in British imperial defence, as Fisher had presumably desired. The biggest factor for this lack of acceptance was the marked change in Britain's strategic circumstances between their conception and the commissioning of the first ships. The prospective enemy for Britain had shifted from a Franco-Russian alliance with many armoured cruisers to a resurgent and increasingly belligerent Germany. Diplomatically, Britain had entered the Entente cordiale in 1904 and the Anglo-Russian Entente. Neither France nor Russia posed a particular naval threat; the Russian navy had largely been sunk or captured in the Russo-Japanese War of 1904–1905, while the French were in no hurry to adopt the new dreadnought-type design. Britain also boasted very cordial relations with two of the significant new naval powers: Japan (bolstered by the Anglo-Japanese Alliance, signed in 1902 and renewed in 1905), and the US. These changed strategic circumstances, and the great success of the Dreadnought ensured that she rather than the Invincible became the new model capital ship. Nevertheless, battlecruiser construction played a part in the renewed naval arms race sparked by the Dreadnought. For their first few years of service, the Invincibles entirely fulfilled Fisher's vision of being able to sink any ship fast enough to catch them, and run from any ship capable of sinking them. An Invincible would also, in many circumstances, be able to take on an enemy pre-dreadnought battleship. Naval circles concurred that the armoured cruiser in its current form had come to the logical end of its development and the Invincibles were so far ahead of any enemy armoured cruiser in firepower and speed that it proved difficult to justify building more or bigger cruisers. This lead was extended by the surprise both Dreadnought and Invincible produced by having been built in secret; this prompted most other navies to delay their building programmes and radically revise their designs. 
This was particularly true for cruisers, because the details of the Invincible class were kept secret for longer; this meant that the last German armoured cruiser, , was armed with only guns, and was no match for the new battlecruisers. The Royal Navy's early superiority in capital ships led to the rejection of a 1905–1906 design that would, essentially, have fused the battlecruiser and battleship concepts into what would eventually become the fast battleship. The 'X4' design combined the full armour and armament of Dreadnought with the 25-knot speed of Invincible. The additional cost could not be justified given the existing British lead and the new Liberal government's need for economy; the slower and cheaper , a relatively close copy of Dreadnought, was adopted instead. The X4 concept would eventually be fulfilled in the and later by other navies. The next British battlecruisers were the three , slightly improved Invincibles built to fundamentally the same specification, partly due to political pressure to limit costs and partly due to the secrecy surrounding German battlecruiser construction, particularly about the heavy armour of . This class came to be widely seen as a mistake and the next generation of British battlecruisers were markedly more powerful. By 1909–1910 a sense of national crisis about rivalry with Germany outweighed cost-cutting, and a naval panic resulted in the approval of a total of eight capital ships in 1909–1910. Fisher pressed for all eight to be battlecruisers, but was unable to have his way; he had to settle for six battleships and two battlecruisers of the . The Lions carried eight 13.5-inch guns, the now-standard caliber of the British "super-dreadnought" battleships. Speed increased to and armour protection, while not as good as in German designs, was better than in previous British battlecruisers, with armour belt and barbettes. The two Lions were followed by the very similar . By 1911 Germany had built battlecruisers of her own, and the superiority of the British ships could no longer be assured. Moreover, the German Navy did not share Fisher's view of the battlecruiser. In contrast to the British focus on increasing speed and firepower, Germany progressively improved the armour and staying power of their ships to better the British battlecruisers. Von der Tann, begun in 1908 and completed in 1910, carried eight 11.1-inch guns, but with 11.1-inch (283 mm) armour she was far better protected than the Invincibles. The two s were quite similar but carried ten 11.1-inch guns of an improved design. , designed in 1909 and finished in 1913, was a modified Moltke; speed increased by one knot to , while her armour had a maximum thickness of 12 inches, equivalent to the s of a few years earlier. Seydlitz was Germany's last battlecruiser completed before World War I. The next step in battlecruiser design came from Japan. The Imperial Japanese Navy had been planning the ships from 1909, and was determined that, since the Japanese economy could support relatively few ships, each would be more powerful than its likely competitors. Initially the class was planned with the Invincibles as the benchmark. On learning of the British plans for Lion, and the likelihood that new U.S. Navy battleships would be armed with guns, the Japanese decided to radically revise their plans and go one better. A new plan was drawn up, carrying eight 14-inch guns, and capable of , thus marginally having the edge over the Lions in speed and firepower. 
The heavy guns were also better-positioned, being superfiring both fore and aft with no turret amidships. The armour scheme was also marginally improved over the Lions, with nine inches of armour on the turrets and on the barbettes. The first ship in the class was built in Britain, and a further three constructed in Japan. The Japanese also re-classified their powerful armoured cruisers of the Tsukuba and Ibuki classes, carrying four 12-inch guns, as battlecruisers; nonetheless, their armament was weaker and they were slower than any battlecruiser. The next British battlecruiser, , was intended initially as the fourth ship in the Lion class, but was substantially redesigned. She retained the eight 13.5-inch guns of her predecessors, but they were positioned like those of Kongō for better fields of fire. She was faster (making on sea trials), and carried a heavier secondary armament. Tiger was also more heavily armoured on the whole; while the maximum thickness of armour was the same at nine inches, the height of the main armour belt was increased. Not all the desired improvements for this ship were approved, however. Her designer, Sir Eustace Tennyson d'Eyncourt, had wanted small-bore water-tube boilers and geared turbines to give her a speed of , but he received no support from the authorities and the engine makers refused his request. 1912 saw work begin on three more German battlecruisers of the , the first German battlecruisers to mount 12-inch guns. These ships, like Tiger and the Kongōs, had their guns arranged in superfiring turrets for greater efficiency. Their armour and speed was similar to the previous Seydlitz class. In 1913, the Russian Empire also began the construction of the four-ship , which were designed for service in the Baltic Sea. These ships were designed to carry twelve 14-inch guns, with armour up to 12 inches thick, and a speed of . The heavy armour and relatively slow speed of these ships made them more similar to German designs than to British ships; construction of the Borodinos was halted by the First World War and all were scrapped after the end of the Russian Civil War. World War I Construction For most of the combatants, capital ship construction was very limited during the war. Germany finished the Derfflinger class and began work on the . The Mackensens were a development of the Derfflinger class, with 13.8-inch guns and a broadly similar armour scheme, designed for . In Britain, Jackie Fisher returned to the office of First Sea Lord in October 1914. His enthusiasm for big, fast ships was unabated, and he set designers to producing a design for a battlecruiser with 15-inch guns. Because Fisher expected the next German battlecruiser to steam at 28 knots, he required the new British design to be capable of 32 knots. He planned to reorder two s, which had been approved but not yet laid down, to a new design. Fisher finally received approval for this project on 28 December 1914 and they became the . With six 15-inch guns but only 6-inch armour they were a further step forward from Tiger in firepower and speed, but returned to the level of protection of the first British battlecruisers. At the same time, Fisher resorted to subterfuge to obtain another three fast, lightly armoured ships that could use several spare gun turrets left over from battleship construction. These ships were essentially light battlecruisers, and Fisher occasionally referred to them as such, but officially they were classified as large light cruisers. 
This unusual designation was required because construction of new capital ships had been placed on hold, while there were no limits on light cruiser construction. They became and her sisters and , and there was a bizarre imbalance between their main guns of 15 inches (or in Furious) and their armour, which at thickness was on the scale of a light cruiser. The design was generally regarded as a failure (nicknamed in the Fleet Outrageous, Uproarious and Spurious), though the later conversion of the ships to aircraft carriers was very successful. Fisher also speculated about a new mammoth, but lightly built battlecruiser, that would carry guns, which he termed ; this never got beyond the concept stage. It is often held that the Renown and Courageous classes were designed for Fisher's plan to land troops (possibly Russian) on the German Baltic coast. Specifically, they were designed with a reduced draught, which might be important in the shallow Baltic. This is not clear-cut evidence that the ships were designed for the Baltic: it was considered that earlier ships had too much draught and not enough freeboard under operational conditions. Roberts argues that the focus on the Baltic was probably unimportant at the time the ships were designed, but was inflated later, after the disastrous Dardanelles Campaign. The final British battlecruiser design of the war was the , which was born from a requirement for an improved version of the Queen Elizabeth battleship. The project began at the end of 1915, after Fisher's final departure from the Admiralty. While initially envisaged as a battleship, senior sea officers felt that Britain had enough battleships, but that new battlecruisers might be required to combat German ships being built (the British overestimated German progress on the Mackensen class as well as their likely capabilities). A battlecruiser design with eight 15-inch guns, 8 inches of armour and capable of 32 knots was decided on. The experience of battlecruisers at the Battle of Jutland meant that the design was radically revised and transformed again into a fast battleship with armour up to 12 inches thick, but still capable of . The first ship in the class, , was built according to this design to counter the possible completion of any of the Mackensen-class ship. The plans for her three sisters, on which little work had been done, were revised once more later in 1916 and in 1917 to improve protection. The Admiral class would have been the only British ships capable of taking on the German Mackensen class; nevertheless, German shipbuilding was drastically slowed by the war, and while two Mackensens were launched, none were ever completed. The Germans also worked briefly on a further three ships, of the , which were modified versions of the Mackensens with 15-inch guns. Work on the three additional Admirals was suspended in March 1917 to enable more escorts and merchant ships to be built to deal with the new threat from U-boats to trade. They were finally cancelled in February 1919. Battlecruisers in action The first combat involving battlecruisers during World War I was the Battle of Heligoland Bight in August 1914. A force of British light cruisers and destroyers entered the Heligoland Bight (the part of the North Sea closest to Hamburg) to attack German destroyer patrols. 
When they met opposition from light cruisers, Vice Admiral David Beatty took his squadron of five battlecruisers into the Bight and turned the tide of the battle, ultimately sinking three German light cruisers and killing their commander, Rear Admiral Leberecht Maass. The German battlecruiser perhaps made the most impact early in the war. Stationed in the Mediterranean, she and the escorting light cruiser evaded British and French ships on the outbreak of war, and steamed to Constantinople (Istanbul) with two British battlecruisers in hot pursuit. The two German ships were handed over to the Ottoman Navy, and this was instrumental in bringing the Ottoman Empire into the war as one of the Central Powers. Goeben herself, renamed Yavuz Sultan Selim, fought engagements against the Imperial Russian Navy in the Black Sea before being knocked out of the action for the remainder of the war after the Battle of Imbros against British forces in the Aegean Sea in January 1918. The original battlecruiser concept proved successful in December 1914 at the Battle of the Falkland Islands. The British battlecruisers and did precisely the job for which they were intended when they chased down and annihilated the German East Asia Squadron, centered on the armoured cruisers and , along with three light cruisers, commanded by Admiral Maximilian Graf Von Spee, in the South Atlantic Ocean. Prior to the battle, the Australian battlecruiser had unsuccessfully searched for the German ships in the Pacific. During the Battle of Dogger Bank in 1915, the aftermost barbette of the German flagship Seydlitz was struck by a British 13.5-inch shell from HMS Lion. The shell did not penetrate the barbette, but it dislodged a piece of the barbette armour that allowed the flame from the shell's detonation to enter the barbette. The propellant charges being hoisted upwards were ignited, and the fireball flashed up into the turret and down into the magazine, setting fire to charges removed from their brass cartridge cases. The gun crew tried to escape into the next turret, which allowed the flash to spread into that turret as well, killing the crews of both turrets. Seydlitz was saved from near-certain destruction only by emergency flooding of her after magazines, which had been effected by Wilhelm Heidkamp. This near-disaster was due to the way that ammunition handling was arranged and was common to both German and British battleships and battlecruisers, but the lighter protection on the latter made them more vulnerable to the turret or barbette being penetrated. The Germans learned from investigating the damaged Seydlitz and instituted measures to ensure that ammunition handling minimised any possible exposure to flash. Apart from the cordite handling, the battle was mostly inconclusive, though both the British flagship Lion and Seydlitz were severely damaged. Lion lost speed, causing her to fall behind the rest of the battleline, and Beatty was unable to effectively command his ships for the remainder of the engagement. A British signalling error allowed the German battlecruisers to withdraw, as most of Beatty's squadron mistakenly concentrated on the crippled armoured cruiser Blücher, sinking her with great loss of life. The British blamed their failure to win a decisive victory on their poor gunnery and attempted to increase their rate of fire by stockpiling unprotected cordite charges in their ammunition hoists and barbettes. 
At the Battle of Jutland on 31 May 1916, both British and German battlecruisers were employed as fleet units. The British battlecruisers became engaged with both their German counterparts, the battlecruisers, and then German battleships before the arrival of the battleships of the British Grand Fleet. The result was a disaster for the Royal Navy's battlecruiser squadrons: Invincible, Queen Mary, and exploded with the loss of all but a handful of their crews. The exact reason why the ships' magazines detonated is not known, but the plethora of exposed cordite charges stored in their turrets, ammunition hoists and working chambers in the quest to increase their rate of fire undoubtedly contributed to their loss. Beatty's flagship Lion herself was almost lost in a similar manner, save for the heroic actions of Major Francis Harvey. The better-armoured German battlecruisers fared better, in part due to the poor performance of British fuzes (the British shells tended to explode or break up on impact with the German armour). —the only German battlecruiser lost at Jutland—had only 128 killed, for instance, despite receiving more than thirty hits. The other German battlecruisers, , Von der Tann, Seydlitz, and , were all heavily damaged and required extensive repairs after the battle, Seydlitz barely making it home, for they had been the focus of British fire for much of the battle. Interwar period In the years immediately after World War I, Britain, Japan and the US all began design work on a new generation of ever more powerful battleships and battlecruisers. The new burst of shipbuilding that each nation's navy desired was politically controversial and potentially economically crippling. This nascent arms race was prevented by the Washington Naval Treaty of 1922, where the major naval powers agreed to limits on capital ship numbers. The German navy was not represented at the talks; under the terms of the Treaty of Versailles, Germany was not allowed any modern capital ships at all. Through the 1920s and 1930s only Britain and Japan retained battlecruisers, often modified and rebuilt from their original designs. The line between the battlecruiser and the modern fast battleship became blurred; indeed, the Japanese Kongōs were formally redesignated as battleships after their very comprehensive reconstruction in the 1930s. Plans in the aftermath of World War I Hood, launched in 1918, was the last World War I battlecruiser to be completed. Owing to lessons from Jutland, the ship was modified during construction; the thickness of her belt armour was increased by an average of 50 percent and extended substantially, she was given heavier deck armour, and the protection of her magazines was improved to guard against the ignition of ammunition. This was hoped to be capable of resisting her own weapons—the classic measure of a "balanced" battleship. Hood was the largest ship in the Royal Navy when completed; thanks to her great displacement, in theory she combined the firepower and armour of a battleship with the speed of a battlecruiser, causing some to refer to her as a fast battleship. However, her protection was markedly less than that of the British battleships built immediately after World War I, the . The navies of Japan and the United States, not being affected immediately by the war, had time to develop new heavy guns for their latest designs and to refine their battlecruiser designs in light of combat experience in Europe. The Imperial Japanese Navy began four s. 
These vessels would have been of unprecedented size and power, as fast and well armoured as Hood whilst carrying a main battery of ten 16-inch guns, the most powerful armament ever proposed for a battlecruiser. They were, for all intents and purposes, fast battleships—the only differences between them and the Tosa-class battleships which were to precede them were less side armour and an increase in speed. The United States Navy, which had worked on its battlecruiser designs since 1913 and watched the latest developments in this class with great care, responded with the Lexington class. If completed as planned, they would have been exceptionally fast and well armed with eight 16-inch guns, but carried armour little better than the Invincibles—this after an increase in protection following Jutland. The final stage in the post-war battlecruiser race came with the British response to the Amagi and Lexington types: four G3 battlecruisers. Royal Navy documents of the period often described any battleship above a certain speed as a battlecruiser, regardless of the amount of protective armour, although the G3 was considered by most to be a well-balanced fast battleship. The Washington Naval Treaty meant that none of these designs came to fruition. Ships that had been started were either broken up on the slipway or converted to aircraft carriers. In Japan, Amagi and Akagi were selected for conversion. Amagi was damaged beyond repair by the 1923 Great Kantō earthquake and was broken up for scrap; the hull of one of the proposed Tosa-class battleships, Kaga, was converted in her stead. The United States Navy also converted two battlecruiser hulls into aircraft carriers in the wake of the Washington Treaty: Lexington and Saratoga, although this was only considered marginally preferable to scrapping the hulls outright (the remaining four: Constellation, Ranger, Constitution and United States were scrapped). In Britain, Fisher's "large light cruisers" were converted to carriers. Furious had already been partially converted during the war, and Glorious and Courageous were similarly converted.
Rebuilding programmes
In total, nine battlecruisers survived the Washington Naval Treaty, although HMS Tiger later became a victim of the London Naval Conference 1930 and was scrapped. Because their high speed made them valuable surface units in spite of their weaknesses, most of these ships were significantly updated before World War II. Renown and Repulse were modernized significantly in the 1920s and 1930s. Between 1934 and 1936, Repulse was partially modernized and had her bridge modified, an aircraft hangar, catapult and new gunnery equipment added and her anti-aircraft armament increased. Renown underwent a more thorough reconstruction between 1937 and 1939. Her deck armour was increased, new turbines and boilers were fitted, an aircraft hangar and catapult added and she was completely rearmed aside from the main guns, which had their elevation increased to +30 degrees. The bridge structure was also removed and a large bridge similar to that used in the King George V-class battleships installed in its place. While conversions of this kind generally added weight to the vessel, Renown's tonnage actually decreased due to a substantially lighter power plant. Similar thorough rebuildings planned for Repulse and Hood were cancelled due to the advent of World War II.
Unable to build new ships, the Imperial Japanese Navy also chose to improve its existing battlecruisers of the Kongō class (initially the Kongō, Haruna, and Kirishima—the Hiei only later, as she had been disarmed under the terms of the Washington treaty) in two substantial reconstructions (one for Hiei). During the first of these, elevation of their main guns was increased to +40 degrees, anti-torpedo bulges and additional horizontal armour added, and a "pagoda" mast with additional command positions built up. This reduced the ships' speed. The second reconstruction focused on speed as they had been selected as fast escorts for aircraft carrier task forces. Completely new main engines, a reduced number of boilers and an increase in hull length allowed them to reach up to 30 knots once again. They were reclassified as "fast battleships," although their armour and guns still fell short compared to surviving World War I–era battleships in the American or the British navies, with dire consequences during the Pacific War, when Hiei and Kirishima were easily crippled by US gunfire during actions off Guadalcanal, forcing their scuttling shortly afterwards. Perhaps most tellingly, Hiei was crippled by medium-caliber gunfire from heavy and light cruisers in a close-range night engagement.
There were two exceptions: Turkey's Yavuz Sultan Selim and the Royal Navy's Hood. The Turkish Navy made only minor improvements to the ship in the interwar period, which primarily focused on repairing wartime damage and the installation of new fire control systems and anti-aircraft batteries. Hood was in constant service with the fleet and could not be withdrawn for an extended reconstruction. She received minor improvements over the course of the 1930s, including modern fire control systems, increased numbers of anti-aircraft guns, and in March 1941, radar.
Naval rearmament
In the late 1930s navies began to build capital ships again, and during this period a number of large commerce raiders and small, fast battleships were built that are sometimes referred to as battlecruisers. Germany and Russia designed new battlecruisers during this period, though only the latter laid down two of the 35,000-ton Kronshtadt class. They were still on the slipways when the Germans invaded in 1941 and construction was suspended. Both ships were scrapped after the war. The Germans planned three battlecruisers of the O class as part of the expansion of the Kriegsmarine (Plan Z). With six 15-inch guns, high speed, excellent range, but very thin armour, they were intended as commerce raiders. Only one was ordered shortly before World War II; no work was ever done on it. No names were assigned, and they were known by their contract names: 'O', 'P', and 'Q'. The new class was not universally welcomed in the Kriegsmarine. Their abnormally light protection gained the class the derogatory nickname Ohne Panzer Quatsch (without armour nonsense) within certain circles of the Navy.
World War II
The Royal Navy deployed some of its battlecruisers during the Norwegian Campaign in April 1940. The Scharnhorst and the Gneisenau were engaged during the action off Lofoten by Renown in very bad weather and disengaged after Gneisenau was damaged. One of Renown's 15-inch shells passed through Gneisenau's director-control tower without exploding, severing electrical and communication cables as it went and destroying the rangefinders for the forward 150 mm (5.9 in) turrets. Main-battery fire control had to be shifted aft due to the loss of electrical power. Another shell from Renown knocked out Gneisenau's aft turret.
The British ship was struck twice by German shells that failed to inflict any significant damage. She was the only pre-war battlecruiser to survive the war. In the early years of the war various German ships had a measure of success hunting merchant ships in the Atlantic. Allied battlecruisers such as Renown, Repulse, and the fast battleships Dunkerque and Strasbourg were employed on operations to hunt down the commerce-raiding German ships. The one stand-up fight occurred when the battleship Bismarck and the heavy cruiser Prinz Eugen sortied into the North Atlantic to attack British shipping and were intercepted by Hood and the battleship Prince of Wales in May 1941 in the Battle of the Denmark Strait. The elderly British battlecruiser was no match for the modern German battleship: within minutes, Bismarck's 15-inch shells caused a magazine explosion in Hood reminiscent of the Battle of Jutland. Only three men survived.
The first battlecruiser to see action in the Pacific War was Repulse when she was sunk by Japanese torpedo bombers north of Singapore on 10 December 1941 whilst in company with Prince of Wales. She was lightly damaged by a single bomb and near-missed by two others in the first Japanese attack. Her speed and agility enabled her to avoid the other attacks by level bombers and dodge 33 torpedoes. The last group of torpedo bombers attacked from multiple directions and Repulse was struck by five torpedoes. She quickly capsized with the loss of 27 officers and 486 crewmen; 42 officers and 754 enlisted men were rescued by the escorting destroyers. The loss of Repulse and Prince of Wales conclusively proved the vulnerability of capital ships to aircraft without air cover of their own.
The Japanese Kongō-class battlecruisers were extensively used as carrier escorts for most of their wartime career due to their high speed. Their World War I–era armament was weaker and their upgraded armour was still thin compared to contemporary battleships. On 13 November 1942, during the First Naval Battle of Guadalcanal, Hiei stumbled across American cruisers and destroyers at point-blank range. The ship was badly damaged in the encounter and had to be towed by her sister ship Kirishima. Both were spotted by American aircraft the following morning and Kirishima was forced to cast off her tow because of repeated aerial attacks. Hiei's captain ordered her crew to abandon ship after further damage and scuttled Hiei in the early evening of 14 November. On the night of 14/15 November during the Second Naval Battle of Guadalcanal, Kirishima returned to Ironbottom Sound, but encountered the American battleships Washington and South Dakota. While failing to detect Washington, Kirishima engaged South Dakota with some effect. Washington opened fire a few minutes later at short range and badly damaged Kirishima, knocking out her aft turrets, jamming her rudder, and hitting the ship below the waterline. The flooding proved to be uncontrollable and Kirishima capsized three and a half hours later. Returning to Japan after the Battle of Leyte Gulf, Kongō was torpedoed and sunk by the American submarine Sealion on 21 November 1944. Haruna was moored at Kure, Japan when the naval base was attacked by American carrier aircraft on 24 and 28 July 1945. The ship was only lightly damaged by a single bomb hit on 24 July, but was hit a dozen more times on 28 July and sank at her pier. She was refloated after the war and scrapped in early 1946.
Large cruisers or "cruiser killers"
A late renaissance in popularity of ships between battleships and cruisers in size occurred on the eve of World War II. Described by some as battlecruisers, but never classified as capital ships, they were variously described as "super cruisers", "large cruisers" or even "unrestricted cruisers". The Dutch, American, and Japanese navies all planned these new classes specifically to counter the heavy cruisers, or their counterparts, being built by their naval rivals. The first such battlecruisers were the Dutch Design 1047, designed to protect their colonies in the East Indies in the face of Japanese aggression. Never officially assigned names, these ships were designed with German and Italian assistance. While they broadly resembled the German Scharnhorst class and had the same main battery, they would have been more lightly armoured and only protected against eight-inch gunfire. Although the design was mostly completed, work on the vessels never commenced as the Germans overran the Netherlands in May 1940. The first ship would have been laid down in June of that year.
The only class of these late battlecruisers actually built were the United States Navy's Alaska-class "large cruisers". Two of them were completed, Alaska and Guam; a third, Hawaii, was cancelled while under construction and three others, to be named Philippines, Puerto Rico and Samoa, were cancelled before they were laid down. They were classified as "large cruisers" instead of battlecruisers. These ships were named after territories or protectorates. (Battleships were named after states and cruisers after cities.) With a main armament of nine 12-inch guns in three triple turrets, the Alaskas were twice the size of Baltimore-class cruisers and had guns some 50% larger in diameter. They lacked the thick armoured belt and intricate torpedo defence system of true capital ships. However, unlike most battlecruisers, they were considered a balanced design according to cruiser standards as their protection could withstand fire from their own caliber of gun, albeit only in a very narrow range band. They were designed to hunt down Japanese heavy cruisers, though by the time they entered service most Japanese cruisers had been sunk by American aircraft or submarines. Like the contemporary fast battleships, their speed ultimately made them more useful as carrier escorts and bombardment ships than as the surface combatants they were developed to be.
The Japanese started designing the B64 class, which was similar to the Alaska but with 310 mm guns. News of the Alaskas led them to upgrade the design, creating Design B-65. Armed with 356 mm guns, the B65s would have been the best armed of the new breed of battlecruisers, but they still would have had only sufficient protection to keep out eight-inch shells. Much like the Dutch, the Japanese got as far as completing the design for the B65s, but never laid them down. By the time the designs were ready the Japanese Navy recognized that they had little use for the vessels and that their priority for construction should lie with aircraft carriers. As with the Alaskas, the Japanese did not call these ships battlecruisers, referring to them instead as super-heavy cruisers.
Cold War–era designs
Although most navies abandoned the battleship and battlecruiser concepts after World War II, Joseph Stalin's fondness for big-gun-armed warships caused the Soviet Union to plan a large cruiser class in the late 1940s. In the Soviet Navy, they were termed "heavy cruisers" (tjazholyj krejser).
The fruits of this program were the Project 82 (Stalingrad) cruisers, mounting nine 305 mm guns and designed for high speed. Three ships were laid down in 1951–1952, but they were cancelled in April 1953 after Stalin's death. Only the central armoured hull section of the first ship, Stalingrad, was launched in 1954 and then used as a target.
The Soviet Kirov class is sometimes referred to as a battlecruiser. This description arises from their great displacement, which is roughly equal to that of a First World War battleship and more than twice the displacement of contemporary cruisers; upon entry into service, Kirov was the largest surface combatant to be built since World War II. The Kirov class lacks the armour that distinguishes battlecruisers from ordinary cruisers and they are classified as heavy nuclear-powered missile cruisers (tyazhyolyy atomnyy raketnyy kreyser) by Russia, with their primary surface armament consisting of twenty P-700 Granit surface-to-surface missiles. Four members of the class were completed during the 1980s and 1990s, but due to budget constraints only the Pyotr Velikiy is operational with the Russian Navy, though plans were announced in 2010 to return the other three ships to service. As of 2021, Admiral Nakhimov was being refitted, but the other two ships are reportedly beyond economical repair.
Operators
Russia operates one, with one more being overhauled.
Former operators
Germany: its five surviving battlecruisers were all scuttled at Scapa Flow in 1919.
Australia decommissioned its only battlecruiser, HMAS Australia, in 1921.
Japan upgraded its Kongō-class ships into fast battleships in the 1930s, ending its operation of battlecruisers.
The United Kingdom's last battlecruiser, HMS Renown, was decommissioned in 1945, following World War II.
The United States' two Alaska-class battlecruisers were both decommissioned in 1947.
Turkey decommissioned its only battlecruiser, TCG Yavuz, in 1950.
Robert James Lee Hawke (9 December 1929 – 16 May 2019) was an Australian politician and trade unionist who served as the 23rd prime minister of Australia from 1983 to 1991. He held office as the leader of the Australian Labor Party (ALP), having previously served as the president of the Australian Council of Trade Unions from 1969 to 1980 and president of the Labor Party national executive from 1973 to 1978. Hawke was born in Bordertown, South Australia. He attended the University of Western Australia and went on to study at University College, Oxford as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for national wage case arbitration, he was elected as president of the ACTU in 1969, where he achieved a high public profile. In 1973, he was elected president of the Labor Party. In 1980, Hawke stood down from his roles as ACTU and Labor Party president to announce his intention to enter parliamentary politics, and was subsequently elected to the Australian House of Representatives as a member of parliament (MP) for the division of Wills at the 1980 federal election. Three years later, he was elected unopposed to replace Bill Hayden as leader of the Australian Labor Party, and within five weeks led Labor to a landslide victory at the 1983 election, after which he was sworn in as prime minister. He went on to lead Labor to victory three more times, at the 1984, 1987 and 1990 elections, making him the most electorally successful prime minister in the history of the Labor Party. The Hawke government implemented a significant number of reforms, including major economic reforms, the establishment of Landcare, the introduction of the universal healthcare scheme Medicare, brokering the Prices and Incomes Accord, creating APEC, floating the Australian dollar, deregulating the financial sector, introducing the Family Assistance Scheme, enacting the Sex Discrimination Act to prevent discrimination in the workplace, declaring "Advance Australia Fair" as the country's national anthem, initiating superannuation pension schemes for all workers, negotiating a ban on mining in Antarctica and overseeing passage of the Australia Act that removed all remaining jurisdiction by the United Kingdom from Australia. In June 1991, Hawke faced a leadership challenge by the Treasurer, Paul Keating, but Hawke managed to retain power; however, Keating mounted a second challenge six months later, and won narrowly, replacing Hawke as prime minister. Hawke subsequently retired from parliament, pursuing both a business career and a number of charitable causes, until his death in 2019, aged 89. Hawke remains his party's longest-serving prime minister, and Australia's third-longest-serving prime minister behind Robert Menzies and John Howard. He is also the only prime minister to be born in South Australia and the only one raised and educated in Western Australia. Hawke holds the highest-ever approval rating for an Australian prime minister, reaching 75% approval in 1984. Hawke is frequently ranked within the upper tier of Australian prime ministers by historians.
Early life and family
Bob Hawke was born on 9 December 1929 in Bordertown, South Australia, the second child of Arthur "Clem" Hawke (1898–1989), a Congregationalist minister, and his wife Edith Emily Hawke (née Lee; 1897–1979), known as Ellie, a schoolteacher. His uncle, Albert, was the Labor premier of Western Australia between 1953 and 1959.
Hawke's brother Neil, who was seven years his senior, died at the age of seventeen after contracting meningitis, for which there was no cure at the time. Ellie Hawke subsequently developed an almost messianic belief in her son's destiny, and this contributed to Hawke's supreme self-confidence throughout his career. At the age of fifteen, he presciently boasted to friends that he would one day become the prime minister of Australia. At the age of seventeen, the same age that his brother Neil had died, Hawke had a serious crash while riding his Panther motorcycle that left him in a critical condition for several days. This near-death experience acted as his catalyst, driving him to make the most of his talents and not let his abilities go to waste. He joined the Labor Party in 1947 at the age of eighteen. Education and early career Hawke was educated at West Leederville State School, Perth Modern School and the University of Western Australia, graduating in 1952 with Bachelor of Arts and Bachelor of Laws degrees. He was also president of the university's guild during the same year. The following year, Hawke won a Rhodes Scholarship to attend University College, Oxford, where he began a Bachelor of Arts course in philosophy, politics and economics (PPE). He soon found he was covering much the same ground as he had in his education at the University of Western Australia, and transferred to a Bachelor of Letters course. He wrote his thesis on wage-fixing in Australia and successfully presented it in January 1956. In 1956, Hawke accepted a scholarship to undertake doctoral studies in the area of arbitration law in the law department at the Australian National University in Canberra. Soon after his arrival at ANU, Hawke became the students' representative on the University Council. A year later, Hawke was recommended to the President of the ACTU to become a research officer, replacing Harold Souter who had become ACTU Secretary. The recommendation was made by Hawke's mentor at ANU, H. P. Brown, who for a number of years had assisted the ACTU in national wage cases. Hawke decided to abandon his doctoral studies and accept the offer, moving to Melbourne with his wife Hazel. World record beer skol (scull) Hawke is well known for a "world record" allegedly achieved at Oxford University for a beer skol (scull) of a yard of ale in 11 seconds. The record is widely regarded as having been important to his career and ocker chic image. A recent historical journal article describes the record as "possibly fabricated" and "cultural propaganda" designed to make Hawke appealing to unionised workers and nationalistic middle-class voters. The article demonstrates that "the record is apocryphal: its location and time remain uncertain; there are no known witnesses; the field of competition was exclusive and with no scientific accountability; the record was first published in a beer pamphlet; and Hawke's recollections were unreliable." Australian Council of Trade Unions Not long after Hawke began work at the ACTU, he became responsible for the presentation of its annual case for higher wages to the national wages tribunal, the Commonwealth Conciliation and Arbitration Commission. He was first appointed as an ACTU advocate in 1959. The 1958 case, under previous advocate R.L. Eggleston, had yielded only a five-shilling increase. The 1959 case found for a fifteen-shilling increase, and was regarded as a personal triumph for Hawke. 
He went on to attain such success and prominence in his role as an ACTU advocate that, in 1969, he was encouraged to run for the position of ACTU President, despite the fact that he had never held elected office in a trade union. He was elected ACTU President in 1969 on a modernising platform by the narrow margin of 399 to 350, with the support of the left of the union movement, including some associated with the Communist Party of Australia. He later credited Ray Gietzelt, General Secretary of the FMWU, as the single most significant union figure in helping him achieve this outcome. Questioned after his election on his political stance, Hawke stated that "socialist is not a word I would use to describe myself", saying instead his approach to politics was pragmatic. His commitment to the cause of Jewish Refuseniks purportedly led to a planned assassination attempt on Hawke by the Popular Front for the Liberation of Palestine, and its Australian operative Munif Mohammed Abou Rish. In 1971, Hawke along with other members of the ACTU requested that South Africa send a non-racially biased team for the rugby union tour, with the intention of unions agreeing not to serve the team in Australia. Prior to arrival, the Western Australian branch of the Transport Workers' Union, and the Barmaids' and Barmens' Union, announced that they would serve the team, which allowed the Springboks to land in Perth. The tour commenced on 26 June and riots occurred as anti-apartheid protesters disrupted games. Hawke and his family started to receive malicious mail and phone calls from people who thought that sport and politics should not mix. Hawke remained committed to the ban on apartheid teams and later that year, the South African cricket team was successfully denied and no apartheid team was to ever come to Australia again. It was this ongoing dedication to racial equality in South Africa that would later earn Hawke the respect and friendship of Nelson Mandela. In industrial matters, Hawke continued to demonstrate a preference for, and considerable skill at, negotiation, and was generally liked and respected by employers as well as the unions he advocated for. As early as 1972, speculation began that he would seek to enter the Parliament of Australia and eventually run to become the Leader of the Australian Labor Party. But while his professional career continued successfully, his heavy drinking and womanising placed considerable strains on his family life. In June 1973, Hawke was elected as the Federal President of the Labor Party. Two years later, when the Whitlam government was controversially dismissed by the Governor-General, Hawke showed an initial keenness to enter Parliament at the ensuing election. Harry Jenkins, the MP for Scullin, came under pressure to step down to allow Hawke to stand in his place, but he strongly resisted this push. Hawke eventually decided not to attempt to enter Parliament at that time, a decision he soon regretted. After Labor was defeated at the election, Whitlam initially offered the leadership to Hawke, although it was not within Whitlam's power to decide who would succeed him. Despite not taking on the offer, Hawke remained influential, playing a key role in averting national strike action. During the 1977 federal election, he emerged as a strident opponent of accepting Vietnamese boat people as refugees into Australia, stating that they should be subject to normal immigration requirements and should otherwise be deported. 
He further stated only refugees selected off-shore should be accepted. Hawke resigned as President of the Labor Party in August 1978. Neil Batt was elected in his place. The strain of this period took its toll on Hawke and in 1979 he suffered a physical collapse. This shock led Hawke to publicly announce his alcoholism in a television interview, and that he would make a concerted—and ultimately successful—effort to overcome it. He was helped through this period by the relationship that he had established with writer Blanche d'Alpuget, who, in 1982, published a biography of Hawke. His popularity with the public was, if anything, enhanced by this period of rehabilitation, and opinion polling suggested that he was a more popular public figure than either Labor Leader Bill Hayden or Liberal Prime Minister Malcolm Fraser. Informer for the United States During the period of 1973 to 1979, Hawke acted as an informant for the United States government. During his time as ACTU leader, Hawke informed the US of details surrounding labour disputes, especially those relating to American companies and individuals, such as union disputes with Ford Motor Company and the black ban of Frank Sinatra. The major industrial action taken against Sinatra came about because Sinatra had made sexist comments against female journalists. The dispute was the subject of the 2003 film The Night We Called It a Day. Hawke was described by US diplomats as "a bulwark against anti-American sentiment and resurgent communism during the economic turmoil of the 1970s", and often disputed with the Whitlam government over issues of foreign policy and industrial relations. With the knowledge of US diplomats, Hawke secretly planned to leave Labor in 1974 to form a new centrist political party to challenge the Whitlam government. This plan had the support of Rupert Murdoch and Hawke's confidant, Peter Abeles, but did not eventuate because of the events of 1975. US diplomats played a major role in shaping Hawke's consensus politics and economics. Member of Parliament Hawke's first attempt to enter Parliament came during the 1963 federal election. He stood in the seat of Corio in Geelong and managed to achieve a 3.1% swing against the national trend, although he fell short of ousting longtime Liberal incumbent Hubert Opperman. Hawke rejected several opportunities to enter Parliament throughout the 1970s, something he later wrote that he "regretted". He eventually stood for election to the House of Representatives at the 1980 election for the safe Melbourne seat of Wills, winning it comfortably. Immediately upon his election to Parliament, Hawke was appointed to the Shadow Cabinet by Labor Leader Bill Hayden as Shadow Minister for Industrial Relations. Hayden, after having led the Labour party to narrowly lose the 1980 election, was increasingly subject to criticism from Labor MPs over his leadership style. To quell speculation over his position, Hayden called a leadership spill on 16 July 1982, believing that if he won he would be guaranteed to lead Labor through to the next election. Hawke decided to challenge Hayden in the spill, but Hayden defeated him by five votes; the margin of victory, however, was too slim to dispel doubts that he could lead the Labor Party to victory at an election. Despite his defeat, Hawke began to agitate more seriously behind the scenes for a change in leadership, with opinion polls continuing to show that Hawke was a far more popular public figure than both Hayden and Prime Minister Malcolm Fraser. 
Hayden was further weakened after Labor's unexpectedly poor performance at a by-election in December 1982 for the Victorian seat of Flinders, following the resignation of the sitting member, former deputy Liberal leader Phillip Lynch. Labor needed a swing of 5.5% to win the seat and had been predicted by the media to win, but could only achieve 3%. Labor Party power-brokers, such as Graham Richardson and Barrie Unsworth, now openly switched their allegiance from Hayden to Hawke. More significantly, Hayden's staunch friend and political ally, Labor's Senate Leader John Button, had become convinced that Hawke's chances of victory at an election were greater than Hayden's. Initially, Hayden believed that he could remain in his job, but Button's defection proved to be the final straw in convincing Hayden that he would have to resign as Labor Leader. Less than two months after the Flinders by-election result, Hayden announced his resignation as Leader of the Labor Party on 3 February 1983. Hawke was subsequently elected as Leader unopposed on 8 February, and became Leader of the Opposition in the process. Having learned that morning about the possible leadership change, Malcolm Fraser called a snap election for 5 March 1983 on the same day that Hawke assumed the leadership of the Labor Party, attempting to prevent Labor from making the leadership change; however, he was unable to have the Governor-General confirm the election before Labor announced it. At the 1983 election, Hawke led Labor to a landslide victory, achieving a 24-seat swing and ending seven years of Liberal Party rule. Because the election was called at the same time that Hawke became Labor leader, Hawke never sat in Parliament as Leader of the Opposition, having spent the entirety of his short Opposition leadership on the election campaign, which he won.
Prime Minister of Australia (1983–1991)
Leadership style
After Labor's landslide victory, Hawke was sworn in as the Prime Minister by the Governor-General Ninian Stephen on 11 March 1983. The style of the Hawke government was deliberately distinct from the Whitlam government, the most recent Labor government that preceded it. Rather than immediately initiating multiple extensive reform programs as Whitlam had, Hawke announced that Malcolm Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred. As part of his internal reforms package, Hawke divided the government into two tiers, with only the most senior ministers sitting in the Cabinet of Australia. The Labor caucus was still given the authority to determine who would make up the Ministry, but this move gave Hawke unprecedented powers to empower individual ministers. In particular, the political partnership that developed between Hawke and his Treasurer, Paul Keating, proved to be essential to Labor's success in government, with multiple Labor figures in years since citing the partnership as the party's greatest ever. The two men proved a study in contrasts: Hawke was a Rhodes Scholar; Keating left high school early. Hawke's enthusiasms were cigars, betting and most forms of sport; Keating preferred classical architecture, Mahler symphonies and collecting British Regency and French Empire antiques.
Despite not knowing one another before Hawke assumed the leadership in 1983, the two formed a personal as well as political relationship which enabled the Government to pursue a significant number of reforms, although there were occasional points of tension between the two. The Labor Caucus under Hawke also developed a more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Unlike many of his predecessors, Hawke enjoyed absolute authority within the Labor Party. This enabled him to persuade MPs to support a substantial set of policy changes which had not been considered achievable by Labor governments in the past. Individual accounts from ministers indicate that while Hawke was not often the driving force behind individual reforms, outside of broader economic changes, he took on the role of providing political guidance on what was electorally feasible and how best to sell it to the public, tasks at which he proved highly successful. Hawke took on a very public role as Prime Minister, campaigning frequently even outside of election periods, and for much of his time in office proved to be immensely popular with the Australian electorate; to this day he still holds the highest ever AC Nielsen approval rating of 75%.
Economic policy
The Hawke government oversaw significant economic reforms, and is often cited by economic historians as being a "turning point" from a protectionist, agricultural model to a more globalised and services-oriented economy. According to the journalist Paul Kelly, "the most influential economic decisions of the 1980s were the floating of the Australian dollar and the deregulation of the financial system". Although the Fraser government had played a part in the process of financial deregulation by commissioning the 1981 Campbell Report, opposition from Fraser himself had stalled this process. Shortly after its election in 1983, the Hawke government took the opportunity to implement a comprehensive program of economic reform, in the process "transform(ing) economics and politics in Australia". Hawke and Keating together led the process of economic change by launching a "National Economic Summit" one month after their election in 1983, which brought business and industrial leaders together with politicians and trade union leaders; the three-day summit led to a unanimous adoption of a national economic strategy, generating sufficient political capital for widespread reform to follow. Among other reforms, the Hawke government floated the Australian dollar, repealed rules that prohibited foreign-owned banks from operating in Australia, dismantled the protectionist tariff system, privatised several state sector industries, ended the subsidisation of loss-making industries, and sold off part of the state-owned Commonwealth Bank. The taxation system was also significantly reformed, with income tax rates reduced and the introduction of a fringe benefits tax and a capital gains tax; the latter two reforms were strongly opposed by the Liberal Party at the time, but were never reversed by them when they eventually returned to office in 1996. Partially offsetting these imposts upon the business community—the "main loser" from the 1985 Tax Summit according to Paul Kelly—was the introduction of full dividend imputation, a reform insisted upon by Keating.
Funding for schools was also considerably increased as part of this package, while financial assistance was provided for students to enable them to stay at school longer; the number of Australian children completing school rose from 3 in 10 at the beginning of the Hawke government to 7 in 10 by its conclusion in 1991. Considerable progress was also made in directing assistance "to the most disadvantaged recipients over the whole range of welfare benefits."
Social and environmental policy
Although criticisms were levelled against the Hawke government that it did not achieve all it said it would do on social policy, it nevertheless enacted a series of reforms which remain in place to the present day. From 1983 to 1989, the Government oversaw the permanent establishment of universal health care in Australia with the creation of Medicare, doubled the number of subsidised childcare places, began the introduction of occupational superannuation, oversaw a significant increase in school retention rates, created subsidised homecare services, oversaw the elimination of poverty traps in the welfare system, increased the real value of the old-age pension, reintroduced the six-monthly indexation of single-person unemployment benefits, and established a wide-ranging programme for paid family support, known as the Family Income Supplement. During the 1980s, the proportion of total government outlays allocated to families, the sick, single parents, widows, the handicapped, and veterans was significantly higher than under the previous Fraser and Whitlam governments. In 1984, the Hawke government enacted the landmark Sex Discrimination Act 1984, which eliminated discrimination on the grounds of sex within the workplace. In 1989, Hawke oversaw the gradual re-introduction of some tuition fees for university study, setting up the Higher Education Contribution Scheme (HECS). Under the original HECS, a $1,800 fee was charged to all university students, and the Commonwealth paid the balance. A student could defer payment of this HECS amount and repay the debt through the tax system once the student's income exceeded a threshold level. As part of the reforms, Colleges of Advanced Education entered the university sector by various means; in doing so, the number of university places was able to be expanded. Further notable policy decisions taken during the Government's time in office included the public health campaign regarding HIV/AIDS, and Indigenous land rights reform, with an investigation launched into the idea of a treaty between Aborigines and the Government, although this would be overtaken by events, notably the Mabo court decision. The Hawke government also drew attention for a series of notable environmental decisions, particularly in its second and third terms. In 1983, Hawke personally vetoed the construction of the Franklin Dam in Tasmania, responding to a groundswell of protest around the issue. Hawke also secured the nomination of the Wet Tropics of Queensland as a UNESCO World Heritage Site in 1987, preventing the forests there from being logged. Hawke would later appoint Graham Richardson as Environment Minister, tasking him with winning second-preference support from environmental parties, something which Richardson later claimed was the major factor in the government's narrow re-election at the 1990 election.
In the Government's fourth term, Hawke personally led the Australian delegation to secure changes to the Protocol on Environmental Protection to the Antarctic Treaty, ultimately winning a guarantee that drilling for minerals within Antarctica would be totally prohibited until 2048 at the earliest. Hawke later claimed that the Antarctic drilling ban was his "proudest achievement". Industrial relations policy As a former ACTU President, Hawke was well-placed to engage in reform of the industrial relations system in Australia, taking a lead on this policy area as in few others. Working closely with ministerial colleagues and the ACTU Secretary, Bill Kelty, Hawke negotiated with trade unions to establish the Prices and Incomes Accord in 1983, an agreement whereby unions agreed to restrict their demands for wage increases, and in turn the Government guaranteed to both minimise inflation and promote an increased social wage, including by establishing new social programmes such as Medicare. Inflation had been a significant issue for the previous decade prior to the election of the Hawke government, regularly running into double-digits. The process of the Accord, by which the Government and trade unions would arbitrate and agree upon wage increases in many sectors, led to a decrease in both inflation and unemployment through to 1990. Criticisms of the Accord would come from both the right and the left of politics. Left-wing critics claimed that it kept real wages stagnant, and that the Accord was a policy of class collaboration and corporatism. By contrast, right-wing critics claimed that the Accord reduced the flexibility of the wages system. Supporters of the Accord, however, pointed to the improvements in the social security system that occurred, including the introduction of rental assistance for social security recipients, the creation of labour market schemes such as NewStart, and the introduction of the Family Income Supplement. In 1986, the Hawke government passed a bill to de-register the Builders Labourers Federation federally due to the union not following the Accord agreements. Despite a percentage fall in real money wages from 1983 to 1991, the social wage of Australian workers was argued by the Government to have improved drastically as a result of these reforms, and the ensuing decline in inflation. The Accord was revisited six further times during the Hawke government, each time in response to new economic developments. The seventh and final revisiting would ultimately lead to the establishment of the enterprise bargaining system, although this would be finalised shortly after Hawke left office in 1991. Foreign policy Arguably the most significant foreign policy achievement of the Government took place in 1989, after Hawke proposed a south-east Asian region-wide forum for leaders and economic ministers to discuss issues of common concern. After winning the support of key countries in the region, this led to the creation of the Asia-Pacific Economic Cooperation (APEC). The first APEC meeting duly took place in Canberra in November 1989; the economic ministers of Australia, Brunei, Canada, Indonesia, Japan, South Korea, Malaysia, New Zealand, Philippines, Singapore, Thailand and the United States all attended. APEC would subsequently grow to become one of the most pre-eminent high-level international forums in the world, particularly after the later inclusions of China and Russia, and the Keating government's later establishment of the APEC Leaders' Forum. 
Elsewhere in Asia, the Hawke government played a significant role in the build-up to the United Nations peace process for Cambodia, culminating in the Transitional Authority; Hawke's Foreign Minister Gareth Evans was nominated for the Nobel Peace Prize for his role in negotiations. Hawke also took a major public stand after the 1989 Tiananmen Square protests and massacre; despite having spent years trying to get closer relations with China, Hawke gave a tearful address on national television describing the massacre in graphic detail, and unilaterally offered asylum to over 42,000 Chinese students who were living in Australia at the time, many of whom had publicly supported the Tiananmen protesters. Hawke did so without even consulting his Cabinet, stating later that he felt he simply had to act. The Hawke government pursued a close relationship with the United States, assisted by Hawke's close friendship with US Secretary of State George Shultz; this led to a degree of controversy when the Government supported the US's plans to test ballistic missiles off the coast of Tasmania in 1985, as well as seeking to overturn Australia's long-standing ban on uranium exports. Although the US ultimately withdrew the plans to test the missiles, the furore led to a fall in Hawke's approval ratings. Shortly after the 1990 election, Hawke would lead Australia into its first overseas military campaign since the Vietnam War, forming a close alliance with US President George H. W. Bush to join the coalition in the Gulf War. The Royal Australian Navy contributed several destroyers and frigates to the war effort, which successfully concluded in February 1991, with the expulsion of Iraqi forces from Kuwait. The success of the campaign, and the lack of any Australian casualties, led to a brief increase in the popularity of the Government. Through his role on the Commonwealth Heads of Government Meeting, Hawke played a leading role in ensuring the Commonwealth initiated an international boycott on foreign investment into South Africa, building on work undertaken by his predecessor Malcolm Fraser, and in the process clashing publicly with Prime Minister of the United Kingdom Margaret Thatcher, who initially favoured a more cautious approach. The resulting boycott, led by the Commonwealth, was widely credited with helping bring about the collapse of apartheid, and resulted in a high-profile visit by Nelson Mandela in October 1990, months after the latter's release from a 27-year stint in prison. During the visit, Mandela publicly thanked the Hawke government for the role it played in the boycott. Election wins and leadership challenges Hawke benefited greatly from the disarray into which the Liberal Party fell after the resignation of Fraser following the 1983 election. The Liberals were torn between supporters of the more conservative John Howard and the more liberal Andrew Peacock, with the pair frequently contesting the leadership. Hawke and Keating were also able to use the concealment of the size of the budget deficit by Fraser before the 1983 election to great effect, damaging the Liberal Party's economic credibility as a result. However, Hawke's time as Prime Minister also saw friction develop between himself and the grassroots of the Labor Party, many of whom were unhappy at what they viewed as Hawke's iconoclasm and willingness to cooperate with business interests. Hawke regularly and publicly expressed his willingness to cull Labor's "sacred cows". 
The Labor Left faction, as well as prominent Labor backbencher Barry Jones, offered repeated criticisms of a number of government decisions. Hawke was also subject to challenges from some former colleagues in the trade union movement over his "confrontationalist style" in siding with the airline companies in the 1989 Australian pilots' strike. Nevertheless, Hawke was able to comfortably maintain a lead as preferred prime minister in the vast majority of opinion polls carried out throughout his time in office. He recorded the highest popularity rating ever measured by an Australian opinion poll, reaching 75% approval in 1984.
After leading Labor to a comfortable victory in the snap 1984 election, called to bring the mandate of the House of Representatives back in line with the Senate, Hawke was able to secure an unprecedented third consecutive term for Labor with a landslide victory in the double dissolution election of 1987. Hawke was subsequently able to lead the nation in the bicentennial celebrations of 1988, culminating with him welcoming Queen Elizabeth II to open the newly constructed Parliament House. The late-1980s recession, and the accompanying high interest rates, saw the Government fall in opinion polls, with many doubting that Hawke could win a fourth election. Keating, who had long understood that he would eventually succeed Hawke as prime minister, began to plan a leadership change; at the end of 1988, Keating put pressure on Hawke to retire in the new year. Hawke rejected this suggestion but reached a secret agreement with Keating, the so-called "Kirribilli Agreement", stating that he would step down in Keating's favour at some point after the 1990 election. Hawke subsequently won that election, in the process leading Labor to a record fourth consecutive electoral victory, albeit by a slim margin. Hawke appointed Keating as deputy prime minister to replace the retiring Lionel Bowen.
By the end of 1990, frustrated by the lack of any indication from Hawke as to when he might retire, Keating made a provocative speech to the Federal Parliamentary Press Gallery. Hawke considered the speech disloyal, and told Keating he would renege on the Kirribilli Agreement as a result. After attempting to force a resolution privately, Keating finally resigned from the Government in June 1991 to challenge Hawke for the leadership. Hawke won the leadership spill, and in a press conference after the result, Keating declared that he had fired his "one shot" on the leadership. Hawke appointed John Kerin to replace Keating as Treasurer. Despite his victory in the June spill, Hawke quickly began to be regarded by many of his colleagues as a "wounded" leader; he had now lost his long-term political partner, his ratings in opinion polls were beginning to fall significantly, and after nearly nine years as Prime Minister, there was speculation that it would soon be time for a new leader. Hawke's leadership was irrevocably damaged at the end of 1991; after Liberal Leader John Hewson released 'Fightback!', a detailed proposal for sweeping economic change, including the introduction of a goods and services tax, Hawke was forced to sack Kerin as Treasurer after the latter made a public gaffe attempting to attack the policy. Keating duly challenged for the leadership a second time on 19 December, arguing that he would be better placed to defeat Hewson; this time, Keating succeeded, narrowly defeating Hawke by 56 votes to 51.
In a speech to the House of Representatives following the vote, Hawke declared that his nine years as prime minister had left Australia a better and wealthier country, and he was given a standing ovation by those present. He subsequently tendered his resignation to the Governor-General and pledged support to his successor. Hawke briefly returned to the backbench, before resigning from Parliament on 20 February 1992, sparking a by-election which was won by the independent candidate Phil Cleary from among a record field of 22 candidates. Keating would go on to lead Labor to a fifth victory at the 1993 election, although he was defeated by the Liberal Party at the 1996 election. Hawke wrote that he had very few regrets over his time in office, although stated he wished he had been able to advance the cause of Indigenous land rights further. His bitterness towards Keating over the leadership challenges surfaced in his earlier memoirs, although by the 2000s Hawke stated he and Keating had buried their differences, and that they regularly dined together and considered each other friends. The publication of the book Hawke: The Prime Minister, by Hawke's second wife, Blanche d'Alpuget, in 2010, reignited conflict between the two, with Keating accusing Hawke and d'Alpuget of spreading falsehoods about his role in the Hawke government. Despite this, the two campaigned together for Labor several times, including at the 2019 election, where they released their first joint article for nearly three decades; Craig Emerson, who worked for both men, said they had reconciled in later years after Hawke grew ill. Retirement and later life After leaving Parliament, Hawke entered the business world, taking on a number of directorships and consultancy positions which enabled him to achieve considerable financial success. He avoided public involvement with the Labor Party during Keating's tenure as Prime Minister, not wanting to be seen as attempting to overshadow his successor. After Keating's defeat and the election of the Howard government at the 1996 election, he returned to public campaigning with Labor and regularly appearing at election launches. Despite his personal affection for Queen Elizabeth II, boasting that he had been her "favourite Prime Minister", Hawke was an enthusiastic republican and joined the campaign for a Yes vote in the 1999 republic referendum. In 2002, Hawke was named to South Australia's Economic Development Board during the Rann government. In the lead up to the 2007 election, Hawke made a considerable personal effort to support Kevin Rudd, making speeches at a large number of campaign office openings across Australia, and appearing in multiple campaign advertisements. As well as campaigning against WorkChoices, Hawke also attacked John Howard's record as Treasurer, stating "it was the judgement of every economist and international financial institution that it was the restructuring reforms undertaken by my government, with the full cooperation of the trade union movement, which created the strength of the Australian economy today". In February 2008, after Rudd's victory, Hawke joined former Prime Ministers Gough Whitlam, Malcolm Fraser and Paul Keating in Parliament House to witness the long anticipated apology to the Stolen Generations. In 2009, Hawke helped establish the Centre for Muslim and Non-Muslim Understanding at the University of South Australia. 
Interfaith dialogue was an important issue for Hawke, who told The Adelaide Review that he was "convinced that one of the great potential dangers confronting the world is the lack of understanding in regard to the Muslim world. Fanatics have misrepresented what Islam is. They give a false impression of the essential nature of Islam." In 2016, after taking part in Andrew Denton's Better Off Dead podcast, Hawke added his voice to calls for voluntary euthanasia to be legalised. Hawke labelled as 'absurd' the lack of political will to fix the problem. He revealed that he had such an arrangement with his wife Blanche should such a devastating medical situation occur. He also publicly advocated for nuclear power and the importation of international spent nuclear fuel to Australia for storage and disposal, stating that this could lead to considerable economic benefits for Australia. In late December 2018, Hawke revealed that he was in "terrible health". While predicting a Labor win in the upcoming 2019 federal election, Hawke said he "may not witness the party's success". In May 2019, the month of the election, he issued a joint statement with Paul Keating endorsing Labor's economic plan and condemning the Liberal Party for "completely [giving] up the economic reform agenda". They stated that "Shorten's Labor is the only party of government focused on the need to modernise the economy to deal with the major challenge of our time: human induced climate change". It was the first joint press statement released by the two since 1991. On 16 May 2019, two days before the election, Hawke died at his home in Northbridge at the age of 89, following a short illness. His family held a private cremation on 27 May at Macquarie Park Cemetery and Crematorium where he was subsequently interred. A state memorial was held at the Sydney Opera House on 14 June; speakers included Craig Emerson as master of ceremonies and Kim Beazley reading the eulogy, as well as Paul Keating, Julia Gillard, Bill Kelty, Ross Garnaut, and incumbent Prime Minister Scott Morrison and Opposition Leader Anthony Albanese. Personal life Hawke married Hazel Masterson in 1956 at Perth Trinity Church. They had three children: Susan (born 1957), Stephen (born 1959) and Roslyn (born 1960). Their fourth child, Robert Jr, died in early infancy in 1963. Hawke was named Victorian Father of the Year in 1971, an honour which his wife disputed due to his heavy drinking and womanising. The couple divorced in 1995, after he left her for the writer Blanche d'Alpuget, and the two lived together in Northbridge, a suburb of the North Shore of Sydney. The divorce estranged Hawke from some of his family for a period, although they had reconciled by the 2010s. Throughout his early life, Hawke was a heavy drinker, having set a world record for drinking during his years as a student. Hawke eventually suffered from alcohol poisoning following the death of his and Hazel's infant son in 1963. He publicly announced in 1980 that he would abstain from alcohol to seek election to Parliament, in a move which garnered significant public attention and support. Hawke began to drink again following his retirement from politics, although to a more manageable extent; on several occasions, in his later years, videos of Hawke downing beer at cricket matches would frequently go viral. 
On the subject of religion, Hawke wrote, while attending the 1952 World Christian Youth Conference in India, that "there were all these poverty stricken kids at the gate of this palatial place where we were feeding our face and I just (was) struck by this enormous sense of irrelevance of religion to the needs of people". He subsequently abandoned his Christian beliefs. By the time he entered politics he was a self-described agnostic. Hawke told Andrew Denton in 2008 that his father's Christian faith had continued to influence his outlook, saying "My father said if you believe in the fatherhood of God you must necessarily believe in the brotherhood of man, it follows necessarily, and even though I left the church and was not religious, that truth remained with me." Hawke was a supporter of National Rugby League club the Canberra Raiders. Legacy A biographical television film, Hawke, premiered on the Ten Network in Australia on 18 July 2010, with Richard Roxburgh playing the title character. Rachael Blake and Felix Williamson portrayed Hazel Hawke and Paul Keating, respectively. Roxburgh reprised his role as Hawke in the 2020 episode "Terra Nullius" of the Netflix series The Crown. In July 2019, the Australian Government announced it would spend $750,000 to purchase and renovate the house in Bordertown where Hawke was born and spent his early childhood. In January 2021, the Tatiara District Council decided to turn the house into tourist accommodation. In December 2020, the Western Australian Government announced that it had purchased Hawke's childhood home in West Leederville and would maintain it as a state asset. The property will also be assessed for entry onto the State Register of Heritage Places. The Australian Government pledged $5 million in July 2019 to establish a new annual scholarship—the Bob Hawke John Monash Scholarship—through the General Sir John Monash Foundation. Bob Hawke College, a high school in Subiaco, Western Australia named after Hawke, was opened in February 2020. In March 2020, the Australian Electoral Commission announced that it would create a new Australian electoral division in the House of Representatives named in honour of Hawke. The Division of Hawke was first contested at the 2022 federal election, and is located in the state of Victoria, near the seat of Wills, which Hawke represented from 1980 to 1992. Honours Orders 1979: Companion of the Order of Australia (AC), "For services to trade unionism and industrial relations" Foreign honours 1989: Knight Grand Cordon of the Order of the White Elephant 1999: Freedom of the City of London 2008 Grand Companion of the Order of Logohu 2012 Grand Cordon of the Order of the Rising Sun Awards August 1978: Rostrum Award of Merit, for "excellence in the art of public speaking over a considerable period and his demonstration of an effective contribution to society through the spoken word" August 2009: Australian Labor Party Life membership, Bob Hawke became only the third person to be awarded life membership of the Australian Labor Party, after Gough and Margaret Whitlam. During the conferring, Prime Minister Kevin Rudd referred to Hawke as "the heart and soul of the Labor Party". 
March 2014: University of Western Australia Student Guild Life membership Fellowships University College, Oxford Honorary degrees Nanjing University, Honorary doctorate University of Oxford, Honorary Doctor of Civil Law Hebrew University of Jerusalem, Honorary doctorate Rikkyo University, Honorary Doctor of Humanities Macquarie University, Honorary Doctor of Letters University of New South Wales, Honorary doctorate University of South Australia, Honorary doctorate University of Western Australia, Honorary Doctor of Letters University of Sydney, Honorary Doctor of Letters Other University of South Australia, the Hawke Centre and the Bob Hawke Prime Ministerial Library See also Hawke–Keating government First Hawke Ministry Second Hawke Ministry Third Hawke Ministry Fourth Hawke Ministry Footnotes References Bibliography External links "Hawke Swoops into Power" – Time, 14 March 1983 Robert Hawke – Australia's Prime Ministers / National Archives of Australia Bob Hawke Prime Ministerial Centre
Blaise Pascal (19 June 1623 – 19 August 1662) was a French mathematician, physicist, inventor, philosopher, and Catholic writer. Pascal was a child prodigy who was educated by his father, a tax collector in Rouen. His earliest mathematical work was on conic sections; he wrote a significant treatise on the subject of projective geometry at the age of 16. He later corresponded with Pierre de Fermat on probability theory, strongly influencing the development of modern economics and social science. In 1642, while still a teenager, he started some pioneering work on calculating machines (called Pascal's calculators and later Pascalines), establishing him as one of the first two inventors of the mechanical calculator. Like his contemporary René Descartes, Pascal was also a pioneer in the natural and applied sciences. Pascal wrote in defense of the scientific method and produced several controversial results. He made important contributions to the study of fluids, and clarified the concepts of pressure and vacuum by generalising the work of Evangelista Torricelli. In 1647, following Torricelli and Galileo Galilei, he rebutted the likes of Aristotle and Descartes, who insisted that nature abhors a vacuum. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing influential works on philosophy and theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. The latter contains Pascal's wager, known in the original as the Discourse on the Machine, a fideistic probabilistic argument for God's existence. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659, he wrote on the cycloid and its use in calculating the volume of solids. Throughout his life, Pascal was in frail health, especially after the age of 18; he died just two months after his 39th birthday. Life Early life and education Pascal was born in Clermont-Ferrand, which is in France's Auvergne region, by the Massif Central. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal (1588–1651), who also had an interest in science and mathematics, was a local judge and member of the "Noblesse de Robe". Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne Pascal moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became a key member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability, particularly his son Blaise. The young Pascal showed an amazing aptitude for mathematics and science. Essay on Conics Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the 16-year-old Pascal produced, as a means of proof, a short treatise on what was called the Mystic Hexagram, Essai pour les coniques (Essay on Conics) and sent it — his first serious work of mathematics — to Père Mersenne in Paris; it is still known today as Pascal's theorem. It states that if a hexagon is inscribed in a circle (or conic) then the three intersection points of opposite sides lie on a line (called the Pascal line). 
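Pascal's theorem is easy to check numerically. The short Python sketch below is an illustration added here, not part of the historical record: it places six points on the unit circle, intersects the three pairs of opposite sides using homogeneous coordinates, and verifies that the three intersection points are collinear. The function names and the chosen angles are ours.

```python
import numpy as np

def pascal_theorem_holds(angles):
    """Check Pascal's theorem for a hexagon inscribed in the unit circle:
    the intersections of the three pairs of opposite sides are collinear."""
    pts = [np.array([np.cos(a), np.sin(a), 1.0]) for a in angles]  # homogeneous coordinates
    line = lambda p, q: np.cross(p, q)   # line through two points
    meet = lambda l, m: np.cross(l, m)   # intersection point of two lines
    A, B, C, D, E, F = pts
    P = meet(line(A, B), line(D, E))
    Q = meet(line(B, C), line(E, F))
    R = meet(line(C, D), line(F, A))
    M = np.array([v / np.linalg.norm(v) for v in (P, Q, R)])
    # Three homogeneous points are collinear exactly when this determinant vanishes.
    return abs(np.linalg.det(M)) < 1e-9

print(pascal_theorem_holds([0.3, 1.1, 2.0, 3.2, 4.0, 5.5]))  # True
```

Working in homogeneous coordinates keeps the check valid even when a pair of opposite sides happens to be parallel, since their "intersection" is then simply a point at infinity.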
Pascal's work was so precocious that René Descartes was convinced that Pascal's father had written it. When assured by Mersenne that it was, indeed, the product of the son and not the father, Descartes dismissed it with a sniff: "I do not find it strange that he has offered demonstrations about conics more appropriate than those of the ancients," adding, "but other matters related to this subject can be proposed that would scarcely occur to a 16-year-old child." Leaving Paris In France at that time offices and positions could be—and were—bought and sold. In 1631, Étienne sold his position as second president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to, and enjoy, Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. Like so many others, Étienne was eventually forced to flee Paris because of his opposition to the fiscal policies of Cardinal Richelieu, leaving his three children in the care of his neighbour Madame Sainctot, a great beauty with an infamous past who kept one of the most glittering and intellectual salons in all France. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned. In time, Étienne was back in good graces with the Cardinal and in 1639 was appointed the king's commissioner of taxes in the city of Rouen—a city whose tax records, thanks to uprisings, were in utter chaos. Pascaline In 1642, in an effort to ease his father's endless, exhausting calculations and recalculations of taxes owed and paid (into which work the young Pascal had been recruited), Pascal, not yet 19, constructed a mechanical calculator capable of addition and subtraction, called Pascal's calculator or the Pascaline. Of the eight Pascalines known to have survived, four are held by the Musée des Arts et Métiers in Paris and one more is exhibited by the Zwinger museum in Dresden, Germany. Although these machines are pioneering forerunners to a further 400 years of development of mechanical methods of calculation, and in a sense to the later field of computer engineering, the calculator failed to be a great commercial success. Partly because it was still quite cumbersome to use in practice, but probably primarily because it was extraordinarily expensive, the Pascaline became little more than a toy, and a status symbol, for the very rich both in France and elsewhere in Europe. Pascal continued to make improvements to his design through the next decade; he refers to some 50 machines that were built to his design, though only 20 finished machines were built over the following 10 years. Mathematics Probability Pascal's development of probability theory was his most influential contribution to mathematics. Originally applied to gambling, today it is extremely important in economics, especially in actuarial science. John Ross writes, "Probability theory and the discoveries following it changed the way we regard uncertainty, risk, decision-making, and an individual's and society's ability to influence the course of future events." However, Pascal and Fermat, though doing important early work in probability theory, did not develop the field very far. 
Christiaan Huygens, learning of the subject from the correspondence of Pascal and Fermat, wrote the first book on probability. Later figures who continued the development of the theory include Abraham de Moivre and Pierre-Simon Laplace. In 1654, prompted by his friend the Chevalier de Méré, Pascal corresponded with Pierre de Fermat on the subject of gambling problems, and from that collaboration was born the mathematical theory of probabilities. The specific problem was that of two players who want to finish a game early and, given the current circumstances of the game, want to divide the stakes fairly, based on the chance each has of winning the game from that point. From this discussion, the notion of expected value was introduced. Pascal later (in the Pensées) used a probabilistic argument, Pascal's wager, to justify belief in God and a virtuous life. The work done by Fermat and Pascal into the calculus of probabilities laid important groundwork for Leibniz' formulation of the calculus. Treatise on the Arithmetical Triangle Pascal's Traité du triangle arithmétique, written in 1654 but published posthumously in 1665, described a convenient tabular presentation for binomial coefficients which he called the arithmetical triangle but which is now called Pascal's triangle. He defined the numbers in the triangle by recursion: call the number in the (m + 1)th row and (n + 1)th column t_{m,n}. Then t_{m,n} = t_{m−1,n} + t_{m,n−1} for m = 0, 1, 2, ... and n = 0, 1, 2, ..., with boundary conditions t_{m,−1} = 0 and t_{−1,n} = 0 for m = 1, 2, 3, ... and n = 1, 2, 3, ..., and with the generator t_{0,0} = 1. In the same treatise, Pascal gave an explicit statement of the principle of mathematical induction. In 1654, he proved Pascal's identity relating the sums of the p-th powers of the first n positive integers for p = 0, 1, 2, ..., k. That same year, Pascal had a religious experience, and mostly gave up work in mathematics. Cycloid In 1658, Pascal, while suffering from a toothache, began considering several problems concerning the cycloid. His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest. Pascal proposed three questions relating to the center of gravity, area and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Gilles de Roberval and Pierre de Carcavi were the judges, and neither of the two submissions (by John Wallis and Antoine de Lalouvère) was judged to be adequate. While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval claimed promptly that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in Wallis's Tractatus Duo, giving Wren priority for the first published proof. Physics Pascal contributed to several fields in physics, most notably the fields of fluid mechanics and pressure. In honour of his scientific contributions, the name Pascal has been given to the SI unit of pressure and Pascal's law (an important principle of hydrostatics). He introduced a primitive form of roulette and the roulette wheel in his search for a perpetual motion machine. Fluid dynamics His work in the fields of hydrodynamics and hydrostatics centered on the principles of hydraulic fluids. His inventions include the hydraulic press (using hydraulic pressure to multiply force) and the syringe. 
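Returning to the arithmetical triangle described above, its defining recursion translates directly into a few lines of code. The Python sketch below is purely illustrative (the function name and the row/column bounds are ours): it fills the table with t(m, n) = t(m−1, n) + t(m, n−1) starting from the generator t(0, 0) = 1, and confirms that the entries are the familiar binomial coefficients.

```python
from math import comb

def arithmetical_triangle(rows, cols):
    """Build Pascal's rectangular table t[m][n] using his recursion:
    t(m, n) = t(m-1, n) + t(m, n-1), with t(0, 0) = 1 and zeros outside."""
    t = [[0] * cols for _ in range(rows)]
    for m in range(rows):
        for n in range(cols):
            if m == 0 and n == 0:
                t[m][n] = 1  # the "generator"
            else:
                above = t[m - 1][n] if m > 0 else 0
                left = t[m][n - 1] if n > 0 else 0
                t[m][n] = above + left
    return t

t = arithmetical_triangle(6, 6)
# Entry t[m][n] equals the binomial coefficient C(m + n, n).
assert all(t[m][n] == comb(m + n, n) for m in range(6) for n in range(6))
print(t[2])  # [1, 3, 6, 10, 15, 21]
```

Read along a diagonal (constant m + n), the table gives one row of the modern, triangular presentation of Pascal's triangle.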
He proved that hydrostatic pressure depends not on the weight of the fluid but on the elevation difference. He demonstrated this principle by attaching a thin tube to a barrel full of water and filling the tube with water up to the level of the third floor of a building. This caused the barrel to leak, in what became known as Pascal's barrel experiment. Vacuum By 1647, Pascal had learned of Evangelista Torricelli's experimentation with barometers. Having replicated an experiment that involved placing a tube filled with mercury upside down in a bowl of mercury, Pascal questioned what force kept some mercury in the tube and what filled the space above the mercury in the tube. At the time, most scientists including Descartes believed in a plenum, i.e., that some invisible matter filled all of space, rather than a vacuum. "Nature abhors a vacuum." This was based on the Aristotelian notion that everything in motion was a substance, moved by another substance. Furthermore, light passed through the glass tube, suggesting a substance such as aether rather than vacuum filled the space. Following more experimentation in this vein, in 1647 Pascal produced Experiences nouvelles touchant le vide ("New experiments with the vacuum"), which detailed basic rules describing to what degree various liquids could be supported by air pressure. It also provided reasons why it was indeed a vacuum above the column of liquid in a barometer tube. This work was followed by Récit de la grande expérience de l'équilibre des liqueurs ("Account of the great experiment on equilibrium in liquids") published in 1648. First atmospheric pressure vs. altitude experiment Torricelli's barometer experiment had found that air pressure is equal to the weight of 30 inches of mercury. If air has a finite weight, Earth's atmosphere must have a maximum height. Pascal reasoned that if this were true, air pressure on a high mountain must be less than at a lower altitude. He lived near the Puy de Dôme mountain, but his health was poor, so he could not climb it. On 19 September 1648, after many months of Pascal's friendly but insistent prodding, Florin Périer, husband of Pascal's elder sister Gilberte, was finally able to carry out the fact-finding mission vital to Pascal's theory. The account of the experiment was written by Périer. Pascal replicated the experiment in Paris by carrying a barometer up to the top of the bell tower at the church of Saint-Jacques-de-la-Boucherie, a height of about 50 metres. The mercury dropped two lines. He found with both experiments that an ascent of 7 fathoms lowers the mercury by half a line. Note: Pascal used pouce and ligne for "inch" and "line", and toise for "fathom". In a reply to Étienne Noël, who believed in the plenum, Pascal wrote, echoing contemporary notions of science and falsifiability: "In order to show that a hypothesis is evident, it does not suffice that all the phenomena follow from it; instead, if it leads to something contrary to a single one of the phenomena, that suffices to establish its falsity." Blaise Pascal Chairs are given to outstanding international scientists to conduct their research in the Ile de France region. Adult life: religion, literature, and philosophy Religious conversion In the winter of 1646, Pascal's 58-year-old father broke his hip when he slipped and fell on an icy street of Rouen; given the man's age and the state of medicine in the 17th century, a broken hip could be a very serious condition, perhaps even fatal. 
Rouen was home to two of the finest doctors in France, Deslandes and de la Bouteillerie. The elder Pascal "would not let anyone other than these men attend him...It was a good choice, for the old man survived and was able to walk again..." But treatment and rehabilitation took three months, during which time La Bouteillerie and Deslandes had become regular visitors. Both men were followers of Jean Guillebert, proponent of a splinter group from Catholic teaching known as Jansenism. This still fairly small sect was making surprising inroads into the French Catholic community at that time. It espoused rigorous Augustinism. Blaise spoke with the doctors frequently, and after their successful treatment of his father, borrowed from them works by Jansenist authors. In this period, Pascal experienced a sort of "first conversion" and began to write on theological subjects in the course of the following year. Pascal fell away from this initial religious engagement and experienced a few years of what some biographers have called his "worldly period" (1648–54). His father died in 1651 and left his inheritance to Pascal and his sister Jacqueline, for whom Pascal acted as conservator. Jacqueline announced that she would soon become a postulant in the Jansenist convent of Port-Royal. Pascal was deeply affected and very sad, not because of her choice, but because of his chronic poor health; he needed her just as she had needed him. By the end of October 1651, a truce had been reached between brother and sister. In return for a healthy annual stipend, Jacqueline signed over her part of the inheritance to her brother. Gilberte had already been given her inheritance in the form of a dowry. In early January, Jacqueline left for Port-Royal. On that day, according to Gilberte concerning her brother, "He retired very sadly to his rooms without seeing Jacqueline, who was waiting in the little parlor..." In early June 1653, after what must have seemed like endless badgering from Jacqueline, Pascal formally signed over the whole of his sister's inheritance to Port-Royal, which, to him, "had begun to smell like a cult." With two-thirds of his father's estate now gone, the 29-year-old Pascal was consigned to genteel poverty. For a while, Pascal pursued the life of a bachelor. During visits to his sister at Port-Royal in 1654, he displayed contempt for affairs of the world but was not drawn to God. The Memorial On the night of 23 November 1654, between 10:30 and 12:30, Pascal had an intense religious experience and immediately wrote a brief note to himself which began: "Fire. God of Abraham, God of Isaac, God of Jacob, not of the philosophers and the scholars..." and concluded by quoting Psalm 119:16: "I will not forget thy word. Amen." He seems to have carefully sewn this document into his coat and always transferred it when he changed clothes; a servant discovered it only by chance after his death. This piece is now known as the Memorial. The story of a carriage accident as having led to the experience described in the Memorial is disputed by some scholars. His belief and religious commitment revitalized, Pascal visited the older of two convents at Port-Royal for a two-week retreat in January 1655. For the next four years, he regularly travelled between Port-Royal and Paris. It was at this point immediately after his conversion that he began writing his first major literary work on religion, the Provincial Letters. 
Literature In literature, Pascal is regarded as one of the most important authors of the French Classical Period and is read today as one of the greatest masters of French prose. His use of satire and wit influenced later polemicists. The Provincial Letters Beginning in 1656, Pascal published his memorable attack on casuistry, a popular ethical method used by Catholic thinkers in the early modern period (especially the Jesuits, and in particular Antonio Escobar). Pascal denounced casuistry as the mere use of complex reasoning to justify moral laxity and all sorts of sins. The 18-letter series was published between 1656 and 1657 under the pseudonym Louis de Montalte and incensed Louis XIV. The king ordered that the book be shredded and burnt in 1660. In 1661, in the midst of the formulary controversy, the Jansenist school at Port-Royal was condemned and closed down; those involved with the school had to sign a 1656 papal bull condemning the teachings of Jansen as heretical. The final letter from Pascal, in 1657, had defied Alexander VII himself. Even Pope Alexander, while publicly opposing them, nonetheless was persuaded by Pascal's arguments. Aside from their religious influence, the Provincial Letters were popular as a literary work. Pascal's use of humor, mockery, and vicious satire in his arguments made the letters ripe for public consumption, and influenced the prose of later French writers like Voltaire and Jean-Jacques Rousseau. It is in the Provincial Letters that Pascal made his oft-quoted apology for writing a long letter, as he had not had time to write a shorter one. From Letter XVI, as translated by Thomas M'Crie: 'Reverend fathers, my letters were not wont either to be so prolix, or to follow so closely on one another. Want of time must plead my excuse for both of these faults. The present letter is a very long one, simply because I had no leisure to make it shorter.' Charles Perrault wrote of the Letters: "Everything is there—purity of language, nobility of thought, solidity in reasoning, finesse in raillery, and throughout an agrément not to be found anywhere else." Philosophy Pascal is arguably best known as a philosopher, considered by some the second-greatest French mind behind René Descartes. He was a dualist following Descartes. However, he is also remembered for his opposition both to the rationalism of the likes of Descartes and to the main countervailing epistemology, empiricism, preferring fideism. He cared above all about the philosophy of religion. Pascalian theology has grown out of his perspective that humans are, according to Wood, "born into a duplicitous world that shapes us into duplicitous subjects and so we find it easy to reject God continually and deceive ourselves about our own sinfulness". Philosophy of mathematics Pascal's major contribution to the philosophy of mathematics came with his De l'Esprit géométrique ("Of the Geometrical Spirit"), originally written as a preface to a geometry textbook for one of the famous Petites écoles de Port-Royal ("Little Schools of Port-Royal"). The work was unpublished until over a century after his death. Here, Pascal looked into the issue of discovering truths, arguing that the ideal of such a method would be to found all propositions on already established truths. At the same time, however, he claimed this was impossible because such established truths would require other truths to back them up—first principles, therefore, cannot be reached. 
Based on this, Pascal argued that the procedure used in geometry was as perfect as possible, with certain principles assumed and other propositions developed from them. Nevertheless, there was no way to know the assumed principles to be true. Pascal also used De l'Esprit géométrique to develop a theory of definition. He distinguished between definitions which are conventional labels defined by the writer and definitions which are within the language and understood by everyone because they naturally designate their referent. The second type would be characteristic of the philosophy of essentialism. Pascal claimed that only definitions of the first type were important to science and mathematics, arguing that those fields should adopt the philosophy of formalism as formulated by Descartes. In De l'Art de persuader ("On the Art of Persuasion"), Pascal looked deeper into geometry's axiomatic method, specifically the question of how people come to be convinced of the axioms upon which later conclusions are based. Pascal agreed with Montaigne that achieving certainty in these axioms and conclusions through human methods is impossible. He asserted that these principles can be grasped only through intuition, and that this fact underscored the necessity for submission to God in searching out truths. Pensées Pascal's most influential theological work, referred to posthumously as the Pensées ("Thoughts"), is widely considered to be a masterpiece, and a landmark in French prose. When commenting on one particular section (Thought #72), Sainte-Beuve praised those pages as the finest in the French language. Will Durant hailed the Pensées as "the most eloquent book in French prose". The Pensées was not completed before his death. It was to have been a sustained and coherent examination and defense of the Christian faith, with the original title Apologie de la religion Chrétienne ("Defense of the Christian Religion"). The first version of the numerous scraps of paper found after his death appeared in print as a book in 1669 titled Pensées de M. Pascal sur la religion, et sur quelques autres sujets ("Thoughts of M. Pascal on religion, and on some other subjects") and soon thereafter became a classic. One of the Apology's main strategies was to use the contradictory philosophies of Pyrrhonism and Stoicism, personalized by Montaigne on one hand, and Epictetus on the other, in order to bring the unbeliever to such despair and confusion that he would embrace God. Last works and death T. S. Eliot described him during this phase of his life as "a man of the world among ascetics, and an ascetic among men of the world." Pascal's ascetic lifestyle derived from a belief that it was natural and necessary for a person to suffer. In 1659, Pascal fell seriously ill. During his last years, he frequently tried to reject the ministrations of his doctors, saying, "Sickness is the natural state of Christians." Louis XIV suppressed the Jansenist movement at Port-Royal in 1661. In response, Pascal wrote one of his final works, Écrit sur la signature du formulaire ("Writ on the Signing of the Form"), exhorting the Jansenists not to give in. Later that year, his sister Jacqueline died, which convinced Pascal to cease his polemics on Jansenism. Pascal's last major achievement, returning to his mechanical genius, was inaugurating perhaps the first bus line, the carrosses à cinq sols, moving passengers within Paris in a carriage with many seats. 
Pascal also laid down the operating principles which were later used to plan public transportation: the carriages had a fixed route, a fixed price, and left even if there were no passengers. The idea of public transportation is widely considered to have been well ahead of its time. The lines were not commercially successful, and the last one closed by 1675. In 1662, Pascal's illness became more violent, and his emotional condition had severely worsened since his sister's death. Aware that his health was fading quickly, he sought a move to the hospital for incurable diseases, but his doctors declared that he was too unstable to be carried. In Paris on 18 August 1662, Pascal went into convulsions and received extreme unction. He died the next morning, his last words being "May God never abandon me," and was buried in the cemetery of Saint-Étienne-du-Mont. An autopsy performed after his death revealed grave problems with his stomach and other organs of his abdomen, along with damage to his brain. Despite the autopsy, the cause of his poor health was never precisely determined, though speculation focuses on tuberculosis, stomach cancer, or a combination of the two. The headaches which affected Pascal are generally attributed to his brain lesion. Legacy One of the Universities of Clermont-Ferrand, France – Université Blaise Pascal – is named after him. Établissement scolaire français Blaise-Pascal in Lubumbashi, Democratic Republic of the Congo is named after Pascal. The 1969 Eric Rohmer film My Night at Maud's is based on the work of Pascal. Roberto Rossellini directed a filmed biopic, Blaise Pascal, which originally aired on Italian television in 1971. Pascal was a subject of the first edition of the 1984 BBC Two documentary, Sea of Faith, presented by Don Cupitt. The chameleon in the film Tangled is named for Pascal. A programming language is named for Pascal. In 2014, Nvidia announced its new Pascal microarchitecture, which is named for Pascal. The first graphics cards featuring Pascal were released in 2016. The 2017 game Nier: Automata has multiple characters named after famous philosophers; one of these is a sentient pacifistic machine named Pascal, who serves as a major supporting character. Pascal creates a village for machines to live peacefully with the androids they're at war with and acts as a parental figure for other machines trying to adapt to their newly-found individuality. The otter in the Animal Crossing series is named for Pascal. Minor planet 4500 Pascal is named in his honor. Pope Paul VI, in his encyclical Populorum progressio, issued in 1967, quotes Pascal's Pensées. In 2023, Pope Francis released an apostolic letter, Sublimitas et miseria hominis, dedicated to Blaise Pascal, in commemoration of the fourth centenary of his birth. Works "Essai pour les coniques" [Essay on conics] (1639) Experiences nouvelles touchant le vide [New experiments with the vacuum] (1647) Récit de la grande expérience de l'équilibre des liqueurs [Account of the great experiment on equilibrium in liquids] (1648) Traité du triangle arithmétique [Treatise on the arithmetical triangle] (written 1654; publ. 1665) Lettres provinciales [The provincial letters] (1656–57) De l'Esprit géométrique [On the geometrical spirit] (1657 or 1658) Écrit sur la signature du formulaire (1661) Pensées [Thoughts] (incomplete at death; publ. 
1670) "Discourse on the Passion of Love" "On the Conversion of the Sinner" See also Expected value Gambler's ruin Pascal's barrel Pascal distribution Pascal's mugging Pascal's pyramid Pascal's simplex Problem of points Scientific revolution List of pioneers in computer science List of works by Eugène Guillaume References Further reading Adamson, Donald. Blaise Pascal: Mathematician, Physicist, and Thinker about God (1995) Adamson, Donald. "Pascal's Views on Mathematics and the Divine," Mathematics and the Divine: A Historical Study (eds. T. Koetsier and L. Bergmans. Amsterdam: Elsevier 2005), pp. 407–21. Broome, J.H. Pascal. (London: E. Arnold, 1965). Campe, Rüdiger, "Numbers and Calculation in Context: The Game of Decision - Pascal" in The Game of Probability. Literature and Calculation from Pascal and Kleist, Stanford University Press, 2012 Davidson, Hugh M. Blaise Pascal. (Boston: Twayne Publishers), 1983. Farrell, John. "Pascal and Power". Chapter seven of Paranoia and Modernity: Cervantes to Rousseau (Cornell UP, 2006). Goldmann, Lucien, The hidden God; a study of tragic vision in the Pensees of Pascal and the tragedies of Racine (original ed. 1955, Trans. Philip Thody. London: Routledge, 1964). Groothuis, Douglas. On Pascal. (Belmont: Wadsworth, 2002). Jordan, Jeff. Pascal's Wager: Pragmatic Arguments and Belief in God. (Oxford: Clarendon Press, 2006). Landkildehus, Søren. "Kierkegaard and Pascal as kindred spirits in the Fight against Christendom" in Kierkegaard and the Renaissance and Modern Traditions (ed. Jon Stewart. Farnham: Ashgate Publishing, 2009). Mackie, John Leslie. The Miracle of Theism: Arguments for and against the Existence of God. (Oxford: Oxford University Press, 1982). Stafford Harry Northcote, Viscount Saint Cyres, Pascal (London: Smith, Elder & Company, 1909; New York: E. P. Dutton) Pugh, Anthony R. The Composition of Pascal's Apologia, (University of Toronto Press, 1984). Tobin, Paul. "The Rejection of Pascal's Wager: A Skeptic's Guide to the Bible and the Historical Jesus". authorsonline.co.uk, 2009. Yves Morvan, Pascal à Mirefleurs ? Les dessins de la maison de Domat, Impr. Blandin, 1985. (FRBNF40378895) External links Oeuvres complètes, volume 2 (1858) Paris: Librairie de L. Hachette et Cie, link from HathiTrust. The Correspondence of Blaise Pascal in EMLO Pensées de Blaise Pascal. Renouard, Paris 1812 (2 vols.) Discussion of the Pascaline, its history, mechanism, surviving examples, and modern replicas at http://things-that-count.net Pascal's Memorial in orig. French/Latin and modern English, trans. Elizabeth T. Knuth. Biography, Bibliography. (in French) BBC Radio 4. In Our Time: Pascal. Blaise Pascal featured on the 500 French Franc banknote in 1977. Blaise Pascal's works: text, concordances and frequency lists Etext of Pascal's Pensées (English, in various formats) Etext of Pascal's Lettres Provinciales (English) Etext of a number of Pascal's minor works (English translation) including De l'Esprit géométrique and De l'Art de persuader. 
A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1,048,576), and gibi (Gi, 2^30 = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files. The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" ("k", 10^3 = 1000), "mega" ("M", 10^6 = 1,000,000) and "giga" ("G", 10^9 = 1,000,000,000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2,097,152 bytes, instead of 2 × 10^6 = 2,000,000. On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB" holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240, and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2,469,606,195 or to 2.3 × 10^9 = 2,300,000,000, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union. Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes has persisted in some fields. While the binary prefixes are almost always used with the units of information, bits and bytes, they may be used with any other unit of measure, when convenient. For example, in signal processing one may need binary multiples of the frequency unit hertz (Hz), for example the kibihertz (KiHz) equal to 1024 Hz. Definitions In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes ronna for 1000^9 and quetta for 1000^10. In analogy to the existing binary prefixes, a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) suggested the prefixes robi (Ri, 1024^9) and quebi (Qi, 1024^10) for their binary counterparts, but no corresponding binary prefixes have yet been adopted. Comparison of binary and decimal prefixes The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 27% for the quetta prefix. Although the prefixes ronna and quetta have been defined, as of 2022 no names have been officially assigned to the corresponding binary prefixes. 
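As a rough illustration of the scale of that ambiguity, the Python snippet below (illustrative only; the paired name table is ours, and the robi/quebi entries are merely proposals, not adopted prefixes) prints the relative gap between the binary and decimal reading of each prefix, and the two possible readings of a "2.3 GB" file size.

```python
# Paired decimal/binary prefix names for powers 1..10 of 1000 and 1024.
# "robi" and "quebi" are only suggested names, not adopted prefixes.
PAIRS = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi", "peta/pebi",
         "exa/exbi", "zetta/zebi", "yotta/yobi", "ronna/robi", "quetta/quebi"]

for i, name in enumerate(PAIRS, start=1):
    decimal, binary = 1000 ** i, 1024 ** i
    gap = 100 * (binary - decimal) / decimal
    print(f"{name:13s} 10^{3 * i:<3} vs 2^{10 * i:<4} differ by {gap:5.1f}%")

# The two readings of a file reported as "2.3 GB":
print(int(2.3 * 10**9), "bytes (decimal) vs", int(2.3 * 2**30), "bytes (binary)")
```

The first line of output shows the familiar 2.4% gap for kilo versus kibi; by the tenth power the gap has grown to nearly 27%.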
History Early prefixes The original metric system adopted by France in 1795 included two binary prefixes named double- (2×) and demi- (½×). However, these were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960. Storage capacity Main memory Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system, and could address ten thousand 7-bit words. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of their address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses. While early documentation specified those memory sizes as exact numbers such as 4096 or 8192 units (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10, to mean instead the nearest powers of two; namely, 2^10 = 1024, 2^20 = 1024^2, 2^30 = 1024^3, etc. The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings. The symbol for 2^10 = 1024 could be written either in lower case ("k") or in uppercase ("K"). The latter was often used intentionally to indicate the binary rather than decimal meaning. This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964) and of the IBM System/370 (1972), of the CDC 7600, of the DEC PDP-11/70 (1975) and of the DEC VAX-11/780 (1977). In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated "2^16 = 64 × 1024 = 65,536 words" as "65K words" (rather than "64K" or "66K"), while the documentation of the HP 21MX real-time computer (1974) denoted 3 × 2^16 = 192 × 1024 = 196,608 as "196K" and 2^20 = 1,048,576 as "1M". These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory. The use of SI prefixes, and the use of "K" instead of "k", remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2, or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 2^20 = 536,870,912 bytes, rather than 512,000,000. Hard disks In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotating disk drive is organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. 
For example, the first commercially sold disk drive, the IBM 350 (1956), had 50 physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters. Moreover, since the 1960s, many disk drives used IBM's disk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single block in each of its 808 × 19 tracks. Decimal megabytes were used for disk capacity by the CDC in 1974. The Seagate ST-412, one of several types installed in the IBM PC/XT, had a capacity of 10,027,008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as "10 MB". Similarly, a hard drive specified with a decimal prefix can be expected to offer slightly more than the stated number of bytes in the decimal sense, but noticeably less than the same figure interpreted in the binary sense. The first terabyte (SI prefix, 10^12 bytes) hard disk drive was introduced in 2007. Decimal prefixes were generally used by information processing publications when comparing hard disk capacities. Users must be aware that some programs and operating systems, such as earlier versions of Microsoft Windows and MacOS, may use "MB" and "GB" in the binary sense even when displaying disk drive capacities. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as "9.56 MB", and that of a "300 GB" drive as "279.4 GB". Good software and documentation should specify clearly whether "K", "M", "G" mean binary or decimal multipliers. Floppy disks Floppy disks used a variety of formats, and their capacities were usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities. The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits. The 5.25-inch diskette sold with the IBM PC AT could hold 1200 × 1024 = 1,228,800 bytes, and thus was marketed as "1,200 KB" with the binary sense of "KB". However, the capacity was also quoted as "1.2 MB", which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was 1.2288 MB (decimal) or 1.171875 MB (binary). The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of 116,480 bytes. It was later upgraded to 16 sectors per track, giving a total of 140 × 1024 = 143,360 bytes, which was described as "140KB" using the binary sense of "K". The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted as "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 2^10 = 1,474,560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (which would be 1.4 × 2^20 ≈ 1,468,006 bytes). User complaints forced both Apple and Microsoft to issue support bulletins explaining the discrepancy. Optical disks When specifying the capacities of optical compact discs, "megabyte" and "MB" usually mean 1024^2 bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB, which is approximately 734,000,000 bytes (decimal). 
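Before moving on to other media, the mixed conventions quoted above are easy to reproduce. The following Python lines are illustrative only, using the figures already given in the text, and redo the arithmetic for the "1.44 MB" high-density floppy and for a drive sold with a decimal "GB" label.

```python
# "1.44 MB" high-density floppy: 1440 pairs of 512-byte sectors.
hd_floppy = 1440 * 1024              # 1,474,560 bytes
print(hd_floppy / 1_000_000)         # 1.47456   with M = 10^6 (decimal)
print(hd_floppy / (1000 * 1024))     # 1.44      with the hybrid M = 1000 * 1024
print(hd_floppy / 2**20)             # 1.40625   with M = 2^20 (binary)

# A drive sold as "10 GB" in the decimal sense, reported with binary "GB":
print(round(10 * 10**9 / 2**30, 2))  # 9.31
```

The same byte count thus yields three different "MB" figures depending on which multiplier the software applies, which is exactly the discrepancy the support bulletins had to explain.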
On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) have been generally specified in decimal gigabytes ("GB"), that is, 1000^3 bytes. In particular, a typical "4.7 GB" DVD has a nominal capacity of about 4.7 × 10^9 bytes, which is about 4.38 GiB. Tape drives and media Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity, although the actual capacity would depend on the block size used when recording. Data and clock rates Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, about 4.77 million cycles per second. Similarly, digital information transfer rates are quoted using decimal prefixes. The Parallel ATA "100 MB/s" disk interface can transfer 100,000,000 bytes per second, and a "56 Kb/s" modem transmits 56,000 bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes. The standard sampling rate of music compact disks, quoted as 44.1 kHz, is indeed 44,100 samples per second. A "gigabit" Ethernet interface can receive or transmit up to 10^9 bits per second, or 125,000,000 bytes per second within each packet. A "56k" modem can encode or decode up to 56,000 bits per second. Decimal SI prefixes are also generally used for processor-memory data transfer speeds. A PCI-X bus 64 bits wide transfers one 64-bit word per clock cycle, and its throughput is likewise quoted with decimal prefixes. A PC3200 memory module on a double data rate bus, transferring 8 bytes per cycle at a clock speed of 200 MHz, has a bandwidth of 200,000,000 × 2 × 8 = 3,200,000,000 B/s, which would be quoted as "3.2 GB/s". Ambiguous standards The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning either powers of 1000 or (in computer contexts) of 1024, has been recorded in popular dictionaries, and even in some obsolete standards, such as ANSI/IEEE 1084-1986 and 1212-1991, IEEE 610.10-1994, and 100–2000. Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b"). Early binary prefix proposals Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix "di" and the symbol suffix or subscript "2" to mean "binary"; so that, for example, "one dikilobyte" would mean "1024 bytes". In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on. (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 for 1024^2, though he noted that neither the Greek letter nor lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, powers of two be indicated by the letter B followed by the exponent, similar to E in decimal scientific notation. Thus one would write 3B20 for 3 × 2^20 = 3,145,728. This convention is still used on some calculators to present binary floating-point numbers today. In 1969, Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB. 
However, the use of double SI prefixes, although rejected by the BIPM, had already been given a multiplicative meaning; so that "MMB" could be understood as "(10^6)^2 bytes", that is, 10^12 bytes. Consumer confusion The ambiguous meanings of "kilo", "mega", "giga", etc., have caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software, which used them in the binary sense, such as the Apple Macintosh in 1984. For example, a hard drive marketed with a decimal capacity could be reported by the operating system as having a noticeably smaller capacity. The confusion was compounded by the fact that RAM manufacturers used the binary sense too. Legal disputes The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives. Early cases Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes. Willem Vroegh v. Eastman Kodak Company On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc., PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks alleging that their descriptions of the capacity of their flash memory cards were false and misleading. Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024^2 for megabyte and 1024^3 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define an MB as one million bytes but stated that the industry has largely ignored the IEC standards. The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device". 
They also paid fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising. Western Digital had this footnote in their settlement: "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items." Cho v. Seagate Technology (US) Holdings, Inc. A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with free backup software or a 5% refund on the cost of the drives. Dinan et al. v. SanDisk LLC On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of "GB" to mean 10^9 bytes. The IEC 1999 Standard In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb", for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500,000,000,000 bytes, exactly or approximately, rather than, say, 500 × 2^30 bytes (= 536,870,912,000) or 0.5 × 2^40 bytes (= 549,755,813,888). The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively. In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2 Amendment 2). The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts. The harmonized ISO/IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on. The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1024 bits (2^10 bits), which is 1 kibibit." The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. 
At the time of the IEC standardization, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis. Other standards bodies and organizations The IEC standard binary prefixes are supported by other standardization bodies and technical organizations. The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web page documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee. NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them. As of 2014, the microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary, but still allows the SI prefixes and the symbols "K", "M" and "G" to be used with the binary sense for memory sizes. On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. Even so, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer. The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI. The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes. The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007. Current practice Some computer industry participants, such as Hewlett-Packard (HP) and IBM, have adopted or recommended IEC binary prefixes as part of their general documentation policies. As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and modules, and of the cache of computer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2^20 bytes, not 512 × 10^6 bytes. JEDEC Solid State Technology Association, the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), continues to include the customary binary definitions of "kilo", "mega", and "giga" in the document Terms, Definitions, and Letter Symbols, and uses those definitions in their later memory standards. On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such as disk drives and solid state drives, except for some flash memory modules intended to be used as EEPROMs or for other similar uses. However, some disk manufacturers have used the IEC prefixes to avoid confusion. 
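The gap between the two senses grows with each prefix step. The short sketch below (plain Python, written here as an illustration rather than taken from any vendor or standards document) tabulates the decimal and customary binary value of each prefix and the resulting percentage difference.

```python
# Compare the decimal (SI) and customary binary value of each prefix.
# Purely illustrative; prefix names follow the SI/IEC pairs discussed above.

PREFIX_PAIRS = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi", "peta/pebi"]

for i, name in enumerate(PREFIX_PAIRS, start=1):
    decimal = 1000 ** i            # SI sense, e.g. 1 MB = 10**6 bytes
    binary = 1024 ** i             # binary sense, e.g. 1 MiB = 2**20 bytes
    gap = (binary - decimal) / decimal * 100
    print(f"{name:9s}  {decimal:>19,d}  {binary:>19,d}  +{gap:.1f}%")

# A "512 MB" memory module holds 512 * 2**20 = 536,870,912 bytes,
# about 4.9% more than 512 * 10**6 = 512,000,000 bytes.
```

The printed differences run from about 2.4% at the kilo/kibi level to roughly 12.6% at the peta/pebi level, which is why the discrepancy is most noticeable on large storage devices.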
The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds. Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.) or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as the Linux/GNU ls command, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning. See also Binary engineering notation B notation (scientific notation) ISO/IEC 80000 Nibble Octet References Further reading – An introduction to binary prefixes —a 1996–1999 paper on bits, bytes, prefixes and symbols —Another description of binary prefixes —White-paper on the controversy over drive capacities External links A plea for sanity A summary of the organizations, software, and so on that have implemented the new binary prefixes KiloBytes vs. kilobits vs. Kibibytes (Binary prefixes) SI/Binary Prefix Converter Storage Capacity Measurement Standards Measurement Naming conventions Units of information Numeral systems
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. Definition BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Q_n : n ∈ ℕ}, such that: for all n ∈ ℕ, Q_n takes n qubits as input and outputs 1 bit; for all x in L, Pr(Q_|x|(x) = 1) ≥ 2/3; and for all x not in L, Pr(Q_|x|(x) = 1) ≤ 1/3. Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances. Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input. A complete problem for Promise-BQP Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and that every problem in Promise-BQP reduces to it in polynomial time. Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known). APPROX-QCIRCUIT-PROB problem Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases: measuring the first qubit of the state C|0…0⟩ yields 1 with probability at least α, or measuring the first qubit of the state C|0…0⟩ yields 1 with probability at most β. Here, there is a promise on the inputs, as the problem does not specify the behavior if an instance is not covered by these two cases. Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB. Proof. Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits, and two numbers α, β, it distinguishes between the above two cases. We can solve any problem in BQP with this oracle, by setting α = 2/3 and β = 1/3. For any language L in BQP, there exists a family of quantum circuits {Q_n} such that for all n and every input x that is a state of n qubits, Pr(Q_n(x) = 1) ≥ 2/3 if x is in L, and Pr(Q_n(x) = 1) ≤ 1/3 if x is not in L. Fix an input x of n qubits, and the corresponding quantum circuit Q_n. We can first construct a circuit C_x such that C_x|0…0⟩ = |x⟩. This can be done easily by hardwiring x and applying a sequence of NOT (X) gates to flip the appropriate qubits. Then we can combine the two circuits to get C' = Q_n C_x, and now C'|0…0⟩ = Q_n|x⟩. 
Finally, the result of Q_n is necessarily obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurement and reroute the circuits so that by measuring the first qubit of C', we get the output. This will be our circuit C, and we decide the membership of x in L by running A(C) with α = 2/3 and β = 1/3. By definition of BQP, we will either fall into the first case (acceptance), or the second case (rejection), so L reduces to APPROX-QCIRCUIT-PROB. APPROX-QCIRCUIT-PROB comes in handy when we try to prove the relationships between some well-known complexity classes and BQP. Relationship to other complexity classes BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time. BQP contains P and BPP and is contained in AWPP, PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are: P ⊆ BPP ⊆ BQP ⊆ AWPP ⊆ PP ⊆ PSPACE ⊆ EXP. As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and the classes mentioned above is supposed to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊄ PH^A. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH. It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing. Adding postselection to BQP results in the complexity class PostBQP which is equal to PP. We will prove or discuss some of these results below. BQP and EXP We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP since APPROX-QCIRCUIT-PROB is BQP-complete. This can be done by direct simulation: represent the state of the n qubits as a vector of 2^n complex amplitudes, apply each of the m gates as a matrix acting on that vector, and finally read off the probability that the first qubit is 1, all of which takes time exponential in n. Note that this algorithm also requires exponential space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity. 
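To make the brute-force simulation behind this containment concrete, here is a minimal Python sketch (assuming NumPy is available; the function names are ours, and for simplicity two-qubit gates are assumed to act on adjacent qubit positions). It computes the probability that measuring the first qubit of C|0…0⟩ yields 1, i.e. the quantity APPROX-QCIRCUIT-PROB asks about, using time and memory exponential in n.

```python
# Sketch of the exponential-time, exponential-space simulation used for BQP ⊆ EXP.
# Assumes NumPy; gate placement is simplified (two-qubit gates act on adjacent qubits).

import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 (one-qubit) or 4x4 (adjacent two-qubit) gate starting at qubit `target`."""
    k = gate.shape[0].bit_length() - 1        # number of qubits the gate acts on (1 or 2)
    op = np.eye(1)
    for q in range(n):                        # build the full 2**n x 2**n operator
        if q == target:
            op = np.kron(op, gate)
        elif target < q < target + k:
            continue                          # qubit already covered by the gate
        else:
            op = np.kron(op, np.eye(2))
    return op @ state                         # exponential time and space

def prob_first_qubit_one(gates, n):
    """Probability that measuring the first qubit of C|0...0> yields 1."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                            # the all-zeros input state |0...0>
    for gate, target in gates:
        state = apply_gate(state, gate, target, n)
    # With qubit 0 as the most significant bit, basis states whose first qubit is 1
    # occupy the upper half of the amplitude vector.
    return float(np.sum(np.abs(state[2 ** (n - 1):]) ** 2))

if __name__ == "__main__":
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    # Prepare a Bell pair on 2 qubits: the first qubit measures 1 with probability 1/2.
    print(prob_first_qubit_one([(H, 0), (CNOT, 0)], n=2))   # ~0.5
```

The vector of 2^n amplitudes and the 2^n × 2^n gate matrices are exactly the objects whose storage cost the sum-of-histories technique in the next section avoids.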
BQP and PSPACE To prove BQP ⊆ PSPACE, we first introduce a technique called the sum of histories. Sum of Histories Sum of histories is a technique introduced by physicist Richard Feynman for the path integral formulation. We apply this technique to quantum computing to solve APPROX-QCIRCUIT-PROB. Consider a quantum circuit C, which consists of m gates, g_1, g_2, …, g_m, where each g_j comes from a universal gate set and acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input |0…0⟩, and each node in the tree has 2^n children, each representing a basis state of the n qubits. The weight on a tree edge from a node at the j-th level representing a basis state |x⟩ to a node at the (j+1)-th level representing a basis state |y⟩ is ⟨y|g_(j+1)|x⟩, the amplitude of |y⟩ after applying g_(j+1) to |x⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being |y⟩, we sum up the amplitudes of all root-to-leaf paths that end at a node representing |y⟩. More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m, with one level for each gate in addition to the root, and with branching factor 2^n. Notice that in the sum over histories algorithm to compute some amplitude α_x, only one history is stored at any point in the computation. Hence, the sum over histories algorithm uses O(nm) space to compute α_x for any x, since O(nm) bits are needed to store the histories in addition to some workspace variables. Therefore, in polynomial space, we may compute the sum of |α_x|^2 over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit. Notice that compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time instead. In fact it takes on the order of 2^(mn) time to calculate a single amplitude! BQP and PP A similar sum-over-histories argument can be used to show that BQP ⊆ PP. P and BQP We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit. It is conjectured that BQP solves hard problems outside of P, specifically, problems in NP. The claim is indefinite because we don't know if P = NP, so we don't know if those problems are actually in P. Below are some problems offered as evidence for the conjecture: Integer factorization (see Shor's algorithm) Discrete logarithm Simulation of quantum systems (see universal quantum simulator) Approximating the Jones polynomial at certain roots of unity Harrow-Hassidim-Lloyd (HHL) algorithm See also Hidden subgroup problem Polynomial hierarchy (PH) Quantum complexity theory QMA, the quantum equivalent to NP. QIP, the quantum equivalent to IP. References External links Complexity Zoo link to BQP Probabilistic complexity classes Quantum complexity theory Quantum computing
A bishop is an ordained member of the clergy who is entrusted with a position of authority and oversight in a religious institution. In Christianity, bishops are normally responsible for the governance of dioceses. The role or office of bishop is called episcopacy. Organizationally, several Christian denominations utilize ecclesiastical structures that call for the position of bishops, while other denominations have dispensed with this office, seeing it as a symbol of power. Bishops have also exercised political authority. Traditionally, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles or Saint Paul. The bishops are by doctrine understood as those who possess the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest (i.e. presbyter), and then bishop is understood to hold the fullness of the ministerial priesthood, given responsibility by Christ to govern, teach and sanctify the Body of Christ (the Church). Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry. Some Pentecostal and other Protestant denominations have bishops who oversee congregations, though they do not claim apostolic succession. Terminology The English term bishop derives from the Greek word epískopos, meaning "overseer"; Greek was the language of the early Christian church. However, the term did not originate in Christianity. In Greek literature, the term had been used for several centuries before the advent of Christianity. It later transformed into the Latin episcopus, Old English bisceop, Middle English bisshop and lastly bishop. In the early Christian era the term was not always clearly distinguished from presbýteros (literally: "elder" or "senior", the origin of the modern English word priest), but it is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch. History in Christianity The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (presbýteroi). In Acts 11:30 and Acts 15:22, a collegiate system of government in Jerusalem is chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia. The word presbyter was not yet distinguished from overseer (epískopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices—presbyters (seen by many as an interchangeable term with epískopos or overseer) and deacons. In the First Epistle to Timothy and the Epistle to Titus in the New Testament a more clearly defined episcopate can be seen. Both letters state that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight. Early sources are unclear but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. 
Eventually the head or "monarchic" bishop came to rule more clearly, and all local churches would eventually follow the example of the other churches and structure themselves after the model of the others with the one bishop in clearer charge, though the role of the body of presbyters remained important. Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the metropolitan bishop (the bishop in a large city) appointed priests to minister each congregation, acting as the bishop's delegate. Apostolic Fathers Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city) he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of (bishops, plural) in a city. As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by Chorbishop, an official rank of bishops. However, soon, presbyters and deacons were sent from the bishop of a city church. Gradually, priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area. Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." (). At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, which is that of the : the primate of sacrificial priesthood and the power to forgive sins. Christian bishops and civil government The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned. The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. 
Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages. Bishops holding political office As well as being Archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century. In modern times, the principality of Andorra is headed by Co-Princes of Andorra, one of whom is the Bishop of Urgell and the other, the sitting President of France, an arrangement that began with the Paréage of Andorra (1278), and was ratified in the 1993 constitution of Andorra. The office of the Papacy is inherently held by the sitting Roman Catholic Bishop of Rome. Though not originally intended to hold temporal authority, since the Middle Ages the power of the Papacy gradually expanded deep into the secular realm and for centuries the sitting Bishop of Rome was the most powerful governmental office in Central Italy. In modern times, the Pope is also the sovereign Prince of Vatican City, an internationally recognized micro-state located entirely within the city of Rome. In France, prior to the Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General. This role was abolished after separation of Church and State was implemented during the French Revolution. In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham had extensive vice-regal powers within his northern diocese, which was a county palatine, the County Palatine of Durham, (previously, Liberty of Durham) of which he was ex officio the earl. In the 19th century, a gradual process of reform was enacted, with the majority of the bishop's historic powers vested in The Crown by 1858. Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, cultural and legal jurisdiction, as well as spiritual authority, over all Eastern Orthodox Christians of the empire, as part of the Ottoman millet system. An Orthodox bishop headed the Prince-Bishopric of Montenegro from 1516 to 1852, assisted by a secular guvernadur. More recently, Archbishop Makarios III of Cyprus, served as President of the Cyprus from 1960 to 1977, an extremely turbulent time period on the island. 
In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State. Episcopacy during the English Civil War During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and were not clearly distinguished, many Puritans held that this was the only form of government the church should have. The Anglican divine, Richard Hooker, objected to this claim in his famous work Of the Laws of Ecclesiastic Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time, the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced. Christian churches Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican churches Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, certain Lutheran Churches, the Anglican Communion, the Independent Catholic Churches, the Independent Anglican Churches, and certain other, smaller, denominations. The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop", or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous. As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility. Duties In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, High Church Lutheranism, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons. In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons. The bishop is the ordinary minister of the sacrament of confirmation in the Latin Church, and in the Old Catholic communion only a bishop may administer this sacrament. In the Lutheran and Anglican churches, the bishop normatively administers the rite of confirmation, although in those denominations that do not have an episcopal polity, confirmation is administered by the priest. 
However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop. Ordination of Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican bishops Bishops in all of these communions are ordained by other bishops through the laying on of hands. Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer. Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles referred to as apostolic succession. In Scandinavia and the Baltic region, Lutheran churches participating in the Porvoo Communion (those of Iceland, Norway, Sweden, Finland, Estonia, and Lithuania), as well as many non-Porvoo membership Lutheran churches (including those of Kenya, Latvia, and Russia), as well as the confessional Communion of Nordic Lutheran Dioceses, believe that they ordain their bishops in the apostolic succession in lines stemming from the original apostles. The New Westminster Dictionary of Church History states that "In Sweden the apostolic succession was preserved because the Catholic bishops were allowed to stay in office, but they had to approve changes in the ceremonies." Peculiar to the Catholic Church While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another (priest) as a bishop. Though a minimum of three bishops participating is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the Church was persecuted under Communist rule. The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him. Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of a laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have duties to elect bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent. The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Church. Each bishop within the Latin Church is answerable directly to the Pope and not any other bishop except to metropolitans in certain oversight instances. 
The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction. Recognition of other churches' ordinations The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, being an adult male) and an Eastern Orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy). With respect to Lutheranism, "the Catholic Church has never officially expressed its judgement on the validity of orders as they have been handed down by episcopal succession in these two national Lutheran churches" (the Evangelical Lutheran Church of Sweden and the Evangelical Lutheran Church of Finland) though it does "question how the ecclesiastical break in the 16th century has affected the apostolicity of the churches of the Reformation and thus the apostolicity of their ministry". Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of the Reformed changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has been used to argue that a line of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England. However, other issues, such as the Anglican ordination of women, are at variance with the Catholic understanding of Christian teaching and have contributed to the reaffirmation of Catholic rejection of Anglican ordinations. The Eastern Orthodox Churches do not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the Church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches. The position of the Catholic Church is slightly different. 
Whilst it does recognise the validity of the orders of certain groups which separated from communion with Holy See (for instance, the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church - which received its orders directly from Utrecht, and was until recently part of that communion), Catholicism does not recognise the orders of any group whose teaching is at variance with what they consider the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy: They hold that the continuing practice among many Independent clergy of one person receiving multiple ordinations in order to secure apostolic succession, betrays an incorrect and mechanistic theology of ordination. They hold that the practice within Independent groups of ordaining women (such as within certain member communities of the Anglican Communion) demonstrates an understanding of priesthood that they vindicate is totally unacceptable to the Catholic and Eastern Orthodox churches as they believe that the Universal Church does not possess such authority; thus, they uphold that any ceremonies performed by these women should be considered being sacramentally invalid. The theology of male clergy within the Independent movement is also suspect according to the Catholics, as they presumably approve of the ordination of females, and may have even undergone an (invalid) ordination ceremony conducted by a woman. Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops. There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches. Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church. In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran national churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is mirrored on either the ELCA or ELCIC's national constitution). 
Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession in line with bishops from the Evangelical Lutheran Church of Sweden, with at least one Anglican bishop serving as co-consecrator. Since going into ecumenical communion with their respective Anglican body, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but they serve as the principal celebrant of all pastoral ordination and installation ceremonies, diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documentations of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single 6-year term and may be elected to an additional term. Although ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-ordinators are given permission to perform the rites in "extraordinary" circumstance. In practice, "extraordinary" circumstance have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned off what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies. Methodism African Methodist Episcopal Church In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years." Christian Methodist Episcopal Church In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by "delegate" votes for as many years deemed until the age of 74, then the bishop must retire. Among their duties, are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the Church. 
The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female. United Methodist Church In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the Church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are: In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980. The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the Church and through the Church into the world and gives leadership in the quest for Christian unity and interreligious relationships. The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches. John Wesley consecrated Thomas Coke a "General Superintendent", and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination. Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton. The Church of Jesus Christ of Latter-day Saints In the Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment. 
As such, it is his duty to preside, call local leaders, and judge the worthiness of members for certain activities. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. The bishop is especially responsible for leading the youth, in connection with the fact that a bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen). Although members are asked to confess serious sins to him, unlike the Catholic Church, he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed. A literal descendant of Aaron has "legal right" to act as a bishop after being found worthy and ordained by the First Presidency. In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop. Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two counselors to form a bishopric. An priesthood holder called as bishop must be ordained a high priest if he is not already one, unlike the similar function of branch president. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. Traditionally, bishops are married, though this is not always the case. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection. Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the Church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ. At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide Church, including the Church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric. As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop". 
Irvingism New Apostolic Church The New Apostolic Church (NAC) knows three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries. Of the several kinds of priest....ministries, the bishop is the highest. Nearly all bishops are set in line directly from the chief apostle. They support and help their superior apostle. Pentecostalism Church of God in Christ In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions" within COGIC, each under the authority of a bishop, sometimes called "state bishops". They can either be made up of large geographical regions of churches or churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction while others may have several more, and each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, which are smaller groups of churches (either grouped by geographical situation or by similar affiliations) which are each under the authority of District Superintendents who answer to the authority of their jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States, and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern day equivalent in the church of the early apostles and overseers of the New Testament church, and as the highest ranking clergymen in the COGIC, they are tasked with the responsibilities of being the head overseers of all religious, civil, and economic ministries and protocol for the church denomination. They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. The bishops of the COGIC denomination are all collectively called "The Board of Bishops". From the Board of Bishops, and the General Assembly of the COGIC, the body of the church composed of clergy and lay delegates that are responsible for making and enforcing the bylaws of the denomination, every four years, twelve bishops from the COGIC are elected as "The General Board" of the church, who work alongside the delegates of the General Assembly and Board of Bishops to provide administration over the denomination as the church's head executive leaders. One of twelve bishops of the General Board is also elected the "presiding bishop" of the church, and two others are appointed by the presiding bishop himself, as his first and second assistant presiding bishops. Bishops in the Church of God in Christ usually wear black clergy suits which consist of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, which is usually referred to as "Class B Civic attire". Bishops in COGIC also typically wear the Anglican Choir Dress style vestments of a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet. This is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend. 
Church of God (Cleveland, Tennessee) In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop. Pentecostal Church of God In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was brought on because internationally, the term bishop is more commonly related to religious leaders than the previous title. The title bishop is used for both the general (international leader) and the district (state) leaders. The title is sometimes used in conjunction with the previous, thus becoming general (district) superintendent/bishop. Seventh-day Adventists According to the Seventh-day Adventist understanding of the doctrine of the Church: "The "elders" (Greek, ) or "bishops" () were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer". Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17,28; Titus 1:5, 7). "Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer". Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations." The above understanding is part of the basis of Adventist organizational structure. The world wide Seventh-day Adventist church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally at the top is the general conference. At each level (with exception to the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those who have been elected president would in effect be the "bishop" while never actually carrying the title or ordained as such because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles. Others Some Baptists also have begun taking on the title of bishop. In some smaller Protestant denominations and independent churches, the term bishop is used in the same way as pastor, to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US. In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. 
In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ." The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office. While not considered orthodox Christian, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism. The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory. Jehovah's Witnesses do not use the title 'Bishop' within their organizational structure, but appoint elders to be overseers (to fulfill the role of oversight) within their congregations. The Batak Christian Protestant Church of Indonesia, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of bishop. In the Vietnamese syncretist religion of Caodaism, bishops () comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban. Dress and insignia in Christianity Traditionally, a number of items are associated with the office of a bishop, most notably the mitre and the crosier. Other vestments and insignia vary between Eastern and Western Christianity. In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions. The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier. When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar. The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Church Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry). Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. 
However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere. In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, eastern style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs. Cathedra In Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop. The term's use in non-Christian religions Buddhism The leader of the Buddhist Churches of America (BCA) is their bishop, The Japanese title for the bishop of the BCA is , although the English title is favored over the Japanese. When it comes to many other Buddhist terms, the BCA chose to keep them in their original language (terms such as and ), but with some words (including ), they changed/translated these terms into English words. Between 1899 and 1944, the BCA held the name Buddhist Mission of North America. The leader of the Buddhist Mission of North America was called (superintendent/director) between 1899 and 1918. In 1918 the was promoted to bishop (). However, according to George J. Tanabe, the title "bishop" was in practice already used by Hawaiian Shin Buddhists (in Honpa Hongwanji Mission of Hawaii) even when the official title was kantoku. Bishops are also present in other Japanese Buddhist organizations. Higashi Hongan-ji's North American District, Honpa Honganji Mission of Hawaii, Jodo Shinshu Buddhist Temples of Canada, a Jodo Shu temple in Los Angeles, the Shingon temple Koyasan Buddhist Temple, Sōtō Mission in Hawai‘i (a Soto Zen Buddhist institution), and the Sōtō Zen Buddhist Community of South America () all have or have had leaders with the title bishop. As for the Sōtō Zen Buddhist Community of South America, the Japanese title is , but the leader is in practice referred to as "bishop". Tenrikyo Tenrikyo is a Japanese New Religion with influences from both Shinto and Buddhism. The leader of the Tenrikyo North American Mission has the title of bishop. 
See also Anglican ministry#Bishops Appointment of Catholic bishops Appointment of Church of England bishops Bishop in Europe Bishop in the Catholic Church Bishop of Alexandria, or Pope Bishops in the Church of Scotland Diocesan bishop Ecclesiastical polity (church governance) Congregationalist polity Presbyterian polity Ganzibra Gay bishops Hierarchy of the Catholic Church List of Catholic bishops of the United States List of Metropolitans and Patriarchs of Moscow List of types of spiritual teachers List of Lutheran bishops and archbishops Lists of patriarchs, archbishops, and bishops Lord Bishop Order of precedence in the Catholic Church Shepherd in religion Spokesperson bishops in the Church of England Suffragan Bishop in Europe Notes References Citations Sources External links Methodist/Anglican Thoughts On Apostolic Succession by Gregory Neal Methodist Episcopacy: In Search of Holy Orders by Gregory Neal The Old Catholic Church, Province of the United States The Ecumenical Catholic Communion* The United Methodist Church: Council of Bishops Vatican Website with Canon Law of Catholic Church Episcophobia: The Fear of bishops Christian terminology Ecclesiastical titles Episcopacy in Eastern Orthodoxy Episcopacy in Oriental Orthodoxy Anglican episcopal offices Methodism Religious leadership roles Bishop
Bordeaux ( , ; Gascon ; ) is a city on the river Garonne in the Gironde department, southwestern France. A port city, it is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called "Bordelais (masculine) or "Bordelaises (feminine). The term "Bordelais" may also refer to the city and its surrounding region. The city of Bordeaux proper had a population of 259,809 in 2020 within its small municipal territory of , but together with its suburbs and exurbs the Bordeaux metropolitan area had a population of 1,376,375 that same year (Jan. 2020 census), the sixth-most populated in France after Paris, Lyon, Marseille, Lille, and Toulouse. Bordeaux and 27 suburban municipalities form the Bordeaux Metropolis, an indirectly elected metropolitan authority now in charge of wider metropolitan issues. The Bordeaux Metropolis, with a population of 819,604 at the January 2020 census, is the fifth most populated metropolitan council in France after those of Paris, Marseille, Lyon and Lille. Bordeaux is a world capital of wine: many castles and vineyards stand on the hillsides of the Gironde, and the city is home to the world's main wine fair, Vinexpo. Bordeaux is also one of the centers of gastronomy and business tourism for the organization of international congresses. It is a central and strategic hub for the aeronautics, military and space sector, home to international companies such as Dassault Aviation, Ariane Group, Safran and Thalès. The link with aviation dates back to 1910, the year the first airplane flew over the city. A crossroads of knowledge through university research, it is home to one of the only two megajoule lasers in the world, as well as a university population of more than 130,000 students within the Bordeaux Metropolis. Bordeaux is an international tourist destination for its architectural and cultural heritage with more than 350 historic monuments, making it, after Paris, the city with the most listed or registered monuments in France. The "Pearl of Aquitaine" has been voted European Destination of the year in a 2015 online poll. The metropolis has also received awards and rankings by international organizations such as in 1957, Bordeaux was awarded the Europe Prize for its efforts in transmitting the European ideal. In June 2007, the Port of the Moon in historic Bordeaux was inscribed on the UNESCO World Heritage List, for its outstanding architecture and urban ensemble and in recognition of Bordeaux's international importance over the last 2000 years. Bordeaux is also ranked as a Sufficiency city by the Globalization and World Cities Research Network. History 5th century BC to 11th century AD Around 300 BC, the region was the settlement of a Celtic tribe, the Bituriges Vivisci, named the town Burdigala, probably of Aquitanian origin. In 107 BC, the Battle of Burdigala was fought by the Romans who were defending the Allobroges, a Gallic tribe allied to Rome, and the Tigurini led by Divico. The Romans were defeated and their commander, the consul Lucius Cassius Longinus, was killed in battle. The city came under Roman rule around 60 BC, and it became an important commercial centre for tin and lead. During this period were built the amphitheatre and the monument Les Piliers de Tutelle. In 276, it was sacked by the Vandals. The Vandals attacked again in 409, followed by the Visigoths in 414, and the Franks in 498, and afterwards the city fell into a period of relative obscurity. 
In the late sixth century the city re-emerged as the seat of a county and an archdiocese within the Merovingian kingdom of the Franks, but royal Frankish power was never strong. The city started to play a regional role as a major urban center on the fringes of the newly founded Frankish Duchy of Vasconia. Around 585 Gallactorius was made Count of Bordeaux and fought the Basques. In 732, the city was plundered by the troops of Abd er Rahman who stormed the fortifications and overwhelmed the Aquitanian garrison. Duke Eudes mustered a force to engage the Umayyads, eventually engaging them in the Battle of the River Garonne somewhere near the river Dordogne. The battle had a high death toll, and although Eudes was defeated he had enough troops to engage in the Battle of Poitiers and so retain his grip on Aquitaine. In 737, following his father Eudes's death, the Aquitanian duke Hunald led a rebellion to which Charles responded by launching an expedition that captured Bordeaux. However, it was not retained for long, during the following year the Frankish commander clashed in battle with the Aquitanians but then left to take on hostile Burgundian authorities and magnates. In 745 Aquitaine faced another expedition where Charles's sons Pepin and Carloman challenged Hunald's power and defeated him. Hunald's son Waifer replaced him and confirmed Bordeaux as the capital city (along with Bourges in the north). During the last stage of the war against Aquitaine (760–768), it was one of Waifer's last important strongholds to fall to the troops of King Pepin the Short. Charlemagne built the fortress of Fronsac (Frontiacus, Franciacus) near Bordeaux on a hill across the border with the Basques (Wascones), where Basque commanders came and pledged their loyalty (769). In 778, Seguin (or Sihimin) was appointed count of Bordeaux, probably undermining the power of the Duke Lupo, and possibly leading to the Battle of Roncevaux Pass. In 814, Seguin was made Duke of Vasconia, but was deposed in 816 for failing to suppress a Basque rebellion. Under the Carolingians, sometimes the Counts of Bordeaux held the title concomitantly with that of Duke of Vasconia. They were to keep the Basques in check and defend the mouth of the Garonne from the Vikings when they appeared in c. 844. In Autumn 845, the Vikings were raiding Bordeaux and Saintes, count Seguin II marched on them but was captured and executed. Although the port of Bordeaux was a buzzing trade center, the stability and success of the city was threatened by Viking and Norman incursions and political instability. The restoration of the Ramnulfid Dukes of Aquitaine under William IV and his successors (known as the House of Poitiers) brought continuity of government. 12th century to 15th century, the English era From the 12th to the 15th century, Bordeaux flourished once more following the marriage of Eléonore, Duchess of Aquitaine and the last of the House of Poitiers, to Henry II Plantagenêt, Count of Anjou and the grandson of Henry I of England, who succeeded to the English crown months after their wedding, bringing into being the vast Angevin Empire, which stretched from the Pyrenees to Ireland. After granting a tax-free trade status with England, Henry was adored by the locals as they could be even more profitable in the wine trade, their main source of income, and the city benefited from imports of cloth and wheat. The belfry (Grosse Cloche) and city cathedral St-André were built, the latter in 1227, incorporating the artisan quarter of Saint-Paul. 
Under the terms of the Treaty of Brétigny it became briefly the capital of an independent state (1362–1372) under Edward, the Black Prince, but after the Battle of Castillon (1453) it was annexed by France. 15th century to 17th century In 1462, Bordeaux created a local parliament. Bordeaux adhered to the Fronde, being effectively annexed to the Kingdom of France only in 1653, when the army of Louis XIV entered the city. 18th century, the golden era The 18th century saw another golden age of Bordeaux. The Port of the Moon supplied the majority of Europe with coffee, cocoa, sugar, cotton and indigo, becoming France's busiest port and the second busiest port in the world after London. Many downtown buildings (about 5,000), including those on the quays, are from this period. Bordeaux was also a major trading centre for slaves. In total, the Bordeaux shipowners deported 150,000 Africans in some 500 expeditions. French Revolution: political disruption and loss of the most profitable colony At the beginning of the French Revolution (1789), many local revolutionaries were members of the Girondists. This Party represented the provincial bourgeoisie, favorable towards abolishing aristocracy privileges, but opposed to the Revolution's social dimension. In 1793, the Montagnards led by Robespierre and Marat came to power. Fearing a bourgeois misappropriation of the Revolution, they executed a great number of Girondists. During the purge, the local Montagnard Section renamed the city of Bordeaux "Commune-Franklin" (Franklin-municipality) in homage to Benjamin Franklin. At the same time, in 1791, a slave revolt broke out at Saint-Domingue (current Haiti), the most profitable of the French colonies. Three years later, the Montagnard Convention abolished slavery. In 1802, Napoleon revoked the manumission law but lost the war against the army of former slaves. In 1804, Haiti became independent. The loss of this "Pearl" of the West Indies generated the collapse of Bordeaux's port economy, which was dependent on the colonial trade and trade in slaves. Towards the end of the Peninsular War of 1814, the Duke of Wellington sent William Beresford with two divisions and seized Bordeaux, encountering little resistance. Bordeaux was largely anti-Bonapartist and the majority supported the Bourbons. The British troops were treated as liberators. 19th century, rebirth of the economy From the Bourbon Restoration, the economy of Bordeaux was rebuilt by traders and shipowners. They engaged to construct the first bridge of Bordeaux, and customs warehouses. The shipping traffic grew through the new African colonies. Georges-Eugène Haussmann, a longtime prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform the quasi-medieval Paris into a "modern" capital that would make France proud. Victor Hugo found the town so beautiful he said: "Take Versailles, add Antwerp, and you have Bordeaux". In 1870, at the beginning of the Franco-Prussian war against Prussia, the French government temporarily relocated to Bordeaux from Paris. That recurred during World War I and again very briefly during World War II, when it became clear that Paris would fall into German hands. 20th century During World War II, Bordeaux fell under German occupation. 
In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general, Aristides de Sousa Mendes, who illegally granted thousands of Portuguese visas, which were needed to pass the Spanish border, to refugees fleeing the German occupation. From 1941 to 1943, the Italian Royal Navy established BETASOM, a submarine base at Bordeaux. Italian submarines participated in the Battle of the Atlantic from that base, which was also a major base for German U-boats as headquarters of 12th U-boat Flotilla. The massive, reinforced concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural center for exhibitions. 21st century, listed as World heritage In 2007, 40% of the city surface area, located around the Port of the Moon, was listed as World heritage sites. Unesco inscribed Bordeaux as "an inhabited historic city, an outstanding urban and architectural ensemble, created in the age of the Enlightenment, whose values continued up to the first half of the 20th century, with more protected buildings than any other French city except Paris". Geography Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region. It is around southwest of Paris. The city is built on a bend of the river Garonne, and is divided into two parts: the right bank to the east and left bank in the west. Historically the left bank is more developed because when flowing outside the bend, the water makes a furrow of the required depth to allow the passing of merchant ships, which used to offload on this side of the river. But, today, the right bank is developing, including new urban projects. In Bordeaux, the Garonne River is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain. Climate Bordeaux's climate can be classified as oceanic (Köppen climate classification Cfb), bordering on a humid subtropical climate (Cfa). However, the Trewartha climate classification system classifies the city as solely humid subtropical, due to a recent rise in temperatures related - to some degree or another - to climate change and the city's urban heat island. The city enjoys cool to mild, wet winters, due to its relatively southerly latitude, and the prevalence of mild, westerly winds from the Atlantic. Its summers are warm and somewhat drier, although wet enough to avoid a Mediterranean classification. Frosts occur annually, but snowfall is quite infrequent, occurring for no more than 3-4 days a year. The summer of 2003 set a record with an average temperature of , while February 1956 was the coldest month on record with an average temperature of −2.00 °C at Bordeaux Mérignac-Airport. Economy Bordeaux is a major centre for business in France as it has the sixth largest metropolitan population in France. It serves as a major regional center for trade, administration, services and industry. Wine The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since. Bordeaux wine growing area has about of vineyards, 57 appellations, 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world. 
Included among the latter are the area's five premier cru (First Growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855: Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes, and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit verdot, Malbec, and, less commonly in recent years, Carménère. White Bordeaux is made from Sauvignon blanc, Sémillon, and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet, white, dessert wines such as Château d'Yquem. Because of a wine glut (wine lake) in the generic production, the price squeeze induced by an increasingly strong international competition, and vine pull schemes, the number of growers has recently dropped from 14,000 and the area under vine has also decreased significantly. In the meantime, the global demand for first growths and the most famous labels markedly increased and their prices skyrocketed. The Cité du Vin, a museum as well as a place of exhibitions, shows, movie projections and academic seminars on the theme of wine opened its doors in June 2016. Others The Laser Mégajoule will be one of the most powerful lasers in the world, allowing fundamental research and the development of the laser and plasma technologies. Some 20,000 people work for the aeronautic industry in Bordeaux. The city has some of the biggest companies including Dassault, EADS Sogerma, Snecma, Thales, SNPE, and others. The Dassault Falcon private jets are built there as well as the military aircraft Rafale and Mirage 2000, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 SLBM missile. Tourism, especially wine tourism, is a major industry. Globelink.co.uk mentioned Bordeaux as the best tourist destination in Europe in 2015. Gourmet Touring is a tourism company operating in the Bordeaux wine region. Access to the port from the Atlantic is via the Gironde estuary. Almost nine million tonnes of goods arrive and leave each year. Major companies This list includes indigenous Bordeaux-based companies and companies that have major presence in Bordeaux, but are not necessarily headquartered there. Arena Groupe Bernard Groupe Castel Cdiscount Dassault Jock Marie Brizard McKesson Corporation Oxbow Ricard Sanofi Aventis Smurfit Kappa Snecma Solectron Thales Group Population In January 2020, there were 259,809 inhabitants in the city proper (commune) of Bordeaux. The commune (including Caudéran which was annexed by Bordeaux in 1965) had its largest population of 284,494 at the 1954 census. The majority of the population is French, but there are sizable groups of Italians, Spaniards (Up to 20% of the Bordeaux population claim some degree of Spanish heritage), Portuguese, Turks, Germans. The built-up area has grown for more than a century beyond the municipal borders of Bordeaux due to the small size of the commune () and urban sprawl, so that by January 2020 there were 1,376,375 people living in the overall metropolitan area (aire d'attraction) of Bordeaux, only a fifth of whom lived in the city proper. Politics Municipal administration The Mayor of the city is the environmentalist Pierre Hurmic. Bordeaux is the capital of five cantons and the Prefecture of the Gironde and Aquitaine. The town is divided into three districts, the first three of Gironde. 
The headquarters of the Urban Community of Bordeaux is located in the Mériadeck neighbourhood, and the city is the seat of the Chamber of Commerce and Industry that bears its name. The number of inhabitants of Bordeaux is greater than 250,000 and less than 299,999, so the number of municipal councillors is 65. They are divided according to the following composition: Mayors of Bordeaux Since the Liberation (1944), there have been six mayors of Bordeaux: The RPR was renamed the UMP in 2002, which was in turn renamed LR in 2015. Elections Presidential elections of 2007 At the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% to Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen, who recorded 5.42%. None of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57%. After these came Jean-Marie Le Pen with 10.44%; none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round. Parliamentary elections of 2007 In the parliamentary elections of 2007, the left won eight constituencies against only three for the right. After the partial elections of 2008, the eighth constituency of the Gironde also switched to the left, bringing the count to nine. In Bordeaux, the left held a majority for the first time in its history, winning two of the three constituencies. In the first constituency of the Gironde, the outgoing UMP MP Chantal Bourragué was well ahead with 44.81% against 25.39% for the Socialist candidate Beatrice Desaigues. In the second round, Chantal Bourragué was re-elected with 54.45% against 45.55% for her Socialist opponent. In the second constituency of the Gironde, the UMP mayor and newly appointed Minister of Ecology, Energy, Sustainable Development and the Sea, Alain Juppé, faced the PS general councillor Michèle Delaunay. In the first round, Alain Juppé was well ahead with 43.73% against 31.36% for Michèle Delaunay. In the second round, it was Michèle Delaunay who won the election with 50.93% of the votes against 49.07% for Alain Juppé, a margin of only 670 votes. The defeat in the so-called "mayor's constituency" showed that Bordeaux was swinging increasingly to the left. Finally, in the third constituency of the Gironde, Noël Mamère was well ahead with 39.82% against 28.42% for the UMP candidate Elizabeth Vine. In the second round, Noël Mamère was re-elected with 62.82% against 37.18% for his right-wing rival. Municipal elections of 2008 The 2008 municipal elections saw a clash between the mayor of Bordeaux, Alain Juppé, and the Socialist President of the Regional Council of Aquitaine, Alain Rousset. The PS had fielded a heavyweight in the Gironde and had high hopes for this election after the victories of Ségolène Royal and Michèle Delaunay in 2007. However, after a rather exciting campaign, it was Alain Juppé who was comfortably elected in the first round with 56.62%, far ahead of Alain Rousset, who managed 34.14%. 
At present, of Bordeaux's eight cantons, five are held by the PS and three by the UMP, with the left steadily eating into the right's numbers. European elections of 2009 In the European elections of 2009, Bordeaux voters largely voted for the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for PS candidate Kader Arif. The Europe Ecology candidate José Bové came second with 22.34%. None of the other candidates reached the 10% mark. As in previous years, the 2009 European elections were held across eight large constituencies. Bordeaux is located in the "Southwest" constituency, where the results were as follows: UMP candidate Dominique Baudis: 26.89%. His party gained four seats. PS candidate Kader Arif: 17.79%, gaining two seats in the European Parliament. Europe Ecology candidate Bové: 15.83%, obtaining two seats. MoDem candidate Robert Rochefort: 8.61%, winning a seat. Left Front candidate Jean-Luc Mélenchon: 8.16%, gaining the last seat. At the regional elections in 2010, the Socialist incumbent president Alain Rousset won the first round with 35.19% in Bordeaux, although this score was lower than his results at the Gironde and Aquitaine levels. Xavier Darcos, Minister of Labour, followed with 28.40% of the votes, scoring above the regional and departmental average. Then came the Green candidate Monique De Marco with 13.40%, followed by the MoDem candidate and deputy for Pyrénées-Atlantiques Jean Lassalle, who registered a low 6.78% in the city while still qualifying for the second round across Aquitaine as a whole, closely followed by Jacques Colombier, candidate of the National Front, who gained 6.48%. Finally came the Left Front candidate Gérard Boulanger with 5.64%; no other candidate passed the 5% mark. In the second round, Alain Rousset won by a landslide, his total rising to 55.83%. Although Xavier Darcos clearly lost the election, he nevertheless achieved a score above the regional and departmental average, obtaining 33.40%. Jean Lassalle, who had qualified for the second round, passed the 10% mark, totaling 10.77%. The ballot was marked by abstention of 55.51% in the first round and 53.59% in the second round. Only candidates obtaining more than 5% are listed. 2017 elections Bordeaux voted for Emmanuel Macron in the presidential election. In the 2017 parliamentary election, La République En Marche! won most of the constituencies in Bordeaux. 2019 European elections Bordeaux voted in the 2019 European Parliament election in France. Municipal elections of 2020 After 73 years of right-of-centre rule, the ecologist Pierre Hurmic (EELV) came in ahead of Nicolas Florian (LR/LaREM). Parliamentary representation The city area is represented by the following constituencies: Gironde's 1st, Gironde's 2nd, Gironde's 3rd, Gironde's 4th, Gironde's 5th, Gironde's 6th, Gironde's 7th. Education University In antiquity, a first university was created by the Romans in 286. The city was an important administrative centre, and the new university had to train administrators. Only rhetoric and grammar were taught. Ausonius and Sulpicius Severus were two of the teachers. In 1441, when Bordeaux was an English town, Pope Eugene IV created a university at the request of the archbishop Pey Berland. In 1793, during the French Revolution, the National Convention abolished the university and replaced it with the École centrale in 1796, which in Bordeaux was housed in the former buildings of the college of Guyenne. In 1808, the university was re-established under Napoleon. 
Bordeaux accommodates approximately 70,000 students on one of the largest campuses of Europe (235 ha). Schools Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs. Engineering schools: Arts et Métiers ParisTech, graduate school of industrial and mechanical engineering ESME-Sudria, graduate school of engineering École nationale supérieure d'électronique, informatique, télécommunications, mathématique et mécanique de Bordeaux (ENSEIRB-MATMECA) École supérieure de technologie des biomolécules de Bordeaux École nationale supérieure des sciences agronomiques de Bordeaux Aquitaine École nationale supérieure de chimie et physique de Bordeaux École pour l'informatique et les nouvelles technologies Institut des sciences et techniques des aliments de Bordeaux Institut de cognitique École supérieure d'informatique École privée des sciences informatiques Business and management schools: The Bordeaux MBA (International College of Bordeaux) IUT Techniques de Commercialisation of Bordeaux (business school) INSEEC Business School (Institut des hautes études économiques et commerciales) KEDGE Business School (former BEM – Bordeaux Management School) Vatel Bordeaux International Business School E-Artsup Institut supérieur européen de gestion group Institut supérieur européen de formation par l'action Other: École nationale de la magistrature (National school for the judiciary) (EFAP) (CNAM) (law school) Weekend education The , a part-time Japanese supplementary school, is held in the Salle de L'Athénée Municipal in Bordeaux. Main sights Heritage and architecture Bordeaux is classified "City of Art and History". The city is home to 362 monuments historiques (only Paris has more in France) with some buildings dating back to Roman times. Bordeaux, Port of the Moon, has been inscribed on UNESCO World Heritage List as "an outstanding urban and architectural ensemble". Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and metropolitan big scale projects, with the team Gabriel father and son, architects for King Louis XV, under the supervision of two intendants (Governors), first Nicolas-François Dupré de Saint-Maur then the Marquis de Tourny. Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France. The organ in Saint-Louis-des-Chartrons is registered on the French monuments historiques. Buildings Main sights include: Place de la Bourse (1735–1755), designed by the Royal architect Jacques Gabriel as landscape for an equestrian statue of Louis XV, now replaced by the Fountain of the Three Graces. Grand Théâtre (1780), a large neoclassical theater built in the 18th century. Allées de Tourny Cours de l'Intendance Place du Chapelet Place du Parlement Place des Quinconces, the largest square in France. Monument aux Girondins Place Saint-Pierre Pont de pierre (1822) Bordeaux Cathedral (Saint André), consecrated by Pope Urban II in 1096 and dedicated to the Apostle Saint Andrew. Of the original Romanesque edifice only a wall in the nave remains. The Royal Door is from the early 13th century, while the rest of the construction is mostly from the 14th and 15th centuries. Tour Pey-Berland (1440–1450), a massive, quadrangular Gothic tower annexed to the cathedral. 
Sainte-Croix church: This church, dedicated to the Holy Cross, stands on the site of a seventh-century abbey destroyed by the Saracens. Rebuilt under the Carolingians, it was again destroyed by the Normans in 845 and 864. The present building was erected and was built in the late 11th and early 12th centuries. The façade is in Romanesque style. The Gothic Saint Michel Basilica, constructed between the end of the 14th century and the 16th century. Basilica of Saint Severinus, the oldest church in Bordeaux, built in the early sixth century on the site of a palaeo-Christian necropolis. It has an 11th-century portico, while the apse and transept are from the 12th. The 13th-century nave has chapels from the 11th and the 14th centuries. The ancient crypt houses tombs of the Merovingian family. Église Saint-Pierre, Gothic church Église Saint-Éloi, Gothic church Église Saint-Bruno, baroque church decorated with frescoes Église Notre-Dame, baroque church Église Saint-Paul-Saint-François-Xavier, baroque church Palais Rohan, once the archbishop's residence, now city hall , the remains of a late second-century Roman amphitheatre Porte Cailhau, a medieval gatehouse in the old city walls. La Grosse Cloche (15th century), the second remaining gate in the medieval walls. It was the belfry of the old Town Hall. It consists of two circular towers and a central bell tower housing a bell weighing . The clock is from 1759. Grande Synagogue, completed 1882 Rue Sainte-Catherine, the longest pedestrian street in France Darwin ecosystem, alternative place into former military barracks The BETASOM submarine base Contemporary architecture Cité Frugès, district of Pessac, built by Le Corbusier, 1924–1926, listed as UNESCO heritage Fire Station, la Benauge, Claude Ferret/Adrien Courtois/Yves Salier, 1951–1954 Mériadeck district, 1960-70's Court of first instance, Richard Rogers, 1998 CTBA, wood and furniture research center, A. Loisier, 1998 Hangar 14 on the Quai des Chartrons, 1999 The Management Science faculty on the Bastide, Anne Lacaton/Jean-Philippe Vassal, 2006 The Jardin botanique de la Bastide, Catherine Mosbach/Françoise Hélène Jourda/Pascal Convert, 2007 The Nuyens School complex on the Bastide, Yves Ballot/Nathalie Franck, 2007 Seeko'o Hotel on the Quai de Bacalan, King Kong architects, 2007 Matmut Atlantique stadium, Herzog & de Meuron, 2015 Cité du Vin, XTU architects, Anouk Legendre & Nicolas Desmazières, 2016 MECA, Maison de l'Économie Créative et de la culture de la Région Nouvelle-Aquitaine, Bjarke Ingels, 2019 Museums Musée des Beaux-Arts (Fine arts museum), one of the finest painting galleries in France with paintings by painter such as Tiziano, Veronese, Rubens, Van Dyck, Frans Hals, Claude, Chardin, Delacroix, Renoir, Seurat, Redon, Matisse and Picasso. Musée d'Aquitaine (archeological and history museum) Musée du Vin et du Négoce (museum of the wine trade) (museum of decorative arts and design) Musée d'Histoire Naturelle (natural history museum) Musée Mer Marine (Sea and Navy museum) Cité du Vin CAPC musée d'art contemporain de Bordeaux (modern art museum) Musée national des douanes (history of French customs) Bordeaux Patrimoine Mondial (architectural and heritage interpretation centre) Musée d'ethnologie (ethnology museum) Institut culturel Bernard Magrez, modern and streetart museum into an 18th-century mansion Cervantez Institute (into the house of Goya) Cap Sciences Centre Jean Moulin Memory of slavery Slavery was part of a growing drive for the city. 
Firstly, during the 18th and 19th centuries, Bordeaux was an important slave port, which saw some 500 slave expeditions that caused the deportation of 150,000 Africans by Bordeaux shipowners. Secondly, even though the "Triangular trade" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, which accounted for the other 95%, concerned colonial goods produced by slaves (sugar, coffee, cocoa). Thirdly, in the same period, a major migration of Aquitanians took place to the Caribbean colonies, with Saint-Domingue (now Haiti) being the most popular destination. 40% of the white population of the island came from Aquitaine. They prospered on plantation incomes until the first slave revolts, a movement that culminated in the final abolition of slavery in France in 1848. A statue of Modeste Testas, an Ethiopian woman who was enslaved by the Bordeaux-based Testas brothers, was unveiled in 2019. She was trafficked by them from West Africa to Philadelphia (where one of the brothers coerced her into bearing two children by him) and was ultimately freed, living out her life in Haiti. The bronze sculpture was created by the Haitian artist Woodly Caymitte. A number of traces and memorial sites are visible in the city. Moreover, in May 2009, the Museum of Aquitaine opened spaces dedicated to "Bordeaux in the 18th century, trans-Atlantic trading and slavery". This display, richly illustrated with original documents, helps to disseminate the state of knowledge on the question, presenting above all the facts and their chronology. The region of Bordeaux was also home to several prominent abolitionists, such as Montesquieu, Laffon de Ladébat and Elisée Reclus. Others, such as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos, were members of the Society of the Friends of the Blacks. Parks and gardens Jardin public de Bordeaux, which contains the Jardin botanique de Bordeaux Jardin botanique de la Bastide Parc bordelais Parc aux Angéliques Jardin des Lumières Parc Rivière Parc Floral Pont Jacques Chaban-Delmas Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the River Garonne. The central lift span is , weighs 4,600 tons and can be lifted vertically up to to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013. The bridge was named after the late Jacques Chaban-Delmas, a former Prime Minister and Mayor of Bordeaux. Shopping Bordeaux has many shopping options. In the heart of Bordeaux is Rue Sainte-Catherine. This pedestrian-only shopping street has of shops, restaurants and cafés; it is also one of the longest shopping streets in Europe. Rue Sainte-Catherine starts at Place de la Victoire and ends at Place de la Comédie by the Grand Théâtre. The shops become progressively more upmarket as one moves towards Place de la Comédie, and the nearby Cours de l'Intendance is where one finds the more exclusive shops and boutiques. Culture Bordeaux was also the first city in France to create, in the 1980s, an architecture exhibition and research centre, Arc en rêve. Bordeaux offers a large number of cinemas and theatres, and is home to the Opéra national de Bordeaux. There are many music venues of varying capacity. The city also offers several festivals throughout the year. 
In October 2021, Bordeaux was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award along with Copenhagen, Dublin, Florence, Ljubljana, Palma de Mallorca and Valencia. Transport Road Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, with Lyon by the A89, with Toulouse by the A62, and with Spain by the A63. There is a ring road called the "Rocade" which is often very busy. Another ring road is under consideration. Bordeaux has five road bridges that cross the Garonne, the Pont de pierre built in the 1820s and three modern bridges built after 1960: the Pont Saint Jean, just south of the Pont de pierre (both located downtown), the Pont d'Aquitaine, a suspension bridge downstream from downtown, and the Pont François Mitterrand, located upstream of downtown. These two bridges are part of the ring-road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height in closed position comparable to that of Pont de pierre, and to the Pont d'Aquitaine when open. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well. Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018. Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010. Rail The main railway station, Gare de Bordeaux Saint-Jean, near the center of the city, has 12 million passengers a year. It is served by the French national (SNCF) railway's high speed train, the TGV, that gets to Paris in two hours, with connections to major European centers such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. A regular train service is provided to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne. Historically the train line used to terminate at a station on the right bank of the river Garonne near the Pont de Pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s, by Gustave Eiffel, to bring trains across the river direct into Gare de Bordeaux Saint-Jean. The old station was later converted and in 2010 comprised a cinema and restaurants. The two-track Eiffel bridge with a speed limit of became a bottleneck and a new bridge was built, opening in 2009. The new bridge has four tracks and allows trains to pass at . During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, with possibly a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use. The bridge remains intact, but unused and without any means of access. 
Since July 2017, the LGV Sud Europe Atlantique is fully operational and makes Bordeaux city 2h04 from Paris. Air Bordeaux is served by Bordeaux–Mérignac Airport, located from the city centre in the suburban city of Mérignac. Trams, buses and boats Bordeaux has an important public transport system called Transports Bordeaux Métropole (TBM). This company is run by the Keolis group. The network consists of: Four tram lines (A, B, C and D) 75 bus routes, all connected to the tramway network (from 1 to 96) 13 night bus routes (from 1 to 16) An electric bus shuttle in the city centre A boat shuttle on the Garonne river This network is operated from 5 am to 2 am. There had been several plans for a subway network to be set up, but they stalled for both geological and financial reasons. Work on the Tramway de Bordeaux system was started in the autumn of 2000, and services started in December 2003 connecting Bordeaux with its suburban areas. The tram system uses Alstom APS a form of ground-level power supply technology developed by French company Alstom and designed to preserve the aesthetic environment by eliminating overhead cables in the historic city. Conventional overhead cables are used outside the city. The system was controversial for its considerable cost of installation, maintenance and also for the numerous initial technical problems that paralysed the network. Many streets and squares along the tramway route became pedestrian areas, with limited access for cars. The planned Bordeaux tramway system is to link with the airport to the city centre towards the end of 2019. Taxis There are more than 400 taxicabs in Bordeaux. Public transportation statistics The average amount of time people spend commuting with public transit in Bordeaux, for example to and from work, on a weekday is 51 min. 12.% of public transit riders, ride for more than 2 hours every day. The average amount of time people wait at a stop or station for public transit is 13 min, while 15.5% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip with public transit is , while 8% travel for over in a single direction. Sport The 41,458-capacity Nouveau Stade de Bordeaux is the largest stadium in Bordeaux. The stadium was opened in 2015 and replaced the Stade Chaban-Delmas, which was a venue for the FIFA World Cup in 1938 and 1998, as well as the 2007 Rugby World Cup. In the 1938 FIFA World Cup, it hosted a violent quarter-final known as the Battle of Bordeaux. The ground was formerly known as the Stade du Parc Lescure until 2001, when it was renamed in honour of the city's long-time mayor, Jacques Chaban-Delmas. There are two major sport teams in Bordeaux, Girondins de Bordeaux is the football team, playing in Ligue 2, the second tier of French football. Union Bordeaux Bègles is a rugby team in the Top 14 in the Ligue Nationale de Rugby. Skateboarding, rollerblading, and BMX biking are activities enjoyed by many young inhabitants of the city. Bordeaux is home to a quay which runs along the Garonne river. On the quay there is a skate-park divided into three sections. One section is for Vert tricks, one for street style tricks, and one for little action sports athletes with easier features and softer materials. The skate-park is very well maintained by the municipality. Bordeaux is also the home to one of the strongest cricket teams in France and are champions of the South West League. 
There is a wooden velodrome, Vélodrome du Lac, in Bordeaux which hosts international cycling competition in the form of UCI Track Cycling World Cup events. The 2015 Trophee Eric Bompard was in Bordeaux. But the Free Skate was cancelled in all of the divisions due to the Paris and aftermath. The Short Program occurred hours before the bombing. French skaters Chafik Besseghier (68.36) in tenth place, Romain Ponsart (62.86) in 11th. Mae-Berenice-Meite (46.82) in 11th and Laurine Lecavelier (46.53) in 12th. Vanessa James/Morgan Cipres (65.75) in second. Between 1951 and 1955, an annual Formula 1 motor race was held on a 2.5-kilometre circuit which looped around the Esplanade des Quinconces and along the waterfront, attracting drivers such as Juan Manuel Fangio, Stirling Moss, Jean Behra and Maurice Trintignant. Notable people Ausonius (310–395), Roman poet and teacher of rhetoric Jean Alaux (1786–1864), painter Bertrand Andrieu (1761–1822), engraver Jean Anouilh (1910–1987), dramatist Lucien Arman (1811–1873), shipbuilder and politician Yvonne Arnaud (1892–1958), pianist, singer and actress Xavier Arnozan (1852–1928), physician Floyd Ayité (born 1988), Togolese footballer Jonathan Ayité (born 1985), Togolese footballer Christine Barbe, winemaker Jean-Baptiste Barrière (1707-1747), cellist, composer Gérard Bayo (born 1936), writer and poet, François Bigot (1703–1778), last "Intendant" of New France Arnaud Binard (born 1971), actor and producer Rosa Bonheur (1822–1899), animal painter and sculptor Grégory Bourdy (born 1982), golfer Samuel Boutal (born 1969), footballer Edmond de Caillou (died c. February 1316) Gascon knight fighting in Scotland Gérald Caussé, Presiding Bishop of The Church of Jesus Christ of Latter Day Saints René Clément (1913–1996), actor, director, writer Jean-René Cruchet (1875–1959), pathologist Boris Cyrulnik (born 1937), psychiatrist and psychoanalyst Damia (1899–1978), singer and actress Étienne Noël Damilaville (1723–1768), encyclopédiste Lili Damita (1901–1994), actress Frédéric Daquin, (born 1978), footballer Danielle Darrieux (1917–2017), actress Bernard Delvaille (1931–2006), poet, essayist David Diop (1927–1960), poet Jean-Francois Domergue, footballer Eleanor of Aquitaine (1122–1204), duchess of Aquitaine, queen of France and queen of England Jacques Ellul (1912–1994), sociologist, theologian, Christian anarchist Jean Eustache (1938-1981), Nouvelle Vague director Marie Fel (1713–1794), opera singer Jean-Luc Fournet (1965), papyrologist Pierre-Jean Garat (1762–1823), singer Armand Gensonné (1758–1793), politician Sébastien Gervais (born 1976), professional footballer Stephen Girard (1750–1831), merchant, banker, and Philadelphia philanthropist Jérôme Gnako (born 1968), footballer Randolphe Gohi (born 1969), former professional footballer Eugène Goossens (1867–1958), conductor, violinist Anna Hamilton (1864–1935), doctor, superintendent of the Protestant Hospital at Bordeaux (1901–1934) Adolphe Jacquies (c. 
1798–1860), Canadian shopkeeper, printer, trade unionist, and newspaper publisher Pierre Lacour (1745–1814), painter Léopold Lafleurance (1865–1953), flautist Joseph Henri Joachim Lainé (1767–1835), statesman Sainte Jeanne de Lestonnac (1556–1640), Roman Catholic saint and foundress of the Sisters of the Company of Mary, Our Lady Christophe Lestrade (born 1969), former professional footballer André Lhote (1885–1962), cubist painter Jeanne Henriette Louis, (1938), professor of North American civilization Jean-Baptiste Lynch (1749–1835), politician Lucenzo (born 1983), singer Jean-Jacques Magendie (1766–1835), officer François Magendie (1783–1855), physiologist Bruno Marie-Rose (born 1965), athlete (sprinter) Albert Marquet, (1875–1947), painter François Mauriac (1885–1970), writer, Nobel laureate 1952 Benjamin Millepied (born 1977), dancer and choreographer Édouard Molinaro (1928–2013), film director, screenwriter Pierre Molinier (1900–1976), painter, photographer Michel de Montaigne (1533–1592), essayist Montesquieu (1689–1755), man of letters and political philosopher Olivier Mony (1966–), writer and literary critic Étienne Marie Antoine Champion de Nansouty (1768–1815), general Elie Okobo, basketball player Pierre Palmade (born 1968), actor and comedian St. Paulinus of Nola (354–431), educator, religious figure Émile Péreire (1800–1875), banker and industrialist Sophie Pétronin (born 1945), aid worker and humanitarian Albert Pitres (1848–1928), neurologist Hippolyte Pradelles (1824–1913), naturalist painter Georges Antoine Pons Rayet (1839–1906), astronomer, discoverer of the Wolf-Rayet stars, & founder of the Bordeaux Observatory Odilon Redon (1840–1916), painter Richard II of England (1367–1400), king Pierre Rode (1774–1830), violinist Olinde Rodrigues (1795–1851), mathematician, banker and social reformer Marie-Sabine Roger (born 1957), writer Eugenie Santa Coloma Sourget (1827-1895), composer, pianist and singer Bernard Sarrette (1765–1858), conductor and music pedagogue Jean-Jacques Sempé (1932–2022), cartoonist Florent Serra (born 1981), tennis player Alfred Smith, (1854–1932), painter Soko (born 1985), singer Philippe Sollers, (born 1936), writer Wilfried Tekovi, (born 1989), Togolese footballer Elie Vinet (1509–1587), historian and humanist of the Renaissance International relationships Twin towns – sister cities Bordeaux is twinned with: Ashdod, Israel, since 1984 Bilbao, Spain Baku, Azerbaijan, since 1985 Bristol, United Kingdom, since 1947 Casablanca, Morocco, since 1988 Fukuoka, Japan, since 1982 Kraków, Poland, since 1993 Lima, Peru, since 1957 Los Angeles, California United States, since 1968 Madrid, Spain, since 1984 Munich, Germany, since 1964 Oran, Algeria, since 2003 Porto, Portugal, since 1978 Quebec City, Quebec Canada, since 1962 Ramallah, Palestine Riga, Latvia Saint Petersburg, Russia, since 1993 Wuhan, China, since 1998 Partnerships Samsun, Turkey, since 2010 See also Atlantic history Bordeaux wine regions Bordeaux–Paris, a formerly professional road bicycle racing annual event The Burdigalian Age of the Miocene Epoch is named for Bordeaux Canelé, a local pastry Communes of the Gironde department Dogue de Bordeaux, a breed of dog originally bred for dog fighting French wine Girondins History of slavery List of mayors of Bordeaux Operation Frankton, a British Combined Operations raid on shipping in the harbour at Bordeaux, in December 1942, during World War II Roman Catholic Archdiocese of Bordeaux Explanatory notes References Further reading External links 
Bordeaux: the world capital of wine – Official French website (in English) Cities in France Cities in Nouvelle-Aquitaine Communes of Gironde Gallia Aquitania Gironde Guyenne Port cities and towns on the French Atlantic coast Prefectures in France World Heritage Sites in France
internationally known as Bust-A-Move, is a 1994 tile-matching puzzle arcade game developed and published by Taito. It is based on the 1986 arcade game Bubble Bobble, featuring characters and themes from that game. Its characteristically cute Japanese animation and music, along with its play mechanics and level designs, made it successful as an arcade title and spawned several sequels and ports to home gaming systems. Gameplay At the start of each round, the rectangular playing arena contains a prearranged pattern of colored "bubbles". At the bottom of the screen, the player controls a device called a "pointer", which aims and fires bubbles up the screen. The color of bubbles fired is randomly generated and chosen from the colors of bubbles still left on the screen. The objective of the game is to clear all the bubbles from the arena without any bubble crossing the bottom line. Bubbles will fire automatically if the player remains idle. After clearing the arena, the next round begins with a new pattern of bubbles to clear. The game consists of 32 levels. The fired bubbles travel in straight lines (possibly bouncing off the sidewalls of the arena), stopping when they touch other bubbles or reach the top of the arena. If a bubble touches identically-colored bubbles, forming a group of three or more, those bubbles—as well as any bubbles hanging from them—are removed from the field of play, and points are awarded. After every few shots, the "ceiling" of the playing arena drops downwards slightly, along with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays and if they cross the line at the bottom then the game is over. Release Two different versions of the original game were released. Puzzle Bobble was originally released in Japan only in June 1994 by Taito, running on Taito B System hardware (with the preliminary title "Bubble Buster"). Then, 6 months later in December, the international Neo Geo version of Puzzle Bobble was released. It was almost identical aside from being in stereo and having some different sound effects and translated text. Reception In Japan, Game Machine listed the Neo Geo version of Puzzle Bobble on their February 15, 1995 issue as being the second most-popular arcade game at the time. It went on to become Japan's second highest-grossing arcade printed circuit board (PCB) software of 1995, below Virtua Fighter 2. In North America, RePlay reported the Neo Geo version of Puzzle Bobble to be the fourth most-popular arcade game in February 1995. Reviewing the Super NES version, Mike Weigand of Electronic Gaming Monthly called it "a thoroughly enjoyable and incredibly addicting puzzle game". He considered the two player mode the highlight, but also said that the one player mode provides a solid challenge. GamePro gave it a generally negative review, saying it "starts out fun but ultimately lacks intricacy and longevity." They elaborated that in one player mode all the levels feel the same, and that two player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. Next Generation reviewed the SNES version of the game, and stated that "It's very simple, using only the control pad and one button to fire, and it's addictive as hell." 
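The two rules described in the Gameplay section above, popping any same-colour group of three or more and then dropping bubbles left without a path to the ceiling, amount to a pair of flood fills. The sketch below is a hypothetical illustration rather than Taito's actual code: the grid representation, neighbour offsets and function names are assumptions made for the example, and the fired bubble is assumed to have already been added to the board at the cell where it stuck.

```python
from collections import deque

def neighbours(cell):
    """Neighbours on a staggered ('offset') hex-style grid (an assumed layout)."""
    row, col = cell
    shift = -1 if row % 2 == 0 else 1   # assumption: even rows are shifted left
    return [(row, col - 1), (row, col + 1),
            (row - 1, col), (row - 1, col + shift),
            (row + 1, col), (row + 1, col + shift)]

def connected(board, start, same_colour_only):
    """Flood-fill from `start`, optionally restricted to bubbles of one colour."""
    colour = board[start]
    seen, queue = {start}, deque([start])
    while queue:
        cell = queue.popleft()
        for nxt in neighbours(cell):
            if nxt in board and nxt not in seen:
                if not same_colour_only or board[nxt] == colour:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def resolve_shot(board, landed):
    """Apply the pop-and-drop rules after a fired bubble sticks at `landed`."""
    group = connected(board, landed, same_colour_only=True)
    if len(group) < 3:
        return board, set(), set()      # fewer than three of a colour: nothing pops
    popped = group
    remaining = {c: col for c, col in board.items() if c not in popped}
    # Any bubble without a path to row 0 (the ceiling) now falls.
    anchored = set()
    for cell in [c for c in remaining if c[0] == 0]:
        anchored |= connected(remaining, cell, same_colour_only=False)
    dropped = set(remaining) - anchored
    survivors = {c: col for c, col in remaining.items() if c in anchored}
    return survivors, popped, dropped
```

For instance, on a board such as {(0, 0): "red", (0, 1): "red", (1, 0): "red", (2, 0): "blue"}, calling resolve_shot(board, (1, 0)) pops the three red bubbles and drops the blue one, since nothing connects it to the ceiling any more. The ceiling-lowering rule mentioned above would simply re-index row 0 downward after a fixed number of shots.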
A reviewer for Next Generation, while questioning the continued viability of the action puzzle genre, admitted that the game is "very simple and very addictive". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. GamePros brief review of the 3DO version commented, "The move-and-shoot controls are very responsive and the simple visuals and music are well done. This is one puzzler that isn't a bust." Edge magazine ranked the game 73rd on their 100 Best Video Games in 2007. IGN rated the SNES version 54th in its Top 100 SNES Games. Legacy The simplicity of the concept has led to many clones, both commercial and otherwise. 1996's Snood replaced the bubbles with small creatures and has been successful in its own right. Worms Blast was Team 17's take on the concept. On September 24, 2000, British game publisher Empire Interactive released a similar game, Spin Jam, for the original PlayStation console. Mobile clones include Bubble Witch Saga and Bubble Shooter. Frozen Bubble is a free software clone. For Bubble Bobble's 35th anniversary, Taito launched Puzzle Bobble VR: Vacation Odyssey on the Oculus Quest and Oculus Quest 2, later coming to PlayStation 4 and PlayStation 5 as Puzzle Bobble 3D: Vacation Odyssey in 2021. Puzzle Bobble Everybubble! On August 25, 2022, a new game titled Puzzle Bobble Everybubble! was announced for Nintendo Switch. It was released on May 23, 2023. The game also comes with an extra mode called "Puzzle Bobble vs. Space Invaders", where up to four players can work together to erase bubble-encased invaders before they reach the player while only being able to aim straight up. Notes References External links Taito Corporation page: arcade, PB (mobile) 1994 video games 3DO Interactive Multiplayer games ACA Neo Geo games Arcade video games Square Enix franchises Bubble Bobble Head-to-head arcade video games IOS games Kinesoft games Microcabin games Mobile games Multiplayer and single-player video games Neo Geo games Neo Geo CD games PlayStation 4 games Puzzle video games Game Gear games SNK games Split-screen multiplayer games Super Nintendo Entertainment System games Taito arcade games Taito B System games Video games scored by Tamayo Kawamoto Windows games WonderSwan games Xbox 360 Live Arcade games Xbox One games Video games developed in Japan Hamster Corporation games
A bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. Bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have complex internal and external structures. They are lightweight yet strong and hard and serve multiple functions. Bone tissue (osseous tissue), which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralisation of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. In the human body at birth, there are approximately 300 bones present; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear. The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. In anatomical terminology, including the Terminologia Anatomica international standard, the word for a bone is os (for example, os breve, os longum, os sesamoideum). Structure Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%) which are intricately woven and endlessly remodeled by a group of specialized bone cells. Their unique composition and design allows bones to be relatively hard and strong, while remaining lightweight. Bone matrix is 90 to 95% composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as bone mineral, a form of calcium apatite. It is the mineralization that gives bones rigidity. Bone is actively constructed and remodeled throughout life by special bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns, known as cortical and cancellous bone, each with a different appearance and characteristics. Cortex The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions—to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system. 
Each column is multiple layers of osteoblasts and osteocytes around a central canal called the haversian canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon. Trabeculae Cancellous bone, or spongy bone, also known as trabecular bone, is the internal tissue of the skeletal bone and is an open cell porous network that follows the material properties of biofoams. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and it is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints, and in the interior of vertebrae. Cancellous bone is highly vascular and often contains red bone marrow where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone. The words cancellous and trabecular refer to the tiny lattice-shaped units (trabeculae) that form the tissue. It was first illustrated accurately in the engravings of Crisóstomo Martinez. Marrow Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/ yellow fraction called marrow adipose tissue (MAT) increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones. Vascular supply Bone receives about 10% of cardiac output. Blood enters the endosteum, flows through the marrow, and exits through small vessels in the cortex. In humans, blood oxygen tension in bone marrow is about 6.6%, compared to about 12% in arterial blood, and 5% in venous and capillary blood. Cells Bone is metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes. 
Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets. Osteoblast Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteon seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of a newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, to act on the bone itself. The osteoblast creates and repairs new bone by actually building around itself. First, the osteoblast puts up collagen fibers. These collagen fibers are used as a framework for the osteoblasts' work. The osteoblast then deposits calcium phosphate which is hardened by hydroxide and bicarbonate ions. The brand-new bone created by the osteoblast is called osteoid. Once the osteoblast is finished working it is actually trapped inside the bone once it hardens. When the osteoblast becomes trapped, it becomes known as an osteocyte. Other osteoblasts remain on the top of the new bone and are used to protect the underlying bone, these become known as lining cells. Osteocyte Osteocytes are cells of mesenchymal origin and originate from osteoblasts that have migrated into and become trapped and surrounded by a bone matrix that they themselves produced. The spaces the cell body of osteocytes occupy within the mineralized collagen type I matrix are known as lacunae, while the osteocyte cell processes occupy channels called canaliculi. The many processes of osteocytes reach out to meet osteoblasts, osteoclasts, bone lining cells, and other osteocytes probably for the purposes of communication. Osteocytes remain in contact with other osteocytes in the bone through gap junctions—coupled cell processes which pass through the canalicular channels. Osteoclast Osteoclasts are very large multinucleate cells that are responsible for the breakdown of bones by the process of bone resorption. New bone is then formed by the osteoblasts. Bone is constantly remodeled by the resorption of osteoclasts and created by osteoblasts. Osteoclasts are large cells with multiple nuclei located on bone surfaces in what are called Howship's lacunae (or resorption pits). These lacunae are the result of surrounding bone tissue that has been reabsorbed. Because the osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces. Upon arrival, active enzymes, such as tartrate-resistant acid phosphatase, are secreted against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis. Composition Bones consist of living cells (osteoblasts and osteocytes) embedded in a mineralized organic matrix. The primary inorganic component of human bone is hydroxyapatite, the dominant bone mineral, having the nominal composition of Ca10(PO4)6(OH)2. The organic components of this matrix consist mainly of type I collagen—"organic" referring to materials produced as a result of the human body—and inorganic components, which alongside the dominant hydroxyapatite phase, include other compounds of calcium and phosphate including salts. 
Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic. The exact composition of the matrix may be subject to change over time due to nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (per weight), and trace minerals such as magnesium, sodium, potassium and carbonate also be found. Type I collagen composes 90–95% of the organic matrix, with the remainder of the matrix being a homogenous liquid called ground substance consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that prevents shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar. Woven bone (also known as fibrous bone), which is characterized by a haphazard organization of collagen fibers and is mechanically weak. Lamellar bone, which has a regular parallel alignment of collagen into sheets ("lamellae") and is mechanically strong. Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults, woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed woven. It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution." Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 µm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers. Deposition The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These synthesise collagen within the cell and then secrete collagen fibrils. The collagen fibers rapidly polymerise to form collagen strands. At this stage, they are not yet mineralised, and are called "osteoid". Around the strands calcium and phosphate precipitate on the surface of these strands, within days to weeks becoming crystals of hydroxyapatite. In order to mineralise the bone, the osteoblasts secrete vesicles containing alkaline phosphatase. This cleaves the phosphate groups and acts as the foci for calcium and phosphate deposition. 
The vesicles then rupture and act as a centre for crystals to grow on. More particularly, bone mineral is formed from globular and plate structures. Types There are five types of bones in the human body: long, short, flat, irregular, and sesamoid. Long bones are characterized by a shaft, the diaphysis, that is much longer than its width; and by an epiphysis, a rounded head at each end of the shaft. They are made up mostly of compact bone, with lesser amounts of marrow, located within the medullary cavity, and areas of spongy, cancellous bone at the ends of the bones. Most bones of the limbs, including those of the fingers and toes, are long bones. The exceptions are the eight carpal bones of the wrist, the seven articulating tarsal bones of the ankle and the sesamoid bone of the kneecap. Long bones such as the clavicle, that have a differently shaped shaft or ends are also called modified long bones. Short bones are roughly cube-shaped, and have only a thin layer of compact bone surrounding a spongy interior. Short bones provide stability and support as well as some limited motion. The bones of the wrist and ankle are short bones. Flat bones are thin and generally curved, with two parallel layers of compact bone sandwiching a layer of spongy bone. Most of the bones of the skull are flat bones, as is the sternum. Sesamoid bones are bones embedded in tendons. Since they act to hold the tendon further away from the joint, the angle of the tendon is increased and thus the leverage of the muscle is increased. Examples of sesamoid bones are the patella and the pisiform. Irregular bones do not fit into the above categories. They consist of thin layers of compact bone surrounding a spongy interior. As implied by the name, their shapes are irregular and complicated. Often this irregular shape is due to their many centers of ossification or because they contain bony sinuses. The bones of the spine, pelvis, and some bones of the skull are irregular bones. Examples include the ethmoid and sphenoid bones. Terminology In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today. Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body". When two bones join, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture". Development The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue whereas endochondral ossification involves the formation of bone from cartilage. 
Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes: the development of the ossification center, calcification, trabeculae formation and the development of the periosteum. Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates. Endochondral ossification begins with points in the cartilage called "primary ossification centers." They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). In the upper limbs, only the diaphyses of the long bones and scapula are ossified. The epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous. The following steps are followed in the conversion of cartilage to bone: Zone of reserve cartilage. This region, farthest from the marrow cavity, consists of typical hyaline cartilage that as yet shows no sign of transforming into bone. Zone of cell proliferation. A little closer to the marrow cavity, chondrocytes multiply and arrange themselves into longitudinal columns of flattened lacunae. Zone of cell hypertrophy. Next, the chondrocytes cease to divide and begin to hypertrophy (enlarge), much like they do in the primary ossification center of the fetus. The walls of the matrix between lacunae become very thin. Zone of calcification. Minerals are deposited in the matrix between the columns of lacunae and calcify the cartilage. These are not the permanent mineral deposits of bone, but only a temporary support for the cartilage that would otherwise soon be weakened by the breakdown of the enlarged lacunae. Zone of bone deposition. Within each column, the walls between the lacunae break down and the chondrocytes die. This converts each column into a longitudinal channel, which is immediately invaded by blood vessels and marrow from the marrow cavity. Osteoblasts line up along the walls of these channels and begin depositing concentric lamellae of matrix, while osteoclasts dissolve the temporarily calcified cartilage. Functions Bones have a variety of functions: Mechanical Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics). 
Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength of about , poor tensile strength of 104–121 MPa, and a very low shear stress strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resist pulling (tensional) stress less well, but only poorly resists shear stress (such as due to torsional loads). While bone is essentially brittle, bone does have a significant degree of elasticity, contributed chiefly by collagen. Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction. Synthetic The cancellous part of bones contain bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells are matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes are produced in this way. As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed. Metabolic Mineral storage – bones act as reserves of minerals important for the body, most notably calcium and phosphorus. Determined by the species, age, and the type of bone, bone cells make up to 15 percent of the bone. Growth factor storage—mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others. Fat storage – marrow adipose tissue (MAT) acts as a storage reserve of fatty acids. Acid-base balance – bone buffers the blood against excessive pH changes by absorbing or releasing alkaline salts. Detoxification – bone tissues can also store heavy metals and other foreign elements, removing them from the blood and reducing their effects on other tissues. These can later be gradually released for excretion. Endocrine organ – bone controls phosphate metabolism by releasing fibroblast growth factor 23 (FGF-23), which acts on kidneys to reduce phosphate reabsorption. Bone cells also release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both the insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat. Calcium balance – the process of bone resorption by the osteoclasts releases stored calcium into the systemic circulation and is an important process in regulating calcium balance. As bone formation actively fixes circulating calcium in its mineral form, removing it from the bloodstream, resorption actively unfixes it thereby increasing circulating calcium levels. These processes occur in tandem at site-specific locations. Remodeling Bone is constantly being created and replaced in a process known as remodeling. 
This ongoing turnover of bone is a process of resorption followed by replacement of bone with little change in shape. This is accomplished through osteoblasts and osteoclasts. Cells are stimulated by a variety of signals, and together referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamaged bones from everyday stress, and to shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress. The action of osteoblasts and osteoclasts are controlled by a number of chemical enzymes that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control the activity of each other. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland, and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation. Osteoblasts can also be stimulated to increase bone mass through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, thyroid hormone and the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorption of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, which cytokines then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin. Volume Bone volume is determined by the rates of bone formation and bone resorption. Recent research has suggested that certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins. Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Research has suggested that cancellous bone volume in postmenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption. 
Clinical significance A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumors. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints, and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists may be involved in recovery, radiologists in interpreting the findings on imaging, and pathologists in investigating the cause of the disease, and family doctors may play a role in preventing complications of bone disease such as osteoporosis. When a doctor sees a patient, a history and exam will be taken. Bones are then often imaged, called radiography. This might include ultrasound X-ray, CT scan, MRI scan and other imaging such as a Bone scan, which may be used to investigate cancer. Other tests such as a blood test for autoimmune markers may be taken, or a synovial fluid aspirate may be taken. Fractures In normal bone, fractures occur when there is significant force applied or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long-bones. Not all fractures are painful. When serious, depending on the fractures type and location, complications may include flail chest, compartment syndromes or fat embolism. Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone grafting procedures that replace missing bone portions. Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation. Tumors There are several types of tumor that can affect bone; examples of benign bone tumors include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant-cell tumor of bone, and aneurysmal bone cyst. Cancer Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, with the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer. Secondary cancers that affect bone can either destroy bone (called a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers in other parts of the body. Cancers in other parts of the body may release parathyroid hormone or parathyroid hormone-related peptide. This increases bone reabsorption, and can lead to bone fractures. 
Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt. Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used. Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of survival within five years is poor. Other painful conditions Osteomyelitis is inflammation of the bone or bone marrow due to bacterial infection. Osteomalacia is a painful softening of adult bone caused by severe vitamin D deficiency. Osteogenesis imperfecta Osteochondritis dissecans Ankylosing spondylitis Skeletal fluorosis is a bone disease caused by an excessive accumulation of fluoride in the bones. In advanced cases, skeletal fluorosis damages bones and joints and is painful. Osteoporosis Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density of 2.5 standard deviations below peak bone mass, relative to the age and sex-matched average. This density is measured using dual energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who have developed osteoporosis and are at risk of fracture. One of the most important risk factors for osteoporosis is advanced age. Accumulation of oxidative DNA damage in osteoblastic and osteoclastic cells appears to be a key factor in age-related osteoporosis. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium and trace mineral supplements may also be advised, as may Vitamin D. When medication is used, it may include bisphosphonates, Strontium ranelate, and hormone replacement therapy. Osteopathic medicine Osteopathic medicine is a school of medical thought originally developed based on the idea of the link between the musculoskeletal system and overall health, but now very similar to mainstream medicine. , over 77,000 physicians in the United States are trained in osteopathic medical schools. Osteology The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration. 
Typically anthropologists and archeologists study bone tools made by Homo sapiens and Homo neanderthalensis. Bones can serve a number of uses such as projectile points or artistic pigments, and can also be made from external bones such as antlers. Other animals Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow due to them being hollow. A bird's beak is primarily made of bone as projections of the mandibles which are covered in keratin. Some bones, primarily formed separately in subcutaneous tissues, include headgears (such as bony core of horns, antlers, ossicones), osteoderm, and os penis/ os clitoris. A deer's antlers are composed of bone which is an unusual example of bone being outside the skin of the animal once the velvet is shed. The extinct predatory fish Dunkleosteus had sharp edges of hard exposed bone along its jaws. The proportion of cortical bone that is 80% in the human skeleton may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others. This proportion can vary quickly in evolution; it often increases in early stages of returns to an aquatic lifestyle, as seen in early whales and pinnipeds, among others. It subsequently decreases in pelagic taxa, which typically acquire spongy bone, but aquatic taxa that live in shallow water can retain very thick, pachyostotic, osteosclerotic, or pachyosteosclerotic bones, especially if they move slowly, like sea cows. In some cases, even marine taxa that had acquired spongy bone can revert to thicker, compact bones if they become adapted to live in shallow water, or in hypersaline (denser) water. Many animals, particularly herbivores, practice osteophagy—the eating of bones. This is presumably carried out in order to replenish lacking phosphate. Many bone diseases that affect humans also affect other vertebrates—an example of one disorder is skeletal fluorosis. Society and culture Bones from slaughtered animals have a number of uses. In prehistoric times, they have been used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern time as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, arrows, scrimshaw, ornaments, etc. Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin. Broth is made by simmering several ingredients for a long time, traditionally including bones. Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones. Oracle bone script was a writing system used in Ancient China based on inscriptions in bones. Its name originates from oracle bones, which were mainly ox clavicle. The Ancient Chinese (mainly in the Shang dynasty), would write their questions on the oracle bone, and burn the bone, and where the bone cracked would be the answer for the questions. 
To point the bone at someone is considered bad luck in some cultures, such as among Australian Aborigines, for example by the Kurdaitcha. The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish. Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot. See also Artificial bone Bone health Distraction osteogenesis National Bone Health Campaign Skeleton External links Educational resource materials (including animations) by the American Society for Bone and Mineral Research Review (including references) of piezoelectricity and bone remodelling A good basic overview of bone biology from the Science Creative Quarterly Bone histology photomicrographs
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x0 such that f(x0) = x0. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911. Statement The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: In the plane Every continuous function from a closed disk to itself has at least one fixed point. This can be generalized to an arbitrary finite dimension: In Euclidean space Every continuous function from a closed ball of a Euclidean space into itself has a fixed point. A slightly more general version is as follows: Convex compact set Every continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point. An even more general form is better known under a different name: Schauder fixed point theorem Every continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point. Importance of the pre-conditions The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important. The function f as an endomorphism Consider the function f(x) = x + 1 with domain [−1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism. Boundedness Consider the function f(x) = x + 1, which is a continuous function from the real line to itself. As it shifts every point to the right, it cannot have a fixed point. The space is convex and closed, but not bounded.
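The closed-interval form of the theorem, and the role of the boundedness assumption just illustrated, can be checked numerically. The sketch below is a minimal Python illustration; the example map and the tolerance are arbitrary choices for demonstration. Because a continuous f mapping [a, b] into itself satisfies f(a) − a ≥ 0 and f(b) − b ≤ 0, bisecting on g(x) = f(x) − x homes in on a fixed point, whereas for the translation x + 1 on the whole real line g is constantly 1 and no fixed point exists.

    import math

    def fixed_point(f, a=0.0, b=1.0, tol=1e-12):
        """Locate a fixed point of a continuous map f that sends [a, b] into itself,
        by bisection on g(x) = f(x) - x (g(a) >= 0 and g(b) <= 0)."""
        lo, hi = a, b
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) - mid >= 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    x = fixed_point(math.cos)   # cos maps [0, 1] into itself
    print(x, math.cos(x))       # the two values agree: x is (numerically) a fixed point

    # By contrast, f(x) = x + 1 on the unbounded real line gives g(x) = 1 everywhere,
    # so the sign change that drives the bisection, and the fixed point, never appear.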
Closedness Consider the function f(x) = (x + 1) / 2, which is a continuous function from the open interval (−1,1) to itself. The only solution of f(x) = x is x = 1, which is not part of the interval, so f has no fixed point there. The space (−1,1) is convex and bounded, but not closed. On the other hand, the function f has a fixed point on the closed interval [−1,1], namely f(1) = 1. Convexity Convexity is not strictly necessary for BFPT. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, BFPT is equivalent to forms in which the domain is required to be a closed unit ball. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.). The following example shows that BFPT does not work for domains with holes. Consider the function f(x) = −x, which is a continuous function from the unit circle to itself. Since −x ≠ x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point for the unit disc, since it takes the origin to itself. A formal generalization of BFPT for "hole-free" domains can be derived from the Lefschetz fixed-point theorem. Notes The continuous function in this theorem is not required to be bijective or surjective. Illustrations The theorem has several "real world" illustrations. Here are some examples. Take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it. Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country. In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a delicious cocktail in a glass (or think about a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state. Intuitive approach Explanations attributed to Brouwer The theorem is supposed to have originated from Brouwer's observation of a cup of gourmet coffee.
If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears. Brouwer is said to have added: "I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness. One-dimensional case In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval [a, b] which maps x to x (light green). Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function g which maps x to f(x) − x. It is ≥ 0 on a and ≤ 0 on b. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point. Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string." History The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 first was proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known. Before discovery At the end of the 19th century, the old problem of the stability of the solar system returned into the focus of the mathematical community. Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge." 
He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision". He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case. To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion. Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction. First proofs At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed. It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. 
The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator." Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology. Reception The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory. Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem. Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations. Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets. Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. 
Proof outlines A proof using degree Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature. Let K denote the closed unit ball in Rn centered at the origin, with interior B(0). Suppose for simplicity that f : K → K is continuously differentiable. A regular value of f is a point p ∈ B(0) such that the Jacobian of f is non-singular at every point of the preimage of p. In particular, by the inverse function theorem, every point of the preimage of p lies in B(0) (the interior of K). The degree of f at a regular value p is defined as the sum of the signs of the Jacobian determinant of f over the preimages of p under f: deg_p(f) = Σ sign det(df_x), the sum running over all x in f⁻¹(p). The degree is, roughly speaking, the number of "sheets" of the preimage f lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions. The degree satisfies the property of homotopy invariance: let f and g be two continuously differentiable functions, and set Ht(x) = t f(x) + (1 − t) g(x) for 0 ≤ t ≤ 1. Suppose that the point p is a regular value of Ht for all t. Then deg_p f = deg_p g. If f has no fixed point on the boundary of K, then the function g(x) = x − f(x) is well-defined (its degree at the origin makes sense, since g does not vanish on the boundary), and Ht(x) = x − t f(x) defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so g also has degree one at the origin. As a consequence, the preimage g⁻¹(0) is not empty. The elements of g⁻¹(0) are precisely the fixed points of the original function f. This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and this approach has become a standard proof in the literature. A proof using the hairy ball theorem The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. (The tangency condition means that w(x) ⋅ x = 0 for every unit vector x.) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem can be found in the literature. In fact, suppose first that w is continuously differentiable. By scaling, it can be assumed that w is a continuously differentiable unit tangent vector field on S. It can be extended radially to a small spherical shell A of S. For t sufficiently small, a routine computation shows that the mapping ft(x) = x + t w(x) is a contraction mapping on A and that the volume of its image is a polynomial in t. On the other hand, as a contraction mapping, ft must restrict to a homeomorphism of S onto (1 + t²)^½ S and of A onto (1 + t²)^½ A. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, (1 + t²)^(n/2) is not a polynomial. If w is only a continuous unit tangent vector field on S, by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map u of A into Euclidean space. The orthogonal projection onto the tangent space is given by v(x) = u(x) − (u(x) ⋅ x) x. Thus v is polynomial and nowhere vanishing on A; by construction v/‖v‖ is a smooth unit tangent vector field on S, a contradiction. The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that n is even.
If there were a fixed-point-free continuous self-mapping f of the closed unit ball B of the n-dimensional Euclidean space V, set w(x) = x − f(x). Since f has no fixed points, it follows that, for x in the interior of B, the vector w(x) is non-zero; and for x in S, the unit sphere bounding B, the scalar product x ⋅ w(x) = 1 − x ⋅ f(x) is strictly positive. From the original n-dimensional Euclidean space V, construct a new auxiliary (n + 1)-dimensional space W = V × R, with coordinates y = (x, t). Set X(x, t) = (−t w(x), x ⋅ w(x)). By construction X is a continuous vector field on the unit sphere of W, satisfying the tangency condition y ⋅ X(y) = 0. Moreover, X(y) is nowhere vanishing (because, if x has norm 1, then x ⋅ w(x) is non-zero; while if x has norm strictly less than 1, then t and w(x) are both non-zero). This contradiction proves the fixed point theorem when n is even. For n odd, one can apply the fixed point theorem to the closed unit ball in n + 1 dimensions and the mapping F(x, y) = (f(x), 0). The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk–Ulam theorem require tools from algebraic topology. A proof using homology or cohomology The proof uses the observation that the boundary of the n-disk Dn is Sn−1, the (n − 1)-sphere. Suppose, for contradiction, that a continuous function f : Dn → Dn has no fixed point. This means that, for every point x in Dn, the points x and f(x) are distinct. Because they are distinct, for every point x in Dn, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary Sn−1. By calling this intersection point F(x), we define a function F : Dn → Sn−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x. Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case Sn−1) is a fixed point of F. Intuitively it seems unlikely that there could be a retraction of Dn onto Sn−1, and in the case n = 1, the impossibility is more basic, because S0 (i.e., the endpoints of the closed interval D1) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D2 to that of S1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields. For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology Hn−1(Dn) is trivial, while Hn−1(Sn−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group. The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space En. For n ≥ 2, the de Rham cohomology of U = En − {0} is one-dimensional in degree 0 and n − 1, and vanishes otherwise. If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n − 1 would have to vanish, a contradiction. A proof using Stokes' theorem As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction F from the ball B onto its boundary ∂B.
In that case it can be assumed that F is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If ω is a volume form on the boundary then by Stokes' theorem, 0 < ∫∂B ω = ∫∂B F*ω = ∫B d(F*ω) = ∫B F*(dω) = ∫B F*0 = 0, giving a contradiction. More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the de Rham cohomology group Hn−1(∂B), which is isomorphic to the homology group Hn−1(∂B) by de Rham's theorem. A combinatorial proof The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex Δn to itself, where Δn is the set of points P in Rn+1 with non-negative coordinates summing to 1. For every point P in Δn, also f(P) is in Δn. Hence the sum of their coordinates is equal: P1 + … + Pn+1 = 1 = f(P)1 + … + f(P)n+1. Hence, by the pigeonhole principle, for every P in Δn, there must be an index j such that the jth coordinate of P is greater than or equal to the jth coordinate of its image under f: Pj ≥ f(P)j. Moreover, if P lies on a k-dimensional sub-face of Δn, then by the same argument, the index j can be selected from among the coordinates which are not zero on this sub-face. We now use this fact to construct a Sperner coloring. For every triangulation of Δn, the color of every vertex P is an index j such that f(P)j ≤ Pj. By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of n + 1 available colors. Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point P which satisfies the labeling condition in all coordinates: f(P)j ≤ Pj for all j. Because the sum of the coordinates of P and f(P) must be equal, all these inequalities must actually be equalities. But this means that f(P) = P. That is, P is a fixed point of f. A proof by Hirsch There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction. R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point q on the boundary (assuming it is not a fixed point), the one-manifold with boundary mentioned above does exist, and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems. A proof using oriented area A variation of the preceding proof does not employ Sard's theorem, and goes as follows.
If r : B → ∂B is a smooth retraction, one considers the smooth deformation gt(x) := t r(x) + (1 − t) x and the smooth function φ(t) := ∫B det Dgt(x) dx. Differentiating under the sign of integral, it is not difficult to check that φ′(t) = 0 for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of gt(B) (that is, the Lebesgue measure of the image of the ball via gt, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes from 0 to 1 the map gt transforms continuously from the identity map of the ball, to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure. A proof using the game Hex A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of dimensions 10 × 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex. A proof using the Lefschetz fixed-point theorem The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number, and in particular if the Lefschetz number is nonzero then f must have a fixed point. If B is a ball (or more generally is contractible) then the Lefschetz number is one because the only non-zero simplicial homology group is H0(B), and f acts as the identity on this group, so f has a fixed point. A proof in a weak logical system In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely, over the base system RCA0, Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem. Generalizations The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems. The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map f : ℓ2 → ℓ2 which sends a sequence (xn) from the closed unit ball of ℓ2 to the sequence (yn) defined by y1 = √(1 − ‖x‖²) and yn+1 = xn for n ≥ 1. It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point. The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.
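A small numerical sketch (in Python) of the ℓ2 map just described may make the failure of the theorem concrete. The starting point below is arbitrary, and sequences are represented by their finitely many non-zero leading coordinates, which is enough for this particular map. The printout shows that the image always lands on the unit sphere and that the displacement ‖f(x) − x‖ never shrinks toward zero, in line with the absence of a fixed point.

```python
import math

def f(x):
    """The map from the text, restricted to sequences with finitely many
    non-zero coordinates: prepend sqrt(1 - ||x||^2) and shift the rest right."""
    norm_sq = min(sum(t * t for t in x), 1.0)   # guard against rounding slightly above 1
    return [math.sqrt(1.0 - norm_sq)] + list(x)

x = [0.6, 0.0, 0.48]                            # an arbitrary point of the closed unit ball
for step in range(6):
    fx = f(x)
    padded = x + [0.0]                          # align lengths before comparing coordinates
    displacement = math.sqrt(sum((a - b) ** 2 for a, b in zip(fx, padded)))
    image_norm = math.sqrt(sum(t * t for t in fx))
    print(f"step {step}: ||f(x)|| = {image_norm:.4f}, ||f(x) - x|| = {displacement:.4f}")
    x = fx
```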
There is also a finite-dimensional generalization to a larger class of spaces: If X is a product of finitely many chainable continua, then every continuous function f : X → X has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement U1, …, Um, such that Ui ∩ Uj ≠ ∅ if and only if |i − j| ≤ 1. Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers. The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in Rn, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set. The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of Dn. Equivalent results See also Banach fixed-point theorem Fixed-point computation Infinite compositions of analytic functions Nash equilibrium Poincaré–Miranda theorem – equivalent to the Brouwer fixed-point theorem Topological combinatorics Notes References (see p. 72–73 for Hirsch's proof utilizing non-existence of a differentiable retraction) Leoni, Giovanni (2017). A First Course in Sobolev Spaces: Second Edition. Graduate Studies in Mathematics. 181. American Mathematical Society. pp. 734. External links Brouwer's Fixed Point Theorem for Triangles at cut-the-knot Brouwer theorem, from PlanetMath with attached proof. Reconstructing Brouwer at MathPages Brouwer Fixed Point Theorem at Math Images. Fixed-point theorems Theory of continuous functions Theorems in topology Theorems in convex geometry
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form pi ∝ exp(−εi/(kT)), where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see the section on the distribution below for the proportionality constant). The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: pi/pj = exp((εj − εi)/(kT)). The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902. The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution. The distribution The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as pi = exp(−εi/(kT)) / Q, with Q = Σj exp(−εj/(kT)) summed over all accessible states, where: exp is the exponential function, pi is the probability of state i, εi is the energy of state i, k is the Boltzmann constant, T is the absolute temperature of the system, M is the number of all states accessible to the system of interest, and Q (denoted by some authors by Z) is the normalization denominator, which is the canonical partition function. It results from the constraint that the probabilities of all accessible states must add up to 1. Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy subject to the normalization constraint that Σi pi = 1 and the constraint that Σi pi εi equals a particular mean energy value. The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database. The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied.
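As a concrete illustration of the formulas above and of the two-state comparison taken up next, here is a minimal numerical sketch in Python with NumPy; the three state energies and the temperature are arbitrary illustrative values, not data from the text.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # illustrative temperature, K
energies = np.array([0.0, 1.0, 2.5]) * 1.0e-21   # illustrative state energies, J

# Boltzmann distribution: p_i = exp(-E_i / kT) / Q, with Q the partition function.
boltzmann_factors = np.exp(-energies / (k_B * T))
Q = boltzmann_factors.sum()                 # canonical partition function
p = boltzmann_factors / Q                   # probabilities of the individual states

print("partition function Q =", Q)
print("state probabilities  =", p, " (sum =", p.sum(), ")")

# The ratio of two state probabilities depends only on their energy difference:
ratio_direct = p[1] / p[0]
ratio_factor = np.exp(-(energies[1] - energies[0]) / (k_B * T))
print("p2/p1 =", ratio_direct, " Boltzmann factor =", ratio_factor)
```

The last two printed numbers agree, illustrating that the ratio of two state probabilities depends only on the energy difference, which is exactly the ratio formula that follows.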
The ratio of probabilities for states i and j is given as pi/pj = exp((εj − εi)/(kT)), where: pi is the probability of state i, pj the probability of state j, εi is the energy of state i, and εj is the energy of state j. The corresponding ratio of populations of energy levels must also take their degeneracies into account. The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is the fraction of particles that occupy state i: pi = Ni/N, where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability that is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is Ni/N = exp(−εi/(kT)) / Σj exp(−εj/(kT)). This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition. The softmax function commonly used in machine learning is related to the Boltzmann distribution: the vector of probabilities (p1, …, pM) is the softmax of the vector of scaled negative energies (−ε1/(kT), …, −εM/(kT)). Generalized Boltzmann distribution A distribution of the more general exponential form, in which the exponent contains, in addition to the energy term, a sum of products of intensive variables and their conjugate extensive quantities divided by kT, is called the generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations. The generalized Boltzmann distribution has the following properties: It is the only distribution for which the entropy as defined by the Gibbs entropy formula matches with the entropy as defined in classical thermodynamics. It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble averages. In statistical mechanics The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble.
Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects: Canonical ensemble (general case) The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form. Statistical frequencies of subsystems' states (in a non-interacting collection) When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form. Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles) In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form. Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed: When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead. If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and even may not have an analytical solution. The canonical ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium. With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively. In mathematics In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. 
In the design of Boltzmann machines in deep learning, as the number of nodes increases, the difficulty of implementation in real-time applications becomes critical, so a different type of architecture named the restricted Boltzmann machine was introduced. In economics The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization. See also Bose–Einstein statistics Fermi–Dirac statistics Negative temperature Softmax function References Statistical mechanics Distribution
Bouldering is a form of free climbing that is performed on small rock formations or artificial rock walls without the use of ropes or harnesses. While bouldering can be done without any equipment, most climbers use climbing shoes to help secure footholds, chalk to keep their hands dry and to provide a firmer grip, and bouldering mats to prevent injuries from falls. Unlike free solo climbing, which is also performed without ropes, bouldering problems (the sequence of moves that a climber performs to complete the climb) are usually less than tall. Traverses, which are a form of boulder problem, require the climber to climb horizontally from one end to another. Artificial climbing walls allow boulderers to climb indoors in areas without natural boulders. In addition, bouldering competitions take place in both indoor and outdoor settings. The sport was originally a method of training for roped climbs and mountaineering, so climbers could practice specific moves at a safe distance from the ground. Additionally, the sport served to build stamina and increase finger strength. Throughout the 20th century, bouldering evolved into a separate discipline. Individual problems are assigned ratings based on difficulty. Although there have been various rating systems used throughout the history of bouldering, modern problems usually use either the V-scale or the Fontainebleau scale. The growing popularity of bouldering has caused several environmental concerns, including soil erosion and trampled vegetation, as climbers often hike off-trail to reach bouldering sites. This has caused some landowners to restrict access or prohibit bouldering altogether. Outdoor bouldering The characteristics of boulder problems depend largely on the type of rock being climbed. For example, granite often features long cracks and slabs while sandstone rocks are known for their steep overhangs and frequent horizontal breaks. Limestone and volcanic rock are also used for bouldering. There are many prominent bouldering areas throughout the United States, including Hueco Tanks in Texas, Mount Blue Sky in Colorado, The Appalachian Mountains in The Eastern United States, and The Buttermilks in Bishop, California. Squamish, British Columbia is one of the most popular bouldering areas in Canada. Europe is also home to a number of bouldering sites, such as Fontainebleau in France, Meschia in Italy, Albarracín in Spain, and various mountains throughout Switzerland. Africa's most prominent bouldering areas include the more established Rocklands, South Africa, the newer Oukaïmeden in Morocco or more recently opened areas like Chimanimani in Zimbabwe. Indoor bouldering Artificial climbing walls are used to simulate boulder problems in an indoor environment, usually at climbing gyms. These walls are constructed with wooden panels, polymer cement panels, concrete shells, or precast molds of actual rock walls. Holds, usually made of plastic, are then bolted onto the wall to create problems. Some problems use steep overhanging surfaces which force the climber to support much of their weight using their upper body strength. Other problems are set on flat walls; Instead of requiring upper body strength, these problems create difficulty by requiring the climber to execute a series of predetermined movements to complete the route. The IFSC Climbing World Championships have noticeably included more of such problems in their competitions as of late. Climbing gyms often feature multiple problems within the same section of wall. 
In the US the most common method route-setters use to designate the intended problem is by placing colored tape next to each hold. For example, red tape would indicate one bouldering problem while green tape would be used to set a different problem in the same area. Across much of the rest of the world, problems and grades are usually designated using a set color of plastic hold to indicate problems and their difficulty levels. Using colored holds to set has certain advantages, the most notable of which are that it makes it more obvious where the holds for a problem are, and that there is no chance of tape being accidentally kicked off footholds. Smaller, resource-poor climbing gyms may prefer taped problems because large, expensive holds can be used in multiple routes by marking them with more than one color of tape. The tape indicates the hold(s) that the athlete should grab first. Indoor bouldering requires very little in terms of equipment, at minimum climbing shoes, at maximum, a chalk bag, chalk, a brush, and climbing shoes. Grading Bouldering problems are assigned numerical difficulty ratings by route-setters and climbers. The two most widely used rating systems are the V-scale and the Fontainebleau system. The V-scale, which originated in the United States, is an open-ended rating system with higher numbers indicating a higher degree of difficulty. The V1 rating indicates that a problem can be completed by a novice climber in good physical condition after several attempts. The scale begins at V0, and as of 2013, the highest V rating that has been assigned to a bouldering problem is V17. Some climbing gyms also use a VB grade to indicate beginner problems. The Fontainebleau scale follows a similar system, with each numerical grade divided into three ratings with the letters a, b, and c. For example, Fontainebleau 7A roughly corresponds with V6, while Fontainebleau 7C+ is equivalent to V10. In both systems, grades are further differentiated by appending "+" to indicate a small increase in difficulty. Despite this level of specificity, ratings of individual problems are often controversial, as ability level is not the only factor that affects how difficult a problem may be for a particular climber. Height, arm length, flexibility, and other body characteristics can also be relevant to perceived difficulty. Highball bouldering Highball bouldering is defined as climbing high, difficult, long, and tall boulders, using the same protection as standard bouldering. This form of bouldering adds an additional requirement of mental focus to the existing test of physical strength and skill. Highballing, like most of climbing, is open to interpretation. Most climbers say anything above is a highball and can range in height up to where highball bouldering then turns into free soloing. Highball bouldering may have begun in 1961 when John Gill, without top-rope rehearsal, bouldered a steep face on a granite spire called "The Thimble". The difficulty level of this ascent (V4/5 or 5.12a) was extraordinary for that time. Gill's achievement initiated a wave of climbers making ascents of large boulders. Later, with the introduction and evolution of crash pads, climbers were able to push the limits of highball bouldering ever higher. In 2002 Jason Kehl completed the first highball at double-digit V-difficulty, called Evilution, a boulder in the Buttermilks of California, earning the grade of V12. 
This climb marked the beginning of a new generation of highball climbing that pushed not only height but great difficulty. It is not unusual for climbers to rehearse such risky problems on top-rope, although this practice is not a settled issue. Important milestone ascents in this style include: Ambrosia, V11, a boulder in Bishop, California, climbed by Kevin Jorgeson in 2015. Too Big to Flail, V10, another line in Bishop, California, climbed by Alex Honnold in 2016. Livin' Large, V15, a boulder in Rocklands, South Africa, found and established by Nalle Hukkataival in 2009, which has been repeated by only one person, Jimmy Webb. The Process, V16, a boulder in Bishop, California, first climbed by Daniel Woods in 2015. Competition bouldering Traditionally, competition in bouldering was informal, with climbers working out problems near the limits of their abilities, then challenging their peers to repeat these accomplishments. However, modern climbing gyms allow for a more formal competitive structure. The International Federation of Sport Climbing (IFSC) employs an indoor format (although competitions can also take place in an outdoor setting) that breaks the competition into three rounds: qualifications, semi-finals, and finals. The rounds feature different sets of four to six boulder problems, and each competitor has a fixed amount of time to attempt each problem. At the end of each round, competitors are ranked by the number of completed problems with ties settled by the total number of attempts taken to solve the problems. Some competitions only permit climbers a fixed number of attempts at each problem with a timed rest period in between. In an open-format competition, all climbers compete simultaneously, and are given a fixed amount of time to complete as many problems as possible. More points are awarded for more difficult problems, while points are deducted for multiple attempts on the same problem. In 2012, the IFSC submitted a proposal to the International Olympic Committee (IOC) to include lead climbing in the 2020 Summer Olympics. The proposal was later revised to an "overall" competition, which would feature bouldering, lead climbing, and speed climbing. In May 2013, the IOC announced that climbing would not be added to the 2020 Olympic program. In 2016, the International Olympic Committee (IOC) officially approved climbing as an Olympic sport "in order to appeal to younger audiences." The Olympics will feature the earlier proposed overall competition. Medalists will be competing in all three categories for a best overall score. The score will be calculated by the multiplication of the positions that the climbers have attained in each discipline of climbing. History Rock climbing first appeared as a sport in the late 1800s. Early records describe climbers engaging in what is now referred to as bouldering, not as a separate discipline, but as a playful form of training for larger ascents. It was during this time that the words "bouldering" and "problem" first appeared in British climbing literature. Oscar Eckenstein was an early proponent of the activity in the British Isles. In the early 20th century, the Fontainebleau area of France established itself as a prominent climbing area, where some of the first dedicated bleausards (or "boulderers") emerged. One of those athletes, Pierre Allain, invented the specialized shoe used for rock climbing. 
Modern bouldering In the late 1950s through the 1960s, American mathematician John Gill pushed the sport further and contributed several important innovations, distinguishing bouldering as a separate discipline in the process. Gill previously pursued gymnastics, a sport which had an established scale of difficulty for movements and body positions, and shifted the focus of bouldering from reaching the summit to navigating a set of holds. Gill developed a rating system that was closed-ended: B1 problems were as difficult as the most challenging roped routes of the time, B2 problems were more difficult, and B3 problems had been completed once. Gill introduced chalk as a method of keeping the climber's hands dry, promoted a dynamic climbing style, and emphasized the importance of strength training to complement skill. As Gill improved in ability and influence, his ideas became the norm. In the 1980s, two important training tools emerged. One important training tool was bouldering mats, also referred to as "crash pads", which protected against injuries from falling and enabled boulderers to climb in areas that would have been too dangerous otherwise. The second important tool was indoor climbing walls, which helped spread the sport to areas without outdoor climbing and allowed serious climbers to train year-round. As the sport grew in popularity, new bouldering areas were developed throughout Europe and the United States, and more athletes began participating in bouldering competitions. The visibility of the sport greatly increased in the early 2000s, as YouTube videos and climbing blogs helped boulderers around the world to quickly learn techniques, find hard problems, and announce newly completed projects. Notable ascents Notable boulder climbs are chronicled by the climbing media to track progress in boulder climbing standards and levels of technical difficulty; in contrast, the hardest traditional climbing routes tend to be of lower technical difficulty due to the additional burden of having to place protection during the course of the climb, and due to the lack of any possibility of using natural protection on the most extreme climbs. As of November 2022, the world's hardest bouldering routes are Burden of Dreams by Nalle Hukkataival and Return of the Sleepwalker by Daniel Woods, both at proposed grades of . There are a number of routes with a confirmed climbing grade of , the first of which was Gioia by Christian Core in 2008 (and confirmed by Adam Ondra in 2011). As of December 2021, female climbers Josune Bereziartu, Ashima Shiraishi, and Kaddi Lehmann have repeated boulder problems at the boulder grade. On July 28, 2023, Katie Lamb completed the first-ever female ascent of an , climbing Box Therapy at Rocky Mountain National Park. Equipment Unlike other climbing sports, bouldering can be performed safely and effectively with very little equipment, an aspect which makes the discipline highly appealing, but opinions differ. While bouldering pioneer John Sherman asserted that "The only gear really needed to go bouldering is boulders," others suggest the use of climbing shoes and a chalkbag – a small pouch where ground-up chalk is kept – as the bare minimum, and more experienced boulderers typically bring multiple pairs of climbing shoes, chalk, brushes, crash pads, and a skincare kit. Climbing shoes have the most direct impact on performance. Besides protecting the climber's feet from rough surfaces, climbing shoes are designed to help the climber secure footholds. 
Climbing shoes typically fit much tighter than other athletic footwear and often curl the toes downwards to enable precise footwork. They are manufactured in a variety of different styles to perform in different situations. For example, High-top shoes provide better protection for the ankle, while low-top shoes provide greater flexibility and freedom of movement. Stiffer shoes excel at securing small edges, whereas softer shoes provide greater sensitivity. The front of the shoe, called the "toe box", can be asymmetric, which performs well on overhanging rocks, or symmetric, which is better suited for vertical problems and slabs.   To absorb sweat, most boulderers use gymnastics chalk on their hands, stored in a chalkbag, which can be tied around the waist (also called sport climbing chalkbags), allowing the climber to reapply chalk during the climb. There are also versions of floor chalkbags (also called bouldering chalkbags), which are usually bigger than sport climbing chalkbags and are meant to be kept on the floor while climbing; this is because boulders do not usually have so many movements as to require chalking up more than once. Different sizes of brushes are used to remove excess chalk and debris from boulders in between climbs; they are often attached to the end of a long straight object in order to reach higher holds. Crash pads, also referred to as bouldering mats, are foam cushions placed on the ground to protect climbers from injury after falling. Boulder problems are generally shorter than from ground to top. This makes the sport significantly safer than free solo climbing, which is also performed without ropes, but with no upper limit on the height of the climb. However, minor injuries are common in bouldering, particularly sprained ankles and wrists. Two factors contribute to the frequency of injuries in bouldering: first, boulder problems typically feature more difficult moves than other climbing disciplines, making falls more common. Second, without ropes to arrest the climber's descent, every fall will cause the climber to hit the ground. To prevent injuries, boulderers position crash pads near the boulder to provide a softer landing, as well as one or more spotters (people watching out for the climber to fall in convenient position) to help redirect the climber towards the pads. Upon landing, boulderers employ falling techniques similar to those used in gymnastics: spreading the impact across the entire body to avoid bone fractures, and positioning limbs to allow joints to move freely throughout the impact. Technique Although every type of rock climbing requires a high level of strength and technique, bouldering is the most dynamic form of the sport, requiring the highest level of power and placing considerable strain on the body. Training routines that strengthen fingers and forearms are useful in preventing injuries such as tendonitis and ruptured ligaments. However, as with other forms of climbing, bouldering technique begins with proper footwork. Leg muscles are significantly stronger than arm muscles; thus, proficient boulderers use their arms to maintain balance and body positioning as much as possible, relying on their legs to push them up the rock. Boulderers also keep their arms straight with their shoulders engaged whenever feasible, allowing their bones to support their body weight rather than their muscles. Bouldering movements are described as either "static" or "dynamic". 
Static movements are those that are performed slowly, with the climber's position controlled by maintaining contact on the boulder with the other three limbs. Dynamic movements use the climber's momentum to reach holds that would be difficult or impossible to secure statically, with an increased risk of falling if the movement is not performed accurately. Environmental impact Bouldering can damage vegetation that grows on rocks, such as moss and lichens. This can occur as a result of the climber intentionally cleaning the boulder, or unintentionally from repeated use of handholds and footholds. Vegetation on the ground surrounding the boulder can also be damaged from overuse, particularly by climbers laying down crash pads. Soil erosion can occur when boulderers trample vegetation while hiking off of established trails, or when they unearth small rocks near the boulder in an effort to make the landing zone safer in case of a fall. The repeated use of white climbing chalk can damage the rock surface of boulders and cliffs, particularly sandstone and other porous rock types, and the scrubbing of rocks to remove chalk can also degrade the rock surface. In order to prevent chalk from damaging the surface of the rock, it is important to remove it gently with a brush after a rock climbing session. Other environmental concerns include littering, improperly disposed feces, and graffiti. These issues have caused some land managers to prohibit bouldering, as was the case in Tea Garden, a popular bouldering area in Rocklands, South Africa. See also Competition climbing Free solo climbing Lead climbing References External links Articles containing video clips Types of climbing
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity of the universe (the horizon problem) and its apparent lack of spatial curvature (the flatness problem) are explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang. Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The unequal abundance of matter and antimatter that allowed this to occur is an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy. Features of the models The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the CMB, large-scale structure, and Hubble's law. The models depend on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. These ideas were initially taken as postulates, but later efforts were made to test each of them.
For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10−5. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars. The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10−5 via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. Horizons An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. Thermalization Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. Timeline According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling. Singularity Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot fully extrapolate toward the singularity. In some proposals, such as the emergent Universe models, the singularity is replaced by another cosmological epoch. A different approach identifies the initial singularity as a singularity predicted by some models of the Big Bang theory to have existed before the Big Bang. This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. 
In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event—known as the "age of the universe"—is 13.8 billion years. Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient. Inflation and baryogenesis The earliest phases of the Big Bang are subject to much speculation, since astronomical data about them are not available. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10−43 seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces—the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force, were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, , and consequently had a temperature of approximately 1032 degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10−43 seconds, where gravitation separated from the other forces as the universe's temperature fell. At approximately 10−37 seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10−36 seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around 10−33 to 10−32 seconds, with the observable universe's volume having increased by a factor of at least 1078. Reheating occurred until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. 
Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. Cooling The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10⁻¹² seconds. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10⁸ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background. Structure formation Over a long period of time, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. (Warm dark matter is ruled out by early reionization.) This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, the "physical baryon density" is estimated at 0.023.
(This is different from the 'baryon density' expressed as a fraction of the total matter/energy density, which is about 0.046.) The corresponding cold dark matter density is about 0.11, and the corresponding neutrino density is estimated to be less than 0.0062. Cosmic acceleration Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10−15 seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics. Concept history Etymology English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. An attempt to find a more suitable alternative was not successful. Development The Big Bang models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. 
Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the Big Bang concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed: During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced BBN and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. 
Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement concerned the precise values of the Hubble constant and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating. Observational evidence The earliest and most direct observational evidence of the validity of the theory includes the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and of the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics. Hubble's law and the expansion of the universe Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated.
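As a minimal illustration of that last step (an added sketch, not drawn from the article itself), the redshift z is obtained by comparing an observed line wavelength with its laboratory value, and for small z the implied recession velocity is approximately c·z; the relativistic Doppler formula used below reduces to that limit. The observed wavelength in this example is hypothetical.

# Illustrative only: infer a recession velocity from a measured redshift.
c = 299_792.458  # speed of light, km/s

def redshift(observed_nm, rest_nm):
    # z = (lambda_observed - lambda_rest) / lambda_rest
    return (observed_nm - rest_nm) / rest_nm

def velocity_from_z(z):
    # Relativistic Doppler relation; for small z this is approximately c * z.
    return c * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

# Hypothetical example: the H-alpha line (rest wavelength 656.3 nm) observed at 662.9 nm.
z = redshift(662.9, 656.3)                                # ~0.010
print(f"z ~ {z:.4f}, v ~ {velocity_from_z(z):.0f} km/s")  # ~3000 km/s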
For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: v = H0 D, where v is the recessional velocity of the galaxy or other distant object, D is the proper distance to the object, and H0 is Hubble's constant, measured to be roughly 70 km/s/Mpc by WMAP. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation v = H D to hold at all times, where D is the proper distance, v is the recessional velocity, and v, H, and D vary as the universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one. An unexplained discrepancy with the determination of the Hubble constant is known as Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder. Cosmic microwave background radiation In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the Big Bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics. The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. At around this time, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent. In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10^4, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10^5. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments.
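The 2.725 K blackbody temperature also explains why the relic radiation appears in the microwave band: by Wien's displacement law its spectrum peaks at a wavelength of about one millimetre. A one-line Python check (an illustrative aside, not from the article):

b = 2.898e-3       # Wien's displacement constant, m*K
T_cmb = 2.725      # measured CMB temperature, K
peak = b / T_cmb   # peak wavelength of the blackbody spectrum
print(f"CMB spectrum peaks near {peak * 1e3:.2f} mm")   # ~1.06 mm, i.e. microwaves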
In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing. Abundance of primordial elements Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (4He), helium-3 (3He), deuterium (2H), and lithium-7 (7Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for 4He:H, about 10^−3 for 2H:H, about 10^−4 for 3He:H, and about 10^−9 for 7Li:H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for 4He, and off by a factor of two for 7Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than 3He, and in constant ratios, too. Galactic evolution and distribution Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggests that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory. Primordial gas clouds In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars.
Although the observations were sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN. Other lines of evidence The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all the estimates turn out to agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult. Future observations Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe, up to less than a second after the Big Bang. Problems and related issues in physics As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists. Baryon asymmetry It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
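The temperature prediction mentioned under "Other lines of evidence" above is the simple scaling T(z) = T0 (1 + z) of the background temperature with redshift. A minimal Python illustration follows (the redshift used here is hypothetical, chosen only to show the arithmetic):

T0 = 2.725   # present-day CMB temperature, K
z = 2.5      # hypothetical redshift of an absorbing gas cloud
T_at_z = T0 * (1 + z)
print(f"predicted background temperature at z = {z}: {T_at_z:.2f} K")   # ~9.5 K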
Dark energy Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units. Dark matter During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. 
Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. Horizon problem The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB. A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended. Magnetic monopoles The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness. Flatness problem The flatness problem (also known as the oldness problem) is an observational problem associated with a FLRW. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. 
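For orientation, the critical density that separates these three cases follows from the Friedmann equations as rho_c = 3 H0^2 / (8 pi G). The Python sketch below is an illustrative aside that assumes a round Hubble constant of 70 km/s/Mpc; it gives a critical density of roughly 9 × 10^−27 kg/m^3, equivalent to only a few hydrogen atoms per cubic metre.

import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22      # 70 km/s/Mpc converted to 1/s (assumed round value)

rho_crit = 3 * H0**2 / (8 * math.pi * G)
m_hydrogen = 1.67e-27            # mass of a hydrogen atom, kg

print(f"critical density ~ {rho_crit:.1e} kg/m^3")                        # ~9e-27
print(f"                 ~ {rho_crit / m_hydrogen:.1f} hydrogen atoms per m^3")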
Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 1014 of its critical value, or it would not exist as it does today. Misconceptions One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands. The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel. Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice. The most natural interpretation of the cosmological redshift is that it is a Doppler shift. Implications Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture. Pre–Big Bang cosmology The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came to existence in a random way, but as inflation models show, some combinations of these are far more probable. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. 
The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. It took place instantly, in our perspective, due to the absence of perceived time before the Big Bang. Emergent Universe models, which feature a low-activity past-eternal era before the Big Bang, resembling ancient ideas of a cosmic egg and birth of the world out of primordial chaos. Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient. Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other. Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang. Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. Ultimate fate of the universe Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. 
The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. Religious and philosophical interpretations As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
The Boeing CIM-10 Bomarc ("Boeing Michigan Aeronautical Research Center") (IM-99 Weapon System prior to September 1962) was a supersonic ramjet powered long-range surface-to-air missile (SAM) used during the Cold War for the air defense of North America. In addition to being the first operational long-range SAM and the first operational pulse doppler aviation radar, it was the only SAM deployed by the United States Air Force. Stored horizontally in a launcher shelter with a movable roof, the missile was erected, fired vertically using rocket boosters to high altitude, and then tipped over into a horizontal Mach 2.5 cruise powered by ramjet engines. This lofted trajectory allowed the missile to operate at a maximum range as great as 430 mi (700 km). Controlled from the ground for most of its flight, when it reached the target area it was commanded to begin a dive, activating an onboard active radar homing seeker for terminal guidance. A radar proximity fuse detonated the warhead, either a large conventional explosive or the W40 nuclear warhead. The Air Force originally planned for a total of 52 sites covering most of the major cities and industrial regions in the US. The US Army was deploying their own systems at the same time, and the two services fought constantly both in political circles and in the press. Development dragged on, and by the time it was ready for deployment in the late 1950s, the nuclear threat had moved from manned bombers to the intercontinental ballistic missile (ICBM). By this time the Army had successfully deployed the much shorter range Nike Hercules that they claimed filled any possible need through the 1960s, in spite of Air Force claims to the contrary. As testing continued, the Air Force reduced its plans to sixteen sites, and then again to eight with an additional two sites in Canada. The first US site was declared operational in 1959, but with only a single working missile. Bringing the rest of the missiles into service took years, by which time the system was obsolete. Deactivations began in 1969 and by 1972 all Bomarc sites had been shut down. A small number were used as target drones, and only a few remain on display today. Design and development Bomarc A In 1946, Boeing started to study surface-to-air guided missiles under the United States Army Air Forces project MX-606. By 1950, Boeing had launched more than 100 test rockets in various configurations, all under the designator XSAM-A-1 GAPA (Ground-to-Air Pilotless Aircraft). Because these tests were very promising, Boeing received a USAF contract in 1949 to develop a pilotless interceptor (a term then used by the USAF for air-defense guided missiles) under project MX-1599. The MX-1599 missile was to be a ramjet-powered, nuclear-armed long-range surface-to-air missile to defend the Continental United States from high-flying bombers. The Michigan Aerospace Research Center (MARC) was added to the project soon afterward, and this gave the new missile its name Bomarc (for Boeing and MARC). In 1951, the USAF decided to emphasize its point of view that missiles were nothing else than pilotless aircraft by assigning aircraft designators to its missile projects, and anti-aircraft missiles received F-for-Fighter designations. The Bomarc became the F-99. Test flights of XF-99 test vehicles began in September 1952 and continued through early 1955. The XF-99 tested only the liquid-fueled booster rocket, which would accelerate the missile to ramjet ignition speed. 
In February 1955, tests of the XF-99A propulsion test vehicles began. These included live ramjets, but still had no guidance system or warhead. The designation YF-99A had been reserved for the operational test vehicles. In August 1955, the USAF discontinued the use of aircraft-like type designators for missiles, and the XF-99A and YF-99A became XIM-99A and YIM-99A, respectively. Originally the USAF had allocated the designation IM-69, but this was changed (possibly at Boeing's request to keep number 99) to IM-99 in October 1955. In October 1957, the first YIM-99A production-representative prototype flew with full guidance, and succeeded to pass the target within destructive range. In late 1957, Boeing received the production contract for the IM-99A Bomarc A interceptor missile, and in September 1959, the first IM-99A squadron became operational. The IM-99A had an operational radius of and was designed to fly at Mach 2.5–2.8 at a cruising altitude of . It was long and weighed . Its armament was either a conventional warhead or a W40 nuclear warhead (7–10 kiloton yield). A liquid-fuel rocket engine boosted the Bomarc to Mach 2, when its Marquardt RJ43-MA-3 ramjet engines, fueled by 80-octane gasoline, would take over for the remainder of the flight. This was the same model of engine used to power the Lockheed X-7, the Lockheed AQM-60 Kingfisher drone used to test air defenses, and the Lockheed D-21 launched from the back of an M-21, although the Bomarc and Kingfisher engines used different materials due to the longer duration of their flights. Operational units The operational IM-99A missiles were based horizontally in semi-hardened shelters, nicknamed "coffins". After the launch order, the shelter's roof would slide open, and the missile raised to the vertical. After the missile was supplied with fuel for the booster rocket, it would be launched by the Aerojet General LR59-AJ-13 booster. After sufficient speed was reached, the Marquardt RJ43-MA-3 ramjets would ignite and propel the missile to its cruise speed of Mach 2.8 at an altitude of . When the Bomarc was within of the target, its own Westinghouse AN/DPN-34 radar guided the missile to the interception point. The maximum range of the IM-99A was , and it was fitted with either a conventional high-explosive or a 10 kiloton W-40 nuclear fission warhead. The Bomarc relied on the Semi-Automatic Ground Environment (SAGE), an automated control system used by NORAD for detecting, tracking and intercepting enemy bomber aircraft. SAGE allowed for remote launching of the Bomarc missiles, which were housed in a constant combat-ready basis in individual launch shelters in remote areas. At the height of the program, there were 14 Bomarc sites located in the US and two in Canada. Bomarc B The liquid-fuel booster of the Bomarc A had several drawbacks. It took two minutes to fuel before launch, which could be a long time in high-speed intercepts, and its hypergolic propellants (hydrazine and nitric acid) were very dangerous to handle, leading to several serious accidents. As soon as high-thrust solid-fuel rockets became a reality in the mid-1950s, the USAF began to develop a new solid-fueled Bomarc variant, the IM-99B Bomarc B. It used a Thiokol XM51 booster, and also had improved Marquardt RJ43-MA-7 (and finally the RJ43-MA-11) ramjets. The first IM-99B was launched in May 1959, but problems with the new propulsion system delayed the first fully successful flight until July 1960, when a supersonic MQM-15A Regulus II drone was intercepted. 
Because the new booster required less space in the missile, more ramjet fuel could be carried, thus increasing the range to . The terminal homing system was also improved, using the world's first pulse Doppler search radar, the Westinghouse AN/DPN-53. All Bomarc Bs were equipped with the W-40 nuclear warhead. In June 1961, the first IM-99B squadron became operational, and Bomarc B quickly replaced most Bomarc A missiles. On 23 March 1961, a Bomarc B successfully intercepted a Regulus II cruise missile flying at , thus achieving the highest interception in the world up to that date. Boeing built 570 Bomarc missiles between 1957 and 1964, 269 CIM-10A, 301 CIM-10B. In September 1958 Air Research & Development Command decided to transfer the Bomarc program from its testing at Cape Canaveral Air Force Station to a new facility on Santa Rosa Island, south of Eglin AFB Hurlburt Field on the Gulf of Mexico. To operate the facility and to provide training and operational evaluation in the missile program, Air Defense Command established the 4751st Air Defense Wing (Missile) (4751st ADW) on 15 January 1958. The first launch from Santa Rosa took place on 15 January 1959. Operational history In 1955, to support a program which called for 40 squadrons of BOMARC (120 missiles to a squadron for a total of 4,800 missiles), ADC reached a decision on the location of these 40 squadrons and suggested operational dates for each. The sequence was as follows: ... l. McGuire 1/60 2. Suffolk 2/60 3. Otis 3/60 4. Dow 4/60 5. Niagara Falls 1/61 6. Plattsburgh 1/61 7. Kinross 2/61 8. K.I. Sawyer 2/61 9. Langley 2/61 10. Truax 3/61 11. Paine 3/61 12. Portland 3/61 ... At the end of 1958, ADC plans called for construction of the following BOMARC bases in the following order: l. McGuire 2. Suffolk 3. Otis 4. Dow 5. Langley 6. Truax 7. Kinross 8. Duluth 9. Ethan Allen 10. Niagara Falls 11. Paine 12. Adair 13. Travis 14. Vandenberg 15. San Diego 16. Malmstrom 17. Grand Forks 18. Minot 19. Youngstown 20. Seymour-Johnson 21. Bunker Hill 22. Sioux Falls 23. Charleston 24. McConnell 25. Holloman 26. McCoy 27. Amarillo 28. Barksdale 29. Williams. United States The first USAF operational Bomarc squadron was the 46th Air Defense Missile Squadron (ADMS), organized on 1 January 1959 and activated on 25 March. The 46th ADMS was assigned to the New York Air Defense Sector at McGuire Air Force Base, New Jersey. The training program, under the 4751st Air Defense Wing used technicians acting as instructors and was established for a four-month duration. Training included missile maintenance; SAGE operations and launch procedures, including the launch of an unarmed missile at Eglin. In September 1959 the squadron assembled at their permanent station, the Bomarc site near McGuire AFB, and trained for operational readiness. The first Bomarc-A were used at McGuire on 19 September 1959 with Kincheloe AFB getting the first operational IM-99Bs. While several of the squadrons replicated earlier fighter interceptor unit numbers, they were all new organizations with no previous historical counterpart. ADC's initial plans called for some 52 Bomarc sites around the United States with 120 missiles each but as defense budgets decreased during the 1950s the number of sites dropped substantially. Ongoing development and reliability problems didn't help, nor did Congressional debate over the missile's usefulness and necessity. 
In June 1959, the Air Force authorized 16 Bomarc sites with 56 missiles each; the initial five would get the IM-99A with the remainder getting the IM-99B. However, in March 1960, HQ USAF cut deployment to eight sites in the United States and two in Canada. Bomarc incident Within a year of operations, a Bomarc A with a nuclear warhead caught fire at McGuire AFB on 7 June 1960 after its on-board helium tank exploded. While the missile's explosives did not detonate, the heat melted the warhead and released plutonium, which the fire crews spread. The Air Force and the Atomic Energy Commission cleaned up the site and covered it with concrete. This was the only major incident involving the weapon system. The site remained in operation for several years following the fire. Since its closure in 1972, the area has remained off limits, primarily due to low levels of plutonium contamination. Between 2002 and 2004, 21,998 cubic yards of contaminated debris and soils were shipped to what was then known as Envirocare, located in Utah. Modification and deactivation In 1962, the US Air Force started using modified A-models as drones; following the October 1962 tri-service redesignation of aircraft and weapons systems they became CQM-10As. Otherwise the air defense missile squadrons maintained alert while making regular trips to Santa Rosa Island for training and firing practice. After the inactivation of the 4751st ADW(M) on 1 July 1962 and transfer of Hurlburt to Tactical Air Command for air commando operations the 4751st Air Defense Squadron (Missile) remained at Hurlburt and Santa Rosa Island for training purposes. In 1964, the liquid-fueled Bomarc-A sites and squadrons began to be deactivated. The sites at Dow and Suffolk County closed first. The remainder continued to be operational for several more years while the government started dismantling the air defense missile network. Niagara Falls was the first BOMARC B installation to close, in December 1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire and the 46th ADMS inactivated and the base was deactivated. In the era of the intercontinental ballistic missiles the Bomarc, designed to intercept relatively slow manned bombers, had become a useless asset. The remaining Bomarc missiles were used by all armed services as high-speed target drones for tests of other air-defense missiles. The Bomarc A and Bomarc B targets were designated as CQM-10A and CQM-10B, respectively. Following the accident, the McGuire complex has never been sold or converted to other uses and remains in Air Force ownership, making it the most intact site of the eight in the US. It has been nominated to the National Register of Historic Sites. Although a number of IM-99/CIM-10 Bomarcs have been placed on public display, because of concerns about the possible environmental hazards of the thoriated magnesium structure of the airframe several have been removed from public view. Russ Sneddon, director of the Air Force Armament Museum, Eglin Air Force Base, Florida provided information about missing CIM-10 exhibit airframe serial 59–2016, one of the museum's original artifacts from its founding in 1975 and donated by the 4751st Air Defense Squadron at Hurlburt Field, Eglin Auxiliary Field 9, Eglin AFB. As of December 2006, the suspect missile was stored in a secure compound behind the Armaments Museum. In December 2010, the airframe was still on premises, but partly dismantled. 
Canada The Bomarc Missile Program was highly controversial in Canada. The Progressive Conservative government of Prime Minister John Diefenbaker initially agreed to deploy the missiles, and shortly thereafter controversially scrapped the Avro Arrow, a supersonic manned interceptor aircraft, arguing that the missile program made the Arrow unnecessary. Initially, it was unclear whether the missiles would be equipped with nuclear warheads. By 1960 it became known that the missiles were to have a nuclear payload, and a debate ensued about whether Canada should accept nuclear weapons. Ultimately, the Diefenbaker government decided that the Bomarcs should not be equipped with nuclear warheads. The dispute split the Diefenbaker Cabinet, and led to the collapse of the government in 1963. The Official Opposition and Liberal Party leader Lester B. Pearson originally was against nuclear missiles, but reversed his personal position and argued in favour of accepting nuclear warheads. He won the 1963 election, largely on the basis of this issue, and his new Liberal government proceeded to accept nuclear-armed Bomarcs, with the first being deployed on 31 December 1963. When the nuclear warheads were deployed, Pearson's wife, Maryon, resigned her honorary membership in the anti-nuclear weapons group, Voice of Women. Canadian operational deployment of the Bomarc involved the formation of two specialized Surface/Air Missile squadrons. The first to begin operations was No. 446 SAM Squadron at RCAF Station North Bay, which was the command and control center for both squadrons. With construction of the compound and related facilities completed in 1961, the squadron received its Bomarcs in 1961, without nuclear warheads. The squadron became fully operational from 31 December 1963, when the nuclear warheads arrived, until disbanding on 31 March 1972. All the warheads were stored separately and under control of Detachment 1 of the USAF 425th Munitions Maintenance Squadron at Stewart Air Force Base. During operational service, the Bomarcs were maintained on stand-by, on a 24-hour basis, but were never fired, although the squadron test-fired the missiles at Eglin AFB, Florida on annual winter retreats. No. 447 SAM Squadron operating out of RCAF Station La Macaza, Quebec, was activated on 15 September 1962 although warheads were not delivered until late 1963. The squadron followed the same operational procedures as No. 446, its sister squadron. With the passage of time the operational capability of the 1950s-era Bomarc system no longer met modern requirements; the Department of National Defence deemed that the Bomarc missile defense was no longer a viable system, and ordered both squadrons to be stood down in 1972. The bunkers and ancillary facilities remain at both former sites. 
Variants XF-99 (experimental for booster research) XF-99A/XIM-99A (experimental for ramjet research) YF-99A/YIM-99A (service-test) IM-99A/CIM-10A (initial production) IM-99B/CIM-10B ("advanced") CQM-10A (target drone developed from CIM-10A) CQM-10B (target drone developed from CIM-10B) Operators / Royal Canadian Air Force from 1955 to 1968 / Canadian Forces from 1968 to 1972 446 SAM Squadron: 28 IM-99B, CFB North Bay, Ontario 1962–1972 Bomarc site located at 447 SAM Squadron: 28 IM-99B, La Macaza, Quebec (La Macaza – Mont Tremblant International Airport) 1962–1972 Bomarc site located at (Approximately) United States Air Force Air (later Aerospace) Defense Command 6th Air Defense Missile Squadron, 56 IM-99A Activated on 1 February 1959 Assigned to: New York Air Defense Sector Inactivated 15 December 1964 Stationed at: Suffolk County Air Force Base Missile Annex, New York Bomarc site located 3 miles SW at 22d Air Defense Missile Squadron: 28 IM-99A/28 IM-99B Activated on 15 September 1959 Assigned to: Washington Air Defense Sector Reassigned to: 33d Air Division, 1 April 1966 Reassigned to: 20th Air Division, 19 November 1969 Inactivated: 31 October 1972 Stationed at: Langley AFB, Virginia Bomarc site located 3 miles WNW at 26th Air Defense Missile Squadron: 28 IM-99A/28 IM-99B Activated 1 March 1959 Assigned to: Boston Air Defense Sector Reassigned to: 35th Air Division, 1 April 1966 Reassigned to: 21st Air Division, 19 November 1969 Inactivated: 30 April 1972 Stationed at: Otis Air Force Base BOMARC site, Massachusetts Bomarc site located 1 mile NNW at 30th Air Defense Missile Squadron: 28 IM-99A Activated on 1 June 1959 Assigned to Bangor Air Defense Sector Inactivated: 15 December 1964 Stationed at Dow AFB, Maine Bomarc site located 4 mils NNE at 35th Air Defense Missile Squadron: 56 IM-99B Activated 1 June 1960 Assigned to Syracuse Air Defense Sector Reassigned to: Detroit Air Defense Sector, 4 September 1963 Reassigned to: 34th Air Division, 1 April 1966 Reassigned to: 35th Air Division, 15 September 1969 Inactivated: 31 December 1969 Stationed at: Niagara Falls Air Force Missile Site, New York Bomarc site located at 37th Air Defense Missile Squadron: 28 IM-99B Activated 1 March 1960 Assigned to 30th Air Division Reassigned to: Sault Sainte Marie Air Defense Sector, 1 April 1960 Reassigned to: Duluth Air Defense Sector, 1 October 1963 Reassigned to: 29th Air Division, 1 April 1966 Reassigned to: 23d Air Division, 19 November 1969 Inactivated 31 July 1972 Stationed at: Kincheloe AFB, Michigan Bomarc site located 19 miles NW at Raco 46th Air Defense Missile Squadron: 28 IM-99A/56 IM-99B Activated 1 January 1959 Assigned to New York Air Defense Sector Reassigned to: 21st Air Division, 1 April 1966 Reassigned to: 35th Air Division, 1 December 1957 Reassigned to: 21st Air Division, 19 November 1969 Inactivated 31 October 1972 Stationed at: McGuire AFB, New Jersey Bomarc site located 4 miles ESE at 74th Air Defense Missile Squadron: 28 IM-99B Activated 1 April 1960 Assigned to Duluth Air Defense Sector Reassigned to: 29th Air Division, 1 April 1966 Reassigned to: 23d Air Division, 19 November 1969 Inactivated 30 April 1972 Stationed at: Duluth International Airport, Minnesota Bomarc site located 10 miles NE at 4751st Air Defense Missile Squadron Activated 15 January 1959 Assigned to 73d Air Division (Weapons) Reassigned to: 32d Air Division, 1 October 1959 Reassigned to: Montgomery Air Defense Sector, 1 July 1962 Reassigned to: Air Defense, Tactical Air Command, 1 September 1979 Inactivated 
Inactivated 30 September 1979. Stationed at: Eglin Auxiliary Field #9 (Hurlburt Field), Florida. Bomarc site located on Santa Rosa Island at
Bomarc site located at Eglin Auxiliary Field #5 (Piccolo Field) at
Air Force Systems Command
Cape Canaveral Air Force Station, Florida: Launch Complex 4 (LC-4) was used for Bomarc testing and development launches from 2 February 1956 to 15 April 1960 (17 launches).
Vandenberg Air Force Base, California: two launch sites, BOM-1 and BOM-2, were used by the United States Navy for Bomarc launches against aerial targets. The first launch took place on 25 August 1966, and the last two launches occurred on 14 July 1982. BOM-1 hosted 49 launches; BOM-2, 38 launches.
Locations under construction but not activated (each site was programmed for 28 IM-99B missiles):
Camp Adair, Oregon
Charleston AFB, South Carolina
Ethan Allen AFB, Vermont
Paine Field, Washington
Travis AFB, California
Truax Field, Wisconsin
Vandenberg AFB, California
Reference for BOMARC units and locations:
Surviving missiles
Below is a list of museums or sites which have a Bomarc missile on display:
Air Force Armament Museum, Eglin Air Force Base, Florida
Air Force Space & Missile Museum, Cape Canaveral Air Force Station, Florida; on display in Hangar C
Alberta Aviation Museum, Edmonton, Alberta, Canada
Canada Aviation and Space Museum, Ottawa, Ontario, Canada
Hill Aerospace Museum, Hill Air Force Base, Utah
Historical Electronics Museum, Linthicum, Maryland (display of the AN/DPN-53, the first airborne pulse-Doppler radar, used in the Bomarc)
Illinois Soldiers & Sailors Home, Quincy, Illinois
Keesler Air Force Base, Biloxi, Mississippi
Museum of Aviation, Robins Air Force Base, Warner Robins, Georgia
National Museum of Nuclear Science & History, Kirtland Air Force Base, Albuquerque, New Mexico
Octave Chanute Aerospace Museum (former Chanute Air Force Base), Rantoul, Illinois; the museum closed on December 30, 2015
Peterson Air and Space Museum, Peterson Air Force Base, Colorado
Strategic Air and Space Museum, Ashland, Nebraska
USAF Airman Heritage Museum, Lackland Air Force Base, San Antonio, Texas
Vandenberg Air Force Base (Space and Missile Heritage Center), California; Bomarc not accessible to the public
Impact on popular music
The Bomarc missile captured the imagination of the American and Canadian popular music industry, giving rise to a pop music group, the Bomarcs (composed mainly of servicemen stationed at a Florida radar site that tracked Bomarcs), a record label, Bomarc Records, and a moderately successful Canadian pop group, The Beau Marks.
See also
References
Bibliography
Clearwater, John. Canadian Nuclear Weapons: The Untold Story of Canada's Cold War Arsenal. Toronto, Ontario, Canada: Dundurn Press, 1999.
Clearwater, John. U.S. Nuclear Weapons in Canada. Toronto, Ontario, Canada: Dundurn Press, 1999.
Cornett, Lloyd H. Jr. and Mildred W. Johnson. A Handbook of Aerospace Defense Organization 1946–1980. Peterson Air Force Base, Colorado: Office of History, Aerospace Defense Center, 1980. No ISBN.
Gibson, James N. Nuclear Weapons of the United States: An Illustrated History. Atglen, Pennsylvania: Schiffer Publishing Ltd., 1996.
Jenkins, Dennis R. and Tony R. Landis. Experimental & Prototype U.S. Air Force Jet Fighters. North Branch, Minnesota: Specialty Press, 2008.
Nicks, Don, John Bradley and Chris Charland. A History of the Air Defence of Canada 1948–1997. Ottawa, Ontario, Canada: Commander Fighter Group, 1997.
Pedigree of Champions: Boeing Since 1916, Third Edition. Seattle, Washington: The Boeing Company, 1969.
Winkler, David F. Searching the Skies: The Legacy of the United States Cold War Defense Radar Program. Langley Air Force Base, Virginia: United States Air Force Headquarters Air Combat Command, 1997.
External links
RCAF 446 SAM Squadron
BOMARC Missile Sites
Boeing Company History, Bomarc
Astronautix.com
Bomarc pictures
Bomarc Video Clip
SAGE-BOMARC risks – Oral history: Les Earnest talks about the air defense system called SAGE and the ground-to-air missile called BOMARC.
A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a road vehicle that carries significantly more passengers than an average car or van. It is most commonly used in public transport, but is also used for charter purposes or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large-vehicle licence above and beyond a regular driving licence. Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns, and others are privately operated for a wide range of purposes, including as rock and pop band tour vehicles.
Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
Name
The word bus is a shortened form of the Latin adjectival form omnibus ("for all"), the dative plural of omnis ("all"). The theoretical full name is the French voiture omnibus ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. To encourage customers, he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname: omnes is the masculine and feminine nominative, vocative and accusative plural form of the Latin adjective omnis ("all"), combined here with omnibus, the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone". His transport scheme was a huge success, although not as he had intended, as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept, Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.
History
Steam buses
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions that were too hazardous for horse-drawn transportation. The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833.
Steam carriages were much less likely to overturn than horse-drawn carriages, travelled faster, were much cheaper to run, and caused much less damage to the road surface due to their wide tyres. However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, with the Locomotive Act 1861 imposing restrictive speed limits on "road locomotives", both in towns and cities and in the country.
Trolleybuses
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all." The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration. Max Schiemann opened a passenger-carrying trolleybus line in 1901 near Dresden, Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current-collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain, on 20 June 1911.
Motor buses
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model of Benz omnibus ran for a short time in 1898 in the rural area around Llandudno, Wales. Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company, which was first used on the streets of London on 23 April 1898. The vehicle accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard. The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company; it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War. The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz.
General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division. Models expanded in the 20th century, leading to the widespread introduction from the 1950s of the recognizable contemporary form of the full-sized bus. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
Types
Formats include the single-decker bus, the double-decker bus (both usually with a rigid chassis) and the articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, as are passenger-carrying trailers, either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity, and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing controlled alignment at bus stops and less space taken up by guided lanes than by conventional roads or bus lanes. Bus manufacturing may be carried out by a single company (an integral manufacturer), or by one manufacturer building a bus body over a chassis produced by another manufacturer.
Design
Accessibility
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design, optionally with 'kneeling' air suspension, and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Before such technology became more widespread, wheelchair users could only use specialist para-transit mobility buses. Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
Configuration
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines.
Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three. Guidance Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways. Liveries Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising or part or all of their visible surfaces (as mobile billboard). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative. Propulsion The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged on stops/stations to keep the size of the battery small/lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s. Dimensions United Kingdom and European Union: Maximum Length: Single rear axle . Twin rear axle . Maximum Width: United States, Canada and Mexico: Maximum Length: None Maximum Width: Manufacture Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products. Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service. As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000. 
Uses Public transport Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis. Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services. Tourism Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches. In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey. Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package. Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories. Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa. In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles. Student transport In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. 
American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading children passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips, or transport to associated sports, music, or other school events. Private charter Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers. Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares. Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on regular basis for transportation of children to and from their homes. Chartered buses are also used by education institutes for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttles buses for transport at events such as festivals or conferences. Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots. High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections). Private ownership Many organisations, including the police, not for profit, social or charitable groups with a regular need for group transport may find it practical or cost-effective to own and operate a bus for their own needs. 
These are often minibuses for practical, tax and driver licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers. Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses. Promotion Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually of second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices. Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation. Goods transport In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries. Around the world Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced secondhand from other countries, such as the Malta bus, and buses in use in Africa. 
Other countries such as Cuba required novel solutions to import restrictions, with the creation of the "camellos" (camel bus), a specially manufactured trailer bus. After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz buses and Mitsubishi Fuso expanded into other continents influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers such as Trolza exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple decker bus for the busy route between Rome and Tivoli that could carry eighty-eight passengers. It was unique not only in being a triple decker but having a separate smoking compartment on the third level. The buses to be found in countries around the world often reflect the quality of the local road network, with high-floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact, where dense urbanisation such as in Japan and the far east has led to the adoption of high capacity long multi-axle buses, often double-deckers while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes. Bus expositions Euro Bus Expo is a trade show, which is held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach and light rail industry, the three-day event offers visitors from Europe and beyond the chance to see and experience the very latest vehicles and product and service innovations right across the industry. Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially. Use of retired buses Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard for breaking up for scrap and spare parts. Some buses which are not economical to keep running as service buses are often converted for use other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction. Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company. Some buses that have reached the end of their service that are still in good condition are sent for export to other countries. Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. 
These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as a high-capacity ambulance bus or a mobile command centre. Some organisations adapt and operate playbuses or learning buses to provide playground or learning environments for children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted into a mobile theatre and catwalk fashion show. Some buses meet a destructive end by being entered in banger races or demolition derbies. A larger number of old retired buses have also been converted into mobile holiday homes and campers.
Bus preservation
Rather than being scrapped or converted for other uses, retired buses are sometimes saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details, such as internal notices and rollsigns, restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses, in either working or static condition, form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while in-service examples are still in use by other operators. This often happens when a change in design or operating practice, such as the switch to one-person operation or low-floor technology, renders some buses redundant while still relatively new.
Modification as railway vehicles
See also
Coach (bus)
Bicycle carrier (bus-mounted bike racks)
Bus spotting
Bus station
Cutaway bus
Dollar van
Horsebus
Intercity bus
Intercity bus driver
List of fictional buses
Multi-axle bus
Public light bus
Trackless train
Transit bus
References
Bibliography
External links
American Bus Association
Bali (; ) is a province of Indonesia and the westernmost of the Lesser Sunda Islands. East of Java and west of Lombok, the province includes the island of Bali and a few smaller offshore islands, notably Nusa Penida, Nusa Lembongan, and Nusa Ceningan to the southeast. The provincial capital, Denpasar, is the most populous city in the Lesser Sunda Islands and the second-largest, after Makassar, in Eastern Indonesia. The upland town of Ubud in Greater Denpasar is considered Bali's cultural centre. The province is Indonesia's main tourist destination, with a significant rise in tourism since the 1980s. Tourism-related business makes up 80% of its economy. Bali is the only Hindu-majority province in Indonesia, with 86.9% of the population adhering to Balinese Hinduism. It is renowned for its highly developed arts, including traditional and modern dance, sculpture, painting, leather, metalworking, and music. The Indonesian International Film Festival is held every year in Bali. Other international events that have been held in Bali include Miss World 2013, the 2018 Annual Meetings of the International Monetary Fund and the World Bank Group and the 2022 G20 summit. In March 2017, TripAdvisor named Bali as the world's top destination in its Traveller's Choice award, which it also earned in January 2021. Bali is part of the Coral Triangle, the area with the highest biodiversity of marine species, especially fish and turtles. In this area alone, over 500 reef-building coral species can be found. For comparison, this is about seven times as many as in the entire Caribbean. Bali is the home of the Subak irrigation system, a UNESCO World Heritage Site. It is also home to a unified confederation of kingdoms composed of 10 traditional royal Balinese houses, each house ruling a specific geographic area. The confederation is the successor of the Bali Kingdom. The royal houses are not recognised by the government of Indonesia; however, they originated before Dutch colonisation. History Ancient Bali was inhabited around 2000 BC by Austronesian people who migrated originally from the island of Taiwan to Southeast Asia and Oceania through Maritime Southeast Asia. Culturally and linguistically, the Balinese are closely related to the people of the Indonesian archipelago, Malaysia, Brunei, the Philippines, and Oceania. Stone tools dating from this time have been found near the village of Cekik in the island's west. In ancient Bali, nine Hindu sects existed, the Pasupata, Bhairawa, Siwa Shidanta, Vaishnava, Bodha, Brahma, Resi, Sora and Ganapatya. Each sect revered a specific deity as its personal Godhead. Inscriptions from 896 and 911 do not mention a king, until 914, when Sri Kesarivarma is mentioned. They also reveal an independent Bali, with a distinct dialect, where Buddhism and Shaivism were practised simultaneously. Mpu Sindok's great-granddaughter, Mahendradatta (Gunapriyadharmapatni), married the Bali king Udayana Warmadewa (Dharmodayanavarmadeva) around 989, giving birth to Airlangga around 1001. This marriage also brought more Hinduism and Javanese culture to Bali. Princess Sakalendukirana appeared in 1098. Suradhipa reigned from 1115 to 1119, and Jayasakti from 1146 until 1150. Jayapangus appears on inscriptions between 1178 and 1181, while Adikuntiketana and his son Paramesvara in 1204. Balinese culture was strongly influenced by Indian, Chinese, and particularly Hindu culture, beginning around the 1st century AD. 
The name Bali dwipa ("Bali island") has been discovered from various inscriptions, including the Blanjong pillar inscription written by Sri Kesari Warmadewa in 914 AD and mentioning Walidwipa. It was during this time that the people developed their complex irrigation system subak to grow rice in wet-field cultivation. Some religious and cultural traditions still practised today can be traced to this period. The Hindu Majapahit Empire (1293–1520 AD) on eastern Java founded a Balinese colony in 1343. The uncle of Hayam Wuruk is mentioned in the charters of 1384–86. Mass Javanese immigration to Bali occurred in the next century when the Majapahit Empire fell in 1520. Bali's government then became an independent collection of Hindu kingdoms which led to a Balinese national identity and major enhancements in culture, arts, and economy. The nation with various kingdoms became independent for up to 386 years until 1906 when the Dutch subjugated and repulsed the natives for economic control and took it over. Portuguese contacts The first known European contact with Bali is thought to have been made in 1512, when a Portuguese expedition led by Antonio Abreu and Francisco Serrão sighted its northern shores. It was the first expedition of a series of bi-annual fleets to the Moluccas, that throughout the 16th century travelled along the coasts of the Sunda Islands. Bali was also mapped in 1512, in the chart of Francisco Rodrigues, aboard the expedition. In 1585, a ship foundered off the Bukit Peninsula and left a few Portuguese in the service of Dewa Agung. Dutch East Indies In 1597, the Dutch explorer Cornelis de Houtman arrived at Bali, and the Dutch East India Company was established in 1602. The Dutch government expanded its control across the Indonesian archipelago during the second half of the 19th century. Dutch political and economic control over Bali began in the 1840s on the island's north coast when the Dutch pitted various competing Balinese realms against each other. In the late 1890s, struggles between Balinese kingdoms on the island's south were exploited by the Dutch to increase their control. In June 1860, the famous Welsh naturalist, Alfred Russel Wallace, travelled to Bali from Singapore, landing at Buleleng on the north coast of the island. Wallace's trip to Bali was instrumental in helping him devise his Wallace Line theory. The Wallace Line is a faunal boundary that runs through the strait between Bali and Lombok. It is a boundary between species. In his travel memoir The Malay Archipelago, Wallace wrote of his experience in Bali, which has a strong mention of the unique Balinese irrigation methods: I was astonished and delighted; as my visit to Java was some years later, I had never beheld so beautiful and well-cultivated a district out of Europe. A slightly undulating plain extends from the seacoast about inland, where it is bounded by a fine range of wooded and cultivated hills. Houses and villages, marked out by dense clumps of coconut palms, tamarind and other fruit trees, are dotted about in every direction; while between them extend luxurious rice grounds, watered by an elaborate system of irrigation that would be the pride of the best-cultivated parts of Europe. The Dutch mounted large naval and ground assaults at the Sanur region in 1906 and were met by the thousands of members of the royal family and their followers who rather than yield to the superior Dutch force committed ritual suicide (puputan) to avoid the humiliation of surrender. 
Despite Dutch demands for surrender, an estimated 200 Balinese killed themselves rather than surrender. In the Dutch intervention in Bali, a similar mass suicide occurred in the face of a Dutch assault in Klungkung. Afterwards, the Dutch governours exercised administrative control over the island, but local control over religion and culture generally remained intact. Dutch rule over Bali came later and was never as well established as in other parts of Indonesia such as Java and Maluku. In the 1930s, anthropologists Margaret Mead and Gregory Bateson, artists Miguel Covarrubias and Walter Spies, and musicologist Colin McPhee all spent time here. Their accounts of the island and its peoples created a western image of Bali as "an enchanted land of aesthetes at peace with themselves and nature". Western tourists began to visit the island. The sensuous image of Bali was enhanced in the West by a quasi-pornographic 1932 documentary Virgins of Bali about a day in the lives of two teenage Balinese girls whom the film's narrator Deane Dickason notes in the first scene "bathe their shamelessly nude bronze bodies". Under the looser version of the Hays code that existed up to 1934, nudity involving "civilised" (i.e. white) women was banned, but permitted with "uncivilised" (i.e. all non-white women), a loophole that was exploited by the producers of Virgins of Bali. The film, which mostly consisted of scenes of topless Balinese women was a great success in 1932, and almost single-handedly made Bali into a popular spot for tourists. Imperial Japan occupied Bali during World War II. It was not originally a target in their Netherlands East Indies Campaign, but as the airfields on Borneo were inoperative due to heavy rains, the Imperial Japanese Army decided to occupy Bali, which did not suffer from comparable weather. The island had no regular Royal Netherlands East Indies Army (KNIL) troops. There was only a Native Auxiliary Corps Prajoda (Korps Prajoda) consisting of about 600 native soldiers and several Dutch KNIL officers under the command of KNIL Lieutenant Colonel W.P. Roodenburg. On 19 February 1942, the Japanese forces landed near the town of Sanoer (Sanur). The island was quickly captured. During the Japanese occupation, a Balinese military officer, I Gusti Ngurah Rai, formed a Balinese 'freedom army'. The harshness of Japanese occupation forces made them more resented than the Dutch colonial rulers. Independence from the Dutch In 1945, Bali was liberated by the British 5th infantry Division under the command of Major-General Robert Mansergh who took the Japanese surrender. Once Japanese forces had been repatriated the island was handed over to the Dutch the following year. In 1946, the Dutch constituted Bali as one of the 13 administrative districts of the newly proclaimed State of East Indonesia, a rival state to the Republic of Indonesia, which was proclaimed and headed by Sukarno and Hatta. Bali was included in the "Republic of the United States of Indonesia" when the Netherlands recognised Indonesian independence on 29 December 1949. The first governor of Bali, Anak Agung Bagus Suteja, was appointed by President Sukarno in 1958, when Bali became a province. Contemporary The 1963 eruption of Mount Agung killed thousands, created economic havoc, and forced many displaced Balinese to be transmigrated to other parts of Indonesia. 
Mirroring the widening of social divisions across Indonesia in the 1950s and early 1960s, Bali saw conflict between supporters of the traditional caste system, and those rejecting this system. Politically, the opposition was represented by supporters of the Indonesian Communist Party (PKI) and the Indonesian Nationalist Party (PNI), with tensions and ill-feeling further increased by the PKI's land reform programmes. A purported coup attempt in Jakarta was averted by forces led by General Suharto. The army became the dominant power as it instigated a violent anti-communist purge, in which the army blamed the PKI for the coup. Most estimates suggest that at least 500,000 people were killed across Indonesia, with an estimated 80,000 killed in Bali, equivalent to 5% of the island's population. With no Islamic forces involved as in Java and Sumatra, upper-caste PNI landlords led the extermination of PKI members. As a result of the 1965–66 upheavals, Suharto was able to manoeuvre Sukarno out of the presidency. His "New Order" government re-established relations with Western countries. The pre-War Bali as "paradise" was revived in a modern form. The resulting large growth in tourism has led to a dramatic increase in Balinese standards of living and significant foreign exchange earned for the country. A bombing in 2002 by militant Islamists in the tourist area of Kuta killed 202 people, mostly foreigners. This attack, and another in 2005, severely reduced tourism, producing much economic hardship on the island. On 27 November 2017, Mount Agung erupted five times, causing the evacuation of thousands, disrupting air travel and causing much environmental damage. Further eruptions also occurred between 2018 and 2019. On 15–16 November 2022, was held in Nusa Dua the 2022 G20 Bali summit, the seventeenth meeting of Group of Twenty (G20). Geography The island of Bali lies east of Java, and is approximately 8 degrees south of the equator. Bali and Java are separated by the Bali Strait. East to west, the island is approximately wide and spans approximately north to south; administratively it covers , or without Nusa Penida District, which comprises three small islands off the southeast coast of Bali. Its population density was roughly in 2020. Bali's central mountains include several peaks over in elevation and active volcanoes such as Mount Batur. The highest is Mount Agung (), known as the "mother mountain", which is an active volcano rated as one of the world's most likely sites for a massive eruption within the next 100 years. In late 2017 Mount Agung started erupting and large numbers of people were evacuated, temporarily closing the island's airport. Mountains range from centre to the eastern side, with Mount Agung the easternmost peak. Bali's volcanic nature has contributed to its exceptional fertility and its tall mountain ranges provide the high rainfall that supports the highly productive agriculture sector. South of the mountains is a broad, steadily descending area where most of Bali's large rice crop is grown. The northern side of the mountains slopes more steeply to the sea and is the main coffee-producing area of the island, along with rice, vegetables, and cattle. The longest river, Ayung River, flows approximately (see List of rivers of Bali). The island is surrounded by coral reefs. Beaches in the south tend to have white sand while those in the north and west have black sand. Bali has no major waterways, although the Ho River is navigable by small sampan boats. 
Black sand beaches between Pasut and Klatingdukuh are being developed for tourism, but apart from the seaside temple of Tanah Lot, they are not yet used for significant tourism. The largest city is the provincial capital, Denpasar, near the southern coast. Its population is around 726,800 (mid 2022). Bali's second-largest city is the old colonial capital, Singaraja, which is located on the north coast and is home to around 150,000 people in 2020. Other important cities include the beach resort, Kuta, which is practically part of Denpasar's urban area, and Ubud, situated at the north of Denpasar, is the island's cultural centre. Three small islands lie to the immediate south-east and all are administratively part of the Klungkung regency of Bali: Nusa Penida, Nusa Lembongan and Nusa Ceningan. These islands are separated from Bali by the Badung Strait. To the east, the Lombok Strait separates Bali from Lombok and marks the biogeographical division between the fauna of the Indomalayan realm and the distinctly different fauna of Australasia. The transition is known as the Wallace Line, named after Alfred Russel Wallace, who first proposed a transition zone between these two major biomes. When sea levels dropped during the Pleistocene ice age, Bali was connected to Java and Sumatra and to the mainland of Asia and shared the Asian fauna, but the deep water of the Lombok Strait continued to keep Lombok Island and the Lesser Sunda archipelago isolated. Climate Being just 8 degrees south of the equator, Bali has a fairly even climate all year round. Average year-round temperature stands at around with a humidity level of about 85%. Daytime temperatures at low elevations vary between , but the temperatures decrease significantly with increasing elevation. The west monsoon is in place from approximately October to April, and this can bring significant rain, particularly from December to March. During the rainy season, there are comparatively fewer tourists seen in Bali. During the Easter and Christmas holidays, the weather is very unpredictable. Outside of the monsoon period, humidity is relatively low and any rain is unlikely in lowland areas. Ecology Bali lies just to the west of the Wallace Line, and thus has a fauna that is Asian in character, with very little Australasian influence, and has more in common with Java than with Lombok. An exception is the yellow-crested cockatoo, a member of a primarily Australasian family. There are around 280 species of birds, including the critically endangered Bali myna, which is endemic. Others include barn swallow, black-naped oriole, black racket-tailed treepie, crested serpent-eagle, crested treeswift, dollarbird, Java sparrow, lesser adjutant, long-tailed shrike, milky stork, Pacific swallow, red-rumped swallow, sacred kingfisher, sea eagle, woodswallow, savanna nightjar, stork-billed kingfisher, yellow-vented bulbul and great egret. Until the early 20th century, Bali was possibly home to several large mammals: banteng, leopard and the endemic Bali tiger. The banteng still occurs in its domestic form, whereas leopards are found only in neighbouring Java, and the Bali tiger is extinct. The last definite record of a tiger on Bali dates from 1937 when one was shot, though the subspecies may have survived until the 1940s or 1950s. Pleistocene and Holocene megafaunas include banteng and giant tapir (based on speculations that they might have reached up to the Wallace Line), and rhinoceros. 
Squirrels are quite commonly encountered; less often seen is the Asian palm civet, which is also kept on coffee farms to produce kopi luwak. Bats are well represented; perhaps the most famous place to encounter them is the Goa Lawah (Temple of the Bats), where they are worshipped by the locals and also constitute a tourist attraction. They also occur in other cave temples, for instance at Gangga Beach. Two species of monkey occur. The crab-eating macaque, known locally as "kera", is quite common around human settlements and temples, where it becomes accustomed to being fed by humans, particularly in any of the three "monkey forest" temples, such as the popular one in the Ubud area. They are also quite often kept as pets by locals. The second monkey, the Javan langur, locally known as "lutung", is endemic to Java and some surrounding islands such as Bali, and is far rarer and more elusive. They occur in a few places apart from the West Bali National Park. They are born an orange colour, though by their first year they will have changed to a more blackish colouration. In Java, however, there is more of a tendency for this species to retain its juvenile orange colour into adulthood, and a mixture of black and orange monkeys can be seen together as a family. Other rarer mammals include the leopard cat, Sunda pangolin and black giant squirrel. Snakes include the king cobra and reticulated python. The water monitor can grow to at least in length and can move quickly.
The rich coral reefs around the coast, particularly around popular diving spots such as Tulamben, Amed, Menjangan or neighbouring Nusa Penida, host a wide range of marine life, for instance hawksbill turtles, giant sunfish, giant manta rays, giant moray eels, bumphead parrotfish, hammerhead sharks, reef sharks, barracuda, and sea snakes. Dolphins are commonly encountered on the north coast near Singaraja and Lovina. A team of scientists surveyed 33 sea sites around Bali from 29 April to 11 May 2011. They recorded 952 species of reef fish, of which 8 were new discoveries, at Pemuteran, Gilimanuk, Nusa Dua, Tulamben and Candidasa, and 393 coral species, including two new ones at Padangbai and between Padangbai and Amed. The average coverage level of healthy coral was 36% (better than in Raja Ampat and Halmahera by 29%, or in Fakfak and Kaimana by 25%), with the highest coverage found in Gili Selang and Gili Mimpang in Candidasa, Karangasem Regency.
Among the larger trees, the most common are banyan trees, jackfruit, bamboo species and acacia trees, along with endless rows of coconut palms and banana plants. Numerous flowers can be seen: hibiscus, frangipani, bougainvillea, poinsettia, oleander, jasmine, water lily, lotus, roses, begonias, orchids and hydrangeas. On higher ground that receives more moisture, for instance around Kintamani, certain species of fern trees, mushrooms and even pine trees thrive. Rice comes in many varieties. Other plants with agricultural value include salak, mangosteen, corn, Kintamani orange, coffee and water spinach.
Environment
Over-exploitation by the tourist industry has led to 200 out of 400 rivers on the island drying up. Research suggests that the southern part of Bali could face a water shortage. To ease the shortage, the central government plans to build a water catchment and processing facility at the Petanu River in Gianyar, with a capacity of 300 litres of water per second to be channelled to Denpasar, Badung and Gianyar in 2013.
A 2010 Environment Ministry report on its environmental quality index gave Bali a score of 99.65, which was the highest score of Indonesia's 33 provinces. The score considers the level of total suspended solids, dissolved oxygen, and chemical oxygen demand in water. Erosion at Lebih Beach has seen of land lost every year. Decades ago, this beach was used for holy pilgrimages with more than 10,000 people, but they have now moved to Masceti Beach. In 2017, a year when Bali received nearly 5.7 million tourists, government officials declared a "garbage emergency" in response to the covering of 3.6-mile stretch of coastline in plastic waste brought in by the tide, amid concerns that the pollution could dissuade visitors from returning. Indonesia is one of the world's worst plastic polluters, with some estimates suggesting the country is the source of around 10 per cent of the world's plastic waste. Government Politics In the national legislature, Bali is represented by nine members, with a single electoral district covering the whole province. The Bali Regional People's Representative Council, the provincial legislature, has 55 members. The province's politics has historically been dominated by the Indonesian Democratic Party of Struggle (PDI-P), which has won by far the most votes in every election in Bali since the first free elections in 1999. Administrative divisions The province is divided into eight regencies (kabupaten) and one city (kota). These are, with their areas and their populations at the 2010 census and the 2020 census, together with the official estimates as at mid 2022 and the Human Development Index for each regency and city. Economy In the 1970s, the Balinese economy was largely agriculture-based in terms of both output and employment. Tourism is now the largest single industry in terms of income, and as a result, Bali is one of Indonesia's wealthiest regions. In 2003, around 80% of Bali's economy was tourism related. By the end of June 2011, the rate of non-performing loans of all banks in Bali were 2.23%, lower than the average of Indonesian banking industry non-performing loan rates (about 5%). The economy, however, suffered significantly as a result of the Islamists' terrorist bombings in 2002 and 2005. The tourism industry has since recovered from these events. Agriculture Although tourism produces the GDP's largest output, agriculture is still the island's biggest employer. Fishing also provides a significant number of jobs. Bali is also famous for its artisans who produce a vast array of handicrafts, including batik and ikat cloth and clothing, wooden carvings, stone carvings, painted art and silverware. Notably, individual villages typically adopt a single product, such as wind chimes or wooden furniture. The Arabica coffee production region is the highland region of Kintamani near Mount Batur. Generally, Balinese coffee is processed using the wet method. This results in a sweet, soft coffee with good consistency. Typical flavours include lemon and other citrus notes. Many coffee farmers in Kintamani are members of a traditional farming system called Subak Abian, which is based on the Hindu philosophy of "Tri Hita Karana". According to this philosophy, the three causes of happiness are good relations with God, other people, and the environment. The Subak Abian system is ideally suited to the production of fair trade and organic coffee production. Arabica coffee from Kintamani is the first product in Indonesia to request a geographical indication. 
Tourism In 1963, the Bali Beach Hotel in Sanur was built by Sukarno and boosted tourism in Bali. Before the Bali Beach Hotel's construction, there were only three significant tourist-class hotels on the island. Construction of hotels and restaurants then began to spread throughout Bali. Tourism further increased after the Ngurah Rai International Airport opened in 1970. The Buleleng regency government encouraged the tourism sector as one of the mainstays for economic progress and social welfare. The tourism industry is primarily focused in the south, while also being significant in other parts of the island. The prominent tourist locations are the town of Kuta (with its beach) and its outer suburbs of Legian and Seminyak (which were once independent townships), the east coast town of Sanur (once the only tourist hub), Ubud towards the centre of the island, Jimbaran to the south of Ngurah Rai International Airport, and the newer developments of Nusa Dua and Pecatu. The United States government lifted its travel warnings in 2008. The Australian government issued an advisory on Friday, 4 May 2012, with the overall level of the advisory lowered to 'Exercise a high degree of caution'. The Swedish government issued a new warning on Sunday, 10 June 2012, after a tourist died of methanol poisoning. Australia last issued an advisory on Monday, 5 January 2015, due to new terrorist threats. An offshoot of tourism is the growing real estate industry. Bali's real estate has been developing rapidly in the main tourist areas of Kuta, Legian, Seminyak, and Oberoi. Most recently, high-end 5-star projects are under development on the Bukit peninsula, on the island's south side. Expensive villas are being developed along the cliff sides of south Bali, with commanding panoramic ocean views. Many foreign and domestic investors, including Jakarta-based individuals and companies, are fairly active, and investment in other areas of the island continues to grow. Land prices, despite the worldwide economic crisis, have remained stable. In the last half of 2008, Indonesia's currency dropped approximately 30% against the US dollar, providing many overseas visitors with improved value for their currencies. Bali's tourism economy survived the Islamist terrorist bombings of 2002 and 2005, and the tourism industry has slowly recovered and surpassed its pre-bombing levels; the long-term trend has been a steady increase in visitor arrivals. In 2010, Bali received 2.57 million foreign tourists, surpassing the target of 2.0–2.3 million. The average occupancy of starred hotels reached 65%, so the island should still be able to accommodate tourists for some years without the addition of new rooms or hotels, although at peak season some are fully booked. Bali received the Best Island award from Travel and Leisure in 2010. Bali won because of its attractive surroundings (both mountain and coastal areas), diverse tourist attractions, excellent international and local restaurants, and the friendliness of the local people. The Balinese culture and its religion are also considered a main factor in the award. One of the most prestigious events that symbolize a strong relationship between a god and its followers is the Kecak dance. According to a BBC Travel ranking released in 2011, Bali is one of the world's best islands, ranking second after Santorini, Greece. In 2006, Elizabeth Gilbert's memoir Eat, Pray, Love was published, and in August 2010 it was adapted into the film Eat Pray Love.
Filming took place at Ubud and Padang-Padang Beach in Bali. Both the book and the film fuelled a boom in tourism in Ubud, the hill town and cultural and tourist centre that was the focus of Gilbert's quest for balance and love through traditional spirituality and healing. In January 2016, after musician David Bowie died, it was revealed that in his will, Bowie had asked for his ashes to be scattered in Bali, in accordance with Buddhist rituals. He had visited and performed in several Southeast Asian cities early in his career, including Bangkok and Singapore. Since 2011, China has displaced Japan as the second-largest source of tourists to Bali; Australia still tops the list, while India has also emerged as a growing source of tourists. Chinese tourists increased by 17% in 2011 from 2010 due to the impact of ACFTA and new direct flights to Bali. In January 2012, Chinese tourists increased by 222.18% compared to January 2011, while Japanese tourists declined by 23.54% year on year. Bali authorities reported the island had 2.88 million foreign tourists and 5 million domestic tourists in 2012, marginally surpassing the expectation of 2.8 million foreign tourists. Based on a Bank Indonesia survey in May 2013, 34.39 per cent of tourists were upper-middle class, spending between $1,286 and $5,592, and were dominated by visitors from Australia, India, France, China, Germany and the UK. Some Chinese tourists have increased their levels of spending from previous years. 30.26 per cent of tourists were middle class, spending between $662 and $1,285. In 2017 it was expected that Chinese tourists would outnumber Australian tourists. In January 2020, 10,000 Chinese tourists cancelled trips to Bali due to the COVID-19 pandemic. Because of the COVID-19 pandemic travel restrictions, Bali welcomed only 1.07 million international travelers in 2020, most of them between January and March, a decline of 87% compared to 2019. In the first half of 2021, the island welcomed 43 international travelers. The pandemic dealt a major blow to Bali's tourism-dependent economy. On 3 February 2022, Bali reopened to foreign tourists after two years of closure due to the pandemic. In 2022, Indonesia's Minister of Health, Budi Sadikin, stated that the tourism industry in Bali would be complemented by the medical industry. At the beginning of 2023, following a series of accidents, the governor of Bali, Wayan Koster, called for a ban on the use of motorcycles by tourists and proposed cancelling the violators' visas. The move sparked widespread outrage on social media. Transportation The Ngurah Rai International Airport is located near Jimbaran, on the isthmus at the southernmost part of the island. Lt. Col. Wisnu Airfield is in northwest Bali. A coastal road circles the island, and three major two-lane arteries cross the central mountains at passes reaching 1,750 m in height (at Penelokan). The Ngurah Rai Bypass is a four-lane expressway that partly encircles Denpasar. Bali has no railway lines. There is a car ferry between Gilimanuk on the west coast of Bali and Ketapang on Java. In December 2010, the Government of Indonesia invited investors to build a new Tanah Ampo Cruise Terminal at Karangasem, Bali, with a projected worth of $30 million. On 17 July 2011, the first cruise ship (Sun Princess) anchored offshore from the wharf of Tanah Ampo harbour. The current pier will eventually be extended to accommodate international cruise ships.
The harbour is safer than the existing facility at Benoa and has a scenic backdrop of the east Bali mountains and green rice fields. The tender for improvement was subject to delays, and as of July 2013 the situation was unclear, with cruise line operators complaining and even refusing to use the existing facility at Tanah Ampo. A memorandum of understanding was signed by two ministers, Bali's governor and the Indonesian Train Company to build a railway along the coast around the island. As of July 2015, no details of these proposed railways had been released. In 2019 it was reported in Gapura Bali that Wayan Koster, governor of Bali, "is keen to improve Bali's transportation infrastructure and is considering plans to build an electric rail network across the island". On 16 March 2011, (Tanjung) Benoa port received the "Best Port Welcome 2010" award from London's "Dream World Cruise Destination" magazine. The government plans to expand the role of Benoa port as an export-import port to boost Bali's trade and industry sector. In 2013, the Tourism and Creative Economy Ministry advised that 306 cruise liners were scheduled to visit Indonesia, an increase of 43 per cent compared to the previous year. In May 2011, an integrated Area Traffic Control System (ATCS) was implemented to reduce traffic jams at four crossing points: the Ngurah Rai statue, the Dewa Ruci Kuta crossing, the Jimbaran crossing and the Sanur crossing. ATCS is an integrated system connecting all traffic lights, CCTVs and other traffic signals with a monitoring office at the police headquarters. It has successfully been implemented in other ASEAN countries and will be implemented at other crossings in Bali. On 21 December 2011, construction started on the Nusa Dua-Benoa-Ngurah Rai International Airport toll road, which will also provide a special lane for motorcycles. The project is being carried out by seven state-owned enterprises led by PT Jasa Marga, which holds 60% of the shares. PT Jasa Marga Bali Tol will construct the toll road together with its access road. The construction is estimated to cost Rp.2.49 trillion ($273.9 million). The project passes through areas of mangrove forest and beach. The elevated toll road is built over the mangrove forest on 18,000 concrete pillars that occupy two hectares of mangrove forest. This was compensated for by the planting of 300,000 mangrove trees along the road. Also on 21 December 2011, construction of the Dewa Ruci underpass started at the busy Dewa Ruci junction near Bali Kuta Galeria, with an estimated cost of Rp136 billion ($14.9 million) from the state budget. On 23 September 2013, the Bali Mandara Toll Road was opened, the Dewa Ruci Junction (Simpang Siur) underpass having been opened previously. To solve chronic traffic problems, the province will also build a toll road connecting Serangan with Tohpati, a toll road connecting Kuta, Denpasar, and Tohpati, and a flyover connecting Kuta and Ngurah Rai Airport. Demographics The population of Bali was 3,890,757 as of the 2010 census, and 4,317,404 at the 2020 census; the official estimate as at mid-2022 was 4,415,100. There are an estimated 30,000 expatriates living in Bali. Ethnic origins A DNA study in 2005 by Karafet et al. found that 12% of Balinese Y-chromosomes are of likely Indian origin, while 84% are of likely Austronesian origin, and 2% of likely Melanesian origin. Caste system Pre-modern Bali had four castes, as Jeff Lewis and Belinda Lewis state, but with a "very strong tradition of communal decision-making and interdependence".
The four castes have been classified as Sudra (Shudra), Wesia (Vaishyas), Satria (Kshatriyas) and Brahmana (Brahmin). Nineteenth-century scholars such as Crawfurd and Friederich suggested that the Balinese caste system had Indian origins, but Helen Creese states that scholars such as Brumund, who had visited and stayed on the island of Bali, found that their field observations conflicted with the "received understandings concerning its Indian origins". In Bali, the Shudra (locally spelt Soedra) have typically been the temple priests, though depending on the demographics, a temple priest may also be from the other three castes. In most regions, it has been the Shudra who typically make offerings to the gods on behalf of the Hindu devotees, chant prayers, recite meweda (Vedas), and set the course of Balinese temple festivals. Religion About 86.91% of Bali's population adheres to Balinese Hinduism, formed as a combination of existing local beliefs and Hindu influences from mainland Southeast Asia and South Asia. Minority religions include Islam (10.05%), Christianity (2.35%), and Buddhism (0.68%), as of 2018. The general beliefs and practices of Agama Hindu Dharma mix ancient traditions and contemporary pressures placed by Indonesian laws that permit only monotheist belief under the national ideology of Pancasila. Traditionally, Hinduism in Indonesia had a pantheon of deities, and that tradition of belief continues in practice; further, Hinduism in Indonesia granted freedom and flexibility to Hindus as to when, how and where to pray. However, officially, the Indonesian government considers and advertises Indonesian Hinduism as a monotheistic religion with certain officially recognised beliefs that comply with its national ideology. Indonesian school textbooks describe Hinduism as having one supreme being, Hindus offering three daily mandatory prayers, and Hinduism as having certain common beliefs that in part parallel those of Islam. Scholars contest whether these government-recognised and government-assigned beliefs reflect the traditional beliefs and practices of Hindus in Indonesia before Indonesia gained independence from Dutch colonial rule. Balinese Hinduism has roots in Indian Hinduism and Buddhism, which arrived through Java. Hindu influences reached the Indonesian Archipelago as early as the first century. Historical evidence is unclear about the diffusion process of cultural and spiritual ideas from India. Javanese legends refer to the Saka era, traced to 78 AD. Stories from the Mahabharata epic have been traced in Indonesian islands to the 1st century; however, the versions mirror those found in the southeast Indian peninsular region (now Tamil Nadu and southern Karnataka and Andhra Pradesh). The Bali tradition adopted the pre-existing animistic traditions of the indigenous people. This influence strengthened the belief that the gods and goddesses are present in all things. Every element of nature, therefore, possesses its own power, which reflects the power of the gods. A rock, tree, dagger, or woven cloth is a potential home for spirits whose energy can be directed for good or evil. Balinese Hinduism is deeply interwoven with art and ritual. Ritualising states of self-control are a notable feature of religious expression among the people, who for this reason have become famous for their graceful and decorous behaviour. Apart from the majority of Balinese Hindus, there also exist Chinese immigrants whose traditions have melded with those of the locals.
As a result, these Sino-Balinese embrace their original religion, which is a mixture of Buddhism, Christianity, Taoism, and Confucianism, and find a way to harmonise it with the local traditions. Hence, it is not uncommon to find local Sino-Balinese at the local temple's odalan. Moreover, Balinese Hindu priests are invited to perform rites alongside a Chinese priest in the event of the death of a Sino-Balinese. Nevertheless, the Sino-Balinese claim to embrace Buddhism for administrative purposes, such as their Identity Cards. The Roman Catholic community has a diocese, the Diocese of Denpasar, which encompasses the province of Bali and West Nusa Tenggara and has its cathedral in Denpasar. Language Balinese and Indonesian are the most widely spoken languages in Bali, and the vast majority of Balinese people are bilingual or trilingual. The most common spoken language around the tourist areas is Indonesian, as many people in the tourist sector are not solely Balinese, but migrants from Java, Lombok, Sumatra, and other parts of Indonesia. The Balinese language is heavily stratified due to the Balinese caste system. Kawi and Sanskrit are also commonly used by some Hindu priests in Bali, as Hindu literature was mostly written in Sanskrit. English and Chinese are the next most common languages (and the primary foreign languages) of many Balinese, owing to the requirements of the tourism industry, as well as the English-speaking community and the large Chinese-Indonesian population. Other foreign languages, such as Japanese, Korean, French, Russian or German, are often used in multilingual signs for foreign tourists. Culture Bali is renowned for its diverse and sophisticated art forms, such as painting, sculpture, woodcarving, handcrafts, and performing arts. Balinese cuisine is also distinctive, and unlike the rest of Indonesia, pork is commonly found in Balinese dishes such as Babi Guling. Balinese percussion orchestra music, known as gamelan, is highly developed and varied. Balinese performing arts often portray stories from Hindu epics such as the Ramayana, but with heavy Balinese influence. Famous Balinese dances include pendet, legong, baris, topeng, barong, gong kebyar, and kecak (the monkey dance). Bali boasts one of the most diverse and innovative performing arts cultures in the world, with paid performances at thousands of temple festivals, private ceremonies, and public shows. Architecture Kaja and kelod are the Balinese equivalents of north and south, referring to one's orientation between the island's largest mountain, Gunung Agung (kaja), and the sea (kelod). In addition to spatial orientation, kaja and kelod carry the connotation of good and evil; gods and ancestors are believed to live on the mountain, whereas demons live in the sea. Buildings such as temples and residential homes are spatially oriented by having the most sacred spaces closest to the mountain and the unclean places nearest to the sea. Most temples have an inner courtyard and an outer courtyard, arranged with the inner courtyard furthest kaja. These spaces serve as performance venues, since most Balinese rituals are accompanied by some combination of music, dance, and drama. The performances that take place in the inner courtyard are classified as wali, the most sacred rituals, which are offerings exclusively for the gods, while the outer courtyard is where bebali ceremonies are held, which are intended for gods and people.
Lastly, performances meant solely for the entertainment of humans take place outside the temple's walls and are called bali-balihan. This three-tiered system of classification was standardised in 1971 by a committee of Balinese officials and artists to better protect the sanctity of the oldest and most sacred Balinese rituals from being performed for a paying audience. Dances Tourism, Bali's chief industry, has provided the island with a foreign audience that is eager to pay for entertainment, thus creating new performance opportunities and more demand for performers. The impact of tourism is controversial since, before it became integrated into the economy, the Balinese performing arts did not exist as a capitalist venture and were not performed for entertainment outside of their respective ritual contexts. Since the 1930s, sacred rituals such as the barong dance have been performed both in their original contexts and exclusively for paying tourists. This has led to new versions of many of these performances that have developed according to the preferences of foreign audiences; some villages have a barong mask specifically for non-ritual performances and an older mask that is only used for sacred performances. Festivals Throughout the year, there are many festivals celebrated locally or island-wide according to the traditional calendars. The Hindu New Year, Nyepi, is celebrated in the spring by a day of silence. On this day everyone stays at home and tourists are encouraged (or required) to remain in their hotels. On the day before New Year, large and colourful sculptures of Ogoh-ogoh monsters are paraded and then burned in the evening to drive away evil spirits. Other festivals throughout the year are specified by the Balinese pawukon calendrical system. Celebrations are held for many occasions, such as a tooth-filing (coming-of-age ritual), cremation or odalan (temple festival). One of the most important concepts that Balinese ceremonies have in common is that of désa kala patra, which refers to how ritual performances must be appropriate in both the specific and the general social context. Many ceremonial art forms, such as wayang kulit and topeng, are highly improvisatory, providing flexibility for the performer to adapt the performance to the current situation. Many celebrations call for a loud, boisterous atmosphere with much activity, and the resulting aesthetic, ramé, is distinctively Balinese. Often two or more gamelan ensembles will be performing well within earshot, and sometimes they compete with each other to be heard. Likewise, the audience members talk amongst themselves, get up and walk around, or even cheer on the performance, which adds to the many layers of activity and the liveliness typical of ramé. Tradition Balinese society continues to revolve around each family's ancestral village, to which the cycle of life and religion is closely tied. Coercive aspects of traditional society, such as customary law sanctions imposed by traditional authorities such as village councils (including "kasepekang", or shunning), have risen in importance as a consequence of the democratisation and decentralisation of Indonesia since 1998. In addition to Balinese sacred rituals and festivals, the government presents the Bali Arts Festival to showcase Bali's performing arts and various artworks produced by local talent. It is held once a year, from the second week of June until the end of July.
Southeast Asia's biggest annual festival of words and ideas, the Ubud Writers and Readers Festival, is held at Ubud in October and attracts some of the world's most celebrated writers, artists, thinkers, and performers. One unusual tradition is the naming of children in Bali. In general, Balinese people name their children depending on the order in which they are born, and the names are the same for both males and females. Beauty pageant Bali was the host of Miss World 2013 (the 63rd edition of the Miss World pageant). It was the first time Indonesia hosted an international beauty pageant. In 2022, Bali also co-hosted Miss Grand International 2022 along with Jakarta, West Java, and Banten. Sports Bali is a major world surfing destination, with popular breaks dotted across the southern coastline and around the offshore island of Nusa Lembongan. As part of the Coral Triangle, Bali, including Nusa Penida, offers a wide range of dive sites with varying types of reefs and tropical aquatic life. Bali was the host of the 2008 Asian Beach Games. It was the second time Indonesia hosted an Asia-level multi-sport event, after Jakarta held the 1962 Asian Games. In 2023, Bali was the location for a major eSports event, the Dota 2 Bali Major, the third and final Major of the Dota Pro Circuit season. The event was held at the Ayana Estate and the Champa Garden, and it was the first time that a Dota Pro Circuit Major was held in Indonesia. In football, Bali is home to the Bali United football club, which plays in Liga 1. The team was relocated from Samarinda, East Kalimantan to Gianyar, Bali. Harbiansyah Hanafiah, the main commissioner of Bali United, explained that he changed the name and moved the home base because there was no representative from Bali in the highest football tier in Indonesia. Another reason was that local fans in Samarinda preferred to support Pusamania Borneo F.C. rather than Persisam. Heritage sites In June 2012, Subak, the irrigation system for paddy fields in Jatiluwih, central Bali, was listed as a UNESCO World Heritage Site.
Brown University is a private Ivy League research university in Providence, Rhode Island. It is the seventh-oldest institution of higher education in the United States, founded in 1764 as the College in the English Colony of Rhode Island and Providence Plantations. One of nine colonial colleges chartered before the American Revolution, it was the first college in the United States to codify in its charter that admission and instruction of students was to be equal regardless of their religious affiliation. The university is home to the oldest applied mathematics program in the United States, the oldest engineering program in the Ivy League, and the third-oldest medical program in New England. It was one of the early doctoral-granting U.S. institutions in the late 19th century, adding master's and doctoral studies in 1887. In 1969, it adopted its Open Curriculum after a period of student lobbying, which eliminated mandatory "general education" distribution requirements, made students "the architects of their own syllabus", and allowed them to take any course for a grade of satisfactory (Pass) or no-credit (Fail), the latter being unrecorded on external transcripts. In 1971, Brown's coordinate women's institution, Pembroke College, was fully merged into the university. The university comprises the College, the Graduate School, Alpert Medical School, the School of Engineering, the School of Public Health and the School of Professional Studies. Its international programs are organized through the Watson Institute for International and Public Affairs, and it is academically affiliated with the Marine Biological Laboratory and the Rhode Island School of Design; with the latter, it offers undergraduate and graduate dual degree programs. Brown's main campus is in the College Hill neighborhood of Providence, Rhode Island. The university is surrounded by a federally listed architectural district with a dense concentration of Colonial-era buildings. Benefit Street, which runs along the campus's western edge, has one of America's richest concentrations of 17th- and 18th-century architecture. Brown's undergraduate admissions are among the most selective in the country, with an overall acceptance rate of 5% for the class of 2026. Eleven Nobel Prize winners have been affiliated with Brown as alumni, faculty, or researchers, as well as 1 Fields Medalist, 7 National Humanities Medalists and 10 National Medal of Science laureates. Other notable alumni include 27 Pulitzer Prize winners, 21 billionaires, 1 U.S. Supreme Court Chief Justice, 4 U.S. Secretaries of State, over 100 members of the United States Congress, 58 Rhodes Scholars, 22 MacArthur Genius Fellows, and 38 Olympic medalists. History Foundation and charter In 1761, three residents of Newport, Rhode Island, drafted a petition to the colony's General Assembly. The three petitioners were Ezra Stiles, pastor of Newport's Second Congregational Church and future president of Yale University; William Ellery Jr., future signer of the United States Declaration of Independence; and Josias Lyndon, future governor of the colony. Stiles and Ellery served as co-authors of the college's charter two years later. The editor of Stiles's papers observes, "This draft of a petition connects itself with other evidence of Dr. Stiles's project for a Collegiate Institution in Rhode Island, before the charter of what became Brown University."
The Philadelphia Association of Baptist Churches was also interested in establishing a college in Rhode Island—home of the mother church of their denomination. At the time, the Baptists were unrepresented among the colonial colleges; the Congregationalists had Harvard and Yale, the Presbyterians had the College of New Jersey (later Princeton), and the Episcopalians had the College of William and Mary and King's College (later Columbia), while their local University of Pennsylvania was specifically founded without direct association with any particular denomination. Isaac Backus, a historian of the New England Baptists and an inaugural trustee of Brown, wrote of the October 1762 resolution taken at Philadelphia. James Manning arrived at Newport in July 1763 and was introduced to Stiles, who agreed to write the charter for the college. Stiles' first draft was read to the General Assembly in August 1763 and rejected by Baptist members who worried that their denomination would be underrepresented in the College Board of Fellows. A revised charter written by Stiles and Ellery was adopted by the Rhode Island General Assembly on March 3, 1764, in East Greenwich. In September 1764, the inaugural meeting of the corporation—the college's governing body—was held in Newport's Old Colony House. Governor Stephen Hopkins was chosen chancellor, former and future governor Samuel Ward vice chancellor, John Tillinghast treasurer, and Thomas Eyres secretary. The charter stipulated that the board of trustees should be composed of 22 Baptists, five Quakers, five Episcopalians, and four Congregationalists. Of the 12 Fellows, eight should be Baptists—including the college president—"and the rest indifferently of any or all Denominations." At the time of its creation, Brown's charter was a uniquely progressive document. Other colleges had curricular strictures against opposing doctrines, while Brown's charter asserted, "Sectarian differences of opinions, shall not make any Part of the Public and Classical Instruction." The document additionally "recognized more broadly and fundamentally than any other [university charter] the principle of denominational cooperation." The oft-repeated statement that Brown's charter alone prohibited a religious test for College membership is inaccurate; other college charters were similarly liberal in that particular. The college was founded as Rhode Island College, at the site of the First Baptist Church in Warren, Rhode Island. Manning was sworn in as the college's first president in 1765 and remained in the role until 1791. In 1766, the college authorized the Reverend Morgan Edwards to travel to Europe to "solicit Benefactions for this Institution". During his year-and-a-half stay in the British Isles, Edwards secured funding from benefactors including Thomas Penn and Benjamin Franklin. In 1770, the college moved from Warren to Providence. To establish a campus, John and Moses Brown purchased a four-acre lot on the crest of College Hill on behalf of the school. The majority of the property fell within the bounds of the original home lot of Chad Brown, an ancestor of the Browns and one of the original proprietors of Providence Plantations. After the college was relocated to the city, work began on constructing its first building. A building committee, organized by the corporation, developed plans for the college's first purpose-built edifice, finalizing a design on February 9, 1770.
The subsequent structure, referred to as "The College Edifice" and later as University Hall, may have been modeled on Nassau Hall, built 14 years prior at the College of New Jersey. President Manning, an active member of the building process, was educated at Princeton and might have suggested that Brown's first building resemble that of his alma mater. Brown family Nicholas Brown, John Brown, Joseph Brown, and Moses Brown were instrumental in moving the college to Providence, constructing its first building, and securing its endowment. Joseph became a professor of natural philosophy at the college; John served as its treasurer from 1775 to 1796; and Nicholas Sr.'s son Nicholas Brown Jr. succeeded his uncle as treasurer from 1796 to 1825. On September 8, 1803, the corporation voted, "That the donation of $5,000, if made to this College within one Year from the late Commencement, shall entitle the donor to name the College." The following year, the appeal was answered by College Treasurer Nicholas Brown Jr. In a letter dated September 6, 1804, Brown committed "a donation of Five Thousand Dollars to Rhode Island College, to remain in perpetuity as a fund for the establishment of a Professorship of Oratory and Belles Letters." In recognition of the gift, the corporation on the same day voted, "That this College be called and known in all future time by the Name of Brown University." Over the years, the benefactions of Nicholas Brown Jr. totaled nearly $160,000 and included funds for building Hope College (1821–22) and Manning Hall (1834–35). In 1904, the John Carter Brown Library was established as an independently funded research library on Brown's campus; the library's collection was founded on that of John Carter Brown, son of Nicholas Brown Jr. The Brown family was involved in various business ventures in Rhode Island, and accrued wealth both directly and indirectly from the transatlantic slave trade. The family was divided on the issue of slavery. John Brown had defended slavery, while Moses and Nicholas Brown Jr. were fervent abolitionists. In 2003, under the tenure of President Ruth Simmons, the university established a steering committee to investigate the university's ties to slavery and recommend a strategy to address them. American Revolution With British vessels patrolling Narragansett Bay in the fall of 1776, the college library was moved out of Providence for safekeeping. During the subsequent American Revolutionary War, Brown's University Hall was used to house French and other revolutionary troops led by General George Washington and the Comte de Rochambeau as they waited to commence the march of 1781 that led to the Siege of Yorktown and the Battle of the Chesapeake. This has been celebrated as marking the defeat of the British and the end of the war. The building functioned as barracks and hospital from December 10, 1776, to April 20, 1780, and as a hospital for French troops from June 26, 1780, to May 27, 1782. A number of Brown's founders and alumni played roles in the American Revolution and the subsequent founding of the United States. Brown's first chancellor, Stephen Hopkins, served as a delegate to the Colonial Congress in Albany in 1754, and to the Continental Congress from 1774 to 1776. James Manning represented Rhode Island at the Congress of the Confederation while concurrently serving as Brown's first president. Two of Brown's founders, William Ellery and Stephen Hopkins, signed the Declaration of Independence.
James Mitchell Varnum, who graduated from Brown with honors in 1769, served as one of General George Washington's Continental Army brigadier generals and later as major general in command of the entire Rhode Island militia. Varnum is noted as the founder and commander of the 1st Rhode Island Regiment, widely regarded as the first Black battalion in U.S. military history. David Howell, who graduated with an A.M. in 1769, served as a delegate to the Continental Congress from 1782 to 1785. Presidents Nineteen individuals have served as presidents of the university since its founding in 1764. Since 2012, Christina Hull Paxson has served as president. Paxson had previously served as dean of Princeton University's School of Public and International Affairs and chair of Princeton's economics department. Paxson's immediate predecessor, Ruth Simmons, is noted as the first African American president of an Ivy League institution. Other presidents of note include the academic Vartan Gregorian and the philosopher and economist Francis Wayland. New Curriculum In 1966, the first Group Independent Study Project (GISP) at Brown was formed, involving 80 students and 15 professors. The GISP was inspired by student-initiated experimental schools, especially San Francisco State College, and sought ways to "put students at the center of their education" and "teach students how to think rather than just teaching facts". Members of the GISP, Ira Magaziner and Elliot Maxwell, published a paper of their findings titled "Draft of a Working Paper for Education at Brown University." The paper made proposals for a new curriculum, including interdisciplinary freshman-year courses that would introduce "modes of thought," with instruction from faculty from different disciplines, as well as for an end to letter grades. The following year, Magaziner began organizing the student body to press for the reforms, organizing discussions and protests. In 1968, university president Ray Heffner established a Special Committee on Curricular Philosophy. Composed of administrators, the committee was tasked with developing specific reforms and producing recommendations. A report produced by the committee was presented to the faculty, which voted the New Curriculum into existence on May 7, 1969. Its key features included Modes of Thought courses for first-year students, the introduction of interdisciplinary courses, the abandonment of "general education" distribution requirements, the Satisfactory/No Credit (S/NC) grading option, and the ABC/No Credit grading system, which eliminated pluses, minuses, and D's; a grade of "No Credit" (equivalent to an F at other institutions) would not appear on external transcripts. The Modes of Thought course was discontinued early on, but the other elements remain in place. In 2006, the reintroduction of plus/minus grading was proposed in response to concerns regarding grade inflation. The idea was rejected by the College Curriculum Council after canvassing alumni, faculty, and students, including the original authors of the Magaziner-Maxwell Report. "Slavery and Justice" report In 2003, then-university president Ruth Simmons launched a steering committee to research Brown's eighteenth-century ties to slavery. In October 2006, the committee released a report documenting its findings. Titled "Slavery and Justice", the document detailed the ways in which the university benefited both directly and indirectly from the transatlantic slave trade and the labor of enslaved people.
The report also included seven recommendations for how the university should address this legacy. Brown has since completed a number of these recommendations, including the establishment of its Center for the Study of Slavery and Justice, the construction of its Slavery Memorial, and the funding of a $10 million permanent endowment for Providence Public Schools. The Slavery and Justice report marked the first major effort by an American university to address its ties to slavery and prompted other institutions to undertake similar processes. Coat of arms Brown's coat of arms was created in 1834. The prior year, president Francis Wayland had commissioned a committee to update the school's original seal to match the name the university had adopted in 1804. Central to the coat of arms is a white escutcheon divided into four sectors by a red cross. Within each sector lies an open book. Above the shield is a crest consisting of the upper half of a sun in splendor among the clouds, atop a red and white torse. Campus Brown is the largest institutional landowner in Providence, with properties on College Hill and in the Jewelry District. The university was built contemporaneously with the eighteenth- and nineteenth-century precincts surrounding it, making Brown's campus tightly integrated into Providence's urban fabric. Among the noted architects who have shaped Brown's campus are McKim, Mead & White, Philip Johnson, Rafael Viñoly, Diller Scofidio + Renfro, and Robert A. M. Stern. Main campus Brown's main campus comprises 235 buildings in the East Side neighborhood of College Hill. The university's central campus sits on a block bounded by Waterman, Prospect, George, and Thayer Streets; newer buildings extend northward, eastward, and southward. Brown's core historic campus, constructed primarily between 1770 and 1926, is defined by three greens: the Front or Quiet Green, the Middle or College Green, and the Ruth J. Simmons Quadrangle (historically known as Lincoln Field). A brick and wrought-iron fence punctuated by decorative gates and arches traces the block's perimeter. This section of campus is primarily Georgian and Richardsonian Romanesque in its architectural character. To the south of the central campus are academic buildings and residential quadrangles, including the Wriston, Keeney, and Gregorian quadrangles. Immediately to the east of the campus core sit Sciences Park and Brown's School of Engineering. North of the central campus are performing and visual arts facilities, life sciences labs, and the Pembroke Campus, which houses both dormitories and academic buildings. Facing the western edge of the central campus sit two of Brown's seven libraries, the John Hay Library and the John D. Rockefeller Jr. Library. The university's campus is contiguous with that of the Rhode Island School of Design, which is located immediately to Brown's west, along the slope of College Hill. Van Wickle Gates Built in 1901, the Van Wickle Gates are a set of wrought iron gates that stand at the western edge of Brown's campus. The larger main gate is flanked by two smaller side gates. At Convocation, the central gate opens inward to admit the procession of new students; at Commencement, the gate opens outward for the procession of graduates. A Brown superstition holds that students who walk through the central gate a second time prematurely will not graduate, although walking backward is said to cancel the hex. John Hay Library The John Hay Library is the second oldest library on campus.
Opened in 1910, the library is named for John Hay (class of 1858), private secretary to Abraham Lincoln and Secretary of State under William McKinley and Theodore Roosevelt. The construction of the building was funded in large part by Hay's friend, Andrew Carnegie, who contributed half of the $300,000 cost of construction. The John Hay Library serves as the repository of the university's archives, rare books and manuscripts, and special collections. Noteworthy among the latter are the Anne S. K. Brown Military Collection (described as "the foremost American collection of material devoted to the history and iconography of soldiers and soldiering"), the Harris Collection of American Poetry and Plays (described as "the largest and most comprehensive collection of its kind in any research library"), the Lownes Collection of the History of Science (described as "one of the three most important private collections of books of science in America"), and the papers of H. P. Lovecraft. The Hay Library is home to one of the broadest collections of incunabula in the Americas, one of Brown's two Shakespeare First Folios, the manuscript of George Orwell's Nineteen Eighty-Four, and three books bound in human skin. John Carter Brown Library Founded in 1846, the John Carter Brown Library is generally regarded as the world's leading collection of primary historical sources relating to the exploration and colonization of the Americas. While administered and funded separately from the university, the library has been owned by Brown and located on its campus since 1904. The library contains the best preserved of the eleven surviving copies of the Bay Psalm Book—the earliest extant book printed in British North America and the most expensive printed book in the world. Other holdings include a Shakespeare First Folio and the world's largest collection of 16th century Mexican texts. Haffenreffer Museum The exhibition galleries of the Haffenreffer Museum of Anthropology, Brown's teaching museum, are located in Manning Hall on the campus's main green. Its one million artifacts, available for research and educational purposes, are located at its Collections Research Center in Bristol, Rhode Island. The museum's goal is to inspire creative and critical thinking about culture by fostering an interdisciplinary understanding of the material world. It provides opportunities for faculty and students to work with collections and the public, teaching through objects and programs in classrooms and exhibitions. The museum sponsors lectures and events in all areas of anthropology and also runs an extensive program of outreach to local schools. Annmary Brown Memorial The Annmary Brown Memorial was constructed from 1903 to 1907 by the politician, Civil War veteran, and book collector General Rush Hawkins, as a mausoleum for his wife, Annmary Brown, a member of the Brown family. In addition to its crypt—the final repository for Brown and Hawkins—the Memorial includes works of art from Hawkins's private collection, including paintings by Angelica Kauffman, Peter Paul Rubens, Gilbert Stuart, Giovanni Battista Tiepolo, Benjamin West, and Eastman Johnson, among others. His collection of over 450 incunabula was relocated to the John Hay Library in 1990. Today the Memorial is home to Brown's Medieval Studies and Renaissance Studies programs. The Walk The Walk, a landscaped pedestrian corridor, connects the Pembroke Campus to the main campus. 
It runs parallel to Thayer Street and serves as a primary axis of campus, extending from Ruth Simmons Quadrangle at its southern terminus to the Meeting Street entrance to the Pembroke Campus at its northern end. The Walk is bordered by departmental buildings as well as the Lindemann Performing Arts Center and the Granoff Center for the Creative Arts. The corridor is home to public art, including sculptures by Maya Lin and Tom Friedman. Pembroke campus The Women's College in Brown University, known as Pembroke College, was founded in October 1891. Upon its 1971 merger with the College of Brown University, Pembroke's campus was absorbed into the larger Brown campus. The Pembroke campus is bordered by Meeting, Brown, Bowen, and Thayer Streets and sits three blocks north of Brown's central campus. The campus is dominated by brick architecture, largely of the Georgian and Victorian styles. The west side of the quadrangle comprises Pembroke Hall (1897), Smith-Buonanno Hall (1907), and Metcalf Hall (1919), while the east side comprises Alumnae Hall (1927) and Miller Hall (1910). The quadrangle culminates on the north with Andrews Hall (1947). East Campus, centered on Hope and Charlesfield streets, originally served as the campus of Bryant University. In 1969, as Bryant was preparing to relocate to Smithfield, Rhode Island, Brown purchased its Providence campus for $5 million. The transaction expanded the Brown campus by 26 buildings. In 1971, Brown renamed the area East Campus. Today, the area is largely used for dormitories. Thayer Street runs through Brown's main campus. As a commercial corridor frequented by students, Thayer is comparable to Harvard Square or Berkeley's Telegraph Avenue. Wickenden Street, in the adjacent Fox Point neighborhood, is another commercial street similarly popular among students. Built in 1925, Brown Stadium—the home of the school's football team—is located approximately a mile and a half northeast of the university's central campus. Marston Boathouse, the home of Brown's crew teams, lies on the Seekonk River, to the southeast of campus. Brown's sailing teams are based out of the Ted Turner Sailing Pavilion at the Edgewood Yacht Club in adjacent Cranston. Since 2011, Brown's Warren Alpert Medical School has been located in Providence's historic Jewelry District, near the medical campus of Brown's teaching hospitals, Rhode Island Hospital and the Women and Infants Hospital of Rhode Island. Other university facilities, including molecular medicine labs and administrative offices, are likewise located in the area. Brown's School of Public Health occupies a landmark modernist building along the Providence River. Other Brown properties include the Mount Hope Grant in Bristol, Rhode Island, an important Native American site noted as a location of King Philip's War. Brown's Haffenreffer Museum of Anthropology Collection Research Center, particularly strong in Native American items, is located in the Mount Hope Grant. Sustainability Brown has committed to "minimize its energy use, reduce negative environmental impacts, and promote environmental stewardship." Since 2010, the university has required that all new buildings meet LEED silver standards. Between 2007 and 2018, Brown reduced its greenhouse gas emissions by 27 percent; the majority of this reduction is attributable to the university's Thermal Efficiency Project, which converted its central heating plant from a steam-powered system to a hot water-powered system.
In 2020, Brown announced it had sold 90 percent of its fossil fuel investments as part of a broader divestment from direct investments and managed funds that focus on fossil fuels. In 2021, the university adopted the goal of reducing quantifiable campus emissions by 75 percent by 2025 and achieving carbon neutrality by 2040. According to the A. W. Kuchler classification of U.S. potential natural vegetation types, Brown's campus would have a dominant vegetation type of Appalachian Oak (104) and a dominant vegetation form of Eastern Hardwood Forest (25). Academics The College Founded in 1764, the College is Brown's oldest school. About 7,200 undergraduate students are enrolled in the College, and 81 concentrations are offered. For the graduating class of 2020, the most popular concentrations were Computer Science, Economics, Biology, History, Applied Mathematics, International Relations, and Political Science. A quarter of Brown undergraduates complete more than one concentration before graduating. If the existing programs do not align with their intended curricular interests, undergraduates may design and pursue independent concentrations. Around 35 percent of undergraduates pursue graduate or professional study immediately, 60 percent within 5 years, and 80 percent within 10 years. For the Class of 2009, 56 percent of all undergraduate alumni have since earned graduate degrees. Among undergraduate alumni who go on to receive graduate degrees, the most common degrees earned are J.D. (16%), M.D. (14%), M.A. (14%), M.Sc. (14%), and Ph.D. (11%). The most common institutions from which undergraduate alumni earn graduate degrees are Brown University, Columbia University, and Harvard University. The highest fields of employment for undergraduate alumni ten years after graduation are education and higher education (15%), medicine (9%), business and finance (9%), law (8%), and computing and technology (7%). Brown and RISD Since its 1893 relocation to College Hill, the Rhode Island School of Design (RISD) has bordered Brown to its west. Since 1900, Brown and RISD students have been able to cross-register at the two institutions, with Brown students permitted to take as many as four courses at RISD to count towards their Brown degree. The two institutions partner to provide various student-life services, and the two student bodies form a synergy in the College Hill cultural scene. Dual Degree Program After several years of discussion between the two institutions, and with several students pursuing dual degrees unofficially, Brown and RISD formally established a five-year dual degree program in 2007, with the first class matriculating in the fall of 2008. The Brown|RISD Dual Degree Program, among the most selective in the country, offered admission to 20 of the 725 applicants for the class entering in autumn 2020, for an acceptance rate of 2.7%. The program combines the complementary strengths of the two institutions, integrating studio art and design at RISD with Brown's academic offerings. Students are admitted to the Dual Degree Program for a course lasting five years and culminating in both a Bachelor of Arts (A.B.) or Bachelor of Science (Sc.B.) degree from Brown and a Bachelor of Fine Arts (B.F.A.) degree from RISD. Prospective students must apply to the two schools separately and be accepted by separate admissions committees. Their application must then be approved by a third, joint Brown|RISD committee.
Admitted students spend the first year in residence at RISD, completing its first-year Experimental and Foundation Studies curriculum while taking up to three Brown classes. Students spend their second year in residence at Brown, during which they take mainly Brown courses while starting on their RISD major requirements. In the third, fourth, and fifth years, students can elect to live at either school or off-campus, and course distribution is determined by the requirements of each student's unique combination of Brown concentration and RISD major. Program participants are noted for their creative and original approach to cross-disciplinary opportunities, combining, for example, industrial design with engineering, or anatomical illustration with human biology, or philosophy with sculpture, or architecture with urban studies. The annual "BRDD Exhibition" is a well-publicized and heavily attended event, drawing interest and attendees from the broader world of industry, design, the media, and the fine arts. MADE Program In 2020, the two schools announced the establishment of a new joint Master of Arts in Design Engineering program. Abbreviated as MADE, the program intends to combine RISD's programs in industrial design with Brown's programs in engineering. The program is administered through Brown's School of Engineering and RISD's Architecture and Design Division. Theatre and playwriting Brown's theatre and playwriting programs are among the best regarded in the country. Six Brown graduates have received the Pulitzer Prize for Drama: Alfred Uhry '58, Lynn Nottage '86, Ayad Akhtar '93, Nilo Cruz '94, Quiara Alegría Hudes '04, and Jackie Sibblies Drury MFA '04. In American Theater magazine's 2009 ranking of the most-produced American plays, Brown graduates occupied four of the top five places—Peter Nachtrieb '97, Rachel Sheinkin '89, Sarah Ruhl '97, and Stephen Karam '02. The undergraduate concentration encompasses programs in theatre history, performance theory, playwriting, dramaturgy, acting, directing, dance, speech, and technical production. Applications for doctoral and master's degree programs are made through the University Graduate School. Master's degrees in acting and directing are pursued in conjunction with the Brown/Trinity Rep MFA program, which partners with the Trinity Repertory Company, a local regional theatre. Writing programs Writing at Brown—fiction, non-fiction, poetry, playwriting, screenwriting, electronic writing, mixed media, and the undergraduate writing proficiency requirement—is catered for by various centers and degree programs, and by a faculty that has long included nationally and internationally known authors. The undergraduate concentration in literary arts offers courses in fiction, poetry, screenwriting, literary hypermedia, and translation. Graduate programs include the fiction and poetry MFA writing programs in the literary arts department and the MFA playwriting program in the theatre arts and performance studies department. The non-fiction writing program is offered in the English department. Screenwriting and cinema narrativity courses are offered in the departments of literary arts and modern culture and media. The undergraduate writing proficiency requirement is supported by the Writing Center. Author prizewinners Alumni authors take their degrees across the spectrum of degree concentrations, but a gauge of the strength of writing at Brown is the number of major national writing prizes won.
To note only winners since the year 2000: Pulitzer Prize for Fiction winners Jeffrey Eugenides '82 (2003), Marilynne Robinson '66 (2005), and Andrew Sean Greer '92 (2018); British Orange Prize winners Marilynne Robinson '66 (2009) and Madeline Miller '00 (2012); Pulitzer Prize for Drama winners Nilo Cruz '94 (2003), Lynn Nottage '86 (twice, 2009, 2017), Quiara Alegría Hudes '04 (2012), Ayad Akhtar '93 (2013), and Jackie Sibblies Drury MFA '04 (2019); Pulitzer Prize for Biography winners David Kertzer '69 (2015) and Benjamin Moser '98 (2020); Pulitzer Prize for Journalism winners James Risen '77 (2006), Gareth Cook '91 (2005), Tony Horwitz '80 (1995), Usha Lee McFarling '89 (2007), David Rohde '90 (1996), Kathryn Schulz '96 (2016), and Alissa J. Rubin '80 (2016); Pulitzer Prize for General Nonfiction winner James Forman Jr. '88 (2018); Pulitzer Prize for History winner Marcia Chatelain PhD '08 (2021); Pulitzer Prize for Criticism winner Salamishah Tillet MAT '97 (2022); and Pulitzer Prize for Poetry winner Peter Balakian PhD '80 (2016). Computer science Brown began offering computer science courses through the departments of Economics and Applied Mathematics in 1956, when it acquired an IBM machine. Brown added an IBM 650 in January 1958, the only one of its type between Hartford and Boston. In 1960, Brown opened its first dedicated computer building. The facility, designed by Philip Johnson, received an IBM 7070 computer the following year. Brown granted computer science full departmental status in 1979. In 2009, IBM and Brown announced the installation of a supercomputer (by teraflops standards), the most powerful in the southeastern New England region. In the 1960s, Andries van Dam, along with Ted Nelson and Bob Wallace, invented the hypertext editing systems HES and FRESS while at Brown. Nelson coined the word hypertext, while Van Dam's students helped originate XML, XSLT, and related Web standards. Among the school's computer science alumni are the principal architect of the Classic Mac OS, Andy Hertzfeld; the principal architect of the Intel 80386 and Intel 80486 microprocessors, John Crawford; former CEO of Apple, John Sculley; and digital effects programmer Masi Oka. Other alumni include former CS department head at MIT John Guttag; Workday founder Aneel Bhusri; MongoDB founder Eliot Horowitz; Figma founders Dylan Field and Evan Wallace; and OpenSea founder Devin Finzer. The character "Andy" in the animated film Toy Story is purportedly an homage to professor Van Dam from his students employed at Pixar. Between 2012 and 2018, the number of concentrators in CS tripled. In 2017, computer science overtook economics as the school's most popular undergraduate concentration. Applied mathematics Brown's program in applied mathematics was established in 1941, making it the oldest such program in the United States. The division is highly ranked and regarded nationally and internationally. Among the 67 recipients of the Timoshenko Medal, 22 have been affiliated with Brown's applied mathematics division as faculty, researchers, or students. The Joukowsky Institute for Archaeology and the Ancient World Established in 2004, the Joukowsky Institute for Archaeology and the Ancient World is Brown's interdisciplinary research center for archaeology and ancient studies. The institute pursues fieldwork, excavations, regional surveys, and academic study of the archaeology and art of the ancient Mediterranean, Egypt, and Western Asia from the Levant to the Caucasus.
The institute has a very active fieldwork profile, with faculty-led excavations and regional surveys presently in Petra (Jordan), Abydos (Egypt), Turkey, Sudan, Italy, Mexico, Guatemala, Montserrat, and Providence. The Joukowsky Institute's faculty includes cross-appointments from the departments of Egyptology, Assyriology, Classics, Anthropology, and History of Art and Architecture. Faculty research and publication areas include Greek and Roman art and architecture, landscape archaeology, urban and religious architecture of the Levant, Roman provincial studies, the Aegean Bronze Age, and the archaeology of the Caucasus. The institute offers visiting teaching appointments and postdoctoral fellowships, which have, in recent years, included Near Eastern Archaeology and Art, Classical Archaeology and Art, Islamic Archaeology and Art, and Archaeology and Media Studies. Egyptology and Assyriology Facing the Joukowsky Institute, across the Front Green, is the Department of Egyptology and Assyriology, formed in 2006 by the merger of Brown's departments of Egyptology and History of Mathematics. It is one of only a handful of such departments in the United States. The curricular focus is on three principal areas: Egyptology, Assyriology, and the history of the ancient exact sciences (astronomy, astrology, and mathematics). Many courses in the department are open to all Brown undergraduates without prerequisites and include archaeology, languages, history, and Egyptian and Mesopotamian religions, literature, and science. Students concentrating in the department choose a track of either Egyptology or Assyriology. Graduate-level study comprises three tracks to the doctoral degree: Egyptology, Assyriology, or the History of the Exact Sciences in Antiquity. The Watson Institute for International and Public Affairs The Watson Institute for International and Public Affairs, Brown's center for the study of global issues and public affairs, is one of the leading institutes of its type in the country. The institute occupies facilities designed by Uruguayan architect Rafael Viñoly and Japanese architect Toshiko Mori. The institute was initially endowed by Thomas Watson Jr. (Class of 1937), former Ambassador to the Soviet Union and longtime president of IBM. Institute faculty and faculty emeriti include Italian prime minister and European Commission president Romano Prodi, Brazilian president Fernando Henrique Cardoso, Chilean president Ricardo Lagos Escobar, Mexican novelist and statesman Carlos Fuentes, Brazilian statesman and United Nations commission head Paulo Sérgio Pinheiro, Indian foreign minister and ambassador to the United States Nirupama Rao, American diplomat and Dayton Peace Accords author Richard Holbrooke (Class of 1962), and Sergei Khrushchev, editor of the papers of his father Nikita Khrushchev, leader of the Soviet Union. The institute's curricular interest is organized into the principal themes of development, security, and governance—with further focuses on globalization, economic uncertainty, security threats, environmental degradation, and poverty. Seven Brown undergraduate concentrations are hosted by the Watson Institute: Development Studies, International and Public Affairs, International Relations, Latin American and Caribbean Studies, Middle East Studies, Public Policy, and South Asian Studies. Graduate programs offered at the Watson Institute include the Graduate Program in Development (Ph.D.) and the Master of Public Affairs (M.P.A.) Program. 
The institute also offers postdoctoral, professional development, and global outreach programming. In support of these programs, the institute houses various centers, including the Brazil Initiative, Brown-India Initiative, China Initiative, Middle East Studies Center, the Center for Latin American and Caribbean Studies (CLACS), and the Taubman Center for Public Policy. In recent years, the most internationally cited product of the Watson Institute has been its Costs of War Project, first released in 2011 and continuously updated since. The project comprises a team of economists, anthropologists, political scientists, legal experts, and physicians, and seeks to calculate the economic costs, human casualties, and impact on civil liberties of the wars in Iraq, Afghanistan, and Pakistan since 2001. The School of Engineering Established in 1847, Brown's engineering program is the oldest in the Ivy League and the third oldest civilian engineering program in the country. In 1916, Brown's departments of electrical, mechanical, and civil engineering were merged into a single Division of Engineering. In 2010, the division was elevated to a School of Engineering. Engineering at Brown is especially interdisciplinary. The school is organized without the traditional departments or boundaries found at most schools and follows a model of connectivity between disciplines—including biology, medicine, physics, chemistry, computer science, the humanities, and the social sciences. The school practices an innovative clustering of faculty in which engineers team with non-engineers to bring about a convergence of ideas. Student teams have launched two CubeSats with the support of the School of Engineering. Brown Space Engineering developed EQUiSat, a 1U satellite, and another interdisciplinary team developed SBUDNIC, a 3U satellite. IE Brown Executive MBA Dual Degree Program Since 2009, Brown has developed an Executive MBA program in conjunction with IE Business School in Madrid, one of the leading business schools in Europe. This relationship has since strengthened, resulting in both institutions offering a dual degree program. In this partnership, Brown provides its traditional coursework while IE provides most of the business-related subjects, creating a program differentiated from other Ivy League EMBAs. The cohort typically consists of 25–30 EMBA candidates from some 20 countries. Classes are held in Providence, Madrid, Cape Town, and online. The Pembroke Center The Pembroke Center for Teaching and Research on Women was established at Brown in 1981 by Joan Wallach Scott as an interdisciplinary research center on gender. The center is named for Pembroke College, Brown's former women's college, and is affiliated with Brown's Sarah Doyle Women's Center. The Pembroke Center supports Brown's undergraduate concentration in Gender and Sexuality Studies, post-doctoral research fellowships, the annual Pembroke Seminar, and other academic programs. It also manages various collections, archives, and resources, including the Elizabeth Weed Feminist Theory Papers and the Christine Dunlap Farnham Archive. The Graduate School Brown introduced graduate courses in the 1870s and granted its first advanced degrees in 1888. The university established a Graduate Department in 1903 and a full Graduate School in 1927. With an enrollment of approximately 2,600 students, the school currently offers 33 master's and 51 doctoral programs. The school additionally offers a number of fifth-year master's programs. 
Overall, admission to the Graduate School is highly competitive, with an acceptance rate averaging approximately 9 percent in recent years. Carney Institute for Brain Science The Robert J. & Nancy D. Carney Institute for Brain Science is Brown's cross-departmental neuroscience research institute. The institute's core focus areas include brain-computer interfaces and computational neuroscience; additional areas of focus include research into mechanisms of cell death, with the aim of developing therapies for neurodegenerative diseases. The Carney Institute was founded by John Donoghue in 2009 as the Brown Institute for Brain Science and renamed in 2018 in recognition of a $100 million gift. The donation, one of the largest in the university's history, established the institute as one of the best-endowed university neuroscience programs in the country. Alpert Medical School Established in 1811, Brown's Alpert Medical School is the fourth oldest medical school in the Ivy League. In 1827, medical instruction was suspended by President Francis Wayland after the program's faculty declined to follow a new policy requiring students to live on campus. The program was reorganized in 1972; the first M.D. degrees from the new Program in Medicine were awarded to a graduating class of 58 students in 1975. In 1991, the school was officially renamed the Brown University School of Medicine, then renamed once more to Brown Medical School in October 2000. In January 2007, entrepreneur and philanthropist Warren Alpert donated $100 million to the school. In recognition of the gift, the school's name was changed to the Warren Alpert Medical School of Brown University. In 2020, U.S. News & World Report ranked Brown's medical school the 9th most selective in the country, with an acceptance rate of 2.8%. U.S. News ranks the school 38th for research and 35th for primary care. Brown's medical school is known especially for its Program in Liberal Medical Education (PLME), an eight-year combined baccalaureate-M.D. program. Inaugurated in 1984, the program is one of the most selective and renowned programs of its type in the country, offering admission to only 2% of applicants in 2021. Since 1976, the Early Identification Program (EIP) has encouraged Rhode Island residents to pursue careers in medicine by recruiting sophomores from Providence College, Rhode Island College, the University of Rhode Island, and Tougaloo College. In 2004, the school once again began to accept applications from premedical students at other colleges and universities via AMCAS, like most other medical schools. The medical school also offers M.D./Ph.D., M.D./M.P.H., and M.D./M.P.P. dual degree programs. School of Public Health Brown's School of Public Health grew out of the Alpert Medical School's Department of Community Health and was officially founded in 2013 as an independent school. The school awards undergraduate (A.B., Sc.B.), graduate (M.P.H., Sc.M., A.M.), doctoral (Ph.D.), and dual degrees (M.P.H./M.P.A., M.D./M.P.H.). Online programs The Brown University School of Professional Studies currently offers blended-learning executive master's degrees in Healthcare Leadership, Cyber Security, and Science and Technology Leadership. The master's degrees are designed to help students who have a job and life outside of academia to progress in their respective fields. The students meet in Providence every 6–7 weeks for a seminar each trimester. 
The university has also invested in MOOC development, starting in 2013 with two courses, Archeology's Dirty Little Secrets and The Fiction of Relationship, both of which enrolled thousands of students. However, after a year of courses, the university ended its contract with Coursera and revamped its online presence and MOOC development department. By 2017, the university released new courses on edX, two of which were The Ethics of Memory and Artful Medicine: Art's Power to Enrich Patient Care. In January 2018, Brown published its first "game-ified" course, Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature, which featured out-of-platform games to help learners understand the material, as well as a storyline that immerses users in a fictional world to help characters along their journey. Admissions and financial aid Undergraduate Undergraduate admission to Brown University is considered "most selective" by U.S. News & World Report. For the undergraduate class of 2026, Brown received 50,649 applications—the largest applicant pool in the university's history and a 9% increase from the prior year. Of these applicants, 2,560 were admitted for an acceptance rate of 5.0%, the lowest in the university's history. In 2021, the university reported a yield rate of 69%. For the academic year 2019–20, the university received 2,030 transfer applications, of which 5.8% were accepted. Brown's admissions policy is need-blind for all domestic first-year applicants. In 2017, Brown announced that loans would be eliminated from all undergraduate financial aid awards starting in 2018–2019, as part of a new $30 million campaign called the Brown Promise. In 2016–17, the university awarded need-based scholarships worth $120.5 million. The average need-based award for the class of 2020 was $47,940. Graduate In 2017, the Graduate School accepted 11% of 9,215 applicants. In 2021, Brown received a record 948 applications for roughly 90 spots in its Master of Public Health program. In 2020, U.S. News ranked Brown's Warren Alpert Medical School the 9th most selective in the country, with an acceptance rate of 2.8 percent. Rankings Brown University is accredited by the New England Commission of Higher Education. In its "Best Colleges 2021" edition, The Wall Street Journal/Times Higher Education ranked Brown 5th. Forbes magazine's annual ranking of "America's Top Colleges 2022"—which ranked 600 research universities, liberal arts colleges, and service academies—placed Brown 19th overall and 18th among universities. U.S. News & World Report ranked Brown 13th among national universities in its 2022 edition. The 2022 edition also ranked Brown 2nd for undergraduate teaching, 25th in Most Innovative Schools, and 14th in Best Value Schools. Washington Monthly ranked Brown 40th in 2022 among 442 national universities in the U.S. based on its contribution to the public good, as measured by social mobility, research, and promoting public service. In 2022, U.S. News & World Report ranked Brown 129th globally. In 2014, Forbes magazine ranked Brown 7th on its list of "America's Most Entrepreneurial Universities." The Forbes analysis looked at the ratio of "alumni and students who have identified themselves as founders and business owners on LinkedIn" to the total number of alumni and students. LinkedIn particularized the Forbes rankings, placing Brown third (between MIT and Princeton) among "Best Undergraduate Universities for Software Developers at Startups." 
LinkedIn's methodology involved a career-path examination of "millions of alumni profiles" in its membership database. In 2016, 2017, 2018, and 2021, the university produced the most Fulbright recipients of any university in the nation. Brown has also produced the 7th most Rhodes Scholars of all colleges and universities in the United States. Research Brown has been a member of the Association of American Universities since 1933 and is classified among "R1: Doctoral Universities – Very High Research Activity". In FY 2017, Brown spent $212.3 million on research and was ranked 103rd in the United States in total R&D expenditure by the National Science Foundation. In 2021, Brown's School of Public Health received the 4th-most funding in NIH awards among schools of public health in the U.S. Student life Campus safety In 2014, Brown tied with the University of Connecticut for the highest number of reported rapes in the nation, with its "total of reports of rape" on its main campus standing at 43. However, such rankings have been criticized for failing to account for how different campus environments can encourage or discourage individuals from reporting sexual assault cases, thereby affecting the number of reported rapes. Spring weekend Established in 1950, Spring Weekend is an annual spring music festival for students. Historical performers at the festival have included Ella Fitzgerald, Dizzy Gillespie, Ray Charles, Bob Dylan, Janis Joplin, Bruce Springsteen, and U2. More recent headliners include Kendrick Lamar, Young Thug, Daniel Caesar, Anderson .Paak, Mitski, Aminé, and Mac DeMarco. Since 1960, Spring Weekend has been organized by the student-run Brown Concert Agency. Residential and Greek societies Approximately 12 percent of Brown students participate in Greek life. The university recognizes thirteen active Greek organizations: six fraternities (Beta Omega Chi, Beta Rho Pi, Delta Tau, Delta Phi, Kappa Alpha Psi, and Theta Alpha), five sororities (Alpha Chi Omega, Delta Sigma Theta, Delta Gamma, Kappa Delta, and Kappa Alpha Theta), one co-ed house (Zeta Delta Xi), and one co-ed literary society (Alpha Delta Phi). Other Greek-lettered organizations that have been historically active at Brown University include Alpha Kappa Alpha, Alpha Phi Alpha, and Lambda Upsilon Lambda. Since the early 1950s, all Greek organizations on campus have been located in Wriston Quadrangle. The organizations are overseen by the Greek Council. An alternative to Greek-letter organizations is Brown's program houses, which are organized by theme. As with Greek houses, the residents of program houses select their new members, usually at the start of the spring semester. Examples of program houses are St. Anthony Hall (located in King House), Buxton International House, the Machado French/Hispanic/Latinx House, Technology House, Harambee (African culture) House, Social Action House, and Interfaith House. All students not in program housing enter a lottery for general housing. Students form groups and are assigned time slots during which they can pick among the remaining housing options. Societies and clubs The earliest societies at Brown were devoted to oration and debate. The Pronouncing Society is mentioned in the diary of Solomon Drowne, class of 1773, who was voted its president in 1771. The organization seems to have disappeared during the American Revolutionary War. Subsequent societies include the Misokosmian Society (est. 1798 and renamed the Philermenian Society), the Philandrian Society (est. 
1799), the United Brothers (1806), the Philophysian Society (1818), and the Franklin Society (1824). Societies served social as well as academic purposes, with many supporting literary debate and amassing large libraries. Older societies generally aligned with the Federalists, while younger societies generally leaned Republican. Societies remained popular into the 1860s, after which they were largely replaced by fraternities. The Cammarian Club was at first a semi-secret society that "tapped" 15 seniors each year. In 1915, self-perpetuating membership gave way to popular election by the student body, and thenceforward the club served as the de facto undergraduate student government. The organization was dissolved in 1971 and ultimately succeeded by a formal student government. Societas Domi Pacificae, known colloquially as "Pacifica House", is a present-day, self-described secret society. It claims a continuous line of descent from the Franklin Society of 1824, citing a supposed intermediary "Franklin Society" traceable in the nineteenth century. Student organizations There are over 300 registered student organizations on campus with diverse interests. The Student Activities Fair, held during the orientation program, provides first-year students with the opportunity to become acquainted with a wide range of organizations. A sample of organizations includes:
The Brown Daily Herald
Brown Debating Union
The Brown Derbies
Brown International Organization
Brown Journal of World Affairs
The Brown Jug
The Brown Noser
Brown Political Review
The Brown Spectator
BSR
Brown University Band
Brown University Orchestra
Chinese Students and Scholars Association
The College Hill Independent
Critical Review
Ivy Film Festival
Jabberwocks
Production Workshop
Strait Talk
Starla and Sons
Students for Sensible Drug Policy
WBRU
LGBT In a 2023 poll by The Brown Daily Herald, 38% of Brown's students identified as LGBT, an increase from 14% in 2010. "Bisexual" was the most common answer among LGBT respondents to the poll. Resource centers Brown has several resource centers on campus. The centers often act as sources of support as well as safe spaces for students to explore certain aspects of their identity. Additionally, the centers often provide physical spaces for students to study and have meetings. Although most centers are identity-focused, some provide academic support as well. The Brown Center for Students of Color (BCSC) is a space that provides support for students of color. Established in 1972 in response to student protests, the BCSC encourages students to engage in critical dialogue, develop leadership skills, and promote social justice. The center houses various programs for students to share their knowledge and engage in discussion. Programs include the Third World Transition Program, the Minority Peer Counselor Program, the Heritage Series, and other student-led initiatives. Additionally, the BCSC hopes to foster community among the students it serves by providing spaces for students to meet and study. The Sarah Doyle Women's Center aims to provide a space for members of the Brown community to examine and explore issues surrounding gender. The center was named after one of the first women to attend Brown, Sarah Doyle. The center emphasizes intersectionality in its conversations on gender, encouraging people to see gender as present and relevant in various aspects of life. 
The center hosts programs and workshops to facilitate dialogue and provide resources for students, faculty, and staff. Other centers include the LGBTQ Center, the Undocumented, First-Generation College and Low-Income Student (U-FLi) Center, and the Curricular Resource Center. Activism 1968 Black Student Walkout On December 5, 1968, several Black women from Pembroke College initiated a walkout in protest of an atmosphere that Black students described as making the colleges a "stifling, frustrating, [and] degrading place for Black students", after feeling the colleges were unresponsive to their concerns. In total, 65 Black students participated in the walkout. Their principal demand was to increase Black student enrollment to 11% of the student body, in an attempt to match the proportion in the US population. This ultimately resulted in a 300% increase in Black enrollment the following year, but some demands have yet to be met. Divestment from South Africa In the mid-1980s, under student pressure, the university divested from certain companies involved in South Africa. Some students, still unsatisfied with partial divestment, began a fast in Manning Chapel, and the university disenrolled them. In April 1987, "dozens" of students interrupted a university corporation meeting, leading to 20 being put on probation. Athletics Brown is a member of the Ivy League athletic conference, which is categorized as a Division I (top-level) conference of the National Collegiate Athletic Association (NCAA). The Brown Bears have one of the largest university sports programs in the United States, sponsoring 32 varsity intercollegiate teams. Brown's athletic program is one of the U.S. News & World Report top 20—the "College Sports Honor Roll"—based on breadth of the program and athletes' graduation rates. Brown's newest varsity team is women's rugby, promoted from club-sport status in 2014. Brown women's rowing has won 7 national titles between 1999 and 2011. Brown men's rowing perennially finishes in the top 5 in the nation, most recently winning silver, bronze, and silver in the national championship races of 2012, 2013, and 2014. The men's and women's crews have also won championship trophies at the Henley Royal Regatta and the Henley Women's Regatta. Brown's men's soccer is consistently ranked in the top 20 and has won 18 Ivy League titles overall; recent soccer graduates play professionally in Major League Soccer and overseas. Brown football, under its most successful coach historically, Phil Estes, won Ivy League championships in 1999, 2005, and 2008. High-profile alumni of the football program include former Houston Texans head coach Bill O'Brien, former Penn State football coach Joe Paterno, Heisman Trophy namesake John W. Heisman, and Pollard Award namesake Fritz Pollard. Brown women's gymnastics won the Ivy League tournament in 2013 and 2014. The Brown women's sailing team has won 5 national championships, most recently in 2019, while the coed sailing team won 2 national championships, in 1942 and 1948. Both teams are consistently ranked in the top 10 in the nation. The first intercollegiate ice hockey game in America was played between Brown and Harvard on January 19, 1898. The first university rowing regatta larger than a dual meet was held between Brown, Harvard, and Yale at Lake Quinsigamond in Massachusetts on July 26, 1859. Brown also supports competitive intercollegiate club sports, including ultimate frisbee. 
The men's ultimate team, Brownian Motion, has won three national championships, in 2000, 2005, and 2019. Notable people Alumni Alumni in politics include U.S. Secretary of State John Hay (1852), U.S. Secretary of State and U.S. Attorney General Richard Olney (1856), Chief Justice of the United States and U.S. Secretary of State Charles Evans Hughes (1881), Louisiana Governor Bobby Jindal '92, U.S. Senator Maggie Hassan '80 of New Hampshire, Delaware Governor Jack Markell '82, Rhode Island Representative David Cicilline '83, Minnesota Representative Dean Phillips '91, 2020 Presidential candidate and entrepreneur Andrew Yang '96, and DNC Chair Tom Perez '83. Prominent alumni in business and finance include philanthropist John D. Rockefeller Jr. (1897), managing director of McKinsey & Company and "father of modern management consulting" Marvin Bower '25, former Chair of the Federal Reserve and current U.S. Secretary of the Treasury Janet Yellen '67, World Bank President Jim Yong Kim '82, Bank of America CEO Brian Moynihan '81, CNN founder Ted Turner '60, IBM chairman and CEO Thomas Watson Jr. '37, co-founder of Starwood Capital Group Barry Sternlicht '82, Apple Inc. CEO John Sculley '61, Blackberry Ltd. CEO John S. Chen '78, Facebook CFO David Ebersman '91, and Uber CEO Dara Khosrowshahi '91. Companies founded by Brown alumni include CNN,The Wall Street Journal, Searchlight Pictures, Netgear, W Hotels, Workday, Warby Parker, Casper, Figma, ZipRecruiter, and Cards Against Humanity. Alumni in the arts and media include actors Emma Watson '14, John Krasinski '01, Daveed Diggs '04, Julie Bowen '91, Tracee Ellis Ross '94, and Jessica Capshaw '98; NPR program host Ira Glass '82; singer-composer Mary Chapin Carpenter '81; humorist and Marx Brothers screenwriter S.J. Perelman '25; novelists Nathanael West '24, Jeffrey Eugenides '83, Edwidge Danticat (MFA '93), and Marilynne Robinson '66; and composer and synthesizer pioneer Wendy Carlos '62, journalist James Risen '77; political pundit Mara Liasson; MSNBC hosts Alex Wagner '99 and Chris Hayes '01; New York Times, publisher A. G. Sulzberger '03, and magazine editor John F. Kennedy Jr. '83. Important figures in the history of education include the father of American public school education Horace Mann (1819), civil libertarian and Amherst College president Alexander Meiklejohn, first president of the University of South Carolina Jonathan Maxcy (1787), Bates College founder Oren B. Cheney (1836), University of Michigan president (1871–1909) James Burrill Angell (1849), University of California president (1899–1919) Benjamin Ide Wheeler (1875), and Morehouse College's first African-American president John Hope (1894). Alumni in the computer sciences and industry include architect of Intel 386, 486, and Pentium microprocessors John H. Crawford '75, inventor of the first silicon transistor Gordon Kidd Teal '31, MongoDB founder Eliot Horowitz '03, Figma founder Dylan Field, and Macintosh developer Andy Hertzfeld '75. Other notable alumni include "Lafayette of the Greek Revolution" and its historian Samuel Gridley Howe (1821) Governor of Wyoming Territory and Nebraska Governor John Milton Thayer (1841), Rhode Island Governor Augustus Bourn (1855), NASA head during first seven Apollo missions Thomas O. Paine '42, diplomat Richard Holbrooke '62, sportscaster Chris Berman '77, Houston Texans head coach Bill O'Brien '92, 2018 Miss America Cara Mund '16, Penn State football coach Joe Paterno '50, Heisman Trophy namesake John W. 
Heisman '91, distinguished professor of law Cortney Lollar '97, Olympic and world champion triathlete Joanna Zeiger, royals and nobles such as Prince Rahim Aga Khan, Prince Faisal bin Al Hussein of the Hashemite Kingdom of Jordan, Princess Leila Pahlavi of Iran '92, Prince Nikolaos of Greece and Denmark, Prince Nikita Romanov, Princess Theodora of Greece and Denmark, Prince Jaime of Bourbon-Parma, Duke of San Jaime and Count of Bardi, Prince Ra'ad bin Zeid, Lady Gabriella Windsor, Prince Alexander von Fürstenberg, Countess Cosima von Bülow Pavoncelli, and her half-brother Prince Alexander-Georg von Auersperg. Nobel Laureate alumni include humanitarian Jerry White '87 (Peace, 1997), biologist Craig Mello '82 (Physiology or Medicine, 2006), economist Guido Imbens (AM '89, PhD '91; Economic Sciences, 2021), and economist Douglas Diamond '75 (Economic Sciences, 2022). Faculty Among Brown's past and present faculty are seven Nobel Laureates: Lars Onsager (Chemistry, 1968), Leon Cooper (Physics, 1972), George Snell (Physiology or Medicine, 1980), George Stigler (Economic Sciences, 1982), Henry David Abraham (Peace, 1985), Vernon L. Smith (Economic Sciences, 2002), and J. Michael Kosterlitz (Physics, 2016). Notable past and present faculty include biologists Anne Fausto-Sterling (Ph.D. 1970) and Kenneth R. Miller (Sc.B. 1970); computer scientists Robert Sedgewick and Andries van Dam; economists Hyman Minsky, Glenn Loury, George Stigler, Mark Blyth, and Emily Oster; historians Gordon S. Wood and Joan Wallach Scott; mathematicians David Gale, David Mumford, Mary Cartwright, and Solomon Lefschetz; physicists Sylvester James Gates and Gerald Guralnik. Faculty in literature include Chinua Achebe, Ama Ata Aidoo, and Carlos Fuentes. Among Brown's faculty and fellows in political science and public affairs are the former prime minister of Italy and former EU chief, Romano Prodi; former president of Brazil, Fernando Cardoso; former president of Chile, Ricardo Lagos; and son of Soviet Premier Nikita Khrushchev, Sergei Khrushchev. Other faculty include philosopher Martha Nussbaum, author Ibram X. Kendi, and public health doctor Ashish Jha. In popular culture Brown's reputation as an institution with a free-spirited, iconoclastic student body is portrayed in fiction and popular culture. Family Guy character Brian Griffin is a Brown alumnus. The O.C.'s main character Seth Cohen is denied acceptance to Brown while his girlfriend Summer Roberts is accepted. In The West Wing, Amy Gardner is a Brown alumna. See also: List of Brown University statues, Brown University Alma Mater, and Josiah S. Carberry.
William "Bill" D. Atkinson (born March 17, 1951) is an American computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990. Atkinson was the principal designer and developer of the graphical user interface (GUI) of the Apple Lisa and, later, one of the first thirty members of the original Apple Macintosh development team, and was the creator of the MacPaint application. He also designed and implemented QuickDraw, the fundamental toolbox that the Lisa and Macintosh used for graphics. QuickDraw's performance was essential for the success of the Macintosh GUI. He also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed and implemented HyperCard, an early and influential hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions. Education He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student in neurochemistry at the University of Washington. Raskin invited Atkinson to visit him at Apple Computer; Steve Jobs persuaded him to join the company immediately as employee No. 51, and Atkinson never finished his PhD. Career Around 1990, General Magic's founding, with Bill Atkinson as one of the three cofounders, met the following press in Byte magazine: The obstacles to General Magic's success may appear daunting, but General Magic is not your typical start-up company. Its partners include some of the biggest players in the worlds of computing, communications, and consumer electronics, and it's loaded with top-notch engineers who have been given a clean slate to reinvent traditional approaches to ubiquitous worldwide communications. In 2007, Atkinson began working as an outside developer with Numenta, a startup working on computer intelligence. On his work there Atkinson said, "what Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet." Currently, Atkinson has combined his passion for computer programming with his love of nature photography to create art images. He takes close-up photographs of stones that have been cut and polished. His works are highly regarded for their resemblance to miniature landscapes which are hidden within the stones. Atkinson's 2004 book Within the Stone features a collection of his close-up photographs. The highly intricate and detailed images he creates are made possible by the accuracy and creative control of the digital printing process that he helped create. Some of Atkinson's noteworthy contributions to the field of computing include: Macintosh QuickDraw and Lisa LisaGraf Atkinson independently discovered the midpoint circle algorithm for fast drawing of circles by using the sum of consecutive odd numbers. Marching ants The double-click Menu bar The selection lasso FatBits MacPaint HyperCard Atkinson dithering Bill Atkinson PhotoCard Atkinson now works as a nature photographer. Actor Nelson Franklin portrayed him in the 2013 film Jobs. References External links 1951 births American photographers Apple Inc. employees Apple Fellows Living people University of California, San Diego alumni Place of birth missing (living people) University of Washington alumni Macintosh operating systems people Scientists from the San Francisco Bay Area
Bertrand Arthur William Russell, 3rd Earl Russell (18 May 1872 – 2 February 1970) was a British mathematician, philosopher, logician, and public intellectual. He had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics. He was one of the early 20th century's most prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell, with Moore, led the British "revolt against idealism". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic (see Logicism). Russell's article "On Denoting" has been considered a "paradigm of philosophy". Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, but also saw the war against Adolf Hitler's Nazi Germany as a necessary "lesser of two evils". In the wake of World War II, he welcomed American global hegemony in preference to either Soviet hegemony or no (or ineffective) world leadership, even if it were to come at the cost of the use of American nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament. In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963). Biography Early life and background Bertrand Arthur William Russell was born at Ravenscroft, Trellech, Monmouthshire, Wales, United Kingdom, on 18 May 1872, into an influential and liberal family of the British aristocracy. His parents, Viscount and Viscountess Amberley, were radical for their times. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was a deist, and even asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings had a great effect on Russell's life. His paternal grandfather, Lord John Russell, later 1st Earl Russell (1792–1878), had twice been prime minister in the 1840s and 1860s. A member of Parliament from the early 1810s, he had met with Napoleon Bonaparte in Elba. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading Whig families and participated in every great political event from the dissolution of the monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832. Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for the education of women. 
Childhood and adolescence Russell had two siblings: brother Frank (nearly seven years older), and sister Rachel (four years older). In June 1874, Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis after a long period of depression. Frank and Bertrand were placed in the care of staunchly Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kindly old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the dominant family figure for the rest of Russell's childhood and youth. The Countess was from a Scottish Presbyterian family and successfully petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. Her favourite Bible verse, "Thou shalt not follow a multitude to do evil", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings. Russell's adolescence was lonely and he often contemplated suicide. He remarked in his autobiography that his keenest interests in "nature and books and (later) mathematics saved me from complete despondency;" only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as "one of the great events of my life, as dazzling as first love". During these formative years he also discovered the works of Percy Bysshe Shelley. Russell wrote: "I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the "First Cause" argument and became an atheist. He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed. University and first marriage Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895. 
Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family. They knew him primarily as "Lord John's grandson" and enjoyed showing him off. He soon fell in love with the puritanical, high-minded Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry. During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot. Early career Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896 he taught German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb. He now started an intensive study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College) which discussed the Cayley–Klein metrics used for non-Euclidean geometry. He attended the First International Congress of Philosophy in Paris in 1900 where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor, making a science of set theory; they gave Russell their literature including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon Russell's paradox. In 1903 he published The Principles of Mathematics, a work on foundations of mathematics. It advanced a thesis of logicism, that mathematics and logic are one and the same. At the age of 29, in February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's acute suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person." In 1905, he wrote the essay "On Denoting", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. Russell's first political activity was as the Independent Liberal candidate in the 1907 by-election for the Wimbledon constituency, where he was not elected. 
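The paradox Russell came upon, mentioned above, can be stated in one line; the following is the standard modern formulation rather than Russell's own notation:

\[
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R .
\]

That is, the "set of all sets that are not members of themselves" can neither contain nor fail to contain itself, so unrestricted set comprehension is inconsistent; avoiding this contradiction was a central motivation for the theory of types employed in Principia Mathematica.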
In 1910, he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict. First World War During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this, in Free Thought and Official Propaganda, as an illegitimate means the state used to violate freedom of expression. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a significant part in the Leeds Convention in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather; many being delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody". His conviction in 1916 resulted in Russell being fined £100 (), which he refused to pay in hope that he would be sent to prison, but his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police". A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918. He later said of his imprisonment: While he was reading Strachey's Eminent Victorians chapter about Gordon he laughed out loud in his cell prompting the warder to intervene and reminding him that "prison was a place of punishment". Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926 and became a Fellow again in 1944 until 1949. In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for "passive resistance to military or naval service". G. H. Hardy on the Trinity controversy In 1941, G. H. 
Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity – published later as a book by Cambridge University Press with a foreword by C. D. Broad—in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explaining that a reconciliation between the college and Russell had later taken place and gave details about Russell's personal life. Hardy writes that Russell's dismissal had created a scandal since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, it was announced by Trinity that Russell had resigned and his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation. The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation, since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books according to Hardy: The Analysis of Matter, published in 1927. In the preface to the Trinity pamphlet, Hardy wrote: Between the wars In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia—1920", for the magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him. In his autobiography, he mentions that he found Lenin disappointing, sensing an "impish cruelty" in him and comparing him to "an opinionated professor". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, The Practice and Theory of Bolshevism, about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring. Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution. The following year, Russell, accompanied by Dora, visited Peking (as Beijing was then known outside of China) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. 
Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading "Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists". Apparently they found this harsh and reacted resentfully. Dora was six months pregnant when the couple returned to England on 26 August 1921. Russell arranged a hasty divorce from Alys, marrying Dora six days after the divorce was finalised, on 27 September 1921. Russell's children with Dora were John Conrad Russell, 4th Earl Russell, born on 16 November 1921, and Katharine Jane Russell (later Lady Katharine Tait), born on 29 December 1923. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman. From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, but only on the basis that he knew he was extremely unlikely to be elected in such a safe Conservative seat, and he was unsuccessful on both occasions. After the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with the old traditional education and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published On Education, Especially in Early Childhood. On 8 July 1930, Dora gave birth to her third child Harriet Ruth. After he left the school in 1932, Dora continued it until 1943. In 1927 Russell met Barry Fox (later Barry Stevens), who became a well-known Gestalt therapist and writer in later years. They developed an intense relationship, and in Fox's words: "...for three years we were very close." Fox sent her daughter Judith to Beacon Hill School. From 1927 to 1932 Russell wrote 34 letters to Fox. Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell. Russell's marriage to Dora grew increasingly tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia ("Peter") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a prominent historian and one of the leading figures in the Liberal Democrat party. Russell returned in 1937 to the London School of Economics to lecture on the science of power. During the 1930s, Russell became a friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian independence. Russell chaired the India League from 1932 to 1939. Second World War Russell's political views changed over time, mostly about war. 
He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: "If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander and chief to dine with the prime minister." In 1940, he changed his appeasement view that avoiding a full-scale world war was more important than defeating Hitler. He concluded that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called "relative political pacifism": "War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils." Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him "morally unfit" to teach at the college because of his opinions, especially those relating to sexual morality, detailed in Marriage and Morals (1929). The matter was however taken to the New York Supreme Court by Jean Kay who was afraid that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that "great spirits have always encountered violent opposition from mediocre minds" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in The Bertrand Russell Case. Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of A History of Western Philosophy. His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College. Later life Russell participated in many broadcasts over the BBC, particularly The Brains Trust and for the Third Programme, on various topical and philosophical subjects. By this time Russell was world-famous outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a wide variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (among a total of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking since the people who drowned were in the non-smoking part of the plane. A History of Western Philosophy (1945) became a best-seller and provided Russell with a steady income for the remainder of his life. In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles. In an inquiry on dialectical materialism, launched by the Austrian artist and philosopher Wolfgang Paalen in his journal DYN, Russell said: "I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism." 
In 1943, Russell expressed support for Zionism: "I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture". In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atomic bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an extremely aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that Russell approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was merely explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe. Just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters, and published articles in newspapers from 1945 to 1948, stating clearly that it was morally justified and better to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because following Stalin's purges only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell declared his position advocating the total abolition of atomic weapons. In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled Authority and the Individual, explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy. He wrote a foreword to Words and Things by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal Mind, which caused Russell to respond via The Times. The result was a month-long correspondence in The Times between the supporters and detractors of ordinary language philosophy, which was only ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy. In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but slightly embarrassed at decorating a former jailbird, saying, "You have sometimes behaved in a manner that would not do if generally adopted". Russell merely smiled, but afterwards claimed that the reply "That's right, just like your brother" immediately came to mind. 
In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the best-known patrons of the Congress, until he resigned in 1956. In 1952, Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from serious mental illness, which was the source of ongoing disputes between Russell and his former wife Dora. In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for a "breach of the peace" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to "good behaviour", to which Russell replied: "No, I won't." In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams with Soviet leader Nikita Khrushchev, Khrushchev assured him that the Soviet government would not be reckless. Russell also sent a telegram to President Kennedy. According to historian Peter Knight, after JFK's assassination, Russell, "prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline Benn, the publisher Victor Gollancz, the writers John Arden and J. B. Priestley, and the Oxford history professor Hugh Trevor-Roper." Russell published a highly critical article weeks before the Warren Commission Report was published, setting forth 16 Questions on the Assassination and equating the Oswald case with the Dreyfus affair of late 19th-century France, in which the state convicted an innocent man. Russell also criticised the American press for failing to heed any voices critical of the official version.
Political causes
Bertrand Russell was opposed to war from a young age; his opposition to World War I was used as grounds for his dismissal from Trinity College at Cambridge. This incident fused two of his most controversial causes: he had earlier been denied Fellow status, which would have protected him from dismissal, because he was unwilling either to pretend to be a devout Christian or at least to avoid admitting he was agnostic. He later described the resolution of these issues as essential to freedom of thought and expression, citing the incident in Free Thought and Official Propaganda, where he explained that the expression of any idea, even the most obviously "bad", must be protected not only from direct State intervention, but also from economic leveraging and other means of being silenced. Russell spent the 1950s and 1960s engaged in political causes primarily related to nuclear disarmament and opposing the Vietnam War.
The 1955 Russell–Einstein Manifesto was a document calling for nuclear disarmament and was signed by eleven of the most prominent nuclear physicists and intellectuals of the time. In 1966–1967, Russell worked with Jean-Paul Sartre and many other intellectual figures to form the Russell Vietnam War Crimes Tribunal to investigate the conduct of the United States in Vietnam. He wrote a great many letters to world leaders during this period. Early in his life Russell supported eugenicist policies. He proposed in 1894 that the state issue certificates of health to prospective parents and withhold public benefits from those considered unfit. In 1929 he wrote that people deemed "mentally defective" and "feebleminded" should be sexually sterilised because they "are apt to have enormous numbers of illegitimate children, all, as a rule, wholly useless to the community." Russell was also an advocate of population control. On 20 November 1948, in a public speech at Westminster School, addressing a gathering arranged by the New Commonwealth, Russell shocked some observers by suggesting that a preemptive nuclear strike on the Soviet Union was justified. Russell argued that war between the United States and the Soviet Union seemed inevitable, so it would be a humanitarian gesture to get it over with quickly and have the United States in the dominant position. At that point, Russell argued, humanity could survive such a war, whereas a full nuclear war after both sides had manufactured large stockpiles of more destructive weapons was likely to result in the extinction of the human race. Russell later retreated from this stance, instead arguing for mutual disarmament by the nuclear powers. In 1956, immediately before and during the Suez Crisis, Russell expressed his opposition to European imperialism in the Middle East. He viewed the crisis as another reminder of the pressing need for a more effective mechanism for international governance, and to restrict national sovereignty to places such as the Suez Canal area "where general interest is involved". At the same time the Suez Crisis was taking place, the world was also captivated by the Hungarian Revolution and the subsequent crushing of the revolt by intervening Soviet forces. Russell attracted criticism for speaking out fervently against the Suez war while ignoring Soviet repression in Hungary, to which he responded that he did not criticise the Soviets "because there was no need. Most of the so-called Western World was fulminating". Although he later feigned a lack of concern, at the time he was disgusted by the brutal Soviet response, and on 16 November 1956, he expressed approval for a declaration of support for Hungarian scholars which Michael Polanyi had cabled to the Soviet embassy in London twelve days previously, shortly after Soviet troops had entered Budapest. In November 1957 Russell wrote an article addressing US President Dwight D. Eisenhower and Soviet Premier Nikita Khrushchev, urging a summit to consider "the conditions of co-existence". Khrushchev responded that peace could be served by such a meeting. In January 1958 Russell elaborated his views in The Observer, proposing a cessation of all nuclear weapons production, with the UK taking the first step by unilaterally suspending its own nuclear-weapons program if necessary, and with Germany "freed from all alien armed forces and pledged to neutrality in any conflict between East and West". US Secretary of State John Foster Dulles replied for Eisenhower.
The exchange of letters was published as The Vital Letters of Russell, Khrushchev, and Dulles. Russell was asked by The New Republic, a liberal American magazine, to elaborate his views on world peace. He urged that all nuclear weapons testing and flights by planes armed with nuclear weapons be halted immediately, and negotiations be opened for the destruction of all hydrogen bombs, with the number of conventional nuclear devices limited to ensure a balance of power. He proposed that Germany be reunified and accept the Oder-Neisse line as its border, and that a neutral zone be established in Central Europe, consisting at the minimum of Germany, Poland, Hungary, and Czechoslovakia, with each of these countries being free of foreign troops and influence, and prohibited from forming alliances with countries outside the zone. In the Middle East, Russell suggested that the West avoid opposing Arab nationalism, and proposed the creation of a United Nations peacekeeping force to guard Israel's frontiers to ensure that Israel was prevented from committing aggression and protected from it. He also suggested Western recognition of the People's Republic of China, and that it be admitted to the UN with a permanent seat on the UN Security Council. He was in contact with Lionel Rogosin while the latter was filming his anti-war film Good Times, Wonderful Times in the 1960s. He became a hero to many of the youthful members of the New Left. In early 1963, Russell became increasingly vocal in his disapproval of the Vietnam War, and felt that the US government's policies there were near-genocidal. In 1963 he became the inaugural recipient of the Jerusalem Prize, an award for writers concerned with the freedom of the individual in society. In 1964 he was one of eleven world figures who issued an appeal to Israel and the Arab countries to accept an arms embargo and international supervision of nuclear plants and rocket weaponry. In October 1965 he tore up his Labour Party card because he suspected Harold Wilson's Labour government was going to send troops to support the United States in Vietnam. Final years, death and legacy In June 1955, Russell had leased Plas Penrhyn in Penrhyndeudraeth, Merionethshire, Wales and on 5 July of the following year it became his and Edith's principal residence. Russell published his three-volume autobiography in 1967, 1968, and 1969. He made a cameo appearance playing himself in the anti-war Hindi film Aman, by Mohan Kumar, which was released in India in 1967. This was Russell's only appearance in a feature film. On 23 November 1969, he wrote to The Times newspaper saying that the preparation for show trials in Czechoslovakia was "highly alarming". The same month, he appealed to Secretary General U Thant of the United Nations to support an international war crimes commission to investigate alleged torture and genocide by the United States in South Vietnam during the Vietnam War. The following month, he protested to Alexei Kosygin over the expulsion of Aleksandr Solzhenitsyn from the Soviet Union of Writers. On 31 January 1970, Russell issued a statement condemning "Israel's aggression in the Middle East", and in particular, Israeli bombing raids being carried out deep in Egyptian territory as part of the War of Attrition, which he compared to German bombing raids in the Battle of Britain and the US bombing of Vietnam. He called for an Israeli withdrawal to the pre-Six-Day War borders. This was Russell's final political statement or act. 
It was read out at the International Conference of Parliamentarians in Cairo on 3 February 1970, the day after his death. Russell died of influenza, just after 8 pm on 2 February 1970 at his home in Penrhyndeudraeth, aged 97. His body was cremated in Colwyn Bay on 5 February 1970 with five people present. In accordance with his will, there was no religious ceremony but one minute's silence; his ashes were later scattered over the Welsh mountains. Although he was born in Monmouthshire, and died in Penrhyndeudraeth in Wales, Russell identified as English. Later in 1970, on 23 October, his will was published showing he had left an estate valued at £69,423. In 1980, a memorial to Russell was commissioned by a committee including the philosopher A. J. Ayer. It consists of a bust of Russell in Red Lion Square in London sculpted by Marcelle Quinton. Lady Katharine Jane Tait, Russell's daughter, founded the Bertrand Russell Society in 1974 to preserve and understand his work. It publishes the Bertrand Russell Society Bulletin, holds meetings and awards prizes for scholarship, including the Bertrand Russell Society Award; all members receive Russell: The Journal of Bertrand Russell Studies. She also wrote several essays about her father, as well as a book, My Father, Bertrand Russell, which was published in 1975. For the sesquicentennial of his birth, in May 2022, McMaster University's Bertrand Russell Archive, the university's largest and most heavily used research collection, organised both a physical and virtual exhibition on Russell's anti-nuclear stance in the post-war era, Scientists for Peace: the Russell-Einstein Manifesto and the Pugwash Conference, which included the earliest version of the Russell–Einstein Manifesto. The Bertrand Russell Peace Foundation held a commemoration at Conway Hall in Red Lion Square, London, on 18 May, the anniversary of his birth. For its part, on the same day, La Estrella de Panamá published a biographical sketch by Francisco Díaz Montilla, who commented that "[if he] had to characterize Russell's work in one sentence [he] would say: criticism and rejection of dogmatism." Bangladesh's first leader, Mujibur Rahman, named his youngest son Sheikh Russel in honour of Bertrand Russell.
Marriages and issue
Russell first married Alys Whitall Smith (died 1951) in 1894. The marriage was dissolved in 1921 with no issue. His second marriage was to Dora Winifred Black MBE (died 1986), daughter of Sir Frederick Black, in 1921. This was dissolved in 1935, having produced two children:
John Conrad Russell, 4th Earl Russell (1921–1987)
Lady Katharine Jane Russell (1923–2021), who married Rev. Charles Tait in 1948 and had issue
Russell's third marriage was to Patricia Helen Spence (died 2004) in 1936, with the marriage producing one child:
Conrad Sebastian Robert Russell, 5th Earl Russell (1937–2004)
Russell's third marriage ended in divorce in 1952. He married Edith Finch in the same year. Finch survived Russell, dying in 1978.
Titles and honours from birth
Russell held throughout his life the following styles and honours:
from birth until 1908: The Honourable Bertrand Arthur William Russell
from 1908 until 1931: The Honourable Bertrand Arthur William Russell, FRS
from 1931 until 1949: The Right Honourable The Earl Russell, FRS
from 1949 until death: The Right Honourable The Earl Russell, OM, FRS
Views
Philosophy
Russell is generally credited with being one of the founders of analytic philosophy.
He was deeply impressed by Gottfried Leibniz (1646–1716), and wrote on every major area of philosophy except aesthetics. He was particularly prolific in the fields of metaphysics, logic and the philosophy of mathematics, the philosophy of language, ethics and epistemology. When Brand Blanshard asked Russell why he did not write on aesthetics, Russell replied that he did not know anything about it, though he hastened to add "but that is not a very good excuse, for my friends tell me it has not deterred me from writing on other subjects". On ethics, Russell wrote that he was a utilitarian in his youth, yet he later distanced himself from this view. For the advancement of science and protection of liberty of expression, Russell advocated The Will to Doubt, the recognition that all human knowledge is at most a best guess and that one should always bear this in mind.
Religion
Russell described himself in 1947 as an agnostic or an atheist; he found it difficult to determine which term to adopt. For most of his adult life, Russell maintained religion to be little more than superstition and, despite any positive effects, largely harmful to people. He believed that religion and the religious outlook serve to impede knowledge and foster fear and dependency, and to be responsible for much of our world's wars, oppression, and misery. He was a member of the Advisory Council of the British Humanist Association and President of Cardiff Humanists until his death.
Society
Political and social activism occupied much of Russell's time for most of his life. Russell remained politically active almost to the end of his life, writing to and exhorting world leaders and lending his name to various causes. He was a prominent campaigner against Western intervention in the Vietnam War in the 1960s, writing essays and books, attending demonstrations, and even organising the Russell Tribunal in 1966 alongside other prominent philosophers such as Jean-Paul Sartre and Simone de Beauvoir, which fed into his 1967 book War Crimes in Vietnam. Russell argued for a "scientific society", where war would be abolished, population growth would be limited, and prosperity would be shared. He suggested the establishment of a "single supreme world government" able to enforce peace, claiming that "the only thing that will redeem mankind is co-operation". He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt the Constitution for the Federation of Earth. Russell also expressed support for guild socialism, and commented positively on several socialist thinkers and activists. According to Jean Bricmont and Normand Baillargeon, "Russell was both a liberal and a socialist, a combination that was perfectly comprehensible in his time, but which has become almost unthinkable today. He was a liberal in that he opposed concentrations of power in all its manifestations, military, governmental, or religious, as well as the superstitious or nationalist ideas that usually serve as its justification. But he was also a socialist, even as an extension of his liberalism, because he was equally opposed to the concentrations of power stemming from the private ownership of the major means of production, which therefore needed to be put under social control (which does not mean state control)." Russell was an active supporter of the Homosexual Law Reform Society, being one of the signatories of A. E.
Dyson's 1958 letter to The Times calling for a change in the law regarding male homosexual practices, which were partly legalised in 1967, when Russell was still alive. He expressed sympathy and support for the Palestinian people and was strongly critical of Israel's actions. He wrote in 1960 that, "I think it was a mistake to establish a Jewish State in Palestine, but it would be a still greater mistake to try to get rid of it now that it exists." In his final written document, read aloud in Cairo three days after his death on 31 January 1970, he condemned Israel as an aggressive imperialist power, which "wishes to consolidate with the least difficulty what it has already taken by violence. Every new conquest becomes the new basis of the proposed negotiation from strength, which ignores the injustice of the previous aggression." In regards to the Palestinian people and refugees, he wrote that, "No people anywhere in the world would accept being expelled en masse from their own country; how can anyone require the people of Palestine to accept a punishment which nobody else would tolerate? A permanent just settlement of the refugees in their homeland is an essential ingredient of any genuine settlement in the Middle East." Russell advocated – and was one of the first people in the UK to suggest – a universal basic income. In his 1918 book Roads to Freedom, Russell wrote that "Anarchism has the advantage as regards liberty, Socialism as regards the inducement to work.  Can we not find a method of combining these two advantages? It seems to me that we can. [...] Stated in more familiar terms, the plan we are advocating amounts essentially to this: that a certain small income, sufficient for necessaries, should be secured to all, whether they work or not, and that a larger income – as much larger as might be warranted by the total amount of commodities produced – should be given to those who are willing to engage in some work which the community recognizes as useful...When education is finished, no one should be compelled to work, and those who choose not to work should receive a bare livelihood and be left completely free." In "Reflections on My Eightieth Birthday" ("Postscript" in his Autobiography), Russell wrote: "I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken". Freedom of opinion and expression Russell was a champion of freedom of opinion and an opponent of both censorship and indoctrination. In 1928, he wrote: "The fundamental argument for freedom of opinion is the doubtfulness of all our belief... when the State intervenes to ensure the indoctrination of some doctrine, it does so because there is no conclusive evidence in favour of that doctrine ... It is clear that thought is not free if the profession of certain opinions make it impossible to make a living". In 1957, he wrote: "'Free thought' means thinking freely ... to be worthy of the name freethinker he must be free of two things: the force of tradition and the tyranny of his own passions." 
Education
Russell presented ideas on how education might be controlled under a scientific dictatorship, notably in chapter II, "General Effects of Scientific Technique", of The Impact of Science on Society. He pushed these scenarios further into detail in chapter III, "Scientific Technique in an Oligarchy", of the same book.
Selected works
Below are selected works by Russell in English, sorted by year of first publication:
1896. German Social Democracy. London: Longmans, Green
1897. An Essay on the Foundations of Geometry. Cambridge: Cambridge University Press
1900. A Critical Exposition of the Philosophy of Leibniz. Cambridge: Cambridge University Press
1903. The Principles of Mathematics. Cambridge University Press
1903. A Free man's worship, and other essays.
1905. On Denoting, Mind, Vol. 14. Basil Blackwell
1910. Philosophical Essays. London: Longmans, Green
1910–1913. Principia Mathematica (with Alfred North Whitehead). 3 vols. Cambridge: Cambridge University Press
1912. The Problems of Philosophy. London: Williams and Norgate
1914. Our Knowledge of the External World as a Field for Scientific Method in Philosophy. Chicago and London: Open Court Publishing
1916. Principles of Social Reconstruction. London: George Allen and Unwin
1916. Why Men Fight. New York: The Century Co.
1916. The Policy of the Entente, 1904–1914: a reply to Professor Gilbert Murray. Manchester: The National Labour Press
1916. Justice in War-time. Chicago: Open Court
1917. Political Ideals. New York: The Century Co.
1918. Mysticism and Logic and Other Essays. London: George Allen & Unwin
1918. Proposed Roads to Freedom: Socialism, Anarchism, and Syndicalism. London: George Allen & Unwin
1919. Introduction to Mathematical Philosophy. London: George Allen & Unwin
1920. The Practice and Theory of Bolshevism. London: George Allen & Unwin
1921. The Analysis of Mind. London: George Allen & Unwin
1922. The Problem of China. London: George Allen & Unwin
1922. Free Thought and Official Propaganda, delivered at South Place Institute
1923. The Prospects of Industrial Civilization, in collaboration with Dora Russell. London: George Allen & Unwin
1923. The ABC of Atoms. London: Kegan Paul, Trench, Trubner
1924. Icarus; or, The Future of Science. London: Kegan Paul, Trench, Trubner
1925. The ABC of Relativity. London: Kegan Paul, Trench, Trubner (revised and edited by Felix Pirani)
1925. What I Believe. London: Kegan Paul, Trench, Trubner
1926. On Education, Especially in Early Childhood. London: George Allen & Unwin
1927. The Analysis of Matter. London: Kegan Paul, Trench, Trubner
1927. An Outline of Philosophy. London: George Allen & Unwin
1927. Why I Am Not a Christian. London: Watts
1927. Selected Papers of Bertrand Russell. New York: Modern Library
1928. Sceptical Essays. London: George Allen & Unwin
1929. Marriage and Morals. London: George Allen & Unwin
1930. The Conquest of Happiness. London: George Allen & Unwin
1931. The Scientific Outlook. London: George Allen & Unwin
1932. Education and the Social Order. London: George Allen & Unwin
1934. Freedom and Organization, 1814–1914. London: George Allen & Unwin
1935. In Praise of Idleness and Other Essays. London: George Allen & Unwin
1935. Religion and Science. London: Thornton Butterworth
1936. Which Way to Peace?. London: Jonathan Cape
1937. The Amberley Papers: The Letters and Diaries of Lord and Lady Amberley, with Patricia Russell, 2 vols., London: Leonard & Virginia Woolf at the Hogarth Press; reprinted (1966) as The Amberley Papers. Bertrand Russell's Family Background, 2 vols., London: George Allen & Unwin
1938. Power: A New Social Analysis. London: George Allen & Unwin
1940. An Inquiry into Meaning and Truth. New York: W. W. Norton & Company
1945. The Bomb and Civilisation. Published in the Glasgow Forward on 18 August 1945
1945. A History of Western Philosophy and Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day. New York: Simon and Schuster
1948. Human Knowledge: Its Scope and Limits. London: George Allen & Unwin
1949. Authority and the Individual. London: George Allen & Unwin
1950. . London: George Allen & Unwin
1951. New Hopes for a Changing World. London: George Allen & Unwin
1952. The Impact of Science on Society. London: George Allen & Unwin
1953. Satan in the Suburbs and Other Stories. London: George Allen & Unwin
1954. Human Society in Ethics and Politics. London: George Allen & Unwin
1954. Nightmares of Eminent Persons and Other Stories. London: George Allen & Unwin
1956. Portraits from Memory and Other Essays. London: George Allen & Unwin
1956. Logic and Knowledge: Essays 1901–1950, edited by Robert C. Marsh. London: George Allen & Unwin
1957. Why I Am Not A Christian and Other Essays on Religion and Related Subjects, edited by Paul Edwards. London: George Allen & Unwin
1958. Understanding History and Other Essays. New York: Philosophical Library
1958. The Will to Doubt. New York: Philosophical Library
1959. Common Sense and Nuclear Warfare. London: George Allen & Unwin
1959. My Philosophical Development. London: George Allen & Unwin
1959. Wisdom of the West: A Historical Survey of Western Philosophy in Its Social and Political Setting, edited by Paul Foulkes. London: Macdonald
1960. Bertrand Russell Speaks His Mind. Cleveland and New York: World Publishing Company
1961. The Basic Writings of Bertrand Russell, edited by R. E. Egner and L. E. Denonn. London: George Allen & Unwin
1961. Fact and Fiction. London: George Allen & Unwin
1961. Has Man a Future? London: George Allen & Unwin
1963. Essays in Skepticism. New York: Philosophical Library
1963. Unarmed Victory. London: George Allen & Unwin
1965. Legitimacy Versus Industrialism, 1814–1848. London: George Allen & Unwin (first published as Parts I and II of Freedom and Organization, 1814–1914, 1934)
1965. On the Philosophy of Science, edited by Charles A. Fritz, Jr. Indianapolis: The Bobbs–Merrill Company
1966. The ABC of Relativity. London: George Allen & Unwin
1967. Russell's Peace Appeals, edited by Tsutomu Makino and Kazuteru Hitaka. Japan: Eichosha's New Current Books
1967. War Crimes in Vietnam. London: George Allen & Unwin
1951–1969. The Autobiography of Bertrand Russell, 3 vols., London: George Allen & Unwin. Vol. 2, 1956
1969. Dear Bertrand Russell... A Selection of his Correspondence with the General Public 1950–1968, edited by Barry Feinberg and Ronald Kasrils. London: George Allen and Unwin
Russell was the author of more than sixty books and over two thousand articles. Additionally, he wrote many pamphlets, introductions, and letters to the editor.
One pamphlet, titled 'I Appeal unto Caesar': The Case of the Conscientious Objectors and ghostwritten for Margaret Hobhouse, the mother of imprisoned peace activist Stephen Hobhouse, allegedly helped secure the release from prison of hundreds of conscientious objectors. His works can be found in anthologies and collections, including The Collected Papers of Bertrand Russell, which McMaster University began publishing in 1983. By March 2017 this collection of his shorter and previously unpublished works included 18 volumes, and several more are in progress. A bibliography in three additional volumes catalogues his publications. The Russell Archives held by McMaster's William Ready Division of Archives and Research Collections possess over 40,000 of his letters.
See also
Cambridge University Moral Sciences Club
Criticism of Jesus
Joseph Conrad (Russell's impression)
List of peace activists
List of pioneers in computer science
Information Research Department
Type theory
Type system
Logicomix, a graphic novel about the foundational quest in mathematics, the narrator of the story being Bertrand Russell and with his life as the main storyline
References
Primary sources
1900, Sur la logique des relations avec des applications à la théorie des séries, Rivista di matematica 7: 115–148.
1901, On the Notion of Order, Mind (n.s.) 10: 35–51.
1902, (with Alfred North Whitehead), On Cardinal Numbers, American Journal of Mathematics 24: 367–384.
1948, BBC Reith Lectures: Authority and the Individual. A series of six radio lectures broadcast on the BBC Home Service in December 1948.
Secondary sources
John Newsome Crossley. A Note on Cantor's Theorem and Russell's Paradox, Australian Journal of Philosophy 51, 1973, 70–71.
Ivor Grattan-Guinness. The Search for Mathematical Roots 1870–1940. Princeton: Princeton University Press, 2000.
Alan Ryan. Bertrand Russell: A Political Life, New York: Oxford University Press, 1981.
Further reading
Books about Russell's philosophy
Alfred Julius Ayer. Russell, London: Fontana, 1972. A lucid summary exposition of Russell's thought.
Elizabeth Ramsden Eames. Bertrand Russell's Theory of Knowledge, London: George Allen and Unwin, 1969. A clear description of Russell's philosophical development.
Celia Green. The Lost Cause: Causation and the Mind-Body Problem, Oxford: Oxford Forum, 2003. Contains a sympathetic analysis of Russell's views on causality.
A. C. Grayling. Russell: A Very Short Introduction, Oxford University Press, 2002.
Nicholas Griffin. Russell's Idealist Apprenticeship, Oxford: Oxford University Press, 1991.
A. D. Irvine, ed. Bertrand Russell: Critical Assessments, 4 volumes, London: Routledge, 1999. Consists of essays on Russell's work by many distinguished philosophers.
Michael K. Potter. Bertrand Russell's Ethics, Bristol: Thoemmes Continuum, 2006. A clear and accessible explanation of Russell's moral philosophy.
P. A. Schilpp, ed. The Philosophy of Bertrand Russell, Evanston and Chicago: Northwestern University, 1944.
John Slater. Bertrand Russell, Bristol: Thoemmes Press, 1994.
Biographical books
A. J. Ayer. Bertrand Russell, New York: Viking Press, 1972; reprint ed. London: University of Chicago Press, 1988.
Andrew Brink. Bertrand Russell: A Psychobiography of a Moralist, Atlantic Highlands, NJ: Humanities Press International, Inc., 1989.
Ronald W. Clark. The Life of Bertrand Russell, London: Jonathan Cape, 1975.
Ronald W. Clark. Bertrand Russell and His World, London: Thames & Hudson, 1981.
Rupert Crawshay-Williams. Russell Remembered, London: Oxford University Press, 1970. Written by a close friend of Russell's.
John Lewis. Bertrand Russell: Philosopher and Humanist, London: Lawrence & Wishart, 1968.
Ray Monk. Bertrand Russell: Mathematics: Dreams and Nightmares, London: Phoenix, 1997.
Ray Monk. Bertrand Russell: The Spirit of Solitude, 1872–1920, Vol. I, New York: Routledge, 1997.
Ray Monk. Bertrand Russell: The Ghost of Madness, 1921–1970, Vol. II, New York: Routledge, 2001.
Caroline Moorehead. Bertrand Russell: A Life, New York: Viking, 1993.
George Santayana. "Bertrand Russell", in Selected Writings of George Santayana, Norman Henfrey (ed.), Cambridge: Cambridge University Press, I, 1968, pp. 326–329.
Peter Stone et al. Bertrand Russell's Life and Legacy. Wilmington: Vernon Press, 2017.
Katharine Tait. My Father Bertrand Russell, New York: Thoemmes Press, 1975.
Alan Wood. Bertrand Russell: The Passionate Sceptic, London: George Allen & Unwin, 1957.
External links
Bertrand Russell – media on YouTube
The Bertrand Russell Archives at McMaster University
The Bertrand Russell Society
BBC Face to Face interview with Bertrand Russell and John Freeman, broadcast 4 March 1959
Nobel Lecture, 11 December 1950: "What Desires Are Politically Important?"
Interview with Ray Monk at Today, 18 May 2022 (from 2:58:35)
The Boeing 767 is an American wide-body aircraft developed and manufactured by Boeing Commercial Airplanes. The aircraft was launched as the 7X7 program on July 14, 1978, the prototype first flew on September 26, 1981, and it was certified on July 30, 1982. The initial 767-200 variant entered service on September 8, 1982, with United Airlines, and the extended-range 767-200ER in 1984. It was stretched into the 767-300 in October 1986, followed by the 767-300ER in 1988, the most popular variant. The 767-300F, a production freighter version, debuted in October 1995. It was stretched again into the 767-400ER from September 2000. To complement the larger 747, it has a seven-abreast cross-section, accommodating smaller LD2 ULD cargo containers. The 767 is Boeing's first wide-body twinjet, powered by General Electric CF6, Rolls-Royce RB211, or Pratt & Whitney JT9D turbofans. JT9D engines were eventually replaced by PW4000 engines. The aircraft has a conventional tail and a supercritical wing for reduced aerodynamic drag. Its two-crew glass cockpit, a first for a Boeing airliner, was developed jointly with that of the 757, a narrow-body aircraft, allowing a common pilot type rating. Studies for a higher-capacity 767 in 1986 led Boeing to develop the larger 777 twinjet, introduced in June 1995. The 767-200 typically seats 216 passengers over 3,900 nautical miles (nmi; 7,200 km), while the 767-200ER seats 181 over a 6,590 nmi (12,200 km) range. The 767-300 typically seats 269 passengers over 3,900 nmi (7,200 km), while the 767-300ER seats 218 over 5,980 nmi (11,070 km). The 767-300F can haul cargo over 3,225 nmi (6,025 km), and the 767-400ER typically seats 245 passengers over 5,625 nmi (10,415 km). Military derivatives include the E-767 for surveillance and the KC-767 and KC-46 aerial tankers. The aircraft was initially marketed for transcontinental routes; a loosening of ETOPS rules starting in 1985 allowed it to operate transatlantic flights. A total of 742 of these aircraft were in service in July 2018, with Delta Air Lines being the largest operator with 77 aircraft in its fleet. Boeing has received 1,392 orders from 74 customers, of which 1,288 airplanes have been delivered, while the remaining orders are for cargo or tanker variants. Competitors have included the Airbus A300, A310, and A330-200. Its successor, the 787 Dreamliner, entered service in 2011.
Development
Background
In 1970, the 747 entered service as the first wide-body jetliner with a fuselage wide enough to feature a twin-aisle cabin. Two years later, the manufacturer began a development study, code-named 7X7, for a new wide-body jetliner intended to replace the 707 and other early generation narrow-body airliners. The aircraft would also provide twin-aisle seating, but in a smaller fuselage than the existing 747, McDonnell Douglas DC-10, and Lockheed L-1011 TriStar wide-bodies. To defray the high cost of development, Boeing signed risk-sharing agreements with Italian corporation Aeritalia and the Civil Transport Development Corporation (CTDC), a consortium of Japanese aerospace companies. This marked the manufacturer's first major international joint venture, and both Aeritalia and the CTDC received supply contracts in return for their early participation. The initial 7X7 was conceived as a short take-off and landing airliner intended for short-distance flights, but customers were unenthusiastic about the concept, leading to its redefinition as a mid-size, transcontinental-range airliner.
At this stage the proposed aircraft featured two or three engines, with possible configurations including over-wing engines and a T-tail. By 1976, a twinjet layout, similar to the one which had debuted on the Airbus A300, became the baseline configuration. The decision to use two engines reflected increased industry confidence in the reliability and economics of new-generation jet powerplants. While airline requirements for new wide-body aircraft remained ambiguous, the 7X7 was generally focused on mid-size, high-density markets. As such, it was intended to transport large numbers of passengers between major cities. Advancements in civil aerospace technology, including high-bypass-ratio turbofan engines, new flight deck systems, aerodynamic improvements, and more efficient lightweight designs were to be applied to the 7X7. Many of these features were also included in a parallel development effort for a new mid-size narrow-body airliner, code-named 7N7, which would become the 757. Work on both proposals proceeded through the airline industry upturn in the late 1970s. In January 1978, Boeing announced a major extension of its Everett factory—which was then dedicated to manufacturing the 747—to accommodate its new wide-body family. In February 1978, the new jetliner received the 767 model designation, and three variants were planned: a 767-100 with 190 seats, a 767-200 with 210 seats, and a trijet 767MR/LR version with 200 seats intended for intercontinental routes. The 767MR/LR was subsequently renamed 777 for differentiation purposes. The 767 was officially launched on July 14, 1978, when United Airlines ordered 30 of the 767-200 variant, followed by 50 more 767-200 orders from American Airlines and Delta Air Lines later that year. The 767-100 was ultimately not offered for sale, as its capacity was too close to the 757's seating, while the 777 trijet was eventually dropped in favor of standardizing the twinjet configuration.
Design effort
In the late 1970s, operating cost replaced capacity as the primary factor in airliner purchases. As a result, the 767's design process emphasized fuel efficiency from the outset. Boeing targeted a 20 to 30 percent cost saving over earlier aircraft, mainly through new engine and wing technology. As development progressed, engineers used computer-aided design for over a third of the 767's design drawings, and performed 26,000 hours of wind tunnel tests. Design work occurred concurrently with the 757 twinjet, leading Boeing to treat both as almost one program to reduce risk and cost. Both aircraft would ultimately receive shared design features, including avionics, flight management systems, instruments, and handling characteristics. Combined development costs were estimated at $3.5 to $4 billion. Early 767 customers were given the choice of Pratt & Whitney JT9D or General Electric CF6 turbofans, marking the first time that Boeing had offered more than one engine option at the launch of a new airliner. Both jet engine models were offered at a similar maximum thrust rating. The engines were mounted approximately one-third the length of the wing from the fuselage, similar to previous wide-body trijets. The larger wings were designed using an aft-loaded shape which reduced aerodynamic drag and distributed lift more evenly across their surface span than any of the manufacturer's previous aircraft. The wings provided higher-altitude cruise performance, added fuel capacity, and expansion room for future stretched variants.
The initial 767-200 was designed with sufficient range to fly across North America or across the northern Atlantic. The 767's fuselage width was set midway between that of the 707 and the 747. While it was narrower than previous wide-body designs, seven-abreast seating with two aisles could be fitted, and the reduced width produced less aerodynamic drag. The fuselage was not wide enough to accommodate two standard LD3 wide-body unit load devices side-by-side, so a smaller container, the LD2, was created specifically for the 767. Using a conventional tail design also allowed the rear fuselage to be tapered over a shorter section, providing for parallel aisles along the full length of the passenger cabin, and eliminating irregular seat rows toward the rear of the aircraft. The 767 was the first Boeing wide-body to be designed with a two-crew digital glass cockpit. Cathode ray tube (CRT) color displays and new electronics replaced the role of the flight engineer by enabling the pilot and co-pilot to monitor aircraft systems directly. Despite the promise of reduced crew costs, United Airlines initially demanded a conventional three-person cockpit, citing concerns about the risks associated with introducing a new aircraft. The carrier maintained this position until July 1981, when a US presidential task force determined that a crew of two was safe for operating wide-body jets. A three-crew cockpit remained as an option and was fitted to the first production models. Ansett Australia ordered 767s with three-crew cockpits due to union demands; it was the only airline to operate 767s so configured. The 767's two-crew cockpit was also applied to the 757, allowing pilots to operate both aircraft after a short conversion course, and adding incentive for airlines to purchase both types.
Production and testing
To produce the 767, Boeing formed a network of subcontractors which included domestic suppliers and international contributions from Italy's Aeritalia and Japan's CTDC. The wings and cabin floor were produced in-house, while Aeritalia provided control surfaces, Boeing Vertol made the leading edge for the wings, and Boeing Wichita produced the forward fuselage. The CTDC provided multiple assemblies through its constituent companies, namely Fuji Heavy Industries (wing fairings and gear doors), Kawasaki Heavy Industries (center fuselage), and Mitsubishi Heavy Industries (rear fuselage, doors, and tail). Components were integrated during final assembly at the Everett factory. For expedited production of wing spars, the main structural member of aircraft wings, the Everett factory received robotic machinery to automate the process of drilling holes and inserting fasteners. This method of wing construction expanded on techniques developed for the 747. Final assembly of the first aircraft began in July 1979. The prototype aircraft, registered N767BA and equipped with JT9D turbofans, rolled out on August 4, 1981. By this time, the 767 program had accumulated 173 firm orders from 17 customers, including Air Canada, All Nippon Airways, Britannia Airways, Transbrasil, and Trans World Airlines (TWA). On September 26, 1981, the prototype took its maiden flight under the command of company test pilots Tommy Edmonds, Lew Wallick, and John Brit. The maiden flight was largely uneventful, save for the inability to retract the landing gear because of a hydraulic fluid leak. The prototype was used for subsequent flight tests.
The 10-month 767 flight test program utilized the first six aircraft built. The first four aircraft were equipped with JT9D engines, while the fifth and sixth were fitted with CF6 engines. The test fleet was largely used to evaluate avionics, flight systems, handling, and performance, while the sixth aircraft was used for route-proving flights. During testing, pilots described the 767 as generally easy to fly, with its maneuverability unencumbered by the bulkiness associated with larger wide-body jets. Following 1,600 hours of flight tests, the JT9D-powered 767-200 received certification from the US Federal Aviation Administration (FAA) and the UK Civil Aviation Authority (CAA) in July 1982. The first delivery occurred on August 19, 1982, to United Airlines. The CF6-powered 767-200 received certification in September 1982, followed by the first delivery to Delta Air Lines on October 25, 1982.
Entry into service
The 767 entered service with United Airlines on September 8, 1982. The aircraft's first commercial flight used a JT9D-powered 767-200 on the Chicago-to-Denver route. The CF6-powered 767-200 commenced service three months later with Delta Air Lines. Upon delivery, early 767s were mainly deployed on domestic routes, including US transcontinental services. American Airlines and TWA began flying the 767-200 in late 1982, while Air Canada, China Airlines, El Al, and Pacific Western began operating the aircraft in 1983. The aircraft's introduction was relatively smooth, with few operational glitches and greater dispatch reliability than prior jetliners.
Stretched derivatives
Forecasting airline interest in larger-capacity models, Boeing announced the stretched 767-300 in 1983 and the extended-range 767-300ER in 1984. Both models offered a 20 percent passenger capacity increase, while the extended-range version was capable of operating substantially longer flights. Japan Airlines placed the first order for the -300 in September 1983. Following its first flight on January 30, 1986, the type entered service with Japan Airlines on October 20, 1986. The 767-300ER completed its first flight on December 9, 1986, but it was not until March 1987 that the first firm order, from American Airlines, was placed. The type entered service with American Airlines on March 3, 1988. The 767-300 and 767-300ER gained popularity after entering service, and came to account for approximately two-thirds of all 767s sold. After the debut of the first stretched 767s, Boeing sought to address airline requests for greater capacity by proposing larger models, including a partial double-deck version informally named the "Hunchback of Mukilteo" (from a town near Boeing's Everett factory) with a 757 body section mounted over the aft main fuselage. In 1986, Boeing proposed the 767-X, a revised model with extended wings and a wider cabin, but received little interest. By 1988, the 767-X had evolved into an all-new twinjet, which revived the 777 designation. Until the 777's 1995 debut, the 767-300 and 767-300ER remained Boeing's second-largest wide-bodies behind the 747. Buoyed by a recovering global economy and ETOPS approval, 767 sales accelerated in the mid-to-late 1980s; 1989 was the most prolific year with 132 firm orders. By the early 1990s, the wide-body twinjet had become its manufacturer's annual best-selling aircraft, despite a slight decrease due to economic recession. During this period, the 767 became the most common airliner for transatlantic flights between North America and Europe.
By the end of the decade, 767s crossed the Atlantic more frequently than all other aircraft types combined. The 767 also propelled the growth of point-to-point flights which bypassed major airline hubs in favor of direct routes. Taking advantage of the aircraft's lower operating costs and smaller capacity, operators added non-stop flights to secondary population centers, thereby eliminating the need for connecting flights. The increased number of cities receiving non-stop services caused a paradigm shift in the airline industry as point-to-point travel gained prominence at the expense of the traditional hub-and-spoke model. In February 1990, the first 767 equipped with Rolls-Royce RB211 turbofans was delivered to British Airways. Six months later, the carrier temporarily grounded its entire 767 fleet after discovering cracks in the engine pylons of several aircraft. The cracks were related to the extra weight of the RB211 engines, which are heavier than other 767 engines. During the grounding, interim repairs were conducted to alleviate stress on engine pylon components, and a parts redesign in 1991 prevented further cracks. Boeing also performed a structural reassessment, resulting in production changes and modifications to the engine pylons of all 767s in service. In January 1993, following an order from UPS Airlines, Boeing launched a freighter variant, the 767-300F, which entered service with UPS on October 16, 1995. The 767-300F featured a main deck cargo hold, upgraded landing gear, and strengthened wing structure. In November 1993, the Japanese government launched the first 767 military derivative when it placed orders for the E-767, an Airborne Early Warning and Control (AWACS) variant based on the 767-200ER. The first two E-767s, featuring extensive modifications to accommodate surveillance radar and other monitoring equipment, were delivered in 1998 to the Japan Self-Defense Forces. In November 1995, after abandoning development of a smaller version of the 777, Boeing announced that it was revisiting studies for a larger 767. The proposed 767-400X, a second stretch of the aircraft, offered a further 12 percent capacity increase, and featured an upgraded flight deck, enhanced interior, and greater wingspan. The variant was specifically aimed at Delta Air Lines' pending replacement of its aging Lockheed L-1011 TriStars, and faced competition from the A330-200, a shortened derivative of the Airbus A330. In March 1997, Delta Air Lines launched the 767-400ER when it ordered the type to replace its L-1011 fleet. In October 1997, Continental Airlines also ordered the 767-400ER to replace its McDonnell Douglas DC-10 fleet. The type completed its first flight on October 9, 1999, and entered service with Continental Airlines on September 14, 2000.
Dreamliner introduction
In the early 2000s, cumulative 767 deliveries approached 900, but new sales declined during an airline industry downturn. In 2001, Boeing dropped plans for a longer-range model, the 767-400ERX, in favor of the proposed Sonic Cruiser, a new jetliner which aimed to fly 15 percent faster while having comparable fuel costs to the 767. The following year, Boeing announced the KC-767 Tanker Transport, a second military derivative of the 767-200ER. Launched with an order in October 2002 from the Italian Air Force, the KC-767 was intended for the dual role of refueling other aircraft and carrying cargo. The Japanese government became the second customer for the type in March 2003.
In May 2003, the United States Air Force (USAF) announced its intent to lease KC-767s to replace its aging KC-135 tankers. The plan was suspended in March 2004 amid a conflict of interest scandal, resulting in multiple US government investigations and the departure of several Boeing officials, including Philip Condit, the company's chief executive officer, and chief financial officer Michael Sears. The first KC-767s were delivered in 2008 to the Japan Self-Defense Forces. In late 2002, after airlines expressed reservations about its emphasis on speed over cost reduction, Boeing halted development of the Sonic Cruiser. The following year, the manufacturer announced the 7E7, a mid-size 767 successor made from composite materials which promised to be 20 percent more fuel efficient. The new jetliner was the first stage of a replacement aircraft initiative called the Boeing Yellowstone Project. Customers embraced the 7E7, later renamed 787 Dreamliner, and within two years it had become the fastest-selling airliner in the company's history. In 2005, Boeing opted to continue 767 production despite record Dreamliner sales, citing a need to provide customers waiting for the 787 with a more readily available option. Subsequently, the 767-300ER was offered to customers affected by 787 delays, including All Nippon Airways and Japan Airlines. Some aging 767s, exceeding 20 years in age, were also kept in service past planned retirement dates due to the delays. To extend the operational lives of older aircraft, airlines increased heavy maintenance procedures, including D-check teardowns and inspections for corrosion, a recurring issue on aging 767s. The first 787s entered service with All Nippon Airways in October 2011, 42 months behind schedule.
Continued production
In 2007, the 767 received a production boost when UPS and DHL Aviation placed a combined 33 orders for the 767-300F. Renewed freighter interest led Boeing to consider enhanced versions of the 767-200 and 767-300F with increased gross weights, 767-400ER wing extensions, and 777 avionics. Net orders for the 767 declined from 24 in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets improved fuel efficiency by an estimated 6.5 percent. Other carriers including All Nippon Airways and Delta Air Lines also ordered winglet kits. On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent. At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming.
On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013. In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy 19 more of the -300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million. On July 21, 2015, FedEx announced an order for 50 767-300Fs with options on another 50, the largest order for the type. With the announcement FedEx confirmed that it had firm orders for 106 of the freighters for delivery between 2018 and 2023. In February 2018, UPS announced an order for 4 more 767-300Fs to increase the total on order to 63. Because its planned successor, the Boeing New Midsize Airplane, was not expected before 2025 and the 787 is much larger, Boeing considered restarting passenger 767-300ER production to bridge the gap, for which a demand of 50 to 60 aircraft was projected. Needing to replace its 40 767s, United Airlines requested price quotes for other widebodies. In November 2017, Boeing CEO Dennis Muilenburg cited interest beyond military and freighter uses. However, in early 2018 Boeing Commercial Airplanes VP of marketing Randy Tinseth stated that the company did not intend to resume production of the passenger variant. In its first-quarter 2018 earnings report, Boeing said it planned to increase production from 2.5 to 3 aircraft per month beginning in January 2020 due to increased demand in the cargo market, as FedEx had 56 on order, UPS had four, and an unidentified customer had three on order. This rate could rise to 3.5 per month in July 2020 and 4 per month in January 2021, before decreasing to 3 per month in January 2025 and then 2 per month in July 2025. In 2019, unit cost was US$217.9 million for a -300ER, and US$220.3 million for a -300F. Re-engined 767-XF In October 2019, Boeing was reportedly studying a re-engined 767-XF for entry into service around 2025, based on the 767-400ER with an extended landing gear to accommodate larger General Electric GEnx turbofan engines. The cargo market is the main target, but a passenger version could be a cheaper alternative to the proposed New Midsize Airplane. Design Overview The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8. Each wing features a supercritical airfoil cross-section and is equipped with six-panel leading edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer composite wing surfaces, Kevlar fairings and access panels, plus improved aluminum alloys, which together reduce overall weight versus preceding aircraft. To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear.
The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid. All passenger 767 models have exit doors near the front and rear of the aircraft. Most 767-200 and -200ER models have one overwing exit door for emergency use; an optional second overwing exit increases maximum allowable capacity from 255 to 290. The 767-300 and -300ER typically feature two overwing exit doors or, in a configuration with no overwing exits, three exit doors on each side and a smaller exit door aft of the wing. A further configuration featuring three exit doors on each side plus one overwing exit allows an increase in maximum capacity from 290 to 351. All 767-400ERs are configured with three exit doors on each side and a smaller exit door aft of the wing. The 767-300F has one exit door at the forward left-hand side of the aircraft. In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft. Flight systems The original 767 flight deck uses six Rockwell Collins CRT screens to display Electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low visibility situations. The 767 became the first aircraft to receive CAT IIIb certification from the FAA for landings with minimum visibility in 1984. On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each. The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers. Interior The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. 
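That window-or-aisle share follows from simple seat counting, and a quick back-of-the-envelope check reproduces it. The Python sketch below is illustrative only: the business/economy row split is an assumption made for the example, not a figure taken from this article.

def window_or_aisle_share(rows):
    # rows: list of (row_count, seats_abreast, window_or_aisle_seats_per_row)
    total = sum(count * abreast for count, abreast, _ in rows)
    favoured = sum(count * favoured_per_row for count, _, favoured_per_row in rows)
    return favoured / total

cabin = [
    (3, 6, 6),   # assumed business section, 2-2-2: every seat touches a window or an aisle
    (32, 7, 6),  # assumed economy section, 2-3-2: only the centre seat of the middle block does not
]

print(f"window/aisle share: {window_or_aisle_share(cabin):.0%}")
# prints 87% for this assumed split, consistent with the figure quoted above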
As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common. The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are of an overhead plug type that retracts upwards, and commonly used doors can be equipped with an electric-assist system. In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna. Operational history In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for all-new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications. Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads over longer distances, and was targeted at overseas customers. The 767-200ER entered service with El Al on March 27, 1984. The type was mainly ordered by international airlines operating medium-traffic, long-distance flights. In May 1984, an Ethiopian Airlines 767-200ER set a non-stop distance record for a commercial twinjet, flying from Washington DC to Addis Ababa. In the mid-1980s, the 767 and its European rivals, the Airbus A300 and A310, spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. In 1976, the A300 was the first twinjet to secure permission to fly 90 minutes away from diversion airports, up from 60 minutes. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to the 767, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The 767 burned less fuel per hour than a Lockheed L-1011 TriStar on the route between Boston and Paris, a substantial saving in operating cost. The Airbus A310 secured approval for 120-minute ETOPS flights one month later in June.
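As a rough illustration of what those diversion-time limits mean in practice, the short Python sketch below converts ETOPS minutes into an approximate still-air radius from a suitable diversion airport. The 400-knot one-engine-inoperative cruising speed is an assumed round figure chosen for the example, not a value taken from this article or from the certification rules.

ASSUMED_DIVERSION_SPEED_KT = 400  # assumed single-engine cruise speed in knots (nautical miles per hour)

def diversion_radius_nmi(etops_minutes, speed_kt=ASSUMED_DIVERSION_SPEED_KT):
    # distance = speed (nmi per hour) * time (hours)
    return speed_kt * etops_minutes / 60.0

for minutes in (60, 90, 120, 180):
    print(f"ETOPS {minutes:>3} min ~ {diversion_radius_nmi(minutes):,.0f} nmi from a diversion airport")
# Under this assumption, 90 minutes corresponds to roughly 600 nmi and 180 minutes to roughly 1,200 nmi,
# which is why longer approvals opened up progressively more direct transoceanic routings.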
The larger safety margins were permitted because of the improved reliability demonstrated by twinjets and their turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic flights with twinjet aircraft and boosted the sales of both the 767 and its rivals. Variants The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models. When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator, e.g. -200 or -300, into a truncated form, e.g. "762" or "763". Subsequent to the capacity number, designations may append the range identifier, though -200ER and -300ER are company marketing designations and not certificated as such. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763"; the 767-400ER receives the designation of "B764". 767-200 The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985, under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 52 examples of the model in commercial service, almost entirely as freighter conversions. The type's competitors included the Airbus A300 and A310. The 767-200 was produced until 1987 when production switched to the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters. 767-2C A commercial freighter version of the Boeing 767 with wings from the -300 series and an updated flightdeck was first flown on December 29, 2014. A military tanker variant of the Boeing 767-2C is being developed for the USAF as the KC-46. Boeing is building two aircraft as commercial freighters which will be used to obtain Federal Aviation Administration certification; a further two Boeing 767-2Cs will be modified as military tankers. Boeing does not have customers for the freighter. 767-200ER The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to extra fuel capacity and a higher maximum takeoff weight (MTOW). The additional fuel capacity is accomplished by using the center tank's dry bay to carry fuel.
The non-ER variant's center tank consists of so-called cheek tanks: two interconnected halves, one in each wing root, with a dry bay in between. The center tank is also used on the -300ER and -400ER variants. This version was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988, with an Air Mauritius flight from Halifax, Nova Scotia to Port Louis, Mauritius. The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2018, 21 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300. 767-300 The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings. Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. The 767-300 was produced from 1986 until 2000. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining. As of July 2018, 34 of the variant were in airline service. The type's main competitor was the Airbus A300. 767-300ER The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW. Design improvements allowed the available MTOW to be increased further by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. The 767-300ER comes in three exit configurations: the baseline configuration has four main cabin doors and four over-wing window exits; the second configuration has six main cabin doors and two over-wing window exits; and the third configuration has six main cabin doors, as well as two smaller doors that are located behind the wings. Typical routes for the type include New York to Frankfurt. The combination of increased capacity and range for the -300ER has been particularly attractive to both new and existing 767 operators. It is the most successful 767 version, with more orders placed than all other variants combined. 767-300ER deliveries stand at 583 with no unfilled orders. There were 376 examples in service. The type's main competitor is the Airbus A330-200. At its 1990s peak, a new 767-300ER was valued at $85 million, dipping to around $12 million in 2018 for a 1996 build. 767-300F The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995. The 767-300F can hold up to 24 standard pallets on its main deck and up to 30 LD2 unit load devices on the lower deck.
The freighter has a main deck cargo door and crew exit, while the lower deck features two starboard-side cargo doors and one port-side cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. , 767-300F deliveries stand at 161 with 61 unfilled orders. Airlines operated 222 examples of the freighter variant and freighter conversions in July 2018. Converted freighters In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment. Israel Aerospace Industries offers a passenger-to-freighter conversion program called the 767-300BDSF (BEDEK Special Freighter). Wagner Aeronautical also offers a passenger-to-freighter conversion program for series aircraft. 767-400ER The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a stretch over the , for a total length of . The wingspan is also increased by through the addition of raked wingtips. The exit configuration uses six main cabin doors and two smaller exit doors behind the wings, similar to certain 767-300ERs. Other differences include an updated cockpit, redesigned landing gear, and 777-style Signature Interior. Power is provided by uprated General Electric CF6 engines. The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a range of , less than previous extended-range 767s. No 767-400 (non-extended range) version was developed. The longer-range 767-400ERX was offered in July 2000 before being cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014. A total of 37 767-400ERs were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2018. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200. Military and government Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft. Airborne Surveillance Testbed – the Airborne Optical Adjunct (AOA) was modified from the prototype 767-200 for a United States Army program, under a contract signed with the Strategic Air Command in July 1984. Intended to evaluate the feasibility of using airborne optical sensors to detect and track hostile intercontinental ballistic missiles, the modified aircraft first flew on August 21, 1987. Alterations included a large "cupola" or hump on the top of the aircraft from above the cockpit to just behind the trailing edge of the wings, and a pair of ventral fins below the rear fuselage. 
Inside the cupola was a suite of infrared seekers used for tracking theater ballistic missile launches. The aircraft was later renamed as the Airborne Surveillance Testbed (AST). Following the end of the AST program in 2002, the aircraft was retired for scrapping. E-767 – the Airborne Early Warning and Control (AWACS) platform for the Japan Self-Defense Forces; it is essentially the Boeing E-3 Sentry mission package on a 767-200ER platform. E-767 modifications, completed on 767-200ERs flown from the Everett factory to Boeing Integrated Defense Systems in Wichita, Kansas, include strengthening to accommodate a dorsal surveillance radar system, engine nacelle alterations, as well as electrical and interior changes. Japan operates four E-767s. The first E-767s were delivered in March 1998. KC-767 Tanker Transport – the 767-200ER-based aerial refueling platform operated by the Italian Air Force (Aeronautica Militare), and the Japan Self-Defense Forces. Modifications conducted by Boeing Integrated Defense Systems include the addition of a fly-by-wire refueling boom, strengthened flaps, and optional auxiliary fuel tanks, as well as structural reinforcement and modified avionics. The four KC-767Js ordered by Japan have been delivered. The Aeronautica Militare received the first of its four KC-767As in January 2011. KC-767 Advanced Tanker – the 767-200ER-based aerial tanker developed for the USAF KC-X tanker competition. It is an updated version of the KC-767, originally selected as the USAF's new tanker aircraft in 2003, designated KC-767A, and then dropped amid conflict of interest allegations. The KC-767 Advanced Tanker is derived from studies for a longer-range cargo version of the 767-200ER, and features a fly-by-wire refueling boom, a remote vision refueling system, and a 767-400ER-based flight deck with LCD screens and head-up displays. KC-46 - a 767-based tanker, not derived from the KC-767, awarded as part of the KC-X contract for the USAF. Tanker conversions – the 767 MMTT or Multi-Mission Tanker Transport is a 767-200ER-based aircraft operated by the Colombian Air Force (Fuerza Aérea Colombiana) and modified by Israel Aerospace Industries. In 2013, the Brazilian Air Force ordered two 767-300ER tanker conversions from IAI for its KC-X2 program. E-10 MC2A - the Northrop Grumman E-10 was to be a 767-400ER-based replacement for the USAF's 707-based E-3 Sentry AWACS, Northrop Grumman E-8 Joint STARS, and RC-135 SIGINT aircraft. The E-10 would have included an all-new AWACS system, with a powerful active electronically scanned array (AESA) that was also capable of jamming enemy aircraft or missiles. One 767-400ER aircraft was built as a testbed for systems integration, but the program was terminated in January 2009 and the prototype was later sold to Bahrain as a VIP transport. Undeveloped variants 767-X In 1986, Boeing announced plans for a partial double-deck Boeing 767 design. The aircraft would have combined the Boeing with a Boeing 757 cross section mounted over the rear fuselage. The Boeing 767-X would have also featured extended wings and a wider cabin. The 767-X did not get enough interest from airlines to launch and the model was shelved in 1988 in favor of the Boeing 777. 767-400ERX In March 2000, Boeing was to launch the 259-seat 767-400ERX with an initial order for three from Kenya Airways with deliveries planned for 2004, as it was proposed to Lauda Air. Increased gross weight and a tailplane fuel tank would have boosted its range by , and GE could offer its CF6-80C2/G2. 
Rolls-Royce offered its Trent 600 for the 767-400ERX and the Boeing 747X. Offered in July 2000, the longer-range -400ERX would have a strengthened wing, fuselage and landing gear for a 15,000 lb (6.8 t) higher MTOW, up to 465,000 lb (210.92 t). Thrust would rise for better takeoff performance, with the Trent 600 or the General Electric/Pratt & Whitney Engine Alliance GP7172, also offered on the 747X. Range would increase by 525 nmi (950 km) to 6,150 nmi (11,390 km), with an additional fuel tank of 2,145 U.S. gallons (8,120 L) in the horizontal tail. The 767-400ERX would offer the capacity of the Airbus A330-200 with 3% lower fuel burn and costs. Boeing cancelled the variant development in 2001. Kenya Airways then switched its order to the 777-200ER. Operators In July 2018, 742 aircraft were in airline service: 73 -200s, 632 -300s, and 37 -400ERs, with 65 -300Fs on order; the largest operators are Delta Air Lines (77), FedEx (60; largest cargo operator), UPS Airlines (59), United Airlines, Japan Airlines (35), and All Nippon Airways (34). The largest 767 customers by orders placed are FedEx Express (150), Delta Air Lines (117), All Nippon Airways (96), American Airlines (88), and United Airlines (82). Delta and United are the only customers of all -200, -300, and -400ER passenger variants. In July 2015, FedEx placed a firm order for 50 Boeing 767 freighters with deliveries from 2018 to 2023. Orders and deliveries Boeing 767 orders and deliveries are tabulated cumulatively by year. Model summary Accidents and incidents The Boeing 767 has been involved in 60 aviation occurrences, including 19 hull-loss accidents. Seven fatal crashes, including three hijackings, have resulted in a total of 854 occupant fatalities. Fatal accidents The airliner's first fatal crash, Lauda Air Flight 004, occurred near Bangkok on May 26, 1991, following the in-flight deployment of the left engine thrust reverser on a 767-300ER. None of the 223 aboard survived. As a result of this accident, all 767 thrust reversers were deactivated until a redesign was implemented. Investigators determined that an electronically controlled valve, common to late-model Boeing aircraft, was to blame. A new locking device was installed on all affected jetliners, including 767s. On October 31, 1999, EgyptAir Flight 990, a 767-300ER, crashed off Nantucket, Massachusetts, in international waters, killing all 217 people on board. The United States National Transportation Safety Board (NTSB) determined the probable cause to be a deliberate action by the first officer; Egypt disputed this conclusion. On April 15, 2002, Air China Flight 129, a 767-200ER, crashed into a hill amid inclement weather while trying to land at Gimhae International Airport in Busan, South Korea. The crash resulted in the death of 129 of the 166 people on board, and the cause was attributed to pilot error. On February 23, 2019, Atlas Air Flight 3591, a Boeing 767-300ERF air freighter operating for Amazon Air, crashed into Trinity Bay near Houston, Texas, while on descent into George Bush Intercontinental Airport; both pilots and the single passenger were killed. The cause was attributed to pilot error and spatial disorientation.
On November 23, 1996, Ethiopian Airlines Flight 961, a 767-200ER, was hijacked and crash-landed in the Indian Ocean near the Comoro Islands after running out of fuel, killing 125 out of the 175 persons on board; this was a rare example of occupants surviving a land-based aircraft ditching on water. Two 767s were involved in the September 11 attacks on the World Trade Center in 2001, resulting in the collapse of its two main towers. American Airlines Flight 11, a 767-200ER, crashed into the North Tower, killing all 92 people on board, and United Airlines Flight 175, a , crashed into the South Tower, with the death of all 65 on board. In addition, more than 2,600 people were killed in the towers or on the ground. A failed 2001 shoe bomb attempt that December involved an American Airlines 767-300ER. Hull losses On November 1, 2011, LOT Polish Airlines Flight 16, a 767-300ER, safely landed at Warsaw Chopin Airport in Warsaw, Poland, after a mechanical failure of the landing gear forced an emergency landing with the landing gear retracted. There were no injuries, but the aircraft involved was damaged and subsequently written off. At the time of the incident, aviation analysts speculated that it may have been the first instance of a complete landing gear failure in the 767's service history. Subsequent investigation determined that while a damaged hose had disabled the aircraft's primary landing gear extension system, an otherwise functional backup system was inoperative due to an accidentally deactivated circuit breaker. On October 28, 2016, American Airlines Flight 383, a 767-300ER with 161 passengers and 9 crew members, aborted takeoff at Chicago O'Hare Airport following an uncontained failure of the right GE CF6-80C2 engine. The engine failure, which hurled fragments over a considerable distance, caused a fuel leak, resulting in a fire under the right wing. Fire and smoke entered the cabin. All passengers and crew evacuated the aircraft, with 20 passengers and one flight attendant sustaining minor injuries using the evacuation slides. Other incidents The 767's first incident was Air Canada Flight 143, a , on July 23, 1983. The airplane ran out of fuel in-flight and had to glide with both engines out for almost to an emergency landing at Gimli, Manitoba, Canada. The pilots used the aircraft's ram air turbine to power the hydraulic systems for aerodynamic control. There were no fatalities and only minor injuries. This aircraft was nicknamed "Gimli Glider" after its landing site. The aircraft, registered C-GAUN, continued flying for Air Canada until its retirement in January 2008. In January 2014, the U.S. Federal Aviation Administration issued a directive that ordered inspections of the elevators on more than 400 767s beginning in March 2014; the focus was on fasteners and other parts that can fail and cause the elevators to jam. The issue was first identified in 2000 and has been the subject of several Boeing service bulletins. The inspections and repairs are required to be completed within six years. The aircraft has also had multiple occurrences of "uncommanded escape slide inflation" during maintenance or operations, and during flight. In late 2015, the FAA issued a preliminary directive to address the issue. Retirement and display As new 767 variants roll off the assembly line, older series models have been retired and converted to cargo use, stored or scrapped. One complete aircraft, N102DA, is the first to operate for Delta Air Lines and the twelfth example built. 
It was retired from airline service in February 2006 after being repainted back to its original 1982 Delta widget livery and given a farewell tour. It was then put on display at the Delta Flight Museum in the Delta corporate campus at the edge of Hartsfield–Jackson Atlanta International Airport. "The Spirit of Delta" is on public display as of 2022. In 2013 a Brazilian entrepreneur purchased a 767-200 that had operated for the now-defunct carrier Transbrasil under the registration PT-TAC. The aircraft, which was sold at a bankruptcy auction, was placed on outdoor display in Taguatinga as part of a proposed commercial development. , however, the development has not come to fruition. The aircraft is devoid of engines or landing gear, has deteriorated due to weather exposure and acts of vandalism, but remains publicly accessible to view.
A utility knife is any type of knife used for general manual work purposes. Such knives were originally fixed-blade knives with durable cutting edges suitable for rough work such as cutting cordage, cutting/scraping hides, butchering animals, cleaning fish scales, reshaping timber, and other tasks. Craft knives are small utility knives used as precision-oriented tools for finer, more delicate tasks such as carving and papercutting. Today, the term "utility knife" also includes small folding-, retractable- and/or replaceable-razor blade knives suited for use in the general workplace or in the construction industry. The latter type is sometimes generically called a Stanley knife, after a prominent brand. There is also a utility knife for kitchen use, which is sized between a chef's knife and paring knife. History The fixed-blade utility knife was developed some 500,000 years ago, when human ancestors began to make stone knives. These knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes. By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the Camper or Boy Scout pocketknife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives—as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter—significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials. Contemporary utility knives The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a Stanley knife, box cutter, or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material. Names In British, Australian and New Zealand English, along with Dutch, Danish and Austrian German, a utility knife frequently used in the construction industry is known as a Stanley knife. This name is a generic trademark named after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as Japanese knives. In Brazil they are known as estiletes or cortadores Olfa (the latter, being another genericised trademark). In Portugal, Panama and Canada they are also known as X-Acto (yet another genericised trademark ). In India, Russia, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called cutter. In the Flemish region of Belgium it is called cuttermes(je) (cutter knife). 
In general Spanish, they are known as cortaplumas (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as cutters; in Argentina and Uruguay the segmented fixed-blade knives are known as "Trinchetas". In Turkey, they are known as maket bıçağı (which literally translates as model knife). Other names for the tool are box cutter or boxcutter, razor blade knife, razor knife, carpet knife, pen knife, stationery knife, sheetrock knife, or drywall knife. Design Utility knives may use fixed, folding, retractable, or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility or razor blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint. Fixed blade utility knife Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades with edge geometry designed to resist chipping and breakage. The term "utility knife" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control. Workplace utility knives The largest construction or workplace utility knives typically feature retractable and replaceable blades, and are made of either die-cast metal or molded plastic. Some use standard razor blades, others specialized double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials. Another style is a snap-off utility knife that contains a long, segmented blade that slides out from it. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built-in, or a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away, or, more often, refilled with a replacement blade. This design was introduced by the Japanese manufacturer Olfa Corporation in 1956 as the world's first snap-off blade. It was inspired by analyzing the sharp cutting edge produced when glass is broken and by the way pieces of a chocolate bar break into segments.
The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather, one long edge of the whole blade is sharpened, and there are scored diagonal breakoff lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge. Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize the chance of damaging the contents of cardboard boxes. Use as weapon Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small razor-blade type utility knives may sometimes find use as slashing weapons. The 9/11 Commission report stated that passengers, in cell phone calls, reported that knives or "box-cutters" (as well as Mace or a bomb) were used as weapons in hijacking airplanes during the September 11, 2001 terrorist attacks against the United States, though the exact design of the knives used is unknown. Two of the hijackers were known to have purchased Leatherman knives, which feature a slip-joint blade and were not prohibited on U.S. flights at the time. Those knives were not found among the possessions the two hijackers left behind. Similar cutters, including paper cutters, have also been known to be used as lethal weapons. Small work-type utility knives have also been used to commit robbery and other crimes. In June 2004, a Japanese student was slashed to death with a segmented-type utility knife. In the United Kingdom, the law was changed (effective 1 October 2007) to raise the age limit for purchasing knives, including utility knives, from 16 to 18, and to make it illegal to carry a utility knife in public without a good reason.
The Benelux Union (; ; ) or Benelux is a politico-economic union and formal international intergovernmental cooperation of three neighbouring states in western Europe: Belgium, the Netherlands, and Luxembourg. The name is a portmanteau formed from joining the first few letters of each country's name and was first used to name the customs agreement that initiated the union (signed in 1944). It is now used more generally to refer to the geographic, economic, and cultural grouping of the three countries. The Benelux is an economically dynamic and densely populated region, with 5.6% of the European population (29.55 million residents) and 7.9% of the joint EU GDP (€36,000/resident) on 1.7% of the whole surface of the EU. Currently 37% of the total number of EU frontier workers work in the Benelux and surrounding areas. 35,000 Belgian citizens work in Luxembourg, while 37,000 Belgian citizens cross the border to work in the Netherlands each day. In addition, 12,000 Dutch and close to a thousand Luxembourg residents work in Belgium. The main institutions of the Union are the Committee of Ministers, the Council of the Union, the General Secretariat, the Interparliamentary Consultative Council and the Benelux Court of Justice while the Benelux Office for Intellectual Property covers the same land but is not part of the Benelux Union. The Benelux General Secretariat is located in Brussels. It is the central platform of the Benelux Union cooperation. It handles the secretariat of the Committee of Ministers, the Council of Benelux Union and the sundry committees and working parties. The General Secretariat provides day-to-day support for the Benelux cooperation on the substantive, procedural, diplomatic and logistical levels. The Secretary-General is Frans Weekers from the Netherlands and there are two deputies: Deputy Secretary-General Michel-Etienne Tilemans from Belgium and Deputy Secretary-General Jean-Claude Meyer from Luxembourg. The presidency of the Benelux is held in turn by the three countries for a period of one year. The Netherlands holds the presidency for 2023. History In 1944, exiled representatives of the three countries signed the London Customs Convention, the treaty that established the Benelux Customs Union. Ratified in 1947, the treaty was in force from 1948 until it was superseded by the Benelux Economic Union. The initial form of economic cooperation expanded steadily over time, leading to the signing of the treaty establishing the Benelux Economic Union (Benelux Economische Unie, Union Économique Benelux) on 3 February 1958 in The Hague, which came into force on 1 November 1960. Initially, the purpose of cooperation among the three partners was to put an end to customs barriers at their borders and ensure free movement of persons, capital, services, and goods between the three countries. This treaty was the first example of international economic integration in Europe since the Second World War. The three countries therefore foreshadowed and provided the model for future European integration, such as the European Coal and Steel Community, the European Economic Community (EEC), and the European Community–European Union (EC–EU). The three partners also launched the Schengen process, which came into operation in 1985. Benelux cooperation has been constantly adapted and now goes much further than mere economic cooperation, extending to new and topical policy areas connected with security, sustainable development, and the economy. 
In 1965, the treaty establishing a Benelux Court of Justice was signed. It entered into force in 1974. The court, composed of judges from the highest courts of the three states, is tasked with guaranteeing the uniform interpretation of common legal rules. This international judicial institution is located in Luxembourg. Renewal of the agreement The 1958 Treaty between the Benelux countries establishing the Benelux Economic Union was limited to a period of 50 years. During the following years, and even more so after the creation of the European Union, the Benelux cooperation focused on developing other fields of activity within a constantly changing international context. At the end of the 50 years, the governments of the three Benelux countries decided to renew the agreement, taking into account the new aspects of Benelux cooperation – such as security – and the new federal government structure of Belgium. The original establishing treaty, set to expire in 2010, was replaced by a new legal framework (called the Treaty revising the Treaty establishing the Benelux Economic Union), which was signed on 17 June 2008. The new treaty has no set time limit, and the name of the Benelux Economic Union changed to Benelux Union to reflect the broadened scope of the union. The main objectives of the treaty are the continuation and enlargement of the cooperation between the three member states within a larger European context. The renewed treaty explicitly foresees the possibility that the Benelux countries will cooperate with other European Union member states or with regional cooperation structures. The new Benelux cooperation focuses on three main topics: internal market and economic union, sustainability, and justice and internal affairs. The number of structures in the renewed Treaty has been reduced, simplifying the organisation. Activities since 2008 Benelux seeks region-to-region cooperation, be it with France and Germany (North Rhine-Westphalia) or beyond with the Baltic States, the Nordic Council, the Visegrad countries, or even further. In 2018 a renewed political declaration was adopted between Benelux and North Rhine-Westphalia to give cooperation a further impetus. The Benelux is particularly active in the field of intellectual property. The three countries established a Benelux Trademarks Office and a Benelux Designs Office, both situated in The Hague. In 2005, they concluded a treaty establishing the Benelux Office for Intellectual Property, which replaced both offices upon its entry into force on 1 September 2006. This organisation is the official body for the registration of trademarks and designs in the Benelux. In addition, it offers the possibility to formally record the existence of ideas, concepts, designs, prototypes and the like. Some examples of recent Benelux initiatives include: automatic level recognition of diplomas and degrees within the Benelux for bachelor's and master's programs in 2015, and for all other degrees in 2018; common road inspections in 2014; a Benelux pilot with digital consignment notes (e-CMR) in 2017; and a new Benelux Treaty on Police Cooperation in 2018, providing for direct access to each other's police databases and population registers within the limits of national legislation, and allowing some police forces to cross borders in some situations. The Benelux is also committed to working together on adaptation to climate change.
A joint political declaration in July 2020 called on the European Commission to prioritise cycling in European climate policy and sustainable transport strategies, to co-finance the construction of cycling infrastructure, and to provide funds to stimulate cycling policy. On 5 June 2018, the Benelux Treaty marked its 60th anniversary. In 2018, a Benelux Youth Parliament was created. In addition to cooperation based on a treaty, there is also political cooperation in the Benelux context, including summits of the Benelux government leaders. In 2019 a Benelux summit was held in Luxembourg. In 2020, a Benelux summit was held between the prime ministers on 7 October under the Dutch presidency, online due to the COVID-19 pandemic. As of 1 January 2017, a new arrangement for NATO Air Policing started for the airspace of Belgium, the Netherlands and Luxembourg (Benelux). The Belgian Air Component and the Royal Netherlands Air Force take four-month turns to ensure that Quick Reaction Alert (QRA) fighter jets are available at all times to be launched under NATO control. Cooperation with other geopolitical regions The Benelux countries also work together in the so-called Pentalateral Energy Forum, a regional cooperation group comprising the Benelux states, France, Germany, Austria, and Switzerland. Formed on 6 June 2007, the forum brings together energy ministers who represent a total of 200 million residents and 40% of the European electricity network. In 2017 the members of the Benelux, the Baltic Assembly, and three members of the Nordic Council (Sweden, Denmark and Finland), all of them EU member states, sought to increase cooperation in the Digital Single Market, as well as discussing social matters, the Economic and Monetary Union of the European Union, immigration and defence cooperation. Foreign relations in the wake of Russia's annexation of Crimea and the 2017 Turkish constitutional referendum were also on the agenda. Since 2008 the Benelux Union has worked together with the German Land (state) of North Rhine-Westphalia. In 2018 the Benelux Union signed a declaration with France to strengthen cross-border cooperation. Politics Benelux institutions Under the 2008 treaty there are five Benelux institutions: the Benelux Committee of Ministers, the Benelux Council, the Benelux Parliament, the Benelux Court of Justice, and the Benelux General Secretariat. Beside these five institutions, the Benelux Organisation for Intellectual Property is also an independent organisation. Benelux Committee of Ministers: The Committee of Ministers is the supreme decision-making body of the Benelux. It includes at least one representative at ministerial level from each of the three countries. Its composition varies according to its agenda. The ministers determine the orientations and priorities of Benelux cooperation. The presidency of the Committee rotates between the three countries on an annual basis. Benelux Council: The council is composed of senior officials from the relevant ministries. Its composition varies according to its agenda. The council's main task is to prepare the dossiers for the ministers. Benelux InterParliamentary Consultative Council: The Benelux Parliament (officially referred to as an "Interparliamentary Consultative Council") was created in 1955.
This parliamentary assembly is composed of 49 members from the respective national parliaments (21 members of the Dutch parliament, 21 members of the Belgian national and regional parliaments, and 7 members of the Luxembourg parliament). Its members inform and advise their respective governments on all Benelux matters. On 20 January 2015, the governments of the three countries, including, as far as Belgium is concerned, the community and regional governments, signed in Brussels the Treaty of the Benelux Interparliamentary Assembly. This treaty entered into force on 1 August 2019, superseding the 1955 Convention on the Consultative Interparliamentary Council for the Benelux. The official name has been largely obsolete in daily practice for a number of years: both internally in the Benelux and in external references, the name Benelux Parliament is used de facto. Benelux Court of Justice: The Benelux Court of Justice is an international court. Its mission is to promote uniformity in the application of Benelux legislation. When faced with difficulty interpreting a common Benelux legal rule, national courts must seek an interpretive ruling from the Benelux Court, which subsequently renders a binding decision. The members of the Court are appointed from among the judges of the 'Cour de cassation' of Belgium, the 'Hoge Raad' of the Netherlands and the 'Cour de cassation' of Luxembourg. Benelux General Secretariat: The General Secretariat, which is based in Brussels, forms the cooperation platform of the Benelux Union. It acts as the secretariat of the Committee of Ministers, the council and various commissions and working groups. The General Secretariat has years of expertise in the area of Benelux cooperation and is familiar with the policy agreements and differences between the three countries. Building on what has already been achieved, the General Secretariat puts its knowledge, network and experience at the service of partners and stakeholders who endorse its mission. It initiates, supports and monitors cooperation results in the areas of economy, sustainability and security. Benelux works together on the basis of an annual plan embedded in a four-year joint work programme. Benelux legal instruments The Benelux Union involves intergovernmental cooperation. The Treaty establishing the Benelux Union explicitly provides that the Benelux Committee of Ministers can resort to four legal instruments (art. 6, paragraph 2, under a), f), g) and h)): 1. Decisions Decisions are legally binding regulations for implementing the Treaty establishing the Benelux Union or other Benelux treaties. Their legally binding force concerns the Benelux states (and their sub-state entities), which have to implement them. However, they have no direct effect towards individual citizens or companies (notwithstanding any indirect protection of their rights based on such decisions as a source of international law). Only national provisions implementing a decision can directly create rights and obligations for citizens or companies. 2. Agreements The Committee of Ministers can draw up agreements, which are then submitted to the Benelux states (and/or their sub-state entities) for signature and subsequent parliamentary ratification. These agreements can deal with any subject matter, including in policy areas that are not yet covered by cooperation in the framework of the Benelux Union.
These are in fact traditional treaties, with the same direct legally binding force towards both authorities and citizens or companies. The negotiations do however take place in the established context of the Benelux working groups and institutions, rather than on an ad hoc basis. 3. Recommendations Recommendations are non-binding orientations, adopted at ministerial level, which underpin the functioning of the Benelux Union. These (policy) orientations may not be legally binding, but given their adoption at the highest political level and their legal basis vested directly in the Treaty, they do entail a strong moral obligation for any authority concerned in the Benelux countries. 4. Directives Directives of the Committee of Ministers are mere inter-institutional instructions towards the Benelux Council and/or the Secretariat-General, for which they are binding. This instrument has so far only been used occasionally, basically in order to organize certain activities within a Benelux working group or to give them impetus. All four instruments require the unanimous approval of the members of the Committee of Ministers (and, in the case of agreements, subsequent signature and ratification at national level).
George Herman "Babe" Ruth (February 6, 1895 – August 16, 1948) was an American professional baseball player whose career in Major League Baseball (MLB) spanned 22 seasons, from 1914 through 1935. Nicknamed "the Bambino" and "the Sultan of Swat", he began his MLB career as a star left-handed pitcher for the Boston Red Sox, but achieved his greatest fame as a slugging outfielder for the New York Yankees. Ruth is regarded as one of the greatest sports heroes in American culture and is considered by many to be the greatest baseball player of all time. In 1936, Ruth was elected into the Baseball Hall of Fame as one of its "first five" inaugural members. At age seven, Ruth was sent to St. Mary's Industrial School for Boys, a reformatory where he was mentored by Brother Matthias Boutlier of the Xaverian Brothers, the school's disciplinarian and a capable baseball player. In 1914, Ruth was signed to play Minor League baseball for the Baltimore Orioles but was soon sold to the Red Sox. By 1916, he had built a reputation as an outstanding pitcher who sometimes hit long home runs, a feat unusual for any player in the dead-ball era. Although Ruth twice won 23 games in a season as a pitcher and was a member of three World Series championship teams with the Red Sox, he wanted to play every day and was allowed to convert to an outfielder. With regular playing time, he broke the MLB single-season home run record in 1919 with 29. After that season, Red Sox owner Harry Frazee sold Ruth to the Yankees amid controversy. The trade fueled Boston's subsequent 86-year championship drought and popularized the "Curse of the Bambino" superstition. In his 15 years with the Yankees, Ruth helped the team win seven American League (AL) pennants and four World Series championships. His big swing led to escalating home run totals that not only drew fans to the ballpark and boosted the sport's popularity but also helped usher in baseball's live-ball era, which evolved from a low-scoring game of strategy to a sport where the home run was a major factor. As part of the Yankees' vaunted "Murderers' Row" lineup of 1927, Ruth hit 60 home runs, which extended his own MLB single-season record by a single home run. Ruth's last season with the Yankees was 1934; he retired from the game the following year, after a short stint with the Boston Braves. In his career, he led the American League in home runs twelve times. During Ruth's career, he was the target of intense press and public attention for his baseball exploits and off-field penchants for drinking and womanizing. After his retirement as a player, he was denied the opportunity to manage a major league club, most likely because of poor behavior during parts of his playing career. In his final years, Ruth made many public appearances, especially in support of American efforts in World War II. In 1946, he became ill with nasopharyngeal cancer and died from the disease two years later. Ruth remains a major figure in American culture. Early years George Herman Ruth Jr. was born on February 6, 1895, at 216 Emory Street in the Pigtown section of Baltimore, Maryland. Ruth's parents, Katherine (née Schamberger) and George Herman Ruth Sr., were both of German ancestry. According to the 1880 census, his parents were both born in Maryland. His paternal grandparents were from Prussia and Hanover, Germany. Ruth Sr. worked a series of jobs that included lightning rod salesman and streetcar operator. 
The elder Ruth then became a counterman in a family-owned combination grocery and saloon business on Frederick Street. George Ruth Jr. was born in the house of his maternal grandfather, Pius Schamberger, a German immigrant and trade unionist. Only one of young Ruth's seven siblings, his younger sister Mamie, survived infancy. Many details of Ruth's childhood are unknown, including the date of his parents' marriage. As a child, Ruth spoke German. When Ruth was a toddler, the family moved to 339 South Woodyear Street, not far from the rail yards; by the time he was six years old, his father had a saloon with an upstairs apartment at 426 West Camden Street. Details are equally scanty about why Ruth was sent at the age of seven to St. Mary's Industrial School for Boys, a reformatory and orphanage. However, according to Julia Ruth Stevens' 1999 account, because George Sr. was a saloon owner in Baltimore and had given Ruth little supervision growing up, the boy became a delinquent; Ruth was sent to St. Mary's because George Sr. ran out of ideas to discipline and mentor his son. As an adult, Ruth admitted that as a youth he ran the streets, rarely attended school, and drank beer when his father was not looking. Some accounts say that following a violent incident at his father's saloon, the city authorities decided that this environment was unsuitable for a small child. Ruth entered St. Mary's on June 13, 1902. He was recorded as "incorrigible" and spent much of the next 12 years there. Although St. Mary's boys received an education, students were also expected to learn work skills and help operate the school, particularly once the boys turned 12. Ruth became a shirtmaker and was also proficient as a carpenter. He would adjust his own shirt collars, rather than having a tailor do so, even during his well-paid baseball career. The boys, aged 5 to 21, did most of the work around the facility, from cooking to shoemaking, and renovated St. Mary's in 1912. The food was simple, and the Xaverian Brothers who ran the school insisted on strict discipline; corporal punishment was common. Ruth's nickname there was "Niggerlips", as he had large facial features and was darker than most boys at the all-white reformatory. Ruth was sometimes allowed to rejoin his family or was placed at St. James's Home, a supervised residence with work in the community, but he was always returned to St. Mary's. He was rarely visited by his family; his mother died when he was 12 and, by some accounts, he was permitted to leave St. Mary's only to attend the funeral. How Ruth came to play baseball there is uncertain: according to one account, his placement at St. Mary's was due in part to repeatedly breaking windows with long hits while playing street ball; by another, he was told to join a team on his first day at St. Mary's by the school's athletic director, Brother Herman, becoming a catcher even though left-handers rarely play that position. During his time there he also played third base and shortstop, again unusual for a left-hander, and was forced to wear mitts and gloves made for right-handers. He was encouraged in his pursuits by the school's Prefect of Discipline, Brother Matthias Boutlier, a native of Nova Scotia. A large man, Brother Matthias was greatly respected by the boys both for his strength and for his fairness. For the rest of his life, Ruth would praise Brother Matthias, and his running and hitting styles closely resembled his teacher's. 
Ruth stated, "I think I was born as a hitter the first day I ever saw him hit a baseball." The older man became a mentor and role model to Ruth; biographer Robert W. Creamer commented on the closeness between the two: The school's influence remained with Ruth in other ways. He was a lifelong Catholic who would sometimes attend Mass after carousing all night, and he became a well-known member of the Knights of Columbus. He would visit orphanages, schools, and hospitals throughout his life, often avoiding publicity. He was generous to St. Mary's as he became famous and rich, donating money and his presence at fundraisers, and spending $5,000 to buy Brother Matthias a Cadillac in 1926—subsequently replacing it when it was destroyed in an accident. Nevertheless, his biographer Leigh Montville suggests that many of the off-the-field excesses of Ruth's career were driven by the deprivations of his time at St. Mary's. Most of the boys at St. Mary's played baseball in organized leagues at different levels of proficiency. Ruth later estimated that he played 200 games a year as he steadily climbed the ladder of success. Although he played all positions at one time or another, he gained stardom as a pitcher. According to Brother Matthias, Ruth was standing to one side laughing at the bumbling pitching efforts of fellow students, and Matthias told him to go in and see if he could do better. Ruth had become the best pitcher at St. Mary's, and when he was 18 in 1913, he was allowed to leave the premises to play weekend games on teams that were drawn from the community. He was mentioned in several newspaper articles, for both his pitching prowess and ability to hit long home runs. Professional baseball Minor leagues: Baltimore Orioles In early 1914, Ruth signed a professional baseball contract with Jack Dunn, who owned and managed the minor-league Baltimore Orioles, an International League team. The circumstances of Ruth's signing are not known with certainty. By some accounts, Dunn was urged to attend a game between an all-star team from St. Mary's and one from another Xaverian facility, Mount St. Mary's College. Some versions have Ruth running away before the eagerly awaited game, to return in time to be punished, and then pitching St. Mary's to victory as Dunn watched. Others have Washington Senators pitcher Joe Engel, a Mount St. Mary's graduate, pitching in an alumni game after watching a preliminary contest between the college's freshmen and a team from St. Mary's, including Ruth. Engel watched Ruth play, then told Dunn about him at a chance meeting in Washington. Ruth, in his autobiography, stated only that he worked out for Dunn for a half hour, and was signed. According to biographer Kal Wagenheim, there were legal difficulties to be straightened out as Ruth was supposed to remain at the school until he turned 21, though SportsCentury stated in a documentary that Ruth had already been discharged from St. Mary's when he turned 19, and earned a monthly salary of $100. The train journey to spring training in Fayetteville, North Carolina, in early March was likely Ruth's first outside the Baltimore area. The rookie ballplayer was the subject of various pranks by veteran players, who were probably also the source of his famous nickname. There are various accounts of how Ruth came to be called "Babe", but most center on his being referred to as "Dunnie's babe" (or some variant). 
SportsCentury reported that his nickname was gained because he was the new "darling" or "project" of Dunn, not only because of Ruth's raw talent, but also because of his lack of knowledge of the proper etiquette of eating out in a restaurant, being in a hotel, or being on a train. "Babe" was, at that time, a common nickname in baseball, with perhaps the most famous to that point being Pittsburgh Pirates pitcher and 1909 World Series hero Babe Adams, who appeared younger than his actual age. Ruth made his first appearance as a professional ballplayer in an inter-squad game on March 7, 1914. He played shortstop and pitched the last two innings of a 15–9 victory. In his second at-bat, Ruth hit a long home run to right field; the blast was locally reported to be longer than a legendary shot hit by Jim Thorpe in Fayetteville. Ruth made his first appearance against a team in organized baseball in an exhibition game versus the major-league Philadelphia Phillies. Ruth pitched the middle three innings and gave up two runs in the fourth, but then settled down and pitched a scoreless fifth and sixth innings. In a game against the Phillies the following afternoon, Ruth entered during the sixth inning and did not allow a run the rest of the way. The Orioles scored seven runs in the bottom of the eighth inning to overcome a 6–0 deficit, and Ruth was the winning pitcher. Once the regular season began, Ruth was a star pitcher who was also dangerous at the plate. The team performed well, yet received almost no attention from the Baltimore press. A third major league, the Federal League, had begun play, and the local franchise, the Baltimore Terrapins, restored that city to the major leagues for the first time since 1902. Few fans visited Oriole Park, where Ruth and his teammates labored in relative obscurity. Ruth may have been offered a bonus and a larger salary to jump to the Terrapins; when rumors to that effect swept Baltimore, giving Ruth the most publicity he had experienced to date, a Terrapins official denied it, stating it was their policy not to sign players under contract to Dunn. The competition from the Terrapins caused Dunn to sustain large losses. Although by late June the Orioles were in first place, having won over two-thirds of their games, the paid attendance dropped as low as 150. Dunn explored a possible move by the Orioles to Richmond, Virginia, as well as the sale of a minority interest in the club. These possibilities fell through, leaving Dunn with little choice other than to sell his best players to major league teams to raise money. He offered Ruth to the reigning World Series champions, Connie Mack's Philadelphia Athletics, but Mack had his own financial problems. The Cincinnati Reds and New York Giants expressed interest in Ruth, but Dunn sold his contract, along with those of pitchers Ernie Shore and Ben Egan, to the Boston Red Sox of the American League (AL) on July 4. The sale price was announced as $25,000 but other reports lower the amount to half that, or possibly $8,500 plus the cancellation of a $3,000 loan. Ruth remained with the Orioles for several days while the Red Sox completed a road trip, and reported to the team in Boston on July 11. Boston Red Sox (1914–1919) Developing star On July 11, 1914, Ruth arrived in Boston with Egan and Shore. Ruth later told the story of how that morning he had met Helen Woodford, who would become his first wife. She was a 16-year-old waitress at Landers Coffee Shop, and Ruth related that she served him when he had breakfast there. 
Other stories, though, suggested that the meeting occurred on another day, and perhaps under other circumstances. Regardless of when he began to woo his first wife, he won his first game as a pitcher for the Red Sox that afternoon, 4–3, over the Cleveland Naps. His catcher was Bill Carrigan, who was also the Red Sox manager. Shore was given a start by Carrigan the next day; he won that and his second start and thereafter was pitched regularly. Ruth lost his second start, and was thereafter little used. In his major league debut as a batter, Ruth went 0-for-2 against left-hander Willie Mitchell, striking out in his first at bat before being removed for a pinch hitter in the seventh inning. Ruth was not much noticed by the fans, as Bostonians watched the Red Sox's crosstown rivals, the Braves, begin a legendary comeback that would take them from last place on the Fourth of July to the 1914 World Series championship. Egan was traded to Cleveland after two weeks on the Boston roster. During his time with the Red Sox, he kept an eye on the inexperienced Ruth, much as Dunn had in Baltimore. When he was traded, no one took his place as supervisor. Ruth's new teammates considered him brash and would have preferred him as a rookie to remain quiet and inconspicuous. When Ruth insisted on taking batting practice despite being both a rookie who did not play regularly and a pitcher, he arrived to find his bats sawed in half. His teammates nicknamed him "the Big Baboon", a name the swarthy Ruth, who had disliked the nickname "Niggerlips" at St. Mary's, detested. Ruth had received a raise on promotion to the major leagues and quickly acquired tastes for fine food, liquor, and women, among other temptations. Manager Carrigan allowed Ruth to pitch two exhibition games in mid-August. Although Ruth won both against minor-league competition, he was not restored to the pitching rotation. It is uncertain why Carrigan did not give Ruth additional opportunities to pitch. There are legends—filmed for the screen in The Babe Ruth Story (1948)—that the young pitcher had a habit of signaling his intent to throw a curveball by sticking out his tongue slightly, and that he was easy to hit until this changed. Creamer pointed out that it is common for inexperienced pitchers to display such habits, and the need to break Ruth of his would not constitute a reason to not use him at all. The biographer suggested that Carrigan was unwilling to use Ruth because of the rookie's poor behavior. On July 30, 1914, Boston owner Joseph Lannin had purchased the minor-league Providence Grays, members of the International League. The Providence team had been owned by several people associated with the Detroit Tigers, including star hitter Ty Cobb, and as part of the transaction, a Providence pitcher was sent to the Tigers. To soothe Providence fans upset at losing a star, Lannin announced that the Red Sox would soon send a replacement to the Grays. This was intended to be Ruth, but his departure for Providence was delayed when Cincinnati Reds owner Garry Herrmann claimed him off of waivers. After Lannin wrote to Herrmann explaining that the Red Sox wanted Ruth in Providence so he could develop as a player, and would not release him to a major league club, Herrmann allowed Ruth to be sent to the minors. Carrigan later stated that Ruth was not sent down to Providence to make him a better player, but to help the Grays win the International League pennant (league championship). Ruth joined the Grays on August 18, 1914. 
After Dunn's deals, the Baltimore Orioles managed to hold on to first place until August 15, after which they continued to fade, leaving the pennant race between Providence and Rochester. Ruth was deeply impressed by Providence manager "Wild Bill" Donovan, previously a star pitcher with a 25–4 win–loss record for Detroit in 1907; in later years, he credited Donovan with teaching him much about pitching. Ruth was often called upon to pitch, in one stretch starting (and winning) four games in eight days. On September 5 at Maple Leaf Park in Toronto, Ruth pitched a one-hit 9–0 victory, and hit his first professional home run, his only one as a minor leaguer, off Ellis Johnson. Recalled to Boston after Providence finished the season in first place, he pitched and won a game for the Red Sox against the New York Yankees on October 2, getting his first major league hit, a double. Ruth finished the season with a record of 2–1 as a major leaguer and 23–8 in the International League (for Baltimore and Providence). Once the season concluded, Ruth married Helen in Ellicott City, Maryland. Creamer speculated that they did not marry in Baltimore, where the newlyweds boarded with George Ruth Sr., to avoid possible interference from those at St. Mary's—both bride and groom were not yet of age and Ruth remained on parole from that institution until his 21st birthday. In March 1915, Ruth reported to Hot Springs, Arkansas, for his first major league spring training. Despite a relatively successful first season, he was not slated to start regularly for the Red Sox, who already had two "superb" left-handed pitchers, according to Creamer: the established stars Dutch Leonard, who had broken the record for the lowest earned run average (ERA) in a single season; and Ray Collins, a 20-game winner in both 1913 and 1914. Ruth was ineffective in his first start, taking the loss in the third game of the season. Injuries and ineffective pitching by other Boston pitchers gave Ruth another chance, and after some good relief appearances, Carrigan allowed Ruth another start, and he won a rain-shortened seven inning game. Ten days later, the manager had him start against the New York Yankees at the Polo Grounds. Ruth took a 3–2 lead into the ninth, but lost the game 4–3 in 13 innings. Ruth, hitting ninth as was customary for pitchers, hit a massive home run into the upper deck in right field off of Jack Warhop. At the time, home runs were rare in baseball, and Ruth's majestic shot awed the crowd. The winning pitcher, Warhop, would in August 1915 conclude a major league career of eight seasons, undistinguished but for being the first major league pitcher to give up a home run to Babe Ruth. Carrigan was sufficiently impressed by Ruth's pitching to give him a spot in the starting rotation. Ruth finished the 1915 season 18–8 as a pitcher; as a hitter, he batted .315 and had four home runs. The Red Sox won the AL pennant, but with the pitching staff healthy, Ruth was not called upon to pitch in the 1915 World Series against the Philadelphia Phillies. Boston won in five games. Ruth was used as a pinch hitter in Game Five, but grounded out against Phillies ace Grover Cleveland Alexander. Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth hit soared over Grand Avenue, breaking the window of a Chevrolet dealership. 
In 1916, attention focused on Ruth's pitching as he engaged in repeated pitching duels with Washington Senators' ace Walter Johnson. The two met five times during the season with Ruth winning four and Johnson one (Ruth had a no decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL president Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that would remain unmatched until Ron Guidry tied it in 1978. The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Robins (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory. Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager. Emergence as a hitter Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, when home plate umpire 'Brick' Owens called the first four pitches as balls, Ruth was ejected from the game and threw a punch at him, and was later suspended for ten days and fined $100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years. In 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy amended it to be listed as a combined no-hitter. In 1917, Ruth was used little as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs. The United States' entry into World War I occurred at the start of the season and overshadowed baseball. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, who was a player-manager, and who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded because of the war, Barrow had many holes in the Red Sox lineup to fill. Ruth also noticed these vacancies in the lineup. He was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved toward Boston and the season opener. 
At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired. Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. In early May, Barrow gave in; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off of Walter Johnson. For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth. Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA. In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest date in history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight but pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored to take a 3–2 lead again in the bottom of that inning. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four-games-to-two. Before allowing the Cubs to score in Game Four, Ruth pitched consecutive scoreless innings, a record for the World Series that stood for more than 40 years until 1961, broken by Whitey Ford after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats. With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances. During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled a 9–5 record. Barrow used him as a pitcher mostly in the early part of the season, when the Red Sox manager still had hopes of a second consecutive pennant. By late June, the Red Sox were clearly out of the race, and Barrow had no objection to Ruth concentrating on his hitting, if only because it drew people to the ballpark. 
Ruth had hit a home run against the Yankees on Opening Day, and another during a month-long batting slump that soon followed. Relieved of his pitching duties, Ruth began an unprecedented spell of slugging home runs, which gave him widespread public and press attention. Even his failures were seen as majestic—one sportswriter said, "When Ruth misses a swipe at the ball, the stands quiver." Two home runs by Ruth on July 5, and one in each of two consecutive games a week later, raised his season total to 11, tying his career best from 1918. The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 25, set by Buck Freeman in 1899. By the time Ruth reached this in early September, writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark where the distance to right field was only . On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, games behind the league champion White Sox. In his six seasons with Boston, he won 89 games and recorded a 2.19 ERA. He had a four-year stretch where he was second in the AL in wins and ERA behind Walter Johnson, and Ruth had a winning record against Johnson in head-to-head matchups. Sale to New York As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators $60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of $27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees. Not all the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson. According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". 
The often-told story is that Frazee needed money to finance the musical No, No, Nanette, which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: No, No, Nanette was based on a Frazee-produced play, My Lady Friends, which opened in 1919. There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for $10,000 per year through 1921. He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as $125,000 from the purchase of the club. Although Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and had aggressively purchased and traded for players in 1918 and 1919 to build a winning team, Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase. Frazee sold the rights to Babe Ruth for $100,000, the largest sum ever paid for a baseball player. The deal also involved a $350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year after, he became the Yankee general manager, and in the following years made a number of purchases of Red Sox players from Frazee. The $100,000 price included $25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash. The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a $20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that Ruth had become difficult to deal with. The New York Times suggested that "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century." According to Marty Appel in his history of the Yankees, the transaction, "changed the fortunes of two high-profile franchises for decades". The Red Sox, winners of five of the first 16 World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". Conversely, the Yankees had not won the AL championship prior to their acquisition of Ruth. 
They won seven AL pennants and four World Series with him, and led baseball with 40 pennants and 27 World Series titles in their history. New York Yankees (1920–1934) Initial success (1920–1923) When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them. At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark. Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road. The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs). In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered the start of the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and if he were playing at any other home field but Fenway Park, where he hit only 9 of 29 home runs. 
Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth. In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost $35,000 betting on horse races. Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, breaking his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand. The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run. After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required. On March 4, 1922, Ruth signed a new contract for three years at $52,000 a year. This was more than twice the largest sum ever paid to a ballplayer up to that point, and it represented 40% of the team's player payroll. Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth went 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. 
In his shortened season, Ruth appeared in 110 games, batted .315, with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw him nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him, "an exploded phenomenon". After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee, weighing only . The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was further from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel soon developed headaches from squinting toward home plate. During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major-leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two. Batting title and "bellyache" (1924–1925) In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs. Ruth did not look like an athlete; he was described as "toothpicks attached to a piano", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 weighed nearly . His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. 
He became ill while there, and relapsed during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he had multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, writes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965. Murderers' Row (1926–1928) Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant. Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position. The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. Ruth promised the child that he would hit a home run on his behalf. Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two autographed baseballs signed by Yankees and Cardinals. 
The friend relayed a promise from Ruth (who did not know the boy) that he would hit a home run for him. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth allegedly saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. Ruth's 1926 salary of $52,000 was far more than that of any other baseball player, but he made at least twice as much in other income, including $100,000 from 12 weeks of vaudeville. The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to retake the lead, which proved permanent—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off Paul Hopkins, facing his first major league batter, to tie the record. The following day, September 30, he broke it with his 60th homer, in the eighth inning off Tom Zachary to break a 2–2 tie. "Sixty! Let's see some son of a bitch try to top that one", Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772. In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, "The 1927 New York Yankees. Even today, the words inspire awe... all baseball success is measured against the '27 team." The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. He got off to a hot start, and by August 1 he had 42 home runs. This put him ahead of his 60-home-run pace from the previous season. He then slumped for the latter part of the season, and he hit just twelve home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander. 
"Called shot" and final Yankee years (1929–1934) Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since 1915. Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a "great guy". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs. On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command Ruth's respect. On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. Having just concluded a three-year contract at an annual salary of $70,000, Ruth promptly rejected both the Yankees' initial proposal of $70,000 for one year and their 'final' offer of two years at seventy-five—the latter figure equaling the annual salary of then US President Herbert Hoover; instead, Ruth demanded at least $85,000 and three years. When asked why he thought he was "worth more than the President of the United States," Ruth responded: "Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three." Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented $80,000 per year. Ruth's salary was more than 2.4 times greater than the next-highest salary that season, a record margin . In 1930, Ruth hit .359 with 49 home runs (his best in his years after 1928) and 153 RBIs, and pitched his first game in nine years, a complete game victory. Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job. McCarthy was a disciplinarian, but chose not to interfere with Ruth, who did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games, games in front of the Yankees. Ruth, for his part, hit .373, with 46 home runs and 163 RBIs. He had 31 doubles, his most since 1924. 
In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice because of injuries during the season. The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams as the Yankees resented the Cubs only awarding half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting two singles, but scoring four runs as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful at the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning, partly due to Ruth's fielding error in the outfield. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly . Whether or not Ruth intended to indicate where he planned to (and did) hit the ball (Charlie Devens, who, in 1999, was interviewed as Ruth's surviving teammate in that game, did not think so), the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally. Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon and pitched a complete game victory against the Red Sox, his final appearance as a pitcher. Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46. In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to $35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat, recording a .288 batting average with 22 home runs. However, Reisler described these statistics as "merely mortal" by Ruth's previous standards. 
Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers. Boston Braves (1935) By this time, Ruth knew he was nearly finished as a player. He desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time when the Cleveland Indians, Cincinnati Reds, and Detroit Tigers were looking for a manager, but nothing came of them. Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but he was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Tigers owner Frank Navin seriously considered acquiring Ruth and making him player-manager. However, Ruth insisted on delaying the meeting until he came back from a trip to Hawaii. Navin was unwilling to wait. Ruth opted to go on his trip, despite Barrow advising him that he was making a mistake; in any event, Ruth's asking price was too high for the notoriously tight-fisted Navin. The Tigers' job ultimately went to Mickey Cochrane. Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. The relationship between Ruth and McCarthy had been lukewarm at best, and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted Ruth to leave the team without drama or hard feelings. During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he could not have the time to make Ruth a cricket player, Ruth had lost any interest in such a career upon learning that the best batsmen made only about $40 per week. Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. However, the only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. However, Mack later dropped the idea, saying that Ruth's wife would be running the team in a month if Ruth ever took over. While the barnstorming tour was underway, Ruppert began negotiating with Boston Braves owner Judge Emil Fuchs, who wanted Ruth as a gate attraction. The Braves had enjoyed modest recent success, finishing fourth in the National League in both 1933 and 1934, but the team drew poorly at the box office. 
Unable to afford the rent at Braves Field, Fuchs had considered holding dog races there when the Braves were not at home, only to be turned down by Landis. After a series of phone calls, letters, and meetings, the Yankees traded Ruth to the Braves on February 26, 1935. Ruppert had stated that he would not release Ruth to go to another team as a full-time player. For this reason, it was announced that Ruth would become a team vice president and would be consulted on all club transactions, in addition to playing. He was also made assistant manager to Braves skipper Bill McKechnie. In a long letter to Ruth a few days before the press conference, Fuchs promised Ruth a share in the Braves' profits, with the possibility of becoming co-owner of the team. Fuchs also raised the possibility of Ruth succeeding McKechnie as manager, perhaps as early as 1936. Ruppert called the deal "the greatest opportunity Ruth ever had". There was considerable attention as Ruth reported for spring training. He did not hit his first home run of the spring until after the team had left Florida, and was beginning the road north in Savannah. He hit two in an exhibition game against the Bears. Amid much press attention, Ruth played his first home game in Boston in over 16 years. Before an opening-day crowd of over 25,000, including five of New England's six state governors, Ruth accounted for all the Braves' runs in a 4–2 defeat of the New York Giants, hitting a two-run home run, singling to drive in a third run and later in the inning scoring the fourth. Although age and weight had slowed him, he made a running catch in left field that sportswriters deemed the defensive highlight of the game. Ruth had two hits in the second game of the season, but it quickly went downhill both for him and the Braves from there. The season soon settled down to a routine of Ruth performing poorly on the few occasions he even played at all. As April passed into May, Ruth's physical deterioration became even more pronounced. While he remained productive at the plate early on, he could do little else. His conditioning had become so poor that he could barely trot around the bases. He made so many errors that three Braves pitchers told McKechnie they would not take the mound if he was in the lineup. Before long, Ruth stopped hitting as well. He grew increasingly annoyed that McKechnie ignored most of his advice. McKechnie later said that Ruth's presence made enforcing discipline nearly impossible. Ruth soon realized that Fuchs had deceived him, and had no intention of making him manager or giving him any significant off-field duties. He later said his only duties as vice president consisted of making public appearances and autographing tickets. Ruth also found out that far from giving him a share of the profits, Fuchs wanted him to invest some of his money in the team in a last-ditch effort to improve its balance sheet. As it turned out, Fuchs and Ruppert had both known all along that Ruth's non-playing positions were meaningless. By the end of the first month of the season, Ruth concluded he was finished even as a part-time player. As early as May 12, he asked Fuchs to let him retire. Ultimately, Fuchs persuaded Ruth to remain at least until after the Memorial Day doubleheader in Philadelphia. In the interim was a western road trip, at which the rival teams had scheduled days to honor him. In Chicago and St. 
Louis, Ruth performed poorly, and his batting average sank to .155, with only two additional home runs for a total of three on the season so far. In the first two games in Pittsburgh, Ruth had only one hit, though a long fly caught by Paul Waner probably would have been a home run in any other ballpark besides Forbes Field. Ruth played in the third game of the Pittsburgh series on May 25, 1935, and added one more tale to his playing legend. Ruth went 4-for-4, including three home runs, though the Braves lost the game 11–7. The last two were off Ruth's old Cubs nemesis, Guy Bush. The final home run, both of the game and of Ruth's career, sailed out of the park over the right field upper deck–the first time anyone had hit a fair ball completely out of Forbes Field. Ruth was urged to make this his last game, but he had given his word to Fuchs and played in Cincinnati and Philadelphia. The first game of the doubleheader in Philadelphia—the Braves lost both—was his final major league appearance. Ruth retired on June 2 after an argument with Fuchs. He finished 1935 with a .181 average—easily his worst as a full-time position player—and the final six of his 714 home runs. The Braves, 10–27 when Ruth left, finished 38–115, at .248 the worst winning percentage in modern National League history. Insolvent like his team, Fuchs gave up control of the Braves before the end of the season; the National League took over the franchise at the end of the year. Of the 5 members in the inaugural class of Baseball Hall of Fame in 1936 (Ty Cobb, Honus Wagner, Christy Mathewson, Walter Johnson and Ruth himself), only Ruth was not given an offer to manage a baseball team. Retirement Although Fuchs had given Ruth his unconditional release, no major league team expressed an interest in hiring him in any capacity. Ruth still hoped to be hired as a manager if he could not play anymore, but only one managerial position, Cleveland, became available between Ruth's retirement and the end of the 1937 season. Asked if he had considered Ruth for the job, Indians owner Alva Bradley replied negatively. Team owners and general managers assessed Ruth's flamboyant personal habits as a reason to exclude him from a managerial job; Barrow said of him, "How can he manage other men when he can't even manage himself?" Creamer believed Ruth was unfairly treated in never being given an opportunity to manage a major league club. The author believed there was not necessarily a relationship between personal conduct and managerial success, noting that John McGraw, Billy Martin, and Bobby Valentine were winners despite character flaws. Ruth played much golf and in a few exhibition baseball games, where he demonstrated a continuing ability to draw large crowds. This appeal contributed to the Dodgers hiring him as first base coach in 1938. When Ruth was hired, Brooklyn general manager Larry MacPhail made it clear that Ruth would not be considered for the manager's job if, as expected, Burleigh Grimes retired at the end of the season. Although much was said about what Ruth could teach the younger players, in practice, his duties were to appear on the field in uniform and encourage base runners—he was not called upon to relay signs. In August, shortly before the baseball rosters expanded, Ruth sought an opportunity to return as an active player in a pinch hitting role. Ruth often took batting practice before games and felt that he could take on the limited role. 
Grimes denied his request, citing Ruth's poor vision in his right eye, his inability to run the bases, and the risk of an injury to Ruth. Ruth got along well with everyone except team captain Leo Durocher, who was hired as Grimes' replacement at season's end. Ruth then left his job as a first base coach and would never again work in any capacity in the game of baseball. On July 4, 1939, Ruth spoke on Lou Gehrig Appreciation Day at Yankee Stadium as members of the 1927 Yankees and a sellout crowd turned out to honor the first baseman, who was forced into premature retirement by ALS, which would kill him two years later. The next week, Ruth went to Cooperstown, New York, for the formal opening of the Baseball Hall of Fame. Three years earlier, he was one of the first five players elected to the hall. As radio broadcasts of baseball games became popular, Ruth sought a job in that field, arguing that his celebrity and knowledge of baseball would assure large audiences, but he received no offers. During World War II, he made many personal appearances to advance the war effort, including his last appearance as a player at Yankee Stadium, in a 1943 exhibition for the Army-Navy Relief Fund. He hit a long fly ball off Walter Johnson; the blast left the field, curving foul, but Ruth circled the bases anyway. In 1946, he made a final effort to gain a job in baseball when he contacted new Yankees boss MacPhail, but he was sent a rejection letter. In 1999, Ruth's granddaughter, Linda Tosetti, and his stepdaughter, Julia Ruth Stevens, said that Babe's inability to land a managerial role with the Yankees caused him to feel hurt and slump into a severe depression. Ruth started playing golf when he was 20 and continued playing the game throughout his life. His appearance at many New York courses drew spectators and headlines. Rye Golf Club was among the courses he played with teammate Lyn Lary in June 1933. With birdies on 3 holes, Ruth posted the best score. In retirement, he became one of the first celebrity golfers participating in charity tournaments, including one where he was pitted against Ty Cobb. Personal life Ruth met Helen Woodford (1897–1929), by some accounts, in a coffee shop in Boston, where she was a waitress. They married as teenagers on October 17, 1914. Although Ruth later claimed to have been married in Elkton, Maryland, records show that they were married at St. Paul's Catholic Church in Ellicott City. They adopted a daughter, Dorothy (1921–1989), in 1921. Ruth and Helen separated around 1925 reportedly because of Ruth's repeated infidelities and neglect. They appeared in public as a couple for the last time during the 1926 World Series. Helen died in January 1929 at age 31 in a fire in a house in Watertown, Massachusetts owned by Edward Kinder, a dentist with whom she had been living as "Mrs. Kinder". In her book, My Dad, the Babe, Dorothy claimed that she was Ruth's biological child by a mistress named Juanita Jennings. In 1980, Juanita admitted this to Dorothy and Dorothy's stepsister, Julia Ruth Stevens, who was at the time already very ill. On April 17, 1929, three months after the death of his first wife, Ruth married actress and model Claire Merritt Hodgson (1897–1976) and adopted her daughter Julia (1916–2019). It was the second and final marriage for both parties. Claire, unlike Helen, was well-travelled and educated, and put structure into Ruth's life, like Miller Huggins did for him on the field. 
By one account, Julia and Dorothy were, through no fault of their own, the reason for the seven-year rift in Ruth's relationship with teammate Lou Gehrig. Sometime in 1932, during a conversation that she assumed was private, Gehrig's mother remarked, "It's a shame [Claire] doesn't dress Dorothy as nicely as she dresses her own daughter." When the comment got back to Ruth, he angrily told Gehrig to tell his mother to mind her own business. Gehrig, in turn, took offense at what he perceived as Ruth's comment about his mother. The two men reportedly never spoke off the field until they reconciled at Yankee Stadium on Lou Gehrig Appreciation Day, July 4, 1939, shortly after Gehrig's retirement from baseball. Although Ruth was married throughout most of his baseball career, when team co-owner Tillinghast 'Cap' Huston asked him to tone down his lifestyle, Ruth replied, "I'll promise to go easier on drinking and to get to bed earlier, but not for you, fifty thousand dollars, or two-hundred and fifty thousand dollars will I give up women. They're too much fun." A detective whom the Yankees hired to follow him one night in Chicago reported that Ruth had been with six women. Ping Bodie said that he was not Ruth's roommate while traveling; "I room with his suitcase". Before the start of the 1922 season, Ruth had signed a three-year contract at $52,000 per year with an option to renew for two additional years. His performance during the 1922 season had been disappointing, attributed in part to his drinking and late-night hours. After the end of the 1922 season, he was asked to sign a contract addendum with a morals clause. Ruth and Ruppert signed it on November 11, 1922. It called for Ruth to abstain entirely from the use of intoxicating liquors, and not to stay up later than 1:00 a.m. during the training and playing season without permission of the manager. Ruth was also enjoined from any action or misbehavior that would compromise his ability to play baseball. Cancer and death (1946–1948) As early as the war years, doctors had cautioned Ruth to take better care of his health, and he grudgingly followed their advice, limiting his drinking and not going on a proposed trip to support the troops in the South Pacific. In 1946, Ruth began experiencing severe pain over his left eye and had difficulty swallowing. In November 1946, Ruth entered French Hospital in New York for tests, which revealed that he had an inoperable malignant tumor at the base of his skull and in his neck. The malady was a lesion known as nasopharyngeal carcinoma, or "lymphoepithelioma". His name and fame gave him access to experimental treatments, and he was one of the first cancer patients to receive both drugs and radiation treatment simultaneously. Having lost a great deal of weight, he was discharged from the hospital in February and went to Florida to recuperate. He returned to New York and Yankee Stadium after the season started. The new commissioner, Happy Chandler (Judge Landis had died in 1944), proclaimed April 27, 1947, Babe Ruth Day around the major leagues, with the most significant observance to be at Yankee Stadium. A number of teammates and others spoke in honor of Ruth, who briefly addressed the crowd of almost 60,000. By then, his voice was a soft whisper with a very low, raspy tone. Around this time, developments in chemotherapy offered some hope for Ruth. The doctors had not told Ruth he had cancer because of his family's fear that he might do himself harm. 
They treated him with pteroyl triglutamate (Teropterin), a folic acid derivative; he may have been the first human subject. Ruth showed dramatic improvement during the summer of 1947, so much so that his case was presented by his doctors at a scientific meeting, without using his name. He was able to travel around the country, doing promotional work for the Ford Motor Company on American Legion Baseball. He appeared again at another day in his honor at Yankee Stadium in September, but was not well enough to pitch in an old-timers game as he had hoped. The improvement was only a temporary remission, and by late 1947, Ruth was unable to help with the writing of his autobiography, The Babe Ruth Story, which was almost entirely ghostwritten. In and out of the hospital in Manhattan, he left for Florida in February 1948, doing what activities he could. After six weeks he returned to New York to appear at a book-signing party. He also traveled to California to witness the filming of the movie based on the book. On June 5, 1948, a "gaunt and hollowed out" Ruth visited Yale University to donate a manuscript of The Babe Ruth Story to its library. At Yale, he met with future president George H. W. Bush, who was the captain of the Yale baseball team. On June 13, Ruth visited Yankee Stadium for the final time in his life, appearing at the 25th-anniversary celebrations of "The House that Ruth Built". By this time he had lost much weight and had difficulty walking. Introduced along with his surviving teammates from 1923, Ruth used a bat as a cane. Nat Fein's photo of Ruth taken from behind, standing near home plate and facing "Ruthville" (right field), became one of baseball's most famous and widely circulated photographs, and won the Pulitzer Prize. Ruth made one final trip on behalf of American Legion Baseball, then entered Memorial Hospital, where he would die. He was never told he had cancer, but before his death he surmised it. He was able to leave the hospital for a few short trips, including a final visit to Baltimore. On July 26, 1948, Ruth left the hospital to attend the premiere of the film The Babe Ruth Story. Shortly thereafter, he returned to the hospital for the final time. He was barely able to speak. Ruth's condition gradually grew worse, and only a few visitors were permitted to see him, one of whom was National League president and future Commissioner of Baseball Ford Frick. "Ruth was so thin it was unbelievable. He had been such a big man and his arms were just skinny little bones, and his face was so haggard", Frick said years later. Thousands of New Yorkers, including many children, stood vigil outside the hospital during Ruth's final days. On August 16, 1948, at 8:01 p.m., Ruth died in his sleep at the age of 53. His open casket was placed on display in the rotunda of Yankee Stadium, where it remained for two days; 77,000 people filed past to pay him tribute. His Requiem Mass was celebrated by Francis Cardinal Spellman at St. Patrick's Cathedral; a crowd estimated at 75,000 waited outside. Ruth is buried with his second wife, Claire, on a hillside in Section 25 at the Gate of Heaven Cemetery in Hawthorne, New York. Memorial and museum On April 19, 1949, the Yankees unveiled a granite monument in Ruth's honor in center field of Yankee Stadium. 
The monument was located in the field of play next to a flagpole and similar tributes to Huggins and Gehrig until the stadium was remodeled from 1974 to 1975, which resulted in the outfield fences moving inward and enclosing the monuments from the playing field. This area was known thereafter as Monument Park. Yankee Stadium, "the House that Ruth Built", was replaced after the 2008 season with a new Yankee Stadium across the street from the old one; Monument Park was subsequently moved to the new venue behind the center field fence. Ruth's uniform number 3 has been retired by the Yankees, and he is one of five Yankees players or managers to have a granite monument within the stadium. The Babe Ruth Birthplace Museum is located at 216 Emory Street, a Baltimore row house where Ruth was born, and three blocks west of Oriole Park at Camden Yards, where the AL's Baltimore Orioles play. The property was restored and opened to the public in 1973 by the non-profit Babe Ruth Birthplace Foundation, Inc. Ruth's widow, Claire, his two daughters, Dorothy and Julia, and his sister, Mamie, helped select and install exhibits for the museum. Impact Ruth was the first baseball star to be the subject of overwhelming public adulation. Baseball had been known for star players such as Ty Cobb and "Shoeless Joe" Jackson, but both men had uneasy relations with fans. In Cobb's case, the incidents were sometimes marked by violence. Ruth's biographers agreed that he benefited from the timing of his ascension to "Home Run King". The country had been hit hard by both the war and the 1918 flu pandemic and longed for something to help put these traumas behind it. Ruth also resonated in a country which felt, in the aftermath of the war, that it took second place to no one. Montville argued that Ruth was a larger-than-life figure who was capable of unprecedented athletic feats in the nation's largest city. Ruth became an icon of the social changes that marked the early 1920s. In his history of the Yankees, Glenn Stout writes that "Ruth was New York incarnate—uncouth and raw, flamboyant and flashy, oversized, out of scale, and absolutely unstoppable". During his lifetime, Ruth became a symbol of the United States. During World War II Japanese soldiers yelled in English, "To hell with Babe Ruth", to anger American soldiers. Ruth replied that he hoped "every Jap that mention[ed] my name gets shot". Creamer recorded that "Babe Ruth transcended sport and moved far beyond the artificial limits of baselines and outfield fences and sports pages". Wagenheim stated, "He appealed to a deeply rooted American yearning for the definitive climax: clean, quick, unarguable." According to Glenn Stout, "Ruth's home runs were exalted, uplifting experience that meant more to fans than any runs they were responsible for. A Babe Ruth home run was an event unto itself, one that meant anything was possible." Although Ruth was not just a power hitter—he was the Yankees' best bunter, and an excellent outfielder—Ruth's penchant for hitting home runs altered how baseball is played. Prior to 1920, home runs were unusual, and managers tried to win games by getting a runner on base and bringing him around to score through such means as the stolen base, the bunt, and the hit and run. Advocates of what was dubbed "inside baseball", such as Giants manager McGraw, disliked the home run, considering it a blot on the purity of the game. According to sportswriter W. A. 
Phelon, after the 1920 season, Ruth's breakout performance that season and the response in excitement and attendance, "settled, for all time to come, that the American public is nuttier over the Home Run than the Clever Fielding or the Hitless Pitching. Viva el Home Run and two times viva Babe Ruth, exponent of the home run, and overshadowing star." Bill James states, "When the owners discovered that the fans liked to see home runs, and when the foundations of the games were simultaneously imperiled by disgrace [in the Black Sox Scandal], then there was no turning back." While a few, such as McGraw and Cobb, decried the passing of the old-style play, teams quickly began to seek and develop sluggers. According to sportswriter Grantland Rice, only two sports figures of the 1920s approached Ruth in popularity—boxer Jack Dempsey and racehorse Man o' War. One of the factors that contributed to Ruth's broad appeal was the uncertainty about his family and early life. Ruth appeared to exemplify the American success story, that even an uneducated, unsophisticated youth, without any family wealth or connections, can do something better than anyone else in the world. Montville writes that "the fog [surrounding his childhood] will make him forever accessible, universal. He will be the patron saint of American possibility." Similarly, the fact that Ruth played in the pre-television era, when a relatively small portion of his fans had the opportunity to see him play allowed his legend to grow through word of mouth and the hyperbole of sports reporters. Reisler states that recent sluggers who surpassed Ruth's 60-home run mark, such as Mark McGwire and Barry Bonds, generated much less excitement than when Ruth repeatedly broke the single-season home run record in the 1920s. Ruth dominated a relatively small sports world, while Americans of the present era have many sports available to watch. Legacy Creamer describes Ruth as "a unique figure in the social history of the United States". Thomas Barthel describes him as one of the first celebrity athletes; numerous biographies have portrayed him as "larger than life". He entered the language: a dominant figure in a field, whether within or outside sports, is often referred to as "the Babe Ruth" of that field. Similarly, "Ruthian" has come to mean in sports, "colossal, dramatic, prodigious, magnificent; with great power". He was the first athlete to make more money from endorsements and other off-the-field activities than from his sport. In 2006, Montville stated that more books have been written about Ruth than any other member of the Baseball Hall of Fame. At least five of these books (including Creamer's and Wagenheim's) were written in 1973 and 1974. The books were timed to capitalize on the increase in public interest in Ruth as Hank Aaron approached his career home run mark, which he broke on April 8, 1974. As he approached Ruth's record, Aaron stated, "I can't remember a day this year or last when I did not hear the name of Babe Ruth." Montville suggested that Ruth is probably even more popular today than he was when his career home run record was broken by Aaron. The long ball era that Ruth started continues in baseball, to the delight of the fans. Owners build ballparks to encourage home runs, which are featured on SportsCenter and Baseball Tonight each evening during the season. 
The questions of performance-enhancing drug use, which dogged later home run hitters such as McGwire and Bonds, do nothing to diminish Ruth's reputation; his overindulgences with beer and hot dogs seem part of a simpler time. In various surveys and rankings, Ruth has been named the greatest baseball player of all time. In 1998, The Sporting News ranked him number one on the list of "Baseball's 100 Greatest Players". In 1999, baseball fans named Ruth to the Major League Baseball All-Century Team. He was named baseball's Greatest Player Ever in a ballot commemorating the 100th anniversary of professional baseball in 1969. The Associated Press reported in 1993 that Muhammad Ali was tied with Babe Ruth as the most recognized athlete in America. In a 1999 ESPN poll, he was ranked as the second-greatest U.S. athlete of the century, behind Michael Jordan. In 1983, the United States Postal Service honored Ruth with the issuance of a twenty-cent stamp. Several of the most expensive items of sports memorabilia and baseball memorabilia ever sold at auction are associated with Ruth. Ruth's 1920 Yankees jersey, which sold for $4,415,658 in 2012, is the third most expensive piece of sports memorabilia ever sold, after Diego Maradona's 1986 World Cup jersey and Pierre de Coubertin's original 1892 Olympic Manifesto. The bat with which he hit the first home run at Yankee Stadium is in The Guinness Book of World Records as the most expensive baseball bat sold at auction, having fetched $1.265 million on December 2, 2004. A hat of Ruth's from the 1934 season set a record for a baseball cap when David Wells sold it at auction for $537,278 in 2012. In 2017, Charlie Sheen sold Ruth's 1927 World Series ring for $2,093,927 at auction. It easily broke the record for a championship ring previously set when Julius Erving's 1974 ABA championship ring sold for $460,741 in 2011. One long-term survivor of the craze over Ruth may be the Baby Ruth candy bar. The original company to market the confectionery, the Curtis Candy Company, maintained that the bar was named after Ruth Cleveland, daughter of former president Grover Cleveland. She died in 1904 and the bar was first marketed in 1921, at the height of the craze over Ruth. He later sought to market candy bearing his name; he was refused a trademark because of the Baby Ruth bar. Corporate files from 1921 are no longer extant; the brand has changed hands several times and is now owned by Ferrara Candy Company. The Ruth estate licensed his likeness for use in an advertising campaign for Baby Ruth in 1995. In 2005, the Baby Ruth bar became the official candy bar of Major League Baseball in a marketing arrangement. In 2018, President Donald Trump announced that Ruth, along with Elvis Presley and Antonin Scalia, would posthumously receive the Presidential Medal of Freedom. Montville describes the continuing relevance of Babe Ruth in American culture, more than three-quarters of a century after he last swung a bat in a major league game. 
A battle is an occurrence of combat in warfare between opposing military units of any number or size. A war usually consists of multiple battles. In general, a battle is a military engagement that is well defined in duration, area, and force commitment. An engagement with only limited commitment between the forces and without decisive results is sometimes called a skirmish. The word "battle" can also be used infrequently to refer to an entire operational campaign, although this usage greatly diverges from its conventional or customary meaning. Generally, the word "battle" is used for such campaigns if referring to a protracted combat encounter in which either one or both of the combatants had the same methods, resources, and strategic objectives throughout the encounter. Some prominent examples of this would be the Battle of the Atlantic, Battle of Britain, and Battle of Stalingrad, all in World War II. Wars and military campaigns are guided by military strategy, whereas battles take place on a level of planning and execution known as operational mobility. German strategist Carl von Clausewitz stated that "the employment of battles ... to achieve the object of war" was the essence of strategy. Etymology Battle is a loanword from the Old French bataille, first attested in 1297, from Late Latin battualia, meaning "exercise of soldiers and gladiators in fighting and fencing", from Late Latin battuere (taken from Germanic), "to beat", from which the English word battery is also derived via Middle English. Characteristics The defining characteristic of the fight as a concept in military science has changed with the variations in the organisation, employment and technology of military forces. The English military historian John Keegan suggested an ideal definition of battle as "something which happens between two armies leading to the moral then physical disintegration of one or the other of them" but the origins and outcomes of battles can rarely be summarized so neatly. Battle in the 20th and 21st centuries is defined as the combat between large components of the forces in a military campaign, used to achieve military objectives. Where the duration of the battle is longer than a week, it is often for reasons of planning called an operation. Battles can be planned, encountered or forced by one side when the other is unable to withdraw from combat. A battle always has as its purpose the reaching of a mission goal by use of military force. A victory in the battle is achieved when one of the opposing sides forces the other to abandon its mission and surrender its forces, routs the other (i.e., forces it to retreat or renders it militarily ineffective for further combat operations) or annihilates the latter, resulting in their deaths or capture. A battle may end in a Pyrrhic victory, which ultimately favors the defeated party. If no resolution is reached in a battle, it can result in a stalemate. A conflict in which one side is unwilling to reach a decision by a direct battle using conventional warfare often becomes an insurgency. Until the 19th century the majority of battles were of short duration, many lasting a part of a day. (The Battle of Preston (1648), the Battle of Nations (1813) and the Battle of Gettysburg (1863) were exceptional in lasting three days.) This was mainly due to the difficulty of supplying armies in the field or conducting night operations. The means of prolonging a battle was typically with siege warfare. 
Improvements in transport and the sudden evolving of trench warfare, with its siege-like nature during the First World War in the 20th century, lengthened the duration of battles to days and weeks. This created the requirement for unit rotation to prevent combat fatigue, with troops preferably not remaining in a combat area of operations for more than a month. The use of the term "battle" in military history has led to its misuse when referring to almost any scale of combat, notably by strategic forces involving hundreds of thousands of troops that may be engaged in either one battle at a time (Battle of Leipzig) or operations (Battle of Kursk). The space a battle occupies depends on the range of the weapons of the combatants. A "battle" in this broader sense may be of long duration and take place over a large area, as in the case of the Battle of Britain or the Battle of the Atlantic. Until the advent of artillery and aircraft, battles were fought with the two sides within sight, if not reach, of each other. The depth of the battlefield has also increased in modern warfare with inclusion of the supporting units in the rear areas; supply, artillery, medical personnel etc. often outnumber the front-line combat troops. Battles are made up of a multitude of individual combats, skirmishes and small engagements and the combatants will usually only experience a small part of the battle. To the infantryman, there may be little to distinguish between combat as part of a minor raid or a big offensive, nor is it likely that he anticipates the future course of the battle; few of the British infantry who went over the top on the first day on the Somme, 1 July 1916, would have anticipated that the battle would last five months. Some of the Allied infantry who had just dealt a crushing defeat to the French at the Battle of Waterloo fully expected to have to fight again the next day (at the Battle of Wavre). Battlespace Battlespace is a unified strategic concept to integrate and combine armed forces for the military theatre of operations, including air, information, land, sea and space. It includes the environment, factors and conditions that must be understood to apply combat power, protect the force or complete the mission, comprising enemy and friendly armed forces; facilities; weather; terrain; and the electromagnetic spectrum. Factors Battles are decided by various factors, the number and quality of combatants and equipment, the skill of commanders and terrain are among the most prominent. Weapons and armour can be decisive; on many occasions armies have achieved victory through more advanced weapons than those of their opponents. An extreme example was in the Battle of Omdurman, in which a large army of Sudanese Mahdists armed in a traditional manner were destroyed by an Anglo-Egyptian force equipped with Maxim machine guns and artillery. On some occasions, simple weapons employed in an unorthodox fashion have proven advantageous; Swiss pikemen gained many victories through their ability to transform a traditionally defensive weapon into an offensive one. Zulus in the early 19th century were victorious in battles against their rivals in part because they adopted a new kind of spear, the iklwa. Forces with inferior weapons have still emerged victorious at times, for example in the Wars of Scottish Independence. Disciplined troops are often of greater importance; at the Battle of Alesia, the Romans were greatly outnumbered but won because of superior training. 
Battles can also be determined by terrain. Capturing high ground has been the main tactic in innumerable battles. An army that holds the high ground forces the enemy to climb and thus wear itself down. Areas of jungle and forest with dense vegetation act as force multipliers, of benefit to inferior armies. Terrain may have lost importance in modern warfare, due to the advent of aircraft, though the terrain is still vital for camouflage, especially for guerrilla warfare. Generals and commanders also play an important role: Hannibal, Julius Caesar, Khalid ibn Walid, Subutai and Napoleon Bonaparte were all skilled generals, and their armies were extremely successful at times. An army that can trust the commands of its leaders with conviction in its success invariably has a higher morale than an army that doubts its every move. The British fleet in the naval Battle of Trafalgar owed its success to the reputation of Admiral Lord Nelson. Types Battles can be fought on land, at sea, and in the air. Naval battles have occurred since before the 5th century BC. Air battles have been far less common, due to their late conception, the most prominent being the Battle of Britain in 1940. Since the Second World War, land or sea battles have come to rely on air support. During the Battle of Midway, five aircraft carriers were sunk without either fleet coming into direct contact. A pitched battle is an encounter where opposing sides agree on the time and place of combat. A battle of encounter (or encounter battle) is a meeting engagement where the opposing sides collide in the field without either having prepared their attack or defence. A battle of attrition aims to inflict losses on an enemy that are less sustainable compared to one's own losses. These need not be greater numerical losses – if one side is much more numerous than the other then pursuing a strategy based on attrition can work even if casualties on both sides are about equal. Many battles of the Western Front in the First World War were intentionally (Verdun) or unintentionally (Somme) attrition battles. A battle of breakthrough aims to pierce the enemy's defences, thereby exposing the vulnerable flanks which can be turned. A battle of encirclement—the Kesselschlacht of the German battle of manoeuvre (Bewegungskrieg)—surrounds the enemy in a pocket. A battle of envelopment involves an attack on one or both flanks; the classic example being the double envelopment of the Battle of Cannae. A battle of annihilation is one in which the defeated party is destroyed in the field, such as the French fleet at the Battle of the Nile. Battles are usually hybrids of different types listed above. A decisive battle is one with political effects, determining the course of the war, such as the Battle of Smolensk, or bringing hostilities to an end, such as the Battle of Hastings or the Battle of Hattin. A decisive battle can change the balance of power or boundaries between countries. The concept of the decisive battle became popular with the publication in 1851 of Edward Creasy's The Fifteen Decisive Battles of the World. British military historians J.F.C. Fuller (The Decisive Battles of the Western World) and B.H. Liddell Hart (Decisive Wars of History), among many others, have written books in the style of Creasy's work. Land There is an obvious difference in the way battles have been fought. Early battles were probably fought between rival hunting bands as unorganized crowds. 
During the Battle of Megiddo, the first reliably documented battle, in the fifteenth century BC, both armies were organised and disciplined; during the many wars of the Roman Empire, barbarians continued to use mob tactics. As the Age of Enlightenment dawned, armies began to fight in highly disciplined lines. Each would follow the orders from their officers and fight as a unit instead of individuals. Armies were divided into regiments, battalions, companies and platoons. These armies would march, line up and fire in divisions. Native Americans, on the other hand, did not fight in lines, using guerrilla tactics. American colonists and European forces continued using disciplined lines into the American Civil War. A new style arose from the 1850s to the First World War, known as trench warfare, which also led to tactical radio. Chemical warfare also began in 1915. By the Second World War, the use of the smaller divisions, platoons and companies became much more important as precise operations became vital. Instead of the trench stalemate of 1915–1917, in the Second World War, battles developed where small groups encountered other platoons. As a result, elite squads became much more recognized and distinguishable. Maneuver warfare also returned with an astonishing pace with the advent of the tank, replacing the cannon of the Enlightenment Age. Artillery has since gradually replaced the use of frontal troops. Modern battles resemble those of the Second World War, along with indirect combat through the use of aircraft and missiles, which has come to constitute a large portion of wars in place of battles, where battles are now mostly reserved for capturing cities. Naval One significant difference of modern naval battles, as opposed to earlier forms of combat, is the use of marines, which introduced amphibious warfare. Today, a marine is actually an infantry regiment that sometimes fights solely on land and is no longer tied to the navy. A good example of an old naval battle is the Battle of Salamis. Most ancient naval battles were fought by fast ships using the battering ram to sink opposing fleets or steer close enough for boarding in hand-to-hand combat. Troops were often used to storm enemy ships, a tactic used by the Romans and by pirates. This tactic was usually used by civilizations that could not beat the enemy with ranged weaponry. Another invention, used by the Byzantines from the early Middle Ages, was Greek fire, which was used to set enemy fleets on fire. Empty demolition ships utilized the tactic of crashing into opposing ships and setting them afire with an explosion. After the invention of cannons, navies became useful as support for land warfare. During the 19th century, the development of mines led to a new type of naval warfare. The ironclad, first used in the American Civil War and resistant to cannon fire, soon made the wooden warship obsolete. The deployment of military submarines during World War I brought naval warfare both above and below the surface. With the growing role of military aircraft during World War II, battles were fought in the sky as well as below the ocean. Aircraft carriers have since become the central unit in naval warfare, acting as a mobile base for lethal aircraft. Aerial Although aircraft have for the most part been used as a supplement to land or naval engagements, since their first major military use in World War I they have increasingly taken on larger roles in warfare. 
During World War I, the primary use was for reconnaissance, and small-scale bombardment. Aircraft began becoming much more prominent in the Spanish Civil War and especially World War II. Aircraft design began specializing, primarily into two types: bombers, which carried explosive payloads to bomb land targets or ships; and fighter-interceptors, which were used to either intercept incoming aircraft or to escort and protect bombers (engagements between fighter aircraft were known as dog fights). Some of the more notable aerial battles in this period include the Battle of Britain and the Battle of Midway. Another important use of aircraft came with the development of the helicopter, which first became heavily used during the Vietnam War, and still continues to be widely used today to transport and augment ground forces. Today, direct engagements between aircraft are rare – the most modern fighter-interceptors carry much more extensive bombing payloads, and are used to bomb precision land targets, rather than to fight other aircraft. Anti-aircraft batteries are used much more extensively to defend against incoming aircraft than interceptors. Despite this, aircraft today are much more extensively used as the primary tools for both army and navy, as evidenced by the prominent use of helicopters to transport and support troops, the use of aerial bombardment as the "first strike" in many engagements, and the replacement of the battleship with the aircraft carrier as the center of most modern navies. Naming Battles are usually named after some feature of the battlefield geography, such as a town, forest or river, commonly prefixed "Battle of...". Occasionally battles are named after the date on which they took place, such as The Glorious First of June. In the Middle Ages it was considered important to settle on a suitable name for a battle which could be used by the chroniclers. After Henry V of England defeated a French army on October 25, 1415, he met with the senior French herald and they agreed to name the battle after the nearby castle and so it was called the Battle of Agincourt. In other cases, the sides adopted different names for the same battle, such as the Battle of Gallipoli which is known in Turkey as the Battle of Çanakkale. During the American Civil War, the Union tended to name the battles after the nearest watercourse, such as the Battle of Wilsons Creek and the Battle of Stones River, whereas the Confederates favoured the nearby towns, as in the Battles of Chancellorsville and Murfreesboro. Occasionally both names for the same battle entered the popular culture, such as the First Battle of Bull Run and the Second Battle of Bull Run, which are also referred to as the First and Second Battles of Manassas. Sometimes in desert warfare, there is no nearby town name to use; map coordinates gave the name to the Battle of 73 Easting in the First Gulf War. Some place names have become synonymous with battles, such as the Passchendaele, Pearl Harbor, the Alamo, Thermopylae and Waterloo. Military operations, many of which result in battle, are given codenames, which are not necessarily meaningful or indicative of the type or the location of the battle. Operation Market Garden and Operation Rolling Thunder are examples of battles known by their military codenames. When a battleground is the site of more than one battle in the same conflict, the instances are distinguished by ordinal number, such as the First and Second Battles of Bull Run. 
An extreme case is that of the twelve Battles of the Isonzo—First to Twelfth—between Italy and Austria-Hungary during the First World War. Some battles are named for the convenience of military historians so that periods of combat can be neatly distinguished from one another. Following the First World War, the British Battles Nomenclature Committee was formed to decide on standard names for all battles and subsidiary actions. To the soldiers who did the fighting, the distinction was usually academic; a soldier fighting at Beaumont Hamel on November 13, 1916, was probably unaware he was taking part in what the committee named the Battle of the Ancre. Many combats are too small to be battles; terms such as "action", "affair", "skirmish", "firefight", "raid", or "offensive patrol" are used to describe small military encounters. These combats often take place within the time and space of a battle and while they may have an objective, they are not necessarily "decisive". Sometimes the soldiers are unable to immediately gauge the significance of the combat; in the aftermath of the Battle of Waterloo, some British officers were in doubt as to whether the day's events merited the title of "battle" or would be called an "action". Effects Battles affect the individuals who take part, as well as the political actors. Personal effects of battle range from mild psychological issues to permanent and crippling injuries. Some battle-survivors have nightmares about the conditions they encountered or abnormal reactions to certain sights or sounds and some experience flashbacks. Physical effects of battle can include scars, amputations, lesions, loss of bodily functions, blindness, paralysis and death. Battles affect politics; a decisive battle can cause the losing side to surrender, while a Pyrrhic victory such as the Battle of Asculum can cause the winning side to reconsider its goals. Battles in civil wars have often decided the fate of monarchs or political factions. Famous examples include the Wars of the Roses, as well as the Jacobite risings. Battles affect the commitment of one side or the other to the continuance of a war, for example the Battle of Inchon and the Battle of Huế during the Tet Offensive. 
Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē), meaning "pasture", "herbs", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes. Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately. Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity. History Early botany Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Ancient Iranic Avestan writings, and in works from China purportedly from before 221 BCE. 
Modern botany traces its roots back to Ancient Greece, specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later. Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner. In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden, founded in 1545, is usually considered to be the first that is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621. German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification. Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal, Historia Plantarum, in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium, in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–1612) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, polymath Robert Hooke discovered cells, a term he coined, in cork, and a short time later in living plant tissue. Early modern botany During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. 
This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts: mosses, liverworts, ferns, algae and fungi. This clinical categorization of plants was soon followed by the creation of the categories of race and sexuality; the classification of plants necessitated classification of all other living things, including humans. As a result, taxonomy and botany played an influential role in the development of scientific racism. One example of this progression is in the works of Carl Linnaeus, the previously mentioned 18th century botanist. As Linnaeus moved on from classifying plants to classifying all organisms, he published Systema Naturae, a major classificatory work that he would continue to edit and expand over time. In his 10th edition he expanded on the four "varieties" of man - Europaeus Albus, Americanus Rubescens, Asiaticus Fuscus, and Africanus Niger, based on the four known continents - attributing a certain skin color, medical temperament, body posture, physical traits, behavior, manner of clothing, and form of government to each variety of people. In these descriptions he labeled Asian people as stern, haughty, and greedy; black people as sly, sluggish, and neglectful; white people as light, wise, and inventors. Linnaeus is only one of many botanists who influenced scientific racism through the categorization of organisms. Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus suggested. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected Candolle's ideas of the progression of morphological complexity, and the later Bentham & Hooker system, which remained influential into the 20th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity. Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's , published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems. Early modern botany was practiced on an extensive, international scale. Modern botany emerged following the surge in exploration of other continents by European colonizers. Plant collectors would travel to different countries in search of new specimens for botanists to classify. Plants usable for cultivation would then be hybridized. The history of botany has been connected to imbalanced power structures in the past. 
Slave labor was widespread not only in plantations but also in the running of botanical gardens; for example, on St. Vincent Island, plantation slavery was vital for the economic success of the sugar colonies and for the maintenance of the breadfruit cultivation project in the St. Vincent botanical gardens. Late modern botany Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century. The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants. Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides. 20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. 
The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield. Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern Molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research. Scope and importance The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life. The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. 
The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses. Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. Human nutrition Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant-people relationships arose between the indigenous people of Canada in identifying edible plants from inedible plants. This relationship the indigenous people had with plants was recorded by ethnobotanists. Plant biochemistry Plant biochemistry is the study of the chemical processes used by plants. 
Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. Plants make various photosynthetic pigments, such as xanthophylls, chlorophyll a and chlorophyll b, which can be separated by paper chromatography. Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product. The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH, which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant. Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material from which to build cell membranes and making the polymer cutin, which is found in the plant cuticle that protects land plants from drying out. Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant draws water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants, responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. 
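The overall effect of the light-dependent and light-independent reactions described above can be summarised, as a simplification, by the standard net equation for oxygenic photosynthesis (the pathway in fact proceeds through ATP, NADPH and the three-carbon intermediate G3P, two molecules of which are needed to make one six-carbon glucose):

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\ \text{light}\ }\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]

Read in reverse, the same stoichiometry summarises aerobic respiration, in which the chemical energy stored in glucose is released again.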
The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid metabolism and the  carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common  carbon fixation pathway. These biochemical strategies are unique to land plants. Medicine and materials Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals, as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (the active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the painkiller aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This Native American knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as a route to drug discovery. Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce Lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim, and the artist's pigments gamboge and rose madder. Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent, and as an artist's material, and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels such as biodiesel, important alternatives to fossil fuels. Sweetgrass was used by Native Americans to ward off insects such as mosquitoes. These insect-repelling properties of sweetgrass were later attributed, in research reported by the American Chemical Society, to the molecules phytol and coumarin. Plant ecology Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. 
Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. This information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. Plants, climate and environmental change Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. Genetics Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms. Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. 
The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomictic seed. As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. Molecular genetics A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. 
The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. Epigenetics Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others. Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. 
While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. Epigenetic changes can lead to paramutations, which do not follow the Mendelian heritage rules. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. Plant evolution The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina. Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. Plant physiology Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. 
Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, active transport and mass flow are all different ways in which transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem, and plant hormones are transported by a variety of processes. Plant hormones Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids. The hypothesis that plant growth and development are coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek , to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem, where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation, and in the control of flowering. 
Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops. Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light. Plant anatomy and morphology Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. 
Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means. Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations. Systematic botany Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress. Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. 
Additionally, the entire term is ordinarily italicised (or underlined when italics are not available). The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related. Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent. From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants. In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed. Symbols A few symbols are in current use in botany. 
A number of others are obsolete; for example, Linnaeus used planetary symbols (Mars) for biennial plants, (Jupiter) for herbaceous perennials and (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willd. used (Saturn) for neuter in addition to (Mercury) for hermaphroditic. The following symbols are still used:
♀ female
♂ male
⚥ hermaphrodite/bisexual
⚲ vegetative (asexual) reproduction
◊ sex unknown
☉ annual
⚇ biennial
♾ perennial
☠ poisonous
🛈 further information
× crossbred hybrid
+ grafted hybrid
See also
Branches of botany
Evolution of plants
Glossary of botanical terms
Glossary of plant morphology
List of botany journals
List of botanists
List of botanical gardens
List of botanists by author abbreviation
List of domesticated plants
List of flowers
List of systems of plant taxonomy
Outline of botany
Timeline of British botany
Bacillus thuringiensis (or Bt) is a gram-positive, soil-dwelling bacterium, the most commonly used biological pesticide worldwide. B. thuringiensis also occurs naturally in the gut of caterpillars of various types of moths and butterflies, as well on leaf surfaces, aquatic environments, animal feces, insect-rich environments, and flour mills and grain-storage facilities. It has also been observed to parasitize other moths such as Cadra calidella—in laboratory experiments working with C. calidella, many of the moths were diseased due to this parasite. During sporulation, many Bt strains produce crystal proteins (proteinaceous inclusions), called delta endotoxins, that have insecticidal action. This has led to their use as insecticides, and more recently to genetically modified crops using Bt genes, such as Bt corn. Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats. As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt. Taxonomy and discovery In 1902, B. thuringiensis was first discovered in silkworms by Japanese sericultural engineer . He named it B. sotto, using the Japanese word , here referring to bacillary paralysis. In 1911, German microbiologist Ernst Berliner rediscovered it when he isolated it as the cause of a disease called in flour moth caterpillars in Thuringia (hence the specific name thuringiensis, "Thuringian"). B. sotto would later be reassigned as B. thuringiensis var. sotto. In 1976, Robert A. Zakharyan reported the presence of a plasmid in a strain of B. thuringiensis and suggested the plasmid's involvement in endospore and crystal formation. B. thuringiensis is closely related to B. cereus, a soil bacterium, and B. anthracis, the cause of anthrax; the three organisms differ mainly in their plasmids. Like other members of the genus, all three are capable of producing endospores. Species group placement B. thuringiensis is placed in the Bacillus cereus group which is variously defined as: seven closely related species: B. cereus sensu stricto (B. cereus), B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, and B. cytotoxicus; or as six species in a Bacillus cereus sensu lato: B. weihenstephanensis, B. mycoides, B. pseudomycoides, B. cereus, B. thuringiensis, and B. anthracis. Within this grouping B.t. is more closely related to B.ce. It is more distantly related to B.w., B.m., B.p., and B.cy. Subspecies There are several dozen recognized subspecies of B. thuringiensis. Subspecies commonly used as insecticides include B. thuringiensis subspecies kurstaki (Btk), subspecies israelensis (Bti) and (Bta). Some Bti lineages are clonal. Genetics Some strains are known to carry the same genes that produce enterotoxins in B. cereus, and so it is possible that the entire B. cereus sensu lato group may have the potential to be enteropathogens. The proteins that B. thuringiensis is most known for are encoded by cry genes. In most strains of B. thuringiensis, these genes are located on a plasmid (in other words cry is not a chromosomal gene in most strains). If these plasmids are lost it becomes indistinguishable from B. cereus as B. 
thuringiensis has no other species characteristics. Plasmid exchange has been observed, both naturally and experimentally, within B.t. and between B.t. and two congeners, B. cereus and B. mycoides. plcR is an indispensable transcription regulator of most virulence factors, its absence greatly reducing virulence and toxicity. Some strains do naturally complete their life cycle with an inactivated plcR. It is half of a two-gene operon along with the heptapeptide papR. papR is part of quorum sensing in B. thuringiensis. Various strains including Btk ATCC 33679 carry plasmids belonging to the wider pXO1-like family. (The pXO1 family is common within the B. cereus group, with members of ≈330 kb length; they differ from pXO1 by replacement of the pXO1 pathogenicity island.) The insect parasite Btk HD73 carries a pXO2-like plasmid (pBT9727) lacking the 35 kb pathogenicity island of pXO2 itself, and in fact having no identifiable virulence factors. (The pXO2 family does not have replacement of the pathogenicity island, instead simply lacking that part of pXO2.) The genomes of the B. cereus group may contain two types of introns, dubbed group I and group II. B.t. strains variously have 0-5 group I and 0-13 group II introns. There is still insufficient information to determine whether chromosome-plasmid coevolution to enable adaptation to particular environmental niches has occurred or is even possible. Common with B. cereus but so far not found elsewhere - including in other members of the species group - are the efflux pump BC3663, the N-acyl--amino-acid amidohydrolase BC3664, and the methyl-accepting chemotaxis protein BC5034. Proteome B. thuringiensis has a proteome diversity similar to that of its close relative B. cereus. The protein engineered into Bt cotton is a crystal (Cry) protein. Mechanism of insecticidal action Upon sporulation, B. thuringiensis forms crystals of two types of proteinaceous insecticidal delta endotoxins (δ-endotoxins) called crystal proteins or Cry proteins, which are encoded by cry genes, and Cyt proteins. Cry toxins have specific activities against insect species of the orders Lepidoptera (moths and butterflies), Diptera (flies and mosquitoes), Coleoptera (beetles) and Hymenoptera (wasps, bees, ants and sawflies), as well as against nematodes. Thus, B. thuringiensis serves as an important reservoir of Cry toxins for production of biological insecticides and insect-resistant genetically modified crops. When insects ingest toxin crystals, their alkaline digestive tracts denature the insoluble crystals, making them soluble and thus amenable to being cut by proteases found in the insect gut, which liberate the toxin from the crystal. The Cry toxin is then inserted into the insect gut cell membrane, paralyzing the digestive tract and forming a pore. The insect stops eating and starves to death; live Bt bacteria may also colonize the insect, which can contribute to death. Death occurs within a few hours or weeks. The midgut bacteria of susceptible larvae may be required for B. thuringiensis insecticidal activity. A B. thuringiensis small RNA called BtsR1 can silence Cry5Ba toxin expression when outside the host by binding to the RBS site of the Cry5Ba toxin transcript, thereby avoiding nematode behavioral defenses. The silencing results in increased ingestion of the bacteria by C. elegans. The expression of BtsR1 is then reduced after ingestion, resulting in Cry5Ba toxin production and host death. In 1996 another class of insecticidal proteins in Bt was discovered: the vegetative insecticidal proteins (Vip; ). 
Vip proteins do not share sequence homology with Cry proteins, in general do not compete for the same receptors, and some kill different insects than do Cry proteins. In 2000, a novel subgroup of Cry protein, designated parasporin, was discovered from non-insecticidal B. thuringiensis isolates. The proteins of the parasporin group are defined as B. thuringiensis and related bacterial parasporal proteins that are not hemolytic, but are capable of preferentially killing cancer cells. As of January 2013, parasporins comprise six subfamilies: PS1 to PS6. Use of spores and proteins in pest control Spores and crystalline insecticidal proteins produced by B. thuringiensis have been used to control insect pests since the 1920s and are often applied as liquid sprays. They are now used as specific insecticides under trade names such as DiPel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects, and are used in organic farming; however, the manuals for these products do contain many environmental and human health warnings, and a 2012 European regulatory peer review of five approved strains found that, while data exist to support some claims of low toxicity to humans and the environment, the data are insufficient to justify many of these claims. New strains of Bt are developed and introduced over time as insects develop resistance to Bt, or when there is a desire to force mutations to modify organism characteristics, or to use homologous recombinant genetic engineering to improve crystal size and increase pesticidal activity, or to broaden the host range of Bt and obtain more effective formulations. Each new strain is given a unique number and registered with the U.S. EPA, and allowances may be given for genetic modification depending on "its parental strains, the proposed pesticide use pattern, and the manner and extent to which the organism has been genetically modified". Formulations of Bt that are approved for organic farming in the US are listed at the website of the Organic Materials Review Institute (OMRI), and several university extension websites offer advice on how to use Bt spore or protein preparations in organic farming. Use of Bt genes in genetic engineering of plants for pest control The Belgian company Plant Genetic Systems (now part of Bayer CropScience) was the first company (in 1985) to develop genetically modified crops (tobacco) with insect tolerance by expressing cry genes from B. thuringiensis; the resulting crops contain delta endotoxin. The Bt tobacco was never commercialized; tobacco plants are used to test genetic modifications since they are easy to manipulate genetically and are not part of the food supply. Usage In 1985, a Bt potato was approved as safe by the Environmental Protection Agency, making it the first human-modified pesticide-producing crop to be approved in the US, though many plants produce pesticides naturally, including tobacco, coffee plants, cocoa, cotton and black walnut. This was the 'New Leaf' potato, and it was removed from the market in 2001 due to lack of interest. In 1996, Bt corn was approved, which killed the European corn borer and related species; subsequent Bt genes were introduced that killed corn rootworm larvae. The Bt genes engineered into crops and approved for release include, singly and stacked: Cry1A.105, CryIAb, CryIF, Cry2Ab, Cry3Bb1, Cry34Ab1, Cry35Ab1, mCry3A, and VIP, and the engineered crops include corn and cotton. 
Corn genetically modified to produce VIP was first approved in the US in 2010. In India, by 2014, more than seven million cotton farmers, occupying twenty-six million acres, had adopted Bt cotton. Monsanto developed a crop stacking a Bt gene and the glyphosate-resistance gene for the Brazilian market, which completed the Brazilian regulatory process in 2010. Bt trees - specifically Populus hybrids - have been developed. They suffer less leaf damage from insect herbivory. The results have not been entirely positive, however: the intended result - better timber yield - was not achieved, with no growth advantage despite the reduction in herbivore damage; one of their major pests still preys upon the transgenic trees; and besides that, their leaf litter decomposes differently due to the transgenic toxins, resulting in alterations to the aquatic insect populations nearby. Safety studies The use of Bt toxins as plant-incorporated protectants prompted the need for extensive evaluation of their safety for use in foods and of potential unintended impacts on the environment. Dietary risk assessment Concerns over the safety of consumption of genetically modified plant materials that contain Cry proteins have been addressed in extensive dietary risk assessment studies. As a toxic mechanism, Cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. While the target pests are exposed to the toxins primarily through leaf and stalk material, Cry proteins are also expressed in other parts of the plant, including trace amounts in maize kernels which are ultimately consumed by both humans and animals. However, other organisms (including humans, other animals and non-targeted insects) lack the appropriate receptors in their gut, so the Cry protein cannot act on them and they are not affected by Bt. Toxicology studies Animal models have been used to assess human health risk from consumption of products containing Cry proteins. The United States Environmental Protection Agency recognizes mouse acute oral feeding studies in which doses as high as 5,000 mg/kg body weight resulted in no observed adverse effects. Research on other known toxic proteins further supports the conclusion that Bt toxins are not toxic to mammals. The results of toxicology studies are further strengthened by the lack of observed toxicity from decades of use of B. thuringiensis and its crystalline proteins as an insecticidal spray. Allergenicity studies Introduction of a new protein raised concerns regarding the potential for allergic responses in sensitive individuals. Bioinformatic analysis of known allergens has indicated there is no concern of allergic reactions as a result of consumption of Bt toxins. Additionally, skin prick testing using purified Bt protein resulted in no detectable production of toxin-specific IgE antibodies, even in atopic patients. Digestibility studies Studies have been conducted to evaluate the fate of Bt toxins that are ingested in foods. Bt toxin proteins have been shown to digest within minutes of exposure to simulated gastric fluids. The instability of the proteins in digestive fluids is an additional indication that Cry proteins are unlikely to be allergenic, since most known food allergens resist degradation and are ultimately absorbed in the small intestine. 
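As an aside on how figures like the 5,000 mg/kg NOAEL above feed into dietary risk assessment, the sketch below computes a margin of exposure. This is a minimal illustration under stated assumptions, not a regulatory calculation: the NOAEL comes from the mouse feeding studies cited above, while the human dietary exposure value is an assumed placeholder, not a figure from this article.

```python
# Illustrative margin-of-exposure (MOE) calculation for a Cry protein.
# Only the NOAEL (5,000 mg/kg body weight, mouse feeding studies cited above)
# comes from the text; the dietary exposure estimate is a hypothetical
# placeholder, not a value from any regulatory assessment.

def margin_of_exposure(noael_mg_per_kg: float, exposure_mg_per_kg: float) -> float:
    """Ratio of the no-observed-adverse-effect level to an estimated daily intake."""
    return noael_mg_per_kg / exposure_mg_per_kg

noael = 5000.0            # mg Cry protein per kg body weight (from the mouse studies)
assumed_exposure = 0.004  # mg per kg body weight per day -- hypothetical estimate

moe = margin_of_exposure(noael, assumed_exposure)
print(f"Margin of exposure: {moe:,.0f}")  # 1,250,000 with these assumed inputs
```

Large ratios of this kind are one way risk assessors express the distance between doses tested in animals and plausible human intakes; the threshold regarded as acceptable varies by agency.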
Ecological risk assessment Ecological risk assessment aims to ensure there is no unintended impact on non-target organisms and no contamination of natural resources as a result of the use of a new substance, such as the use of Bt in genetically modified crops. The impact of Bt toxins on the environments where transgenic plants are grown has been evaluated to ensure no adverse effects outside of targeted crop pests. Persistence in environment Concerns over possible environmental impact from accumulation of Bt toxins from plant tissues, pollen dispersal, and direct secretion from roots have been investigated. Bt toxins may persist in soil for over 200 days, with half-lives between 1.6 and 22 days. Much of the toxin is initially degraded rapidly by microorganisms in the environment, while some is adsorbed by organic matter and persists longer. Some studies, in contrast, claim that the toxins do not persist in the soil. Bt toxins are less likely to accumulate in bodies of water, but pollen shed or soil runoff may deposit them in an aquatic ecosystem. Fish species are not susceptible to Bt toxins if exposed. Impact on non-target organisms The toxic nature of Bt proteins has an adverse impact on many major crop pests, but ecological risk assessments have been conducted to ensure safety of beneficial non-target organisms that may come into contact with the toxins. Widespread concerns over toxicity in non-target lepidopterans, such as the monarch butterfly, have been disproved through proper exposure characterization, where it was determined that non-target organisms are not exposed to high enough amounts of the Bt toxins to have an adverse effect on the population. Soil-dwelling organisms, potentially exposed to Bt toxins through root exudates, are not impacted by the growth of Bt crops. Insect resistance Multiple insects have developed a resistance to B. thuringiensis. In November 2009, Monsanto scientists found the pink bollworm had become resistant to the first-generation Bt cotton in parts of Gujarat, India - that generation expresses one Bt gene, Cry1Ac. This was the first instance of Bt resistance confirmed by Monsanto anywhere in the world. Monsanto responded by introducing a second-generation cotton with multiple Bt proteins, which was rapidly adopted. Bollworm resistance to first-generation Bt cotton was also identified in Australia, China, Spain, and the United States. Additionally, resistance to Bt was documented in field population of diamondback moth in Hawaii, the continental US, and Asia. Studies in the cabbage looper have suggested that a mutation in the membrane transporter ABCC2 can confer resistance to Bt Cry1Ac. Secondary pests Several studies have documented surges in "sucking pests" (which are not affected by Bt toxins) within a few years of adoption of Bt cotton. In China, the main problem has been with mirids, which have in some cases "completely eroded all benefits from Bt cotton cultivation". The increase in sucking pests depended on local temperature and rainfall conditions and increased in half the villages studied. The increase in insecticide use for the control of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton adoption. 
Another study in five provinces in China found the reduction in pesticide use in Bt cotton cultivars is significantly lower than that reported in research elsewhere, consistent with the hypothesis suggested by recent studies that more pesticide sprayings are needed over time to control emerging secondary pests, such as aphids, spider mites, and lygus bugs. Similar problems have been reported in India, with both mealy bugs and aphids although a survey of small Indian farms between 2002 and 2008 concluded Bt cotton adoption has led to higher yields and lower pesticide use, decreasing over time. Controversies The controversies surrounding Bt use are among the many genetically modified food controversies more widely. Lepidopteran toxicity The most publicised problem associated with Bt crops is the claim that pollen from Bt maize could kill the monarch butterfly. The paper produced a public uproar and demonstrations against Bt maize; however by 2001 several follow-up studies coordinated by the USDA had asserted that "the most common types of Bt maize pollen are not toxic to monarch larvae in concentrations the insects would encounter in the fields." Similarly, B. thuringiensis has been widely used for controlling Spodoptera littoralis larvae growth due to their detrimental pest activities in Africa and Southern Europe. However, S. littoralis showed resistance to many strains of B. thuriginesis and were only effectively controlled by a few strains. Wild maize genetic mixing A study published in Nature in 2001 reported Bt-containing maize genes were found in maize in its center of origin, Oaxaca, Mexico. Another Nature paper published in 2002 claimed that the previous paper's conclusion was the result of an artifact caused by an inverse polymerase chain reaction and that "the evidence available is not sufficient to justify the publication of the original paper." A significant controversy happened over the paper and Natures unprecedented notice. A subsequent large-scale study in 2005 failed to find any evidence of genetic mixing in Oaxaca. A 2007 study found the "transgenic proteins expressed in maize were found in two (0.96%) of 208 samples from farmers' fields, located in two (8%) of 25 sampled communities." Mexico imports a substantial amount of maize from the U.S., and due to formal and informal seed networks among rural farmers, many potential routes are available for transgenic maize to enter into food and feed webs. One study found small-scale (about 1%) introduction of transgenic sequences in sampled fields in Mexico; it did not find evidence for or against this introduced genetic material being inherited by the next generation of plants. That study was immediately criticized, with the reviewer writing, "Genetically, any given plant should be either non-transgenic or transgenic, therefore for leaf tissue of a single transgenic plant, a GMO level close to 100% is expected. In their study, the authors chose to classify leaf samples as transgenic despite GMO levels of about 0.1%. We contend that results such as these are incorrectly interpreted as positive and are more likely to be indicative of contamination in the laboratory." Colony collapse disorder As of 2007, a new phenomenon called colony collapse disorder (CCD) began affecting bee hives all over North America. Initial speculation on possible causes included new parasites, pesticide use, and the use of Bt transgenic crops. 
The Mid-Atlantic Apiculture Research and Extension Consortium found no evidence that pollen from Bt crops is adversely affecting bees. According to the USDA, "Genetically modified (GM) crops, most commonly Bt corn, have been offered up as the cause of CCD. But there is no correlation between where GM crops are planted and the pattern of CCD incidents. Also, GM crops have been widely planted since the late 1990s, but CCD did not appear until 2006. In addition, CCD has been reported in countries that do not allow GM crops to be planted, such as Switzerland. German researchers have noted in one study a possible correlation between exposure to Bt pollen and compromised immunity to Nosema." The actual cause of CCD was unknown in 2007, and scientists believe it may have multiple exacerbating causes. Beta-exotoxins Some isolates of B. thuringiensis produce a class of insecticidal small molecules called beta-exotoxin, the common name for which is thuringiensin. A consensus document produced by the OECD says: "Beta-exotoxins are known to be toxic to humans and almost all other forms of life and its presence is prohibited in B. thuringiensis microbial products". Thuringiensins are nucleoside analogues. They inhibit RNA polymerase activity, a process common to all forms of life, in rats and bacteria alike. Other hosts B. thuringiensis is an opportunistic pathogen of animals other than insects, causing necrosis, pulmonary infection, and/or food poisoning. How common this is remains unknown, because such cases are always taken to be B. cereus infections and are rarely tested for the Cry and Cyt proteins that are the only factor distinguishing B. thuringiensis from B. cereus. New nomenclature for pesticidal proteins (Bt toxins) Bacillus thuringiensis is no longer the sole source of pesticidal proteins. The Bacterial Pesticidal Protein Resource Center (BPPRC) provides information on the rapidly expanding field of pesticidal proteins for academics, regulators, and research and development personnel. See also Biological insecticides Genetically modified food Western corn rootworm Cry1Ac Diamondback moth 
A bacteriophage, also known informally as a phage, is a virus that infects and replicates within bacteria and archaea. The term was derived from "bacteria" and the Greek φαγεῖν, meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) and as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm. Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10^31 bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second largest component of biomass after prokaryotes; up to 9×10^8 virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by phages. Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy). Phages are known to interact with the immune system both indirectly, via bacterial expression of phage-encoded proteins, and directly, by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research. Classification Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid. It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals. There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus, containing linear ssRNA genomes, and the unassigned genus Lilyvirus of the order Caudovirales, containing a linear dsDNA genome. History In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following: a stage in the life cycle of the bacteria, an enzyme produced by the bacteria themselves, or a virus that grew on and destroyed the bacteria. Twort's research was interrupted by the onset of World War I, as well as a shortage of funding and the discovery of antibiotics. Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe... a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacteria-eater (from the Greek φαγεῖν, meaning "to devour"). 
He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy. In 1919, in Paris, France, d'Hérelle conducted the first clinical application of a bacteriophage, with the first reported use in the United States being in 1922. Nobel prizes awarded for phage research In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically the work of Hershey, as contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles. Uses Phage therapy Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons: Antibiotics were discovered and marketed widely. They were easier to make, store, and prescribe. Medical trials of phages were carried out, but a basic lack of understanding of phages raised questions about the validity of these trials. Publication of research in the Soviet Union was mainly in the Russian or Georgian languages and for many years was not followed internationally. The use of phages has continued since the end of the Cold War in Russia, Georgia, and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009, which evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concludes that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection. 
Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, and engineering the phage genes responsible for coding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. There have been results showing that T4 phages that are small in size and short-tailed can be helpful in detecting E. coli in the human body. Therapeutic efficacy of a phage cocktail was evaluated in a mice model with nasal infection of multidrug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate compared to those untreated at seven days post-infection. In 2017, a patient with a pancreas compromised by MDR A. baumannii was put on several antibiotics; despite this, the patient's health continued to deteriorate during a four-month period. Without effective antibiotics, the patient was subjected to phage therapy using a phage cocktail containing nine different phages that had been demonstrated to be effective against MDR A. baumannii. Once on this therapy the patient's downward clinical trajectory reversed, and returned to health. D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." This includes rivers traditionally thought to have healing powers, including India's Ganges River. Other Food industry – Phages have increasingly been used to safen food products and to forestall spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. In that same year, the FDA approved LISTEX (developed and produced by Micreos) using bacteriophages on cheese to kill Listeria monocytogenes bacteria, in order to give them generally recognized as safe (GRAS) status. In July 2007, the same bacteriophage were approved for use on all food products. In 2011 USDA confirmed that LISTEX is a clean label processing aid and is included in USDA. Research in the field of food safety is continuing to see if lytic phages are a viable option to control other food-borne pathogens in various food products. Diagnostics – In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA. Counteracting bioweapons and toxins – Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. 
Other applications for bacteriophages are as biocides for environmental surfaces, e.g., in hospitals, and as preventative treatments for catheters and medical devices before use in clinical settings. The technology for phages to be applied to dry surfaces, e.g., uniforms, curtains, or even sutures for surgery now exists. Clinical trials reported in Clinical Otolaryngology show success in veterinary treatment of pet dogs with otitis. The SEPTIC bacterium sensing and identification method uses the ion emission and its dynamics during phage infection and offers high specificity and speed for detection. Phage display is a different use of phages involving a library of phages with a variable peptide linked to a surface protein. Each phage genome encodes the variant of the protein displayed on its surface (hence the name), providing a link between the peptide variant and its encoding gene. Variant phages from the library may be selected through their binding affinity to an immobilized molecule (e.g., botulism toxin) to neutralize it. The bound, selected phages can be multiplied by reinfecting a susceptible bacterial strain, thus allowing them to retrieve the peptides encoded in them for further study. Antimicrobial drug discovery – Phage proteins often have antimicrobial activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria. Basic research – Bacteriophages are important model organisms for studying principles of evolution and ecology. Detriments Dairy industry Bacteriophages present in the environment can cause cheese to not ferment. In order to avoid this, mixed-strain starter cultures and culture rotation regimes can be used. Genetic engineering of culture microbes – especially Lactococcus lactis and Streptococcus thermophilus – have been studied for genetic analysis and modification to improve phage resistance. This has especially focused on plasmid and recombinant chromosomal modifications. Some research has focused on the potential of bacteriophages as antimicrobial against foodborne pathogens and biofilm formation within the dairy industry. As the spread of antibiotic resistance is a main concern within the dairy industry, phages can serve as a promising alternative. Replication The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors. With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant and usually is temporary. In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. 
The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients, then, the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow the lysogenic cycle and the lytic cycle is the phage lambda of E. coli. Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed. Attachment and penetration Bacterial cells are protected by a cell wall of polysaccharides, which are important virulence factors protecting bacterial cells against both immune host defenses and antibiotics. Host growth conditions also influence the ability of the phage to attach and invade them. As phage virions do not move independently, they must rely on random encounters with the correct receptors when in solution, such as blood, lymphatic circulation, irrigation, soil water, etc. Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell. After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a sort of bending motion in the shaft by going to the side, contracting closer to the cell and pushing back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead, they use their small, tooth-like tail fibers enzymatically to degrade a portion of the cell membrane before inserting their genetic material. Synthesis of proteins and nucleic acid Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assemblage of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene and in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection. Virion assembly In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. 
During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes. Release of virions Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis, by tailed phages, is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free, and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host and instead become long-term residents as prophages. Communication Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it. Genome structure Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, with only a few kilobases. However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid varies along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb.Bacteriophage genomes can be highly mosaic, i.e. the genome of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of the nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle. Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A). Systems biology The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters into a bacterial host cell may express hundreds of phage proteins which will affect the expression of numerous host genes or the host's metabolism. 
All of these complex interactions can be described and simulated in computer models. For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect, hence the challenge becomes identifying the direct interactions between bacteria and phage. Several attempts have been made to map protein–protein interactions among phage and their host. For instance, bacteriophage lambda was found to engage in dozens of interactions with its host, E. coli. Again, the significance of many of these interactions remains unclear, but these studies suggest that there most likely are several key interactions and many indirect interactions whose role remains uncharacterized. Host resistance Bacteriophages are a major threat to bacteria, and prokaryotes have evolved numerous mechanisms to block infection or to block the replication of bacteriophages within host cells. The CRISPR system is one such mechanism, as are retrons and the anti-toxin systems encoded by them. The Thoeris defense system is known to deploy a unique strategy for bacterial antiphage resistance via NAD+ degradation. Bacteriophage–host symbiosis Temperate phages are bacteriophages that integrate their genetic material into the host as extrachromosomal episomes or as a prophage during a lysogenic cycle. Some temperate phages can confer fitness advantages to their host in numerous ways, including giving antibiotic resistance through the transfer or introduction of antibiotic resistance genes (ARGs), protecting hosts from phagocytosis, protecting hosts from secondary infection through superinfection exclusion, enhancing host pathogenicity, or enhancing bacterial metabolism or growth. Bacteriophage–host symbiosis may benefit bacteria by providing selective advantages while passively replicating the phage genome. In the environment Metagenomics has allowed the in-water detection of bacteriophages that was not possible previously. Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through ground waters and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10^8 bacteriophages per ml. Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation. Metagenomics-based studies also have revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance. In humans Although phages do not infect humans, there are countless phage particles in the human body, given our extensive microbiome. Our phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophage) has been estimated to comprise dozens to thousands of different viruses. There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially. 
Preliminary studies have indicated that common bacteriophages are found in 62% of healthy individuals on average, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD). Abundance of phages may also decline in the elderly. The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may be transmitted locally. Each person develops their own unique crAssphage clusters. CrAss-like phages also may be present in primates besides humans. Commonly studied bacteriophage Among the countless phage, only a few have been studied in detail, including some historically important phage that were discovered in the early days of microbial genetics. These, especially the T-phage, helped to discover important principles of gene structure and function. 186 phage λ phage Φ6 phage Φ29 phage ΦX174 Bacteriophage φCb5 G4 phage M13 phage MS2 phage (23–28 nm in size) N4 phage P1 phage P2 phage P4 phage R17 phage T2 phage T4 phage (169 kbp genome, 200 nm long) T7 phage T12 phage See also Bacterivore CrAssphage CRISPR DNA viruses Macrophage Phage ecology Phage monographs (a comprehensive listing of phage and phage-associated monographs, 1921–present) Phagemid Polyphage RNA viruses Transduction Viriome Virophage, viruses that infect other viruses References Bibliography External links Biology
Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer. Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is given by injection into the skin. No evidence shows that additional doses are beneficial. Serious side effects are rare. Often, redness, swelling, and mild pain occur at the site of injection. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of BCG vaccination during pregnancy and therefore, vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live. The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. , the vaccine is given to about 100 million children per year globally. Medical uses Tuberculosis The main use of BCG is for vaccination against tuberculosis. BCG vaccine can be administered after birth intradermally. BCG vaccination can cause a false positive Mantoux test. The most controversial aspect of BCG is the variable efficacy found in different clinical trials, which appears to depend on geography. Trials conducted in the UK have consistently shown a protective effect of 60 to 80%, but those conducted elsewhere have shown no protective effect, and efficacy appears to fall the closer one gets to the equator. A 1994 systematic review found that BCG reduces the risk of getting tuberculosis by about 50%. Differences in effectiveness depend on region, due to factors such as genetic differences in the populations, changes in environment, exposure to other bacterial infections, and conditions in the laboratory where the vaccine is grown, including genetic differences between the strains being cultured and the choice of growth medium. A systematic review and meta-analysis conducted in 2014 demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay. The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. 
The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only a slight waning in efficacy. BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculosis meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible. The 100th anniversary of BCG was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO), although the development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress. Efficacy A number of possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in both low tuberculosis-burden countries (US) and high tuberculosis-burden countries (India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG. Genetic variation in BCG strains: Genetic variation in the BCG strains used may explain the variable efficacy reported in different trials. Genetic variation in populations: Differences in genetic make-up of different populations may explain the difference in efficacy. The Birmingham BCG trial was published in 1988. The trial, based in Birmingham, United Kingdom, examined children born to families who originated from the Indian subcontinent (where vaccine efficacy had previously been shown to be zero). The trial showed a 64% protective effect, which is very similar to the figure derived from other UK trials, thus arguing against the genetic variation hypothesis. Interference by nontuberculous mycobacteria: Exposure to environmental mycobacteria (especially Mycobacterium avium, Mycobacterium marinum and Mycobacterium intracellulare) results in a nonspecific immune response against mycobacteria. Administering BCG to someone who already has a nonspecific immune response against mycobacteria does not augment the response already there. BCG will, therefore, appear not to be efficacious because that person already has a level of immunity and BCG is not adding to that immunity. This effect is called masking because the effect of BCG is masked by environmental mycobacteria. Clinical evidence for this effect was found in a series of studies performed in parallel in adolescent school children in the UK and Malawi. In this study, the UK school children had a low baseline cellular immunity to mycobacteria which was increased by BCG; in contrast, the Malawi school children had a high baseline cellular immunity to mycobacteria and this was not significantly increased by BCG. Whether this natural immune response is protective is not known. An alternative explanation is suggested by mouse studies; immunity against mycobacteria stops BCG from replicating and so stops it from producing an immune response. This is called the block hypothesis. Interference by concurrent parasitic infection: In another hypothesis, simultaneous infection with parasites changes the immune response to BCG, making it less effective. As Th1 response is required for an effective immune response to tuberculous infection, concurrent infection with various parasites produces a simultaneous Th2 response, which blunts the effect of BCG. 
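To make the protective-effect percentages quoted in this efficacy discussion concrete, the short sketch below shows the standard arithmetic: efficacy is one minus the ratio of disease risk in the vaccinated group to that in the unvaccinated group. The cohort counts are invented purely for illustration (chosen so the output lands near the 64% reported in the Birmingham trial) and are not data from any cited study.

```python
# Hedged illustration of how a "protective effect" percentage of the kind quoted
# above (e.g. ~50% or 64%) is commonly derived from trial data.
# The counts below are hypothetical and are not taken from any BCG trial.

def vaccine_efficacy(cases_vacc: int, n_vacc: int, cases_ctrl: int, n_ctrl: int) -> float:
    """Return efficacy as 1 - (attack rate in vaccinated / attack rate in controls)."""
    risk_vacc = cases_vacc / n_vacc
    risk_ctrl = cases_ctrl / n_ctrl
    return 1.0 - risk_vacc / risk_ctrl

# Hypothetical cohort: 18 TB cases among 10,000 vaccinated children,
# 50 cases among 10,000 unvaccinated children.
efficacy = vaccine_efficacy(18, 10_000, 50, 10_000)
print(f"Protective effect: {efficacy:.0%}")  # 64% with these made-up numbers
```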
Mycobacteria BCG has protective effects against some nontuberculosis mycobacteria. Leprosy: BCG has a protective effect against leprosy in the range of 20 to 80%. Buruli ulcer: BCG may protect against or delay the onset of Buruli ulcer. Cancer BCG has been one of the most successful immunotherapies. BCG vaccine has been the "standard of care for patients with bladder cancer (NMIBC)" since 1977. By 2014 there were more than eight different considered biosimilar agents or strains used for the treatment of nonmuscle-invasive bladder cancer. A number of cancer vaccines use BCG as an additive to provide an initial stimulation of the person's immune systems. BCG is used in the treatment of superficial forms of bladder cancer. Since the late 1970s, evidence has become available that instillation of BCG into the bladder is an effective form of immunotherapy in this disease. While the mechanism is unclear, it appears a local immune reaction is mounted against the tumor. Immunotherapy with BCG prevents recurrence in up to 67% of cases of superficial bladder cancer. BCG has been evaluated in a number of studies as a therapy for colorectal cancer. The US biotech company Vaccinogen is evaluating BCG as an adjuvant to autologous tumour cells used as a cancer vaccine in stage II colon cancer. Method of administration A tuberculin skin test is usually carried out before administering BCG. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate any immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects. BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, then a local abscess may form (a "BCG-oma") that can sometimes ulcerate, and may require treatment with antibiotics immediately, otherwise without treatment it could spread the infection, causing severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on treatment of these abscesses with antibiotics have been done with varying results, but the consensus is once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks. The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble. When given for bladder cancer, the vaccine is not injected through the skin, but is instilled into the bladder through the urethra using a soft catheter. Adverse effects BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids—large, raised scars. The insertion to the deltoid muscle is most frequently used because the local complication rate is smallest when that site is used. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes. BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (production of pus) and nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration. 
For nonresolving suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce. Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications. When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection, however while symptomatic bladder BCG infection is frequent, the involvement of other organs is very uncommon. When systemic involvement occurs, liver and lungs are the first organs to be affected (1 week [median] after the last BCG instillation). If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher risk context). Usage The age of the person and the frequency with which BCG is given has always varied from country to country. The WHO currently recommends childhood BCG for all countries with a high incidence of tuberculosis and/or high leprosy burden. This is a partial list of historic and current BCG practice around the globe. A complete atlas of past and present practice has been generated. Americas Brazil introduced universal BCG immunization in 1967–1968, and the practice continues until now. According to Brazilian law, BCG is given again to professionals of the health sector and to people close to patients with tuberculosis or leprosy. Canadian Indigenous communities currently receive the BCG vaccine, and in the province of Quebec the vaccine was offered to children until the mid-70s. Most countries in Central and South America have universal BCG immunizations. The United States has never used mass immunization of BCG due to the rarity of tuberculosis in the US, relying instead on the detection and treatment of latent tuberculosis. Europe Asia China: Introduced in 1930s. Increasingly widespread after 1949. Majority inoculated by 1979. South Korea, Singapore, Taiwan and Malaysia. In these countries, BCG was given at birth and again at age 12. In Malaysia and Singapore from 2001, this policy was changed to once only at birth. South Korea stopped re-vaccination in 2008. Hong Kong: BCG is given to all newborns. Japan: In Japan, BCG was introduced in 1951, given typically at age 6. From 2005 it is administered between five and eight months after birth, and no later than a child's first birthday. BCG was administered no later than the fourth birthday until 2005, and no later than six months from birth from 2005 to 2012; the schedule was changed in 2012 due to reports of osteitis side effects from vaccinations at 3–4 months. Some municipalities recommend an earlier immunization schedule. Thailand: In Thailand, the BCG vaccine is given routinely at birth. India and Pakistan: India and Pakistan introduced BCG mass immunization in 1948, the first countries outside Europe to do so. 
In 2015, millions of infants were denied BCG vaccine in Pakistan for the first time due to shortage globally. Mongolia: All newborns are vaccinated with BCG. Previously, the vaccine was also given at ages 8 and 15, although this is no longer common practice. Philippines: BCG vaccine started in the Philippines in 1979 with the Expanded Program on Immunization. Sri Lanka: In Sri Lanka, The National Policy of Sri Lanka is to give BCG vaccination to all newborn babies immediately after birth. BCG vaccination is carried out under the Expanded Programme of Immunisation (EPI). Middle East Israel: BCG was given to all newborns between 1955 and 1982. Iran: Iran's vaccination policy implemented in 1984. Vaccination with the Bacillus Calmette–Guerin (BCG) is among the most important tuberculosis control strategies in Iran [2]. According to Iranian neonatal vaccination policy, BCG has been given as a single dose at children aged <6 years, shortly after birth or at first contact with the health services. Africa South Africa: In South Africa, the BCG Vaccine is given routinely at birth, to all newborns, except those with clinically symptomatic AIDS. The vaccination site is in the right shoulder. Morocco: In Morocco, the BCG was introduced in 1949. The current policy is BCG vaccination at birth, to all newborns. Kenya: In Kenya, the BCG Vaccine is given routinely at birth to all newborns. South Pacific Australia: BCG vaccination was used between 1950s and mid 1980. BCG is not part of routine vaccination since mid 1980. New Zealand: BCG Immunisation was first introduced for 13 yr olds in 1948. Vaccination was phased out 1963–1990. Manufacture BCG is prepared from a strain of the attenuated (virulence-reduced) live bovine tuberculosis bacillus, Mycobacterium bovis, that has lost its ability to cause disease in humans. It is specially subcultured in a culture medium, usually Middlebrook 7H9. Because the living bacilli evolve to make the best use of available nutrients, they become less well-adapted to human blood and can no longer induce disease when introduced into a human host. Still, they are similar enough to their wild ancestors to provide some degree of immunity against human tuberculosis. The BCG vaccine can be anywhere from 0 to 80% effective in preventing tuberculosis for a duration of 15 years; however, its protective effect appears to vary according to geography and the lab in which the vaccine strain was grown. A number of different companies make BCG, sometimes using different genetic strains of the bacterium. This may result in different product characteristics. OncoTICE, used for bladder instillation for bladder cancer, was developed by Organon Laboratories (since acquired by Schering-Plough, and in turn acquired by Merck & Co.). A similar application is the product of Onko BCG of the Polish company Biomed-Lublin, which owns the Brazilian substrain M. bovis BCG Moreau which is less reactogenic than vaccines including other BCG strains. Pacis BCG, made from the Montréal (Institut Armand-Frappier) strain, was first marketed by Urocor in about 2002. Urocor was since acquired by Dianon Systems. Evans Vaccines (a subsidiary of PowderJect Pharmaceuticals). Statens Serum Institut in Denmark markets BCG vaccine prepared using Danish strain 1331. Japan BCG Laboratory markets its vaccine, based on the Tokyo 172 substrain of Pasteur BCG, in 50 countries worldwide. 
According to a UNICEF report published in December 2015, on BCG vaccine supply security, global demand increased in 2015 from 123 to 152.2 million doses. To improve security and to [diversify] sources of affordable and flexible supply," UNICEF awarded seven new manufacturers contracts to produce BCG. Along with supply availability from existing manufacturers, and a "new WHO prequalified vaccine" the total supply will be "sufficient to meet both suppressed 2015 demand carried over to 2016, as well as total forecast demand through 2016–2018." Supply shortage In 2011, the Sanofi Pasteur plant flooded, causing problems with mold. The facility, located in Toronto, Ontario, Canada, produced BCG vaccine products made with substrain Connaught such as a tuberculosis vaccine and ImmuCYST, a BCG immunotherapeutic and bladder cancer drug. By April 2012 the FDA had found dozens of documented problems with sterility at the plant including mold, nesting birds and rusted electrical conduits. The resulting closure of the plant for over two years caused shortages of bladder cancer and tuberculosis vaccines. On 29 October 2014 Health Canada gave the permission for Sanofi to resume production of BCG. A 2018 analysis of the global supply concluded that the supplies are adequate to meet forecast BCG vaccine demand, but that risks of shortages remain, mainly due to dependence of 75 percent of WHO pre-qualified supply on just two suppliers. Dried Some BCG vaccines are freeze dried and become fine powder. Sometimes the powder is sealed with vacuum in a glass ampoule. Such a glass ampoule has to be opened slowly to prevent the airflow from blowing out the powder. Then the powder has to be diluted with saline water before injecting. History The history of BCG is tied to that of smallpox. By 1865 Jean Antoine Villemin had demonstrated that rabbits could be infected with tuberculosis from humans; by 1868 he had found that rabbits could be infected with tuberculosis from cows, and that rabbits could be infected with tuberculosis from other rabbits. Thus, he concluded that tuberculosis was transmitted via some unidentified microorganism (or "virus", as he called it). In 1882 Robert Koch regarded human and bovine tuberculosis as identical. But in 1895, Theobald Smith presented differences between human and bovine tuberculosis, which he reported to Koch. By 1901 Koch distinguished Mycobacterium bovis from Mycobacterium tuberculosis. Following the success of vaccination in preventing smallpox, established during the 18th century, scientists thought to find a corollary in tuberculosis by drawing a parallel between bovine tuberculosis and cowpox: it was hypothesized that infection with bovine tuberculosis might protect against infection with human tuberculosis. In the late 19th century, clinical trials using M. bovis were conducted in Italy with disastrous results, because M. bovis was found to be just as virulent as M. tuberculosis. Albert Calmette, a French physician and bacteriologist, and his assistant and later colleague, Camille Guérin, a veterinarian, were working at the Institut Pasteur de Lille (Lille, France) in 1908. Their work included subculturing virulent strains of the tuberculosis bacillus and testing different culture media. They noted a glycerin-bile-potato mixture grew bacilli that seemed less virulent, and changed the course of their research to see if repeated subculturing would produce a strain that was attenuated enough to be considered for use as a vaccine. 
The BCG strain was isolated after being subcultured 239 times over 13 years from the virulent strain on glycerine potato medium. The research continued throughout World War I until 1919, when the now avirulent bacilli were unable to cause tuberculosis disease in research animals. Calmette and Guérin transferred to the Paris Pasteur Institute in 1919. The BCG vaccine was first used in humans in 1921. Public acceptance was slow, and the Lübeck disaster, in particular, did much to harm it. Between 1929 and 1933 in Lübeck, 251 infants were vaccinated in the first 10 days of life; 173 developed tuberculosis and 72 died. It was subsequently discovered that the BCG administered there had been contaminated with a virulent strain that was being stored in the same incubator, which led to legal action against the manufacturers of the vaccine. Dr. R. G. Ferguson, working at the Fort Qu'Appelle Sanatorium in Saskatchewan, was among the pioneers in developing the practice of vaccination against tuberculosis. In Canada, more than 600 children from residential schools were used as involuntary participants in BCG vaccine trials between 1933 and 1945. In 1928, BCG was adopted by the Health Committee of the League of Nations (predecessor to the World Health Organization (WHO)). Because of opposition, however, it only became widely used after World War II. From 1945 to 1948, relief organizations (the International Tuberculosis Campaign or Joint Enterprises) vaccinated over eight million babies in eastern Europe and prevented the predicted typical increase of tuberculosis after a major war. BCG is very efficacious against tuberculous meningitis in the pediatric age group, but its efficacy against pulmonary tuberculosis appears to be variable. Some countries have removed BCG from routine vaccination. Two countries that have never used it routinely are the United States and the Netherlands (in both countries, it is felt that having a reliable Mantoux test and therefore being able to accurately detect active disease is more beneficial to society than vaccinating against a condition that is now relatively rare there). Other names include "Vaccin Bilié de Calmette et Guérin vaccine" and "Bacille de Calmette et Guérin vaccine". Research Tentative evidence exists for a beneficial non-specific effect of BCG vaccination on overall mortality in low-income countries, or for its reducing other health problems including sepsis and respiratory infections when given early, with greater benefit the earlier it is used. In rhesus macaques, BCG shows improved rates of protection when given intravenously, though some risks must be evaluated before this can be translated to humans. Type 1 diabetes The BCG vaccine is in the early stages of being studied in type 1 diabetes (T1D). COVID-19 Use of the BCG vaccine may provide protection against COVID-19; however, epidemiologic observations in this respect are ambiguous, and the WHO does not recommend its use for prevention. Twenty BCG trials are in various clinical stages, and the results so far are extremely mixed. A 15-month trial involving people thrice-vaccinated over the two years before the pandemic shows positive results in preventing infection in BCG-naive people with type 1 diabetes. On the other hand, a 5-month trial shows that re-vaccinating with BCG does not help prevent infection in healthcare workers. Both were double-blind randomized controlled trials.
The common buzzard (Buteo buteo) is a medium-to-large bird of prey which has a large range. It is a member of the genus Buteo in the family Accipitridae. The species lives in most of Europe and extends its breeding range across much of the Palearctic as far as northwestern China (Tian Shan), far western Siberia and northwestern Mongolia. Over much of its range, it is a year-round resident. However, buzzards from the colder parts of the Northern Hemisphere as well as those that breed in the eastern part of their range typically migrate south for the northern winter, many journeying as far as South Africa. The common buzzard is an opportunistic predator that can take a wide variety of prey, but it feeds mostly on small mammals, especially rodents such as voles. It typically hunts from a perch. Like most accipitrid birds of prey, it builds a nest, typically in trees in this species, and is a devoted parent to a relatively small brood of young. The common buzzard appears to be the most common diurnal raptor in Europe, as estimates of its total global population run well into the millions. Taxonomy The first formal description of the common buzzard was by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae under the binomial name Falco buteo. The genus Buteo was introduced by the French naturalist Bernard Germain de Lacépède in 1799 by tautonymy with the specific name of this species. The word buteo is Latin for a buzzard. It should not be confused with the turkey vulture, which is sometimes called a buzzard in American English. The Buteoninae subfamily originated in and is most diversified in the Americas, with occasional broader radiations that led to common buzzards and other Eurasian and African buzzards. The common buzzard is a member of the genus Buteo, a group of medium-sized raptors with robust bodies and broad wings. The Buteo species of Eurasia and Africa are commonly referred to as "buzzards" while those in the Americas are called hawks. Under current classification, the genus includes approximately 28 species, the second most diverse of all extant accipitrid genera behind only Accipiter. DNA testing shows that the common buzzard is fairly closely related to the red-tailed hawk (Buteo jamaicensis) of North America, which occupies a similar ecological niche to the buzzard on that continent. The two species may belong to the same species complex. Based on genetic material, two African buzzards, the mountain buzzard (Buteo oreophilus) and the forest buzzard (Buteo trizonatus), are likely closely related to the common buzzard, to the point where it has been questioned whether they are sufficiently distinct to qualify as full species. However, the distinctiveness of these African buzzards has generally been supported. Genetic studies have further indicated that the modern buzzards of Eurasia and Africa are a relatively young group, showing that they diverged about 300,000 years ago. Nonetheless, fossils more than 5 million years old (from the late Miocene) show that Buteo species were present in Europe much earlier than that would imply, although it cannot be stated with certainty that these were related to the extant buzzards. Subspecies and species splits Some 16 subspecies have been described in the past and up to 11 are often considered valid, although some authorities accept as few as seven. Common buzzard subspecies fall into two groups. The western buteo group consists mainly of residents or short-distance migrants and includes: B. b.
buteo: Ranges in Europe from the Atlantic islands, the British Isles and the Iberian Peninsula (including Madeira Island, whose population was once considered a separate race, B. b. harterti) more or less continuously throughout Europe to Finland, Romania and Asia Minor. This highly individually variable race is described below. This is a relatively large and bulky race of buzzard. In males, the wing chord ranges from and the tail from . In comparison, the larger female has a wing chord measuring and tail length of . In both sexes, the tarsus measures in length. As illustrated by average body mass, sizes in the nominate race of common buzzard seem to conform to Bergmann's rule, increasing to the north and decreasing closer to the Equator. In southern Norway, the mean weight of males was reportedly , while that of females was . British buzzards were of intermediate size, 214 males averaging and 261 females averaging . Birds to the south in Spain were smaller, averaging in 22 males and in 30 females. Cramp and Simmons (1980) listed the overall mean body mass of nominate buzzards in Europe as in males and in females. B. b. rothschildi: This proposed race is native to the Azores. It is generally considered a valid subspecies. This race differs from a typical intermediate of the nominate in being a darker, colder brown both above and below, closer to the darker individuals of the nominate. It averages smaller than most nominate buzzards. The wing chord of males ranges from while that of females ranges from . B. b. insularum: This race lives in the Canary Islands. Not all authorities consider this race suitably distinct, but others advocate it be retained as a full subspecies. It is typically of richer brown above and more heavily streaked below compared to nominate birds. It is similar in size to B. b. rothschildi and averages slightly smaller than the nominate race. Males have a reported wing chord of and females have a wing chord of . B. b. arrigonii: This race inhabits the islands of Corsica and Sardinia. It is generally considered a valid subspecies. The upperside of these buzzards is an intermediate brown with very heavy streaking below, often covering the belly, whereas most nominate buzzards show a whitish area in the middle of the belly. Like most other insular races, this one is relatively small. Males possess a wing chord of while females have a wing chord of . The eastern vulpinus group includes: B. b. vulpinus: The steppe buzzard breeds as far west as eastern Sweden, in the southern two-thirds of Finland, eastern Estonia, much of Belarus and Ukraine, eastward to the northern Caucasus, northern Kazakhstan, Kyrgyzstan, much of Russia to Altai and south-central Siberia, Tien Shan in China and western Mongolia. B. b. vulpinus is a long-distance migrant. It winters largely in much of eastern and southern Africa. Less frequently and often very discontinuously, steppe buzzards winter in the southern peninsulas of Europe, Arabia and southwestern India in addition to some parts of southeastern Kazakhstan, Uzbekistan and Kyrgyzstan. In the open country favoured on the wintering grounds, steppe buzzards are often seen perched on roadside telephone poles. At one time it was considered a separate species due to differences in size, form, colouring and behaviour (especially in regard to migratory behaviour) but is genetically indistinct from nominate buzzards.
Furthermore, the steppe buzzard engages in extensive interbreeding with the nominate race, muddying typical characteristics of both races. The zone of intergradation runs from Scandinavia through the European continent to the Black Sea, including any part of the overlapping ranges in Sweden, Finland, Estonia, Latvia, Lithuania, western Ukraine and eastern Romania. At times, the fertile hybrids of these two races have been erroneously proposed as races such as B. b. intermedius or B. b. zimmermannae. Intergrade buzzards are commonest where the grey-brown type of pale morphs of vulpinus are predominant. Steppe buzzards are usually distinctly smaller, with relatively longer wings and tail for their size, and thus often appear swifter and more agile in flight than nominate buzzards, whose wing beats can look slower and clumsier. Typically, their length is around , while the wingspan of males averages and that of females averages . The wing chord is in males and in females. Tail length is in males and in females. Weights of birds from Russia can reportedly range from in males and in females. Weights of migrant birds appear to be lower than at other times of year for steppe buzzards. Two surveys of migrant buzzards during their huge spring movement in Eilat, Israel, showed 420 birds averaged and 882 birds averaged . In comparison, weights of wintering steppe buzzards were higher, averaging in 35 birds in the former Transvaal (South Africa) and in 160 birds in the Cape Province. Weights of birds from Zambia were similar. B. b. menetriesi: This race is found in southern Crimea through the Caucasus to northern Iran and possibly into Turkey. This race has traditionally been listed as a resident race, but some sources consider it a migrant to eastern and southern Africa. Compared to the overlapping steppe buzzard subspecies, it is larger (roughly intermediate between the nominate race and vulpinus) and is duller in overall colour, being sandy below rather than rufous and lacking the bright rufous on the tail. Wing chord is in males and in females. At one time, races of the common buzzard were thought to range as breeding birds well into the Himalayas and as far east as northeastern China, Russia to the Sea of Okhotsk, and the Kurile Islands and the islands of Japan, despite both the Himalayan and eastern birds showing a natural gap in distribution from the next nearest breeding common buzzard. However, DNA testing has revealed that the buzzards of these populations probably belong to different species. Most authorities now accept these buzzards as full species: the eastern buzzard (Buteo japonicus; with three subspecies of its own) and the Himalayan buzzard (Buteo refectus). Buzzards found on the islands of Cape Verde off the coast of western Africa, once referred to as the subspecies B. b. bannermani, and on Socotra Island off the Arabian Peninsula, once referred to as the rarely recognized subspecies B. b. socotrae, are now generally thought not to belong to the common buzzard. DNA testing has indicated that these insular buzzards are actually more closely related to the long-legged buzzard (Buteo rufinus) than to the common buzzard. Subsequently, some researchers have advocated full species status for the Cape Verde population, but the placement of these buzzards is generally deemed unclear. Description The common buzzard is a medium-to-large-sized raptor that is highly variable in plumage.
Most buzzards are distinctly round-headed with a somewhat slender bill, relatively long wings that either reach or fall slightly short of the tail tip when perched, a fairly short tail, and somewhat short and mainly bare tarsi. They can appear fairly compact in overall appearance but may also appear large relative to other more common raptorial birds such as kestrels and sparrowhawks. The common buzzard measures between in length with a wingspan. Females average about 2–7% larger than males linearly and weigh about 15% more. Body mass can show considerable variation. Buzzards from Great Britain alone can vary from in males, while females there can range from . In Europe, most typical buzzards are dark brown above and on the upperside of the head and mantle, but can become paler and warmer brown with worn plumage. The flight feathers on perched European buzzards are always brown in the nominate subspecies (B. b. buteo). Usually the tail is narrowly barred grey-brown and dark brown with a pale tip and a broad dark subterminal band, but the tail in the palest birds can show a varying amount of white and a reduced subterminal band or even appear almost all white. In European buzzards, the underside coloring can be variable but most typically shows a brown-streaked white throat with a somewhat darker chest. A pale U across the breast is often present, followed by a pale line running down the belly, which separates the dark areas on the breast-sides and flanks. These pale areas tend to have highly variable markings that tend to form irregular bars. Juvenile buzzards are quite similar to adults in the nominate race, being best told apart by having a paler eye, a narrower subterminal band on the tail and underside markings that appear as streaks rather than bars. Furthermore, juveniles may show variable creamy to rufous fringes to the upperwing coverts, but these also may not be present. Seen from below in flight, buzzards in Europe typically have a dark trailing edge to the wings. If seen from above, one of the best marks is their broad dark subterminal tail band. Flight feathers of typical European buzzards are largely greyish, with the aforementioned dark wing linings at the front and a contrasting paler band along the median coverts. In flight, paler individuals tend to show dark carpal patches that can appear as blackish arches or commas, but these may be indistinct in darker individuals or can appear light brownish or faded in paler individuals. Juvenile nominate buzzards are best told apart from adults in flight by the lack of a distinct subterminal band (instead showing fairly even barring throughout) and, from below, by having a less sharp, brownish rather than blackish trailing wing edge. Juvenile buzzards show streaking on the paler parts of the underwing and body rather than the barring shown by adults. Beyond the typical mid-range brownish buzzard, birds in Europe can range from almost uniform black-brown above to mainly white. Extreme dark individuals may range from chocolate brown to blackish with almost no pale showing but a variable, faded U on the breast and with or without faint lighter brown throat streaks. Extreme pale birds are largely whitish with variable widely spaced streaks or arrowheads of light brown about the mid-chest and flanks, and may or may not show dark feather-centres on the head, wing-coverts and sometimes all but part of the mantle.
Individuals can show nearly endless variation of colours and hues in between these extremes, and the common buzzard is counted among the most variably plumaged diurnal raptors for this reason. One study showed that this variation may actually be the result of diminished single-locus genetic diversity. Beyond the nominate form (B. b. buteo) that occupies most of the common buzzard's European range, a second main, widely distributed subspecies is known as the steppe buzzard (B. b. vulpinus). The steppe buzzard race shows three main colour morphs, each of which can be predominant in a region of the breeding range. It is distinctly polymorphic rather than just individually very variable like the nominate race. This may be because, unlike the nominate buzzard, the steppe buzzard is highly migratory. Polymorphism has been linked with migratory behaviour. The most common type of steppe buzzard is the rufous morph, which gives this subspecies its scientific name (vulpes is Latin for "fox"). This morph comprises a majority of birds seen in passage east of the Mediterranean. Rufous morph buzzards are a paler grey-brown above than most nominate B. b. buteo. Compared to the nominate race, rufous vulpinus show a not dissimilar patterning but are generally far more rufous-toned on the head, the fringes of the mantle and wing coverts and, especially, on the tail and the underside. The head is usually grey-brown with rufous tinges, while the tail is rufous and can vary from almost unmarked to thinly dark-barred with a subterminal band. The underside can be uniformly pale to dark rufous, barred heavily or lightly with rufous or with dusky barring, usually with darker individuals showing the U as in the nominate race but with a rufous hue. The pale morph of the steppe buzzard is commonest in the west of its subspecies range, predominantly seen in winter and migration at the various land bridges of the Mediterranean. As in the rufous morph, the pale morph vulpinus is grey-brown above, but the tail is generally marked with thin dark bars and a subterminal band, only showing rufous near the tip. The underside in the pale morph is greyish-white with a dark grey-brown or somewhat streaked head to chest and barred belly and chest, occasionally showing darker flanks that can be somewhat rufous. Dark morph vulpinus tend to be found in the east and southeast of the subspecies range and are easily outnumbered by the rufous morph while largely using similar migration points. Dark morph individuals vary from grey-brown to much darker blackish-brown, and have a tail that is dark grey or somewhat mixed grey and rufous, is distinctly marked with dark barring and has a broad, black subterminal band. Dark morph vulpinus have a head and underside that are mostly uniformly dark, from dark brown to blackish-brown to almost pure black. Rufous morph juveniles are often distinctly paler in ground colour (ranging even to creamy-grey) than adults, with the distinct barring below actually increased in pale morph juveniles. Pale and rufous morph juveniles can only be distinguished from each other in extreme cases. Dark morph juveniles are more similar to adult dark morph vulpinus but often show a little whitish streaking below, and like all other races have lighter coloured eyes and more evenly barred tails than adults. Steppe buzzards tend to appear smaller and more agile in flight than nominate birds, whose wing beats can look slower and clumsier.
In flight, rufous morph vulpinus have their whole body and underwing varying from uniform to patterned rufous (if patterning is present, it is variable, but can be on the chest and often thighs, sometimes the flanks, with a pale band across the median coverts), while the under-tail is usually paler rufous than the upperside. Whitish flight feathers are more prominent than in the nominate race and contrast more markedly with the bold dark brown band along the trailing edges. Markings of pale vulpinus as seen in flight are similar to those of the rufous morph (such as paler wing markings) but more greyish both on the wings and body. In dark morph vulpinus the broad black trailing edges and the colour of the body make the whitish areas of the inner wing stand out further, with an often bolder and blacker carpal patch than in other morphs. As in the nominate race, juvenile vulpinus (rufous/pale) tend to have much less distinct trailing edges and general streaking on the body and along the median underwing coverts. Dark morph vulpinus resemble adults in flight more so than the other morphs do. Similar species The common buzzard is often confused with other raptors, especially in flight or at a distance. Inexperienced and over-enthusiastic observers have even mistaken darker birds for the far larger and differently proportioned golden eagle (Aquila chrysaetos), and dark birds for the western marsh harrier (Circus aeruginosus), which also flies in a dihedral but is relatively much longer and more slender in wing and tail and has far different flying methods. Buzzards may also be confused with dark or light morph booted eagles (Hieraaetus pennatus), which are similar in size, but the eagle flies on level, parallel-edged wings which usually appear broader, has a longer, squarer tail, with no carpal patch in pale birds and all-dark flight feathers except for a whitish wedge on the inner primaries in dark morph ones. Pale individuals are sometimes also mistaken for pale morph short-toed eagles (Circaetus gallicus), which are much larger with a considerably bigger head, longer wings (which are usually held evenly in flight rather than in a dihedral) and a paler underwing lacking any carpal patch or dark wing lining. More serious identification concerns lie with other Buteo species and, in flight, with honey buzzards, which are quite different looking when seen perched at close range. The European honey buzzard (Pernis apivorus) is thought to engage in mimicry of more powerful raptors; in particular, juveniles may mimic the plumage of the more powerful common buzzard. While less individually variable in Europe, the honey buzzard is more extensively polymorphic on the underparts than even the common buzzard. The most common morph of the adult European honey buzzard is heavily barred with rufous on the underside, quite different from the common buzzard; however, the brownish juvenile much more resembles an intermediate common buzzard. Honey buzzards flap with distinctively slower and more even wing beats than common buzzards. The wings are also lifted higher on each upstroke, creating a more regular and mechanical effect; furthermore, their wings are held slightly arched when soaring but not in a V. On the honey buzzard, the head appears smaller, the body thinner, the tail longer and the wings narrower and more parallel edged. The steppe buzzard race is particularly often mistaken for juvenile European honey buzzards, to the point where early observers of raptor migration in Israel considered distant individuals indistinguishable.
However, when compared to a steppe buzzard, the honey buzzard has distinctly darker secondaries on the underwing with fewer and broader bars and more extensive black wing-tips (whole fingers) contrasting with a less extensively pale hand. Found in the same range as the steppe buzzard in some parts of southern Siberia as well as (alongside wintering steppe buzzards) in southwestern India, the Oriental honey buzzard (Pernis ptilorhynchus) is larger than both the European honey buzzard and the common buzzard. The Oriental species is more similar in body plan to common buzzards, being relatively broader winged, shorter tailed and more amply headed (though the head is still relatively small) relative to the European honey buzzard, but all plumages lack carpal patches. In much of Europe, the common buzzard is the only type of buzzard. However, the subarctic-breeding rough-legged buzzard (Buteo lagopus) comes down to occupy much of the northern part of the continent during winter in the same haunts as the common buzzard. However, the rough-legged buzzard is typically larger and distinctly longer-winged with feathered legs, as well as having a white-based tail with a broad subterminal band. Rough-legged buzzards have slower wing beats and hover far more frequently than do common buzzards. The carpal patch markings on the underwing are also bolder and blacker on all paler forms of rough-legged hawk. Many pale morph rough-legged buzzards have a bold, blackish band across the belly against contrasting paler feathers, a feature which rarely appears in individual common buzzards. Usually the face also appears somewhat whitish in most pale morphs of rough-legged buzzards, which is true of only extremely pale common buzzards. Dark morph rough-legged buzzards are usually distinctly darker (ranging to almost blackish) than even extreme dark individuals of common buzzards in Europe and still have the distinct white-based tail and broad subterminal band of other roughlegs. In eastern Europe and much of the Asian range of common buzzards, the long-legged buzzard (Buteo rufinus) may live alongside the common species. As in the steppe buzzard race, the long-legged buzzard has three main colour morphs that are more or less similar in hue. In both the steppe buzzard race and the long-legged buzzard, the main colour is overall fairly rufous. More so than steppe buzzards, long-legged buzzards tend to have a distinctly paler head and neck compared to other feathers and, more distinctly, a normally unbarred tail. Furthermore, the long-legged buzzard is usually a rather larger bird, often considered fairly eagle-like in appearance (although it does appear gracile and small-billed even compared to smaller true eagles), an effect enhanced by its longer tarsi, somewhat longer neck and relatively elongated wings. The flight style of the latter species is deeper, slower and more aquiline, with much more frequent hovering, showing a more protruding head and a slightly higher V held in a soar. The smaller North African and Arabian race of long-legged buzzard (B. r. cirtensis) is more similar in size and nearly all colour characteristics to the steppe buzzard, extending to the heavily streaked juvenile plumage; in some cases such birds can be distinguished only by their proportions and flight patterns, which remain unchanged. Hybridization with the latter race (B. r.
cirtensis) and nominate common buzzards has been observed in the Strait of Gibraltar, and a few such birds have been reported in the southern Mediterranean due to mutually encroaching ranges, which are blurring possibly as a result of climate change. Steppe buzzards may live alongside mountain buzzards and especially forest buzzards while wintering in Africa. The juveniles of steppe and forest buzzards are more or less indistinguishable and only told apart by proportions and flight style, the latter species being smaller, more compact, having a smaller bill, shorter legs and shorter and thinner wings than a steppe buzzard. However, size is not diagnostic unless the birds are side by side, as the two buzzards overlap in this regard. Most reliable are the species' wing proportions and their flight actions. Forest buzzards have more flexible wing beats interspersed with glides, additionally soar on flatter wings and apparently never engage in hovering. Adult forest buzzards are also similar to the typical adult steppe buzzard (rufous morph), but the forest buzzard typically has a whiter underside, sometimes mostly plain white, usually with heavy blotches or drop-shaped marks on the abdomen, barring on the thighs, narrower tear-shaped marks on the chest and more spotting on the leading edges of the underwing, usually lacking markings on the white U across the chest (which is otherwise similar but usually broader than that of vulpinus). In comparison, the mountain buzzard, which is more similar in size to the steppe buzzard and slightly larger than the forest buzzard, is usually duller brown above than a steppe buzzard and is more whitish below with distinctive heavy brown blotches from the breast to the belly, flanks and wing linings, while the juvenile mountain buzzard is buffy below with smaller and streakier markings. Compared to another African species, the red-necked buzzard (Buteo auguralis), which has a red tail similar to that of vulpinus, the steppe buzzard is distinct in all other plumage aspects despite their similar size. The latter buzzard has a streaky rufous head and is white below with a contrasting bold dark chest in adult plumage and, in juvenile plumage, has heavy, dark blotches on the chest and flanks with pale wing-linings. Jackal and augur buzzards (Buteo rufofuscus & augur), also both rufous on the tail, are larger and bulkier than steppe buzzards and have several distinctive plumage characteristics, most notably both having their own striking, contrasting patterns of black-brown, rufous and cream. Distribution and habitat The common buzzard is found on several islands in the eastern Atlantic, including the Canary Islands and the Azores, and almost throughout Europe. It is today found in Ireland and in nearly every part of Scotland, Wales and England. In mainland Europe, remarkably, there are no substantial gaps without breeding common buzzards from Portugal and Spain to Greece, Estonia, Belarus and Ukraine, though they are present mainly only in the breeding season in much of the eastern half of the latter three countries. They are also present on all larger Mediterranean islands such as Corsica, Sardinia, Sicily and Crete. Further north in Scandinavia, they are found mainly in southeastern Norway (though also at some points in southwestern Norway close to the coast and one section north of Trondheim), across just over the southern half of Sweden and around the Gulf of Bothnia into Finland, where they live as a breeding species over nearly two-thirds of the land.
The common buzzard reaches its northern limits as a breeder in far eastern Finland and over the border into European Russia, continuing as a breeder over to the narrowest straits of the White Sea and nearly to the Kola Peninsula. In these northern quarters, the common buzzard is typically present only in summer, but it is a year-round resident of a good part of southern Sweden and some of southern Norway. Outside of Europe, it is a resident of northern Turkey (largely close to the Black Sea), otherwise occurring mainly as a passage migrant or winter visitor in the remainder of Turkey, Georgia, sporadically but not rarely in Azerbaijan and Armenia, northern Iran (largely hugging the Caspian Sea) to northern Turkmenistan. Further north, though it is absent from either side of the northern Caspian Sea, the common buzzard is found in much of western Russia (though exclusively as a breeder) including all of the Central Federal District and the Volga Federal District, all but the northernmost parts of the Northwestern and Ural Federal Districts and nearly the southern half of the Siberian Federal District, its farthest easterly occurrence as a breeder. It is also found in northern Kazakhstan, Kyrgyzstan, far northwestern China (Tien Shan) and northwestern Mongolia. Non-breeding populations occur, either as migrants or wintering birds, in southwestern India, Israel, Lebanon, Syria, Egypt (northeastern), northern Tunisia (and far northwestern Algeria), northern Morocco, near the coasts of The Gambia, Senegal and far southwestern Mauritania and Ivory Coast (and bordering Burkina Faso). In eastern and central Africa, it is found in winter from southeastern Sudan, Eritrea, about two-thirds of Ethiopia, much of Kenya (though apparently absent from the northeast and northwest), Uganda, southern and eastern Democratic Republic of the Congo, and more or less the entirety of southern Africa from Angola across to Tanzania down the remainder of the continent (but for an apparent gap along the coast from southwestern Angola to northwestern South Africa). Habitat The common buzzard generally inhabits the interface of woodlands and open grounds; most typically the species lives in forest edge, small woods or shelterbelts with adjacent grassland, arables or other farmland. It adapts to open moorland as long as there are some trees for perch hunting and nesting. The woods they inhabit may be coniferous, temperate broadleaf and mixed forests and temperate deciduous forest, with occasional preferences for the locally dominant tree. It is absent from treeless tundra, as well as the Subarctic, where the species almost entirely gives way to the rough-legged buzzard. The common buzzard is sporadic or rare in treeless steppe but can occasionally migrate through it (despite its name, the steppe buzzard subspecies breeds primarily in the wooded fringes of the steppe). The species may be found to some extent in both mountainous and flat country. Although adaptable to and sometimes seen in wetlands and in coastal areas, buzzards are often considered more of an upland species and neither appear to be regularly attracted to nor to strongly avoid bodies of water outside of migration. Buzzards in well-wooded areas of eastern Poland largely used large, mature stands of trees that were more humid, richer and denser than was prevalent in the surrounding area, but showed preference for those within of openings.
Resident buzzards mostly live in lowlands and foothills, but they can live on timbered ridges and uplands as well as rocky coasts, sometimes nesting on cliff ledges rather than trees. Buzzards may live from sea level to elevations of , breeding mostly below , but they can winter at an elevation of and migrate easily to . In the mountainous Italian Apennines, buzzard nests were at a mean elevation of and were, relative to the surrounding area, further from human-developed areas (i.e. roads) and nearer to valley bottoms in rugged places with irregular topography, especially ones that faced northeast. Common buzzards are fairly adaptable to agricultural lands but can show regional declines in apparent response to agriculture. Changes to more extensive agricultural practices were shown to reduce buzzard populations in western France, where reduction of "hedgerows, woodlots and grasslands areas" caused a decline of buzzards, and in Hampshire, England, where more extensive grazing by free-range cattle and horses led to declines of buzzards, probably largely due to the seeming reduction of small mammal populations there. On the contrary, buzzards in central Poland adapted to removal of pine trees and reduction of rodent prey by changing nest sites and prey for a time, with no strong change in their local numbers. Extensive urbanization seems to negatively affect buzzards, this species being generally less adaptable to urban areas than its New World counterpart, the red-tailed hawk. Although peri-urban areas can actually increase potential prey populations in a location at times, individual buzzard mortality, nest disturbances and nest site habitat degradation rise significantly in such areas. Common buzzards adapt fairly well to rural areas as well as to suburban areas with parks and large gardens, particularly where these areas are near farms. Behaviour The common buzzard is a typical Buteo in much of its behaviour. It is most often seen either soaring at varying heights or perched prominently on tree tops, bare branches, telegraph poles, fence posts, rocks or ledges, or alternately well inside tree canopies. Buzzards will also stand and forage on the ground. In resident populations, it may spend more than half of its day inactively perched. Furthermore, it has been described as a "sluggish and not very bold" bird of prey. It is a gifted soarer once aloft and can soar for extended periods, but can appear laborious and heavy in level flight, more so in nominate buzzards than in steppe buzzards. Particularly in migration, as was recorded in the case of steppe buzzards' movement over Israel, buzzards readily adjust their direction, tail and wing placement and flying height to compensate for the surrounding environment and wind conditions. In Israel, migrant buzzards rarely soar all that high (maximum above ground) due to the lack of mountain ridges that in other areas typically produce flyways; however, tail-winds are significant and allow birds to cover a mean of . Migration The common buzzard is aptly described as a partial migrant. The autumn and spring movements of buzzards are subject to extensive variation, even down to the individual level, based on a region's food resources, competition (both from other buzzards and other predators), extent of human disturbance and weather conditions. Short-distance movements are the norm for juveniles and some adults in autumn and winter, but more adults in central Europe and the British Isles remain at their year-round residences than do not.
Even for first-year juvenile buzzards, dispersal may not take them very far. In England, 96% of first-years moved in winter to less than from their natal site. Southwestern Poland was recorded to be a fairly important wintering ground in early spring for central European buzzards that apparently travelled from somewhat farther north; in winter the average density was a locally high 2.12 individuals per square kilometer. Habitat and prey availability seemed to be the primary drivers of habitat selection in fall for European buzzards. In northern Germany, buzzards were recorded to show preferences in fall for areas fairly distant from the nesting site, with a large quantity of vole-holes and more widely dispersed perches. In Bulgaria, the mean wintering density was 0.34 individuals per square kilometer, and buzzards showed a preference for agricultural over forested areas. Similar habitat preferences were recorded in northeastern Romania, where buzzard density was 0.334–0.539 individuals per square kilometer. The nominate buzzards of Scandinavia are somewhat more strongly migratory than most central European populations. However, birds from Sweden show some variation in migratory behaviours. A maximum of 41,000 individuals have been recorded at one of the main migration sites within southern Sweden, at Falsterbo. In southern Sweden, winter movements and migration were studied via observation of buzzard colour. White individuals were substantially more common in southern Sweden than further north in their Swedish range. The southern population migrates earlier than intermediate to dark buzzards do, among both adults and juveniles. A larger proportion of juveniles than of adults migrate in the southern population. Adults in the southern population, especially, are resident to a higher degree than more northerly breeders. The entire population of the steppe buzzard is strongly migratory, covering substantial distances during migration. In no part of the range do steppe buzzards use the same summering and wintering grounds. Steppe buzzards are slightly gregarious in migration, and travel in variously sized flocks. This race migrates in September to October, often travelling from Asia Minor to the Cape of Africa in about a month, but does not cross water, following the shore around the Winam Gulf of Lake Victoria rather than crossing the several-kilometer-wide gulf. Similarly, they will funnel along both sides of the Black Sea. The migratory behavior of steppe buzzards mirrors that of the broad-winged and Swainson's hawks (Buteo platypterus and B. swainsoni), similar long-distance migrating Buteos, in every significant way, including trans-equatorial movements, avoidance of large bodies of water and flocking behaviour. Migrating steppe buzzards will rise up with the morning thermals and can cover an average of hundreds of miles a day using the available currents along mountain ridges and other topographic features. The spring migration for steppe buzzards peaks around March–April, but the latest vulpinus arrive on their breeding grounds by late April or early May. Distances covered by migrating steppe buzzards in one-way flights from northern Europe (i.e. Finland or Sweden) to southern Africa have ranged over within a season.
For the steppe buzzards from eastern and northern Europe and western Russia (which comprise a majority of all steppe buzzards), peak migratory numbers occur in differing areas in autumn, when the largest recorded movements occur through Asia Minor, such as Turkey, and in spring, when the largest recorded movements are to the south in the Middle East, especially Israel. The two migratory movements barely differ overall until they reach the Middle East and east Africa, where the largest volume of migrants in autumn occurs at the southern part of the Red Sea, around Djibouti and Yemen, while the main volume in spring is in the northernmost strait, around Egypt and Israel. In autumn, numbers of steppe buzzards recorded in migration have ranged up to 32,000 (recorded 1971) in northwestern Turkey (Bosporus) and up to 205,000 (recorded 1976) in northeastern Turkey (Black Sea). Further down in migration, autumn numbers of up to 98,000 have been recorded in passage in Djibouti. Between 150,000 and nearly 466,000 steppe buzzards have been recorded migrating through Israel during spring, making this not only the most abundant migratory raptor there but also one of the largest raptor migrations anywhere in the world. Migratory movements of southern African buzzards largely occur along the major mountain ranges, such as the Drakensberg and Lebombo Mountains. Wintering steppe buzzards occur far more irregularly in the Transvaal than in the Cape region. The onset of migratory movement for steppe buzzards from southern Africa back to the breeding grounds is mainly in March, peaking in the second week. Steppe buzzards molt their feathers rapidly upon arrival at the wintering grounds and seem to split their flight feather molt between the breeding grounds in Eurasia and the wintering grounds in southern Africa, with the molt pausing during migration. In the last 50 years, nominate buzzards have been recorded typically migrating shorter distances and wintering further north, possibly in response to climate change, resulting in relatively smaller numbers of them at migration sites. They are also extending their breeding range, possibly reducing or supplanting steppe buzzards. Vocalizations Resident populations of common buzzards tend to vocalize all year round, whereas migrants tend to vocalize only during the breeding season. Both nominate buzzards and steppe buzzards (and their numerous related subspecies within their types) tend to have similar voices. The main call of the species is a plaintive, far-carrying pee-yow or peee-oo, used both as a contact call and, more excitedly, in aerial displays. Their call is sharper and more ringing when used in aggression, more drawn-out and wavering when chasing intruders, sharper and more yelping as a warning when intruders approach the nest, and shorter and more explosive when given in alarm. Other variations of their vocal performances include a cat-like mew, uttered repeatedly on the wing or when perched, especially in display; a repeated mah has been recorded as uttered by pairs answering each other, and further chuckles and croaks have also been recorded at nests. Juveniles can usually be distinguished by the discordant nature of their calls compared to those of adults. Dietary biology The common buzzard is a generalist predator which hunts a wide variety of prey given the opportunity.
Their prey spectrum extends to a wide variety of vertebrates including mammals, birds (of any age, from eggs to adults), reptiles, amphibians and, rarely, fish, as well as to various invertebrates, mostly insects. Young animals are often attacked, largely the nidifugous young of various vertebrates. In total well over 300 prey species are known to be taken by common buzzards. Furthermore, prey size can vary from tiny beetles, caterpillars and ants to large adult grouse and rabbits up to nearly twice their body mass. Mean body mass of vertebrate prey was estimated at in Belarus. At times, they will also subsist partially on carrion, usually of dead mammals or fish. However, dietary studies have shown that they mostly prey upon small mammals, largely small rodents. As for many temperate-zone raptorial birds of varied lineages, voles are an essential part of the common buzzard's diet. This bird's preference for the interface between woods and open areas frequently puts it in ideal vole habitat. Hunting in relatively open areas has been found to increase hunting success, whereas more complete shrub cover lowered success. A majority of prey is taken by dropping from a perch, and is normally taken on the ground. Alternately, prey may be hunted in a low flight. This species tends not to hunt in a spectacular stoop but generally drops gently and then gradually accelerates at the bottom with wings held above the back. Sometimes, the buzzard also forages by random glides or soars over open country, wood edges or clearings. Perch hunting may be done preferentially, but buzzards fairly regularly also hunt from a ground position when the habitat demands it. Outside the breeding season, as many as 15–30 buzzards have been recorded foraging on the ground in a single large field, especially juveniles. Normally the rarest foraging type is hovering. A study from Great Britain indicated that hovering does not seem to increase hunting success. Mammals A high diversity of rodents may be taken given the chance, as around 60 species of rodent have been recorded in the foods of common buzzards. It seems clear that voles are the most significant prey type for European buzzards. Nearly every study from the continent makes reference to the importance, in particular, of the two most numerous and widely distributed European voles: the common vole (Microtus arvalis) and the somewhat more northerly ranging field vole (Microtus agrestis). In southern Scotland, field voles were the best-represented species in pellets, accounting for 32.1% of 581 pellets. In southern Norway, field voles were again the main food in years with peak vole numbers, accounting for 40.8% of 179 prey items in 1985 and 24.7% of 332 prey items in 1994. Altogether, rodents amounted to 67.6% and 58.4% of the foods in these respective peak vole years. However, in low vole population years, the contribution of rodents to the diet was minor. As far west as the Netherlands, common voles were the most regular prey, amounting to 19.6% of 6624 prey items in a very large study. Common voles were the main foods recorded in central Slovakia, accounting for 26.5% of 606 prey items. The common vole, or other related vole species at times, was the main food as well in Ukraine (17.2% of 146 prey items), ranging east to Russia in the Privolshky Steppe Nature Reserve (41.8% of 74 prey items) and in Samara (21.4% of 183 prey items). Other records from Russia and Ukraine show voles ranging from slightly secondary prey to as much as 42.2% of the diet.
In Belarus, voles, including Microtus species and bank voles (Myodes glareolus), accounted for 34.8% of the biomass on average in 1065 prey items from different study areas over 4 years. At least 12 species of the genus Microtus are known to be hunted by common buzzards and even this is probably conservative; moreover, similar species like lemmings will be taken if available. Other rodents are taken largely opportunistically rather than by preference. Several wood mice (Apodemus spp.) are known to be taken quite frequently, but given their preference for activity in deeper woods than the field-forest interfaces preferred by buzzards, they are rarely more than secondary food items. An exception was in Samara, where the yellow-necked mouse (Apodemus flavicollis), one of the largest of its genus at , made up 20.9%, putting it just behind the common vole in importance. Similarly, tree squirrels are readily taken but rarely important in the foods of buzzards in Europe, as buzzards apparently prefer to avoid taking prey from trees, nor do they possess the agility typically necessary to capture significant quantities of tree squirrels. All four ground squirrel species that range (mostly) into eastern Europe are also known to be common buzzard prey, but little quantitative analysis has gone into how significant such predator-prey relations are. Rodent prey taken has ranged in size from the Eurasian harvest mouse (Micromys minutus) to the non-native muskrat (Ondatra zibethicus). Other rodents taken either seldom or in areas where the food habits of buzzards are spottily known include flying squirrels, marmots (presumably very young if taken alive), chipmunks, spiny rats, hamsters, mole-rats, gerbils, jirds and jerboas and occasionally considerable numbers of dormice, although these are nocturnal. Surprisingly little research has gone into the diets of wintering steppe buzzards in southern Africa, considering how numerous they are there. However, it has been indicated that the main prey remains consist of rodents such as the four-striped grass mouse (Rhabdomys pumilio) and Cape mole-rats (Georychus capensis). Other than rodents, two other groups of mammals can be counted as significant to the diet of common buzzards. One of these main prey types of import in the diets of common buzzards is the leporids or lagomorphs, especially the European rabbit (Oryctolagus cuniculus) where it is found in numbers in a wild or feral state. In all dietary studies from Scotland, rabbits were highly important to the buzzard's diet. In southern Scotland, rabbits constituted 40.8% of remains at nests and 21.6% of pellet contents, while lagomorphs (mainly rabbits but also some young hares) were present in 99% of remains in Moray, Scotland. The nutritional richness relative to the commonest prey elsewhere, such as voles, might account for the high productivity of buzzards here. For example, clutch sizes were twice as large on average where rabbits were common (Moray) as where they were rare (Glen Urquhart). In northern Ireland, an area of interest because it is devoid of any native vole species, rabbits were again the main prey. Here, lagomorphs constituted 22.5% of prey items by number and 43.7% by biomass. While rabbits are non-native, albeit long-established, in the British Isles, in their native area of the Iberian Peninsula, rabbits are similarly significant to the buzzard's diet. In Murcia, Spain, rabbits were the most common mammal in the diet, making up 16.8% of 167 prey items.
In a large study from northeastern Spain, rabbits were dominant in the buzzard's foods, making up 66.5% of 598 prey items. In the Netherlands, European rabbits were second in number (19.1% of 6624 prey items) only to common voles and the largest contributor of biomass to nests (36.7%). Outside of these (at least historically) rabbit-rich areas, leverets of the common hare species found in Europe can be important supplemental prey. The European hare (Lepus europaeus) was the fourth most important prey species in central Poland and the third most significant prey species in Stavropol Krai, Russia. Buzzards normally attack the young of European rabbits and hares. Most of the rabbits taken by buzzards have variously been estimated from , and infrequently up to , in weight. Similarly, the mean weight of brown hares taken in Finland was around . One young mountain hare (Lepus timidus) taken in Norway was estimated at about . However, common buzzards have the physical ability to kill adult rabbits. This is supported by remains of relatively large-sized tarsus bones of the rabbit, up to 64 mm in length, suggesting that prime adult rabbits weighing up to can be preyed upon. The other significant mammalian prey type is insectivores, among which more than 20 species are known to be taken by this species, including nearly all the species of shrew, mole and hedgehog found in Europe. Moles are taken particularly often among this order, since, as is the case with "vole-holes", buzzards probably tend to watch molehills in fields for activity and dive quickly from their perch when one of the subterranean mammals pops up. The most widely found mole in the buzzard's northern range is the European mole (Talpa europaea) and this is one of the more important non-rodent prey items for the species. This species was present in 55% of 101 remains in Glen Urquhart, Scotland, and was the second most common prey species (18.6%) in 606 prey items in Slovakia. In Bari, Italy, the Roman mole (Talpa romana), of similar size to the European species, was the leading identified mammalian prey, making up 10.7% of the diet. The full size range of insectivores may be taken by buzzards, ranging from the world's smallest mammal (by weight), the Etruscan shrew (Suncus etruscus), to arguably the heaviest insectivore, the European hedgehog (Erinaceus europaeus). Mammalian prey for common buzzards other than rodents, insectivores, and lagomorphs is rarely taken. Occasionally, some weasels such as the least weasel (Mustela nivalis) and stoat (Mustela erminea) are taken, and remains of young pine martens (Martes martes) and adult European polecats (Mustela putorius) have been found in buzzard nests. Numerous larger mammals, including medium-sized carnivores such as dogs, cats and foxes and various ungulates, are sometimes eaten as carrion by buzzards, mainly during lean winter months. Stillborn deer are also visited with some frequency. Birds When attacking birds, common buzzards chiefly prey on nestlings and fledglings of small to medium-sized birds, largely passerines but also a variety of gamebirds, but sometimes also injured, sickly or unwary but healthy adults. While capable of overpowering birds larger than itself, the common buzzard is usually considered to lack the agility necessary to capture many adult birds, even gamebirds, which would presumably be weaker fliers considering their relatively heavy bodies and small wings. The proportion of fledglings and younger birds preyed upon relative to adults is variable, however.
For example, in the Italian Alps, 72% of birds taken were fledglings or recently fledged juveniles, 19% were nestlings and 8% were adults. On the contrary, in southern Scotland, even though the buzzards were taking relatively large bird prey, largely red grouse (Lagopus lagopus scotica), 87% of birds taken were reportedly adults. In total, as in many raptorial birds that are far from bird-hunting specialists, birds are the most diverse group in the buzzard's prey spectrum due to the sheer number and diversity of birds; few raptors do not hunt them at least occasionally. Nearly 150 species of bird have been identified in the common buzzard's diet. In general, despite the many that are taken, birds usually take a secondary position in the diet after mammals. In northern Scotland, birds were fairly numerous in the foods of buzzards. The most often recorded avian prey, and the second and third most frequent prey species (after only field voles), in Glen Urquhart were the chaffinch (Fringilla coelebs) and the meadow pipit (Anthus pratensis), with the buzzards taking 195 fledglings of these species against only 90 adults. This differed from Moray, where the most frequent avian prey, and the second most frequent prey species behind the rabbit, was the common wood pigeon (Columba palumbus), and the buzzards took four times as many adults as fledglings. Birds were the primary food for common buzzards in the Italian Alps, where they made up 46% of the diet against mammals, which accounted for 29%, in 146 prey items. The leading prey species here were Eurasian blackbirds (Turdus merula) and Eurasian jays (Garrulus glandarius), although largely fledglings of both were taken. Birds could also take the leading position in years with low vole populations in southern Norway, in particular thrushes, namely the blackbird, the song thrush (Turdus philomelos) and the redwing (Turdus iliacus), which collectively made up 22.1% of 244 prey items in 1993. In southern Spain, birds were equal in number to mammals in the diet, both at 38.3%, but most remains were classified as "unidentified medium-sized birds", although the most often identified species of those that apparently could be determined were Eurasian jays and red-legged partridges (Alectoris rufa). Similarly, in northern Ireland, birds were roughly equal in importance to mammals, but most were unidentified corvids. In Seversky Donets, Ukraine, birds and mammals both made up 39.3% of the foods of buzzards. Common buzzards may hunt nearly 80 species of passerines and nearly all available gamebirds. As for many other largish raptors, gamebirds are attractive prey for buzzards due to their ground-dwelling habits. Buzzards were the most frequent predator in a study of juvenile pheasants in England, accounting for 4.3% of 725 deaths (against 3.2% by foxes, 0.7% by owls and 0.5% by other mammals). They also prey on a wide size range of birds, ranging down to Europe's smallest bird, the goldcrest (Regulus regulus). Very few individual birds hunted by buzzards weigh more than . However, there have been some particularly large avian kills by buzzards, of birds weighing about as much as or more than a buzzard itself, including adults of mallard (Anas platyrhynchos), black grouse (Tetrao tetrix), ring-necked pheasant (Phasianus colchicus), common raven (Corvus corax) and some of the larger gulls if ambushed on their nests.
The largest avian kill by a buzzard, and possibly the largest known prey item overall for the species, was an adult female western capercaillie (Tetrao urogallus) that weighed an estimated . At times, buzzards will hunt the young of large birds such as herons and cranes. Other assorted avian prey has included a few species of waterfowl, most available pigeons and doves, cuckoos, swifts, grebes, rails, nearly 20 assorted shorebirds, tubenoses, hoopoes, bee-eaters and several types of woodpecker. Birds with more conspicuous or open nesting areas or habits, such as water birds, are more likely to have fledglings or nestlings attacked, while for those with more secluded or inaccessible nests, such as pigeons, doves and woodpeckers, adults are more likely to be hunted.
Reptiles and amphibians
The common buzzard may be the most regular avian predator of reptiles and amphibians in Europe, apart from the regions where it is sympatric with the largely snake-eating short-toed eagle. In total, the prey spectrum of common buzzards includes nearly 50 herpetological prey species. In studies from northern and southern Spain, the leading prey numerically were both reptilian, although in Biscay (northern Spain) the leading prey (19%) was classified as "unidentified snakes". In Murcia, the most numerous prey was the ocellated lizard (Timon lepidus), at 32.9%. In total, at Biscay and Murcia, reptiles accounted for 30.4% and 35.9% of the prey items, respectively. Findings were similar in a separate study from northeastern Spain, where reptiles amounted to 35.9% of prey. In Bari, Italy, reptiles were the main prey, making up almost exactly half of the biomass, led by the large green whip snake (Hierophis viridiflavus), which reaches a maximum size of up to , at 24.2% of food mass. In Stavropol Krai, Russia, the sand lizard (Lacerta agilis) was the main prey at 23.7% of 55 prey items. The slowworm (Anguis fragilis), a legless lizard, became the most numerous prey for the buzzards of southern Norway in low vole years, amounting to 21.3% of 244 prey items in 1993, and was also common even in the peak vole year of 1994 (19% of 332 prey items). More or less any snake in Europe is potential prey, and the buzzard has been known to be uncharacteristically bold in going after and overpowering large snakes such as rat snakes, ranging up to nearly in length, and healthy, large vipers, despite the danger of being struck by such prey. However, in at least one case, the corpse of a female buzzard was found envenomed over the body of an adder that it had killed. In some parts of its range, the common buzzard acquires the habit of taking many frogs and toads. This was the case in the Mogilev Region of Belarus, where the moor frog (Rana arvalis) was the major prey (28.5%) over several years, followed by other frogs and toads, together amounting to 39.4% of the diet over the years. In central Scotland, the common toad (Bufo bufo) was the most numerous prey species, accounting for 21.7% of 263 prey items, while the common frog (Rana temporaria) made up a further 14.7% of the diet. Frogs made up about 10% of the diet in central Poland as well.
Invertebrates and other prey
When common buzzards feed on invertebrates, these are chiefly earthworms, beetles and caterpillars in Europe, and they largely seem to be preyed on by juvenile buzzards with less refined hunting skills or in areas with mild winters and ample swarming or social insects. In most dietary studies, invertebrates are at best a minor supplemental contributor to the buzzard's diet.
Nonetheless, roughly a dozen beetle species have been found in the foods of buzzards from Ukraine alone. In winter in northeastern Spain, it was found that the buzzards switched largely from the vertebrate prey typically taken during spring and summer to a largely insect-based diet. Most of this prey was unidentified, but the most frequently identified were European mantis (Mantis religiosa) and European mole cricket (Gryllotalpa gryllotalpa). In Ukraine, 30.8% of the food by number was found to be insects. Especially in winter quarters such as southern Africa, common buzzards are often attracted to swarming locusts and other orthopterans. In this way the steppe buzzard may mirror a similar long-distance migrant from the Americas, the Swainson's hawk, which feeds its young largely on nutritious vertebrates but switches to a largely insect-based diet once they reach their distant wintering grounds in South America. In Eritrea, 18 returning migrant steppe buzzards were seen to feed together on swarms of grasshoppers. For wintering steppe buzzards in Zimbabwe, one source went so far as to refer to them as primarily insectivorous, apparently being somewhat locally specialized to feeding on termites. Stomach contents in buzzards from Malawi apparently consisted largely of grasshoppers (alternating with lizards). Fish tend to be the rarest class of prey found in the common buzzard's foods. A couple of cases of predation on fish have been detected in the Netherlands, while elsewhere buzzards have been known to feed upon eels and carp.
Interspecies predatory relationships
Common buzzards co-occur with dozens of other raptorial birds through their breeding, resident and wintering grounds. There may be many other birds that broadly overlap in prey selection to some extent. Furthermore, the interfaces of forest and field that they prefer are used heavily by many other birds of prey. Some of the most similar species by diet are the common kestrel (Falco tinnunculus), hen harrier (Circus cyaneus) and lesser spotted eagle (Clanga pomarina), not to mention nearly every European species of owl, as all but two may locally prefer rodents such as voles in their diets. Diet overlap was found to be extensive between buzzards and red foxes (Vulpes vulpes) in Poland, with 61.9% of prey selection overlapping by species, although the dietary breadth of the fox was broader and more opportunistic. Both fox dens and buzzard roosts were found to be significantly closer to high vole areas relative to the overall environment here. The only other widely found European Buteo, the rough-legged buzzard, comes to winter extensively within the range of common buzzards. In southern Sweden, it was found that their habitat, hunting and prey selection often overlapped considerably. Rough-legged buzzards appear to prefer slightly more open habitat and took slightly fewer wood mice than common buzzards. Roughlegs also hover much more frequently and are more given to hunting in high winds. The two buzzards are aggressive towards one another and excluded each other from winter feeding territories in similar ways to the way they exclude conspecifics. In northern Germany, the buffer provided by their differing habitat preferences apparently accounted for the two buzzard species' lack of effect on each other's occupancy. Despite a broad range of overlap, very little is known about the ecology of common and long-legged buzzards where they co-exist.
However, it can be inferred from the long-legged species' preference for differing prey, such as blind mole-rats, ground squirrels, hamsters and gerbils, rather than the voles usually preferred by the common species, that serious competition for food is unlikely. A more direct negative effect has been found in the buzzard's co-existence with the northern goshawk (Accipiter gentilis). Despite the considerable discrepancy between the two species' dietary habits, habitat selection in Europe is largely similar between buzzards and goshawks. Goshawks are slightly larger than buzzards and are more powerful, agile and generally more aggressive birds, and so they are considered dominant. In studies from Germany and Sweden, buzzards were found to be less sensitive to disturbance than goshawks but were probably displaced into inferior nesting spots by the dominant goshawks. The exposure of buzzards to a dummy goshawk was found to decrease breeding success, whereas there was no effect on breeding goshawks when they were exposed to a dummy buzzard. In many cases, in Germany and Sweden, goshawks displaced buzzards from their nests to take them over for themselves. In Poland, buzzard productivity was correlated with prey population variations, particularly of voles, which could vary from 10 to 80 per hectare, whereas goshawks were seemingly unaffected by prey variations; buzzards were found here to number 1.73 pairs per against 1.63 goshawk pairs per . In contrast, the slightly larger counterpart of buzzards in North America, the red-tailed hawk (which is also slightly larger than American goshawks, the latter averaging smaller than European ones), is more similar in diet to goshawks there. Redtails are not invariably dominated by goshawks and are frequently able to outcompete them by virtue of greater dietary and habitat flexibility. Furthermore, red-tailed hawks are apparently equally capable of killing goshawks as goshawks are of killing them (killings are more one-sided in buzzard-goshawk interactions, in favour of the latter). Other raptorial birds, including many of similar or mildly larger size than common buzzards themselves, may dominate or displace the buzzard, especially with aims to take over their nests. Species such as the black kite (Milvus migrans), booted eagle (Hieraaetus pennatus) and the lesser spotted eagle have been known to displace actively nesting buzzards, although in some cases the buzzards may attempt to defend themselves. The broad range of accipitrids that take over buzzard nests is somewhat unusual. More typically, common buzzards are victims of nest parasitism by owls and falcons, as neither of these other kinds of raptorial birds builds their own nests, but these may regularly take up occupancy on already abandoned or alternate nests rather than ones the buzzards are actively using. Even birds not traditionally considered raptorial, such as common ravens, may compete for nesting sites with buzzards. In urban vicinities of southwestern England, peregrine falcons (Falco peregrinus) were found to harass buzzards so persistently that in many cases the buzzards were injured or killed; the attacks tended to peak during the falcons' breeding season and to be focused on subadult buzzards.
Despite often being dominated in nesting site confrontations by even similarly sized raptors, buzzards appear to be bolder in direct competition over food with other raptors outside of the context of breeding, and have even been known to displace larger birds of prey such as red kites (Milvus milvus); female buzzards may also dominate male goshawks (which are much smaller than female goshawks) at disputed kills. Common buzzards are occasionally threatened by predation by other raptorial birds. Northern goshawks have been known to have preyed upon buzzards in a few cases. Much larger raptors are known to have killed a few buzzards as well, including steppe eagles (Aquila nipalensis) preying on migrating steppe buzzards in Israel. Further instances of predation on buzzards have involved golden, eastern imperial (Aquila heliaca), Bonelli's (Aquila fasciata) and white-tailed eagles (Haliaeetus albicilla) in Europe. Besides preying on adult buzzards, white-tailed eagles have been known to raise buzzards with their own young. These are most likely cases of eagles carrying off young buzzard nestlings with the intention of predation but, for unclear reasons, not killing them. Instead the mother eagle comes to brood the young buzzard. Despite the difference in the two species' diets, white-tailed eagles are surprisingly successful at raising young buzzards (which are conspicuously much smaller than their own nestlings) to fledging. Studies in Lithuania of white-tailed eagle diets found that predation on common buzzards was more frequent than anticipated, with 36 buzzard remains found in 11 years of study of the summer diet of the white-tailed eagles. While nestling buzzards were multiple times more vulnerable to predation than adult buzzards in the Lithuanian data, the region's buzzards expended considerable time and energy during the late nesting period trying to protect their nests. The most serious predator of common buzzards, however, is almost certainly the Eurasian eagle-owl (Bubo bubo). This is a very large owl with a mean body mass about three to four times greater than that of a buzzard. The eagle-owl, despite often taking small mammals that broadly overlap with those selected by buzzards, is considered a "super-predator" that is a major threat to nearly all co-existing raptorial birds, capably destroying whole broods of other raptorial birds and dispatching adult raptors even as large as eagles. Due to their large numbers in edge habitats, common buzzards frequently feature heavily in the eagle-owl's diet. Eagle-owls, as do some other large owls, also readily expropriate the nests of buzzards. In the Czech Republic and in Luxembourg, the buzzard was the third and fifth most frequent prey species for eagle-owls, respectively. The reintroduction of eagle-owls to sections of Germany has been found to have a slight deleterious effect on the local occupancy of common buzzards. The only sparing factor is the temporal difference (the buzzard nesting later in the year than the eagle-owl), and buzzards may locally be able to avoid nesting near an active eagle-owl family. Although the ecology of the wintering population is relatively little studied, a similarly very large owl at the top of the avian food chain, the Verreaux's eagle-owl (Bubo lacteus), is the only known predator of wintering steppe buzzards in southern Africa.
Other large, vole-eating owls, such as great grey owls (Strix nebulosa) and Ural owls (Strix uralensis), are not known predators of buzzards but are known to displace nesting buzzards or to be avoided by them. Unlike with large birds of prey, next to nothing is known of mammalian predators of common buzzards, despite the likelihood that at least some nestlings and fledglings are depredated by mammals. Common buzzards themselves rarely present a threat to other raptorial birds but may occasionally kill a few of those of smaller size. The buzzard is a known predator of Eurasian sparrowhawks (Accipiter nisus), common kestrels and lesser kestrels (Falco naumanni). Perhaps surprisingly, given the nocturnal habits of this prey, the group of raptorial birds the buzzard is known to hunt most extensively is owls. Known owl prey has included barn owls (Tyto alba), European scops owls (Otus scops), tawny owls (Strix aluco), little owls (Athene noctua), boreal owls (Aegolius funereus), long-eared owls (Asio otus) and short-eared owls (Asio flammeus). Despite their relatively large size, tawny owls are known to avoid buzzards, as there are several records of buzzards preying upon the owls.
Breeding
Nesting territories and density
Home ranges of common buzzards are generally . The size of the breeding territory seems to be generally correlated with food supply. In a German study, the range was with an average of . Some of the lowest pair densities of common buzzards seem to come from Russia. For instance, in Kerzhenets Nature Reserve, the recorded density was 0.6 pairs per and the average distance between nearest neighbors was . The Snowdonia region of northern Wales held a pair per , with a mean nearest neighbor distance of ; in adjacent Migneint, pair occurrence was , with a mean distance of . In the Teno massif of the Canary Islands, the average density was estimated as 23 pairs per , similar to that of a middling continental population. On another set of islands, on Crete, the density of pairs was lower at 5.7 pairs per ; here buzzards tend to have an irregular distribution, some occurring in lower-intensity harvest olive groves, but they are actually more common in agricultural than in natural areas. In the Italian Alps, it was recorded in 1993–96 that there were from 28 to 30 pairs per . In central Italy, the density average was lower, at 19.74 pairs per . Higher density areas are known than those above. Two areas of the Midlands of England showed occupancies of 81 and 22 territorial pairs per . High buzzard densities there were associated with high proportions of unimproved pasture and mature woodland within the estimated territories. Similarly high densities of common buzzards were estimated in central Slovakia using two different methods, here indicating densities of 96 to 129 pairs per . Despite claims that the English Midlands study represented the highest known territory density for the species, a count of 32 to 51 pairs in a wooded area of merely in the Czech Republic seems to exceed even those densities. The Czech study hypothesized that fragmentation of forest under human management of lands for wild sheep and deer, which creates exceptional concentrations of prey such as voles, together with a lack of appropriate habitat in surrounding regions, accounted for the exceptionally high density. In the North-Estonian Neeruti landscape reserve (area 1250 ha), Marek Vahula found 9 populated nests in 1989 and 1990. One nest was found in 1982 and is apparently the oldest known nest that is still populated today.
Common buzzards maintain their territories through flight displays. In Europe, territorial behaviour generally starts in February. However, displays are not uncommon throughout the year in resident pairs, especially by males, and can elicit similar displays by neighbors. In them, common buzzards generally engage in high circling, spiraling upward on slightly raised wings. Mutual high circling by pairs sometimes goes on at length, especially during the period prior to or during breeding season. In mutual displays, a pair may follow each other at in level flight. During the mutual displays, the male may engage in exaggerated deep flapping or zig-zag tumbling, apparently in response to the female being too distant. Two or three pairs may circle together at times, and as many as 14 individual adults have been recorded over established display sites. Sky-dancing by common buzzards has been recorded in spring and autumn, typically by the male but sometimes by the female, nearly always with much calling. Their sky-dances are of the rollercoaster type, with an upward sweep until they start to stall, but sometimes embellished with loops or rolls at the top. Next in the sky-dance, they dive on more or less closed wings before spreading them and shooting up again, with upward sweeps of up to and dive drops of up to at least . These dances may be repeated in series of 10 to 20. In the climax of the sky dance, the undulations become progressively shallower, often slowing and terminating directly onto a perch. Various other aerial displays include low contour flight or weaving among trees, frequently with deep beats and exaggerated upstrokes which show the underwing pattern to rivals perched below. Talon grappling and occasionally cartwheeling downward with feet interlocked have been recorded in buzzards and, as in many raptors, are likely the physical culmination of the aggressive territorial display, especially between males. Despite the highly territorial nature of buzzards and their devotion to a single mate and breeding ground each summer, there is one case of a polyandrous trio of buzzards nesting in the Canary Islands.
Nests
Common buzzards tend to build a bulky nest of sticks, twigs and often heather. Commonly, nests are up to across and deep. With reuse over years, the diameter can reach or exceed and the weight of nests can reach over . Active nests tend to be lined with greenery; most often this consists of broad-leafed foliage, but locally it sometimes also includes rush or seaweed. Nest height in trees is commonly , usually by the main trunk or a main fork of the tree. In Germany, trees used for nesting consisted mostly of red beeches (Fagus sylvatica) (in 337 cases), whereas a further 84 were in assorted oaks. Buzzards were recorded to nest almost exclusively in pines in Spain, at a mean height of . Trees are generally used as a nesting location, but they will also utilize crags or bluffs if trees are unavailable. Buzzards in one English study were surprisingly partial to nesting on well-vegetated banks and, due to the rich surrounding habitat and prey populations, such nests were actually more productive than those located elsewhere in the study area. Furthermore, a few ground nests were recorded in high prey-level agricultural areas in the Netherlands. In the Italian Alps, 81% of 108 nests were on cliffs.
The common buzzard generally lacks the propensity of its Nearctic counterpart, the red-tailed hawk, to occasionally nest on or near manmade structures (often in heavily urbanized areas), but in Spain some pairs have been recorded nesting along the perimeter of abandoned buildings. Pairs often have several nests, but some pairs may use one over several consecutive years. Two to four alternate nests in a territory is typical for common buzzards, especially those breeding further north in their range.
Reproduction and eggs
The breeding season commences at differing times based on latitude. Common buzzard breeding seasons may fall as early as January to April, but typically the breeding season is March to July in much of the Palearctic. In the northern stretches of the range the breeding season may last into May–August. Mating usually occurs on or near the nest and lasts about 15 seconds, typically occurring several times a day. Eggs are usually laid at 2 to 3-day intervals. The clutch size can range from 2 to 6, a relatively large clutch for an accipitrid. More northerly and westerly buzzards usually bear larger clutches, which average nearer 3, than those further east and south. In Spain, the average clutch size is about 2 to 2.3. From 4 locations in different parts of Europe, 43% of clutches had a size of 2, 41% had a size of 3, and clutches of 1 and 4 each constituted about 8%. Laying dates are remarkably constant throughout Great Britain. There are, however, highly significant differences in clutch size between British study areas. These do not follow any latitudinal gradient, and it is likely that local factors such as habitat and prey availability are more important determinants of clutch size. The eggs are white in ground colour, rather round in shape, with sporadic red to brown markings sometimes lightly showing. In the nominate race, egg size is in height by in diameter with an average of in 600 eggs. In the race vulpinus, egg height is by with an average of in 303 eggs. Eggs are generally laid in late March to early April in the extreme south, sometime in April in most of Europe, and into May and possibly even early June in the extreme north. If eggs are lost to a predator (including humans) or fail in some other way, common buzzards do not usually lay replacement clutches, but replacement clutches have been recorded, including as many as 3 clutch attempts by a single female. The female does most but not all of the incubating, doing so for a total of 33–35 days. The female remains at the nest brooding the young in the early stages, with the male bringing all prey. At about 8–12 days, both the male and female will bring prey, but the female continues to do all feeding until the young can tear up their own prey.
Development of young
Once hatching commences, it may take 48 hours for the chick to chip out. Hatching may take place over 3–7 days, with new hatchlings averaging about in body mass. Often the youngest nestling dies from starvation, especially in broods of three or more. In nestlings, the first down is replaced by longer, coarser down at about 7 days of age, with the first proper feathers appearing at 12 to 15 days. The young are nearly fully feathered rather than downy at about a month of age and can start to feed themselves as well. The first attempts to leave the nest are often at about 40–50 days, usually averaging 40–45 days in nominate buzzards in Europe, but earlier on average, at 40–42 days, in vulpinus. Fledging occurs typically at 43–54 days, but in extreme cases as late as 62 days.
Sexual dimorphism is apparent in European fledglings, as females often scale about , against in males. After leaving the nest, buzzards generally stay close by, but migratory individuals show more definitive movement, generally southbound. Full independence is generally sought 6 to 8 weeks after fledging. First-year birds generally remain in the wintering area for the following summer, then return to near their area of origin, but migrate south again without breeding. Radio-tracking suggests that most dispersal, even relatively early dispersal, by juvenile buzzards is undertaken independently rather than via exile by parents, as has been recorded in some other birds of prey. In common buzzards, generally speaking, siblings stay quite close to each other after dispersal from their parents and form something of a social group, although parents usually tolerate their presence on their territory until they are laying another clutch. However, the social group of siblings disbands at about a year of age. Juvenile buzzards are subordinate to adults during most encounters and tend to avoid direct confrontations and actively defended territories until they are of appropriate age (usually at least 2 years of age). This was the case as well for steppe buzzard juveniles wintering in southern Africa, although in some cases juveniles were able to successfully steal prey from adults there.
Breeding success rates
Numerous factors may weigh into the breeding success of common buzzards. Chief among these are prey populations, habitat, disturbance and persecution levels, and competition within the species. In Germany, intra- and interspecific competition, plumage morph, laying date, precipitation levels and anthropogenic disturbances in the breeding territory, in declining order, were deemed to be the most significant determinants of breeding success. In an accompanying study, it was found that a mere 17% of adult birds of both sexes present in a German study area produced 50% of offspring, so breeding success may be lower than perceived, and many adult buzzards, for unknown reasons, may not attempt to breed at all. High breeding success was detected in Argyll, Scotland, likely due to healthy prey populations (rabbits) but also probably a lower local rate of persecution than elsewhere in the British Isles. Here, the mean number of fledglings was 1.75, against 0.82–1.41 in other parts of Britain. It was found in the English Midlands that breeding success, by measure of both clutch size and mean number of fledglings, was relatively high, thanks again to high prey populations. Breeding success was lower farther from significant stands of trees in the Midlands, and most nesting failures that could be determined occurred in the incubation stage, possibly in correlation with predation of eggs by corvids. More significant even than prey, conditions in late winter and early spring were found to be likely the primary driver of breeding success in buzzards from southern Norway. Here, even in peak vole years, nesting success could be considerably hampered by heavy snow at this crucial stage. In Norway, large clutches of 3+ were expected only in years with minimal snow cover, high vole populations and lighter rains in May–June. In the Italian Alps, the mean number of fledglings per pair was 1.07. In a study from southwestern Germany, 33.4% of nesting attempts were failures, with an average of 1.06 fledglings for all nesting attempts and 1.61 for all successful attempts. In Germany, weather conditions and rodent populations seemed to be the primary drivers of nesting success.
In the Murcia part of Spain, in contrast with Biscay to the north, higher levels of interspecific competition from booted eagles and northern goshawks did not appear to negatively affect breeding success, due to more ample prey populations (rabbits again) in Murcia than in Biscay. In the Westphalia area of Germany, it was found that intermediate colour morphs were more productive than those that were darker or lighter. For reasons that are not entirely clear, fewer parasites were found to afflict broods of intermediate-plumaged buzzards than those of dark and light phenotypes; in particular, higher melanin levels somehow appeared to be more inviting to parasitic organisms that affect the health of the buzzard's offspring. The composition of habitat and its relation to human disturbance were important variables for the dark and light phenotypes but were less important to intermediate individuals. Thus selection pressures resulting from different factors did not vary much between sexes but varied between the three phenotypes in the population. Breeding success in areas with wild European rabbits was considerably affected by rabbit myxomatosis and rabbit haemorrhagic disease, both of which have heavily depleted wild rabbit populations. Breeding success in formerly rabbit-rich areas was recorded to decrease from as much as 2.6 to as little as 0.9 young per pair. Among several radio-tagged buzzards, only a single male bred as early as his 2nd summer (at about a year of age). Significantly more buzzards were found to start breeding in their 3rd summer, but breeding attempts can be individually erratic given the availability of habitat, food and mates. The mean life expectancy was estimated at 6.3 years in the late 1950s, but this was at a time of high persecution when humans were causing 50–80% of buzzard deaths. In a more modern context with regionally reduced persecution rates, the expected lifespan can be higher (possibly in excess of 10 years at times) but is still widely variable due to a variety of factors.
Status
The common buzzard is one of the most numerous birds of prey in its range. Almost certainly, it is the most numerous diurnal bird of prey throughout Europe. Conservative estimates put the total population at no fewer than 700,000 pairs in Europe, more than twice the totals estimated for the next most common birds of prey: the Eurasian sparrowhawk (more than 340,000 pairs), the common kestrel (more than 330,000 pairs) and the northern goshawk (more than 160,000 pairs). Ferguson-Lees et al. roughly estimated the total population of the common buzzard at nearly 5 million pairs, but at the time those numbers included the now split-off eastern and Himalayan buzzards. These numbers may be excessive, but the total population of common buzzards is certain to run well into seven figures. More recently, the IUCN estimated the common buzzard (excluding the Himalayan and eastern buzzards) to number somewhere between 2.1 and 3.7 million birds, which would make this buzzard one of the most numerous of all accipitrid family members (estimates for Eurasian sparrowhawks, red-tailed hawks and northern goshawks also may range over 2 million). Other than being absent from Iceland, buzzards had also been lost as breeders from Ireland by 1910; they recolonized Ireland sometime in the 1950s and, as of 1991, had increased to 26 pairs.
Supplemental feeding has reportedly helped the Irish buzzard population to rebound, especially where rabbits have decreased. Most other countries have at least four figures of breeding pairs. As of the 1990s, countries such as Great Britain, France, Switzerland, the Czech Republic, Poland, Sweden, Belarus and Ukraine all numbered pairs well into five figures, while Germany had an estimated 140,000 pairs and European Russia may have held 500,000 pairs. Between 44,000 and 61,000 pairs nested in Great Britain by 2001, with numbers gradually increasing after past persecution, habitat alteration and prey reductions, making it by far the most abundant diurnal raptor there. In Westphalia, Germany, the population of buzzards was shown to have nearly tripled over the last few decades. The Westphalian buzzards are possibly benefiting from an increasingly warm mean climate, which in turn is increasing the vulnerability of voles. However, the rate of increase was significantly greater in males than in females, in part because Eurasian eagle-owls reintroduced to the region prey on nests (including the brooding mother), which may in turn put undue pressure on the local buzzard population. At least 238 common buzzards killed through persecution, largely poisoning, were recovered in England from 1975 to 1989. Persecution did not differ significantly at any point during this span of years, nor did persecution rates decrease compared with those recorded in the last such survey, in 1981. While some persecution persists in England, it is probably slightly less common today. The buzzard was found to be the raptor most vulnerable to power-line collision fatalities in Spain, probably because it is one of the most common largish birds, and together with the common raven it accounted for nearly a third of recorded electrocutions. Given its relative abundance, the common buzzard is held as an ideal bioindicator: like other raptors, it is affected by a range of pesticide and metal contamination through pollution, but it is largely resilient to these at the population level. In turn, this allows biologists to study (and harvest if needed) the buzzards and their environments intensively without affecting their overall population. The lack of effect may be due to the buzzard's adaptability as well as its relatively short, terrestrially based food chain, which exposes it to less risk of contamination and population depletion than raptors that prey more heavily on water-based prey (such as some large eagles) or on other birds (such as falcons). Common buzzards are seldom as vulnerable to egg-shell thinning from DDT as are other raptors, but egg-shell thinning has been recorded. Other factors that negatively affect raptors and have been studied in common buzzards include helminths, avipoxvirus and assorted other viruses.
Gallery
References
Citations
General sources
External links
Steppe Buzzard species text in The Atlas of Southern African Birds
Madeira Birds: Buzzard. Page about the controversial subspecies harterti. Retrieved 28 November 2006.
Ageing and sexing (PDF; 4.2 MB) by Javier Blasco-Zumeta & Gerd-Michael Heinze
Feathers of Common Buzzard (Buteo buteo)
common buzzard Birds of Africa Birds of prey of Eurasia Birds of Macaronesia common buzzard common buzzard
10
Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light. The star is named after E. E. Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890. Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998. Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion known as Barnard's Star b was reported to orbit Barnard's Star. It was believed to have a minimum mass of and orbit at . However, work presented in July 2021 refuted the existence of this planet. Naming In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Barnard's Star for this star on 1 February 2017 and it is now included in the List of IAU-approved Star Names. Description Barnard's Star is a red dwarf of the dim spectral type M4, and it is too faint to see without a telescope; Its apparent magnitude is 9.5. At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy, and the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity. The proper motion of Barnard's Star corresponds to a relative lateral speed of 90km/s. The 10.3 arcseconds it travels in a year amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon. The radial velocity of Barnard's Star is , as measured from the blueshift due to its motion toward the Sun. Combined with its proper motion and distance, this gives a "space velocity" (actual speed relative to the Sun) of . 
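The lateral speed quoted above follows from a standard piece of astrometric arithmetic: proper motion in arcseconds per year, multiplied by distance in parsecs and by 4.74 km/s (one astronomical unit per year), gives the tangential velocity, which combines with the radial velocity to give the space velocity. The following is a minimal sketch in Python of that calculation; the radial velocity of roughly 110 km/s toward the Sun is an assumed illustrative value, since the exact figure is not quoted in this text, while the distance of 1.834 parsecs is the one given later in this article.

```python
import math

# Tangential (lateral) speed from proper motion and distance:
#   v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
# where 4.74 km/s corresponds to one astronomical unit per year.
mu_arcsec_per_yr = 10.3      # proper motion, as given in this article
distance_pc      = 1.834     # parsecs, as given later in this article
v_radial         = -110.0    # km/s toward the Sun; assumed illustrative value

v_tangential = 4.74 * mu_arcsec_per_yr * distance_pc
v_space      = math.hypot(v_tangential, v_radial)

print(f"tangential speed ≈ {v_tangential:.0f} km/s")  # about 90 km/s, as stated above
print(f"space velocity ≈ {v_space:.0f} km/s")         # on the order of 140 km/s
```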
Barnard's Star will make its closest approach to the Sun around 11,800 CE, when it will approach to within about 3.75 light-years. Proxima Centauri is the closest star to the Sun at a position currently 4.24 light-years distant from it. However, despite Barnard's Star's even closer pass to the Sun in 11,800 CE, it will still not then be the nearest star, since by that time Proxima Centauri will have moved to a yet-nearer proximity to the Sun. At the time of the star's closest pass by the Sun, Barnard's Star will still be too dim to be seen with the naked eye, since its apparent magnitude will only have increased by one magnitude to about 8.5 by then, still being 2.5 magnitudes short of visibility to the naked eye. Barnard's Star has a mass of about 0.16 solar masses (), and a radius about 0.2 times that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter (), its radius is only roughly 2 times that of Jupiter, due to its much higher density. Its effective temperature is about 3,220 kelvin, and it has a luminosity of only 0.0034 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units. Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium and helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, have led to the designation "intermediate population II star", between a halo and disk star. However, some recently published scientific papers have given much higher estimates for the metallicity of the star, very close to the Sun's level, between 75 and 125% of the solar metallicity.
Search for planets
Astrometric planetary claims
For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, by using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter. Van de Kamp had been observing the star from 1938, attempting, with colleagues at the Sproul Observatory at Swarthmore College, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systematic individual errors. Van de Kamp's initial suggestion was a planet having about at a distance of 4.4AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were two planets of 1.1 and . Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate measuring techniques, failed to verify the planetary companion. Another paper published by John L.
Hershey four months earlier, also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated with the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was attributed to an artifact of maintenance and upgrade work. The affair has been discussed as part of a broader scientific review. Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged because of this.
Barnard's Star b
In November 2018, an international team of astronomers announced the detection by radial velocity of a candidate super-Earth orbiting in relatively close proximity to Barnard's Star. Led by Ignasi Ribas of Spain, their work, conducted over two decades of observation, provided strong evidence of the planet's existence. However, the existence of the planet was refuted in 2021, because the radial velocity signal was found to originate from a stellar activity cycle, and a study in 2022 confirmed this result. Dubbed Barnard's Star b, the planet was thought to be near the stellar system's snow line, which is an ideal spot for the icy accretion of proto-planetary material. It was thought to orbit at 0.4AU every 233 days and had a proposed minimum mass of . The planet would have most likely been frigid, with an estimated surface temperature of about , and lie outside Barnard's Star's presumed habitable zone. Direct imaging of the planet and its tell-tale light signature would have been possible in the decade after its discovery. Further faint and unaccounted-for perturbations in the system suggested there may be a second planetary companion even farther out.
Refining planetary boundaries
For the more than four decades between van de Kamp's rejected claim and the eventual announcement of a planet candidate, Barnard's Star was carefully studied and the mass and orbital boundaries for possible planets were slowly tightened. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious. Null results for planetary companions continued throughout the 1980s and 1990s, including interferometric work with the Hubble Space Telescope in 1999. Gatewood was able to show in 1995 that planets with were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, the Hubble work further excluded planetary companions of with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an "M sin i" value greater than 7.5 times the mass of the Earth (), or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value). In 2013, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements, taken over a period of 25 years, from the Lick and Keck Observatories and applying Monte Carlo analysis for both circular and eccentric orbits, upper masses for planets out to 1,000-day orbits were determined.
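The "M sin i" limits discussed here come from the standard radial-velocity relation between a star's reflex motion and the minimum mass of an orbiting body. Below is a minimal sketch in Python of that conversion, assuming a circular orbit and an illustrative velocity semi-amplitude of 1.2 m/s (a value not quoted in this text); with the 233-day period and 0.16-solar-mass star given in this article, it recovers a minimum mass of a few Earth masses, in line with the super-Earth candidate described above.

```python
import math

# Minimum mass ("m sin i") of an orbiting body from a radial-velocity signal,
# assuming a circular orbit and a planet mass much smaller than the star's:
#   K = (2*pi*G / P)**(1/3) * m_p*sin(i) / M_star**(2/3)
# rearranged below for m_p*sin(i).
G       = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_sun   = 1.989e30     # kg
M_earth = 5.972e24     # kg

M_star = 0.16 * M_sun     # stellar mass as given in this article
P      = 233 * 86400.0    # orbital period of the 2018 candidate, in seconds
K      = 1.2              # m/s; assumed illustrative semi-amplitude, not from this text

m_sin_i = K * M_star ** (2 / 3) * (P / (2 * math.pi * G)) ** (1 / 3)
print(f"m sin i ≈ {m_sin_i / M_earth:.1f} Earth masses")  # roughly 3, i.e. a super-Earth
```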
Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. It was also discovered that the habitable zone of the star seemed to be devoid of roughly Earth-mass planets or larger, save for face-on orbits. Even though this research greatly restricted the possible properties of planets around Barnard's Star, it did not rule them out completely, as terrestrial planets were always going to be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target; however, the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal, but was stripped of funding in 2007. The analysis of radial velocities that eventually led to the discovery of the candidate super-Earth orbiting Barnard's Star was also used to set more precise upper mass limits for possible planets, up to and within the habitable zone: a maximum of up to the inner edge and on the outer edge of the optimistic habitable zone, corresponding to orbital periods of up to 10 and 40 days respectively. Therefore, it appears that Barnard's Star indeed does not host Earth-mass planets or larger in hot and temperate orbits, unlike other M-dwarf stars that commonly have these types of planets in close-in orbits.
Stellar flares
1998
In 1998, a stellar flare on Barnard's Star was detected based on changes in the spectral emissions on 17 July during an unrelated search for variations in the proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that "the star would be fantastic for amateurs to observe". The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly. For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggests it ought to be quiescent; 1998 research showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days. Stellar activity of this sort has created interest in using Barnard's Star as a proxy to understand similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planet located therein would be strongly affected by stellar flares, stellar winds, and plasma ejection events.
2019
In 2019, two additional ultraviolet stellar flares were detected, each with far-ultraviolet energy of 3×10²² joules, together with one X-ray stellar flare with energy 1.6×10²² joules.
The flare rate observed to date is enough to cause loss of 87 Earth atmospheres per billion years through thermal processes and ≈3 Earth atmospheres per billion years through ion loss processes on Barnard's Star b. Environment Barnard's Star shares much the same neighborhood as the Sun. The neighbors of Barnard's Star are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun and Alpha Centauri are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky at coordinates RA=, Dec=, in the westernmost part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs, it would be a first-magnitude star, as Pollux is from the Earth. Proposed exploration Project Daedalus Barnard's Star was studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, uncrewed travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets. The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed. The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years. See also Kepler-42 – Nearly identical to Barnard's star, and hosts three sub-Earth sized planets. Notes References External links Amateur work showing Barnard's Star movement over time. Animated image with frames approx. one year apart, beginning in 2007, showing the movement of Barnard's Star. Barnard's Star in the Staracle Tycho catalog Discoveries by Edward Emerson Barnard M-type main-sequence stars Ophiuchus BY Draconis variables Stars with proper names Ophiuchi, V2500 0699 BD+04 3561A 087937 ? Local Interstellar Cloud Hypothetical planetary systems J17574849+0441405
7
The Bay of Quinte () is a long, narrow bay shaped like the letter "Z" on the northern shore of Lake Ontario in the province of Ontario, Canada. It is just west of the head of the Saint Lawrence River that drains the Great Lakes into the Gulf of Saint Lawrence. It is located about east of Toronto and west of Montreal. The name "Quinte" is derived from "Kenté" or Kentio, an Iroquoian village located near the south shore of the Bay. Later on, an early French Catholic mission was built at Kenté, located on the north shore of what is now Prince Edward County, leading to the Bay being named after the Mission. Officially, in the Mohawk language, the community is called , which means "the place of the bay". The Cayuga name is or , "land of two logs." The Bay, as it is known locally, provides some of the best trophy walleye angling in North America as well as most sport fish common to the great lakes. The bay is subject to algal blooms in late summer. Zebra mussels as well as the other invasive species found in the Great Lakes are present. The Quinte area played a vital role in bootlegging during prohibition in the United States, with large volumes of liquor being produced in the area, and shipped via boat on the bay to Lake Ontario finally arriving in New York State where it was distributed. Illegal sales of liquor accounted for many fortunes in and around Belleville. Tourism in the area is significant, especially in the summer months due to the Bay of Quinte and its fishing, local golf courses, provincial parks, and wineries. Geography The northern side of the bay is defined by Ontario's mainland, while the southern side follows the shore of the Prince Edward County headland. Beginning in the east with the outlet to Lake Ontario, the bay runs west-southwest for to Picton (although this section is also called Adolphus Reach), where it turns north-northwest for another as far as Deseronto. From there it turns south-southwest again for another , running past Big Island on the south and Belleville on the north. The width of the bay rarely exceeds . The bay ends at Trenton (Quinte West) and the Trent River, both also on the north side. The Murray Canal has been cut through the "Carrying Place", the few kilometres separating the end of the bay and Lake Ontario on the west side. The Trent River is part of the Trent-Severn Waterway, a canal connecting Lake Ontario to Lake Simcoe and then Georgian Bay on Lake Huron. There are several sub-bays off the Bay of Quinte, including Hay Bay, Big Bay, and Muscote Bay. Bay of Quinte Region Quinte is also a region comprising several communities situated along the Bay of Quinte, including Quinte West, Brighton and the City of Belleville, which is the largest city in the Quinte Region, and represents a midpoint between Montreal, Ottawa, and Toronto. The Greater Bay of Quinte area includes the municipalities of Brighton, Quinte West, Belleville, Prince Edward County, and Greater Napanee as well as the Native Tyendinaga Mohawk Territory. Overall population of the area exceeds 200,000. Mohawks of the Bay of Quinte The Mohawks of the Bay of Quinte (Kenhtè:ke Kanyen'kehá:ka) live on traditional Tyendinaga Mohawk Territory. Their reserve Band number 244, their current land base, is on the Bay of Quinte in southeastern Ontario east of Belleville and immediately to the west of Deseronto. 
The community takes its name from a variant spelling of Mohawk leader Joseph Brant's traditional Mohawk name, Thayendanegea (standardized spelling Thayentiné:ken), which means 'two pieces of fire wood beside each other'. Officially, in the Mohawk language, the community is called "Kenhtè:ke" (Tyendinaga), which means "on the bay", and was the birthplace of Tekanawí:ta. The Cayuga name is Tyendinaga, Tayęda:ne:gęˀ or Detgayę:da:negęˀ, "land of two logs."
Communities
Belleville
Quinte West
Brighton
Shannonville
Napanee
Deseronto
Tyendinaga Mohawk Territory
Rossmore
Ameliasburgh
Picton
Consecon
Carrying Place
Education
The Quinte Region, specifically the City of Belleville, is home to Loyalist College of Applied Arts and Technology. Other post-secondary schools in the region include Maxwell College of Advanced Technology, CDI College, and Quinte Literacy. Secondary schools in the region include Albert College (a private school) and Sir James Whitney (a school for the deaf and severely hearing-impaired).
Industry and employment
The Bay of Quinte region is a hub for industry in eastern Ontario. The region is home to a diverse cluster of domestic and multi-national manufacturing and logistics companies. Sectors include food processing, auto parts, plastics and packaging, consumer goods, and more. The region's close proximity to North American markets, strong labour force and competitive start-up and operating costs have attracted attention and new investment from companies all over the globe. Industry in the Bay of Quinte region is supported by a workforce of over 11,000. Investment attraction and industrial retention are supported regionally by the Quinte Economic Development Commission. Just a few of the over 350 industries located in the Bay of Quinte Region include:
Schütz Canada, German manufacturer of intermediate bulk containers
Essroc Canada, a division of Italcementi
Magna Autosystems - lighting division (3 facilities)
Hannon Climate Control Canada Ltd.—Automotive parts
Procter and Gamble Inc.—Feminine hygiene products
Kellogg — Breakfast cereal manufacturer
Kruger - manufacturing facial and toilet tissue for the away-from-home market
Hain Celestial - manufacturing Yves Veggie Cuisine products
Sprague Foods - canned and jarred soups and beans
Donini Chocolate - a division of John Vince Foods
Redpath Sugar
Trenton Cold Storage Group Inc.—Refrigerated warehousing and distribution; custom co-packing
Lactalis Canada—Black Diamond Cheese Division—Cheese manufacturing and packaging
Avaya—A telecommunications research and product development centre
Research Casting International—Canadian company specializing in moulding and casting for the production of museum exhibits
Cooney Transport Ltd.—Trucking company
Wellington Mushroom Farm / Highline Produce—Mushroom farm
Domtech—Copper wiring
ClearWater Design Canoe and Kayak—Boat manufacturer
The SAB Group of Companies Limited—Consumer goods company
Mapco Plastics—Biodegradable plastic packaging manufacturer
Citipack Distribution—Cash and carry
Babars Bazaar—International commodity trading
Jobsters Staffing—Staffing agency
Images
References
External links
Official website of Belleville
Official website of Quinte West
Official website for the Region of Bay of Quinte
Central Ontario Quinte Bays of Lake Ontario
5
The bassoon is a musical instrument in the woodwind family, which plays in the tenor and bass ranges. It is composed of six pieces, and is usually made of wood. It is known for its distinctive tone color, wide range, versatility, and virtuosity. It is a non-transposing instrument and typically its music is written in the bass and tenor clefs, and sometimes in the treble. There are two forms of modern bassoon: the Buffet (or French) and Heckel (or German) systems. It is typically played while sitting using a seat strap, but can be played while standing if the player has a harness to hold the instrument. Sound is produced by rolling both lips over the reed and blowing direct air pressure to cause the reed to vibrate. Its fingering system can be quite complex when compared to those of other instruments. Appearing in its modern form in the 19th century, the bassoon figures prominently in orchestral, concert band, and chamber music literature, and is occasionally heard in pop, rock, and jazz settings as well. One who plays a bassoon is called a bassoonist. Etymology The word bassoon comes from French and from Italian ( with the augmentative suffix ). However, the Italian name for the same instrument is , in Spanish, Dutch, Czech and Romanian it is , and in German . Fagot is an Old French word meaning a bundle of sticks. The dulcian came to be known as fagotto in Italy. However, the usual etymology that equates fagotto with "bundle of sticks" is somewhat misleading, as the latter term did not come into general use until later. However an early English variation, "faget", was used as early as 1450 to refer to firewood, which is 100 years before the earliest recorded use of the dulcian (1550). Further citation is needed to prove the lack of relation between the meaning "bundle of sticks" and "fagotto" (Italian) or variants. Some think that it may resemble the Roman fasces, a standard of bound sticks with an axe. A further discrepancy lies in the fact that the dulcian was carved out of a single block of wood—in other words, a single "stick" and not a bundle. Characteristics Range The range of the bassoon begins at B1 (the first one below the bass staff) and extends upward over three octaves, roughly to the G above the treble staff (G5). However, most writing for bassoon rarely calls for notes above C5 or D5; even Stravinsky's opening solo in The Rite of Spring only ascends to D5. Notes higher than this are possible, but seldom written, as they are difficult to produce (often requiring specific reed design features to ensure reliability), and at any rate are quite homogeneous in timbre to the same pitches on cor anglais, which can produce them with relative ease. French bassoon has greater facility in the extreme high register, and so repertoire written for it is somewhat likelier to include very high notes, although repertoire for French system can be executed on German system without alterations and vice versa. The extensive high register of the bassoon and its frequent role as a lyric tenor have meant that tenor clef is very commonly employed in its literature after the Baroque, partly to avoid excessive ledger lines, and, beginning in the 20th century, treble clef is also seen for similar reasons. Like other woodwind instruments, the lowest note is fixed, but A1 is possible with a special extension to the instrument—see "Extended techniques" below. 
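The pitch names above can be made more concrete with a short, illustrative calculation. The following sketch is not from the article; it simply converts scientific pitch notation to approximate frequencies, assuming 12-tone equal temperament and a tuning reference of A4 = 440 Hz (both are assumptions made only for this example), to show roughly how wide the span from B♭1 up to G5 is.

```python
# Illustrative sketch only: approximate frequencies for the pitches named above,
# assuming 12-tone equal temperament and A4 = 440 Hz (neither is specified in the text).

NOTE_OFFSETS = {
    "C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4, "F": 5,
    "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11,
}

def frequency(note: str, octave: int, a4: float = 440.0) -> float:
    """Equal-tempered frequency of a pitch in scientific notation, e.g. ("Bb", 1)."""
    semitone = NOTE_OFFSETS[note] + 12 * (octave + 1)  # MIDI-style numbering, A4 = 69
    return a4 * 2 ** ((semitone - 69) / 12)

# Rough extent of the written range discussed above (values in Hz).
for name, octave in [("Bb", 1), ("A", 1), ("C", 5), ("G", 5)]:
    print(f"{name}{octave}: {frequency(name, octave):.2f} Hz")
```

Run as-is, this prints roughly 58.27 Hz for B♭1, 55.00 Hz for the extended A1, 523.25 Hz for C5, and 783.99 Hz for G5, which makes the span of over three octaves easy to see at a glance.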
Although the primary tone hole pitches are pitched a perfect 5th lower than other non-transposing Western woodwinds (effectively an octave beneath the English horn), the bassoon is non-transposing, meaning that notes sounded match the written pitch. Construction The bassoon disassembles into six main pieces, including the reed. The bell (6), extending upward; the bass joint (or long joint) (5), connecting the bell and the boot; the boot (or butt) (4), at the bottom of the instrument and folding over on itself; the wing joint (or tenor joint) (3), which extends from boot to bocal; and the bocal (or crook) (2), a crooked metal tube that attaches the wing joint to a reed (1). Structure The bore of the bassoon is conical, like that of the oboe and the saxophone, and the two adjoining bores of the boot joint are connected at the bottom of the instrument with a U-shaped metal connector. Both bore and tone holes are precision-machined, and each instrument is finished by hand for proper tuning. The walls of the bassoon are thicker at various points along the bore; here, the tone holes are drilled at an angle to the axis of the bore, which reduces the distance between the holes on the exterior. This ensures coverage by the fingers of the average adult hand. Playing is facilitated by closing the distance between the widely spaced holes with a complex system of key work, which extends throughout nearly the entire length of the instrument. The overall height of the bassoon stretches to tall, but the total sounding length is considering that the tube is doubled back on itself. There are also short-reach bassoons made for the benefit of young or petite players. Materials A modern beginner's bassoon is generally made of maple, with medium-hardness types such as sycamore maple and sugar maple preferred. Less-expensive models are also made of materials such as polypropylene and ebonite, primarily for student and outdoor use. Metal bassoons were made in the past but have not been produced by any major manufacturer since 1889. Reeds The art of reed-making has been practiced for several hundred years, some of the earliest known reeds having been made for the dulcian, a predecessor of the bassoon. Reed-making today follows a set of basic methods; however, individual bassoonists' playing styles vary greatly and thus require that reeds be customized to best suit their respective bassoonist. Advanced players usually make their own reeds to this end. With regard to commercially made reeds, many companies and individuals offer pre-made reeds for sale, but players often find that such reeds still require adjustments to suit their particular playing style. Modern bassoon reeds, made of Arundo donax cane, are often made by the players themselves, although beginner bassoonists tend to buy their reeds from professional reed makers or use reeds made by their teachers. Reeds begin with a length of tube cane that is split into three or four pieces using a tool called a cane splitter. The cane is then trimmed and gouged to the desired thickness, leaving the bark attached. After soaking, the gouged cane is cut to the proper shape and milled to the desired thickness, or profiled, by removing material from the bark side. This can be done by hand with a file; more frequently it is done with a machine or tool designed for the purpose. After the profiled cane has soaked once again it is folded over in the middle.
Prior to soaking, the reed maker will have lightly scored the bark with parallel lines with a knife; this ensures that the cane will assume a cylindrical shape during the forming stage. On the bark portion, the reed maker binds on one, two, or three coils or loops of brass wire to aid in the final forming process. The exact placement of these loops can vary somewhat depending on the reed maker. The bound reed blank is then wrapped with thick cotton or linen thread to protect it, and a conical steel mandrel (which sometimes has been heated in a flame) is quickly inserted in between the blades. Using a special pair of pliers, the reed maker presses down the cane, making it conform to the shape of the mandrel. (The steam generated by the heated mandrel causes the cane to permanently assume the shape of the mandrel.) The upper portion of the cavity thus created is called the "throat", and its shape has an influence on the final playing characteristics of the reed. The lower, mostly cylindrical portion will be reamed out with a special tool called a reamer, allowing the reed to fit on the bocal. After the reed has dried, the wires are tightened around the reed, which has shrunk after drying, or replaced completely. The lower part is sealed (a nitrocellulose-based cement such as Duco may be used) and then wrapped with thread to ensure both that no air leaks out through the bottom of the reed and that the reed maintains its shape. The wrapping itself is often sealed with Duco or clear nail varnish (polish). Electrical tape can also be used as a wrapping for amateur reed makers. The bulge in the wrapping is sometimes referred to as the "Turk's head"—it serves as a convenient handle when inserting the reed on the bocal. Alternatively, hot glue, epoxy, or heat shrink wrap may be used to seal the tube of the reed. The thread wrapping (commonly known as a "Turban" due to the criss-crossing fabric) is still more common in commercially sold reeds. To finish the reed, the end of the reed blank, originally at the center of the unfolded piece of cane, is cut off, creating an opening. The blades above the first wire are now roughly long. For the reed to play, a slight bevel must be created at the tip with a knife, although there is also a machine that can perform this function. Other adjustments with the reed knife may be necessary, depending on the hardness, the profile of the cane, and the requirements of the player. The reed opening may also need to be adjusted by squeezing either the first or second wire with the pliers. Additional material may be removed from the sides (the "channels") or tip to balance the reed. Additionally, if the "e" in the bass clef staff is sagging in pitch, it may be necessary to "clip" the reed by removing from its length using a pair of very sharp scissors or the equivalent. History Origin Music historians generally consider the dulcian to be the forerunner of the modern bassoon, as the two instruments share many characteristics: a double reed fitted to a metal crook, obliquely drilled tone holes and a conical bore that doubles back on itself. The origins of the dulcian are obscure, but by the mid-16th century it was available in as many as eight different sizes, from soprano to great bass. A full consort of dulcians was a rarity; its primary function seems to have been to provide the bass in the typical wind band of the time, either loud (shawms) or soft (recorders), indicating a remarkable ability to vary dynamics to suit the need. 
Otherwise, dulcian technique was rather primitive, with eight finger holes and two keys, indicating that it could play in only a limited number of key signatures. Circumstantial evidence indicates that the baroque bassoon was a newly invented instrument, rather than a simple modification of the old dulcian. The dulcian was not immediately supplanted, but continued to be used well into the 18th century by Bach and others; and, presumably for reasons of interchangeability, repertoire from this time is very unlikely to go beyond the smaller compass of the dulcian. The man most likely responsible for developing the true bassoon was Martin Hotteterre (d.1712), who may also have invented the three-piece flûte traversière (transverse flute) and the hautbois (baroque oboe). Some historians believe that sometime in the 1650s, Hotteterre conceived the bassoon in four sections (bell, bass joint, boot and wing joint), an arrangement that allowed greater accuracy in machining the bore compared to the one-piece dulcian. He also extended the compass down to B by adding two keys. An alternate view maintains Hotteterre was one of several craftsmen responsible for the development of the early bassoon. These may have included additional members of the Hotteterre family, as well as other French makers active around the same time. No original French bassoon from this period survives, but if it did, it would most likely resemble the earliest extant bassoons of Johann Christoph Denner and Richard Haka from the 1680s. Sometime around 1700, a fourth key (G♯) was added, and it was for this type of instrument that composers such as Antonio Vivaldi, Bach, and Georg Philipp Telemann wrote their demanding music. A fifth key, for the low E, was added during the first half of the 18th century. Notable makers of the 4-key and 5-key baroque bassoon include J.H. Eichentopf (c. 1678–1769), J. Poerschmann (1680–1757), Thomas Stanesby, Jr. (1668–1734), G.H. Scherer (1703–1778), and Prudent Thieriot (1732–1786). Modern configuration Increasing demands on capabilities of instruments and players in the 19th century—particularly larger concert halls requiring greater volume and the rise of virtuoso composer-performers—spurred further refinement. Increased sophistication, both in manufacturing techniques and acoustical knowledge, made possible great improvements in the instrument's playability. The modern bassoon exists in two distinct primary forms, the Buffet (or "French") system and the Heckel ("German") system. Most of the world plays the Heckel system, while the Buffet system is primarily played in France, Belgium, and parts of Latin America. A number of other types of bassoons have been constructed by various instrument makers, such as the rare Galandronome. Owing to the ubiquity of the Heckel system in English-speaking countries, references in English to the contemporary bassoon always mean the Heckel system, with the Buffet system being explicitly qualified where it appears. Heckel (German) system The design of the modern bassoon owes a great deal to the performer, teacher, and composer Carl Almenräder. Assisted by the German acoustic researcher Gottfried Weber, he developed the 17-key bassoon with a range spanning four octaves. Almenräder's improvements to the bassoon began with an 1823 treatise describing ways of improving intonation, response, and technical ease of playing by augmenting and rearranging the keywork. Subsequent articles further developed his ideas. 
His employment at Schott gave him the freedom to construct and test instruments according to these new designs, and he published the results in Caecilia, Schott's house journal. Almenräder continued publishing and building instruments until his death in 1846, and Ludwig van Beethoven himself requested one of the newly made instruments after hearing of the papers. In 1831, Almenräder left Schott to start his own factory with a partner, Johann Adam Heckel. Heckel and two generations of descendants continued to refine the bassoon, and their instruments became the standard, with other makers following. Because of their superior singing tone quality (an improvement upon one of the main drawbacks of the Almenräder instruments), the Heckel instruments competed for prominence with the reformed Wiener system, a Boehm-style bassoon, and a completely keyed instrument devised by Charles-Joseph Sax, father of Adolphe Sax. F.W. Kruspe implemented a latecomer attempt in 1893 to reform the fingering system, but it failed to catch on. Other attempts to improve the instrument included a 24-keyed model and a single-reed mouthpiece, but both these had adverse effects on tone and were abandoned. Coming into the 20th century, the Heckel-style German model of bassoon dominated the field. Heckel himself had made over 1,100 instruments by the turn of the 20th century (serial numbers begin at 3,000), and the British makers' instruments were no longer desirable for the changing pitch requirements of the symphony orchestra, remaining primarily in military band use. Except for a brief 1940s wartime conversion to ball bearing manufacture, the Heckel concern has produced instruments continuously to the present day. Heckel bassoons are considered by many to be the best, although a range of Heckel-style instruments is available from several other manufacturers, all with slightly different playing characteristics. Because its mechanism is primitive compared to most modern woodwinds, makers have occasionally attempted to "reinvent" the bassoon. In the 1960s, Giles Brindley began to develop what he called the "logical bassoon", which aimed to improve intonation and evenness of tone through use of an electrically activated mechanism, making possible key combinations too complex for the human hand to manage. Brindley's logical bassoon was never marketed. Buffet (French) system The Buffet system bassoon achieved its basic acoustical properties somewhat earlier than the Heckel. Thereafter, it continued to develop in a more conservative manner. While the early history of the Heckel bassoon included a complete overhaul of the instrument in both acoustics and key work, the development of the Buffet system consisted primarily of incremental improvements to the key work. This minimalist approach of the Buffet deprived it of improved consistency of intonation, ease of operation, and increased power, which is found in Heckel bassoons, but the Buffet is considered by some to have a more vocal and expressive quality. The conductor John Foulds lamented in 1934 the dominance of the Heckel-style bassoon, considering them too homogeneous in sound with the horn. The modern Buffet system has 22 keys with its range being the same as the Heckel; although Buffet instruments have greater facility in the upper registers, reaching E5 and F5 with far greater ease and less air resistance. 
Compared to the Heckel bassoon, Buffet system bassoons have a narrower bore and simpler mechanism, requiring different, and often more complex fingerings for many notes. Switching between Heckel and Buffet, or vice versa, requires extensive retraining. French woodwind instruments' tone in general exhibits a certain amount of "edge", with more of a vocal quality than is usual elsewhere, and the Buffet bassoon is no exception. This sound has been utilised effectively in writing for Buffet bassoon, but is less inclined to blend than the tone of the Heckel bassoon. As with all bassoons, the tone varies considerably, depending on individual instrument, reed, and performer. In the hands of a lesser player, the Heckel bassoon can sound flat and woody, but good players succeed in producing a vibrant, singing tone. Conversely, a poorly played Buffet can sound buzzy and nasal, but good players succeed in producing a warm, expressive sound. Though the United Kingdom once favored the French system, Buffet-system instruments are no longer made there and the last prominent British player of the French system retired in the 1980s. However, with continued use in some regions and its distinctive tone, the Buffet continues to have a place in modern bassoon playing, particularly in France, where it originated. Buffet-model bassoons are currently made in Paris by Buffet Crampon and the atelier Ducasse (Romainville, France). The Selmer Company stopped fabrication of French system bassoons around the year 2012. Some players, for example the late Gerald Corey in Canada, have learned to play both types and will alternate between them depending on the repertoire. Use in ensembles Ensembles prior to the 20th century Pre-1760 Prior to 1760, the early ancestor of the bassoon was the dulcian. It was used to reinforce the bass line in wind ensembles called consorts. However, its use in concert orchestras was sporadic until the late 17th century when double reeds began to make their way into standard instrumentation. Increasing use of the dulcian as a basso continuo instrument meant that it began to be included in opera orchestras, in works such as those by Reinhard Keiser and Jean-Baptiste Lully. Meanwhile, as the dulcian advanced technologically and was able to achieve more virtuosity, composers such as Joseph Bodin de Boismortier, Johann Ernst Galliard, Johann Friedrich Fasch and Georg Philipp Telemann wrote demanding solo and ensemble music for the instrument. Antonio Vivaldi brought it to prominence by featuring it in thirty-nine concerti. c. 1760–1830 While the bassoon was still often used to give clarity to the bassline due to its sonorous low register, the capabilities of wind instruments grew as technology advanced during the Classical era. This allowed the instrument to play in more keys than the dulcian. Joseph Haydn took advantage of this in his Symphony No. 45 ("Farewell Symphony"), in which the bassoon plays in F-sharp minor. Following with these advances, composers also began to exploit the bassoon for its unique color, flexibility, and virtuosic ability, rather than for its perfunctory ability to double the bass line. Those who did this include Ludwig van Beethoven in his three Duos for Clarinet and Bassoon (WoO 27) for clarinet and bassoon and Niccolo Paganini in his duets for violin and bassoon. In his Bassoon Concerto in B-flat major, K. 191, W. A. 
Mozart utilized all aspects of the bassoon's expressiveness with its contrasts in register, staccato playing, and expressive sound; the work was especially noted for its singing quality in the second movement. This concerto is often considered one of the most important works in all of the bassoon's repertoire, even today. The bassoon's similarity to the human voice, in addition to its newfound virtuosic ability, was another quality many composers took advantage of during the classical era. After 1730, the German bassoon's range extended up to B♭4, and much higher with the French instrument. Technological advances also caused the bassoon's tenor register sound to become more resonant, and playing in this register grew in popularity, especially in the Austro-Germanic musical world. Pedagogues such as Josef Frohlich instructed students to practice scales, thirds, and fourths as vocal students would. In 1829, he wrote that the bassoon was capable of expressing "the worthy, the virile, the solemn, the great, the sublime, composure, mildness, intimacy, emotion, longing, heartfulness, reverence, and soulful ardour." In G.F. Brandt's performance of Carl Maria von Weber's Concerto for Bassoon in F Major, Op. 75 (J. 127), it was also likened to the human voice. In France, Pierre Cugnier described the bassoon's role as encompassing not only the bass part but also accompanying the voice and harp, playing in pairs with clarinets and horns in Harmonie, and playing in "nearly all types of music," including concerti, which were much more common than the sonatas of the previous era. Both Cugnier and Étienne Ozi emphasized the importance of the bassoon's similarity to the singing voice. The role of the bassoon in the orchestra varied depending on the country. In the Viennese orchestra the instrument offered a three-dimensional sound to the ensemble by doubling other instruments such as violins, as heard in Mozart's overture to The Marriage of Figaro, K. 492, where it plays a rather technical part alongside the strings. Mozart also wrote for the bassoon to change its timbre depending on which instrument it was paired with; warmer with clarinets, hollow with flutes, and dark and dignified with violins. In Germany and the Scandinavian countries, orchestras typically featured only two bassoons, but in France, orchestras increased the number to four in the latter half of the nineteenth century. In England, the bassoonist's role varied depending on the ensemble. Johann Christian Bach wrote two concertos for solo bassoon, and it also appeared in more supportive roles such as accompanying church choirs after the Puritan revolution destroyed most church organs. In the American colonies, the bassoon was typically seen in a chamber setting. After the Revolutionary War, bassoonists were found in wind bands that gave public performances. By 1800, there was at least one bassoon in the United States Marine Band. In South America, the bassoon also appeared in small orchestras, bands, and military musique (similar to Harmonie ensembles). c. 1830–1900 The role of the bassoon during the Romantic era varied between that of a supportive bass instrument and that of a virtuosic, expressive solo instrument. In fact, it was very much considered an instrument that could be used in almost any circumstance. The comparison of the bassoon's sound to the human voice continued during this time, as much of the pedagogy centred on emulating this sound.
Giuseppe Verdi used the instrument's lyrical, singing voice to evoke emotion in pieces such as his Messa da Requiem. Eugene Jancourt compared the use of vibrato on the bassoon to that of singers, and Luigi Orselli wrote that the bassoon blended well with the human voice. He also noted the function of the bassoon in the French orchestra at the time, which served to support the sound of the viola, reinforce staccato sound, and double the bass, clarinet, flute, and oboe. Emphasis also began to be placed on the unique sound of the bassoon's staccato, which might be described as quite short and aggressive, such as in the fifth movement of Hector Berlioz's Symphonie fantastique, Op. 14. Paul Dukas utilized the staccato to depict the image of two brooms coming to life in The Sorcerer's Apprentice. It was common for there to be only two bassoons in German orchestras. Austrian and British military bands also carried only two bassoons, which were mainly used for accompaniment and offbeat playing. In France, Hector Berlioz made it fashionable to use more than two bassoons; he often scored for three or four, and at times wrote for up to eight, such as in his L'Impériale. At this point, composers expected bassoons to be as virtuosic as the other wind instruments, as they often wrote solos challenging the range and technique of the instrument. Examples of this include Nikolai Rimsky-Korsakov's bassoon solo and cadenza following the clarinet in Sheherazade, Op. 35, and in Richard Wagner's Tannhäuser, which required the bassoonist to triple tongue and also play up to the top of its range at an E5. Wagner also used the bassoon for its staccato ability in his work, and often wrote his three bassoon parts in thirds to evoke a darker sound with noticeable tone color. In Modest Mussorgsky's Night on Bald Mountain, the bassoons play fortissimo alongside other bass instruments in order to evoke "the voice of the Devil." 20th and 21st century ensembles At this point in time, the development of the bassoon slowed. Rather than making large leaps in technology, makers corrected tiny imperfections in the instrument's function. The instrument became quite versatile throughout the twentieth century; by this point it could play three octaves and a variety of trills, and it maintained stable intonation across all registers and dynamic levels. The pedagogy among bassoonists varied from country to country, and so the instrument itself played a variety of roles. As was a common theme in previous eras, the bassoon was valued by composers for its unique voice, and its use rose higher in pitch. A famous example of this is in Igor Stravinsky's Rite of Spring, in which the bassoon must play in its highest register in order to mimic the Russian dudka. Composers also wrote for the bassoon's middle register, such as in Stravinsky's "Berceuse" in The Firebird and Jean Sibelius's Symphony No. 5 in E-flat major, op. 82. They also continued to highlight the staccato sound of the bassoon, as heard in Sergei Prokofiev's Humorous Scherzo. In Prokofiev's Peter and the Wolf, the part of the grandfather is played by the bassoon. In orchestral settings, most orchestras from the beginning of the twentieth century to the present have three or four bassoonists, with the fourth typically covering contrabassoon as well. Greater emphasis on the use of timbre, vibrato, and phrasing began to appear in bassoon pedagogy, and many followed Marcel Tabuteau's philosophy on musical phrasing.
Vibrato began to be used in ensemble playing, depending on the phrasing of the music. The bassoon was, and currently is, expected to match the other woodwinds in terms of virtuosity and technique. Examples of this include the cadenza for bassoons in Maurice Ravel's Rapsodie espagnole and the multi-finger trills used in Stravinsky's Octet. In the twentieth century, the bassoon was less of a concerto soloist, and when it was, the accompanying ensemble was made softer and quieter. In addition, it was no longer used in marching bands, though it was still found in concert bands, typically with one or two players. Orchestral repertoire remained very much the same Austro-Germanic tradition throughout most Western countries. It mostly appeared in solo, chamber, and symphonic settings. By the mid-1900s, broadcasting and recording grew in popularity, allowing for new opportunities for bassoonists, and leading to a slow decline of live performances. Much of the new music for bassoon in the late twentieth and early twenty-first centuries included extended techniques and was written for solo or chamber settings. One piece that included extended techniques was Luciano Berio's Sequenza XII, which called for microtonal fingerings, glissandos, and timbral trills. Double and triple tonguing, flutter tonguing, multiphonics, quarter-tones, and singing are all utilized in Bruno Bartolozzi's Concertazioni. A variety of concerti and pieces for bassoon and piano were also written, such as John Williams's Five Sacred Trees and André Previn's Sonata for bassoon and piano. There were also "performance" pieces such as Peter Schickele's Sonata Abassoonata, which required the bassoonist to be both a musician and an actor. The bassoon quartet became prominent at this time, with pieces such as Daniel Dorff's It Takes Four to Tango. Jazz The bassoon is infrequently used as a jazz instrument and rarely seen in a jazz ensemble. It first began appearing in the 1920s, when Garvin Bushell began incorporating the bassoon in his performances. Specific calls for its use occurred in Paul Whiteman's group, the unusual octets of Alec Wilder, and a few other session appearances. The next few decades saw the instrument used only sporadically, as symphonic jazz fell out of favor, but the 1960s saw artists such as Yusef Lateef and Chick Corea incorporate bassoon into their recordings. Lateef's diverse and eclectic instrumentation saw the bassoon as a natural addition (see, e.g., The Centaur and the Phoenix (1960), which features bassoon as part of a 6-man horn section, including a few solos), while Corea employed the bassoon in combination with flautist Hubert Laws. More recently, Illinois Jacquet, Ray Pizzi, Frank Tiberi, and Marshall Allen have all doubled on bassoon in addition to their saxophone performances. Bassoonist Karen Borca, a performer of free jazz, is one of the few jazz musicians to play only bassoon; Michael Rabinowitz, the Spanish bassoonist Javier Abad, and James Lassen, an American resident in Bergen, Norway, are others. Katherine Young plays the bassoon in the ensembles of Anthony Braxton. Lindsay Cooper, Paul Hanson, the Brazilian bassoonist Alexandre Silvério, Trent Jacobs and Daniel Smith are also currently using the bassoon in jazz. French bassoonists Jean-Jacques Decreux and Alexandre Ouzounoff have both recorded jazz, exploiting the flexibility of the Buffet system instrument to good effect.
Popular music In conjunction with the use of electronic pickups and amplification, the instrument began to be used somewhat more in jazz and rock settings. However, the bassoon is still quite rare as a regular member of rock bands. Several 1960s pop music hits feature the bassoon, including "The Tears of a Clown" by Smokey Robinson and the Miracles (the bassoonist was Charles R. Sirard), "Jennifer Juniper" by Donovan, "59th Street Bridge Song" by Harpers Bizarre, and the oompah bassoon underlying The New Vaudeville Band's "Winchester Cathedral". From 1974 to 1978, the bassoon was played by Lindsay Cooper in the British avant-garde band Henry Cow. The Leonard Nimoy song "The Ballad of Bilbo Baggins" features the bassoon. In the 1970s it was played by Brian Gulland in the British medieval/progressive rock band Gryphon, and by drummer Burleigh Drummond in the American band Ambrosia. The Belgian Rock in Opposition band Univers Zero is also known for its use of the bassoon. More recently, These New Puritans' 2010 album Hidden makes heavy use of the instrument throughout; their principal songwriter, Jack Barnett, claimed repeatedly to be "writing a lot of music for bassoon" in the run-up to its recording. The rock band Better Than Ezra took their name from a passage in Ernest Hemingway's A Moveable Feast in which the author comments that listening to an annoyingly talkative person is still "better than Ezra learning how to play the bassoon", referring to Ezra Pound. British psychedelic/progressive rock band Knifeworld features the bassoon playing of Chloe Herrington, who also plays for experimental chamber rock orchestra Chrome Hoof. Fiona Apple featured the bassoon in the opening track of her 2004 album Extraordinary Machine. In 2016, the bassoon was featured on the album Gang Signs and Prayers by UK "grime" artist Stormzy. Played by UK bassoonist Louise Watson, the bassoon is heard in the tracks "Cold" and "Mr Skeng" as a complement to the electronic synthesizer bass lines typically found in this genre. Appearance in Television The Cartoon Network animated series Over the Garden Wall features a bassoon in episode 6, entitled "Lullaby in Frogland", where the main character is encouraged to play the bassoon to impress a group of frogs. The character Jan Bellows in the Hulu series Only Murders in the Building is a professional bassoonist. Technique The bassoon is held diagonally in front of the player, but unlike the flute, oboe and clarinet, it cannot be easily supported by the player's hands alone. Some means of additional support is usually required; the most common ones are a seat strap attached to the base of the boot joint, which is laid across the chair seat prior to sitting down, or a neck strap or shoulder harness attached to the top of the boot joint. Occasionally a spike similar to those used for the cello or the bass clarinet is attached to the bottom of the boot joint and rests on the floor. It is possible to play while standing up if the player uses a neck strap or similar harness, or if the seat strap is tied to the belt. Sometimes a device called a balance hanger is used when playing in a standing position. This is installed between the instrument and the neck strap, and shifts the point of support closer to the center of gravity, adjusting the distribution of weight between the two hands.
The bassoon is played with both hands in a stationary position, the left above the right, with five main finger holes on the front of the instrument (nearest the audience) plus a sixth that is activated by an open-standing key. Five additional keys on the front are controlled by the little fingers of each hand. The back of the instrument (nearest the player) has twelve or more keys to be controlled by the thumbs, the exact number varying depending on model. To stabilize the right hand, many bassoonists use an adjustable comma-shaped apparatus called a "crutch", or a hand rest, which mounts to the boot joint. The crutch is secured with a thumb screw, which also allows the distance that it protrudes from the bassoon to be adjusted. Players rest the curve of the right hand where the thumb joins the palm against the crutch. The crutch also keeps the right hand from tiring and enables the player to keep the finger pads flat on the finger holes and keys. An aspect of bassoon technique not found on any other woodwind is called flicking. It involves the left hand thumb momentarily pressing, or "flicking", the high A, C and D keys at the beginning of certain notes in the middle octave to achieve a clean slur from a lower note. This eliminates the cracking, or brief multiphonics, that can occur without the use of this technique. Alternatively, a similar method is called "venting", which requires that the register key be used as part of the full fingering as opposed to being open momentarily at the start of the note. This is sometimes called the "European style"; venting raises the intonation of the notes slightly, and it can be advantageous when tuning to higher frequencies. Some bassoonists flick A and B when tongued, for clarity of articulation, but flicking (or venting) is practically ubiquitous for slurs. While flicking is used to slur up to higher notes, the whisper key is used for lower notes. From the A right below middle C and lower, the whisper key is pressed with the left thumb and held for the duration of the note. This prevents cracking, as low notes can sometimes crack into a higher octave. Both flicking and using the whisper key are especially important to ensure notes speak properly during slurring between high and low registers. While bassoons are usually critically tuned at the factory, the player nonetheless has a great degree of flexibility of pitch control through the use of breath support, embouchure, and reed profile. Players can also use alternate fingerings to adjust the pitch of many notes. As with other woodwind instruments, the length of the bassoon can be increased to lower pitch or decreased to raise pitch. On the bassoon, this is done preferably by changing the bocal to one of a different length (lengths are denoted by a number on the bocal, usually starting at 0 for the shortest length and 3 for the longest, though some manufacturers use other numbers), but it is possible to push the bocal in or out slightly to grossly adjust the pitch. Embouchure and sound production The bassoon embouchure is a very important aspect of producing a full, round, and rich sound on the instrument. The lips are both rolled over the teeth, often with the upper lip further along in an "overbite". The lips provide micromuscular pressure on the entire circumference of the reed, which grossly controls intonation and harmonic excitement, and thus must be constantly modulated with every change of note.
How far along the reed the lips are placed affects both tone (with less reed in the mouth making the sound more edged or "reedy", and more reed making it smooth and less projectile) and the way the reed will respond to pressure. The musculature employed in a bassoon embouchure is primarily around the lips, which pressure the reed into the shapes needed for the desired sound. The jaw is raised or lowered to adjust the oral cavity for better reed control, but the jaw muscles are used much less for upward vertical pressure than in single reeds, only being substantially employed in the very high register. However, double reed students often "bite" the reed with these muscles because the control and tone of the labial and other muscles is still developing, but this generally makes the sound sharp and "choked" as it contracts the aperture of the reed and stifles the vibration of its blades. Apart from the embouchure proper, students must also develop substantial muscle tone and control in the diaphragm, throat, neck and upper chest, which are all employed to increase and direct air pressure. Air pressure is a very important aspect of the tone, intonation and projection of double reed instruments, affecting these qualities as much, or more than the embouchure does. Attacking a note on the bassoon with imprecise amounts of muscle or air pressure for the desired pitch will result in poor intonation, cracking or multiphonics, accidentally producing the incorrect partial, or the reed not speaking at all. These problems are compounded by the individual qualities of reeds, which are categorically inconsistent in behaviour for inherent and exherent reasons. The muscle requirements and variability of reeds mean it takes some time for bassoonists (and oboists) to develop an embouchure that exhibits consistent control across all reeds, dynamics and playing environments. Modern fingering The fingering technique of the bassoon varies more between players, by a wide margin, than that of any other orchestral woodwind. The complex mechanism and acoustics mean the bassoon lacks simple fingerings of good sound quality or intonation for some notes (especially in the higher range), but, conversely, there is a great variety of superior, but generally more complicated, fingerings for them. Typically, the simpler fingerings for such notes are used as alternate or trill fingerings, and the bassoonist will use as "full fingering" one or several of the more complex executions possible, for optimal sound quality. The fingerings used are at the discretion of the bassoonist, and, for particular passages, he or she may experiment to find new alternate fingerings that are thus idiomatic to the player. These elements have resulted in both "full" and alternate fingerings differing extensively between bassoonists, and are further informed by factors such as cultural difference in what sound is sought, how reeds are made, and regional variation in tuning frequencies (necessitating sharper or flatter fingerings). Regional enclaves of bassoonists tend to have some uniformity in technique, but on a global scale, technique differs such that two given bassoonists may share no fingerings for certain notes. Owing to these factors, ubiquitous bassoon technique can only be partially notated. The left thumb operates nine keys: B1, B1, C2, D2, D5, C5 (also B4), two keys when combined create A4, and the whisper key. 
The whisper key should be held down for notes between and including F2 and G3 and certain other notes; it can be omitted, but the pitch will destabilise. Additional notes can be created with the left thumb keys; the D2 and bottom key above the whisper key on the tenor joint (C key) together create both C3 and C4. The same bottom tenor-joint key is also used, with additional fingering, to create E5 and F5. D5 and C5 together create C5. When the two keys on the tenor joint to create A4 are used with slightly altered fingering on the boot joint, B4 is created. The whisper key may also be used at certain points throughout the instrument's high register, along with other fingerings, to alter sound quality as desired. The right thumb operates four keys. The uppermost key is used to produce B2 and B3, and may be used in B4,F4, C5, D5, F5, and E5. The large circular key, otherwise known as the "pancake key", is held down for all the lowest notes from E2 down to B1. It is also used, like the whisper key, in additional fingerings for muting the sound. For example, in Ravel's "Boléro", the bassoon is asked to play the ostinato on G4. This is easy to perform with the normal fingering for G4, but Ravel directs that the player should also depress the E2 key (pancake key) to mute the sound (this being written with Buffet system in mind; the G fingering on which involves the Bb key – sometimes called "French" G on Heckel). The next key operated by the right thumb is known as the "spatula key": its primary use is to produce F2 and F3. The lowermost key is used less often: it is used to produce A2 (G2) and A3 (G3), in a manner that avoids sliding the right fourth finger from another note. The four fingers of the left hand can each be used in two different positions. The key normally operated by the index finger is primarily used for E5, also serving for trills in the lower register. Its main assignment is the upper tone hole. This hole can be closed fully, or partially by rolling down the finger. This half-holing technique is used to overblow F3, G3 and G3. The middle finger typically stays on the centre hole on the tenor joint. It can also move to a lever used for E5, also a trill key. The ring finger operates, on most models, one key. Some bassoons have an alternate E key above the tone hole, predominantly for trills, but many do not. The smallest finger operates two side keys on the bass joint. The lower key is typically used for C2, but can be used for muting or flattening notes in the tenor register. The upper key is used for E2, E4, F4, F4, A4, B4, B4, C5, C5, and D5; it flattens G3 and is the standard fingering for it in many places that tune to lower Hertz levels such as A440. The four fingers of the right hand have at least one assignment each. The index finger stays over one hole, except that when E5 is played a side key at the top of the boot is used (this key also provides a C3 trill, albeit sharp on D). The middle finger remains stationary over the hole with a ring around it, and this ring and other pads are lifted when the smallest finger on the right hand pushes a lever. The ring finger typically remains stationary on the lower ring-finger key. However, the upper ring-finger key can be used, typically for B2 and B3, in place of the top thumb key on the front of the boot joint; this key comes from the oboe, and some bassoons do not have it because the thumb fingering is practically universal. The smallest finger operates three keys. 
The backmost one, closest to the bassoonist, is held down throughout most of the bass register. F4 may be created with this key, as well as G4, B4, B4, and C5 (the latter three employing solely it to flatten and stabilise the pitch). The lowest key for the smallest finger on the right hand is primarily used for A2 (G2) and A3 (G3) but can be used to improve D5, E5, and F5. The frontmost key is used, in addition to the thumb key, to create G2 and G3; on many bassoons this key operates a different tone hole to the thumb key and produces a slightly flatter F ("duplicated F"); some techniques use one as standard for both octaves and the other for utility, but others use the thumb key for the lower and the fourth finger for the higher. Extended techniques Many extended techniques can be performed on the bassoon, such as multiphonics, flutter-tonguing, circular breathing, double tonguing, and harmonics. In the case of the bassoon, flutter-tonguing may be accomplished by "gargling" in the back of the throat as well as by the conventional method of rolling Rs. Multiphonics on the bassoon are plentiful, and can be achieved by using particular alternative fingerings, but are generally heavily influenced by embouchure position. Also, again using certain fingerings, notes may be produced on the instrument that sound lower pitches than the actual range of the instrument. These notes tend to sound very gravelly and out of tune, but technically sound below the low B. The bassoonist may also produce lower notes than the bottom B by extending the length of bell. This can be achieved by inserting a specially made "low A extension" into the bell, but may also be achieved with a small paper or rubber tube or a clarinet/cor anglais bell sitting inside the bassoon bell (although the note may tend sharp). The effect of this is to convert the lower B into a lower note, almost always A natural; this broadly lowers the pitch of the instrument (most noticeably in the lower register) and will often accordingly convert the lowest B to B (and render the neighbouring C very flat). The idea of using low A was begun by Richard Wagner, who wanted to extend the range of the bassoon. Many passages in his later operas require the low A as well as the B-flat immediately above it; this is possible on a normal bassoon using an extension which also flattens low B to B, but all extensions to the bell have significant effects on intonation and sound quality in the bottom register of the instrument, and passages such as this are more often realised with comparative ease by the contrabassoon. Some bassoons have been specially made to allow bassoonists to realize similar passages. These bassoons are made with a "Wagner bell" which is an extended bell with a key for both the low A and the low B-flat, but they are not widespread; bassoons with Wagner bells suffer similar intonational problems as a bassoon with an ordinary A extension, and a bassoon must be constructed specifically to accommodate one, making the extension option far less complicated. Extending the bassoon's range even lower than the A, though possible, would have even stronger effects on pitch and make the instrument effectively unusable. Despite the logistic difficulties of the note, Wagner was not the only composer to write the low A. Another composer who has required the bassoon to be chromatic down to low A is Gustav Mahler. Richard Strauss also calls for the low A in his opera Intermezzo. Some works have optional low As, as in Carl Nielsen's Wind Quintet, op. 
43, which includes an optional low A for the final cadence of the work. Learning the bassoon The complex fingering system and the expense and lack of access to quality bassoon reeds can make the bassoon more of a challenge to learn than some of the other woodwind instruments. Cost is another factor in a person's decision to pursue the bassoon. Prices may range from US$7,000 to over $45,000 for a high-quality instrument. In North America, schoolchildren may take up bassoon only after starting on another reed instrument, such as clarinet or saxophone. Students in America often begin to pursue the study of bassoon performance and technique in the middle years of their music education, usually in association with their school band program. Students are often provided with a school instrument and encouraged to pursue lessons with private instructors. Students typically receive instruction in proper posture, hand position, embouchure, repertoire, and tone production. See also: List of bassoonists; Bassoon makers; Bassoon repertoire; International Double Reed Society; British Double Reed Society.
12
Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped , meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping. Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently. A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally. Etymology The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'. Advantages Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater field of vision with improved detection of distant dangers or resources, access to deeper water for wading animals and allows the animals to reach higher food sources with their mouths. While upright, non-locomotory limbs become free for other uses, including manipulation (in primates and rodents), flight (in birds), digging (in the giant pangolin), combat (in bears, great apes and the large monitor lizard) or camouflage. The maximum bipedal speed appears slower than the maximum speed of quadrupedal movement with a flexible backbone – both the ostrich and the red kangaroo can reach speeds of , while the cheetah can exceed . Even though bipedalism is slower at first, over long distances, it has allowed humans to outrun most other animals according to the endurance running hypothesis. Bipedality in kangaroo rats has been hypothesized to improve locomotor performance, which could aid in escaping from predators. Facultative and obligate bipedalism Zoologists often label behaviors, including bipedalism, as "facultative" (i.e. optional) or "obligate" (the animal has no reasonable alternative). Even this distinction is not completely clear-cut — for example, humans other than infants normally walk and run in biped fashion, but almost all can crawl on hands and knees when necessary. There are even reports of humans who normally walk on all fours with their feet but not their knees on the ground, but these cases are a result of conditions such as Uner Tan syndrome — very rare genetic neurological disorders rather than normal behavior. 
Even if one ignores exceptions caused by some kind of injury or illness, there are many unclear cases, including the fact that "normal" humans can crawl on hands and knees. This article therefore avoids the terms "facultative" and "obligate", and focuses on the range of styles of locomotion normally used by various groups of animals. Normal humans may be considered "obligate" bipeds because the alternatives are very uncomfortable and usually only resorted to when walking is impossible. Movement There are a number of states of movement commonly associated with bipedalism. Standing. Staying still on both legs. In most bipeds this is an active process, requiring constant adjustment of balance. Walking. One foot in front of another, with at least one foot on the ground at any time. Running. One foot in front of another, with periods where both feet are off the ground. Jumping/hopping. Moving by a series of jumps with both feet moving together. Bipedal animals The great majority of living terrestrial vertebrates are quadrupeds, with bipedalism exhibited by only a handful of living groups. Humans, gibbons and large birds walk by raising one foot at a time. On the other hand, most macropods, smaller birds, lemurs and bipedal rodents move by hopping on both legs simultaneously. Tree kangaroos are able to walk or hop, most commonly alternating feet when moving arboreally and hopping on both feet simultaneously when on the ground. Extant reptiles Many species of lizards become bipedal during high-speed, sprint locomotion, including the world's fastest lizard, the spiny-tailed iguana (genus Ctenosaura). Early reptiles and lizards The first known biped is the bolosaurid Eudibamus whose fossils date from 290 million years ago. Its long hind-legs, short forelegs, and distinctive joints all suggest bipedalism. The species became extinct in the early Permian. Archosaurs (includes crocodilians and dinosaurs) Birds All birds are bipeds, as is the case for all theropod dinosaurs. However, hoatzin chicks have claws on their wings which they use for climbing. Other archosaurs Bipedalism evolved more than once in archosaurs, the group that includes both dinosaurs and crocodilians. All dinosaurs are thought to be descended from a fully bipedal ancestor, perhaps similar to Eoraptor. Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian-Triassic extinction event wiped out an estimated 95 percent of all life on Earth. Radiometric dating of fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists suspect Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Bipedal movement also re-evolved in a number of other dinosaur lineages such as the iguanodonts. Some extinct members of Pseudosuchia, a sister group to the avemetatarsalians (the group including dinosaurs and relatives), also evolved bipedal forms – a poposauroid from the Triassic, Effigia okeeffeae, is thought to have been bipedal. Pterosaurs were previously thought to have been bipedal, but recent trackways have all shown quadrupedal locomotion. 
Mammals A number of groups of extant mammals have independently evolved bipedalism as their main form of locomotion - for example humans, giant pangolins, the extinct giant ground sloths, numerous species of jumping rodents and macropods. Humans, as their bipedalism has been extensively studied, are documented in the next section. Macropods are believed to have evolved bipedal hopping only once in their evolution, at some time no later than 45 million years ago. Bipedal movement is less common among mammals, most of which are quadrupedal. All primates possess some bipedal ability, though most species primarily use quadrupedal locomotion on land. Primates aside, the macropods (kangaroos, wallabies and their relatives), kangaroo rats and mice, hopping mice and springhare move bipedally by hopping. Very few non-primate mammals commonly move bipedally with an alternating leg gait. Exceptions are the ground pangolin and in some circumstances the tree kangaroo. One black bear, Pedals, became famous locally and on the internet for having a frequent bipedal gait, although this is attributed to injuries on the bear's front paws. A two-legged fox was filmed in a Derbyshire garden in 2023, most likely having been born that way. Primates Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support. Chimpanzees, bonobos, gorillas, gibbons and baboons exhibit forms of bipedalism. On the ground sifakas move like all indrids with bipedal sideways hopping movements of the hind legs, holding their forelimbs up for balance. Geladas, although usually quadrupedal, will sometimes move between adjacent feeding patches with a squatting, shuffling bipedal form of locomotion. However, they can only do so for brief amounts, as their bodies are not adapted for constant bipedal locomotion. Humans are the only primates who are normally biped, due to an extra curve in the spine which stabilizes the upright position, as well as shorter arms relative to the legs than is the case for the nonhuman great apes. The evolution of human bipedalism began in primates about four million years ago, or as early as seven million years ago with Sahelanthropus or about 12 million years ago with Danuvius guggenmosi. One hypothesis for human bipedalism is that it evolved as a result of differentially successful survival from carrying food to share with group members, although there are alternative hypotheses. Injured individuals Injured chimpanzees and bonobos have been capable of sustained bipedalism. Three captive primates, one macaque Natasha and two chimps, Oliver and Poko (chimpanzee), were found to move bipedally. Natasha switched to exclusive bipedalism after an illness, while Poko was discovered in captivity in a tall, narrow cage. Oliver reverted to knuckle-walking after developing arthritis. Non-human primates often use bipedal locomotion when carrying food, or while moving through shallow water. Limited bipedalism Limited bipedalism in mammals Other mammals engage in limited, non-locomotory, bipedalism. 
A number of other animals, such as rats, raccoons, and beavers will squat on their hindlegs to manipulate some objects but revert to four limbs when moving (the beaver will move bipedally if transporting wood for their dams, as will the raccoon when holding food). Bears will fight in a bipedal stance to use their forelegs as weapons. A number of mammals will adopt a bipedal stance in specific situations such as for feeding or fighting. Ground squirrels and meerkats will stand on hind legs to survey their surroundings, but will not walk bipedally. Dogs (e.g. Faith) can stand or move on two legs if trained, or if birth defect or injury precludes quadrupedalism. The gerenuk antelope stands on its hind legs while eating from trees, as did the extinct giant ground sloth and chalicotheres. The spotted skunk will walk on its front legs when threatened, rearing up on its front legs while facing the attacker so that its anal glands, capable of spraying an offensive oil, face its attacker. Limited bipedalism in non-mammals (and non-birds) Bipedalism is unknown among the amphibians. Among the non-archosaur reptiles bipedalism is rare, but it is found in the "reared-up" running of lizards such as agamids and monitor lizards. Many reptile species will also temporarily adopt bipedalism while fighting. One genus of basilisk lizard can run bipedally across the surface of water for some distance. Among arthropods, cockroaches are known to move bipedally at high speeds. Bipedalism is rarely found outside terrestrial animals, though at least two types of octopus walk bipedally on the sea floor using two of their arms, allowing the remaining arms to be used to camouflage the octopus as a mat of algae or a floating coconut. Evolution of human bipedalism There are at least twelve distinct hypotheses as to how and why bipedalism evolved in humans, and also some debate as to when. Bipedalism evolved well before the large human brain or the development of stone tools. Bipedal specializations are found in Australopithecus fossils from 4.2 to 3.9 million years ago and recent studies have suggested that obligate bipedal hominid species were present as early as 7 million years ago. Nonetheless, the evolution of bipedalism was accompanied by significant evolutions in the spine including the forward movement in position of the foramen magnum, where the spinal cord leaves the cranium. Recent evidence regarding modern human sexual dimorphism (physical differences between male and female) in the lumbar spine has been seen in pre-modern primates such as Australopithecus africanus. This dimorphism has been seen as an evolutionary adaptation of females to bear lumbar load better during pregnancy, an adaptation that non-bipedal primates would not need to make. Adapting bipedalism would have required less shoulder stability, which allowed the shoulder and other limbs to become more independent of each other and adapt for specific suspensory behaviors. In addition to the change in shoulder stability, changing locomotion would have increased the demand for shoulder mobility, which would have propelled the evolution of bipedalism forward. The different hypotheses are not necessarily mutually exclusive and a number of selective forces may have acted together to lead to human bipedalism. It is important to distinguish between adaptations for bipedalism and adaptations for running, which came later still. The form and function of modern-day humans' upper bodies appear to have evolved from living in a more forested setting. 
In such an environment, the ability to travel arboreally would still have been advantageous. It has also been proposed that, like some modern-day apes, early hominins had undergone a knuckle-walking stage prior to adapting the back limbs for bipedality while retaining forearms capable of grasping. Proposed causes for the evolution of human bipedalism include freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye-position, and reduction of the amount of skin exposed to the tropical sun. It is possible that bipedalism provided a variety of benefits to the hominin species, and scientists have suggested multiple reasons for the evolution of human bipedalism. The question is not only why the earliest hominins were partially bipedal, but also why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal for the benefit of reaching food in trees, while the savanna-based theory describes how the late hominins that started to settle on the ground became increasingly bipedal. Multiple factors Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts, and proposed a single unifying factor to explain it: preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: 1. why were the earliest hominins partially bipedal? and 2. why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as the savanna-based, postural feeding, and provisioning hypotheses. Savanna-based theory According to the savanna-based theory, hominines came down from the trees and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in Australopithecus anamensis and Australopithecus afarensis forelimbs, and that it is less parsimonious to assume that knuckle walking developed twice in the genera Pan and Gorilla than that it evolved once as a synapomorphy for Pan and Gorilla before being lost in Australopithecus. The evolution of an orthograde posture would have been very helpful on a savanna, as it would provide the ability to look over tall grasses in order to watch out for predators, or to terrestrially hunt and sneak up on prey. It was also suggested in P. E.
Wheeler's "The evolution of bipedality and loss of functional body hair in hominids", that a possible advantage of bipedalism in the savanna was reducing the amount of surface area of the body exposed to the sun, helping regulate body temperature. In fact, Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and caused the need for hominids to acquire bipedality. Others state hominines had already achieved the bipedal adaptation that was used in the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees, and was later applied to the savanna as a vestigial trait. Humans and orangutans are both unique to a bipedal reactive adaptation when climbing on thin branches, in which they have increased hip and knee extension in relation to the diameter of the branch, which can increase an arboreal feeding range and can be attributed to a convergent evolution of bipedalism evolving in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments because no hominine fossils were found in forested areas. However, fossilization is a rare occurrence—the conditions must be just right in order for an organism that dies to become fossilized for somebody to find later, which is also a rare occurrence. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years. Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis, found in Hadar in Ethiopia, which may have been forested at the time of Lucy's death, had curved fingers that would still give her the ability to grasp tree branches, but she walked bipedally. "Little Foot," a nearly-complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things using his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil in the locations in which these fossils were found suggest that the area used to be much more wet and covered in thick vegetation and has only recently become the arid desert it is now. Traveling efficiency hypothesis An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, it was found that the quadrupedal and bipedal energy costs were very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costing. This increased travel efficiency is likely to have been selected for as it assisted foraging across widely dispersed resources. Postural feeding hypothesis The postural feeding hypothesis has been recently supported by Dr. Kevin Hunt, a professor at Indiana University. 
This hypothesis asserts that chimpanzees are only bipedal when they eat. While on the ground, they would reach up for fruit hanging from small trees; while in trees, bipedalism was used to reach up and grab an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Also, Hunt's hypothesis states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. When analyzing fossil anatomy, Australopithecus afarensis has hand and shoulder features very similar to the chimpanzee, which indicates arm-hanging. Also, the Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotive movement when compared to humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture. A related study conducted by University of Birmingham professor Susannah Thorpe examined the most arboreal great ape, the orangutan, which holds onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of forests where A. afarensis as well as other ancestors of modern humans and other apes resided could have contributed to this increase in bipedalism as a way to navigate the diminishing forests. The findings could also shed light on discrepancies observed in the anatomy of A. afarensis, such as the ankle joint, which allowed it to "wobble", and its long, highly flexible forelimbs. If bipedalism started from upright navigation in trees, it could explain both the increased flexibility of the ankle and the long forelimbs used to grab hold of branches. Provisioning model One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as "male provisioning". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve the offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying the food in his arms, walking on his legs. This model is supported by the reduction ("feminization") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which, along with low body size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, this model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderate sized testes, low sperm mid-piece volume) that argue against recent adaptation to a polygynous reproductive system. However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size.
That is sexual dimorphism is minimal, and other studies have suggested that Australopithecus afarensis males were nearly twice the weight of females. However, Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother - hampering the mother's freedom and thus make her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend to be also territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group living primate. Early bipedalism in homininae model Recent studies of 4.4 million years old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in homininae and was reduced in chimpanzee and gorilla when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors. This possibly provides a species close to the true connection between fully bipedal hominins and quadruped apes. According to Richard Dawkins in his book "The Ancestor's Tale", chimps and bonobos are descended from Australopithecus gracile type species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by those australopithecines from whom eventually evolved hominins. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later independently evolved towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans (see figure). It is also proposed that one cause of Neanderthal extinction was a less efficient running. Warning display (aposematic) model Joseph Jordania from the University of Melbourne recently (2011) suggested that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible all the time. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external subjects. Slow locomotion and strong body odor (both characteristic for hominids and humans) are other features often employed by aposematic species to advertise their non-profitability for potential predators. Other behavioural models There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. 
For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture. Thermoregulatory model The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises the amount of body surface area higher above the ground, which results in a reduction in heat gain and helps heat dissipation. When a hominid is higher above the ground, the organism accesses more favorable wind speeds and temperatures. During hot seasons, greater wind flow results in a higher heat loss, which makes the organism more comfortable. Also, Wheeler explains that a vertical posture minimizes direct exposure to the sun, whereas quadrupedalism exposes more of the body to direct sunlight. Analysis and interpretations of Ardipithecus reveal that this hypothesis needs modification to consider that the forest and woodland environmental preadaptation of early-stage hominid bipedalism preceded further refinement of bipedalism by the pressure of natural selection. This then allowed for the more efficient exploitation of the hotter-conditions ecological niche, rather than the hotter conditions hypothetically being bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn make a forest preadaptation solidify as a permanent state. Carrying models Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will". Many models of bipedal origins are based on this line of thought (Darwin 1871:52). Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested stone tools and weapons drove the change. This stone-tools theory is unlikely: although ancient humans are known to have hunted, stone tools did not appear until long after the origin of bipedalism, chronologically precluding them from being a driving force of evolution. (Wooden tools and spears fossilize poorly and therefore it is difficult to make a judgment about their potential usage.) Wading models The observation that large primates, especially the great apes, which predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist-deep water has led to the idea that the origin of human bipedalism may have been influenced by waterside environments.
This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory" (). Other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors promoting adaptations which later assisted full-time bipedalism. It has also been thought that consistent water-based food sources had developed early hominid dependency and facilitated dispersal along seas and rivers. Consequences Prehistoric fossil records show that early hominins first developed bipedalism before being followed by an increase in brain size. The consequences of these two changes in particular resulted in painful and difficult labor due to the increased favor of a narrow pelvis for bipedalism being countered by larger heads passing through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma. Non-human primates habitually deliver their young on their own, but the same cannot be said for modern-day humans. Isolated birth appears to be rare and actively avoided cross-culturally, even if birthing methods may differ between said cultures. This is due to the fact that the narrowing of the hips and the change in the pelvic angle caused a discrepancy in the ratio of the size of the head to the birth canal. The result of this is that there is greater difficulty in birthing for hominins in general, let alone to be doing it by oneself. Physiology Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below. Biomechanics Standing Energy-efficient means of standing bipedally involve constant adjustment of balance, and of course these must avoid overcorrection. The difficulties associated with simple standing in upright humans are highlighted by the greatly increased risk of falling present in the elderly, even with minimal reductions in control system effectiveness. Shoulder stability Shoulder stability would decrease with the evolution of bipedalism. Shoulder mobility would increase because the need for a stable shoulder is only present in arboreal habitats. Shoulder mobility would support suspensory locomotion behaviors which are present in human bipedalism. The forelimbs are freed from weight-bearing requirements, which makes the shoulder a place of evidence for the evolution of bipedalism. 
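The Standing paragraph above describes balance as an active process of continual small corrections. As a toy illustration of that idea (not drawn from the text: the unit-mass pendulum, the controller gains and the time step are invented for this sketch), the snippet below keeps a simple inverted pendulum upright with proportional-derivative feedback; with too little corrective gain the model drifts steadily away from vertical.

```python
import math

def simulate_standing(kp=60.0, kd=12.0, theta0=0.05, dt=0.001, steps=10_000):
    """Toy inverted pendulum (unit mass and length) balanced by a PD controller.

    theta is the lean angle from vertical in radians; gravity tips the body over
    while the controller applies a small corrective torque at every time step.
    Returns the largest lean angle observed during the run.
    """
    g = 9.81
    theta, omega = theta0, 0.0
    max_lean = abs(theta)
    for _ in range(steps):
        torque = -kp * theta - kd * omega       # continual small balance corrections
        alpha = g * math.sin(theta) + torque    # angular acceleration about the pivot
        omega += alpha * dt
        theta += omega * dt
        max_lean = max(max_lean, abs(theta))
    return max_lean

print(f"well-tuned control, worst lean: {simulate_standing():.3f} rad")    # stays near upright
print(f"weak control, worst lean: {simulate_standing(kp=5.0):.3f} rad")    # tips far from vertical
```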
Walking Unlike non-human apes able to practice bipedality, such as Pan and Gorilla, hominins can move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and the knee joints. This human ability to walk is made possible by the spinal curvature humans have that non-human apes do not. Instead, walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify the whole-body kinetic and potential energy, with walking displaying an out-of-phase relationship that indicates exchange between the two. This model applies to all walking organisms regardless of the number of legs, and thus bipedal locomotion does not differ in terms of whole-body kinetics. In humans, walking is composed of several separate processes: vaulting over a stiff stance leg; passive ballistic movement of the swing leg; a short "push" from the ankle prior to toe-off, propelling the swing leg; rotation of the hips about the axis of the spine, to increase stride length; and rotation of the hips about the horizontal axis to improve balance during stance. Running Early hominins underwent post-cranial changes in order to better adapt to bipedality, especially running. One of these changes is having longer hindlimbs in proportion to the forelimbs, which has several effects. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while simultaneously allowing for more space for cooling winds. Additionally, having longer limbs is more energy-efficient, since longer limbs mean that overall muscle strain is lessened. Better energy efficiency, in turn, means higher endurance, particularly when running long distances. Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored and released from a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to animals with more limbs. Musculature Bipedalism requires strong leg muscles, particularly in the thighs. Contrast, in domesticated poultry, the well-muscled legs with the small and bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation as it provides support and stability to the trunk and lessens the amount of stress on the joints when running. Respiration Quadrupeds have more restricted breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running."
Respiration through bipedality means that there is better breath control in bipeds, which can be associated with brain growth. The modern brain utilizes approximately 20% of energy input gained through breathing and eating, as opposed to species like chimpanzees who use up twice as much energy as humans for the same amount of movement. This excess energy, leading to brain growth, also leads to the development of verbal communication. This is because breath control means that the muscles associated with breathing can be manipulated into creating sounds. This means that the onset of bipedality, leading to more efficient breathing, may be related to the origin of verbal language. Bipedal robots For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption. See also Allometry Orthograde posture Quadrupedalism Notes References Further reading Darwin, C., "The Descent of Man and Selection in Relation to Sex", Murray (London), (1871). Dart, R. A., "Australopithecus africanus: The Ape Man of South Africa" Nature, 145, 195–199, (1925). Dawkins, R., "The Ancestor's Tale", Weidenfeld and Nicolson (London), (2004). DeSilva, J., "First Steps: How Upright Walking Made Us Human" HarperCollins (New York), (2021) Hewes, G. W., "Food Transport and the Origin of Hominid Bipedalism" American Anthropologist, 63, 687–710, (1961). Hunt, K. D., "The Evolution of Human Bipedality" Journal of Human Evolution, 26, 183–202, (1994). Isaac, G. I., "The Archeological Evidence for the Activities of Early African Hominids" In:Early Hominids of Africa (Jolly, C.J. (Ed.)), Duckworth (London), 219–254, (1978). Tanner, N. M., "On Becoming Human", Cambridge University Press (Cambridge), (1981) Wheeler, P. E. (1984) "The Evolution of Bipedality and Loss of Functional Body Hair in Hominoids." Journal of Human Evolution, 13, 91–98, External links The Origin of Bipedalism Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016) Terrestrial locomotion Animal anatomy 2 (number)
In general, bootstrapping usually refers to a self-starting process that is supposed to continue or grow without external input. Etymology Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pull the boots on. The saying "to pull oneself up by one's bootstraps" was already in use during the 19th century as an example of an impossible task. The idiom dates at least to 1834, when it appeared in the Workingman's Advocate: "It is conjectured that Mr. Murphee will now be enabled to hand himself over the Cumberland river or a barn yard fence by the straps of his boots." In 1860 it appeared in a comment on philosophy of mind: "The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps." Bootstrap as a metaphor, meaning to better oneself by one's own unaided efforts, was in use in 1922. This metaphor spawned additional metaphors for a series of self-sustaining processes that proceed without external help. The term is sometimes attributed to a story in Rudolf Erich Raspe's The Surprising Adventures of Baron Munchausen, but in that story Baron Munchausen pulls himself (and his horse) out of a swamp by his hair (specifically, his pigtail), not by his bootstraps, and no explicit reference to bootstraps has been found elsewhere in the various versions of the Munchausen tales. Applications Computing In computer technology, the term bootstrapping refers to language compilers that are able to be coded in the same language. (For example, a C compiler is now written in the C language. Once the basic compiler is written, improvements can be made iteratively, thus pulling the language up by its bootstraps.) Also, booting usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset; the kernel will then load the operating system, which will take care of loading other device drivers and software as needed. Software loading and execution Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which, at each stage, a relatively small and simple program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps"; i.e., it improves itself by its own efforts. Booting is a chain of events that starts with execution of hardware-based procedures and may then hand off to firmware and software which are loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, and loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software. The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953. Software development Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program.
Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language. Historically, bootstrapping also refers to an early technique for computer program development on new hardware. The technique described in this paragraph has been replaced by the use of a cross compiler executed by a pre-existing computer. Bootstrapping in program development began during the 1950s when each program was constructed on paper in decimal code or in binary code, bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for example the IBM 650) which converted a few instructions into binary or decimal code: A1. This simple assembler program was then rewritten in its just-defined assembly language but with extensions that would enable the use of some additional mnemonics for more complex operation codes. The enhanced assembler's source program was then assembled by its predecessor's executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those enhancements available), until the entire instruction set was coded, branch addresses were automatically calculated, and other conveniences (such as conditional assembly, macros, optimisations, etc.) established. This was how the early Symbolic Optimal Assembly Program (SOAP) was developed. Compilers, linkers, loaders, and utilities were then coded in assembly language, further continuing the bootstrapping process of developing complex software systems by using simpler software. The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team that developed the NLS hypertext system applied this strategy by using the tool they had developed to improve the tool. Compilers The development of compilers for new programming languages first developed in an existing language but then rewritten in the new language and compiled by itself, is another example of the bootstrapping notion. Installers During the installation of computer programs, it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file (e.g., setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process. Overlay networks A bootstrapping node, also known as a rendezvous host, is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network. Discrete-event simulation A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state—the bootstrapping behavior is overwhelmed by steady-state behavior. 
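To make the bootstrapping of a discrete-event simulation concrete, the sketch below is a minimal illustration rather than anything from the text: the function name, the exponential inter-arrival assumption and the warm-up cutoff are invented. It seeds an initially empty event queue from a pseudorandom number generator, lets each processed event schedule a successor, and discards the early portion dominated by the arbitrary initial seeding.

```python
import heapq
import random

def run_bootstrapped_simulation(n_initial=5, horizon=1000.0, seed=42):
    """Minimal discrete-event loop: PRNG-seeded initial events, each event
    schedules its successor, and the bootstrap (warm-up) phase is discarded."""
    rng = random.Random(seed)
    # Bootstrap step: schedule an initial set of pending events at pseudorandom times.
    queue = [(rng.expovariate(1.0), i) for i in range(n_initial)]
    heapq.heapify(queue)

    event_times = []
    while queue:
        time, entity = heapq.heappop(queue)
        if time > horizon:
            continue  # past the simulation horizon: drop without scheduling more
        event_times.append(time)
        # Each processed event schedules a follow-up event for the same entity.
        heapq.heappush(queue, (time + rng.expovariate(1.0), entity))

    # Keep only the portion where steady-state behaviour dominates the initial seeding.
    steady_state = [t for t in event_times if t > 0.1 * horizon]
    return steady_state

if __name__ == "__main__":
    times = run_bootstrapped_simulation()
    print(f"{len(times)} events retained after the warm-up period")
```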
Artificial intelligence and machine learning Bootstrapping is a technique used to iteratively improve a classifier's performance. Typically, multiple classifiers will be trained on different sets of the input data, and on prediction tasks the output of the different classifiers will be combined. Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era. Statistics Bootstrapping is a resampling technique used to obtain estimates of summary statistics. Business Bootstrapping in business means starting a business without external help or working capital. Entrepreneurs in the startup development phase of their company survive through internal cash flow and are very cautious with their expenses. Generally at the start of a venture, a small amount of money will be set aside for the bootstrap process. Bootstrapping can also be a supplement for econometric models. Bootstrapping was also expanded upon in the book Bootstrap Business by Richard Christiansen, the Harvard Business Review article The Art of Bootstrapping and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide. There is also an entire bible written on how to properly bootstrap by Seth Godin. Experts have noted that several common stages exist for bootstrapping a business venture: Birth-stage: This is the first stage to bootstrapping by which the entrepreneur utilizes any personal savings or borrowed and/or invested money from friends and family to launch the business. It is also possible for the business owner to be running or working for another organization at the time which may help to fuel their business and cover initial expenses. Funding from sales to consumers-stage: In this particular stage, money from customers is used to keep the business operating afloat. Once expenses caused by normal day-to-day business operations are met, the rate growth usually increases. Outsourcing-stage: At this point in the company's existence, the entrepreneur in question normally concentrates on the specific operating activities. This is the time in which entrepreneurs decide how to improve and upgrade equipment (subsequently increasing output) or even employing new staff members. At this point in time, the company may seek loans or even lean on other methods of additional funding such as venture capital to help with expansion and other improvements. There are many types of companies that are eligible for bootstrapping. Early-stage companies that do not necessarily require large influxes of capital (particularly from outside sources) qualify. This would specifically allow for flexibility for the business and time to grow. Serial entrepreneur companies could also possibly reap the benefits of bootstrapping. These are organizations whereby the founder has money from the sale of a previous companies they can use to invest. There are different methods of bootstrapping. Future business owners aspiring to use bootstrapping as way of launching their product or service often use the following methods: Using accessible money from their own personal savings. 
Managing their working capital in a way that minimizes their company's accounts receivable. Cashing out 401(k) retirement funds and paying them back at later dates. Gradually increasing the business's accounts payable by delaying payments, or renting equipment instead of buying it. Bootstrapping is often considered successful. According to statistics provided by Fundera, approximately 77% of small businesses rely on some sort of personal investment and/or savings to fund their startup ventures. The average small business venture requires approximately $10,000 in startup capital, with a third of small businesses launching with less than $5,000 bootstrapped. Based on startup data presented by Entrepreneur.com, bootstrapping is more commonly used than other methods of funding: "0.91% of startups are funded by angel investors, while 0.05% are funded by VCs. In contrast, 57 percent of startups are funded by personal loans and credit, while 38 percent receive funding from family and friends." One example of a successful entrepreneur who has used bootstrapping to finance his businesses is serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping, claiming that "If you can start on your own … do it by [yourself] without having to go out and raise money." When asked why he believed this approach was most necessary, he replied, "I think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. And once you raise money, that's not an accomplishment, that's an obligation" because "now, you're reporting to whoever you raised money from." Bootstrapped companies such as Apple Inc. (AAPL), eBay Inc. (EBAY) and Coca-Cola Co. have also attributed some of their success to the fact that this method of funding enabled them to remain highly focused on a specific array of profitable products. There are advantages to bootstrapping. Entrepreneurs have full control over the finances of the business and can maintain control over the organization's inflows and outflows of cash. Equity is retained by the owner and can be redistributed at their discretion. There is less liability or opportunity to accumulate debt from other financial sources. Bootstrapping often leads to entrepreneurs operating their businesses with the freedom to do as they see fit, in a similar fashion to sole proprietors. This is an effective method if the business owner's goal is to be able to fund future investments back into the business. Beyond the direct stakeholders of the business, entrepreneurs do not have to answer to a board of investors that could pressure them into making decisions beneficial to the investors rather than the business. There are also drawbacks to bootstrapping. Personal liability is one. Credit lines usually must be established in the owner's name, which has been the downfall of some companies due to debt accumulated on various credit cards. All financial risks pertaining to the business fall on the owner's shoulders. The owner is forced to put either their own investments or those of family and friends in jeopardy in the event of the business failing. Possible legal issues are another drawback: there have been some cases in which entrepreneurs have been sued by family or even close friends for the improper use of their bootstrapped money. Because financing is limited to what the owner or company makes, this can create a ceiling that limits room for growth.
Without the aid of occasional external sources of funding, entrepreneurs can find themselves unable to promote employees or even expand their businesses. A lack of money could possibly lead to a reduction of the quality of the service or product meant to be provided. Certain investors tend to be well-respected within specific industries and running a company without their backing or support could cause pivotal opportunities to be lost. Personal stress to entrepreneur or business owner in question is common. Tackling funding by themselves has often led to stressful times for certain individuals. Startups can grow by reinvesting profits in its own growth if bootstrapping costs are low and return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline. In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business. This leaves startups with a better exit strategy with greater returns. Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor acquires a controlling interest in a company's equity and where a significant percentage of the purchase price is financed through leverage, i.e. borrowing by the acquired company. Bootstrapping in finance refers to the method to create the spot rate curve. Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century. Biology Richard Dawkins in his book River Out of Eden used the computer bootstrapping concept to explain how biological cells differentiate: "Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells." Phylogenetics Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number is written by a node, which reflects the percentage of bootstrap trees which also resolve the clade at the endpoints of that branch. Law Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases. Linguistics Bootstrapping is a theory of language acquisition. Physics Quantum theory Bootstrapping is using very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles or operators. Magnetically confined fusion plasmas In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, which reduces or eliminates the need for an external current driver. Maximising the bootstrap current is a major goal of advanced tokamak designs. Inertially confined fusion plasmas Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion reaction providing further heating to the plasma. This heating leads to ignition and an overall energy gain. Electronics Bootstrapping is a form of positive feedback in analog circuit design. Electric power grid An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start up prior to being able to generate power. This power is obtained from the grid, so if the entire grid is down these stations cannot be started. 
Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black starts are used to bootstrap the grid. Cellular networks A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application independent functions for mutual authentication of user equipment and servers unknown to each other and for 'bootstrapping' the exchange of secret session keys afterwards. The term 'bootstrapping' is related to building a security relation with a previously unknown device first and to allow installing security elements (keys) in the device and the BSF afterwards. Nuclear Power Plant A nuclear power plant always needs to have a way to remove decay heat, which is usually done with electrical cooling pumps. But in the rare case of a complete loss of electrical power, this can still be achieved by booting a turbine generator. As steam builds up in the steam generator, it can be used to power the turbine generator (initially with no oil pumps, circ water pumps, or condensation pumps). Once the turbine generator is producing electricity, the auxiliary pumps can be powered on, and the reactor cooling pumps can be run momentarily. Eventually the steam pressure will become insufficient to power the turbine generator, and the process can be shut down in reverse order. The process can be repeated until no longer needed. This can cause great damage to the turbine generator, but more importantly, it saves the nuclear reactor. See also Robert A. Heinlein's short sci-fi story By His Bootstraps References External links Dictionary.com entries for Bootstrap Freedictionary.com entries for Bootstrap Engelbart Institute on Bootstrapping Strategies American English idioms Metaphors
Bioinformatics () is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is referred to as computational biology. Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reused specific analysis "pipelines", particularly in the field of genomics, such as by the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences. Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions. History The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems). Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology. Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics. Sequences There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less. Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. 
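As a toy illustration of the kind of residue-by-residue tally that quickly becomes impractical by hand across thousands of full-length sequences (this example is not from the text; the helper name and the two aligned fragments are invented for illustration), the snippet below computes the percent identity between two pre-aligned protein fragments.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of identical residues between two equal-length, pre-aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned fragments, for illustration only.
print(percent_identity("GIVEQCCTSICSLYQLENYCN",
                       "GIVEQCCASVCSLYQLENYCN"))
```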
Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released online with Tai Te Wu between 1980 and 1991. In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and were thus proof of the concept that bioinformatics would be insightful. Goals In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. Important sub-disciplines within bioinformatics and computational biology include: the development and implementation of computer programs to efficiently access, manage, and use various types of information; and the development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences. The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, and the modeling of evolution and cell division/mitosis. Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data. Over the past few decades, rapid developments in genomic and other molecular research technologies, together with developments in information technologies, have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to the mathematical and computing approaches used to glean understanding of biological processes. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures. Sequence analysis Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases.
This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides. DNA sequencing Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing. Sequence assembly Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research. Genome annotation In genomics, annotation refers to the process of marking the stop and start regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics. Genome annotation can be classified into three levels: the nucleotide, protein, and process levels. Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome. The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function. Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. 
The Gene Ontology Consortium is helping to solve this problem. The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs and ribosomal RNAs in order to make initial functional assignments. The GeneMark program trained to find protein-coding genes in Haemophilus influenzae is constantly changing and improving. Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error). Gene function prediction While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein–protein interactions. Computational evolutionary biology Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to: trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone; compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation; build complex computational population genetics models to predict the outcome of the system over time; and track and share information on an increasingly large number of species and organisms. Future work endeavours to reconstruct the now more complex tree of life. Comparative genomics The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation.
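At the lowest of these levels, a first-pass comparison of two already-aligned orthologous sequences is just a per-site count of differences (the p-distance). The sketch below is a toy illustration; real comparative analyses work from genome-scale alignments and correct for multiple substitutions at the same site:

```python
# Minimal sketch: per-site comparison of two already-aligned orthologous sequences.
# Real comparative genomics corrects for multiple hits (e.g., Jukes-Cantor);
# this only counts raw observed differences.
def p_distance(aligned_a, aligned_b):
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned to equal length"
    pairs = [(x, y) for x, y in zip(aligned_a.upper(), aligned_b.upper())
             if x != "-" and y != "-"]          # ignore gapped positions
    diffs = sum(1 for x, y in pairs if x != y)  # observed point differences
    return diffs / len(pairs)

print(p_distance("ATGCCGTA-A", "ATGACGTAGA"))  # toy example
```

Events above the single-nucleotide level, such as duplications, inversions and transfers, require correspondingly richer models.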
The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models. Many of these studies are based on the detection of sequence homology to assign sequences to protein families. Pan genomics Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species. Genetics of disease As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many genetic regions that individually are only weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as not knowing which genes are important, or how stable the choices provided by an algorithm are. Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare-variant association analysis in whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes. Analysis of mutations in cancer In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment.
The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes. Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers. Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer driven mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequence of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors. Gene and protein expression Analysis of gene expression The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells. Analysis of protein expression Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays. Analysis of regulation Gene regulation is a complex process where a signal, such as an extracellular signal such as a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. 
In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can be then applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods. Analysis of cellular organization Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases. Microscopy and image analysis Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases. Protein localization Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools. Nuclear organization of chromatin Data from high-throughput chromosome conformation capture experiments, such as Hi-C (experiment) and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space. Structural bioinformatics Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition where worldwide research groups submit protein models for evaluating unknown protein models. Amino acid sequence The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time. Homology In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins. 
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor. Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling. Another aspect of structural bioinformatics include the use of protein structures for Virtual Screening models such as Quantitative Structure-Activity Relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulation of for example ligand-binding studies and in silico mutagenesis studies. A 2021 deep-learning algorithms-based software called AlphaFold, developed by Google's DeepMind, greatly outperforms all other prediction software methods, and has released predicted structures for hundreds of millions of proteins in the AlphaFold protein structure database. Network and systems biology Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both. Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms. Molecular interaction networks Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field. Other interactions encountered in the field include Protein–ligand (including drug) and protein–peptide. Molecular dynamic simulation of movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions. Biodiversity informatics Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change. 
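As a small example of the kind of computation biodiversity informatics involves, species richness can be summarized over occurrence records binned into a coarse geographic grid; the records and grid size below are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical occurrence records: (species, latitude, longitude).
# Real analyses would draw on curated biodiversity/occurrence databases.
records = [
    ("Parus major", 51.5, -0.1),
    ("Turdus merula", 51.6, -0.2),
    ("Parus major", 48.9, 2.3),
    ("Passer domesticus", 51.4, -0.1),
]

def richness_by_cell(records, cell_size=1.0):
    """Count distinct species in each cell of a simple lat/long grid."""
    cells = defaultdict(set)
    for species, lat, lon in records:
        cell = (int(lat // cell_size), int(lon // cell_size))
        cells[cell].add(species)
    return {cell: len(species_set) for cell, species_set in cells.items()}

print(richness_by_cell(records))  # e.g. {(51, -1): 3, (48, 2): 1}
```

Real analyses use far finer spatial models and much larger curated data sets, but the underlying bookkeeping is similar.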
Others Literature analysis The enormous number of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example: Abbreviation recognition – identify the long-form and abbreviation of biological terms Named-entity recognition – recognizing biological terms such as gene names Protein–protein interaction – identify which proteins interact with which proteins from text The area of research draws from statistics and computational linguistics. High-throughput image analysis Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are: high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics) morphometrics clinical image analysis and visualization determining the real-time air-flow patterns in breathing lungs of living animals quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury making behavioral observations from extended video recordings of laboratory animals infrared measurements for metabolic activity determination inferring clone overlaps in DNA mapping, e.g. the Sulston score High-throughput single cell data analysis Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition. Ontologies and data integration Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis. The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene ontology which describes gene function. There are also ontologies which describe phenotypes. Databases Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private. 
Some of the most commonly used databases are the following: used in biological sequence analysis: GenBank, UniProt; used in structure analysis: Protein Data Bank (PDB); used in finding protein families and motif finding: InterPro, Pfam; used for next-generation sequencing: Sequence Read Archive; used in network analysis: metabolic pathway databases (KEGG, BioCyc), interaction analysis databases, functional networks; used in the design of synthetic genetic circuits: GenoCAD. Software and tools Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web services. They are made by bioinformatics companies or by public institutions. Open-source bioinformatics software Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration. Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD. The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software. Web services in bioinformatics SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads. Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions; these services range from a collection of standalone tools with a common data format under a single web-based interface to integrative, distributed and extensible bioinformatics workflow management systems. Bioinformatics workflow management systems A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to provide an easy-to-use environment for individual application scientists to create their own workflows, provide interactive tools enabling them to execute their workflows and view their results in real time, simplify the process of sharing and reusing workflows between scientists, and enable scientists to track the provenance of the workflow execution results and the workflow creation steps. Platforms providing this service include Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
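What such workflow systems automate can be sketched in a few lines: an ordered series of processing steps with a record of what ran. The example below is a toy illustration with invented step names, not the API of Galaxy, Taverna, or any of the platforms just listed:

```python
# Minimal sketch of what a workflow engine automates: ordered steps with tracked
# provenance. All step names and values here are invented for the example.
def trim_reads(data):
    return {"reads": data["reads"], "trimmed": True}

def align_reads(data):
    return {**data, "aligned_to": "toy_reference"}

def call_variants(data):
    return {**data, "variants": ["chr1:12345A>G"]}  # invented result

def run_workflow(steps, data):
    provenance = []
    for step in steps:
        data = step(data)
        provenance.append(step.__name__)   # record what was run, in order
    return data, provenance

result, provenance = run_workflow([trim_reads, align_reads, call_variants],
                                  {"reads": "sample.fastq"})
print(provenance)  # ['trim_reads', 'align_reads', 'call_variants']
```

Real platforms add dependency graphs, parallel and distributed execution, and persistent provenance records; that record-keeping concern is exactly what motivates the BioCompute effort described next.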
BioCompute and BioCompute Objects In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University. It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff. In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was copied as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute object allows for the JSON-ized record to be shared among employees, collaborators, and regulators. Education platforms Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (4273pi) project also offers open-source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system. MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization (UC San Diego) and Genomic Data Science Specialization (Johns Hopkins) as well as EdX's Data Analysis for Life Sciences XSeries (Harvard). Conferences There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB). See also References Further reading Sehgal et al.: Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013 10:3. Achuthsankar S Nair, Computational Biology & Bioinformatics – A gentle Overview, Communications of Computer Society of India, January 2007. Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006.
(Chapman & Hall/Crc Computer and Information Science Series) Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001. Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003. Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005. Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007. Cristianini, N. and Hahn, M. Introduction to Computational Genomics , Cambridge University Press, 2006. ( |) Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998. Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005. Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002. Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005. Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005. Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000. Soinov, L. Bioinformatics and Pattern Recognition Come Together Journal of Pattern Recognition Research (JPRR ), Vol 1 (1) 2006 p. 37–41 Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013, Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001. Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995) Foundations of Computational and Systems Biology MIT Course Computational Biology: Genomes, Networks, Evolution Free MIT Course External links Bioinformatics Resource Portal (SIB)
Brian Russell De Palma (born September 11, 1940) is an American film director and screenwriter. With a career spanning over 50 years, he is best known for work in the suspense, crime and psychological thriller genres. His films include mainstream box office hits such as Carrie (1976), Dressed to Kill (1980), Scarface (1983), The Untouchables (1987), and Mission: Impossible (1996), as well as cult favorites such as Sisters (1972), Phantom of the Paradise (1974), Blow Out (1981), Casualties of War (1989), and Carlito's Way (1993). De Palma was a leading member of the New Hollywood generation of film directors. His direction often makes use of quotations from other films or cinematic styles, and bears the influence of filmmakers such as Alfred Hitchcock and Michelangelo Antonioni. His work has been criticized for its violence and sexual content but has also been championed by American critics such as Roger Ebert and Pauline Kael. Early life and education De Palma was born on September 11, 1940, in Newark, New Jersey, the youngest of three boys. His Italian-American parents were Vivienne DePalma (née Muti), and Anthony DePalma, an orthopedic surgeon who was the son of immigrants from Alberona, Province of Foggia. He was raised in Philadelphia, Pennsylvania, and New Hampshire, and attended various Protestant and Quaker schools, eventually graduating from Friends' Central School. He had a poor relationship with his father, and would secretly follow him to record his adulterous behavior; this would eventually inspire the teenage character played by Keith Gordon in De Palma's 1980 film Dressed to Kill. When he was in high school, he built computers. He won a regional science-fair prize for a project titled "An Analog Computer to Solve Differential Equations". Enrolled at Columbia University as a physics student, De Palma became enraptured with the filmmaking process after viewing Citizen Kane and Vertigo. After receiving his undergraduate degree in 1962, De Palma enrolled at the newly coed Sarah Lawrence College as a graduate student in their theater department, earning an M.A. in the discipline in 1964 and becoming one of the first male students among a female population. Once there, influences as various as drama teacher Wilford Leach, the Maysles brothers, Michelangelo Antonioni, Jean-Luc Godard, Andy Warhol, and Alfred Hitchcock impressed upon De Palma the many styles and themes that would shape his own cinema in the coming decades. Career 1963–1976: Rise to prominence An early association with a young Robert De Niro resulted in The Wedding Party. The film, which was co-directed with Leach and producer Cynthia Munroe, had been shot in 1963 but remained unreleased until 1969, when De Palma's star had risen sufficiently within the Greenwich Village filmmaking scene. De Niro was unknown at the time; the credits mistakenly display his name as "Robert ". The film is noteworthy for its invocation of silent film techniques and an insistence on the jump-cut for effect. De Palma followed this style with various small films for the NAACP and the Treasury Department. During the 1960s, De Palma began making a living producing documentary films, notably The Responsive Eye, a 1966 movie about The Responsive Eye op-art exhibit curated by William Seitz for MOMA in 1965. In an interview with Joseph Gelmis from 1969, De Palma described the film as "very good and very successful. It's distributed by Pathe Contemporary and makes lots of money. I shot it in four hours, with synched sound. 
I had two other guys shooting people's reactions to the paintings, and the paintings themselves." Dionysus in '69 (1969) was De Palma's other major documentary from this period. The film records the Performance Group's performance of Euripides' The Bacchae, starring, amongst others, De Palma regular William Finley. The play is noted for breaking traditional barriers between performers and audience. The film's most striking quality is its extensive use of the split-screen. De Palma recalls that he was "floored" by this performance upon first sight, and in 1973 recounts how he "began to try and figure out a way to capture it on film. I came up with the idea of split-screen, to be able to show the actual audience involvement, to trace the life of the audience and that of the play as they merge in and out of each other." De Palma's most significant features from this decade are Greetings (1968) and Hi, Mom! (1970). Both films star Robert De Niro and espouse a leftist revolutionary viewpoint common to the era in which they were released. Greetings was entered into the 19th Berlin International Film Festival, where it won a Silver Bear award. His other major film from this period is the slasher comedy Murder a la Mod. Each of these films experiments with narrative and intertextuality, reflecting De Palma's stated intention to become the "American Godard" while integrating several of the themes which permeated Hitchcock's work. In 1970, De Palma left New York for Hollywood at age thirty to make Get to Know Your Rabbit (1972), starring Orson Welles and Tommy Smothers. Making the film was a crushing experience for De Palma, as Smothers did not like many of De Palma's ideas. Here he made several small, studio and independently-released films that included stand-outs Sisters (1972), Phantom of the Paradise (1974), and Obsession (1976). 1976–1979: Breakthrough In November 1976, De Palma released a film adaptation of the 1974 novel Carrie by Stephen King. Though some see the psychic thriller as De Palma's bid for a blockbuster, the project was in fact small, underfunded by United Artists, and well under the cultural radar during the early months of production, as the source novel had yet to climb the bestseller list. De Palma gravitated toward the project and changed crucial plot elements based upon his own predilections, not the saleability of the novel. The cast was young and relatively new, though Sissy Spacek and John Travolta had gained attention for previous work in, respectively, film and episodic sitcoms. Carrie became De Palma's first genuine box-office success, garnering Spacek and Piper Laurie Oscar nominations for their performances. Pre-production for the film had coincided with the casting process for George Lucas's Star Wars, and many of the actors cast in De Palma's film had been earmarked as contenders for Lucas's movie, and vice versa. The "shock ending" finale is effective even while it upholds horror-film convention, its suspense sequences are buttressed by teen comedy tropes, and its use of split-screen, split-diopter and slow motion shots tell the story visually rather than through dialogue. As for Lucas' project, De Palma complained in an early viewing of Star Wars that the opening text crawl was poorly written and volunteered to help edit the text to a more concise and engaging form. The financial and critical success of Carrie allowed De Palma to pursue more personal material. 
The Demolished Man was a novel that had fascinated De Palma since the late 1950s and appealed to his background in mathematics and avant-garde storytelling. Its unconventional unfolding of plot (exemplified in its mathematical layout of dialogue) and its stress on perception have analogs in De Palma's filmmaking. He sought to adapt it numerous times, though the project would carry a substantial price tag, and it has yet to appear on-screen (Steven Spielberg's 2002 adaptation of Philip K. Dick's Minority Report bears striking similarities to De Palma's visual style and some of the themes of The Demolished Man). The result of his experience with adapting The Demolished Man was the 1978 science fiction psychic thriller film The Fury, starring Kirk Douglas, Carrie Snodgress, John Cassavetes and Amy Irving. The film was admired by Jean-Luc Godard, who featured a clip in his mammoth Histoire(s) du cinéma, and Pauline Kael, who championed both The Fury and De Palma. The film boasted a larger budget than Carrie, though the consensus view at the time was that De Palma was repeating himself, with diminishing returns. As a film, it retains De Palma's considerable visual flair, but points more toward his work in mainstream entertainments such as Mission: Impossible, the kind of thematically complex thriller for which he is now better known. 1980–1996: Established career The 1980s were marked by some of De Palma's best-known films, including the erotic psychological thriller Dressed to Kill (1980), starring Michael Caine and Angie Dickinson. Although the film received critical admiration, it also drew criticism and controversy for its negative depiction of the transgender community. The following year he directed the neo-noir mystery thriller Blow Out (1981), starring John Travolta, Nancy Allen, and John Lithgow. The film received critical acclaim. The New Yorker film critic Pauline Kael praised the director, writing, "De Palma has sprung to the place that Robert Altman achieved with films such as McCabe & Mrs. Miller and Nashville and that Francis Ford Coppola reached with The Godfather films—that is, to the place where genre is transcended and what we're moved by is an artist's vision...it's a great movie. Travolta and Allen are radiant performers". De Palma then directed the crime film Scarface (1983), starring Al Pacino and Michelle Pfeiffer, with a screenplay by Oliver Stone. The film received mixed reviews, drawing criticism for its negative ethnic stereotypes as well as its violence and profanity. It has since been re-evaluated and became a cult classic. The following year he made another neo-noir erotic thriller, Body Double (1984), starring Craig Wasson and Melanie Griffith. The film also received mixed reviews but has since had a reassessment and found acclaim. De Palma directed the music video for Bruce Springsteen's single "Dancing in the Dark" the same year. In 1987, De Palma directed the crime film The Untouchables, loosely based on the book of the same name and adapted by David Mamet. The film stars Kevin Costner, Andy Garcia, Robert De Niro, and Sean Connery, the last of whom won the Academy Award for Best Supporting Actor for the film. It received critical acclaim and box-office success. De Palma's Vietnam War film Casualties of War (1989) won critical praise but performed poorly in theatres, and The Bonfire of the Vanities (1990) was a notorious failure with both critics and audiences.
De Palma had subsequent successes with Raising Cain (1992) and Carlito's Way (1993), while Mission: Impossible (1996) became his highest-grossing film and launched a successful franchise. 1998–present: Career slump De Palma's work after Mission: Impossible has been less well received. His ensuing films Snake Eyes (1998), Mission to Mars (2000), and Femme Fatale (2002) all failed at the box office and received generally poor reviews, though Femme Fatale has since been revived in the eyes of many film critics and became a cult classic. His 2006 adaptation of The Black Dahlia was also unsuccessful and is currently the last movie De Palma has directed with backing from Hollywood. A political controversy erupted over the portrayal of US soldiers in De Palma's 2007 film Redacted. Loosely based on the 2006 Mahmudiyah killings by American soldiers in Iraq, the film echoes themes that appeared in Casualties of War. Redacted received a limited release in the United States and grossed less than $1 million against a $5 million budget. De Palma's output has slowed since the release of Redacted, with subsequent projects often falling into development hell, due mostly to creative differences. In 2012, his film Passion, starring Rachel McAdams and Noomi Rapace, was selected to compete for the Golden Lion at the 69th Venice International Film Festival but received mixed reviews and was financially unsuccessful. De Palma's next project was the thriller Domino (2019), released two years after the film began production. It received generally negative reviews and was released direct-to-VOD in the United States, grossing less than half a million dollars internationally. De Palma has also expressed dissatisfaction with both the production of the film and the final result; "I never experienced such a horrible movie set." In 2018, De Palma published his debut novel in France, Les serpents sont-ils nécessaires? (English translation: Are Snakes Necessary?), co-written with Susan Lehman. It was published in the U.S. in 2020. De Palma and Lehman also wrote a second book, currently unpublished, called Terry, based on one of De Palma's passion projects about a French film production making an adaptation of Thérèse Raquin. Trademarks and style Themes De Palma's films fall into two categories: his psychological thrillers (Sisters, Body Double, Obsession, Dressed to Kill, Blow Out, Raising Cain) and his mainly commercial films (Scarface, The Untouchables, Carlito's Way, and Mission: Impossible). He has often produced "De Palma" films one after the other before going on to direct a different genre, but would always return to his familiar territory. Because of the subject matter and graphic violence of some of De Palma's films, such as Dressed to Kill, Scarface and Body Double, they are often at the center of controversy with the Motion Picture Association of America, film critics and the viewing public. De Palma frequently quotes and references other directors' work. The plots of Michelangelo Antonioni's Blowup and Francis Ford Coppola's The Conversation formed the basis of Blow Out. The Untouchables' finale shootout in the train station is a clear borrowing from the Odessa Steps sequence in Sergei Eisenstein's The Battleship Potemkin. The main plot from Rear Window was used for Body Double, while it also used elements of Vertigo. Vertigo was also the basis for Obsession.
Dressed to Kill was a note-for-note homage to Hitchcock's Psycho, including such moments as the surprise death of the lead actress and the exposition scene by the psychiatrist at the end. Camera shots Film critics have often noted De Palma's penchant for unusual camera angles and compositions. He often frames characters against the background using a canted angle shot. Split-screen techniques have been used to show two separate events happening simultaneously. To emphasize the dramatic impact of a certain scene De Palma has employed a 360-degree camera pan. Slow sweeping, panning and tracking shots are often used throughout his films, often through precisely-choreographed long takes lasting for minutes without cutting. Split focus shots, often referred to as "di-opt", are used by De Palma to emphasize the foreground person/object while simultaneously keeping a background person/object in focus. Slow-motion is frequently used in his films to increase suspense. Personal life De Palma has been married and divorced three times, to actress Nancy Allen (1979–1983), producer Gale Anne Hurd (1991–1993), and Darnell Gregorio (1995–1997). He has one daughter from his marriage to Hurd, Lolita de Palma, born in 1991, and one daughter from his marriage to Gregorio, Piper De Palma, born in 1996. He resides in Manhattan, New York. Reception and legacy De Palma is often cited as a leading member of the New Hollywood generation of film directors, a distinct pedigree who either emerged from film schools or are overtly cine-literate. His contemporaries include Martin Scorsese, Paul Schrader, John Milius, George Lucas, Francis Ford Coppola, Steven Spielberg, John Carpenter, and Ridley Scott. His artistry in directing and use of cinematography and suspense in several of his films has often been compared to the work of Alfred Hitchcock. Psychologists have been intrigued by De Palma's fascination with pathology, by the aberrant behavior aroused in characters who find themselves manipulated by others. De Palma has encouraged and fostered the filmmaking careers of directors such as Mark Romanek and Keith Gordon, the latter of whom collaborated with him twice as an actor, both in 1980's Home Movies and Dressed to Kill. Filmmakers influenced by De Palma include Terrence Malick, Quentin Tarantino, Ronny Yu, Don Mancini, Nacho Vigalondo, and Jack Thomas Smith. During an interview with De Palma, Quentin Tarantino said that Blow Out is one of his all-time favorite films, and that after watching Scarface he knew how to make his own film. John Travolta's performance as Jack Terry in Blow Out even resulted in Tarantino casting him as Vincent Vega in his 1994 film Pulp Fiction, which would go on to reinvigorate Travolta's then-declining career. Tarantino also placed Carrie at number eight in a list of his favorite films. Critics who frequently admire De Palma's work include Pauline Kael and Roger Ebert. Kael wrote in her review of Blow Out, "At forty, Brian De Palma has more than twenty years of moviemaking behind him, and he has been growing better and better. Each time a new film of his opens, everything he has done before seems to have been preparation for it." In his review of Femme Fatale, Roger Ebert wrote about the director: "De Palma deserves more honor as a director. Consider also these titles: Sisters, Blow Out, The Fury, Dressed to Kill, Carrie, Scarface, Wise Guys, Casualties of War, Carlito's Way, Mission: Impossible. 
Yes, there are a few failures along the way (Snake Eyes, Mission to Mars, The Bonfire of the Vanities), but look at the range here, and reflect that these movies contain treasure for those who admire the craft as well as the story, who sense the glee with which De Palma manipulates images and characters for the simple joy of being good at it. It's not just that he sometimes works in the style of Hitchcock, but that he has the nerve to." The influential French film magazine Cahiers du Cinéma has placed five of De Palma's films (Carlito's Way, Mission: Impossible, Snake Eyes, Mission to Mars, and Redacted) on their annual top ten list, with Redacted placing first on the 2008 list. The magazine also listed Carlito's Way as the greatest film of the 1990s. Julie Salamon has written that critics have accused De Palma of being "a perverse misogynist", to which De Palma has responded with, "I'm always attacked for having an erotic, sexist approach chopping up women, putting women in peril. I'm making suspense movies! What else is going to happen to them?" His films have also been interpreted as feminist and examined for their perceived queer affinities. In Film Comment "Queer and Now and Then" column on Femme Fatale, film critic Michael Koresky writes that "De Palma's films radiate an undeniable queer energy" and notes the "intense appeal" De Palma's films have for gay critics. In her book The Erotic Thriller in Contemporary Cinema, Linda Ruth Williams writes that "De Palma understood the cinematic potency of dangerous fucking, perhaps earlier than his feminist detractors". Robin Wood considered Sisters an overtly feminist film, writing that "one can define the monster of Sisters as women's liberation; adding only that the film follows the time-honored horror film tradition of making the monster emerge as the most sympathetic character and its emotional center." Pauline Kael's review of Casualties of War, "A Wounded Apparition", describes the film as "feminist" and notes that "De Palma was always involved in examining (and sometimes satirizing) victimization, but he was often accused of being a victimizer". Helen Grace, in a piece for Lola, writes that upon seeing Dressed to Kill amidst calls for a boycott from feminist groups Women Against Violence Against Women and Women Against Pornography, that the film "seemed to say more about masculine anxiety than about the fears that women were expressing in relation to the film". David Thomson wrote in his entry for De Palma, "There is a self-conscious cunning in De Palma's work, ready to control everything except his own cruelty and indifference." Matt Zoller Seitz objected to this characterisation, writing that there are films from the director which can be seen as "straightforwardly empathetic and/or moralistic". His life and career in his own words was the subject of the 2015 documentary De Palma, directed by Noah Baumbach and Jake Paltrow. Filmography Awards and nominations Bibliography References Citations General and cited sources Thomson, David (October 26, 2010). The New Biographical Dictionary of Film: Fifth Edition, Completely Updated and Expanded (hardcover ed.). Knopf. . Salamon, Julie (1991). Devil's Candy: The Bonfire of the Vanities Goes to Hollywood (hardcover ed.). Houghton. . Further reading Bliss, Michael (1986). Brian De Palma. Scarecrow. Blumenfeld, Samuel; Vachaud, Laurent (2001). Brian De Palma. Calmann-Levy. Dworkin, Susan (1984). Double De Palma: A Film Study with Brian De Palma. Newmarket. 
External links Senses of Cinema: Great Directors Critical Database Photos and discussion around the director Literature on Brian De Palma Brian De Palma bibliography (via UC Berkeley)
The North American B-25 Mitchell is an American medium bomber that was introduced in 1941 and named in honor of Brigadier General William "Billy" Mitchell, a pioneer of U.S. military aviation. Used by many Allied air forces, the B-25 served in every theater of World War II, and after the war ended, many remained in service, operating across four decades. Nearly 10,000 B-25s were built in numerous variants, making it the most-produced American medium bomber and the third most-produced American bomber overall. These included several limited models such as the F-10 reconnaissance aircraft, the AT-24 crew trainers, and the United States Marine Corps' PBJ-1 patrol bomber. Design and development In March 1939, the US Army Air Corps issued a specification for a medium bomber capable of carrying a substantial payload over a long range at high speed. North American Aviation (NAA) used its NA-40B design to develop the NA-62, which competed for the medium bomber contract. No YB-25 was available for prototype service tests. In September 1939, the Air Corps ordered the NA-62 into production as the B-25, along with the other new Air Corps medium bomber, the Martin B-26 Marauder, "off the drawing board". Early into B-25 production, NAA incorporated a significant redesign to the wing dihedral. The first nine aircraft had a constant-dihedral wing, meaning the wing had a consistent upward angle from the fuselage to the wingtip. This design caused stability problems. "Flattening" the outer wing panels by giving them a slight anhedral angle just outboard of the engine nacelles nullified the problem and gave the B-25 its gull wing configuration. Less noticeable changes during this period included an increase in the size of the tail fins and a decrease in their inward tilt at their tops. NAA continued design and development in 1940 and 1941. Both the B-25A and B-25B series entered USAAF service. The B-25B was operational in 1942. Combat requirements led to further developments. Before the year was over, NAA was producing the B-25C and B-25D series at different plants. Also in 1942, the manufacturer began design work on the cannon-armed B-25G series. The NA-100 of 1943 and 1944 was an interim armament development at the Kansas City complex known as the B-25D2. Similar armament upgrades by U.S.-based commercial modification centers involved about half of the B-25G series. Further development led to the B-25H, B-25J, and B-25J2. The gunship design concept dates to late 1942, and NAA sent a field technical representative to the South West Pacific Area (SWPA). The factory-produced B-25G entered production during the NA-96 order, followed by the redesigned B-25H gunship. The B-25J reverted to the bomber role, but it, too, could be outfitted as a strafer. NAA manufactured the greatest number of aircraft in World War II, the first time a company had produced trainers, bombers, and fighters simultaneously (the AT-6/SNJ Texan/Harvard, B-25 Mitchell, and the P-51 Mustang). It produced B-25s at its main Inglewood plant, and built an additional 6,608 aircraft at its Kansas City, Kansas, plant at Fairfax Airport. After the war, the USAF placed a contract for the TB-25L trainer in 1952. This was a modification program by Hayes of Birmingham, Alabama; its primary role was reciprocating-engine pilot training. A development of the B-25 was the North American XB-28 Dragon, designed as a high-altitude bomber. Two prototypes were built, with the second, the XB-28A, evaluated as a photo-reconnaissance platform, but the aircraft did not enter production.
Flight characteristics The B-25 was a safe and forgiving aircraft to fly. With one engine out, 60° banking turns into the dead engine were possible, and control could be easily maintained down to 145 mph (230 km/h). The pilot had to remember to maintain engine-out directional control at low speeds after takeoff with rudder; if this maneuver were attempted with ailerons, the aircraft could snap out of control. The tricycle landing gear made for excellent visibility while taxiing. The only significant complaint about the B-25 was its extremely noisy engines; as a result, many pilots eventually suffered from some degree of hearing loss. The high noise level was due to design and space restrictions in the engine cowlings, which resulted in the exhaust "stacks" protruding directly from the cowling ring and partly covered by a small triangular fairing. This arrangement directed exhaust and noise directly at the pilot and crew compartments. Durability The Mitchell was exceptionally sturdy and could withstand tremendous punishment. One B-25C of the 321st Bomb Group was nicknamed "Patches" because its crew chief painted all the aircraft's flak hole patches with bright yellow zinc chromate primer. By the end of the war, this aircraft had completed over 300 missions, had been belly-landed six times, and had over 400 patched holes. The airframe of "Patches" was so distorted from battle damage that straight-and-level flight required 8° of left aileron trim and 6° of right rudder, causing the aircraft to "crab" sideways across the sky. Operational history Asia-Pacific Most B-25s in American service were used in the war against Japan in Asia and the Pacific. The Mitchell fought from the Northern Pacific to the South Pacific and the Far East. These areas included the campaigns in the Aleutian Islands, Papua New Guinea, the Solomon Islands, New Britain, China, Burma and the island hopping campaign in the Central Pacific. The aircraft's potential as a ground-attack aircraft emerged during the Pacific war. The jungle environment reduced the usefulness of medium-level bombing, and made low-level attack the best tactic. Using similar mast height level tactics and skip bombing, the B-25 proved itself to be a capable anti-shipping weapon and sank many enemy sea vessels. An ever-increasing number of forward firing guns made the B-25 a formidable strafing aircraft for island warfare. The strafer models were the B-25C1/D1, the B-25J1 and with the NAA strafer nose, the J2 subseries. In Burma, the B-25 was used to attack Japanese communication links, especially bridges in central Burma. It also helped supply the besieged troops at Imphal in 1944. The China Air Task Force, the Chinese American Composite Wing, the First Air Commando Group, the 341st Bomb Group, and eventually, the relocated 12th Bomb Group, all operated the B-25 in the China Burma India Theater. Many of these missions involved battle-field isolation, interdiction, and close air support. Later in the war, as the USAAF acquired bases in other parts of the Pacific, the Mitchell could strike targets in Indochina, Formosa, and Kyushu, increasing the usefulness of the B-25. It was also used in some of the shortest raids of the Pacific War, striking from Saipan against Guam and Tinian. The 41st Bomb Group used it against Japanese-occupied islands that had been bypassed by the main campaign, such as the Marshall Islands. Middle East and Italy The first B-25s arrived in Egypt and were carrying out independent operations by October 1942. 
Operations there against Axis airfields and motorized vehicle columns supported the ground actions of the Second Battle of El Alamein. Thereafter, the aircraft took part in the rest of the campaign in North Africa, the invasion of Sicily, and the advance up Italy. From the Strait of Messina to the Aegean Sea, the B-25 conducted sea sweeps as part of the coastal air forces. In Italy, the B-25 was used in the ground attack role, concentrating on attacks against road and rail links in Italy, Austria, and the Balkans. The B-25 had a longer range than the Douglas A-20 Havoc and Douglas A-26 Invader, allowing it to reach further into occupied Europe. The five bombardment groups – 20 squadrons – of the Ninth and Twelfth Air Forces that used the B-25 in the Mediterranean Theater of Operations were the only U.S. units to employ the B-25 in Europe. Europe The RAF received nearly 900 Mitchells, using them to replace Douglas Bostons, Lockheed Venturas, and Vickers Wellington bombers. The Mitchell entered active RAF service on 22 January 1943. At first, it was used to bomb targets in occupied Europe. After the Normandy invasion, RAF and French squadrons used Mitchells in support of the Allies in Europe. Several squadrons moved to forward airbases on the continent. The USAAF did not use the B-25 in combat in the European theater of operations. USAAF The B-25B found fame as the bomber used in the 18 April 1942 Doolittle Raid, in which 16 B-25Bs led by Lieutenant Colonel Jimmy Doolittle attacked mainland Japan four months after the Japanese attack on Pearl Harbor. The mission gave a much-needed lift in morale to the Americans and alarmed the Japanese, who had believed their home islands to be inviolable by enemy forces. Although the amount of actual damage done was relatively minor, it forced the Japanese to divert troops for home defense for the remainder of the war. The raiders took off from the aircraft carrier USS Hornet and bombed Tokyo and four other Japanese cities. Fifteen of the bombers subsequently crash-landed en route to recovery fields in eastern China. The losses resulted from the task force being spotted by a Japanese vessel, which forced the bombers to take off early, fuel exhaustion, stormy nighttime conditions with zero visibility, and the failure to activate electronic homing aids at the recovery bases. Only one B-25 bomber landed intact, in Siberia, where its five-man crew was interned and the aircraft confiscated. Of the 80 aircrew members, 69 survived their historic mission and eventually made it back to American lines. Following additional modifications, including a Plexiglas dome for navigational sightings in place of the navigator's overhead window, heavier nose armament, and de-icing and anti-icing equipment, the B-25C entered USAAF operations. Through block 20, the B-25C and B-25D differed only in the location of manufacture: C series at Inglewood, California, and D series at Kansas City, Kansas. After block 20, some NA-96s began the transition to the G series, while some NA-87s acquired interim modifications eventually produced as the B-25D2 and ordered as the NA-100. NAA built a total of 3,915 B-25Cs and Ds during World War II.
Although the B-25 was designed to bomb from medium altitudes in level flight, it was frequently used in the Southwest Pacific theatre for treetop-level strafing and missions with parachute-retarded fragmentation bombs against Japanese airfields in New Guinea and the Philippines. These heavily armed Mitchells were field-modified at Townsville, Australia, under the direction of Major Paul I. "Pappy" Gunn and North American technical representative Jack Fox. These "commerce destroyers" were also used on strafing and skip bombing missions against Japanese shipping trying to resupply their armies. Under the leadership of Lieutenant General George C. Kenney, Mitchells of the Far East Air Forces and its existing components, the Fifth and Thirteenth Air Forces, devastated Japanese targets in the Southwest Pacific Theater during 1944 and 1945, and the USAAF played a significant role in pushing the Japanese back to their home islands. The type operated with great effect in the Central Pacific, Alaska, North Africa, Mediterranean, and China-Burma-India theaters. The USAAF Antisubmarine Command made great use of the B-25 in 1942 and 1943. Some of the earliest B-25 bomb groups also flew the Mitchell on coastal patrols after the Pearl Harbor attack, prior to the organization of the Army Air Forces Antisubmarine Command (AAFAC). Many of the two dozen or so antisubmarine squadrons flew the B-25C, D, and G series in the American Theater antisubmarine campaign, often in the distinctive white sea-search camouflage. Combat developments Use as a gunship In anti-shipping operations, the USAAF had an urgent need for hard-hitting aircraft, and North American responded with the B-25G. In this series, the transparent nose and bombardier/navigator position were replaced by a shorter, hatched nose with two fixed .50 in (12.7 mm) machine guns and a manually loaded 75 mm (2.95 in) M4 cannon, one of the largest weapons fitted to an aircraft, similar to the British 57 mm gun-armed Mosquito Mk. XVIII and the autoloading German 75 mm long-barrel Bordkanone BK 7,5 heavy-caliber ordnance fitted to both the Henschel Hs 129B-3 and Junkers Ju 88P-1. The B-25G's shorter nose placed the cannon breech behind the pilot, where it could be manually loaded and serviced by the navigator; his crew station was moved to a position just behind the pilot. The navigator signaled the pilot when the gun was ready and the pilot fired the weapon using a button on his control wheel. The Royal Air Force, U.S. Navy, and Soviet VVS each conducted trials with this series, but none adopted it. The G series comprised one prototype, five preproduction C conversions, 58 C series modifications, and 400 production aircraft for a total of 464 B-25Gs. The final version, the G-12, was an interim armament modification that eliminated the lower Bendix turret and added a starboard dual gun pack, waist guns, and a canopy for the tail gunner to improve the view when firing the single tail gun. In April 1945, the air depots in Hawaii refurbished about two dozen of these and included the eight-gun nose and rocket launchers in the upgrade. The B-25H series continued the development of the gunship version; NAA Inglewood produced 1,000. The H had even more firepower. Most replaced the M4 gun with the lighter T13E1, designed specifically for the aircraft, but 20-odd H-1 block aircraft completed by the Republic Aviation modification center at Evansville had the M4 and two-machine-gun nose armament.
Due to its slow rate of fire (about four rounds could be fired in a single strafing run), relative ineffectiveness against ground targets, and the substantial recoil, the 75 mm gun was sometimes removed from both G and H models and replaced with two additional .50 in (12.7 mm) machine guns as a field modification. In the new FEAF, these were redesignated the G1 and H1 series, respectively. The H series normally came from the factory mounting four fixed, forward-firing .50 in (12.7 mm) machine guns in the nose; four in a pair of under-cockpit conformal flank-mount gun pod packages (two guns per side); two more in the manned dorsal turret, relocated forward to a position just behind the cockpit (which became standard for the J-model); one each in a pair of new waist positions, introduced simultaneously with the forward-relocated dorsal turret; and lastly, a pair of guns in a new tail-gunner's position. Company promotional material bragged that the B-25H could "bring to bear 10 machine guns coming and four going, in addition to the 75 mm cannon, eight rockets, and 3,000 lb (1,360 kg) of bombs." The H had a modified cockpit with single flight controls operated by the pilot. The co-pilot's station and controls were removed and replaced by a smaller seat used by the navigator/cannoneer. The radio operator's crew position was aft of the bomb bay, with access to the waist guns. Factory production totals were 405 B-25Gs and 1,000 B-25Hs, with 248 of the latter being used by the Navy as PBJ-1Hs. Elimination of the co-pilot saved weight, and moving the dorsal turret forward partially counterbalanced the waist guns and the manned rear turret. Return to medium bomber Following the two gunship series, NAA again produced the medium bomber configuration with the B-25J series. It optimized the mix of the interim NA-100 and the H series, having both the bombardier's station and fixed guns of the D and the forward turret and refined armament of the H series. NAA also produced a strafer nose, first shipped to air depots as kits, then introduced on the production line in alternating blocks with the bombardier nose. The solid metal "strafer" nose housed eight centerline Browning M2 .50 caliber machine guns. The remainder of the armament was as in the H-5. NAA also supplied kits to mount eight underwing 5-inch High Velocity Aircraft Rockets just outside the propeller arcs. These were mounted on zero-length launch rails, four per wing. The final, and most numerous, series of the Mitchell, the B-25J, looked less like the earlier series, apart from its well-glazed bombardier's nose, which was nearly identical in appearance to that of the earliest B-25 subtypes. Instead, the J followed the overall configuration of the H series from the cockpit aft. It had the forward dorsal turret and other armament and airframe advancements. All J models included four .50 in (12.7 mm) light-barrel Browning AN/M2 guns in a pair of "fuselage packages": conformal gun pods flanking the lower cockpit, each containing two guns. By 1945, however, combat squadrons had removed these. The J series restored the co-pilot's seat and dual flight controls. The factory made kits available to the Air Depot system to create the strafer-nose B-25J-2.
This configuration carried a total of 18 .50 in (12.7 mm) light-barrel AN/M2 Browning machine guns: eight in the nose, four in the flank-mount conformal gun pod packages, two in the dorsal turret, one each in the pair of waist positions, and a pair in the tail – with 14 of the guns aimed directly forward for strafing missions. Some aircraft had eight 5-inch (130 mm) high-velocity aircraft rockets. NAA introduced the J-2 into production in alternating blocks beginning with the J-22. Total J series production was 4,318. Postwar (USAF) use In 1947, legislation created an independent United States Air Force; by that time, the B-25 inventory numbered only a few hundred. Some B-25s continued in service into the 1950s in training, reconnaissance, and support roles. The principal use during this period was undergraduate training of multiengine aircraft pilots slated for reciprocating engine or turboprop cargo, aerial refueling, or reconnaissance aircraft. Others were assigned to units of the Air National Guard in training roles in support of Northrop F-89 Scorpion and Lockheed F-94 Starfire operations. During its USAF tenure, many B-25s received the so-called "Hayes modification"; as a result, surviving B-25s often have exhaust systems with a semi-collector ring that splits emissions into two different systems. The upper seven cylinders are collected by a ring, while the other cylinders remain directed to individual ports. TB-25J-25-NC Mitchell 44-30854, the last B-25 in the USAF inventory (assigned to March AFB, California, as of March 1960), was flown from Turner Air Force Base, Georgia, to Eglin AFB, Florida, on 21 May 1960, the last flight by a USAF B-25. It was presented by Brigadier General A. J. Russell, Commander of SAC's 822d Air Division at Turner AFB, to the Air Proving Ground Center Commander, Brigadier General Robert H. Warren. He in turn presented the bomber to Valparaiso, Florida, Mayor Randall Roberts on behalf of the Niceville-Valparaiso Chamber of Commerce. Four of the original Tokyo Raiders were present for the ceremony: Colonel (later Major General) David Jones, Colonel Jack Simms, Lieutenant Colonel Joseph Manske, and retired Master Sergeant Edwin W. Horton. It was donated back to the Air Force Armament Museum c. 1974 and marked as Doolittle's 40-2344. U.S. Navy and USMC The U.S. Navy designation for the Mitchell was the PBJ-1, and apart from increased use of radar, it was configured like its Army Air Forces counterparts. Under the pre-1962 USN/USMC/USCG aircraft designation system, PBJ-1 stood for Patrol (P) Bomber (B) built by North American Aviation (J), first variant (-1). The PBJ had its origin in an inter-service agreement of mid-1942 between the Navy and the USAAF, under which the Boeing Renton plant was given over to B-29 Superfortress production. The Boeing XPBB Sea Ranger flying boat, competing for B-29 engines, was cancelled in exchange for part of the Kansas City Mitchell production. Other terms included the interservice transfer of 50 B-25Cs and 152 B-25Ds to the Navy. The bombers carried Navy bureau numbers (BuNos), beginning with BuNo 34998. The first PBJ-1 arrived in February 1943, and nearly all reached Marine Corps squadrons, beginning with Marine Bombing Squadron 413 (VMB-413). Following the AAFAC format, the Marine Mitchells had search radar in a retractable radome replacing the remotely operated ventral turret.
Later D and J series had nose-mounted APS-3 radar; and later still, J and H series mounted radar in the starboard wingtip. The large quantities of B-25H and J series became known as PBJ-1H and PBJ-1J, respectively. These aircraft often operated along with earlier PBJ series in Marine squadrons. The PBJs were operated almost exclusively by the Marine Corps as land-based bombers. The U.S. Marine Corps established Marine bomber squadrons (VMB), beginning with VMB-413, in March 1943 at MCAS Cherry Point, North Carolina. Eight VMB squadrons were flying PBJs by the end of 1943 as the initial Marine medium bombardment group. Four more squadrons were in the process of formation in late 1945, but had not yet deployed by the time the war ended. Operations of the Marine Corps PBJ-1s began in March 1944. The Marine PBJs flew from the Philippines, Saipan, Iwo Jima, and Okinawa during the last few months of the Pacific war. Their primary mission was the long-range interdiction of enemy shipping trying to run the blockade, which was strangling Japan. The weapon of choice during these missions was usually the five-inch HVAR rocket, eight of which could be carried. Some VMB-612 intruder PBJ-1D and J series planes flew without top turrets to save weight and increase range on night patrols, especially towards the end of the war when air superiority had been achieved. During the war, the Navy tested the cannon-armed G series and conducted carrier trials with an H equipped with arresting gear. After World War II, some PBJs stationed at the Navy's rocket laboratory in Inyokern, California, site of the present-day Naval Air Weapons Station China Lake, tested air-to-ground rockets and arrangements. One arrangement was a twin-barrel nose that could fire 10 spin-stabilized five-inch rockets in one salvo. Royal Air Force The Royal Air Force (RAF) was an early customer for the B-25 via Lend-Lease. The first Mitchells were given the service name Mitchell I by the RAF and were delivered in August 1941, to No. 111 Operational Training Unit based in the Bahamas. These bombers were used exclusively for training and familiarization and never became operational. The B-25Cs and Ds were designated Mitchell II. Altogether, 167 B-25Cs and 371 B-25Ds were delivered to the RAF. The RAF tested the cannon-armed G series but did not adopt the series nor the follow-on H series. By the end of 1942, the RAF had taken delivery of 93 Mitchells, marks I and II. Some served with squadrons of No. 2 Group RAF, the RAF's tactical medium-bomber force, including No. 139 Wing RAF at RAF Dunsfold. The first RAF operation with the Mitchell II took place on 22 January 1943, when six aircraft from No. 180 Squadron RAF attacked oil installations at Ghent. After the invasion of Europe (by which point 2 Group was part of Second Tactical Air Force), all four Mitchell squadrons moved to bases in France and Belgium (Melsbroek) to support Allied ground forces. The British Mitchell squadrons were joined by No. 342 (Lorraine) Squadron of the French Air Force in April 1945. As part of its move from Bomber Command, No 305 (Polish) Squadron flew Mitchell IIs from September to December 1943 before converting to the de Havilland Mosquito. In addition to No. 2 Group, the B-25 was used by various second-line RAF units in the UK and abroad. In the Far East, No. 3 PRU, which consisted of Nos. 681 and 684 Squadrons, flew the Mitchell (primarily Mk IIs) on photographic reconnaissance sorties. 
Royal Canadian Air Force The Royal Canadian Air Force (RCAF) used the B-25 Mitchell for training during the war. Postwar use continued operations with most of the 162 Mitchells received. The first B-25s had been diverted to Canada from RAF orders. These included one Mitchell I, 42 Mitchell IIs, and 19 Mitchell IIIs. No 13 (P) Squadron was formed unofficially at RCAF Rockcliffe in May 1944 and used Mitchell IIs on high-altitude aerial photography sorties. No. 5 Operational Training Unit at Boundary Bay, British Columbia and Abbotsford, British Columbia, operated the B-25D Mitchell in the training role together with B-24 Liberators for Heavy Conversion as part of the BCATP. The RCAF retained the Mitchell until October 1963. No 418 (Auxiliary) Squadron received its first Mitchell IIs in January 1947. It was followed by No 406 (auxiliary), which flew Mitchell IIs and IIIs from April 1947 to June 1958. No 418 operated a mix of IIs and IIIs until March 1958. No 12 Squadron of Air Transport Command also flew Mitchell IIIs along with other types from September 1956 to November 1960. In 1951, the RCAF received an additional 75 B-25Js from USAF stocks to make up for attrition and to equip various second-line units. Royal Australian Air Force The Australians received Mitchells by the spring of 1944. The joint Australian-Dutch No. 18 (Netherlands East Indies) Squadron RAAF had more than enough Mitchells for one squadron, so the surplus went to re-equip the RAAF's No. 2 Squadron, replacing their Beauforts. Dutch Air Force During World War II, the Mitchell served in fairly large numbers with the Air Force of the Dutch government-in-exile. They participated in combat in the East Indies, as well as on the European front. On 30 June 1941, the Netherlands Purchasing Commission, acting on behalf of the Dutch government-in-exile in London, signed a contract with North American Aviation for 162 B-25C aircraft. The bombers were to be delivered to the Netherlands East Indies to help deter any Japanese aggression into the region. In February 1942, the British Overseas Airways Corporation agreed to ferry 20 Dutch B-25s from Florida to Australia travelling via Africa and India, and an additional 10 via the South Pacific route from California. During March, five of the bombers on the Dutch order had reached Bangalore, India, and 12 had reached Archerfield in Australia. The B-25s in Australia were used as the nucleus of a new squadron, No. 18. This squadron was staffed jointly by Australian and Dutch aircrews plus a smattering of aircrews from other nations and operated under Royal Australian Air Force command for the remainder of the war. The B-25s of No. 18 Squadron were painted with the Dutch national insignia (at that time a rectangular Netherlands flag) and carried NEIAF serials. Discounting the ten "temporary" B-25s delivered to 18 Squadron in early 1942, a total of 150 Mitchells were taken on strength by the NEIAF, 19 in 1942, 16 in 1943, 87 in 1944, and 28 in 1945. They flew bombing raids against Japanese targets in the East Indies. In 1944, the more capable B-25J Mitchells replaced most of the earlier C and D models. In June 1940, No. 320 (Netherlands) Squadron RAF had been formed from personnel formerly serving with the Royal Dutch Naval Air Service, who had escaped to England after the German occupation of the Netherlands. Equipped with various British aircraft, No. 320 Squadron flew antisubmarine patrols, convoy escort missions, and performed air-sea rescue duties. 
They acquired the Mitchell II in September 1943, performing operations over Europe against gun emplacements, railway yards, bridges, troops, and other tactical targets. They moved to Belgium in October 1944, and transitioned to the Mitchell III in 1945. No. 320 Squadron was disbanded in August 1945. Following the war, B-25s were used by Dutch forces during the Indonesian National Revolution. Soviet Air Force The USSR received 862 B-25s (B, C, D, G, and J types) from the United States under Lend-Lease during World War II via the Alaska–Siberia ALSIB ferry route. A total of 870 B-25s were sent to the Soviets, meaning that eight aircraft were lost in transit. Other damaged B-25s arrived or crashed in the Far East of Russia, and one Doolittle Raid aircraft landed there short of fuel after attacking Japan. This aircraft, the lone airworthy Doolittle Raid machine to reach the Soviet Union, was lost in a hangar fire in the early 1950s while undergoing routine maintenance. In general, the B-25 was operated as a ground-support and tactical daylight bomber (as similar Douglas A-20 Havocs were used). It saw action from Stalingrad (with B/C/D models) to the German surrender in May 1945 (with G/J types). The B-25s that remained in Soviet Air Force service after the war were assigned the NATO reporting name "Bank". China Well over 100 B-25Cs and Ds were supplied to the Nationalist Chinese during the Second Sino-Japanese War. In addition, a total of 131 B-25Js were supplied to China under Lend-Lease. The four squadrons (1st, 2nd, 3rd, and 4th) of the 1st Medium Bomber Group were formed during the war. They had formerly operated Soviet-built Tupolev SB bombers before converting to the B-25. The 1st Bomb Group was under the command of the Chinese-American Composite Wing while operating B-25s. Following the end of the war in the Pacific, these four bombardment squadrons were committed to fighting the Communist insurgency that was rapidly spreading throughout the country. During the Chinese Civil War, Chinese Mitchells fought alongside de Havilland Mosquitos. In December 1948, the Nationalists were forced to retreat to the island of Taiwan, taking many of their Mitchells with them. However, some B-25s were left behind and were pressed into service with the air force of the new People's Republic of China. Brazilian Air Force During the war, the Força Aérea Brasileira received a few B-25s under Lend-Lease. Brazil declared war against the Axis powers in August 1942 and participated in the war against the U-boats in the southern Atlantic. The last Brazilian B-25 was finally declared surplus in 1970. Free French The Royal Air Force issued at least 21 Mitchell IIIs to No. 342 Squadron, which was made up primarily of Free French aircrews. Following the liberation of France, this squadron transferred to the newly formed French Air Force (Armée de l'Air) as GB I/20 Lorraine. The aircraft continued in operation after the war, with some being converted into fast VIP transports. They were struck off charge in June 1947. Biafra In October 1967, during the Nigerian Civil War, Biafra bought two Mitchells. After a few bombing missions in November, they were put out of action in December. Variants B-25 The initial production version, powered by Wright R-2600-9 engines, carried up to 3,600 lb (1,600 kg) of bombs and a defensive armament of three .30 in machine guns in nose, waist, and ventral positions, with one .50 in machine gun in the tail. The first nine aircraft were built with a constant dihedral angle.
Due to low stability, the wing was redesigned so that the dihedral was eliminated on the outboard section (number made: 24). B-25A This version of the B-25 was modified to make it combat ready; additions included self-sealing fuel tanks, crew armor, and an improved tail-gunner station. No changes were made in the armament. It was redesignated RB-25A when declared obsolete in 1942 (number made: 40). B-25B The tail gun position was removed and replaced by a manned dorsal turret on the rear fuselage and a retractable, remotely operated ventral turret, each with a pair of .50 in (12.7 mm) machine guns. A total of 120 were built (this version was used in the Doolittle Raid), of which 23 were supplied to the Royal Air Force as the Mitchell Mk I. B-25C An improved version of the B-25B, its powerplants were upgraded from Wright R-2600-9 radials to R-2600-13s; de-icing and anti-icing equipment were added; the navigator received a sighting blister; and nose armament was increased to two .50 in (12.7 mm) machine guns, one fixed and one flexible. The B-25C model was the first mass-produced B-25 version; it was also used in the United Kingdom (as the Mitchell Mk II), in Canada, China, the Netherlands, and the Soviet Union (number made: 1,625). ZB-25C B-25D Through block 20, the series was nearly identical to the B-25C. The series designation differed in that the B-25D was made in Kansas City, Kansas, whereas the B-25C was made in Inglewood, California. The B-25D first flew on 3 January 1942; later blocks with interim armament upgrades were known as the D2 (number made: 2,290). F-10 The F-10 designation distinguished 45 B-25Ds modified for photographic reconnaissance. All armament, armor, and bombing equipment were stripped. Three K.17 cameras were installed, one pointing down and two more mounted at oblique angles within blisters on each side of the nose. Optionally, a second downward-pointing camera could also be installed in the aft fuselage. Although designed for combat operations, these aircraft were mainly used for ground mapping. B-25D weather reconnaissance variant In 1944, four B-25Ds were converted for weather reconnaissance. One later user was the 53d Weather Reconnaissance Squadron, originally called the Army Hurricane Reconnaissance Unit, now called the "Hurricane Hunters". Weather reconnaissance first started in 1943 with the 1st Weather Reconnaissance Squadron, with flights on the North Atlantic ferry routes. ZB-25D XB-25E A single B-25C was modified to test de-icing and anti-icing equipment that circulated exhaust from the engines in chambers in the leading and trailing edges and empennage. The aircraft was tested for almost two years, beginning in 1942; while the system proved extremely effective, no production models were built that used it before the end of World War II. Many surviving B-25s flown as warbirds today use the de-icing system from the XB-25E (number made: 1, converted). ZXB-25E XB-25F A modified B-25C, it used insulated electrical coils mounted inside the wing and empennage leading edges to test their effectiveness as a de-icing system. The hot air de-icing system tested on the XB-25E was determined to be the more practical of the two (number made: 1, converted). XB-25G This modified B-25C had the transparent nose replaced to create a short-nosed gunship carrying two fixed .50 in (12.7 mm) machine guns and a 75 mm (2.95 in) M4 cannon, then the largest weapon ever carried on an American bomber (number made: 1, converted).
B-25G The B-25G followed the success of the prototype XB-25G, and production was a continuation of the NA-96. The production model featured increased armor and a greater fuel supply than the XB-25G. One B-25G was passed to the British, who gave it the Mitchell II name that had been used for the B-25C. The USSR also tested the G (number made: 463; five converted Cs, 58 modified Cs, 400 production). B-25H An improved version of the B-25G, this version relocated the manned dorsal turret to a more forward location on the fuselage just aft of the flight deck. It also featured two additional fixed .50 in (12.7 mm) machine guns in the nose and, from the H-5 onward, four in fuselage-mounted pods. The lightweight 75 mm (2.95 in) T13E1 cannon replaced the heavier M4. Single controls were installed from the factory, with the navigator in the right seat (number made: 1,000; two airworthy). B-25J-NC Follow-on production at Kansas City, the B-25J could be called a cross between the B-25D and the B-25H. It had a transparent nose, but many of the delivered aircraft were modified to have a strafer nose (J2). Most of its 14–18 machine guns were forward-facing for strafing missions, including the two guns of the forward-located dorsal turret. The RAF received 316 aircraft, which were known as the Mitchell III. The J series was the last factory series production of the B-25 (number made: 4,318). CB-25J Utility transport version VB-25J A number of B-25s were converted for use as staff and VIP transports. Henry H. Arnold and Dwight D. Eisenhower both used converted B-25Js as their personal transports. The last VB-25J in active service was retired in May 1960 at Eglin Air Force Base in Florida. Trainer variants Most models of the B-25 were used at some point as training aircraft. TB-25D Originally designated AT-24A (Advanced Trainer, Model 24, Version A), a trainer modification of the B-25D, often with the dorsal turret omitted; in total, 60 AT-24s were built. TB-25G Originally designated AT-24B, trainer modification of the B-25G TB-25C Originally designated AT-24C, trainer modification of the B-25C TB-25J Originally designated AT-24D, trainer modification of the B-25J; another 600 B-25Js were modified after the war. TB-25K Hughes E-1 fire-control radar trainer (number made: 117) TB-25L Hayes pilot-trainer conversion (number made: 90) TB-25M Hughes E-5 fire-control radar trainer (number made: 40) TB-25N Hayes navigator-trainer conversion (number made: 47) U.S. Navy / U.S. Marine Corps variants PBJ-1C Similar to the B-25C for the U.S. Navy, it was often fitted with airborne search radar and used in the antisubmarine role. PBJ-1D Similar to the B-25D for the U.S. Navy and U.S. Marine Corps, it differed in having a single .50 in (12.7 mm) machine gun in the tail turret and waist gun positions similar to the B-25H. Often it was fitted with airborne search radar and used in the antisubmarine role. PBJ-1G U.S. Navy/U.S. Marine Corps designation for the B-25G, trials only PBJ-1H U.S. Navy/U.S. Marine Corps designation for the B-25H. One PBJ-1H was modified with carrier takeoff and landing equipment and successfully tested on the USS Shangri-La, but the Navy did not continue development. PBJ-1J U.S. Navy designation for the B-25J (Blocks −1 through −35), it had improvements in radio and other equipment. Besides the standard armament package, the Marines often fitted it with 5-inch underwing rockets and search radar for the antishipping/antisubmarine role. The large Tiny Tim rocket was also used in 1945.
Operators Argentina An ex-USAAF TB-25N (s/n 44-31173) was acquired in June 1961 and registered locally as LV-GXH; it was privately operated as a smuggling aircraft. It was confiscated by provincial authorities in 1971 and handed over to Empresa Provincial de Aviacion Civil de San Juan, which operated it until its retirement due to a double engine failure in 1976. Currently, it is under restoration to airworthiness. Royal Australian Air Force – 50 aircraft, operated by No. 2 Squadron RAAF and three joint units with Military Aviation – Royal Dutch East Indies Army (ML-KNIL): No. 18 (Netherlands East Indies) Squadron RAAF; No. 19 (Netherlands East Indies) Squadron RAAF; and No. 119 (Netherlands East Indies) Squadron RAAF. Biafran Air Force operated two aircraft. Bolivian Air Force operated 13 aircraft. Brazilian Air Force operated 75 aircraft, including B-25B, B-25C, and B-25J. Royal Canadian Air Force operated 164 aircraft in bomber, light transport, trainer, and "special" mission roles. No. 13 (P) Squadron Mitchell II at RCAF Station Rockcliffe No. 406 Auxiliary Squadron Mitchell III Republic of China Air Force operated more than 180 aircraft. People's Liberation Army Air Force operated captured Nationalist Chinese aircraft. Chilean Air Force operated 12 aircraft. Colombian Air Force operated three aircraft. Cuban Army Air Force (Fuerza Aérea del Ejército de Cuba / Cuerpo de Aviación del Ejército de Cuba) operated six aircraft. Dominican Air Force operated five aircraft. French Air Force operated 11 aircraft. Free French Air Force operated 18 aircraft. Indonesian Air Force – in 1950, received some B-25 Mitchells previously operated by the Military Aviation – Royal Dutch East Indies Army (ML-KNIL). The last of these served the Indonesian military until 1979. Mexican Air Force received three B-25Js in December 1945, which remained in use until at least 1950. Eight Mexican civil registrations were allocated to B-25s, including one aircraft registered to the Bank of Mexico but used by the President of Mexico. Military Aviation – Royal Dutch East Indies Army (ML-KNIL; 1942–1950): 149 aircraft (initially in three joint units with the Royal Australian Air Force) during World War II and the Indonesian War of Independence: No. 18 Squadron (NEI) RAAF/18 Squadron ML-KNIL (1942–1950) – bomber No. 119 (Netherlands East Indies) Squadron RAAF (1943) – bomber No. 19 Squadron (NEI) RAAF/19 Squadron ML-KNIL (1944–1948) – transport 16 Squadron ML-KNIL (1946–1948) – ground attack 20 Squadron ML-KNIL (1946–1950) – transport Naval Aviation Service (MLD) – 107 aircraft; initially in a joint unit with the UK Royal Air Force: No. 320 (Netherlands) Squadron RAF (1942–1946) Peruvian Air Force received eight B-25Js in 1947, which formed Bomber Squadron N° 21 at Talara. Polish Air Forces in exile in Great Britain No. 305 Polish Bomber Squadron Spanish Air Force operated one ex-USAAF example, interned in 1944 and flown between 1948 and 1956. Soviet Air Force (Voyenno-Vozdushnye Sily, VVS) received a total of 866 B-25s of the C, D, G*, and J series. * trials only (5). Royal Air Force received just over 700 aircraft. No. 98 Squadron RAF – September 1942 – November 1945 (converted to the Mosquito) No. 180 Squadron RAF – September 1942 – September 1945 (converted to the Mosquito) No. 226 Squadron RAF – May 1943 – September 1945 (disbanded) No. 305 Polish Bomber Squadron – September 1943 – December 1943 (converted to the Mosquito) No. 320 (Netherlands) Squadron RAF – March 1943 – August 1945 (transferred to Netherlands) No.
342 (GB I/20 'Lorraine') Squadron RAF – March 1945 – December 1945 (transferred to France) No. 681 Squadron RAF – January 1943 – December 1943 (Mitchell withdrawn) No. 684 Squadron RAF – September 1943 – April 1944 (replaced by Mosquito) No. 111 Operational Training Unit RAF, Nassau Airport, Bahamas, August 1942 – August 1945 (disbanded) Royal Navy Fleet Air Arm operated one aircraft for evaluation United States Army Air Forces see B-25 Mitchell units of the United States Army Air Forces United States Navy received 706 aircraft, most of which were then transferred to the USMC. United States Marine Corps Uruguayan Air Force operated 15 aircraft. Venezuelan Air Force operated 24 aircraft. Accidents and incidents Empire State Building crash At 9:40 a.m. on 28 July 1945, a USAAF B-25D crashed in thick fog into the north side of the Empire State Building between the 79th and 80th floors. Fourteen people died: 11 in the building and the three occupants of the aircraft, including the pilot, Colonel William F. Smith. Betty Lou Oliver, an elevator attendant, survived the impact and the subsequent fall of the elevator cage 75 stories to the basement. French general Philippe Leclerc was aboard his North American B-25 Mitchell, Tailly II, when it crashed near Colomb-Béchar in French Algeria on 28 November 1947, killing everyone on board. Lake Erie skydiving disaster Shortly after 16:00 on 27 August 1967, a converted civilian B-25 mistakenly dropped eighteen skydivers over Lake Erie, four or five nautical miles (7.5–9.3 km) from Huron, Ohio. The air traffic controller had confused the B-25 with a Cessna 180 Skywagon that was trailing it to take photographs, causing the B-25 pilot to think he was over the intended drop site at Ortner Airport. Sixteen of the jumpers drowned, while two were rescued. A National Transportation Safety Board report faulted the pilot, and to a lesser extent the skydivers, for executing a jump when they could not see the ground, and faulted the controller for the misidentification. The United States was subsequently held liable for the controller's negligence. Surviving aircraft Many B-25s are currently kept in airworthy condition by air museums and collectors. Specifications (B-25H) Notable appearances in media See also Notes References
External links North American B-25 Mitchell Joe Baugher, American Military Aircraft: US Bomber Aircraft I Fly Mitchell's, February 1944 Popular Science article on B-25s in North Africa Theater Flying Big Gun, February 1944, Popular Science article on 75 mm cannon mount Early B-25 model's tail gun position, extremely rare photo A collection of photos of the Marine VMB-613 post on Kwajalein Island at the University of Houston Digital Library Hi-res spherical panoramas; B-25H: A look inside & out – "Barbie III" (1943) Report No. NA-5785 Temporary Handbook of Erection and Maintenance Instructions for the B-25 H-1-NA Medium Bombardment Airplanes "The B-25 Mitchell in the USSR", an account of the service history of the Mitchell in the Soviet Union's VVS during World War II Lake Murray's Mitchell B-25 Recovery and Preservation Project Rubicon Foundation
Sir Robert Charlton (11 October 1937 – 21 October 2023) was an English professional footballer who played as an attacking midfielder, central midfielder, and left winger. Widely considered one of the greatest players of all time, he was a member of the England team that won the 1966 FIFA World Cup, the year he also won the Ballon d'Or. He finished second in the Ballon d'Or voting in 1967 and 1968. He played almost all of his club football at Manchester United, where he became renowned for his attacking instincts, passing abilities from midfield, ferocious long-range shooting from both left and right foot, fitness, and stamina. He was cautioned only twice in his career; once against Argentina in the 1966 World Cup, and once in a league match against Chelsea. With success at club and international level, he was one of nine players to have won the FIFA World Cup, the European Cup and the Ballon d'Or. His elder brother Jack, who was also in the World Cup–winning team, was a former defender for Leeds United and international manager. Born in Ashington, Northumberland, Charlton made his debut for the Manchester United first-team in 1956, aged 18, and soon gained a regular place in the team, during which time he became a Football League First Division champion in 1957 then survived the Munich air disaster of February 1958 after being rescued by teammate Harry Gregg; Charlton was the last survivor of the crash from the club. After helping United to win the FA Cup in 1963 and the Football League in 1965 and 1967, he captained the team that won the European Cup in 1968, scoring two goals in the final to help them become the first English club to win the competition. Charlton left Manchester United to become manager of Preston North End for the 1973–74 season. He changed to player-manager the following season. He next accepted a post as a director with Wigan Athletic, then became a member of Manchester United's board of directors in 1984. At international level, Charlton was named in the England squad for four World Cups (1958, 1962, 1966, and 1970), though he did not play in the first. At the time of his retirement from the England team in 1970, he was the nation's most capped player, having turned out 106 times at the highest level; Bobby Moore overtook this in 1973. Charlton was the long-time record goalscorer for both Manchester United and England, and United's long-time record appearance maker – his total of 758 matches for United took until 2008 to be beaten, when Ryan Giggs did so in that year's Champions League final. With 249 goals, he was the club's highest all-time goalscorer for more than 40 years, until his record was surpassed by Wayne Rooney in 2017. He is also the third-highest goalscorer for England; his record of 49 goals was beaten in 2015 by Rooney, and again by Harry Kane in 2022. Early life Robert Charlton was born on 11 October 1937 in Ashington, Northumberland, England, to coal miner Robert "Bob" Charlton (24 May 1909 – April 1982) and Elizabeth Ellen "Cissie" Charlton (née Milburn; 11 November 1912 – 25 March 1996). He was related to several professional footballers on his mother's side of the family: his uncles were Jack Milburn (Leeds United and Bradford City), George Milburn (Leeds United and Chesterfield), Jim Milburn (Leeds United and Bradford Park Avenue) and Stan Milburn (Chesterfield, Leicester City and Rochdale), and legendary Newcastle United and England footballer Jackie Milburn was his mother's cousin. 
However, Charlton credited much of the early development of his career to his grandfather Tanner and his mother Cissie. His elder brother, Jack, initially worked as a miner before applying to the police, only to also become a professional footballer with Leeds United. Club career On 9 February 1953, then a Bedlington Grammar School pupil, Charlton was spotted playing for East Northumberland schools by Manchester United chief scout Joe Armstrong. Charlton went on to play for England Schoolboys and the 15-year-old signed amateur forms with United on 1 January 1953 along with Wilf McGuinness, also aged 15. Initially his mother was reluctant to let him commit to an insecure football career, so he began an apprenticeship as an electrical engineer; however, he went on to turn professional in October 1954. Charlton became one of the famed Busby Babes, the collection of talented footballers who emerged through the system at Old Trafford in the 1940s, 1950s and 1960s as Matt Busby set about a long-term plan of rebuilding the club after the Second World War. He worked his way through the pecking order of teams, scoring regularly for the youth and reserve sides before he was handed his first team debut against Charlton Athletic in October 1956. At the same time, he was doing his National service with the Royal Army Ordnance Corps in Shrewsbury, where Busby had advised him to apply as it meant he could still play for Manchester United at the weekend. Also doing his army service in Shrewsbury at the same time was his United teammate Duncan Edwards. Charlton played 17 times for United in that first season, scoring twice on his debut and managing a total of 12 goals in all competitions, and including a hat-trick in a 5–1 away win over Charlton Athletic in February. United won the league championship but were denied the 20th century's first "double" when they controversially lost the 1957 FA Cup Final to Aston Villa. Charlton, still only 19, was selected for the game, which saw United goalkeeper Ray Wood carried off with a broken cheekbone after a clash with Villa centre forward Peter McParland. Charlton was a candidate to go in goal to replace Wood (in the days before substitutes, and certainly before goalkeeping substitutes), but it was teammate Jackie Blanchflower who ended up playing in goal. Charlton was an established player by the time the next season was fully underway, which saw United, as current League champions, become the first English team to compete in the European Cup. Previously, the Football Association had scorned the competition, but United made progress, reaching the semi-finals where they lost to holders Real Madrid. Their reputation was further enhanced the next season in the 1957–58 European Cup as they reached the quarter-finals to play Red Star Belgrade. In the first leg at home, United won 2–1. The return in Yugoslavia saw Charlton score twice as United stormed 3–0 ahead, although the hosts came back to earn a 3–3 draw. However, United maintained their aggregate lead to reach the last four and were in jubilant mood as they left to catch their flight home, thinking of an important League game against Wolves at the weekend. Munich air disaster The aeroplane which took the United players and staff home from Zemun Airport needed to stop in Munich to refuel. 
This was carried out in worsening weather, and by the time the refuelling was complete and the call was made for the passengers to re-board the aircraft, the wintry showers had taken hold and snow had settled heavily on the runway and around the airport. There were two aborted take-offs which led to concern on board, and the passengers were advised by a stewardess to disembark again while a minor technical error was fixed. The team were back in the airport terminal for barely ten minutes when the call came to reconvene on the plane, and a number of passengers began to feel nervous. Charlton and teammate Dennis Viollet swapped places with Tommy Taylor and David Pegg, who had decided they would be safer at the back of the plane. The plane clipped the fence at the end of the runway on its next take-off attempt and a wing tore through a nearby house, setting it alight. The wing and part of the tail came off and hit a tree and a wooden hut, the plane spinning along the snow until coming to a halt. It had been cut in half. Charlton, strapped into his seat, had fallen out of the cabin; when United goalkeeper Harry Gregg (who had somehow got through a hole in the plane unscathed and begun a one-man rescue mission) found him, he thought he was dead. Nevertheless, he grabbed both Charlton and Viollet by their trouser waistbands and dragged them away from the plane, in constant fear that it would explode. Gregg returned to the plane to try to help the appallingly injured Busby and Blanchflower, and when he turned around again, he was relieved to see that Charlton and Viollet, both of whom he had presumed to be dead, had got out of their detached seats and were looking into the wreckage. Charlton suffered cuts to his head and severe shock, and was in hospital for a week. Seven of his teammates had perished at the scene, including Taylor and Pegg, with whom he and Viollet had swapped seats prior to the fatal take-off attempt. Club captain Roger Byrne was also killed, along with Mark Jones, Billy Whelan, Eddie Colman and Geoff Bent. Duncan Edwards died a fortnight later from the injuries he had sustained. In total, the crash claimed 23 lives. Initially, ice on the wings was blamed, but a later inquiry declared that slush on the runway had made a safe take-off almost impossible. Of the 44 passengers and crew (including the 17-strong Manchester United squad), 23 people (eight of them Manchester United players) died as a result of their injuries in the crash. Charlton survived with minor injuries. Of the eight other players who survived, two of them were injured so badly that they never played again. Charlton was the first injured survivor to leave hospital. Harry Gregg and Bill Foulkes were not hospitalised, for they escaped uninjured. He arrived back in England on 14 February 1958, eight days after the crash. As he convalesced with family in Ashington, he spent some time kicking a ball around with local youths, and a famous photograph of him was taken. He was still only 20 years old, yet now there was an expectation that he would help with the rebuilding of the club as Busby's aides tried to piece together what remained of the season. Between Harry Gregg's death in 2020 and his own in 2023, Charlton was the last living survivor of the crash. Resuming his career Charlton returned to playing in an FA Cup tie against West Bromwich Albion on 1 March; the game was a draw and United won the replay 1–0. Not unexpectedly, United went out of the European Cup to A.C. 
Milan in the semi-finals, losing 5–2 on aggregate, and fell behind in the League. Yet somehow they reached their second consecutive FA Cup final, and the big day at Wembley coincided with Busby's return to work. However, Nat Lofthouse scored twice to give Bolton Wanderers a 2–0 win. Further success with Manchester United came at last when they beat Leicester City 3–1 in the FA Cup final of 1963, with Charlton finally earning a winners' medal in his third final. Busby's post-Munich rebuilding programme continued to progress, with two League championships within three seasons, in 1965 and 1967. A successful (though trophyless) season with Manchester United saw him take the honours of Football Writers' Association Footballer of the Year and European Footballer of the Year. Manchester United reached the 1968 European Cup Final, ten seasons after Munich. Even though other clubs had taken part in the competition in the intervening decade, the team which got to this final was still the first English side to do so. On a highly emotional night at Wembley, Charlton scored twice in a 4–1 win after extra time against Benfica and, as United captain, lifted the trophy. During the early 1970s, Manchester United were no longer competing among the top teams in England, and at several stages were battling against relegation. At times, Charlton was not on speaking terms with United's other superstars, George Best and Denis Law, and Best refused to play in Charlton's testimonial match against Celtic, saying that "to do so would be hypocritical". Charlton left Manchester United at the end of the 1972–73 season, having scored 249 goals and set a club record of 758 appearances, a record which Ryan Giggs broke in the 2008 UEFA Champions League Final. Charlton's last game for Manchester United was against Chelsea at Stamford Bridge on 28 April 1973. Chelsea won the match 1–0. Coincidentally, this day also marked his brother Jack's last appearance for Leeds. Charlton's final goal for the club came a month earlier, on 31 March, in a 2–0 win at Southampton, also in the First Division. Charlton was the subject of an episode of This Is Your Life in 1969 when he was surprised by Eamonn Andrews at The Sportsman's Club in central London. International career Charlton's emergence as the country's leading young football talent was confirmed when he was called up to join the England squad for a British Home Championship game against Scotland at Hampden Park on 19 April 1958, just over two months after he had survived the Munich air disaster. Charlton was handed his debut as England romped home 4–0, with the new player gaining even more admirers after scoring a thumping volley from a cross by the left winger Tom Finney. He scored both goals in his second game as England beat Portugal 2–1 in a friendly at Wembley, and overcame obvious nerves on a return to Belgrade to play his third match against Yugoslavia; England lost that game 5–0 and Charlton played poorly. Charlton was selected for the squad which competed at the 1958 World Cup in Sweden, but he did not play. In 1959, Charlton scored a hat-trick as England demolished the US 8–1, and his second England hat-trick came in 1961 in an 8–0 thrashing of Mexico. He also scored in every British Home Championship tournament he played in except 1963, in an association with the tournament that lasted from 1958 to 1970 and included 16 goals and 10 tournament victories (five shared).
1962 World Cup Charlton played in qualifiers for the 1962 World Cup in Chile against Luxembourg and Portugal and was named in the squad for the finals themselves. His goal in the 3–1 group win over Argentina was his 25th for England in just 38 appearances, and he was still only 24 years old; but his individual success could not be replicated by that of the team, which was eliminated in the quarter-final by Brazil, who went on to win the tournament. By now, England were coached by Alf Ramsey, who had managed to gain sole control of the recruitment and team selection procedure from the committee-based call-up system which had lasted up to the previous World Cup. Ramsey had already cleared out some of the older players who had been reliant on the loyalty of the committee for their continued selection. A hat-trick in the 8–1 rout of Switzerland in June 1963 took Charlton's England goal tally to 30, equalling the record jointly held by Tom Finney and Nat Lofthouse; Charlton's 31st goal, against Wales in October the same year, gave him the record alone. Charlton's role was developing from traditional inside-forward to what today would be termed an attacking midfield player, with Ramsey planning to build the team for the 1966 World Cup around him. When England beat the USA 10–0 in a friendly on 27 May 1964, he scored one goal, his 33rd at senior level for England. His goals became a little less frequent, and indeed Jimmy Greaves, playing purely as a striker, overtook his England tally in October 1964. Nevertheless, Charlton was still scoring and creating freely, and as the tournament was about to start he was expected to become one of its stars and galvanise his established reputation as one of the world's best footballers. 1966 World Cup England drew the opening game of the tournament 0–0 with Uruguay. Charlton scored the first goal in the 2–0 win over Mexico. This was followed by an identical scoreline against France, allowing England to qualify for the quarter-finals, where they defeated Argentina 1–0. The game was the only international match in which Charlton received a caution. They faced Portugal in the semi-finals. This turned out to be one of Charlton's most important games for England. Charlton opened the scoring with a crisp side-footed finish after a run by Roger Hunt had forced the Portuguese goalkeeper out of his net; his second was a sweetly struck shot after a run and pull-back from Geoff Hurst. Charlton and Hunt were now England's joint-highest scorers in the tournament with three each, and a final against West Germany beckoned. The final turned out to be one of Charlton's quieter days; he and a young Franz Beckenbauer effectively marked each other out of the game. England won 4–2 after extra time. Euro 1968 Charlton's next England game was his 75th, as England beat Northern Ireland; after two more appearances he became England's second most-capped player, behind the veteran Billy Wright, who was approaching his 100th match when Charlton was starting out and ended with 105 caps. Weeks later he scored his 45th England goal in a friendly against Sweden, breaking the record of 44 set the previous year by Jimmy Greaves. He was then in the England team which made it to the semi-finals of the 1968 European Championships, where they were knocked out by Yugoslavia in Florence. During the match Charlton struck a Yugoslav post. England defeated the Soviet Union 2–0 in the third place match. In 1969, Charlton was appointed an OBE for services to football. 
More milestones followed as he won his 100th England cap on 21 April 1970 against Northern Ireland, and was made captain by Ramsey for the occasion. Inevitably, he scored; this was his 48th goal for his country – his 49th and final goal followed a month later in a 4–0 win over Colombia during a warm-up tour for the 1970 World Cup, designed to get the players adapted to altitude conditions. Charlton's inevitable selection by Ramsey for the tournament made him the first – and still, to date, only – England player to feature in four World Cup squads. 1970 World Cup Shortly before the World Cup, Charlton was involved in the Bogotá Bracelet incident in which he and Bobby Moore were accused of stealing a bracelet from a jewellery store. Moore was later arrested and detained for four days before being granted a conditional release, while Charlton was not arrested. England began the tournament with two victories in the group stages, plus a memorable defeat against Brazil. Charlton played in all three, though was substituted for Alan Ball in the final game of the group against Czechoslovakia. Ramsey, confident of victory and progress to the quarter-final, wanted Charlton to rest. England reached the last eight where they again faced West Germany. With England leading 2-1, Ramsey replaced Charlton with Colin Bell in the 69th minute: Germany went on to win 3–2 after extra time. England were eliminated and, after a record 106 caps and 49 goals, Charlton decided to end his international career at the age of 32. On the flight home from Mexico, he asked Ramsey not to consider him again. His brother Jack, two years his senior but 71 caps his junior, did likewise. Charlton's caps record lasted until 1973, when Bobby Moore overtook him; as of October 2023, he lies seventh in the all-time England appearances list behind Moore, Wayne Rooney, Ashley Cole, Steven Gerrard, David Beckham and Peter Shilton, whose own England career began in the first game after Charlton's had ended. Charlton's goalscoring record was surpassed by Wayne Rooney on 8 September 2015, when Rooney scored a penalty in a 2–0 win over Switzerland in a qualifying match for UEFA Euro 2016. Management career and directorships Charlton became the manager of Preston North End in 1973, signing his former United and England teammate Nobby Stiles as player-coach. His first season ended in relegation, and although he began playing again, he left Preston early in the 1975–76 season after a disagreement with the board over the transfer of John Bird to Newcastle United. He was appointed a CBE that year and began a casual association with BBC for punditry on matches, which continued for many years. In early 1976, he scored once in three league appearances for Waterford United. He also made a handful of appearances for Australian clubs Newcastle KB United, Perth Azzurri and Blacktown City. Charlton joined Wigan Athletic as a director, and was briefly caretaker manager there in 1983. He then spent some time playing in South Africa. He also built up several businesses in areas such as travel, jewellery and hampers, and ran soccer schools in the UK, the US, Canada, Australia and China. In 1984, he was invited to become member of the board of directors at Manchester United, partly because of his football knowledge and partly because it was felt that the club needed a "name" on the board after the resignation of Sir Matt Busby. 
In June 2005, when the American Glazer family bought Manchester United amidst fan opposition, Charlton apologised to the new owners: "I tried to explain they couldn't ignore the fans, who are so emotionally involved in the club, but who sometimes do go a bit too far". Personal life and retirement Charlton met his wife, Norma Ball, at an ice rink in Manchester in 1959 and they married in 1961. They had two daughters, Suzanne and Andrea. Suzanne was a weather forecaster for the BBC during the 1990s. They went on to have grandchildren, including Suzanne's son Robert, who is named in honour of his grandfather. In 2007, while publicising his forthcoming autobiography, Charlton revealed that he had a long-running feud with his brother Jack. They rarely spoke to each other after a falling-out between his wife Norma and his mother Cissie (who died in 1996 at the age of 83). Bobby Charlton did not see his mother after 1992 as a result of the feud. Jack presented him with his BBC Sports Personality of the Year Lifetime Achievement Award on 14 December 2008. He said that he was 'knocked out' as he was presented the award by his brother. He received a standing ovation as he stood waiting for his prize. Charlton helped to promote Manchester's bids for the 1996 and 2000 Olympic Games and the 2002 Commonwealth Games, England's bid for the 2006 World Cup and London's successful bid for the 2012 Summer Olympics. He received a knighthood in 1994 and was an Inaugural Inductee to the English Football Hall of Fame in 2002. On accepting his award, he commented: "I'm really proud to be included in the National Football Museum's Hall of Fame. It's a great honour. If you look at the names included I have to say I couldn't argue with them. They are all great players and people I would love to have played with." He was also the (honorary) president of the National Football Museum, an organisation about which he said "I can't think of a better museum anywhere in the world." On 2 March 2009, Charlton was given the freedom of the city of Manchester. He stated: "I'm just so proud, it's fantastic. It's a great city. I have always been very proud of it." Charlton was involved in a number of charitable activities, including fund raising for cancer hospitals. After visits to Bosnia and Cambodia, Charlton became involved in the cause of land mine clearance, and supported the Mines Advisory Group as well as founding his own charity Find a Better Way, which funds research into improved civilian landmine clearance. In January 2011, Charlton was voted the fourth-greatest Manchester United player of all time by the readers of Inside United and ManUtd.com, behind Ryan Giggs (who topped the poll), Eric Cantona and George Best. He was a member of the Laureus World Sports Academy. On 6 February 2012 Charlton was taken to hospital after falling ill, and subsequently had a gallstone removed. This prevented him from collecting a Lifetime Achievement Award at the Laureus World Sports Awards. On 15 February 2016, Manchester United announced the South Stand of Old Trafford would be renamed in honour of Sir Bobby Charlton. The unveiling took place at the home game against Everton on 3 April 2016. In October 2017, Charlton had a pitch named after him at St George's Park National Football Centre in Burton-upon-Trent. In November 2020, it was revealed that Charlton had been diagnosed with dementia and as a result, he withdrew from public life. Death On 21 October 2023, a statement from Charlton's family confirmed that he had died that morning. 
He was 86, and the cause of death was given as complications from dementia. His death left Geoff Hurst as the last surviving English player of the 1966 World Cup final. Manchester United paid tribute to Charlton at their Champions League match against Copenhagen at Old Trafford three days later in a number of ways. First, United's players wore black armbands, and manager Erik ten Hag was flanked by Alex Stepney and U-21 captain Dan Gore before ten Hag laid a wreath and a minute's silence was observed before the match began. Another wreath was also laid in Charlton's seat in the directors' box. In addition, the cover of United's match programme, the United Review, featured Charlton on the front, and supporters laid flowers and scarves at the United Trinity. In popular culture In the episode "Taking Liberties" of the American NBC sitcom Frasier, Daphne Moon (Jane Leeves), who is from Manchester, mentions that one of her uncles tried fanatically to get Charlton's autograph, "until Bobby cracked him over the head with a can of lager. Twelve stitches, and he still has the can!" In the 2011 film United, centred on the successes of the Busby Babes and the decimation of the team in the Munich crash, Charlton was portrayed by actor Jack O'Connell. In the episode "Munich Air Disaster" of the air crash documentary Mayday, Charlton was interviewed as a survivor in the show, alongside Harry Gregg. Honours Manchester United Youth FA Youth Cup: 1953–54, 1954–55, 1955–56 Manchester United Football League First Division: 1956–57, 1964–65, 1966–67 FA Cup: 1962–63; runner-up 1956–57, 1957–58 FA Charity Shield: 1965, 1967 European Cup: 1967–68 England FIFA World Cup: 1966 UEFA European Championship third place: 1968 British Home Championship (outright): 1961, 1965, 1966, 1968, 1969 (shared) 1958, 1959, 1960, 1964, 1970 Individual FWA Footballer of the Year: 1965–66 FIFA World Cup Golden Ball: 1966 FIFA World Cup All-Star Team: 1966, 1970 Ballon d'Or: 1966; runner-up: 1967, 1968 PFA Merit Award: 1974 FWA Tribute Award: 1989 FIFA World Cup All-Time Team: 1994 Football League 100 Legends: 1998 English Football Hall of Fame: 2002 FIFA 100: 2004 UEFA Golden Jubilee Poll: 14th PFA England League Team of the Century (1907 to 2007): Team of the Century 1907–1976 Overall Team of the Century BBC Sports Personality of the Year Lifetime Achievement Award: 2008 UEFA President's Award: 2008 Laureus Lifetime Achievement Award: 2012 FIFA Player of the Century: FIFA internet vote: 16th IFFHS vote: 10th World Soccer The Greatest Players of the 20th century: 12th IFFHS Legends Orders and special awards Officer of the Most Excellent Order of the British Empire (OBE): 1969 Commander of the Most Excellent Order of the British Empire (CBE): 1974 Knight Bachelor: 1994 Order of the Rising Sun, 4th class: 2012 See also List of men's footballers with 100 or more international caps References Notes External links International Football Hall of Fame: Bobby Charlton Planet World Cup: Bobby Charlton A fans view: Bobby Charlton – legend BBC radio interview with Bobby Charlton, 1999 Sir Alex Ferguson Way - Club Legends - Sir Bobby Charlton
Barry Lyndon is a 1975 historical drama film written, directed, and produced by Stanley Kubrick, based on the 1844 novel The Luck of Barry Lyndon by William Makepeace Thackeray. Starring Ryan O'Neal, Marisa Berenson, Patrick Magee, Leonard Rossiter, and Hardy Krüger, the film recounts the early exploits and later unravelling of an 18th-century Anglo-Irish rogue and golddigger who marries a rich widow to climb the social ladder and assume her late husband's aristocratic position. Kubrick began production on Barry Lyndon after his 1971 film A Clockwork Orange. He had originally intended to direct a biopic on Napoleon, but lost his financing because of the commercial failure of the similar 1970 Dino De Laurentiis-produced Waterloo. Kubrick eventually directed Barry Lyndon, set partially during the Seven Years' War, utilising his research from the Napoleon project. Filming began in December 1973 and lasted roughly eight months, taking place in England, Ireland, and Germany. The film's cinematography has been described as ground-breaking. Especially notable are the long double shots, usually ended with a slow backwards zoom, the scenes shot entirely in candlelight, and the settings based on William Hogarth paintings. The exteriors were filmed on location in England, Ireland, and Germany, with the interiors shot mainly in London. The production had problems related to logistics, weather, and politics (Kubrick feared that he might be an IRA hostage target). Barry Lyndon received seven nominations at the 48th Academy Awards, including Best Picture, winning four for Best Scoring: Original Song Score and Adaptation or Scoring: Adaptation, Best Cinematography, Best Art Direction, and Best Costume Design. Although some critics took issue with the film's slow pace and restrained emotion, its reputation, like that of many of Kubrick's works, has grown over time. In the 2022 Sight & Sound Greatest Films of All Time poll, Barry Lyndon placed 12th in the directors' poll and 45th in the critics' poll. Plot Part I: By What Means Redmond Barry Acquired the Style and Title of Barry Lyndon In 1750s Ireland, Redmond Barry's father is killed in a duel. Barry becomes infatuated with his cousin Nora Brady, and shoots her suitor British Army captain John Quin in a duel. He flees but is robbed by highwaymen on his way to Dublin. Penniless, Barry enlists in the British Army. Family friend Captain Grogan informs him that Quin is not dead: the duel was staged so that Nora's family can get rid of Barry and improve their finances through her marriage to Quin. Barry serves with his regiment in Germany in the Seven Years' War, but deserts after Grogan is fatally wounded in a skirmish against the French Royal Army. Masquerading as a British lieutenant, he has a brief affair with Frau Lieschen, a peasant woman. On his way to Bremen, Barry encounters the Prussian Captain Potzdorf, who sees through his ruse and impresses him into the Prussian Army. Barry later saves Potzdorf's life and receives commendation from Frederick the Great. At the end of the war, Barry is employed by Potzdorf's uncle in the Prussian Ministry of Police. The Prussians suspect the Chevalier de Balibari, a professional gambler, of spying for the Austrians, and have Barry become his servant. Barry confides everything to the Chevalier, a fellow Irishman, and they become confederates. After they cheat the Prince of Tübingen at cards, the Prince refuses to pay his debts, and the Chevalier in turn demands satisfaction. 
The Prussians, still suspecting the Chevalier, arrange to have both him and Barry expelled from the country. Barry and the Chevalier travel across Europe, perpetrating gambling scams, with Barry forcing payment from debtors with sword duels. In Spa, he encounters the beautiful and wealthy Lady Lyndon. He seduces her, and her elderly husband Sir Charles Lyndon dies from apoplexy, induced by Barry's goading and verbal repartee. Part II: Containing an Account of the Misfortunes and Disasters Which Befell Barry Lyndon In 1773, Barry marries Lady Lyndon, takes her last name and settles in England. Lord Bullingdon, Lady Lyndon's ten-year-old son by Sir Charles, despises Barry. Barry responds by physically abusing him. The Countess bears Barry a son, Bryan Patrick, but the marriage is unhappy: Barry is openly unfaithful and squanders his wife's wealth while keeping her in seclusion. A few years later, Barry's mother comes to live with him. She warns him that if Lady Lyndon were to die, Bullingdon would inherit everything, and advises him to obtain a title to protect himself. To this end, he cultivates the influential Lord Wendover and spends large sums of money to ingratiate himself with high society. Bullingdon, now a young adult, disrupts a party Barry throws for Lady Lyndon; he declares his hatred for his stepfather and states that he will leave the family estate as long as Barry remains there. Barry assaults Bullingdon until physically restrained. He becomes ostracised by society and plunges further into financial ruin. An overindulgent father, Barry gifts Bryan a full-grown horse for his ninth birthday, leading to his death in a riding accident. Barry turns to alcohol, while Lady Lyndon seeks solace in religion, assisted by the Rev. Samuel Runt, who had been tutor to Bullingdon and Bryan. Barry's mother dismisses Runt, for fear that his influence will worsen Lady Lyndon's condition. Lady Lyndon attempts suicide. Runt and Graham, the family's steward, then seek out Bullingdon, who returns and challenges Barry to a duel. After Bullingdon nervously misfires the first shot, Barry magnanimously fires into the ground. Bullingdon refuses to end the duel and shoots Barry in the leg, forcing the leg to be amputated below the knee. While Barry is recovering, Bullingdon takes control of the Lyndon estate. He offers Barry 500 guineas a year on the condition that he leaves England forever. With his credit exhausted, Barry accepts. Barry resumes his gambling profession, though without his former success. In December 1789, a middle-aged Lady Lyndon signs Barry's annuity cheque as her son looks on. Cast Critic Tim Robey suggests that the film "makes you realise that the most undervalued aspect of Kubrick's genius could well be his way with actors." He adds that the supporting cast is a "glittering procession of cameos, not from star names but from vital character players." The cast featured Leon Vitali as the older Lord Bullingdon, who then became Kubrick's personal assistant, working as the casting director on his following films, and supervising film-to-video transfers for Kubrick. Their relationship lasted until Kubrick's death. The film's cinematographer, John Alcott, appears at the men's club in the non-speaking role of the man asleep in a chair near the title character when Lord Bullingdon challenges Barry to a duel. Kubrick's daughter Vivian also appears (in an uncredited role) as a guest at Bryan's birthday party. 
Other Kubrick featured regulars were Leonard Rossiter (2001: A Space Odyssey), Steven Berkoff, Patrick Magee, Godfrey Quigley, Anthony Sharp, and Philip Stone (A Clockwork Orange). Stone went on to feature in The Shining. Production Development After completing post production on 2001: A Space Odyssey, Kubrick resumed planning a film about Napoleon. During pre-production, Sergei Bondarchuk and Dino De Laurentiis' Waterloo was released, and failed at the box office. Reconsidering, Kubrick's financiers pulled funding, and he turned his attention towards an adaptation of Anthony Burgess's 1962 novel A Clockwork Orange. Subsequently, Kubrick showed an interest in Thackeray's Vanity Fair but dropped the project when a serialised version for television was produced. He told an interviewer, "At one time, Vanity Fair interested me as a possible film but, in the end, I decided the story could not be successfully compressed into the relatively short time-span of a feature film ... as soon as I read Barry Lyndon I became very excited about it." Having earned Oscar nominations for Dr. Strangelove, 2001: A Space Odyssey and A Clockwork Orange, Kubrick's reputation in the early 1970s was that of "a perfectionist auteur who loomed larger over his movies than any concept or star". His studio—Warner Bros.—was therefore "eager to bankroll" his next project, which Kubrick kept "shrouded in secrecy" from the press partly due to the furore surrounding the controversially violent A Clockwork Orange (particularly in the UK) and partly due to his "long-standing paranoia about the tabloid press." Kubrick was initially rumored to be developing an adaptation of Arthur Schnitzler's 1926 novell Dream Story, which would serve as the source material for his later film Eyes Wide Shut (1999). Having felt compelled to set aside his plans for a film about Napoleon Bonaparte, in 1972 Kubrick set his sights on Thackeray's 1844 "satirical picaresque about the fortune-hunting of an Irish rogue," Barry Lyndon, the setting of which allowed Kubrick to take advantage of the copious period research he had done for the now-aborted Napoleon. At the time, Kubrick merely announced that his next film would star Ryan O'Neal (deemed "a seemingly un-Kubricky choice of leading man") and Marisa Berenson, a former Vogue and Time magazine cover model, and be shot largely in Ireland. So heightened was the secrecy surrounding the film that "Even Berenson, when Kubrick first approached her, was told only that it was to be an 18th-century costume piece [and] she was instructed to keep out of the sun in the months before production, to achieve the period-specific pallor he required." Screenplay Kubrick based his adapted screenplay on William Makepeace Thackeray's The Luck of Barry Lyndon (republished as the novel Memoirs of Barry Lyndon, Esq.), a picaresque tale written and published in serial form in 1844. The film departs from the novel in several ways. In Thackeray's writings, events are related in the first person by Barry himself. A comic tone pervades the work, as Barry proves both a raconteur and an unreliable narrator. Kubrick's film, by contrast, presents the story objectively. Though the film contains voice-over (by actor Michael Hordern), the comments expressed are not Barry's, but those of an omniscient narrator. Kubrick felt that using a first-person narrative would not be useful in a film adaptation: Kubrick made several changes to the plot, including the addition of the final duel. 
Principal photography Principal photography lasted 300 days, from spring 1973 through to early 1974, with a break for Christmas. Kubrick initially wished to film the entire production near his home in Borehamwood, but Ken Adam convinced him to relocate the shoot to Ireland. The crew arrived in Dublin in May 1973. Jan Harlan recalls that Kubrick "loved his time in Ireland – he rented a lovely house west of Dublin, he loved the scenery and the culture and the people". Many of the exteriors were shot in Ireland, playing "itself, England, and Prussia during the Seven Years' War." Kubrick and cinematographer Alcott drew inspiration from "the landscapes of Watteau and Gainsborough," and also relied on the art direction of Ken Adam and Roy Walker. Alcott, Adam and Walker were among those who would win Oscars for their work on the film. Several of the interior scenes were filmed in Powerscourt House, an 18th-century mansion in County Wicklow. The house was destroyed in an accidental fire several months after filming (November 1974), so the film serves as a record of the lost interiors, particularly the "Saloon" which was used for more than one scene. The Wicklow Mountains are visible, for example, through the window of the saloon during a scene set in Berlin. Other locations included Kells Priory (the English Redcoat encampment) Blenheim Palace, Castle Howard (exteriors of the Lyndon estate), Huntington Castle, Clonegal (exterior), Corsham Court (various interiors and the music room scene), Petworth House (chapel), Stourhead (lake and temple), Longleat, and Wilton House (interior and exterior) in England, Lavenham Guildhall at Lavenham in Suffolk (amputation scene), Dunrobin Castle (exterior and garden as Spa) in Scotland, Dublin Castle in Ireland (the chevalier's home), Ludwigsburg Palace near Stuttgart and Frederick II of Prussia's Neues Palais at Potsdam near Berlin (suggesting Berlin's main street Unter den Linden as construction in Potsdam had just begun in 1763). Some exterior shots were also filmed at Waterford Castle (now a luxury hotel and golf course) and Little Island, Waterford. Moorstown Castle in Tipperary also featured. Several scenes were filmed at Castletown House in Celbridge, County Kildare, outside Carrick-on-Suir, County Tipperary, and at Youghal, County Cork. The filming took place in the backdrop of some of the most intense years of the Troubles in Ireland, during which the Provisional Irish Republican Army (Provisional IRA) was waging an armed campaign in order to unite the island. On 30 January 1974, while filming in Dublin City's Phoenix Park, shooting had to be cancelled due to the chaos caused by 14 bomb threats. One day a phone call was received and Kubrick was given 24 hours to leave the country; he left within 12 hours. The phone call alleged that the Provisional IRA had him on a hit list and Harlan recalls "Whether the threat was a hoax or it was real, almost doesn't matter ... Stanley was not willing to take the risk. He was threatened, and he packed his bag and went home" Production of the film was one-third completed when this occurred, and it was rumored that the film would be abandoned. Nonetheless, Kubrick continued shooting the remaining two-thirds of the film at locations in southern England and Germany. Cinematography The film, as with "almost every Kubrick film", is a "showcase for [a] major innovation in technique." 
While 2001: A Space Odyssey had featured "revolutionary effects," and The Shining would later feature heavy use of the Steadicam, Barry Lyndon saw a considerable number of sequences shot "without recourse to electric light." The film's cinematography was overseen by director of photography John Alcott (who won an Oscar for his work), and is particularly noted for the technical innovations that made some of its most spectacular images possible. To achieve photography without electric lighting "[f]or the many densely furnished interior scenes… meant shooting by candlelight," which is known to be difficult in still photography, "let alone with moving images." Kubrick was "determined not to reproduce the set-bound, artificially lit look of other costume dramas from that time." After "tinker[ing] with different combinations of lenses and film stock," the production obtained three super-fast 50mm lenses (Carl Zeiss Planar 50mm f/0.7) developed by Zeiss for use by NASA in the Apollo Moon landings, which Kubrick had discovered. These super-fast lenses "with their huge aperture (the film actually features the lowest f-stop in film history) and fixed focal length" were problematic to mount, and were extensively modified into three versions by Cinema Products Corp. for Kubrick to gain a wider angle of view, with input from optics expert Richard Vetter of Todd-AO. The rear element of the lens had to be 2.5 mm away from the film plane, requiring special modification to the rotating camera shutter. This allowed Kubrick and Alcott to shoot scenes lit in candlelight to an average lighting volume of only three candela, "recreating the huddle and glow of a pre-electrical age." In addition, Kubrick had the entire film push-developed by one stop. Although Kubrick and Alcott sought to avoid electric lighting where possible, most shots were achieved with conventional lenses and lighting, but were lit to deliberately mimic natural light rather than for compositional reasons. In addition to potentially seeming more realistic, these methods also gave a particular period look to the film which has often been likened to 18th-century paintings (which of course depict a world devoid of electric lighting), in particular owing "a lot to William Hogarth, with whom Thackeray had always been fascinated." The film is widely regarded as having a stately, static, painterly quality, mostly due to its lengthy, wide-angle long shots. To illuminate the more notable interior scenes, artificial lights called "Mini-Brutes" were placed outside and aimed through the windows, which were covered in a diffuse material to scatter the light evenly through the room rather than being placed inside for maximum use as most conventional films do. In some instances, the natural daylight was allowed to come through, which when recorded on the film stock used by Kubrick showed up as blue-tinted compared to the incandescent electric light. Despite such slight tinting effects, this method of lighting not only gave the look of natural daylight coming in through the windows, but it also protected the historic locations from the damage caused by mounting the lights on walls or ceilings and the heat from the lights. This helped the film "fit… perfectly with Kubrick's gilded-cage aesthetic – the film is consciously a museum piece, its characters pinned to the frame like butterflies." 
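To put those lens figures in perspective, a back-of-the-envelope comparison (not from the production itself; the f/2 reference lens is an illustrative assumption) shows how much more light the f/0.7 Zeiss optics gathered than an ordinarily fast lens, since light transmission scales with the inverse square of the f-number:

\[
\frac{I_{f/0.7}}{I_{f/2}} \approx \left(\frac{2.0}{0.7}\right)^{2} \approx 8.2
\]

That is roughly three stops more light (each stop doubles the exposure, and 2³ = 8), which, combined with push-developing the negative by a further stop, suggests why shooting by candlelight became feasible at all.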
Music The film's period setting allowed Kubrick to indulge his penchant for using classical music, and the film score includes pieces by Vivaldi, Bach, Handel, Paisiello, Mozart, and Schubert. The piece most associated with the film, however, is the main title music, Handel's Sarabande from the Keyboard suite in D minor (HWV 437). Originally for solo harpsichord, the versions for the main and end titles are performed with strings, timpani, and continuo. The score also includes Irish folk music, including Seán Ó Riada's song "Women of Ireland", arranged by Paddy Moloney and performed by The Chieftains. "The British Grenadiers" also features in scenes with Redcoats marching. Charts Certifications Box office and reception Contemporaneous The film "was not the commercial success Warner Bros. had been hoping for" within the United States, although it fared better in Europe. In the US it earned $9.1 million. Ultimately, the film grossed a worldwide total of $31.5 million on an $11 million budget. This mixed reaction saw the film (in the words of one retrospective review) "greeted, on its release, with dutiful admiration – but not love. Critics… rail[ed] against the perceived coldness of Kubrick's style, the film's self-conscious artistry and slow pace. Audiences, on the whole, rather agreed…" Roger Ebert gave the film three and a half stars out of four and wrote that it "is almost aggressive in its cool detachment. It defies us to care, it forces us to remain detached about its stately elegance." He added, "This must be one of the most beautiful films ever made." Vincent Canby of The New York Times called the film "another fascinating challenge from one of our most remarkable, independent-minded directors." Gene Siskel of the Chicago Tribune gave the film three and a half stars out of four and wrote "I found 'Barry Lyndon' to be quite obvious about its intentions and thoroughly successful in achieving them. Kubrick has taken a novel about a social class and has turned it into an utterly comfortable story that conveys the stunning emptiness of upper-class life only 200 years past." He ranked the film fifth on his year-end list of the best films of 1975. Charles Champlin of the Los Angeles Times called it "the motion picture equivalent of one of those very large, very heavy, very expensive, very elegant and very dull books that exist solely to be seen on coffee tables. It is ravishingly beautiful and incredibly tedious in about equal doses, a succession of salon quality still photographs—as often as not very still indeed." The Washington Post wrote, "It's not inaccurate to describe 'Barry Lyndon' as a masterpiece, but it's a deadend masterpiece, an objet d'art rather than a movie. It would be more at home, and perhaps easier to like, on the bookshelf, next to something like 'The Age of the Grand Tour,' than on the silver screen." Pauline Kael of The New Yorker wrote that "Kubrick has taken a quick-witted story" and "controlled it so meticulously that he's drained the blood out of it," adding, "It's a coffee-table movie; we might as well be at a three-hour slide show for art-history majors." This "air of disappointment" factored into Kubrick's decision for his next film, an adaption of Stephen King's The Shining, a project that would not only please him artistically, but was more likely to succeed financially. Re-evaluation Over time, the film has gained a more positive reaction. 
On review aggregator Rotten Tomatoes, the film holds an approval rating of 88% based on 81 reviews, with an average rating of 8.3/10. The website's critical consensus reads, "Cynical, ironic, and suffused with seductive natural lighting, Barry Lyndon is a complex character piece of a hapless man doomed by Georgian society." On Metacritic, the film has a weighted average score of 89 out of 100 based on reviews from 21 critics, indicating "universal acclaim". Roger Ebert added the film to his 'Great Movies' list on 9 September 2009 and increased his original rating from three and a half stars to four, writing, "Stanley Kubrick's Barry Lyndon, received indifferently in 1975, has grown in stature in the years since and is now widely regarded as one of the master's best. It is certainly in every frame a Kubrick film: technically awesome, emotionally distant, remorseless in its doubt of human goodness." The Village Voice ranked the film at number 46 in its Top 250 "Best Films of the Century" list in 1999, based on a poll of critics. Director Martin Scorsese has named Barry Lyndon as his favourite Kubrick film, and it is also one of Lars von Trier's favourite films. Barry Lyndon was included on Times All-Time 100 best movies list. In the 2012 Sight & Sound Greatest Films of All Time poll, Barry Lyndon placed 19th in the directors' poll and 59th in the critics' poll. The film ranked 27th in BBC's 2015 list of the 100 greatest American films. In the 2022 Sight & Sound Greatest Films of All Time poll, Barry Lyndon placed 12th in the directors' poll and 45th in the critics' poll. In a list compiled by The Irish Times critics Tara Brady and Donald Clarke in 2020, Barry Lyndon was named the greatest Irish film of all time. The Japanese filmmaker Akira Kurosawa cited the movie as one of his 100 favorite films. Awards and nominations Thematic analysis The main theme explored in Barry Lyndon is one of fate and destiny. Barry is pushed through life by a series of key events, some of which seem unavoidable. As Roger Ebert says, "He is a man to whom things happen." He declines to eat with the highwayman Captain Feeney, where he would most likely have been robbed, but is robbed anyway farther down the road. The narrator repeatedly emphasizes the role of fate as he announces events before they unfold on screen, like Bryan's death and Bullingdon seeking satisfaction. This theme of fate is also developed in the recurring motif of the painting. Just like the events featured in the paintings, Barry is participating in events which always were. Another major theme is between father and son. Barry lost his father at a young age and throughout the film he seeks and attaches himself to father-figures. Examples include his uncle, Grogan, and the Chevalier. When given the chance to be a father, Barry loves his son to the point of spoiling him. This contrasts with his role as a father to Lord Bullingdon, whom he disregards and punishes. See also List of American films of 1975 Overlord – the 1975 Stuart Cooper WWII film John Alcott also worked on Cinema of Ireland Notes References Further reading Tibbetts, John C., and James M. Welsh, eds. The Encyclopedia of Novels into Film (2nd ed. 2005) pp 23–24. External links Barry Lyndon: Time Regained an essay by Geoffrey O'Brien at the Criterion Collection Screenplay of Barry Lyndon (18 February 1973) at Daily script. Barry Lyndon Press Kit at Indelible Inc. The Kubrick Site, a "non-profit resource archive for documentary materials", including essays and articles. 
Stanley Kubrick’s letter to projectionists on Barry Lyndon at Some Came Running.
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specialized functions and carry out tasks such as replication, DNA repair, protein synthesis, and motility. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution, showing cell structure in great detail. Organisms can be classified as unicellular (consisting of a single cell, such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including the discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells. Cells were discovered by Robert Hooke in 1665, who named them for their resemblance to the cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. Observing a piece of cork under his microscope, Hooke was able to see pores. This was startling at the time, as such structures were not thought to have been seen before. Later, building on these observations, Matthias Schleiden and Theodor Schwann studied the cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were fundamental not only to plants, but to animals as well. Number of cells The number of cells in plants and animals varies from species to species; it has been estimated that the human body contains around 37 trillion (3.72×10¹³) cells, and more recent studies put this number at around 30 trillion (~36 trillion cells in the male, ~28 trillion in the female). The human brain accounts for around 80 billion of these cells. Hatton et al. provide numbers for most other human organs. Cell types Cells are broadly categorized into two types: eukaryotic cells, which possess a nucleus, and prokaryotic cells, which lack a nucleus but still have a nucleoid region. Prokaryotes are single-celled organisms, whereas eukaryotes can be either single-celled or multicellular. Prokaryotic cells Prokaryotes include bacteria and archaea, two of the three domains of life. Prokaryotic cells were the first form of life on Earth, characterized by having vital biological processes including cell signaling. They are simpler and smaller than eukaryotic cells, and lack a nucleus and other membrane-bound organelles. The DNA of a prokaryotic cell consists of a single circular chromosome that is in direct contact with the cytoplasm. The nuclear region in the cytoplasm is called the nucleoid.
Most prokaryotes are the smallest of all organisms ranging from 0.5 to 2.0 μm in diameter. A prokaryotic cell has three regions: Enclosing the cell is the cell envelope, generally consisting of a plasma membrane covered by a cell wall which, for some bacteria, may be further covered by a third layer called a capsule. Though most prokaryotes have both a cell membrane and a cell wall, there are exceptions such as Mycoplasma (bacteria) and Thermoplasma (archaea) which only possess the cell membrane layer. The envelope gives rigidity to the cell and separates the interior of the cell from its environment, serving as a protective filter. The cell wall consists of peptidoglycan in bacteria and acts as an additional barrier against exterior forces. It also prevents the cell from expanding and bursting (cytolysis) from osmotic pressure due to a hypotonic environment. Some eukaryotic cells (plant cells and fungal cells) also have a cell wall. Inside the cell is the cytoplasmic region that contains the genome (DNA), ribosomes and various sorts of inclusions. The genetic material is freely found in the cytoplasm. Prokaryotes can carry extrachromosomal DNA elements called plasmids, which are usually circular. Linear bacterial plasmids have been identified in several species of spirochete bacteria, including members of the genus Borrelia notably Borrelia burgdorferi, which causes Lyme disease. Though not forming a nucleus, the DNA is condensed in a nucleoid. Plasmids encode additional genes, such as antibiotic resistance genes. On the outside, some prokaryotes have flagella and pili that project from the cell's surface. These are structures made of proteins that facilitate movement and communication between cells. Bacterial shapes Cell shape, also called cell morphology, has been hypothesized to form from the arrangement and movement of the cytoskeleton. Many advancements in the study of cell morphology come from studying simple bacteria such as Staphylococcus aureus, E. coli, and B. subtilis. Different cell shapes have been found and described, but how and why cells form different shapes is still widely unknown. Some cell shapes that have been identified include rods, cocci and spirochaetes. Cocci are circular, bacilli are elongated rods, and spirochaetes are spiral in form. Eukaryotic cells Plants, animals, fungi, slime moulds, protozoa, and algae are all eukaryotic. These cells are about fifteen times wider than a typical prokaryote and can be as much as a thousand times greater in volume. The main distinguishing feature of eukaryotes as compared to prokaryotes is compartmentalization: the presence of membrane-bound organelles (compartments) in which specific activities take place. Most important among these is a cell nucleus, an organelle that houses the cell's DNA. This nucleus gives the eukaryote its name, which means "true kernel (nucleus)". Some of the other differences are: The plasma membrane resembles that of prokaryotes in function, with minor differences in the setup. Cell walls may or may not be present. The eukaryotic DNA is organized in one or more linear molecules, called chromosomes, which are associated with histone proteins. All chromosomal DNA is stored in the cell nucleus, separated from the cytoplasm by a membrane. Some eukaryotic organelles such as mitochondria also contain some DNA. Many eukaryotic cells are ciliated with primary cilia. Primary cilia play important roles in chemosensation, mechanosensation, and thermosensation. 
Each cilium may thus be "viewed as a sensory cellular antennae that coordinates a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation." Motile eukaryotes can move using motile cilia or flagella. Motile cells are absent in conifers and flowering plants. Eukaryotic flagella are more complex than those of prokaryotes. Subcellular components All cells, whether prokaryotic or eukaryotic, have a membrane that envelops the cell, regulates what moves in and out (selectively permeable), and maintains the electric potential of the cell. Inside the membrane, the cytoplasm takes up most of the cell's volume. Except red blood cells, which lack a cell nucleus and most organelles to accommodate maximum space for hemoglobin, all cells possess DNA, the hereditary material of genes, and RNA, containing the information necessary to build various proteins such as enzymes, the cell's primary machinery. There are also other kinds of biomolecules in cells. This article lists these primary cellular components, then briefly describes their function. Cell membrane The cell membrane, or plasma membrane, is a selectively permeable biological membrane that surrounds the cytoplasm of a cell. In animals, the plasma membrane is the outer boundary of the cell, while in plants and prokaryotes it is usually covered by a cell wall. This membrane serves to separate and protect a cell from its surrounding environment and is made mostly from a double layer of phospholipids, which are amphiphilic (partly hydrophobic and partly hydrophilic). Hence, the layer is called a phospholipid bilayer, or sometimes a fluid mosaic membrane. Embedded within this membrane is a macromolecular structure called the porosome the universal secretory portal in cells and a variety of protein molecules that act as channels and pumps that move different molecules into and out of the cell. The membrane is semi-permeable, and selectively permeable, in that it can either let a substance (molecule or ion) pass through freely, to a limited extent or not at all. Cell surface membranes also contain receptor proteins that allow cells to detect external signaling molecules such as hormones. Cytoskeleton The cytoskeleton acts to organize and maintain the cell's shape; anchors organelles in place; helps during endocytosis, the uptake of external materials by a cell, and cytokinesis, the separation of daughter cells after cell division; and moves parts of the cell in processes of growth and mobility. The eukaryotic cytoskeleton is composed of microtubules, intermediate filaments and microfilaments. In the cytoskeleton of a neuron the intermediate filaments are known as neurofilaments. There are a great number of proteins associated with them, each controlling a cell's structure by directing, bundling, and aligning filaments. The prokaryotic cytoskeleton is less well-studied but is involved in the maintenance of cell shape, polarity and cytokinesis. The subunit protein of microfilaments is a small, monomeric protein called actin. The subunit of microtubules is a dimeric molecule called tubulin. Intermediate filaments are heteropolymers whose subunits vary among the cell types in different tissues. Some of the subunit proteins of intermediate filaments include vimentin, desmin, lamin (lamins A, B and C), keratin (multiple acidic and basic keratins), and neurofilament proteins (NF–L, NF–M). 
Genetic material Two different kinds of genetic material exist: deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Cells use DNA for their long-term information storage. The biological information contained in an organism is encoded in its DNA sequence. RNA is used for information transport (e.g., mRNA) and enzymatic functions (e.g., ribosomal RNA). Transfer RNA (tRNA) molecules are used to add amino acids during protein translation. Prokaryotic genetic material is organized in a simple circular bacterial chromosome in the nucleoid region of the cytoplasm. Eukaryotic genetic material is divided into different, linear molecules called chromosomes inside a discrete nucleus, usually with additional genetic material in some organelles like mitochondria and chloroplasts (see endosymbiotic theory). A human cell has genetic material contained in the cell nucleus (the nuclear genome) and in the mitochondria (the mitochondrial genome). In humans, the nuclear genome is divided into 46 linear DNA molecules called chromosomes, including 22 homologous chromosome pairs and a pair of sex chromosomes. The mitochondrial genome is a circular DNA molecule distinct from nuclear DNA. Although the mitochondrial DNA is very small compared to nuclear chromosomes, it codes for 13 proteins involved in mitochondrial energy production and specific tRNAs. Foreign genetic material (most commonly DNA) can also be artificially introduced into the cell by a process called transfection. This can be transient, if the DNA is not inserted into the cell's genome, or stable, if it is. Certain viruses also insert their genetic material into the genome. Organelles Organelles are parts of the cell that are adapted and/or specialized for carrying out one or more vital functions, analogous to the organs of the human body (such as the heart, lung, and kidney, with each organ performing a different function). Both eukaryotic and prokaryotic cells have organelles, but prokaryotic organelles are generally simpler and are not membrane-bound. There are several types of organelles in a cell. Some (such as the nucleus and Golgi apparatus) are typically solitary, while others (such as mitochondria, chloroplasts, peroxisomes and lysosomes) can be numerous (hundreds to thousands). The cytosol is the gelatinous fluid that fills the cell and surrounds the organelles. Eukaryotic Cell nucleus: A cell's information center, the cell nucleus is the most conspicuous organelle found in a eukaryotic cell. It houses the cell's chromosomes, and is the place where almost all DNA replication and RNA synthesis (transcription) occur. The nucleus is spherical and separated from the cytoplasm by a double membrane called the nuclear envelope, space between these two membrane is called perinuclear space. The nuclear envelope isolates and protects a cell's DNA from various molecules that could accidentally damage its structure or interfere with its processing. During processing, DNA is transcribed, or copied into a special RNA, called messenger RNA (mRNA). This mRNA is then transported out of the nucleus, where it is translated into a specific protein molecule. The nucleolus is a specialized region within the nucleus where ribosome subunits are assembled. In prokaryotes, DNA processing takes place in the cytoplasm. Mitochondria and chloroplasts: generate energy for the cell. Mitochondria are self-replicating double membrane-bound organelles that occur in various numbers, shapes, and sizes in the cytoplasm of all eukaryotic cells. 
Respiration occurs in the cell's mitochondria, which generate the cell's energy by oxidative phosphorylation, using oxygen to release energy stored in cellular nutrients (typically glucose) to generate ATP (aerobic respiration). Mitochondria multiply by binary fission, like prokaryotes. Chloroplasts can only be found in plants and algae, and they capture the sun's energy to make carbohydrates through photosynthesis. Endoplasmic reticulum: The endoplasmic reticulum (ER) is a transport network for molecules targeted for certain modifications and specific destinations, as compared to molecules that float freely in the cytoplasm. The ER has two forms: the rough ER, which has ribosomes on its surface that secrete proteins into the ER, and the smooth ER, which lacks ribosomes. The smooth ER plays a role in calcium sequestration and release and also helps in lipid synthesis. Golgi apparatus: The primary function of the Golgi apparatus is to process and package macromolecules, such as proteins and lipids, that are synthesized by the cell. Lysosomes and peroxisomes: Lysosomes contain digestive enzymes (acid hydrolases). They digest excess or worn-out organelles, food particles, and engulfed viruses or bacteria. Peroxisomes have enzymes that rid the cell of toxic peroxides. Lysosomes are optimally active in an acidic environment. The cell could not house these destructive enzymes if they were not contained in a membrane-bound system. Centrosome: the cytoskeleton organizer: The centrosome produces the microtubules of a cell—a key component of the cytoskeleton. It also directs transport through the ER and the Golgi apparatus. Centrosomes are composed of two centrioles that lie perpendicular to each other, each with a cartwheel-like organization; the centrioles separate during cell division and help in the formation of the mitotic spindle. A single centrosome is present in animal cells. They are also found in some fungi and algae cells. Vacuoles: Vacuoles sequester waste products and, in plant cells, store water. They are often described as liquid-filled spaces and are surrounded by a membrane. Some cells, most notably Amoeba, have contractile vacuoles, which can pump water out of the cell if there is too much water. The vacuoles of plant cells and fungal cells are usually larger than those of animal cells. Vacuoles of plant cells are surrounded by a membrane which transports ions against concentration gradients. Eukaryotic and prokaryotic Ribosomes: The ribosome is a large complex of RNA and protein molecules. Ribosomes each consist of two subunits, and act as an assembly line where RNA from the nucleus is used to synthesize proteins from amino acids. Ribosomes can be found either floating freely or bound to a membrane (the rough endoplasmic reticulum in eukaryotes, or the cell membrane in prokaryotes). Plastids: Plastids are membrane-bound organelles generally found in plant cells and euglenoids; they contain specific pigments and so affect the colour of the plant or organism. These pigments also help in food storage and in the capture of light energy. There are three types of plastids, based on their specific pigments. Chloroplasts contain chlorophyll and some carotenoid pigments, which help capture light energy during photosynthesis. Chromoplasts contain fat-soluble carotenoid pigments, such as orange carotene and yellow xanthophylls, which are involved in synthesis and storage. Leucoplasts are non-pigmented plastids that help in the storage of nutrients.
Structures outside the cell membrane Many cells also have structures which exist wholly or partially outside the cell membrane. These structures are notable because they are not protected from the external environment by the cell membrane. In order to assemble these structures, their components must be carried across the cell membrane by export processes. Cell wall Many types of prokaryotic and eukaryotic cells have a cell wall. The cell wall acts to protect the cell mechanically and chemically from its environment, and is an additional layer of protection to the cell membrane. Different types of cells have cell walls made up of different materials; plant cell walls are primarily made up of cellulose, fungal cell walls are made up of chitin, and bacterial cell walls are made up of peptidoglycan. Prokaryotic Capsule A gelatinous capsule is present in some bacteria outside the cell membrane and cell wall. The capsule may be polysaccharide, as in pneumococci and meningococci; polypeptide, as in Bacillus anthracis; or hyaluronic acid, as in streptococci. Capsules are not visible with normal staining protocols and can be detected by India ink or methyl blue, which allows for higher contrast between the cells for observation. Flagella Flagella are organelles for cellular mobility. The bacterial flagellum stretches from the cytoplasm through the cell membrane(s) and extrudes through the cell wall. Flagella are long, thick, thread-like appendages, protein in nature. A different type of flagellum is found in archaea and a different type is found in eukaryotes. Fimbriae A fimbria (plural fimbriae; also known as a pilus, plural pili) is a short, thin, hair-like filament found on the surface of bacteria. Fimbriae are formed of a protein called pilin (antigenic) and are responsible for the attachment of bacteria to specific receptors on human cells (cell adhesion). There are special types of pili involved in bacterial conjugation. Cellular processes Replication Cell division involves a single cell (called a mother cell) dividing into two daughter cells. This leads to growth in multicellular organisms (the growth of tissue) and to procreation (vegetative reproduction) in unicellular organisms. Prokaryotic cells divide by binary fission, while eukaryotic cells usually undergo a process of nuclear division, called mitosis, followed by division of the cell, called cytokinesis. A diploid cell may also undergo meiosis to produce haploid cells, usually four. Haploid cells serve as gametes in multicellular organisms, fusing to form new diploid cells. DNA replication, or the process of duplicating a cell's genome, always happens when a cell divides through mitosis or binary fission. This occurs during the S phase of the cell cycle. In meiosis, the DNA is replicated only once, while the cell divides twice. DNA replication only occurs before meiosis I. DNA replication does not occur when the cells divide the second time, in meiosis II. Replication, like all cellular activities, requires specialized proteins for carrying out the job. DNA repair Cells of all organisms contain enzyme systems that scan their DNA for DNA damage and carry out repair processes when damage is detected. Diverse repair processes have evolved in organisms ranging from bacteria to humans. The widespread prevalence of these repair processes indicates the importance of maintaining cellular DNA in an undamaged state in order to avoid cell death or errors of replication due to damage that could lead to mutation. E. 
coli bacteria are a well-studied example of a cellular organism with diverse well-defined DNA repair processes. These include: nucleotide excision repair, DNA mismatch repair, non-homologous end joining of double-strand breaks, recombinational repair and light-dependent repair (photoreactivation). Growth and metabolism Between successive cell divisions, cells grow through the functioning of cellular metabolism. Cell metabolism is the process by which individual cells process nutrient molecules. Metabolism has two distinct divisions: catabolism, in which the cell breaks down complex molecules to produce energy and reducing power, and anabolism, in which the cell uses energy and reducing power to construct complex molecules and perform other biological functions. Complex sugars consumed by the organism can be broken down into simpler sugar molecules called monosaccharides such as glucose. Once inside the cell, glucose is broken down to make adenosine triphosphate (ATP), a molecule that possesses readily available energy, through two different pathways. Protein synthesis Cells are capable of synthesizing new proteins, which are essential for the modulation and maintenance of cellular activities. This process involves the formation of new protein molecules from amino acid building blocks based on information encoded in DNA/RNA. Protein synthesis generally consists of two major steps: transcription and translation. Transcription is the process where genetic information in DNA is used to produce a complementary RNA strand. This RNA strand is then processed to give messenger RNA (mRNA), which is free to migrate through the cell. mRNA molecules bind to protein-RNA complexes called ribosomes located in the cytosol, where they are translated into polypeptide sequences. The ribosome mediates the formation of a polypeptide sequence based on the mRNA sequence. The mRNA sequence directly relates to the polypeptide sequence by binding to transfer RNA (tRNA) adapter molecules in binding pockets within the ribosome. The new polypeptide then folds into a functional three-dimensional protein molecule. Motility Unicellular organisms can move in order to find food or escape predators. Common mechanisms of motion include flagella and cilia. In multicellular organisms, cells can move during processes such as wound healing, the immune response and cancer metastasis. For example, in wound healing in animals, white blood cells move to the wound site to kill the microorganisms that cause infection. Cell motility involves many receptors, crosslinking, bundling, binding, adhesion, motor and other proteins. The process is divided into three steps: protrusion of the leading edge of the cell, adhesion of the leading edge and de-adhesion at the cell body and rear, and cytoskeletal contraction to pull the cell forward. Each step is driven by physical forces generated by unique segments of the cytoskeleton. Navigation, control and communication In August 2020, scientists described one way cells—in particular cells of a slime mold and mouse pancreatic cancer-derived cells—are able to navigate efficiently through a body and identify the best routes through complex mazes: generating gradients after breaking down diffused chemoattractants which enable them to sense upcoming maze junctions before reaching them, including around corners. Multicellularity Cell specialization/differentiation Multicellular organisms are organisms that consist of more than one cell, in contrast to single-celled organisms. 
In complex multicellular organisms, cells specialize into different cell types that are adapted to particular functions. In mammals, major cell types include skin cells, muscle cells, neurons, blood cells, fibroblasts, stem cells, and others. Cell types differ both in appearance and function, yet are genetically identical. Cells of the same genotype can be of different cell types due to the differential expression of the genes they contain. Most distinct cell types arise from a single totipotent cell, called a zygote, that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Origin of multicellularity Multicellularity has evolved independently at least 25 times, including in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, Magnetoglobus multicellularis, or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, fungi, brown algae, red algae, green algae, and plants. It evolved repeatedly for plants (Chloroplastida), once or twice for animals, once for brown algae, and perhaps several times for fungi, slime molds, and red algae. Multicellularity may have evolved from colonies of interdependent organisms, from cellularization, or from organisms in symbiotic relationships. The first evidence of multicellularity is from cyanobacteria-like organisms that lived between 3 and 3.5 billion years ago. Other early fossils of multicellular organisms include the contested Grypania spiralis and the fossils of the black shales of the Palaeoproterozoic Francevillian Group Fossil B Formation in Gabon. The evolution of multicellularity from unicellular ancestors has been replicated in the laboratory, in evolution experiments using predation as the selective pressure. Origins The origin of cells has to do with the origin of life, which began the history of life on Earth. Origin of the first cell There are several theories about the origin of small molecules that led to life on the early Earth. They may have been carried to Earth on meteorites (see Murchison meteorite), created at deep-sea vents, or synthesized by lightning in a reducing atmosphere (see Miller–Urey experiment). There is little experimental data defining what the first self-replicating forms were. RNA is thought to be the earliest self-replicating molecule, as it is capable of both storing genetic information and catalyzing chemical reactions (see RNA world hypothesis), but some other entity with the potential to self-replicate could have preceded RNA, such as clay or peptide nucleic acid. Cells emerged at least 3.5 billion years ago. The current belief is that these cells were heterotrophs. The early cell membranes were probably simpler and more permeable than modern ones, with only a single fatty acid chain per lipid. Lipids are known to spontaneously form bilayered vesicles in water, and could have preceded RNA, but the first cell membranes could also have been produced by catalytic RNA, or even have required structural proteins before they could form. Origin of eukaryotic cells Eukaryotic cells emerged some 2.2 billion years ago in a process called eukaryogenesis. This is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor. 
This cell had a new level of complexity and capability, with a nucleus and facultatively aerobic mitochondria. It evolved some 2 billion years ago into a population of single-celled organisms that included the last eukaryotic common ancestor, gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. It featured at least one centriole and cilium, sex (meiosis and syngamy), peroxisomes, and a dormant cyst with a cell wall of chitin and/or cellulose. In turn, the last eukaryotic common ancestor gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms. The plants were created around 1.6 billion years ago with a second episode of symbiogenesis that added chloroplasts, derived from cyanobacteria. History of research 1632–1723: Antonie van Leeuwenhoek taught himself to make lenses, constructed basic optical microscopes and drew protozoa, such as Vorticella from rain water, and bacteria from his own mouth. 1665: Robert Hooke discovered cells in cork, then in living plant tissue using an early compound microscope. He coined the term cell (from Latin cellula, meaning "small room") in his book Micrographia (1665). 1839: Theodor Schwann and Matthias Jakob Schleiden elucidated the principle that plants and animals are made of cells, concluding that cells are a common unit of structure and development, and thus founding the cell theory. 1855: Rudolf Virchow stated that new cells come from pre-existing cells by cell division (omnis cellula ex cellula). 1931: Ernst Ruska built the first transmission electron microscope (TEM) at the University of Berlin. By 1935, he had built an EM with twice the resolution of a light microscope, revealing previously unresolvable organelles. 1981: Lynn Margulis published Symbiosis in Cell Evolution detailing how eukaryotic cells were created by symbiogenesis. See also Cell cortex Cell culture Cellular model Cytoneme Cytorrhysis Cytotoxicity Lipid raft List of distinct cell types in the adult human body Outline of cell biology Parakaryon myojinensis Plasmolysis Syncytium Tunneling nanotube Vault (organelle) External links MBInfo – Descriptions on Cellular Functions and Processes Inside the Cell – a science education booklet by National Institutes of Health, in PDF and ePub. Cell Biology in "The Biology Project" of University of Arizona. Centre of the Cell online The Image & Video Library of The American Society for Cell Biology, a collection of peer-reviewed still images, video clips and digital books that illustrate the structure, function and biology of the cell. WormWeb.org: Interactive Visualization of the C. elegans Cell lineage – Visualize the entire cell lineage tree of the nematode C. elegans
Barnard College, officially titled as Barnard College, Columbia University, is a private women's liberal arts college in the borough of Manhattan in New York City. It was founded in 1889 by a group of women led by young student activist Annie Nathan Meyer, who petitioned Columbia University's trustees to create an affiliated college named after Columbia's recently deceased 10th president, Frederick A.P. Barnard. The college is one of the original Seven Sisters—seven liberal arts colleges in the Northeastern United States that were historically women's colleges. Barnard is currently one of four Columbia undergraduate colleges with independent admission, curriculum, and financials. Students share classes, libraries, clubs, Greek life, athletic fields, and dining halls with Columbia, as well as sports teams, through the Columbia-Barnard Athletic Consortium, an agreement that makes Barnard the only women's college to offer its students the ability to compete in NCAA Division I athletics. Students receive their diploma from Columbia University. Barnard offers Bachelor of Arts degree programs in about 50 areas of study. Students may also pursue elements of their education at Columbia, the Juilliard School, the Manhattan School of Music, and The Jewish Theological Seminary, which are also based in New York City. Its campus is located in the Upper Manhattan neighborhood of Morningside Heights, stretching along Broadway between 116th and 120th Streets. It is directly across from Columbia's main campus. Barnard College alumnae include leaders in science, religion, politics, the Peace Corps, medicine, law, education, communications, theater, and business. Barnard graduates have been recipients of Emmy, Tony, Grammy, Academy, and Peabody Awards, Guggenheim Fellowships, MacArthur Fellowships, the Presidential Medal of Freedom, the National Medal of Science, and the Pulitzer Prize. History Founding For its first 229 years, Columbia College of Columbia University admitted only men for undergraduate study. Barnard College was founded in 1889 as a response to Columbia's refusal to admit women into its institution. The college was named after Frederick Augustus Porter Barnard, a deaf American educator and mathematician who served as the 10th president of Columbia from 1864 to 1889. He advocated for equal educational privileges for men and women, preferably in a coeducational setting, and began proposing in 1879 that Columbia admit women. Columbia's Board of Trustees repeatedly rejected Barnard's suggestion, but in 1883 agreed to create a detailed syllabus of study for women. While they could not attend Columbia classes, those who passed examinations based on the syllabus would receive a degree. The first such woman graduate received her bachelor's degree in 1887. A former student of the program, Annie Meyer, and other prominent New York women persuaded the board in 1889 to create a women's college connected to Columbia. Men and women were evenly represented among the founding Trustees of Barnard College. Barnard College's original 1889 home was a rented brownstone at 343 Madison Avenue, where a faculty of six offered instruction to 36 students. Morningside campus When Columbia University announced in 1892 its impending move to Morningside Heights, Barnard built a new campus nearby with gifts from Mary E. Brinckerhoff, Elizabeth Milbank Anderson and Martha Fiske. Two of these gifts were made with several stipulations attached. 
Brinckerhoff insisted that Barnard acquire land within 1,000 feet of the Columbia campus within the next four years. The Barnard trustees purchased land between 119th and 120th Streets after receiving funds for that purpose in 1895. Anderson requested that Charles A. Rich be hired. Rich designed the Milbank, Brinckerhoff, and Fiske Halls, built in 1897–1898; these were listed on the National Register of Historic Places in 2003. The first classes at the new campus were held in 1897. Despite Brinckerhoff's, Anderson's, and Fiske's gifts, Barnard remained in debt. Ella Weed supervised the college in its first four years; Emily James Smith succeeded her as Barnard's first dean. Jessica Finch is credited with coining the phrase "current events" while teaching at Barnard College in the 1890s. The college received the three blocks south of 119th Street from Anderson in 1903. Rich provided a master plan for the campus, but only Brooks Hall was built, being constructed between 1906 and 1908. None of Rich's other plans were carried out. Students' Hall, now known as Barnard Hall, was built in 1916 to a design by Arnold Brunner. Hewitt Hall was the last structure to be erected, in 1926–1927. All three buildings were listed on the National Register of Historic Places in 2003. By the mid-20th century Barnard had succeeded in its original goal of providing a top-tier education to women. Between 1920 and 1974, only the much larger Hunter College and University of California, Berkeley produced more women graduates who later received doctorates. In the 1970s, Barnard faced considerable pressure to merge with the male-only Columbia College, a move that was fiercely resisted by Barnard's president, Jacquelyn Mattfeld. Academics Barnard students are able to pursue a Bachelor of Arts degree in about 50 areas of study. Joint programs for the Bachelor of Science and other degrees exist with Columbia University, Juilliard School, and The Jewish Theological Seminary. The most popular majors at the college, by number of 2021 graduates, were: Econometrics and Quantitative Economics (62) Research and Experimental Psychology (56) History (43) English Language and Literature (39) Political Science and Government (36) Neuroscience (33) Art History, Criticism and Conservation (33) The liberal arts general education requirements are collectively called Foundations. Students must take two courses in the sciences (one of which must be accompanied by a laboratory course), study a single foreign language for two semesters, and take two courses in the arts/humanities as well as two in the social sciences. In addition, students must complete at least one three-credit course in the so-called "Modes of Thinking" series, and fulfill other requirements. Admissions Admissions to Barnard are considered "most selective" by U.S. News & World Report. It is the most selective women's college in the nation; in 2017, Barnard had the lowest acceptance rate of the five Seven Sisters that remain single-sex in admissions. The class of 2026 was admitted at a rate of 8% from 12,009 applicants, the lowest acceptance rate in the institution's history. The median SAT composite score of enrolled students was 1440, with median subscores of 720 in Math and 715 in Evidence-Based Reading and Writing. The median ACT Composite score was 33. 
In 2015, Barnard announced that it would admit transgender women who "consistently live and identify as women, regardless of the gender assigned to them at birth" and would continue to support and enroll those students who transitioned to male after they had already been admitted. The college practices need-blind admission for domestic first-year applicants. Rankings Barnard is ranked tied at 11th of 185 overall and tied for 25th of 36 for "Best Undergraduate Teaching" among U.S. liberal arts colleges by U.S. News & World Report. Forbes ranked Barnard 73rd of 500 colleges in 2023. Campus Library While Barnard students have access to the libraries at Columbia University, the college has always maintained a library of its own. The Barnard Library also encompasses the Archives and Special Collections, with material that documents Barnard's history from its founding to the present day. Among the collections are the Ntozake Shange papers. Zine Collection The Barnard Zine Library is a unit of the Barnard Library and Academic Information Systems (BLAIS). Zine collections target primarily female, default queer, intentionally of color, and gender expansive topics. Student life Student organizations Every Barnard student is part of the Student Government Association (SGA), which elects a representative student government. SGA aims to facilitate the expression of opinions on matters that directly affect the Barnard community. Student groups include theatre and vocal music groups, language clubs, literary magazines, a freeform radio station called WBAR, a biweekly magazine called the Barnard Bulletin, Club Q, community service groups, and others. Barnard students can also join extracurricular activities or organizations at Columbia University, while Columbia University students are allowed in most, but not all, Barnard organizations. Barnard's McIntosh Activities Council organizes various community-focused events on campus, such as Big Sub and Midnight Breakfast. There are sub-committees focused on cultural events (Mosaic), health and wellness (Wellness), networking (Network), event-planning (Community), and service (Action). Sororities Barnard students participate in various sororities. Barnard does not fully recognize the National Panhellenic Conference sororities at Columbia, but it does provide some funding to account for Barnard students living in Columbia housing through these organizations. Traditions Barnard Greek Games: One of Barnard's oldest traditions, the Barnard Greek Games were first held in 1903, and occurred annually until the Columbia University protests in 1968. Since then they have been sporadically revived. The games consist of competitions between each graduating class at Barnard, and events have traditionally included Greek poetry recitation, dance, chariot racing, and a torch race. Take Back the Night: Each April, Barnard and Columbia students participate in the Take Back the Night march and speak-out. This annual event grew out of a 1988 Seven Sisters conference. The march has grown from under 200 participants in 1988 to more than 2,500 in 2007. Midnight Breakfast marks the beginning of finals week. As a highly popular event and long-standing college tradition, Midnight Breakfast is hosted by the student-run activities council, McAC (McIntosh Activities Council). In addition to providing standard breakfast foods, each year's theme is also incorporated into the menu. Past themes have included "I YUMM the 90s," "Grease," and "Take Me Out to the Ballgame." 
The event is a school-wide affair as college deans, trustees and the president serve food to about a thousand students. It takes place the night before finals begin every semester. Big Sub: Towards the beginning of each fall semester, Barnard College supplies a submarine sandwich more than 700 feet long. Students from the college can take as much of the sub as they can carry. The sub has kosher, dairy-free, vegetarian, and vegan sections. This event is organized by the student-run activities council, McAC. Academic affiliations Relationship with Columbia University The Barnard Bulletin in 1976 described the relationship between the college and Columbia University as "intricate and ambiguous". Barnard president Debora Spar said in 2012 that "the relationship is admittedly a complicated one, a unique one and one that may take a few sentences to explain to the outside community". Outside sources often describe Barnard as part of Columbia; The New York Times in 2013, for example, called Barnard "an undergraduate women's college of Columbia University". Its front gates read "Barnard College of Columbia University." Barnard describes itself as "both an independently incorporated educational institution and an official college of Columbia University" that is "one of the University's four colleges, but we're largely autonomous, with our own leadership and purse strings", and advises students to state "Barnard College, Columbia University" or "Barnard College of Columbia University" on résumés. Columbia describes Barnard as an affiliated institution that is a faculty of the university or is "in partnership with" it. Both the college and Columbia evaluate Barnard faculty for tenure, and Barnard graduates receive Columbia diplomas signed by the Barnard and the Columbia presidents. Before coeducation at Columbia, Smith and Columbia president Seth Low worked to open Columbia classes to Barnard students. By 1900 they could attend Columbia classes in philosophy, political science, and several scientific fields. That year Barnard formalized an affiliation with the university which made available to its students the instruction and facilities of Columbia. Franz Boas, who taught at both Columbia and Barnard in the early 1900s, was among those faculty members who reportedly found Barnard students superior to their male Columbia counterparts. From 1955, Columbia and Barnard students could register for the other school's classes with the permission of the instructor; from 1973 no permission was needed. By the 1940s, the other undergraduate and graduate divisions of Columbia University, apart from Columbia College, admitted women. Columbia president William J. McGill predicted in 1970 that Barnard College and Columbia College would merge within five years. In 1973, Columbia and Barnard signed a three-year agreement to increase the sharing of classrooms, facilities, and housing, and cooperation in faculty appointments, which they described as "integration without assimilation"; by the mid-1970s, most Columbia dormitories were coed. The university's financial difficulties during the decade increased its desire to merge to end what Columbia described as the "anachronism" of single-sex education, but Barnard resisted doing so because of Columbia's large debt, rejecting in 1975 Columbia dean Peter Pouncey's proposal to merge Barnard and the three Columbia undergraduate schools. The 1973–1976 chairwoman of the board at Barnard, Eleanor Thomas Elliott, led the resistance to this takeover. 
The college's marketing emphasized the Columbia relationship, however; the Bulletin in 1976 stated that Barnard described it as identical to the one between Harvard College and Radcliffe College ("who are merged in practically everything but name at this point"). After Barnard rejected subsequent merger proposals from Columbia and a one-year extension to the 1973 agreement expired, the two schools began discussing their future relationship in 1977. By 1979, the relationship had so deteriorated that Barnard officials stopped attending meetings. Because of an expected decline in enrollment, in 1980 a Columbia committee recommended that Columbia College begin admitting women without Barnard's cooperation. A 1981 committee found that Columbia was no longer competitive with other Ivy League universities without women, and that admitting women would not affect Barnard's applicant pool. That year Columbia president Michael Sovern agreed for the two schools to cooperate in admitting women to Columbia, but Barnard faculty's opposition caused president Ellen Futter to reject the agreement. A decade of negotiations for a Columbia-Barnard merger akin to Harvard and Radcliffe had failed. In January 1982, the two schools instead announced that Columbia College would begin admitting women in 1983, and Barnard's control over tenure for its faculty would increase; previously, a committee on which Columbia faculty outnumbered Barnard's three to two controlled the latter's tenure. Applications to Columbia rose 56% that year, making admission more selective, and nine Barnard students transferred to Columbia. Eight students admitted to both Columbia and Barnard chose Barnard, while 78 chose Columbia. Within a few years, however, selectivity rose at both schools as they received more women applicants than expected. After coeducation The Columbia-Barnard affiliation continued. Barnard pays Columbia about $5 million a year under the terms of the "interoperate relationship", which the two schools renegotiate every 15 years. Despite the affiliation Barnard is legally and financially separate from Columbia, with an independent faculty and board of trustees. It is responsible for its own separate admissions, health, security, guidance and placement services, and has its own alumnae association. Nonetheless, Barnard students participate in the academic, social, athletic and extracurricular life of the broader University community on a reciprocal basis. The affiliation permits the two schools to share some academic resources; for example, only Barnard has an urban studies department, and only Columbia has a computer science department. Most Columbia classes are open to Barnard students and vice versa. Barnard students and faculty are represented in the University Senate, and student organizations such as the Columbia Daily Spectator are open to all students. Barnard students play on Columbia athletics teams, and Barnard uses Columbia email, telephone and network services. Barnard athletes compete in the Ivy League (NCAA Division I) through the Columbia/Barnard Athletic Consortium, which was established in 1983. Through this arrangement, Barnard is the only women's college offering Division I athletics. There are 15 intercollegiate teams, and students also compete at the intramural and club levels. From 1975 to 1983, before the establishment of the Columbia/Barnard Athletic Consortium, Barnard students competed as the "Barnard Bears". Prior to 1975, students referred to themselves as the "Barnard honeybears". 
Controversies In the spring of 1960, Columbia University president Grayson Kirk complained to the president of Barnard that Barnard students were wearing inappropriate clothing. The garments in question were pants and Bermuda shorts. The administration forced the student council to institute a dress code. Students would be allowed to wear shorts and pants only at Barnard and only if the shorts were no more than two inches above the knee and the pants were not tight. Barnard women crossing the street to enter the Columbia campus wearing shorts or pants were required to cover themselves with a long coat. In March 1968, The New York Times ran an article on students who cohabited, identifying one of the persons they interviewed as a student at Barnard College from New Hampshire named "Susan". Barnard officials searched their records for women from New Hampshire and were able to determine that "Susan" was the pseudonym of a student (Linda LeClair) who was living with her boyfriend, a student at Columbia University. She was called before Barnard's student-faculty administration judicial committee, where she faced the possibility of expulsion. A student protest included a petition signed by 300 other Barnard women, admitting that they too had broken the regulations against cohabitating. The judicial committee reached a compromise and the student was allowed to remain in school, but was denied use of the college cafeteria and barred from all social activities. The student briefly became a focus of intense national attention. She eventually dropped out of Barnard. Administration The following lists all the presidents and deans of Barnard College from 1889 to present. Ella Weed (1889–1894) Emily James Smith (1894–1900) Laura Drake Gill (1901–1907) Virginia Gildersleeve (1911–1947) Millicent McIntosh (1952–1962) Rosemary Park (1962–1967) Martha Peterson (1967–1975) Jacquelyn Mattfeld (1976–1981) Ellen Futter (1981–1993) Judith Shapiro (1994–2008) Debora Spar (2008–2017) Sian Beilock (2017–2023) Laura A. Rosenbury (2023–present) Notable people Barnard College has graduated many prominent leaders in science, religion, politics, the Peace Corps, medicine, law, education, communications, theater, and business; and acclaimed actors, architects, artists, astronauts, engineers, human rights activists, inventors, musicians, philanthropists, and writers. They include academic Louise Holland (1914), author Zora Neale Hurston (1928), author and political activist Grace Lee Boggs (1935), television host Ronnie Eldridge (1952), Phyllis E. Grann, CEO of Penguin Putnam, U.S. Representative Helen Gahagan (1924), Spelman College's 11th President and former chair of the Presidential Advisory Council on HIV/AIDS Helene D. Gayle (1970), president of the American Civil Liberties Union Susan Herman (1968), Chief Judge of the New York Court of Appeals Judith Kaye (1958), chair of the National Labor Relations Board Wilma B. Liebman (1971), musician and performance artist Laurie Anderson (1969), actress, activist, and gubernatorial candidate Cynthia Nixon (1988), author of The Sisterhood of the Traveling Pants Ann Brashares (1989), The New Yorker cartoonist Amy Hwang (2000), actress from Grey's Anatomy Kelly McCreary (2003), writer and director Greta Gerwig (2004), and Disney Channel actress Christy Carlson Romano (2015). 
See also Athena Film Festival Barnard Center for Research on Women Hidden Ivies: Thirty Colleges of Excellence Women's colleges in the United States List of coordinate colleges
Boxing (also known as "western boxing" or "pugilism") is a combat sport and a martial art in which two people, usually wearing protective gloves and other protective equipment such as hand wraps and mouthguards, throw punches at each other for a predetermined amount of time in a boxing ring. Although the term boxing is commonly attributed to Western boxing, in which only fists are involved, the sport has developed in different ways in different geographical areas and cultures of the world. In global terms, "boxing" today is also a set of combat sports focused on striking, in which two opponents face each other in a fight using at least their fists, and possibly involving other actions such as kicks, elbow strikes, knee strikes, and headbutts, depending on the rules. Some of these variants are bare-knuckle boxing, kickboxing, Muay Thai, lethwei, savate, and sanda. Boxing techniques have been incorporated into many martial arts, military systems, and other combat sports. Humans have fought in hand-to-hand combat since the dawn of human history, and the origin of the sport of boxing is unknown. According to some sources, boxing has prehistoric origins in present-day Ethiopia, where it appeared in the sixth millennium BC; when the Egyptians invaded Nubia they learned the art of boxing from the local population and took the sport to Egypt, where it became popular, and from Egypt boxing spread to other countries including Greece, eastward to Mesopotamia, and northward to Rome. The earliest visual evidence of any type of boxing is from Egypt and Sumer, both from the third millennium BC, and can be seen in Sumerian carvings from the third and second millennia BC. The earliest evidence of boxing rules dates back to Ancient Greece, where boxing was established as an Olympic game in 688 BC. Boxing evolved from 16th- and 18th-century prizefights, largely in Great Britain, to the forerunner of modern boxing in the mid-19th century with the 1867 introduction of the Marquess of Queensberry Rules. Amateur boxing is both an Olympic and Commonwealth Games sport and is a standard fixture in most international games—it also has its own world championships. Boxing is overseen by a referee over a series of one-to-three-minute intervals called "rounds". A winner can be determined before the completion of the rounds when a referee deems an opponent incapable of continuing, disqualifies an opponent, or the opponent resigns. When the fight reaches the end of its final round with both opponents still standing, the judges' scorecards determine the victor. In case both fighters gain equal scores from the judges, a professional bout is considered a draw. In Olympic boxing, because a winner must be declared, judges award the contest to one fighter on technical criteria. History Ancient history Hitting with different extremities of the body, such as kicks and punches, as an act of human aggression has existed across the world throughout human history, being a combat system as old as wrestling. However, in terms of sports competition, due to the lack of writing in prehistoric times and the lack of references, it is not possible to determine the rules of any kind of boxing in prehistory, and in ancient times these can only be inferred from the few intact sources and references to the sport. The origin of the sport of boxing is unknown; however, according to some sources, boxing has prehistoric origins in present-day Ethiopia, where it appeared in the sixth millennium BC. 
When the Egyptians invaded Nubia they learned the art of boxing from the local population, and they took the sport to Egypt where it became popular. From Egypt, boxing spread to other countries including Greece, eastward to Mesopotamia, and northward to Rome. The earliest visual evidence of boxing comes from Egypt and Sumer, both from the third millennium BC. A relief sculpture from Egyptian Thebes shows both boxers and spectators. These early Middle-Eastern and Egyptian depictions showed contests where fighters were either bare-fisted or had a band supporting the wrist. The earliest evidence of the use of gloves can be found in Minoan Crete (–1400 BC). Various types of boxing existed in ancient India. The earliest references to musti-yuddha come from classical Vedic epics such as the Rig Veda (c. 1500–1000 BCE) and Ramayana (c. 700–400 BCE). The Mahabharata describes two combatants boxing with clenched fists and fighting with kicks, finger strikes, knee strikes and headbutts during the time of King Virata. Duels (niyuddham) were often fought to the death. During the period of the Western Satraps, the ruler Rudradaman—in addition to being well-versed in "the great sciences" which included Indian classical music, Sanskrit grammar, and logic—was said to be an excellent horseman, charioteer, elephant rider, swordsman and boxer. The Gurbilas Shemi, an 18th-century Sikh text, gives numerous references to musti-yuddha. The martial art is related to other forms of martial arts found in other parts of the Indian cultural sphere, including Muay Thai in Thailand, Muay Lao in Laos, Pradal Serey in Cambodia and Lethwei in Myanmar. In Ancient Greece boxing was a well-developed sport called pygmachia, and enjoyed consistent popularity. In Olympic terms, it was first introduced in the 23rd Olympiad, 688 BC. The boxers would wind leather thongs around their hands in order to protect them. There were no rounds and boxers fought until one of them acknowledged defeat or could not continue. Weight categories were not used, which meant heavier fighters had a tendency to dominate. The style of boxing practiced typically featured an advanced left leg stance, with the left arm semi-extended as a guard, in addition to being used for striking, and with the right arm drawn back ready to strike. It was the head of the opponent which was primarily targeted, and there is little evidence to suggest that targeting the body or the use of kicks was common; in this respect it resembled modern Western boxing. Boxing was a popular spectator sport in Ancient Rome. Fighters protected their knuckles with leather strips wrapped around their fists. Eventually harder leather was used and the strips became a weapon. Metal studs were introduced to the strips to make the cestus. Fighting events were held at Roman amphitheatres. Early London prize ring rules Records of boxing activity disappeared in the West after the fall of the Western Roman Empire, when the wearing of weapons became common once again and interest in fighting with the fists waned. However, there are detailed records of various fist-fighting sports that were maintained in different cities and provinces of Italy between the 12th and 17th centuries. There was also a sport in ancient Rus called kulachniy boy or 'fist fighting'. As the wearing of swords became less common, there was renewed interest in fencing with the fists. The sport later resurfaced in England during the early 16th century in the form of bare-knuckle boxing, sometimes referred to as prizefighting. 
The first documented account of a bare-knuckle fight in England appeared in 1681 in the London Protestant Mercury, and the first English bare-knuckle champion was James Figg in 1719. This is also the time when the word "boxing" first came to be used. This earliest form of modern boxing was very different. Contests in Mr. Figg's time, in addition to fist fighting, also contained fencing and cudgeling. On 6 January 1681, the first recorded boxing match took place in Britain when Christopher Monck, 2nd Duke of Albemarle (and later Lieutenant Governor of Jamaica), engineered a bout between his butler and his butcher, with the latter winning the prize. Early fighting had no written rules. There were no weight divisions or round limits, and no referee. In general, it was extremely chaotic. An early article on boxing was published in Nottingham in 1713, by Sir Thomas Parkyns, 2nd Baronet, a wrestling patron from Bunny, Nottinghamshire, who had practised the techniques he described. The article, a single page in his manual of wrestling and fencing, Progymnasmata: The inn-play, or Cornish-hugg wrestler, described a system of headbutting, punching, eye-gouging, chokes, and hard throws, not recognized in boxing today. The first boxing rules, called the Broughton Rules, were introduced by champion Jack Broughton in 1743 to protect fighters in the ring, where deaths sometimes occurred. Under these rules, if a man went down and could not continue after a count of 30 seconds, the fight was over. Hitting a downed fighter and grasping below the waist were prohibited. Broughton encouraged the use of "mufflers", a form of padded bandage or mitten, to be used in "jousting" or sparring sessions in training, and in exhibition matches. These rules did allow the fighters an advantage not enjoyed by today's boxers: they permitted a fighter to drop to one knee to end the round and begin the 30-second count at any time. Thus a fighter realizing he was in trouble had an opportunity to recover. However, this was considered "unmanly" and was frequently disallowed by additional rules negotiated by the seconds of the boxers. In modern boxing, rounds have a three-minute limit (in contrast to the earlier rule under which a downed fighter ended the round). Intentionally going down in modern boxing will cause the recovering fighter to lose points in the scoring system. Furthermore, as the contestants did not have heavy leather gloves and wristwraps to protect their hands, they used different punching techniques to preserve their hands, because the head was a common target to hit full out. Almost all period manuals have powerful straight punches with the whole body behind them to the face (including the forehead) as the basic blows. The British sportswriter Pierce Egan coined the term "the sweet science" as an epithet for prizefighting – or more fully "the sweet science of bruising" – as a description of England's bare-knuckle fight scene in the early nineteenth century. Boxing could also be used to settle disputes, even by females. In 1790 in Waddington, Lincolnshire, Mary Farmery and Susanna Locker both laid claim to the affections of a young man; this produced a challenge from the former to fight for the prize, which was accepted by the latter. Proper sidesmen were chosen, and every matter conducted in form. After several knock-down blows on both sides, the battle ended in favour of Mary Farmery. 
The London Prize Ring Rules introduced measures that remain in effect for professional boxing to this day, such as outlawing butting, gouging, scratching, kicking, hitting a man while down, holding the ropes, using resin, stones or hard objects in the hands, and biting. Marquess of Queensberry rules (1867) In 1867, the Marquess of Queensberry rules were drafted by John Chambers for amateur championships held at Lillie Bridge in London for lightweights, middleweights and heavyweights. The rules were published under the patronage of the Marquess of Queensberry, whose name has always been associated with them. There were twelve rules in all, and they specified that fights should be "a fair stand-up boxing match" in a 24-foot-square or similar ring. Rounds were three minutes with one-minute rest intervals between rounds. Each fighter was given a ten-second count if he was knocked down, and wrestling was banned. The introduction of gloves of "fair-size" also changed the nature of the bouts. An average pair of boxing gloves resembles a bloated pair of mittens and is laced up around the wrists. The gloves can be used to block an opponent's blows. As a result of their introduction, bouts became longer and more strategic, with greater importance attached to defensive maneuvers such as slipping, bobbing, countering and angling. Because less defensive emphasis was placed on the use of the forearms and more on the gloves, the classical forearms-outwards, torso-leaning-back stance of the bare-knuckle boxer was modified to a more modern stance in which the torso is tilted forward and the hands are held closer to the face. Late 19th and early 20th centuries Through the late nineteenth century, the martial art of boxing or prizefighting was primarily a sport of dubious legitimacy. Outlawed in England and much of the United States, prizefights were often held at gambling venues and broken up by police. Brawling and wrestling tactics continued, and riots at prizefights were common occurrences. Still, throughout this period, there arose some notable bare-knuckle champions who developed fairly sophisticated fighting tactics. The English case of R v. Coney in 1882 found that a bare-knuckle fight was an assault occasioning actual bodily harm, despite the consent of the participants. This marked the end of widespread public bare-knuckle contests in England. The first world heavyweight champion under the Queensberry Rules was "Gentleman Jim" Corbett, who defeated John L. Sullivan in 1892 at the Pelican Athletic Club in New Orleans. The first instance of film censorship in the United States occurred in 1897, when several states banned the showing of prize fighting films from the state of Nevada, where it was legal at the time. Throughout the early twentieth century, boxers struggled to achieve legitimacy. They were aided by the influence of promoters like Tex Rickard and the popularity of great champions such as John L. Sullivan. Modern boxing The modern sport arose from illegal venues and outlawed prizefighting and has become a multibillion-dollar commercial enterprise. A majority of young talent still comes from poverty-stricken areas around the world. Places like Mexico, Africa, South America, and Eastern Europe prove to be filled with young aspiring athletes who wish to become the future of boxing. Even in the U.S., places like the inner cities of New York and Chicago have given rise to promising young talent. 
According to Rubin, "boxing lost its appeal with the American middle class, and most of who boxes in modern America come from the streets and are street fighters". Rules The Marquess of Queensberry rules have been the general rules governing modern boxing since their publication in 1867. A boxing match typically consists of a determined number of three-minute rounds, a total of up to 9 to 12 rounds, with a minute spent between each round with the fighters resting in their assigned corners and receiving advice and attention from their coach and staff. The fight is controlled by a referee who works within the ring to judge and control the conduct of the fighters, rule on their ability to fight safely, count knocked-down fighters, and rule on fouls. Up to three judges are typically present at ringside to score the bout and assign points to the boxers, based on punches that connect, defense, knockdowns, hugging and other, more subjective, measures. Because of the open-ended style of boxing judging, many fights have controversial results, in which one or both fighters believe they have been "robbed" or unfairly denied a victory. Each fighter has an assigned corner of the ring, where their coach, as well as one or more "seconds", may administer to the fighter at the beginning of the fight and between rounds. Each boxer enters the ring from their assigned corner at the beginning of each round and must cease fighting and return to their corner at the signalled end of each round. A bout in which the predetermined number of rounds passes is decided by the judges, and is said to "go the distance". The fighter with the higher score at the end of the fight is ruled the winner. With three judges, unanimous and split decisions are possible, as are draws. A boxer may win the bout before a decision is reached through a knock-out; such bouts are said to have ended "inside the distance". If a fighter is knocked down during the fight, determined by whether the boxer touches the canvas floor of the ring with any part of their body other than the feet as a result of the opponent's punch and not a slip, as determined by the referee, the referee begins counting until the fighter returns to their feet and can continue. Some jurisdictions require the referee to count to eight regardless of whether the fighter gets up sooner. Should the referee count to ten, then the knocked-down boxer is ruled "knocked out" (whether unconscious or not) and the other boxer is ruled the winner by knockout (KO). A "technical knock-out" (TKO) is possible as well, and is ruled by the referee, fight doctor, or a fighter's corner if a fighter is unable to safely continue to fight, based upon injuries or being judged unable to effectively defend themselves. Many jurisdictions and sanctioning agencies also have a "three-knockdown rule", in which three knockdowns in a given round result in a TKO. A TKO is considered a knockout in a fighter's record. A "standing eight" count rule may also be in effect. This gives the referee the right to step in and administer a count of eight to a fighter that the referee feels may be in danger, even if no knockdown has taken place. After counting, the referee will observe the fighter and decide if the fighter is fit to continue. For scoring purposes, a standing eight count is treated as a knockdown. In general, boxers are prohibited from hitting below the belt, holding, tripping, pushing, biting, or spitting. 
The boxer's shorts are raised so the opponent is not allowed to hit to the groin area with intent to cause pain or injury. Failure to abide by the former may result in a foul. They also are prohibited from kicking, head-butting, or hitting with any part of the arm other than the knuckles of a closed fist (including hitting with the elbow, shoulder or forearm, as well as with open gloves, the wrist, the inside, back or side of the hand). They are prohibited as well from hitting the back, back of the head or neck (called a "rabbit-punch") or the kidneys. They are prohibited from holding the ropes for support when punching, holding an opponent while punching, or ducking below the belt of their opponent (dropping below the waist of your opponent, no matter the distance between). If a "clinch" – a defensive move in which a boxer wraps their opponent's arms and holds on to create a pause – is broken by the referee, each fighter must take a full step back before punching again (alternatively, the referee may direct the fighters to "punch out" of the clinch). When a boxer is knocked down, the other boxer must immediately cease fighting and move to the furthest neutral corner of the ring until the referee has either ruled a knockout or called for the fight to continue. Violations of these rules may be ruled "fouls" by the referee, who may issue warnings, deduct points, or disqualify an offending boxer, causing an automatic loss, depending on the seriousness and intentionality of the foul. An intentional foul that causes injury that prevents a fight from continuing usually causes the boxer who committed it to be disqualified. A fighter who suffers an accidental low-blow may be given up to five minutes to recover, after which they may be ruled knocked out if they are unable to continue. Accidental fouls that cause injury ending a bout may lead to a "no contest" result, or else cause the fight to go to a decision if enough rounds (typically four or more, or at least three in a four-round fight) have passed. Unheard of in the modern era, but common during the early 20th Century in North America, a "newspaper decision (NWS)" might be made after a no decision bout had ended. A "no decision" bout occurred when, by law or by pre-arrangement of the fighters, if both boxers were still standing at the fight's conclusion and there was no knockout, no official decision was rendered and neither boxer was declared the winner. But this did not prevent the pool of ringside newspaper reporters from declaring a consensus result among themselves and printing a newspaper decision in their publications. Officially, however, a "no decision" bout resulted in neither boxer winning or losing. Boxing historians sometimes use these unofficial newspaper decisions in compiling fight records for illustrative purposes only. Often, media outlets covering a match will personally score the match, and post their scores as an independent sentence in their report. Professional vs. amateur boxing Throughout the 17th to 19th centuries, boxing bouts were motivated by money, as the fighters competed for prize money, promoters controlled the gate, and spectators bet on the result. The modern Olympic movement revived interest in amateur sports, and amateur boxing became an Olympic sport in 1908. 
In their current form, Olympic and other amateur bouts are typically limited to three or four rounds, scoring is computed by points based on the number of clean blows landed, regardless of impact, and fighters wear protective headgear, reducing the number of injuries, knockdowns, and knockouts. Currently, scoring blows in amateur boxing are subjectively counted by ringside judges, but the Australian Institute for Sport has demonstrated a prototype of an Automated Boxing Scoring System, which introduces scoring objectivity, improves safety, and arguably makes the sport more interesting to spectators. Professional boxing remains by far the most popular form of the sport globally, though amateur boxing is dominant in Cuba and some former Soviet republics. For most fighters, an amateur career, especially at the Olympics, serves to develop skills and gain experience in preparation for a professional career. Western boxers typically participate in one Olympics and then turn pro, while Cubans and boxers from other socialist countries have an opportunity to collect multiple medals. In 2016, professional boxers were admitted to the Olympic Games and other tournaments sanctioned by AIBA. This was done in part to level the playing field and give all of the athletes the same opportunities that government-sponsored boxers from socialist countries and post-Soviet republics have. However, professional organizations strongly opposed that decision. Amateur boxing Amateur boxing may be found at the collegiate level, at the Olympic Games, Commonwealth Games, and Asian Games, as well as in many other venues sanctioned by amateur boxing associations. Amateur boxing has a point scoring system that measures the number of clean blows landed rather than physical damage. Bouts consist of three rounds of three minutes in the Olympic and Commonwealth Games, and three rounds of three minutes in a national ABA (Amateur Boxing Association) bout, each with a one-minute interval between rounds. Competitors wear protective headgear and gloves with a white strip or circle across the knuckle. There are cases, however, where white-ended gloves are not required and any solid color may be worn. The white end is just a way to make it easier for judges to score clean hits. Each competitor must have their hands properly wrapped, pre-fight, for added protection on their hands and for added cushion under the gloves. Gloves worn by the fighters must be twelve ounces in weight unless the fighters weigh under , thus allowing them to wear ten ounce gloves. A punch is considered a scoring punch only when the boxers connect with the white portion of the gloves. Each punch that lands cleanly on the head or torso with sufficient force is awarded a point. A referee monitors the fight to ensure that competitors use only legal blows. A belt worn over the torso represents the lower limit of punches – any boxer repeatedly landing low blows below the belt is disqualified. Referees also ensure that the boxers don't use holding tactics to prevent the opponent from swinging. If this occurs, the referee separates the opponents and orders them to continue boxing. Repeated holding can result in a boxer being penalized or ultimately disqualified. Referees will stop the bout if a boxer is seriously injured, if one boxer is significantly dominating the other or if the score is severely imbalanced. 
Amateur bouts which end this way may be noted as "RSC" (referee stopped contest) with notations for an outclassed opponent (RSCO), outscored opponent (RSCOS), injury (RSCI) or head injury (RSCH). Professional boxing Professional bouts are usually much longer than amateur bouts, typically ranging from ten to twelve rounds, though four-round fights are common for less experienced fighters or club fighters. There are also some two- and three-round professional bouts, especially in Australia. Through the early 20th century, it was common for fights to have unlimited rounds, ending only when one fighter quit, benefiting high-energy fighters like Jack Dempsey. Fifteen rounds remained the internationally recognized limit for championship fights for most of the 20th century until the early 1980s, when the death of boxer Kim Duk-koo eventually prompted the World Boxing Council and other organizations sanctioning professional boxing to reduce the limit to twelve rounds. Headgear is not permitted in professional bouts, and boxers are generally allowed to take much more damage before a fight is halted. At any time, the referee may stop the contest if he believes that one participant cannot defend himself due to injury. In that case, the other participant is awarded a technical knockout win. A technical knockout would also be awarded if a fighter lands a punch that opens a cut on the opponent, and the opponent is later deemed not fit to continue by a doctor because of the cut. For this reason, fighters often employ cutmen, whose job is to treat cuts between rounds so that the boxer is able to continue despite the cut. If a boxer simply quits fighting, or if his corner stops the fight, then the winning boxer is also awarded a technical knockout victory. In contrast with amateur boxing, professional male boxers have to be bare-chested. Boxing styles Definition of style "Style" is often defined as the strategic approach a fighter takes during a bout. No two fighters' styles are alike, as each is determined by that individual's physical and mental attributes. Three main styles exist in boxing: outside fighter ("boxer"), brawler (or "slugger"), and inside fighter ("swarmer"). These styles may be divided into several special subgroups, such as counter puncher, etc. The main philosophy of the styles is, that each style has an advantage over one, but disadvantage over the other one. It follows the rock paper scissors scenario – boxer beats brawler, brawler beats swarmer, and swarmer beats boxer. Boxer/out-fighter A classic "boxer" or stylist (also known as an "out-fighter") seeks to maintain distance between himself and his opponent, fighting with faster, longer range punches, most notably the jab, and gradually wearing his opponent down. Due to this reliance on weaker punches, out-fighters tend to win by point decisions rather than by knockout, though some out-fighters have notable knockout records. They are often regarded as the best boxing strategists due to their ability to control the pace of the fight and lead their opponent, methodically wearing him down and exhibiting more skill and finesse than a brawler. Out-fighters need reach, hand speed, reflexes, and footwork. 
Notable out-fighters include Muhammad Ali, Larry Holmes, Joe Calzaghe, Wilfredo Gómez, Salvador Sánchez, Cecilia Brækhus, Gene Tunney, Ezzard Charles, Willie Pep, Meldrick Taylor, Ricardo "Finito" López, Floyd Mayweather Jr., Roy Jones Jr., Sugar Ray Leonard, Miguel Vázquez, Sergio "Maravilla" Martínez, Wladimir Klitschko and Guillermo Rigondeaux. This style was also used by fictional boxer Apollo Creed. Boxer-puncher A boxer-puncher is a well-rounded boxer who is able to fight at close range with a combination of technique and power, often with the ability to knock opponents out with a combination and in some instances a single shot. Their movement and tactics are similar to that of an out-fighter (although they are generally not as mobile as an out-fighter), but instead of winning by decision, they tend to wear their opponents down using combinations and then move in to score the knockout. A boxer must be well rounded to be effective using this style. Notable boxer-punchers include Muhammad Ali, Canelo Álvarez, Sugar Ray Leonard, Roy Jones Jr., Wladimir Klitschko, Vasyl Lomachenko, Lennox Lewis, Joe Louis, Wilfredo Gómez, Oscar De La Hoya, Archie Moore, Miguel Cotto, Nonito Donaire, Sam Langford, Henry Armstrong, Sugar Ray Robinson, Tony Zale, Carlos Monzón, Alexis Argüello, Érik Morales, Terry Norris, Marco Antonio Barrera, Naseem Hamed, Thomas Hearns, Julian Jackson and Gennady Golovkin. Counter puncher Counter punchers are slippery, defensive style fighters who often rely on their opponent's mistakes in order to gain the advantage, whether it be on the score cards or more preferably a knockout. They use their well-rounded defense to avoid or block shots and then immediately catch the opponent off guard with a well placed and timed punch. A fight with a skilled counter-puncher can turn into a war of attrition, where each shot landed is a battle in itself. Thus, fighting against counter punchers requires constant feinting and the ability to avoid telegraphing one's attacks. To be truly successful using this style they must have good reflexes, a high level of prediction and awareness, pinpoint accuracy and speed, both in striking and in footwork. Notable counter punchers include Muhammad Ali, Joe Calzaghe, Vitali Klitschko, Evander Holyfield, Max Schmeling, Chris Byrd, Jim Corbett, Jack Johnson, Bernard Hopkins, Laszlo Papp, Jerry Quarry, Anselmo Moreno, James Toney, Marvin Hagler, Juan Manuel Márquez, Humberto Soto, Floyd Mayweather Jr., Roger Mayweather, Pernell Whitaker, Sergio Martínez and Guillermo Rigondeaux. This style of boxing is also used by fictional boxer Little Mac. Counter punchers usually wear their opponents down by causing them to miss their punches. The more the opponent misses, the faster they tire, and the psychological effects of being unable to land a hit will start to sink in. The counter puncher often tries to outplay their opponent entirely, not just in a physical sense, but also in a mental and emotional sense. This style can be incredibly difficult, especially against seasoned fighters, but winning a fight without getting hit is often worth the pay-off. They usually try to stay away from the center of the ring, in order to outmaneuver and chip away at their opponents. A large advantage in counter-hitting is the forward momentum of the attacker, which drives them further into your return strike. As such, knockouts are more common than one would expect from a defensive style. 
Brawler/slugger A brawler is a fighter who generally lacks finesse and footwork in the ring, but makes up for it through sheer punching power. Many brawlers tend to lack mobility, preferring a less mobile, more stable platform, and have difficulty pursuing fighters who are fast on their feet. They may also have a tendency to ignore combination punching in favor of continuous beat-downs with one hand, throwing slower, more powerful single punches (such as hooks and uppercuts). Their slowness and predictable punching pattern (single punches with obvious leads) often leave them open to counter punches, so successful brawlers must be able to absorb a substantial amount of punishment. However, not all brawler/slugger fighters are immobile; some can move around and switch styles if needed while still retaining the brawler/slugger style, such as Wilfredo Gómez, Prince Naseem Hamed and Danny García. A brawler's most important assets are power and chin (the ability to absorb punishment while remaining able to continue boxing). Examples of this style include George Foreman, Rocky Marciano, Julio César Chávez, Jack Dempsey, Riddick Bowe, Danny García, Wilfredo Gómez, Sonny Liston, John L. Sullivan, Max Baer, Prince Naseem Hamed, Ray Mancini, David Tua, Arturo Gatti, Micky Ward, Brandon Ríos, Ruslan Provodnikov, Michael Katsidis, James Kirkland, Marcos Maidana, Vitali Klitschko, Jake LaMotta, Manny Pacquiao, and Ireland's John Duddy. This style of boxing was also used by fictional boxers Rocky Balboa and James "Clubber" Lang. Brawlers tend to be more predictable and easy to hit but usually fare well enough against other fighting styles because they train to take punches very well. They often have a higher chance than fighters of other styles of scoring a knockout against their opponents because they focus on landing big, powerful hits instead of smaller, faster attacks. They often focus their training on the upper body rather than the entire body, to increase power and endurance. They also aim to intimidate their opponents because of their power, stature and ability to take a punch. Swarmer/in-fighter In-fighters/swarmers (sometimes called "pressure fighters") attempt to stay close to an opponent, throwing intense flurries and combinations of hooks and uppercuts. Mainly Mexican, Irish, Irish-American, Puerto Rican, and Mexican-American boxers popularized this style. A successful in-fighter often needs a good "chin" because swarming usually involves being hit with many jabs before they can maneuver inside where they are more effective. In-fighters operate best at close range because they are generally shorter and have less reach than their opponents and thus are more effective at a short distance where the longer arms of their opponents make punching awkward. However, several fighters tall for their division have been relatively adept at in-fighting as well as out-fighting. The essence of a swarmer is non-stop aggression. Many short in-fighters use their stature to their advantage, employing a bob-and-weave defense by bending at the waist to slip underneath or to the sides of incoming punches. Unlike blocking, causing an opponent to miss a punch disrupts his balance; this permits forward movement past the opponent's extended arm and keeps the hands free to counter. A distinct advantage that in-fighters have is that, when throwing uppercuts, they can channel their entire bodyweight behind the punch; Mike Tyson was famous for throwing devastating uppercuts. 
Marvin Hagler was known for his hard "chin", punching power, body attack and the stalking of his opponents. Some in-fighters, like Mike Tyson, have been known for being notoriously hard to hit. The key to a swarmer is aggression, endurance, chin, and bobbing-and-weaving. Notable in-fighters include Henry Armstrong, Aaron Pryor, Julio César Chávez, Jack Dempsey, Shawn Porter, Miguel Cotto, Gennady Golovkin, Joe Frazier, Danny García, Mike Tyson, Manny Pacquiao, Rocky Marciano, Wayne McCullough, James Braddock, Gerry Penalosa, Harry Greb, David Tua, James Toney and Ricky Hatton. This style was also used by the Street Fighter character Balrog. Combinations of styles All fighters have primary skills with which they feel most comfortable, but truly elite fighters are often able to incorporate auxiliary styles when presented with a particular challenge. For example, an out-fighter will sometimes plant his feet and counter punch, or a slugger may have the stamina to pressure fight with his power punches. Old history of the development of boxing and its prevalence contribute to fusion of various types of martial arts and the emergence of new ones that are based on them. For example, a combination of boxing and sportive sambo techniques gave rise to a combat sambo. Style matchups There is a generally accepted rule of thumb about the success each of these boxing styles has against the others. In general, an in-fighter has an advantage over an out-fighter, an out-fighter has an advantage over a brawler, and a brawler has an advantage over an in-fighter; these form a cycle with each style being stronger relative to one, and weaker relative to another, with none dominating, as in rock paper scissors. Naturally, many other factors, such as the skill level and training of the combatants, determine the outcome of a fight, but the widely held belief in this relationship among the styles is embodied in the cliché amongst boxing fans and writers that "styles make fights". Brawlers tend to overcome swarmers or in-fighters because, in trying to get close to the slugger, the in-fighter will invariably have to walk straight into the guns of the much harder-hitting brawler, so, unless the former has a very good chin and the latter's stamina is poor, the brawler's superior power will carry the day. A famous example of this type of match-up advantage would be George Foreman's knockout victory over Joe Frazier in their original bout "The Sunshine Showdown". Although in-fighters struggle against heavy sluggers, they typically enjoy more success against out-fighters or boxers. Out-fighters prefer a slower fight, with some distance between themselves and the opponent. The in-fighter tries to close that gap and unleash furious flurries. On the inside, the out-fighter loses a lot of his combat effectiveness, because he cannot throw the hard punches. The in-fighter is generally successful in this case, due to his intensity in advancing on his opponent and his good agility, which makes him difficult to evade. For example, the swarming Joe Frazier, though easily dominated by the slugger George Foreman, was able to create many more problems for the boxer Muhammad Ali in their three fights. Joe Louis, after retirement, admitted that he hated being crowded, and that swarmers like untied/undefeated champ Rocky Marciano would have caused him style problems even in his prime. 
The boxer or out-fighter tends to be most successful against a brawler, whose slow speed (both hand and foot) and poor technique makes him an easy target to hit for the faster out-fighter. The out-fighter's main concern is to stay alert, as the brawler only needs to land one good punch to finish the fight. If the out-fighter can avoid those power punches, he can often wear the brawler down with fast jabs, tiring him out. If he is successful enough, he may even apply extra pressure in the later rounds in an attempt to achieve a knockout. Most classic boxers, such as Muhammad Ali, enjoyed their best successes against sluggers. An example of a style matchup was the historical fight of Julio César Chávez, a swarmer or in-fighter, against Meldrick Taylor, the boxer or out-fighter (see Julio César Chávez vs. Meldrick Taylor). The match was nicknamed "Thunder Meets Lightning" as an allusion to punching power of Chávez and blinding speed of Taylor. Chávez was the epitome of the "Mexican" style of boxing. Taylor's hand and foot speed and boxing abilities gave him the early advantage, allowing him to begin building a large lead on points. Chávez remained relentless in his pursuit of Taylor and due to his greater punching power Chávez slowly punished Taylor. Coming into the later rounds, Taylor was bleeding from the mouth, his entire face was swollen, the bones around his eye socket had been broken, he had swallowed a considerable amount of his own blood, and as he grew tired, Taylor was increasingly forced into exchanging blows with Chávez, which only gave Chávez a greater chance to cause damage. While there was little doubt that Taylor had solidly won the first three quarters of the fight, the question at hand was whether he would survive the final quarter. Going into the final round, Taylor held a secure lead on the scorecards of two of the three judges. Chávez would have to knock Taylor out to claim a victory, whereas Taylor merely needed to stay away from the Mexican legend. However, Taylor did not stay away, but continued to trade blows with Chávez. As he did so, Taylor showed signs of extreme exhaustion, and every tick of the clock brought Taylor closer to victory unless Chávez could knock him out. With about a minute left in the round, Chávez hit Taylor squarely with several hard punches and stayed on the attack, continuing to hit Taylor with well-placed shots. Finally, with about 25 seconds to go, Chávez landed a hard right hand that caused Taylor to stagger forward towards a corner, forcing Chávez back ahead of him. Suddenly Chávez stepped around Taylor, positioning him so that Taylor was trapped in the corner, with no way to escape from Chávez' desperate final flurry. Chávez then nailed Taylor with a tremendous right hand that dropped the younger man. By using the ring ropes to pull himself up, Taylor managed to return to his feet and was given the mandatory 8-count. Referee Richard Steele asked Taylor twice if he was able to continue fighting, but Taylor failed to answer. Steele then concluded that Taylor was unfit to continue and signaled that he was ending the fight, resulting in a TKO victory for Chávez with only two seconds to go in the bout. Equipment Since boxing involves forceful, repetitive punching, precautions must be taken to prevent damage to bones in the hand. Most trainers do not allow boxers to train and spar without wrist wraps and boxing gloves. 
Hand wraps are used to secure the bones in the hand, and the gloves are used to protect the hands from blunt injury, allowing boxers to throw punches with more force than if they did not use them. Gloves have been required in competition since the late nineteenth century, though modern boxing gloves are much heavier than those worn by early twentieth-century fighters. Prior to a bout, both boxers agree upon the weight of gloves to be used in the bout, with the understanding that lighter gloves allow heavy punchers to inflict more damage. The brand of gloves can also affect the impact of punches, so this too is usually stipulated before a bout. Both sides are allowed to inspect the wraps and gloves of the opponent to help ensure both are within agreed upon specifications and no tampering has taken place. A mouthguard is important to protect the teeth and gums from injury, and to cushion the jaw, resulting in a decreased chance of knockout. Both fighters must wear soft soled shoes to reduce the damage from accidental (or intentional) stepping on feet. While older boxing boots more commonly resembled those of a professional wrestler, modern boxing shoes and boots tend to be quite similar to their amateur wrestling counterparts. Boxers practice their skills on several types of punching bags. A small, tear-drop-shaped "speed bag" is used to hone reflexes and repetitive punching skills, while a large cylindrical "heavy bag" filled with sand, a synthetic substitute, or water is used to practice power punching and body blows. The double-end bag is usually connected by elastic on the top and bottom and moves randomly upon getting struck and helps the fighter work on accuracy and reflexes. In addition to these distinctive pieces of equipment, boxers also use sport-nonspecific training equipment to build strength, speed, agility, and stamina. Common training equipment includes free weights, rowing machines, jump rope, and medicine balls. Boxers also use punch/focus mitts in which a trainer calls out certain combinations and the fighter strikes the mitts accordingly. This is a great exercise for stamina as the boxer isn't allowed to go at his own pace but that of the trainer, typically forcing the fighter to endure a higher output and volume than usual. In addition, they also allow trainers to make boxers utilize footwork and distances more accurately. Boxing matches typically take place in a boxing ring, a raised platform surrounded by ropes attached to posts rising in each corner. The term "ring" has come to be used as a metaphor for many aspects of prize fighting in general. Technique Stance The modern boxing stance differs substantially from the typical boxing stances of the 19th and early 20th centuries. The modern stance has a more upright vertical-armed guard, as opposed to the more horizontal, knuckles-facing-forward guard adopted by early 20th century hook users such as Jack Johnson. In a fully upright stance, the boxer stands with the legs shoulder-width apart and the rear foot a half-step in front of the lead man. Right-handed or orthodox boxers lead with the left foot and fist (for most penetration power). Both feet are parallel, and the right heel is off the ground. The lead (left) fist is held vertically about six inches in front of the face at eye level. The rear (right) fist is held beside the chin and the elbow tucked against the ribcage to protect the body. 
The chin is tucked into the chest to avoid punches to the jaw which commonly cause knock-outs and is often kept slightly off-center. Wrists are slightly bent to avoid damage when punching and the elbows are kept tucked in to protect the ribcage. Some boxers fight from a crouch, leaning forward and keeping their feet closer together. The stance described is considered the "textbook" stance and fighters are encouraged to change it around once it's been mastered as a base. Case in point, many fast fighters have their hands down and have almost exaggerated footwork, while brawlers or bully fighters tend to slowly stalk their opponents. In order to retain their stance boxers take 'the first step in any direction with the foot already leading in that direction.' Different stances allow for bodyweight to be differently positioned and emphasised; this may in turn alter how powerfully and explosively a type of punch can be delivered. For instance, a crouched stance allows for the bodyweight to be positioned further forward over the lead left leg. If a lead left hook is thrown from this position, it will produce a powerful springing action in the lead leg and produce a more explosive punch. This springing action could not be generated effectively, for this punch, if an upright stance was used or if the bodyweight was positioned predominantly over the back leg. Mike Tyson was a keen practitioner of a crouched stance and this style of power punching. The preparatory positioning of the bodyweight over the bent lead leg is also known as an isometric preload. Left-handed or southpaw fighters use a mirror image of the orthodox stance, which can create problems for orthodox fighters unaccustomed to receiving jabs, hooks, or crosses from the opposite side. The southpaw stance, conversely, is vulnerable to a straight right hand. North American fighters tend to favor a more balanced stance, facing the opponent almost squarely, while many European fighters stand with their torso turned more to the side. The positioning of the hands may also vary, as some fighters prefer to have both hands raised in front of the face, risking exposure to body shots. Punches There are four basic punches in boxing: the jab, cross, hook and uppercut. Any punch other than a jab is considered a power punch. If a boxer is right-handed (orthodox), their left hand is the lead hand and his right hand is the rear hand. For a left-handed boxer or southpaw, the hand positions are reversed. For clarity, the following assumes a right-handed boxer. Jab – A quick, straight punch thrown with the lead hand from the guard position. The jab extends from the side of the torso and typically does not pass in front of it. It is accompanied by a small, clockwise rotation of the torso and hips, while the fist rotates 90 degrees, becoming horizontal upon impact. As the punch reaches full extension, the lead shoulder can be brought up to guard the chin. The rear hand remains next to the face to guard the jaw. After making contact with the target, the lead hand is retracted quickly to resume a guard position in front of the face. The jab is recognized as the most important punch in a boxer's arsenal because it provides a fair amount of its own cover and it leaves the least space for a counter punch from the opponent. It has the longest reach of any punch and does not require commitment or large weight transfers. 
Due to its relatively weak power, the jab is often used as a tool to gauge distances, probe an opponent's defenses, harass an opponent, and set up heavier, more powerful punches. A half-step may be added, moving the entire body into the punch, for additional power. Some notable boxers who have been able to develop relative power in their jabs and use it to punish or wear down their opponents to some effect include Larry Holmes and Wladimir Klitschko. Cross – A powerful, straight punch thrown with the rear hand. From the guard position, the rear hand is thrown from the chin, crossing the body and traveling towards the target in a straight line. The rear shoulder is thrust forward and finishes just touching the outside of the chin. At the same time, the lead hand is retracted and tucked against the face to protect the inside of the chin. For additional power, the torso and hips are rotated counter-clockwise as the cross is thrown. A measure of an ideally extended cross is that the shoulder of the striking arm, the knee of the front leg and the ball of the front foot are on the same vertical plane. Weight is also transferred from the rear foot to the lead foot, resulting in the rear heel turning outwards as it acts as a fulcrum for the transfer of weight. Body rotation and the sudden weight transfer give the cross its power. Like the jab, a half-step forward may be added. After the cross is thrown, the hand is retracted quickly and the guard position resumed. It can be used to counter punch a jab, aiming for the opponent's head (or a counter to a cross aimed at the body) or to set up a hook. The cross is also called a "straight" or "right", especially if it does not cross the opponent's outstretched jab. Hook – A semi-circular punch thrown with the lead hand to the side of the opponent's head. From the guard position, the elbow is drawn back with a horizontal fist (palm facing down) though in modern times a wide percentage of fighters throw the hook with a vertical fist (palm facing themselves). The rear hand is tucked firmly against the jaw to protect the chin. The torso and hips are rotated clockwise, propelling the fist through a tight, clockwise arc across the front of the body and connecting with the target. At the same time, the lead foot pivots clockwise, turning the left heel outwards. Upon contact, the hook's circular path ends abruptly and the lead hand is pulled quickly back into the guard position. A hook may also target the lower body and this technique is sometimes called the "rip" to distinguish it from the conventional hook to the head. The hook may also be thrown with the rear hand. Notable left hookers include Joe Frazier, Roy Jones Jr. and Mike Tyson. Uppercut – A vertical, rising punch thrown with the rear hand. From the guard position, the torso shifts slightly to the right, the rear hand drops below the level of the opponent's chest and the knees are bent slightly. From this position, the rear hand is thrust upwards in a rising arc towards the opponent's chin or torso. At the same time, the knees push upwards quickly and the torso and hips rotate anti-clockwise and the rear heel turns outward, mimicking the body movement of the cross. The strategic utility of the uppercut depends on its ability to "lift" an opponent's body, setting it off-balance for successive attacks. The right uppercut followed by a left hook is a deadly combination employing the uppercut to lift an opponent's chin into a vulnerable position, then the hook to knock the opponent out. 
These different punch types can be thrown in rapid succession to form combinations or "combos". The most common is the jab and cross combination, nicknamed the "one-two combo". This is usually an effective combination, because the jab blocks the opponent's view of the cross, making it easier to land cleanly and forcefully. A large, swinging circular punch starting from a cocked-back position with the arm at a longer extension than the hook and all of the fighter's weight behind it is sometimes referred to as a "roundhouse", "haymaker", "overhand", or sucker-punch. Relying on body weight and centripetal force within a wide arc, the roundhouse can be a powerful blow, but it is often a wild and uncontrolled punch that leaves the fighter delivering it off balance and with an open guard. Wide, looping punches have the further disadvantage of taking more time to deliver, giving the opponent ample warning to react and counter. For this reason, the haymaker or roundhouse is not a conventional punch, and is regarded by trainers as a mark of poor technique or desperation. Sometimes it has been used, because of its immense potential power, to finish off an already staggering opponent who seems unable or unlikely to take advantage of the poor position it leaves the puncher in. Another unconventional punch is the rarely used bolo punch, in which the puncher swings an arm out several times in a wide arc, usually as a distraction, before delivering with either that or the other arm. An illegal punch to the back of the head or neck is known as a rabbit punch. Both the hook and uppercut may be thrown with either hand; when thrown with the other hand, the footwork and positioning differ from those described above, with the footwork and torso movement generally mirrored. Defense There are several basic maneuvers a boxer can use in order to evade or block punches, discussed below. Slip – Slipping rotates the body slightly so that an incoming punch passes harmlessly next to the head. As the opponent's punch arrives, the boxer sharply rotates the hips and shoulders. This turns the chin sideways and allows the punch to "slip" past. Muhammad Ali was famous for extremely fast and close slips, as was an early Mike Tyson. Sway or fade – To anticipate a punch and move the upper body or head back so that it misses or has its force appreciably lessened. Also called "rolling with the punch" or "riding the punch". Bob and weave – Bobbing moves the head laterally and beneath an incoming punch. As the opponent's punch arrives, the boxer bends the legs quickly and simultaneously shifts the body either slightly right or left. Once the punch has been evaded, the boxer "weaves" back to an upright position, emerging on either the outside or inside of the opponent's still-extended arm. To move outside the opponent's extended arm is called "bobbing to the outside". To move inside the opponent's extended arm is called "bobbing to the inside". Joe Frazier, Jack Dempsey, Mike Tyson and Rocky Marciano were masters of bobbing and weaving. Parry/block – Parrying or blocking uses the boxer's shoulder, hands or arms as defensive tools to protect against incoming attacks. A block generally receives a punch while a parry tends to deflect it. A "palm", "catch", or "cuff" is a defence which intentionally takes the incoming punch on the palm portion of the defender's glove. 
Cover-up – Covering up is the last opportunity (other than rolling with a punch) to avoid an incoming strike to an unprotected face or body. Generally speaking, the hands are held high to protect the head and chin and the forearms are tucked against the torso to impede body shots. When protecting the body, the boxer rotates the hips and lets incoming punches "roll" off the guard. To protect the head, the boxer presses both fists against the front of the face with the forearms parallel and facing outwards. This type of guard is weak against attacks from below. Clinch – Clinching is a form of trapping or a rough form of grappling and occurs when the distance between both fighters has closed and straight punches cannot be employed. In this situation, the boxer attempts to hold or "tie up" the opponent's hands so he is unable to throw hooks or uppercuts. To perform a clinch, the boxer loops both hands around the outside of the opponent's shoulders, scooping back under the forearms to grasp the opponent's arms tightly against his own body. In this position, the opponent's arms are pinned and cannot be used to attack. Clinching is a temporary match state and is quickly broken up by the referee. Clinching is technically against the rules, and in amateur fights points are deducted fairly quickly for it. It is unlikely, however, to see points deducted for a clinch in professional boxing. Unorthodox strategies Rope-a-dope: Used by Muhammad Ali in his 1974 "Rumble in the Jungle" bout against George Foreman, the rope-a-dope method involves lying back against the ropes, covering up defensively as much as possible and allowing the opponent to attempt numerous punches. The back-leaning posture, which does not cause the defending boxer to become as unbalanced as he would during normal backward movement, also maximizes the distance of the defender's head from his opponent, increasing the probability that punches will miss their intended target. Weathering the blows that do land, the defender lures the opponent into expending energy while conserving his/her own. If successful, the attacking opponent will eventually tire, creating defensive flaws which the boxer can exploit. In modern boxing, the rope-a-dope is generally discouraged since most opponents are not fooled by it and few boxers possess the physical toughness to withstand a prolonged, unanswered assault. However, eight-division world champion Manny Pacquiao skillfully used the strategy to gauge the power of welterweight titlist Miguel Cotto in November 2009. Pacquiao followed up the rope-a-dope gambit with a withering knockdown. Tyson Fury also attempted this against Francesco Pianeta but did not pull it off as smoothly. Bolo punch: Occasionally seen in Olympic boxing, the bolo punch is an arm punch which owes its power to the shortening of a circular arc rather than to transference of body weight; it tends to have more of an effect due to the surprise of the odd angle it lands at rather than the actual power of the punch. This is more of a gimmick than a technical maneuver; the punch is not formally taught, occupying much the same place in boxing technique as the Ali shuffle. Nevertheless, a few professional boxers have used the bolo punch to great effect, including former welterweight champions Sugar Ray Leonard and Kid Gavilán, as well as British fighter Chris Eubank Jr. Middleweight champion Ceferino Garcia is regarded as the inventor of the bolo punch. 
Overhand: The overhand is a punch thrown with the rear hand that is not found in every boxer's arsenal. Unlike the cross, which has a trajectory parallel to the ground, the overhand has a looping circular arc as it is thrown over the shoulder with the palm facing away from the boxer. It is especially popular with smaller-stature boxers trying to reach taller opponents. Boxers who have used this punch consistently and effectively include former heavyweight champions Rocky Marciano and Tim Witherspoon, as well as MMA champions Chuck Liddell and Fedor Emelianenko. The overhand has become a popular weapon in other combat sports that involve fist striking. Deontay Wilder heavily favours the overhand and is known for knocking out many of his opponents with his right overhand. Check hook: A check hook is employed to prevent aggressive boxers from lunging in. There are two parts to the check hook. The first part consists of a regular hook. The second, trickier part involves the footwork. As the opponent lunges in, the boxer should throw the hook and pivot on his left foot and swing his right foot 180 degrees around. If executed correctly, the aggressive boxer will lunge in and sail harmlessly past his opponent like a bull missing a matador. This is rarely seen in professional boxing as it requires a great disparity in skill level to execute. Technically speaking, it has been said that there is no such thing as a check hook and that it is simply a hook applied to an opponent who has lurched forward past the puncher, who hooks him on the way past. Others have argued that the check hook exists but is an illegal punch due to it being a pivot punch, which is illegal in the sport. Floyd Mayweather Jr. employed the use of a check hook against Ricky Hatton, which sent Hatton flying head first into the corner post and knocked him down. Ring corner In boxing, each fighter is given a corner of the ring where they rest for one minute between rounds and where their trainers stand. Typically, three individuals stand in the corner besides the boxer; these are the trainer, the assistant trainer and the cutman. The trainer and assistant typically give advice to the boxer on what they are doing wrong as well as encouraging them if they are losing. The cutman is responsible for keeping the boxer's face and eyes free of cuts, blood and excessive swelling. This is of particular importance because many fights are stopped because of cuts or swelling that threaten the boxer's eyes. In addition, the corner is responsible for stopping the fight if they feel their fighter is in grave danger of permanent injury. The corner will occasionally throw in a white towel to signify a boxer's surrender (the idiomatic phrase "to throw in the towel", meaning to give up, derives from this practice). This can be seen in the fight between Diego Corrales and Floyd Mayweather. In that fight, Corrales' corner surrendered despite Corrales' steadfast refusal. Health concerns Knocking a person unconscious or even causing a concussion may cause permanent brain damage. There is no clear division between the force required to knock a person out and the force likely to kill a person. Additionally, contact sports, especially combat sports, are directly related to a brain disease called chronic traumatic encephalopathy, abbreviated as CTE. This disease begins to develop during the athlete's active career and continues to develop even after sports activity has ceased. In March 1981, neurosurgeon Dr. 
Fred Sonstein sought to use CAT scans in an attempt to track the degeneration of boxers' cognitive functions after seeing the decline of Bennie Briscoe. From 1980 to 2007, more than 200 amateur boxers, professional boxers and Toughman fighters died due to ring or training injuries. In 1983, editorials in the Journal of the American Medical Association called for a ban on boxing. The editor, Dr. George Lundberg, called boxing an "obscenity" that "should not be sanctioned by any civilized society". Since then, the British, Canadian and Australian Medical Associations have called for bans on boxing. Supporters of the ban state that boxing is the only sport where hurting the other athlete is the goal. Dr. Bill O'Neill, boxing spokesman for the British Medical Association, has supported the BMA's proposed ban on boxing: "It is the only sport where the intention is to inflict serious injury on your opponent, and we feel that we must have a total ban on boxing." Opponents of a ban respond that such a position is misguided, stating that amateur boxing is scored solely according to total connecting blows with no award for "injury". They observe that many skilled professional boxers have had rewarding careers without inflicting injury on opponents, accumulating scoring blows and avoiding punches to win rounds scored 10–9 under the 10-point must system, and they note that there are many other sports where concussions are much more prevalent. However, the data shows that the concussion rate in boxing is the highest of all contact sports. In addition, repetitive and subconcussive blows to the head, and not just concussions, cause CTE, and the evidence indicates that brain damage and the effects of CTE are more severe in boxing. In 2007, one study of amateur boxers showed that protective headgear did not prevent brain damage, and another found that amateur boxers faced a high risk of brain damage. The Gothenburg study analyzed temporary elevations of neurofilament light in cerebrospinal fluid, which the authors concluded was evidence of damage, even though the levels soon subsided. More comprehensive studies of neurological function on larger samples performed by Johns Hopkins University in 1994, and accident rates analyzed by the National Safety Council in 2017, show amateur boxing is a comparatively safe sport due to the regulations of amateur boxing and a greater control of the athletes, although the studies did not focus on CTE or its long-term effects. In addition, a good training methodology and a short career can reduce the effects of brain damage. In 1997, the American Association of Professional Ringside Physicians was established to create medical protocols through research and education to prevent injuries in boxing. Professional boxing is forbidden in Iceland, Iran and North Korea. It was banned in Sweden until 2007, when the ban was lifted but strict restrictions were imposed, including a limit of four three-minute rounds per fight. Boxing was banned in Albania from 1965 until the fall of Communism in 1991. Norway legalized professional boxing in December 2014. The International Boxing Association (AIBA) restricted the use of head guards for senior males after 2013. A literature review analysed existing knowledge about protective headgear and injury prevention in boxing to determine whether injury risks increased when head guards were not used. The reviewed literature indicates that head guards protect well against lacerations and skull fractures. 
Therefore, AIBA's decision to eliminate head guards must be considered cautiously, and injury rates among (male) boxers should be continuously evaluated. Possible health benefits Like other active and dynamic sports, boxing may be argued to provide some general benefits, such as fat burning, increased muscle tone, strong bones and ligaments, cardiovascular fitness, muscular endurance, improved core stability, co-ordination and body awareness, strength and power, stress relief and self-esteem. Boxing Halls of Fame The sport of boxing has two internationally recognized boxing halls of fame: the International Boxing Hall of Fame (IBHOF) and the Boxing Hall of Fame Las Vegas. The latter opened in Las Vegas, Nevada in 2013 and was founded by Steve Lott, former assistant manager for Mike Tyson. The International Boxing Hall of Fame opened in Canastota, New York in 1989. The first inductees in 1990 included Jack Johnson, Benny Leonard, Jack Dempsey, Henry Armstrong, Sugar Ray Robinson, Archie Moore, and Muhammad Ali. Other world-class figures include Salvador Sanchez, Jose Napoles, Roberto "Manos de Piedra" Durán, Ricardo Lopez, Gabriel "Flash" Elorde, Vicente Saldivar, Ismael Laguna, Eusebio Pedroza, Carlos Monzón, Azumah Nelson, Rocky Marciano, Pipino Cuevas, Wilfred Benitez, Wilfredo Gomez, Felix Trinidad and Ken Buchanan. The Hall of Fame's induction ceremony is held every June as part of a four-day event. The fans who come to Canastota for the Induction Weekend are treated to a number of events, including scheduled autograph sessions, boxing exhibitions, a parade featuring past and present inductees, and the induction ceremony itself. The Boxing Hall of Fame Las Vegas features the $75 million ESPN Classic Sports fight film and tape library and radio broadcast collection. The collection includes the fights of many great champions, including: Muhammad Ali, Mike Tyson, George Foreman, Roberto Durán, Marvin Hagler, Jack Dempsey, Joe Louis, Joe Frazier, Rocky Marciano and Sugar Ray Robinson. It is this exclusive fight film library that separates the Boxing Hall of Fame Las Vegas from other halls of fame, which do not have rights to any video of their sports. The inaugural inductees included Muhammad Ali, Henry Armstrong, Tony Canzoneri, Ezzard Charles, Julio César Chávez Sr., Jack Dempsey, Roberto Durán, Joe Louis, and Sugar Ray Robinson. Governing and sanctioning bodies Governing bodies British Boxing Board of Control (BBBofC) European Boxing Union (EBU) Nevada State Athletic Commission (NSAC) Major sanctioning bodies World Boxing Association (WBA) World Boxing Council (WBC) International Boxing Federation (IBF) World Boxing Organization (WBO) Intermediate International Boxing Organization (IBO) Novice Intercontinental Boxing Federation (IBFed) Amateur International Boxing Association (IBA; now also professional) Boxing rankings There are various organizations and websites that rank boxers both by weight class and on a pound-for-pound basis. Transnational Boxing Rankings Board (ratings) ESPN (ratings) The Ring (ratings) BoxRec (ratings) Fightstat (ratings) See also List of boxing films List of current world boxing champions List of female boxers List of male boxers Milling – military training exercise related to boxing Tag team boxing Undisputed champion Weight class in boxing Women's boxing World Colored Heavyweight Championship 
External links Official website of the International Boxing Hall of Fame "Boxing". Encyclopædia Britannica Online. Boxing Prints Collection. General Collection, Beinecke Rare Book and Manuscript Library, Yale University.
Hindi cinema, popularly known as Bollywood and formerly as Bombay cinema, refers to the film industry based in Mumbai, engaged in the production of motion pictures in the Hindi language. The popular term Bollywood is a portmanteau of "Bombay" (the former name of Mumbai) and "Hollywood". The industry is a part of the larger Indian cinema, which also includes South Indian cinema and other smaller film industries. In 2017, Indian cinema produced 1,986 feature films, of which the largest number, 364, were in Hindi. Hindi cinema represented 43 percent of Indian net box-office revenue; Tamil and Telugu cinema represented 36 percent, and the remaining regional cinema constituted 21 percent. Hindi cinema is one of the largest centres for film production in the world. Hindi films sold an estimated 341 million tickets in India in 2019. Earlier Hindi films tended to use vernacular Hindustani, mutually intelligible by speakers of either Hindi or Urdu, while modern Hindi productions increasingly incorporate elements of Hinglish. The most popular commercial genre in Hindi cinema since the 1970s has been the masala film, which freely mixes different genres including action, comedy, romance, drama and melodrama along with musical numbers. Masala films generally fall under the musical film genre, of which Indian cinema has been the largest producer since the 1960s, when it exceeded the American film industry's total musical output after musical films declined in the West. Dadasaheb Phalke's silent film Raja Harishchandra (1913) is the first feature-length film made in India. The first Indian musical talkie was Alam Ara (1931), four years after the first Hollywood sound film The Jazz Singer (1927). Alongside commercial masala films, a distinctive genre of art films known as parallel cinema has also existed, presenting realistic content and avoidance of musical numbers. In more recent years, the distinction between commercial masala and parallel cinema has been gradually blurring, with an increasing number of mainstream films adopting the conventions which were once strictly associated with parallel cinema. Etymology "Bollywood" is a portmanteau derived from Bombay (the former name of Mumbai) and "Hollywood", a shorthand reference for the American film industry which is based in Hollywood, California. The term "Tollywood", for the Tollygunge-based cinema of West Bengal, predated "Bollywood". It was used in a 1932 American Cinematographer article by Wilford E. Deming, an American engineer who helped produce the first Indian sound picture. "Bollywood" was probably invented in Bombay-based film trade journals in the 1960s or 1970s, though the exact inventor varies by account. Film journalist Bevinda Collaco claims she coined the term for the title of her column in Screen magazine. Her column entitled "On the Bollywood Beat" covered studio news and celebrity gossip. Other sources state that lyricist, filmmaker and scholar Amit Khanna was its creator. It is unknown if it was derived from "Hollywood" through "Tollywood", or was inspired directly by "Hollywood". The term has been criticised by some film journalists and critics, who believe it implies that the industry is a poor cousin of Hollywood. Many noted Hindi film actors and directors prefer to call it Hindi cinema rather than Bollywood, and advise others to refer to it as 'Hindi cinema'. 
In 2020, Sudhir Mishra dissociated himself from the term Bollywood. Hansal Mehta echoed the same sentiment, saying that "Bollywood" is a "very derogatory" term for Hindi cinema. Veteran director Shyam Benegal remarked: "Bollywood is a term copied from Hollywood. The Indian film industry is the largest in the world. Why should we take a terminology that belongs to the industry of some other country?" Ketan Mehta has always preferred calling it Hindi cinema, and Anurag Basu said: "Calling ourselves Bollywood is a feudal mindset, we have our own identity. We are Indian cinema, where films are made in more than 15 languages... We should not degrade by calling ourselves Bollywood. When I go to international film festivals, I feel ashamed when we are called Bollywood. There is Korean cinema, French cinema, Italian cinema... why not Indian cinema?" Noted South Indian director Mani Ratnam expressed that Hindi cinema should stop calling itself Bollywood. Present S. S. Rajamouli's Telugu-language film Baahubali: The Beginning (2015) started a new wave of pan-India films. Due to COVID-19, the Hindi industry came to a halt, and many films were delayed and released only after the pandemic ended. In the meantime, during the years of lockdowns, audiences confined to their homes were exposed to world cinema through OTT platforms such as Netflix, Prime Video and SonyLIV, which became popular; Indian audiences watched not only Hollywood films but also many films and web series from the South Korean, Spanish and other film industries. According to some film critics, the taste and understanding of the audience evolved: they became more content-driven and began exploring various film genres. From 2015 onwards, the position of Bollywood as the top film industry of India waned. Some directors, exhibitors, actors and producers claimed that audiences had become smarter, wanted films with good stories, and no longer accepted mediocre films. Instead of recognising this, Bollywood's film producers continued making films based on clichéd, weak stories and did not evolve with their audience. Consequently, several big-budget Bollywood films ended up as box-office disasters in the recent past. Since Baahubali (2015) released, many regional-language films have emerged as hits throughout India, and regional film industries such as the Telugu, Tamil and Kannada industries started giving tough competition to Bollywood films at the box office. Many regional actors became known outside their home states, where they had previously been unknown. Rajamouli's Telugu film RRR (2022) emerged as one of the highest-grossing films of Indian cinema. Many Bollywood producers and directors acknowledge the might of the regional film industries. Some trade experts and critics believe that audiences will eventually return to Bollywood. In 2022, the Hindi industry released 44 films; of those, 4 emerged as hits and 40 flopped. History Early history (1890s–1930s) In 1897, a film presentation by Professor Stevenson featured a stage show at Calcutta's Star Theatre. With Stevenson's encouragement and camera, Hiralal Sen, an Indian photographer, made a film of scenes from that show, The Flower of Persia (1898). The Wrestlers (1899) by H. S. Bhatavdekar showed a wrestling match at the Hanging Gardens in Bombay. Dadasaheb Phalke's silent Raja Harishchandra (1913) is the first feature film made in India. By the 1930s, the industry was producing over 200 films per year. The first Indian sound film, Ardeshir Irani's Alam Ara (1931), was commercially successful. 
With a great demand for talkies and musicals, Hindustani cinema (as Hindi cinema was then known as) and the other regional film industries quickly switched to sound films. Challenges and market expansion (1930s–1940s) The 1930s and 1940s were tumultuous times; India was buffeted by the Great Depression, World War II, the Indian independence movement, and the violence of the Partition. Although most early Bombay films were unabashedly escapist, a number of filmmakers tackled tough social issues or used the struggle for Indian independence as a backdrop for their films. Irani made the first Hindi colour film, Kisan Kanya, in 1937. The following year, he made a colour version of Mother India. However, colour did not become a popular feature until the late 1950s. At this time, lavish romantic musicals and melodramas were cinematic staples. The decade of the 1940s saw an expansion of Bombay cinema's commercial market and its presence in the national consciousness. The year 1943 saw the arrival of Indian cinema's first 'blockbuster' offering, the movie Kismet, which grossed in excess of the important barrier of one crore (10 million) rupees, made on a budget of only two lakh (200,000) rupees. The film tackled contemporary issues, especially those arising from the Indian Independence movement, and went on to become "the longest running hit of Indian cinema", a title it held till the 1970s. Film personalities like Bimal Roy, Sahir Ludhianvi and Prithviraj Kapoor participated in the creation of a national movement against colonial rule in India, while simultaneously leveraging the popular political movement to increase their own visibility and popularity. Themes from the Independence Movement deeply influenced Bombay film directors, screen-play writers, and lyricists, who saw their films in the context of social reform and the problems of the common people. Before the Partition, the Bombay film industry was closely linked to the Lahore film industry (now the Pakistani film industry also known as "Lollywood"); both produced films in Hindustani (also known as Hindi-Urdu), the lingua franca of northern and central India. Another centre of Hindustani-language film production was the Bengal film industry in Calcutta, Bengal Presidency (now Kolkata, West Bengal), which produced Hindustani-language films and local Bengali language films. Many actors, filmmakers and musicians from the Lahore industry migrated to the Bombay industry during the 1940s, including actors K. L. Saigal, Prithviraj Kapoor, Dilip Kumar and Dev Anand as well as playback singers Mohammed Rafi, Noorjahan and Shamshad Begum. Around the same time, filmmakers and actors from the Calcutta film industry began migrating to Bombay; as a result, Bombay became the center of Hindustani-language film production. The 1947 partition of India divided the country into the Republic of India and Pakistan, which precipitated the migration of filmmaking talent from film production centres like Lahore and Calcutta, which bore the brunt of the partition violence. This included actors, filmmakers and musicians from Bengal, Punjab (particularly the present-day Pakistani Punjab), and the North-West Frontier Province (present-day Khyber Pakhtunkhwa). These events further consolidated the Bombay film industry's position as the preeminent center for film production in India. Golden age (late 1940s–1960s) The period from the late 1940s to the early 1960s, after India's independence, is regarded by film historians as the Golden Age of Hindi cinema. 
Some of the most critically acclaimed Hindi films of all time were produced during this time. Examples include Pyaasa (1957) and Kaagaz Ke Phool (1959), directed by Guru Dutt and written by Abrar Alvi; Awaara (1951) and Shree 420 (1955), directed by Raj Kapoor and written by Khwaja Ahmad Abbas, and Aan (1952), directed by Mehboob Khan and starring Dilip Kumar. The films explored social themes, primarily dealing with working-class life in India (particularly urban life) in the first two examples. Awaara presented the city as both nightmare and dream, and Pyaasa critiqued the unreality of urban life. Mehboob Khan's Mother India (1957), a remake of his earlier Aurat (1940), was the first Indian film nominated for the Academy Award for Best Foreign Language Film; it lost by a single vote. Mother India defined conventional Hindi cinema for decades. It spawned a genre of dacoit films, in turn defined by Gunga Jumna (1961). Written and produced by Dilip Kumar, Gunga Jumna was a dacoit crime drama about two brothers on opposite sides of the law (a theme which became common in Indian films during the 1970s). Some of the best-known epic films of Hindi cinema were also produced at this time, such as K. Asif's Mughal-e-Azam (1960). Other acclaimed mainstream Hindi filmmakers during this period included Kamal Amrohi and Vijay Bhatt. The three most popular male Indian actors of the 1950s and 1960s were Dilip Kumar, Raj Kapoor, and Dev Anand, each with a unique acting style. Kapoor adopted Charlie Chaplin's tramp persona; Anand modeled himself on suave Hollywood stars like Gregory Peck and Cary Grant, and Kumar pioneered a form of method acting which predated Hollywood method actors such as Marlon Brando. Kumar, who was described as "the ultimate method actor" by Satyajit Ray, inspired future generations of Indian actors. Much like Brando's influence on Robert De Niro and Al Pacino, Kumar had a similar influence on Amitabh Bachchan, Naseeruddin Shah, Shah Rukh Khan and Nawazuddin Siddiqui. Veteran actresses such as Suraiya, Nargis, Sumitra Devi, Madhubala, Meena Kumari, Waheeda Rehman, Nutan, Sadhana, Mala Sinha and Vyjayanthimala have had their share of influence on Hindi cinema. While commercial Hindi cinema was thriving, the 1950s also saw the emergence of a parallel cinema movement. Although the movement (emphasising social realism) was led by Bengali cinema, it also began gaining prominence in Hindi cinema. Early examples of parallel cinema include (1946), directed by Khwaja Ahmad Abbas and based on the Bengal famine of 1943,; (1946) directed by Chetan Anand and written by Khwaja Ahmad Abbas, and Bimal Roy's Do Bigha Zamin (1953). Their critical acclaim and the latter's commercial success paved the way for Indian neorealism and the Indian New Wave (synonymous with parallel cinema). Internationally acclaimed Hindi filmmakers involved in the movement included Mani Kaul, Kumar Shahani, Ketan Mehta, Govind Nihalani, Shyam Benegal, and Vijaya Mehta. After the social-realist film received the Palme d'Or at the inaugural 1946 Cannes Film Festival, Hindi films were frequently in competition for Cannes' top prize during the 1950s and early 1960s and some won major prizes at the festival. Guru Dutt, overlooked during his lifetime, received belated international recognition during the 1980s. Film critics polled by the British magazine Sight & Sound included several of Dutt's films in a 2002 list of greatest films, and Time's All-Time 100 Movies lists Pyaasa as one of the greatest films of all time. 
During the late 1960s and early 1970s, the industry was dominated by musical romance films with romantic-hero leads. Classic Hindi cinema (1970s–1980s) By 1970, Hindi cinema was thematically stagnant and dominated by musical romance films. The arrival of screenwriting duo Salim–Javed (Salim Khan and Javed Akhtar) was a paradigm shift, revitalising the industry. They began the genre of gritty, violent, Bombay underworld crime films early in the decade with films such as Zanjeer (1973) and Deewaar (1975). Salim-Javed reinterpreted the rural themes of Mehboob Khan's Mother India (1957) and Dilip Kumar's Gunga Jumna (1961) in a contemporary urban context, reflecting the socio-economic and socio-political climate of 1970s India and channeling mass discontent, disillusionment and the unprecedented growth of slums with anti-establishment themes and those involving urban poverty, corruption and crime. Their "angry young man", personified by Amitabh Bachchan, reinterpreted Dilip Kumar's performance in Gunga Jumna in a contemporary urban context and anguished urban poor. By the mid-1970s, romantic confections had given way to gritty, violent crime films and action films about gangsters (the Bombay underworld) and bandits (dacoits). Salim-Javed's writing and Amitabh Bachchan's acting popularised the trend with films such as Zanjeer and (particularly) Deewaar, a crime film inspired by Gunga Jumna which pitted "a policeman against his brother, a gang leader based on real-life smuggler Haji Mastan" (Bachchan); according to Danny Boyle, Deewaar was "absolutely key to Indian cinema". In addition to Bachchan, several other actors followed by riding the crest of the trend (which lasted into the early 1990s). Actresses from the era include Hema Malini, Jaya Bachchan, Raakhee, Shabana Azmi, Zeenat Aman, Parveen Babi, Rekha, Dimple Kapadia, Smita Patil, Jaya Prada and Padmini Kolhapure. The name "Bollywood" was coined during the 1970s, when the conventions of commercial Hindi films were defined. Key to this was the masala film, which combines a number of genres (action, comedy, romance, drama, melodrama, and musical). The masala film was pioneered early in the decade by filmmaker Nasir Hussain, and the Salim-Javed screenwriting duo, pioneering the Bollywood-blockbuster format. Yaadon Ki Baarat (1973), directed by Hussain and written by Salim-Javed, has been identified as the first masala film and the first quintessentially "Bollywood" film. Salim-Javed wrote more successful masala films during the 1970s and 1980s. Masala films made Amitabh Bachchan the biggest star of the period. A landmark of the genre was Amar Akbar Anthony (1977), directed by Manmohan Desai and written by Kader Khan, and Desai continued successfully exploiting the genre. Both genres (masala and violent-crime films) are represented by the blockbuster Sholay (1975), written by Salim-Javed and starring Amitabh Bachchan. It combined the dacoit film conventions of Mother India and Gunga Jumna with spaghetti Westerns, spawning the Dacoit Western (also known as the curry Western) which was popular during the 1970s. Some Hindi filmmakers, such as Shyam Benegal, Mani Kaul, Kumar Shahani, Ketan Mehta, Govind Nihalani and Vijaya Mehta, continued to produce realistic parallel cinema throughout the 1970s. 
Although the art film bent of the Film Finance Corporation was criticised during a 1976 Committee on Public Undertakings investigation which accused the corporation of not doing enough to encourage commercial cinema, the decade saw the rise of commercial cinema with films such as Sholay (1975) which consolidated Amitabh Bachchan's position as a star. The devotional classic Jai Santoshi Ma was also released that year. By 1983, the Bombay film industry was generating an estimated annual revenue of ₹7 billion, equivalent to ₹111.33 billion when adjusted for inflation. By 1986, India's annual film output had increased from 741 films to 833 films per year, making India the world's largest film producer. The most internationally acclaimed Hindi film of the 1980s was Mira Nair's Salaam Bombay! (1988), which won the Camera d'Or at the 1988 Cannes Film Festival and was nominated for the Academy Award for Best Foreign Language Film. New Hindi cinema (1990s–2020s) Hindi cinema experienced another period of box-office decline during the late 1980s due to concerns by audiences over increasing violence, a decline in musical quality, and a rise in video piracy. One of the turning points came with such films as Qayamat Se Qayamat Tak (1988), presenting a blend of youthfulness, family entertainment, emotional intelligence and strong melodies, all of which lured audiences back to the big screen. It brought back the template for Bollywood musical romance films which went on to define 1990s Hindi cinema. Known since the 1990s as "New Bollywood", contemporary Bollywood is linked to economic liberalization in India during the early 1990s. Early in the decade, the pendulum swung back toward family-centered romantic musicals. Qayamat Se Qayamat Tak (1988) was followed by blockbusters such as Maine Pyar Kiya (1989), Hum Aapke Hain Kaun (1994), Dilwale Dulhania Le Jayenge (1995), Raja Hindustani (1996), Dil To Pagal Hai (1997) and Kuch Kuch Hota Hai (1998), introducing a new generation of popular actors, including the three Khans: Aamir Khan, Shah Rukh Khan, and Salman Khan, who have starred in most of the top ten highest-grossing Bollywood films. The Khans have had successful careers since the late 1980s and early 1990s, and have dominated the Indian box office for three decades. Shah Rukh Khan was the most successful Indian actor for most of the 1990s and 2000s, and Aamir Khan has been the most successful Indian actor since the mid-2000s. The decade also saw popular action and comedy films starring such actors as Akshay Kumar and Govinda. The decade marked the entrance of new performers in art and independent films, some of which were commercially successful. The most influential example was Satya (1998), directed by Ram Gopal Varma and written by Anurag Kashyap. Its critical and commercial success led to the emergence of a genre known as Mumbai noir: urban films reflecting the city's social problems. This led to a resurgence of parallel cinema by the end of the decade. The films featured actors whose performances were often praised by critics. The 2000s saw increased Bollywood recognition worldwide due to growing (and prospering) NRI and Desi communities overseas. The growth of the Indian economy and a demand for quality entertainment in this era led the country's film industry to new heights in production values, cinematography and screenwriting as well as technical advances in areas such as special effects and animation. 
Some of the largest production houses, among them Yash Raj Films and Dharma Productions, produced new modern films. Some popular films of the decade were Kaho Naa... Pyaar Hai (2000), Kabhi Khushi Kabhie Gham... (2001), Gadar: Ek Prem Katha (2001), Lagaan (2001), Koi... Mil Gaya (2003), Kal Ho Naa Ho (2003), Veer-Zaara (2004), Rang De Basanti (2006), Lage Raho Munna Bhai (2006), Dhoom 2 (2006), Krrish (2006), and Jab We Met (2007), among others, showing the rise of new movie stars. During the 2010s, the industry saw established stars making big-budget masala films like Dabangg (2010), Singham (2011), Ek Tha Tiger (2012), Son of Sardaar (2012), Rowdy Rathore (2012), Chennai Express (2013), Kick (2014) and Happy New Year (2014) with much-younger actresses. Although the films were often not praised by critics, they were commercially successful. Some of the films starring Aamir Khan, from (2007) and 3 Idiots (2009) to Dangal (2016) and Secret Superstar (2017), have been credited with redefining and modernising the masala film with a distinct brand of socially conscious cinema. Most stars from the 2000s continued successful careers into the next decade, and the 2010s saw a new generation of popular actors in different films. Among new conventions, female-centred films such as The Dirty Picture (2011), Kahaani (2012), Queen (2014), Pink (2016), Raazi (2018) and Gangubai Kathiawadi (2022) began achieving wide financial success. Influences on Hindi cinema Moti Gokulsing and Wimal Dissanayake identify six major influences which have shaped Indian popular cinema: The branching structures of ancient Indian epics, like the Mahabharata and Ramayana. Indian popular films often have plots which branch off into sub-plots. Ancient Sanskrit drama, with its stylised nature and emphasis on spectacle in which music, dance and gesture combine "to create a vibrant artistic unit with dance and mime being central to the dramatic experience." Matthew Jones of De Montfort University also identifies the Sanskrit concept of rasa, or "the emotions felt by the audience as a result of the actor's presentation", as crucial to Bollywood films. Traditional folk theater, which became popular around the 10th century with the decline of Sanskrit theater. Its regional traditions include the Jatra of Bengal, the Ramlila of Uttar Pradesh, and the Terukkuttu of Tamil Nadu. Parsi theatre, which "blended realism and fantasy, music and dance, narrative and spectacle, earthy dialogue and ingenuity of stage presentation, integrating them into a dramatic discourse of melodrama. The Parsi plays contained crude humour, melodious songs and music, sensationalism and dazzling stagecraft." Hollywood, where musicals were popular from the 1920s to the 1950s. Western musical television (particularly MTV), which has had an increasing influence since the 1990s. Its pace, camera angles, dance sequences and music may be seen in 2000s Indian films. An early example of this approach was Mani Ratnam's Bombay (1995). Sharmistha Gooptu identifies Indo-Persian-Islamic culture as a major influence. During the early 20th century, Urdu was the lingua franca of popular cultural performance across northern India and was established in popular performance art traditions such as nautch dancing, Urdu poetry, and Parsi theater. Urdu and related Hindi dialects were the most widely understood across northern India, and Hindustani became the standard language of early Indian talkies. 
Films based on "Persianate adventure-romances" led to a popular genre of "Arabian Nights cinema". Scholars Chaudhuri Diptakirti and Rachel Dwyer and screenwriter Javed Akhtar identify Urdu literature as a major influence on Hindi cinema. Most of the screenwriters and scriptwriters of classic Hindi cinema came from Urdu literary backgrounds, from Khwaja Ahmad Abbas and Akhtar ul Iman to Salim–Javed and Rahi Masoom Raza; a handful came from other Indian literary traditions, such as Bengali and Hindi literature. Most of Hindi cinema's classic scriptwriters wrote primarily in Urdu, including Salim-Javed, Gulzar, Rajinder Singh Bedi, Inder Raj Anand, Rahi Masoom Raza and Wajahat Mirza. Urdu poetry and the ghazal tradition strongly influenced filmi (Bollywood lyrics). Javed Akhtar was also greatly influenced by Urdu novels by Pakistani author Ibn-e-Safi, such as the Jasoosi Dunya and Imran series of detective novels; they inspired, for example, famous Bollywood characters such as Gabbar Singh in Sholay (1975) and Mogambo in Mr. India (1987). In recent times, accusations have been made against Bollywood of being anti-Hindu and promoting Urdu too much, to the extent of transforming into "Urduwood"; boycotts against Bollywood have been launched by Hindu nationalists on this point. Todd Stadtman identifies several foreign influences on 1970s commercial Bollywood masala films, including New Hollywood, Italian exploitation films, and Hong Kong martial arts cinema. After the success of Bruce Lee films (such as Enter the Dragon) in India, Deewaar (1975) and other Bollywood films incorporated fight scenes inspired by 1970s martial arts films from Hong Kong cinema until the 1990s. Bollywood action scenes emulated Hong Kong rather than Hollywood, emphasising acrobatics and stunts and combining kung fu (as perceived by Indians) with Indian martial arts such as pehlwani. Influence of Hindi cinema India Perhaps Hindi cinema's greatest influence has been on India's national identity, where (with the rest of Indian cinema) it has become part of the "Indian story". In India, Bollywood is often associated with India's national identity. According to economist and Bollywood biographer Meghnad Desai, "Cinema actually has been the most vibrant medium for telling India its own story, the story of its struggle for independence, its constant struggle to achieve national integration and to emerge as a global presence". Scholar Brigitte Schulze has written that Indian films, most notably Mehboob Khan's Mother India (1957), played a key role in shaping the Republic of India's national identity in the early years after independence from the British Raj; the film conveyed a sense of Indian nationalism to urban and rural citizens alike. Bollywood has long influenced Indian society and culture as the biggest entertainment industry; many of the country's musical, dancing, wedding and fashion trends are Bollywood-inspired. Bollywood fashion trendsetters have included Madhubala in Mughal-e-Azam (1960) and Madhuri Dixit in Hum Aapke Hain Koun..! (1994). Hindi films have also had a socio-political impact on Indian society, reflecting Indian politics. In classic 1970s Bollywood films, Bombay underworld crime films written by Salim–Javed and starring Amitabh Bachchan such as Zanjeer (1973) and Deewaar (1975) reflected the socio-economic and socio-political realities of contemporary India. 
They channeled growing popular discontent and disillusionment with the state's failure to ensure welfare and well-being at a time of inflation, shortages, loss of confidence in public institutions, increasing crime and the unprecedented growth of slums. Salim-Javed and Bachchan's films dealt with urban poverty, corruption and organised crime; they were perceived by audiences as anti-establishment, often with an "angry young man" protagonist presented as a vigilante or anti-hero whose suppressed rage voiced the anguish of the urban poor. Overseas Hindi films have been a significant form of soft power for India, increasing its influence and changing overseas perceptions of India. In Germany, Indian stereotypes included bullock carts, beggars, sacred cows, corrupt politicians, and catastrophes before Bollywood and the IT industry transformed global perceptions of India. According to author Roopa Swaminathan, "Bollywood cinema is one of the strongest global cultural ambassadors of a new India." Its role in expanding India's global influence is comparable to Hollywood's similar role with American influence. Monroe Township, Middlesex County, New Jersey, in the New York metropolitan area, has been profoundly impacted by Bollywood; this U.S. township has displayed one of the fastest growth rates of its Indian population in the Western Hemisphere, increasing from 256 (0.9%) at the 2000 Census to an estimated 5,943 (13.6%) as of 2017, a 2,221.5% numerical increase (a multiple of 23) over that period. The township's Indian community includes many affluent professionals and senior citizens, charitable benefactors to COVID-19 relief efforts in India in official coordination with Monroe Township, and actors with second homes. During the 2000s, Hindi cinema began influencing musical films in the Western world and played an instrumental role in reviving the American musical film. Baz Luhrmann said that his musical film, Moulin Rouge! (2001), was inspired by Bollywood musicals; the film incorporated a Bollywood-style dance scene with a song from the film China Gate. The critical and financial success of Moulin Rouge! began a renaissance of Western musical films such as Chicago, Rent, and Dreamgirls. Indian film composer A. R. Rahman wrote the music for Andrew Lloyd Webber's Bombay Dreams, and a musical version of Hum Aapke Hain Koun was staged in London's West End. The sports film Lagaan (2001) was nominated for the Academy Award for Best Foreign Language Film, and two other Hindi films (2002's Devdas and 2006's Rang De Basanti) were nominated for the BAFTA Award for Best Film Not in the English Language. Danny Boyle's Slumdog Millionaire (2008), which won four Golden Globes and eight Academy Awards, was inspired by mainstream Hindi films and is considered an "homage to Hindi commercial cinema". It was also inspired by Mumbai-underworld crime films, such as Deewaar (1975), Satya (1998), Company (2002) and Black Friday (2007). Deewaar had a Hong Kong remake, The Brothers (1979), which inspired John Woo's internationally acclaimed breakthrough A Better Tomorrow (1986); the latter was a template for Hong Kong action cinema's heroic bloodshed genre. "Angry young man" 1970s epics such as Deewaar and Amar Akbar Anthony (1977) also resemble the heroic-bloodshed genre of 1980s Hong Kong action cinema. The influence of filmi may be seen in popular music worldwide. 
Technopop pioneers Haruomi Hosono and Ryuichi Sakamoto of the Yellow Magic Orchestra produced a 1978 electronic album, Cochin Moon, based on an experimental fusion of electronic music and Bollywood-inspired Indian music. Truth Hurts' 2002 song "Addictive", produced by DJ Quik and Dr. Dre, was lifted from Lata Mangeshkar's "Thoda Resham Lagta Hai" in Jyoti (1981). The Black Eyed Peas' Grammy Award winning 2005 song "Don't Phunk with My Heart" was inspired by two 1970s Bollywood songs: "Ye Mera Dil Yaar Ka Diwana" from Don (1978) and "Ae Nujawan Hai Sub" from Apradh (1972). Both songs were composed by Kalyanji Anandji, sung by Asha Bhosle, and featured the dancer Helen. The Kronos Quartet re-recorded several R. D. Burman compositions sung by Asha Bhosle for their 2005 album, You've Stolen My Heart: Songs from R.D. Burman's Bollywood, which was nominated for Best Contemporary World Music Album at the 2006 Grammy Awards. Filmi music composed by A. R. Rahman (who received two Academy Awards for the Slumdog Millionaire soundtrack) has frequently been sampled by other musicians, including the Singaporean artist Kelly Poon, the French rap group La Caution and the American artist Ciara. Many Asian Underground artists, particularly those among the overseas Indian diaspora, have also been inspired by Bollywood music. Genres Hindi films are primarily musicals, and are expected to have catchy song-and-dance numbers woven into the script. A film's success often depends on the quality of such musical numbers. A film's music and song and dance portions are usually produced first and these are often released before the film itself, increasing its audience. Indian audiences expect value for money, and a good film is generally referred to as paisa vasool, (literally "money's worth"). Songs, dances, love triangles, comedy and dare-devil thrills are combined in a three-hour show (with an intermission). These are called masala films, after the Hindi word for a spice mixture. Like masalas, they are a mixture of action, comedy and romance; most have heroes who can fight off villains single-handedly. Bollywood plots have tended to be melodramatic, frequently using formulaic ingredients such as star-crossed lovers, angry parents, love triangles, family ties, sacrifice, political corruption, kidnapping, villains, kind-hearted courtesans, long-lost relatives and siblings, reversals of fortune and serendipity. Parallel cinema films tended to be less popular at the box office. A large Indian diaspora in English-speaking countries and increased Western influence in India have nudged Bollywood films closer to Hollywood. According to film critic Lata Khubchandani, "Our earliest films ... had liberal doses of sex and kissing scenes in them. Strangely, it was after Independence the censor board came into being and so did all the strictures." Although Bollywood plots feature Westernised urbanites dating and dancing in clubs rather than pre-arranged marriages, traditional Indian culture continues to exist outside the industry and is an element of resistance by some to Western influences. Bollywood plays a major role, however, in Indian fashion. Studies have indicated that some people, unaware that changing fashion in Bollywood films is often influenced by globalisation, consider the clothes worn by Bollywood actors as authentically Indian. Casts and crews Bollywood employs people from throughout India. It attracts thousands of aspiring actors hoping for a break in the industry. 
Models and beauty contestants, television actors, stage actors and ordinary people come to Mumbai with the hope of becoming a star. As in Hollywood, very few succeed. Since many Bollywood films are shot abroad, many foreign extras are employed. Bollywood producers pay writers low wages. Very few non-Indian actors are able to make a mark in Hindi cinema, although many have tried. Since the early decades of the industry, many South Indian actresses debuted in the Bombay industry and became mainstream Bollywood stars, including Vyjayanthimala, Hema Malini, Rekha, and Sridevi. A number of foreign actresses became successful despite not knowing Hindi. Hindi cinema can be insular, and relatives of film-industry figures have an edge in obtaining coveted roles in films or being part of a film crew. However, industry connections are no guarantee of a long career: competition is fierce, and film-industry scions will falter if they do not succeed at the box office. Several Hindi filmmakers have regularly been criticised for allegedly practising nepotism. Critics and fans have accused them of hindering the careers of outsiders (aspiring artists without connections in the industry) and of readily giving roles to, and promoting, the children of established actors, directors and producers. Criticism has also targeted the big production houses (e.g. Yash Raj Films) for their tendency to work with actors from their own social circles. Moreover, the problem of the casting couch has been raised in reference to the Hindi film industry, and it received stronger notice during the MeToo movement. Scripts, dialogues, and lyrics In Hindi films, scripts, dialogues and song lyrics might be written by different people. Earlier, scripts were usually written in an unadorned Hindustani, which would be understood by the largest possible audience. Post-Independence, Hindi films tended to use a colloquial register of Hindustani, mutually intelligible by Hindi and Urdu speakers, but the use of the latter has declined over the years. Some films have used regional dialects to evoke a village setting, or archaic Urdu in medieval historical films. A number of the dominant early scriptwriters of Hindi cinema primarily wrote in Urdu; Salim-Javed wrote in Urdu script, which was then transcribed by an assistant into Devanagari script so Hindi readers could read them. During the 1970s, Urdu writers Krishan Chander and Ismat Chughtai said that "more than seventy-five per cent of films are made in Urdu" but were categorised as Hindi films by the government. The Encyclopedia of Hindi Cinema noted a number of top Urdu writers for preserving the language through film. Urdu poetry has strongly influenced Hindi film songs, whose lyrics also draw from the ghazal tradition (filmi-ghazal). According to Javed Akhtar in 1996, despite the loss of Urdu in Indian society, Urdu diction dominated Hindi film dialogue and lyrics. In her book, The Cinematic ImagiNation, Jyotika Virdi wrote about the presence and decline of Urdu in Hindi films. Virdi notes that although Urdu was widely used in classic Hindi cinema decades after partition because it was widely taught in pre-partition India, its use has declined in modern Hindi cinema: "The extent of Urdu used in commercial Hindi cinema has not been stable ... the ultimate victory of Hindi in the official sphere has been more or less complete. This decline of Urdu is mirrored in Hindi films ... 
It is true that many Urdu words have survived and have become part of Hindi cinema's popular vocabulary. But that is as far as it goes. The fact is, for the most part, popular Hindi cinema has forsaken the florid Urdu that was part of its extravagance and retained a 'residual' Urdu", affected by an aggressive state policy that promoted a Sanskritized version of Hindi as the national language. Contemporary mainstream films also use English; according to the article "Bollywood Audiences Editorial", "English has begun to challenge the ideological work done by Urdu." Some film scripts are first written in Latin script. Characters may shift from one language to the other to evoke a particular atmosphere (for example, English in a business setting and Hindi in an informal one). The blend of Hindi and English sometimes heard in modern Hindi films, known as Hinglish, has become increasingly common. For years before the turn of the millennium and even after, cinematic language (in dialogues or lyrics) would often be melodramatic, invoking God, family, mother, duty, and self-sacrifice. Song lyrics are often about love and, especially in older films, frequently used the poetic vocabulary of court Urdu, with a number of Persian loanwords. Another source for love lyrics in films such as Jhanak Jhanak Payal Baje and Lagaan is the long Hindu tradition of poetry about the loves of Krishna, Radha, and the gopis. Music directors often prefer working with certain lyricists, and the lyricist and composer may be seen as a team. This phenomenon has been compared to the pairs of American composers and songwriters who created classic Broadway musicals. As late as 2008, Bollywood scripts were often handwritten because of a perception in the industry that manual writing is the quickest way to create scripts. Sound Sound in early Bollywood films was usually not recorded on location (sync sound). It was usually created (or re-created) in the studio, with the actors speaking their lines in the studio and sound effects added later; this created synchronisation problems. Commercial Indian films are known for their lack of ambient sound, and the Arriflex 3 camera necessitated dubbing. Lagaan (2001) was filmed with sync sound, and several Bollywood films have recorded on-location sound since then. Bollywood films are also known for minimal or absent Foley sound, which means audiences often do not hear the incidental sounds of objects on screen; loud background music sometimes renders dialogue inaudible, and Hindi filmmakers usually do not credit Foley artists in the end titles. Female makeup artists In 1955, the Bollywood Cine Costume Make-Up Artist & Hair Dressers' Association (CCMAA) ruled that female makeup artists were barred from membership. The Supreme Court of India ruled in 2014 that the ban violated Indian constitutional guarantees under Article 14 (right to equality), 19(1)(g) (freedom to work) and Article 21 (right to liberty). According to the court, the ban had no "rationale nexus" to the cause sought to be achieved and was "unacceptable, impermissible and inconsistent" with the constitutional rights guaranteed to India's citizens. The court also found illegal the rule which mandated that for any artist to work in the industry, they must have lived for five years in the state where they intend to work. In 2015, it was announced that Charu Khurana was the first woman registered by the Cine Costume Make-Up Artist & Hair Dressers' Association. 
Song and dance Bollywood film music is called filmi (from the Hindi "of films"). Bollywood songs were introduced with Ardeshir Irani's Alam Ara (1931) song, "De De Khuda Ke Naam pay pyaare". Bollywood songs are generally pre-recorded by professional playback singers, with the actors then lip syncing the words to the song on-screen (often while dancing). Although most actors are good dancers, few are also singers; a notable exception was Kishore Kumar, who starred in several major films during the 1950s while having a rewarding career as a playback singer. K. L. Saigal, Suraiyya, and Noor Jehan were known as singers and actors, and some actors in the last thirty years have sung one or more songs themselves. Songs can make and break a film, determining whether it will be a flop or a hit: "Few films without successful musical tracks, and even fewer without any songs and dances, succeed". Globalization has changed Bollywood music, with lyrics an increasing mix of Hindi and English. Global trends such as salsa, pop and hip hop have influenced the music heard in Bollywood films. Playback singers are featured in the opening credits, and have fans who will see an otherwise-lackluster film to hear their favourites. Notable singers are Lata Mangeshkar, Asha Bhosle, Geeta Dutt, Shamshad Begum, Kavita Krishnamurthy, Sadhana Sargam, Alka Yagnik and Shreya Goshal (female), and K. L. Saigal, Kishore Kumar, Talat Mahmood, Mukesh, Mohammed Rafi, Manna Dey, Hemant Kumar, Kumar Sanu, Udit Narayan and Sonu Nigam (male). Composers of film music, known as music directors, are also well-known. Remixing of film songs with modern rhythms is common, and producers may release remixed versions of some of their films' songs with the films' soundtrack albums. Dancing in Bollywood films, especially older films, is modeled on Indian dance: classical dance, dances of north-Indian courtesans (tawaif) or folk dances. In modern films, Indian dance blends with Western dance styles as seen on MTV or in Broadway musicals; Western pop and classical-dance numbers are commonly seen side-by-side in the same film. The hero (or heroine) often performs with a troupe of supporting dancers. Many song-and-dance routines in Indian films contain unrealistically-quick shifts of location or changes of costume between verses of a song. If the hero and heroine dance and sing a duet, it is often staged in natural surroundings or architecturally-grand settings. Songs typically comment on the action taking place in the film. A song may be worked into the plot, so a character has a reason to sing. It may externalise a character's thoughts, or presage an event in the film (such as two characters falling in love). The songs are often referred to as a "dream sequence", with things happening which would not normally happen in the real world. Song and dance scenes were often filmed in Kashmir but, due to political unrest in Kashmir since the end of the 1980s, they have been shot in western Europe (particularly Switzerland and Austria). Contemporary movie stars attracted popularity as dancers, including Madhuri Dixit, Hrithik Roshan, Aishwarya Rai Bachchan, Sridevi, Meenakshi Seshadri, Malaika Arora Khan, Shahid Kapoor, Katrina Kaif and Tiger Shroff. Older dancers include Helen (known for her cabaret numbers), Madhubala, Vyjanthimala, Padmini, Hema Malini, Mumtaz, Cuckoo Moray, Parveen Babi , Waheeda Rahman, Meena Kumari, and Shammi Kapoor. 
Film producers have been releasing soundtracks (as tapes or CDs) before a film's release, hoping that the music will attract audiences; a soundtrack is often more popular than its film. Some producers also release music videos, usually (but not always) with a song from the film. Finances Bollywood films are multi-million dollar productions, with the most expensive productions costing up to 1 billion (about US$20 million). The science-fiction film Ra.One was made on a budget of 1.35 billion (about $27 million), making it the most expensive Bollywood film of all time. Sets, costumes, special effects and cinematography were less than world-class, with some notable exceptions, until the mid-to-late 1990s. As Western films and television are more widely distributed in India, there is increased pressure for Bollywood films to reach the same production levels (particularly in action and special effects). Recent Bollywood films, like Krrish (2006), have employed international technicians such as Hong Kong-based action choreographer Tony Ching. The increasing accessibility of professional action and special effects, coupled with rising film budgets, have seen an increase in action and science-fiction films. Since overseas scenes are attractive at the box office, Mumbai film crews are filming in Australia, Canada, New Zealand, the United Kingdom, the United States, Europe and elsewhere. Indian producers have also obtained funding for big-budget films shot in India, such as Lagaan and Devdas. Funding for Bollywood films often comes from private distributors and a few large studios. Although Indian banks and financial institutions had been forbidden from lending to film studios, the ban has been lifted. Finances are not regulated; some funding comes from illegitimate sources such as the Mumbai underworld, which is known to influence several prominent film personalities. Mumbai organised-crime hitmen shot Rakesh Roshan, a film director and father of star Hrithik Roshan, in January 2000. In 2001, the Central Bureau of Investigation seized all prints of Chori Chori Chupke Chupke after the film was found to be funded by members of the Mumbai underworld. Another problem facing Bollywood is widespread copyright infringement of its films. Often, bootleg DVD copies of movies are available before they are released in cinemas. Manufacturing of bootleg DVD, VCD, and VHS copies of the latest movie titles is an established small-scale industry in parts of south and southeast Asia. The Federation of Indian Chambers of Commerce and Industry (FICCI) estimates that the Bollywood industry loses $100 million annually from unlicensed home videos and DVDs. In addition to the homegrown market, demand for these copies is large amongst portions of the Indian diaspora. Bootleg copies are the only way people in Pakistan can watch Bollywood movies, since the Pakistani government has banned their sale, distribution and telecast. Films are frequently broadcast without compensation by small cable-TV companies in India and other parts of South Asia. Small convenience stores, run by members of the Indian diaspora in the US and the UK, regularly stock tapes and DVDs of dubious provenance; consumer copying adds to the problem. The availability of illegal copies of movies on the Internet also contributes to industry losses. Satellite TV, television and imported foreign films are making inroads into the domestic Indian entertainment market. In the past, most Bollywood films could make money; now, fewer do. 
Most Bollywood producers make money, however, recouping their investments from many sources of revenue (including the sale of ancillary rights). There are increasing returns from theatres in Western countries like the United Kingdom, Canada, and the United States, where Bollywood is slowly being noticed. As more Indians migrate to these countries, they form a growing market for upscale Indian films. In 2002, Bollywood sold 3.6 billion tickets and had a total revenue (including theatre tickets, DVDs and television) of $1.3 billion; Hollywood films sold 2.6 billion tickets, and had a total revenue of $51 billion. Advertising A number of Indian artists hand-painted movie billboards and posters. M. F. Husain painted film posters early in his career; human labour was found to be cheaper than printing and distributing publicity material. Most of the large, ubiquitous billboards in India's major cities are now created with computer-printed vinyl. Old hand-painted posters, once considered ephemera, are collectible folk art. Releasing film music, or music videos, before a film's release may be considered a form of advertising. A popular tune is believed to help attract audiences. Bollywood publicists use the Internet as a venue for advertising. Most bigger-budget films have websites on which audiences can view trailers, stills and information on the story, cast, and crew. Bollywood is also used to advertise other products. Product placement, used in Hollywood, is also common in Bollywood. International filming Bollywood's increasing use of international settings such as Switzerland, London, Paris, New York, Mexico, Brazil and Singapore does not necessarily represent the people and cultures of those locales. Rather than being filmed as they are, these spaces and geographies are Indianised by the addition of Bollywood actors and Hindi-speaking extras. Immersed in Bollywood films, viewers see their local experiences duplicated in different locations around the world. According to Shakuntala Rao, "Media representation can depict India's shifting relation with the world economy, but must retain its 'Indianness' in moments of dynamic hybridity"; "Indianness" (cultural identity) poses a problem with Bollywood's popularity among varied diaspora audiences, but gives its domestic audience a sense of uniqueness from other immigrant groups. Distribution To release a film theatrically or online in India, a filmmaker must first apply for certification to the Central Board of Film Certification (CBFC), submitting a print of the film; only after a CBFC certificate has been received can a trailer or the film itself be released in India. Members of the CBFC view the film, assign an age-restriction rating, suggest cuts to objectionable scenes, or can ban the film from exhibition anywhere in the country. Film distribution is an important part of the movie business: Hindi films reach audiences across India through distribution circuits. PVR Cinemas and INOX Leisure are among the top multiplex chains in India, with cinemas across the country that exhibit films. BookMyShow is the leading ticket-selling mobile application in India and has tie-ups with many such multiplexes, although PVR and INOX also sell tickets through their own applications and websites. Because of the convenience of booking online, most viewers pre-book tickets through mobile applications. 
With the advancement of internet service in the country, the online ticket-selling business has seen robust growth. From the 2010s onward, online platforms gained popularity in India, and many filmmakers have at times preferred to release their films directly on a paid streaming service such as Netflix, Amazon Prime Video, SonyLIV, ZEE5 or Disney+ Hotstar, avoiding a theatrical release. Awards The Filmfare Awards are some of the most prominent awards given to Hindi films in India. The Indian screen magazine Filmfare began the awards in 1954 (recognising the best films of 1953), and they were originally known as the Clare Awards after the magazine's editor. Modeled on the poll-based merit format of the Academy of Motion Picture Arts and Sciences, the awards allow individuals to vote in separate categories. A dual voting system was developed in 1956. The National Film Awards were also introduced in 1954. The Indian government has sponsored the awards, given by its Directorate of Film Festivals (DFF), since 1973. The DFF screens Bollywood films, films from the other regional movie industries, and independent/art films. The awards are made at an annual ceremony presided over by the president of India. Unlike the Filmfare Awards, which are chosen by the public and a committee of experts, the National Film Awards are decided by a government panel. Other awards ceremonies for Hindi films in India are the Screen Awards (begun in 1995) and the Stardust Awards, which began in 2003. The International Indian Film Academy Awards (begun in 2000) and the Zee Cine Awards, begun in 1998, are held abroad in a different country each year. Global markets In addition to their popularity among the Indian diaspora from Nigeria and Senegal to Egypt and Russia, generations of non-Indians have grown up with Bollywood. Indian cinema's early contacts with other regions made inroads into the Soviet Union, the Middle East, Southeast Asia, and China. Bollywood entered the consciousness of Western audiences and producers during the late 20th century, and Western actors now seek roles in Bollywood films. Asia-Pacific South Asia Bollywood films are also popular in Pakistan, Bangladesh, and Nepal, where Hindustani is widely understood. Many Pakistanis understand Hindi, due to its linguistic similarity to Urdu. Although Pakistan banned the import of Bollywood films in 1965, trade in unlicensed DVDs and illegal cable broadcasts ensured their continued popularity. Exceptions to the ban were made for a few films, such as the colourised re-release of Mughal-e-Azam and Taj Mahal in 2006. Early in 2008, the Pakistani government permitted the import of 16 films. More easing followed in 2009 and 2010. Although it is opposed by nationalists and representatives of Pakistan's small film industry, it is embraced by cinema owners who are making a profit after years of low receipts. The most popular actors in Pakistan are the three Khans of Bollywood: Salman, Shah Rukh, and Aamir. The most popular actress is Madhuri Dixit; at India-Pakistan cricket matches during the 1990s, Pakistani fans chanted "Madhuri dedo, Kashmir lelo!" ("Give Madhuri, take Kashmir!") Bollywood films in Nepal earn more than Nepali films, and Salman Khan, Akshay Kumar and Shah Rukh Khan are popular in the country. The films are also popular in Afghanistan due to its proximity to the Indian subcontinent and their cultural similarities, particularly in music. Popular actors include Shah Rukh Khan, Ajay Devgan, Sunny Deol, Aishwarya Rai, Preity Zinta, and Madhuri Dixit. 
A number of Bollywood films were filmed in Afghanistan and some dealt with the country, including Dharmatma, Kabul Express, Khuda Gawah and Escape From Taliban. Southeast Asia Bollywood films are popular in Southeast Asia, particularly in maritime Southeast Asia. The three Khans are very popular in the Malay world, including Indonesia, Malaysia, and Singapore. The films are also fairly popular in Thailand. India has cultural ties with Indonesia, and Bollywood films were introduced to the country at the end of World War II in 1945. The "angry young man" films of Amitabh Bachchan and Salim–Javed were popular during the 1970s and 1980s before Bollywood's popularity began gradually declining in the 1980s and 1990s. It experienced an Indonesian revival with the release of Shah Rukh Khan's Kuch Kuch Hota Hai (1998) in 2001, which was a bigger box-office success in the country than Titanic (1997). Bollywood has had a strong presence in Indonesia since then, particularly Shah Rukh Khan films such as Mohabbatein (2000), Kabhi Khushi Kabhie Gham... (2001), Kal Ho Naa Ho, Chalte Chalte and Koi... Mil Gaya (all 2003), and Veer-Zaara (2004). East Asia Some Bollywood films have been widely appreciated in China, Japan, and South Korea. Several Hindi films have been commercially successful in Japan, including Mehboob Khan's Aan (1952, starring Dilip Kumar) and Aziz Mirza's Raju Ban Gaya Gentleman (1992, starring Shah Rukh Khan). The latter sparked a two-year boom in Indian films after its 1997 release, with Dil Se.. (1998) a beneficiary of the boom. The highest-grossing Hindi film in Japan is 3 Idiots (2009), starring Aamir Khan, which received a Japanese Academy Award nomination. The film was also a critical and commercial success in South Korea. Dr. Kotnis Ki Amar Kahani, Awaara, and Do Bigha Zamin were successful in China during the 1940s and 1950s, and remain popular with their original audience. Few Indian films were commercially successful in the country during the 1970s and 1980s, among them Tahir Hussain's Caravan, Noorie and Disco Dancer. Indian film stars popular in China included Raj Kapoor, Nargis, and Mithun Chakraborty. Hindi films declined significantly in popularity in China during the 1980s. Films by Aamir Khan have recently been successful, and Lagaan was the first Indian film with a nationwide Chinese release in 2011. Chinese filmmaker He Ping was impressed by Lagaan (particularly its soundtrack), and hired its composer A. R. Rahman to score his Warriors of Heaven and Earth (2003). When 3 Idiots was released in China, China was the world's 15th-largest film market (partly due to its widespread pirate DVD distribution at the time). The pirate market introduced the film to Chinese audiences, however, and it became a cult hit. According to the Douban film-review site, 3 Idiots is China's 12th-most-popular film of all time; only one domestic Chinese film (Farewell My Concubine) ranks higher, and Aamir Khan acquired a large Chinese fan base as a result. After 3 Idiots, several of Khan's other films (including 2007's and 2008's Ghajini) also developed cult followings. China became the world's second-largest film market (after the United States) by 2013, paving the way for Khan's box-office success with Dhoom 3 (2013), PK (2014), and Dangal (2016). The latter is the 16th-highest-grossing film in China, the fifth-highest-grossing non-English language film worldwide, and the highest-grossing non-English foreign film in any market. 
Several Khan films, including 3 Idiots and Dangal, are highly rated on Douban. His next film, Secret Superstar (2017, starring Zaira Wasim), broke Dangal's record for the highest-grossing opening weekend by an Indian film and cemented Khan's status as "a king of the Chinese box office"; Secret Superstar was China's highest-grossing foreign film of 2018 to date. Khan has become a household name in China, with his success described as a form of Indian soft power improving China–India relations despite political tensions. With Bollywood competing with Hollywood in the Chinese market, the success of Khan's films has driven up the price for Chinese distributors of Indian film imports. Salman Khan's Bajrangi Bhaijaan and Irrfan Khan's Hindi Medium were also Chinese hits in early 2018. Oceania Although Bollywood is less successful on some Pacific islands such as New Guinea, it ranks second to Hollywood in Fiji (with its large Indian minority), Australia and New Zealand. Australia also has a large South Asian diaspora, and Bollywood is popular amongst non-Asians in the country as well. Since 1997, the country has been a backdrop for an increasing number of Bollywood films. Indian filmmakers, attracted to Australia's diverse locations and landscapes, initially used the country as a setting for song-and-dance scenes; however, Australian locations now figure in Bollywood film plots. Hindi films shot in Australia usually incorporate Australian culture. Yash Raj Films' Salaam Namaste (2005), the first Indian film shot entirely in Australia, was the most successful Bollywood film of 2005 in that country. It was followed by the box-office successes Heyy Babyy (2007), Chak De! India (2007), and Singh Is Kinng (2008). Prime Minister John Howard said during a visit to India after the release of Salaam Namaste that he wanted to encourage Indian filmmaking in Australia to increase tourism, and he appointed Steve Waugh as tourism ambassador to India. Australian actress Tania Zaetta, who appeared in Salaam Namaste and several other Bollywood films, was eager to expand her career in Bollywood. Eastern Europe and Central Asia Bollywood films are popular in the former Soviet Union (Russia, Eastern Europe, and Central Asia), and have been dubbed into Russian. Indian films were more popular in the Soviet Union than Hollywood films and, sometimes, domestic Soviet films. The first Indian film released in the Soviet Union, in 1949, was Dharti Ke Lal (1946), directed by Khwaja Ahmad Abbas and based on the Bengal famine of 1943. Three hundred Indian films were released in the Soviet Union after that; most were Bollywood films with higher average audience figures than domestic Soviet productions. Fifty Indian films had over 20 million viewers, compared to 41 Hollywood films. Some, such as Awaara (1951) and Disco Dancer (1982), had more than 60 million viewers and established actors Raj Kapoor, Nargis, Rishi Kapoor and Mithun Chakraborty in the country. According to diplomat Ashok Sharma, who served in the Commonwealth of Independent States, after the collapse of the Soviet film-distribution system Hollywood filled the void in the Russian film market and Bollywood's market share shrank. A 2007 Russia Today report noted a renewed interest in Bollywood by young Russians. In Poland, Shah Rukh Khan has a large following. He was introduced to Polish audiences with the 2005 release of Kabhi Khushi Kabhie Gham... (2001) and his other films, including Dil Se.. 
(1998), Main Hoon Na (2004) and Kabhi Alvida Naa Kehna (2006), became hits in the country. Bollywood films are often covered in Gazeta Wyborcza, formerly Poland's largest newspaper. The upcoming movie Squad is the first Indian film to be shot in Belarus. A majority of the film was shot at Belarusfilm studios, in Minsk. Middle East and North Africa Hindi films have become popular in Arab countries, and imported Indian films are usually subtitled in Arabic when they are released. Bollywood has progressed in Israel since the early 2000s, with channels dedicated to Indian films on cable television; MBC Bollywood and Zee Aflam show Hindi movies and serials. In Egypt, Bollywood films were popular during the 1970s and 1980s. In 1987, however, they were restricted to a handful of films by the Egyptian government. Amitabh Bachchan has remained popular in the country and Indian tourists visiting Egypt are asked, "Do you know Amitabh Bachchan?" Bollywood movies are regularly screened in Dubai cinemas, and Bollywood is becoming popular in Turkey; Barfi! was the first Hindi film to have a wide theatrical release in that country. Bollywood also has viewers in Central Asia (particularly Uzbekistan and Tajikistan). South America Bollywood films are not influential in most of South America, although its culture and dance are recognised. Due to significant South Asian diaspora communities in Suriname and Guyana, however, Hindi-language movies are popular. In 2006, Dhoom 2 became the first Bollywood film to be shot in Rio de Janeiro. In January 2012, it was announced that UTV Motion Pictures would begin releasing films in Peru with Guzaarish. Africa Hindi films were originally distributed to some parts of Africa by Lebanese businessmen. In the 1950s, Hindi and Egyptian films were generally more popular than Hollywood films in East Africa. By the 1960s, East Africa was one of the largest overseas export markets for Indian films, accounting for about 20–50% of global earnings for many Indian films. Mother India (1957) continued to be screened in Nigeria decades after its release. Indian movies have influenced Hausa clothing, songs have been covered by Hausa singers, and stories have influenced Nigerian novelists. Stickers of Indian films and stars decorate taxis and buses in Nigeria's Northern Region, and posters of Indian films hang on the walls of tailoring shops and mechanics' garages. Unlike Europe and North America, where Indian films cater to the expatriate market, Bollywood films became popular in West Africa despite the lack of a significant Indian audience. One possible explanation is cultural similarity: the wearing of turbans, animals in markets, porters carrying large bundles, and traditional wedding celebrations. Within Muslim culture, Indian movies were said to show "respect" toward women; Hollywood movies were seen as having "no shame". In Indian movies, women are modestly dressed; men and women rarely kiss and there is no nudity, so the films are said to "have culture" which Hollywood lacks. The latter "don't base themselves on the problems of the people"; Indian films are based on socialist values and the reality of developing countries emerging from years of colonialism. Indian movies permitted a new youth culture without "becoming Western." The first Indian film shot in Mauritius was Souten, starring Rajesh Khanna, in 1983. In South Africa, film imports from India were watched by black and Indian audiences. Several Bollywood figures have travelled to Africa for films and off-camera projects. 
Padmashree Laloo Prasad Yadav (2005) was filmed in South Africa. Dil Jo Bhi Kahey... (2005) was also filmed almost entirely in Mauritius, which has a large ethnic-Indian population. Bollywood, however, seems to be diminishing in popularity in Africa. New Bollywood films are more sexually explicit and violent. Nigerian viewers observed that older films (from the 1950s and 1960s) had more culture and were less Westernised. The old days of India avidly "advocating decolonization ... and India's policy was wholly influenced by his missionary zeal to end racial domination and discrimination in the African territories" were replaced. The emergence of Nollywood (West Africa's film industry) has also contributed to the declining popularity of Bollywood films, as sexualised Indian films became more like American films. Kishore Kumar and Amitabh Bachchan have been popular in Egypt and Somalia. In Ethiopia, Bollywood movies are shown with Hollywood productions in town square theatres such as the Cinema Ethiopia in Addis Ababa. Less-commercial Bollywood films are also screened elsewhere in North Africa. Western Europe and North America The first Indian film to be released in the Western world and receive mainstream attention was Aan (1952), directed by Mehboob Khan and starring Dilip Kumar and Nimmi. It was subtitled in 17 languages and released in 28 countries, including the United Kingdom, the United States, and France. Aan received significant praise from British critics, and The Times compared it favourably to Hollywood productions. Mehboob Khan's later Academy Award-nominated Mother India (1957) was a success in overseas markets, including Europe, Russia, the Eastern Bloc, French territories, and Latin America. Many Bollywood films have been commercially successful in the United Kingdom. The most successful Indian actor at the British box office has been Shah Rukh Khan, whose popularity in British Asian communities played a key role in introducing Bollywood to the UK with films such as Darr (1993), Dilwale Dulhaniya Le Jayenge (1995), and Kuch Kuch Hota Hai (1998). Dil Se (1998) was the first Indian film to enter the UK top ten. A number of Indian films, such as Dilwale Dulhaniya Le Jayenge and Kabhi Khushi Kabhie Gham (2001), have been set in London. Bollywood is also appreciated in France, Germany, the Netherlands, and Scandinavia. Bollywood films are dubbed in German and shown regularly on the German television channel RTL II. Germany is the second-largest European market for Indian films, after the United Kingdom. The most recognised Indian actor in Germany is Shah Rukh Khan, who has had box-office success in the country with films such as Don 2 (2011) and Om Shanti Om (2007). He has a large German fan base, particularly in Berlin (where the tabloid Die Tageszeitung compared his popularity to that of the pope). Bollywood has experienced revenue growth in Canada and the United States, particularly in the South Asian communities of large cities such as Toronto, Chicago, and New York City. Yash Raj Films, one of India's largest production houses and distributors, reported in September 2005 that Bollywood films in the United States earned about $100 million per year in theatre screenings, video sales and the sale of movie soundtracks; Indian films earn more money in the United States than films from any other non-English speaking country. Since the mid-1990s, a number of Indian films have been largely (or entirely) shot in New York, Los Angeles, Vancouver or Toronto. 
Films such as The Guru (2002) and Marigold: An Adventure in India (2007) attempted to popularise Bollywood for Hollywood.

Plagiarism
Pressured by rushed production schedules and small budgets, some writers and musicians in Hindi cinema have been notorious for plagiarism. Ideas, plot lines, tunes or riffs have been copied from other Indian film industries (including Telugu cinema, Tamil cinema, Malayalam cinema and others) or foreign films (including Hollywood and other Asian films) without acknowledging the source. Before the 1990s, plagiarism occurred with impunity. Copyright enforcement was lax in India, and few actors or directors saw an official contract. The Hindi film industry was not widely known in the Global North (except in the Soviet states), so foreign filmmakers were often unaware that their material had been copied. Audiences may not have been aware of plagiarism either, since many in India were unfamiliar with foreign films and music. Although copyright enforcement in India is still somewhat lenient, Bollywood and other film industries are more aware of each other, and Indian audiences are more familiar with foreign films and music. Organisations such as the India EU Film Initiative seek to foster a community between filmmakers and industry professionals in India and the European Union. Many hit films of the 1980s to the 2000s were unofficial remakes (some argue adaptations or merely inspired works) of Hollywood movies, such as Jo Jeeta Wohi Sikandar (1992), Baazigar (1993) and Ghulam (1998), which were said to be inspired by Breaking Away (1979), On the Waterfront (1954), and A Kiss Before Dying (1991), respectively. Only after the mid-2000s did Bollywood producers begin legally purchasing the remake rights to Hollywood and other foreign movies; Players (2012), Bang Bang! (2014) and Lal Singh Chaddha were official remakes of The Italian Job (2003), Knight and Day (2010) and Forrest Gump (1994), respectively. Bollywood filmmakers have also allegedly copied films from the South Korean and Japanese film industries; Zinda (2006), for example, was an unofficial remake of Oldboy (2003). Some Bollywood directors and writers have also used plots from regional-language films without acknowledging the original source. A commonly-reported justification for plagiarism in Bollywood is that cautious producers want to remake popular Hollywood films in an Indian context. Although screenwriters generally produce original scripts, many are rejected due to uncertainty about whether a film will be successful. Poorly-paid screenwriters have also been criticised for a lack of creativity. Some filmmakers see plagiarism in Bollywood as an integral part of globalisation, with which Western (particularly American) culture is embedding itself into Indian culture. Vikram Bhatt, director of Raaz (a remake of What Lies Beneath) and Kasoor (a remake of Jagged Edge), has spoken about the influence of American culture and Bollywood's desire to produce box-office hits along the same lines: "Financially, I would be more secure knowing that a particular piece of work has already done well at the box office. Copying is endemic everywhere in India. Our TV shows are adaptations of American programmes. We want their films, their cars, their planes, their Diet Cokes and also their attitude. The American way of life is creeping into our culture." According to Mahesh Bhatt, "If you hide the source, you're a genius. There's no such thing as originality in the creative sphere".
Although very few cases of film-copyright violations have been taken to court because of a slow legal process, the makers of Partner (2007) and Zinda (2006) were targeted by the owners and distributors of the original films, Hitch and Oldboy. The American studio 20th Century Fox brought Mumbai-based B. R. Films to court over the latter's forthcoming Banda Yeh Bindaas Hai, which Fox alleged was an illegal remake of My Cousin Vinny. B. R. Films eventually settled out of court for about $200,000, paving the way for its film's release. Some studios comply with copyright law; in 2008, Orion Pictures secured the rights to remake Hollywood's Wedding Crashers.

Music
The Pakistani Qawwali musician Nusrat Fateh Ali Khan had a major influence on Hindi film music, inspiring numerous Indian musicians working in Bollywood, especially during the 1990s. However, there were many instances of Indian music directors plagiarising Khan's music to produce hit filmi songs. Several popular examples include Viju Shah's hit song "Tu Cheez Badi Hai Mast Mast" in Mohra (1994), plagiarised from Khan's popular Qawwali song "Dam Mast Qalandar", as well as "Mera Piya Ghar Aya" used in Yaarana (1995) and "Sanoo Ek Pal Chain Na Aaye" in Judaai (1997). Despite the significant number of hit Bollywood songs plagiarised from his music, Nusrat Fateh Ali Khan was reportedly tolerant of the plagiarism. One of the Bollywood music directors who frequently plagiarised him, Anu Malik, claimed that he loved Khan's music and was actually showing admiration by using his tunes. However, Khan was reportedly aggrieved when Malik turned his spiritual "Allah Hoo, Allah Hoo" into "I Love You, I Love You" in Auzaar (1997). Khan said, "He has taken my devotional song Allahu and converted it into I love you. He should at least respect my religious songs." Bollywood soundtracks also plagiarised Guinean singer Mory Kanté, particularly his 1987 album Akwaba Beach. His song "Tama" inspired two Bollywood songs: Bappi Lahiri's "Tamma Tamma" in Thanedaar (1990) and "Jumma Chumma" in Laxmikant–Pyarelal's soundtrack for Hum (1991). The latter also featured "Ek Doosre Se", which copied Kanté's "Inch Allah". His song "Yé ké yé ké" was used as background music in the 1990 Bollywood film Agneepath and is also cited as an inspiration for "Tamma Tamma" in Thanedaar.

Film education
The Film and Television Institute of India (FTII) is the government's film education institute. The institute is situated in Pune, Maharashtra.

See also
Film City
Lists of Hindi films
List of highest-grossing Hindi films worldwide
List of highest-grossing films in India
List of highest domestic net collection of Hindi films
Major film industries in the world: Hollywood, Cinema of South Korea, Cinema of Spain, Cinema of Hong Kong, Cinema of Indonesia, Cinema of Italy

Further reading
Alter, Stephen. Fantasies of a Bollywood Love-Thief: Inside the World of Indian Moviemaking.
Begum-Hossain, Momtaz. Bollywood Crafts: 20 Projects Inspired by Popular Indian Cinema, The Guild of Mastercraftsman Publications, 2006.
Bose, Mihir. Bollywood: A History, New Delhi, Roli Books, 2008.
Dwyer, Rachel. Bollywood's India: Hindi Cinema as a Guide to Contemporary India (Reaktion Books, distributed by University of Chicago Press; 2014), 295 pages.
Ganti, Tejaswini. Bollywood, Routledge, New York and London, 2004.
Ganti, Tejaswini. Producing Bollywood: Inside the Contemporary Hindi Film Industry (Duke University Press; 2012), 424 pages; looks at how major changes in film production since the 1990s have been influenced by the liberal restructuring of India's state and economy.
Gibson, Bernard. 'Bollywood'. Passing the Envelope, 1994.
Jolly, Gurbir, Zenia Wadhwani, and Deborah Barretto, eds. Once Upon a Time in Bollywood: The Global Swing in Hindi Cinema, TSAR Publications, 2007.
Joshi, Lalit Mohan. Bollywood: Popular Indian Cinema.
Kabir, Nasreen Munni. Bollywood, Channel 4 Books, 2001.
Mehta, Suketu. Maximum City, Knopf, 2004.
Mishra, Vijay. Bollywood Cinema: Temples of Desire.
Pendakur, Manjunath. Indian Popular Cinema: Industry, Ideology, and Consciousness.
Prasad, Madhava. Ideology of the Hindi Film: A Historical Construction, Oxford University Press, 2000.
Raheja, Dinesh and Kothari, Jitendra. Indian Cinema: The Bollywood Saga.
Raj, Aditya (2007). "Bollywood Cinema and Indian Diaspora" in Media Literacy: A Reader, edited by Donaldo Macedo and Shirley Steinberg. New York: Peter Lang.
Rajadhyaksha, Ashish (1996). "India: Filming the Nation", The Oxford History of World Cinema, Oxford University Press.
Rajadhyaksha, Ashish and Willemen, Paul. Encyclopedia of Indian Cinema, Oxford University Press, revised and expanded, 1999.
Jha, Subhash and Bachchan, Amitabh (foreword). The Essential Guide to Bollywood.

External links
National Geographic Magazine: "Welcome to Bollywood"
National Institute Of Film and Fine Arts
The Baháʼí Faith is a religion founded in the 19th century that teaches the essential worth of all religions and the unity of all people. Established by Baháʼu'lláh ("Bahaa Allah", Arabic: "Glory to God"), it initially developed in Iran and parts of the Middle East, where it has faced ongoing persecution since its inception. The religion is estimated to have five to eight million adherents, known as Baháʼís, spread throughout most of the world's countries and territories. The Baháʼí Faith has three central figures: the Báb (1819–1850), executed for heresy, who taught that a prophet similar to Jesus and Muhammad would soon appear; Baháʼu'lláh (1817–1892), who claimed to be that prophet in 1863 and had to endure both exile and imprisonment; and his son, ʻAbdu'l-Bahá (1844–1921), who made teaching trips to Europe and the United States after his release from confinement in 1908. After ʻAbdu'l-Bahá's death in 1921, the leadership of the religion fell to his grandson Shoghi Effendi (1897–1957). Baháʼís annually elect local, regional, and national Spiritual Assemblies that govern the religion's affairs, and every five years an election is held for the Universal House of Justice, the nine-member governing institution of the worldwide Baháʼí community that is located in Haifa, Israel, near the Shrine of the Báb. According to Baháʼí teachings, religion is revealed in an orderly and progressive way by a single God through Manifestations of God, who are the founders of major world religions throughout human history; Buddha, Jesus, and Muhammad are noted as the most recent of these before the Báb and Baháʼu'lláh. Baháʼís regard the world's major religions as fundamentally unified in purpose, but diverging in terms of social practices and interpretations. The Baháʼí Faith stresses the unity of all people as its core teaching and explicitly rejects notions of racism, sexism, and nationalism. At the heart of Baháʼí teachings is the goal of a unified world order that ensures the prosperity of all nations, races, creeds, and classes. Letters and epistles by Baháʼu'lláh, along with writings and talks by his son ʻAbdu'l-Bahá, have been collected and assembled into a canon of Baháʼí scriptures. This collection includes works by the Báb, who is regarded as Baháʼu'lláh's forerunner. Prominent among the works of Baháʼí literature are the Kitáb-i-Aqdas, the Kitáb-i-Íqán, Some Answered Questions, and The Dawn-Breakers. Etymology The word Baháʼí () is used either as an adjective to refer to the Baháʼí Faith or as a term for a follower of Baháʼu'lláh. The proper name of the religion is the Baháʼí Faith, not Baháʼí or Baha'ism (the latter, once common among academics, is regarded as derogatory by the Baháʼís). It is derived from the Arabic Baháʼ (), a name Baháʼu'lláh chose for himself, referring to the 'glory' or 'splendor' of God. In English, the word is commonly pronounced (), but the more accurate rendering of the Arabic is (). The accent marks above the letters, representing long vowels, derive from a system of transliterating Arabic and Persian script that was adopted by Baháʼís in 1923, and which has been used in almost all Baháʼí publications since. Baháʼís prefer the orthographies Baháʼí, the Báb, Baháʼu'lláh, and ʻAbdu'l-Bahá. When accent marks are unavailable, Bahai, Bahaʼi, or Bahaullah are often used. Beliefs The teachings of Baháʼu'lláh form the foundation of Baháʼí beliefs. Three principles are central to these teachings: the unity of God, the unity of religion, and the unity of humanity. 
Baha'is believe that God periodically reveals his will through divine messengers, whose purpose is to transform the character of humankind and to develop, within those who respond, moral and spiritual qualities. Religion is thus seen as orderly, unified, and progressive from age to age. God Baháʼí writings describe a single, personal, inaccessible, omniscient, omnipresent, imperishable, and almighty God who is the creator of all things in the universe. The existence of God and the universe are thought to be eternal, with no beginning or end. Even though God is not directly accessible, he is seen as being conscious of creation, with a will and a purpose which is expressed through messengers who are called Manifestations of God. Baháʼí teachings state that God is too great for humans to fully comprehend, and based on them, humans cannot create a complete and accurate image of God by themselves. Therefore, human understanding of God is achieved through the recognition of the person of the Manifestation and through the understanding of his revelations via his Manifestations. In the Baháʼí Faith, God is often referred to by titles and attributes (for example, the All-Powerful, or the All-Loving), and there is a substantial emphasis on monotheism. Baháʼí teachings state that these attributes do not apply to God directly but are used to translate Godliness into human terms and to help people concentrate on their own attributes in worshipping God to develop their potentialities on their spiritual path. According to the Baháʼí teachings the human purpose is to learn to know and love God through such methods as prayer, reflection, and being of service to others. Religion Baháʼí notions of progressive religious revelation result in their accepting the validity of the well known religions of the world, whose founders and central figures are seen as Manifestations of God. Religious history is interpreted as a series of dispensations, where each manifestation brings a somewhat broader and more advanced revelation that is rendered as a text of scripture and passed on through history with greater or lesser reliability but at least true in substance, suited for the time and place in which it was expressed. Specific religious social teachings (for example, the direction of prayer, or dietary restrictions) may be revoked by a subsequent manifestation so that a more appropriate requirement for the time and place may be established. Conversely, certain general principles (for example, neighbourliness, or charity) are seen to be universal and consistent. In Baháʼí belief, this process of progressive revelation will not end; it is, however, believed to be cyclical. Baháʼís do not expect a new manifestation of God to appear within 1000 years of Baháʼu'lláh's revelation. Baháʼís assert that their religion is a distinct tradition with its own scriptures and laws, and not a sect of another religion. The religion was initially seen as a sect of Islam because of its origins. Most religious specialists now see it as an independent religion, with its religious background in Shiʻa Islam being seen as analogous to the Jewish context in which Christianity was established. Baháʼís describe their faith as an independent world religion, differing from the other traditions in its relative age and modern context. Human beings The Baháʼí writings state that human beings have a "rational soul", and that this provides the species with a unique capacity to recognize God's status and humanity's relationship with its creator. 
Every human is seen to have a duty to recognize God through his Messengers, and to conform to their teachings. Through recognition and obedience, service to humanity and regular prayer and spiritual practice, the Baháʼí writings state that the soul becomes closer to God, the spiritual ideal in Baháʼí belief. According to Baháʼí belief when a human dies the soul is permanently separated from the body and carries on in the next world where it is judged based on the person's actions in the physical world. Heaven and Hell are taught to be spiritual states of nearness or distance from God that describe relationships in this world and the next, and not physical places of reward and punishment achieved after death. The Baháʼí writings emphasize the essential equality of human beings, and the abolition of prejudice. Humanity is seen as essentially one, though highly varied; its diversity of race and culture are seen as worthy of appreciation and acceptance. Doctrines of racism, nationalism, caste, social class, and gender-based hierarchy are seen as artificial impediments to unity. The Baháʼí teachings state that the unification of humanity is the paramount issue in the religious and political conditions of the present world. Social principles When ʻAbdu'l-Bahá first traveled to Europe and America in 1911–1912, he gave public talks that articulated the basic principles of the Baháʼí Faith. These included preaching on the equality of men and women, race unity, the need for world peace, and other progressive ideas for the early 20th century. Published summaries of the Baháʼí teachings often include a list of these principles, and lists vary in wording and what is included. The concept of the unity of humankind, seen by Baháʼís as an ancient truth, is the starting point for many of the ideas. The equality of races and the elimination of extremes of wealth and poverty, for example, are implications of that unity. Another outgrowth of the concept is the need for a united world federation, and some practical recommendations to encourage its realization involve the establishment of a universal language, a standard economy and system of measurement, universal compulsory education, and an international court of arbitration to settle disputes between nations. Nationalism, according to this viewpoint, should be abandoned in favor of allegiance to the whole of humankind. With regard to the pursuit of world peace, Baháʼu'lláh prescribed a world-embracing collective security arrangement. Other Baháʼí social principles revolve around spiritual unity. Religion is viewed as progressive from age to age, but to recognize a newer revelation one has to abandon tradition and independently investigate. Baháʼís are taught to view religion as a source of unity, and religious prejudice as destructive. Science is also viewed in harmony with true religion. Though Baháʼu'lláh and ʻAbdu'l-Bahá called for a united world that is free of war, they also anticipate that over the long term, the establishment of a lasting peace (The Most Great Peace) and the purging of the "overwhelming Corruptions" requires that the people of the world unite under a universal faith with spiritual virtues and ethics to complement material civilization. 
Shoghi Effendi, the head of the religion from 1921 to 1957, wrote the following summary of what he considered to be the distinguishing principles of Baháʼu'lláh's teachings, which, he said, together with the laws and ordinances of the Kitáb-i-Aqdas constitute the bedrock of the Baháʼí Faith: Covenant Baháʼís highly value unity, and Baháʼu'lláh clearly established rules for holding the community together and resolving disagreements. Within this framework no individual follower may propose 'inspired' or 'authoritative' interpretations of scripture, and individuals agree to support the line of authority established in Baháʼí scriptures. This practice has left the Baháʼí community unified and avoided any serious fracturing. The Universal House of Justice is the final authority to resolve any disagreements among Baháʼís, and the dozen or so attempts at schism have all either become extinct or remained extremely small, numbering a few hundred adherents collectively. The followers of such divisions are regarded as Covenant-breakers and shunned. Sacred texts The canonical texts of the Baháʼí Faith are the writings of the Báb, Baháʼu'lláh, ʻAbdu'l-Bahá, Shoghi Effendi and the Universal House of Justice, and the authenticated talks of ʻAbdu'l-Bahá. The writings of the Báb and Baháʼu'lláh are considered as divine revelation, the writings and talks of ʻAbdu'l-Bahá and the writings of Shoghi Effendi as authoritative interpretation, and those of the Universal House of Justice as authoritative legislation and elucidation. Some measure of divine guidance is assumed for all of these texts. Some of Baháʼu'lláh's most important writings include the Kitáb-i-Aqdas ("Most Holy Book"), which defines many laws and practices for individuals and society, the Kitáb-i-Íqán ("Book of Certitude"), which became the foundation of much of Baháʼí belief, and Gems of Divine Mysteries, which includes further doctrinal foundations. Although the Baháʼí teachings have a strong emphasis on social and ethical issues, a number of foundational texts have been described as mystical. These include the Seven Valleys and the Four Valleys. The Seven Valleys was written to a follower of Sufism, in the style of ʻAttar, the Persian Muslim poet, and sets forth the stages of the soul's journey towards God. It was first translated into English in 1906, becoming one of the earliest available books of Baháʼu'lláh to the West. The Hidden Words is another book written by Baháʼu'lláh during the same period, containing 153 short passages in which Baháʼu'lláh claims to have taken the basic essence of certain spiritual truths and written them in brief form. History The Baháʼí Faith traces its beginnings to the religion of the Báb and the Shaykhi movement that immediately preceded it. The Báb was a merchant who began preaching in 1844 that he was the bearer of a new revelation from God, but was rejected by the generality of Islamic clergy in Iran, ending in his public execution for the crime of heresy. The Báb taught that God would soon send a new messenger, and Baháʼís consider Baháʼu'lláh to be that person. Although they are distinct movements, the Báb is so interwoven into Baháʼí theology and history that Baháʼís celebrate his birth, death, and declaration as holy days, consider him one of their three central figures (along with Baháʼu'lláh and ʻAbdu'l-Bahá), and a historical account of the Bábí movement (The Dawn-Breakers) is considered one of three books that every Baháʼí should "master" and read "over and over again". 
The Baháʼí community was mostly confined to the Iranian and Ottoman empires until after the death of Baháʼu'lláh in 1892, at which time he had followers in 13 countries of Asia and Africa. Under the leadership of his son, ʻAbdu'l-Bahá, the religion gained a footing in Europe and America, and was consolidated in Iran, where it still suffers intense persecution. ʻAbdu'l-Bahá's death in 1921 marks the end of what Baháʼís call the "heroic age" of the religion. Báb On the evening of 22 May 1844, Siyyid ʻAlí-Muhammad of Shiraz gained his first convert and took on the title of "the Báb" ( "Gate"), referring to his later claim to the status of Mahdi of Shiʻa Islam. His followers were therefore known as Bábís. As the Báb's teachings spread, which the Islamic clergy saw as blasphemous, his followers came under increased persecution and torture. The conflicts escalated in several places to military sieges by the Shah's army. The Báb himself was imprisoned and eventually executed in 1850. Baháʼís see the Báb as the forerunner of the Baháʼí Faith, because the Báb's writings introduced the concept of "He whom God shall make manifest", a messianic figure whose coming, according to Baháʼís, was announced in the scriptures of all of the world's great religions, and whom Baháʼu'lláh, the founder of the Baháʼí Faith, claimed to be. The Báb's tomb, located in Haifa, Israel, is an important place of pilgrimage for Baháʼís. The remains of the Báb were brought secretly from Iran to the Holy Land and eventually interred in the tomb built for them in a spot specifically designated by Baháʼu'lláh. The writings of the Báb are considered inspired scripture by Baháʼís, though having been superseded by the laws and teachings of Baháʼu'lláh. The main written works translated into English of the Báb are compiled in Selections from the Writings of the Báb (1976) out of the estimated 135 works. Baháʼu'lláh Mírzá Husayn ʻAlí Núrí was one of the early followers of the Báb, and later took the title of Baháʼu'lláh. In August 1852, a few Bábís made a failed attempt to assassinate the Shah. The Persian government responded by killing and in some cases torturing about 50 Bábís in Tehran initially, further bloodshed was spread around the country: hundreds were reported in period newspapers by October, and tens of thousands by the end of December. Baháʼu'lláh was not involved in the assassination attempt but was imprisoned in Tehran until his release was arranged four months later by the Russian ambassador, after which he joined other Bábís in exile in Baghdad. Shortly thereafter he was expelled from Iran and traveled to Baghdad, in the Ottoman Empire. In Baghdad, his leadership revived the persecuted followers of the Báb in Iran, so Iranian authorities requested his removal, which instigated a summons to Constantinople (now Istanbul) from the Ottoman Sultan. In 1863, at the time of his removal from Baghdad, Baháʼu'lláh first announced his claim of prophethood to his family and followers, which he said came to him years earlier while in a dungeon of Tehran. From the time of the initial exile from Iran, tensions grew between him and Subh-i-Azal, the appointed leader of the Bábís, who did not recognize Baháʼu'lláh's claim. Throughout the rest of his life Baháʼu'lláh gained the allegiance of almost all of the Bábís, who came to be known as Baháʼís, while a remnant of Bábís became known as Azalis. He spent less than four months in Constantinople. 
After receiving chastising letters from Baháʼu'lláh, Ottoman authorities turned against him and put him under house arrest in Adrianople (now Edirne), where he remained for four years, until a royal decree of 1868 banished all Bábís to either Cyprus or ʻAkká. It was in or near the Ottoman penal colony of ʻAkká, in present-day Israel, that Baháʼu'lláh spent the remainder of his life. After initially strict and harsh confinement, he was allowed to live in a home near ʻAkká, while still officially a prisoner of that city. He died there in 1892. Baháʼís regard his resting place at Bahjí as the Qiblih to which they turn in prayer each day. He produced over 18,000 works in his lifetime, in both Arabic and Persian, of which only 8% have been translated into English. During the period in Adrianople, he began declaring his mission as a Messenger of God in letters to the world's religious and secular rulers, including Pope Pius IX, Napoleon III, and Queen Victoria. ʻAbdu'l-Bahá ʻAbbás Effendi was Baháʼu'lláh's eldest son, known by the title of ʻAbdu'l-Bahá ("Servant of Bahá"). His father left a will that appointed ʻAbdu'l-Bahá as the leader of the Baháʼí community. ʻAbdu'l-Bahá had shared his father's long exile and imprisonment, which continued until ʻAbdu'l-Bahá's own release as a result of the Young Turk Revolution in 1908. Following his release he led a life of travelling, speaking, teaching, and maintaining correspondence with communities of believers and individuals, expounding the principles of the Baháʼí Faith. As of 2020, there are over 38,000 extant documents containing the words of ʻAbdu'l-Bahá, which are of widely varying lengths. Only a fraction of these documents have been translated into English. Among the more well known are The Secret of Divine Civilization, Some Answered Questions, the Tablet to Auguste-Henri Forel, the Tablets of the Divine Plan, and the Tablet to The Hague. Additionally notes taken of a number of his talks were published in various volumes like Paris Talks during his journeys to the West. Shoghi Effendi Baháʼu'lláh's Kitáb-i-Aqdas and The Will and Testament of ʻAbdu'l-Bahá are foundational documents of the Baháʼí administrative order. Baháʼu'lláh established the elected Universal House of Justice, and ʻAbdu'l-Bahá established the appointed hereditary Guardianship and clarified the relationship between the two institutions. In his Will, ʻAbdu'l-Bahá appointed Shoghi Effendi, his eldest grandson, as the first Guardian of the Baháʼí Faith. Shoghi Effendi served for 36 years as the head of the religion until his death. Throughout his lifetime, Shoghi Effendi translated Baháʼí texts; developed global plans for the expansion of the Baháʼí community; developed the Baháʼí World Centre; carried on a voluminous correspondence with communities and individuals around the world; and built the administrative structure of the religion, preparing the community for the election of the Universal House of Justice. He unexpectedly died after a brief illness on 4 November 1957, in London, England, under conditions that did not allow for a successor to be appointed. In 1937, Shoghi Effendi launched a seven-year plan for the Baháʼís of North America, followed by another in 1946. In 1953, he launched the first international plan, the Ten Year World Crusade. This plan included extremely ambitious goals for the expansion of Baháʼí communities and institutions, the translation of Baháʼí texts into several new languages, and the sending of Baháʼí pioneers into previously unreached nations. 
He announced in letters during the Ten Year Crusade that it would be followed by other plans under the direction of the Universal House of Justice, which was elected in 1963 at the culmination of the Crusade. Universal House of Justice Since 1963, the Universal House of Justice has been the elected head of the Baháʼí Faith. The general functions of this body are defined through the writings of Baháʼu'lláh and clarified in the writings of Abdu'l-Bahá and Shoghi Effendi. These functions include teaching and education, implementing Baháʼí laws, addressing social issues, and caring for the weak and the poor. Starting with the Nine Year Plan that began in 1964, the Universal House of Justice has directed the work of the Baháʼí community through a series of multi-year international plans. Starting with the Nine-Year Plan that began in 1964, the Baháʼí leadership sought to continue the expansion of the religion but also to "consolidate" new members, meaning increase their knowledge of the Baháʼí teachings. In this vein, in the 1970s, the Ruhi Institute was founded by Baháʼís in Colombia to offer short courses on Baháʼí beliefs, ranging in length from a weekend to nine days. The associated Ruhi Foundation, whose purpose was to systematically "consolidate" new Baháʼís, was registered in 1992, and since the late 1990s the courses of the Ruhi Institute have been the dominant way of teaching the Baháʼí Faith around the world. By 2013 there were over 300 Baháʼí training institutes around the world and 100,000 people participating in courses. The courses of the Ruhi Institute train communities to self-organize classes for the spiritual education of children and youth, among other activities. Additional lines of action the Universal House of Justice has encouraged for the contemporary Baháʼí community include social action and participation in the prevalent discourses of society. Annually, on 21 April, the Universal House of Justice sends a 'Ridván' message to the worldwide Baháʼí community, that updates Baháʼís on current developments and provides further guidance for the year to come. At local, regional, and national levels, Baháʼís elect members to nine-person Spiritual Assemblies, which run the affairs of the religion. There are also appointed individuals working at various levels, including locally and internationally, which perform the function of propagating the teachings and protecting the community. The latter do not serve as clergy, which the Baháʼí Faith does not have. The Universal House of Justice remains the supreme governing body of the Baháʼí Faith, and its 9 members are elected every five years by the members of all National Spiritual Assemblies. Any male Baháʼí, 21 years or older, is eligible to be elected to the Universal House of Justice; all other positions are open to male and female Baháʼís. Malietoa Tanumafili II of Samoa, who became Baháʼí in 1968 and died in 2007, was the first serving head of state to embrace the Baháʼí Faith. Demographics As of around 2020, there were about 8 million Bahá'ís in the world. In 2013, two scholars of demography wrote that, "The Baha'i Faith is the only religion to have grown faster in every United Nations region over the past 100 years than the general population; Bahaʼi [sic] was thus the fastest-growing religion between 1910 and 2010, growing at least twice as fast as the population of almost every UN region." (See Growth of religion.) 
The largest proportions of the total worldwide Bahá'í population were found in sub-Saharan Africa (29.9%) and South Asia (26.8%), followed by Southeast Asia (12.7%) and Latin America (12.2%). Lesser populations are found in North America (7.6%) and the Middle East/North Africa (6.2%), while the smallest populations in Europe (2.0%), Australasia (1.6%), and Northeast Asia (0.9%). In 2015, the internationally recognized religion was the second-largest international religion in Iran, Panama, Belize, Bolivia, Zambia, and Papua New Guinea; and the third-largest in Chad, and Kenya. From the Bahá'í Faith's origins in the 19th century until the 1950s, the vast majority of Baháʼís were found in Iran; converts from outside Iran were mostly found in India and the Western world. From having roughly 200,000 Baháʼís in 1950, the religion grew to have over 4 million by the late 1980s, with a wide international distribution. As of 2008, there were about 110,000 followers in Iran. Most of the growth in the late 20th century was seeded out of North America by means of the planned migration of individuals. Yet, rather than being a cultural spread from either Iran or North America, in 2001, sociologist David B. Barrett wrote that the Baháʼí Faith is, "A world religion with no racial or national focus". However, the growth has not been even. From the late 1920s to the late 1980s, the religion was banned and adherents of it were harassed in the Soviet-led Eastern Bloc, and then again from the 1970s into the 1990s across some countries in sub-Saharan Africa. The most intense opposition has been in Iran and neighboring Shia-majority countries, considered an attempted genocide by some scholars, watchdog agencies and human rights organizations. Meanwhile, in other times and places, the religion has experienced surges in growth. Before it was banned in certain countries, the religion "hugely increased" in sub-Saharan Africa. In 1989 the Universal House of Justice named Bolivia, Bangladesh, Haiti, India, Liberia, Peru, the Philippines, and Taiwan as countries where the growth of the religion had been notable in the previous decades. Bahá'í sources claimed "more than five million" Bahá'ís in 1991-2. However, since around 2001 the Universal House of Justice has prioritized statistics of the community by their levels of activity rather than simply their population of avowed adherents or numbers of local assemblies. Because Bahá'ís do not represent the majority of the population in any country, and most often represent only a tiny fraction of countries' total populations, there are problems of under-reporting. In addition, there are examples where the adherents have their highest density among minorities in societies who face their own challenges. Social practices Exhortations The following are a few examples from Baháʼu'lláh's teachings on personal conduct that are required or encouraged of his followers: Baháʼís over the age of 15 should individually recite an obligatory prayer each day, using fixed words and form. In addition to the daily obligatory prayer, Baháʼís should offer daily devotional prayer and should meditate and study sacred scripture. Adult Baháʼís should observe a Nineteen-Day Fast each year during daylight hours in March, with certain exemptions. There are specific requirements for Baháʼí burial that include a specified prayer to be read at the interment. Embalming or cremating the body is strongly discouraged. 
Baháʼís should make a 19% voluntary payment on any wealth in excess of what is necessary to live comfortably, after the remittance of any outstanding debt. The payments go to the Universal House of Justice. Prohibitions The following are a few acts of personal conduct that are prohibited or discouraged by Baháʼu'lláh's teachings: Backbiting and gossipping are prohibited and denounced. Drinking and selling alcohol are forbidden. Sexual intercourse is only permitted between a husband and a wife, and as a result, premarital, extramarital, and homosexual intercourse are all forbidden. (See also Homosexuality and the Baháʼí Faith) Participation in partisan politics is forbidden. Begging is forbidden as a profession. The observance of personal laws, such as prayer or fasting, is the sole responsibility of the individual. There are, however, occasions when a Baháʼí might be administratively expelled from the community for a public disregard of the laws, or gross immorality. Such expulsions are administered by the National Spiritual Assembly and do not involve shunning. While some of the laws in the Kitáb-i-Aqdas are applicable at the present time, other laws are dependent upon the existence of a predominantly Baháʼí society, such as the punishments for arson and murder. The laws, when not in direct conflict with the civil laws of the country of residence, are binding on every Baháʼí. Marriage The purpose of marriage in the Baháʼí Faith is mainly to foster spiritual harmony, fellowship and unity between a man and a woman and to provide a stable and loving environment for the rearing of children. The Baháʼí teachings on marriage call it a fortress for well-being and salvation and place marriage and the family as the foundation of the structure of human society. Baháʼu'lláh highly praised marriage, discouraged divorce, and required chastity outside of marriage; Baháʼu'lláh taught that a husband and wife should strive to improve the spiritual life of each other. Interracial marriage is also highly praised throughout Baháʼí scripture. Baháʼís intending to marry are asked to obtain a thorough understanding of the other's character before deciding to marry. Although parents should not choose partners for their children, once two individuals decide to marry, they must receive the consent of all living biological parents, whether they are Baháʼí or not. The Baháʼí marriage ceremony is simple; the only compulsory part of the wedding is the reading of the wedding vows prescribed by Baháʼu'lláh which both the groom and the bride read, in the presence of two witnesses. The vows are "We will all, verily, abide by the Will of God." Transgender people can gain recognition of their gender in the Baháʼí Faith if they have medically transitioned and undergone sex reassignment surgery (SRS). After SRS, they are considered transitioned and may have a Baháʼí marriage. Work Baháʼu'lláh prohibited a mendicant and ascetic lifestyle. Monasticism is forbidden, and Baháʼís are taught to practice spirituality while engaging in useful work. The importance of self-exertion and service to humanity in one's spiritual life is emphasised further in Baháʼu'lláh's writings, where he states that work done in the spirit of service to humanity enjoys a rank equal to that of prayer and worship in the sight of God. Places of worship Bahá'í devotional meetings in most communities currently take place in people's homes or Bahá'í centres, but in some communities Bahá'í Houses of Worship (also known as Bahá'í temples) have been built. 
Bahá'í Houses of Worship are places where both Baháʼís and non-Baháʼís can express devotion to God. They are also known by the name Mashriqu'l-Adhkár (Arabic for "Dawning-place of the remembrance of God"). Only the holy scriptures of the Bahá'í Faith and other religions can be read or chanted inside, and while readings and prayers that have been set to music may be sung by choirs, no musical instruments may be played inside. Furthermore, no sermons may be delivered, and no ritualistic ceremonies practiced. All Bahá'í Houses of Worship have a nine-sided shape (nonagon) as well as nine pathways leading outward and nine gardens surrounding them. There are currently eight "continental" Bahá'í Houses of Worship and some local Bahá'í Houses of Worship completed or under construction. The Bahá'í writings also envision Bahá'í Houses of Worship being surrounded by institutions for humanitarian, scientific, and educational pursuits, though none has yet been built up to such an extent. Calendar The Baháʼí calendar is based upon the calendar established by the Báb. The year consists of 19 months, each having 19 days, with four or five intercalary days, to make a full solar year. The Baháʼí New Year corresponds to the traditional Iranian New Year, called Naw Rúz, and occurs on the vernal equinox, near 21 March, at the end of the month of fasting. Once every Baháʼí month there is a gathering of the Baháʼí community called a Nineteen Day Feast with three parts: first, a devotional part for prayer and reading from Baháʼí scripture; second, an administrative part for consultation and community matters; and third, a social part for the community to interact freely. Each of the 19 months is given a name which is an attribute of God; some examples include Baháʼ (Splendour), ʻIlm (Knowledge), and Jamál (Beauty). The Baháʼí week is familiar in that it consists of seven days, with each day of the week also named after an attribute of God. Baháʼís observe 11 Holy Days throughout the year, with work suspended on 9 of these. These days commemorate important anniversaries in the history of the religion. Symbols The symbols of the religion are derived from the Arabic word Baháʼ ( "splendor" or "glory"), with a numerical value of nine. This numerical connection to the name of Baháʼu'lláh, as well as nine being the highest single-digit, symbolizing completeness, are why the most common symbol of the religion is a nine-pointed star, and Baháʼí temples are nine-sided. The nine-pointed star is commonly set on Baháʼí gravestones. The ringstone symbol and calligraphy of the Greatest Name are also often encountered. The ringstone symbol consists of two five-pointed stars interspersed with a stylized Baháʼ whose shape is meant to recall God, the Manifestation of God, and the world of man; the Greatest Name is a calligraphic rendering of the phrase Yá Baháʼu'l-Abhá ( "O Glory of the Most Glorious!") and is commonly found in Baháʼí temples and homes. Socio-economic development Since its inception the Baháʼí Faith has had involvement in socio-economic development beginning by giving greater freedom to women, promulgating the promotion of female education as a priority concern, and that involvement was given practical expression by creating schools, agricultural co-ops, and clinics. The religion entered a new phase of activity when a message from the Universal House of Justice dated 20 October 1983 was released. 
Baháʼís were urged to seek out ways, compatible with the Baháʼí teachings, in which they could become involved in the social and economic development of the communities in which they lived. Worldwide in 1979 there were 129 officially recognized Baháʼí socio-economic development projects. By 1987, the number of officially recognized development projects had increased to 1482. Current initiatives of social action include activities in areas like health, sanitation, education, gender equality, arts and media, agriculture, and the environment. Educational projects include schools, which range from village tutorial schools to large secondary schools, and some universities. By 2017, the Baháʼí Office of Social and Economic Development estimated that there were 40,000 small-scale projects, 1,400 sustained projects, and 135 Baháʼí-inspired organizations. United Nations Baháʼu'lláh wrote of the need for world government in this age of humanity's collective life. Because of this emphasis the international Baháʼí community has chosen to support efforts of improving international relations through organizations such as the League of Nations and the United Nations, with some reservations about the present structure and constitution of the UN. The Baháʼí International Community is an agency under the direction of the Universal House of Justice in Haifa, and has consultative status with the following organizations: United Nations Children's Fund (UNICEF) United Nations Development Fund for Women (UNIFEM) United Nations Economic and Social Council (ECOSOC) United Nations Environment Programme (UNEP) World Health Organization (WHO) The Baháʼí International Community has offices at the United Nations in New York and Geneva and representations to United Nations regional commissions and other offices in Addis Ababa, Bangkok, Nairobi, Rome, Santiago, and Vienna. In recent years, an Office of the Environment and an Office for the Advancement of Women were established as part of its United Nations Office. The Baháʼí Faith has also undertaken joint development programs with various other United Nations agencies. In the 2000 Millennium Forum of the United Nations a Baháʼí was invited as one of the only non-governmental speakers during the summit. Persecution Baháʼís continue to be persecuted in some majority-Islamic countries, whose leaders do not recognize the Baháʼí Faith as an independent religion, but rather as apostasy from Islam. The most severe persecutions have occurred in Iran, where more than 200 Baháʼís were executed between 1978 and 1998. The rights of Baháʼís have been restricted to greater or lesser extents in numerous other countries, including Egypt, Afghanistan, Indonesia, Iraq, Morocco, Yemen, and several countries in sub-Saharan Africa. Iran The most enduring persecution of Baháʼís has been in Iran, the birthplace of the religion. When the Báb started attracting a large following, the clergy hoped to stop the movement from spreading by stating that its followers were enemies of God. These clerical directives led to mob attacks and public executions. Starting in the twentieth century, in addition to repression aimed at individual Baháʼís, centrally directed campaigns that targeted the entire Baháʼí community and its institutions were initiated. In one case in Yazd in 1903 more than 100 Baháʼís were killed. Baháʼí schools, such as the Tarbiyat boys' and girls' schools in Tehran, were closed in the 1930s and 1940s, Baháʼí marriages were not recognized and Baháʼí texts were censored. 
During the reign of Mohammad Reza Pahlavi, to divert attention from economic difficulties in Iran and from a growing nationalist movement, a campaign of persecution against the Baháʼís was instituted. An approved and coordinated anti-Baháʼí campaign (to incite public passion against the Baháʼís) started in 1955 and it included the spreading of anti-Baháʼí propaganda on national radio stations and in official newspapers. During that campaign, initiated by Mulla Muhammad Taghi Falsafi, the Bahá'í center in Tehran was demolished at the orders of Tehran military governor, General Teymur Bakhtiar. In the late 1970s the Shah's regime consistently lost legitimacy due to criticism that it was pro-Western. As the anti-Shah movement gained ground and support, revolutionary propaganda was spread which alleged that some of the Shah's advisors were Baháʼís. Baháʼís were portrayed as economic threats, and as supporters of Israel and the West, and societal hostility against the Baháʼís increased. Since the Islamic Revolution of 1979, Iranian Baháʼís have regularly had their homes ransacked or have been banned from attending university or from holding government jobs, and several hundred have received prison sentences for their religious beliefs, most recently for participating in study circles. Baháʼí cemeteries have been desecrated and property has been seized and occasionally demolished, including the House of Mírzá Buzurg, Baháʼu'lláh's father. The House of the Báb in Shiraz, one of three sites to which Baháʼís perform pilgrimage, has been destroyed twice. In May 2018, the Iranian authorities expelled a young woman student from university of Isfahan because she was Baháʼí. In March 2018, two more Baháʼí students were expelled from universities in the cities of Zanjan and Gilan because of their religion. According to a US panel, attacks on Baháʼís in Iran increased under Mahmoud Ahmadinejad's presidency. The United Nations Commission on Human Rights revealed an October 2005 confidential letter from Command Headquarters of the Armed Forces of Iran ordering its members to identify Baháʼís and to monitor their activities. Due to these actions, the Special Rapporteur of the United Nations Commission on Human Rights stated on 20 March 2006, that she "also expresses concern that the information gained as a result of such monitoring will be used as a basis for the increased persecution of, and discrimination against, members of the Baháʼí faith, in violation of international standards. The Special Rapporteur is concerned that this latest development indicates that the situation with regard to religious minorities in Iran is, in fact, deteriorating." On 14 May 2008, members of an informal body known as the "Friends" that oversaw the needs of the Baháʼí community in Iran were arrested and taken to Evin prison. The Friends court case has been postponed several times, but was finally underway on 12 January 2010. Other observers were not allowed in the court. Even the defense lawyers, who for two years have had minimal access to the defendants, had difficulty entering the courtroom. The chairman of the U.S. Commission on International Religious Freedom said that it seems that the government has already predetermined the outcome of the case and is violating international human rights law. Further sessions were held on 7 February 2010, 12 April 2010 and 12 June 2010. 
On 11 August 2010 it became known that the court sentence was 20 years imprisonment for each of the seven prisoners which was later reduced to ten years. After the sentence, they were transferred to Gohardasht prison. In March 2011 the sentences were reinstated to the original 20 years. On 3 January 2010, Iranian authorities detained ten more members of the Baha'i minority, reportedly including Leva Khanjani, granddaughter of Jamaloddin Khanjani, one of seven Baha'i leaders jailed since 2008 and in February, they arrested his son, Niki Khanjani. The Iranian government claims that the Baháʼí Faith is not a religion, but is instead a political organization, and hence refuses to recognize it as a minority religion. However, the government has never produced convincing evidence supporting its characterization of the Baháʼí community. The Iranian government also accuses the Baháʼí Faith of being associated with Zionism. These accusations against the Baháʼís appear to lack basis in historical fact, with some arguing they were invented by the Iranian government in order to use the Baháʼís as "scapegoats". In 2019, the Iranian government made it impossible for the Baháʼís to legally register with the Iranian state. National identity card applications in Iran no longer include the “other religions” option effectively making the Baháʼí Faith unrecognized by the state. Egypt During the 1920s, Egypt's religious Tribunal recognized the Baha'i Faith as a new, independent religion, totally separate from Islam, due to the nature of the 'laws, principles and beliefs' of the Baha'is. Baháʼí institutions and community activities have been illegal under Egyptian law since 1960. All Baháʼí community properties, including Baháʼí centers, libraries, and cemeteries, have been confiscated by the government and fatwas have been issued charging Baháʼís with apostasy. The Egyptian identification card controversy began in the 1990s when the government modernized the electronic processing of identity documents, which introduced a de facto requirement that documents must list the person's religion as Muslim, Christian, or Jewish (the only three religions officially recognized by the government). Consequently, Baháʼís were unable to obtain government identification documents (such as national identification cards, birth certificates, death certificates, marriage or divorce certificates, or passports) necessary to exercise their rights in their country unless they lied about their religion, which conflicts with Baháʼí religious principle. Without documents, they could not be employed, educated, treated in hospitals, travel outside of the country, or vote, among other hardships. Following a protracted legal process culminating in a court ruling favorable to the Baháʼís, the interior minister of Egypt released a decree on 14 April 2009, amending the law to allow Egyptians who are not Muslim, Christian, or Jewish to obtain identification documents that list a dash in place of one of the three recognized religions. The first identification cards were issued to two Baháʼís under the new decree on 8 August 2009. 
See also
Baháʼí administration
Baháʼí–Azali split
Baháʼí cosmology
Baháʼí Faith and gender equality
Baháʼí Faith in fiction
Baháʼí studies
Baháʼí timeline
Progressive revelation (Baháʼí)
Baháʼí views on science
Baháʼí World Centre buildings
Criticism of the Baháʼí Faith
Huqúqu'lláh
List of Baháʼís
List of writings of Baháʼu'lláh
Outline of the Baháʼí Faith
Terraces (Baháʼí)
World Religion Day

External links
bahai.org – The website of the worldwide Bahá’í community
Bahá’í Media Bank – Photographs for download
Bahá’í Reference Library – Online source of authoritative Bahá’í writings in English, Farsi, and Arabic
Bahá’í Library Online
Baha'i – Video at PBS Learning Media
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.

Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.

There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.

Algorithm
Binary search works on sorted arrays. Binary search begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.

Procedure
Given an array A of n elements with values or records A[0], A[1], ..., A[n − 1] sorted such that A[0] ≤ A[1] ≤ ... ≤ A[n − 1], and target value T, the following subroutine uses binary search to find the index of T in A.

1. Set L to 0 and R to n − 1.
2. If L > R, the search terminates as unsuccessful.
3. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.
4. If A[m] < T, set L to m + 1 and go to step 2.
5. If A[m] > T, set R to m − 1 and go to step 2.
6. Now A[m] = T, the search is done; return m.

This iterative procedure keeps track of the search boundaries with the two variables L and R. The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search.

function binary_search(A, n, T) is
    L := 0
    R := n − 1
    while L ≤ R do
        m := floor((L + R) / 2)
        if A[m] < T then
            L := m + 1
        else if A[m] > T then
            R := m − 1
        else:
            return m
    return unsuccessful

Alternatively, the algorithm may take the ceiling of (L + R) / 2. This may change the result if the target value appears more than once in the array.

Alternative procedure
In the above procedure, the algorithm checks whether the middle element (A[m]) is equal to the target (T) in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when L = R).
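Before turning to that alternative formulation, the following is a minimal executable sketch of the standard iterative procedure above, written here in Python; the language choice and the use of -1 as the "unsuccessful" value are illustrative assumptions rather than part of the pseudocode. Python integers do not overflow, but in fixed-width languages the midpoint is often computed as L + (R − L) // 2 for the same reason.

def binary_search(A, T):
    """Return an index m with A[m] == T, or -1 if T is not present.

    A must already be sorted in non-decreasing order; -1 plays the role
    of the 'unsuccessful' value in the pseudocode above.
    """
    L, R = 0, len(A) - 1
    while L <= R:
        m = (L + R) // 2          # floor((L + R) / 2)
        if A[m] < T:
            L = m + 1             # target can only be in the upper half
        elif A[m] > T:
            R = m - 1             # target can only be in the lower half
        else:
            return m              # A[m] == T
    return -1                     # search interval became empty

if __name__ == "__main__":
    data = [1, 3, 4, 6, 8, 9, 11]
    print(binary_search(data, 8))   # 4
    print(binary_search(data, 5))   # -1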
This results in a faster comparison loop, as one comparison is eliminated per iteration, while it requires only one more iteration on average. Hermann Bottenbruch published the first implementation to leave out this check in 1962.

1. Set L to 0 and R to n − 1.
2. While L ≠ R,
   a. Set m (the position of the middle element) to the ceiling of (L + R) / 2, which is the least integer greater than or equal to (L + R) / 2.
   b. If A[m] > T, set R to m − 1.
   c. Else, A[m] ≤ T; set L to m.
3. Now L = R, the search is done. If A[L] = T, return L. Otherwise, the search terminates as unsuccessful.

Where ceil is the ceiling function, the pseudocode for this version is:

function binary_search_alternative(A, n, T) is
    L := 0
    R := n − 1
    while L != R do
        m := ceil((L + R) / 2)
        if A[m] > T then
            R := m − 1
        else:
            L := m
    if A[L] = T then
        return L
    return unsuccessful

Duplicate elements

The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was [1, 2, 3, 4, 4, 5, 6, 7] and the target was 4, then it would be correct for the algorithm to either return the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider [1, 2, 4, 4, 4, 5, 6, 7], which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists.

Procedure for finding the leftmost element

To find the leftmost element, the following procedure can be used:

1. Set L to 0 and R to n.
2. While L < R,
   a. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.
   b. If A[m] < T, set L to m + 1.
   c. Else, A[m] ≥ T; set R to m.
3. Return L.

If L < n and A[L] = T, then A[L] is the leftmost element that equals T. Even if T is not in the array, L is the rank of T in the array, or the number of elements in the array that are less than T.

Where floor is the floor function, the pseudocode for this version is:

function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L

Procedure for finding the rightmost element

To find the rightmost element, the following procedure can be used:

1. Set L to 0 and R to n.
2. While L < R,
   a. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.
   b. If A[m] > T, set R to m.
   c. Else, A[m] ≤ T; set L to m + 1.
3. Return R − 1.

If R > 0 and A[R − 1] = T, then A[R − 1] is the rightmost element that equals T. Even if T is not in the array, n − R is the number of elements in the array that are greater than T.

Where floor is the floor function, the pseudocode for this version is:

function binary_search_rightmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] > T:
            R := m
        else:
            L := m + 1
    return R − 1

Approximate matches

The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries.
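The leftmost and rightmost procedures above translate to Python along the following lines; they compute the same insertion points as the standard library's bisect_left and bisect_right. The function names and the sample array are illustrative.

def leftmost(a, target):
    lo, hi = 0, len(a)              # note: the upper bound starts at n, not n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo                        # rank of target; leftmost index if present

def rightmost(a, target):
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > target:
            hi = mid
        else:
            lo = mid + 1
    return hi - 1                    # rightmost index if present

a = [1, 2, 3, 4, 4, 5, 6, 7]
print(leftmost(a, 4), rightmost(a, 4))   # 3 4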
Rank queries can be performed with the procedure for finding the leftmost element. The number of elements less than the target value is returned by the procedure. Predecessor queries can be performed with rank queries. If the rank of the target value is r, its predecessor is the element at index r − 1. For successor queries, the procedure for finding the rightmost element can be used. If the result of running that procedure for the target value is r, then the successor of the target value is the element at index r + 1. The nearest neighbor of the target value is either its predecessor or successor, whichever is closer. Range queries are also straightforward. Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints.

Performance

In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration.

In the worst case, binary search makes ⌊log₂(n)⌋ + 1 iterations of the comparison loop, where ⌊ ⌋ denotes the floor function that yields the greatest integer less than or equal to the argument, and log₂ is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always ⌊log₂(n)⌋ + 1 levels in the tree for any binary search. The worst case may also be reached when the target element is not in the array. If n is one less than a power of two, then this is always the case. Otherwise, the search may perform ⌊log₂(n)⌋ + 1 iterations if the search reaches the deepest level of the tree. However, it may make ⌊log₂(n)⌋ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.

On average, assuming that each element is equally likely to be searched, binary search makes ⌊log₂(n)⌋ + 1 − (2^(⌊log₂(n)⌋ + 1) − ⌊log₂(n)⌋ − 2) / n iterations when the target element is in the array. This is approximately equal to log₂(n) − 1 iterations. When the target element is not in the array, binary search makes ⌊log₂(n)⌋ + 2 − 2^(⌊log₂(n)⌋ + 1) / (n + 1) iterations on average, assuming that the range between and outside elements is equally likely to be searched.

In the best case, where the target value is the middle element of the array, its position is returned after one iteration. In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible as every level above the lowest level of the tree is filled completely. Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over all elements is worse than binary search.
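A possible Python sketch of the approximate-match queries just described, built on rank queries via the standard bisect module. The helper names rank, predecessor, successor and count_in_range are illustrative, and the range query assumes a half-open interval [lo, hi).

from bisect import bisect_left, bisect_right

def rank(a, t):                       # number of elements smaller than t
    return bisect_left(a, t)

def predecessor(a, t):                # next-smallest element, or None
    r = bisect_left(a, t)
    return a[r - 1] if r > 0 else None

def successor(a, t):                  # next-largest element, or None
    r = bisect_right(a, t)
    return a[r] if r < len(a) else None

def count_in_range(a, lo, hi):        # elements x with lo <= x < hi
    return bisect_left(a, hi) - bisect_left(a, lo)

a = [2, 4, 4, 7, 9]
print(rank(a, 4), predecessor(a, 5), successor(a, 4), count_in_range(a, 4, 9))
# 1 4 7 3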
By dividing the array in half, binary search ensures that the sizes of both subarrays are as similar as possible.

Space complexity

Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is O(1) in the word RAM model of computation.

Derivation of average case

The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. It will be assumed that each element is equally likely to be searched for successful searches. For unsuccessful searches, it will be assumed that the intervals between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by n, the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the n + 1 intervals.

Successful searches

In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an internal path. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length l, is l + 1, counting the initial iteration. The internal path length is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are n elements, which is a positive integer, and the internal path length is I(n), then the average number of iterations for a successful search is T(n) = 1 + I(n) / n, with the one iteration added to count the initial iteration.

Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with n nodes, which is equal to:

    I(n) = Σ (k = 1 to n) ⌊log₂(k)⌋

For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is:

    Σ (k = 1 to 7) ⌊log₂(k)⌋ = 0 + 2(1) + 4(2) = 10

The average number of iterations would be 1 + 10/7 = 2 3/7 based on the equation for the average case. The sum for I(n) can be simplified to:

    I(n) = (n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2

Substituting the equation for I(n) into the equation for T(n):

    T(n) = 1 + ((n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2) / n

For integer n, this is equivalent to the equation for the average case on a successful search specified above.

Unsuccessful searches

Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are n elements, which is a positive integer, and the external path length is E(n), then the average number of iterations for an unsuccessful search is T′(n) = E(n) / (n + 1).
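As a quick sanity check of the derivation, the following Python sketch compares the closed-form expression for the minimum internal path length assumed above, I(n) = (n + 1)·⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2, with the direct sum of ⌊log₂(k)⌋, and prints the resulting average T(n). It is a verification aid, not part of the original derivation.

def floor_log2(k):
    return k.bit_length() - 1          # exact floor(log2(k)) for k >= 1

def internal_path_length(n):
    # definition: sum of floor(log2(k)) for k = 1..n
    return sum(floor_log2(k) for k in range(1, n + 1))

def internal_path_length_closed(n):
    d = floor_log2(n + 1)
    return (n + 1) * d - 2 ** (d + 1) + 2

for n in (7, 10, 100):
    i_sum = internal_path_length(n)
    i_closed = internal_path_length_closed(n)
    print(n, i_sum, i_closed, 1 + i_sum / n)   # last value is T(n); for n = 7 it is 2 3/7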
The external path length is divided by n + 1 instead of n because there are n + 1 external paths, representing the intervals between and outside the elements of the array. This problem can similarly be reduced to determining the minimum external path length of all binary trees with n nodes. For all binary trees, the external path length is equal to the internal path length plus 2n. Substituting the equation for I(n):

    E(n) = I(n) + 2n = (n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2 + 2n

Substituting the equation for E(n) into the equation for T′(n), the average case for unsuccessful searches can be determined:

    T′(n) = ((n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2 + 2n) / (n + 1)

Performance of alternative procedure

Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only ⌊log₂(n)⌋ + 1 times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large n.

Running time and cache use

In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increases. For example, comparing a pair of 64-bit unsigned integers would require comparing up to double the bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing floating-point values (the most common digital representation of real numbers) is often more expensive than comparing integers or short strings.

On most computer architectures, the processor has a hardware cache separate from RAM. Since they are located within the processor itself, caches are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to it. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems.

Binary search versus other schemes

Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking O(n) time for each such operation. In addition, sorted arrays can complicate memory use especially when elements are often inserted into the array. There are other data structures that support much more efficient insertion and deletion.
Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in O(log n) time regardless of the type or structure of the values themselves. In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.

Linear search

Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand. All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least O(n log n) comparisons in the worst case. Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.

Trees

A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear-time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.

However, binary search is usually more efficient for searching, as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching n comparisons. Binary search trees take more space than sorted arrays.

Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.

Hashing

For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records. Most hash table implementations require only amortized constant time on average. However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record. Binary search is ideal for such approximate matches, performing them in logarithmic time.
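A small Python illustration of this trade-off, using a built-in set as the hash-based structure and a sorted list with binary search for the approximate queries; the specific keys are arbitrary.

from bisect import bisect_left

keys = {13, 42, 77, 91}
print(42 in keys, 50 in keys)      # True False -- a miss tells us nothing more

sorted_keys = sorted(keys)         # [13, 42, 77, 91]
i = bisect_left(sorted_keys, 50)   # insertion point for the absent key 50
print(sorted_keys[i - 1], sorted_keys[i])   # 42 77: next-smallest and next-largest keys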
Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.

Set membership algorithms

A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only O(1) time. The Judy1 type of Judy array handles 64-bit keys efficiently. For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with k hash functions, membership queries require only O(k) time. However, Bloom filters suffer from false positives.

Other data structures

There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute. As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.

Variations

Uniform binary search

Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand. For example, if the array to be searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element would be 6, at index 5. In this case, the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3, at index 2, and the middle element of the right subarray ([7, 8, 9, 10, 11]) is 9, at index 8. Uniform binary search would store the value of 3, as both indices differ from 5 by this same amount. To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.

Exponential search

Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes ⌊log₂(x)⌋ + 1 iterations before binary search is started and at most ⌊log₂(x)⌋ iterations of the binary search, where x is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.
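A hedged Python sketch of exponential search as described above: the upper bound is doubled until it passes the target, and the bracketed range is then binary searched. The function name, the None failure value and the test data are illustrative.

def exponential_search(a, target):
    if not a:
        return None
    bound = 1
    while bound < len(a) and a[bound] < target:
        bound *= 2                               # grow the search window
    lo, hi = bound // 2, min(bound, len(a) - 1)  # binary search within the window
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        elif a[mid] > target:
            hi = mid - 1
        else:
            return mid
    return None

a = list(range(0, 200, 2))                       # 0, 2, 4, ..., 198
print(exponential_search(a, 6), exponential_search(a, 7))   # 3 None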
Interpolation search

Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array. A common interpolation function is linear interpolation. If A is the array, L and R are the lower and upper bounds respectively, and T is the target, then the target is estimated to be about (T − A[L]) / (A[R] − A[L]) of the way between L and R. When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes O(log log n) comparisons.

In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.

Fractional cascading

Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires O(k log n) time, where k is the number of arrays. Fractional cascading reduces this to O(k + log n) by storing specific information in each array about each element and its position in the other arrays. Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.

Generalization to graphs

Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization: when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in O(log n) queries in the worst case.

Noisy binary search

Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least (1 − τ) log₂(n) / H(p) − 10 / H(p) comparisons on average, where H(p) is the binary entropy function and τ is the probability that the procedure yields the wrong position. The noisy binary search problem can be considered as a case of the Rényi–Ulam game, a variant of Twenty Questions where the answers may be wrong.

Quantum binary search

Classical computers are bounded to the worst case of exactly ⌊log₂(n)⌋ + 1 iterations when performing binary search.
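A minimal Python sketch of interpolation search using the linear interpolation estimate given above, assuming numeric keys that are roughly uniformly distributed; the function name and the None failure value are illustrative.

def interpolation_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[hi] == a[lo]:                       # all remaining keys equal; avoid division by zero
            break
        # estimate the position proportionally between a[lo] and a[hi]
        pos = lo + int((hi - lo) * (target - a[lo]) / (a[hi] - a[lo]))
        if a[pos] < target:
            lo = pos + 1
        elif a[pos] > target:
            hi = pos - 1
        else:
            return pos
    return lo if lo <= hi and a[lo] == target else None

a = [10, 20, 30, 40, 50, 60, 70, 80]
print(interpolation_search(a, 70), interpolation_search(a, 35))   # 6 None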
Quantum algorithms for binary search are still bounded to a proportion of log₂(n) queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure, that is, a procedure that always yields the correct result, requires at least (1/π)(ln(n) − 1) ≈ 0.22 log₂(n) queries in the worst case, where ln is the natural logarithm. There is an exact quantum binary search procedure that runs in 4 log₆₀₅(n) ≈ 0.433 log₂(n) queries in the worst case. In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires O(√n) queries.

History

The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to c. 200 BCE. The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.

In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing. In 1957, William Wesley Peterson published the first method for interpolation search. Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays. In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration. The uniform binary search was developed by A. K. Chandra of Stanford University in 1971. In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry.

Implementation issues

When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks. Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.

In a practical implementation, the variables used to represent the indices will often be of fixed size (integers), and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (L + R) / 2, then the value of L + R may exceed the range of integers of the data type used to store the midpoint, even if L and R are within the range. If L and R are nonnegative, this can be avoided by calculating the midpoint as L + (R − L) / 2. An infinite loop may occur if the exit conditions for the loop are not defined correctly.
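The midpoint-overflow pitfall described above can be illustrated as follows. Python integers do not overflow, so this sketch emulates C-style signed 32-bit arithmetic to show why (L + R) / 2 can wrap around while L + (R − L) / 2 stays in range; the helper and the sample bounds are illustrative.

def to_int32(x):
    # emulate signed 32-bit wrap-around, as in C/Java int arithmetic
    x &= 0xFFFFFFFF
    return x - 0x1_0000_0000 if x >= 0x8000_0000 else x

L, R = 1_500_000_000, 2_000_000_000      # both valid signed 32-bit values

naive = to_int32(L + R) // 2             # L + R overflows before the division
safe = L + (R - L) // 2                  # never exceeds R, so no overflow

print(naive)                             # -397483648: a nonsensical negative index
print(safe)                              # 1750000000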
Once L exceeds R, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.

Library support

Many languages' standard libraries include binary search routines:

C provides the function bsearch() in its standard library, which is typically implemented via binary search, although the official standard does not require it.
C++'s Standard Template Library provides the functions binary_search(), lower_bound(), upper_bound() and equal_range().
D's standard library Phobos, in the std.range module, provides a type SortedRange (returned by the sort() and assumeSorted() functions) with the methods contains(), equalRange(), lowerBound() and trisect(), which use binary search techniques by default for ranges that offer random access.
COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.
Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.
Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections in the standard java.util package for performing binary searches on Java arrays and on Lists, respectively.
Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes. An example would be System.Array's method BinarySearch<T>(T[] array, T value).
For Objective-C, the Cocoa framework provides the NSArray -indexOfObject:inSortedRange:options:usingComparator: method in Mac OS X 10.6+. Apple's Core Foundation C framework also contains a CFArrayBSearchValues() function.
Python provides the bisect module that keeps a list in sorted order without having to sort the list after each insertion.
Ruby's Array class includes a bsearch method with built-in approximate matching.

See also

Bisection method – the same idea used to solve equations in the real numbers

External links

NIST Dictionary of Algorithms and Data Structures: binary search
Comparisons and benchmarks of a variety of binary search implementations in C
The Battle of Stalingrad (23 August 1942 – 2 February 1943) was a major battle on the Eastern Front of World War II where Nazi Germany and its allies unsuccessfully fought the Soviet Union for control of the city of Stalingrad (later renamed Volgograd) in Southern Russia. The battle was marked by fierce close-quarters combat and direct assaults on civilians in air raids, and it came to epitomize urban warfare. It was the bloodiest battle of the Second World War, with both sides suffering enormous casualties. Today, the Battle of Stalingrad is universally regarded as the turning point in the European theatre of war, as it forced the Oberkommando der Wehrmacht (German High Command) to withdraw considerable military forces from other areas in occupied Europe to replace German losses on the Eastern Front, ending with the rout of the six field armies of Army Group B, including the destruction of Nazi Germany's 6th Army and an entire corps of its 4th Panzer Army. The Soviet victory energized the Red Army and shifted the balance of power in favour of the Soviets.

Stalingrad was strategically important to both sides as a major industrial and transport hub on the Volga River. Whoever controlled Stalingrad would have access to the oil fields of the Caucasus and would gain control of the Volga. Germany, already operating on dwindling fuel supplies, focused its efforts on moving deeper into Soviet territory and taking the oil fields at any cost. On 4 August, the Germans launched an offensive using the 6th Army and elements of the 4th Panzer Army. The attack was supported by intense Luftwaffe bombing that reduced much of the city to rubble. The battle degenerated into house-to-house fighting as both sides poured reinforcements into the city. By mid-November, the Germans, at great cost, had pushed the Soviet defenders back into narrow zones along the west bank of the river. Winter conditions became particularly brutal, with temperatures reaching as low as −40°C in late November.

On 19 November, the Red Army launched Operation Uranus, a two-pronged attack targeting the Romanian armies protecting the 6th Army's flanks. The Axis flanks were overrun and the 6th Army was cut off and surrounded in the Stalingrad area. Adolf Hitler was determined to hold the city at all costs and forbade the 6th Army from trying a breakout; instead, attempts were made to supply it by air and to break the encirclement from the outside. The Soviets were successful in preventing the Germans from delivering enough supplies through the air to the trapped Axis forces. Nevertheless, heavy fighting continued for another two months. On 2 February 1943, the German 6th Army, having exhausted its ammunition and food, finally capitulated after over five months of fighting, making it the first of Hitler's field armies to surrender in World War II. The Soviet victory is commemorated in Russia as the Day of Military Honour.

Background By the spring of 1942, despite the failure of Operation Barbarossa to vanquish the Soviet Union in a single campaign, the Wehrmacht had captured vast expanses of territory, including Ukraine, Belarus, and the Baltic republics. On the Western Front, Germany held most of Europe, the U-boat offensive in the Atlantic was holding American support at bay, and in North Africa Erwin Rommel had just captured Tobruk. In the east, the Germans had stabilised a front running from Leningrad south to Rostov, with a number of minor salients.
Hitler was confident that he could break the Red Army despite the heavy German losses west of Moscow in winter 1941–42, because Army Group Centre (Heeresgruppe Mitte) had been unable to engage 65% of its infantry, which had meanwhile been rested and re-equipped. Neither Army Group North nor Army Group South had been particularly hard-pressed over the winter. Hitler decided that Germany's summer campaign in 1942 would be directed at the southern parts of the Soviet Union. The initial objectives in the region around Stalingrad were to destroy the industrial capacity of the city and to block the Volga River traffic connecting the Caucasus and Caspian Sea to central Russia, as the city is strategically located near a big bend of the Volga. The Germans cut the pipeline from the oilfields when they captured Rostov on 23 July. The capture of Stalingrad would make the delivery of Lend-Lease supplies via the Persian Corridor much more difficult.

On 23 July 1942, Hitler personally rewrote the operational objectives for the 1942 campaign, greatly expanding them to include the occupation of the city of Stalingrad. Both sides began to attach propaganda value to the city, which bore the name of the Soviet leader, meaning that the capture of the city would have been a great ideological victory for the Reich. Hitler proclaimed that after Stalingrad's capture, its male citizens were to be killed and all women and children were to be deported because its population was "thoroughly communistic" and "especially dangerous". Hitler planned for the fall of the city to firmly secure the northern and western flanks of the German armies as they advanced on Baku, with the aim of gaining its strategic petroleum resources for Germany. The expansion of objectives was a significant factor in Germany's failure at Stalingrad, caused by German overconfidence and an underestimation of Soviet reserves. Meanwhile, Stalin was convinced by Soviet intelligence that the main German attack would target Moscow, and gave priority for fresh troops and new equipment to the defence of the Soviet capital.

As the Soviet winter counteroffensive of 1941–1942 culminated in March, the Soviet high command began planning for the summer campaign. Stalin desired a general offensive, but was dissuaded by Chief of the General Staff Boris Shaposhnikov, Deputy Chief of the General Staff Aleksandr Vasilevsky, and Western Main Direction commander Georgy Zhukov. Ultimately, Stalin instructed that the summer campaign be based on what he termed "active strategic defense," but also ordered the Soviet high command, Stavka, to begin planning for a series of local offensives across the Eastern Front. Southwestern Main Direction commander Semyon Timoshenko suggested an attack from the Izyum salient south of Kharkov in northeastern Ukraine, gained during the winter campaign, to take advantage of what Soviet intelligence believed to be weak opposing forces in that sector and divert German troops from the anticipated attack on Moscow. His proposal for a drive on Kharkov by the Southwestern Front, advancing in a northern and southern pincer to encircle and destroy the German 6th Army, received Stalin's approval despite the opposition of Shaposhnikov and Vasilevsky. After delays in moving troops into position and logistical difficulties, the Kharkov operation began on 12 May. The Soviet troops achieved initial success and 6th Army commander Friedrich Paulus requested reinforcements.
Three divisions slated for Case Blue and air units from Crimea were diverted to the Kharkov sector. The advance of Southwestern Front's northern strike group was halted by a German counterattack that began on 13 May, while the front's southern strike group continued its progress 40 kilometers into the German rear. In response, Ewald von Kleist's two armies launched a counterattack, Operation Fridericus I, on 17 May against the Southern Front, covering the Southwestern Front's southern flank. Kleist's counterattack caught the Soviet defenders off guard, with Timoshenko having committed his armored reserves to the Kharkov operation. In the ensuing Second Battle of Kharkov, Kleist's forces encircled and destroyed much of the forces of the Southern Front and the advancing Southwestern Front, inflicting over 300,000 casualties in return for losses of 20,000. The disaster at Kharkov was a crippling blow to the Soviet forces in the south, leaving them vulnerable to the forthcoming German summer offensive.

Despite the defeat, Stalin continued to believe that a German attack on Moscow was the main threat and allocated four newly formed strategic reserve armies there rather than to the Southwestern Main Direction. Instead, the Southwestern Front received seven rifle divisions and three tank corps, which proved inadequate to deal with the German threat. At the same time, the commitment of the panzer divisions that Paulus and Kleist needed for Case Blau to the Second Battle of Kharkov further delayed the start of the offensive, since they required time to train and replace their losses from the battle.

At a conference at Army Group South's headquarters at Poltava on 1 June, Hitler modified the plans for the summer operations. Before the main offensive began, simultaneous attacks were to be launched on 7 June: Operation Wilhelm at Volchansk northeast of Kharkov and Operation Störfang against Sevastopol. The latter aimed to destroy the last Soviet troops in Crimea in order to secure the German southern flank. Kleist was to follow these with Operation Fridericus II on 12 June against the Izyum salient. The attacks in Ukraine aimed to give German forces space to amass supplies east of the Donets. The start of Case Blau itself was delayed to 20 June, by which point victory in the preliminary operations was anticipated.

Prelude Army Group South was selected for a sprint forward through the southern Russian steppes into the Caucasus to capture the vital Soviet oil fields there. The planned summer offensive, code-named Fall Blau (Case Blue), was to include the German 6th, 17th, 4th Panzer and 1st Panzer Armies. Army Group South had overrun the Ukrainian Soviet Socialist Republic in 1941. Poised in Eastern Ukraine, it was to spearhead the offensive. Hitler intervened, however, ordering the Army Group to split in two. Army Group South (A), under the command of Wilhelm List, was to continue advancing south towards the Caucasus as planned with the 17th Army and First Panzer Army. Army Group South (B), including Friedrich Paulus's 6th Army and Hermann Hoth's 4th Panzer Army, was to move east towards the Volga and Stalingrad. Army Group B was commanded by General Maximilian von Weichs.

The start of Case Blue had been planned for late May 1942. However, a number of German and Romanian units that were to take part in Blau were besieging Sevastopol on the Crimean Peninsula. Delays in ending the siege pushed back the start date for Blau several times, and the city did not fall until early July.
Operation Fridericus I by the Germans against the "Izyum bulge" pinched off the Soviet salient in the Second Battle of Kharkov and resulted in the envelopment of a large Soviet force between 17 May and 29 May. Similarly, Operation Wilhelm attacked Volchansk on 13 June, and Operation Fridericus attacked Kupiansk on 22 June.

Blau finally opened as Army Group South began its attack into southern Russia on 28 June 1942. The German offensive started well. Soviet forces offered little resistance in the vast empty steppes and started streaming eastward. Several attempts to re-establish a defensive line failed when German units outflanked them. Two major pockets were formed and destroyed: the first, northeast of Kharkov, on 2 July, and a second, around Millerovo, Rostov Oblast, a week later. Meanwhile, the Hungarian 2nd Army and the German 4th Panzer Army had launched an assault on Voronezh, capturing the city on 5 July. The initial advance of the 6th Army was so successful that Hitler intervened and ordered the 4th Panzer Army to join Army Group South (A) to the south. A massive road block resulted when the 4th Panzer and the 1st Panzer choked the roads, stopping both in their tracks while they cleared the mess of thousands of vehicles. The traffic jam is thought to have delayed the advance by at least one week. With the advance now slowed, Hitler changed his mind and reassigned the 4th Panzer Army back to the attack on Stalingrad.

By the end of July, the Germans had pushed the Soviets across the Don River. At this point, the Don and Volga Rivers are only about 65 km apart, and the Germans left their main supply depots west of the Don, which had important implications later in the course of the battle. The Germans began using the armies of their Italian, Hungarian and Romanian allies to guard their left (northern) flank. Occasionally Italian actions were mentioned in official German communiques. Italian forces were generally held in little regard by the Germans, and were accused of low morale: in reality, the Italian divisions fought comparatively well, with the 3rd Infantry Division "Ravenna" and 5th Infantry Division "Cosseria" showing spirit, according to a German liaison officer. The Italians were forced to retreat only after a massive armoured attack in which German reinforcements failed to arrive in time, according to German historian Rolf-Dieter Müller.

On 25 July the Germans faced stiff resistance with a Soviet bridgehead west of Kalach. "We had had to pay a high cost in men and material ... left on the Kalach battlefield were numerous burnt-out or shot-up German tanks." The Germans formed bridgeheads across the Don on 20 August, with the 295th and 76th Infantry Divisions enabling the XIVth Panzer Corps "to thrust to the Volga north of Stalingrad." The German 6th Army was only a few dozen kilometres from Stalingrad. The 4th Panzer Army, ordered south on 13 July to block the Soviet retreat "weakened by the 17th Army and the 1st Panzer Army", had turned northwards to help take the city from the south. To the south, Army Group A was pushing far into the Caucasus, but their advance slowed as supply lines grew overextended. The two German army groups were too far apart to support one another.

After German intentions became clear in July 1942, Stalin appointed General Andrey Yeryomenko commander of the Southeastern Front on 1 August 1942. Yeryomenko and Commissar Nikita Khrushchev were tasked with planning the defence of Stalingrad.
Beyond the Volga River on the eastern boundary of Stalingrad, additional Soviet units were formed into the 62nd Army under Lieutenant General Vasiliy Chuikov on 11 September 1942. Tasked with holding the city at all costs, Chuikov proclaimed, "We will defend the city or die in the attempt." The battle earned him one of his two Hero of the Soviet Union awards. Orders of battle Red Army During the defence of Stalingrad, the Red Army deployed five armies in and around the city (28th, 51st, 57th, 62nd and 64th Armies); and an additional nine armies in the encirclement counteroffensive (24th, 65th, 66th Armies and 16th Air Army from the north as part of the Don Front offensive, and 1st Guards Army, 5th Tank, 21st Army, 2nd Air Army and 17th Air Army from the south as part of the Southwestern Front). Axis Attack on Stalingrad Initial attack David Glantz indicated that four hard-fought battles – collectively known as the Kotluban Operations – north of Stalingrad, where the Soviets made their greatest stand, decided Germany's fate before the Nazis ever set foot in the city itself, and were a turning point in the war. Beginning in late August, continuing in September and into October, the Soviets committed between two and four armies in hastily coordinated and poorly controlled attacks against the Germans' northern flank. The actions resulted in more than 200,000 Soviet Army casualties but did slow the German assault. On 23 August the 6th Army reached the outskirts of Stalingrad in pursuit of the 62nd and 64th Armies, which had fallen back into the city. Kleist later said after the war: The Soviets had enough warning of the German advance to ship grain, cattle, and railway cars across the Volga out of harm's way, but Stalin refused to evacuate the 400,000 civilian residents of Stalingrad. This "harvest victory" left the city short of food even before the German attack began. Before the Heer reached the city itself, the Luftwaffe had cut off shipping on the Volga, vital for bringing supplies into the city. Between 25 and 31 July, 32 Soviet ships were sunk, with another nine crippled. The battle began with the heavy bombing of the city by Generaloberst Wolfram von Richthofen's Luftflotte 4. Some 1,000 tons of bombs were dropped in 48 hours, more than in London at the height of the Blitz. At least 90% of the city's housing stock was obliterated. The aerial assault on Stalingrad was the most concentrated on the Ostfront according to Beevor. The exact number of civilians killed is unknown but was most likely very high. Around 40,000 civilians were taken to Germany as slave workers, some fled during battle and a small number were evacuated by the Soviets, but by February 1943 only 10,000 to 60,000 civilians were still alive. Much of the city was smashed to rubble, although some factories continued production while workers joined in the fighting. The Stalingrad Tractor Factory continued to turn out T-34 tanks up until German troops burst into the plant. The 369th (Croatian) Reinforced Infantry Regiment was the only non-German unit selected by the Wehrmacht to enter Stalingrad city during assault operations. It fought as part of the 100th Jäger Division. Stalin rushed all available troops to the east bank of the Volga, some from as far away as Siberia. Regular river ferries were quickly destroyed by the Luftwaffe, which then targeted troop barges being towed slowly across by tugs. 
It has been said that Stalin prevented civilians from leaving the city in the belief that their presence would encourage greater resistance from the city's defenders. Civilians, including women and children, were put to work building trenchworks and protective fortifications. A massive German air raid on 23 August caused a firestorm, killing hundreds and turning Stalingrad into a vast landscape of rubble and burnt ruins. Ninety percent of the living space in the Voroshilovskiy area was destroyed. Between 23 and 26 August, Soviet reports indicate 955 people were killed and another 1,181 wounded as a result of the bombing. Casualties were estimated to have been 40,000, or as many as 70,000, though these estimates were likely exaggerated, and after 25 August the Soviets did not record any civilian and military casualties as a result of air raids. The Soviet Air Force, the Voyenno-Vozdushnye Sily (VVS), was swept aside by the Luftwaffe. The VVS bases in the immediate area lost 201 aircraft between 23 and 31 August, and despite meagre reinforcements of some 100 aircraft in August, it was left with just 192 serviceable aircraft, 57 of which were fighters. The Soviets continued to pour aerial reinforcements into the Stalingrad area in late September, but continued to suffer appalling losses; the Luftwaffe had complete control of the skies. Early on 23 August, the German 16th Panzer and 3rd Motorized Divisions attacked out of the Vertyachy bridgehead with a force 120 tanks and over 200 armored personnel carriers strong. The German attack broke through the 1382nd Rifle Regiment of the 87th Rifle Division and the 137th Tank Brigade, which were forced to retreat towards Dmitryevka. Encountering little resistance, the 16th Panzer Division drove east towards the Volga, supported by the strikes of Henschel Hs 129 ground attack aircraft. Crossing the railway line to Stalingrad at 564 km Station around midday, both divisions continued their rush towards the river. Around 15:00, Hyacinth Graf Strachwitz's Panzer Detachment and the kampfgruppe of the 2nd Battalion, 64th Panzer Grenadier Regiment from the 16th Panzer reached the area of Latashanka, Rynok, and Spartanovka, northern suburbs of Stalingrad, and the Stalingrad Tractor Factory. One of the first units to offer resistance in this area was the 1077th Anti-Aircraft Regiment, covering the Stalingrad Tractor Factory and the Volga ferry near Latashanka. The majority of the regiment was composed of men, but its directing and rangefinding crews and unit headquarters were made up of women. Several women also crewed anti-aircraft guns. The 1077th was notified of the German tanks' approach at 14:30 and its 6th Battery, dominating the Sukhaya Mechatka ravine, claimed the destruction of 28 German tanks. Later that day, its 3rd Battery on the road between Yerzovka and Stalingrad, saw particularly intense fighting against the 16th Panzer, reportedly fighting "shot for shot." Two women were decorated for their actions that day, and the regiment's report praised the "exceptional steadfastness and heroism" of the women soldiers. The regiment lost 35 guns, eighteen killed, 46 wounded, and 74 missing on 23 and 24 August. The 16th Panzer Division's history mentioned its encounter with the regiment, claiming the destruction of 37 guns, and the unit's surprise that its opponents had in part included women. 
In the early stages of the battle, the NKVD organised poorly armed "Workers' militias" similar to those that had defended the city twenty-four years earlier, composed of civilians not directly involved in war production for immediate use in the battle. The civilians were often sent into battle without rifles. Staff and students from the local technical university formed a "tank destroyer" unit. They assembled tanks from leftover parts at the tractor factory. These tanks, unpainted and lacking gun-sights, were driven directly from the factory floor to the front line. They could only be aimed at point-blank range through the bore of their gun barrels. By the end of August, Army Group South (B) had finally reached the Volga, north of Stalingrad. Another advance to the river south of the city followed, while the Soviets abandoned their Rossoshka position for the inner defensive ring west of Stalingrad. The wings of the 6th Army and the 4th Panzer Army met near Jablotchni along the Zaritza on 2 Sept. By 1 September, the Soviets could only reinforce and supply their forces in Stalingrad by perilous crossings of the Volga under constant bombardment by artillery and aircraft. September city battles On 5 September, the Soviet 24th and 66th Armies organized a massive attack against XIV Panzer Corps. The Luftwaffe helped repel the offensive by heavily attacking Soviet artillery positions and defensive lines. The Soviets were forced to withdraw at midday after only a few hours. Of the 120 tanks the Soviets had committed, 30 were lost to air attack. Soviet operations were constantly hampered by the Luftwaffe. On 18 September, the Soviet 1st Guards and 24th Army launched an offensive against VIII Army Corps at Kotluban. VIII. Fliegerkorps dispatched multiple waves of Stuka dive-bombers to prevent a breakthrough. The offensive was repelled. The Stukas claimed 41 of the 106 Soviet tanks knocked out that morning, while escorting Bf 109s destroyed 77 Soviet aircraft. Amid the debris of the wrecked city, the Soviet 62nd and 64th Armies, which included the Soviet 13th Guards Rifle Division, anchored their defence lines with strong-points in houses and factories. Fighting within the ruined city was fierce and desperate. Lieutenant General Alexander Rodimtsev was in charge of the 13th Guards Rifle Division, and received one of two Heroes of the Soviet Union awarded during the battle for his actions. Stalin's Order No. 227 of 27 July 1942 decreed that all commanders who ordered unauthorised retreats would be subject to a military tribunal. Blocking detachments composed of NKVD or regular troops were positioned behind Red Army units to prevent desertion and straggling, sometimes executing deserters and perceived malingerers. During the battle the 62nd Army had the most arrests and executions: 203 in all, of which 49 were executed, while 139 were sent to penal companies and battalions. Blocking detachments of the Stalingrad and Don Fronts detained 51,758 men from the beginning of the battle to 15 October, with the majority returned to their units. Of those detained, the vast majority of which were from the Don Front, 980 were executed, and 1,349 sent to penal companies. In the two day period between 13 and 15 September, the 62nd Army blocking detachment detained 1,218 men, returning most to their units while shooting 21 men and arresting ten. Beevor claims that 13,500 Soviet soldiers were executed by Soviet authorities at Stalingrad, however, this claim is disputed by Hellbeck. 
By 12 September, at the time of their retreat into the city, the Soviet 62nd Army had been reduced to 90 tanks, 700 mortars and just 20,000 personnel. The remaining tanks were used as immobile strong-points within the city. The initial German attack on 14 September attempted to take the city in a rush. The 51st Army Corps' 295th Infantry Division went after the Mamayev Kurgan hill, the 71st attacked the central rail station and toward the central landing stage on the Volga, while 48th Panzer Corps attacked south of the Tsaritsa River. Though initially successful, the German attacks stalled in the face of Soviet reinforcements brought in from across the Volga. Rodimtsev's 13th Guards Rifle Division had been hurried up to cross the river and join the defenders inside the city. Assigned to counterattack at the Mamayev Kurgan and at Railway Station No. 1, it suffered particularly heavy losses. Despite their losses, Rodimtsev's troops were able to inflict similar damage on their opponents. By 26 September, the opposing 71st Infantry Division had half of its battalions considered exhausted, reduced from all of them being considered average in combat capability when the attack began twelve days earlier. The brutality of the battle was noted in a journal found on German lieutenant Weiner: A ferocious battle raged for several days at the giant grain elevator in the south of the city. About fifty Red Army defenders, cut off from resupply, held the position for five days and fought off ten different assaults before running out of ammunition and water. Only forty dead Soviet fighters were found, though the Germans had thought there were many more due to the intensity of resistance. The Soviets burned large amounts of grain during their retreat in order to deny the enemy food. Paulus chose the grain elevator and silos as the symbol of Stalingrad for a patch he was having designed to commemorate the battle after a German victory. In another part of the city, a Soviet platoon under the command of Sergeant Yakov Pavlov fortified a four-story building that oversaw a square 300 meters from the river bank, later called Pavlov's House. The soldiers surrounded it with minefields, set up machine-gun positions at the windows and breached the walls in the basement for better communications. The soldiers found about ten Soviet civilians hiding in the basement. They were not relieved, and not significantly reinforced, for two months. The building was labelled Festung ("Fortress") on German maps. Sgt. Pavlov was awarded the Hero of the Soviet Union for his actions. Stubborn defenses of semi-fortified buildings in the center of the city cost the Germans countless soldiers. A violent battle occurred for the Univermag department store on Red Square, which served as the headquarters of the 1st Battalion of the 13th Guards Rifle Division's 42nd Guards Rifle Regiment. Another battle occurred for a nearby warehouse dubbed the "nail factory". In a three-story building close by, guardsmen fought on for five days, their noses and throats filled with brick dust from pulverized walls, with only six out of close to half a battalion escaping alive. The Germans made slow but steady progress through the city. Positions were taken individually, but the Germans were never able to capture the key crossing points along the river bank. By 27 Sept. the Germans occupied the southern portion of the city, but the Soviets held the centre and northern part. 
Most importantly, the Soviets controlled the ferries to their supplies on the east bank of the Volga. Strategy and tactics German military doctrine was based on the principle of combined-arms teams and close cooperation between tanks, infantry, engineers, artillery and ground-attack aircraft. To negate the German usage of tanks and artillery in the ruins of the city, Soviet commander Vasily Chuikov introduced a tactic he described as "hugging" the enemy: keeping Soviet front-line positions as close as possible to those of the Germans so that German artillery and aircraft could not attack without risking friendly fire. After mid-September, to reduce casualties, he ceased launching organized daylight counterattacks, instead emphasizing small unit tactics in which Soviet infantry moved through the city's sewers to strike into the rear of attacking German units. The Soviets preferred night attacks, which disrupted German morale by depriving them of sleep. Soviet reconnaissance patrols were used to find German positions and take prisoners for interrogation, enabling them to anticipate attacks. When Soviet troops detected a coming attack, they launched their own counterattacks at dawn before German air support could arrive. Soviet troops blunted the German attacks themselves through ambushes that separated tanks from their supporting infantry, as well as the employment of booby traps and mines. These tactical innovations became widespread as the battle continued. According to Beevor, "Red Army soldiers enjoyed inventing gadgets to kill Germans. New booby traps were dreamed up, each seemingly more ingenious and unpredictable in its results than the last." An important weapon during the battle was the flamethrower, which was "effectively terrifying" in its use of clearing sewer tunnels, cellars, and inaccessible hiding places. The operators of the weapon were immediately targeted as soon as they were spotted. The Soviet urban warfare tactics relied on 20-to-50-man-assault groups, armed with machine guns, grenades and satchel charges, and buildings fortified as strongpoints with clear fields of fire. While the strongpoints were defended by guns or tanks on the ground floor, machine gunners and artillery observers operated from the upper floors. Meanwhile, assault groups used sewers or broke through walls into adjoining buildings to maintain concealment while moving into the rear of German attacks. The American historian David Glantz summarizes Soviet tactical innovations as a "combination of intelligence, discipline, and determination" enabling the Soviet defenders to keep fighting when the Germans had achieved victory by "all conventional measures." The Red Army gradually adopted a strategy to hold for as long as possible all the ground in the city. Thus, they converted multi-floored apartment blocks, factories, warehouses, street corner residences and office buildings into a series of well-defended strong-points with small 5–10-man units. Manpower in the city was constantly refreshed by bringing additional troops over the Volga. When a position was lost, an immediate attempt was usually made to re-take it with fresh forces. Bitter fighting raged for ruins, streets, factories, houses, basements, and staircases. Blocks and buildings would change hands numerous times through intense hand-to-hand fighting. Even the sewers were the sites of firefights. 
The Germans called this unseen urban warfare Rattenkrieg ("Rat War"), and bitterly joked about capturing the kitchen but still fighting for the living room and the bedroom. Buildings had to be cleared room by room through the bombed-out debris of residential areas, office blocks, basements and apartment high-rises. Beevor describes how this process was particularly brutal, stating that, "In its way, the fighting in Stalingrad was even more terrifying than the impersonal slaughter at Verdun. The close-quarter combat in ruined buildings, bunkers, cellars and sewers was soon dubbed 'Rattenkrieg' by German soldiers. It possessed a savage intimacy which appalled their generals, who felt that they were rapidly losing control over events." Some of the taller buildings, blasted into roofless shells by earlier German aerial bombardment, saw floor-by-floor, close-quarters combat, with the Germans and Soviets on alternate levels, firing at each other through holes in the floors. Fighting on and around Mamayev Kurgan, a prominent hill above the city, was particularly merciless; indeed, the position changed hands many times. Gerhardt's Mill was a noticeable building that was brutally fought for, and is kept till this day as a memorial to the battle. It was eventually cleared by the 39th Guards Regiment in pitiless close-quarters combat. The brutality of the close-quarters combat was shown by the number of military casualties taken by units. The 13th Guards Rifle Division suffered 30% casualties in the first twenty-four hours, with only 320 men out of 10,000 remaining at the battle's conclusion. With buildings and floors changing hands dozens of times and taking up to several days to win, platoons and companies took up to 90% and even 100% casualties to win a building or floor within it. The Germans used aircraft, tanks and heavy artillery to clear the city with varying degrees of success. Toward the end of the battle, the gigantic railroad gun nicknamed Dora was brought into the area. The Soviets built up a large number of artillery batteries on the east bank of the Volga. This artillery was able to bombard the German positions or at least provide counter-battery fire. Snipers on both sides used the ruins to inflict casualties, with the Soviet command heavily emphasizing sniper tactics to wear down the Germans. The most famous Soviet sniper in Stalingrad was Vasily Zaytsev, who became a propaganda hero, credited with 225 kills during the battle. Targets were often soldiers bringing up food or water to forward positions. Artillery spotters were an especially prized target for snipers. A significant historical debate concerns the degree of terror in the Red Army. The British historian Antony Beevor noted the "sinister" message from the Stalingrad Front's Political Department on 8 October 1942 that: "The defeatist mood is almost eliminated and the number of treasonous incidents is getting lower" as an example of the sort of coercion Red Army soldiers experienced under the Special Detachments (later to be renamed SMERSH). On the other hand, Beevor noted the often extraordinary bravery of the Soviet soldiers in a battle that was only comparable to Verdun, and argued that terror alone cannot explain such self-sacrifice. A Soviet officer interviewed, Nikolai Aksyonov, explained the general feeling amongst the Soviets in Stalingrad, "There was this sense that every soldier and officer in Stalingrad was itching to kill as many Germans as possible. 
In Stalingrad people felt a particularly intense hatred for the Germans." Richard Overy addresses the question of just how important the Red Army's coercive methods were to the Soviet war effort compared with other motivational factors such as hatred for the enemy. He argues that, though it is "easy to argue that from the summer of 1942 the Soviet army fought because it was forced to fight," to concentrate solely on coercion is nonetheless to "distort our view of the Soviet war effort." After conducting hundreds of interviews with Soviet veterans on the subject of terror on the Eastern Front – and specifically about Order No. 227 ("Not a step back!") at Stalingrad – Catherine Merridale notes that, seemingly paradoxically, "their response was frequently relief." Infantryman Lev Lvovich's explanation, for example, is typical for these interviews; as he recalls, "[i]t was a necessary and important step. We all knew where we stood after we had heard it. And we all – it's true – felt better. Yes, we felt better." Many women fought on the Soviet side or were under fire. As General Chuikov acknowledged, "Remembering the defence of Stalingrad, I can't overlook the very important question … about the role of women in war, in the rear, but also at the front. Equally with men they bore all the burdens of combat life and together with us men, they went all the way to Berlin." At the beginning of the battle there were 75,000 women and girls from the Stalingrad area who had finished military or medical training, and all of whom were to serve in the battle. Women staffed a great many of the anti-aircraft batteries that fought not only the Luftwaffe but German tanks. Soviet nurses not only treated wounded personnel under fire but were involved in the highly dangerous work of bringing wounded soldiers back to the hospitals under enemy fire. Many of the Soviet wireless and telephone operators were women who often suffered heavy casualties when their command posts came under fire. Though women were not usually trained as infantry, many Soviet women fought as machine gunners, mortar operators, and scouts. Women were also snipers at Stalingrad. Three air regiments at Stalingrad were entirely female. At least three women won the title Hero of the Soviet Union while driving tanks at Stalingrad. For both Stalin and Hitler, Stalingrad became a matter of prestige far beyond its strategic significance. The Soviet command moved units from the Red Army strategic reserve in the Moscow area to the lower Volga and transferred aircraft from the entire country to the Stalingrad region. The strain on both military commanders was immense: Paulus developed an uncontrollable tic in his eye, which eventually affected the left side of his face, while Chuikov experienced an outbreak of eczema that required him to have his hands completely bandaged. Troops on both sides faced the constant strain of close-range combat. The Soviets used psychological warfare tactics to intimidate and demoralize German forces. On loudspeakers throughout the ruined city, they would announce "Every seven seconds a German soldier dies. Stalingrad. . .mass grave". The sound was interspersed with the monotonous sound of a ticking clock, and an orchestral melody dubbed the "Tango of Death". Fighting in the industrial district After 27 September, much of the fighting in the city shifted north to the industrial district. 
Having slowly advanced over 10 days against strong Soviet resistance, the 51st Army Corps was finally in front of the three giant factories of Stalingrad: the Red October Steel Factory, the Barrikady Arms Factory and Stalingrad Tractor Factory. It took a few more days for them to prepare for the most savage offensive of all, which was unleashed on 14 October. According to Beevor about the danger of the factories, he states, "The Red October complex and Barrikady gun factory had been turned into fortresses as lethal as those of Verdun. If anything, they were more dangerous because the Soviet regiments were so well hidden." The danger of the Barrikady Arms Factory was made apparent firsthand by sergeant Ernst Wohlfahrt, who witnessed 18 German pioneers get killed by a Russian booby trap. Exceptionally intense shelling and bombing paved the way for the first German assault groups. The main attack (led by the 14th Panzer and 305th Infantry Divisions) attacked towards the tractor factory, while another assault led by the 24th Panzer Division hit to the south of the giant plant. The German onslaught crushed the 37th Guards Rifle Division of Major General Viktor Zholudev and in the afternoon the forward assault group reached the tractor factory before arriving at the Volga River, splitting the 62nd Army into two. In response to the German breakthrough to the Volga, the front headquarters committed three battalions from the 300th Rifle Division and the 45th Rifle Division of Colonel Vasily Sokolov, a substantial force of over 2,000 men, to the fighting at the Red October Factory. Fighting raged inside the Barrikady Factory until the end of October. The Soviet-controlled area shrank down to a few strips of land along the western bank of the Volga, and in November the fighting concentrated around what Soviet newspapers referred to as "Lyudnikov's Island", a small patch of ground behind the Barrikady Factory where the remnants of Colonel Ivan Lyudnikov's 138th Rifle Division resisted all ferocious assaults thrown by the Germans and became a symbol of the stout Soviet defence of Stalingrad. Air attacks From 5 to 12 September, Luftflotte 4 conducted 7,507 sorties (938 per day). From 16 to 25 September, it carried out 9,746 missions (975 per day). Determined to crush Soviet resistance, Luftflotte 4's Stukawaffe flew 900 individual sorties against Soviet positions at the Stalingrad Tractor Factory on 5 October. Several Soviet regiments were wiped out; the entire staff of the Soviet 339th Infantry Regiment was killed the following morning during an air raid. The Luftwaffe retained air superiority into November, and Soviet daytime aerial resistance was nonexistent. However, the combination of constant air support operations on the German side and the Soviet surrender of the daytime skies began to affect the strategic balance in the air. From 28 June to 20 September, Luftflotte 4's original strength of 1,600 aircraft, of which 1,155 were operational, fell to 950, of which only 550 were operational. The fleet's total strength decreased by 40 percent. Daily sorties decreased from 1,343 per day to 975 per day. Soviet offensives in the central and northern portions of the Eastern Front tied down Luftwaffe reserves and newly built aircraft, reducing Luftflotte 4's percentage of Eastern Front aircraft from 60 percent on 28 June to 38 percent by 20 September. The Kampfwaffe (bomber force) was the hardest hit, having only 232 out of an original force of 480 left. 
The VVS remained qualitatively inferior, but by the time of the Soviet counter-offensive it had achieved numerical superiority. In mid-October, after receiving reinforcements from the Caucasus theatre, the Luftwaffe intensified its efforts against the remaining Red Army positions holding the west bank. Luftflotte 4 flew 1,250 sorties on 14 October and its Stukas dropped 550 tonnes of bombs, while German infantry surrounded the three factories. Stukageschwader 1, 2, and 77 had largely silenced Soviet artillery on the eastern bank of the Volga before turning their attention to the shipping that was once again trying to reinforce the narrowing Soviet pockets of resistance. The 62nd Army had been cut in two and, due to intensive air attack on its supply ferries, was receiving much less material support. With the Soviets forced into a strip of land on the western bank of the Volga, over 1,208 Stuka missions were flown in an effort to eliminate them. The Soviet bomber force, the Aviatsiya Dal'nego Deystviya (Long Range Aviation; ADD), having taken crippling losses over the past 18 months, was restricted to flying at night. The Soviets flew 11,317 night sorties over Stalingrad and the Don-bend sector between 17 July and 19 November. These raids caused little damage and were of nuisance value only. On 8 November, substantial units from Luftflotte 4 were withdrawn to combat the Allied landings in North Africa. The German air arm found itself spread thinly across Europe, struggling to maintain its strength in the other southern sectors of the Soviet-German front. As historian Chris Bellamy notes, the Germans paid a high strategic price for the aircraft sent into Stalingrad: the Luftwaffe was forced to divert much of its air strength away from the oil-rich Caucasus, which had been Hitler's original grand-strategic objective. The Royal Romanian Air Force was also involved in the Axis air operations at Stalingrad. Starting on 23 October 1942, Romanian pilots flew a total of 4,000 sorties, during which they destroyed 61 Soviet aircraft. The Romanian Air Force lost 79 aircraft, most of them captured on the ground along with their airfields. Medical and food conditions Conditions in both armies during the battle were atrocious. Disease ran rampant on both sides, with many deaths due to dysentery, typhus, diphtheria, tuberculosis and jaundice, causing medical staff to fear a possible epidemic. Vermin such as rats and mice were plentiful on the battlefield and were one reason the Germans could not counterattack in time, as mice had chewed through their tank wiring. Lice were also heavily prevalent, and plagues of flies would gather around kitchens, adding to the risk of wound infections. The brutal winter conditions affected soldiers tremendously, with temperatures at times reaching as low as −40°C in late November, or −30°C in late January. The cold caused rapid frostbite, with many cases of gangrene and eventual amputation as a result, and soldiers on both sides died en masse from frostbite and hypothermia. Both armies also suffered from food shortages, with mass starvation on both sides. Stress, tiredness and cold upset the metabolism of many soldiers, so that they absorbed fewer calories from their food. The German forces eventually ran out of medical supplies such as ether, antiseptics and bandages; because of the shortages, surgery had to be performed without anaesthesia. 
Germans reach the Volga After three months of slow advance, the Germans finally reached the river banks, capturing 90% of the ruined city and splitting the remaining Soviet forces into two narrow pockets. Ice floes on the Volga now prevented boats and tugs from supplying the Soviet defenders. Nevertheless, the fighting continued, especially on the slopes of Mamayev Kurgan and inside the factory area in the northern part of the city. From 21 August to 20 November, the German 6th Army lost 60,548 men, including 12,782 killed, 45,545 wounded and 2,221 missing. Soviet counter-offensives Recognising that German troops were ill-prepared for offensive operations during the winter of 1942 and that most of them were redeployed elsewhere on the southern sector of the Eastern Front, the Stavka decided to conduct a number of offensive operations between 19 November 1942 and 2 February 1943. These operations opened the Winter Campaign of 1942–1943 (19 November 1942 – 3 March 1943), which involved some fifteen Armies operating on several fronts. According to Zhukov, "German operational blunders were aggravated by poor intelligence: they failed to spot preparations for the major counter-offensive near Stalingrad where there were 10 field, 1 tank and 4 air armies." Weakness on the Axis flanks During the siege, the German and allied Italian, Hungarian, and Romanian armies protecting Army Group B's north and south flanks had pressed their headquarters for support. The Hungarian 2nd Army was given the task of defending a section of the front north of Stalingrad between the Italian Army and Voronezh. This resulted in a very thin line, with some sectors where stretches were being defended by a single platoon (platoons typically have around 20 to 50 men). These forces were also lacking in effective anti-tank weapons. Zhukov states, "Compared with the Germans, the troops of the satellites were not so well armed, less experienced and less efficient, even in defence." Because of the total focus on the city, the Axis forces had neglected for months to consolidate their positions along the natural defensive line of the Don River. The Soviet forces were allowed to retain bridgeheads on the right bank from which offensive operations could be quickly launched. These bridgeheads in retrospect presented a serious threat to Army Group B. Similarly, on the southern flank of the Stalingrad sector, the front southwest of Kotelnikovo was held only by the Romanian 4th Army. Beyond that army, a single German division, the 16th Motorised Infantry, covered 400 km. Paulus had requested permission to "withdraw the 6th Army behind the Don," but was rejected. According to Paulus's comments to his adjutant Wilhelm Adam, "There is still the order whereby no commander of an army group or an army has the right to relinquish a village, even a trench, without Hitler's consent." Operation Uranus In autumn, the Soviet generals Georgy Zhukov and Aleksandr Vasilevsky, responsible for strategic planning in the Stalingrad area, concentrated forces in the steppes to the north and south of the city. The northern flank was defended by Hungarian and Romanian units, often in open positions on the steppes. The natural line of defence, the Don River, had never been properly established by the German side. The armies in the area were also poorly equipped in terms of anti-tank weapons. The plan was to punch through the overstretched and weakly defended German flanks and surround the German forces in the Stalingrad region. 
During the preparations for the attack, Marshal Zhukov personally visited the front and noticing the poor organisation, insisted on a one-week delay in the start date of the planned attack. The operation was code-named "Uranus" and launched in conjunction with Operation Mars, which was directed at Army Group Center about to the northwest. The plan was similar to the one Zhukov had used to achieve victory at Khalkhin Gol three years before, where he had sprung a double envelopment and destroyed the 23rd Division of the Japanese army. On 19 November 1942, the Red Army launched Operation Uranus. The attacking Soviet units under the command of Gen. Nikolay Vatutin consisted of three complete armies, the 1st Guards Army, 5th Tank Army and 21st Army, including a total of 18 infantry divisions, eight tank brigades, two motorised brigades, six cavalry divisions and one anti-tank brigade. The preparations for the attack could be heard by the Romanians, who continued to push for reinforcements, only to be refused again. Thinly spread, deployed in exposed positions, outnumbered and poorly equipped, the Romanian 3rd Army, which held the northern flank of the German 6th Army, was overrun. Behind the front lines, no preparations had been made to defend key points in the rear such as Kalach. The response by the Wehrmacht was both chaotic and indecisive. Poor weather prevented effective air action against the Soviet offensive. Army Group B was in disarray and faced strong Soviet pressure across all its fronts. Hence it was ineffective in relieving the 6th Army. On 20 November, a second Soviet offensive (two armies) was launched to the south of Stalingrad against points held by the Romanian 4th Army Corps. The Romanian forces, made up primarily of infantry, were overrun by large numbers of tanks. The Soviet forces raced west and met on 23 November at the town of Kalach, sealing the ring around Stalingrad. The link-up of the Soviet forces, not filmed at the time, was later re-enacted for a propaganda film which was shown worldwide. Sixth Army surrounded The surrounded Axis personnel comprised 265,000 Germans, Romanians, Italians, and Croatians. In addition, the German 6th Army included between 40,000 and 65,000 Hilfswillige (Hiwi), or "volunteer auxiliaries", a term used for personnel recruited amongst Soviet POWs and civilians from areas under occupation. Hiwi often proved to be reliable Axis personnel in rear areas and were used for supporting roles, but also served in some front-line units as their numbers had increased. The conditions of the German 6th Army had been "reduced to conditions very similar to those in the First World War". The most insanitary conditions were found in units who were forced by the Soviets to take up new positions in the open steppe. Beevor explains, "In such conditions, troops had not yet had a chance to dig communications trenches and latrines. Soldiers were sleeping, packed together like sardines, in holes in the ground covered by a tarpaulin. Infections spread rapidly. Dysentery soon had a debilitating and demoralizing effect, as weakened soldiers squatted over shovels in their trenches, then threw the contents out over the parapet." German personnel in the pocket numbered about 210,000, according to strength breakdowns of the 20 field divisions (average size 9,000) and 100 battalion-sized units of the Sixth Army on 19 November 1942. 
Inside the pocket (, literally "cauldron"), there were also around 10,000 Soviet civilians and several thousand Soviet soldiers the Germans had taken captive during the battle. Not all of the 6th Army was trapped: 50,000 soldiers were brushed aside outside the pocket. These belonged mostly to the other two divisions of the 6th Army between the Italian and Romanian armies: the 62nd and 298th Infantry Divisions. Of the 210,000 Germans, 10,000 remained to fight on, 105,000 surrendered, 35,000 left by air and the remaining 60,000 died. Even with the desperate situation of the 6th Army, Army Group A had to hold its position in the Caucasus further south. No troops were pulled off that region to help relieve the 6th Army. Only on December 31, after Soviet forces had broken through German positions in Operation Little Saturn and threatened to retake Rostov-on-Don and cut off Army Group A completely, was it ordered to withdraw from the Caucasus to avoid being trapped. Army Group Don was formed under Field Marshal von Manstein. Under his command were the twenty German and two Romanian divisions encircled at Stalingrad, Adam's battle groups formed along the Chir River and on the Don bridgehead, plus the remains of the Romanian 3rd Army. The Red Army units immediately formed two defensive fronts: a circumvallation facing inward and a contravallation facing outward. Field Marshal Erich von Manstein advised Hitler not to order the 6th Army to break out, stating that he could break through the Soviet lines and relieve the besieged 6th Army. The American historians Williamson Murray and Alan Millet wrote that it was Manstein's message to Hitler on 24 November advising him that the 6th Army should not break out, along with Göring's statements that the Luftwaffe could supply Stalingrad that "... sealed the fate of the Sixth Army". After 1945, Manstein claimed that he told Hitler that the 6th Army must break out. The American historian Gerhard Weinberg wrote that Manstein distorted his record on the matter. Manstein was tasked to conduct a relief operation, named Operation Winter Storm (Unternehmen Wintergewitter) against Stalingrad, which he thought was feasible if the 6th Army was temporarily supplied through the air. Adolf Hitler had declared in a public speech (in the Berlin Sportpalast) on 30 September 1942 that the German army would never leave the city. At a meeting shortly after the Soviet encirclement, German army chiefs pushed for an immediate breakout to a new line on the west of the Don, but Hitler was at his Bavarian retreat of Obersalzberg in Berchtesgaden with the head of the Luftwaffe, Hermann Göring. When asked by Hitler, Göring replied, after being convinced by Hans Jeschonnek, that the Luftwaffe could supply the 6th Army with an "air bridge". This would allow the Germans in the city to fight on temporarily while a relief force was assembled. A similar plan had been used a year earlier at the Demyansk Pocket, albeit on a much smaller scale: a corps at Demyansk rather than an entire army. The director of Luftflotte 4, Wolfram von Richthofen, tried to get this decision overturned. The forces under the 6th Army were almost twice as large as a regular German army unit, plus there was also a corps of the 4th Panzer Army trapped in the pocket. 
With a limited number of available aircraft and only one usable airfield, at Pitomnik, the Luftwaffe could deliver only 105 tonnes of supplies per day, a fraction of the minimum 750 tonnes that both Paulus and Zeitzler estimated the 6th Army needed. To supplement the limited number of Junkers Ju 52 transports, the Germans pressed other aircraft into the role, such as the Heinkel He 177 bomber. Some bombers performed adequately – the Heinkel He 111 proved to be quite capable and was much faster than the Ju 52. General Richthofen informed Manstein on 27 November of the small transport capacity of the Luftwaffe and the impossibility of supplying 300 tons a day by air. Manstein now saw the enormous technical difficulties of an air supply on this scale. The next day he submitted a six-page situation report to the general staff. Drawing on Richthofen's expert assessment, he declared that, contrary to the example of the Demyansk pocket, permanent supply by air would be impossible. If only a narrow link could be established to Sixth Army, he proposed that it be used to pull the army out of the encirclement, with the Luftwaffe delivering only enough ammunition and fuel for a breakout attempt rather than general supplies. He acknowledged the heavy moral sacrifice that giving up Stalingrad would mean, but this would be made easier to bear by conserving the combat power of the Sixth Army and regaining the initiative. He ignored the limited mobility of the army and the difficulty of disengaging from the Soviets. Hitler reiterated that the Sixth Army would stay at Stalingrad and that the air bridge would supply it until the encirclement was broken by a new German offensive. Supplying the 270,000 men trapped in the "cauldron" required 700 tons of supplies a day. That would mean 350 Ju 52 flights a day into Pitomnik. At a minimum, 500 tons were required. However, according to Adam, "On not one single day have the minimal essential number of tons of supplies been flown in." The Luftwaffe was able to deliver an average of 85 tonnes of supplies per day out of an air transport capacity of 106 tonnes per day. On the most successful day, 19 December, the Luftwaffe delivered 262 tonnes of supplies in 154 flights. The airlift failed in part because the Luftwaffe did not provide its transport units with the tools they needed to maintain an adequate number of operational aircraft – airfield facilities, supplies, manpower, and even aircraft suited to the prevailing conditions. Taken together, these factors prevented the Luftwaffe from employing the full potential of its transport forces and ensured that it could not deliver the quantity of supplies needed to sustain the 6th Army. In the early parts of the operation, fuel was shipped at a higher priority than food and ammunition because of a belief that there would be a breakout from the city. Transport aircraft also evacuated technical specialists and sick or wounded personnel from the besieged enclave. Sources differ on the number flown out: at least 25,000 to at most 35,000. Initially, supply flights came in from the field at Tatsinskaya, called 'Tazi' by the German pilots. On 23 December, the Soviet 24th Tank Corps, commanded by Major-General Vasily Mikhaylovich Badanov, reached nearby Skassirskaya and in the early morning of 24 December, the tanks reached Tatsinskaya. 
With no soldiers to defend it, the airfield was abandoned under heavy fire; in a little under an hour, 108 Ju 52s and 16 Ju 86s took off for Novocherkassk – leaving 72 Ju 52s and many other aircraft burning on the ground. A new base was established farther from Stalingrad, at Salsk; the additional distance became another obstacle to the resupply efforts. Salsk was abandoned in turn by mid-January for a rough facility at Zverevo, near Shakhty. The field at Zverevo was attacked repeatedly on 18 January and a further 50 Ju 52s were destroyed. Winter weather conditions, technical failures, heavy Soviet anti-aircraft fire and fighter interceptions eventually led to the loss of 488 German aircraft. In spite of the failure of the German offensive to reach the 6th Army, the air supply operation continued under ever more difficult circumstances. The 6th Army slowly starved. General Zeitzler, moved by their plight, began to limit himself to their slim rations at meal times. After a few weeks on such a diet, he had "visibly lost weight", according to Albert Speer, and Hitler "commanded Zeitzler to resume at once taking sufficient nourishment". The toll on the Transportgruppen was heavy: 160 aircraft were destroyed and 328 were heavily damaged (beyond repair). Some 266 Junkers Ju 52s were destroyed, one-third of the fleet's strength on the Eastern Front. The He 111 gruppen lost 165 aircraft in transport operations. Other losses included 42 Ju 86s, 9 Fw 200 Condors, 5 He 177 bombers and 1 Ju 290. The Luftwaffe also lost close to 1,000 highly experienced bomber crew personnel. So heavy were the Luftwaffe's losses that four of Luftflotte 4's transport units (KGrzbV 700, KGrzbV 900, I./KGrzbV 1 and II./KGzbV 1) were "formally dissolved". End of the battle Operation Winter Storm Manstein's plan to rescue the Sixth Army – Operation Winter Storm – was developed in full consultation with Führer headquarters. It aimed to break through to the Sixth Army and establish a corridor to keep it supplied and reinforced, so that, according to Hitler's order, it could maintain its "cornerstone" position on the Volga, "with regard to operations in 1943". Manstein, however, who knew that Sixth Army could not survive the winter there, instructed his headquarters to draw up a further plan in the event of Hitler's seeing sense. This would include the subsequent breakout of Sixth Army, in the event of a successful first phase, and its physical reincorporation in Army Group Don. This second plan was given the name Operation Thunderclap. Winter Storm, as Zhukov had predicted, was originally planned as a two-pronged attack. One thrust would come from the area of Kotelnikovo, well to the south and some distance from the Sixth Army. The other would start from the Chir front west of the Don, considerably closer to the edge of the Kessel, but the continuing attacks of Romanenko's 5th Tank Army against the German detachments along the river Chir ruled out that start-line. This left only the LVII Panzer Corps around Kotelnikovo, supported by the rest of Hoth's very mixed Fourth Panzer Army, to relieve Paulus's trapped divisions. The LVII Panzer Corps, commanded by General Friedrich Kirchner, had been weak at first. It consisted of two Romanian cavalry divisions and the 23rd Panzer Division, which mustered no more than thirty serviceable tanks. The 6th Panzer Division, arriving from France, was a vastly more powerful formation, but its members hardly received an encouraging impression. 
The Austrian divisional commander, General Erhard Raus, was summoned to Manstein's royal carriage in Kharkov station on 24 November, where the field marshal briefed him. "He described the situation in very sombre terms", recorded Raus. Three days later, when the first trainload of Raus's division steamed into Kotelnikovo station to unload, his troops were greeted by "a hail of shells" from Soviet batteries. "As quick as lightning, the Panzergrenadiers jumped from their wagons. But already the enemy was attacking the station with their battle-cries of 'Urrah!'" By 18 December, the German Army had pushed to within 48 km (30 mi) of Sixth Army's positions. However, the predictable nature of the relief operation brought significant risk for all German forces in the area. The starving encircled forces at Stalingrad made no attempt to break out or link up with Manstein's advance. Some German officers requested that Paulus defy Hitler's orders to stand fast and instead attempt to break out of the Stalingrad pocket. Paulus refused, concerned about the Red Army attacks on the flank of Army Group Don and Army Group B in their advance on Rostov-on-Don, arguing that "an early abandonment" of Stalingrad "would result in the destruction of Army Group A in the Caucasus", and pointing out that his 6th Army's tanks had fuel for only a 30 km advance towards Hoth's spearhead, a futile effort if they did not receive assurance of resupply by air. In reply to his questions to Army Group Don, Paulus was told, "Wait, implement Operation 'Thunderclap' only on explicit orders!" – Operation Thunderclap being the code word initiating the breakout. Operation Little Saturn On 16 December, the Soviets launched Operation Little Saturn, which attempted to punch through the Axis army (mainly Italians) on the Don. The Germans set up a "mobile defence" of small units that were to hold towns until supporting armour arrived. From the Soviet bridgehead at Mamon, 15 divisions – supported by at least 100 tanks – attacked the Italian Cosseria and Ravenna Divisions. Although outnumbered 9 to 1, the Italians initially fought well, with the Germans praising the quality of the Italian defenders, but on 19 December, with the Italian lines disintegrating, ARMIR headquarters ordered the battered divisions to withdraw to new lines. The fighting forced a total re-evaluation of the German situation. Sensing that this was the last chance for a breakout, Manstein pleaded with Hitler on 18 December, but Hitler refused. Paulus himself also doubted the feasibility of such a breakout. The attempt to break through to Stalingrad was abandoned and Army Group A was ordered to pull back from the Caucasus. The 6th Army was now beyond all hope of German relief. While a motorised breakout might have been possible in the first few weeks, the 6th Army now had insufficient fuel and the German soldiers would have faced great difficulty breaking through the Soviet lines on foot in harsh winter conditions. But in its defensive position on the Volga, the 6th Army continued to tie down a significant number of Soviet armies. On 23 December, the attempt to relieve Stalingrad was abandoned and Manstein's forces switched over to the defensive to deal with new Soviet offensives, as Zhukov later recounted. Soviet victory On 7 January 1943, the Red Army High Command sent three envoys while aircraft and loudspeakers simultaneously announced terms of capitulation. The letter was signed by Colonel-General of Artillery Voronov and the commander-in-chief of the Don Front, Lieutenant-General Rokossovsky. 
A low-level Soviet envoy party (comprising Major Aleksandr Smyslov, Captain Nikolay Dyatlenko and a trumpeter) carried generous surrender terms to Paulus: if he surrendered within 24 hours, he would receive a guarantee of safety for all prisoners, medical care for the sick and wounded, prisoners being allowed to keep their personal belongings, "normal" food rations, and repatriation to any country they wished after the war. Rokossovsky's letter also stressed that Paulus' men were in an untenable situation. Paulus requested permission to surrender, but Hitler rejected Paulus' request out of hand. Accordingly, Paulus did not respond. The German High Command informed Paulus, "Every day that the army holds out longer helps the whole front and draws away the Russian divisions from it." The Germans inside the pocket retreated from the suburbs of Stalingrad to the city itself. The loss of the two airfields, at Pitomnik on 16 January 1943 and Gumrak on the night of 21/22 January, meant an end to air supplies and to the evacuation of the wounded. The third and last serviceable runway was at the Stalingradskaya flight school, which reportedly had the last landings and takeoffs on 23 January. After 23 January, there were no more reported landings, just intermittent air drops of ammunition and food until the end. The Germans were now not only starving but running out of ammunition. Nevertheless, they continued to resist, in part because they believed the Soviets would execute any who surrendered. Postwar German historians emphasized the fatigue and defeatism that prevailed in the Germans towards the end of the battle. However, transcripts show that despite many German soldiers yelling "Hitler kaput" to avoid being shot while surrendering, the level of armed resistance remained extraordinarily high till the end of the battle. In particular, the so-called HiWis, Soviet citizens fighting for the Germans, had no illusions about their fate if captured. The Soviets were initially surprised by the number of Germans they had trapped and had to reinforce their encircling troops. Bloody urban warfare began again in Stalingrad, but this time it was the Germans who were pushed back to the banks of the Volga. The Germans adopted a simple defence of fixing wire nets over all windows to protect themselves from grenades. The Soviets responded by fixing fish hooks to the grenades so they stuck to the nets when thrown. The Germans had no usable tanks in the city, and those that still functioned could, at best, be used as makeshift pillboxes. The Soviets did not bother employing tanks in areas where urban destruction restricted their mobility. On 22 January, Rokossovsky once again offered Paulus a chance to surrender. Paulus requested that he be granted permission to accept the terms. He told Hitler that he was no longer able to command his men, who were without ammunition or food. Hitler rejected it on a point of honour. He telegraphed the 6th Army later that day, claiming that it had made a historic contribution to the greatest struggle in German history and that it should stand fast "to the last soldier and the last bullet". Hitler told Goebbels that the plight of the 6th Army was a "heroic drama of German history". On 24 January, in his radio report to Hitler, Paulus reported: "18,000 wounded without the slightest aid of bandages and medicines." On 26 January 1943, the German forces inside Stalingrad were split into two pockets north and south of Mamayev-Kurgan. 
The northern pocket, consisting of the VIIIth Corps under General Walter Heitz and the XIth Corps, was now cut off from telephone communication with Paulus in the southern pocket. Now "each part of the cauldron came personally under Hitler". On 28 January, the cauldron was split into three parts. The northern cauldron consisted of the XIth Corps, the central of the VIIIth and LIst Corps, and the southern of the XIVth Panzer Corps and IVth Corps "without units". The sick and wounded reached 40,000 to 50,000. On 30 January 1943, the 10th anniversary of Hitler's coming to power, Goebbels read out a proclamation that included the sentence: "The heroic struggle of our soldiers on the Volga should be a warning for everybody to do the utmost for the struggle for Germany's freedom and the future of our people, and thus in a wider sense for the maintenance of our entire continent." The same day, Hermann Göring broadcast from the air ministry, comparing the situation of the surrounded German 6th Army to that of the Spartans at the Battle of Thermopylae; the speech, however, was not well received by the soldiers. Paulus notified Hitler that his men would likely collapse before the day was out. In response, Hitler issued a tranche of field promotions to the Sixth Army's officers. He promoted Paulus to the rank of Generalfeldmarschall. In deciding to promote Paulus, Hitler noted that there was no record of a German or Prussian field marshal having ever surrendered. The implication was clear: if Paulus surrendered, he would shame himself and would become the highest-ranking German officer ever to be captured. Hitler believed that Paulus would either fight to the last man or commit suicide. On the next day, the southern pocket in Stalingrad collapsed. Soviet forces reached the entrance to the German headquarters in the ruined GUM department store. Major Anatoly Soldatov described the conditions in the department store basement: "It was unbelievably filthy, you couldn't get through the front or back doors, the filth came up to your chest, along with human waste and who knows what else. The stench was unbelievable." When interrogated by the Soviets, Paulus claimed that he had not surrendered. He said that he had been taken by surprise. He denied that he was the commander of the remaining northern pocket in Stalingrad and refused to issue an order in his name for them to surrender. No one with a camera was present to film the capture of Paulus, but Roman Karmen managed to record the first interrogation of Paulus, which took place the same day at Shumilov's 64th Army HQ and, a few hours later, at Rokossovsky's Don Front HQ. The central pocket, under the command of Heitz, surrendered the same day, while the northern pocket, under the command of General Karl Strecker, held out for two more days. Four Soviet armies were deployed against the northern pocket. At four in the morning on 2 February, Strecker was informed that one of his own officers had gone to the Soviets to negotiate surrender terms. Seeing no point in continuing, he sent a radio message saying that his command had done its duty and fought to the last man. When Strecker finally surrendered, he and his chief of staff, Helmuth Groscurth, drafted the final signal sent from Stalingrad, purposely omitting the customary exclamation to Hitler, replacing it with "Long live Germany!" Around 91,000 exhausted, ill, wounded, and starving prisoners were taken, including 22 generals. 
Hitler was furious and confided that Paulus "could have freed himself from all sorrow and ascended into eternity and national immortality, but he prefers to go to Moscow". Casualties The Axis suffered 747,300–868,374 combat casualties (killed, wounded or captured) among all branches of the German armed forces and their allies: 282,606 in the 6th Army from 21 August to the end of the battle; 17,293 in the 4th Panzer Army from 21 August to 31 January; and 55,260 in Army Group Don from 1 December 1942 to the end of the battle (12,727 killed, 37,627 wounded and 4,906 missing). Walsh estimates that the losses of the 6th Army and 4th Panzer Army were over 300,000; including other German army groups between late June 1942 and February 1943, total German casualties were over 600,000. Louis A. DiMarco estimated that the Germans suffered 400,000 total casualties (killed, wounded or captured) during the battle. According to Frieser et al., there were 109,000 Romanian casualties (from November to December 1942), including 70,000 captured or missing, while 114,000 Italians and 105,000 Hungarians were killed, wounded or captured (from December 1942 to February 1943). According to Stephen Walsh, Romanian casualties were 158,854; Italian casualties were 114,520 (84,830 killed or missing and 29,690 wounded); and Hungarian casualties were 143,000 (80,000 killed or missing and 63,000 wounded). Losses among the Hiwis, or Hilfswillige (Soviet POW turncoats serving with the German forces), range between 19,300 and 52,000. In total, 235,000 German and allied troops from all units, including Manstein's ill-fated relief force, were captured during the battle. It is estimated that more than 1 million soldiers and civilians combined were killed during the battle. Author William Craig, while researching for his book, stressed the incredible death toll of the battle: "Most appalling was the growing realization, formed by statistics I uncovered, that the battle was the greatest military bloodbath in recorded history. Well over a million men and women died because of Stalingrad, a number far surpassing the previous records of dead at the first battle of the Somme and Verdun in 1916." Historian Jochen Hellbeck described the lethality of the battle in similar terms: "The battle of Stalingrad—the most ferocious and lethal battle in human history—ended on February 2. With an estimated death toll in excess of a million, the bloodletting at Stalingrad far exceeded that of Verdun, one of the costliest battles of World War I." The Germans lost 900 aircraft (including 274 transports and 165 bombers used as transports), 500 tanks and 6,000 artillery pieces. According to a contemporary Soviet report, 5,762 guns, 1,312 mortars, 12,701 heavy machine guns, 156,987 rifles, 80,438 sub-machine guns, 10,722 trucks, 744 aircraft, 1,666 tanks, 261 other armoured vehicles, 571 half-tracks and 10,679 motorcycles were captured by the Soviets. The USSR, according to archival figures, suffered 1,129,619 total casualties: 478,741 personnel killed or missing and 650,878 wounded or sick. The USSR lost 4,341 tanks destroyed or damaged, 15,728 artillery pieces and 2,769 combat aircraft. 955 Soviet civilians died in Stalingrad and its suburbs from aerial bombing by Luftflotte 4 as the German 4th Panzer and 6th Armies approached the city. Luftwaffe losses The losses of transport planes were especially serious, as they destroyed the capacity to supply the trapped 6th Army. The destruction of 72 aircraft when the airfield at Tatsinskaya was overrun meant the loss of about 10 percent of the Luftwaffe transport fleet. 
These losses amounted to about 50 percent of the aircraft committed and the Luftwaffe training program was stopped and sorties in other theatres of war were significantly reduced to save fuel for use at Stalingrad. Aftermath The German public was not officially told of the impending disaster until the end of January 1943, though positive media reports had stopped in the weeks before the announcement. Stalingrad marked the first time that the Nazi government publicly acknowledged a failure in its war effort. On 31 January, regular programmes on German state radio were replaced by a broadcast of the sombre Adagio movement from Anton Bruckner's Seventh Symphony, followed by the announcement of the defeat at Stalingrad. On 18 February, Minister of Propaganda Joseph Goebbels gave the famous Sportpalast speech in Berlin, encouraging the Germans to accept a total war that would claim all resources and efforts from the entire population. Based on Soviet records, over 11,000 German soldiers continued to resist in isolated groups within the city for the next month. Some have presumed that they were motivated by a belief that fighting on was better than a slow death in Soviet captivity. Brown University historian Omer Bartov claims they were motivated by belief in Hitler and National Socialism. He studied 11,237 letters sent by soldiers inside of Stalingrad between 20 December 1942 and 16 January 1943 to their families in Germany. Almost every letter expressed belief in Germany's ultimate victory and their willingness to fight and die at Stalingrad to achieve that victory. Bartov reported that a great many of the soldiers were well aware that they would not be able to escape from Stalingrad, but in their letters to their families stated that they were proud to "sacrifice themselves for the Führer". A Soviet officer interviewed months after the battle, Nikolai Nikitich Aksyonov, described the scale of devastation and conflict at Stalingrad, stating that "As a historian, I tried to draw comparisons to battles I know from history: Borodino, Verdun during the Imperialist War, but none of that was right because the scale of conflict in Stalingrad makes it hard to compare it to anything. It seemed as if Stalingrad was breathing fire for days on end." Many German soldiers expressed in their letters that they were trapped in a "second Verdun", while Soviet defenders described the battle as their "Red Verdun", in which they would refuse to surrender to the enemy. However, a Soviet war correspondent reporting in October of 1942 remarked that "A city of peace has become a city of war. The laws of warfare have placed it on the front line, at the epicenter of a battle that will shape the outcome of the entire war. After sixty days of fighting the Germans now know what this means. 'Verdun!' they scoff. 'This is no Verdun. This is something new in the history of warfare. This is Stalingrad.' " The battle is not only infamous for being a military bloodbath, but also for its disregard for civilians by both sides. Sniper Vasily Zaytsev took note of atrocities that took place during the battle, stating that, "another time you see young girls, children hanging from trees in the park. . .It has tremendous impact." A sergeant in the 389th Infantry Division, noted during the battles for the Barrikady workers' settlement that "Russian women who came out of the houses with their bundles and then tried to seek shelter from the firing on the German side, were cut down from behind by Russian machine-gun fire." 
The bombing campaign and five months of fighting had destroyed 99% of the city, leaving it little more than a heap of rubble. Of the pre-battle population of more than half a million, a quick census found only 1,515 people remaining after the battle's conclusion. However, Beevor notes that a census revealed that 9,796 civilians had lived through the fighting, including 994 children. The remaining forces continued to resist, hiding in cellars and sewers, but by early March 1943 the last small and isolated pockets of resistance had surrendered. According to Soviet intelligence documents shown in the documentary, a remarkable NKVD report from March 1943 attests to the tenacity of some of these German groups. The operative report of the Don Front's staff was issued at 22:00 on 5 February 1943. The condition of the troops that surrendered was pitiful. British war correspondent Alexander Werth described the scene in his book Russia at War, based on a first-hand account of his visit to Stalingrad on 3–5 February 1943. Out of the nearly 91,000 German prisoners captured in Stalingrad, only about 5,000 returned. Weakened by disease, starvation and lack of medical care during the encirclement, they were sent on forced marches to prisoner camps and later to labour camps all over the Soviet Union. Some 35,000 were eventually sent on transports, of which 17,000 did not survive. Most died of wounds, disease (particularly typhus), cold, overwork, mistreatment and malnutrition. Some were kept in the city to help rebuild it. A handful of senior officers were taken to Moscow and used for propaganda purposes, and some of them joined the National Committee for a Free Germany. Some, including Paulus, signed anti-Hitler statements that were broadcast to German troops. Paulus testified for the prosecution during the Nuremberg Trials and assured families in Germany that those soldiers taken prisoner at Stalingrad were safe. He remained in the Soviet Union until 1952, then moved to Dresden in East Germany, where he spent the remainder of his days defending his actions at Stalingrad and was quoted as saying that Communism was the best hope for postwar Europe. General Walther von Seydlitz-Kurzbach offered to raise an anti-Hitler army from the Stalingrad survivors, but the Soviets did not accept the offer. It was not until 1955 that the last of the 5,000–6,000 survivors were repatriated (to West Germany) after a plea to the Politburo by Konrad Adenauer. Significance Stalingrad has been described as the greatest defeat in the history of the German Army. It is often identified as the turning point on the Eastern Front, in the war against Germany overall, and in the entire Second World War. The Red Army had the initiative, and the Wehrmacht was in retreat. A year of German gains during Case Blue had been wiped out. Germany's Sixth Army had ceased to exist, and the forces of Germany's European allies, except Finland, had been shattered. In a speech on 9 November 1944, Hitler himself blamed Stalingrad for Germany's impending doom. The destruction of an entire army (with nearly 1 million Axis soldiers killed, captured or wounded, the largest such figures of the war) and the frustration of Germany's grand strategy made the battle a watershed moment. At the time, the global significance of the battle was not in doubt. 
A Dresden newspaper wrote in early August that the battle would become the "most fateful battle of the war", and an article in September from the British Daily Telegraph shared the same sentiment. Joseph Goebbels expressed a similar view, declaring that the battle was a "question of life or death, and all of our prestige, just as that of the Soviet Union, will depend on how it will end." Writing in his diary on 1 January 1943, British General Alan Brooke, Chief of the Imperial General Staff, reflected on the change in the position from a year before. By this point, the British had won the Battle of El Alamein in November 1942. However, there were only about 50,000 German soldiers at El Alamein in Egypt, while at Stalingrad 300,000 to 400,000 Germans had been lost. Regardless of the strategic implications, there is little doubt about Stalingrad's symbolism. Germany's defeat shattered its reputation for invincibility and dealt a devastating blow to German morale. On 30 January 1943, the tenth anniversary of his coming to power, Hitler chose not to speak. Joseph Goebbels read the text of his speech for him on the radio. The speech contained an oblique reference to the battle, which suggested that Germany was now in a defensive war. The public mood was sullen, depressed, fearful, and war-weary. The reverse was the case on the Soviet side. There was an overwhelming surge in confidence and belief in victory. A common saying was: "You cannot stop an army which has done Stalingrad." Stalin was feted as the hero of the hour and made a Marshal of the Soviet Union. The news of the battle echoed round the world, with many people now believing that Hitler's defeat was inevitable. The Turkish Consul in Moscow predicted that "the lands which the Germans have destined for their living space will become their dying space". Britain's conservative Daily Telegraph proclaimed that the victory had saved European civilisation. Britain celebrated "Red Army Day" on 23 February 1943. A ceremonial Sword of Stalingrad was forged to the order of King George VI. After being put on public display in Britain, it was presented to Stalin by Winston Churchill at the Tehran Conference later in 1943. Soviet propaganda spared no effort and wasted no time in capitalising on the triumph, impressing a global audience. The prestige of Stalin, the Soviet Union, and the worldwide Communist movement was immense, and their political position was greatly enhanced. Commemoration In recognition of the determination of its defenders, Stalingrad was awarded the title Hero City in 1945. A colossal monument called The Motherland Calls was erected in 1967 on Mamayev Kurgan, the hill overlooking the city where bones and rusty metal splinters can still be found. The statue forms part of a war memorial complex which includes the ruins of the Grain Silo and Pavlov's House. On 2 February 2013 Volgograd hosted a military parade and other events to commemorate the 70th anniversary of the final victory. Since then, military parades in the city have continued to commemorate the victory. Every year, hundreds of bodies of soldiers who died in the battle are still recovered in the area around Stalingrad and reburied in the cemeteries at Mamayev Kurgan or Rossoshka. In popular culture The events of the Battle for Stalingrad have been covered in numerous media works of British, American, German, and Russian origin, owing to the battle's significance as a turning point in the Second World War and to the loss of life associated with it. 
The term Stalingrad has become almost synonymous with large-scale urban battles with high casualties on both sides.
See also
Barmaley Fountain
Hitler's Stalingrad speech
Italian participation in the Eastern Front
Soviet Black Sea Fleet during the Battle of Stalingrad
Stalingrad legal defense
External links
Detailed summary of campaign
Stalingrad battle Newsreels // Net-Film Newsreels and Documentary Films Archive
Story of the Stalingrad battle with pictures, maps, video and other primary and secondary sources
Volgograd State Panoramic Museum official homepage
The Battle of Stalingrad in Film and History (written with strong Socialist/Communist political undertones and overtones)
Roberts, Geoffrey. "Victory on the Volga", The Guardian, 28 February 2003
Stalingrad-info.com, Russian archival docs translated into English, original battle maps, aerial photos, pictures taken at the battlefields, relics collection
H-Museum: Stalingrad/Volgograd 1943–2003. Memory
Battle of Stalingrad Pictures
Images from the Battle of Stalingrad (Getty)
The photo album of a Wehrmacht NCO named Nemela of 9. Machine-Gewehr Bataillon (mot), with several unique photos of a parade and award ceremony for Wehrmacht personnel who survived the Battle of Stalingrad
Stalingrad Battle Data Project: order of battle, strength returns, interactive map
Stalingrad documentaries by the Army University Press
Stalingrad Battle Data documentary base
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes. Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code. The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3.2 billion bases long and to contain 20,000–25,000 distinct protein-coding genes. A kilobase (kb) is a unit of measurement in molecular biology equal to 1000 base pairs of DNA or RNA. The total number of DNA base pairs on Earth is estimated at 5.0 × 10^37, with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon). Hydrogen bonding and stability A G·C base pair is held together by three hydrogen bonds, while an A·T base pair is held together by two; these non-covalent hydrogen bonds join the bases across the two strands. Hydrogen bonding is the chemical interaction that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content. Crucially, however, stacking interactions are primarily responsible for stabilising the double-helical structure; Watson–Crick base pairing's contribution to global structural stability is minimal, but its role in the specificity underlying complementarity is, by contrast, of maximal importance as this underlies the template-dependent processes of the central dogma (e.g. DNA replication).
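The GC content referred to above is simply the fraction of bases in a sequence that are guanine or cytosine. As an illustration only (this sketch is not part of the source article; the function name and example sequence are hypothetical), the following Python snippet shows one way to compute it:

def gc_content(sequence: str) -> float:
    """Return the fraction of bases in the sequence that are G or C."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    return sum(1 for base in seq if base in "GC") / len(seq)

print(gc_content("ATGCGCGCTTAA"))  # hypothetical example sequence -> 0.5

All else being equal, a sequence returning a higher value would be expected to have a higher melting temperature, in line with the stability argument above.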
The larger nucleobases, adenine and guanine, are members of a class of double-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of single-ringed chemical structures called pyrimidines. Purines are complementary only with pyrimidines: pyrimidine–pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine–purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. Purine–pyrimidine base-pairing of AT or GC or UA (in RNA) results in proper duplex structure. The only other purine–pyrimidine pairings would be AC and GT and UG (in RNA); these pairings are mismatches because the patterns of hydrogen donors and acceptors do not correspond. The GU pairing, with two hydrogen bonds, does occur fairly often in RNA (see wobble base pair). Paired DNA and RNA molecules are comparatively stable at room temperature, but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is, therefore, unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. Conversely, regions of a genome that need to separate frequently, such as the promoter regions of often-transcribed genes, are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions. Examples The following DNA sequences illustrate base-paired double-stranded patterns. By convention, the top strand is written from the 5′-end to the 3′-end; thus, the bottom strand is written 3′ to 5′. A base-paired DNA sequence: The corresponding RNA sequence, in which uracil is substituted for thymine in the RNA strand: Base analogs and intercalators Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. This is due to their isosteric chemistry. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form. Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine. Mismatch repair Mismatched base pairs can be generated by errors of DNA replication and as intermediates during homologous recombination. The process of mismatch repair ordinarily must recognize and correctly repair a small number of base mispairs within a long sequence of normal DNA base pairs. To repair mismatches formed during DNA replication, several distinctive repair processes have evolved to distinguish between the template strand and the newly formed strand so that only the newly inserted incorrect nucleotide is removed (in order to avoid generating a mutation). The proteins employed in mismatch repair during DNA replication, and the clinical significance of defects in this process, are described in the article DNA mismatch repair.
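As a purely illustrative sketch of the pairing rules discussed above and of the kind of base mispair that mismatch repair must recognize (this code is not from the article; the sequences and function names are hypothetical), the following Python snippet scans a duplex for positions where the two strands are not Watson–Crick complementary:

WATSON_CRICK = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}

def find_mismatches(top: str, bottom: str) -> list[int]:
    """Return 0-based positions where top (5'->3') and bottom (3'->5') do not pair."""
    if len(top) != len(bottom):
        raise ValueError("strands must be the same length")
    return [i for i, pair in enumerate(zip(top.upper(), bottom.upper()))
            if pair not in WATSON_CRICK]

top_strand    = "ATCGATTGAGCT"   # hypothetical 5'->3' strand
bottom_strand = "TAGCTAACTCGG"   # 3'->5' strand with one deliberate error
print(find_mismatches(top_strand, bottom_strand))   # [11]: T opposite G at the last position

In a real repair context, recognizing the mispair is only the first step; the repair machinery must then decide which strand carries the error, as described in the mismatch repair passage above.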
The process of mispair correction during recombination is described in the article gene conversion. Length measurements The following abbreviations are commonly used to describe the length of a DNA or RNA molecule:
bp = base pair; one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, and to roughly 618 or 643 daltons for DNA and RNA respectively.
kb (= kbp) = kilobase pair = 1,000 bp
Mb (= Mbp) = megabase pair = 1,000,000 bp
Gb (= Gbp) = gigabase pair = 1,000,000,000 bp
For single-stranded DNA/RNA, units of nucleotides are used, abbreviated nt (or knt, Mnt, Gnt), as they are not paired. To distinguish between units of computer storage and bases, kbp, Mbp, Gbp, etc. may be used for base pairs. The centimorgan is also often used to indicate distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, the centimorgan is about 1 million base pairs. Unnatural base pair (UBP) An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. DNA sequences have been described which use newly created nucleobases to form a third base pair, in addition to the two base pairs found in nature, A-T (adenine – thymine) and G-C (guanine – cytosine). A few research groups have been searching for a third base pair for DNA, including teams led by Steven A. Benner, Philippe Marliere, Floyd E. Romesberg and Ichiro Hirao. Some new base pairs based on alternative hydrogen bonding, hydrophobic interactions and metal coordination have been reported. In 1989, Steven Benner (then working at the Swiss Federal Institute of Technology in Zurich) and his team led the way, incorporating modified forms of cytosine and guanine into DNA molecules in vitro. The nucleotides, which encoded RNA and proteins, were successfully replicated in vitro. Since then, Benner's team has been trying to engineer cells that can make foreign bases from scratch, obviating the need for a feedstock. In 2002, Ichiro Hirao's group in Japan developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that the genetic alphabet expansion significantly augments DNA aptamer affinities for target proteins. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides that form the unnatural base pair were named d5SICS and dNaM. More technically, these artificial nucleotides, bearing hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex, or base pair, in DNA. His team designed a variety of in vitro or "test tube" templates containing the unnatural base pair and confirmed that it was efficiently replicated with high fidelity in virtually all sequence contexts using the modern standard in vitro techniques, namely PCR amplification of DNA and PCR-based applications.
Their results show that for PCR and PCR-based applications, the d5SICS–dNaM unnatural base pair is functionally equivalent to a natural base pair, and when combined with the other two natural base pairs used by all organisms, A–T and G–C, they provide a fully functional and expanded six-letter "genetic alphabet". In 2014, the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA, known as a plasmid, containing natural T-A and C-G base pairs along with the best-performing UBP that Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. The transfection did not hamper the growth of the E. coli cells, and the cells showed no sign of losing the unnatural base pairs to their natural DNA repair mechanisms. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Romesberg said he and his colleagues created 300 variants to refine the design of nucleotides that would be stable enough and would be replicated as easily as the natural ones when the cells divide. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. Then, the natural bacterial replication pathways use them to accurately replicate a plasmid containing d5SICS–dNaM. Other researchers were surprised that the bacteria replicated these human-made DNA subunits. The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Experts said the synthetic DNA incorporating the unnatural base pair raises the possibility of life forms based on a different DNA code. Non-canonical base pairing In addition to the canonical pairing, some conditions can also favour base-pairing with alternative base orientation, and number and geometry of hydrogen bonds. These pairings are accompanied by alterations to the local backbone shape. The most common of these is the wobble base pairing that occurs between tRNAs and mRNAs at the third base position of many codons during translation and during the charging of tRNAs by some tRNA synthetases. Wobble pairs have also been observed in the secondary structures of some RNA sequences. Additionally, Hoogsteen base pairing (typically written as A•U/T and G•C) can exist in some DNA sequences (e.g. CA and TA dinucleotides) in dynamic equilibrium with standard Watson–Crick pairing. Hoogsteen pairs have also been observed in some protein–DNA complexes. In addition to these alternative base pairings, a wide range of base-base hydrogen bonding is observed in RNA secondary and tertiary structure. These bonds are often necessary for the precise, complex shape of an RNA, as well as its binding to interaction partners.
See also
List of Y-DNA single-nucleotide polymorphisms
Non-canonical base pairing
Chargaff's rules
External links
DAN, a webserver version of the EMBOSS tool for calculating melting temperatures
The Baltimore Ravens are a professional American football team based in Baltimore. The Ravens compete in the National Football League (NFL) as a member of the American Football Conference (AFC) North division. The team plays its home games at M&T Bank Stadium and is headquartered in Owings Mills, Maryland. The Baltimore Ravens were established in 1996 after Art Modell, then owner of the Cleveland Browns, announced plans in 1995 to relocate the franchise from Cleveland to Baltimore. As part of a settlement between the league and the city of Cleveland, Modell was required to leave the Browns' history, team colors, and records in Cleveland for a replacement team and replacement personnel that would resume play in 1999. In return, he was allowed to take his own personnel and team to Baltimore, where such personnel would form an expansion team. The team is now owned by Steve Bisciotti and valued at $2.98 billion, making the Ravens the 33rd-most valuable sports franchise in the world as of 2021. The Ravens have been one of the more successful franchises since their inception, compiling a regular season record of , the third-highest among active franchises. They are also tied for the fourth-highest playoff winning percentage at . The team has qualified for the NFL playoffs 14 times since 2000 with two Super Bowl titles (Super Bowl XXXV and Super Bowl XLVII), two AFC Championship titles (2000 and 2012), four AFC Championship game appearances (2000, 2008, 2011 and 2012) and six AFC North division titles (2003, 2006, 2011, 2012, 2018, and 2019). They are one of two teams to be undefeated in multiple Super Bowl appearances, along with the Tampa Bay Buccaneers. The Ravens organization was led by general manager Ozzie Newsome from 1996 until his retirement following the 2018 season, and has had three head coaches: Ted Marchibroda, Brian Billick, and since 2008, John Harbaugh. Starting with a record-breaking defensive performance in their 2000 season, the Ravens have established a reputation for strong defensive play throughout team history. Former players such as middle linebacker Ray Lewis, safety Ed Reed, and offensive tackle Jonathan Ogden have been enshrined in the Pro Football Hall of Fame. History Team name The name "Ravens" was inspired by Edgar Allan Poe's poem The Raven. Chosen in a fan contest that drew 33,288 voters, the allusion honors Poe who spent the early part of his career in Baltimore and is buried there. As The Baltimore Sun reported at the time, fans also "liked the tie-in with the other birds in town, the Orioles, and found it easy to visualize a tough, menacing black bird". Edgar Allan Poe also had distant relatives who played football for the Princeton Tigers in the 1880s through the early 1900s. These brothers were famous players in the early days of American football. Before the football team, there was the Baltimore Ravens wheelchair basketball team — the original Baltimore Ravens. In 1972, the Ravens wheelchair basketball team was founded by Ralph Smith, long-term resident of Baltimore, second Vice President of the National Wheelchair Basketball Association (NWBA) and Member of the NWBA Hall of Fame. The name "Ravens" was inspired by Bob Ardinger, a member of the Ravens wheelchair basketball team. In the 1990s, the naming rights were later sold to the football team when they came to the city and the wheelchair basketball team became known as the Maryland Ravens, Inc. 
Background After the controversial relocation of the Colts to Indianapolis, several attempts were made to bring an NFL team back to Baltimore. In 1993, ahead of the 1995 league expansion, the city was considered a favorite, behind only St. Louis, to be granted one of two new franchises. League officials and team owners feared litigation due to conflicts between rival bidding groups if St. Louis was awarded a franchise. In October Charlotte, North Carolina was the first city chosen. Several weeks later, Baltimore's bid for a franchise—dubbed the Baltimore Bombers, in honor of the locally produced Martin B-26 Marauder bomber—had three ownership groups in place and a state financial package which included a proposed $200 million, rent-free stadium and permission to charge up to $80 million in personal seat license fees. Baltimore, however, was unexpectedly passed over in favor of Jacksonville, Florida, despite Jacksonville's minor TV market status and that the city had withdrawn from contention in the summer, only to return with then-Commissioner Paul Tagliabue's urging. Although league officials denied that any city had been favored, it was reported that Tagliabue and his longtime friend Washington Redskins owner Jack Kent Cooke had lobbied against Baltimore due to its proximity to Washington, D.C., and that Tagliabue had used the initial committee voting system to prevent the entire league ownership from voting on Baltimore's bid. This led to public outrage and The Baltimore Sun describing Tagliabue as having an "Anybody But Baltimore" policy. Maryland governor William Donald Schaefer said afterward that Tagliabue had led him on, praising Baltimore and the proposed owners while working behind-the-scenes to oppose Baltimore's bid. By May 1994, Baltimore Orioles owner Peter Angelos had gathered a new group of investors, including author Tom Clancy, to bid on teams whose owners had expressed interest in relocating. Angelos found a potential partner in Georgia Frontiere, who was open to moving the Los Angeles Rams to Baltimore. Jack Kent Cooke opposed the move, intending to build the Redskins' new stadium in Laurel, Maryland, close enough to Baltimore to cool outside interest in bringing in a new franchise. This led to heated arguments between Cooke and Angelos, who accused Cooke of being a "carpetbagger." The league eventually persuaded Rams team president John Shaw to relocate to St. Louis instead, leading to a league-wide rumor that Tagliabue was again steering interest away from Baltimore, a claim which Tagliabue denied. In response to anger in Baltimore, including Governor Schaefer's threat to announce over the loudspeakers Tagliabue's exact location in Camden Yards any time he attended a Baltimore Orioles game, Tagliabue remarked of Baltimore's financial package: "Maybe (Baltimore) can open another museum with that money." Following this, Angelos made an unsuccessful $200 million bid to bring the Tampa Bay Buccaneers to Baltimore. Having failed to obtain a franchise via the expansion, the city, despite having "misgivings," turned to the possibility of obtaining the Cleveland Browns, whose owner Art Modell was financially struggling and at odds with the city of Cleveland over needed improvements to the team's stadium. 
Return of American football in Baltimore Enticed by Baltimore's available funds for a first-class stadium and a promised yearly operating subsidy of $25 million, Modell announced on November 6, 1995, his intention to relocate the team from Cleveland to Baltimore the following year. The resulting controversy ended when representatives of Cleveland and the NFL reached a settlement on February 8, 1996. Tagliabue promised the city of Cleveland that an NFL team would be located in Cleveland, either through relocation or expansion, "no later than 1999". Additionally, the agreement stipulated that the Browns' name, colors, uniform design and franchise records would remain in Cleveland. The franchise history includes Browns club records and connections with Pro Football Hall of Fame players. Modell's Baltimore team, while retaining all current player contracts, would, for purposes of team history, appear as an expansion team, a new franchise. Not all players, staff or front office would make the move to Baltimore, however. After relocation, Modell hired Ted Marchibroda as the head coach for his new team in Baltimore. Marchibroda was already well known because of his work as head coach of the Baltimore Colts during the 1970s and the Indianapolis Colts during the early 1990s. Ozzie Newsome, the Browns' tight end for many seasons, joined Modell in Baltimore as director of football operations. He was later promoted to vice-president/general manager. The home stadium for the Ravens first two seasons was Baltimore's Memorial Stadium, previously home to the Baltimore Colts, the Baltimore Orioles, and the Canadian Football League’s Baltimore Stallions. The Ravens moved to their own new stadium, now known as M&T Bank Stadium, next to Camden Yards in 1998. The early years and Ted Marchibroda era (1996–1998) In the 1996 NFL Draft, the Ravens, with two picks in the first round, drafted offensive tackle Jonathan Ogden at No. 4 overall and linebacker Ray Lewis at No. 26 overall. Both Ogden and Lewis went on to play for the Ravens for their entire professional careers and were both inducted into the Pro Football Hall of Fame. The 1996 Ravens won their opening game against the Oakland Raiders, but finished the season 4–12 despite receiver Michael Jackson leading the league with 14 touchdown catches. The 1997 Ravens started 3–1. Peter Boulware, a rookie defender from Florida State, recorded 11.5 sacks and was named AFC Defensive Rookie of the Year. The team finished 6–9–1. On October 26, the team made its first trip to Landover, Maryland to play their new regional rivals, the Washington Redskins. The Ravens won the game 20–17. On December 14, 1997, the Ravens played the final professional sporting event at Baltimore’s historic Memorial Stadium, winning 21–19 over the Tennessee Oilers. 1998 marked the opening of a new stadium for the Ravens, currently known as M&T Bank Stadium, but originally named “PSINet Stadium” after the now-defunct internet service provider which purchased the original naming rights. Quarterback Vinny Testaverde left for the New York Jets before the season, and was replaced by former Indianapolis Colt Jim Harbaugh, and later Eric Zeier. Cornerback Rod Woodson joined the team after a successful stint with the Pittsburgh Steelers, and Priest Holmes started getting the first playing time of his career and ran for 1,000 yards. The Ravens finished 1998 with a 6–10 record. On November 29, the Ravens welcomed the Colts back to Baltimore for the first time in 15 years. 
Amidst a shower of boos directed at the Colts, the Ravens won 38–31. Brian Billick era (1999–2007) Three consecutive losing seasons under Marchibroda led to a change at head coach. Brian Billick took over as head coach in 1999. Billick had been offensive coordinator for the record-setting Minnesota Vikings the season before. Quarterback Tony Banks came to Baltimore from the St. Louis Rams and had the best season of his career with 17 touchdown passes and an 81.2 passer rating. He was joined by receiver Qadry Ismail, who posted a 1,000-yard season. The Ravens initially struggled with a record of 4–7 but managed to finish with an 8–8 record. Due to continual financial hardships for the organization, the NFL took the unusual step of directing Modell to initiate the sale of his franchise. On March 27, 2000, NFL owners approved the sale of 49% of the Ravens to Steve Bisciotti. In the deal, Bisciotti had an option to purchase the remaining 51% for $325 million in 2004 from Art Modell. On April 9, 2004, the NFL approved Steve Bisciotti's purchase of the majority stake in the club. 2000: Super Bowl XXXV champions Banks shared playing time in the 2000 regular season with Trent Dilfer. Both players put up decent numbers (and a 1,364-yard rushing season by rookie Jamal Lewis helped too), but the defense became the team's hallmark and bailed out a struggling offense many times during the season. Ray Lewis was named Defensive Player of the Year. Two of his defensive teammates, Sam Adams and Rod Woodson, made the Pro Bowl. Baltimore's season started strong with a 5–1 record. But the team struggled through mid-season, at one point going five games without scoring an offensive touchdown. The team regrouped and won each of their last seven games, finishing 12–4 and making the playoffs for the first time. During the 2000 season, the Ravens' dominating defense broke two notable NFL records. They held opposing teams to 165 total points, surpassing the 1985 Chicago Bears mark of 198 points for a 16-game season as well as the 1986 Chicago Bears mark of 187 points, which at that time was the NFL record. These marks, along with the defense's outstanding play, place the 2000 Ravens in the discussion as one of the greatest NFL defenses of all time, alongside the 1985 Chicago Bears, 2002 Tampa Bay Buccaneers, and 2015 Denver Broncos defenses. Since the divisional rival Tennessee Titans had a record of 13–3, the Ravens had to play in the wild card round. They dominated the Denver Broncos 21–3 in their first game. In the divisional playoff, they went on the road to Tennessee. With the score tied 10–10 in the fourth quarter, an Al Del Greco field goal attempt was blocked and returned for a touchdown by Anthony Mitchell, and a Ray Lewis interception return for a score put the game squarely in Baltimore's favor. The 24–10 win put the Ravens in the AFC Championship against the Oakland Raiders. The game was rarely in doubt. Shannon Sharpe's 96-yard touchdown catch early in the second quarter, followed by an injury to Raiders quarterback Rich Gannon, were crucial as the Ravens won easily, 16–3. Baltimore then went to Tampa for Super Bowl XXXV against the New York Giants. The Ravens' defense carried them to a win. They recorded four sacks and forced five turnovers, one of which was a Kerry Collins interception returned for a touchdown by Duane Starks.
The Giants' only score was a Ron Dixon kickoff return for a touchdown; however, the Ravens immediately countered with a touchdown return on the ensuing kickoff by Jermaine Lewis. The Ravens became champions with a 34–7 win. 2001–2007 In 2001, the Ravens attempted to defend their title with Elvis Grbac as their new starting quarterback, but a season-ending injury to Jamal Lewis on the first day of training camp and poor offensive performances stymied the team. After a 3–3 start, the Ravens defeated the Minnesota Vikings in the final week to clinch a wild card berth at 10–6. In the first round the Ravens showed flashes of their previous year with a 20–3 win over the Miami Dolphins, in which the team forced three turnovers and out-gained the Dolphins 347 yards to 151. In the divisional playoff the Ravens played the Pittsburgh Steelers. Three interceptions by Grbac ended the Ravens' season, as they lost 27–10. Baltimore ran into salary cap problems entering the 2002 season and was forced to part with a number of impact players. In the NFL Draft, the team selected Ed Reed with the 24th overall pick. Reed would go on to become one of the best safeties in NFL history, making nine Pro Bowls before leaving the Ravens for the Houston Texans in 2013. Despite low expectations, the Ravens stayed somewhat competitive in 2002 until a losing streak in December eliminated any chance of a postseason berth, and the team finished 7–9. In 2003, the Ravens drafted their new quarterback, Kyle Boller, but he was injured midway through the season and was replaced by Anthony Wright. Jamal Lewis ran for 2,066 yards (including a then-NFL record 295 yards in one game against the Cleveland Browns on September 14). With a 10–6 record, Baltimore won their first AFC North division title. Their first playoff game, at home against the Tennessee Titans, went back and forth, with the Ravens being held to only 54 yards total rushing. The Titans won 20–17 on a late field goal, and Baltimore's season ended early. Ray Lewis was also named Defensive Player of the Year for the second time in his career. Bisciotti, a local businessman who had made his fortune in the temporary staffing field, had purchased 49% of the team in 2000; in April 2004, Art Modell sold his remaining 51% ownership to Bisciotti, ending over 40 years of tenure as an NFL franchise owner. The Ravens did not make the playoffs in 2004 and finished the season with a record of 9–7, with Boller spending the season at quarterback. They did get good play from veteran corner Deion Sanders and third-year safety Ed Reed, who won the NFL Defensive Player of the Year award. They were also the only team to defeat the 15–1 Pittsburgh Steelers in the regular season. The next off-season, the Ravens looked to augment their receiving corps (which was second-worst in the NFL in 2004) by signing Derrick Mason from the Titans and drafting Oklahoma wide receiver Mark Clayton in the first round of the 2005 NFL Draft. However, the Ravens ended their season 6–10. The 2006 Baltimore Ravens season began with the team trying to improve on their 6–10 record of 2005. The Ravens, for the first time in franchise history, started 4–0, under the leadership of former Titans quarterback Steve McNair. The Ravens then lost two straight games mid-season due to offensive troubles, prompting coach Billick to dismiss offensive coordinator Jim Fassel during their week seven bye.
After the bye, and with Billick calling the offense, Baltimore would record a five-game win streak before losing to the Cincinnati Bengals in week 13. Still ranked second overall to first-place San Diego Chargers, the Ravens continued on. They defeated the Kansas City Chiefs, and held the defending Super Bowl champion Pittsburgh Steelers to only one touchdown at Heinz Field, allowing the Ravens to clinch the AFC North. The Ravens ended the regular season with a franchise-best 13–3 record. Baltimore had secured the AFC North title, the No. 2 AFC playoff seed, and clinched a 1st-round bye by season's end. The Ravens were slated to face the Indianapolis Colts in the second round of the playoffs, in the first meeting of the two teams in the playoffs. Many Baltimore and Indianapolis fans saw this historic meeting as a sort of "Judgment Day" with the new team of Baltimore facing the old team of Baltimore (the former Baltimore Colts having left Baltimore under questionable circumstances in 1984). Both Indianapolis and Baltimore were held to scoring only field goals as the two defenses slugged it out all over M&T Bank Stadium. McNair threw two costly interceptions, including one at the 1-yard line. The eventual Super Bowl champion Colts won 15–6, ending Baltimore's season. The Ravens hoped to improve upon their 13–3 record but injuries and poor play plagued the team. The Ravens finished the 2007 season in the AFC North cellar with a disappointing 5–11 record. A humiliating 22–16 overtime loss to the previously winless Miami Dolphins on December 16 ultimately led to Billick's dismissal after the end of the regular season. He was replaced by John Harbaugh, the special teams coach of the Philadelphia Eagles and the older brother of former Ravens quarterback Jim Harbaugh (1998). John Harbaugh/Joe Flacco era (2008–2018) 2008: Arrival of Harbaugh and Flacco With rookies at head coach (John Harbaugh) and quarterback (Joe Flacco), the Ravens entered the 2008 campaign with much uncertainty. Baltimore smartly recovered in 2008, winning eleven games and achieving a wild card spot in the postseason. On the strength of four interceptions, one resulting in an Ed Reed touchdown, the Ravens began its postseason run by winning a rematch over Miami 27–9 at Dolphin Stadium on January 4, 2009, in a wild-card game. Six days later, they advanced to the AFC Championship Game by avenging a Week 5 loss to the Titans 13–10 at LP Field on a Matt Stover field goal with 53 seconds left in regulation time. The Ravens fell one victory short of Super Bowl XLIII by losing to the Steelers 23–14 at Heinz Field on January 18, 2009. 2009–2011 In 2009, the Ravens won their first three matches, then lost the next three, including a close match in Minnesota. The rest of the season was an uneven string of wins and losses, which included a home victory over Pittsburgh in overtime followed by a Monday Night loss in Green Bay. That game was notable for the number of penalties committed, costing a total of 310 yards, and almost tying with the record set by Tampa Bay and Seattle in 1976. Afterwards, the Ravens easily crushed the Lions and Bears, giving up less than ten points in both games. The next match was against the Steelers, where Baltimore lost a close one before beating the Raiders to end the season. With a record of 9–7, the team finished second in the division and gained another wild card. 
Moving into the playoffs, they overwhelmed the Patriots; nevertheless they did not reach the AFC Championship because they were routed 20–3 by the Colts in the Divisional Round a week later. Baltimore managed to beat the Jets 10–9 in the 2010 opener, but then lost a poorly played game against Cincinnati the following week. The Ravens rebounded against the other two division teams, beating Cleveland 24–17 in Week 3 and then Pittsburgh the following week. The Ravens scored a convincing 31–17 win at home against Denver in Week 5. The Ravens finished the season 12–4, second in the division due to a tiebreaker with Pittsburgh, and earned a wild card spot. Baltimore headed to Kansas City and defeated the Chiefs 30–7, but once again were knocked from the playoffs by Pittsburgh in a hard-fought game. The Ravens hosted their arch-rival Pittsburgh in Week 1 of the 2011 season. On a hot, humid day in M&T Bank Stadium, crowd noise and multiple Steelers mistakes allowed Baltimore to crush them 35–7. The frustrated Pittsburgh players also committed several costly penalties. Thus, the Ravens gained their first-ever victory over the Steelers with Ben Roethlisberger playing, avenging repeated regular-season and postseason losses in the series. But in Week 2, the Ravens collapsed in Tennessee and lost 26–13. They rebounded by routing the Rams in Week 3 and then overpowering the Jets 34–17 in Week 4. The Ravens had a bye in Week 5, followed by a game against the Texans. But in Week 7, Baltimore suffered a stunning Monday Night Football upset in Jacksonville, where they were held to one touchdown in a 12–7 loss. Their final scoring drive failed as Joe Flacco threw an interception in the closing seconds of the game. After beating the Cincinnati Bengals in Week 17 of the regular season, the Ravens advanced to the playoffs as the number 2 seed in the AFC with a record of 12–4. They gained the distinction of AFC North champions over Pittsburgh (12–4) due to a tie-breaker. The Ravens' Lee Evans was stripped of a 14-yard touchdown pass by the Patriots' Sterling Moore with 22 seconds left, and Ravens kicker Billy Cundiff pushed a 32-yard field goal attempt wide left on fourth down, as the Patriots held on to beat the Ravens 23–20 in the AFC Championship Game and advance to Super Bowl XLVI. 2012: Ray Lewis' final season and second Super Bowl victory The Ravens' attempt to convert Joe Flacco into a pocket passer remained a work in progress as the 2012 season began. Terrell Suggs suffered a tendon injury during an off-season basketball game and was unable to play for at least several weeks. In the opener on September 10, Baltimore routed Cincinnati 44–13. After this easy win, the team headed to Philadelphia, but lost 24–23. When the Ravens returned home for a primetime rematch of the AFC Championship, another bizarre game ensued. New England picked apart the Baltimore defense (which was considerably weakened without Terrell Suggs and some other players lost over the off-season) for the first half. Trouble began early in the game when a streaker ran out onto the field and had to be tackled by security, and accelerated when, at 2:18 in the 4th quarter, the referees made a holding call on RG Marshal Yanda. Enraged fans repeatedly chanted an obscenity at this penalty. The Ravens finally drove downfield, and on the last play of the game, Justin Tucker kicked a 27-yard field goal to win the game 31–30, capping off a second intense and controversially officiated game in a row for the Ravens.
The Ravens would win the AFC North with a 10–6 record, but finished 4th in the AFC playoff seeding, and thus had to play a wild-card game. After defeating the Indianapolis Colts 24–9 at home (the final home game of Ray Lewis), the Ravens traveled to Denver to play against the top-seeded Broncos. In a very back-and-forth contest, the Ravens pulled out a 38–35 victory in two overtimes. They then won their 2nd AFC championship by coming back from a 13–7 halftime deficit to defeat the Patriots once again, 28–13. The Ravens played in Super Bowl XLVII against the San Francisco 49ers. Baltimore built a 28–6 lead early in the third quarter before a partial power outage in the Superdome suspended play for 34 minutes (earning the game the added nickname of the Blackout Bowl). After play resumed, San Francisco scored 17 unanswered third-quarter points to cut the Ravens' lead to 28–23, and continued to chip away in the fourth quarter. With the Ravens leading late in the game, 34–29, the 49ers advanced to the Baltimore 7-yard line just before the two-minute warning but turned the ball over on downs. The Ravens then took an intentional safety in the waning moments of the game to preserve the victory. Baltimore quarterback Joe Flacco, who completed 22 of 33 passes for 287 yards and three touchdowns, was named Super Bowl MVP. 2013–2018 Entering 2013 as the defending Super Bowl champions, the Ravens played their first season in franchise history without Ray Lewis. The Ravens started out 3–2 and handed the 2–0 Houston Texans the first loss of what became a 14-game losing streak by beating them 30–9 in Week 3. However, the Ravens then lost three straight games, falling to the Green Bay Packers and Pittsburgh Steelers on last-minute field goals and losing 24–18 to the Cleveland Browns after a failed attempt to tie the game. After splitting their next two games, the Ravens stood at 4–6, but then won four straight: a 19–3 win over the Jets, a 22–20 win over the Steelers on Thanksgiving, a dramatic 29–26 finish in Baltimore against the Vikings, and an 18–16 win at Detroit capped by Justin Tucker's 61-yard game-winning field goal. The Ravens were then 8–6 and held the 6th seed, but they lost their final two games while the San Diego Chargers won their last two to claim the seed; Baltimore finished 8–8 and missed the playoffs for the first time since 2007. On January 27, 2014, the Ravens hired former Houston Texans head coach Gary Kubiak to be their new offensive coordinator after Jim Caldwell accepted the head coaching job with the Detroit Lions. On February 15, 2014, star running back Ray Rice and his fiancée Janay Palmer were arrested and charged with assault after a physical altercation at Revel Casino in Atlantic City, New Jersey. Celebrity news website TMZ posted a video of Rice dragging Palmer's body out of an elevator after apparently knocking her out. For the incident, Rice was initially suspended for the first two games of the 2014 NFL season on July 25, 2014, which led to widespread criticism of the NFL. In Week 1, on September 7, the Baltimore Ravens lost to the Cincinnati Bengals, 23–16. The next day, on September 8, 2014, TMZ released additional footage from an elevator camera showing Rice punching Palmer. The Baltimore Ravens terminated Rice's contract as a result, and he was later suspended indefinitely by the NFL, although a judge later vacated the indefinite suspension. In Week 12, the Ravens traveled to New Orleans for an interconference matchup with the Saints, which the Ravens won.
In Week 16, the Ravens traveled to Houston to take on the Texans. In one of Flacco's worst performances, the offense sputtered against the Houston defense and Flacco threw three interceptions, falling to the Texans 25–13. With their playoff chances and season hanging in the balance, the Ravens took on the Browns in Week 17 at home. Down 10–3 after three quarters, Joe Flacco led the Ravens on a comeback, scoring 17 unanswered points to win 20–10. With the win, and the Kansas City Chiefs defeating the San Diego Chargers, the Ravens clinched their sixth playoff berth in seven seasons. In the wild card round, the Ravens won 30–17 against their divisional rivals, the Pittsburgh Steelers, at Heinz Field. In the next game, in the Divisional Round, the Ravens faced the New England Patriots. Despite a strong offensive effort and holding a 14-point lead twice in the game, the Ravens were defeated by the Patriots 35–31, ending their season. The 2015 season marked the franchise's 20th season in the NFL, which the team recognized with a special badge worn on its uniforms that year. The Ravens lost key players such as Joe Flacco, Justin Forsett, Terrell Suggs, Steve Smith Sr., and Eugene Monroe to season-ending injuries. Injuries and their inability to win close games early in the season led to the first losing season in the Harbaugh-Flacco era. The 2016 Ravens finished 8–8, but failed to qualify for the playoffs for the second straight year. They were eliminated from playoff contention after their Week 16 loss to their division rivals, the Steelers. This was the first time the Ravens missed the playoffs in consecutive seasons since 2004–2005, as well as the first in the Harbaugh/Flacco era. During the 2017 season, the Ravens improved upon their 8–8 record from 2016 by one win, finishing the season 9–7 and missing the playoffs for the third year in a row. This marked the first time the Ravens failed to make the playoffs in three straight seasons since the team's first three years of existence (1996–1998). A home loss to the Cincinnati Bengals in the final game of the season prevented them from earning a playoff berth. Lamar Jackson era (2018–present) The Ravens drafted QB Lamar Jackson with the 32nd pick in the 2018 draft. After the team started the season with a 4–5 record, Jackson took over as the starting QB in Week 11 when Joe Flacco was sidelined with a hip injury. The team won six of its next seven games, finishing the 2018 season with a 10–6 record and winning the AFC North, giving them their first playoff appearance since 2014 and their first division title since 2012. The Ravens lost to the Los Angeles Chargers in the Wild Card round with Jackson at quarterback, making him the youngest QB in NFL history to start a playoff game. At the conclusion of the season, Ozzie Newsome stepped down as the team's general manager. He was replaced by longtime assistant Eric DeCosta. On March 13, 2019, the Ravens traded Joe Flacco to the Denver Broncos in exchange for a fourth-round pick in the 2019 NFL Draft. That season, Lamar Jackson led the Ravens to a franchise-best 14–2 record, including a 12-game winning streak to finish the regular season. On December 22, they clinched home-field advantage for the first time in franchise history following a win over the Cleveland Browns. On December 8, Jackson became only the second player in NFL history to rush for over 1,000 yards from the quarterback position.
Four days later, Jackson broke Michael Vick's single-season quarterback rushing record of 1,037 yards. Thirteen Ravens were selected to the 2019 Pro Bowl, matching the all-time NFL record. The Ravens finished the 2019 regular season with 3,296 rushing yards, the most rushing yards by any team in NFL history during a season, and they became the first team in NFL history to average at least 200 passing yards and 200 rushing yards per game in the same season. Despite earning the number-one seed in the playoffs, the Ravens were eliminated by the sixth-seeded Tennessee Titans in the Divisional Round of the playoffs, 28–12. Lamar Jackson was unanimously voted AP NFL MVP, becoming only the second player in NFL history to do so, after Tom Brady in 2010. In 2020, the Ravens went 6–5 in their first 11 games, but rebounded and finished the season 11–5, taking second place in the AFC North and earning a Wild Card playoff berth with the fifth seed. They also led the NFL in rushing yards for the second year in a row during the regular season, with 3,071 yards. In the Wild Card round, they defeated the fourth-seeded Tennessee Titans in Nashville, 20–13. In the Divisional Round, they fell to the second-seeded Buffalo Bills, 17–3. In 2021, the Ravens extended their run of consecutive preseason wins to 20, claiming the record previously held by Vince Lombardi's Green Bay Packers. In Week 3 of the 2021 season, against the Detroit Lions, Justin Tucker put his name in the NFL record books by kicking a 66-yard field goal, the longest in the history of the National Football League. The kick won the game and was five yards longer than his previous career long of 61 yards, which had also been kicked in Detroit. The following week, the Ravens tied the NFL record for consecutive 100-yard rushing games by a team, with 43, in a win over the Denver Broncos, equaling the mark set by the 1974–77 Pittsburgh Steelers. The team reached an 8–3 record by Week 12, but ended the season on a six-game losing streak to finish 8–9, missing the playoffs and coming in last in the AFC North. Jackson sustained an ankle injury during the Week 14 loss to the Browns and did not appear in any subsequent games. Rivalries Pittsburgh Steelers By far the team's biggest rival is the Pittsburgh Steelers. Pittsburgh and Baltimore are separated by a less-than-5-hour drive along Interstate 70. Both teams are known for their hard-hitting physical style of play. They play twice a year in the AFC North, and have met four times in the playoffs. Pittsburgh leads the all-time series, 30–24, and holds a 3–1 advantage in the four matchups in the postseason. Games between the two teams usually come down to the wire; most meetings in the last five years have been decided by fewer than four points. The rivalry is considered one of the most significant and intense in the NFL today. Other AFC North rivals The Ravens also have divisional rivalries with the Cleveland Browns and Cincinnati Bengals. The rivalry with the Browns has been very one-sided; Baltimore holds an advantage of 33–11 against Cleveland. The rivalry with Cincinnati has been closer, with the Ravens holding a slight edge in the all-time series, 27–25. New England Patriots The Ravens first met the New England Patriots in 1996, but the rivalry truly started in 2007 when the Ravens suffered a bitter 27–24 loss in the Patriots' quest for perfection.
The rivalry began to escalate in 2009 when the Patriots beat the Ravens 27–21 in a game that involved a confrontation between Patriots quarterback Tom Brady and Ravens linebacker Terrell Suggs. Both players would go on to take verbal shots at each other through the media after the game. While the Patriots lead the overall series, 11–4, the teams have split four postseason meetings, 2–2. The Ravens won the 2009 Wild Card Round, 33–14, and the 2012 AFC Championship Game, 28–13. The Patriots won the 2011 AFC Championship Game 23–20 and the 2014 Divisional Round, 35–31. Tennessee Titans Although it reemerged in the late 2010s, the rivalry actually started in the early 2000s, when both teams were in the AFC Central and played a series of tough, bitter games; the Ravens gave the Titans their first-ever loss at the new Adelphia Coliseum during the 2000 season and later eliminated Tennessee in the playoffs. Fans and analysts have noted an emerging rivalry between the Baltimore Ravens and the Tennessee Titans of the AFC South. While there is no known animosity between the cities of Baltimore and Nashville, games between their respective teams have become heated and included fiery verbal exchanges between coaches and players. Logo controversy The team's first helmet logo, used from 1996 through the 1999 Pro Bowl, featured raven wings outspread from a shield displaying a letter B framed by the word Ravens overhead and a cross bottony underneath. The US Fourth Circuit Court of Appeals affirmed a jury verdict that the logo infringed on a copyright retained by Frederick E. Bouchat, an amateur artist and security guard in Maryland, but that he was entitled to only three dollars in damages from the NFL. Bouchat had submitted his design to the Maryland Stadium Authority by fax after learning that Baltimore was to acquire an NFL team. He was not credited for the design when the logo was announced. Bouchat sued the team, claiming to be the designer of the emblem; representatives of the team asserted that the image had been designed independently. The court ruled in favor of Bouchat, noting that team owner Modell had access to Bouchat's work. Bouchat's fax had gone to John Moag, the Maryland Stadium Authority chairman, whose office was located in the same building as Modell's. Bouchat ultimately was not awarded monetary compensation in the damages phase of the case. The Baltimore Sun ran a poll showing three designs for new helmet logos. Fans participating in the poll expressed a preference for a raven's head in profile over other designs. Art Modell announced that he would honor this preference but still wanted a letter B to appear somewhere in the design. The new Ravens logo, introduced in 1999, featured a raven's head in profile with the letter B superimposed. The secondary logo is a shield that honors Baltimore's history of heraldry. Alternating Calvert and Crossland emblems (seen also in the flag of Maryland and the flag of Baltimore) are interlocked with stylized letters B and R. Uniforms The design of the Ravens uniform has remained essentially unchanged since the team's inaugural season in 1996. Art Modell admitted to ESPN's Roy Firestone that the Ravens' colors, introduced in early 1996, were inspired by the Northwestern Wildcats' 1995 dream season. Helmets are black with purple "talon" stripes rising from the facemask to the crown. Players normally wear purple jerseys at home and white jerseys on the road.
In 1996 the team wore black pants with a single large white stripe for all games. In 1997 the Ravens opted for a more classic NFL look with white pants sporting stripes in purple and black, along with the jerseys sporting a different font for the uniform numbers. The white pants were worn with both home and road jerseys. The road uniform (white pants with white jerseys) was worn by the Ravens in Super Bowl XXXV, at the end of the 2000 NFL season. This all-white combination was originally worn with black socks, but starting in 2021, the Ravens began wearing white hosiery with the all-white uniform. In the 2002 season the Ravens began the practice of wearing white jerseys for the home opener that has a 1:00 kickoff. In recent seasons, the practice has come when the home game is played in week one. Since John Harbaugh became the head coach in 2008, the Ravens have also worn their white jerseys at home for preseason games. In November 2004 the team introduced an alternate uniform design featuring black jerseys and solid black pants with black socks. The all-black uniform was first worn for a home game against the Cleveland Browns, entitled "Pitch Black" night, that resulted in a Ravens win. The uniform has since been worn for select prime-time national game broadcasts and other games of significance. The Ravens began wearing black pants again with the white jersey in 2008. On December 7, 2008, during a Sunday Night Football game against the Washington Redskins, the Ravens introduced a new combination of black jersey with white pants. It was believed to be due to the fact that John Harbaugh doesn't like the "blackout" look. However, on December 19, 2010, the Ravens wore their black jerseys and black pants in a 30–24 victory over the New Orleans Saints. Since 2010, the Ravens have worn their black jerseys at least twice each season. From 2011 to 2013 and again in 2015, they wore the all blacks once and the black on white once. In 2014 and 2016, they wore all black both times they wore alternate uniforms. In 2017, they wore all black twice and black on white once (although the league is supposed to limit teams to wearing alternate jerseys a maximum of two times a season). On December 5, 2010, the Ravens reverted to the black pants with the purple jerseys versus the Pittsburgh Steelers during NBC's Sunday Night Football telecast. The Ravens lost to the Steelers 13–10. They wore the same look again for their game against the Cleveland Browns on December 24, 2011, and they won, 20–14. They wore this combination a third time against the Houston Texans on January 15, 2012, in the AFC Divisional playoff. They won 20–13. They would again wear this combination on January 6, 2013, during the AFC Wild Card playoff and what turned out to be Ray Lewis' final home game, where they defeated the Indianapolis Colts 24–9. From their inaugural season until 2006, the Ravens wore white cleats with their uniforms; they switched to black cleats in 2007. From the mid-2010s onward, the NFL relaxed its rules regarding primary cleat colors, and Ravens players began wearing customized cleats in either purple, black, gold or white. On December 20, 2015, the team unexpectedly debuted gold pants for the first time, wearing them with their regular purple jerseys against the Kansas City Chiefs. 
Although gold is an official accent color of the Ravens, the pants drew an overwhelmingly negative response on social media from both Ravens fans and fans of other NFL teams, with some comparing them to the rival Pittsburgh Steelers' pants and to mustard. During the 2015 season, the NFL announced a jersey promotion called Color Rush in which teams would wear uniforms that were typically one color from head to toe during select prime-time games. The promotion was used three times that season; each of the featured games was played on a Thursday night, with both teams wearing the uniforms. The following season, the league released uniforms for all 32 teams and announced they would be worn during all Thursday Night games that year, as well as on Christmas. The Ravens had one Thursday Night game in 2016; they wore their all-purple Color Rush uniforms and won 28–7 over the division rival Cleveland Browns. They had one other Thursday Night game the following season, in which they again wore the jerseys and won 40–0 over the Miami Dolphins. In their Christmas 2016 game against the Steelers, the Ravens wore their regular all-white uniforms while their rivals wore their Color Rush uniforms. On September 13, 2018, the Ravens debuted a new combination in a road game against the Cincinnati Bengals, wearing white jerseys with purple pants. The purple pants are similar to the ones used for Color Rush except that they have side stripes of black and white; the Color Rush purple pants have gold and white stripes. Then on October 21 against the New Orleans Saints, the Ravens paired their new purple pants with their regular purple uniforms. Black socks were originally worn with this combination, but on January 2, 2022, the Ravens wore purple socks with the regular all-purple combination against the Los Angeles Rams, essentially replicating their Color Rush uniforms but with minimal gold elements. For the 2018 regular season finale against the Browns on December 30, the Ravens wore their black uniforms with purple pants. The Ravens wore this combination again on October 11, 2021, against the Indianapolis Colts on Monday Night Football in a 31–25 overtime win. Marching band The team marching band is called Baltimore's Marching Ravens. They began as the Colts' marching band and have operated continuously from September 7, 1947, to the present. They helped campaign for football to return to Baltimore after the Colts moved. Because they stayed in Baltimore after the Colts left, the band is nicknamed "the band that would not die" and was the subject of an episode of ESPN's 30 for 30. The Washington Commanders are the only other NFL team that currently has a marching band. Players of note Current roster Pro Football Hall of Fame Note: The following lists players who officially played for the Ravens. For other Hall of Famers, players whose numbers were retired, and players who played for the Baltimore Colts, see Indianapolis Colts. A bold number denotes a player inducted as a member of the Ravens. For Cleveland Browns players, including those in the Hall of Fame and those whose numbers were retired, see Cleveland Browns. Retired numbers The Ravens do not have officially retired numbers. However, the number 19 has not been issued out of respect for Baltimore Colts quarterback Johnny Unitas, except for quarterback Scott Mitchell in his lone season in Baltimore in 1999. 
In addition, numbers 75, 52, 20, 55, and 73, in honor of Jonathan Ogden, Ray Lewis, Ed Reed, Terrell Suggs, and Marshal Yanda respectively, have not been issued since those players' retirements from football. Ring of Honor The Ravens have a "Ring of Honor" which is on permanent display encircling the field of M&T Bank Stadium. The ring currently honors 20 members, including eight former members of the Baltimore Colts. Key/Legend First-round draft picks The team's first draft was the 1996 NFL Draft, where they selected UCLA offensive tackle Jonathan Ogden fourth overall and University of Miami linebacker Ray Lewis 24th overall. Both players won a Super Bowl with the team, earned numerous Pro Bowl and All-Pro selections, and are members of the Pro Football Hall of Fame. Along with their pick in the following year's draft, the fourth overall selection is the highest first-round draft pick the Ravens have had. In 1996, 2000, 2018 and 2020, the Ravens had two first-round draft picks (2018 was the only year in which the Ravens traded up during the draft). However, in 2004 they had none. Two of their first-round picks have made at least ten Pro Bowls. Team records Passing + = min. 500 attempts, # = min. 100 attempts, ∗ = min. 15 attempts Rushing ∗ = min. 15 attempts, # = min. 100 attempts, + = min. 500 attempts Receiving ∗ = min. 4 receptions, # = min. 20 receptions, + = min. 200 receptions Other Returns Kicking Defense Exceptional performances Other career records Most Tackles: Ray Lewis, ILB, 1,573 (1996–2012) Most Forced Fumbles: Terrell Suggs, EDGE, 28 (2003–2018) Longest Field Goal Made: Justin Tucker, 66 yards (2012–present) Longest Fumble Recovery: Marlon Humphrey, CB, 70 yards (November 3, 2019) All records as of December 18, 2019, per Pro-Football-Reference.com Staff Head coaches Ted Marchibroda (1996–1998) Brian Billick (1999–2007) John Harbaugh (2008–present) Current staff Broadcast media References Further reading External links Baltimore Ravens at the National Football League official website
The British National Party (BNP) is a far-right, British fascist political party in the United Kingdom. It is headquartered in Wigton, Cumbria, and is led by Adam Walker. A minor party, it has no elected representatives at any level of UK government. The party was founded in 1982, and reached its greatest level of success in the 2000s, when it had over fifty seats in local government, one seat on the London Assembly, and two Members of the European Parliament. Taking its name from that of a defunct 1960s far-right party, the BNP was created by John Tyndall and other former members of the fascist National Front (NF). During the 1980s and 1990s, the BNP placed little emphasis on contesting elections, in which it did poorly. Instead, it focused on street marches and rallies, creating the Combat 18 paramilitary—its name a coded reference to Nazi German leader Adolf Hitler—to protect its events from anti-fascist protesters. A growing 'moderniser' faction was frustrated by Tyndall's leadership, and ousted him in 1999. The new leader Nick Griffin sought to broaden the BNP's electoral base by presenting a more moderate image, targeting concerns about rising immigration rates, and emphasising localised community campaigns. This resulted in increased electoral growth throughout the 2000s, to the extent that it became the most electorally successful far-right party in British history. Concerns regarding financial mismanagement resulted in Griffin being removed as leader in 2014. By this point, the BNP's membership and vote share had declined dramatically, groups like Britain First and National Action had splintered off, and the English Defence League had supplanted it as the UK's foremost far-right group. Ideologically positioned on the extreme-right or far-right of British politics, the BNP has been characterised as fascist or neo-fascist by political scientists. Under Tyndall's leadership, it was more specifically regarded as neo-Nazi. The party is ethnic nationalist, and it once espoused the view that only white people should be citizens of the United Kingdom. It calls for an end to non-white migration into the UK. It called initially for the compulsory expulsion of non-whites but, since 1999, it has advocated voluntary removals with financial incentives. It promotes biological racism and the white genocide conspiracy theory, calling for global racial separatism and condemning interracial relationships. Under Tyndall, the BNP emphasised anti-semitism and Holocaust denial, promoting the conspiracy theory that Jews seek to dominate the world through both communism and international capitalism. Under Griffin, the party's focus switched from anti-semitism towards Islamophobia. It promotes economic protectionism, Euroscepticism, and a transformation away from liberal democracy, while its social policies oppose feminism, LGBT rights, and societal permissiveness. Operating around a highly centralised structure that gave its chair near total control, the BNP built links with far-right parties across Europe and created various sub-groups, including a record label and trade union. The BNP attracted most support from within White British working-class communities in northern and eastern England, particularly among middle-aged and elderly men. A poll in the 2000s suggested that most Britons favoured a ban on the party. It faced much opposition from anti-fascists, religious organisations, the mainstream media, and most politicians, and BNP members were banned from various professions. 
History John Tyndall's leadership: 1982–1999 The British National Party (BNP) was founded by the extreme-right political activist John Tyndall. Tyndall had been involved in neo-Nazi groups since the late 1950s before leading the far-right National Front (NF) throughout most of the 1970s. Following an argument with senior party member Martin Webster, he resigned from the NF in 1980. In June 1980 Tyndall established a rival, the New National Front (NNF). At the recommendation of Ray Hill—who was secretly an anti-fascist spy seeking to sow disharmony among Britain's far-right—Tyndall decided to unite an array of extreme-right groups as a single party. To this end, Tyndall established a Committee for Nationalist Unity (CNU) in January 1982. In March 1982, the CNU held a conference at the Charing Cross Hotel in London, at which 50 far-right activists agreed to the formation of the BNP. The BNP was formally launched on 7 April 1982 at a press conference in Victoria. Led by Tyndall, most of its early members came from the NNF, although others were defectors from the NF, British Movement, British Democratic Party, and Nationalist Party. Tyndall remarked that there was "scarcely any difference [between the BNP and NF] in ideology or policy save in the minutest detail", and most of the BNP's leading activists had formerly been senior NF figures. Under Tyndall's leadership the party was neo-Nazi in orientation and engaged in nostalgia for Nazi Germany. It adopted the NF's tactic of holding street marches and rallies, believing that these boosted morale and attracted new recruits. Their first march took place in London on St. George's Day 1982. These marches often involved clashes with anti-fascist protesters and resulted in multiple arrests, helping to cement the BNP's association with political violence and older fascist groups in the public eye. As a result, BNP organisers began to favour indoor rallies, although street marches continued to be held throughout the mid-to-late 1980s. In its early years, the BNP's involvement in elections was "irregular and intermittent", and for its first two decades it faced consistent electoral failure. It suffered from low finances and few personnel, and its leadership was aware that its electoral viability was weakened by the anti-immigration rhetoric of Conservative Party Prime Minister Margaret Thatcher. In the 1983 general election the BNP stood 54 candidates, although it only campaigned in five seats. Although it was able to air its first party political broadcast, it averaged a vote share of 0.06% in the seats it contested. After the Representation of the People Act 1985 raised the electoral deposit to £500, the BNP adopted a policy of "very limited involvement" in elections. It abstained in the 1987 general election, and stood only 13 candidates in the 1992 general election. In a 1993 local by-election the BNP gained one council seat—won by Derek Beackon in the East London district of Millwall—after a campaign that played to local whites who were angry at the perceived preferential treatment received by Bangladeshi migrants in social housing. Following an anti-BNP campaign launched by local religious groups and the Anti-Nazi League, it lost this seat during the 1994 local elections. In the 1997 general election, it contested 55 seats and gained an average 1.4% of the vote. In the early 1990s, the paramilitary group Combat 18 (C18) was formed to protect BNP events from anti-fascists. 
In 1992, C18 carried out attacks on left-wing targets like an anarchist bookshop and the headquarters of the Morning Star. Tyndall was angered by C18's growing influence on the BNP's street activities, and by August 1993, C18 activists were physically clashing with other BNP members. In December 1993, Tyndall issued a bulletin to BNP branches declaring C18 to be a proscribed organisation, furthermore suggesting that it may have been established by agents of the state to discredit the party. To counter the group's influence among militant British nationalists, he secured the American white nationalist militant William Pierce as a guest speaker at the BNP's annual rally in November 1995. In the early 1990s, a "moderniser" faction emerged within the party, favouring a more electorally palatable strategy and an emphasis on building grassroots support to win local elections. They were impressed by the electoral gains made by a number of extreme-right parties in continental Europe—such as Jörg Haider's Austrian Freedom Party and Jean-Marie Le Pen's National Front—which had been achieved by both switching focus from biological racism to the perceived cultural incompatibility of different racial groups and by replacing anti-democratic platforms with populist ones. The modernisers called for community campaigns among the white working-class populations of London's East End, and Northern England. While the modernisers gained some concessions from the party's hard-liners, Tyndall opposed many of their ideas and sought to stem their growing influence. In his view, "we should not be looking for ways of applying ideological cosmetic surgery to ourselves in order to make our features more appealing to the public". Nick Griffin's leadership: 1999–2014 After the BNP's poor performance at the 1997 general election, opposition to Tyndall's leadership grew. The modernisers called the party's first leadership election, and in October 1999 Tyndall was ousted when two-thirds of those voting backed Nick Griffin, who offered an improved administration, financial transparency, and greater support for local branches. Often characterised as a political chameleon, Griffin had once been considered a party hardliner before switching allegiance to the modernisers in the late 1990s. In his youth, he had been involved in the NF as well as Third Positionist groups like Political Soldier and the International Third Position. Criticising his predecessors for fuelling the image of the BNP as "thugs, losers and troublemakers", Griffin inaugurated a period of change in the party. Influenced by Le Pen's National Front in France, Griffin sought to widen the BNP's appeal to individuals who were concerned about immigration but had not previously voted for the extreme-right. The BNP replaced Tyndall's policy of compulsory deportation of non-whites to a voluntary system whereby non-whites would be given financial incentives to emigrate. It downplayed biological racism and stressed the cultural incompatibility of different racial groups. This emphasis on culture allowed it to foreground Islamophobia, and following the September 11 attacks in 2001 it launched a "Campaign Against Islam". It stressed the claim that the BNP was "not a racist party" but an "organised response to anti-white racism". At the same time Griffin sought to reassure the party's base that these reforms were based on pragmatism and not a change in principle. 
Griffin also sought to shed the BNP's image as a single-issue party, by embracing a diverse array of social and economic issues. Griffin renamed the party's monthly newspaper from British Nationalist to The Voice of Freedom, and established a new journal, Identity. The party developed community-based campaigns, through which it targeted local issues, particularly in those areas with large numbers of skilled white working-class people who were disaffected with the Labour Party government. For instance, in Burnley it campaigned for lower speed limits on housing estates and against the closure of a local swimming bath, while in South Birmingham it targeted pensioners' concerns about youth gangs. In 2006 the party urged its activists to carry out local activities like cleaning up children's play areas and removing graffiti while wearing high-vis jackets emblazoned with the party logo. Griffin believed that Peak Oil and a growth in Third World migrants arriving in Britain would result in a BNP government coming to power by 2040. The close of the twentieth century produced more favourable conditions for the extreme-right in Britain as a result of increased public concerns about immigration and established Muslim communities coupled with growing dissatisfaction with the established mainstream parties. In turn, the BNP gained rapidly growing levels of support over the coming years. In July 2000, it came second in the council elections for the North End of the London Borough of Bexley, its best result since 1993. At the 2001 general election it gained 16% of the vote in one constituency and over 10% in two others. In the 2002 local elections the BNP gained four councillors, three of whom were in Burnley, where it had capitalised on white anger surrounding the disproportionately high levels of funding being directed to the Asian-dominated Daneshouse ward. This breakthrough generated public anxieties about the party, with a poll finding that six in ten supported a ban on it. In the 2003 local elections the BNP gained 13 additional councillors, including seven more in Burnley, having attained over 100,000 votes. Concerned that much of their potential vote was going to the UK Independence Party (UKIP), in 2003 the BNP offered UKIP an electoral pact but was rebuffed. Griffin then accused UKIP of being a Labour Party scheme to steal the BNP's votes. They invested much in the campaign for the 2004 European Parliament election, at which they gained 800,000 votes but failed to secure a parliamentary seat. In the 2004 local elections, they secured four more seats, including three in Epping. For the 2005 general election, the BNP expanded its number of candidates to 119 and targeted specific regions. Its average vote in the areas it contested rose to 4.3%. It gained significantly more support in three seats, achieving 10% in Burnley, 13% in Dewsbury, and 17% in Barking. In the 2006 local elections the party gained 220,000 votes, with 33 additional councillors, having averaged a vote share of 18% in the areas it contested. In Barking and Dagenham, it saw 12 of its 13 candidates elected to the council. At the 2008 London Assembly election, the BNP gained 130,000 votes, reaching the 5% mark and thus gaining an Assembly seat. At the 2009 European Parliament election, the party gained almost 1 million votes, with two of its candidates, Nick Griffin and Andrew Brons, being elected as Members of the European Parliament for North West England and Yorkshire and the Humber respectively. 
That election also saw extreme-right parties winning seats for various other EU member-states. This victory marked a major watershed for the party. Amid significant public controversy, Griffin was invited to appear on the BBC show Question Time in October 2009, the first time that the BNP had been invited to share a national television platform with mainstream panellists. Griffin's performance was however widely regarded as poor. Despite its success, there was dissent in the party. In 2007 a group of senior members known as the "December rebels" challenged Griffin, calling for internal party democracy and financial transparency, but were expelled. In 2008, a group of BNP activists in Bradford split to form the Democratic Nationalists. In November 2008, the BNP membership list was posted to WikiLeaks, after appearing briefly on a weblog. A year later, in October 2009, another list of BNP members was leaked. Eddy Butler then led a challenge to Griffin's leadership, alleging financial corruption, but he had insufficient support. The rebels who supported him split into two groups: one section remained as the internal Reform Group, the other left the BNP to form the British Freedom Party. By 2010, there was discontent among the party's grassroots, a result of the change to its white-only membership policy and rumours of financial corruption among its leadership. Some defected to the National Front or left to form parties like the Britannica Party. Anti-fascist groups like Hope not Hate had campaigned extensively in Barking to stop the area's locals voting for the BNP. At the 2010 general election, the BNP had hoped to make a breakthrough by gaining a seat in the House of Commons, although it failed to achieve this. It nevertheless gained the fifth largest national vote share, with 1.9% of the vote, representing the most successful electoral performance for an extreme-right party in UK history. In the 2010 local elections, it lost all of its councillors in Barking and Dagenham. Nationally, the party's number of councillors dropped from over fifty to 28. Griffin described the results as "disastrous". Decline: 2014–present In a 2011 leadership election, Griffin secured a narrow victory, beating Brons by nine votes of a total of 2,316 votes cast. In October 2012, Brons left the party, leaving Griffin as its sole MEP. In the 2012 local elections, the party lost all of its seats and saw its vote share fall dramatically; whereas it gained over 240,000 votes in 2008, this had fallen to under 26,000 by 2012. Commenting on the result, the political scientist Matthew Goodwin noted: "Put simply, the BNP's electoral challenge is over." In the 2012 London mayoral election, the BNP candidate came seventh, with 1.3% of first-preference votes, its poorest showing in the London mayoral contest. The 2012 election results established that the BNP's steady growth had ended. In the 2013 local elections, the BNP fielded 99 candidates but failed to win any council seats, leaving it with only two. In June 2013, Griffin visited Syria along with members of Hungarian far-right party Jobbik to meet with government officials, including the Speaker of the Syrian People's Assembly, Mohammad Jihad al-Laham, and the Prime Minister Wael Nader al-Halqi. Griffin claims he was influential in the speaker of Syria's Parliament writing an open letter to British MPs urging them to "turn Great Britain from the warpath" by not intervening in the Syrian conflict. Griffin lost his European Parliament seat in the May 2014 European election. 
The party blamed the UK Independence Party for its decline, accusing the latter of stealing BNP policies and slogans. In July 2014, Griffin resigned and was succeeded by Adam Walker as acting chairman. In October, Griffin was expelled from the party for "trying to cause disunity [in the party] by deliberately fabricating a state of crisis". In January 2015, membership of the party numbered 500, down from 4,220 in December 2013. At the general election in 2015, the BNP fielded eight candidates, down from 338 in 2010. The party's vote share declined 99.7% from its 2010 result. In January 2016, the Electoral Commission de-registered the BNP for failing to pay its annual registration fee of £25. At this time, it was estimated that BNP assets totalled less than £50,000. According to the commission, "BNP candidates cannot, at present, use the party's name, descriptions or emblems on the ballot paper at elections." A month later, the party was re-registered. There were ten BNP candidates at the general election in 2017. At the 2018 local elections, the party's last remaining councillor—Brian Parker of Pendle—decided not to stand for re-election, leaving the party without representation at any level of UK government. The BNP fielded only one candidate, David Furness, at the 2019 general election in Hornchurch and Upminster, where he came last. Ideology Far-right politics, fascism, and neo-Nazism Many academic historians and political scientists have described the BNP as a far-right party, or as an extreme-right party. As the political scientist Matthew Goodwin used it, the term referred to "a particular form of political ideology that is defined by two anti-constitutional and anti-democratic elements: first, right-wing extremists are extremist because they reject or undermine the values, procedures and institutions of the democratic constitutional state; and second they are right-wing because they reject the principle of fundamental human equality". Various political scientists and historians have described the BNP as being fascist in ideology. Others have instead described it as neo-fascist, a term which the historian Nigel Copsey argued was more exact. Academic observers—including the historians Copsey, Graham Macklin, and Roger Griffin, and the political theologian Andrew P. Davey—have argued that Nick Griffin's reforms were little more than a cosmetic process to obfuscate the party's fascist roots. According to Copsey, under Griffin the BNP was "fascism recalibrated — a form of neo-fascism — to suit contemporary sensibilities". Macklin noted that despite Griffin's 'modernisation' project, the BNP retained its ideological continuity with earlier fascist groups and thus had not transformed itself into a genuinely "post-fascist" party. In this it was distinct from parties like the Italian National Alliance of Gianfranco Fini, which has been credited with successfully shedding its fascist past and becoming post-fascist. The anti-fascist activist Gerry Gable referred to the BNP as a "Nazi organisation", while the Anti-Nazi League published leaflets describing the BNP as the "British Nazi Party". Copsey suggested that while the BNP under Tyndall could be described as neo-Nazi, it was not "crudely mimetic" of the original German Nazism. Davey characterised the BNP as a "populist ethno-nationalist" party. In his writings, Griffin acknowledged that much of his 'modernisation' was an attempt to hide the BNP's core ideology behind more electorally palatable policies. 
As with the National Front, the BNP's private discourse differed from its public one, with Griffin stating that "Of course we must teach the truth to the hardcore... [but] when it comes to influencing the public, forget about racial differences, genetics, Zionism, historical revisionism and so on... we must at all times present them with an image of moderate reasonableness". The BNP has eschewed the labels "fascist" and "Nazi", stating that it is neither. In its 1992 electoral manifesto, it said that "Fascism was Italian. Nazism was German. We are British. We will do things our own way; we will not copy foreigners". In 2009, Griffin said that the term "fascism" was simply "a smear that comes from the far left"; he added that the term should be reserved for groups that engaged in "political violence" and desired a state that "should impose its will on people", claiming that it was the anti-fascist group Unite Against Fascism—and not the BNP—who were the real fascists. More broadly, many on Britain's extreme-right sought to avoid the term "British fascism" because of its electorally unpalatable connotations, utilising "British nationalism" in its place. After Griffin took control of the party, it made increasing use of nativist themes in order to emphasise its "British" credentials. In its published material, the party made appeals to the idea of Britain and Britishness in a manner not dissimilar to mainstream political parties. In this material it has also made prominent use of the Union flag and the colours red, white, and blue. Roger Griffin noted that the terms "Britain" and "England" appear "confusingly interchangeable" in BNP literature, while Copsey has pointed out that the BNP's form of British nationalism is "Anglo-centric". The party employed militaristic rhetoric under both Tyndall's and Griffin's leadership; under the latter, for example, its published material spoke of a "war without uniforms" and a "war for our survival as a people". Tyndall described the BNP as a revolutionary party, calling it a "guerrilla army operating in occupied territory". Ethnic nationalism and biological racism The BNP adheres to biological racist ideas, displaying an obsession with the perceived differences of racial groups. Both Tyndall and Griffin believed that there was a biologically distinct white-skinned "British race" which was one branch of a wider Nordic race, a view akin to those of earlier fascists such as Hitler and Arnold Leese. The BNP adheres to an ideology of ethnic nationalism. It promotes the idea that not all citizens of the United Kingdom belong to the British nation. Instead, it states that the nation only belongs to "the English, Scots, Irish and Welsh along with the limited numbers of peoples of European descent, who have arrived centuries or decades ago and who have fully integrated into our society". This is a group that Griffin referred to as the "home people" or "the folk". According to Tyndall, "The BNP is a racial nationalist party which believes in Britain for the British, that is to say racial separatism." In 1993 Richard Edmonds told The Guardian's Duncan Campbell that "we [the BNP] are 100% racist". The BNP does not regard UK citizens who are not ethnic white Europeans as "British", and party literature calls on supporters to avoid referring to such individuals as "Black Britons" or "Asian Britons", instead describing them as "racial foreigners". 
Tyndall believed the white British and the broader Nordic race to be superior to other races, and under his leadership, the BNP promoted pseudoscientific claims in support of white supremacy. Following Griffin's ascendency to power in the party, it officially repudiated racial supremacism and insisted that no racial group was superior or inferior to another. Instead it foregrounded an "ethno-pluralist" racial separatism, claiming that different racial groups had to be kept separate and distinct for their own preservation, maintaining that global ethno-cultural diversity was something to be protected. This switch in focus owed much to the discourse of the French Nouvelle Droite movement which had emerged within France's extreme-right during the 1960s. At the same time the BNP switched focus from openly promoting biological racism to stressing what it perceived as the cultural incompatibility of racial groups. It placed great focus on opposing what it referred to as "multiculturalism", characterising this as a form of "cultural genocide", and stating that it promoted the interests of non-whites at the expense of the white British population. However, internal documents produced and circulated under Griffin's leadership demonstrated that—despite the shift in its public statements—it remained privately committed to biological racist ideas. The party emphasises what it sees as the need to protect the racial purity of the white British. It condemns miscegenation and "race mixing", stating that this is a threat to the British race. Tyndall said that he "felt deeply sorry for the child of a mixed marriage" but had "no sympathy whatsoever for the parents". Griffin similarly stated that mixed-race children were "the most tragic victims of enforced multi-racism", and that the party would not "accept miscegenation as moral or normal ... we never will". In its 1983 election manifesto, the BNP stated that "family size is a private matter" but still called for white Britons who are "of intelligent, healthy and industrious stock" to have large families and thus raise the white British birth-rate. The encouragement of high birth rates among white British families continued under Griffin's leadership. Under Tyndall's leadership, the BNP promoted eugenics, calling for the forced sterilisation of those with genetically transmittable disabilities. In party literature, it talked of improving the British 'racial stock' by removing "inferior strains within the indigenous races of the British Isles". Tyndall argued that medical professionals should be responsible for determining whom to sterilise, while a lowering of welfare benefits would discourage breeding among those he deemed to be genetic inferiors. In his magazine Spearhead, Tyndall also stated that "the gas chamber system" should be used to eliminate "sub-human elements", "perverts", and "asocials" from British society. Anti-immigration and repatriation Opposition to immigration has been central to the BNP's political platform. It has engaged in xenophobic campaigns which emphasise the idea that immigrants and ethnic minorities are both different from, and a threat to, the white British and white Irish populations. In its campaign material it presented non-whites both as a source of crime in the UK, and as a socio-economic threat to the white British population by taking jobs, housing, and welfare away from them. It engaged in welfare chauvinism, calling for white Britons to be prioritised by the UK's welfare state. 
Party literature included such claims as that the BNP was the only party which could "do anything effective about the swamping of Britain by the Third World" or "lead the native peoples of Britain in our version of the New Crusade that must be organised if Europe is not to sink under the Islamic yoke". Much of its published material made claims about a forthcoming race war and promoted the conspiracy theory about white genocide. In a 2009 radio interview, Griffin referred to this as a "bloodless genocide". It presented the idea that white Britons are engaged in a battle against their own extinction as a racial group. It reiterated a sense of urgency about the situation, claiming that both high immigration rates and high birth rates among ethnic minorities were a threat to the white British. In 2010, for instance, it promoted the idea that at current levels, "indigenous Britons" would be a minority within the UK by 2060. The BNP calls for the non-white population of Britain to either be reduced in size or removed from the country altogether. Under Tyndall's leadership it promoted the compulsory removal of non-whites from the UK, stating that under a BNP government they would be "repatriated" to their countries of origin. In the early 1990s it produced stickers with the slogan "Our Final Solution: Repatriation". Tyndall understood this to be a two-stage process that would take ten to twenty years, with some non-whites initially leaving willingly and the others then being forcibly deported. During the 1990s, party modernisers suggested that the BNP move away from a policy of compulsory repatriation and toward a voluntary system, whereby non-white persons would be offered financial incentives to leave the UK. This idea, adopted from Powellism, was deemed more electorally palatable. When Griffin took control of the party, the policy of voluntary repatriation was officially adopted, with the party suggesting that this could be financed through the use of the UK's pre-existing foreign aid budget. It stated that any non-whites who refused to leave would be stripped of their British citizenship and categorised as "permanent guests", while continuing to be offered incentives to emigrate. Griffin's BNP also stressed its support for an immediate halt to non-white immigration into Britain and for the deportation of any migrants illegally in the country. Speaking on the BBC's Andrew Marr Show in 2009, Griffin declared that, unlike Tyndall, he "does not want all-white UK" because "nobody out there wants it or would pay for it". Anti-Semitism and Islamophobia Under Tyndall's leadership, the BNP was openly anti-Semitic. From A. K. Chesterton, Tyndall had inherited a belief that there was a global conspiracy of Jews bent on world domination, viewing the Protocols of the Elders of Zion as genuine evidence for this. He believed that Jews were responsible for both communism and international finance capitalism and that they were responsible for undermining the British Empire and the British race. He believed that both democratic government and immigration into Europe were parts of the Jewish conspiracy to weaken other races. In an early edition of Spearhead published in the 1960s, Tyndall wrote that "if Britain were to become Jew-clean she would have no nigger neighbours to worry about... It is the Jews who are our misfortune: T-h-e J-e-w-s. Do you hear me? THE JEWS?" 
Tyndall added Holocaust denial to the anti-Semitic beliefs inherited from Chesterton, believing that The Holocaust was a hoax created by the Jews to gain sympathy for themselves and thus aid their plot for world domination. Among those to endorse such anti-Semitic conspiracy theories was Griffin, who promoted them in his 1997 pamphlet, Who are the Mind Benders? Griffin also engaged in Holocaust denial, publishing articles promoting such ideas in The Rune, a magazine produced by the Croydon BNP. In 1998, these articles resulted in Griffin being convicted of inciting racial hatred. When Griffin took power, he sought to banish overt anti-Semitic discourse from the party. He informed party members that "we can get away with criticising Zionists, but any criticism of Jews is likely to be legal and political suicide". In 2006, he complained that the "obsession" that many BNP members had with "the Jews" was "insane and politically disastrous". In 2004, the party selected a Jewish candidate, Pat Richardson, to stand for it during local council elections, something Tyndall lambasted as a "gimmick". References to Jews in BNP literature were often coded to hide the party's electorally unpalatable anti-Semitic ideas. For instance, the term "Zionists" was often used in party literature as a euphemism for "Jews". As noted by Macklin, Griffin still framed many of his arguments "within the parameters of recognizably anti-Semitic discourse". The BNP's literature is replete with references to a conspiratorial group who have sought to suppress nationalist sentiment among the British population, who have encouraged immigration and mixed-race relationships, and who are promoting the Islamification of the country. This group is likely a reference to the Jews, being an old fascist canard. Sectors of the extreme-right were highly critical of Griffin's softening on the subject of the Jews, claiming that he had "sold out" to the 'Zionist Occupied Government'. In 2006, John Bean, editor of Identity, included an article in which he reassured BNP members that the party had not "sold out to the Jews" or "embraced Zionism" but that it remained "committed to fighting... subversive Jews". Under Griffin, the BNP's website linked to other web pages that explicitly portrayed immigration as part of a Jewish conspiracy, while it also sold books that promoted Holocaust denial. In 2004, secretly filmed footage was captured in which Griffin was seen claiming that "the Jews simply bought the West, in terms of press and so on, for their own political ends". Copsey noted that a "culture of anti-Semitism" still pervaded the BNP. In 2004, a London activist told reporters that "most of us hate Jews", while a Scottish BNP group was observed making Nazi salutes while shouting "Auschwitz". The party's Newcastle upon Tyne Central candidate compared the Auschwitz concentration camp to Disneyland, while their Luton North candidate stated her refusal to buy from "the kikes that run Tesco". In 2009, a BNP councillor from Stoke-on-Trent resigned from the party, complaining that it still contained Holocaust deniers and Nazi sympathisers. Griffin informed BNP members that rather than "bang on" about the Jews—which would be deemed extremist and prove electorally unpopular—their party should focus on criticising Islam, an issue that would be more resonant among the British public. After Griffin took over, the party increasingly embraced an Islamophobic stance, launching a "Campaign Against Islam" in September 2001. 
In Islam: A Threat to Us All, a leaflet distributed to London households in 2007, the BNP claimed that it would stand up to both Islamic extremism and "the threat that 'mainstream' Islam poses to our British culture". In contrast to the mainstream British view that the actions of militant Islamists—such as those who perpetrated the 7 July 2005 London bombings—are not representative of mainstream Islam, the BNP insists that they are. In some of its literature it presents the view that every Muslim in Britain is a threat to the country. Griffin referred to Islam as an "evil, wicked faith", and elsewhere publicly described it as a "cancer" that needed to be removed from Europe through "chemotherapy". The BNP has called for the prohibition of immigration from Muslim countries and for the banning of the burka, halal meat, and the building of new mosques in the UK. It also called for the immediate deportation of radical Islamist preachers from the country. In 2005 the party stated that its primary issue of concern was the "growth of fundamentalist-militant Islam in the UK and its ever-increasing threat to Western civilization and our implicit values". To broaden its anti-Islamic agenda, Griffin's BNP made overtures to the UK's Hindu, Sikh, and Jewish communities; Griffin's claim that Jews can make "good allies" in the fight against Islam caused controversy within the international far-right. Government Tyndall believed that liberal democracy was damaging to British society, claiming that liberalism was a "doctrine of decay and degeneration". Under Tyndall, the party sought to dismantle the UK's liberal democratic system of parliamentary governance, although it was vague about what it sought to replace this system with. In his 1988 work The Eleventh Hour, Tyndall wrote of the need for "an utter rejection of liberalism and a dedication to the resurgence of authority". Tyndall's BNP perceived itself as a revolutionary force that would bring about a national rebirth in Britain, entailing a radical transformation of society. It proposed a state in which the Prime Minister would have full executive powers and would be elected directly by the population for an indefinite period of time. This Prime Minister could be dismissed from office in a further election that could be called if Parliament produced a vote of no confidence in them. It stated that rather than having political parties, candidates standing for election to the parliament would be independent. During the period of Griffin's leadership, the party downplayed its anti-democratic themes and instead foregrounded populist ones. Its campaign material called for the devolution of greater powers to local communities, the reestablishment of county councils, and the introduction of citizens' initiative referendums based on those used in Switzerland. The BNP has maintained a hard Eurosceptic platform since its foundation. Under Tyndall's leadership, the BNP had overt anti-Europeanist tendencies. Throughout the 1980s and 1990s he maintained the party's opposition to the European Economic Community. Antagonism toward what became the European Union was retained under Griffin's leadership, which called for the UK to leave the Union. One of Vote Leave's biggest donors during the Brexit referendum was former BNP member Gladys Bramall, and the party has claimed that its anti-Establishment rhetoric "created the road" to Britain's vote to leave the European Union. 
Tyndall suggested replacing the EEC with a trading association among the "White Commonwealth", namely countries like Canada, Australia, and New Zealand. Tyndall held imperialist views and was sympathetic to the re-establishment of the British Empire through the recolonization of parts of Africa. However, officially the BNP had no plans to re-establish the British Empire or secure dominion over non-white nations. In the 2000s, it called for an immediate military withdrawal from both the Iraq War and the Afghan War. During his appearance on Question Time, Griffin described the Iraq War as "illegal", saying, "We shouldn't have gone into Iraq, we must never go into Iran, we should leave them alone." It has advocated ending overseas aid to provide economic support within the UK and to finance the voluntary repatriation of legal immigrants. Under Tyndall, the BNP rejected both Welsh nationalism and Scottish nationalism, stating that they were bogus because they caused division among the wider 'British race'. Tyndall also led the BNP in support of Ulster loyalism, for instance by holding public demonstrations against the Irish republican party Sinn Féin and endorsing Ulster loyalist paramilitaries. Under Griffin, the BNP continued to support Ulster's membership of the United Kingdom, calling for the crushing of the Irish Republican Army and the scrapping of the Anglo-Irish Agreement. Griffin later expressed the view that "the only solution that could possibly be acceptable to loyalists and republicans alike" would be the reintegration of the Irish Republic into the United Kingdom, which would be reorganised along federal lines. However, while Griffin retained the party's commitment to Ulster loyalism, he downplayed the importance of the issue, something that was criticised by Tyndall loyalists. Economic policy Tyndall described his approach to the economy as "National Economics", expressing the view that "politics must lead, and not be led by, economic forces". His approach rejected economic liberalism because it did not serve "the national interest", although he still saw advantages in a capitalist system and looked favourably on individual enterprise. He called for capitalist elements to be combined with socialist ones, with the government playing a role in planning the economy. He promoted the idea of the UK becoming an economically self-sufficient autarky, with domestic production protected from foreign competition. This attitude was heavily informed by the corporatist system that had been introduced in Benito Mussolini's Fascist Italy. A number of senior members, including Griffin and John Bean, had anti-capitalist leanings, having been influenced by Strasserism and National Bolshevism. Under Griffin's leadership, the BNP promoted economic protectionism and opposed globalisation. Its economic policies reflect a vague commitment to distributist economics, ethno-socialism, and national autarky. The BNP maintains a policy of protectionism and economic nationalism, although, in comparison with other far-right nationalist parties, it focuses less on corporatism. It has called for British ownership of its own industries and resources and the "subordination of the power of the City to the power of the government". It has promoted the regeneration of farming in the United Kingdom, with the object of achieving maximum self-sufficiency in food production. 
In 2002, the party criticised corporatism as a "mixture of big capitalism and state control", saying it favoured a "distributionist tradition established by home-grown thinkers" favouring small business. The BNP has also called for the renationalisation of the railways. When it comes to environmentalism, the BNP refers to itself as the "real green party", stating that the Green Party of England and Wales engages in "watermelon" politics by being green (environmentalist) on the outside but red (leftist) on the inside. Influenced by the Nouvelle Droite, it framed its arguments regarding environmentalism in an anti-immigration manner, talking about the need for 'sustainability'. It engages in climate change denial, with Griffin claiming that global warming is a hoax orchestrated by those trying to establish the New World Order. Social issues The BNP is opposed to feminism and has pledged that—if in government—it would introduce financial incentives to encourage women to leave employment and become housewives. It would also seek to discourage children being born out of wedlock. It has stated that it would criminalise abortion, except in cases where the child has been conceived as a result of rape, the mother's life is threatened, or the child will be disabled. There are nevertheless circumstances where it has altered this anti-abortion stance; an article in British Nationalist stated that a white woman bearing the child of a black man should "abort the pregnancy... for the good of society". More widely, the party censures inter-racial sex and accuses the British media of encouraging inter-racial relationships. Under Tyndall, the BNP called for the re-criminalisation of homosexual activity. Following Griffin's takeover, it moderated its policy on homosexuality. However, it opposed the 2004 introduction of civil partnerships for same-sex couples. During his 2009 Question Time appearance, Griffin described the sight of two men kissing as "for a lot of us (Christians)... really creepy". The party has also condemned the availability of pornography; its 1992 manifesto stated that the BNP would give the "pedlars of this filth... the criminal status that they deserve". The BNP promoted the reintroduction of capital punishment, and the sterilisation of some criminals. It also called for the reintroduction of national service in the UK, adding that on completion of this service adults would be permitted to keep their standard issue assault rifle. According to the academic Steven Woodbridge, the BNP had a "rather ambivalent attitude toward Christian belief and religious themes in general" during most of its history, but under Griffin's modernisation the party increasingly utilised Christian terminology and themes in its discourse. Various members of the party presented themselves as "true Christians", and defenders of the faith, with key ideologues stating that the religion has been "betrayed" and "sold out" by mainstream clergy and the British establishment. British Christianity, the BNP said, was under threat from Islam, Marxism, multiculturalism, and "political correctness". On analysing the BNP's use of Christianity, Davey argued that the party's emphasis was not on Christian faith itself, but on the inheritance of European Christian culture. The BNP long considered the mainstream media to be one of its major impediments to electoral success. 
Tyndall said that the media represented a "state above the state" which was committed to the "left-liberal" goals of internationalism, liberal democracy, and racial integration. The party has said that the mainstream media has given disproportionate coverage to the achievements of ethnic minority sportsmen and to the victims of anti-black racism while ignoring white victims of racial prejudice and the BNP's activities. Both Tyndall and Griffin have said that the mainstream media is controlled by Jews, who use it for their own ends; the latter promoted this idea in his Who are the Mind Benders? Griffin has described the BBC as "a thoroughly unpleasant, ultra-leftist establishment". The BNP has stated that if it took power, it would end "the dictatorship of the media over free debate". It said that it would introduce a law prohibiting the media from disseminating falsehoods about an individual or organisation for financial or political gain, and that it would ban the media from promoting racial integration. BNP policy pledges to protect freedom of speech, as part of which it would repeal all laws banning racial or religious hate speech. It would repeal the 1998 Human Rights Act and withdraw from the European Convention on Human Rights. Support Finances In contrast to the UK's mainstream parties, the BNP received few donations from businesses, wealthy donors, or trade unions. Instead it relied on finances raised from its membership. Under Tyndall, the party operated on a shoestring budget with a lack of transparency; in 1992 it collected £5,000 and in 1997 it collected £10,000. It also tried raising money by selling extreme-right literature, and opened a bookshop in Welling in 1989, although this was closed in 1996 after being attacked by anti-fascists and proving too costly to run. In 1992 the party formed a dining club of its wealthier supporters, which was renamed the Trafalgar Club in 2000. By the 1997 general election it admitted that its expenses had "far out-stripped" its income, and it was appealing for donations to pay off loans it had taken out. Griffin placed greater emphasis on fundraising, and from 2001 through to 2008 the BNP's annual turnover increased almost fivefold. Membership subscriptions grew from £35,000 to £166,000, while donations rose from £38,000 to £660,000. However, expenses also rose as the BNP spent more on its electoral campaigns, and the party reported a financial deficit in 2004 and again in 2005. Between 2007 and 2009 the BNP accumulated debts of £500,000. Membership For most of its history, the BNP had a whites-only membership policy. In 2009, the state's Equality and Human Rights Commission stated that this was a violation of the Race Relations Act 1976 and called on the party to amend its constitution accordingly. Responding to this, in early 2010 members voted to remove the racial restriction on membership, although it is unlikely that many non-whites joined. At its creation, the BNP had approximately 1,200 members. By the 1983 general election this had grown to approximately 2,500, although by 1987 it had slumped to 1,000, with no significant further growth until the 21st century. After taking control, Griffin began publishing the party's membership figures: 2,174 in 2001, 3,487 in 2002, 5,737 in 2003, and 7,916 in 2004. Membership dropped slightly to 6,281 in 2005, but had grown to 9,297 in 2007 and to 10,276 in spring 2010. 
In 2011, it was noted that this meant that the BNP had experienced the most rapid growth since 2001 of any minor party in the UK. A party membership list dating from late 2007 was leaked onto the internet by a disgruntled activist, containing the names and addresses of 12,000 members. This included names, addresses and other personal details. People on the list included prison officers (barred from BNP membership), teachers, soldiers, civil servants and members of the clergy. The leaked list indicated that membership was concentrated in particular areas, namely the East Midlands, Essex, and Pennine Lancashire, but with particular clusters in Charnwood, Pendle, and Amber Valley. Many of these areas had long been targeted by extreme-right campaigns, dating back to the NF activity of the 1970s, suggesting that such longstanding activism may have had an effect on levels of BNP membership. This information also revealed that membership was most likely in urban areas with low rates of educational attainment and large numbers of economically insecure people employed in manufacturing, with further correlations to nearby Muslim communities. Following an investigation by Welsh police and the Information Commissioner's Office, two people were arrested in December 2008 for breach of the Data Protection Act concerning the leak. Matthew Single was subsequently found guilty and fined £200 in September 2009. The 'low' fine was criticised as an "absolute disgrace" by a BNP spokesman and a detective sergeant involved said he was "disappointed" with the outcome, stating that people were fearful for their safety. More than 160 complaints were made nationally to police after attacks on BNP members and their property. The leaked membership list showed that the party was 17.22% female. While women have occupied key positions within the BNP, men dominated at every level of the party. In 2009, over 80% of the party's Advisory Council was male and from 2002 to 2009, three-quarters of its councillors were male. The average percentage of female candidates presented at local elections in 2001 was 6%, although this had risen to 16% by 2010. Since 2006, the party had made a point of selecting female candidates, with Griffin stating that this was necessary to "soften" the party's image. Goodwin suggested that membership fell into three camps: the "activist old guard" who had previously been involved in the NF during the 1970s, the "political wanderers" who had defected from other parties to the BNP, and the "new recruits" who had joined post-2001 and who had little or no political interest or experience beforehand. Having performed qualitative research among the BNP by interviewing various members, Goodwin noted that few of those he interviewed "conformed to the popular stereotypes of them being irrational and uninformed crude racists". He noted that most strongly identified with the working class and claimed to have either been former Labour voters or from a Labour-voting family. None of those interviewed claimed a family background in the ethnic nationalist movement. Instead, he noted that members said that they joined the party as a result of a "profound sense of anxiety over immigration and rising ethno-cultural diversity" in Britain, along with its concomitant impact on "British culture and society". He noted that among these members, the perceived cultural threat of immigrants and ethnic minorities was given greater prominence than the perceived economic threat that they posed to white Britons. 
He noted that in his interviews with them, members often framed Islam in particular as a threat to British values and society, expressing the fear that British Muslims wanted to Islamicise the country and eventually impose sharia on its population. Voter base Goodwin described the BNP's voters as being "socially distinct and concerned about a specific set of issues". Under Griffin's leadership, the party targeted areas with high proportions of skilled white working-class voters, particularly those who were disenchanted with the Labour government. It has attempted to appeal to disaffected Labour voters with slogans such as "We are the Labour Party your Grandfather Voted For". The BNP had little success in gaining support from women, the middle classes, and the more educated. Goodwin noted a "strong male bias" in the party's support base, with statistical polling revealing that between 2002 and 2006, seven out of ten BNP voters were male. That same research also indicated that BNP voters were disproportionately middle-aged and elderly, with three quarters being aged over 35, and only 11% aged between 18 and 24. This contrasted to the NF's support base during the 1970s, when 40% of its voters were aged between 18 and 24. Goodwin suggested two possibilities for the BNP's failure to appeal to younger voters: one was the 'life cycle effect', that older people have obtained more during their life and thus have more to lose, feeling both more threatened by change and more socially conservative in their views. The other explanation was the 'generational effect', with younger Britons who have grown up since the onset of mass immigration having had greater social exposure to ethnic minorities and thus being more tolerant toward them. Conversely, many older voters came of age during the 1970s, under the impact of the anti-immigrant rhetoric promoted by Powellism, Thatcherism, and the NF, and thus have less tolerant attitudes. Most BNP voters had no formal qualifications and the party's support was centred largely in areas with low educational attainment. According to the 2002–06 data, two-thirds of BNP voters had either no formal qualifications or had left education after their O-levels/GCSEs. Only one in ten BNP voters possessed an A-level, and an even smaller percentage had a university degree. Most of the BNP's voting base were from the financially insecure lower classes. Research conducted from 2002 to 2006 indicated that seven out of ten BNP voters were either skilled or unskilled workers or unemployed. A 2009 poll found that six out of ten BNP voters fitted this profile. Goodwin suggested that it was the skilled working classes rather than their unskilled or unemployed neighbours who were the main support base behind the BNP, because they owned some assets and thus felt that they had more to lose as a result of the economic threat posed by immigrants and ethnic minorities. Research indicated that BNP voters also held opinions that were distinct from the average British citizen. They were far more pessimistic about their economic prospects than average, with seven out of ten BNP voters expecting their economic prospects to decline in future, contrasted with four out of ten who held this view in the wider population. In the 2002–06 period, 59% of BNP voters considered immigration to be the most important issue facing the UK, compared with only 16% of the wider population who agreed. 
By 2009, 87% of BNP voters identified immigration and asylum as the most important issue, compared with 49% of the wider population. BNP voters were also more likely to identify law and order, the EU, and Islamic extremism as the most important issues facing the UK than other voters, and less likely than average to rate the economy, NHS, pensions, and housing market as the most important. BNP voters were also more likely than average to believe both that white Britons face unfair discrimination, and that Muslims, non-whites, and homosexuals had unfair advantages in British society. 78% of BNP voters endorsed the belief that the Labour Party prioritised immigrants and ethnic minorities over white British people, compared with 44% of the wider population. When asked questions about immigration and Muslims, BNP voters were found to be far more hostile to them than the average Briton, and also more willing than average to support outright racially discriminatory policies toward them. Copsey believed that "popular racism"—namely against asylum seekers and Muslims—generated the BNP's "largest reservoir of support", and that in many Northern English towns the main factors behind BNP support were white resentment toward Asian communities, anger at Asian-on-white crime, and the perception that Asians received disproportionately high levels of public funding. Research also indicated that BNP voters were more mistrustful of the establishment than average citizens. In 2002–06, 92% of BNP voters described themselves as being dissatisfied with the government, compared with 62% of the wider population. Over 80% of BNP voters were found to distrust their local Member of Parliament, council officials, and civil servants, and were also more likely than average to think that politicians were personally corrupt. There was also a tendency for BNP voters to read tabloids like the Daily Mail, Daily Express, and The Sun, all of which promote anti-immigration sentiment. Whether these voters gained such sentiment from reading these tabloids, or read them because they endorsed their pre-existing views, is unclear. The early stronghold of the BNP was in London, where it established enclaves of support in the boroughs of Enfield, Hackney, Lewisham, Southwark, and Tower Hamlets, with smaller units in Bexley, Camden, Greenwich, Hillingdon, Lambeth, and Redbridge. By the late 1990s, the party was increasingly retreating from its original East End heartland, finding that its electoral support had declined in the area. Griffin expressed the view that it was too dangerous for BNP activists to campaign in the East End, suggesting that they would likely be attacked by opponents. Instead, the party shifted its focus to parts of Outer London, in particular the boroughs of Barking, Bexley, Dagenham, Greenwich, and Havering. After Griffin took power, the party focused on building support in the North of England, taking advantage of the anxieties generated by the ethnic riots that took place in Bradford, Oldham, and Burnley in 2001. In the period between 2002 and 2006, over 40% of the BNP's voters were in Northern England. The decline of the BNP as an electoral force around 2014 helped to open the way for the growth of another right-wing party, UKIP. In a study Goodwin produced with Robert Ford, the two political scientists noted that UKIP's support base mirrored the BNP's in that it had the same "very clear social profile": the "old, male, working class, white and less educated". 
One area where the two differed, they noted, was in the fact that BNP support had been highest among the middle-aged before tailing off among the over 55s, whereas UKIP retained strong support with those over 55. Ford and Goodwin suggested that this might be because more over 55s had "direct or indirect experiences" of the Second World War, in which Britain defeated the fascist powers, resulting in them being less inclined to support fascist parties than their younger counterparts. Despite these commonalities, UKIP proved far more successful at mobilising these social groups than did the BNP. This was likely in part because UKIP had a "reputational shield"; it emerged from within the Eurosceptic tradition of British politics rather than from the far-right and thus, while often ridiculed by the mainstream, was regarded as a legitimate democratic actor in a way that the BNP was not. Organisation and structure On its formation, the BNP avoided the National Front's committee-rule system of collective leadership in the hope of evading the infighting and factionalism that had damaged the NF. Instead it was founded around what it called the "leadership principle", with a central chairman having complete control over the party, which was then arranged in a highly hierarchical structure. The BNP lacked internal democracy, with the grassroots membership having almost no formal power, except for electing the party leader. On taking power, Griffin retained the leadership principle inherited from Tyndall. He nevertheless established an Advisory Council which would meet several times a year; the members were to be selected by Griffin himself and would serve as his advisors. The party's branches and local groups were referred to as "units" within the party. These were designed to recruit followers, raise funds, and campaign during elections. Under Tyndall, the party operated with a skeleton organisation. It had no full-time staff and for most of the 1980s lacked a telephone number. Instead it relied on a handful of geographically scattered, unpaid regional organisers. Its early activists were recruited from within the extreme-right movement, and thus lacked the experience and skills in electoral campaigning. When Griffin took control, he introduced a variety of internal departments to help manage the party's activities: the administration and enquiries department, department for group development, legal affairs department, security department, and communications department. Griffin tried to build a more professional party machine by educating and training BNP members, providing them with incentives, establishing a steady income stream, and overcoming factionalism and dissent. He launched an "annual college" for activists in 2001 and formed an education and training department in 2007. In 2008 and 2010 he oversaw the establishment of "summer schools" for high-ranking officials. The party also began employing full-time members of staff, having three in 2001 and 13 in 2007. To incentivise members to remain committed to the party, Griffin followed the example of the Swedish National Democrats by implementing a new "voting membership" scheme in 2007. This meant that those who had been BNP members for two years could become a "voting member", at which they would go on a year's probation. During this year they were required to attend educational and training seminars, to engage in a certain amount of activism, and to donate a specified amount of money to the party. 
Once completed, they were allowed to vote on certain matters at general members' meetings and annual conferences, to participate in policy debates, and to be eligible for intermediate and senior positions. This policy ensured that those who reached the higher echelons of the BNP were fully trained in the party's ideology and electoral strategy. Sub-groups and propaganda output Griffin hoped to build a wider social movement around the BNP by establishing affiliated networks and organisations. In many cases, these were presented to the public in a way that concealed any direct connection to the BNP. Most of these affiliated groups were poorly funded and had few members. The party established its own record label, Great White Records, a radio station, and a trade union known as Solidarity – The Union for British Workers. It formed a group for young people known as the Young BNP, although in 2010 it renamed this group the BNP Crusaders, "to pay homage to our ancestors from the Middle Ages who saved Christian Europe from the onslaught of Islam". It established a Land and People group to recruit support in rural areas, a Family Circle to recruit women and families, and both a Veterans Group and an Association of British ex-Servicemen for former military servicemen. A group called Families Against Immigrant Racism was established to counter perceived racism against white Britons, while an Ethnic Liaison Committee was created to build links with anti-Muslim Hindu and Sikh groups active in Britain. Another group was the American Friends of the British National Party (AFBNP), set up by Mark Cotterill in 1999 to gain support from sympathisers in the United States. In 2001 it had 100 members, and by 2008 it had 107. A group called Islands of the North Atlantic (IONA) was established to promote the BNP's view of British culture and identity. The British Students Association was founded to promote the party's views among university students in 2000. Albion Life Insurance was set up in September 2006 as an insurance brokerage on behalf of the BNP to raise funds for its activities. The firm ceased to operate in November 2006. In 2006, the BNP launched the Christian Council of Britain (CCB), a group designed to rival the Muslim Council of Britain and oppose the growing "Islamification" of inner city areas. The CCB was established and run by BNP member Robert West, who claimed to have been ordained by the Apostolic Church, a claim that the church denied. West is a Calvinist and espouses a theology of nations which is influenced by Calvinist theologians like Abraham Kuyper, holding that God wishes every race and nation to remain separate until the end times. Griffin's BNP also established an annual Red, White and Blue festival, which was based on the 'Bleu Blanc Rouge' organised by France's National Front. The festival brought party activists together and aimed to promote a more family-friendly image for the group, although it also provided a venue for white power skinhead bands like Stigger, Nemesis and Warlord. Around 1,000 BNP members attended the party's 2001 festival. Under Griffin's leadership, the BNP zealously embraced the use of alternative media to promote itself in ways that countered the negative portrayal it received in the mainstream media. On its website—which had been established in 1995—it created an internet television channel, 'BNPtv'. It also created blogs that covered different themes without being explicitly political, in order to promote the party's message. 
The BNP established an online marketing platform, Excalibur, through which it sold its merchandise. In 2003, the BNP claimed that it had the most viewed website of a political party in Britain, and by 2011 was claiming to have the most viewed such website in Europe. In September 2007, The Daily Telegraph newspaper reported that Hitwise, the online competitive intelligence service, said that the BNP website had more hits than any other website of a British political party. Affiliations in the wider extreme-right Under Griffin, the BNP forged stronger links with various extreme-right parties elsewhere in Europe, among them France's National Front, Germany's National Democratic Party (NPD), Sweden's National Democrats, and Hungary's Jobbik. Griffin unsuccessfully urged the NPD to move away from neo-Nazism and embark on the same 'modernisation' project through which he had taken the BNP. Jean-Marie Le Pen of the French Front National was the guest of honour at an "Anglo-French Patriotic Dinner" held by the BNP in April 2004. Griffin met leaders of the Hungarian far-right party Jobbik to discuss co-operation between the two parties and spoke at a Jobbik party rally in August 2008. In April 2009, Simon Darby, deputy chairman of the BNP, was welcomed with fascist salutes by members of the Italian nationalist Forza Nuova during a trip to Milan. Darby stated that the BNP would look to form an alliance with France's Front National in the European Parliament. After the election of two BNP MEPs in 2009, the following year saw the BNP join with other extreme-right parties to form the Alliance of European National Movements, with Griffin becoming its vice president. The party also had close links with the Historical Review Press, a publisher focused on promoting Holocaust denial. Britain's extreme-right has long faced internal and public divisions. Disgruntled BNP members left the party to found or join a wide range of rivals, among them the British Freedom Party, White Nationalist Party, Nationalist Alliance, Wolf's Hook White Brotherhood, British People's Party, England First Party, Britain First, Democratic Nationalists, and the New Nationalist Party. Various BNP members were involved in the nascent English Defence League (EDL)—with EDL leader Tommy Robinson having been a former BNP activist—although Griffin proscribed the organisation and condemned it as having been manipulated by "Zionists". The political scientist Chris Allen noted that the EDL shared much of the BNP's ideology, but that its "strategies and actions" were very different, with the EDL favouring street marches over electoral politics. By 2014, both the BNP and EDL were in decline, and Britain First—founded by former BNP members James Dowson and Paul Golding—had risen to prominence. It combined the electoral tactics of the BNP with the street marches of the EDL. The Steadfast Trust was established as a charity in 2004 with the stated aims of reducing poverty among those of Anglo-Saxon descent and supporting English culture. It had many former and current BNP, NF and British Ku Klux Klan members. It was deregistered as a charity by the Charity Commission in February 2014. In 2014, after Nick Griffin lost the leadership of the BNP, he set up British Voice, but before it was launched, he decided to set up a different group, British Unity. Some members of the BNP were radicalised during their involvement with the party and subsequently sought to carry out acts of violence and terrorism. 
Tony Lecomber was imprisoned for three years for possessing explosives, after a nail bomb exploded while he was transporting it to the offices of the Workers' Revolutionary Party in 1985. He was imprisoned again for three years in 1991, whilst serving as the BNP's Director of Propaganda, for assaulting a Jewish teacher. In 1999, the ex-BNP member David Copeland used nail bombs to target homosexuals and ethnic minorities in London. In 2005, the BNP's Burnley candidate Robert Cottage was convicted of stockpiling chemicals for use in what he believed was a coming civil war, while a Yorkshire BNP member, Terry Gavan, was convicted in 2010 for stockpiling firearms and nail bombs. Electoral performance The BNP has contested seats in England, Wales, Scotland and Northern Ireland. Research from Robert Ford and Matthew Goodwin shows that BNP support is concentrated among older and less educated working-class men living in the declining industrial towns of the North and Midlands regions, in contrast to previous significant far-right parties like the National Front, which drew support from a younger demographic. General elections The BNP placed comparatively little emphasis on elections to the British House of Commons, aware that the first-past-the-post voting system was a major obstacle. The British National Party has contested general elections since 1983. In the 2001 general election the BNP saved five deposits (out of 33 contested seats) and achieved its best general election result in Oldham West and Royton (which had recently been the scene of racially motivated rioting between white and Asian youths), where party leader Nick Griffin secured 16% of the vote. The 2005 general election was considered a major breakthrough by the BNP, as it picked up 192,746 votes in the 119 constituencies it contested, took a 0.7% share of the overall vote, and retained a deposit in 40 of the seats. The BNP put forward candidates for 338 out of 650 seats in the 2010 general election, gaining 563,743 votes (1.9%), finishing in fifth place and failing to win any seats. However, a record 73 deposits were saved. Party chairman Griffin came third in the Barking constituency, behind Margaret Hodge of Labour and Simon Marcus of the Conservatives, who were first and second respectively. At 14.6%, this was the BNP's best result in any of the seats it contested that year. Local elections The BNP's first electoral success came in 1993, when Derek Beackon was returned as a councillor in Millwall, London. He lost his seat in elections the following year. The BNP's next councillor, its first for six years, was John Haycock, elected as a parish councillor for Bromyard and Winslow in Herefordshire in 2000; Haycock failed to attend any council meetings for six months and was later disqualified from office. The party's next successes in local elections came in 2002, when three BNP candidates gained seats on the Burnley council. The party had 55 councillors for a time in 2009. After the 2013 local county council elections, the BNP was left with a total of two borough councillors in England. As of 2011, the BNP had yet to make "a major breakthrough" on local councils. The BNP's councillors usually had "an extremely limited impact on local politics" because they were isolated as individuals or small groups on the council. Councillors from the main parties often disliked their BNP colleagues and deemed having to work alongside them an affront to dignity and decency. 
Questions were often raised as to whether BNP councillors could adequately represent the interests of all of their local constituents. On being elected, Beackon, for instance, stated that he refused to serve his Asian constituents in Millwall. There were also allegations that BNP councillors had particularly low attendance at council meetings, although research indicated that this was not the case, with the BNP's attendance record being largely average. There is evidence to suggest that racially and religiously motivated crime increased in those areas where BNP councillors had been elected. For instance, after the 1993 election of Beackon, there was a spike in racist attacks in the borough of Tower Hamlets. BNP members were directly responsible for some of this; the party's national organiser Richard Edmonds was sentenced to three months' imprisonment for his part in an attack on a black man and his white girlfriend. Regional assemblies and parliaments BNP lead candidate Richard Barnbrook won a seat in the London Assembly in May 2008, after the party gained 5.3% of the London-wide vote. However, in August 2010, he resigned the party whip and became an independent. In the 2007 Welsh Assembly elections, the BNP fielded 20 candidates, four in each of the five regional lists, with Nick Griffin standing in the South Wales West region. It did not win any seats, but was the only minor party to have saved deposits in the electoral regions, one in the North Wales region and the other in the South Wales West region. In total, the BNP polled 42,197 votes (4.3%). In the 2011 Welsh Assembly elections, the BNP again fielded 20 candidates, four in each of the five regional lists, and for the first time also fielded seven candidates in first-past-the-post (FPTP) constituencies. On the regional lists, the BNP polled 22,610 votes (2.4%), down 1.9% from 2007. In two of the seven FPTP constituencies contested, the BNP saved its deposit (Swansea East and Islwyn). In the 2007 Scottish Parliament election, the party fielded 32 candidates, entitling it to public funding and an election broadcast, prompting criticism. The BNP received 24,616 votes (1.2%); no seats were won, nor were any deposits saved. In the 2011 Scottish Parliament election, the BNP fielded 32 candidates in the regional lists, polling 15,580 votes (0.78%). The BNP fielded candidates for the first time in the 2011 Northern Ireland Assembly elections, standing in three constituencies (Belfast East, East Antrim and South Antrim); it polled 1,252 votes (0.2%) and won no seats. European Parliament The BNP has taken part in European Parliament elections since 1999, when it received 1.13% of the total vote (102,647 votes). In the 2004 elections to the European Parliament, the BNP won 4.9% of the vote, making it the sixth biggest party overall, but did not win any seats. The BNP won two seats in the European Parliament in the 2009 elections. Andrew Brons was elected in the Yorkshire and the Humber regional constituency with 9.8% of the vote. Party chairman Nick Griffin was elected in the North West region, with 8% of the vote. Nationally, the BNP received 6.26%. The British Government announced in 2009 that the BNP's two MEPs would be denied some of the access and information afforded to other MEPs. 
The BNP would be subject to the "same general principles governing official impartiality" and would receive "standard written briefings as appropriate from time to time", but diplomats would not be "proactive" in dealing with the BNP MEPs, and any requests for policy briefings from them would be treated differently and on a discretionary basis. The BNP did not stand any candidates in the 2019 European Parliament election in the United Kingdom. Association with violence The leaders and senior officers of the BNP have criminal convictions for inciting racial hatred. John Hagan claims that the BNP has conducted right-wing extremist violence to gain "institutionalized power". A 1997 report by Human Rights Watch accused the party of recruiting from skinhead groups and promoting racist violence. In the past, Nick Griffin has defended the threat of violence to further the party's aims. After the BNP won its first council seat in 1993, he wrote that the BNP should not be a "postmodernist rightist party" but "a strong, disciplined organisation with the ability to back up its slogan 'Defend Rights for Whites' with well-directed boots and fists. When the crunch comes, power is the product of force and will, not of rational debate". In 1997 he said: "It is more important to control the streets of a city than its council chambers." A BBC Panorama programme reported on a number of BNP members who have had criminal convictions, some racially motivated. Some of the more notable convictions include:
John Tyndall had convictions for assault and organising paramilitary neo-Nazi activities. In 1986 he was jailed for conspiracy to publish material likely to incite racial hatred.
In 1998, Nick Griffin was convicted of violating section 19 of the Public Order Act 1986, relating to incitement to racial hatred. He received a nine-month prison sentence, suspended for two years, and was fined £2,300.
Joseph Owens, a BNP candidate in Liverpool's local elections, served eight months in prison for sending razor blades in the post to Jewish people and another term for carrying CS gas and knuckledusters.
Colin Smith, who in 2004 was the BNP's South East London organiser, had 17 convictions for burglary, theft, possession of drugs and assaulting a police officer.
Richard Edmonds, at the time BNP National Organiser, was sentenced to three months in prison in 1994 for his part in a racist attack. Edmonds threw a glass at the victim as he was walking past an East London pub where a group of BNP supporters was drinking. Others then 'glassed' the man in the face and punched and kicked him as he lay on the ground, including BNP supporter Stephen O'Shea, who was jailed for 12 months. Another BNP supporter, Simon Biggs, was jailed for four and a half years for his part in the attack.
Reception In 2011, Goodwin described the BNP as being "the most successful party in the history of the extreme right in Britain". That same year, John E. Richardson noted that it had achieved "a level of electoral success that is unparalleled in the history of British fascism". The historian Alan Sykes stated that "in electoral terms", the BNP achieved "more in the first three years of the twenty-first century" than the British far right "as a whole achieved in the previous seventy". However, Copsey said that the party's belief that one day the conditions would be right for it to win a general election belonged to the "Never-Never Land of British politics". 
Copsey also said that the BNP's electoral successes had been modest in comparison to those achieved by extreme-right groups elsewhere in Western Europe such as France's National Front, Italy's National Alliance, and Belgium's Vlaams Blok. The BNP's growth met a hostile reaction, and in 2011 the political scientists Copsey and Macklin described it as "Britain's most disliked party". It was widely reviled as racist, and even following Griffin's "modernisation" project it remained heavily tainted by its associations with neo-Nazism. For many years it remained closely associated with the National Front in the British public imagination. The BNP was unable to gain broad appeal or widespread credibility. In a 2004 poll, seven out of ten voters said that they would never consider voting for the BNP. A 2009 poll found that two-thirds would "under no circumstances" consider voting BNP, while only 4% of respondents would "definitely consider" voting for them. The Conservative leader Michael Howard stated that the BNP were a "stain" on British democracy, adding that "this is not a political movement, this is a bunch of thugs dressed up as a political party". His successor David Cameron described it as a "completely unacceptable" organisation which "thrives on hatred". The Labour prime minister, Tony Blair, called it a "nasty, extreme organisation", while the Liberal Democrat leader Nick Clegg termed it a "party of thugs and fascists". In 2004, the General Synod of the Church of England declared that supporting the BNP was incompatible with Christianity, comparing it to "spitting in the face of God". Christian groups throughout Britain have maintained that the BNP's hostility toward cultural and ethnic diversity in the country was at odds with mainstream Christianity's emphasis on inclusiveness, tolerance, and interfaith dialogue. Winston Churchill's family has criticised the BNP's use of his image and quotations, labelling it "offensive and disgusting". The singer Vera Lynn condemned the party for selling a CD featuring her recordings on its website. In 2009, the Royal British Legion asked Griffin—at first privately and then publicly—not to wear its poppy symbol. The British police, the Fire Brigades Union, and the Church of England prohibited their members from joining the BNP. In 2002, Martin Narey, then director general of the Prison Service, banned BNP membership among prison workers; he subsequently received death threats. In 2010, the Education Secretary Michael Gove announced plans to allow headteachers to ban their staff from being party members. Individuals whose membership of the party was made public sometimes faced ostracism and the loss of their job: examples include a school headmaster who had to resign, a caretaker who was sacked after attending a BNP rally, and a police officer dismissed from his position. After BNP membership lists were leaked on the Internet, a number of police forces investigated officers whose names appeared on the lists. In 2005, an invitation to Nick Griffin by the University of St Andrews Union Debating Society to participate in a debate on multiculturalism was withdrawn after protests. The BNP says that National Union of Journalists guidelines on reporting "far right" organisations forbid unionised journalists from reporting uncritically on the party. In April 2007, an election broadcast was cancelled by BBC Radio Wales, whose lawyers believed that the broadcast was defamatory of the Chief Constable of North Wales Police, Richard Brunstrom. The BNP said that BBC editors were following an agenda. 
Mainstream media and academia Attitudes toward the BNP in both mainstream broadcast media and print journalism have been overwhelmingly negative, and no mainstream newspaper has endorsed the party. This hostile coverage has even been found in right-wing tabloids like the Daily Mail, Daily Express and The Sun which otherwise share the BNP's hostile attitude toward issues like immigration. In 2003, the Daily Mail described the BNP as "poisonous bigots", while in 2004 The Sun printed the headline of "BNP: Bloody Nasty People". Senior BNP figures nevertheless believed that these tabloids' hostile coverage of immigration and Islam helped to legitimise and normalise the party and its views among much of the British public, a view echoed by some academic observers. When, in 2004, anti-racist activists picketed outside the Daily Mail office in central London to protest against its negative coverage of asylum seekers, BNP members organised a counter-picket at which they displayed the placard "Vote BNP, Read the Daily Mail". The BNP initially faced a 'no platform for fascists' policy from the broadcast media, although this eroded as Griffin was invited on to a number of television programmes amid the party's growing electoral success. When the BBC invited him to appear on Question Time in 2009 it was criticised by several trade unions, sections of the media, and several Labour politicians, all of whom believed that the BNP should not be given a public platform. Anti-fascist protesters assembled outside of the television studio to protest Griffin's inclusion. The first academic attention to be directed at the BNP appeared after it gained a councillor in the 1993 local elections. Nevertheless, throughout the 1990s it remained the subject of little academic research. Academic interest increased following its victories at local elections from 2002 onward. The first detailed monograph study to be devoted to the party was Nigel Copsey's Contemporary British Fascism, first published in 2004. In September 2008, an academic symposium on the BNP was held at Teesside University. The wider extreme-right and anti-fascists Opposition to the BNP also came from the organised anti-fascist movement. By the mid-1990s, the BNP's attempts to stage public events in Scotland, the North West and the Midlands were largely thwarted by the militant disruption of the Anti-Fascist Action (AFA) group. The BNP's modernisation and move away from street demonstrations and toward electoral campaigning caused problems for the AFA, who proved unable to successfully change their tactics; on those occasions when AFA activists tried to forcibly disrupt BNP activities, they were prevented and arrested by riot police. More liberal sections of the anti-fascist movement sought to counter the BNP through community-based initiatives. Searchlight encouraged trade unions to establish localised campaigns that would ensure that ethnic minority and other anti-BNP locals voted. It suggested that such campaigns should avoid associating with the mainstream parties from which BNP voters felt disenfranchised and that they should not be afraid of calling out Islamic fundamentalists and extremists active in the area. The Unite Against Fascism group also sought to maximise anti-BNP turnout at elections, calling on the electorate to vote for "anyone but fascists". 
Evidence suggests that such anti-fascist activities did little to erode the far-right vote; this was in part because anti-fascist groups had encouraged the stereotype that BNP candidates were violent skinheads, something which conflicted with the more normal, friendly image that BNP activists cultivated when canvassing. The BNP often received a hostile response from other sections of the British extreme-right. Some extreme-right-wingers, such as the British Freedom Party, expressed frustration at the party's inability to moderate itself further on the issue of race, while those such as Colin Jordan and the NF accused the BNP—particularly under Griffin's leadership—of being too moderate. This latter view was articulated by an extreme-right groupuscule, the International Third Position, when it claimed that the BNP "has been openly courting the Jewish vote and pumping out material which confirms what most of us knew years ago: the BNP has become a multi-racist, Zionist, queer-tolerant anti-Muslim pressure group". In ASLEF v. United Kingdom, the European Court of Human Rights overturned an employment appeal tribunal ruling that awarded BNP member and train driver Jay Lee damages for expulsion from a trade union. In Redfearn v United Kingdom, the court ruled that members of racist organisations could lawfully be dismissed on health and safety grounds if there was a danger of violence occurring in the workplace. In November 2012, the European Court of Human Rights made a majority ruling (4 to 3) that in Redfearn's case against the UK government, his rights under Article 11 (free association) had been infringed, but not those under Article 10 (free expression) or Article 14 (discrimination). See also: List of political parties in the United Kingdom opposed to austerity; Britain First; English Defence League; Billy Brit. External links: The Lost Race, a BBC documentary about the British National Party broadcast in 1999.
Baptism (from the Greek báptisma) is a Christian sacrament of initiation and adoption, almost invariably with the use of water. It may be performed by sprinkling or pouring water on the head, or by immersing in water either partially or completely, traditionally three times, once for each person of the Trinity. The synoptic gospels recount that John the Baptist baptised Jesus. Baptism is considered a sacrament in most churches, and an ordinance in others. Baptism according to the Trinitarian formula, which is done in most mainstream Christian denominations, is seen as being a basis for Christian ecumenism, the concept of unity amongst Christians. Baptism is also called christening, although some reserve the word "christening" for the baptism of infants. In certain Christian denominations, such as the Catholic Church, Eastern Orthodox Churches, Oriental Orthodox Churches, Assyrian Church of the East, and Lutheran Churches, baptism is the door to church membership, with candidates taking baptismal vows. It has also given its name to the Baptist churches and denominations. Some Christian thinking regards baptism as necessary for salvation, but some writers, such as Huldrych Zwingli (1484–1531), have denied its necessity. Though water baptism is extremely common among Christian denominations, some, such as The Salvation Army, do not practice water baptism at all. Among denominations that practice water baptism, differences occur in the manner and mode of baptizing and in the understanding of the significance of the rite. Most Christians baptize using the trinitarian formula "in the name of the Father, and of the Son, and of the Holy Spirit" (following the Great Commission), but Oneness Pentecostals baptize using Jesus' name only. The majority of Christians baptize infants; many others, such as Baptist Churches, regard only believer's baptism as true baptism. In certain denominations, such as the Eastern and Oriental Orthodox Churches, the individual being baptized receives a cross necklace that is worn for the rest of their life, inspired by the Sixth Ecumenical Council (Synod) of Constantinople. Outside of Christianity, Mandaeans undergo repeated baptism for purification instead of initiation. They consider John the Baptist to be their greatest prophet and name all rivers yardena after the River Jordan. The term baptism has also been used metaphorically to refer to any ceremony, trial, or experience by which a person is initiated, purified, or given a name. Martyrdom was identified early in Christian church history as "baptism by blood", enabling the salvation of martyrs who had not been baptized by water. Later, the Catholic Church identified a baptism of desire, by which those preparing for baptism who die before actually receiving the sacrament are considered saved. Etymology The English word baptism is derived indirectly through Latin from the neuter Greek concept noun báptisma (Greek βάπτισμα), which is a neologism in the New Testament derived from the masculine Greek noun baptismós (βαπτισμός), a term for ritual washing in Greek language texts of Hellenistic Judaism during the Second Temple period, such as the Septuagint. Both of these nouns are derived from the verb baptízō (βαπτίζω), which is used in Jewish texts for ritual washing, and in the New Testament both for ritual washing and also for the apparently new rite of báptisma. The Greek verb báptō (βάπτω), "dip", from which the verb baptízō is derived, is in turn hypothetically traced to a reconstructed Indo-European root *gʷabh-. The Greek words are used in a great variety of meanings. 
Báptein and baptízein in Hellenism had the general usage of "immersion", "going under" (as a material in a liquid dye) or "perishing" (as in a ship sinking or a person drowning), with the same double meanings as in English "to sink into" or "to be overwhelmed by", with bathing or washing only occasionally used and usually in sacral contexts. History The practice of baptism emerged from Jewish ritualistic practices during the Second Temple Period, out of which figures such as John the Baptist also arose. For example, various texts in the Dead Sea Scrolls (DSS) corpus at Qumran describe ritual practices involving washing, bathing, sprinkling, and immersing. One example of such a text is a DSS known as the Rule of the Community, which says "And by the compliance of his soul with all the laws of God his flesh is cleansed by being sprinkled with cleansing waters and being made holy with the waters of repentance." The Mandaeans, who are followers of John the Baptist, practice frequent full immersion baptism (masbuta) as a ritual of purification. According to Mandaean sources, they left the Jordan Valley in the 1st century AD. John the Baptist, who is considered a forerunner to Christianity, used baptism as the central sacrament of his messianic movement. The apostle Paul distinguished between the baptism of John ("baptism of repentance") and baptism in the name of Jesus, and it is questionable whether Christian baptism was in some way linked with that of John. However, according to Mark 1:8, John seems to present his water baptism as a type of the true, ultimate baptism of Jesus, which is by the Spirit. Christians consider Jesus to have instituted the sacrament of baptism. Though some form of immersion was likely the most common method of baptism in the early church, many of the writings from the ancient church appeared to view this mode of baptism as inconsequential. The Didache 7.1–3 (AD 60–150) allowed for affusion practices in situations where immersion was not practical. Likewise, Tertullian (AD 196–212) allowed for varying approaches to baptism even if those practices did not conform to biblical or traditional mandates (cf. De corona militis 3; De baptismo 17). Finally, Cyprian (ca. AD 256) explicitly stated that the amount of water was inconsequential and defended immersion, affusion, and aspersion practices (Epistle 75.12). As a result, there was no uniform or consistent mode of baptism in the ancient church prior to the fourth century. By the third and fourth centuries, baptism involved catechetical instruction as well as chrismation, exorcisms, laying on of hands, and recitation of a creed. In the early Middle Ages, infant baptism became common and the rite was significantly simplified and increasingly emphasized. In Western Europe, affusion became the normal mode of baptism between the twelfth and fourteenth centuries, though immersion was still practiced into the sixteenth. In the medieval period, some radical Christians rejected the practice of baptism as a sacrament. Sects such as the Tondrakians, Cathars, Arnoldists, Petrobrusians, Henricians, Brethren of the Free Spirit and the Lollards were regarded as heretics by the Catholic Church. In the sixteenth century, Martin Luther retained baptism as a sacrament, but Swiss reformer Huldrych Zwingli considered baptism and the Lord's Supper to be symbolic. Anabaptists denied the validity of the practice of infant baptism, and rebaptized converts. Mode and manner Baptism is practiced in several different ways. 
Aspersion is the sprinkling of water on the head, and affusion is the pouring of water over the head. Traditionally, a person is sprinkled, poured, or immersed three times, once for each person of the Holy Trinity, with this ancient Christian practice called trine baptism or triune baptism. The Didache specifies: Aspersion or sprinkling best describes the cleansing aspect of baptism, as indicated in Psalm 51:7, "Cleanse me with hyssop, and I will be clean; wash me, and I will be whiter than snow". Affusion or pouring best describes anointing, which points to the pouring of the Holy Spirit upon the believing person, as indicated in many of the Old Testament types of anointing kings, prophets, and priests with oil. Immersion or submersion best describes the burial and resurrection of the believer in Christ. The word "immersion" is derived from late Latin immersio, a noun derived from the verb immergere (in – "into" + mergere "dip"). In relation to baptism, some use it to refer to any form of dipping, whether the body is put completely under water or is only partly dipped in water; they thus speak of immersion as being either total or partial. Others, of the Anabaptist belief, use "immersion" to mean exclusively plunging someone entirely under the surface of the water. The term "immersion" is also used of a form of baptism in which water is poured over someone standing in water, without submersion of the person. On these three meanings of the word "immersion", see Immersion baptism. When "immersion" is used in opposition to "submersion", it indicates the form of baptism in which the candidate stands or kneels in water and water is poured over the upper part of the body. Immersion in this sense has been employed in the West and the East since at least the 2nd century and is the form in which baptism is generally depicted in early Christian art. In the West, this method of baptism began to be replaced by affusion baptism from around the 8th century, but it continues in use in Eastern Christianity. The word submersion comes from the late Latin submersio (sub- "under, below" + mergere "plunge, dip") and is also sometimes called "complete immersion". It is the form of baptism in which the water completely covers the candidate's body. Submersion is practiced in the Orthodox and several other Eastern Churches. In the Latin Church of the Catholic Church, baptism by submersion is used in the Ambrosian Rite and is one of the methods provided in the Roman Rite for the baptism of infants. It is seen as obligatory among some groups that have arisen since the Protestant Reformation, such as Baptists. Meaning of the Greek verb baptizein The Greek-English Lexicon of Liddell and Scott gives the primary meaning of the verb baptízein, from which the English verb "baptize" is derived, as "dip, plunge", and gives examples of plunging a sword into a throat or an embryo and of drawing wine by dipping a cup in the bowl; for New Testament usage it gives two meanings: "baptize", with which it associates the Septuagint mention of Naaman dipping himself in the Jordan River, and "perform ablutions", as in Luke 11:38. Although the Greek verb baptízein does not exclusively mean dip, plunge or immerse (it is used with literal and figurative meanings such as "sink", "disable", "overwhelm", "go under", "overborne", "draw from a bowl"), lexical sources typically cite this as a meaning of the word in both the Septuagint and the New Testament. 
"While it is true that the basic root meaning of the Greek words for baptize and baptism is immerse/immersion, it is not true that the words can simply be reduced to this meaning, as can be seen from Mark 10:38–39, Luke 12:50, Matthew 3:11 Luke 3:16 and Corinthians10:2." Two passages in the Gospels indicate that the verb baptízein did not always indicate submersion. The first is Luke 11:38, which tells how a Pharisee, at whose house Jesus ate, "was astonished to see that he did not first wash (ἐβαπτίσθη, aorist passive of βαπτίζω—literally, "was baptized") before dinner". This is the passage that Liddell and Scott cites as an instance of the use of to mean perform ablutions. Jesus' omission of this action is similar to that of his disciples: "Then came to Jesus scribes and Pharisees, which were of Jerusalem, saying, Why do thy disciples transgress the tradition of the elders? for they wash () not their hands when they eat bread". The other Gospel passage pointed to is: "The Pharisees...do not eat unless they wash (, the ordinary word for washing) their hands thoroughly, observing the tradition of the elders; and when they come from the market place, they do not eat unless they wash themselves (literally, "baptize themselves"—βαπτίσωνται, passive or middle voice of βαπτίζω)". Scholars of various denominations claim that these two passages show that invited guests, or people returning from market, would not be expected to immerse themselves ("baptize themselves") totally in water but only to practise the partial immersion of dipping their hands in water or to pour water over them, as is the only form admitted by present Jewish custom. In the second of the two passages, it is actually the hands that are specifically identified as "washed", not the entire person, for whom the verb used is baptízomai, literally "be baptized", "be immersed", a fact obscured by English versions that use "wash" as a translation of both verbs. Zodhiates concludes that the washing of the hands was done by immersing them. The Liddell–Scott–Jones Greek-English Lexicon (1996) cites the other passage (Luke 11:38) as an instance of the use of the verb baptízein to mean "perform ablutions", not "submerge". References to the cleaning of vessels which use βαπτίζω also refer to immersion. As already mentioned, the lexicographical work of Zodhiates says that, in the second of these two cases, the verb baptízein indicates that, after coming from the market, the Pharisees washed their hands by immersing them in collected water. Balz & Schneider understand the meaning of βαπτίζω, used in place of ῥαντίσωνται (sprinkle), to be the same as βάπτω, to dip or immerse, a verb used of the partial dipping of a morsel held in the hand into wine or of a finger into spilled blood. A possible additional use of the verb baptízein to relate to ritual washing is suggested by Peter Leithart (2007) who suggests that Paul's phrase "Else what shall they do who are baptized for the dead?" relates to Jewish ritual washing. In Jewish Greek the verb baptízein "baptized" has a wider reference than just "baptism" and in Jewish context primarily applies to the masculine noun baptismós "ritual washing" The verb baptízein occurs four times in the Septuagint in the context of ritual washing, baptismós; Judith cleansing herself from menstrual impurity, Naaman washing seven times to be cleansed from leprosy, etc. 
Additionally, in the New Testament only, the verb baptízein can also relate to the neuter noun báptisma "baptism", which is a neologism unknown in the Septuagint and other pre-Christian Jewish texts. This broadness in the meaning of baptízein is reflected in English Bibles rendering "wash", where Jewish ritual washing is meant: for example Mark 7:4 states that the Pharisees "except they wash (Greek "baptize"), they do not eat", and "baptize" where báptisma, the new Christian rite, is intended. Derived nouns Two nouns derived from the verb baptízō (βαπτίζω) appear in the New Testament: the masculine noun baptismós (βαπτισμός) and the neuter noun báptisma (βάπτισμα).
baptismós (βαπτισμός) refers in Mark 7:4 to a water-rite for the purpose of purification, the washing or cleansing of dishes; in the same verse and in Hebrews 9:10 to Levitical cleansings of vessels or of the body; and in Hebrews 6:2 perhaps also to baptism, though there it may possibly refer to washing an inanimate object. According to Spiros Zodhiates, when referring merely to the cleansing of utensils, baptismós (βαπτισμός) is equated with rhantismós (ῥαντισμός, "sprinkling"), found only in Hebrews 12:24 and 1 Peter 1:2, a noun used to indicate the symbolic cleansing by the Old Testament priest.
báptisma (βάπτισμα) is a neologism appearing to originate in the New Testament, and probably should not be confused with the earlier Jewish concept of baptismós (βαπτισμός); later it is found only in writings by Christians. In the New Testament, it appears at least 21 times: 13 times with regard to the rite practised by John the Baptist; 3 times with reference to the specific Christian rite (4 times if account is taken of its use in some manuscripts of Colossians 2:12, where, however, it is more likely to have been changed from the original baptismós than vice versa); 5 times in a metaphorical sense.
Manuscript variation: In Colossians, some manuscripts have the neuter noun báptisma (βάπτισμα), but some have the masculine noun baptismós (βαπτισμός), and this is the reading given in modern critical editions of the New Testament. If this reading is correct, then this is the only New Testament instance in which baptismós (βαπτισμός) is clearly used of Christian baptism, rather than of a generic washing, unless the opinion of some is correct that Hebrews 6:2 may also refer to Christian baptism. The feminine noun báptisis and the masculine noun baptismós both occur in Josephus' Antiquities (J. AJ 18.5.2) relating to the murder of John the Baptist by Herod. This feminine form is not used elsewhere by Josephus, nor in the New Testament. Apparel Until the Middle Ages, most baptisms were performed with the candidates naked—as is evidenced by most of the early portrayals of baptism, and by the early Church Fathers and other Christian writers. Deaconesses helped female candidates for reasons of modesty. Typical of these is Cyril of Jerusalem, who wrote "On the Mysteries of Baptism" in the 4th century (c. 350 AD): The symbolism is threefold: 1. Baptism is considered to be a form of rebirth—"by water and the Spirit"—the nakedness of baptism (the second birth) paralleled the condition of one's original birth. For example, John Chrysostom calls the baptism "λοχείαν", i.e., giving birth, and "new way of creation...from water and Spirit" ("to John" speech 25,2), and later elaborates: 2. 
The removal of clothing represented the "image of putting off the old man with his deeds" (as per Cyril, above), so the stripping of the body before baptism represented taking off the trappings of sinful self, so that the "new man", which is given by Jesus, can be put on. 3. As Cyril again asserts above, as Adam and Eve in scripture were naked, innocent and unashamed in the Garden of Eden, nakedness during baptism was seen as a renewal of that innocence and state of original sinlessness. Other parallels can also be drawn, such as between the exposed condition of Christ during His crucifixion, and the crucifixion of the "old man" of the repentant sinner in preparation for baptism. Changing customs and concerns regarding modesty probably contributed to the practice of permitting or requiring the baptismal candidate to either retain their undergarments (as in many Renaissance paintings of baptism such as those by da Vinci, Tintoretto, Van Scorel, Masaccio, de Wit and others) or to wear, as is almost universally the practice today, baptismal robes. These robes are most often white, symbolizing purity. Some groups today allow any suitable clothes to be worn, such as trousers and a T-shirt—practical considerations include how easily the clothes will dry (denim is discouraged), and whether they will become see-through when wet. In certain Christian denominations, the individual being baptized receives a cross necklace that is worn for the rest of their life as a "sign of the triumph of Christ over death and our belonging to Christ" (though it is replaced with a new cross pendant if lost or broken). This practice of baptized Christians wearing a cross necklace at all times is derived from Canon 73 and Canon 82 of the Sixth Ecumenical Council (Synod) of Constantinople, which declared: Meaning and effects There are differences in views about the effect of baptism for a Christian. Catholics, Orthodox, and most mainline Protestant groups assert baptism is a requirement for salvation and a sacrament, and speak of "baptismal regeneration". Its importance is related to their interpretation of the meaning of the "Mystical Body of Christ" as found in the New Testament. This view is shared by the Catholic and Eastern Orthodox denominations, and by churches formed early during the Protestant Reformation such as Lutheran and Anglican. For example, Martin Luther said: The Churches of Christ, Jehovah's Witnesses, Christadelphians, and the Church of Jesus Christ of Latter-day Saints espouse baptism as necessary for salvation. For Roman Catholics, baptism by water is a sacrament of initiation into the life of the children of God (Catechism of the Catholic Church, 1212–13). It configures the person to Christ (CCC 1272), and obliges the Christian to share in the church's apostolic and missionary activity (CCC 1270). The Catholic Church holds that there are three types of baptism by which one can be saved: sacramental baptism (with water), baptism of desire (explicit or implicit desire to be part of the church founded by Jesus Christ), and baptism of blood (martyrdom). In his encyclical Mystici corporis Christi of June 29, 1943, Pope Pius XII spoke of baptism and profession of the true faith as what makes members of the one true church, which is the body of Jesus Christ himself, as God the Holy Spirit has taught through the Apostle Paul: By contrast, Anabaptist and Evangelical Protestants recognize baptism as an outward sign of an inward reality following on an individual believer's experience of forgiving grace. 
Reformed and Methodist Protestants maintain a link between baptism and regeneration, but insist that it is not automatic or mechanical, and that regeneration may occur at a different time than baptism. Churches of Christ consistently teach that in baptism a believer surrenders his life in faith and obedience to God, and that God "by the merits of Christ's blood, cleanses one from sin and truly changes the state of the person from an alien to a citizen of God's kingdom. Baptism is not a human work; it is the place where God does the work that only God can do." Thus, they see baptism as a passive act of faith rather than a meritorious work; it "is a confession that a person has nothing to offer God". Christian traditions The liturgy of baptism for Catholics, Eastern Orthodox, Lutheran, Anglican, and Methodist makes clear reference to baptism as not only a symbolic burial and resurrection, but an actual supernatural transformation, one that draws parallels to the experience of Noah and the passage of the Israelites through the Red Sea divided by Moses. Thus, baptism is literally and symbolically not only cleansing, but also dying and rising again with Christ. Catholics believe baptism is necessary to cleanse the taint of original sin, and so commonly baptise infants. The Eastern Churches (Eastern Orthodox Church and Oriental Orthodoxy) also baptize infants on the basis of texts, such as Matthew 19:14, which are interpreted as supporting full church membership for children. In these denominations, baptism is immediately followed by Chrismation and Communion at the next Divine Liturgy, regardless of age. Orthodox likewise believe that baptism removes what they call the ancestral sin of Adam. Anglicans believe that baptism is also the entry into the church. Most Methodists and Anglicans agree that it also cleanses the taint of what in the West is called original sin, in the East ancestral sin. Eastern Orthodox Christians usually insist on complete threefold immersion as both a symbol of death and rebirth into Christ, and as a washing away of sin. Latin Church Catholics generally baptize by affusion (pouring); Eastern Catholics usually by submersion, or at least partial immersion. However, submersion is gaining in popularity within the Latin Catholic Church. In newer church sanctuaries, the baptismal font may be designed to expressly allow for baptism by immersion. Anglicans baptize by immersion or affusion. According to evidence which can be traced back to about the year 200, sponsors or godparents are present at baptism and vow to uphold the Christian education and life of the baptized. Baptists argue that the Greek word originally meant "to immerse". They interpret some Biblical passages concerning baptism as requiring submersion of the body in water. They also state that only submersion reflects the symbolic significance of being "buried" and "raised" with Christ. Baptist Churches baptize in the name of the Trinity—the Father, the Son, and the Holy Spirit. However, they do not believe that baptism is necessary for salvation; but rather that it is an act of Christian obedience. Some "Full Gospel" charismatic churches such as Oneness Pentecostals baptize only in the name of Jesus Christ, citing Peter's preaching baptism in the name of Jesus as their authority. Ecumenical statements In 1982 the World Council of Churches published the ecumenical paper Baptism, Eucharist and Ministry. 
The preface of the document states: A 1997 document, Becoming a Christian: The Ecumenical Implications of Our Common Baptism, gave the views of a commission of experts brought together under the aegis of the World Council of Churches. It states: Those who heard, who were baptized and entered the community's life, were already made witnesses of and partakers in the promises of God for the last days: the forgiveness of sins through baptism in the name of Jesus and the outpouring of the Holy Ghost on all flesh. Similarly, in what may well be a baptismal pattern, 1 Peter testifies that proclamation of the resurrection of Jesus Christ and teaching about new life lead to purification and new birth. This, in turn, is followed by eating and drinking God's food, by participation in the life of the community—the royal priesthood, the new temple, the people of God—and by further moral formation. At the beginning of 1 Peter the writer sets this baptism in the context of obedience to Christ and sanctification by the Spirit. So baptism into Christ is seen as baptism into the Spirit. In the fourth gospel Jesus' discourse with Nicodemus indicates that birth by water and Spirit becomes the gracious means of entry into the place where God rules. Validity considerations by some churches The vast majority of Christian denominations admit the theological idea that baptism is a sacrament, that has actual spiritual, holy and salvific effects. Certain key criteria must be complied with for it to be valid, i.e., to actually have those effects. If these key criteria are met, violation of some rules regarding baptism, such as varying the authorized rite for the ceremony, renders the baptism illicit (contrary to the church's laws) but still valid. One of the criteria for validity is use of the correct form of words. The Roman Catholic Church teaches that the use of the verb "to baptize" is essential. Catholics of the Latin Church, Anglicans and Methodists use the form "I baptize you in the name of...". The passive voice is used by Eastern Orthodox and Byzantine Catholics, the form being "The Servant of God is baptized in the name of...". Use of the Trinitarian formula "in the name of the Father, and of the Son, and of the Holy Spirit" is also considered essential; thus these churches do not accept as valid baptisms of non-Trinitarian churches such as Oneness Pentecostals. Another essential condition is use of water. A baptism in which some liquid that would not usually be called water, such as wine, milk, soup or fruit juice was used would not be considered valid. Another requirement is that the celebrant intends to perform baptism. This requirement entails merely the intention "to do what the Church does", not necessarily to have Christian faith, since it is not the person baptizing, but the Holy Spirit working through the sacrament, who produces the effects of the sacrament. Doubt about the faith of the baptizer is thus no ground for doubt about the validity of the baptism. Some conditions expressly do not affect validity—for example, whether submersion, immersion, affusion (pouring) or aspersion (sprinkling) is used. However, if water is sprinkled, there is a danger that the water may not touch the skin of the unbaptized. As has been stated, "it is not sufficient for the water to merely touch the candidate; it must also flow, otherwise there would seem to be no real ablution. At best, such a baptism would be considered doubtful. 
If the water touches only the hair, the sacrament has probably been validly conferred, though in practice the safer course must be followed. If only the clothes of the person have received the aspersion, the baptism is undoubtedly void." For many communions, validity is not affected if a single submersion or pouring is performed rather than a triple, but in Orthodoxy this is controversial. According to the Catholic Church, baptism imparts an indelible "seal" upon the soul of the baptized and therefore a person who has already been baptized cannot be validly baptized again. This teaching was affirmed against the Donatists who practiced rebaptism. The grace received in baptism is believed to operate ex opere operato and is therefore considered valid even if administered in heretical or schismatic groups. Recognition by other denominations The Catholic, Lutheran, Anglican, Presbyterian and Methodist Churches accept baptism performed by other denominations within this group as valid, subject to certain conditions, including the use of the Trinitarian formula. It is only possible to be baptized once, thus people with valid baptisms from other denominations may not be baptized again upon conversion or transfer. For Roman Catholics, this is affirmed in the Canon Law 864, in which it is written that "[e]very person not yet baptized and only such a person is capable of baptism." Such people are accepted upon making a profession of faith and, if they have not yet validly received the sacrament/rite of confirmation or chrismation, by being confirmed. Specifically, "Methodist theologians argued that since God never abrogated a covenant made and sealed with proper intentionality, rebaptism was never an option, unless the original baptism had been defective by not having been made in the name of the Trinity." In some cases it can be difficult to decide if the original baptism was in fact valid; if there is doubt, conditional baptism is administered, with a formula on the lines of "If you are not yet baptized, I baptize you...." The Catholic Church ordinarily recognizes as valid the baptisms of Christians of the Eastern Orthodox, Churches of Christ, Congregationalist, Anglican, Lutheran, Old Catholic, Polish National Catholic, Reformed, Baptist, Brethren, Methodist, Presbyterian, Waldensian, and United Protestant denominations; Christians of these traditions are received into the Catholic Church through the sacrament of Confirmation. Some individuals of the Mennonite, Pentecostal and Adventist traditions who wish to be received into the Catholic Church may be required to receive a conditional baptism due to concerns about the validity of the sacraments in those traditions. The Catholic Church has explicitly denied the validity of the baptism conferred in The Church of Jesus Christ of Latter-day Saints. The Reformed Churches recognize as valid, baptisms administered in the Catholic Church, among other churches using the Trinitarian formula. Practice in the Eastern Orthodox Church for converts from other communions is not uniform. However, generally baptisms performed in the name of the Holy Trinity are accepted by the Orthodox Christian Church; Christians of the Oriental Orthodox, Roman Catholic, Lutheran, Old Catholic, Moravian, Anglican, Methodist, Reformed, Presbyterian, Brethren, Assemblies of God, or Baptist traditions can be received into the Eastern Orthodox Church through the sacrament of Chrismation. 
If a convert has not received the sacrament (mysterion) of baptism, he or she must be baptised in the name of the Holy Trinity before he or she may enter into communion with the Orthodox Church. If he/she has been baptized in another Christian confession (other than Orthodox Christianity), his/her previous baptism is considered retroactively filled with grace by chrismation or, in rare circumstances, confession of faith alone, as long as the baptism was done in the name of the Holy Trinity (Father, Son and Holy Spirit). The exact procedure is dependent on local canons and is the subject of some controversy. Oriental Orthodox Churches recognise the validity of baptisms performed within the Eastern Orthodox Communion. Some also recognise baptisms performed by Catholic Churches. Any supposed baptism not performed using the Trinitarian formula is considered invalid. In the eyes of the Catholic Church, all Orthodox Churches, and the Anglican and Lutheran Churches, the baptism conferred by The Church of Jesus Christ of Latter-day Saints is invalid. An article published together with the official declaration to that effect gave reasons for that judgment, summed up in the following words: "The Baptism of the Catholic Church and that of the Church of Jesus Christ of Latter-day Saints differ essentially, both for what concerns faith in the Father, Son and Holy Spirit, in whose name Baptism is conferred, and for what concerns the relationship to Christ who instituted it." The Church of Jesus Christ of Latter-day Saints stresses that baptism must be administered by one having proper authority; consequently, the church does not recognize the baptism of any other church as effective. Jehovah's Witnesses do not recognise any other baptism occurring after 1914 as valid, as they believe that they are now the one true church of Christ, and that the rest of "Christendom" is false religion. Officiator There is debate among Christian churches as to who can administer baptism. Some claim that the examples given in the New Testament only show apostles and deacons administering baptism. Ancient Christian churches interpret this as indicating that baptism should be performed by the clergy except in extremis, i.e., when the one being baptized is in immediate danger of death. Then anyone may baptize, provided, in the view of the Eastern Orthodox Church, the person who does the baptizing is a member of that church, or, in the view of the Catholic Church, that the person, even if not baptized, intends to do what the church does in administering the rite. Many Protestant churches see no specific prohibition in the biblical examples and permit any believer to baptize another. In the Roman Catholic Church, canon law for the Latin Church lays down that the ordinary minister of baptism is a bishop, priest or deacon, but its administration is one of the functions "especially entrusted to the parish priest". If the person to be baptized is at least fourteen years old, that person's baptism is to be referred to the bishop, so that he can decide whether to confer the baptism himself. If no ordinary minister is available, a catechist or some other person whom the local ordinary has appointed for this purpose may licitly do the baptism; indeed in a case of necessity any person (irrespective of that person's religion) who has the requisite intention may confer the baptism. By "a case of necessity" is meant imminent danger of death because of either illness or an external threat.
"The requisite intention" is, at the minimum level, the intention "to do what the Church does" through the rite of baptism. In the Eastern Catholic Churches, a deacon is not considered an ordinary minister. Administration of the sacrament is reserved to the Parish Priest or to another priest to whom he or the local hierarch grants permission, a permission that can be presumed if in accordance with canon law. However, "in case of necessity, baptism can be administered by a deacon or, in his absence or if he is impeded, by another cleric, a member of an institute of consecrated life, or by any other Christian faithful; even by the mother or father, if another person is not available who knows how to baptize." The discipline of the Eastern Orthodox Church, Oriental Orthodoxy and the Assyrian Church of the East is similar to that of the Eastern Catholic Churches. They require the baptizer, even in cases of necessity, to be of their own faith, on the grounds that a person cannot convey what he himself does not possess, in this case membership in the church. The Latin Catholic Church does not insist on this condition, considering that the effect of the sacrament, such as membership of the church, is not produced by the person who baptizes, but by the Holy Spirit. For the Orthodox, while Baptism in extremis may be administered by a deacon or any lay-person, if the newly baptized person survives, a priest must still perform the other prayers of the Rite of Baptism, and administer the Mystery of Chrismation. The discipline of Anglicanism and Lutheranism is similar to that of the Latin Catholic Church. For Methodists and many other Protestant denominations, too, the ordinary minister of baptism is a duly ordained or appointed minister of religion. Newer movements of Protestant Evangelical churches, particularly non-denominational, allow laypeople to baptize. In The Church of Jesus Christ of Latter-day Saints, only a man who has been ordained to the Aaronic priesthood holding the priesthood office of priest or higher office in the Melchizedek priesthood may administer baptism. A Jehovah's Witnesses baptism is performed by a "dedicated male" adherent. Only in extraordinary circumstances would a "dedicated" baptizer be unbaptized (see section Jehovah's Witnesses). Practitioners Protestantism Anabaptist Early Anabaptists were given that name because they re-baptized persons who they felt had not been properly baptized, as they did not recognize infant baptism. The traditional form of Anabaptist baptism was pouring, the form commonly used in Western Christianity in the early 16th century when they emerged. Pouring continues to be normative in Mennonite, Amish and Hutterite traditions of Anabaptist Christianity. The Mennonite Brethren Church, Schwarzenau Brethren and River Brethren denominations of Anabaptist Christianity practice immersion. The Schwarzenau church immerses in the forward position three times, for each person of the Holy Trinity and because "the Bible says Jesus bowed his head (letting it fall forward) and died. Baptism represents a dying of the old, sinful self." Today all modes of baptism (such as pouring and immersion) can be found among Anabaptists. Conservative Mennonite Anabaptists count baptism to be one of the seven ordinances. In Anabaptist theology, baptism is a part of the process of salvation. For Anabaptists, "believer's baptism consists of three parts, the Spirit, the water, and the blood—these three witnesses on earth." 
According to Anabaptist theology: (1) In believer's baptism, the Holy Spirit witnesses the candidate entering into a covenant with God. (2) God, in believer's baptism, "grants a baptized believer the water of baptism as a sign of His covenant with them—that such a one indicates and publicly confesses that he wants to live in true obedience towards God and fellow believers with a blameless life." (3) Integral to believer's baptism is the candidate's mission to witness to the world even unto martyrdom, echoing Jesus' words that "they would be baptized with His baptism, witnessing to the world when their blood was spilt." Baptist For the majority of Baptists, Christian baptism is the immersion of a believer in water in the name of the Father, the Son, and the Holy Spirit. Baptism does not accomplish anything in itself, but is an outward personal sign that the person's sins have already been washed away by the blood of Christ's cross. For a new convert the general practice is that baptism also allows the person to be a registered member of the local Baptist congregation (though some churches have adopted "new members classes" as an additional mandatory step for congregational membership). Regarding rebaptism, the general rules are: baptisms by other than immersion are not recognized as valid and therefore rebaptism by immersion is required; and baptisms by immersion in other denominations may be considered valid if performed after the person has professed faith in Jesus Christ (though among the more conservative groups such as Independent Baptists, rebaptism may be required by the local congregation if performed in a non-Baptist church – and, in extreme cases, even if performed within a Baptist church that wasn't an Independent Baptist congregation). For newborns, there is a ceremony called child dedication. Tennessee antebellum Methodist circuit rider and newspaper publisher William G. Brownlow stated in his 1856 book The Great Iron Wheel Examined; or, Its False Spokes Extracted, and an Exhibition of Elder Graves, Its Builder that the immersion baptism practiced in the Baptist churches of the United States did not extend in a "regular line of succession...from John the Baptist – but from old Zeke Holliman and his true yoke-fellow, Mr. [Roger] Williams", as in 1639 Holliman and Williams first immersion baptized each other and then immersion baptized the ten other members of the first Baptist church in British America at Providence, Rhode Island. Churches of Christ Baptism in Churches of Christ is performed only by full bodily immersion, based on the Koine Greek verb baptizo, which means to dip, immerse, submerge or plunge. Submersion is seen as more closely conforming to the death, burial and resurrection of Jesus than other modes of baptism. Churches of Christ argue that historically immersion was the mode used in the 1st century, and that pouring and sprinkling later emerged as secondary modes when immersion was not possible. Over time these secondary modes came to replace immersion. Only those mentally capable of belief and repentance are baptized (i.e., infant baptism is not practiced because the New Testament has no precedent for it). Churches of Christ have historically had the most conservative position on baptism among the various branches of the Restoration Movement, understanding baptism by immersion to be a necessary part of conversion.
The most significant disagreements concerned the extent to which a correct understanding of the role of baptism is necessary for its validity. David Lipscomb insisted that if a believer was baptized out of a desire to obey God, the baptism was valid, even if the individual did not fully understand the role baptism plays in salvation. Austin McGary contended that to be valid, the convert must also understand that baptism is for the forgiveness of sins. McGary's view became the prevailing one in the early 20th century, but the approach advocated by Lipscomb never totally disappeared. As such, the general practice among churches of Christ is to require rebaptism by immersion of converts, even those who were previously baptized by immersion in other churches. More recently, the rise of the International Churches of Christ has caused some to reexamine the issue. Churches of Christ consistently teach that in baptism a believer surrenders his life in faith and obedience to God, and that God "by the merits of Christ's blood, cleanses one from sin and truly changes the state of the person from an alien to a citizen of God's kingdom. Baptism is not a human work; it is the place where God does the work that only God can do." Baptism is a passive act of faith rather than a meritorious work; it "is a confession that a person has nothing to offer God." While Churches of Christ do not describe baptism as a "sacrament", their view of it can legitimately be described as "sacramental." They see the power of baptism coming from God, who chose to use baptism as a vehicle, rather than from the water or the act itself, and understand baptism to be an integral part of the conversion process, rather than just a symbol of conversion. A recent trend is to emphasize the transformational aspect of baptism: instead of describing it as just a legal requirement or sign of something that happened in the past, it is seen as "the event that places the believer 'into Christ' where God does the ongoing work of transformation." There is a minority that downplays the importance of baptism to avoid sectarianism, but the broader trend is to "reexamine the richness of the biblical teaching of baptism and to reinforce its central and essential place in Christianity." Because of the belief that baptism is a necessary part of salvation, some Baptists hold that the Churches of Christ endorse the doctrine of baptismal regeneration. However, members of the Churches of Christ reject this, arguing that since faith and repentance are necessary, and that the cleansing of sins is by the blood of Christ through the grace of God, baptism is not an inherently redeeming ritual. Rather, their inclination is to point to the biblical passage in which Peter, analogizing baptism to Noah's flood, posits that "likewise baptism doth also now save us" but parenthetically clarifies that baptism is "not the putting away of the filth of the flesh but the response of a good conscience toward God" (1 Peter 3:21). One author from the churches of Christ describes the relationship between faith and baptism this way, "Faith is the reason why a person is a child of God; baptism is the time at which one is incorporated into Christ and so becomes a child of God" (italics are in the source). Baptism is understood as a confessional expression of faith and repentance, rather than a "work" that earns salvation. Lutheranism In Lutheran Christianity, baptism is a sacrament that regenerates the soul. 
Upon one's baptism, one receives the Holy Spirit and becomes a part of the church. Methodism The Methodist Articles of Religion, with regard to baptism, teach: While baptism imparts grace, Methodists teach that a personal acceptance of Jesus Christ (the first work of grace) is essential to one's salvation; during the second work of grace, entire sanctification, a believer is purified of original sin and made holy. In the Methodist Churches, baptism is a sacrament of initiation into the visible Church. Wesleyan covenant theology further teaches that baptism is a sign and a seal of the covenant of grace: Methodists recognize three modes of baptism as being valid—"immersion, sprinkling, or pouring" in the name of the Holy Trinity. Moravianism The Moravian Church teaches that baptism is a sign and a seal, recognizing three modes of baptism as being valid: immersion, aspersion, and affusion. Reformed Protestantism In Reformed baptismal theology, baptism is seen as primarily God's offer of union with Christ and all his benefits to the baptized. This offer is believed to be intact even when it is not received in faith by the person baptized. Reformed theologians believe the Holy Spirit brings into effect the promises signified in baptism. Baptism is held by almost the entire Reformed tradition to effect regeneration, even in infants who are incapable of faith, by effecting faith which would come to fruition later. Baptism also initiates one into the visible church and the covenant of grace. Baptism is seen as a replacement of circumcision, which is considered the rite of initiation into the covenant of grace in the Old Testament. Reformed Christians believe that immersion is not necessary for baptism to be properly performed, but that pouring or sprinkling are acceptable. Only ordained ministers are permitted to administer baptism in Reformed churches, with no allowance for emergency baptism, though baptisms performed by non-ministers are generally considered valid. Reformed churches, while rejecting the baptismal ceremonies of the Roman Catholic church, accept the validity of baptisms performed with them and do not rebaptize. United Protestants In United Protestant Churches, such as the United Church of Canada, Church of North India, Church of Pakistan, Church of South India, Protestant Church in the Netherlands, Uniting Church in Australia and United Church of Christ in Japan, baptism is a sacrament. Catholicism In Catholic teaching, baptism is stated to be "necessary for salvation by actual reception or at least by desire". Catholic discipline requires the baptism ceremony to be performed by deacons, priests, or bishops, but in an emergency such as danger of death, anyone can licitly baptize. This teaching is based on the Gospel according to John which says that Jesus proclaimed: "Truly, truly, I say to you, unless one is born of water and the Spirit, he cannot enter into the Kingdom of God." It dates back to the teachings and practices of 1st-century Christians, and the connection between salvation and baptism was not, on the whole, an item of major dispute until Huldrych Zwingli denied the necessity of baptism, which he saw as merely a sign granting admission to the Christian community. The Catechism of the Catholic Church states that "Baptism is necessary for salvation for those to whom the Gospel has been proclaimed and who have had the possibility of asking for this sacrament." 
The Council of Trent also states in the Decree Concerning Justification from session six that baptism is necessary for salvation. A person who knowingly, willfully and unrepentantly rejects baptism has no hope of salvation. However, if knowledge is absent, "those also can attain to salvation who through no fault of their own do not know the Gospel of Christ or His Church, yet sincerely seek God and moved by grace strive by their deeds to do His will as it is known to them through the dictates of conscience." The Catechism of the Catholic Church also states: "Since Baptism signifies liberation from sin and from its instigator the devil, one or more exorcisms are pronounced over the candidate". In the Roman Rite of the baptism of a child, the wording of the prayer of exorcism is: "Almighty and ever-living God, you sent your only Son into the world to cast out the power of Satan, spirit of evil, to rescue man from the kingdom of darkness and bring him into the splendour of your kingdom of light. We pray for this child: set him (her) free from original sin, make him (her) a temple of your glory, and send your Holy Spirit to dwell with him (her). Through Christ our Lord." In the Catholic Church, by baptism all sins are forgiven: original sin and all personal sins. Baptism not only purifies from all sins, but also makes the neophyte "a new creature," an adopted son of God, who has become a "partaker of the divine nature," member of Christ and co-heir with him, and a temple of the Holy Spirit. Given once for all, baptism cannot be repeated: just as a man can be born only once, so he is baptized only once. For this reason the holy Fathers added to the Nicene Creed the words "We acknowledge one Baptism". Sanctifying grace, the grace of justification, given by God by baptism, erases the original sin and personal actual sins. The power of Baptism consists in cleansing a man from all his sins as regards both guilt and punishment, for which reason no penance is imposed on those who receive Baptism, no matter how great their sins may have been. And if they were to die immediately after Baptism, they would rise at once to eternal life. In the Latin Church of the Catholic Church, a valid baptism requires, according to Canon 758 of the 1917 Code of Canon Law, the baptizer to pronounce the formula "I baptize you in the name of the Father and of the Son and of the Holy Spirit" while putting the baptized in contact with water. The contact may be immersion, "affusion" (pouring), or "aspersion" (sprinkling). The formula requires "name" to be singular, emphasising the monotheism of the Trinity. It is claimed that Pope Stephen I, Ambrose and Pope Nicholas I declared that baptisms in the name of "Jesus" only as well as in the name of "Father, Son and Holy Spirit" were valid. The correct interpretation of their words is disputed. Current canon law requires the Trinitarian formula and water for validity. The formula requires "I baptize" rather than "we baptize", as clarified by a responsum of June 24, 2020. In 2022 the Diocese of Phoenix accepted the resignation of a parish priest whose use of "we baptize" had invalidated "thousands of baptisms over more than 20 years". In the Byzantine Rite the formula is in the passive voice, "The servant of God N. is baptized in the Name of the Father, and of the Son, and of the Holy Spirit." Offspring of practicing Catholic parents are typically baptized as infants.
Baptism is part of the Rite of Christian Initiation of Adults, provided for converts from non-Christian backgrounds and others not baptized as infants. Baptism by non-Catholic Christians is valid if the formula and water are present, and so converts from other Christian denominations are not given a Catholic baptism. The church recognizes two equivalents of baptism with water: "baptism of blood" and "baptism of desire". Baptism of blood is that undergone by unbaptized individuals who are martyred for their faith, while baptism of desire generally applies to catechumens who die before they can be baptized. The Catechism of the Catholic Church describes these two forms: The Church has always held the firm conviction that those who suffer death for the sake of the faith without having received Baptism are baptized by their death for and with Christ. This Baptism of blood, like the desire for Baptism, brings about the fruits of Baptism without being a sacrament. — 1258 For catechumens who die before their Baptism, their explicit desire to receive it, together with repentance for their sins, and charity, assures them the salvation that they were not able to receive through the sacrament. — 1259 The Catholic Church holds that those who are ignorant of Christ's Gospel and of the church, but who seek the truth and do God's will as they understand it, may be supposed to have an implicit desire for baptism and can be saved: Since Christ died for all, and since all men are in fact called to one and the same destiny, which is divine, we must hold that the Holy Spirit offers to all the possibility of being made partakers, in a way known to God, of the Paschal mystery.' Every man who is ignorant of the Gospel of Christ and of his Church, but seeks the truth and does the will of God in accordance with his understanding of it, can be saved. It may be supposed that such persons would have desired Baptism explicitly if they had known its necessity." As for unbaptized infants, the church is unsure of their fate; "the Church can only entrust them to the mercy of God". Eastern Orthodoxy In Eastern Orthodoxy, baptism is considered a sacrament and mystery which transforms the old and sinful person into a new and pure one, where the old life, the sins, any mistakes made are gone and a clean slate is given. In Greek and Russian Orthodox traditions, it is taught that through Baptism a person is united to the Body of Christ by becoming an official member of the Orthodox Church. During the service, the Orthodox priest blesses the water to be used. The catechumen (the one baptised) is fully immersed in the water three times in the name of the Trinity. This is considered to be a death of the "old man" by participation in the crucifixion and burial of Christ, and a rebirth into new life in Christ by participation in his resurrection. Properly a new name is given, which becomes the person's name. Babies of Orthodox families are normally baptized shortly after birth. Older converts to Orthodoxy are usually formally baptized into the Orthodox Church, though exceptions are sometimes made. Those who choose to convert from a different religion to Eastern Orthodoxy typically undergo Chrismation, known as confirmation in the Roman Catholic Church. Properly and generally, the Mystery of Baptism is administered by bishops and other priests; however, in emergencies any Orthodox Christian can baptize. 
In such cases, should the person survive the emergency, it is likely that the person will be properly baptized by a priest at some later date. This is not considered to be a second baptism, nor is it imagined that the person is not already Orthodox, but rather it is a fulfillment of the proper form. The service of baptism in Greek Orthodox (and other Eastern Orthodox) churches has remained largely unchanged for over 1500 years. This fact is witnessed to by Cyril of Jerusalem (d. 386), who, in his Discourse on the Sacrament of Baptism, describes the service in much the same way as is currently in use. Other groups Jehovah's Witnesses Jehovah's Witnesses believe that baptism should be performed by complete immersion (submersion) in water and only when an individual is old enough to understand its significance. They believe that water baptism is an outward symbol that a person has made an unconditional dedication through Jesus Christ to do the will of God. Only after baptism, is a person considered a full-fledged Witness, and an official member of the Christian Congregation. They consider baptism to constitute ordination as a minister. Prospective candidates for baptism must express their desire to be baptized well in advance of a planned baptismal event, to allow for congregation elders to assess their suitability (regarding true repentance and conversion). Elders approve candidates for baptism if the candidates are considered to understand what is expected of members of the religion and to demonstrate sincere dedication to the faith. Most baptisms among Jehovah's Witnesses are performed at scheduled assemblies and conventions by elders and ministerial servants, in special pools, or sometimes oceans, rivers, or lakes, depending on circumstances, and rarely occur at local Kingdom Halls. Prior to baptism, at the conclusion of a pre-baptism talk, candidates must affirm two questions: Only baptized males (elders or ministerial servants) may baptize new members. Baptizers and candidates wear swimsuits or other informal clothing for baptism, but are directed to avoid clothing that is considered undignified or too revealing. Generally, candidates are individually immersed by a single baptizer, unless a candidate has special circumstances such as a physical disability. In circumstances of extended isolation, a qualified candidate's dedication and stated intention to become baptized may serve to identify him as a member of Jehovah's Witnesses, even if immersion itself must be delayed. In rare instances, unbaptized males who had stated such an intention have reciprocally baptized each other, with both baptisms accepted as valid. Individuals who had been baptized in the 1930s and 1940s by female Witnesses due to extenuating circumstances, such as in concentration camps, were later re-baptized but still recognized their original baptism dates. Church of Jesus Christ of Latter-day Saints In the Church of Jesus Christ of Latter-day Saints (LDS Church), baptism is recognized as the first of several ordinances (rituals) of the gospel. In Mormonism, baptism has the main purpose of remitting the sins of the participant. It is followed by confirmation, which inducts the person into membership in the church and constitutes a baptism with the Holy Spirit. Latter-day Saints believe that baptism must be by full immersion, and by a precise ritualized ordinance: if some part of the participant is not fully immersed, or the ordinance was not recited verbatim, the ritual must be repeated. 
It typically occurs in a baptismal font. In addition, members of the LDS Church do not believe a baptism is valid unless it is performed by a Latter-day Saint who has proper authority (a priest or elder). Authority is passed down through a form of apostolic succession. All new converts to the faith must be baptized or re-baptized. Baptism is seen as symbolic both of Jesus' death, burial and resurrection, and of the baptized individual discarding their "natural" self and donning a new identity as a disciple of Jesus. According to Latter-day Saint theology, faith and repentance are prerequisites to baptism. The ritual does not cleanse the participant of original sin, as Latter-day Saints do not believe in the doctrine of original sin. Mormonism rejects infant baptism, and baptism must occur after the age of accountability, defined in Latter-day Saint scripture as eight years old. Latter-day Saint theology also teaches baptism for the dead, in which deceased ancestors are baptized vicariously by the living; Latter-day Saints believe that this practice is what Paul wrote of in 1 Corinthians 15:29. This occurs in Latter-day Saint temples. Freemasonry Due to tensions between the Roman Catholic Church and Freemasons in France in the aftermath of the French Revolution, French Freemasons developed rituals to replace those of the Church, including baptism. Chrétien-Guillaume Riebesthal’s Rituel Maçonnique pour tous les Rites (Masonic Ritual for All Rites), published in Strasbourg in 1826, includes one such baptismal rite. Lodges in Louisiana and Wisconsin performed baptism ceremonies in 1859, though they were widely condemned by their Grand Lodges. In 1865, Albert Pike publicly performed a ceremony of Masonic baptism in New York City. The ceremony was greeted with skepticism by many American Masons, including Albert Mackey. A ceremony for Masonic baptism was published by Charles T. McClenechan in 1884. Non-practitioners Quakers Quakers (members of the Religious Society of Friends) do not believe in the baptism of either children or adults with water, rejecting all forms of outward sacraments in their religious life. Robert Barclay's Apology for the True Christian Divinity (a historic explanation of Quaker theology from the 17th century) explains Quakers' opposition to baptism with water thus: Barclay argued that water baptism was only something that happened until the time of Christ, but that now, people are baptised inwardly by the spirit of Christ, and hence there is no need for the external sacrament of water baptism, which Quakers argue is meaningless. Salvation Army The Salvation Army does not practice water baptism, or indeed other outward sacraments. William Booth and Catherine Booth, the founders of the Salvation Army, believed that many Christians had come to rely on the outward signs of spiritual grace rather than on grace itself. They believed what was important was spiritual grace itself. However, although the Salvation Army does not practice baptism, they are not opposed to baptism within other Christian denominations. Hyperdispensationalism There are some Christians termed "Hyperdispensationalists" (Mid-Acts dispensationalism) who accept only Paul's Epistles as directly applicable for the church today. They do not accept water baptism as a practice for the church, since Paul, who was God's apostle to the nations, was not sent to baptize.
Ultradispensationalists (Acts 28 dispensationalism) who do not accept the practice of the Lord's supper, do not practice baptism because these are not found in the Prison Epistles. Both sects believe water baptism was a valid practice for covenant Israel. Hyperdispensationalists also teach that Peter's gospel message was not the same as Paul's. Hyperdispensationalists assert: The great commission and its baptism is directed to early Jewish believers, not the Gentile believers of mid-Acts or later. The baptism of Acts 2:36–38 is Peter's call for Israel to repent of complicity in the death of their Messiah; not as a Gospel announcement of atonement for sin, a later doctrine revealed by Paul. Water baptism found early in the Book of Acts is, according to this view, now supplanted by the one baptism foretold by John the Baptist. Others make a distinction between John's prophesied baptism by Christ with the Holy Spirit and the Holy Spirit's baptism of the believer into the body of Christ; the latter being the one baptism for today. The one baptism for today, it is asserted, is the "baptism of the Holy Spirit" of the believer into the Body of Christ church. Many in this group also argue that John's promised baptism by fire is pending, referring to the destruction of the world by fire. Other Hyperdispensationalists believe that baptism was necessary until mid-Acts. Debaptism Most Christian churches see baptism as a once-in-a-lifetime event that can be neither repeated nor undone. They hold that those who have been baptized remain baptized, even if they renounce the Christian faith by adopting a non-Christian religion or by rejecting religion entirely. But some other organizations and individuals are practicing debaptism. Comparative summary A comparative summary of the practice of baptism throughout various Christian denominations is given below. (This section does not give a complete listing of denominations, and therefore, it only mentions a fraction of the churches practicing "believer's baptism".) Baptism of objects The word "baptism" or "christening" is sometimes used to describe the naming or inauguration of certain objects for use. Boats and ships Baptism of Ships: since at least the time of the Crusades, rituals have contained a blessing for ships. The priest asks God to bless the vessel and protect those who sail on it. The ship is usually sprinkled with holy water. Church bells The name Baptism of Bells has been given to the blessing of (musical, especially church) bells, at least in France, since the 11th century. It is derived from the washing of the bell with holy water by the bishop, before he anoints it with the oil of the infirm without and with chrism within; a fuming censer is placed under it and the bishop prays that these sacramentals of the church may, at the sound of the bell, put the demons to flight, protect from storms, and call the faithful to prayer. Dolls "Baptism of Dolls": the custom of 'dolly dunking' was once a common practice in parts of the United Kingdom, particularly in Cornwall where it has been revived in recent years. Other initiation ceremonies Many cultures practice or have practiced initiation rites, with or without the use of water, including the ancient Egyptian, the Hebraic/Jewish, the Babylonian, the Mayan, and the Norse cultures. The modern Japanese practice of Miyamairi is such a ceremony that does not use water. In some, such evidence may be archaeological and descriptive in nature, rather than a modern practice. 
Mystery religion initiation rites Many scholars have drawn parallels between rites from mystery religions and baptism in Christianity. Apuleius, a 2nd-century Roman writer, described an initiation into the mysteries of Isis. The initiation was preceded by a normal bathing in the public baths and a ceremonial sprinkling by the priest of Isis, after which the candidate was given secret instructions in the temple of the goddess. The candidate then fasted for ten days from meat and wine, after which he was dressed in linen and led at night into the innermost part of the sanctuary, where the actual initiation took place, the details of which were secret. On the next two days, dressed in the robes of his consecration, he participated in feasting. Apuleius also describes an initiation into the cult of Osiris and yet a third initiation, of the same pattern as the initiation into the cult of Isis, without mention of a preliminary bathing. The water-less initiations of Lucius, the character in Apuleius's story who had been turned into an ass and changed back by Isis into human form, into the successive degrees of the rites of the goddess were accomplished only after a significant period of study to demonstrate his loyalty and trustworthiness, akin to catechumenal practices preceding baptism in Christianity. Jan Bremmer has written on the putative connection between rites from mystery religions and baptism: There are thus some verbal parallels between early Christianity and the Mysteries, but the situation is rather different as regards early Christian ritual practice. Much ink was spilled around 1900 arguing that the rituals of baptism and of the Last Supper derived from the ancient Mysteries, but Nock and others after him have easily shown that these attempts grossly misinterpreted the sources. Baptism is clearly rooted in Jewish purificatory rituals, and cult meals are so widespread in antiquity that any specific derivation is arbitrary. It is truly surprising to see how long the attempts to find some pagan background to these two Christian sacraments have persevered. Secularising ideologies clearly played an important part in these interpretations but, nevertheless, they have helped to clarify the relations between nascent Christianity and its surroundings. Thus the practice is derivative, whether from Judaism, the Mysteries or a combination (see the reference to Hellenistic Judaism in the Etymology section). Gnostic Catholicism and Thelema The Ecclesia Gnostica Catholica, or Gnostic Catholic Church (the ecclesiastical arm of Ordo Templi Orientis), offers its Rite of Baptism to any person at least 11 years old. Mandaean baptism Mandaeans revere John the Baptist and practice frequent baptism (masbuta) as a ritual of purification, not of initiation. They are possibly the earliest people to practice baptism. Mandaeans undergo baptism on Sundays (Habshaba), wearing a white sacral robe (rasta). Baptism for Mandaeans consists of a triple full immersion in water, a triple signing of the forehead with water and a triple drinking of water. The priest (Rabbi) then removes a ring made of myrtle worn by the baptized and places it on their forehead. This is then followed by a handshake (kushta, "hand of truth") with the priest. The final blessing involves the priest laying his right hand on the baptized person's head. Living water (fresh, natural, flowing water) is a requirement for baptism; therefore, baptism can only take place in rivers.
All rivers are named Jordan (yardena) and are believed to be nourished by the World of Light. By the river bank, a Mandaean's forehead is anointed with sesame oil (misha) and he or she partakes in a communion of bread (pihta) and water. Baptism for Mandaeans allows for salvation by connecting with the World of Light and for forgiveness of sins. Sethian baptism The Sethian baptismal rite is known as the Five Seals, in which the initiate is immersed five times in running water. Yazidi baptism Yazidi baptism is called mor kirin (literally: "to seal"). Traditionally, Yazidi children are baptised at birth with water from the Kaniya Sipî ("White Spring") at Lalish. It essentially consists of pouring holy water from the spring on the child's head three times. Islamic practice of wudu Many Islamic scholars such as Shaikh Bawa Muhaiyaddeen have compared the Islamic practice of wudu to a baptism. Wudu is a ritual washing that Muslims perform to go from ritual impurity to ritual purity. Ritual purity is required for Salah (praying) and also to hold a physical copy of the Qur’an, and so wudu is often done before salah. However, it is permissible to pray more than one salah without repeating wudu, as long as ritual purity is not broken, for example by using the bathroom. Another similar purification ritual is ghusl, which takes someone from major ritual impurity (janabah) to lesser ritual impurity, which is then purified by wudu. If one is in a state of janabah, both ghusl and wudu are required if one wants to pray. Although original sin does not exist in Islam, wudu is widely regarded to remove sins. In a Sahih hadith, Muhammad says: "Whenever a man performs his ablution intending to pray and he washes his hands, the sins of his hands fall down with the first drop. When he rinses his mouth and nose, the sins of his tongue and lips fall down with the first drop. When he washes his face, the sins of his hearing and sight fall down with the first drop. When he washes his arms to his elbows and his feet to his ankles, he is purified from every sin and fault like the day he was born from his mother. If he stands for prayer, Allah will raise his status by a degree. If he sits, he will sit in peace." Baptism in the Yadav community People of the Yadav community of the Hindu religion follow a form of baptism, which is called Karah Pujan. In this, the person being baptized is bathed in boiling milk. The newborn baby is also included in this process, in which he is bathed with boiling milk and then garlanded with flowers. See also Amrit Sanchar, in Sikhism Baptism by fire Baptistery Chrism Christifideles Consolamentum Disciple (Christianity) Divine filiation Ghusl Holy water in Eastern Christianity Mikvah Misogi Prevenient Grace Ritual purification Theophany Water and religion Notes References External links "Writings of the Early Church Fathers on Baptism" "Baptism." Encyclopædia Britannica Online.
Beatmatching or pitch cue is a disc jockey technique of pitch shifting or time stretching an upcoming track to match its tempo to that of the currently playing track, and to adjust them such that the beats (and, usually, the bars) are synchronized—e.g. the kicks and snares in two house records hit at the same time when both records are played simultaneously. Beatmatching is a component of beatmixing, which employs beatmatching combined with equalization, attention to phrasing and track selection in an attempt to make a single mix that flows together and has a good structure. The technique was developed to keep people from leaving the dancefloor at the end of a song. These days it is considered basic among disc jockeys (DJs) in electronic dance music genres, and it is standard practice in clubs to keep a constant beat through the night, even if DJs change partway through. Technique The beatmatching technique consists of the following steps: While a record is playing, start a second record playing, but only monitored through headphones, not being fed to the main PA system. Use the gain (or trim) control on the mixer to match the levels of the two records. Restart and slip-cue the new record at the right time, on beat with the record currently playing. If the beat on the new record hits before the beat on the current record, then the new record is too fast; reduce the pitch and manually slow the speed of the new record to bring the beats back in sync. If the beat on the new record hits after the beat on the current record, then the new record is too slow; increase the pitch and manually increase the speed of the new record to bring the beats back in sync. Continue this process until the two records are in sync with each other. It can be difficult to sync the two records perfectly, so manual adjustment of the records is necessary to maintain the beat synchronization. Gradually fade in parts of the new track while fading out the old track. While in the mix, ensure that the tracks are still synchronized, adjusting the records if needed. The fade can be repeated several times, for example, from the first track, fade to the second track, then back to the first, then to the second again. One of the key things to consider when beatmatching is the tempo of both songs, and the musical theory behind them. Attempting to beatmatch songs with completely different beats per minute (BPM) will result in one of the songs sounding too fast or too slow. When beatmatching, a popular technique is to vary the equalization of both tracks. For example, when the kicks are occurring on the same beat, a more seamless transition can occur if the lower frequencies are taken out of one of the songs and the lower frequencies of the other song are boosted. Doing so creates a smoother transition. Pitch and tempo The pitch and tempo of a track are normally linked together: spin a disc 5% faster and both pitch and tempo will be 5% higher. However, some modern DJ software can change pitch and tempo independently using time-stretching and pitch-shifting, allowing harmonic mixing. There is also a feature in modern DJ software, which may be called "master tempo" or "key adjust", that changes the tempo while keeping the original pitch. History Francis Grasso was one of the first people to beatmatch in the late 1960s, being taught the technique by Bob Lewis. These days beat-matching is considered central to DJing, and features making it possible are a requirement for DJ-oriented players.
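The tempo arithmetic behind the technique described above is simple enough to sketch in a few lines of code. The following Python snippet is a minimal illustration only, not taken from any DJ software; the function names and example BPM values are invented for this sketch. It computes the percentage speed adjustment needed to bring an incoming track to the tempo of the playing track, assuming pitch and tempo stay linked as on a conventional turntable.

```python
def pitch_adjustment_percent(playing_bpm: float, incoming_bpm: float) -> float:
    """Percent speed change for the incoming track so its tempo matches
    the currently playing track (positive means speed it up)."""
    return (playing_bpm / incoming_bpm - 1.0) * 100.0


def beat_period_seconds(bpm: float) -> float:
    """Length of one beat in seconds at the given tempo."""
    return 60.0 / bpm


if __name__ == "__main__":
    playing = 128.0   # hypothetical BPM of the track on the main output
    incoming = 125.0  # hypothetical BPM of the track cued in the headphones

    adjust = pitch_adjustment_percent(playing, incoming)
    matched = incoming * (1.0 + adjust / 100.0)

    print(f"Set the incoming track's pitch fader to {adjust:+.2f}%")
    print(f"It then runs at {matched:.2f} BPM, one beat every "
          f"{beat_period_seconds(matched):.3f} s")
    # Without a "master tempo"/key-lock feature, the same speed change also
    # raises the musical pitch by the same percentage; with key lock, the
    # tempo changes while the pitch stays constant (time-stretching).
```

Matching the percentage only equalizes the tempos; the DJ (or the software's sync feature) must still nudge the records so that the beats line up in phase.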
In 1978, the Technics SL-1200MK2 turntable was released, whose comfortable and precise sliding pitch control and high-torque direct-drive motor made beat-matching easier, and it became the standard among DJs. With the advent of the compact disc, DJ-oriented compact disc players with pitch control and other features enabling beat-matching (and sometimes scratching), dubbed CDJs, were introduced by various companies. More recently, software with similar capabilities has been developed to allow manipulation of digital audio files stored on computers using turntables with special vinyl records (e.g. Final Scratch, M-Audio Torq, Serato Scratch Live) or a computer interface (e.g. Traktor DJ Studio, Mixxx, VirtualDJ). Other software that includes algorithmic beat-matching is Ableton Live, which allows for realtime music manipulation and deconstruction. Freeware software such as Rapid Evolution can detect the beats per minute and determine the percent BPM difference between songs. Most modern DJ hardware and software now offer a "sync" feature that automatically adjusts the tempo between tracks being mixed, so the DJ no longer needs to beatmatch manually. See also Clubdjpro DJ mix Harmonic mixing Mashup Segue References
Black Sabbath were an English rock band formed in Birmingham in 1968 by guitarist Tony Iommi, drummer Bill Ward, bassist Geezer Butler and vocalist Ozzy Osbourne. They are often cited as pioneers of heavy metal music. The band helped define the genre with releases such as Black Sabbath (1970), Paranoid (1970) and Master of Reality (1971). The band had multiple line-up changes following Osbourne's departure in 1979, with Iommi being the only constant member throughout their history. After previous iterations of the group – the Polka Tulk Blues Band and Earth – the band settled on the name Black Sabbath in 1969. They distinguished themselves through occult themes with horror-inspired lyrics and down-tuned guitars. Signing to Philips Records in November 1969, they released their first single, "Evil Woman", in January 1970, and their debut album, Black Sabbath, was released the following month. Though it received a negative critical response, the album was a commercial success, leading to a follow-up record, Paranoid, later that year. The band's popularity grew, and by 1973's Sabbath Bloody Sabbath, critics were starting to respond favourably. This album, along with its predecessor Vol. 4 (1972) and its successors Sabotage (1975), Technical Ecstasy (1976) and Never Say Die! (1978), saw the band explore more experimental and progressive styles. Osbourne's excessive substance abuse led to his firing in 1979. He was replaced by former Rainbow vocalist Ronnie James Dio. Sabbath recorded three albums with Dio: Heaven and Hell (1980), Mob Rules (1981) and the live album Live Evil (1982), with the last two featuring drummer Vinny Appice replacing Ward. Following Dio and Appice's departures, Butler and Iommi recorded Born Again (1983) with Deep Purple vocalist Ian Gillan and Ward returning on drums, while Electric Light Orchestra drummer Bev Bevan replaced Ward on the subsequent tour. Black Sabbath split in 1984, with Iommi assembling a new version of the band the following year. For the next twelve years, the band endured many personnel changes that included vocalists Glenn Hughes, Ray Gillen and Tony Martin, as well as several drummers and bassists. Martin, who replaced Gillen in 1987, was the second-longest serving vocalist after Osbourne and recorded three albums with Black Sabbath before his dismissal in 1991. That same year, Iommi rejoined with Butler, Dio and Appice to record Dehumanizer (1992). After two more studio albums with Martin, who returned to replace Dio in 1993, the band's original line-up reunited in 1997 and released a live album, Reunion, the following year; they continued to tour occasionally until 2005. Other than various back catalogue reissues and compilation albums, as well as the Mob Rules-era line-up reuniting as Heaven & Hell, there was no further activity under the Black Sabbath name until the band reunited in 2011; their 19th and final studio album, 13 (2013), features all of the original members except Ward. During their farewell tour, the band played their final concert in their home city of Birmingham on 4 February 2017. Occasional partial reunions have happened since, most recently when Osbourne and Iommi performed together at the closing ceremony of the 2022 Commonwealth Games in Birmingham. Black Sabbath have sold over 70 million records worldwide as of 2013, making them one of the most commercially successful heavy metal bands.
Black Sabbath, together with Deep Purple and Led Zeppelin, have been referred to as the "unholy trinity of British hard rock and heavy metal in the early to mid-seventies". They were ranked by MTV as the "Greatest Metal Band of All Time" and placed second on VH1's "100 Greatest Artists of Hard Rock" list. Rolling Stone magazine ranked them number 85 on their "100 Greatest Artists of All Time". Black Sabbath were inducted into the UK Music Hall of Fame in 2005 and the Rock and Roll Hall of Fame in 2006. They have also won two Grammy Awards for Best Metal Performance, and in 2019 the band were presented a Grammy Lifetime Achievement Award. History 1968–1969: Formation and early days Following the break-up of their previous band, Mythology, in 1968, guitarist Tony Iommi and drummer Bill Ward sought to form a heavy blues rock band in Aston, Birmingham. They enlisted bassist Geezer Butler and vocalist Ozzy Osbourne, who had played together in a band called Rare Breed, Osbourne having placed an advertisement in a local music shop: "OZZY ZIG Needs Gig – has own PA". The new group was initially named the Polka Tulk Blues Band, the name taken either from a brand of talcum powder or an Indian/Pakistani clothing shop; the exact origin is confused. The Polka Tulk Blues Band included slide guitarist Jimmy Phillips, a childhood friend of Osbourne's, and saxophonist Alan "Aker" Clarke. After shortening the name to Polka Tulk, the band again changed their name to Earth (which Osbourne hated) and continued as a four-piece without Phillips and Clarke. Iommi became concerned that Phillips and Clarke lacked the necessary dedication and were not taking the band seriously. Rather than asking them to leave, they instead decided to break up and then quietly reformed the band as a four-piece. While the band was performing under the Earth moniker, they recorded several demos written by Norman Haines such as "The Rebel", "When I Came Down" and "Song for Jim", the latter of which being a reference to Jim Simpson, who was a manager for the bands Bakerloo Blues Line and Tea & Symphony, as well as the trumpet player for the group Locomotive. Simpson had recently started a new club named Henry's Blueshouse at The Crown Hotel in Birmingham and offered to let Earth play there after they agreed to waive the usual support band fee in return for free T-shirts. The audience response was positive and Simpson agreed to manage Earth. In December 1968, Iommi abruptly left Earth to join Jethro Tull. Although his stint with the band would be short-lived, Iommi made an appearance with Jethro Tull on The Rolling Stones Rock and Roll Circus TV show. Unsatisfied with the direction of Jethro Tull, Iommi returned to Earth by the end of the month. "It just wasn't right, so I left", Iommi said. "At first I thought Tull were great, but I didn't much go for having a leader in the band, which was Ian Anderson's way. When I came back from Tull, I came back with a new attitude altogether. They taught me that to get on, you got to work for it." While playing shows in England in 1969, the band discovered they were being mistaken for another English group named Earth, so they decided to change their name again (this name change would give rise to the well-known debate about the alleged aesthetic influence of Coven, which the British band always denied). A cinema across the street from the band's rehearsal room was showing the 1963 horror film Black Sabbath, starring Boris Karloff and directed by Mario Bava. 
While watching people line up to see the film, Butler noted that it was "strange that people spend so much money to see scary movies". Following that, Osbourne and Butler wrote the lyrics for a song called "Black Sabbath", which was inspired by the work of horror and adventure-story writer Dennis Wheatley, along with a vision that Butler had of a black silhouetted figure standing at the foot of his bed. Making use of the musical tritone, also known as "the Devil's Interval", the song's ominous sound and dark lyrics pushed the band in a darker direction, a stark contrast to the popular music of the late 1960s, which was dominated by flower power, folk music and hippie culture. Judas Priest frontman Rob Halford has called the track "probably the most evil song ever written". Inspired by the new sound, the band changed their name to Black Sabbath in August 1969, and made the decision to focus on writing similar material in an attempt to create the musical equivalent of horror films. 1969–1971: Black Sabbath and Paranoid The band's first show as Black Sabbath took place on 30 August 1969 in Workington, England. They were signed to Philips Records in November 1969 and released their first single, "Evil Woman" (a cover of a song by the band Crow), which was recorded at Trident Studios through Philips subsidiary Fontana Records in January 1970. Later releases were handled by Philips' newly formed progressive rock label, Vertigo Records. Black Sabbath's first major exposure came when the band appeared on John Peel's Top Gear radio show in 1969, performing "Black Sabbath", "N.I.B.", "Behind the Wall of Sleep" and "Sleeping Village" to a national audience in Great Britain shortly before recording of their first album commenced. Although the "Evil Woman" single failed to chart, the band were afforded two days of studio time in November to record their debut album with producer Rodger Bain. Iommi recalls recording live: "We thought, 'We have two days to do it, and one of the days is mixing.' So we played live. Ozzy was singing at the same time; we just put him in a separate booth and off we went. We never had a second run of most of the stuff". Black Sabbath was released on Friday the 13th, February 1970, and reached number eight in the UK Albums Chart. Following its U.S. and Canadian release in May 1970 by Warner Bros. Records, the album reached number 23 on the Billboard 200, where it remained for over a year. The album was given negative reviews by many critics. Lester Bangs dismissed it in a Rolling Stone review as "discordant jams with bass and guitar reeling like velocitised speedfreaks all over each other's musical perimeters, yet never quite finding synch". It sold in substantial numbers despite being panned, giving the band their first mainstream exposure. It has since been certified Platinum in both U.S. by the Recording Industry Association of America (RIAA) and in the UK by British Phonographic Industry (BPI), and is now generally accepted as the first heavy metal album. The band returned to the studio in June 1970, just four months after Black Sabbath was released. The new album was initially set to be named War Pigs after the song "War Pigs", which was critical of the Vietnam War; however, Warner changed the title of the album to Paranoid. The album's lead single, "Paranoid", was written in the studio at the last minute. Ward explains: "We didn't have enough songs for the album, and Tony just played the [Paranoid] guitar lick and that was it. 
It took twenty, twenty-five minutes from top to bottom." The single was released in September 1970 and reached number four on the UK Singles Chart, remaining Black Sabbath's only top 10 hit. The album followed in the UK in October 1970, where, pushed by the success of the "Paranoid" single, it reached number one on the UK Albums Chart. The U.S. release was held off until January 1971, as the Black Sabbath album was still on the chart at the time of Paranoid's UK release. The album reached No. 12 in the U.S. in March 1971, and would go on to sell four million copies in the U.S. with virtually no radio airplay. Like Black Sabbath, the album was panned by rock critics of the era, but modern-day reviewers such as AllMusic's Steve Huey cite Paranoid as "one of the greatest and most influential heavy metal albums of all time", which "defined the sound and style of heavy metal more than any other record in rock history". The album was ranked at number 131 on Rolling Stone magazine's list of The 500 Greatest Albums of All Time. Paranoid's chart success allowed the band to tour the U.S. for the first time – their first U.S. show was at a club called Ungano's at 210 West 70th Street in New York City – and spawned the release of the album's second single, "Iron Man". Although the single failed to reach the top 40, it remains one of Black Sabbath's most popular songs, as well as the band's highest-charting U.S. single until 1998's "Psycho Man". 1971–1973: Master of Reality and Vol. 4 In February 1971, after a one-off performance at the Myponga Pop Festival in Australia, Black Sabbath returned to the studio to begin work on their third album. Following the chart success of Paranoid, the band were afforded more studio time, along with a "briefcase full of cash" to buy drugs. "We were getting into coke, big time", Ward explained. "Uppers, downers, Quaaludes, whatever you like. It got to the stage where you come up with ideas and forget them, because you were just so out of it." Production was completed in April 1971, and in July the band released Master of Reality, just six months after the U.S. release of Paranoid. The album reached the top 10 in the U.S. and the United Kingdom, and was certified Gold in less than two months, eventually receiving Platinum certification in the 1980s and Double Platinum in the early 21st century. It contained Sabbath's first acoustic songs, alongside fan favourites such as "Children of the Grave" and "Sweet Leaf". Critical response of the era was generally unfavourable, with Lester Bangs delivering an ambivalent review of Master of Reality in Rolling Stone, describing the closing "Children of the Grave" as "naïve, simplistic, repetitive, absolute doggerel – but in the tradition [of rock 'n' roll] ... The only criterion is excitement, and Black Sabbath's got it". (In 2003, Rolling Stone would place the album at number 300 on their 500 Greatest Albums of All Time list.) Following the Master of Reality world tour in 1972, the band took their first break in three years. As Ward explained: "The band started to become very fatigued and very tired. We'd been on the road non-stop, year in and year out, constantly touring and recording. I think Master of Reality was kind of like the end of an era, the first three albums, and we decided to take our time with the next album." In June 1972, the band reconvened in Los Angeles to begin work on their next album at the Record Plant. 
With more time in the studio, the album saw the band experimenting with new textures, such as strings, piano, orchestration and multi-part songs. Recording was plagued with problems, many as a result of substance abuse issues. Struggling to record the song "Cornucopia" after "sitting in the middle of the room, just doing drugs", Ward was nearly fired. "I hated the song, there were some patterns that were just ... horrible," the drummer said. "I nailed it in the end, but the reaction I got was the cold shoulder from everybody. It was like, 'Well, just go home; you're not being of any use right now.' I felt like I'd blown it, I was about to get fired". Butler thought that the end product "was very badly produced, as far as I was concerned. Our then-manager insisted on producing it, so he could claim production costs". The album was originally titled Snowblind after the song of the same name, which deals with cocaine abuse. The record company changed the title at the last minute to Black Sabbath Vol. 4. Ward observed, "There was no Volume 1, 2 or 3, so it's a pretty stupid title, really". Vol. 4 was released in September 1972, and while critics were dismissive, it achieved Gold status in less than a month, and was the band's fourth consecutive release to sell a million in the U.S. "Tomorrow's Dream" was released as a single – the band's first since "Paranoid" – but failed to chart. Following an extensive tour of the U.S., in 1973 the band travelled again to Australia, followed by a tour for the first time to New Zealand, before moving onto mainland Europe. "The band were definitely in their heyday", recalled Ward, "in the sense that nobody had burnt out quite yet". 1973–1976: Sabbath Bloody Sabbath and Sabotage Following the Vol. 4 world tour, Black Sabbath returned to Los Angeles to begin work on their next release. Pleased with the Vol. 4 album, the band sought to recreate the recording atmosphere, and returned to the Record Plant studio in Los Angeles. With new musical innovations of the era, the band were surprised to find that the room they had used previously at the Record Plant was replaced by a "giant synthesiser". The band rented a house in Bel Air and began writing in the summer of 1973, but in part because of substance issues and fatigue, they were unable to complete any songs. "Ideas weren't coming out the way they were on Vol. 4, and we really got discontent", Iommi said. "Everybody was sitting there waiting for me to come up with something. I just couldn't think of anything. And if I didn't come up with anything, nobody would do anything". After a month in Los Angeles with no results, the band opted to return to England. They rented Clearwell Castle in The Forest of Dean. "We rehearsed in the dungeons and it was really creepy, but it had some atmosphere, it conjured up things and stuff started coming out again". While working in the dungeon, Iommi stumbled onto the main riff of "Sabbath Bloody Sabbath", which set the tone for the new material. Recorded at Morgan Studios in London by Mike Butcher and building off the stylistic changes introduced on Vol. 4, new songs incorporated synthesisers, strings and complex arrangements. Yes keyboardist Rick Wakeman was brought in as a session player, appearing on "Sabbra Cadabra". 
In November 1973, Black Sabbath began to receive positive reviews in the mainstream press after the release of Sabbath Bloody Sabbath, with Gordon Fletcher of Rolling Stone calling the album "an extraordinarily gripping affair" and "nothing less than a complete success". Later reviewers such as AllMusic's Eduardo Rivadavia cite the album as a "masterpiece, essential to any heavy metal collection", while also displaying "a newfound sense of finesse and maturity". The album marked the band's fifth consecutive Platinum-selling album in the U.S., reaching number four on the UK Albums Chart and number 11 in the U.S. The band began a world tour in January 1974, which culminated at the California Jam festival in Ontario, California, on 6 April 1974. Attracting over 200,000 fans, Black Sabbath appeared alongside popular 1970s rock and pop bands Deep Purple, Eagles, Emerson, Lake & Palmer, Rare Earth, Seals & Crofts, Black Oak Arkansas and Earth, Wind & Fire. Portions of the show were telecast on ABC Television in the U.S., exposing the band to a wider American audience. In the same year, the band shifted management, signing with notorious English manager Don Arden. The move caused a contractual dispute with Black Sabbath's former management, and while on stage in the U.S., Osbourne was handed a subpoena that led to two years of litigation. Black Sabbath began work on their sixth album in February 1975, again in England at Morgan Studios in Willesden, this time with a decisive vision to differ the sound from Sabbath, Bloody Sabbath. "We could've continued and gone on and on, getting more technical, using orchestras and everything else which we didn't particularly want to. We took a look at ourselves, and we wanted to do a rock album – Sabbath, Bloody Sabbath wasn't a rock album, really". Produced by Black Sabbath and Mike Butcher, Sabotage was released in July 1975. As with its precursor, the album initially saw favourable reviews, with Rolling Stone stating "Sabotage is not only Black Sabbath's best record since Paranoid, it might be their best ever", although later reviewers such as AllMusic noted that "the magical chemistry that made such albums as Paranoid and Volume 4 so special was beginning to disintegrate". Sabotage reached the top 20 in both the U.S. and the United Kingdom, but was the band's first release not to achieve Platinum status in the U.S., only achieving Gold certification. Although the album's only single "Am I Going Insane (Radio)" failed to chart, Sabotage features fan favourites such as "Hole in the Sky" and "Symptom of the Universe". Black Sabbath toured in support of Sabotage with openers Kiss, but were forced to cut the tour short in November 1975, following a motorcycle accident in which Osbourne ruptured a muscle in his back. In December 1975, the band's record companies released a greatest hits album without input from the band, titled We Sold Our Soul for Rock 'n' Roll. The album charted throughout 1976, eventually selling two million copies in the U.S. 1976–1979: Technical Ecstasy, Never Say Die!, and Osbourne's departure Black Sabbath began work for their next album at Criteria Studios in Miami, Florida, in June 1976. To expand their sound, the band added keyboard player Gerald Woodroffe, who also had appeared to a lesser extent on Sabotage. During the recording of Technical Ecstasy, Osbourne admits that he began losing interest in Black Sabbath and began to consider the possibility of working with other musicians. 
Recording of Technical Ecstasy was difficult; by the time the album was completed, Osbourne was admitted to Stafford County Asylum in Britain. It was released on 25 September 1976 to mixed reviews, and – for the first time – later music critics gave the album less favourable retrospective reviews; two decades after its release, AllMusic gave the album two stars, and noted that the band was "unravelling at an alarming rate". The album featured less of the doomy, ominous sound of previous efforts, and incorporated more synthesisers and uptempo rock songs. Technical Ecstasy failed to reach the top 50 in the U.S. and was the band's second consecutive release not to achieve Platinum status, although it was later certified Gold in 1997. The album included "Dirty Women", which remains a live staple, as well as Ward's first lead vocal on the song "It's Alright". Touring in support of Technical Ecstasy began in November 1976, with openers Boston and Ted Nugent in the U.S., and completed in Europe with AC/DC in April 1977. In late 1977, while in rehearsal for their next album and just days before the band was set to enter the studio, Osbourne abruptly quit the band. Iommi called vocalist Dave Walker, a longtime friend of the band who had previously been a member of Fleetwood Mac and Savoy Brown, and informed him that Osbourne had left the band. Walker, who was at that time fronting a band called Mistress, flew to Birmingham from California in late 1977 to write material and rehearse with Black Sabbath. On 8 January 1978, Black Sabbath made their only live performance with Walker on vocals, playing an early version of the song "Junior's Eyes" on the BBC Television programme "Look! Hear!" Walker later recalled that, while in Birmingham, he had bumped into Osbourne in a pub and came to the conclusion that Osbourne was not fully committed to leaving Black Sabbath. "The last Sabbath albums were just very depressing for me", Osbourne said. "I was doing it for the sake of what we could get out of the record company, just to get fat on beer and put a record out." Walker has said that he wrote a lot of lyrics during his brief time in the band, but none of them were ever used. If any recordings of this version of the band other than the "Look! Hear!" footage still exist, Walker says that he is not aware of them. Osbourne initially set out to form a solo project featuring former Dirty Tricks members John Frazer-Binnie, Terry Horbury and Andy Bierne. As the new band were in rehearsals in January 1978, Osbourne had a change of heart and rejoined Black Sabbath. "Three days before we were due to go into the studio, Ozzy wanted to come back to the band", Iommi explained. "He wouldn't sing any of the stuff we'd written with the other guy (Walker), so it made it very difficult. We went into the studio with basically no songs. We'd write in the morning so we could rehearse and record at night. It was so difficult, like a conveyor belt, because you couldn't get time to reflect on stuff. 'Is this right? Is this working properly?' It was very difficult for me to come up with the ideas and putting them together that quick". The band spent five months at Sounds Interchange Studios in Toronto, Ontario, Canada, writing and recording what would become Never Say Die!. "It took quite a long time", Iommi said. "We were getting really drugged out, doing a lot of dope. We'd go down to the sessions, and have to pack up because we were too stoned, we'd have to stop. 
Nobody could get anything right, we were all over the place, everybody's playing a different thing. We'd go back and sleep it off, and try again the next day". The album was released in September 1978, reaching number 12 in the United Kingdom and number 69 in the U.S. Press response was unfavourable and did not improve over time, with Eduardo Rivadavia of AllMusic stating two decades after its release that the album's "unfocused songs perfectly reflected the band's tense personnel problems and drug abuse". The album featured the singles "Never Say Die" and "Hard Road", both of which cracked the top 40 in the United Kingdom. The band also made their second appearance on the BBC's Top of the Pops, performing "Never Say Die". It took nearly 20 years for the album to be certified Gold in the U.S. Touring in support of Never Say Die! began in May 1978 with openers Van Halen. Reviewers called Black Sabbath's performance "tired and uninspired", a stark contrast to the "youthful" performance of Van Halen, who were touring the world for the first time. The band filmed a performance at the Hammersmith Odeon in June 1978, which was later released on DVD as Never Say Die. The final show of the tour – and Osbourne's last appearance with the band until later reunions – was in Albuquerque, New Mexico, on 11 December. Following the tour, Black Sabbath returned to Los Angeles and again rented a house in Bel Air, where they spent nearly a year working on new material for the next album. The entire band were abusing both alcohol and other drugs, but Iommi says Osbourne "was on a totally different level altogether". The band would come up with new song ideas, but Osbourne showed little interest and would refuse to sing them. Pressure from the record label and frustrations with Osbourne's lack of input coming to a head, Iommi made the decision to fire Osbourne in 1979. Iommi believed the only options available were to fire Osbourne or break the band up completely. "At that time, Ozzy had come to an end", Iommi said. "We were all doing a lot of drugs, a lot of coke, a lot of everything, and Ozzy was getting drunk so much at the time. We were supposed to be rehearsing and nothing was happening. It was like, 'Rehearse today? No, we'll do it tomorrow.' It really got so bad that we didn't do anything. It just fizzled out". Ward, who was close with Osbourne, was chosen by Tony to break the news to the singer on 27 April 1979. "I hope I was professional, I might not have been, actually. When I'm drunk I am horrible, I am horrid", Ward said. "Alcohol was definitely one of the most damaging things to Black Sabbath. We were destined to destroy each other. The band were toxic, very toxic". 1979–1982: Dio joins, Heaven and Hell and Mob Rules Sharon Arden (later Sharon Osbourne), daughter of Black Sabbath manager Don Arden, suggested former Rainbow vocalist Ronnie James Dio to replace Ozzy Osbourne in 1979. Don Arden was at this point still trying to convince Osbourne to rejoin the band, as he viewed the original line-up as the most profitable. Dio officially joined in June, and the band began writing their next album. With a notably different vocal style from Osbourne's, Dio's addition to the band marked a change in Black Sabbath's sound. "They were totally different altogether", Iommi explains. "Not only voice-wise, but attitude-wise. Ozzy was a great showman, but when Dio came in, it was a different attitude, a different voice and a different musical approach, as far as vocals. 
Dio would sing across the riff, whereas Ozzy would follow the riff, like in "Iron Man". Ronnie came in and gave us another angle on writing." Geezer Butler temporarily left the band in September 1979 for personal reasons. According to Dio, the band initially hired Craig Gruber, with whom Dio had previously played while in Elf and Rainbow, on bass to assist with writing the new album. Gruber was soon replaced by Geoff Nicholls of Quartz. The new line-up returned to Criteria Studios in November to begin recording work, with Butler returning to the band in January 1980 and Nicholls moving to keyboards. Produced by Martin Birch, Heaven and Hell was released on 25 April 1980, to critical acclaim. Over a decade after its release, AllMusic said the album was "one of Sabbath's finest records, the band sounds reborn and re-energised throughout". Heaven and Hell peaked at number nine in the United Kingdom and number 28 in the U.S., the band's highest-charting album since Sabotage. The album eventually sold a million copies in the U.S., and the band embarked on an extensive world tour, making their first live appearance with Dio in Germany on 17 April 1980. Black Sabbath toured the U.S. throughout 1980 with Blue Öyster Cult on the "Black and Blue" tour, with a show at Nassau Coliseum in Uniondale, New York, filmed and released theatrically in 1981 as Black and Blue. On 26 July 1980, the band played to 75,000 fans at a sold-out Los Angeles Memorial Coliseum with Journey, Cheap Trick and Molly Hatchet. The next day, the band appeared at the 1980 Day on the Green at Oakland Coliseum. While on tour, Black Sabbath's former label in England issued a live album culled from a seven-year-old performance, titled Live at Last without any input from the band. The album reached number five on the UK chart and saw the re-release of "Paranoid" as a single, which reached the top 20. On 18 August 1980, after a show in Minneapolis, Ward quit the band. "It was intolerable for me to get on the stage without Ozzy. And I drank 24 hours a day, my alcoholism accelerated". Geezer Butler stated that after Ward's final show, the drummer came in drunk, stating that "he might as well be a Martian". Ward then got angry, packed his things and got on a bus to leave. Following Ward's sudden departure, the group hired drummer Vinny Appice. Further trouble for the band came during their 9 October 1980 concert at the Milwaukee Arena, which degenerated into a riot that caused $10,000 in damages to the arena and resulted in 160 arrests. According to the Associated Press: "The crowd of mostly adolescent males first became rowdy in a performance by the Blue Oyster Cult" and then grew restless while waiting an hour for Black Sabbath to begin playing. A member of the audience threw a beer bottle that struck bassist Butler and effectively ended the show. "The band then abruptly halted its performance and began leaving" as the crowd rioted. The band completed the Heaven and Hell world tour in February 1981 and returned to the studio to begin work on their next album. Black Sabbath's second studio album that was produced by Martin Birch and featured Ronnie James Dio as vocalist, Mob Rules, was released in October 1981 and was well received by fans, but less so by critics. Rolling Stone reviewer J. D. Considine gave the album one star, claiming "Mob Rules finds the band as dull-witted and flatulent as ever". Like most of the band's earlier work, time helped to improve the opinions of the music press. 
A decade after its release, AllMusic's Eduardo Rivadavia called Mob Rules "a magnificent record". The album was certified Gold and reached the top 20 on the UK chart. The album's title track, "The Mob Rules", which was recorded at John Lennon's old house in England, was also featured in the 1981 animated film Heavy Metal, although the film version is an alternate take and differs from the album version. Unhappy with the quality of 1980's Live at Last, the band recorded another live album – titled Live Evil – during the Mob Rules world tour, across the United States in Dallas, San Antonio and Seattle, in 1982. During the mixing process for the album, Iommi and Butler had a falling-out with Dio. Misinformed by their then-current mixing engineer, Iommi and Butler accused Dio of sneaking into the studio at night to raise the volume of his vocals. In addition, Dio was not satisfied with the pictures of him in the artwork. Butler also accused Dio and Appice of working on a solo album during the album's mixing without telling the other members of Black Sabbath. "Ronnie wanted more say in things", Iommi said. "And Geezer would get upset with him and that is where the rot set in. Live Evil is when it all fell apart. Ronnie wanted to do more of his own thing, and the engineer we were using at the time in the studio didn't know what to do, because Ronnie was telling him one thing and we were telling him another. At the end of the day, we just said, 'That's it, the band is over'". "When it comes time for the vocal, nobody tells me what to do. Nobody! Because they're not as good as me, so I do what I want to do", Dio later said. "I refuse to listen to Live Evil, because there are too many problems. If you look at the credits, the vocals and drums are listed off to the side. Open up the album and see how many pictures there are of Tony, and how many there are of me and Vinny". Ronnie James Dio left Black Sabbath in November 1982 to start his own band and took drummer Vinny Appice with him. Live Evil was released in January 1983, but was overshadowed by Ozzy Osbourne's Platinum-selling album Speak of the Devil. 1982–1984: Gillan as singer and Born Again The remaining original members, Iommi and Butler, began auditioning singers for the band's next release. Deep Purple and Whitesnake's David Coverdale, Samson's Nicky Moore and Lone Star's John Sloman were all considered and Iommi states in his autobiography that Michael Bolton auditioned. The band settled on former Deep Purple vocalist Ian Gillan to replace Dio in December 1982. The project was initially not to be called Black Sabbath, but pressure from the record label forced the group to retain the name. The band entered The Manor Studios in Shipton-on-Cherwell, Oxfordshire, in June 1983 with a returned and newly sober Bill Ward on drums. "That was the very first album that I ever did clean and sober," Ward recalled. "I only got drunk after I finished all my work on the album – which wasn't a very good idea... Sixty to seventy per cent of my energy was taken up on learning how to get through the day without taking a drink and learning how to do things without drinking, and thirty per cent of me was involved in the album." Born Again (7 August 1983) was panned on release by critics. Despite this negative reception, it reached number four in the UK, and number 39 in the U.S. 
Even three decades after its release, AllMusic's Eduardo Rivadavia called the album "dreadful", noting that "Gillan's bluesy style and humorous lyrics were completely incompatible with the lords of doom and gloom". Unable to tour because of the pressures of the road, Ward quit the band. "I fell apart with the idea of touring," he later explained. "I got so much fear behind touring, I didn't talk about the fear, I drank behind the fear instead and that was a big mistake." He was replaced by former Electric Light Orchestra drummer Bev Bevan for the Born Again '83–'84 world tour (often unofficially referred to as the 'Feighn Death Sabbath '83–'84' World Tour), which began in Europe with Diamond Head, and later in the U.S. with Quiet Riot and Night Ranger. The band headlined the 1983 Reading Festival in England, adding Deep Purple's "Smoke on the Water" to their encore. The tour in support of Born Again included a giant set of the Stonehenge monument. In a move later parodied in the mockumentary This Is Spinal Tap, the band made a mistake in ordering the set piece, as Butler later recounted. 1984–1987: Hiatus, Hughes as singer, Seventh Star, and Gillen as singer Following the completion of the Born Again tour in March 1984, vocalist Ian Gillan left Black Sabbath to re-join Deep Purple, which was reforming after a long hiatus. Bevan left at the same time, and Gillan remarked that he and Bevan were made to feel like "hired help" by Iommi. The band then recruited an unknown Los Angeles vocalist named David Donato and Ward once again rejoined the band. The new line-up wrote and rehearsed throughout 1984, and eventually recorded a demo with producer Bob Ezrin in October. Unhappy with the results, the band parted ways with Donato shortly after. Disillusioned with the band's revolving line-up, Ward left shortly after, stating "This isn't Black Sabbath". Butler would quit Sabbath next in November 1984 to form a solo band. "When Ian Gillan took over that was the end of it for me," he said. "I thought it was just a joke and I just totally left. When we got together with Gillan it was not supposed to be a Black Sabbath album. After we had done the album we gave it to Warner Bros. and they said they were going to put it out as a Black Sabbath album and we didn't have a leg to stand on. I got really disillusioned with it and Gillan was really pissed off about it. That lasted one album and one tour and then that was it." One vocalist whose status is disputed, both inside and outside Sabbath, is Christian evangelist and former Joshua frontman Jeff Fenholt. Fenholt insists he was a singer in Sabbath between January and May 1985. Iommi has never confirmed this. Fenholt gives a detailed account in Garry Sharpe-Young's book Sabbath Bloody Sabbath: The Battle for Black Sabbath. Following both Ward's and Butler's exits, sole remaining original member Iommi put Sabbath on hiatus, and began work on a solo album with long-time Sabbath keyboardist Geoff Nicholls. While working on new material, the original Sabbath line-up agreed to a spot at Bob Geldof's Live Aid, performing at the Philadelphia show on 13 July 1985. This event – which also featured reunions of The Who and Led Zeppelin – marked the first time the original line-up had appeared on stage since 1978. "We were all drunk when we did Live Aid," recalled Geezer Butler, "but we'd all got drunk separately." 
Returning to his solo work, Iommi enlisted bassist Dave Spitz (ex-Great White) and drummer Eric Singer, and initially intended to use multiple singers, including Rob Halford of Judas Priest, former Deep Purple and Trapeze vocalist Glenn Hughes, and former Sabbath vocalist Ronnie James Dio. This plan did not work out as he had hoped. "We were going to use different vocalists on the album, guest vocalists, but it was so difficult getting it together and getting releases from their record companies. Glenn Hughes came along to sing on one track and we decided to use him on the whole album." The band spent the remainder of the year in the studio, recording what would become Seventh Star (1986). Warner Bros. refused to release the album as a Tony Iommi solo release, instead insisting on using the name Black Sabbath. Pressured by the band's manager, Don Arden, the two compromised and released the album as "Black Sabbath featuring Tony Iommi" in January 1986. "It opened up a whole can of worms," Iommi explained. "If we could have done it as a solo album, it would have been accepted a lot more." Seventh Star sounded little like a Sabbath album, incorporating instead elements popularised by the 1980s Sunset Strip hard rock scene. It was panned by the critics of the era, although later reviewers such as AllMusic were more sympathetic, calling the album "often misunderstood and underrated". The new line-up rehearsed for six weeks preparing for a full world tour, although the band were eventually forced to use the Sabbath name. "I was into the 'Tony Iommi project', but I wasn't into the Black Sabbath moniker," Hughes said. "The idea of being in Black Sabbath didn't appeal to me whatsoever. Glenn Hughes singing in Black Sabbath is like James Brown singing in Metallica. It wasn't gonna work." Just four days before the start of the tour, Hughes got into a bar fight with the band's production manager, John Downing, which splintered the singer's orbital bone. The injury interfered with Hughes' ability to sing, and the band brought in vocalist Ray Gillen to continue the tour with W.A.S.P. and Anthrax, although nearly half of the U.S. dates would be cancelled because of poor ticket sales. Black Sabbath began work on new material in October 1986 at Air Studios in Montserrat with producer Jeff Glixman. The recording was fraught with problems from the beginning, as Glixman left after the initial sessions to be replaced by producer Vic Coppersmith-Heaven. Bassist Dave Spitz quit over "personal issues", and former Rainbow and Ozzy Osbourne bassist Bob Daisley was brought in. Daisley re-recorded all of the bass tracks, and wrote the album's lyrics, but before the album was complete, he left to join Gary Moore's backing band, taking drummer Eric Singer with him. After problems with second producer Coppersmith-Heaven, the band returned to Morgan Studios in England in January 1987 to work with new producer Chris Tsangarides. While working in the United Kingdom, new vocalist Ray Gillen abruptly left Black Sabbath to form Blue Murder with guitarist John Sykes (ex-Tygers of Pan Tang, Thin Lizzy, Whitesnake). 1987–1990: Martin joins, The Eternal Idol, Headless Cross, and Tyr The band enlisted heavy metal vocalist Tony Martin to re-record Gillen's tracks, and former Electric Light Orchestra drummer Bev Bevan to complete a few percussion overdubs. Before the release of the new album, Black Sabbath accepted an offer to play six shows at Sun City, South Africa, during the apartheid era. 
The band drew criticism from activists and artists involved with Artists United Against Apartheid, who had been boycotting South Africa since 1985. Drummer Bev Bevan refused to play the shows, and was replaced by Terry Chimes, formerly of the Clash, while Dave Spitz returned on bass. After nearly a year in production, The Eternal Idol was released on 8 December 1987 and ignored by contemporary reviewers. On-line internet era reviews were mixed. AllMusic said that "Martin's powerful voice added new fire" to the band, and the album contained "some of Iommi's heaviest riffs in years." Blender gave the album two stars, claiming the album was "Black Sabbath in name only". The album would stall at No. 66 in the United Kingdom, while peaking at 168 in the U.S. The band toured in support of Eternal Idol in Germany, Italy and for the first time, Greece. In part due to a backlash from promoters over the South Africa incident, other European shows were cancelled. Bassist Dave Spitz left the band again shortly before the tour, and was replaced by Jo Burt, formerly of Virginia Wolf. Following the poor commercial performance of The Eternal Idol, Black Sabbath were dropped by both Vertigo Records and Warner Bros. Records, and signed with I.R.S. Records. The band took time off in 1988, returning in August to begin work on their next album. As a result of the recording troubles with Eternal Idol, Tony Iommi opted to produce the band's next album himself. "It was a completely new start", Iommi said. "I had to rethink the whole thing, and decided that we needed to build up some credibility again". Iommi enlisted former Rainbow drummer Cozy Powell, long-time keyboardist Nicholls and session bassist Laurence Cottle, and rented a "very cheap studio in England". Black Sabbath released Headless Cross in April 1989, and it was also ignored by contemporary reviewers, although AllMusic contributor Eduardo Rivadavia gave the album four stars and called it "the finest non-Ozzy or Dio Black Sabbath album". Anchored by the number 62 charting single "Headless Cross", the album reached number 31 on the UK chart, and number 115 in the U.S. Queen guitarist Brian May, a good friend of Iommi's, played a guest solo on the song "When Death Calls". Following the album's release the band added touring bassist Neil Murray, formerly of Colosseum II, National Health, Whitesnake, Gary Moore's backing band, and Vow Wow. The unsuccessful Headless Cross U.S. tour began in May 1989 with openers Kingdom Come and Silent Rage, but because of poor ticket sales, the tour was cancelled after just eight shows. The European leg of the tour began in September, where the band were enjoying chart success. After a string of Japanese shows the band embarked on a 23 date Russian tour with Girlschool. Black Sabbath was one of the first bands to tour Russia, after Mikhail Gorbachev opened the country to western acts for the first time in 1989. The band returned to the studio in February 1990 to record Tyr, the follow-up to Headless Cross. While not technically a concept album, some of the album's lyrical themes are loosely based on Norse mythology. Tyr was released on 6 August 1990, reaching number 24 on the UK albums chart, but was the first Black Sabbath release not to break the Billboard 200 in the U.S. 
The album would receive mixed internet-era reviews, with AllMusic noting that the band "mix myth with metal in a crushing display of musical synthesis", while Blender gave the album just one star, claiming that "Iommi continues to besmirch the Sabbath name with this unremarkable collection". The band toured in support of Tyr with Circus of Power in Europe, but the final seven United Kingdom dates were cancelled because of poor ticket sales. For the first time in their career, the band's touring cycle did not include U.S. dates. 1990–1992: Dio rejoins and Dehumanizer While on his Lock Up the Wolves U.S. tour in August 1990, former Sabbath vocalist Ronnie James Dio was joined onstage at the Roy Wilkins Auditorium by Geezer Butler to perform "Neon Knights". Following the show, the two expressed interest in rejoining Sabbath. Butler convinced Iommi, who in turn broke up the current line-up, dismissing vocalist Tony Martin and bassist Neil Murray. "I do regret that in a lot of ways," Iommi said. "We were at a good point then. We decided to [reunite with Dio] and I don't even know why, really. There's the financial aspect, but that wasn't it. I seemed to think maybe we could recapture something we had." Dio and Butler joined Iommi and Cozy Powell in autumn 1990 to begin the next Sabbath release. While rehearsing in November, Powell suffered a broken hip when his horse died and fell on the drummer's legs. Unable to complete the album, Powell was replaced by former drummer Vinny Appice, reuniting the Mob Rules line-up, and the band entered the studio with producer Reinhold Mack. The year-long recording was plagued with problems, primarily stemming from writing tension between Iommi and Dio. Songs were rewritten multiple times. "It was just hard work," Iommi said. "We took too long on it, that album cost us a million dollars, which is bloody ridiculous." Dio recalled the album as difficult, but worth the effort: "It was something we had to really wring out of ourselves, but I think that's why it works. Sometimes you need that kind of tension, or else you end up making the Christmas album". The resulting Dehumanizer was released on 22 June 1992. In the U.S., the album was released on 30 June 1992 by Reprise Records, as Dio and his namesake band were still under contract to the label at the time. While the album received mixed reviews, it was the band's biggest commercial success in a decade. Anchored by the top 40 rock radio single "TV Crimes", the album peaked at number 44 on the Billboard 200. The album also featured "Time Machine", a version of which had been recorded for the 1992 film Wayne's World. Additionally, the perception among fans of a return of some semblance of the "real" Sabbath provided the band with much-needed momentum. Sabbath began touring in support of Dehumanizer in July 1992 with Testament, Danzig, Prong, and Exodus. While on tour, former vocalist Ozzy Osbourne announced his first retirement, and invited Sabbath to open for his solo band at the final two shows of his No More Tours tour in Costa Mesa, California. The band agreed, aside from Dio, who told Iommi, "I'm not doing that. I'm not supporting a clown." Dio spoke of the situation years later. He quit Sabbath following a show in Oakland, California, on 13 November 1992, one night before the band were set to appear at Osbourne's retirement show. Judas Priest vocalist Rob Halford stepped in at the last minute, performing two nights with the band. 
Iommi and Butler joined Osbourne and former drummer Ward on stage for the first time since 1985's Live Aid concert, performing a brief set of Sabbath songs. This set the stage for a longer-term reunion of the original line-up, though that plan proved short-lived. "Ozzy, Geezer, Tony and Bill announced the reunion of Black Sabbath – again," remarked Dio. "And I thought that it was a great idea. But I guess Ozzy didn't think it was such a great idea… I'm never surprised when it comes to whatever happens with them. Never at all. They are very predictable. They don't talk." 1992–1997: Martin rejoins, Cross Purposes, and Forbidden Drummer Vinny Appice left the band following the reunion show to rejoin Ronnie James Dio's solo band, later appearing on Dio's Strange Highways and Angry Machines. Iommi and Butler enlisted former Rainbow drummer Bobby Rondinelli, and reinstated former vocalist Tony Martin. The band returned to the studio to work on new material, although, as Geezer Butler has explained, the project was not originally intended to be released under the Black Sabbath name. Under pressure from their record label, the band released their seventeenth studio album, Cross Purposes, on 8 February 1994, under the Black Sabbath name. The album received mixed reviews, with Blender giving the album two stars, calling Soundgarden's 1994 album Superunknown "a far better Sabbath album than this by-the-numbers potboiler". AllMusic's Bradley Torreano called Cross Purposes "the first album since Born Again that actually sounds like a real Sabbath record". The album just missed the Top 40 in the UK, reaching number 41, and also reached 122 on the Billboard 200 in the U.S. Cross Purposes contained the song "Evil Eye", which was co-written by Van Halen guitarist Eddie Van Halen, although uncredited because of record label restrictions. Touring in support of Cross Purposes began in February with Morbid Angel and Motörhead in the U.S. The band filmed a live performance at the Hammersmith Apollo on 13 April 1994, which was released on VHS accompanied by a CD, titled Cross Purposes Live. After the European tour with Cathedral and Godspeed in June 1994, drummer Bobby Rondinelli quit the band and was replaced by original Black Sabbath drummer Ward for five shows in South America. Following the touring cycle for Cross Purposes, bassist Geezer Butler quit the band for the second time. "I finally got totally disillusioned with the last Sabbath album, and I much preferred the stuff I was writing to the stuff Sabbath were doing", he said. Butler formed a solo project called GZR, and released Plastic Planet in 1995. The album contained the song "Giving Up the Ghost", which was critical of Tony Iommi for carrying on with the Black Sabbath name, with the lyrics: "You plagiarised and parodied / the magic of our meaning / a legend in your own mind / left all your friends behind / you can't admit that you're wrong / the spirit is dead and gone". "I heard it's something about me..." said Iommi. "I had the album given to me a while back. I played it once, then somebody else had it, so I haven't really paid any attention to the lyrics... It's nice to see him doing his own thing – getting things off his chest. I don't want to get into a rift with Geezer. He's still a friend." Following Butler's departure, newly returned drummer Ward once again left the band. Iommi reinstated former members Neil Murray on bass and Cozy Powell on drums, effectively reuniting the 1990 Tyr line-up. 
The band enlisted Body Count guitarist Ernie C to produce the new album, which was recorded in London in autumn of 1994. The album featured a guest vocal on "Illusion of Power" by Body Count vocalist Ice-T. The resulting Forbidden was released on 8 June 1995, but failed to chart in the U.S. The album was widely panned by critics; AllMusic's Bradley Torreano said "with boring songs, awful production, and uninspired performances, this is easily avoidable for all but the most enthusiastic fan"; while Blender magazine called Forbidden "an embarrassment... the band's worst album". Black Sabbath embarked on a world tour in July 1995 with openers Motörhead and Tiamat, but two months into the tour, drummer Cozy Powell left the band, citing health issues, and was replaced by former drummer Bobby Rondinelli. "The members I had in the last lineup – Bobby Rondinelli, Neil Murray – they're great, great characters..." Iommi told Sabbath fanzine Southern Cross. "That, for me, was an ideal lineup. I wasn't sure vocally what we should do, but Neil Murray and Bobby Rondinelli I really got on well with." After completing Asian dates in December 1995, Tony Iommi put the band on hiatus, and began work on a solo album with former Black Sabbath vocalist Glenn Hughes, and former Judas Priest drummer Dave Holland. The album was not officially released following its completion, although a widely traded bootleg called Eighth Star surfaced soon after. The album was officially released in 2004 as The 1996 DEP Sessions, with Holland's drums re-recorded by session drummer Jimmy Copley. In 1997, Tony Iommi disbanded the current line-up to officially reunite with Ozzy Osbourne and the original Black Sabbath line-up. Vocalist Tony Martin claimed that an original line-up reunion had been in the works since the band's brief reunion at Ozzy Osbourne's 1992 Costa Mesa show, and that the band released subsequent albums to fulfill their record contract with I.R.S. Records. Martin later recalled Forbidden (1995) as a "filler album that got the band out of the label deal, rid of the singer, and into the reunion. However I wasn't privy to that information at the time". I.R.S. Records released a compilation album in 1996 to fulfill the band's contract, titled The Sabbath Stones, which featured songs from Born Again (1983) to Forbidden (1995). 1997–2006: Osbourne rejoins and Reunion In the summer of 1997, Iommi, Butler and Osbourne reunited to coheadline the Ozzfest tour alongside Osbourne's solo band. The line-up featured Osbourne's drummer Mike Bordin filling in for Ward. "It started off with me going off to join Ozzy for a couple of numbers," explained Iommi, "and then it got into Sabbath doing a short set, involving Geezer. And then it grew as it went on… We were concerned in case Bill couldn't make it – couldn't do it – because it was a lot of dates, and important dates… The only rehearsal that we had to do was for the drummer. But I think if Bill had come in, it would have took a lot more time. We would have had to focus a lot more on him." In December 1997, the group was joined by Ward, marking the first reunion of the original quartet since Osbourne's 1992 "retirement show". This line-up recorded two shows at the Birmingham NEC, released as the double album Reunion on 20 October 1998. The album reached number eleven on the Billboard 200, went platinum in the U.S. and spawned the single "Iron Man", which won Sabbath their first Grammy Award in 2000 for Best Metal Performance, 30 years after the song was originally released. 
Reunion featured two new studio tracks, "Psycho Man" and "Selling My Soul", both of which cracked the top 20 of the Billboard Mainstream Rock Tracks chart. Shortly before a European tour in the summer of 1998, Ward had a heart attack and was temporarily replaced by former drummer Vinny Appice. Ward returned for a U.S. tour with openers Pantera, which began in January 1999 and continued through the summer, headlining the annual Ozzfest tour. Following these appearances, the band was put on hiatus while members worked on solo material. Iommi released his first official solo album, Iommi, in 2000, while Osbourne continued work on Down to Earth (2001). Sabbath returned to the studio to work on new material with all four original members and producer Rick Rubin in the spring of 2001, but the sessions were halted when Osbourne was called away to finish tracks for his solo album in the summer. "It just came to an end…" Iommi said. "It's a shame because [the songs] were really good." Iommi also commented on the difficulty of getting all the members together to work. In March 2002, Osbourne's Emmy-winning reality show The Osbournes debuted on MTV, and quickly became a worldwide hit. The show introduced Osbourne to a broader audience and, to capitalise, the band's back-catalogue label, Sanctuary Records, released a double live album, Past Lives (2002), which featured concert material recorded in the 1970s, including the Live at Last (1980) album. The band remained on hiatus until the summer of 2004, when they returned to headline Ozzfest 2004 and 2005. In November 2005, Black Sabbath were inducted into the UK Music Hall of Fame, and in March 2006, after eleven years of eligibility (Osbourne famously refused the Hall's "meaningless" initial nomination in 1999), the band were inducted into the U.S. Rock and Roll Hall of Fame. At the awards ceremony, Metallica played two Sabbath songs, "Hole in the Sky" and "Iron Man", in tribute. 2006–2010: The Dio Years and Heaven & Hell While Ozzy Osbourne was working on new solo album material in 2006, Rhino Records released Black Sabbath: The Dio Years, a compilation of songs culled from the four Black Sabbath releases featuring Ronnie James Dio. For the release, Iommi, Butler, Dio, and Appice reunited to write and record three new songs as Black Sabbath. The Dio Years was released on 3 April 2007, reaching number 54 on the Billboard 200, while the single "The Devil Cried" reached number 37 on the Mainstream Rock Tracks chart. Pleased with the results, Iommi and Dio decided to reunite the Dio-era line-up for a world tour. While the line-up of Osbourne, Butler, Iommi, and Ward was still officially called Black Sabbath, the new line-up opted to call themselves Heaven & Hell, after the album of the same title, to avoid confusion. When asked about the name of the group, Iommi stated "it really is Black Sabbath, whatever we do... so everyone knows what they're getting [and] so people won't expect to hear 'Iron Man' and all those songs. We've done them for so many years, it's nice to do just all the stuff we did with Ronnie again." Ward was initially set to participate, but dropped out before the tour began due to musical differences with "a couple of the band members". He was replaced by former drummer Vinny Appice, effectively reuniting the line-up that had featured on the Mob Rules (1981) and Dehumanizer (1992) albums. Heaven & Hell toured the U.S. with openers Megadeth and Machine Head, and recorded a live album and DVD in New York on 30 March 2007, titled Live from Radio City Music Hall. 
In November 2007, Dio confirmed that the band had plans to record a new studio album, which was recorded the following year. In April 2008, the band announced the upcoming release of a new box set and their participation in the Metal Masters Tour, alongside Judas Priest, Motörhead and Testament. The box set, The Rules of Hell, featuring remastered versions of all the Dio-fronted Black Sabbath albums, was supported by the Metal Masters Tour. In 2009, the band announced the title of their debut studio album, The Devil You Know, released on 28 April. On 26 May 2009, Osbourne filed suit in a federal court in New York against Iommi alleging that he illegally claimed the band name. Iommi noted that he had been the only constant member throughout the band's 41-year career and that his bandmates had relinquished their rights to the name in the 1980s, and therefore claimed greater rights to it. Although in the suit Osbourne was seeking 50% ownership of the trademark, he said that he hoped the proceedings would lead to equal ownership among the four original members. In March 2010, Black Sabbath announced that along with Metallica they would be releasing a limited-edition single together to celebrate Record Store Day. It was released on 17 April 2010. Ronnie James Dio died on 16 May 2010 from stomach cancer. In June 2010, the legal battle between Ozzy Osbourne and Tony Iommi over the trademarking of the Black Sabbath name ended, but the terms of the settlement were not disclosed. 2010–2014: Second Osbourne reunion and 13 In a January 2010 interview while promoting his biography I Am Ozzy, Osbourne stated that although he would not rule it out, he was doubtful there would be a reunion with all four original members of the band. Osbourne stated: "I'm not gonna say I've written it out forever, but right now I don't think there's any chance. But who knows what the future holds for me? If it's my destiny, fine." In July, Butler said that there would be no reunion in 2011, as Osbourne was already committed to touring with his solo band. However, by that August they had already met up to rehearse together, and continued to do so through the autumn. On 11 November 2011, Iommi, Butler, Osbourne, and Ward announced that they were reuniting to record a new album with a full tour in support beginning in 2012. Guitarist Iommi was diagnosed with lymphoma on 9 January 2012, which forced the band to cancel all but two shows (Download Festival and Lollapalooza Festival) of a previously booked European tour. It was later announced that an intimate show would be played in their home city of Birmingham. It was the first concert since the reunion and the only indoor concert that year. In February 2012, drummer Ward announced that he would not participate further in the band's reunion until he was offered a "signable contract". On 21 May 2012, at the O2 Academy in Birmingham, Black Sabbath played their first concert since 2005, with Tommy Clufetos playing the drums. In June, they performed at the Download Festival at the Donington Park motorsports circuit in Leicestershire, England, followed by the last concert of the short tour at Lollapalooza Festival in Chicago. Later that month, the band started recording an album. On 13 January 2013, the band announced that the album would be released in June under the title 13. Brad Wilk of Rage Against the Machine was chosen as the drummer, and Rick Rubin was chosen as the producer. Mixing of the album commenced in February. 
On 12 April 2013, the band released the album's track listing. The standard version of the album features eight new tracks, and the deluxe version features three bonus tracks. The band's first single from 13, "God Is Dead?", was released on 19 April 2013. On 20 April 2013, Black Sabbath commenced their first Australia/New Zealand tour in 40 years followed by a North American Tour in Summer 2013. The second single of the album, "End of the Beginning", debuted on 15 May in a CSI: Crime Scene Investigation episode, where all three members appeared. In June 2013, 13 topped both the UK Albums Chart and the U.S. Billboard 200, becoming their first album to reach number one on the latter chart. In 2014, Black Sabbath received their first Grammy Award since 2000 with "God Is Dead?" winning Best Metal Performance. In July 2013, Black Sabbath embarked on a North American Tour (for the first time since July 2001), followed by a Latin American tour in October 2013. In November 2013, the band started their European tour which lasted until December 2013. In March and April 2014, they made 12 stops in North America (mostly in Canada) as the second leg of their North American Tour before embarking in June 2014 on the second leg of their European tour, which ended with a concert at London's Hyde Park. 2014–2017: Cancelled twentieth album, The End, and disbandment On 29 September 2014, Osbourne told Metal Hammer that Black Sabbath would begin work on their twentieth studio album in early 2015 with producer Rick Rubin, followed by a final tour in 2016. In an April 2015 interview, however, Osbourne said that these plans "could change", and added, "We all live in different countries and some of them want to work and some of them don't want to, I believe. But we are going to do another tour together." On 3 September 2015, it was announced that Black Sabbath would embark on their final tour, titled The End, from January 2016 to February 2017. Numerous dates and locations across the U.S., Canada, Europe, Australia and New Zealand were announced. The final shows of The End tour took place at the Genting Arena in their home city of Birmingham, England on 2 and 4 February 2017. On 26 October 2015, it was announced the band consisting of Osbourne, Iommi and Butler would be returning to the Download Festival on 11 June 2016. Despite earlier reports that they would enter the studio before their farewell tour, Osbourne stated that there would not be another Black Sabbath studio album. However, an 8-track CD entitled The End was sold at dates on the tour. Along with some live recordings, the CD includes four unused tracks from the 13 sessions. On 4 March 2016, Iommi discussed future re-releases of the Tony Martin-era catalogue: "We've held back on the reissues of those albums because of the current Sabbath thing with Ozzy Osbourne, but they will certainly be happening... I'd like to do a couple of new tracks for those releases with Tony Martin... I'll also be looking at working on Cross Purposes and Forbidden." Martin had suggested that this could coincide with the 30th anniversary of The Eternal Idol, in 2017. In an interview that August, Martin added "[Iommi] still has his cancer issues of course and that may well stop it all from happening but if he wants to do something I am ready." On 10 August 2016, Iommi revealed that his cancer was in remission. Asked in November 2016 about his plans after Black Sabbath's final tour, Iommi replied, "I'll be doing some writing. 
Maybe I'll be doing something with the guys, maybe in the studio, but no touring." The band played their final concert on 4 February 2017 in Birmingham. The final song was streamed live on the band's Facebook page and fireworks went off as the band took their final bow. The band's final tour was not an easy one, as longstanding tensions between Osbourne and Iommi returned to the surface. Iommi stated that he would not rule out the possibility of one-off shows, "I wouldn't write that off, if one day that came about. That's possible. Or even doing an album, 'cause then, again, you're in one place. But I don't know if that would happen." In an April 2017 interview, Butler revealed that Black Sabbath considered making a blues album as the follow-up to 13, but added that, "the tour got in the way." On 7 March 2017, Black Sabbath announced their disbandment through posts made on their official social media accounts. 2017–present: Aftermath In a June 2018 interview with ITV News, Osbourne expressed interest in reuniting with Black Sabbath for a performance at the 2022 Commonwealth Games which would be held in their home city Birmingham. Iommi said that performing at the event as Black Sabbath would be "a great thing to do to help represent Birmingham. I'm up for it. Let's see what happens." He also did not rule out the possibility for the band to reform only for a one-off performance rather than a full-length tour. Iommi was later announced to be part of the opening ceremony for the 2022 Commonwealth Games alongside Duran Duran. On 8 August 2022, Osbourne and Iommi made a surprise reunion to end the closing ceremony of the 2022 Commonwealth Games at the Alexander Stadium in Birmingham. They were joined by 2017 Black Sabbath touring musicians Tommy Clufetos and Adam Wakeman for a medley of "Iron Man" and "Paranoid". In September 2020, Osbourne stated in an interview that he was no longer interested in a reunion: "Not for me. It's done. The only thing I do regret is not doing the last farewell show in Birmingham with Bill Ward. I felt really bad about that. It would have been so nice. I don't know what the circumstances behind it were, but it would have been nice. I've talked to Tony a few times, but I don't have any of the slightest interest in doing another gig. Maybe Tony's getting bored now." Butler also ruled out the possibility of any future Black Sabbath performances in an interview with Eonmusic on 10 November 2020, stating that the band is over: "There will definitely be no more Sabbath. It's done." Iommi however, pondered the possibility of another reunion tour in an interview with The Mercury News, stating that he "would like to play with the guys again" and that he misses the audiences and stage. Bill Ward stated in an interview with Eddie Trunk that he no longer has the ability or chops to perform with Black Sabbath in concert, but expressed that he would love to make another album with Osbourne, Butler and Iommi. Despite ruling out the possibility of another Black Sabbath reunion, Osbourne revealed in an episode of Ozzy Speaks on Ozzy's Boneyard that he is working with Iommi, who appeared as one of the guests for his thirteenth solo album, Patient Number 9. 
In an October 2021 interview with the Metro, Ward revealed that he had kept "in contact" with his former bandmates and stated that he was "very open-minded" to the possibility of recording another Black Sabbath album: "I haven't spoken to the guys about it, but I have talked to a couple of people in management about the possibility of making a recording." On 30 September 2020, Black Sabbath announced a new Dr. Martens shoe collection. The partnership with the British footwear company celebrated the 50th anniversaries of the band's Black Sabbath and Paranoid albums, with the boots depicting artwork from the former. On 13 January 2021, the band announced that they would reissue both Heaven & Hell and Mob Rules as expanded deluxe editions on 5 March 2021, with unreleased material included. In September 2022, Osbourne reiterated that he was unwilling to continue Black Sabbath, stating that if another Black Sabbath album were released, he would not sing on it. However, he remained open to working with Iommi on more solo projects following the latter's involvement on Patient Number 9. Osbourne retired from touring in February 2023 after not sufficiently recovering from medical treatment, putting the possibility of another Black Sabbath reunion in concert in further doubt. Butler, who retired in June 2023, insisted that Black Sabbath had been "put to bed" until August 2023, when he stated that he was open to performing a one-off show but had "no desire to tour again" with Black Sabbath. The Birmingham Royal Ballet presented Black Sabbath: The Ballet, which premiered at the Birmingham Hippodrome in September 2023, before touring to the Theatre Royal, Plymouth and Sadler's Wells Theatre in October.

Musical style

Black Sabbath were a heavy metal band. The band have also been cited as a key influence on genres including stoner rock, grunge, doom metal, and sludge metal. Early on, Black Sabbath were influenced by Cream, The Beatles, Fleetwood Mac, Jimi Hendrix, John Mayall & the Bluesbreakers, Blue Cheer, Led Zeppelin, and Jethro Tull. Although Black Sabbath went through many line-ups and stylistic changes, their core sound focuses on ominous lyrics and doomy music, often making use of the musical tritone, also called the "devil's interval". While their Ozzy-era albums such as Sabbath Bloody Sabbath (1973) bore slight compositional similarities to the progressive rock genre then growing in popularity, Black Sabbath's dark sound stood in stark contrast to the popular music of the early 1970s and was dismissed by rock critics of the era. Much like many of their early heavy metal contemporaries, the band received virtually no airplay on rock radio. As the band's primary songwriter, Tony Iommi wrote the majority of Black Sabbath's music, while Osbourne would write vocal melodies and bassist Geezer Butler would write lyrics. The process was sometimes frustrating for Iommi, who often felt pressured to come up with new material: "If I didn't come up with anything, nobody would do anything." Osbourne also later commented on Iommi's influence on the band's sound. Beginning with their third album, Master of Reality (1971), Black Sabbath began to feature tuned-down guitars. In 1965, before forming Black Sabbath, guitarist Tony Iommi suffered an accident while working in a sheet metal factory, losing the tips of two fingers on his right hand. Iommi almost gave up music, but was urged by the factory manager to listen to Django Reinhardt, a jazz guitarist who lost the use of two fingers in a fire.
Inspired by Reinhardt, Iommi created two thimbles made of plastic and leather to cap off his missing fingertips. The guitarist began using lighter strings and detuning his guitar to better grip the strings with his prosthesis. Early in the band's history, Iommi experimented with different dropped tunings, including C♯ tuning, or three semitones down, before settling on E♭/D♯ tuning, a half-step down from standard tuning.

Legacy

Black Sabbath have sold over 70 million records worldwide, including a RIAA-certified 15 million in the U.S. They are one of the most influential heavy metal bands of all time. The band helped to create the genre with ground-breaking releases such as Paranoid (1970), an album that Rolling Stone magazine said "changed music forever" and called the band "the Beatles of heavy metal". Time magazine called Paranoid "the birthplace of heavy metal", placing it in their Top 100 Albums of All Time. MTV placed Black Sabbath at number one on their Top Ten Heavy Metal Bands and VH1 placed them at number two on their list of the 100 Greatest Artists of Hard Rock. VH1 ranked Black Sabbath's "Iron Man" the number one song on their 40 Greatest Metal Songs countdown. Rolling Stone magazine ranked the band number 85 in their list of the "100 Greatest Artists of All Time". According to Rolling Stone's Holly George-Warren, "Black Sabbath was the heavy metal king of the 1970s." Although initially "despised by rock critics and ignored by radio programmers", the group sold more than 8 million albums by the end of that decade. "The heavy metal band…" marvelled Ronnie James Dio. "A band that didn't apologise for coming to town; it just stepped on buildings when it came to town."

Influence and innovation

Black Sabbath have influenced many acts including Judas Priest, Iron Maiden, Diamond Head, Slayer, Metallica, Nirvana, Korn, Black Flag, Mayhem, Venom, Guns N' Roses, Soundgarden, Body Count, Alice in Chains, Anthrax, Disturbed, Death, Opeth, Pantera, Megadeth, the Smashing Pumpkins, Slipknot, Foo Fighters, Fear Factory, Candlemass, Godsmack, and Van Halen. Two gold-selling tribute albums have been released, Nativity in Black Volumes 1 & 2, including covers by Sepultura, White Zombie, Type O Negative, Faith No More, Machine Head, Primus, System of a Down, and Monster Magnet. Metallica's Lars Ulrich, who, along with bandmate James Hetfield, inducted Black Sabbath into the Rock and Roll Hall of Fame in 2006, said "Black Sabbath is and always will be synonymous with heavy metal", while Hetfield said "Sabbath got me started on all that evil-sounding shit, and it's stuck with me. Tony Iommi is the king of the heavy riff." Guns N' Roses guitarist Slash said of the Paranoid album: "There's just something about that whole record that, when you're a kid and you're turned onto it, it's like a whole different world. It just opens up your mind to another dimension... Paranoid is the whole Sabbath experience; very indicative of what Sabbath meant at the time. Tony's playing style—doesn't matter whether it's off Paranoid or if it's off Heaven and Hell—it's very distinctive." Anthrax guitarist Scott Ian said "I always get the question in every interview I do, 'What are your top five metal albums?' I make it easy for myself and always say the first five Sabbath albums." Lamb of God's Chris Adler said: "If anybody who plays heavy metal says that they weren't influenced by Black Sabbath's music, then I think that they're lying to you. 
I think all heavy metal music was, in some way, influenced by what Black Sabbath did." Judas Priest vocalist Rob Halford commented: "They were and still are a groundbreaking band...you can put on the first Black Sabbath album and it still sounds as fresh today as it did 30-odd years ago. And that's because great music has a timeless ability: To me, Sabbath are in the same league as the Beatles or Mozart. They're on the leading edge of something extraordinary." On Black Sabbath's standing, Rage Against the Machine guitarist Tom Morello states: "The heaviest, scariest, coolest riffs and the apocalyptic Ozzy wail are without peer. You can hear the despair and menace of the working-class Birmingham streets they came from in every kick-ass, evil groove. Their arrival ground hippy, flower-power psychedelia to a pulp and set the standard for all heavy bands to come." Phil Anselmo of Pantera and Down stated that "Only a fool would leave out what Black Sabbath brought to the heavy metal genre". According to Tracii Guns of L.A. Guns and former member of Guns N' Roses, the main riff of "Paradise City" by Guns N' Roses, from Appetite for Destruction (1987), was influenced by the song "Zero the Hero" from the Born Again album. King Diamond guitarist Andy LaRocque affirmed that the clean guitar part of "Sleepless Nights" from Conspiracy (1989) is inspired by Tony Iommi's playing on Never Say Die!. In addition to being pioneers of heavy metal, they also have been credited for laying the foundations for heavy metal subgenres stoner rock, sludge metal, thrash metal, black metal and doom metal as well as for alternative rock subgenre grunge. According to the critic Bob Gulla, the band's sound "shows up in virtually all of grunge's most popular bands, including Nirvana, Soundgarden, and Alice in Chains". Tony Iommi has been credited as the pioneer of lighter gauge guitar strings. The tips of his fingers were severed in a steel factory, and while using thimbles (artificial finger tips) he found that standard guitar strings were too difficult to bend and play. He found that there was only one size of strings available, so after years with Sabbath he had strings custom made. Culturally, Black Sabbath have exerted a huge influence in both television and literature and have in many cases become synonymous with heavy metal. In the film Almost Famous, Lester Bangs gives the protagonist an assignment to cover the band (plot point one) with the immortal line: 'Give me 500 words on Black Sabbath'. Contemporary music and arts publication Trebuchet Magazine has put this to practice by asking all new writers to write a short piece (500 words) on Black Sabbath as a means of proving their creativity and voice on a well documented subject. Band members Original line-up Tony Iommi – guitars Bill Ward – drums Geezer Butler – bass Ozzy Osbourne – vocals, harmonica Discography Studio albums Black Sabbath (1970) Paranoid (1970) Master of Reality (1971) Vol. 4 (1972) Sabbath Bloody Sabbath (1973) Sabotage (1975) Technical Ecstasy (1976) Never Say Die! (1978) Heaven and Hell (1980) Mob Rules (1981) Born Again (1983) Seventh Star (1986) The Eternal Idol (1987) Headless Cross (1989) Tyr (1990) Dehumanizer (1992) Cross Purposes (1994) Forbidden (1995) 13 (2013) Tours Polka Tulk Blues/Earth Tour 1968–1969 Black Sabbath Tour 1970 Paranoid Tour 1970–1971 Master of Reality Tour 1971–1972 Vol. 4 Tour 1972–1973 Sabbath Bloody Sabbath Tour 1973–1974 Sabotage Tour 1975–1976 Technical Ecstasy Tour 1976–1977 Never Say Die! 
Tour 1978 Heaven & Hell Tour 1980–1981 Mob Rules Tour 1981–1982 Born Again Tour 1983 Seventh Star Tour 1986 Eternal Idol Tour 1987 Headless Cross Tour 1989 Tyr Tour 1990 Dehumanizer Tour 1992 Cross Purposes Tour 1994 Forbidden Tour 1995 Ozzfest Tour 1997 European Tour 1998 Reunion Tour 1998–1999 Ozzfest Tour 1999 U.S. Tour 1999 European Tour 1999 Ozzfest Tour 2001 Ozzfest Tour 2004 European Tour 2005 Ozzfest Tour 2005 Black Sabbath Reunion Tour, 2012–2014 The End Tour 2016–2017 See also List of cover versions of Black Sabbath songs Heavy metal groups References Sources External links Black Sabbath biography by James Christopher Monger, discography and album reviews, credits & releases at AllMusic Black Sabbath discography, album releases & credits at Discogs.com
The Buffalo Bills are a professional American football team based in the Buffalo-Niagara Falls metropolitan area. The Bills compete in the National Football League (NFL) as a member club of the league's American Football Conference (AFC) East division. The team plays its home games at Highmark Stadium in Orchard Park, New York. Founded in 1960 as a charter member of the American Football League (AFL), they joined the NFL in 1970 following the AFL–NFL merger. The Bills' name is derived from an All-America Football Conference (AAFC) franchise from Buffalo that was in turn named after western frontiersman Buffalo Bill. Drawing much of its fanbase from Western New York and Southern Ontario, the Bills are the only NFL team that plays home games in the state of New York. The franchise is owned by Terry and Kim Pegula, who purchased the Bills after the death of original owner Ralph Wilson in 2014. The Bills won consecutive AFL Championships in 1964 and 1965, the only major professional sports championships from a team representing Buffalo. After joining the NFL, they struggled heavily during the 1970s before they became perennial postseason contenders during the late 1980s to the late 1990s. Their greatest success occurred between 1990 and 1993 when they appeared in a record four consecutive Super Bowls; an accomplishment often overshadowed by them losing each game. From the early 2000s to the mid-2010s, the Bills endured the longest playoff drought of 17 years in the four major North American professional sports, making them the last franchise in the four leagues to qualify for the postseason in the 21st century. They returned to consistent postseason contention by the late 2010s, although the Bills have not returned to the Super Bowl. Alongside the Minnesota Vikings, their four Super Bowl appearances are the most among NFL franchises that have not won the Super Bowl. Franchise history The Bills began competitive play in 1960 as a charter member of the American Football League led by head coach Buster Ramsey and joined the NFL as part of the AFL–NFL merger in 1970. The Bills won two consecutive American Football League titles in 1964 and 1965 with quarterback Jack Kemp and coach Lou Saban, but the club has yet to win a league championship since. Once the AFL–NFL merger took effect, the Bills became the second NFL team to represent the city; they followed the Buffalo All-Americans, a charter member of the league. Buffalo had been left out of the league since the All-Americans (by that point renamed the Bisons) folded in 1929; the Bills were no less than the third professional non-NFL team to compete in the city before the merger, following the Indians/Tigers of the early 1940s and an earlier team named the Bills, originally the Bisons, in the late 1940s in the All-America Football Conference (AAFC). Following the AFL–NFL merger, the Bills were generally mediocre in the 1970s, but featured All-Pro running back O. J. Simpson. After being pushed to the brink of failure in the mid-1980s, the collapse of the United States Football League and a series of highly drafted players such as Jim Kelly (who initially played for the USFL instead of the Bills), Thurman Thomas, Bruce Smith and Darryl Talley allowed the Bills to rebuild into a perennial contender in the late 1980s through the mid-1990s, a period in which the team won four consecutive AFC Championships; the team nevertheless lost all four subsequent Super Bowls, records in both categories that still stand. 
The rise of the division rival New England Patriots under Bill Belichick and Tom Brady, along with numerous failed attempts at rebuilding in the 2000s and 2010s, helped prevent the Bills from reaching the playoffs in seventeen consecutive seasons between 2000 and 2016, a 17-year drought that was the longest active playoff drought in all major professional sports at the time. On October 8, 2014, Buffalo Sabres owners Terry and Kim Pegula received unanimous approval to acquire the Bills during the NFL owners' meetings, becoming the second ownership group of the team after team founder Ralph Wilson. Under head coach Sean McDermott, the Bills broke the playoff drought, appearing in the playoffs for four of the next five seasons. The team earned its first division championship and playoff wins since 1995 during the 2020 season, aided by Brady's departure to Tampa Bay and out of the AFC East as well as the Bills' own development of a core of talent including Josh Allen, Stefon Diggs, and Tre'Davious White. Logos and uniforms For their first two seasons, the Bills wore uniforms based on those of the Detroit Lions at the time. Ralph Wilson had been a minority owner of the Lions before founding the Bills, and the Bills' predecessors in the AAFC had also worn blue and silver uniforms. The team's original colors were Honolulu blue, silver and white, and the helmets were silver with no striping. There was no logo on the helmet, which displayed the players' numbers on each side. In 1962, the standing red bison was designated as the logo and took its place on a white helmet. In 1962, the team's colors also changed to red, white, and blue. The team switched to blue jerseys with red and white shoulder stripes similar to those worn by the Buffalo Bisons AHL hockey team of the same era. The helmets were white with a red center stripe. The jerseys again saw a change in 1964 when the shoulder stripes were replaced by a distinctive stripe pattern on the sleeves consisting of four stripes, two thicker inner stripes and two thinner outer stripes all bordered by red piping. By 1965, red and blue center stripes were put on the helmets. The Bills introduced blue pants worn with the white jerseys in 1973, the last year of the standing buffalo helmet. The blue pants remained through 1985. The face mask on the helmet was blue from 1974 through 1986 before changing to white. The standing bison logo was replaced by a blue charging one with a red slanting stripe streaming from its horn. The newer emblem, which is still the primary one used by the franchise, was designed by aerospace designer Stevens Wright in 1974. In 1984, the helmet's shell color was changed from white to red, primarily to help Bills quarterback Joe Ferguson distinguish them more readily from three of their division rivals at that time, the Baltimore Colts, the Miami Dolphins, and the New England Patriots, who all also wore white helmets at that point. Ferguson said "Everyone we played had white helmets at that time. Our new head coach Kay Stephenson just wanted to get more of a contrast on the field that may help spot a receiver down the field." (The Patriots have worn silver helmets since 1993, the Colts have since been realigned to the AFC South, and in 2019 the New York Jets have since switched back to green-colored helmets, after playing 20 years with white ones.) In 2002, under the direction of general manager Tom Donahoe, the Bills' uniforms went through radical changes. 
A darker shade of blue was introduced as the main jersey color, and nickel gray was introduced as an accent color. Both the blue and white jerseys featured red side panels. The white jerseys included a dark blue shoulder yoke and royal blue numbers. The helmet remained primarily red with one navy blue, two nickel, two royal blue, two white stripes, and white face mask. A new logo, a stylized "B" consisting of two bullets and a more detailed buffalo head on top, was proposed and had been released (it can be seen on a few baseball caps that were released for sale), but fan backlash led to the team retaining the running bison logo. The helmet logo adopted in 1974—a charging royal blue bison, with a red streak, white horn and eyeball—remained unchanged. In 2005, the Bills revived the standing bison helmet and uniform of the mid-1960s as a throwback uniform. The Bills usually wore the all-blue combination at home and the all-white combination on the road when not wearing the throwback uniforms. They stopped wearing blue-on-white after 2006, while the white-on-blue was not worn after 2007. For the 2011 season, the Bills unveiled a new uniform design, an updated rendition of the 1975–83 design. This change includes a return to the white helmets with "charging buffalo" logo, and a return to royal blue instead of navy. The set initially featured striped socks, but by 2021, the Bills gradually reduced its usage and began wearing either all-white or all-blue hosiery without stripes in most games. Buffalo sporadically wore white at home in the 1980s, including all eight home games in 1984, but stopped doing so beginning in 1987. On November 6, 2011, against the New York Jets, the Bills wore white at home for the first time since 1986. Since 2011, the Bills have worn white for a home game either with their primary uniform or a throwback set. The Bills' uniform received minor alterations as part of the league's new uniform contract with Nike. The new Nike uniform was unveiled on April 3, 2012. On November 12, 2015, the Bills and the New York Jets became the first two teams to participate in the NFL's Color Rush uniform initiative, with Buffalo wearing an all-red combination for the first time in team history. Like the primary uniforms, the set initially had red socks with white and blue stripes, but in 2020, it was replaced with red socks without stripes. A notable use of the Bills' uniforms outside of football was in the 2018 World Junior Ice Hockey Championships, when the United States men's national junior ice hockey team wore Bills-inspired uniforms in their outdoor game against Team Canada on December 29, 2017. On April 1, 2021, the team announced they will wear white face masks during the upcoming season and beyond. Rivalries The Bills have rivalries with their three AFC East opponents, and also have had historical rivalries with other teams such as the Baltimore/Indianapolis Colts (a former divisional rival), Kansas City Chiefs, Houston Oilers/Tennessee Titans, Jacksonville Jaguars, and Dallas Cowboys. They also play an annual preseason game against the Detroit Lions. The Cleveland Browns once shared a rivalry with the Bills' predecessors in the All-America Football Conference. The current teams have a more friendly relationship and have played sporadically since the AFL–NFL merger. Divisional rivalries Miami Dolphins This is often considered Buffalo's most famous rivalry. 
Though the Bills and Dolphins both originated in the American Football League, the Dolphins did not start playing until 1966 as an expansion team while the Bills were one of the original eight teams. The rivalry first gained prominence when the Dolphins won every match-up against the Bills in the 1970s for an NFL-record 20 straight wins against a single opponent (the Bills defeated the Dolphins in their first matchup of the 1980s). Fortunes changed in the following decades with the rise of Jim Kelly as Buffalo's franchise quarterback, and though Kelly and Dolphins quarterback Dan Marino shared a competitive rivalry in the 1980s and 1990s, the Bills became dominant in the 1990s. Things have since cooled down after the retirements of Kelly and Marino and the rise of the New England Patriots, but Miami remains a fierce rival of the Bills, coming in second place in a recent poll of Buffalo's primary rival, and the two teams have typically been close to each other in win–loss records. Miami leads the overall series 62–56–1 as of 2022, but Buffalo has the advantage in the playoffs at 4–1, including a win in the 1992 AFC Championship Game. New England Patriots The rivalry with the New England Patriots began when both teams were original franchises in the American Football League (AFL) prior to the NFL–AFL merger, but did not gain notability until the emergence of New England's Tom Brady in 2001. The teams were very competitive prior to the 2000s. However, the arrival of Patriots quarterback Brady in the early 2000s led to New England dominating the AFC East, including the Bills, for two decades. As a result, the Patriots replaced the Dolphins as Buffalo's most hated rival. The Bills have taken a 6–1 edge since Brady's departure in 2020, which included consecutive AFC East titles from 2020 to 2022 and a series sweep of the Patriots in two of the three years. In 2021, the Bills dominated in a 47–17 victory against the Patriots in the rivalry's first playoff matchup in 59 years, which saw the Bills score a touchdown on every offensive drive throughout the entire game and as such is the only "perfect offensive game" in NFL history. Overall, the Patriots lead the series 77–49–1, but trail the Bills by a 46–45–1 margin without Brady on the field. The rivalry is also noted for several players being a member of both teams during their careers, including Drew Bledsoe, Doug Flutie, Lawyer Milloy, Brandon Spikes, Scott Chandler, Chris Hogan, Mike Gillislee, and Stephon Gilmore. New York Jets The Bills and Jets were both original AFL teams, and both represent the state of New York, though the Jets (since 1984) actually play their games in East Rutherford, New Jersey. While the rivalry represents the differences between New York City and Western New York, it has historically not been as intense as the Bills' rivalries with the Dolphins and Patriots, and the teams' fanbases either have grudging respect or low-key annoyance (stemming more from the broader upstate-downstate tensions than the teams or sport) for each other when the teams are not playing one another. Oftentimes the Bills-Jets rivalry has become characterized by ugly games and shared mediocrity, but it has had a handful of competitive moments. The series heated up recently when former Jets head coach Rex Ryan became the Bills' head coach for two seasons, and had become notable again as Bills quarterback Josh Allen and former Jets quarterback Sam Darnold, both drafted in the same year, maintained a friendly rivalry with one another. 
Buffalo leads the series 68–57 as of 2022, including a playoff win in 1981.

Other rivalries

Tennessee Titans

The Tennessee Titans (formerly the Houston Oilers) share an extended history with the Bills, both teams being original AFL clubs in 1960 and rivals in that league's East Division before the AFL–NFL merger. Matchups were intense in the 1990s with quarterback Warren Moon leading the Oilers against Jim Kelly's Bills. After both teams struggled from the late 2000s to the early 2010s, each returned to consistent playoff contention from 2017 onward, resulting in several high-profile games in recent years. Memorable playoff moments between the teams include The Comeback, in which the Frank Reich-led Bills overcame a 35–3 deficit to stun the Oilers 41–38 in 1992, and the Music City Miracle, in which the now-Titans scored on a near-last-minute kickoff return with a controversial lateral pass ruling to beat the Bills 22–16 in 1999. The Music City Miracle was notable for being Buffalo's last playoff appearance until 2017. The Titans currently lead the series 30–20.

Jacksonville Jaguars

A new rivalry emerged between the Bills and the Jacksonville Jaguars after former Bills head coach Doug Marrone, who had quit the team after the 2014 season, was hired as a coaching assistant for Jacksonville and eventually rose to become the Jaguars' head coach. The first meeting after Marrone joined Jacksonville was a London game in week 7 of the 2015 season, which the Jaguars won 34–31. The most important game of this series was an ugly, low-scoring Wild Card game in 2017 that the Jaguars won 10–3; the game was notable as the first Bills playoff appearance in 17 seasons. Prior to this, Jacksonville had handed Buffalo its first playoff loss in Bills Stadium in 1996. Following the 2017 wild card game, the Bills and Jaguars met two additional times. The first was a "rematch" in week 12 of the 2018 season, which the Bills won 24–21; during this game, trash talk from then-Jaguars players such as Jalen Ramsey resulted in a brawl between the teams. The second was in week 9 of the 2021 season. By then, the original "point" of the rivalry (Marrone's feud with the Bills organization and the personal drama between Bills and Jaguars players) no longer applied, as Marrone had been fired and replaced by Urban Meyer and all the players from the 2017 Jaguars team had since moved on to other teams or retired. Regardless, the game, in which the 15.5-point favorite Bills lost 9–6, was at the time the seventh-largest upset in NFL history. The current series record is tied at 9–9–0.

Kansas City Chiefs

The Bills and the Kansas City Chiefs were also original teams in the AFL and have had a long history against each other, despite never being in the same division. Buffalo currently leads the series 27–24–1, which has included five playoff meetings, three of which were AFL/AFC championship games. Kansas City won the 1966 AFL Championship game, which determined the AFL's representative against the Green Bay Packers in the first Super Bowl, as well as the 2020 AFC Championship game, which sent the Chiefs to their second straight Super Bowl appearance; Buffalo defeated Kansas City in the 1993 AFC Championship game to advance to its fourth straight Super Bowl appearance. Each time, the Super Bowl participant went on to lose the big game.
Despite a lull in the series in the 2000s and 2010s, the rivalry gained attention nonetheless as the Bills and Chiefs met in nine of ten years from 2008 to 2017. After a 2-year hiatus in the series, four high-profile matchups occurred between the Bills and Chiefs in 2020 and 2021, including the aforementioned 2020 championship game and the 2021 Divisional round game, which is now considered one of the greatest playoff games of all time but was also controversial due to the league's overtime rules. A rivalry between Josh Allen and Chiefs quarterback Patrick Mahomes has also developed, drawing comparisons to Jim Kelly's rivalry with Dan Marino as well as the rivalry between Tom Brady and Peyton Manning. Notable players Retired numbers The Buffalo Bills have retired three numbers in franchise history: No. 12 for Jim Kelly, No. 34 for Thurman Thomas and No. 78 for Bruce Smith. Despite the fact that the Bills have retired only three jersey numbers, the team has other numbers no longer issued to any player or in reduced circulation. Reduced circulation: 83 Andre Reed, WR, 1985–1999 (Lee Evans III wore No. 83 by special permission) Since the earliest days of the team, the number 31 was not supposed to be issued to any other player. The Bills had stationery and various other team merchandise showing a running player wearing that number, and it was not supposed to represent any specific person, but the 'spirit of the team.' In the first three decades of the team's existence, the number 31 was only seen once: in 1969, when reserve running back Preston Ridlehuber damaged his number 36 jersey during a game, equipment manager Tony Marchitte gave him the number 31 jersey to wear while repairing the number 36. The number 31 was not issued again until 1990 when first round draft choice James (J.D.) Williams wore it for his first two seasons; it has since been returned to general circulation, currently worn as of 2022 by Dean Marlowe. Number 32 had been withdrawn from circulation, but not retired, after O. J. Simpson. Former owner Ralph Wilson insisted on not reissuing the number, even after Simpson's highly publicized murder case and later robbery conviction. The number was placed back into circulation in 2019 with Senorise Perry wearing the number that year; practice squad cornerback Kyler McMichael was the last player to wear the number. Number 15 was historically only issued sparingly after the retirement of Jack Kemp, it was last worn by Jake Kumerow in 2021. Number 1 has also only rarely been used, for reasons never explained. While there is no proper explanation, Tommy Hughitt was a player-coach for the early Buffalo teams in the New York Pro Football League and NFL from 1918 to 1924 and was both a major on-field success and a fixture in Buffalo culture after his retirement as a politician and auto salesman. Hugitt was reported to wear number 1 during this time. Wide receiver Emmanuel Sanders is the most recent Bill to wear the number; prior to his arrival in 2021, it had been 19 years since it had been worn in the regular season, when kicker Mike Hollis wore it in 2002. Ralph C. Wilson Jr. Distinguished Service Award recipients Wall of Fame Pro Football Hall of Fame All-time first round draft picks Recent Pro Bowl selections Coaching staff Head coaches Current staff Current roster Radio and television The Buffalo Bills Radio Network is flagshipped at WGR AM 550 in Buffalo, with sister station WWKB AM 1520 simulcasting all home games. 
Chris Brown is the team's current play-by-play announcer, having taken over from John Murphy (the announcer from 2003 to 2022 and color commentator most years from 1984 to 2003) after Murphy suffered a stroke. Former Bills center Eric Wood serves as the color analyst. In 2018, the team signed an agreement with Nexstar Media Group to carry Bills preseason games across its network of stations in the region. As of 2020, WIVB-TV serves as the flagship station of the network, which includes WJET-TV in Erie, WROC-TV in Rochester, WSYR-TV in Syracuse, WUTR in Utica, WETM-TV in Elmira and WIVT in Binghamton. Steve Tasker does color commentary on these games; the play-by-play position is rotated between Andrew Catalon and Rob Stone. WROC-TV reporter Thad Brown is the sideline reporter. Since 2008, preseason games have been broadcast in high definition. Beginning in the 2016 season, as per a new rights deal which covers rights to the team as well as its sister NHL franchise, the Buffalo Sabres, most team-related programming, including studio programming and the coach's show, was re-located to MSG Western New York—a joint venture of MSG and the team ownership. Preseason games will continue to air in simulcast on broadcast television. In the event regular-season games are broadcast by ESPN, in accordance with the league's television policies, a local Buffalo station simulcasts the game. From 2014 to 2017, WKBW-TV held the broadcast rights to that contest, with the station having won back the rights to cable games after WBBZ-TV held the rights for 2012 and 2013. Training camp sites 1960–1962 Roycroft Inn, East Aurora, New York 1963–1967 Camelot Hotel, Blasdell, New York 1968–1980 Niagara University, Lewiston, New York 1981–1999 State University of New York at Fredonia, Fredonia, New York 2000–present, St. John Fisher University, Pittsford, New York Source: Mascots, cheerleaders and marching band The Bills' official mascot is Billy Buffalo, an eight-foot-tall, anthropomorphic blue American bison who wears the jersey "number" BB. The Bills do not have cheerleaders. The Bills operated a cheerleading squad named the Buffalo Jills from 1967 to 1985; from 1986 to 2013, the Jills operated as an independent organization sponsored by various companies. The Jills suspended operations prior to the 2014 season due to legal actions. The Bills and Jills were previously involved in a legal battle, in which the Jills alleged they were employees, not independent contractors, and sought back pay. On March 3, 2022, a settlement was reached where the Bills agreed to pay the Jills $3.5 million, while Cumulus Media paid $4 million in stock options of the company while admitting no wrongdoing. The Bills are one of six teams in the NFL to designate an official marching band or drumline (the others being the Baltimore Ravens, Washington Commanders, New York Jets, Carolina Panthers and Seattle Seahawks). Since the last game of the 2013 season, this position has been served by the Stampede Drumline, known outside of Buffalo as Downbeat Percussion. The Bills have also used the full marching bands from Attica High School, the University of Pittsburgh and Syracuse University at home games in recent years. The Bills have several theme songs associated with them. The most popular is a variation of the Isley Brothers hit "Shout", recorded by Scott Kemper, which served as the Bills' official promotional song from 1987 through 1990s. 
It can be heard at every Bills home game following a field goal or touchdown and at the end of the game if the Bills win. The Bills' unofficial fight song, "Go Bills", was penned by Bills head coach Marv Levy in the mid-1990s on a friendly wager with his players that he will write the song if the team won a particular game. Supporters The "Bills Backers" are the official fan organization of the Buffalo Bills. It has over 200 chapters across North America, Europe and Oceania. Also notable is the "Bills Mafia", organized via Twitter beginning in 2010 by Del Reid, Leslie Wille, and Breyon Harris; the phrase "Bills Mafia" had by 2017 grown to unofficially represent the broad community surrounding and encompassing the team as a whole, and players who join the Bills often speak of joining the Bills Mafia. Outsiders often treat the Bills' fan base in derogatory terms, especially since the 2010s, in part because of negative press coverage of select fans' wilder antics. In 2020, the Bills filed to trademark the "Bills Mafia" name. Bills fans are particularly well known for their wearing of Zubaz zebra-printed sportswear; so much is the association between Bills fans and Zubaz that when a revival of the company opened their first brick-and-mortar storefront, it chose Western New York as its first location. The "wing hat," a hat shaped like a spicy chicken wing (much in the same style as the Green Bay Packers' Cheesehead hats), can also frequently be seen atop Bills fans' heads, having originated as promotional merchandise by the Anchor Bar, the purported inventors of the modern chicken wing as a delicacy. Another hat associated with the Bills fandom is the water buffalo hat, resembling the headgear of the fictional Loyal Order of Water Buffaloes seen in the TV series The Flintstones; this hat gained particular popularity with the Water Buffalo Club 716, a community of over 2,000 Bills supporters from around the world founded in 2021 by Therese Forton-Barnes. Bills Mafia members are also well known for jumping off of elevated surfaces (often cars or RVs) into folding tables, in the style of professional wrestlers, during the pre-game tailgate. Bills fans are noted for their frequent support for charitable causes. After the Bills received help in breaking their 17-year playoff drought on a last-minute Cincinnati Bengals victory, Bills fans crowdfunded the charities of Bengals players Andy Dalton and Tyler Boyd with hundreds of thousands of dollars as a gesture of thanks. Also in 2020, following a November 8 upset win over the Seattle Seahawks led by one of the best career performances by quarterback Josh Allen, news emerged that Allen had elected to take the field after having been given the option to sit out the contest as he had received news of his grandmother's death only the night before. Fans showed support for their team and community by donating nearly $700,000 to the Oishei Children's Hospital, an organization supported by Allen throughout his time in Buffalo. Following the Bills' defeat of the Baltimore Ravens in the 2020–21 NFL playoffs and an injury to Ravens quarterback Lamar Jackson late in that game, Bills fans crowdfunded his favorite charity, Blessings in a Backpack. The Bills are one of the favorite teams of ESPN announcer Chris Berman, who picked the Bills to reach the Super Bowl nearly every year in the 1990s. Berman often uses the catchphrase "No one circles the wagons like the Buffalo Bills!" 
Berman gave the induction speech for Bills owner Ralph Wilson when Wilson was inducted into the Pro Football Hall of Fame in 2009. The Bills were also the favorite team of late NBC political commentator Tim Russert, a South Buffalo native, who often referred to the Bills on his Sunday morning talk show, Meet the Press. (His son, Luke, is also a notable fan of the team.) CNN's Wolf Blitzer, also a Buffalo native, has proclaimed he is a fan, as has CBS Evening News lead anchor and Tonawanda native Jeff Glor and DNC Chairman Tom Perez. ESPN anchor Kevin Connors is also a noted Bills fan, dating to his time attending Ithaca College. Actor Nick Bakay, a Buffalo native, is also a well-known Bills fan; he has discussed the team in segments of NFL Top 10. Character actor William Fichtner, raised in Cheektowaga, is a fan, and did a commercial for the team in 2014. In 2015, Fichtner also narrated the ESPN 30 for 30 documentary on the Bills' four Super Bowl appearances, "Four Falls of Buffalo". Former Olympic swimmer Summer Sanders (an in-law to former Bills kicker Todd Schlopy) has professed her fandom of the team. Actor Christopher McDonald, who was raised in Romulus, New York, is a fan of the team. Persons notable almost entirely for their Bills fandom include Ken "Pinto Ron" Johnson, whose antics while appearing at every Bills home and away game since 1994 earned enough scrutiny that his tailgate parties were banned from stadium property on order of the league; John Lang, an Elvis impersonator who carries a large guitar that he uses as a billboard; Marc Miller, whose professional wrestling promo-style interview with WGRZ prior to Super Bowl XXVII (distinguished by the line "Dallas is going down, Gary!" and picked up at the time by The George Michael Sports Machine) was rediscovered in 2019; and Ezra Castro, also known as "Pancho Billa," a native of El Paso, Texas, who wore a large sombrero and lucha mask in Bills colors. Castro was diagnosed with a spinal tumor that had metastasized in 2017; he was invited on stage during the 2018 NFL Draft to read one of the Bills' selections. Castro died on May 14, 2019.

In popular culture

Several former Buffalo Bills players earned a name in politics in the late 20th century after their playing careers had ended, nearly always as members of the Republican Party. The most famous of these was quarterback Jack Kemp, who was elected to the U.S. House of Representatives from Western New York in 1971, two years after his playing career ended, and remained there for nearly two decades; he was later the Republican Party's nominee for Vice President of the United States under Bob Dole in 1996. Kemp's backup, Ed Rutkowski, served as county executive of Erie County from 1979 to 1987. Former tight end Jay Riemersma, defensive tackle Fred Smerlas and defensive end Phil Hansen have all run for Congress, though all three either lost or withdrew from their respective races. Quarterback Jim Kelly and running back Thurman Thomas have also both been mentioned as potential candidates for political office, although both have declined all requests to date. See also List of American Football League players Major North American professional sports teams Notes References External links Buffalo Bills at NFL.com
The Central Artery/Tunnel Project (CA/T Project), commonly known as the Big Dig, was a megaproject in Boston that rerouted the Central Artery of Interstate 93 (I-93), the chief highway through the heart of the city, into the 1.5-mile (2.4 km) Thomas P. O'Neill Jr. Tunnel. The project also included the construction of the Ted Williams Tunnel (extending I-90 to Logan International Airport), the Leonard P. Zakim Bunker Hill Memorial Bridge over the Charles River, and the Rose Kennedy Greenway in the space vacated by the previous I-93 elevated roadway. Initially, the plan was also to include a rail connection between Boston's two major train terminals. Planning began in 1982; the construction work was carried out between 1991 and 2006; and the project concluded on December 31, 2007, when the partnership between the program manager and the Massachusetts Turnpike Authority ended. The Big Dig was the most expensive highway project in the United States, and was plagued by cost overruns, delays, leaks, design flaws, charges of poor execution and use of substandard materials, criminal charges and arrests, and the death of one motorist. The project was originally scheduled to be completed in 1998 at an estimated cost of $2.8 billion (in 1982 dollars, US$7.4 billion adjusted for inflation). However, the project was completed in December 2007 at a cost of over $8.08 billion (in 1982 dollars, $21.5 billion adjusted for inflation, meaning a cost overrun of about 190%). The Boston Globe estimated that the project would ultimately cost $22 billion, including interest, and that it would not be paid off until 2038. As a result of a death, leaks, and other design flaws, Bechtel and Parsons Brinckerhoff—the consortium that oversaw the project—agreed to pay $407 million in restitution, and several smaller companies agreed to pay a combined sum of approximately $51 million. The Rose Fitzgerald Kennedy Greenway is a series of parks and public spaces that formed the final part of the Big Dig after Interstate 93 was put underground. The Greenway was named in honor of Kennedy family matriarch Rose Fitzgerald Kennedy, and was officially dedicated on July 26, 2004.

Origin

This project was developed in response to traffic congestion on Boston's historically tangled streets, which were laid out centuries before the advent of the automobile. As early as 1930, the city's Planning Board recommended a raised express highway running north–south through the downtown district in order to draw through traffic off the city streets. Commissioner of Public Works William Callahan promoted plans for the Central Artery, an elevated expressway which was eventually constructed between the downtown area and the waterfront. Governor John Volpe interceded in the 1950s to change the design of the last section of the Central Artery, putting it underground through the Dewey Square Tunnel. While traffic moved somewhat better, the other problems remained. There was chronic congestion on the Central Artery (I-93), the elevated six-lane highway through the center of downtown Boston, which was, in the words of Pete Sigmund, "like a funnel full of slowly-moving, or stopped, cars (and swearing motorists)." In 1959, the 1.5-mile-long (2.4 km) road section carried approximately 75,000 vehicles a day, but by the 1990s, this had grown to 190,000 vehicles a day. Traffic jams of 16 hours were predicted for 2010.
The expressway had tight turns, an excessive number of entrances and exits, entrance ramps without merge lanes, and as the decades passed and other planned expressways were cancelled, continually escalating vehicular traffic that was well beyond its design capacity. Local businesses again wanted relief, city leaders sought a reuniting of the waterfront with the city, and nearby residents desired removal of the matte green-painted elevated road which mayor Thomas Menino called Boston's "other Green Monster" (as an unfavorable comparison to Fenway Park's famed left-field wall). MIT engineers Bill Reynolds and (eventual state Secretary of Transportation) Frederick P. Salvucci envisioned moving the whole expressway underground. Cancellation of the Inner Belt project Another important motivation for the final form of the Big Dig was the abandonment of the Massachusetts Department of Public Works' intended expressway system through and around Boston. The Central Artery, as part of Mass. DPW's Master Plan of 1948, was originally planned to be the downtown Boston stretch of Interstate 95, and was signed as such; a bypass road called the Inner Belt, was subsequently renamed Interstate 695. (The law establishing the Interstate highway system was enacted in 1956.) The Inner Belt District was to pass to the west of the downtown core, through the neighborhood of Roxbury and the cities of Brookline, Cambridge, and Somerville. Earlier controversies over impact of the Boston extension of the Massachusetts Turnpike, particularly on the heavily populated neighborhood of Brighton, and the additional large amount of housing that would have had to be destroyed led to massive community opposition to both the Inner Belt and the Boston section of I-95. By 1970, building demolition and land clearances had been completed along the I-95 right of way through the neighborhoods of Roxbury, Jamaica Plain, the South End and Roslindale, which led to secession threats by Hyde Park, Boston's youngest and southernmost neighborhood (which I-95 was also slated to go through). By 1972, with relatively little work done on the Southwest Corridor portion of I-95 and none on the potentially massively disruptive Inner Belt, Governor Francis Sargent put a moratorium on highway construction within the Route 128 corridor, except for the final short stretch of Interstate 93. In 1974, the remainder of the Master Plan was canceled. With ever-increasing traffic volumes funneled onto I-93 alone, the Central Artery became chronically gridlocked. The Sargent moratorium led to the rerouting of I-95 away from Boston around the Route 128 beltway and the conversion of the cleared land in the southern part of the city into the Southwest Corridor linear park, as well as a new right-of-way for the Orange Line subway and Amtrak. Parts of the planned I-695 right-of-way remain unused and under consideration for future mass-transit projects. The original 1948 Master Plan included a Third Harbor Tunnel plan that was hugely controversial in its own right, because it would have disrupted the Maverick Square area of East Boston. It was never built. Mixing of traffic A major reason for the all-day congestion was that the Central Artery carried not only north–south traffic, but it also carried east–west traffic. Boston's Logan Airport lies across Boston Harbor in East Boston; and before the Big Dig, the only access to the airport from downtown was through the paired Callahan and Sumner tunnels. 
Traffic on the major highways from west of Boston—the Massachusetts Turnpike and Storrow Drive—mostly traveled on portions of the Central Artery to reach these tunnels. Getting between the Central Artery and the tunnels involved short diversions onto city streets, increasing local congestion.

Mass transit

A number of public transportation projects were included as environmental mitigation for the Big Dig. The most expensive was the building of the Phase II Silver Line tunnel under Fort Point Channel, done in coordination with Big Dig construction. Silver Line buses now use this tunnel and the Ted Williams Tunnel to link South Station and Logan Airport. The MBTA Green Line extension beyond Lechmere to Medford/Tufts station opened on December 12, 2022. Promised projects to connect the Red and Blue subway lines and to restore Green Line streetcar service to the Arborway in Jamaica Plain have not been completed. The Red and Blue subway line connection underwent initial design, but no funding has been designated for the project. The Arborway Line restoration has been abandoned, following a final court decision in 2011. The original Big Dig plan also included the North–South Rail Link, which would have connected North and South Stations (the major passenger train stations in Boston), but this aspect of the project was ultimately dropped by the state transportation administration early in the Dukakis administration. Negotiations with the federal government had led to an agreement to widen some of the lanes in the new harbor tunnel, and accommodating these would require the tunnel to be deeper and mechanically vented; this left no room for the rail lines, and having diesel trains (then in use) passing through the tunnel would have substantially increased the cost of the ventilation system.

Early planning

The project was conceived in the 1970s by the Boston Transportation Planning Review to replace the rusting elevated six-lane Central Artery. The expressway separated downtown from the waterfront, and was increasingly choked with bumper-to-bumper traffic. Business leaders were more concerned about access to Logan Airport, and pushed instead for a third harbor tunnel. In their second terms, Michael Dukakis (governor) and Fred Salvucci (secretary of transportation) came up with the strategy of tying the two projects together—thereby combining the project that the business community supported with the project that they and the City of Boston supported. Planning for the Big Dig as a project officially began in 1982, with environmental impact studies starting in 1983. After years of extensive lobbying for federal dollars, a 1987 public works bill appropriating funding for the Big Dig was passed by the US Congress, but it was vetoed by President Ronald Reagan for being too expensive. When Congress overrode the veto, the project had its green light, and ground was first broken in 1991. In 1997, the state legislature created the Metropolitan Highway System and transferred responsibility for the Central Artery and Tunnel ("CA/T") Project from the Massachusetts Highway Department and the Massachusetts Governor's Office to the Massachusetts Turnpike Authority (MTA).
The MTA, which had little experience in managing an undertaking of the scope and magnitude of the CA/T Project, hired a joint venture to provide preliminary designs, manage design consultants and construction contractors, track the project's cost and schedule, advise MTA on project decisions, and (in some instances) act as the MTA's representative. Eventually, MTA combined some of its employees with joint venture employees in an integrated project organization. This was intended to make management more efficient, but it hindered MTA's ability to independently oversee project activities because MTA and the joint venture had effectively become partners in the project. Obstacles In addition to political and financial difficulties, the project met resistance from residents of Boston's historic North End, who in the 1950s had seen 20% of the neighborhood's businesses displaced by development of the Central Artery. In 1993, the North End Waterfront Central Artery Committee (NEWCAC) was created, co-founded by Nancy Caruso, representing residents, businesses, and institutions in the North End and Waterfront neighborhoods of Boston. The committee's goals included lessening the impact of the Central Artery/Tunnel Project on the community, representing the neighborhoods to government agencies, keeping the community informed, developing a list of priorities of immediate neighborhood concerns, and promoting responsible and appropriate development of the post-construction artery corridor in the North End and Waterfront neighborhoods. The political, financial and residential obstacles were compounded by several environmental and engineering challenges. The downtown area through which the tunnels were to be dug was largely landfill, and included existing Red Line and Blue Line subway tunnels as well as innumerable pipes and utility lines that would have to be replaced or moved. Tunnel workers encountered many unexpected geological and archaeological barriers, ranging from glacial debris to foundations of buried houses and a number of sunken ships lying within the reclaimed land. The project received approval from state environmental agencies in 1991, after satisfying concerns including release of toxins by the excavation and the possibility of disrupting the homes of millions of rats, causing them to roam the streets of Boston in search of new housing. By the time the federal environmental clearances were delivered in 1994, the process had taken some seven years, during which time inflation greatly increased the project's original cost estimates. Reworking such a busy corridor without seriously restricting traffic flow required a number of state-of-the-art construction techniques. Because the old elevated highway (which remained in operation throughout the construction process) rested on pylons located throughout the designated dig area, engineers first utilized slurry wall techniques to create concrete walls upon which the highway could rest. These concrete walls also stabilized the sides of the site, preventing cave-ins during the continued excavation process. The multi-lane Interstate highway also had to pass under South Station's seven railroad tracks, which carried over 40,000 commuters and 400 trains per day. To avoid multiple relocations of train lines while the tunneling advanced, as had been initially planned, a specially designed jack was constructed to support the ground and tracks to allow the excavation to take place below. 
Construction crews also used ground freezing (an artificial induction of permafrost) to help stabilize surrounding ground as they excavated the tunnel. This was the largest tunneling project undertaken beneath railroad lines anywhere in the world. The ground freezing enabled safer, more efficient excavation, and also assisted in environmental issues, as less contaminated fill needed to be exported than if a traditional cut-and-cover method had been applied. Other challenges included existing subway tunnels crossing the path of the underground highway. To build slurry walls past these tunnels, it was necessary to dig beneath the tunnels and to build an underground concrete bridge to support the tunnels' weight, without interrupting rail service. Construction phase The project was managed by the Massachusetts Turnpike Authority, with the Big Dig and the Turnpike's Boston Extension from the 1960s being financially and legally joined by the legislature as the Metropolitan Highway System. Design and construction was supervised by a joint venture of Bechtel Corporation and Parsons Brinckerhoff. Because of the enormous size of the project—too large for any company to undertake alone—the design and construction of the Big Dig was broken up into dozens of smaller subprojects with well-defined interfaces between contractors. Major heavy-construction contractors on the project included Jay Cashman, Modern Continental, Obayashi Corporation, Perini Corporation, Peter Kiewit Sons' Incorporated, J. F. White, and the Slattery division of Skanska USA. (Of those, Modern Continental was awarded the greatest gross value of contracts, joint ventures included.) The nature of the Charles River crossing had been a source of major controversy throughout the design phase of the project. Many environmental advocates preferred a river crossing entirely in tunnels, but this, along with 27 other plans, was rejected as too costly. Finally, with a deadline looming to begin construction on a separate project that would connect the Tobin Bridge to the Charles River crossing, Salvucci overrode the objections and chose a variant of the plan known as "Scheme Z". This plan was considered to be reasonably cost-effective, but had the drawback of requiring highway ramps stacked up as high as immediately adjacent to the Charles River. The city of Cambridge objected to the visual impact of the chosen Charles River crossing design. The city sued to revoke the project's environmental certificate and forced the project planners to redesign the river crossing again. Swiss engineer Christian Menn took over the design of the bridge. He suggested a cradle cable-stayed bridge that would carry ten lanes of traffic. The plan was accepted and construction began on the Leonard P. Zakim Bunker Hill Memorial Bridge. The bridge employed an asymmetrical design and a hybrid of steel and concrete was used to construct it. The distinctive bridge is supported by two forked towers connected to the span by cables and girders. It was the first bridge in the country to employ this method and it was, at the time, the widest cable-stayed bridge in the world, having since been surpassed by the Eastern span replacement of the San Francisco–Oakland Bay Bridge. Meanwhile, construction continued on the Tobin Bridge approach. 
By the time all parties agreed on the I-93 design, construction of the Tobin connector (today known as the "City Square Tunnel" for a Charlestown area it bypasses) was far along, significantly adding to the cost of constructing the US Route 1 interchange and retrofitting the tunnel. Boston blue clay and other soils extracted from the path of the tunnel were used to cap many local landfills, fill in the Granite Rail Quarry in Quincy, and restore the surface of Spectacle Island in the Boston Harbor Islands National Recreation Area. The Storrow Drive Connector, a companion bridge to the Zakim, began carrying traffic from I-93 to Storrow Drive in 1999. The project had been under consideration for years, but was opposed by the wealthy residents of the Beacon Hill neighborhood. However, it finally was accepted because it would funnel traffic bound for Storrow Drive and downtown Boston away from the mainline roadway. The Connector ultimately used a pair of ramps that had been constructed for Interstate 695, enabling the mainline I-93 to carry more traffic that would have used I-695 under the original Master Plan. When construction began, the project cost, including the Charles River crossing, was estimated at $5.8 billion. Eventual cost overruns were so high that the chairman of the Massachusetts Turnpike Authority, James Kerasiotes, was fired in 2000. His replacement had to commit to an $8.55 billion cap on federal contributions. The total expenses eventually passed $15 billion. Interest brought this cost to $21.93 billion. Engineering methods and details Several unusual engineering challenges arose during the project, requiring unusual solutions and methods to address them. At the beginning of the project, engineers had to figure out the safest way to build the tunnel without endangering the existing elevated highway above. Eventually, they created horizontal braces as wide as the tunnel, then cut away the elevated highway's struts, and lowered it onto the new braces. Three alternative construction methods were studied with their corresponding structural design to address existing conditions, safety measures, and constructability. In addition to codified loads, construction loads were computed to support final design and field execution . Final phases On January 18, 2003, the opening ceremony was held for the I-90 Connector Tunnel, extending the Massachusetts Turnpike (Interstate 90) east into the Ted Williams Tunnel, and onwards to Boston Logan International Airport. The Ted Williams tunnel had been completed and was in limited use for commercial traffic and high-occupancy vehicles since late 1995. The westbound lanes opened on the afternoon of January 18 and the eastbound lanes on January 19. The next phase, moving the elevated Interstate 93 underground, was completed in two stages: northbound lanes opened on March 29, 2003, and southbound lanes (in a temporary configuration) on December 20, 2003. A tunnel underneath Leverett Circle connecting eastbound Storrow Drive to I-93 North and the Tobin Bridge opened December 19, 2004, easing congestion at the circle. All southbound lanes of I-93 opened to traffic on March 5, 2005, including the left lane of the Zakim Bridge, and all of the refurbished Dewey Square Tunnel. By the end of December 2004, 95% of the Big Dig was completed. Major construction remained on the surface, including construction of final ramp configurations in the North End and in the South Bay interchange, and reconstruction of the surface streets. 
The final ramp downtown—exit 16A (formerly 20B) from I-93 south to Albany Street—opened January 13, 2006. In 2006, the two Interstate 93 tunnels were dedicated as the Thomas P. O'Neill Jr. Tunnel, after the former Democratic speaker of the House of Representatives from Massachusetts who pushed to have the Big Dig funded by the federal government. Coordinated projects The Commonwealth of Massachusetts was required under the Federal Clean Air Act to mitigate air pollution generated by the highway improvements. Secretary of Transportation Fred Salvucci signed an agreement with the Conservation Law Foundation in 1990 enumerating 14 specific projects the state agreed to build. This list was affirmed in a 1992 lawsuit settlement. Projects which have been completed include:
Restoration of three Old Colony Commuter Rail lines
Expansion of Framingham Line to serve Worcester full-time
Restoration of the Newburyport/Rockport Line
Six-car trains on the MBTA Blue Line, requiring platform lengthening, station modernization, and all new train cars
MBTA Silver Line service to the South Boston waterfront
1,000 new commuter parking spaces
Fairmount Line improvements
As of 2023, one mitigation project has been partially completed:
Green Line Extension (opened to serve Medford in 2022; however, the required extension of the line to Route 16 is incomplete)
Some projects have been removed or replaced, including:
Design of the Red-Blue Connector at Charles Street (removed)
Restoration of Green Line "E" Arborway service (replaced with other projects with similar air-quality improvements)
Surface treatments Some surface treatments that were part of the original project plan were dropped due to the massive cost overruns on the highway portion of the project. $99.1 million was allocated for mitigating improvements to the Charles River Basin, including the construction of North Point Park in Cambridge and Paul Revere Park in Charlestown. The North Bank Bridge, providing pedestrian and bicycle connectivity between the parks, was not funded until the American Recovery and Reinvestment Act of 2009. Nashua Street Park on the Boston side was completed in 2003, by McCourt Construction with $7.9 million in funding from MassDOT. As of 2017, $30.5 million had been transferred to the Massachusetts Department of Conservation and Recreation to complete five projects. Another incomplete but required project is the South Bank Bridge over the MBTA Commuter Rail tracks at North Station (connecting Nashua Street Park to the proposed South Bank Park, which is currently a parking lot under the Zakim Bridge at the Charles River locks). Improvements in the lower Charles River Basin include the new walkway at Lovejoy Wharf (constructed by the developer of 160 North Washington Street, the new headquarters of Converse), the Lynch Family Skate Park (constructed in 2015 by the Charles River Conservancy), rehabilitation of historic operations buildings for the Charles River Dam and lock, a maintenance facility, and a planned pedestrian walkway across the Charles River next to the MBTA Commuter Rail drawbridge at North Station (connecting Nashua Street Park and North Point Park). MassDOT is funding the South Bank Park, and replacement of the North Washington Street Bridge (construction Aug 2018–23). EF Education is funding public greenspace improvements as part of its three-phase expansion at North Point. Remaining funding may be used to construct the North Point Inlet pedestrian bridge, and a pedestrian walkway over Leverett Circle. 
Before being replaced with surface access during the reconstruction of the Science Park MBTA Green Line station, Leverett Circle had pedestrian bridges with stairs that provided elevated access between the station, the Charles River Parks, and the sidewalk to the Boston Museum of Science. The replacement ramps would comply with Americans with Disabilities Act requirements and allow easy travel by wheelchair or bicycle over the busy intersection. Public art While not a legally mandated requirement, public art was part of the urban design planning process (and later design development work) through the Artery Arts Program. The intent of the program was to integrate public art into highway infrastructure (retaining walls, fences, and lighting) and the essential elements of the pedestrian environment (walkways, park landscape elements, and bridges). As overall project costs increased, the Artery Arts Program was seen as a potential liability, even though there was support and interest from the public and professional arts organizations in the area. At the beginning of the highway design process, a temporary arts program was initiated, and over 50 proposals were selected. However, development began on only a few projects before funding for the program was cut. Permanent public art that was funded includes: super graphic text and facades of former West End houses cast into the concrete elevated highway abutment support walls near North Station by artist Sheila Levrant de Bretteville; Harbor Fog, a sensor-activated mist, light and sound sculptural environment by artist Ross Miller in parcel 17; a historical sculpture celebrating the 18th and 19th century shipbuilding industry and a bust of shipbuilder Donald McKay in East Boston; blue interior lighting of the Zakim Bridge; and the Miller's River Littoral Way walkway and lighting under the loop ramps north of the Charles River. Extensive landscape planting, as well as a maintenance program to support the plantings, was requested by many community members during public meetings. Impact on traffic The Big Dig separated the co-mingled traffic from the Massachusetts Turnpike and the Sumner and Callahan tunnels. While only one net lane in each direction was added to the north–south I-93, several new east–west lanes became available. East–west traffic on the Massachusetts Turnpike/I-90 now proceeds directly through the Ted Williams Tunnel to Logan Airport and Route 1A beyond. Traffic between Storrow Drive and the Callahan and Sumner Tunnels still uses a short portion of I-93, but additional lanes and direct connections are provided for this traffic. The result was a 62% reduction in vehicle hours of travel on I-93, the airport tunnels, and the connection from Storrow Drive, from an average 38,200 hours per day before construction (1994–1995) to 14,800 hours per day in 2004–2005, after the project was largely complete. The savings for travelers was estimated at $166 million annually in the same 2004–2005 time frame. Travel times on the Central Artery northbound during the afternoon peak hour were reduced 85.6%. A 2008 Boston Globe report asserted that waiting time for the majority of trips actually increased as a result of demand induced by the increased road capacity. Because more drivers were opting to use the new roads, traffic bottlenecks were only pushed outward from the city, not reduced or eliminated (although some trips are now faster). 
The report states, "Ultimately, many motorists going to and from the suburbs at peak rush hours are spending more time stuck in traffic, not less." The Globe also asserted that their analysis provides a fuller picture of the traffic situation than a state-commissioned study done two years earlier, in which the Big Dig was credited with helping to save at least $167 million a year by increasing economic productivity and decreasing motor vehicle operating costs. That study did not look at highways outside the Big Dig construction area and did not take into account new congestion elsewhere. Impact on property values Towards the end of the Big Dig in 2003, it was estimated that the demolition of the Central Artery highway would cause a $732 million increase in property value in Boston's financial district, with the replacement parks providing an additional $252 million in value. Additionally, as a result of the Big Dig, a large amount of waterfront space was opened up, which is now a high-rent residential and commercial area called the Seaport District. The development of Seaport alone was estimated to create $7 billion in private investment and 43,000 jobs. Operations Control Center (OCC) As part of the project, an elaborate Operations Control Center (OCC) control room was constructed in South Boston. Staffed on a "24/7/365" basis, this center monitors and reports on traffic congestion, and responds to emergencies. Continuous video surveillance is provided by hundreds of cameras, and thousands of sensors monitor traffic speed and density, air quality, water levels, temperatures, equipment status, and other conditions inside the tunnel. The OCC can activate emergency ventilation fans, change electronic display signs, and dispatch service crews when necessary. Problems "Thousands of leaks" As far back as 2001, Turnpike Authority officials and contractors knew of thousands of leaks in ceiling and wall fissures, extensive water damage to steel supports and fireproofing systems, and overloaded drainage systems. Many of the leaks were a result of Modern Continental and other subcontractors failing to remove gravel and other debris before pouring concrete. This information was not made public until engineers at MIT (volunteer students and professors) performed several experiments and found serious problems with the tunnel. On September 15, 2004, a major leak in the Interstate 93 north tunnel forced the closure of the tunnel while repairs were conducted. This also forced the Turnpike Authority to release information regarding its non-disclosure of prior leaks. A follow-up reported on "extensive" leaks that were more severe than state authorities had previously acknowledged. The report went on to state that the $14.6 billion tunnel system was riddled with more than 400 leaks. A Boston Globe report, however, countered that by stating there were nearly 700 leaks in a single section of tunnel beneath South Station. Turnpike officials also stated that the number of leaks being investigated was down from 1,000 to 500. The problem of leaks is further aggravated by the fact that many of them involve corrosive salt water. This is caused by the proximity of Boston Harbor and the Atlantic Ocean, causing a mix of salt and fresh water leaks in the tunnel. The situation is made worse by road salt spread in the tunnel to melt ice during freezing weather, or brought in by vehicles passing through. Salt water and salt spray are well-known issues that must be dealt with in any marine environment. 
It has been reported that "hundreds of thousands of gallons of salt water are pumped out monthly" in the Big Dig, and a map has been prepared showing "hot spots" where water leakage is especially serious. Salt-accelerated corrosion has caused ceiling light fixtures to fail (see below), but can also cause rapid deterioration of embedded rebar and other structural steel reinforcements holding the tunnel walls and ceiling in place. Substandard materials Massachusetts State Police searched the offices of Aggregate Industries, the largest concrete supplier for the underground portions of the project, in June 2005. They seized evidence that Aggregate delivered concrete that did not meet contract specifications. In March 2006 Massachusetts Attorney General Tom Reilly announced plans to sue project contractors and others because of poor work on the project. Over 200 complaints were filed by the state of Massachusetts as a result of leaks, cost overruns, quality concerns, and safety violations. In total, the state has sought approximately $100 million from the contractors ($1 for every $141 spent). In May 2006, six employees of the company were arrested and charged with conspiracy to defraud the United States. The employees were accused of reusing old concrete and double-billing loads. In July 2007, Aggregate Industries settled the case with an agreement to pay $50 million. $42 million of the settlement went to civil cases and $8 million was paid in criminal fines. The company will provide $75 million in insurance for maintenance as well as pay $500,000 toward routine checks on areas suspected to contain substandard concrete. In July 2009, two of the accused, Gerard McNally and Keith Thomas, both managers, pled guilty to charges of conspiracy, mail fraud, and filing false reports. The following month, the remaining four, Robert Prosperi, Mark Blais, Gregory Stevenson, and John Farrar, were found guilty on conspiracy and fraud charges. The four were sentenced to probation and home confinement and Blais and Farrar were additionally sentenced to community service. Fatal ceiling collapse A fatal accident raised safety questions and closed part of the project for most of the summer of 2006. On July 10, 2006, concrete ceiling panels and debris weighing and measuring fell on a car traveling on the two-lane ramp connecting northbound I-93 to eastbound I-90 in South Boston, killing Milena Del Valle, who was a passenger, and injuring her husband, Angel Del Valle, who was driving. Immediately following the fatal ceiling collapse, Governor Mitt Romney ordered a "stem-to-stern" safety audit conducted by the engineering firm of Wiss, Janney, Elstner Associates, Inc. to look for additional areas of risk. Said Romney: "We simply cannot live in a setting where a project of this scale has the potential of threatening human life, as has already been seen". The collapse and closure of the tunnel greatly snarled traffic in the city. The resulting traffic jams are cited as contributing to the death of another person, a heart attack victim who died en route to Boston Medical Center when his ambulance was caught in one such traffic jam two weeks after the collapse. On September 1, 2006, one eastbound lane of the connector tunnel was re-opened to traffic. Following extensive inspections and repairs, Interstate 90 east- and westbound lanes reopened in early January 2007. The final piece of the road network, a high occupancy vehicle lane connecting Interstate 93 north to the Ted Williams Tunnel, reopened on June 1, 2007. 
On July 10, 2007, after a lengthy investigation, the National Transportation Safety Board found that epoxy glue used to hold the roof in place during construction was not appropriate for long-term bonding. This was determined to be the cause of the roof collapse. The Power-Fast Epoxy Adhesive used in the installation was designed for short-term loading, such as wind or earthquake loads, not long-term loading, such as the weight of a panel. Powers Fasteners, the makers of the adhesive, revised their product specifications on May 15, 2007, to increase the safety factor from 4 to 10 for all of their epoxy products intended for use in overhead applications. The safety factor on Power-Fast Epoxy was increased from 4 to 16. On December 24, 2007, the Del Valle family announced they had reached a settlement with Powers Fasteners that would pay the family $6 million. In December 2008, Powers Fasteners agreed to pay $16 million to the state to settle manslaughter charges. "Ginsu guardrails" Public safety workers have called the walkway safety handrails in the Big Dig tunnels "ginsu guardrails", because the squared-off edges of the support posts have caused mutilations and deaths of passengers ejected from crashed vehicles. After an eighth reported death involving the safety handrails, MassDOT officials announced plans to cover or remove the allegedly dangerous fixtures, but only near curves or exit ramps. This partial removal of hazards has been criticized by a safety specialist, who suggests that the handrails are just as dangerous in straight sections of the tunnel. Lighting fixtures In March 2011, it became known that senior MassDOT officials had failed to disclose an issue with the lighting fixtures in the O'Neill tunnel. In early February 2011, a maintenance crew found a fixture lying in the middle travel lane in the northbound tunnel. Assuming it to be simple road debris, the maintenance team picked it up and brought it back to its home facility. The next day, a supervisor passing through the yard realized that the fixture was not road debris but was in fact one of the fixtures used to light the tunnel itself. Further investigation revealed that the fixture's mounting apparatus had failed, due to galvanic corrosion of incompatible metals, caused by having aluminum in direct contact with stainless steel, in the presence of salt water. The electrochemical potential difference between stainless steel and aluminum is in the range of 0.5 to 1.0V, depending on the exact alloys involved, and can cause considerable corrosion within months under unfavorable conditions. After the cause of the failure was discovered, a comprehensive inspection of the other fixtures in the tunnel revealed that numerous others were in the same state of deterioration. Some of the worst fixtures were temporarily shored up with plastic ties. Moving forward with temporary repairs, members of the MassDOT administration team decided not to let the news of the systemic failure and repair of the fixtures be released to the public or to Governor Deval Patrick's administration. It appeared that all of the 25,000 light fixtures would have to be replaced, at an estimated cost of $54 million. The replacement work was mostly done at night, required lane closures or occasional closing of the entire tunnel for safety, and was estimated to take up to two years to complete. Replacement of the light fixtures subsequently continued. 
See also Massachusetts Turnpike Massachusetts Bay Transportation Authority Megaproject Vincent Zarrilli – critic of the Big Dig who proposed the Boston Bypass Alaskan Way Viaduct replacement tunnel – similar project in Seattle, Washington Carmel Tunnels – similar project in Haifa, Israel Central–Wan Chai Bypass – similar project in the areas of Central, Wan Chai and Causeway Bay, within Victoria City, Hong Kong Cross City Tunnel – similar project in Sydney, New South Wales, Australia Dublin Port Tunnel – similar project on a smaller scale in Dublin, Ireland Gardiner Expressway – an elevated freeway in Toronto with similar future plans Autopista de Circunvalación M-30, and – similar project along the banks of Manzanares River, Madrid, Spain Blanka tunnel complex – similar project in Prague, Czech Republic and the longest city tunnel in Europe (6.4 km / 4.0 mi) Yamate Tunnel – similar project on a larger scale in Tokyo, Japan References External links Official site Project map on page vi of Highway to the Past: The Archaeology of Boston's Big Dig (Massachusetts Historical Commission, 2001) Map of Central Artery Project on page 21 of report on Climate Change Vulnerability List of Massachusetts State Reports on Central Artery Project in Boston Boston CA/T Project History at MIT Rotch Library YouTube version 2007 establishments in Massachusetts 2007 in Boston Engineering projects Interstate 93 Megaprojects North End, Boston Road tunnels in Massachusetts Transport infrastructure completed in 2007 Tunnels completed in 2007 Tunnels in Boston U.S. Route 1
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree. The time complexity of operations on the binary search tree is linear with respect to the height of the tree. Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, the lookup performance is proportional to that of the binary logarithm. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler. The performance of a binary search tree is dependent on the order of insertion of the nodes into the tree, since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time. The complexity analysis of BSTs shows that, on average, insert, delete and search take O(log n) time for n nodes. In the worst case, they degrade to that of a singly linked list: O(n). To address the boundless increase of the tree height with arbitrary insertions and deletions, self-balancing variants of BSTs are introduced to bound the worst lookup complexity to that of the binary logarithm. AVL trees were the first self-balancing binary search trees, invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis. Binary search trees can be used to implement abstract data types such as dynamic sets, lookup tables and priority queues, and are used in sorting algorithms such as tree sort. History The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, and Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data in magnetic tapes in 1960. One of the earliest and most popular binary search tree algorithms is that of Hibbard. The time complexities of a binary search tree increase boundlessly with the tree height if the nodes are inserted in an arbitrary order, therefore self-balancing binary search trees were introduced to bound the height of the tree to O(log n). Various height-balanced binary search trees were introduced to confine the tree height, such as AVL trees, treaps, and red–black trees. The AVL tree was invented by Georgy Adelson-Velsky and Evgenii Landis in 1962 for the efficient organization of information. It was the first self-balancing binary search tree to be invented. Overview A binary search tree is a rooted binary tree in which nodes are arranged in a strict total order: the nodes with keys greater than any particular node A are stored in the right subtree of that node A, and the nodes with keys equal to or less than A's are stored in the left subtree of A, satisfying the binary search property. Binary search trees are also efficacious in sorting and search algorithms. 
However, the search complexity of a BST depends upon the order in which the nodes are inserted and deleted, since in the worst case successive operations in the binary search tree may lead to degeneracy and form a singly linked list (or "unbalanced tree")-like structure, which thus has the same worst-case complexity as a linked list. Binary search trees are also a fundamental data structure used in the construction of abstract data structures such as sets, multisets, and associative arrays. Operations Searching Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node. If the tree is NIL, the key being searched for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful and the node is returned. If the key is less than that of the root, the search proceeds by examining the left subtree. Similarly, if the key is greater than that of the root, the search proceeds by examining the right subtree. This process is repeated until the key is found or the remaining subtree is NIL. If the searched key is not found after a NIL subtree is reached, then the key is not present in the tree. Recursive search The search procedure can be implemented through recursion: the recursive procedure continues until a NIL node or the key being searched for is encountered (a sketch of the recursive search appears after the description of insertion below). Iterative search The recursive version of the search can be "unrolled" into a while loop. On most machines, the iterative version is found to be more efficient. Since the search may proceed till some leaf node, the running time complexity of BST search is O(h), where h is the height of the tree. However, the worst case for BST search is O(n), where n is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is O(log n). Successor and predecessor For certain operations, given a node x, finding the successor or predecessor of x is crucial. Assuming all the keys of the BST are distinct, the successor of a node x in the BST is the node with the smallest key greater than x's key. On the other hand, the predecessor of a node x in the BST is the node with the largest key smaller than x's key. Operations such as finding the node whose key is the maximum or minimum in a subtree are critical building blocks here: for instance, if x has a right subtree, the successor of x is the minimum of that right subtree, and symmetrically, if x has a left subtree, the predecessor of x is the maximum of that left subtree. Insertion Operations such as insertion and deletion cause the BST representation to change dynamically. The data structure must be modified in such a way that the properties of the BST continue to hold. New nodes are inserted as leaf nodes in the BST. Insertion can be implemented iteratively: the procedure walks down from the root, maintaining a "trailing pointer" that remembers the parent of the node currently being examined. When the walk reaches NIL, the trailing pointer identifies the would-be parent; if that parent is itself NIL, the tree was empty and the new node becomes the root, otherwise the new node is attached as the parent's left or right child according to the key comparison, as in the sketch below. 
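The article's original pseudocode is not reproduced above, so the following is a minimal Python sketch of the search and insertion procedures just described, assuming distinct keys; the names Node, search and insert are illustrative choices, not part of any standard library.

class Node:
    """A BST node holding a key and references to its two children."""
    def __init__(self, key):
        self.key = key
        self.left = None    # None plays the role of NIL
        self.right = None

def search(root, key):
    """Recursive search: return the node holding key, or None if absent."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

def insert(root, key):
    """Iterative insertion: return the (possibly new) root of the tree."""
    parent = None              # "trailing pointer" to the parent of the current node
    current = root
    while current is not None:
        parent = current
        current = current.left if key < current.key else current.right
    new = Node(key)
    if parent is None:         # the tree was empty: the new node becomes the root
        return new
    if key < parent.key:
        parent.left = new
    else:
        parent.right = new
    return root

For example, starting from root = None and calling root = insert(root, k) for the keys 8, 3, 10, 1, 6 builds a five-node tree in which search(root, 6) returns the node holding 6 and search(root, 7) returns None.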
Deletion The deletion of a node, say z, from the binary search tree has three cases: If z is a leaf node, the parent node of z replaces its link to z with NIL and z is consequently removed from the tree, as shown in (a). If z has only one child, the child node gets elevated by modifying the parent node of z to point to the child node, consequently taking z's position in the tree, as shown in (b) and (c). If z has both a left and a right child, the successor of z, say y, displaces z, following two cases: If y is z's right child, as shown in (d), y displaces z and y's right child remains unchanged. If y lies within z's right subtree but is not z's right child, as shown in (e), y first gets replaced by its own right child, and then it displaces z's position in the tree. The deletion procedure thus deals with the three cases above, typically with the aid of a helper that replaces one subtree, as a child of its parent, with another subtree (a simplified recursive sketch of deletion is included with the tree sort example under Examples of applications below). Traversal A BST can be traversed through three basic algorithms: inorder, preorder, and postorder tree walks. Inorder tree walk: Nodes from the left subtree get visited first, followed by the root node and right subtree. Such a traversal visits all the nodes in the order of non-decreasing key sequence. Preorder tree walk: The root node gets visited first, followed by left and right subtrees. Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root. Each of these walks is naturally implemented as a short recursive procedure. Balanced binary search trees Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a tree height of O(n) (where n is the number of items in the tree), so that the lookup performance deteriorates to that of a linear search. Keeping the search tree balanced, with height bounded by O(log n), is a key to the usefulness of the binary search tree. This can be achieved by "self-balancing" mechanisms during update operations, designed to maintain the tree height within binary logarithmic complexity. Height-balanced trees A tree is height-balanced if the heights of the left sub-tree and right sub-tree are guaranteed to be related by a constant factor. This property was introduced by the AVL tree and continued by the red–black tree. The heights of all the nodes on the path from the root to the modified leaf node have to be observed and possibly corrected on every insert and delete operation to the tree. Weight-balanced trees In a weight-balanced tree, the criterion of a balanced tree is the number of leaves of the subtrees. In the ideal case the weights of the left and right subtrees differ by at most 1, but in practice the difference is only bounded by a ratio of the weights, since such a strong balance condition cannot be maintained with only O(log n) rebalancing work during insert and delete operations. The weight-balanced trees give an entire family of balance conditions, where the left and right subtrees of each node must each hold at least a given fraction of the total weight of the subtree. Types There are several self-balanced binary search trees, including T-tree, treap, red–black tree, B-tree, 2–3 tree, and splay tree. Examples of applications Sort Binary search trees are used in sorting algorithms such as tree sort, where all the elements are inserted at once and the tree is traversed in an in-order fashion. BSTs are also used in quicksort. 
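As an illustrative sketch (not the article's transplant-based pseudocode), the following Python functions extend the Node/insert sketch above: delete handles the three deletion cases by copying the in-order successor's key rather than splicing nodes, inorder performs the inorder tree walk, and tree_sort combines insertion with an inorder walk to produce sorted output. Distinct keys are assumed, and the function names are illustrative.

def delete(root, key):
    """Recursive deletion sketch; returns the new root of the subtree."""
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:            # leaf, or only a right child
            return root.right
        if root.right is None:           # only a left child
            return root.left
        succ = root.right                # two children: locate the in-order successor
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key              # copy the successor's key here ...
        root.right = delete(root.right, succ.key)   # ... then remove the successor
    return root

def inorder(root, out):
    """Inorder tree walk: left subtree, node, right subtree (non-decreasing keys)."""
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)

def tree_sort(items):
    """Tree sort: insert every item into a BST, then read the keys back inorder."""
    root = None
    for k in items:
        root = insert(root, k)
    out = []
    inorder(root, out)
    return out

# tree_sort([8, 3, 10, 1, 6]) evaluates to [1, 3, 6, 8, 10]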
Priority queue operations Binary search trees are used in implementing priority queues, using the node's key as priorities. Adding new elements to the queue follows the regular BST insertion operation but the removal operation depends on the type of priority queue: If it is an ascending order priority queue, removal of an element with the lowest priority is done through leftward traversal of the BST. If it is a descending order priority queue, removal of an element with the highest priority is done through rightward traversal of the BST. See also Search tree Join-based tree algorithms Optimal binary search tree Geometry of binary search trees Ternary search tree References Further reading External links Ben Pfaff: An Introduction to Binary Search Trees and Balanced Trees. (PDF; 1675 kB) 2004. Binary Tree Visualizer (JavaScript animation of various BT-based data structures) Articles with example C++ code Articles with example Python (programming language) code Binary trees Search trees
In computer science, a binary tree is a tree data structure in which each node has at most two children, referred to as the left child and the right child. That is, it is a k-ary tree with k = 2. A recursive definition using set theory is that a binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set containing the root. From a graph theory perspective, binary trees as defined here are arborescences. A binary tree may thus be also called a bifurcating arborescence, a term which appears in some very old programming books before the modern computer science terminology prevailed. It is also possible to interpret a binary tree as an undirected, rather than directed, graph, in which case a binary tree is an ordered, rooted tree. Some authors use rooted binary tree instead of binary tree to emphasize the fact that the tree is rooted, but as defined above, a binary tree is always rooted. In mathematics, what is termed binary tree can vary significantly from author to author. Some use the definition commonly used in computer science, but others define it as every non-leaf having exactly two children and don't necessarily label the children as left and right either. In computing, binary trees can be used in two very different ways: First, as a means of accessing nodes based on some value or label associated with each node. Binary trees labelled this way are used to implement binary search trees and binary heaps, and are used for efficient searching and sorting. The designation of non-root nodes as left or right child even when there is only one child present matters in some of these applications, in particular, it is significant in binary search trees. However, the arrangement of particular nodes into the tree is not part of the conceptual information. For example, in a normal binary search tree the placement of nodes depends almost entirely on the order in which they were added, and can be re-arranged (for example by balancing) without changing the meaning. Second, as a representation of data with a relevant bifurcating structure. In such cases, the particular arrangement of nodes under and/or to the left or right of other nodes is part of the information (that is, changing it would change the meaning). Common examples occur with Huffman coding and cladograms. The everyday division of documents into chapters, sections, paragraphs, and so on is an analogous example with n-ary rather than binary trees. Definitions Recursive definition To define a binary tree, the possibility that only one of the children may be empty must be acknowledged. An artifact, which in some textbooks is called an extended binary tree, is needed for that purpose. An extended binary tree is thus recursively defined as: the empty set is an extended binary tree; if T1 and T2 are extended binary trees, then denote by T1 • T2 the extended binary tree obtained by attaching T1 and T2 as the left and right subtrees of a new root, adding edges only when these sub-trees are non-empty. Another way of imagining this construction (and understanding the terminology) is to consider instead of the empty set a different type of node—for instance square nodes if the regular ones are circles. 
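As an illustration of the recursive definition (an addition, not part of the original article), a binary tree can be modeled in Python with an optional node type, where None plays the role of the empty set; the class name Node is an assumed, illustrative choice.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A binary tree is either None (the empty tree) or a root with a left and a right subtree."""
    value: Any
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

# The tree with root a, children b and c, where c has only a left child d:
tree = Node("a", Node("b"), Node("c", Node("d"), None))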
Using graph theory concepts A binary tree is a rooted tree that is also an ordered tree (a.k.a. plane tree) in which every node has at most two children. A rooted tree naturally imparts a notion of levels (distance from the root), thus for every node a notion of children may be defined as the nodes connected to it a level below. Ordering of these children (e.g., by drawing them on a plane) makes it possible to distinguish a left child from a right child. But this still doesn't distinguish between a node with a left but no right child and one with a right but no left child. The necessary distinction can be made by first partitioning the edges, i.e., defining the binary tree as a triplet (V, E1, E2), where (V, E1 ∪ E2) is a rooted tree (equivalently arborescence) and E1 ∩ E2 is empty, and also requiring that for all j ∈ { 1, 2 } every node has at most one Ej child. A more informal way of making the distinction is to say, quoting the Encyclopedia of Mathematics, that "every node has a left child, a right child, neither, or both" and to specify that these "are all different" binary trees. Types of binary trees Tree terminology is not well-standardized and so varies in the literature. A rooted binary tree has a root node and every node has at most two children. A full binary tree (sometimes referred to as a proper, plane, or strict binary tree) is a tree in which every node has either 0 or 2 children. Another way of defining a full binary tree is a recursive definition. A full binary tree is either: A single vertex (a single node as the root node). A tree whose root node has two subtrees, both of which are full binary trees. A perfect binary tree is a binary tree in which all interior nodes have two children and all leaves have the same depth or same level (the level of a node defined as the number of edges or links from the root node to a node). A perfect binary tree is a full binary tree. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible. It can have between 1 and 2^h nodes at the last level h. A perfect tree is therefore always complete but a complete tree is not always perfect. Some authors use the term complete to refer instead to a perfect binary tree as defined above, in which case they call this type of tree (with a possibly not filled last level) an almost complete binary tree or nearly complete binary tree. A complete binary tree can be efficiently represented using an array. An infinite complete binary tree is a tree with a countably infinite number of levels, in which for each level d the number of existing nodes at level d is equal to 2^d. The cardinal number of the set of all levels is ℵ0 (countably infinite). The cardinal number of the set of all paths (the "leaves", so to speak) is uncountable, having the cardinality of the continuum. A balanced binary tree is a binary tree structure in which the left and right subtrees of every node differ in height (the number of edges from the top-most node to the farthest node in a subtree) by no more than 1. One may also consider binary trees where no leaf is much farther away from the root than any other leaf. (Different balancing schemes allow different definitions of "much farther".) A degenerate (or pathological) tree is one where each parent node has only one associated child node. This means that the tree will behave like a linked list data structure. In this case, the advantage of using a binary tree is significantly reduced, because it is essentially a linked list whose search time complexity is O(n) (n being the number of nodes), and it takes more space than a linked list due to the two pointers per node, whereas a complexity of O(log2 n) for data search in a balanced binary tree is normally expected. 
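To make these distinctions concrete, the following Python predicates (an illustrative addition, not from the article) classify a tree built from the Node dataclass in the earlier sketch; the helper names are assumptions.

def height(n):
    """Height in edges; the empty tree has height -1, a single node has height 0."""
    if n is None:
        return -1
    return 1 + max(height(n.left), height(n.right))

def count(n):
    """Total number of nodes in the tree."""
    return 0 if n is None else 1 + count(n.left) + count(n.right)

def is_full(n):
    """Full (proper/strict): every node has either 0 or 2 children."""
    if n is None:
        return True
    if (n.left is None) != (n.right is None):
        return False
    return is_full(n.left) and is_full(n.right)

def is_perfect(n):
    """Perfect: all interior nodes have two children and all leaves share one depth,
    which holds exactly when a tree of height h has 2^(h+1) - 1 nodes."""
    return count(n) == 2 ** (height(n) + 1) - 1

def is_height_balanced(n):
    """Balanced: at every node the left and right subtree heights differ by at most 1."""
    if n is None:
        return True
    if abs(height(n.left) - height(n.right)) > 1:
        return False
    return is_height_balanced(n.left) and is_height_balanced(n.right)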
Properties of binary trees The number of nodes n in a full binary tree is at least 2h + 1 and at most 2^(h+1) − 1 (i.e., the number of nodes in a perfect binary tree), where h is the height of the tree. A tree consisting of only a root node has a height of 0. The least number of nodes is obtained by adding only two child nodes per added level of height, so n = 2h + 1 (the 1 counting the root node). The maximum number of nodes is obtained by fully filling nodes at each level, i.e., when it is a perfect tree. For a perfect tree, the number of nodes is n = 1 + 2 + 4 + ... + 2^h = 2^(h+1) − 1, where the last equality follows from the geometric series sum. The number of leaf nodes l in a perfect binary tree is l = (n + 1)/2 (where n is the number of nodes in the tree), because n = 2^(h+1) − 1 (by using the above property) and the number of leaves is 2^h, so n = 2 · 2^h − 1 = 2l − 1 and hence l = (n + 1)/2. It also means that n = 2l − 1. In terms of the tree height h, l = 2^h. For any non-empty binary tree with n0 leaf nodes and n2 nodes of degree 2 (internal nodes with two child nodes), n0 = n2 + 1. The proof is the following. For a perfect binary tree, the total number of nodes is n = 2^(h+1) − 1 (a perfect binary tree is a full binary tree), of which n0 = 2^h are leaves and n2 = 2^h − 1 have two children, so n0 = n2 + 1 holds. To make a full binary tree from a perfect binary tree, pairs of sibling leaf nodes are removed one pair at a time. Each removal takes away two leaf nodes while the removed pair's parent, an internal node with two children, becomes a leaf node, so the number of leaves and the number of degree-2 nodes each decrease by one and n0 = n2 + 1 also holds for a full binary tree. To make a binary tree with a leaf node that lacks a sibling, a single leaf node is removed from a full binary tree; then one leaf node is removed and one internal node with two children becomes a node with one child, so n0 = n2 + 1 still holds. This relation now covers all non-empty binary trees. With n given nodes, the minimum possible tree height is ⌈log2(n + 1)⌉ − 1, achieved when the tree is a balanced full tree or a perfect tree. With a given height h, the number of nodes can't exceed 2^(h+1) − 1, the number of nodes in a perfect tree; thus n ≤ 2^(h+1) − 1 and hence h ≥ ⌈log2(n + 1)⌉ − 1. A binary tree with L leaves has height at least ⌈log2 L⌉. With a given height h, the number of leaves can't exceed 2^h, the number of leaves at height h in a perfect tree; thus L ≤ 2^h and hence h ≥ ⌈log2 L⌉. In a non-empty binary tree, if n is the total number of nodes and e is the total number of edges, then e = n − 1. This is obvious because each node requires one edge except for the root node. The number of null links (i.e., absent children of the nodes) in a binary tree of n nodes is n + 1. The number of internal nodes in a complete binary tree of n nodes is ⌊n/2⌋. 
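As a quick sanity check of these identities (an illustrative addition, not from the article), the following Python snippet builds perfect trees with the Node and count helpers from the earlier sketches and verifies the node, leaf and degree-2 counts; perfect, leaves and degree2 are assumed names.

def perfect(h):
    """Build a perfect binary tree of height h (in edges); h = -1 gives the empty tree."""
    if h < 0:
        return None
    return Node(None, perfect(h - 1), perfect(h - 1))

def leaves(n):
    """Number of leaf nodes (n0)."""
    if n is None:
        return 0
    if n.left is None and n.right is None:
        return 1
    return leaves(n.left) + leaves(n.right)

def degree2(n):
    """Number of internal nodes with two children (n2)."""
    if n is None:
        return 0
    here = 1 if (n.left is not None and n.right is not None) else 0
    return here + degree2(n.left) + degree2(n.right)

for h in range(6):
    t = perfect(h)
    n = count(t)
    assert n == 2 ** (h + 1) - 1          # nodes in a perfect tree of height h
    assert leaves(t) == 2 ** h == (n + 1) // 2
    assert leaves(t) == degree2(t) + 1    # n0 = n2 + 1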
Combinatorics In combinatorics, one considers the problem of counting the number of full binary trees of a given size. Here the trees have no values attached to their nodes (this would just multiply the number of possible trees by an easily determined factor), and trees are distinguished only by their structure; however, the left and right child of any node are distinguished (if they are different trees, then interchanging them will produce a tree distinct from the original one). The size of the tree is taken to be the number n of internal nodes (those with two children); the other nodes are leaf nodes and there are n + 1 of them. The number of such binary trees of size n is equal to the number of ways of fully parenthesizing a string of n + 1 symbols (representing leaves) separated by n binary operators (representing internal nodes), to determine the argument subexpressions of each operator. For instance, for n = 3 one has to parenthesize a string like X*X*X*X, which is possible in five ways: ((X*X)*X)*X, (X*(X*X))*X, (X*X)*(X*X), X*((X*X)*X), and X*(X*(X*X)). The correspondence to binary trees should be obvious, and the addition of redundant parentheses (around an already parenthesized expression or around the full expression) is disallowed (or at least not counted as producing a new possibility). There is a unique binary tree of size 0 (consisting of a single leaf), and any other binary tree is characterized by the pair of its left and right children; if these have sizes i and j respectively, the full tree has size i + j + 1. Therefore, the number C(n) of binary trees of size n has the following recursive description: C(0) = 1, and C(n) = C(0)·C(n−1) + C(1)·C(n−2) + ... + C(n−1)·C(0) for any positive integer n. It follows that C(n) is the Catalan number of index n. The above parenthesized strings should not be confused with the set of words of length 2n in the Dyck language, which consist only of parentheses in such a way that they are properly balanced. The number of such strings satisfies the same recursive description (each Dyck word of length 2n is determined by the Dyck subword enclosed by the initial '(' and its matching ')' together with the Dyck subword remaining after that closing parenthesis, whose lengths 2i and 2j satisfy i + j = n − 1); this number is therefore also the Catalan number C(n). So there are also five Dyck words of length 6: ()()(), ()(()), (())(), (()()), and ((())). These Dyck words do not correspond to binary trees in the same way. Instead, they are related by the following recursively defined bijection: the Dyck word equal to the empty string corresponds to the binary tree of size 0 with only one leaf. Any other Dyck word can be written as (w1)w2, where w1 and w2 are themselves (possibly empty) Dyck words and where the two written parentheses are matched. The bijection is then defined by letting the words w1 and w2 correspond to the binary trees that are the left and right children of the root. A bijective correspondence can also be defined as follows: enclose the Dyck word in an extra pair of parentheses, so that the result can be interpreted as a Lisp list expression (with the empty list () as only occurring atom); then the dotted-pair expression for that proper list is a fully parenthesized expression (with NIL as symbol and '.' as operator) describing the corresponding binary tree (which is, in fact, the internal representation of the proper list). The ability to represent binary trees as strings of symbols and parentheses implies that binary trees can represent the elements of a free magma on a singleton set. Methods for storing binary trees Binary trees can be constructed from programming language primitives in several ways. Nodes and references In a language with records and references, binary trees are typically constructed by having a tree node structure which contains some data and references to its left child and its right child. Sometimes it also contains a reference to its unique parent. If a node has fewer than two children, some of the child pointers may be set to a special null value, or to a special sentinel node. This method of storing binary trees wastes a fair bit of memory, as the pointers will be null (or point to the sentinel) more than half the time; a more conservative representation alternative is the threaded binary tree. 
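For illustration (an addition, not part of the original article), the nodes-and-references representation just described looks like the following in Python; the field names are assumed, and the optional parent reference shows the variant mentioned above.

class TreeNode:
    """A node holding data plus references to its children and, optionally, its parent."""
    def __init__(self, data, parent=None):
        self.data = data
        self.parent = parent   # optional back-reference to the unique parent
        self.left = None       # None plays the role of the special null value
        self.right = None

root = TreeNode("root")
root.left = TreeNode("left child", parent=root)
root.right = TreeNode("right child", parent=root)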
In languages with tagged unions such as ML, a tree node is often a tagged union of two types of nodes, one of which is a 3-tuple of data, left child, and right child, and the other of which is a "leaf" node, which contains no data and functions much like the null value in a language with pointers. For example, the following line of code in OCaml (an ML dialect) defines a binary tree that stores a character in each node. type chr_tree = Empty | Node of char * chr_tree * chr_tree Arrays Binary trees can also be stored in breadth-first order as an implicit data structure in arrays, and if the tree is a complete binary tree, this method wastes no space. In this compact arrangement, if a node has an index i, its children are found at indices 2i + 1 (for the left child) and 2i + 2 (for the right), while its parent (if any) is found at index ⌊(i − 1)/2⌋ (assuming the root has index zero). Alternatively, with a 1-indexed array, the implementation is simplified, with children found at 2i and 2i + 1, and the parent found at ⌊i/2⌋. This method benefits from more compact storage and better locality of reference, particularly during a preorder traversal. However, it is expensive to grow and wastes space proportional to 2^h − n for a tree of depth h with n nodes. This method of storage is often used for binary heaps. Encodings Succinct encodings A succinct data structure is one which occupies close to minimum possible space, as established by information theoretical lower bounds. The number of different binary trees on n nodes is C(n), the nth Catalan number (assuming we view trees with identical structure as identical). For large n, this is about 4^n; thus we need at least about log2(4^n) = 2n bits to encode it. A succinct binary tree therefore would occupy 2n + o(n) bits. One simple representation which meets this bound is to visit the nodes of the tree in preorder, outputting "1" for an internal node and "0" for a leaf. If the tree contains data, we can simply simultaneously store it in a consecutive array in preorder. This function accomplishes this:
function EncodeSuccinct(node n, bitstring structure, array data) {
    if n = nil then
        append 0 to structure;
    else
        append 1 to structure;
        append n.data to data;
        EncodeSuccinct(n.left, structure, data);
        EncodeSuccinct(n.right, structure, data);
}
The string structure has only 2n + 1 bits in the end, where n is the number of (internal) nodes; we don't even have to store its length. To show that no information is lost, we can convert the output back to the original tree like this:
function DecodeSuccinct(bitstring structure, array data) {
    remove first bit of structure and put it in b
    if b = 1 then
        create a new node n
        remove first element of data and put it in n.data
        n.left = DecodeSuccinct(structure, data)
        n.right = DecodeSuccinct(structure, data)
        return n
    else
        return nil
}
More sophisticated succinct representations allow not only compact storage of trees but even useful operations on those trees directly while they're still in their succinct form. Encoding ordered trees as binary trees There is a natural one-to-one correspondence between ordered trees and binary trees. It allows any ordered tree to be uniquely represented as a binary tree, and vice versa: Let T be a node of an ordered tree, and let B denote T's image in the corresponding binary tree. Then B's left child represents T's first child, while B's right child represents T's next sibling. For example, the ordered tree on the left and the binary tree on the right correspond: In the pictured binary tree, the black, left, edges represent first child, while the blue, right, edges represent next sibling. This representation is called a left-child right-sibling binary tree. 
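As an illustrative sketch (not from the article), the left-child right-sibling encoding can be written in Python as follows, assuming ordered-tree nodes that store a value and a list of children; the names OrderedNode, BinNode and encode are assumptions.

from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class OrderedNode:
    """A node of an ordered (n-ary) tree."""
    value: Any
    children: List["OrderedNode"] = field(default_factory=list)

@dataclass
class BinNode:
    """A node of the corresponding left-child right-sibling binary tree."""
    value: Any
    left: "Optional[BinNode]" = None    # encodes the first child
    right: "Optional[BinNode]" = None   # encodes the next sibling

def encode(t, siblings=()):
    """Map an ordered tree to its left-child right-sibling binary tree."""
    if t is None:
        return None
    b = BinNode(t.value)
    if t.children:                                  # left edge: first child
        b.left = encode(t.children[0], t.children[1:])
    if siblings:                                    # right edge: next sibling
        b.right = encode(siblings[0], siblings[1:])
    return b

# Example: a root with three children becomes a chain hanging off the root's left child.
t = OrderedNode("n", [OrderedNode("a"), OrderedNode("b"), OrderedNode("c")])
b = encode(t)
assert b.left.value == "a" and b.left.right.value == "b" and b.left.right.right.value == "c"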
Insertion

Nodes can be inserted into binary trees between two other nodes or added after a leaf node. In binary trees, a node that is inserted is specified as to which existing node it will be the child of.

Leaf nodes

To add a new node after leaf node A, A assigns the new node as one of its children and the new node assigns node A as its parent.

Internal nodes

Insertion on internal nodes is slightly more complex than on leaf nodes. Say that the internal node is node A and that node B is the child of A. (If the insertion is to insert a right child, then B is the right child of A, and similarly with a left child insertion.) A assigns its child to the new node and the new node assigns its parent to A. Then the new node assigns its child to B and B assigns its parent as the new node.

Deletion

Deletion is the process whereby a node is removed from the tree. Only certain nodes in a binary tree can be removed unambiguously.

Node with zero or one children

Suppose that the node to delete is node A. If A has no children, deletion is accomplished by setting the child of A's parent to null. If A has one child, set the parent of A's child to A's parent and set the child of A's parent to A's child.

Node with two children

In a binary tree, a node with two children cannot be deleted unambiguously. However, in certain binary trees (including binary search trees) these nodes can be deleted, though with a rearrangement of the tree structure.

Traversal

Pre-order, in-order, and post-order traversal visit each node in a tree by recursively visiting each node in the left and right subtrees of the root. Brief descriptions of these traversals follow.

Pre-order

In pre-order we always visit the current node first, then recursively traverse the current node's left subtree, and then recursively traverse the current node's right subtree. Pre-order traversal is topologically sorted, because a parent node is processed before any of its child nodes.

In-order

In in-order we always recursively traverse the current node's left subtree first, then visit the current node, and lastly recursively traverse the current node's right subtree.

Post-order

In post-order we always recursively traverse the current node's left subtree first, then recursively traverse the current node's right subtree, and then visit the current node. Post-order traversal can be useful for obtaining the postfix expression of a binary expression tree.

Depth-first order

In depth-first order, we always attempt to visit the node farthest from the root node that we can, but with the caveat that it must be a child of a node we have already visited. Unlike a depth-first search on graphs, there is no need to remember all the nodes we have visited, because a tree cannot contain cycles. Pre-order is a special case of this. See depth-first search for more information.

Breadth-first order

Contrasting with depth-first order is breadth-first order, which always attempts to visit the node closest to the root that it has not already visited. See breadth-first search for more information. Also called a level-order traversal. In a complete binary tree, a node's breadth-index (i − (2^d − 1)) can be used as traversal instructions from the root, reading bitwise from left to right, starting at bit d − 1, where d is the node's distance from the root (d = ⌊log2(i + 1)⌋) and the node in question is not the root itself (d > 0). When the breadth-index is masked at bit d − 1, the bit values 0 and 1 mean to step either left or right, respectively.
The process continues by successively checking the next bit to the right until there are no more. The rightmost bit indicates the final traversal from the desired node's parent to the node itself. There is a time-space trade-off between iterating a complete binary tree this way versus each node having pointers to its siblings.

References

Donald Knuth (1997). The Art of Computer Programming, Vol. 1: Fundamental Algorithms, Third Edition. Addison-Wesley. Section 2.3, especially subsections 2.3.1–2.3.2 (pp. 318–348).

External links

binary trees entry in the FindStat database
Binary Tree Proof by Induction
Balanced binary search tree on array: how to create bottom-up an Ahnentafel list, or a balanced binary search tree on array
Binary trees and implementation of the same with working code examples
Binary Tree JavaScript Implementation with source code
Backgammon is a two-player board game played with counters and dice on tables boards. It is the most widespread Western member of the large family of tables games, whose ancestors date back nearly 5,000 years to the regions of Mesopotamia and Persia. The earliest record of backgammon itself dates to 17th-century England, being descended from the 16th-century game of Irish. Backgammon is a two-player game of contrary movement in which each player has fifteen pieces known traditionally as men (short for 'tablemen'), but increasingly known as 'checkers' in the US in recent decades, analogous to the other board game of Checkers. The backgammon table pieces move along twenty-four 'points' according to the roll of two dice. The objective of the game is to move the fifteen pieces around the board and be first to bear off, i.e., remove them from the board. The achievement of this while the opponent is still a long way behind results in a triple win known as a backgammon, hence the name of the game. Backgammon involves a combination of strategy and luck (from rolling dice). While the dice may determine the outcome of a single game, the better player will accumulate the better record over a series of many games. With each roll of the dice, players must choose from numerous options for moving their pieces and anticipate possible counter-moves by the opponent. The optional use of a doubling cube allows players to raise the stakes during the game. History Contrary to popular belief, backgammon is not the oldest board game in the world, nor are all tables games variants of backgammon. In fact, the earliest known mention of backgammon was in a letter dated 1635, when it was emerging as a variant of the popular mediaeval Anglo-Scottish game of Irish; the latter was described as a better game. By the 19th century, however, backgammon had spread to Europe, where it rapidly superseded other tables games like Trictrac in popularity, and also to America, where the doubling cube was introduced. In other parts of the world, different tables games such as Nard or Nardy are better known. Tables games Backgammon is a recent member of the large family of tables games that date back to ancient times. The following is an overview of their development up to the time when backgammon appeared on the scene. Ancient history The history of board games can be traced back nearly 5,000 years to archaeological discoveries of the Jiroft culture, located in present-day Iran, the world's oldest game set having been discovered in the region with equipment comprising a dumbbell-shaped board, counters and dice. Although its precise rules are unknown, it has been termed the Game of 20 Squares and Irving Finkel has suggested a possible reconstruction. The Royal Game of Ur from 2600 BC may also be an ancestor or intermediate of modern-day table games like backgammon and is the oldest game for which rules have been handed down. It used tetrahedral dice. Various other board games spanning the 10th to 7th centuries BC have been found throughout modern day Iraq, Syria, Israel, Egypt and western Iran. Sasanian Empire The Persian tables game of nard or nardšir emerged somewhere between the 3rd and 6th century AD, one text (Kār-nāmag ī Ardaxšēr ī Pāpakān) linking it with Ardashir I (r. 224–41), founder of the Sasanian dynasty, whereas another (Wičārišn ī čatrang ud nihišn ī nēw-ardaxšēr) attributes it to Bozorgmehr Bokhtagan, the vizier of Khosrow I (r. 531–79), who is credited with the invention of the game. 
Roman and Byzantine Empires

The earliest identifiable tables game, Tabula, meaning 'table' or 'board', is described in an epigram of Byzantine Emperor Zeno (AD 476–491). The overall aim was to be first to bear one's pieces off; the board had the typical tables layout, with 24 points, 12 on each side; and there were 15 counters per player. However, unlike modern Western backgammon, there were three cubical dice, not two, no bar nor doubling die, and all counters started off the board. Modern backgammon follows the same rules as tabula for hitting a blot and for bearing off; and the rules for re-entering pieces in backgammon are the same as those for initially entering pieces in tabula. The name Tavli (Greek: τάβλι) is still used in Greece for various tables games, which are frequently played in town plateias and cafes. The tabula of Emperor Zeno's time is believed to be a direct descendant of the earlier Roman Ludus duodecim scriptorum ('Game of twelve lines') with the board's middle row of points removed, and only the two outer rows remaining. Ludus duodecim scriptorum used a board with three rows of 12 points each, with the 15 pieces being moved in opposing directions by the two players across three rows according to the roll of the three cubical dice. Little specific text about the gameplay of ludus duodecim scriptorum has survived; it may have been related to the older Ancient Greek dice game Kubeia. The earliest known mention of the game is in Ovid's Ars Amatoria ('The Art of Love'), written between 1 BC and 8 AD. In Roman times, this game was also known as alea.

Western Europe

Tables games first appeared in France during the 11th century and became a favourite pastime of gamblers. In 1254, Louis IX issued a decree prohibiting his court officials and subjects from playing. They were played in Germany in the 12th century, and had reached Iceland by the 13th century. In Spain, the Alfonso X manuscript Libro de los juegos, completed in 1283, describes rules for a number of dice and table games in addition to its discussion of chess. By the 17th century, games at tables had spread to Sweden. A wooden board and counters were recovered from the wreck of the Vasa among the belongings of the ship's officers. Tables games appear widely in paintings of this period, mainly those of Dutch and German painters, such as Van Ostade, Jan Steen, Hieronymus Bosch, and Bruegel. Among surviving artworks is Cardsharps by Caravaggio.

Backgammon

Early backgammon

Backgammon's immediate predecessor was the 16th-century tables game of Irish. Irish was the Anglo-Scottish equivalent of the French Toutes Tables and Spanish Todas Tablas, the latter name first being used in the 1283 El Libro de los Juegos, a translation of Arabic manuscripts by the Toledo School of Translators. Irish had been popular at the Scottish court of James IV and was considered to be "the more serious and solid game" when the variant which became known as Backgammon began to emerge in the first half of the 17th century. In medieval Italy, Barail was played on a backgammon board, with the important difference that both players moved their pieces counter-clockwise, starting from the same side of the board. The game rules for Barail are recorded in a 13th-century manuscript held in the Italian National Library in Florence. The earliest mention of backgammon, under the name Baggammon, was by James Howell in a letter dated 1635. In English, the word "backgammon" is most likely derived from "back" and Middle English gamen, meaning "game" or "play". Meanwhile, the first use documented by the Oxford English Dictionary was in 1650.
In 1666, it is reported that the "old name for backgammon used by Shakespeare and others" was Tables. However, it is clear from Willughby that "tables" was a generic name and that the phrase "playing at tables" was used in a similar way to "playing at cards". The first known rules of "Back Gammon" were produced by Francis Willughby around 1672; they were quickly followed by Charles Cotton in 1674. In the 16th century, Elizabethan laws and church regulations had prohibited "playing at tables" in England, but by the 18th century, Backgammon had superseded Irish and become popular among the English clergy. Edmond Hoyle published A Short Treatise on the Game of Back-Gammon in 1753; this described rules and strategy for the game and was bound together with a similar text on whist. The early form of backgammon was very similar to its predecessor, Irish. The aim, board, number of pieces or "men", direction of play and starting layout were the same as in the modern game. However, there was no doubling die, there was no bar on the board or the bar was not used (men simply being moved off the table when hit) and the scoring was different. The game was won double if either the winning throw was a doublet or the opponent still had men outside the home board. It was won triple if a player bore all men off before any of the opponent's men reached the home board; this was a back-gammon. Some terms, such as "point", "hitting a blot", "home", "doublet", "bear off" and "men", are recognisably the same as in the modern game; others, such as "binding a man" (adding a second man to a point), "binding up the tables" (taking all one's first 6 points), "fore game", "latter game", "nipping a man" (hitting a blot and playing it on forwards) and "playing at length" (using both dice to move one man), are no longer in vogue.

Modern backgammon

By no later than 1850, the rules of play had changed to those used today. Tables boards were now made with a 'bar' in the centre, and men that were hit went onto the bar. Winning double or by "two hits" was achieved by bearing all one's men off before the other player had borne off any; this was now called a gammon. If the winner bore off all men while the loser still had men in his adversary's table, it was a back-gammon and worth "three hits", i.e., triple. The most recent major development in backgammon was the addition of the doubling cube. Doubles had originally been recorded by placing "common parlour matches" on the bar in the centre of the board. A doubling cube was first introduced in the 1920s in New York City among members of gaming clubs in the Lower East Side. The cube required players not only to select the best move in a given position, but also to estimate the probability of winning from that position, transforming backgammon into the expected value-driven game played in the 20th and 21st centuries. The popularity of backgammon surged in the mid-1960s, in part due to the charisma of Prince Alexis Obolensky, who became known as "The Father of Modern Backgammon". "Obe", as he was called by friends, co-founded the International Backgammon Association, which published a set of official rules. He also established the World Backgammon Club of Manhattan, devised a backgammon tournament system in 1963, then organized the first major international backgammon tournament in March 1964, which attracted royalty, celebrities and the press.
The game became a huge fad and was played on college campuses, in discothèques and at country clubs; stockbrokers and bankers began playing at conservative men's clubs. People young and old all across the country dusted off their boards and pieces. Cigarette, liquor and car companies began to sponsor tournaments, and Hugh Hefner held backgammon parties at the Playboy Mansion. Backgammon clubs were formed and tournaments were held, resulting in a World Championship promoted in Las Vegas in 1967. In the second half of the 20th century, new terms were introduced in America, such as 'beaver' and 'checkers' for men (although American backgammon experts Jacoby and Crawford continued to use the older terms as well as the new ones). Most recently, the United States Backgammon Federation (USBGF) was organized in 2009 to repopularize the game in the United States. Board and committee members include many of the top players, tournament directors and writers in the worldwide backgammon community. The USBGF has recently created Standards of Ethical Practice to address issues that tournament rules do not cover. In its country of origin, the UK Backgammon Federation is the national authority and runs a national tournament, the Backgammon Galaxy UK Open, as well as club championships, online leagues and knockout tournaments. Like the USBGF, it is an active member of the World Backgammon Federation (WBF), and its tournament rules have been adopted in their entirety by the WBF.

Software

Backgammon software has been developed not only to play and analyze games, but also for people to play one another over the internet. Dice rolls are provided by random or pseudorandom number generators. Real-time online play began with the First Internet Backgammon Server in July 1992, but there is now a range of options, many of which are commercial.

Rules

Since 2018, backgammon has been overseen internationally by the World Backgammon Federation, which sets the rules of play for international tournaments. Backgammon playing pieces may be termed men, checkers, draughts, stones, counters, pawns, discs, pips, chips, or nips. Checkers is a relatively modern American English term derived from another board game, draughts, which in US English is called checkers. The objective is for players to bear off all their pieces from the board before their opponent can do the same. As the playing time for each individual game is short, it is often played in matches where victory is awarded to the first player to reach a certain number of points.

Board

The dimensions of a board when opened, for a tournament game, should range from a minimum of 44 cm by 55 cm to a maximum of 66 cm by 88 cm.

Setup

Each side of the board has a track of 12 long triangles, called points. The points form a continuous track in the shape of a horseshoe, and are numbered from 1 to 24. In the most commonly used setup, each player begins with fifteen pieces; two are placed on their 24-point, three on their 8-point, and five each on their 13-point and their 6-point. The two players move their pieces in opposing directions, from the 24-point towards the 1-point. Points 1 through 6 are called the home board or inner board, and points 7 through 12 are called the outer board. The 7-point is referred to as the bar point, and the 13-point as the midpoint. Usually the 5-point for each player is called the "golden point".

Movement

To start the game, each player rolls one die, and the player with the higher number moves first using the numbers shown on both dice.
If the players roll the same number, they must roll again, leaving the first pair of dice on the board. The player with the higher number on the second roll moves using only the numbers shown on the dice used for the second roll. Both dice must land completely flat on the right-hand side of the gameboard. The players then take alternate turns, rolling two dice at the beginning of each turn. After rolling the dice, players must, if possible, move their pieces according to the number shown on each die. For example, if the player rolls a 6 and a 3 (denoted as "6-3"), the player must move one checker six points forward, and another or the same checker three points forward. The same checker may be moved twice, as long as the two moves can be made separately and legally: six and then three, or three and then six. If a player rolls two of the same number, called doubles, that player must play each die twice. For example, a roll of 5-5 allows the player to make four moves of five spaces each. On any roll, a player must move according to the numbers on both dice if it is at all possible to do so. If one or both numbers do not allow a legal move, the player forfeits that portion of the roll and the turn ends. If moves can be made according to either one die or the other, but not both, the higher number must be used. If one die is unable to be moved, but such a move is made possible by the moving of the other die, that move is compulsory. In the course of a move, a checker may land on any point that is unoccupied or is occupied by one or more of the player's own checkers. It may also land on a point occupied by exactly one opposing checker, or "blot". In this case, the blot has been "hit" and is placed in the middle of the board on the bar that divides the two sides of the playing surface. A checker may never land on a point occupied by two or more opposing checkers; thus, no point is ever occupied by checkers from both players simultaneously. There is no limit to the number of checkers that can occupy a point or the bar at any given time. Checkers placed on the bar must re-enter the game through the opponent's home board before any other move can be made. A roll of 1 allows the checker to enter on the 24-point (opponent's 1), a roll of 2 on the 23-point (opponent's 2), and so forth, up to a roll of 6 allowing entry on the 19-point (opponent's 6). Checkers may not enter on a point occupied by two or more opposing checkers. Checkers can enter on unoccupied points, or on points occupied by a single opposing checker; in the latter case, the single checker is hit and placed on the bar. A player may not move any other checkers until all checkers belonging to that player on the bar have re-entered the board. If a player has checkers on the bar, but rolls a combination that does not allow any of those checkers to re-enter, the player does not move. If the opponent's home board is completely "closed" (i.e. all six points are each occupied by two or more checkers), there is no roll that will allow a player to enter a checker from the bar, and that player stops rolling and playing until at least one point becomes open (occupied by one or zero checkers) due to the opponent's moves. A turn ends only when the player has removed his or her dice from the board. Prior to this moment, a move can be undone and replayed an unlimited number of times. Bearing off When all of a player's checkers are in that player's home board, that player may start removing them; this is called "bearing off". 
A roll of 1 may be used to bear off a checker from the 1-point, a 2 from the 2-point, and so on. If all of a player's checkers are on points lower than the number showing on a particular die, the player must use that die to bear off one checker from the highest occupied point. For example, if a player rolls a 6 and a 5, but has no checkers on the 6-point and two on the 5-point, then the 6 and the 5 must be used to bear off the two checkers from the 5-point. When bearing off, a player may also move a lower die roll before the higher even if that means the full value of the higher die is not fully utilized. For example, if a player has exactly one checker remaining on the 6-point, and rolls a 6 and a 1, the player may move the 6-point checker one place to the 5-point with the lower die roll of 1, and then bear that checker off the 5-point using the die roll of 6; this is sometimes useful tactically. As before, if there is a way to use all moves showing on the dice by moving checkers within the home board or by bearing them off, the player must do so. If a player's checker is hit while in the process of bearing off, that player may not bear off any others until it has been re-entered into the game and moved into the player's home board, according to the normal movement rules. The first player to bear off all fifteen of their own checkers wins the game. When keeping score in backgammon, the points awarded depend on the scale of the victory. A player who bears off all fifteen pieces when the opponent has borne off at least one, wins a single game worth 1 point. If all fifteen have been borne off before the opponent gets at least one checker off, this is a gammon or double game worth 2 points. A backgammon or triple game is worth 3 points and occurs when the losing player has borne off no pieces and has one or more on the bar and/or in the winner's home table (inner board). Doubling cube To speed up match play and to provide an added dimension for strategy, a doubling cube is usually used. The doubling cube is not a die to be rolled, but rather a marker, with the numbers 2, 4, 8, 16, 32, and 64 inscribed on its sides to denote the current stake. At the start of each game, the doubling cube is placed on the midpoint of the bar with the number 64 showing; the cube is then said to be "centered, on 1". When the cube is still centered, either player may start their turn by proposing that the game be played for twice the current stakes. Their opponent must either accept ("take") the doubled stakes or resign ("drop") the game immediately. Whenever a player accepts doubled stakes, the cube is placed on their side of the board with the corresponding power of two facing upward, to indicate that the right to redouble, which is to offer to continue doubling the stakes, belongs exclusively to that player. If the opponent drops the doubled stakes, they lose the game at the current value of the doubling cube. For instance, if the cube showed the number 2 and a player wanted to redouble the stakes to put it at 4, the opponent choosing to drop the redouble would lose two, or twice the original stake. There is no limit on the number of redoubles. Although 64 is the highest number depicted on the doubling cube, the stakes may rise to 128, 256, and so on. In money games, a player is often permitted to "beaver" when offered the cube, doubling the value of the game again, while retaining possession of the cube. A variant of the doubling cube "beaver" is the "raccoon". 
Players who doubled their opponent, seeing the opponent beaver the cube, may in turn then double the stakes once again ("raccoon") as part of that cube phase before any dice are rolled. The opponent retains the doubling cube. An example of a "raccoon" is the following: White doubles Black to 2 points, Black accepts then beavers the cube to 4 points; White, confident of a win, raccoons the cube to 8 points, while Black retains the cube. Such a move adds greatly to the risk of having to face the doubling cube coming back at 8 times its original value when first doubling the opponent (offered at 2 points, counter offered at 16 points) should the luck of the dice change. Some players may opt to invoke the "Murphy rule" or the "automatic double rule". If both opponents roll the same opening number, the doubling cube is incremented on each occasion yet remains in the middle of the board, available to either player. The Murphy rule may be invoked with a maximum number of automatic doubles allowed and that limit is agreed to prior to a game or match commencing. When a player decides to double the opponent, the value is then a double of whatever face value is shown (e.g. if two automatic doubles have occurred putting the cube up to 4, the first in-game double will be for 8 points). The Murphy rule is not an official rule in backgammon and is rarely, if ever, seen in use at officially sanctioned tournaments. The "Jacoby rule", named after Oswald Jacoby, allows gammons and backgammons to count for their respective double and triple values only if the cube has already been offered and accepted. This encourages a player with a large lead to double, possibly ending the game, rather than to play it to conclusion hoping for a gammon or backgammon. The Jacoby rule is widely used in money play but is not used in match play. The "Crawford rule", named after John R. Crawford, is designed to make match play more equitable for the player in the lead. If a player is one point away from winning a match, that player's opponent will always want to double as early as possible in order to catch up. Whether the game is worth one point or two, the trailing player must win to continue the match. To balance the situation, the Crawford rule requires that when a player first reaches a score one point short of winning, neither player may use the doubling cube for the following game, called the "Crawford game". After the Crawford game, normal use of the doubling cube resumes. The Crawford rule is routinely used in tournament match play. It is possible for a Crawford game to never occur in a match. If the Crawford rule is in effect, then another option is the "Holland rule", named after Tim Holland, which stipulates that after the Crawford game, a player cannot double until after at least two rolls have been played by each side. It was common in tournament play in the 1980s, but is now rarely used. Related games Minor variations to the standard game are common among casual players in certain regions. For instance, only allowing a maximum of five men on any point (Britain) or disallowing "hit-and-run" in the home board (Middle East). There are also many relatives of backgammon within the tables family with different aims, modes of play and strategies. Some are played primarily throughout one geographic region, and others add new tactical elements to the game. 
These other tables games commonly have a different starting position, restrict certain moves, or assign special value to certain dice rolls, but in some geographic games even the rules and direction of movement of the counters change, rendering them fundamentally different. Acey-deucey is a relative of backgammon in which players start with no counters on the board, and must enter them onto the board at the beginning of the game. The roll of 1-2 is given special consideration, allowing the player, after moving the 1 and the 2, to select any desired doubles move. A player also receives an extra turn after a roll of 1-2 or of doubles. Hypergammon is a game in which players have only three counters on the board, starting with one each on the 24, 23 and 22 points. With the aid of a computer this game was solved by Hugh Sconyers around 1994, meaning that exact equities for all cube positions are available for all 32 million possible positions. Nard is a traditional tables game from Persia which may be an ancestor of backgammon. It has a different opening layout in which all 15 pieces start on the 24th point. During play pieces may not be hit and there are no gammons or backgammons. Russian backgammon is a variant described in 1895 as: "...much in vogue in Russia, Germany, and other parts of the Continent...". Players start with no counters on the board, and both players move in the same direction to bear off in a common home board. In this variant, doubles are powerful: four moves are played as in backgammon, followed by four moves according to the difference of the dice value from 7, and then the player has another turn (with the caveat that the turn ends if any portion of it cannot be completed). Gul bara and Tapa are tables games popular in south-eastern Europe and Turkey. The play will iterate among Backgammon, Gul Bara, and Tapa until one of the players reaches a score of 7 or 5. Coan ki is an ancient Chinese tables game. Plakoto, Fevga and Portes are three varieties of tables games played in Greece. Together, the three are referred to as Tavli and are usually played one after the other, in matches of three, five, or seven points. Misere (backgammon to lose) is a variant of backgammon in which the objective is to lose the game. Tavla is a Turkish variation.

Strategy and tactics

Backgammon has an established opening theory, although it is less detailed than that of chess. The tree of positions expands rapidly because of the number of possible dice rolls and the moves available on each turn. Recent computer analysis has offered more insight on opening plays, but the midgame is reached quickly. After the opening, backgammon players frequently rely on some established general strategies, combining and switching among them to adapt to the changing conditions of a game. A blot has the highest probability of being hit when it is 6 points away from an opponent's checker, and several strategies derive from this fact. The most direct one is simply to avoid being hit, trapped, or held in a stand-off. A "running game" describes a strategy of moving as quickly as possible around the board, and is most successful when a player is already ahead in the race. When this fails, one may opt for a "holding game", maintaining control of a point on one's opponent's side of the board, called an anchor. As the game progresses, this player may gain an advantage by hitting an opponent's blot from the anchor, or by rolling large doubles that allow the checkers to escape into a running game.
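The claim above that a blot is most exposed six pips away from an opposing checker can be verified by enumerating the 36 possible dice rolls. The following OCaml sketch is an illustration only, not from the article: it assumes no intervening points are blocked, so it counts idealized direct and combination shots, and the function names are hypothetical.

(* A non-double (a, b) can move a checker a, b or a + b pips;
   a double (a, a) can move it a, 2a, 3a or 4a pips. *)
let can_hit d (a, b) =
  if a = b then d = a || d = 2 * a || d = 3 * a || d = 4 * a
  else d = a || d = b || d = a + b

(* Number of the 36 ordered dice rolls that can reach distance d. *)
let shots d =
  let n = ref 0 in
  for a = 1 to 6 do
    for b = 1 to 6 do
      if can_hit d (a, b) then incr n
    done
  done;
  !n

let () =
  List.iter
    (fun d -> Printf.printf "%2d pips away: %d/36\n" d (shots d))
    [1; 2; 3; 4; 5; 6; 7; 8; 12; 24]

Run over those distances, the count rises from 11/36 at one pip to 17/36 at six pips, then drops sharply for indirect shots such as seven pips (6/36), which is the arithmetic behind the rule of thumb.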
The "priming game" involves building a wall of checkers, called a prime, covering a number of consecutive points. This obstructs opposing checkers that are behind the prime. A checker trapped behind a six-point prime cannot escape until the prime is broken. A particularly successful priming effort may lead to a "blitz", which is a strategy of covering the entire home board as quickly as possible while keeping one's opponent on the bar. Because the opponent has difficulty re-entering from the bar or escaping, a player can quickly gain a running advantage and win the game, often with a gammon. A "backgame" is a strategy that involves holding two or more anchors in an opponent's home board while being substantially behind in the race. The anchors obstruct the opponent's checkers and create opportunities to hit them as they move home. The backgame is generally used only to salvage a game wherein a player is already significantly behind. Using a backgame as an initial strategy is usually unsuccessful. "Duplication" refers to the placement of checkers such that one's opponent needs the same dice rolls to achieve different goals. For example, players may position all of their blots in such a way that the opponent must roll a 2 in order to hit any of them, reducing the probability of being hit more than once. "Diversification" refers to a complementary tactic of placing one's own checkers in such a way that more numbers are useful. Many positions require a measurement of a player's standing in the race, for example, in making a doubling cube decision, or in determining whether to run home and begin bearing off. The minimum total of pips needed to move a player's checkers around and off the board is called the "pip count". The difference between the two players' pip counts is frequently used as a measure of the leader's racing advantage. Players often use mental calculation techniques to determine pip counts in live play. Backgammon is played in two principal variations, "money" and "match" play. Money play means that every point counts evenly and every game stands alone, whether money is actually being wagered or not. "Match" play means that the players play until one side scores (or exceeds) a certain number of points. The format has a significant effect on strategy. In a match, the objective is not to win the maximum possible number of points, but rather to simply reach the score needed to win the match. For example, a player leading a 9-point match by a score of 7–5 would be very reluctant to turn the doubling cube, as their opponent could take and make a costless redouble to 4, placing the entire outcome of the match on the current game. Conversely, the trailing player would double very aggressively, particularly if they have chances to win a gammon in the current game. In money play, the theoretically correct checker play and cube action would never vary based on the score. In 1975, Emmet Keeler and Joel Spencer considered the question of when to double or accept a double using an idealized version of backgammon. In their idealized version, the probability of winning varies randomly over time by Brownian motion, and there are no gammons or backgammons. They showed that the optimal time to offer a double was when the probability of winning reached 80%, and it is wise to accept a double only if the probability of winning is at least 20%. 
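The arithmetic behind those thresholds can be sketched in a simplified "dead cube" form: ignoring gammons and the value of owning the cube after taking, accepting a double risks two points against two, while dropping concedes one. The OCaml fragment below illustrates only that comparison, not Keeler and Spencer's derivation.

(* Expected values, in units of the current (pre-double) stake,
   for the player who has just been offered a double. *)
let drop_ev = -1.0

let take_ev p =                      (* p = probability of winning *)
  2.0 *. (p *. 1.0 +. (1.0 -. p) *. (-1.0))

let () =
  List.iter
    (fun p ->
      Printf.printf "p = %.2f  take = %+.2f  drop = %+.2f\n"
        p (take_ev p) drop_ev)
    [0.15; 0.20; 0.25; 0.30]

In this simplified comparison the break-even point is 25%; crediting the taker with the future right to redouble, as Keeler and Spencer's idealized model does, is what lowers the take point towards the 20% figure quoted above.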
As their assumptions do not correspond perfectly to the real game, actual doubling strategy may vary, but the 80% number still provides a possible rule of thumb. Cheating To reduce the possibility of cheating, most good-quality backgammon sets use precision dice and a dice cup. This reduces the likelihood of loaded dice being used, which is the main way of cheating in face-to-face play. A common method of cheating online is the use of a computer program to find the optimal move on each turn; to combat this, many online sites use move-comparison software that identifies when a player's moves resemble those of a backgammon program. Online cheating has therefore become extremely difficult. Social and competitive play Legality In State of Oregon v. Barr, a 1982 court case pivotal to the continued widespread organised playing of backgammon in the US, the State argued that backgammon is a game of chance and that it was therefore subject to Oregon's stringent gambling laws. Paul Magriel was a key witness for the defence, contradicting Roger Nelson, the expert prosecution witness, by saying, "Game theory, however, really applies to games with imperfect knowledge, where something is concealed, such as poker. Backgammon is not such a game. Everything is in front of you. The person who uses that information in the most effective manner will win." After the closing arguments, Judge Stephen S. Walker concluded that backgammon is a game of skill, not a game of chance, and found the defendant, backgammon tournament director Ted Barr, not guilty of promoting gambling. Club and tournament play Enthusiasts have formed clubs for social play of backgammon. Local clubs may hold informal gatherings, with members meeting at cafés and bars in the evening to play and converse. A few clubs offer additional services, maintaining their own facilities or offering computer analysis of troublesome plays. Around 2003, some club leaders noticed a growth of interest in backgammon, and attributed it to the game's popularity on the Internet. A backgammon chouette permits three or more players to participate in a single game, often for money. One player competes against a team of all the other participants, and positions rotate after each game. Chouette play often permits the use of multiple doubling cubes. Backgammon clubs may also organize tournaments. Large club tournaments sometimes draw competitors from other regions, with final matches viewed by hundreds of spectators. The top players at regional tournaments often compete in major national and international championships. Winners at major tournaments may receive prizes of tens of thousands of dollars. International competition The first world championship competition in backgammon was held in Las Vegas, Nevada in 1967. Tim Holland was declared the winner that year and at the tournament the following year. For unknown reasons, there was no championship in 1970, but in 1971, Tim Holland again won the title. The competition remained in Las Vegas until 1975, when it moved to Paradise Island in the Bahamas. The years 1976, 1977 and 1978 saw "dual" World Championships, one in the Bahamas attended by the Americans, and the European Open Championships in Monte Carlo with mostly European players. In 1979, Lewis Deyong, who had promoted the Bahamas World Championship for the prior three years, suggested that the two events be combined. Monte Carlo was universally acknowledged as the site of the World Backgammon Championship and has remained as such for thirty years. 
The Monte Carlo tournament draws hundreds of players and spectators, and is played over the course of a week. By the 21st century, the largest international tournaments had established the basis of a tour for top professional players. Major tournaments are held yearly worldwide. PartyGaming sponsored the first World Series of Backgammon in 2006 from Cannes and later the "Backgammon Million" tournament held in the Bahamas in January 2007 with a prize pool of one million dollars, the largest for any tournament to date. In 2008, the World Series of Backgammon ran the world's largest international events in London, the UK Masters, the biggest tournament ever held in the UK with 128 international class players; the Nordic Open, which instantly became the largest in the world with around 500 players in all flights and 153 in the championship, and Cannes, which hosted the Riviera Cup, the traditional follow-up tournament to the World Championships. Cannes also hosted the WSOB championship, the WSOB finale, which saw 16 players play three-point shootout matches for €160,000. The event was recorded for television in Europe and aired on Eurosport. The World Backgammon Association (WBA) has been holding the biggest backgammon tour on the circuit since 2007, the "European Backgammon Tour" (EBGT). In 2011, the WBA collaborated with the online backgammon provider Play65 for the 2011 season of the European Backgammon Tour and with "Betfair" in 2012. The 2013 season of the European Backgammon Tour featured 11 stops and 19 qualified players competing for €19,000 in a grand finale in Lefkosa, Northern Cyprus. Gambling When backgammon is played for money, the most common arrangement is to assign a monetary value to each point, and to play to a certain score, or until either player chooses to stop. The stakes are raised by gammons, backgammons, and use of the doubling cube. Backgammon is sometimes available in casinos. Before the commercialization of artificial neural network programs, proposition bets on specific positions were very common among backgammon players and gamblers. As with most gambling games, successful play requires a combination of luck and skill, as a single dice roll can sometimes significantly change the outcome of the game. Mediterranean and West Asian cultural significance Backgammon is considered the national game in many countries of the Eastern Mediterranean: Egypt, Turkey, Cyprus, Syria, Israel, Palestine, Lebanon and Greece. The popularity of the game across the region is primarily an oral tradition, and appears to have been strengthened during the era of the Ottoman Empire, which controlled the whole Eastern Mediterranean in the early modern period. Afif Bahnassi, Syria's director of antiquities, stated in 1988: "For some reason, backgammon became the rage of the Ottoman Empire. It really spread across the Arab world with the Turks, and it stayed behind when they left." The game is a common feature of coffeehouses throughout the region. Since at least the early 19th century, Damascus became well known as the preeminent location for Damascene-style wooden marquetry backgammon sets that have become famous throughout the region. A unique feature of backgammon throughout the region is players' use of mixed Persian and Turkish numbers to announce dice rolls, rather than Arabic or other local languages. 
Related to this phenomenon, the game is frequently referred to as Shesh Besh, which is a rhyming combination of shesh, meaning six in Persian (as well as many historical and current Semitic languages), and besh, meaning five in Turkish. Shesh besh is commonly used to refer to a roll in which a player scores a 5 and a 6 at the same time on the dice. This dice-calling language contains six types of irregular inflections: 1) doubles in pure Persian (6-6 and 3-3); 2a) unequal throws in pure Persian, higher followed by lower; 2b) unequal throws in pure Persian, with a connecting vowel in between; 3) mixed Turkish-Persian numerals (6-5, 5-5, 4-4); 4) alternatives for 2a and 2b in pure Turkish (6-4, 5-4, 5-1, 2-1); and 5) special cases (3-2, 2-2, 1-1), where 3-2 is a version of 2a with a "ba" added for phonetic reasons, 2-2 is for "twice" or two-times-two, and 1-1 is a hybrid Turkish-Persian form in which hep is Turkish for "altogether". In the early 20th century, as use of Classical Arabic was being promoted with the rise of Arab nationalism, efforts were made to replace the Persian-Turkish numbers used in backgammon play.

Studies and analysis

Backgammon has been studied considerably by computer scientists. Neural networks and other approaches have offered significant advances to software for gameplay and analysis. With 15 white and 15 black counters and 24 possible positions, backgammon has 18 quintillion possible legal positions. The first strong computer opponent was BKG 9.8. It was written by Hans Berliner in the late 1970s on a DEC PDP-10 as an experiment in evaluating board game positions. Early versions of BKG played badly even against poor players, but Berliner noticed that its critical mistakes were always at transitional phases in the game. He applied principles of fuzzy logic to improve its play between phases, and by July 1979, BKG 9.8 was strong enough to play against the reigning world champion Luigi Villa. It won the match 7–1, becoming the first computer program to defeat a world champion in any board game. Berliner stated that the victory was largely a matter of luck, as the computer received more favorable dice rolls. In the late 1980s, backgammon programmers found more success with an approach based on artificial neural networks. TD-Gammon, developed by Gerald Tesauro of IBM, was the first of these programs to play near the expert level. Its neural network was trained using temporal difference learning applied to data generated from self-play. According to assessments by Bill Robertie and Kit Woolsey, TD-Gammon's play was at or above the level of the top human players in the world. Woolsey said of the program that "There is no question in my mind that its positional judgment is far better than mine." Tesauro proposed using rollout analysis to compare the performance of computer algorithms against human players. In this method, a Monte-Carlo evaluation of positions is conducted (typically thousands of trials) where different random dice sequences are simulated. The rollout score of the human (or the computer) is the difference of the average game results by following the selected move versus following the best move, then averaged for the entire set of taken moves. Neural network research has resulted in three modern proprietary programs, JellyFish, Snowie and eXtreme Gammon, as well as the shareware BGBlitz and the free software GNU Backgammon. These programs not only play the game, but offer tools for analyzing games and detailed comparisons of individual moves.
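One of the tables such programs rely on is the bearoff database described in the next paragraph, and the one-sided position count quoted there can be reproduced with a few lines of counting. The OCaml sketch below is illustrative and uses a hypothetical function name; it counts the ways up to fifteen checkers can be distributed over the six home-board points, with any remaining checkers treated as already borne off.

(* Number of ways to place at most [checkers] checkers on [points]
   points (checkers not placed are considered borne off). *)
let rec count checkers points =
  if points = 0 then 1
  else begin
    let total = ref 0 in
    for k = 0 to checkers do
      total := !total + count (checkers - k) (points - 1)
    done;
    !total
  end

let () =
  (* Prints 54264 = C(21, 6); the figure of 54,263 quoted below
     presumably excludes the position with every checker already off. *)
  Printf.printf "one-sided bearoff positions: %d\n" (count 15 6)

Squaring the one-sided count gives the roughly three billion two-sided positions, and multiplying by the four cube states gives the roughly twelve billion entries mentioned in the next paragraph.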
The strength of these programs lies in their neural networks' weights tables, which are the result of months of training. Without them, these programs play no better than a human novice. For the bearoff phase, backgammon software usually relies on a database containing precomputed equities for all possible bearoff positions. There are 54,263 bearoff positions for each side. This means there are 54,263² total bearoff positions (roughly 3 billion positions). In 1981 Hugh Sconyers wrote a computer program that solved all positions with nine checkers or fewer for both sides. In the early 1990s Sconyers extended his results to all bearoff positions. For each position there are four results: no cube, roller's cube, center cube and opponent's cube. So his bearoff database contains the exact answers to roughly 12 billion bearoff situations. Computer-versus-computer competitions are also held at Computer Olympiad events.

See also

Backgammon notation
Tables games
Tabletop games

References

The Retail Bookseller: Trade news for the Book Buyer. Vol. 33. Retail Bookseller, 1930.
Bohn, Henry George (1850). The Hand-Book of Games. London: Bohn.
Fiske, Willard (1905). Chess in Iceland and in Icelandic Literature: with Historical Notes on Other Table-Games. Florence: The Florentine Typographical Society.
Howell, James (1835). "LXVII. [Letter] To Master G. Stone" in Familiar Letters. Vol. 2. (1850). London: Humphrey Moseley. p. 105.
Jacoby, Oswald and Crawford, John R. (1970). The Backgammon Book. NY: Viking.
Papahristou, Nikolaos (2015). Decision Making in Multiplayer Environments: Application in backgammon variants. Thessaloniki: University of Macedonia, Doctoral Studies Programme.
Parlett, David (1999). The Oxford History of Board Games. Oxford: OUP.
Wheatley, Henry B. (1666). The Diary of Samuel Pepys. Vol. 9. London and NY: Croscup.
(Critical edition of Willughby's volume containing descriptions of games and pastimes, c.1660–1672. Manuscript in the Middleton collection, University of Nottingham; document reference Mi LM 14.)
Willughby, Francis, A Volume of Plaies containing descriptions of games and pastimes ("Francis Willughby's Book of Games"), c.1660–1672. Manuscript in the Middleton collection, University of Nottingham; document reference Mi LM 14.

External links

Backgammon World Championship - Monte Carlo
Batman is a superhero appearing in American comic books published by DC Comics. The character was created by artist Bob Kane and writer Bill Finger, and debuted in the 27th issue of the comic book Detective Comics on March 30, 1939. In the DC Universe continuity, Batman is the alias of Bruce Wayne, a wealthy American playboy, philanthropist, and industrialist who resides in Gotham City. Batman's origin story features him swearing vengeance against criminals after witnessing the murder of his parents Thomas and Martha as a child, a vendetta tempered with the ideal of justice. He trains himself physically and intellectually, crafts a bat-inspired persona, and monitors the Gotham streets at night. Kane, Finger, and other creators accompanied Batman with supporting characters, including his sidekicks Robin and Batgirl; allies Alfred Pennyworth, James Gordon, and Catwoman; and foes such as the Penguin, the Riddler, Two-Face, and his archenemy, the Joker. Kane conceived Batman in early 1939 to capitalize on the popularity of DC's Superman; although Kane frequently claimed sole creation credit, Finger substantially developed the concept from a generic superhero into something more bat-like. The character received his own spin-off publication, Batman, in 1940. Batman was originally introduced as a ruthless vigilante who frequently killed or maimed criminals, but evolved into a character with a stringent moral code and strong sense of justice. Unlike most superheroes, Batman does not possess any superpowers, instead relying on his intellect, fighting skills, and wealth. The 1960s Batman television series used a camp aesthetic, which continued to be associated with the character for years after the show ended. Various creators worked to return the character to his darker roots in the 1970s and 1980s, culminating with the 1986 miniseries The Dark Knight Returns by Frank Miller. DC has featured Batman in many comic books, including comics published under its imprints such as Vertigo and Black Label. The longest-running Batman comic, Detective Comics, is the longest-running comic book in the United States. Batman is frequently depicted alongside other DC superheroes, such as Superman and Wonder Woman, as a member of organizations such as the Justice League and the Outsiders. In addition to Bruce Wayne, other characters have taken on the Batman persona on different occasions, such as Jean-Paul Valley / Azrael in the 1993–1994 "Knightfall" story arc; Dick Grayson, the first Robin, from 2009 to 2011; and Jace Fox, son of Wayne's ally Lucius, as of 2021. DC has also published comics featuring alternate versions of Batman, including the incarnation seen in The Dark Knight Returns and its successors, the incarnation from the Flashpoint (2011) event, and numerous interpretations from Elseworlds stories. One of the most iconic characters in popular culture, Batman has been listed among the greatest comic book superheroes and fictional characters ever created. He is one of the most commercially successful superheroes, and his likeness has been licensed and featured in various media and merchandise sold around the world; this includes toy lines such as Lego Batman and video games like the Batman: Arkham series. 
Batman has been adapted in live-action and animated incarnations, including the 1960s Batman television series played by Adam West and in film by Michael Keaton in Batman (1989), Batman Returns (1992), and The Flash (2023), Val Kilmer in Batman Forever (1995), George Clooney in Batman & Robin (1997), Christian Bale in The Dark Knight trilogy (2005–2012), Ben Affleck in the DC Extended Universe (2016–2023), and Robert Pattinson in The Batman (2022). Kevin Conroy, Diedrich Bader, Jensen Ackles, Troy Baker, and Will Arnett, among others, have provided the character's voice. Publication history Creation In early 1939, the success of Superman in Action Comics prompted editors at National Comics Publications (the future DC Comics) to request more superheroes for its titles. In response, Bob Kane created "the Bat-Man". Collaborator Bill Finger recalled that "Kane had an idea for a character called 'Batman,' and he'd like me to see the drawings. I went over to Kane's, and he had drawn a character who looked very much like Superman with kind of ...reddish tights, I believe, with boots ...no gloves, no gauntlets ...with a small domino mask, swinging on a rope. He had two stiff wings that were sticking out, looking like bat wings. And under it was a big sign ...BATMAN". The bat-wing-like cape was suggested by Bob Kane, inspired as a child by Leonardo da Vinci's sketch of an ornithopter flying device. Finger suggested giving the character a cowl instead of a simple domino mask, a cape instead of wings, and gloves; he also recommended removing the red sections from the original costume. Finger said he devised the name Bruce Wayne for the character's secret identity: "Bruce Wayne's first name came from Robert the Bruce, the Scottish patriot. Wayne, being a playboy, was a man of gentry. I searched for a name that would suggest colonialism. I tried Adams, Hancock ...then I thought of Mad Anthony Wayne." He later said his suggestions were influenced by Lee Falk's popular The Phantom, a syndicated newspaper comic-strip character with which Kane was also familiar. Kane and Finger drew upon contemporary 1930s popular culture for inspiration regarding much of the Bat-Man's look, personality, methods, and weaponry. Details find predecessors in pulp fiction, comic strips, newspaper headlines, and autobiographical details referring to Kane himself. As an aristocratic hero with a double identity, Batman had predecessors in the Scarlet Pimpernel (created by Baroness Emmuska Orczy, 1903) and Zorro (created by Johnston McCulley, 1919). Like them, Batman performed his heroic deeds in secret, averted suspicion by playing aloof in public, and marked his work with a signature symbol. Kane noted the influence of the films The Mark of Zorro (1920) and The Bat Whispers (1930) in the creation of the character's iconography. Finger, drawing inspiration from pulp heroes like Doc Savage, The Shadow, Dick Tracy, and Sherlock Holmes, made the character a master sleuth. In his 1989 autobiography, Kane detailed Finger's contributions to Batman's creation: Golden Age Subsequent creation credit Kane signed away ownership in the character in exchange for, among other compensation, a mandatory byline on all Batman comics. This byline did not originally say "Batman created by Bob Kane"; his name was simply written on the title page of each story. The name disappeared from the comic book in the mid-1960s, replaced by credits for each story's actual writer and artists. 
In the late 1970s, when Jerry Siegel and Joe Shuster began receiving a "created by" credit on the Superman titles, along with William Moulton Marston being given the byline for creating Wonder Woman, Batman stories began saying "Created by Bob Kane" in addition to the other credits. Finger did not receive the same recognition. While he had received credit for other DC work since the 1940s, he began, in the 1960s, to receive limited acknowledgment for his Batman writing; in the letters page of Batman #169 (February 1965) for example, editor Julius Schwartz names him as the creator of the Riddler, one of Batman's recurring villains. However, Finger's contract left him only with his writing page rate and no byline. Kane wrote, "Bill was disheartened by the lack of major accomplishments in his career. He felt that he had not used his creative potential to its fullest and that success had passed him by." At the time of Finger's death in 1974, DC had not officially credited Finger as Batman co-creator. Jerry Robinson, who also worked with Finger and Kane on the strip at this time, has criticized Kane for failing to share the credit, and recalled Finger resenting his position in a 2005 interview with The Comics Journal. Kane initially rebutted Finger's claims to having co-created the character, writing in a 1965 open letter to fans that "it seemed to me that Bill Finger has given out the impression that he and not myself created the Batman, as well as Robin and all the other leading villains and characters. This statement is fraudulent and entirely untrue." Kane himself also commented on Finger's lack of credit: "The trouble with being a 'ghost' writer or artist is that you must remain rather anonymously without 'credit'. However, if one wants the 'credit', then one has to cease being a 'ghost' or follower and become a leader or innovator." In 1989, Kane revisited Finger's situation in an interview. In September 2015, DC Entertainment revealed that Finger would be receiving credit for his role in Batman's creation on the 2016 superhero film Batman v Superman: Dawn of Justice and the second season of Gotham after a deal was worked out between the Finger family and DC. Finger received credit as a creator of Batman for the first time in a comic in October 2015 with Batman and Robin Eternal #3 and Batman: Arkham Knight Genesis #3. The updated acknowledgment for the character appeared as "Batman created by Bob Kane with Bill Finger".

Early years

The first Batman story, "The Case of the Chemical Syndicate", was published in Detective Comics #27 (cover dated May 1939). It was inspired, some say plagiarized, by the 60-page story "Partners of Peril" in The Shadow #113, which was written by Theodore Tinsley and illustrated by Tom Lovell. Finger said, "Batman was originally written in the style of the pulps", and this influence was evident in Batman showing little remorse over killing or maiming criminals. Batman proved a hit character, and he received his own solo title in 1940 while continuing to star in Detective Comics. By that time, Detective Comics was the top-selling and most influential publisher in the industry; Batman and the company's other major hero, Superman, were the cornerstones of the company's success. The two characters were featured side by side as the stars of World's Finest Comics, which was originally titled World's Best Comics when it debuted in fall 1940. Creators including Jerry Robinson and Dick Sprang also worked on the strips during this period.
Over the course of the first few Batman strips elements were added to the character and the artistic depiction of Batman evolved. Kane noted that within six issues he drew the character's jawline more pronounced, and lengthened the ears on the costume. "About a year later he was almost the full figure, my mature Batman", Kane said. Batman's characteristic utility belt was introduced in Detective Comics #29 (July 1939), followed by the boomerang-like batarang and the first bat-themed vehicle, the Batplane, in #31 (September 1939). The character's origin was revealed in #33 (November 1939), unfolding in a two-page story that establishes the brooding persona of Batman, a character driven by the death of his parents. Written by Finger, it depicts a young Bruce Wayne witnessing his parents' murder at the hands of a mugger. Days later, at their grave, the child vows that "by the spirits of my parents [I will] avenge their deaths by spending the rest of my life warring on all criminals". The early, pulp-inflected portrayal of Batman started to soften in Detective Comics #38 (April 1940) with the introduction of Robin, Batman's junior counterpart. Robin was introduced, based on Finger's suggestion, because Batman needed a "Watson" with whom Batman could talk. Sales nearly doubled, despite Kane's preference for a solo Batman, and it sparked a proliferation of "kid sidekicks". The first issue of the solo spin-off series Batman was notable not only for introducing two of his most persistent enemies, the Joker and Catwoman, but for a pre-Robin inventory story, originally meant for Detective Comics #38, in which Batman shoots some monstrous giants to death. That story prompted editor Whitney Ellsworth to decree that the character could no longer kill or use a gun. By 1942, the writers and artists behind the Batman comics had established most of the basic elements of the Batman mythos. In the years following World War II, DC Comics "adopted a postwar editorial direction that increasingly de-emphasized social commentary in favor of lighthearted juvenile fantasy". The impact of this editorial approach was evident in Batman comics of the postwar period; removed from the "bleak and menacing world" of the strips of the early 1940s, Batman was instead portrayed as a respectable citizen and paternal figure that inhabited a "bright and colorful" environment. Silver and Bronze Ages 1950s and early 1960s Batman was one of the few superhero characters to be continuously published as interest in the genre waned during the 1950s. In the story "The Mightiest Team in the World" in Superman #76 (June 1952), Batman teams up with Superman for the first time and the pair discover each other's secret identity. Following the success of this story, World's Finest Comics was revamped so it featured stories starring both heroes together, instead of the separate Batman and Superman features that had been running before. The team-up of the characters was "a financial success in an era when those were few and far between"; this series of stories ran until the book's cancellation in 1986. Batman comics were among those criticized when the comic book industry came under scrutiny with the publication of psychologist Fredric Wertham's book Seduction of the Innocent in 1954. Wertham's thesis was that children imitated crimes committed in comic books, and that these works corrupted the morals of the youth. Wertham criticized Batman comics for their supposed homosexual overtones and argued that Batman and Robin were portrayed as lovers. 
Wertham's criticisms raised a public outcry during the 1950s, eventually leading to the establishment of the Comics Code Authority, a code that is no longer in use by the comic book industry. The tendency towards a "sunnier Batman" in the postwar years intensified after the introduction of the Comics Code. Scholars have suggested that the characters of Batwoman (in 1956) and the pre-Barbara Gordon Bat-Girl (in 1961) were introduced in part to refute the allegation that Batman and Robin were gay, and the stories took on a campier, lighter feel. In the late 1950s, Batman stories gradually became more science fiction-oriented, an attempt at mimicking the success of other DC characters that had dabbled in the genre. New characters such as Batwoman, the original Bat-Girl, Ace the Bat-Hound, and Bat-Mite were introduced. Batman's adventures often involved odd transformations or bizarre space aliens. In 1960, Batman debuted as a member of the Justice League of America in The Brave and the Bold #28 (February 1960), and went on to appear in several Justice League comic book series starting later that same year. "New Look" Batman and camp By 1964, sales of Batman titles had fallen drastically. Bob Kane noted that, as a result, DC was "planning to kill Batman off altogether". In response to this, editor Julius Schwartz was assigned to the Batman titles. He presided over drastic changes, beginning with 1964's Detective Comics #327 (May 1964), which was cover-billed as the "New Look". Schwartz introduced changes designed to make Batman more contemporary, and to return him to more detective-oriented stories. He brought in artist Carmine Infantino to help overhaul the character. The Batmobile was redesigned, and Batman's costume was modified to incorporate a yellow ellipse behind the bat-insignia. The space aliens, time travel, and characters of the 1950s such as Batwoman, Ace the Bat-Hound, and Bat-Mite were retired. Bruce Wayne's butler Alfred was killed off (though his death was quickly reversed) while a new female relative for the Wayne family, Aunt Harriet Cooper, came to live with Bruce Wayne and Dick Grayson. The debut of the Batman television series in 1966 had a profound influence on the character. The success of the series increased sales throughout the comic book industry, and Batman reached a circulation of close to 900,000 copies. Elements such as the character of Batgirl and the show's campy nature were introduced into the comics; the series also initiated the return of Alfred. Although both the comics and TV show were successful for a time, the camp approach eventually wore thin and the show was canceled in 1968. In the aftermath, the Batman comics themselves lost popularity once again. As Julius Schwartz noted, "When the television show was a success, I was asked to be campy, and of course when the show faded, so did the comic books." Starting in 1969, writer Dennis O'Neil and artist Neal Adams made a deliberate effort to distance Batman from the campy portrayal of the 1960s TV series and to return the character to his roots as a "grim avenger of the night". O'Neil said his idea was "simply to take it back to where it started. I went to the DC library and read some of the early stories. I tried to get a sense of what Kane and Finger were after." O'Neil and Adams first collaborated on the story "The Secret of the Waiting Graves" in Detective Comics #395 (January 1970). 
Few stories were true collaborations between O'Neil, Adams, Schwartz, and inker Dick Giordano, and in actuality these men were mixed and matched with various other creators during the 1970s; nevertheless, the influence of their work was "tremendous". Giordano said: "We went back to a grimmer, darker Batman, and I think that's why these stories did so well ..." While the work of O'Neil and Adams was popular with fans, the acclaim did little to improve declining sales; the same held true for a similarly acclaimed run by writer Steve Englehart and penciler Marshall Rogers in Detective Comics #471–476 (August 1977 – April 1978), which went on to influence the 1989 movie Batman and to be adapted for Batman: The Animated Series, which debuted in 1992. Regardless, circulation continued to drop through the 1970s and 1980s, hitting an all-time low in 1985. Modern Age The Dark Knight Returns Frank Miller's limited series The Dark Knight Returns (February – June 1986) returned the character to his darker roots, both in atmosphere and tone. The comic book, which tells the story of a 55-year-old Batman coming out of retirement in a possible future, reinvigorated interest in the character. The Dark Knight Returns was a financial success, sparked a major resurgence in the character's popularity, and has since become one of the medium's most noted touchstones. That year, Dennis O'Neil took over as editor of the Batman titles and set the template for the portrayal of Batman following DC's status quo-altering 12-issue miniseries Crisis on Infinite Earths. O'Neil operated under the assumption that he was hired to revamp the character and as a result tried to instill a different tone in the books than had gone before. One outcome of this new approach was the "Year One" storyline in Batman #404–407 (February – May 1987), in which Frank Miller and artist David Mazzucchelli redefined the character's origins. Writer Alan Moore and artist Brian Bolland continued this dark trend with 1988's 48-page one-shot issue Batman: The Killing Joke, in which the Joker, attempting to drive Commissioner Gordon insane, cripples Gordon's daughter Barbara, and then kidnaps and tortures the commissioner, physically and psychologically. The Batman comics garnered major attention in 1988 when DC Comics created a 900 number for readers to call to vote on whether Jason Todd, the second Robin, lived or died. Voters decided in favor of Jason's death by a narrow margin of 28 votes (see Batman: A Death in the Family). Knightfall The 1993 "Knightfall" story arc introduced a new villain, Bane, who critically injures Batman after pushing him to the limits of his endurance. Jean-Paul Valley, known as Azrael, is called upon to wear the Batsuit during Bruce Wayne's convalescence. Writers Doug Moench, Chuck Dixon, and Alan Grant worked on the Batman titles during "Knightfall", and would also contribute to other Batman crossovers throughout the 1990s. 1998's "Cataclysm" storyline served as the precursor to 1999's "No Man's Land", a year-long storyline that ran through all the Batman-related titles, dealing with the effects of an earthquake-ravaged Gotham City. At the conclusion of "No Man's Land", O'Neil stepped down as editor and was replaced by Bob Schreck. Another writer who rose to prominence on the Batman comic series was Jeph Loeb. 
Along with longtime collaborator Tim Sale, Loeb wrote two miniseries (The Long Halloween and Dark Victory) that pit an early-in-his-career version of Batman against his entire rogues gallery (including Two-Face, whose origin was re-envisioned by Loeb) while dealing with various mysteries involving serial killers Holiday and the Hangman. 21st century Hush and Under the Hood In 2003, Loeb teamed with artist Jim Lee to work on another mystery arc: "Batman: Hush" for the main Batman book. The 12-issue storyline has Batman and Catwoman teaming up against Batman's entire rogues gallery, including an apparently resurrected Jason Todd, while seeking to find the identity of the mysterious supervillain Hush. While the character of Hush failed to catch on with readers, the arc was a sales success for DC. The series became #1 on the Diamond Comic Distributors sales chart for the first time since Batman #500 (October 1993), and Todd's appearance laid the groundwork for Judd Winick's subsequent run as writer on Batman, with another multi-issue arc, "Under the Hood", which ran from Batman #637–650 (April 2005 – April 2006). All Star Batman & Robin the Boy Wonder In 2005, DC launched All Star Batman & Robin the Boy Wonder, a stand-alone comic book miniseries set outside the main DC Universe continuity. Written by Frank Miller and drawn by Jim Lee, the series was a commercial success for DC Comics, although it was widely panned by critics for its writing, characterization, and strong depictions of violence. Grant Morrison's Batman Run Starting in 2006, Grant Morrison and Paul Dini were the regular writers of Batman and Detective Comics, with Morrison reincorporating controversial elements of Batman lore. Most notable of these elements were the science fiction-themed storylines of the 1950s Batman comics, which Morrison revised as hallucinations Batman experienced under the influence of various mind-bending gases and extensive sensory deprivation training. Morrison's run climaxed with "Batman R.I.P.", which brought Batman up against the villainous "Black Glove" organization, a group that sought to drive Batman into madness. "Batman R.I.P." segued into Final Crisis (also written by Morrison), which saw the apparent death of Batman at the hands of Darkseid. In the 2009 miniseries Batman: Battle for the Cowl, Wayne's former protégé Dick Grayson becomes the new Batman, and Wayne's son Damian becomes the new Robin. In June 2009, Judd Winick returned to writing Batman, while Grant Morrison was given their own series, titled Batman and Robin. In 2010, the storyline Batman: The Return of Bruce Wayne saw Bruce travel through history, eventually returning to the present day. Although he reclaimed the mantle of Batman, he allowed Grayson to continue operating as Batman as well. Bruce decided to take his crime-fighting cause global, which became the central focus of Batman Incorporated. DC Comics would later announce that Grayson would be the main character in Batman, Detective Comics, and Batman and Robin, while Wayne would be the main character in Batman Incorporated. Bruce also appeared in another ongoing series, Batman: The Dark Knight. The New 52 In September 2011, DC Comics' entire line of superhero comic books, including its Batman franchise, was cancelled and relaunched with new #1 issues as part of The New 52 reboot. Bruce Wayne is the only character to be identified as Batman and is featured in Batman, Detective Comics, Batman and Robin, and Batman: The Dark Knight. 
Dick Grayson returns to the mantle of Nightwing and appears in his own ongoing series. While many characters have their histories significantly altered to attract new readers, Batman's history remains mostly intact. Batman Incorporated was relaunched in 2012–2013 to complete the "Leviathan" storyline. With the beginning of The New 52, Scott Snyder was the writer of the Batman title. His first major story arc was "Night of the Owls", in which Batman confronts the Court of Owls, a secret society that has controlled Gotham for centuries. The second story arc was "Death of the Family", in which the Joker returns to Gotham and simultaneously attacks each member of the Batman family. The third story arc was "Batman: Zero Year", which redefined Batman's origin in The New 52. It followed Batman vol. 2 #0, published in June 2012, which explored the character's early years. The final storyline before the Convergence (2015) event was "Endgame", depicting a supposed final battle between Batman and the Joker after the Joker unleashes the deadly Endgame virus on Gotham City. The storyline ends with Batman and the Joker's supposed deaths. Starting with Batman vol. 2 #41, Commissioner James Gordon takes over Bruce's mantle as a new, state-sanctioned, robotic Batman, debuting in the Free Comic Book Day special comic Divergence. However, Bruce Wayne is soon revealed to be alive, albeit with almost total amnesia of his life as Batman, remembering his life as Bruce Wayne only through what he has learned from Alfred. Bruce Wayne finds happiness and proposes to his girlfriend, Julie Madison, but Mr. Bloom gravely injures Jim Gordon, takes control of Gotham City, and threatens to destroy the city by energizing a particle reactor to create a "strange star" that would swallow it. Bruce Wayne discovers the truth that he was Batman, and after talking to a stranger who smiles a lot (heavily implied to be the amnesiac Joker), he forces Alfred to implant his memories as Batman, at the cost of his memories as the reborn Bruce Wayne. He returns and helps Jim Gordon defeat Mr. Bloom and shut down the reactor. Gordon gets his job back as commissioner, and the government Batman project is shut down. In 2015, DC Comics released The Dark Knight III: The Master Race, the sequel to Frank Miller's The Dark Knight Returns and The Dark Knight Strikes Again. DC Rebirth and Infinite Frontier In June 2016, the DC Rebirth event relaunched DC Comics' entire line of comic book titles. Batman was relaunched with a one-shot issue entitled Batman: Rebirth #1 (August 2016). The series then began shipping twice-monthly as a third volume, starting with Batman vol. 3 #1 (August 2016). The third volume of Batman was written by Tom King, and artwork was provided by David Finch and Mikel Janín. The Batman series introduced two vigilantes, Gotham and Gotham Girl. Detective Comics resumed its original numbering system starting with June 2016's #934, with its New 52 run retroactively labeled as volume 2 (issues #1–52); the Batman title's New 52 issues were likewise labeled as volume 2 (#1–52). Writer James Tynion IV and artists Eddy Barrows and Alvaro Martinez worked on Detective Comics #934, and the series initially featured a team consisting of Tim Drake, Stephanie Brown, Cassandra Cain, and Clayface, led by Batman and Batwoman. DC Comics ended the DC Rebirth branding in December 2017, opting to include everything under a larger DC Universe banner and naming. 
The continuity established by DC Rebirth continues across DC's comic book titles, including volume 1 of Detective Comics and volume 3 of Batman. After the conclusion of Batman vol. 3 #85, a new creative team consisting of writer James Tynion IV and artists Tony S. Daniel and Danny Miki replaced Tom King, David Finch, and Mikel Janín. Following Tynion's departure from DC Comics, Joshua Williamson, who previously wrote the backup story in issue #106, briefly became the new head writer in December 2021, starting with issue #118. Chip Zdarsky then became the head writer, with artist Jorge Jimenez returning after having previously illustrated parts of Tynion's run. Their run began with issue #125, released on July 5, 2022, which opens with "Failsafe", a six-issue story arc. Characterization Bruce Wayne Batman's secret identity is Bruce Wayne, a wealthy American industrialist. As a child, Bruce witnessed the murder of his parents, Dr. Thomas Wayne and Martha Wayne, which ultimately led him to craft the Batman persona and seek justice against criminals. He resides on the outskirts of Gotham City in his personal residence, Wayne Manor. Wayne averts suspicion by acting the part of a superficial playboy idly living off his family's fortune and the profits of Wayne Enterprises, his inherited conglomerate. He supports philanthropic causes through his nonprofit Wayne Foundation, which in part addresses the social issues that encourage crime and assists their victims, but he is more widely known as a celebrity socialite. In public, he frequently appears in the company of high-status women, which encourages tabloid gossip, and he feigns near-drunkenness by consuming large quantities of disguised ginger ale; Wayne is actually a strict teetotaler who abstains in order to maintain his physical and mental prowess. Although Bruce Wayne leads an active romantic life, his vigilante activities as Batman account for most of his time. Various modern stories have portrayed the extravagant playboy image of Bruce Wayne as a facade. This is in contrast to the Post-Crisis Superman, whose Clark Kent persona is the true identity, while the Superman persona is the facade. In Batman Unmasked, a television documentary about the psychology of the character, behavioral scientist Benjamin Karney notes that Batman's personality is driven by Bruce Wayne's inherent humanity; that "Batman, for all its benefits and for all of the time Bruce Wayne devotes to it, is ultimately a tool for Bruce Wayne's efforts to make the world better". Bruce Wayne's principles include the desire to prevent future harm and a vow not to kill. Bruce Wayne believes that our actions define us, that we fail for a reason, and that anything is possible. Writers of Batman and Superman stories have often compared and contrasted the two. Interpretations vary depending on the writer, the story, and the timing. Grant Morrison notes that both heroes "believe in the same kind of things" despite the day/night contrast their heroic roles display. Morrison notes an equally stark contrast in their real identities. Bruce Wayne and Clark Kent belong to different social classes: "Bruce has a butler, Clark has a boss." T. James Musler's book Unleashing the Superhero in Us All explores the extent to which Bruce Wayne's vast personal wealth is important in his life story, and the crucial role it plays in his efforts as Batman. 
Will Brooker notes in his book Batman Unmasked that "the confirmation of the Batman's identity lies with the young audience ...he doesn't have to be Bruce Wayne; he just needs the suit and gadgets, the abilities, and most importantly the morality, the humanity. There's just a sense about him: 'they trust him ...and they're never wrong." Personality Batman's primary character traits can be summarized as "wealth; physical prowess; deductive abilities and obsession". The details and tone of Batman comic books have varied over the years with different creative teams. Dennis O'Neil noted that character consistency was not a major concern during early editorial regimes: "Julie Schwartz did a Batman in Batman and Detective and Murray Boltinoff did a Batman in the Brave and the Bold and apart from the costume they bore very little resemblance to each other. Julie and Murray did not want to coordinate their efforts, nor were they asked to do so. Continuity was not important in those days." The driving force behind Bruce Wayne's character is his parents' murder and their absence. Bob Kane and Bill Finger discussed Batman's background and decided that "there's nothing more traumatic than having your parents murdered before your eyes". Despite his trauma, he sets his mind on studying to become a scientist and on training his body to physical perfection in order to fight crime in Gotham City as Batman, a persona inspired by Wayne's insight into the criminal mind. He also speaks over 40 languages. Another of Batman's characterizations is that of a vigilante; in order to stop evil that started with the death of his parents, he must sometimes break the law himself. Although rendered differently as it is retold by different artists, Batman's origin has kept its details and prime components essentially unvaried in the comic books; the "reiteration of the basic origin events holds together otherwise divergent expressions". The origin is the source of the character's traits and attributes, which play out in many of the character's adventures. Batman is often treated as a vigilante by other characters in his stories. Frank Miller views the character as "a dionysian figure, a force for anarchy that imposes an individual order". Dressed as a bat, Batman deliberately cultivates a frightening persona to aid him in crime-fighting, exploiting a fear that originates in criminals' own guilty consciences. Miller is often credited with reintroducing anti-heroic traits into Batman's characterization, such as his brooding personality, willingness to use violence and torture, and increasingly alienated behavior. Roughly a year after his debut, and following the introduction of Robin, Batman was changed in 1940 after DC editor Whitney Ellsworth felt the character would be tainted by his lethal methods; DC established its own ethical code, and Batman was subsequently retconned to have a stringent moral code, which has stayed with the character ever since. Miller's Batman was closer to the original pre-Robin version, who was willing to kill criminals if necessary. Others On several occasions, former Robin Dick Grayson has served as Batman, most notably in 2009 while Wayne was believed dead; he continued as a second Batman even after Wayne returned in 2010. As part of DC's 2011 continuity relaunch, Grayson returned to being Nightwing following the Flashpoint crossover event. 
In an interview with IGN, Morrison detailed that having Dick Grayson as Batman and Damian Wayne as Robin represented a "reverse" of the normal dynamic between Batman and Robin, with "a more light-hearted and spontaneous Batman and a scowling, badass Robin". Morrison explained their intentions for the new characterization of Batman: "Dick Grayson is kind of this consummate superhero. The guy has been Batman's partner since he was a kid, he's led the Teen Titans, and he's trained with everybody in the DC Universe. So he's a very different kind of Batman. He's a lot easier; a lot looser and more relaxed." Over the years, numerous others have assumed the name of Batman or officially taken over for Bruce during his leaves of absence. Jean-Paul Valley, also known as Azrael, assumed the cowl after the events of the Knightfall saga. Jim Gordon donned a mecha-suit after the events of Batman: Endgame, and served as Batman in 2015 and 2016. In 2021, as part of the Fear State crossover event, Lucius Fox's son Jace Fox succeeds Bruce as Batman after Batman is declared dead, as depicted in the series I Am Batman. Additionally, members of the group Batman Incorporated, Bruce Wayne's experiment at franchising his brand of vigilantism, have at times stood in as the official Batman in cities around the world. Various others have also taken up the role of Batman in stories set in alternative universes and possible futures, including several of Bruce Wayne's former protégés. Supporting characters Batman's interactions with both villains and cohorts have, over time, given rise to a strong supporting cast of characters. Enemies Batman faces a variety of foes ranging from common criminals to outlandish supervillains. Many of them mirror aspects of the Batman's character and development, often having tragic origin stories that lead them to a life of crime. These foes are commonly referred to as Batman's rogues gallery. Batman's "most implacable foe" is the Joker, a homicidal maniac with a clown-like appearance. The Joker is considered by critics to be his perfect adversary, since he is the antithesis of Batman in personality and appearance; the Joker has a maniacal demeanor with a colorful appearance, while Batman has a serious and resolute demeanor with a dark appearance. As a "personification of the irrational", the Joker represents "everything Batman [opposes]". Other long-time recurring foes that are part of Batman's rogues gallery include Catwoman (a cat burglar anti-heroine who is variously an ally and romantic interest), the Penguin, Ra's al Ghul, Two-Face, the Riddler, the Scarecrow, Mr. Freeze, Poison Ivy, Harley Quinn, Bane, Clayface, and Killer Croc, among others. Many of Batman's adversaries are psychiatric patients at Arkham Asylum. Allies Alfred Batman's butler, Alfred Pennyworth, first appeared in Batman #16 (1943). He serves as Bruce Wayne's loyal father figure and is one of the few persons to know his secret identity. Alfred raised Bruce after his parents' death and knows him on a very personal level. He is sometimes portrayed as a sidekick to Batman and the only other resident of Wayne Manor aside from Bruce. The character "[lends] a homely touch to Batman's environs and [is] ever ready to provide a steadying and reassuring hand" to the hero and his sidekick. 
"Batman family" The informal name "Batman family" is used for a group of characters closely allied with Batman, generally masked vigilantes who either have been trained by Batman or operate in Gotham City with his tacit approval. Currently, the Batfamily consists of Jean-Paul Valley/Azrael, Michael Lane/Azrael, Barbara Gordon/Batgirl/Oracle, Cassandra Cain/Batgirl/Orphan, Stephanie Brown/Batgirl/Spoiler, Luke Fox/Batwing, Kate Kane/Batwoman, Harper Row/Bluebird, Selina Kyle/Catwoman, Minkhoa Khan/Ghost-Maker, Harleen Quinzel/Harley Quinn, Dick Grayson/Nightwing, Jason Todd/Red Hood, Damian Wayne/Robin, Tim Drake/Robin, and Duke Thomas/The Signal. Civilians Lucius Fox, a technology specialist and Bruce Wayne's business manager who is well aware of his employer's clandestine vigilante activities (Lucius' son Luke would later become aware of Bruce's secret identity and adopt the superhero mantle of Batwing as well as become a member of the Justice League); Dr. Leslie Thompkins, a family friend who like Alfred became a surrogate parental figure to Bruce Wayne after the deaths of his parents, and is also aware of his secret identity; Vicki Vale, an investigative journalist who often reports on Batman's activities for the Gotham Gazette; Ace the Bat-Hound, Batman's canine partner who was mainly active in the 1950s and 1960s; and Bat-Mite, an extra-dimensional imp mostly active in the 1960s who idolizes Batman. GCPD As Batman's ally in the Gotham City police, Commissioner James "Jim" Gordon debuted along with Batman in Detective Comics #27 and has been a consistent presence ever since. As a crime-fighting everyman, he shares Batman's goals while offering, much as the character of Dr. Watson does in Sherlock Holmes stories, a normal person's perspective on the work of Batman's extraordinary genius. Justice League Batman is at times a member of superhero teams such as the Justice League of America and the Outsiders. Batman has often been paired in adventures with his Justice League teammate Superman, notably as the co-stars of World's Finest Comics and Superman/Batman series. In Pre-Crisis continuity, the two are depicted as close friends; however, in current continuity, they are still close friends but an uneasy relationship, with an emphasis on their differing views on crime-fighting and justice. In Superman/Batman #3 (December 2003), Superman observes, "Sometimes, I admit, I think of Bruce as a man in a costume. Then, with some gadget from his utility belt, he reminds me that he has an extraordinarily inventive mind. And how lucky I am to be able to call on him." Robin Robin, Batman's vigilante partner, has been a widely recognized supporting character for many years; each iteration of the Robin character, of which there have been five in the mainstream continuity, function as members of the Batman family, but additionally, as Batman's "central" sidekick in various media. Bill Finger stated that he wanted to include Robin because "Batman didn't have anyone to talk to, and it got a little tiresome always having him thinking." The first Robin, Dick Grayson, was introduced in 1940. In the 1970s he finally grew up, went off to college and became the hero Nightwing. A second Robin, Jason Todd, appeared in the 1980s. In the stories he was eventually badly beaten and then killed in an explosion set by the Joker, but was later revived. He used the Joker's old persona, the Red Hood, and became an antihero vigilante with no qualms about using firearms or deadly force. 
Carrie Kelley, the first female Robin to appear in Batman stories, was the final Robin in the continuity of Frank Miller's graphic novels The Dark Knight Returns and The Dark Knight Strikes Again, fighting alongside an aging Batman in stories set outside the mainstream continuity. The third Robin in the mainstream comics is Tim Drake, who first appeared in 1989. He went on to star in his own comic series, and currently goes by Red Robin, a variation on the traditional Robin persona. In the first decade of the new millennium, Stephanie Brown served as the fourth in-universe Robin between stints as her self-made vigilante identity the Spoiler, and later as Batgirl. After Brown's apparent death, Drake resumed the role of Robin for a time. The role eventually passed to Damian Wayne, the 10-year-old son of Bruce Wayne and Talia al Ghul, in the late 2000s. Damian's tenure as the incumbent Robin ended when the character was killed off in the pages of Batman Incorporated in 2013. Batman's next young sidekick is Harper Row, a streetwise young woman who avoids the name Robin but follows the ornithological theme nonetheless; she debuted the codename and identity of Bluebird in 2014. Unlike the Robins, Bluebird is willing and permitted to use a gun, albeit a non-lethal one; her weapon of choice is a modified rifle that fires taser rounds. In 2015, a new series began titled We Are...Robin, focused on a group of teenagers using the Robin persona to fight crime in Gotham City. The most prominent of these, Duke Thomas, later becomes Batman's crimefighting partner as The Signal. Relationships Children Batman has had numerous adopted and biological children throughout the character's various interpretations and iterations. In the mainline DC Universe, Bruce is the adoptive father of Dick Grayson, Jason Todd, Tim Drake, and Cassandra Cain, and the biological father of Damian Wayne, his son with his old lover Talia al Ghul (the daughter of Bruce's ex-mentor and League of Assassins head Ra's al Ghul, thus making Ra's Damian's grandfather); Bruce trained all of them, and they have followed his path as crimefighters. In addition to his legally adopted children, Bruce is also a surrogate father to protégés Duke Thomas and Stephanie Brown. Notable alternate-universe biological children of Bruce's include Helena Wayne, the biological daughter of the Golden Age/Earth-Two Bruce Wayne and that universe's Selina Kyle; Terry and Matt McGinnis from the DC Animated Universe; and Ibn al Xu'ffasch from the Kingdom Come universe. Damian Wayne is the biological son of Bruce Wayne and Talia al Ghul, and thus the grandson of Ra's al Ghul. Terry and Matt McGinnis were the result of an experiment by the DCAU's Amanda Waller to create a new Batman to replace Bruce when he eventually became too old for the mantle. Waller and her team collected Bruce's genetic material and experimentally altered the reproductive system of Terry and Matt's legal father, Warren McGinnis, to be identical to Batman's, making Terry and Matt genetically Bruce's sons; Terry eventually became Bruce's successor as Batman during the events of the TV show Batman Beyond (though he did not discover his relation to Bruce until 15 years after they first met). Much like Damian, Ibn al Xu'ffasch is Bruce's son with Talia al Ghul, but unlike Damian, al Xu'ffasch did not learn about his connection to Bruce until adulthood. 
Romantic interests Writers have varied over the years in their approach to the "playboy" aspect of Bruce Wayne's persona. Some writers show his playboy reputation as a manufactured illusion to support his mission as Batman, while others have depicted Bruce Wayne as genuinely enjoying the benefits of being "Gotham's most eligible bachelor". Bruce Wayne has been portrayed as being romantically linked with many women throughout his various incarnations. Batman's first romantic interest was Julie Madison in Detective Comics #31 (September 1939); however, their romance was short-lived. Some of Batman's romantic interests have been women with a respected status in society, such as Julie Madison, Vicki Vale, and Silver St. Cloud. Batman has also been romantically involved with allies and enemies such as Kathy Kane (Batwoman), Sasha Bordeaux, Zatanna, Wonder Woman, and Pamela Isley (Poison Ivy). His most significant relationships occurred with Selina Kyle (Catwoman) and Talia al Ghul; both women bore his biological offspring, Helena Wayne and Damian Wayne, respectively. Catwoman While most of Batman's romantic relationships tend to be short in duration, Catwoman has been his most enduring romance throughout the years. The attraction between Batman and Catwoman, whose real name is Selina Kyle, is present in nearly every version and medium in which the characters appear, including a love story between their two secret identities as early as the 1966 film Batman. Although Catwoman is typically portrayed as a villain, Batman and Catwoman have worked together in achieving common goals and are usually depicted as having a romantic connection. In an early 1980s storyline, Selina Kyle and Bruce Wayne develop a relationship, and the closing panel of the final story shows her referring to Batman as "Bruce". However, a change in the editorial team brought a swift end to that storyline and, apparently, to all that had transpired during the story arc. Out of costume, Bruce and Selina develop a romantic relationship during The Long Halloween. The story shows Selina saving Bruce from Poison Ivy. However, the relationship ends when Bruce rejects her advances twice: once as Bruce and once as Batman. In Batman: Dark Victory, he stands her up on two holidays, causing her to leave him for good and to leave Gotham City for a while. When the two meet at an opera many years later, during the events of the 12-issue story arc called "Hush", Bruce comments that the two no longer have a relationship as Bruce and Selina. However, "Hush" sees Batman and Catwoman allied against the entire rogues gallery and rekindling their romantic relationship. In "Hush", Batman reveals his true identity to Catwoman. The Earth-Two Batman, a character from a parallel world, partners with and marries the reformed Earth-Two Selina Kyle, as shown in Superman Family #211. They have a daughter named Helena Wayne, who becomes the Huntress. Along with Dick Grayson, the Earth-Two Robin, the Huntress takes on the role of Gotham's protector once Bruce Wayne retires to become police commissioner, a position he occupies until he is killed during one final adventure as Batman. Batman and Catwoman are shown having a sexual encounter on the roof of a building in Catwoman vol. 4 #1 (2011); the same issue implies that the two have an ongoing sexual relationship. Following the 2016 DC Rebirth continuity reboot, the two once again have a sexual encounter on top of a building in Batman vol. 3 #14 (2017). 
Following the 2016 DC Rebirth continuity reboot, Batman and Catwoman work together in the third volume of Batman. The two also have a romantic relationship, in which they are shown having a sexual encounter on a rooftop and sleeping together. Bruce proposes to Selina in Batman vol. 3 #24 (2017), and in issue #32, Selina asks Bruce to propose to her again. When he does so, she says, "Yes." Batman vol. 3 Annual #2 (January 2018) centers on a romantic storyline between Batman and Catwoman. Towards the end, the story is flash-forwarded to the future, in which Bruce Wayne and Selina Kyle are a married couple in their golden years. Bruce receives a terminal medical diagnosis, and Selina cares for him until his death. Abilities Skills and training Batman has no inherent superhuman powers; he relies on "his own scientific knowledge, detective skills, and athletic prowess". Batman's inexhaustible wealth gives him access to advanced technologies, and as a proficient scientist, he is able to use and modify these technologies to his advantage. In the stories, Batman is regarded as one of the world's greatest detectives, if not the world's greatest crime solver. Batman has been repeatedly described as having a genius-level intellect, being one of the greatest martial artists in the DC Universe, and having peak human physical and mental conditioning. As a polymath, his knowledge and expertise in countless disciplines is nearly unparalleled by any other character in the DC Universe. He has shown prowess in assorted fields such as mathematics, biology, physics, chemistry, and several levels of engineering. He has traveled the world acquiring the skills needed to aid him in his endeavors as Batman. In the Superman: Doomed story arc, Superman considers Batman to be one of the most brilliant minds on the planet. Batman has trained extensively in various fighting styles, making him one of the best hand-to-hand fighters in the DC Universe. He has fully utilized his photographic memory to master a total of 127 forms of martial arts including, but not limited to, Aikido, boxing, Brazilian jiu-jitsu, Capoeira, Eskrima, fencing, Gatka, Hapkido, Jeet Kune Do, Judo, Kalaripayattu, Karate, Kenjutsu, Kenpo, kickboxing, Kobudo, Krav Maga, Kyudo, Bōjutsu, Muay Thai, Ninjutsu, Pankration, Sambo, Savate, Silat, Taekwondo, wrestling, numerous styles of Wushu (Kung Fu) (such as Baguazhang, Chin Na, Hung Ga, Shaolinquan, Tai Chi, Wing Chun), and Yaw-Yan. In terms of his physical condition, Batman is described as peak human and far beyond an Olympic-athlete-level condition, able to perform feats such as easily running across rooftops in a Parkour-esque fashion, pressing thousands of pounds regularly, and even bench pressing six hundred pounds of soil and coffin in a poisoned and starved state. Superman describes Batman as "the most dangerous man on Earth", able to defeat an entire team of superpowered extraterrestrials by himself in order to rescue his imprisoned teammates in Grant Morrison's first storyline in JLA. Batman is strongly disciplined, and he has the ability to function under great physical pain and resist most forms of telepathy and mind control. He is a master of disguise, multilingual, and an expert in espionage, often gathering information under the identity of a notorious gangster named Matches Malone. Batman is highly skilled in stealth movement and escapology, which allows him to appear and disappear at will and to break free of nearly inescapable deathtraps with little to no harm. 
He is also a master strategist, considered DC's greatest tactician, with numerous plans in preparation for almost any eventuality. Batman is an expert in interrogation techniques, and his intimidating and frightening appearance alone is often all that is needed to get information from suspects. Despite having the potential to harm his enemies, Batman's most defining characteristic is his strong commitment to justice and his reluctance to take a life. This unyielding moral rectitude has earned him the respect of several heroes in the DC Universe, most notably that of Superman and Wonder Woman. Beyond his physical and crime-fighting training, he is also proficient in other skills. These include being a licensed pilot (in order to operate the Batplane) and being able to operate other types of machinery; in some publications, he has even undergone magician training. Technology Batman utilizes a vast arsenal of specialized, high-tech vehicles and gadgets in his war against crime, the designs of which usually share a bat motif. Batman historian Les Daniels credits Gardner Fox with creating the concept of Batman's arsenal with the introduction of the utility belt in Detective Comics #29 (July 1939) and the first bat-themed weapons, the batarang and the "Batgyro", in Detective Comics #31 and #32 (September and October 1939). Personal armor Batman's batsuit aids in his combat against enemies, having the properties of both Kevlar and Nomex. It protects him from gunfire and other significant impacts, and incorporates the imagery of a bat in order to frighten criminals. The details of the Batman costume change repeatedly through various decades, stories, media, and artists' interpretations, but the most distinctive elements remain consistent: a scallop-hem cape; a cowl covering most of the face; a pair of bat-like ears; a stylized bat emblem on the chest; and the ever-present utility belt. His gloves typically feature three scallops that protrude from long, gauntlet-like cuffs, although in his earliest appearances he wore short, plain gloves without the scallops. The overall look of the character, particularly the length of the cowl's ears and of the cape, varies greatly depending on the artist. Dennis O'Neil said, "We now say that Batman has two hundred suits hanging in the Batcave so they don't have to look the same ...Everybody loves to draw Batman, and everybody wants to put their own spin on it." Finger and Kane originally conceptualized Batman as having a black cape and cowl and grey suit, but conventions in coloring called for black to be highlighted with blue. Hence, the costume's colors have appeared in the comics as dark blue and grey, as well as black and grey. In Tim Burton's Batman and Batman Returns films, Batman's suit was depicted as completely black, with a bat emblem in the middle set against a yellow background. Christopher Nolan's The Dark Knight Trilogy depicted Batman wearing high-tech gear painted completely black with a black bat in the middle. Ben Affleck's Batman in the DC Extended Universe films wears a suit grey in color with a black cowl, cape, and bat symbol. Seemingly following the example of the DC Extended Universe outfit, Robert Pattinson's uniform in The Batman restores the more traditional gray bodysuit and black appendage design, notably different from prior iterations in mostly utilizing real-world armor and apparel pieces from modern military and motorcycle gear. 
Batmobile Batman's primary vehicle is the Batmobile, which is usually depicted as an imposing black car, often with tailfins that suggest a bat's wings. Batman also has an aircraft called the Batplane (originally a relatively conventional, though bat-themed, plane, later seen as the far more distinctive "Batwing" starting with the 1989 film), along with various other means of transportation. In practice, the "bat" prefix (as in Batmobile or batarang) is rarely used by Batman himself when referring to his equipment, particularly after some portrayals (primarily the 1960s Batman live-action television show and the Super Friends animated series) stretched the practice to campy proportions. For example, the 1960s television show depicted a Batboat, Bat-Sub, and Batcycle, among other bat-themed vehicles. In the 1960s television series Batman, the character's arsenal includes such "bat-" names as the Bat-computer, Bat-scanner, bat-radar, bat-cuffs, bat-pontoons, bat-drinking water dispenser, bat-camera with polarized bat-filter, bat-shark repellent bat-spray, and Bat-rope. The storyline "A Death in the Family" suggests that, given Batman's grim nature, he is unlikely to have adopted the "bat" prefix on his own. In The Dark Knight Returns, Batman tells Carrie Kelley that the original Robin came up with the name "Batmobile" when he was young, since that is what a kid would call Batman's vehicle. The Batmobile, which had previously been depicted as resembling a sports car, was redesigned in 2011 when DC Comics relaunched its entire line of comic books, with the Batmobile being given heavier armor and new aesthetics. Utility belt Batman keeps most of his field equipment in his utility belt. Over the years it has been shown to contain an assortment of crime-fighting tools, weapons, and investigative and technological instruments. Different versions of the belt have these items stored in compartments, often as pouches or hard cylinders attached evenly around it. Since the 1989 film, Batman is often depicted as carrying a projectile which shoots a retractable grappling hook attached to a cable (before this, he employed a traditional, thrown grappling hook). This allows him to attach to distant objects, be propelled into the air, and thus swing from the rooftops of Gotham City. One exception to the range of Batman's equipment is handguns, which he refuses to use on principle, since a gun was used in his parents' murder. In modern stories, Batman compromises on that principle where his vehicles are concerned, installing weapon systems on them for the purpose of non-lethally disabling other vehicles, forcing entry into locations, and attacking dangerous targets too large to defeat by other means. Bat-Signal When Batman is needed, the Gotham City police activate a searchlight with a bat-shaped insignia over the lens, called the Bat-Signal, which shines into the night sky, creating a bat-symbol on a passing cloud which can be seen from any point in Gotham. The origin of the signal varies, depending on the continuity and medium. In various incarnations, most notably the 1960s Batman TV series, Commissioner Gordon also has a dedicated phone line, dubbed the Bat-Phone, connected to a bright red telephone (in the TV series) which sits on a wooden base and has a transparent top. The line connects directly to Batman's residence, Wayne Manor, specifically to both a similar phone sitting on the desk in Bruce Wayne's study and the extension phone in the Batcave. 
Batcave The Batcave is Batman's secret headquarters, consisting of a series of caves beneath his mansion, Wayne Manor. As his command center, the Batcave serves multiple purposes; supercomputer, surveillance, redundant power-generators, forensics lab, medical infirmary, private study, training dojo, fabrication workshop, arsenal, hangar and garage. It houses the vehicles and equipment Batman uses in his campaign to fight crime. It is also a trophy room and storage facility for Batman's unique memorabilia collected over the years from various cases he has worked on. In both the comic book Batman: Shadow of the Bat #45 and the 2005 film Batman Begins, the cave is said to have been part of the Underground Railroad. Fictional character biography Batman's history has undergone many retroactive continuity revisions, both minor and major. Elements of the character's history have varied greatly. Scholars William Uricchio and Roberta E. Pearson noted in the early 1990s, "Unlike some fictional characters, the Batman has no primary urtext set in a specific period, but has rather existed in a plethora of equally valid texts constantly appearing over more than five decades." 20th century Origin The central fixed event in the Batman stories is the character's origin story. As a young boy, Bruce Wayne was horrified and traumatized when he watched his parents, the physician Dr. Thomas Wayne and his wife Martha, murdered with a gun by a mugger named Joe Chill. Batman refuses to utilize any sort of gun on the principle that a gun was used to murder his parents. This event drove him to train his body to its peak condition and fight crime in Gotham City as Batman. Pearson and Uricchio also noted beyond the origin story and such events as the introduction of Robin, "Until recently, the fixed and accruing and hence, canonized, events have been few in number", a situation altered by an increased effort by later Batman editors such as Dennis O'Neil to ensure consistency and continuity between stories. Golden Age In Batman's first appearance in Detective Comics #27, he is already operating as a crime-fighter. Batman's origin is first presented in Detective Comics #33 (November 1939) and is later expanded upon in Batman #47. As these comics state, Bruce Wayne is born to Dr. Thomas Wayne and his wife Martha, two very wealthy and charitable Gotham City socialites. Bruce is brought up in Wayne Manor, and leads a happy and privileged existence until the age of 8, when his parents are killed by a small-time criminal named Joe Chill while on their way home from a movie theater. That night, Bruce Wayne swears an oath to spend his life fighting crime. He engages in intense intellectual and physical training; however, he realizes that these skills alone would not be enough. "Criminals are a superstitious cowardly lot", Wayne remarks, "so my disguise must be able to strike terror into their hearts. I must be a creature of the night, black, terrible ..." As if responding to his desires, a bat suddenly flies through the window, inspiring Bruce to craft the Batman persona. In early strips, Batman's career as a vigilante earns him the ire of the police. During this period, Bruce Wayne has a fiancé named Julie Madison. In Detective Comics #38, Wayne takes in an orphaned circus acrobat, Dick Grayson, who becomes his vigilante partner, Robin. Batman also becomes a founding member of the Justice Society of America, although he, like Superman, is an honorary member, and thus only participates occasionally. 
Batman's relationship with the law thaws quickly, and he is made an honorary member of Gotham City's police department. During this time, Alfred Pennyworth arrives at Wayne Manor, and after deducing the Dynamic Duo's secret identities, joins their service as their butler. Silver Age The Silver Age of Comic Books in DC Comics is sometimes held to have begun in 1956 when the publisher introduced Barry Allen as a new, updated version of the Flash. Batman is not significantly changed by the late 1950s for the continuity which would be later referred to as Earth-One. The lighter tone Batman had taken in the period between the Golden and Silver Ages led to the stories of the late 1950s and early 1960s that often feature many science-fiction elements, and Batman is not significantly updated in the manner of other characters until Detective Comics #327 (May 1964), in which Batman reverts to his detective roots, with most science-fiction elements jettisoned from the series. After the introduction of DC Comics' Multiverse in the 1960s, DC established that stories from the Golden Age star the Earth-Two Batman, a character from a parallel world. This version of Batman partners with and marries the reformed Earth-Two Catwoman (Selina Kyle). The two have a daughter, Helena Wayne, who becomes the Huntress. She assumes the position as Gotham's protector along with Dick Grayson, the Earth-Two Robin, once Bruce Wayne retires to become police commissioner. Wayne holds the position of police commissioner until he is killed during one final adventure as Batman. Batman titles, however, often ignored that a distinction had been made between the pre-revamp and post-revamp Batmen (since unlike the Flash or Green Lantern, Batman comics had been published without interruption through the 1950s) and would occasionally make reference to stories from the Golden Age. Nevertheless, details of Batman's history were altered or expanded upon through the decades. Additions include meetings with a future Superman during his youth, his upbringing by his uncle Philip Wayne (introduced in Batman #208 (February 1969)) after his parents' death, and appearances of his father and himself as prototypical versions of Batman and Robin, respectively. In 1980, then-editor Paul Levitz commissioned the Untold Legend of the Batman miniseries to thoroughly chronicle Batman's origin and history. Batman meets and regularly works with other heroes during the Silver Age, most notably Superman, whom he began regularly working alongside in a series of team-ups in World's Finest Comics, starting in 1954 and continuing through the series' cancellation in 1986. Batman and Superman are usually depicted as close friends. As a founding member of the Justice League of America, Batman appears in its first story, in 1960's The Brave and the Bold #28. In the 1970s and 1980s, The Brave and the Bold became a Batman title, in which Batman teams up with a different DC Universe superhero each month. Bronze Age In 1969, Dick Grayson attends college as part of DC Comics' effort to revise the Batman comics. Additionally, Batman also moves from his mansion, Wayne Manor into a penthouse apartment atop the Wayne Foundation building in downtown Gotham City, in order to be closer to Gotham City's crime. In 1974's "Night of the Stalker" storyline, a diploma on the wall reveals Bruce Wayne as a graduate of Yale Law School. Batman spends the 1970s and early 1980s mainly working solo, with occasional team-ups with Robin and/or Batgirl. 
Batman's adventures also become somewhat darker and more grim during this period, depicting increasingly violent crimes, including the first appearance (since the early Golden Age) of the Joker as a homicidal psychopath, and the arrival of Ra's al Ghul, a centuries-old terrorist who knows Batman's secret identity. In the 1980s, Dick Grayson becomes Nightwing. In the final issue of The Brave and the Bold in 1983, Batman quits the Justice League and forms a new group called the Outsiders. He serves as the team's leader until Batman and the Outsiders #32 (1986) and the comic subsequently changed its title. Modern Age After the 12-issue miniseries Crisis on Infinite Earths, DC Comics retconned the histories of some major characters in an attempt at updating them for contemporary audiences. Frank Miller retold Batman's origin in the storyline "Year One" from Batman #404–407, which emphasizes a grittier tone in the character. Though the Earth-Two Batman is erased from history, many stories of Batman's Silver Age/Earth-One career (along with an amount of Golden Age ones) remain canonical in the Post-Crisis universe, with his origins remaining the same in essence, despite alteration. For example, Gotham's police are mostly corrupt, setting up further need for Batman's existence. The guardian Phillip Wayne is removed, leaving young Bruce to be raised by Alfred Pennyworth. Additionally, Batman is no longer a founding member of the Justice League of America, although he becomes leader for a short time of a new incarnation of the team launched in 1987. To help fill in the revised backstory for Batman following Crisis, DC launched a new Batman title called Legends of the Dark Knight in 1989 and has published various miniseries and one-shot stories since then that largely take place during the "Year One" period. Subsequently, Batman begins exhibiting an excessive, reckless approach to his crimefighting, a result of the pain of losing Jason Todd. Batman works solo until the decade's close, when Tim Drake becomes the new Robin. Many of the major Batman storylines since the 1990s have been intertitle crossovers that run for a number of issues. In 1993, DC published "Knightfall". During the storyline's first phase, the new villain Bane paralyzes Batman, leading Wayne to ask Azrael to take on the role. After the end of "Knightfall", the storylines split in two directions, following both the Azrael-Batman's adventures, and Bruce Wayne's quest to become Batman once more. The story arcs realign in "KnightsEnd", as Azrael becomes increasingly violent and is defeated by a healed Bruce Wayne. Wayne hands the Batman mantle to Dick Grayson (then Nightwing) for an interim period, while Wayne trains for a return to the role. The 1994 company-wide crossover storyline Zero Hour: Crisis in Time! changes aspects of DC continuity again, including those of Batman. Noteworthy among these changes is that the general populace and the criminal element now consider Batman an urban legend rather than a known force. Batman once again becomes a member of the Justice League during Grant Morrison's 1996 relaunch of the series, titled JLA. During this time, Gotham City faces catastrophe in the decade's closing crossover arc. In 1998's "Cataclysm" storyline, Gotham City is devastated by an earthquake and ultimately cut off from the United States. Deprived of many of his technological resources, Batman fights to reclaim the city from legions of gangs during 1999's "No Man's Land". 
Meanwhile, Batman's relationship with the Gotham City Police Department changed for the worse with the events of "Batman: Officer Down" and "Batman: War Games/War Crimes"; Batman's long-time law enforcement allies Commissioner Gordon and Harvey Bullock are forced out of the police department in "Officer Down", while "War Games" and "War Crimes" saw Batman become a wanted fugitive after a contingency plan of his to neutralize Gotham City's criminal underworld is accidentally triggered, resulting in a massive gang war that ends with the sadistic Black Mask as the undisputed ruler of the city's criminal gangs. Lex Luthor arranges for the murder of Batman's on-again, off-again love interest Vesper Fairchild (introduced in the mid-1990s) during the "Bruce Wayne: Murderer?" and "Bruce Wayne: Fugitive" story arcs. Though Batman is able to clear his name, he loses another ally in the form of his new bodyguard Sasha, who is recruited into the organization known as "Checkmate" while stuck in prison due to her refusal to turn state's evidence against her employer. While he was unable to prove that Luthor was behind the murder of Vesper, Batman does get his revenge with help from Talia al Ghul in Superman/Batman #1–6. 21st century 2000s DC Comics' 2004 miniseries Identity Crisis reveals that JLA member Zatanna had edited Batman's memories to prevent him from stopping the Justice League from lobotomizing Dr. Light after he raped Sue Dibny. Batman later creates the Brother I satellite surveillance system to watch over and, if necessary, kill the other heroes after he recovers the memory. The revelation of Batman's creation of the satellite and his tacit responsibility for the Blue Beetle's death becomes a driving force in the lead-up to the Infinite Crisis miniseries, which again restructures DC continuity. Batman and a team of superheroes destroy Brother EYE and the OMACs, though, at the very end, Batman reaches his apparent breaking point when Alexander Luthor Jr. seriously wounds Nightwing. Picking up a gun, Batman nearly shoots Luthor in order to avenge his former sidekick, until Wonder Woman convinces him not to pull the trigger. Following Infinite Crisis, Bruce Wayne, Dick Grayson (having recovered from his wounds), and Tim Drake retrace the steps Bruce had taken when he originally left Gotham City, to "rebuild Batman". In the Face the Face storyline, Batman and Robin return to Gotham City after their year-long absence. Part of this absence is captured during Week 30 of the 52 series, which shows Batman fighting his inner demons. Later on in 52, Batman is shown undergoing an intense meditation ritual in Nanda Parbat. This becomes an important part of the regular Batman title, which reveals that Batman is reborn as a more effective crime fighter while undergoing this ritual, having "hunted down and ate" the last traces of fear in his mind. At the end of the "Face the Face" story arc, Bruce officially adopts Tim (who had lost both of his parents at various points in the character's history) as his son. The follow-up story arc in Batman, Batman and Son, introduces Damian Wayne, who is Batman's son with Talia al Ghul. Although originally, in Batman: Son of the Demon, Bruce's coupling with Talia was implied to be consensual, this arc retconned it into Talia forcing herself on Bruce. Batman, along with Superman and Wonder Woman, reforms the Justice League in the new Justice League of America series, and leads the newest incarnation of the Outsiders. Grant Morrison's 2008 storyline, "Batman R.I.P."
featured Batman being physically and mentally broken by the enigmatic villain Doctor Hurt and attracted news coverage in advance of its highly promoted conclusion, which was speculated to feature the death of Bruce Wayne. However, though Batman is shown to possibly perish at the end of the arc, the two-issue arc "Last Rites", which leads into the crossover storyline "Final Crisis", shows that Batman survives his helicopter crash into the Gotham City River and returns to the Batcave, only to be summoned to the Hall of Justice by the JLA to help investigate the New God Orion's death. The story ends with Batman retrieving the god-killing bullet used to kill Orion, setting up its use in "Final Crisis". In the pages of Final Crisis Batman is reduced to a charred skeleton. In Final Crisis #7, Wayne is shown witnessing the passing of the first man, Anthro. Wayne's "death" sets up the three-issue Battle for the Cowl miniseries in which Wayne's ex-proteges compete for the "right" to assume the role of Batman, which concludes with Grayson becoming Batman, while Tim Drake takes on the identity of Red Robin. Dick and Damian continue as Batman and Robin, and in the crossover storyline "Blackest Night", what appears to be Bruce's corpse is reanimated as a Black Lantern zombie, but it is later revealed that the corpse is one of Darkseid's failed Batman clones. Dick and Batman's other friends conclude that Bruce is alive. 2010s Bruce subsequently returned in Morrison's miniseries Batman: The Return of Bruce Wayne, which depicted his travels through time from prehistory to present-day Gotham. Bruce's return set up Batman Incorporated, an ongoing series which focused on Wayne franchising the Batman identity across the globe, allowing Dick and Damian to continue as Gotham's Dynamic Duo. Bruce publicly announces that Wayne Enterprises will aid Batman on his mission, known as "Batman, Incorporated". However, due to rebooted continuity that occurred as part of DC Comics' 2011 relaunch of all of its comic books, The New 52, Dick Grayson was restored as Nightwing with Wayne serving as the sole Batman once again. The relaunch also interrupted the publication of Batman, Incorporated, which resumed its story in 2012–2013 with changes to suit the new status quo. The New 52 During The New 52, all of DC's continuity was reset and the timeline was changed, making Batman the first superhero to emerge. This emergence took place during Zero Year, where Bruce Wayne returns to Gotham and becomes Batman, fighting the original Red Hood and the Riddler. In the present day, Batman discovers the Court of Owls, a secret organization operating in Gotham for decades. Batman largely defeats the Court by defeating Owlman, although the Court continues to operate on a smaller scale. The Joker returns after losing the skin on his face (as shown in the opening issue of the second volume of Detective Comics) and attempts to kill Batman's allies, though he is stopped by Batman. After some time, Joker returns again, and both he and Batman die while fighting each other. Jim Gordon temporarily becomes Batman, using a high-tech suit, while it is revealed that an amnesiac Bruce Wayne is still alive. Gordon attempts to fight a new villain called Mr. Bloom, while Wayne regains his memories with the help of Alfred Pennyworth and Julie Madison. With his memories restored, Wayne becomes Batman again and defeats Mr. Bloom with the help of Gordon.
DC Rebirth The timeline was reset again during Rebirth, although no significant changes were made to the Batman mythos. Batman meets two new superheroes operating in Gotham named Gotham and Gotham Girl. Psycho-Pirate gets into Gotham's head and turns him against Batman; Gotham is finally defeated when he is killed. This event is very traumatic for Gotham Girl, and she begins to lose her sanity. Batman forms his own Suicide Squad, including Catwoman, and attempts to take down Bane. The mission is successful, and Batman breaks Bane's back. Batman proposes to Catwoman. After healing from his wounds, an angry Bane travels to Gotham, where he fights Batman and loses. Batman then tells Catwoman about the War of Jokes and Riddles, and she agrees to marry him. Bane takes control of Arkham Asylum and manipulates Catwoman into leaving Wayne before the wedding. This causes Wayne to become very angry, and, as Batman, he lashes out against criminals, nearly killing Mr. Freeze. Batman learns of Bane's control over Arkham and teams up with the Penguin to stop him. Bane captures Batman, and Scarecrow causes him to hallucinate, although he eventually breaks free. Batman escapes and reunites with Catwoman, while Bane captures and kills Alfred Pennyworth. Batman returns and defeats Bane, although too late to save Alfred. Gotham Girl prompts him to marry Catwoman. It is revealed that the Joker who was working for Bane was really Clayface in disguise. The real Joker has been plotting a master plan to take over Gotham. This plan comes to fruition during The Joker War, in which Joker takes over the city. Batman defeats the Joker, who vanishes after an explosion. Ghost-Maker, an enemy from Batman's past, appears in Gotham, and, after a battle, becomes a sort of ally to Batman. A new group called the Magistrate rises up in Gotham, led by Simon Saint, whose goal is to outlaw vigilantes such as Batman. At the same time, Scarecrow returns, fighting Batman. During Fear State, Batman battles and defeats both Scarecrow and the Magistrate's Peacekeepers. Other versions Smallville Batman/Bruce Wayne is featured in the Smallville Season 11 digital comic based on the TV series. As a young boy, Bruce Wayne saw his parents gunned down by Joe Chill. This incident changed Bruce's life forever. In 2001, Bruce donned the persona of "Batman" to fight the criminals of Gotham City. Bruce fought criminals on his own for the better part of the next ten years. However, by 2011, Bruce had begun working with the young Barbara Gordon, who became known as Nightwing. This same year, Bruce learned that Joe Chill was in Metropolis and went to confront him. His quest for Chill briefly led to Bruce getting into conflict with Superman. However, the two soon joined forces. When they found Chill, Bruce came close to killing him, but the Prankster and Mister Freeze beat him to it, on behalf of Intergang. The Prankster also gunned down Superman with Green Kryptonite bullets. Bruce managed to save his life, after which they apprehended the Prankster and Mister Freeze. Bruce was reluctant to join the Watchtower Network but kept finding himself working alongside its agents. Eventually, Bruce gave in and joined, to help them with the Crisis. After the battle against the Monitors, Bruce became a founding member of the Justice League. Furthermore, as Barbara was leaving Earth, Bruce got a new partner in Dick Grayson. A villainous version of Bruce appears in the form of the Earth-13 Batman, resembling the Joker with a patchwork costume.
Citizen Wayne In Batman: Citizen Wayne, the role of Batman is taken on by Harvey Dent after his face is destroyed by an enemy. Bruce Wayne is a newspaper publisher who is highly critical of Batman and his brutal methods, and he goes after the vigilante when Dent actually kills the enemy in question; both men die in the final battle. DC Bombshells In the opening of the DC Bombshells continuity set during World War II, Bruce's parents are saved from Joe Chill's attack thanks to the baseball superheroine known as Batwoman. While Batman does not exist in this continuity, Kate Kane does borrow a number of elements from the main version, such as inspiring younger heroines to follow in her steps as Batgirls and losing a child named Jason. In the book's conclusion, which takes place 15 years into the future, a grown-up Bruce Wayne becomes Batman (not out of tragedy but out of inspiration by the Bombshells) and is trained by the older Catwoman to usher in the new age of superheroes. The Dark Knight Returns The Batman from Frank Miller's Batman: The Dark Knight Returns and its spin-offs, Batman: The Dark Knight Strikes Again and All Star Batman and Robin the Boy Wonder, is a tired vigilante in a much darker, edgier setting home to Miller's own interpretations of various DC characters. The Dark Multiverse In the 2017 Dark Nights: Metal event, it is revealed that a Dark Multiverse exists alongside the main DC Multiverse. Each reality in the Dark Multiverse is a negative and transient reflection of its existing counterpart; these rejected timelines were intended to be collected by the World Forger, who would feed them to his 'dragon', Barbatos. However, this balance came to an end when Barbatos escaped his bonds and allowed the rejected timelines to remain in some form of existence. Eventually, Barbatos is released onto the DC universe when Batman is treated with five unique metals, turning him into a portal to the Dark Multiverse, with this portal also allowing Barbatos to summon an army of evil alternate Batmen known as the Dark Knights, led by a God-like Batman, who describe themselves as having been created based on Batman's dark imaginations of what he could do if he possessed the powers of his colleagues. During the Dark Nights: Death Metal storyline, more Dark Multiverse versions of Batman appear. Barbatos Barbatos is a hooded, God-like being in the Dark Multiverse. Barbatos had previously visited Prime-Earth in the DC Multiverse and founded the Tribe of Judas, which would later become the Court of Owls. Sometime before returning (either willingly or not) to the Dark Multiverse, Barbatos encountered Hawkman/Carter Hall, and was hit by his mace. Barbatos tried to return to the Multiverse but the events of Final Crisis prevented him from doing so. However, after witnessing Bruce Wayne/Batman being sent back in time by Darkseid's Omega Beams, Barbatos realized the similarities between his and Bruce's Bat emblems and believed he could use him as a doorway. Barbatos' followers manipulated events in order for Bruce to be injected with four out of the five metals needed to create the doorway, and after the fifth was injected in the present day, Barbatos was able to transport himself and the Dark Knights to Prime-Earth to conquer it. The Batman Who Laughs The Batman Who Laughs is a version of Batman from Earth -22, a dark reflection of Earth-22. In that reality, the Earth -22 Joker learned of Batman's identity as Bruce Wayne and killed most of Batman's other rogues along with Commissioner Gordon.
He then subjected a sizeable portion of Gotham's populace to the chemicals that transformed him, subsequently killing several parents in front of their children with the goal of turning them into essentially a combination of himself and Batman. When Batman grappled with the Joker, it resulted in the latter's death as Batman is exposed to a purified form of the chemicals that gradually turned him into a new Joker, the process proving irreversible by the time Batman discovered what was happening to him. The Batman Who Laughs proceeded to take over Earth -22, killing off most of his allies and turning Damian into a mini-Joker. The Batman Who Laughs seems to be the de facto leader or second-in-command of Barbatos' Dark Knights and recruited the other members. After arriving on Prime-Earth, the Batman Who Laughs takes control of Gotham and oversees events at the Challenger's Mountain. He distributes joker cards to Batman's Rogues, giving them the ability to alter reality and take over sections of the city. Accompanying him are Damian and three other youths whom he also calls his sons, all four being twisted versions of Robin; he intends to destroy all of reality by linking the Over-Monitor to the Anti-Monitor's astral brain. But The Batman Who Laughs is defeated when the Prime Universe Batman is aided by the Joker, who notes the alternate Batman's failure to perceive this scenario due to still being a version of Batman. While assumed dead, the Batman Who Laughs is revealed to be in the custody of Lex Luthor, who offers him a place in the Legion of Doom. Red Death The Red Death is a version of Batman from Earth -52, originally an aged man who broke after the deaths of Dick, Jason, Tim, and Damian. Believing he has a chance to prevent the loss of more loved ones, Bruce decides he needs the Flash's Speed Force to achieve this and equips himself with the Rogues' equipment to capture the Flash. He knocks Barry out and ties him to the Batmobile, which has a machine created from reverse-engineering the Cosmic Treadmill attached to it. Using this machine against Barry's wishes, Bruce drove straight into the Speed Force while absorbing Barry in the process. Scarred by the ordeal, he developed a split personality created from residual traces of the Earth -52 Barry's mind. The newly-born Red Death tests his new powers but realizes he cannot stop his Earth from its destruction until he is recruited by The Batman Who Laughs, who promises him a new Earth to live upon. After entering Prime-Earth, the Red Death arrives in Central City and is confronted by Iris West and Wally West, during which he uses his powers to slow Wally and age them both. The Flash confronts the Red Death, and Doctor Fate saves Barry before the latter can attack. The Red Death proclaims that he will save Central City and make it his new home. After Barry is transported to a 'sand'-filled cave beneath Central City, the Red Death arrives, reveals several Flashmobiles, and chases after Barry. During the events of the Wild Hunt, the Red Death ceased to exist when exposed to an energy wave from the release of a newborn universe, with the restored Earth -52 Barry eventually destroyed as the energy consumed him. An original incarnation of the Red Death appears in the ninth season of The Flash, portrayed by Javicia Leslie. This version is a doppelgänger of Ryan Wilder from Earth-4125, a world where she was adopted by the Wayne family and Batman does not exist.
Following her adoptive parents' murder, she became Batwoman as an adult to protect Gotham City, during which time she met other heroes such as the Flash and befriended his wife Iris West-Allen. In time, Wilder gradually realized her methods for fighting crime resulted in criminals being temporarily incarcerated and allowed to escape. Studying the Flash's abilities, she built a suit of armor capable of artificially channeling the Speed Force and changed her codename to the Red Death. Eventually, she became reckless, killed Iris while fighting the Flash, and was rejected by the Speed Force due to her inorganic connection to it, which left her trapped in a vibrational form. In the present, she reaches Earth-Prime and forms the Rogues to steal Wayne Enterprises technology to restore her physical form and build a time machine. However, she comes into conflict with the Earth-Prime Flash, who forms his own Rogues and joins forces with his Batwoman to defeat her, after which the Red Death is taken into A.R.G.U.S. custody. Murder Machine The Murder Machine is a version of Batman from Earth -44, a dark reflection of Earth-44. Distraught from having lost Alfred, Batman asked Cyborg to help him finish the Alfred Protocol, an A.I. version of Alfred. But the Alfred Protocol malfunctioned upon activation and began to multiply and kill Batman's entire rogues gallery. Bruce pleaded with Cyborg to help find a way to fix it, but the latter refused. The Alfred Protocol began to merge with Bruce, and the two became the Murder Machine; his first act as this new entity was to kill Cyborg. After being recruited by the Batman Who Laughs, the Murder Machine arrives on Prime-Earth with the other Dark Knights. He proceeds to the Justice League's Watchtower and confronts Cyborg. After Cyborg is incapacitated by the other Dark Knights, the Murder Machine infects and converts the Watchtower into the Dark Knights' new base of operations. Dawnbreaker The Dawnbreaker is a version of Batman from Earth -32, a dark reflection of Earth-32, where Batman became a Green Lantern. When Earth -32 Bruce loses his parents to Joe Chill, he is chosen by a Green Power Ring to become a Green Lantern. But Bruce's will overrides the ring's ban on lethal force and corrupts it, enabling him to use it to kill Chill and various criminals. After killing Gordon when eventually confronted by him, Bruce wipes out the Green Lantern Corps and the Guardians of the Universe when they in turn confront him. Bruce then enters his giant Green Lantern Power Battery and exits with a new outfit and moniker, the Dawnbreaker. However, he finds that his Earth has begun to collapse, and he is met by the Batman Who Laughs, who, after recruiting the Red Death and the Murder Machine, recruits the Dawnbreaker, promising him a new world to shroud in darkness. After arriving on Earth-0, Dawnbreaker heads to Coast City, where he is confronted by Hal Jordan. Dawnbreaker tries to consume Hal Jordan in a 'blackout', but the latter is rescued by Doctor Fate. With Green Lantern gone, Dawnbreaker takes control of Coast City. The Dawnbreaker confronts Hal Jordan in a blacked-out cave underneath Coast City, claiming that the Green Lantern oath is worthless in his cave. Drowned The Drowned is a version of Batman from Earth -11, a dark reflection of the reversed-gender Earth-11. Originally known as Batwoman, Bryce Wayne was in a relationship with Sylvester Kyle (Earth-11's male version of Selina Kyle) until he was killed by a metahuman.
A revenge-driven Bryce spent 18 months hunting down every rogue metahuman before Aquawoman and the Atlanteans emerged from their self-imposed exile. While Aquawoman claimed her people came in peace, a skeptical Bryce declared war on Atlantis, and the Atlanteans flooded Gotham in retaliation when their queen was killed. Bryce survived the disaster by performing auto-surgery on herself, introducing mutated hybrid DNA into her body and gaining the ability to breathe underwater, accelerated healing, and water manipulation. She also created an army of Dead Waters to fight for her. Donning new attire, Bryce called herself The Drowned and successfully conquered Atlantis at the cost of flooding every city. After seeing her signal being lit, the Drowned met the Batman Who Laughs, who recruits her as a Dark Knight. After arriving on Earth-0, the Drowned headed to Amnesty Bay, where she was confronted by Aquaman and Mera. The two were unable to combat the Drowned and her army of Dead Waters, with Mera becoming infected and controlled by the Drowned while Aquaman was saved by Doctor Fate. The Drowned proceeded to take control of Amnesty Bay. When Aquaman is transported fathoms below Amnesty Bay, the Drowned attacks him, revealing that the infected Mera has mutated into a gargantuan shark/crab/octopus creature. Merciless The Merciless is a version of Batman from Earth -12. Here Batman is in a relationship with Wonder Woman. Having killed Ares in a fit of rage when Ares presumably kills Wonder Woman, the Earth -12 Batman acquired Ares's helmet and assumed that he could channel its power to wage war with justice and mercy rather than ruthless brutality. But it corrupted him, and the 'Merciless' Batman ended up killing Wonder Woman (who had actually just been knocked out) while eliminating all his enemies. The Merciless is later depicted as destroying the Valhalla Mountain when Sam Lane, Amanda Waller, Steve Trevor and Mister Bones attempt a counter-attack against the Dark Batmen after the regular heroes have apparently failed. The Merciless confronts Wonder Woman after she is transported under the foundation of A.R.G.U.S. headquarters in Washington D.C., revealing his armory filled with the divine arsenal of the Gods he killed on his Earth. He reveals to her that his Diana taught him to fight and that, after he destroyed the Gods, the Merciless found Themyscira and fought its Amazons for three days. The Merciless also reveals that he ordered the Ferryman at the River Styx to gather every coin from every dead Amazon seeking passage into the afterlife, which he melted into a giant golden drachma, which he strikes with a hammer, summoning the undead Amazons. Devastator The Devastator is a version of Batman from Earth -1, a dark reflection of Earth-1. When Superman turned evil and killed friend and foe alike, along with Lois, the Earth -1 Batman injected himself with an engineered version of the Doomsday virus to stop the Kryptonian at the cost of his humanity as he transformed into a Doomsday-like monster. Despite his victory, the Devastator still feels remorse for not being able to protect Metropolis from Superman's wrath. The Batman Who Laughs offers The Devastator a second chance at saving those whom he feels are blindly inspired by Superman. Bruce infects the Earth-0 Lois Lane, Supergirl, and all of Metropolis with the Doomsday virus as he views it as the only way to protect them from Superman's strength and false prophecies.
Along with the Murder Machine, the Devastator was sent to retrieve the Cosmic Tuning Tower, ripping it out of its foundation and throwing it outside the Fortress of Solitude. He is then confronted by the two Green Lanterns of Earth (Simon Baz and Jessica Cruz), The Flash/Wally West, Firestorm, and Lobo, and he proceeds to incapacitate all except Lobo, whom he throws into the Sun. Grabbing the Cosmic Tuning Tower, the Devastator leaps into space and lands on the Challenger's Mountain, planting the tower on top of it. Bathomet Bathomet is a Cthulhu-like Batman from an unknown part of the Dark Multiverse. Batmage Batmage is an evil sorcerer version of Batman from an unknown part of the Dark Multiverse. Batmanosaurus Rex Batmanosaurus Rex (also called B-Rex) is a version of Batman from an unknown part of the Dark Multiverse. It is the result of Batman uploading his mind into the robotic Tyrannosaurus he kept in the Batcave when the cave collapsed for an unknown reason. Castle Bat Castle Bat is a version of Batman from an unknown part of the Dark Multiverse who sacrificed Damian Wayne as part of a ritual that would merge his soul with Gotham City, enabling him to easily hunt down every villain. The Batman Who Laughs uses him as a headquarters for the Dark Knights. Darkfather Darkfather is a Batman from an unknown part of the Dark Multiverse who defeated Darkseid and acquired his powers. After mastering the Anti-Life Equation, Darkfather turned the Parademons of Apokolips into his Pararobins. Dr. Arkham Dr. Arkham is a Batman from an unknown part of the Dark Multiverse who left the vigilante business and took part in performing experiments on humans. Batmanhattan Batmanhattan is a Batman from an unknown part of the Dark Multiverse who harnessed the powers of Doctor Manhattan. The Batman Who Laughs would later lobotomize Batmanhattan and transplant his own brain into Batmanhattan's body in order to become the Darkest Knight. Batom Batom is a Batman from an unknown part of the Dark Multiverse who sports the same Bio-Belt as the Atom. Batmobeast Batmobeast is a version of Batman from an unknown part of the Dark Multiverse whose consciousness was uploaded into a monster truck after every digital system was destroyed by the people of his Earth. Robin King Robin King is a child version of Bruce Wayne from an unknown part of the Dark Multiverse who developed mass-murdering tendencies. Baby Batman Baby Batman is a baby version of Batman from an unknown part of the Dark Multiverse who downloaded his mind into an infant-resembling artificial body. Grim Knight Grim Knight is a Batman from an unknown part of the Dark Multiverse who has wielded firearms ever since the day his parents were killed by Joe Chill. Injustice: Gods Among Us In Injustice: Gods Among Us, Batman was originally close friends with Superman (with Superman even asking him to be godfather to his child with Lois Lane), but when Superman was tricked by the Joker into killing Lois and destroying Metropolis, their relationship slowly turned from estrangement to antagonism to outright enmity. Superman begins a new world order where he and the Justice League use brute force and fear to coerce people into following the law, but Batman sees the tyranny in this and opposes Superman's Regime with his Insurgency. He suffers a few losses, notably of Dick Grayson at the hands of his biological son Damian (albeit by accident), who sided with Superman. By the end of Year One, Superman breaks Batman's back in an attempt to delay any future defiance.
During most of Year Two, Batman is out of commission, relying on his allies to stop the Regime when the Green Lantern Corps gets involved. In Year Three, Batman allies himself with magic-users, notably John Constantine, though this ends with Constantine revealed to have been using Batman to further his own goals. Year Four has Batman look to the Greek gods to stop Superman. However, he comes to regret this when the gods decide to overpower humanity themselves, leading him to enlist the New God Highfather to stop them. He evades a trap set up by Superman when the fallen hero tries to arrange a meeting to discuss their problems. By the game's events, Batman has suffered many losses at the hands of the Regime and in a last-ditch effort summons the counterparts of Wonder Woman, Green Lantern, Green Arrow, and Aquaman from the mainstream universe, needing them to help him retrieve a shard of kryptonite from his now-abandoned Batcave; the kryptonite was meant to be a last resort in case Superman went rogue, but Batman had arranged it so that he could only access it if key members of the League agreed. Since most of those members had either sided with Superman or were dead (Green Arrow), he needed their counterparts. When this plan fails, he is reluctant to bring over the mainstream Superman, convinced that any version of Superman is corruptible. However, his prime counterpart convinces him to have faith and he does so, with the mainstream Superman defeating his counterpart and ending the Regime's influence. JLA/Avengers In JLA/Avengers, Batman appears along with his teammates in the Justice League, when they are made to fight the Avengers in the Grandmaster's cosmic game. While touring the Marvel Universe for the first time, Batman witnesses the Punisher killing a gang of drug dealers, and attacks him (the fight takes place off-panel). He later forms an alliance with Captain America after engaging in a brief fistfight to test his opponent's skills. Due to this alliance, he realizes the stakes of the game and loses it for the JLA. When the two universes are merged by Krona, the heroes are left confused as to what actually occurred in their reality; the Grandmaster clarifies by showing them the various tragedies that befell the heroes in their lifetimes. Batman, for his part, witnesses Jason Todd's death and his injury at the hands of Bane. In the final battle, Krona defeats the JLA with little difficulty, but is defeated when the Flash and Hawkeye disrupt his control of his power source. Just Imagine Just Imagine... is a series of comics created by Stan Lee (the co-creator of several Marvel Comics characters), with reimaginings of various DC characters. In this continuity, Wayne Williams is framed for a crime he did not commit, works his way out of prison, and becomes a mysterious wrestler known as Batman to fund a career as a vigilante using complex equipment to avenge himself against the criminals who originally framed him. Kingdom Come The Kingdom Come limited series depicts a Batman who, ravaged by years of fighting crime, uses an exoskeleton to keep himself together and keeps the peace on the streets of Gotham using remote-controlled robots. He is in late middle age and wears an eerie grin. It is no longer a secret that he is Bruce Wayne and is referred to as the "Batman" even when he appears in civilian guise.
Superman: American Alien In Superman: American Alien, a 2016 comic that presents an alternate retelling of Superman's origin, Bruce Wayne is training under Ra's al Ghul when he is told about someone posing as him at a birthday party thrown for him, causing Bruce to become interested in this person. Years later, having been Batman for a while, he finds out that the same person, revealed to be Clark Kent, is a reporter who spoke to Bruce's new ward Dick Grayson. Donning his costume, Bruce confronts Clark but is quickly overpowered, and is shocked when none of his equipment harms Clark. Clark finds out Bruce's identity by taking his mask and cape, and Bruce escapes. He seemingly leaves behind Clark's recording of his conversation with Dick, and Clark does not reveal Bruce's double life to the public. Bruce's cape later becomes part of Clark's prototype costume as he first begins his crime-fighting career. Batman: White Knight In the reality of Batman: White Knight, Bruce has grown up believing he is a descendant of Edmond Wayne, the founder of Gotham City and Wayne Enterprises. In reality, though, he is actually a descendant of Bakkar, a disgraced former member of the Order of St. Dumas, who murdered Edmond Wayne and assumed his identity. Jean-Paul Valley, a.k.a. Azrael, is actually the real Wayne descendant, which Bruce only learns from Jack Napier right before Jack forces Harley Quinn to kill him, as the Joker will not let him kill himself. After helping save the city from the Neo-Joker's plot to destroy Gotham, he hands over the keys to his various Batmobiles to the GCPD Gotham Terrorism Oppression unit and unmasks himself in front of Gordon to earn back his trust after the ordeal. He later turns himself in after incapacitating Azrael in combat for the many unintentional crimes he committed as Batman. During his trial, he and Harleen Quinzel married at her suggestion to keep her from testifying against him, resulting in him becoming stepfather to her and Jack's children. After 12 years in prison, he helps stop a riot, but upon hearing that someone has stolen a prototype Batsuit and is going around as Batman, he escapes with Jason Todd's help and heads into the city, aided by a sentient AI program of Jack (from a chip previously placed in his head). Aided by Commissioner Barbara Gordon and Duke Thomas, Bruce encounters the new Batman, and finds out more about the plans of Derek Powers, who, he reveals, was responsible for building his Batman gear before taking over Wayne Enterprises. In the end, Bruce teams up with Terry, the GCPD, and the GTO unit led by Dick Grayson in attacking the company headquarters to stop Powers' plan to illegally sell Bat-mechas across the planet. After that succeeds, Bruce is approached by FBI agents Diana Prince and John Stewart, who offer to change his sentence to time served in exchange for his help in investigating reports of a flying teenager in Kansas. Cultural impact and legacy Batman has become a pop culture icon, recognized around the world. The character's presence has extended beyond his comic book origins; events such as the release of the 1989 Batman film and its accompanying merchandising "brought the Batman to the forefront of public consciousness". In an article commemorating the sixtieth anniversary of the character, The Guardian wrote, "Batman is a figure blurred by the endless reinvention that is modern mass culture. He is at once an icon and a commodity: the perfect cultural artefact for the 21st century."
In other media The character of Batman has appeared in various media aside from comic books, such as newspaper syndicated comic strips, books, radio dramas, television, a stage show, and several theatrical feature films. The first adaptation of Batman was as a daily newspaper comic strip which premiered on October 25, 1943. That same year the character was adapted in the 15-part serial Batman, with Lewis Wilson becoming the first actor to portray Batman on screen. While Batman never had a radio series of his own, the character made occasional guest appearances in The Adventures of Superman, starting in 1945 on occasions when Superman voice actor Bud Collyer needed time off. A second movie serial, Batman and Robin, followed in 1949, with Robert Lowery taking over the role of Batman. The exposure provided by these adaptations during the 1940s "helped make [Batman] a household name for millions who never bought a comic book". In the 1964 publication of Donald Barthelme's collection of short stories Come Back, Dr. Caligari, Barthelme wrote "The Joker's Greatest Triumph". Batman is portrayed for purposes of spoof as a pretentious French-speaking rich man. Television The Batman television series, starring Adam West, premiered in January 1966 on the ABC television network. Inflected with a camp sense of humor, the show became a pop culture phenomenon. In his memoir, Back to the Batcave, West notes his dislike for the term 'camp' as it was applied to the 1960s series, opining that the show was instead a farce or lampoon, and a deliberate one, at that. The series ran for 120 episodes, ending in 1968. In between the first and second season of the Batman television series, the cast and crew made the theatrical film Batman (1966). The Who recorded the theme song from the Batman show for their 1966 EP Ready Steady Who, and the Kinks performed the theme song on their 1967 album Live at Kelvin Hall. Adam West also appeared in character as Batman in several commercials and a 1966 US Government PSA for Savings Bonds. Despite not having an immediate continuation, the series spawned a (failed) pilot episode for a spin-off Batgirl television series and, decades later, the Batman '66 (2013-2016) comic book series, the animated films Batman: Return of the Caped Crusaders (2016) and Batman vs. Two-Face (2017), and even the mockumentary Return to the Batcave: The Misadventures of Adam and Burt (2003). In the 1996 episode Heroes and Villains of Only Fools and Horses, David Jason spoofed the role of Batman. The popularity of the Batman TV series also resulted in the first animated adaptation of Batman in The Batman/Superman Hour; the Batman segments of the series were repackaged as The Adventures of Batman and Batman with Robin the Boy Wonder which produced thirty-three episodes between 1968 and 1977. From 1973 until 1986, Batman had a starring role in ABC's Super Friends series, which was animated by Hanna-Barbera. Olan Soule was the voice of Batman in all these shows, but was eventually replaced during Super Friends by Adam West, who also voiced the character in Filmation's 1977 series The New Adventures of Batman. In 1992, Batman: The Animated Series premiered on the Fox television network, produced by Warner Bros. Animation and featuring Kevin Conroy as the voice of Batman. The series received considerable acclaim for its darker tone, mature writing, stylistic design, and thematic complexity compared to previous superhero cartoons, in addition to multiple Emmy Awards. 
The series' success led to the theatrical film Batman: Mask of the Phantasm (1993), as well as various spin-off TV series that included Superman: The Animated Series, The New Batman Adventures, Justice League and Justice League Unlimited (each of which also featured Conroy as Batman's voice). The futuristic series Batman Beyond also took place in this same animated continuity and featured a newer, younger Batman voiced by Will Friedle, with the elderly Bruce Wayne (again voiced by Conroy) as a mentor. In 2004, an unrelated animated series titled The Batman made its debut with Rino Romano voicing Batman. In 2008, this show was replaced by another animated series, Batman: The Brave and the Bold, featuring Diedrich Bader's voice as Batman. In 2013, a new CGI-animated series titled Beware the Batman made its debut, with Anthony Ruivivar voicing Batman. In 2014, the live-action TV series Gotham premiered on the Fox network, featuring David Mazouz as a 12-year-old Bruce Wayne. In 2018, when the series was renewed for its fifth and final season it was announced that Batman would make an appearance in the series finale's flash-forward. Iain Glen portrays Bruce Wayne in the live-action series Titans, appearing in the show's second season in 2019. Prior to Glen, Batman was played by stunt doubles Alain Moussi and Maxim Savarias in the first season. To commemorate the 75th anniversary of the character, Warner Bros aired the television short film, Batman: Strange Days, that was also posted on DC's YouTube channel. In August 2019, it was announced that Kevin Conroy would make his live-action television debut as an older Bruce Wayne in the upcoming Arrowverse crossover, Crisis on Infinite Earths. In the crossover, he portrayed a parallel universe iteration of Batman from Earth-99. In Batwoman, the Earth-Prime version of Bruce Wayne / Batman is portrayed by Warren Christie. In May 2021, it was announced that a new animated series titled Batman: Caped Crusader was in development by Bruce Timm (co-creator of Batman: The Animated Series), JJ Abrams, and Matt Reeves. The series is said to be a reimagining of the Caped Crusader that returns to the character's noir roots. Film As previously stated, Batman's first cinematic appearances consisted of the 1943 serial film Batman and its 1949 sequel Batman and Robin, which were both released by Columbia Pictures and depicted a government-backed version of Batman and Robin (censorship at the time would not have allowed for vigilantes to be depicted as unauthorized crimefighters). The serials (especially the first one) are, though, notorious for their accentuation on anti-Japanese sentiments due to their World War II-period setting. In 1966, 20th Century Fox released Batman's first feature-length film, titled Batman (also advertised as Batman: The Movie), based on and featuring most of the cast from the 1960s TV series. Burton/Schumacher series In 1989, Warner Bros. released the feature film Batman, directed by Tim Burton and starring Michael Keaton as the title character. The film was a huge success; not only was it the top-grossing film of the year, but at the time was the fifth highest-grossing film in history. The film also won the Academy Award for Best Art Direction. The film's success spawned three sequels: Batman Returns (1992), Batman Forever (1995) and Batman & Robin (1997), the latter two of which were directed by Joel Schumacher instead of Burton, and replaced Keaton as Batman with Val Kilmer and George Clooney, respectively. 
The second Schumacher film failed to outgross any of its predecessors and was critically panned, causing Warner Bros. to cancel the planned fourth sequel, Batman Unchained, and end the initial film series. The first two films later became the basis for the Burton-inspired comic book series Batman '89 (2021). Keaton would later reprise his role as Bruce Wayne / Batman for the 2023 film, The Flash. The Dark Knight Trilogy In 2005, Batman Begins was released by Warner Bros. as a reboot of the film series, directed by Christopher Nolan and starring Christian Bale as Batman. Its sequel, The Dark Knight (2008), set the record for the highest-grossing opening weekend of all time in the U.S., earning approximately $158 million, and became the fastest film to reach the $400 million mark in the history of American cinema (eighteenth day of release). These record-breaking attendances saw The Dark Knight end its run as the second-highest-grossing domestic film (at the time) with $533 million, bested then only by Titanic. The film also won two Academy Awards, including Best Supporting Actor for the late Heath Ledger. It was eventually followed by The Dark Knight Rises (2012), which served as a conclusion to Nolan's film series that has since been known as The Dark Knight Trilogy. Animated films Since 2008, Batman has also starred in various direct-to-video films under the DC Universe Animated Original Movies label. Kevin Conroy reprised his voice role of Batman for several of these films, while others have featured celebrity voice actors in the role, including Jeremy Sisto, William Baldwin, Bruce Greenwood, Ben McKenzie, Peter Weller, and Jensen Ackles. In the direct-to-video films of the DC Animated Movie Universe, Batman was voiced by Kevin Conroy again in Justice League: The Flashpoint Paradox (2013) and by Jason O'Mara in all subsequent films, such as The Death of Superman (2018) and Batman: Hush (2019). A Lego-themed version of Batman was also featured as one of the protagonists in the theatrically-released animated film The Lego Movie (2014), with Will Arnett providing the voice. Arnett reprised the voice role for the spin-off film The Lego Batman Movie (2017), as well as for the sequel The Lego Movie 2: The Second Part (2019). Keanu Reeves voiced Batman in the animated film DC League of Super-Pets (2022). DC Extended Universe In 2016, Ben Affleck began portraying Batman in the DC Extended Universe with the release of the film Batman v Superman: Dawn of Justice, directed by Zack Snyder; a child version of the character was played by Brandon Spink in the same film. Affleck also made a cameo appearance as Batman in David Ayer's film Suicide Squad (2016). Affleck reprised the role in the 2017 film Justice League, also set in the DC Extended Universe, as well as the director's cut, Zack Snyder's Justice League. Affleck reprised his role in the 2023 film, The Flash, also set in the DC Extended Universe. This and a cameo appearance in Aquaman and the Lost Kingdom are expected to be Affleck's last appearances in the role. DC Elseworlds films Dante Pereira-Olson portrays a young Bruce Wayne in the 2019 film Joker. Robert Pattinson portrays Bruce Wayne / Batman in the 2022 film, The Batman, directed by Matt Reeves. DC Universe A new iteration of Batman is set to appear in the DC Universe (DCU) franchise, beginning with the film The Brave and the Bold, produced by DC Studios. The film will focus on Batman and Damian Wayne.
Fine art Starting with the Pop Art period and continuing since the 1960s, the character of Batman has been "appropriated" by multiple visual artists and incorporated into contemporary artwork, most notably by Andy Warhol, Roy Lichtenstein, Mel Ramos, Dulce Pinzon, Mr. Brainwash, Raymond Pettibon, Peter Saul, and others. Video games Since 1986, Batman has starred in multiple video games, most of which were adaptations of the various cinematic or animated incarnations of the character. Among the most successful of these games is the Batman: Arkham series. The first installment, Batman: Arkham Asylum (2009), was released by Rocksteady Studios to critical acclaim; review aggregator Metacritic reports it as having received 92% positive reviews. It was followed by the sequel Batman: Arkham City (2011), which also received widespread acclaim and holds a Metacritic ranking of 94%. A prequel game titled Batman: Arkham Origins (2013) was later released by WB Games Montréal. A fourth game titled Batman: Arkham Knight (2015) has also been released by Rocksteady. As with most animated Batman media, Kevin Conroy provided the voice of the character for these games, with the exception of Arkham Origins, in which the younger Batman is voiced by Roger Craig Smith. In 2016, Telltale Games released the adventure game Batman: The Telltale Series, which changed the Wayne family's history as it is depicted in the Batman mythos. A sequel, titled Batman: The Enemy Within, was released in 2017. Role-playing games Mayfair Games published the DC Heroes role-playing game in 1985, then published the 80-page supplement Batman the following year, written by Mike Stackpole, with cover art by Ed Hannigan. In 1989, Mayfair Games published an updated 96-page softcover Batman Sourcebook, again written by Mike Stackpole, with additional material by J. Santana, Louis Prosperi, Jack Barker and Ray Winninger, with graphic design by Gregory Scott, and cover and interior art by DC Comics staff. Mayfair released a simplified version of DC Heroes called The Batman Role-Playing Game in 1989 to coincide with the Batman film. Interpretations Gay interpretations Gay interpretations of the character have been part of the academic study of Batman since psychiatrist Fredric Wertham asserted in Seduction of the Innocent in 1954 that "Batman stories are psychologically homosexual ...The Batman type of story may stimulate children to homosexual fantasies, of the nature of which they may be unconscious." Andy Medhurst wrote in his 1991 essay "Batman, Deviance, and Camp" that Batman is interesting to gay audiences because "he was one of the first fictional characters to be attacked on the grounds of his presumed homosexuality". Professor of film and cultural studies Will Brooker argues the validity of a queer reading of Batman, and that gay readers would naturally find themselves drawn to the lifestyle depicted within, whether the character of Bruce Wayne himself were explicitly gay or not. He also identifies a homophobic element to the vigor with which mainstream fandom rejects the possibility of a gay reading of the character. In 2005, painter Mark Chamberlain displayed a number of watercolors depicting both Batman and Robin in suggestive and sexually explicit poses, prompting DC to threaten legal action. Creators associated with the character have expressed their own opinions.
Writer Alan Grant has stated, "The Batman I wrote for 13 years isn't gay ...everybody's Batman all the way back to Bob Kane ...none of them wrote him as a gay character. Only Joel Schumacher might have had an opposing view." Frank Miller views the character as sublimating his sexual urges into crimefighting, so much so that he is "borderline pathological", concluding "He'd be much healthier if he were gay." Grant Morrison said that "Gayness is built into Batman ...Obviously as a fictional character he's intended to be heterosexual, but the basis of the whole concept is utterly gay." Psychological interpretations Batman has been the subject of psychological study for some time, and there have been a number of interpretations of the character's psyche. In Batman and Psychology: A Dark and Stormy Knight, Dr. Travis Langley argues that the concept of archetypes as described by psychologist Carl Jung and mythologist Joseph Campbell is present in the Batman mythos, such that the character represents the "shadow archetype". This archetype, according to Langley, represents a person's own dark side; it is not necessarily an evil one, but rather one that is hidden from the outside and concealed from both the world and oneself. Langley argues that Bruce Wayne confronts his own darkness early in life; he chooses to use it to instill fear in wrongdoers, with his bright and dark sides working together to fight evil. Langley uses the Jungian perspective to assert that Batman appeals to our own need to face our "shadow selves". Langley also taught a class called Batman, a title he was adamant about. "I could have called it something like the Psychology of Nocturnal Vigilantism, but no. I called it Batman," Langley says. Several psychologists have explored Bruce Wayne/Batman's mental health. Robin S. Rosenberg evaluated his actions and problems to determine if they reach the level of mental disorders. She examined the possibility of several mental health issues, including dissociative identity disorder, obsessive–compulsive disorder, and several others. She concluded that Bruce Wayne/Batman may have a disorder or a combination of disorders, but, due to his fictional nature, a definitive diagnosis will remain unknown. However, Langley himself states in his book that Batman is far too functional and well-adjusted, due to his training, his early confrontation of his fear, and other factors, to be mentally ill. Rather, he asserts that Batman's mental attitude is far more in line with that of a dedicated Olympic athlete.
17
The Boston Red Sox are an American professional baseball team based in Boston. The Red Sox compete in Major League Baseball (MLB) as a member club of the American League (AL) East division. Founded in 1901 as one of the American League's eight charter franchises, the team's home ballpark has been Fenway Park since 1912. The "Red Sox" name was chosen by the team owner, John I. Taylor, after the 1907 season, following the lead of previous teams that had been known as the "Boston Red Stockings", including the Boston Braves (now the Atlanta Braves). The team has won nine World Series championships, tied for the third-most of any MLB team, and has played in 13 World Series. Their most recent World Series appearance and win was in 2018. In addition, they won the 1904 American League pennant, but were not able to defend their 1903 World Series championship when the New York Giants refused to participate in the 1904 World Series. The Red Sox were a dominant team in the new league, defeating the Pittsburgh Pirates in the first World Series in 1903 and winning four more championships by 1918. However, they then went into one of the longest championship droughts in baseball history, dubbed the "Curse of the Bambino" after its alleged inception due to the Red Sox' sale of star player Babe Ruth to the rival New York Yankees two years after their World Championship in 1918. The Sox endured an 86-year wait before the team's sixth World Championship in 2004. The team's history during that period was punctuated with some of the most memorable moments in World Series history, including Enos Slaughter's "mad dash" in 1946, the "Impossible Dream" of 1967, Carlton Fisk's home run in 1975, and Bill Buckner's error in 1986. Following their victory in the 2018 World Series, they became the first team to win four World Series trophies in the 21st century, with championships in 2004, 2007, 2013, and 2018. The team's history has also been marked by the team's intense rivalry with the New York Yankees, arguably the fiercest and most historic in North American professional sports. The Red Sox are owned by Fenway Sports Group, which also owns Liverpool F.C. of the Premier League in England, the National Hockey League's Pittsburgh Penguins, and partially owns RFK Racing of the NASCAR Cup Series. They are consistently one of the top MLB teams in average road attendance, while the small capacity of Fenway Park prevents them from leading in overall attendance. From May 15, 2003, to April 10, 2013, the Red Sox sold out every home game—a total of 820 games (794 regular season) for a major professional sports record. Both Neil Diamond's "Sweet Caroline" and The Standells' "Dirty Water" have become anthems for the Red Sox. As of the end of the 2022 season, the franchise's all-time regular-season record is 9,796–9,098 (.518). Nickname The name Red Sox, chosen by owner John I. Taylor after the 1907 season, refers to the red hose in the team uniform beginning in 1908. Sox had previously been adopted for the Chicago White Sox by newspapers needing a headline-friendly form of Stockings, as "Stockings Win!" in large type did not fit in a column. The team name "Red Sox" had previously been used as early as 1888 by a 'colored' team from Norfolk, Virginia. Spanish-language media sometimes refers to the team as Medias Rojas, a translation of "red socks". The official Spanish site uses the variant "Los Red Sox". The Red Stockings nickname was previously used by the Cincinnati Red Stockings, who were members of the pioneering National Association of Base Ball Players.
Managed by Harry Wright, Cincinnati adopted a uniform with white knickers and red stockings and earned the famous nickname, a year or two before hiring the first fully professional team in 1869. When the club folded after the 1870 season, Wright was hired by Boston businessman Ivers Whitney Adams to organize a new team in Boston, and he brought three teammates and the "Red Stockings" nickname along. (Most nicknames were then unofficial — neither club names nor registered trademarks — so the migration was informal.) The Boston Red Stockings won four championships in the five seasons of the new National Association, the first professional league. When a new Cincinnati club was formed as a charter member of the National League in 1876, the "Red Stockings" nickname was commonly reserved for them once again, and the Boston team was referred to as the "Red Caps". Other names were sometimes used before Boston officially adopted the nickname "Braves" in 1912; the club eventually left Boston for Milwaukee and is now playing in Atlanta. In 1901, the upstart American League established a competing club in Boston. (Originally, a team was supposed to be started in Buffalo, but league ownership at the last minute removed that city from their plans in favor of the expansion Boston franchise.) For seven seasons, the AL team wore dark blue stockings and had no official nickname. They were simply "Boston", "Bostonians" or "the Bostons"; or the "Americans" or "Boston Americans" as in "American Leaguers", Boston being a two-team city. Their 1901–1907 jerseys, both home, and road, just read "Boston", except for 1902 when they sported large letters "B" and "A" denoting "Boston" and "American." Newspaper writers of the time used other nicknames for the club, including "Somersets" (for owner Charles Somers), "Plymouth Rocks", "Beaneaters", the "Collinsites" (for manager Jimmy Collins)", and "Pilgrims." For years many sources have listed "Pilgrims" as the early Boston AL team's official nickname, but researcher Bill Nowlin has demonstrated that the name was barely used, if at all, during the team's early years. The origin of the nickname appears to be a poem entitled "The Pilgrims At Home" written by Edwin Fitzwilliam that was sung at the 1907 home opener ("Rory O'More" melody). This nickname was commonly used during that season, perhaps because the team had a new manager and several rookie players. John I. Taylor had said in December 1907 that the Pilgrims "sounded too much like homeless wanderers." The National League club in Boston, though seldom called the "Red Stockings" anymore, still wore red trim. In 1907, the National League club adopted an all-white uniform, and the American League team saw an opportunity. On December 18, 1907, Taylor announced that the club had officially adopted red as its new team color. The 1908 uniforms featured a large icon of a red stocking angling across the shirt front. For 1908, the National League club returned to wearing red trim, but the American League team finally had an official nickname and remained the "Red Sox" for good. The name is often shortened to "Bosox" or "BoSox", a combination of "Boston" and "Sox" (similar to the "ChiSox" in Chicago or the minor league "WooSox" of Worcester, a minor league affiliate of Boston). Sportswriters sometimes refer to the Red Sox as the Crimson Hose and the Olde Towne Team. Recently, media have begun to call them the "Sawx" casually, reflecting how the word is pronounced with a New England accent. 
However, most fans simply refer to the team as the "Sox" when the context is understood to mean Red Sox. The formal name of the entity which owns the team is "Boston Red Sox Baseball Club Limited Partnership". The name shown on a door near the main entrance to Fenway Park, "Boston American League Baseball Company", was used prior to the team's reorganization as a limited partnership on May 26, 1978. History 1901–1919: The Golden Era In 1901, the minor Western League, led by Ban Johnson, declared itself to be equal to the National League, then the only major league in baseball. Johnson had changed the name of the league to the American League prior to the 1900 season. In 1901, the league created a franchise in Boston, called the "Boston Americans", to compete with the National League team there. Playing their home games at Huntington Avenue Grounds, the Boston franchise finished second in the league in 1901 and third in 1902. The team was originally owned by C.W. Somers. In January 1902, he sold all but one share of the team to Henry Killilea. The early teams were led by manager and star third baseman Jimmy Collins, outfielders Chick Stahl, Buck Freeman, and Patsy Dougherty, and pitcher Cy Young, who in 1901 won the pitching Triple Crown with 33 wins (41.8% of the team's 79 wins), 1.62 ERA and 158 strikeouts. In 1903, the team won their first American League pennant and, as a result, Boston participated in the first modern World Series, going up against the Pittsburgh Pirates. Aided by the modified chants of "Tessie" by the Royal Rooters fan club and by its stronger pitching staff, the Americans won the best-of-nine series five games to three. In April 1904, the team was purchased by John I. Taylor of Boston. The 1904 team found itself in a pennant race against the New York Highlanders. A predecessor to what became a storied rivalry, this race featured the trade of Patsy Dougherty to the Highlanders for Bob Unglaub. In order to win the pennant, the Highlanders needed to win both games of their final doubleheader with the Americans at the Highlanders' home stadium, Hilltop Park. With Jack Chesbro on the mound, and the score tied 2–2 with a man on third in the top of the ninth, a spitball got away from Chesbro and Lou Criger scored the go-ahead run and the Americans won their second pennant. However, the NL champion New York Giants declined to play any postseason series, but a sharp public reaction led the two leagues to make the World Series a permanent championship, starting in 1905. In 1906, Boston lost 105 games and finished last in the league. In December 1907, Taylor proposed that the Boston Americans name change to the Boston Red Sox. By 1909, center fielder Tris Speaker had become a fixture in the Boston outfield, and the team finished the season in third place. In 1912, the Red Sox won 105 games and the pennant. The 105 wins stood as the club record until the 2018 club won 108. Anchored by an outfield including Tris Speaker, Harry Hooper and Duffy Lewis, and pitcher Smoky Joe Wood, the Red Sox beat the New York Giants 4–3–1 in the 1912 World Series best known for Snodgrass's Muff. From 1913 to 1916 the Red Sox were owned by Joseph Lannin. In 1914, Lannin signed a young up-and-coming pitcher named Babe Ruth from the Baltimore Orioles of the International League. In 1915, the team won 101 games and went on to the 1915 World Series, where they beat the Philadelphia Phillies four games to one. Following the 1915 season, Tris Speaker was traded to the Cleveland Indians. 
The Red Sox went on to win the 1916 World Series, defeating the Brooklyn Robins. Harry Frazee bought the Red Sox from Joseph Lannin in 1916 for about $675,000. In 1918, Babe Ruth led the team to another World Series championship over the Chicago Cubs. Sale of Babe Ruth and Aftermath (1920–1938) Prior to the sale of Babe Ruth, multiple trades occurred between the Red Sox and the Yankees. On December 18, 1918, outfielder Duffy Lewis, pitcher Dutch Leonard and pitcher Ernie Shore were traded to the Yankees for pitcher Ray Caldwell, Slim Love, Roxy Walters, Frank Gilhooley and $15,000. In July 1919, pitcher Carl Mays quit the team and was then traded to the Yankees for Bob McGraw, Allan Russell and $40,000. After Mays was traded, league president Ban Johnson suspended him for breaking his contract with the Red Sox. The Yankees went to court after Johnson suspended Mays, and once they prevailed and were able to play Mays, the American League split into two factions: the Yankees, Red Sox and White Sox, known as the "Insurrectos", versus Johnson and the remaining five clubs, a.k.a. the "Loyal Five". On December 26, 1919, the team sold Babe Ruth, who had played the previous six seasons for the Red Sox, to the rival New York Yankees. The sale was announced on January 6, 1920. In 1919, Ruth had broken the single-season home run record, hitting 29 home runs. It was believed that Frazee sold Ruth to finance the Broadway musical No, No, Nanette. While No, No, Nanette did not open on Broadway until 1925, Leigh Montville's book, The Big Bam: The Life and Times of Babe Ruth, reports that No, No, Nanette had originated as a non-musical stage play called My Lady Friends, which opened on Broadway in December 1919. According to the book, My Lady Friends had been financed by Ruth's sale to the Yankees. The sale of Babe Ruth came to be viewed as the beginning of the Yankees–Red Sox rivalry, considered the "best rivalry" by American sports journalists. In December 1920, Wally Schang, Waite Hoyt, Harry Harper and Mike McNally were traded to the Yankees for Del Pratt, Muddy Ruel, Hank Thormahlen, and Sammy Vick. The following winter, shortstop Everett Scott and pitchers Bullet Joe Bush and Sad Sam Jones were traded to the Yankees for Roger Peckinpaugh (who was immediately traded to the Washington Senators), Jack Quinn, Rip Collins, and Bill Piercy. On July 23, 1922, Joe Dugan and Elmer Smith were traded to the Yankees for Elmer Miller, Chick Fewster, Johnny Mitchell, and Lefty O'Doul. Acquiring Dugan helped the Yankees edge the St. Louis Browns in a tight pennant race. After the late trades of 1922, a June 15 trading deadline went into effect. In 1923, Herb Pennock was traded by the Red Sox to the Yankees for Camp Skinner, Norm McMillan, and George Murray. The loss of several top players sent the Red Sox into free fall. During the 1920s and early 1930s, the Red Sox were fixtures in the second division, never finishing closer than 20 games out of first. The losses increased after Frazee sold the team to Bob Quinn in 1923. The team bottomed out in 1932 with a record of 43–111, still the worst record in franchise history. However, in 1931, Earl Webb set the all-time mark for most doubles in a season with 67. In 1933, Tom Yawkey bought the team. Yawkey acquired pitchers Wes Ferrell and Lefty Grove, shortstop-manager Joe Cronin, and first baseman Jimmie Foxx. In 1938, Foxx hit 50 home runs, which stood as a club record for 68 years. That year, Foxx also set a club record with 175 runs batted in. 
1939–1960: The Ted Williams Era In 1939, the Red Sox purchased the contract of outfielder Ted Williams from the minor league San Diego Padres of the Pacific Coast League, ushering in an era of the team sometimes called the "Ted Sox." Williams consistently hit for both high power and high average, and is generally considered one of the greatest hitters of all time. The right-field bullpens in Fenway were built in part for Williams' left-handed swing, and are sometimes called "Williamsburg." Before this addition, it was over to right field. He served two stints in the United States Marine Corps as a pilot and saw active duty in both World War II and the Korean War, missing at least five full seasons of baseball. His book The Science of Hitting is widely read by students of baseball. He is currently the last player to hit over .400 for a full season, batting .406 in 1941. Williams feuded with sports writers his whole career, calling them "The Knights of the Keyboard", and his relationship with the fans was often rocky as he was seen spitting towards the stands on more than one occasion. With Williams, the Red Sox reached the 1946 World Series but lost to the St. Louis Cardinals in seven games in part because of the use of the "Williams Shift", a defensive tactic in which the shortstop moves to the right side of the infield to make it harder for the left-handed-hitting Williams to hit to that side of the field. Some have claimed that he was too proud to hit to the other side of the field, not wanting to let the Cardinals take away his game. His performance may have also been affected by a pitch he took in the elbow in an exhibition game a few days earlier. Either way, in his only World Series, Williams gathered just five singles in 25 at-bats for a .200 average. The Cardinals won the 1946 Series when Enos Slaughter scored the go-ahead run all the way from first base on a base hit to left field. The throw from Leon Culberson was cut off by shortstop Johnny Pesky, who relayed the ball to the plate just a hair too late. Some say Pesky hesitated or "held the ball" before he turned to throw the ball, but this has been disputed. Along with Williams and Pesky, the Red Sox featured several other star players during the 1940s, including second baseman Bobby Doerr and center fielder Dom DiMaggio (the younger brother of Joe DiMaggio). The Red Sox narrowly lost the AL pennant in 1948 and 1949. In 1948, Boston finished in a tie with Cleveland, and their loss to Cleveland in a one-game playoff ended hopes of an all-Boston World Series. Curiously, manager Joseph McCarthy chose journeyman Denny Galehouse to start the playoff game when the young lefty phenom Mel Parnell was available to pitch. In 1949, the Red Sox were one game ahead of the New York Yankees, with the only two games left for both teams being against each other, and they lost both of those games. The 1950s were viewed as a time of tribulation for the Red Sox. After Williams returned from the Korean War in 1953, many of the best players from the late 1940s had retired or been traded. The stark contrast in the team led critics to call the Red Sox' daily lineup "Ted Williams and the Seven Dwarfs." Jackie Robinson was even worked out by the team at Fenway Park, however, owner Tom Yawkey did not want an African American player on his team. Willie Mays also tried out for Boston and was highly praised by team scouts. 
In 1955, Frank Malzone debuted at third base and Ted Williams hit .388 at the age of 38 in 1957, but there was little else for Boston fans to root for. Williams retired at the end of the 1960 season, famously hitting a home run in his final at-bat as memorialized in the John Updike story "Hub fans bid Kid adieu." The Red Sox finally became the last Major League team to field an African American player when they promoted infielder Pumpsie Green from their AAA farm team in 1959. 1960s: Yaz and the Impossible Dream The 1960s also started poorly for the Red Sox, though 1961 saw the debut of Carl "Yaz" Yastrzemski, Williams' replacement in left field, who developed into one of the better hitters of a pitching-rich decade. Red Sox fans know 1967 as the season of the "Impossible Dream." The slogan refers to the hit song from the popular musical play "Man of La Mancha". 1967 saw one of the great pennant races in baseball history with four teams in the AL pennant race until almost the last game. The BoSox had finished the 1966 season in ninth place, but they found new life with Yastrzemski as the team won the pennant to reach the 1967 World Series. Yastrzemski won the American League Triple Crown (the most recent player to accomplish such a feat until Miguel Cabrera did so in 2012), hitting .326 with 44 home runs and 121 runs batted in. He was named the league's Most Valuable Player, just one vote shy of a unanimous selection as a Minnesota sportswriter placed Twins center fielder César Tovar first on his ballot. But the Red Sox lost the series to the St. Louis Cardinals in seven games. Cardinals pitcher Bob Gibson stymied the Red Sox, winning three games. An 18-year-old Bostonian rookie named Tony Conigliaro slugged 24 home runs in 1964. "Tony C" became the youngest player in Major League Baseball to hit his 100th home run, a record that stands today. He was struck just above the left cheek bone by a fastball thrown by Jack Hamilton of the California Angels on Friday, August 18, 1967, and sat out the entire next season with headaches and blurred vision. Although he did have a productive season in 1970, he was never the same. 1970s: The Red Hat Era Although the Red Sox were competitive for much of the late 1960s and early 1970s, they never finished higher than second place in their division. The closest they came to a divisional title was 1972 when they lost by a half-game to the Detroit Tigers. The start of the season was delayed by a players' strike, and the Red Sox had lost one more game to the strike than the Tigers had. Games lost to the strike were not made up. The Red Sox went to Detroit with a half-game lead for the final series of the season, but lost the first two of those three and were eliminated from the pennant race. 1975 The Red Sox won the AL pennant in 1975. The 1975 Red Sox were as colorful as they were talented, with Yastrzemski and rookie outfielders Jim Rice and Fred Lynn, veteran outfielder Dwight Evans, catcher Carlton Fisk, and pitchers Luis Tiant and eccentric junkballer Bill "The Spaceman" Lee. Fred Lynn won both the American League Rookie of the Year award and the Most Valuable Player award, a feat which had never previously been accomplished, and was not duplicated until Ichiro Suzuki did it in 2001. In the 1975 American League Championship Series, the Red Sox swept the Oakland A's. In the 1975 World Series, they faced the heavily favored Cincinnati Reds, also known as The Big Red Machine. 
Luis Tiant won games 1 and 4 of the World Series, but after five games the Red Sox trailed the series 3 games to 2. Game 6 at Fenway Park is considered among the greatest games in postseason history. Down 6–3 in the bottom of the eighth inning, Red Sox pinch hitter Bernie Carbo hit a three-run homer into the center field bleachers off Reds fireman Rawly Eastwick to tie the game. In the top of the 11th inning, right fielder Dwight Evans made a spectacular catch of a Joe Morgan line drive and doubled off Ken Griffey at first base to preserve the tie. In the bottom of the 12th inning, Carlton Fisk hit a deep fly ball that sliced towards the left-field foul pole above the Green Monster. As the ball sailed into the night, Fisk waved his arms frantically towards fair territory, seemingly pleading with the ball not to go foul. The ball complied, and bedlam ensued at Fenway as Fisk rounded the bases to win the game for the Red Sox 7–6. The Red Sox lost game 7, 4–3, even though they had an early 3–0 lead. Starting pitcher Bill Lee threw a slow looping curve which he called a "Leephus pitch" or "space ball" to Reds first baseman Tony Pérez, who hit the ball over the Green Monster and across the street. The Reds scored the winning run in the 9th inning. Carlton Fisk famously said about the 1975 World Series, "We won that thing 3 games to 4." 1978 pennant race In 1978, the Red Sox and the Yankees were involved in a tight pennant race. The Yankees were 14 games behind the Red Sox in July, and on September 10, after completing a 4-game sweep of the Red Sox (known as "The Boston Massacre"), the Yankees tied for the divisional lead. On September 16 the Yankees held a 3½-game lead over the Red Sox, but the Sox won 11 of their next 13 games, and by the final day of the season the Yankees' magic number to win the division was one—with a win over Cleveland or a Boston loss to the Toronto Blue Jays clinching the division. However, New York lost 9–2 and Boston won 5–0, forcing a one-game playoff at Fenway Park on Monday, October 2. The most remembered moment from the game was Bucky Dent's 7th-inning three-run home run off Mike Torrez just over the Green Monster, giving the Yankees their first lead. The dejected Boston manager, Don Zimmer, gave Mr. Dent a new middle name which lives on in Boston sports lore to this day, uttering three words as the ball sailed over the left-field wall: "Bucky Fucking Dent!" Reggie Jackson provided a solo home run in the 8th that proved to be the difference in the Yankees' 5–4 win, which ended with Yastrzemski popping out to Graig Nettles in foul territory with Rick Burleson representing the tying run at third. Although Dent became a Red Sox demon, the Red Sox got retribution in 1990 when the Yankees fired Dent as their manager during a series at Fenway Park. 1986 World Series and Game Six Carl Yastrzemski retired after the 1983 season, during which the Red Sox finished sixth in the seven-team AL East, posting their worst record since 1966. However, in 1986, it appeared that the team's fortunes were about to change. The offense had remained strong with Jim Rice, Dwight Evans, Don Baylor and Wade Boggs. Roger Clemens led the pitching staff, going 24–4 with a 2.48 ERA and recording a 20-strikeout game, and won both the American League Cy Young and Most Valuable Player awards. Clemens became the first starting pitcher to win both awards since Vida Blue in 1971. 
Despite spending a month and a half on the disabled list in the middle of the season, left-hander Bruce Hurst went 13–8, striking out 167 and pitching four shutout games. Boston sportswriters that season compared Clemens and Hurst to Don Drysdale and Sandy Koufax from the 1960s Los Angeles Dodgers. The Red Sox won the AL East for the first time in 11 seasons, and faced the California Angels in the ALCS. The teams split the first two games in Boston, but the Angels won the next two home games, taking a 3–1 lead in the series. With the Angels poised to win the series, the Red Sox trailed 5–2 heading into the ninth inning of Game 5. A two-run homer by Baylor cut the lead to one. With two outs and a runner on, and one strike away from elimination, Dave Henderson homered off Donnie Moore to put Boston up 6–5. Although the Angels tied the game in the bottom of the ninth, the Red Sox won in the 11th on a Henderson sacrifice fly off Moore. The Red Sox then found themselves with six- and seven-run wins at Fenway Park in Games 6 and 7 to win the American League title. The Red Sox faced a heavily favored New York Mets team that had won 108 games in the regular season in the 1986 World Series. Boston won the first two games in Shea Stadium but lost the next two at Fenway, knotting the series at 2 games apiece. After Bruce Hurst recorded his second victory of the series in Game 5, the Red Sox returned to Shea Stadium looking to garner their first championship in 68 years. However, Game 6 became one of the most devastating losses in club history. After pitching seven strong innings, Clemens was lifted from the game with a 3–2 lead. Years later, Manager John McNamara said Clemens was suffering from a blister and asked to be taken out of the game, a claim Clemens denied. The Mets then scored a run off reliever and former Met Calvin Schiraldi to tie the score 3–3. The game went to extra innings, where the Red Sox took a 5–3 lead in the top of the 10th on a solo home run by Henderson, a double by Boggs and an RBI single by second baseman Marty Barrett. After recording two outs in the bottom of the 10th, a graphic appeared on the NBC telecast hailing Barrett as the Player of the Game and Bruce Hurst as Most Valuable Player of the World Series. A message even appeared briefly on the Shea Stadium scoreboard congratulating the Red Sox as world champions. After so many years of abject frustration, Red Sox fans around the world could taste victory. With the count at two balls and one strike, Mets catcher Gary Carter hit a single. It was followed by singles by Kevin Mitchell and Ray Knight. With Mookie Wilson batting, a wild pitch by Bob Stanley tied the game at 5. Wilson then hit a slow ground ball to first; the ball rolled through Bill Buckner's legs, allowing Knight to score the winning run from second. While Buckner was singled out as responsible for the loss, many observers—as well as both Wilson and Buckner—have noted that even if Buckner had fielded the ball cleanly, the speedy Wilson probably would have still been safe, leaving the game-winning run at third with two out. Many observers questioned why Buckner was in the game at that point considering he had bad knees and that Dave Stapleton had come in as a late-inning defensive replacement in prior series games. It appeared as though McNamara was trying to reward Buckner for his long and illustrious career by leaving him in the game. 
After falling behind 3–0, the Mets won Game 7, concluding the devastating collapse and feeding the myth that the Red Sox were "cursed." This World Series loss had a strange twist: Red Sox General Manager Lou Gorman had been the Mets' vice president of player personnel from 1980 to 1983. Working under Mets GM Frank Cashen, with whom he had previously served in the Orioles organization, Gorman helped lay the foundation for the Mets' championship. 1988–1991: Morgan Magic The Red Sox returned to the postseason in 1988. With the club in fourth place midway through the 1988 season at the All-Star break, manager John McNamara was fired and replaced by Walpole resident and longtime minor-league manager Joe Morgan on July 15. The club immediately won 12 games in a row, and 19 of 20 overall, to surge to the AL East title in what was called "Morgan Magic." But the magic was short-lived, as the team was swept by the Oakland Athletics in the ALCS. The Most Valuable Player of that series was former Red Sox pitcher and Baseball Hall of Fame player Dennis Eckersley, who saved all four wins for Oakland. Two years later, in 1990, the Red Sox again won the division and faced the Athletics in the ALCS. However, the outcome was the same, with the A's sweeping the ALCS in four straight. In 1990, Yankees fans started to chant "1918!" to taunt the Red Sox. The demeaning chant echoed at Yankee Stadium each time the Red Sox were there. Also, Fenway Park became the scene of Bucky Dent's worst moment as a manager, although it was where he had his greatest triumph. In June, when the Red Sox swept the Yankees during a four-game series at Fenway Park, the Yankees fired Dent as their manager. Red Sox fans took some satisfaction in Dent being fired on their field, but the Yankees had used him as a scapegoat. However, Dan Shaughnessy of The Boston Globe severely criticized Yankees owner George Steinbrenner for firing Dent—his 18th managerial change in as many years since becoming owner—in Boston and said he should "have waited until the Yankees got to Baltimore" to fire Dent. He said that "if Dent had been fired in Seattle or Milwaukee, this would have been just another event in an endless line of George's jettisons. But it happened in Boston and the nightly news had its hook." "The firing was only special because ... it's the first time a Yankee manager—who was also a Red Sox demon—was purged on the ancient Indian burial grounds of the Back Bay." However, Bill Pennington called the firing of Dent "merciless." 1992–2001: Mixed results Tom Yawkey died in 1976, and his wife Jean R. Yawkey took control of the team until her death in 1992. Their initials are shown in two stripes on the left field wall in Morse code. Upon Jean's death, control of the team passed to the Yawkey Trust, led by John Harrington. The trust sold the team in 2002, concluding 70 years of Yawkey ownership. In 1994, General Manager Lou Gorman was replaced by Dan Duquette, a Massachusetts native who had worked for the Montreal Expos. Duquette revived the team's farm system, which during his tenure produced players such as Nomar Garciaparra, Carl Pavano and David Eckstein. Duquette also spent money on free agents, notably an 8-year, $160 million deal for Manny Ramírez after the 2000 season. The Red Sox won the newly realigned American League East in 1995, finishing seven games ahead of the Yankees. However, they were swept in three games in the ALDS by the Cleveland Indians. Their postseason losing streak reached 13 straight games, dating back to the 1986 World Series. 
Roger Clemens tied his major league record by fanning 20 Detroit Tigers on September 18, 1996, in one of his final appearances in a Red Sox uniform. After Clemens had turned 30 and then had four seasons, 1993–96, which were by his standards mediocre at best, Duquette said the pitcher was entering "the twilight of his career". Clemens went on to pitch well for another ten years and win four more Cy Young Awards. Out of contention in 1997, the team traded closer Heathcliff Slocumb to Seattle for catching prospect Jason Varitek and right-handed pitcher Derek Lowe. Prior to the start of the 1998 season, the Red Sox dealt pitchers Tony Armas Jr. and Carl Pavano to the Montreal Expos for pitcher Pedro Martínez. Martínez became the anchor of the team's pitching staff and turned in several outstanding seasons. In 1998, the team won the American League Wild Card but again lost the American League Division Series to the Indians. In 1999, Duquette called Fenway Park "economically obsolete" and, along with Red Sox ownership, led a push for a new stadium. On the field, the 1999 Red Sox were finally able to overturn their fortunes against the Indians in the American League Division Series. Cleveland took a 2–0 series lead, but Boston won the next three games behind strong pitching by Derek Lowe, Pedro Martínez and his brother Ramón Martínez. Game 4's 23–7 win by the Red Sox was the highest-scoring playoff game in major league history. Game 5 began with the Indians taking a 5–2 lead after two innings, but Pedro Martínez, nursing a shoulder injury, came on in the fourth inning and pitched six innings without allowing a hit while the team's offense rallied for a 12–8 win behind two home runs and seven runs batted in from outfielder Troy O'Leary. After the ALDS victory, the Red Sox lost the American League Championship Series to the Yankees, four games to one. The one bright spot was a lopsided win for the Red Sox in the much-hyped Martinez-Clemens game. 2002–present: John Henry era 2002–03 In 2002, the Red Sox were sold by Yawkey trustee and president Harrington to New England Sports Ventures, a consortium headed by principal owner John Henry. Tom Werner served as executive chairman, Larry Lucchino served as president and CEO, and serving as vice-chairman was Les Otten. Dan Duquette was fired as GM of the club on February 28, with former Angels GM Mike Port taking the helm for the 2002 season. A week later, manager Joe Kerrigan was fired and was replaced by Grady Little. While nearly all offseason moves were made under Duquette, such as signing outfielder Johnny Damon away from the Oakland Athletics, the new ownership made additions such as outfielder Cliff Floyd and relief pitcher Alan Embree. Nomar Garciaparra, Manny Ramírez, and Floyd all hit well, while Pedro Martínez put up his usual outstanding numbers. Derek Lowe, newly converted into a starter, won 20 games—becoming the first player to save 20 games and win 20 games in back-to-back seasons. After failing to reach the playoffs, Port was replaced by Yale University graduate Theo Epstein. Epstein, raised in Brookline, Massachusetts, and just 28 at the time of his hiring, became the youngest general manager in MLB history. The 2003 team was known as the "Cowboy Up" team, a nickname derived from first baseman Kevin Millar's challenge to his teammates to show more determination. In the 2003 American League Division Series, the Red Sox rallied from a 0–2 series deficit against the Athletics to win the best-of-five series. 
Derek Lowe returned to his former relief pitching role to save Game 5, a 4–3 victory. The team then faced the Yankees in the 2003 American League Championship Series. In Game 7, Boston led 5–2 in the eighth inning, but Pedro Martínez allowed three runs to tie the game. The Red Sox could not score off Mariano Rivera over the last three innings and eventually lost the game 6–5 when Yankee third baseman Aaron Boone hit a solo home run off Tim Wakefield. Some placed the blame for the loss on manager Grady Little for failing to remove starting pitcher Martínez in the 8th inning after some observers believe he began to show signs of tiring. It was stated by Epstein that the decision to not renew Little's contract was "made on a body of work after careful contemplation of the big picture...did not depend on any one decision in any one postseason game." Boston would hire former Philadelphia Phillies manager Terry Francona to manage the 2004 season. "The Idiots": 2004 World Series Championship During the 2003–04 offseason, the Red Sox acquired another ace pitcher, Curt Schilling, and a closer, Keith Foulke. Due to some midseason struggles with injuries, management shook up the team at the July 31 trading deadline as part of a four-team trade. The Red Sox traded the team's popular, yet oft-injured, shortstop Nomar Garciaparra and outfielder Matt Murton to the Chicago Cubs, and received first baseman Doug Mientkiewicz from the Minnesota Twins, and shortstop Orlando Cabrera from the Montreal Expos. In a separate transaction, the Red Sox acquired center fielder Dave Roberts from the Los Angeles Dodgers. Following the trades, the club won 22 out of 25 games and qualified for the playoffs as the AL Wild Card. Players and fans affectionately referred to the players as "the Idiots", a term coined by Damon and Millar during the playoff push to describe the team's eclectic roster and devil-may-care attitude toward their supposed "curse." Boston began the postseason by sweeping the AL West champion Anaheim Angels in the ALDS. In the third game of the series, David Ortiz hit a walk-off two-run homer in the 10th inning to win the game and the series to advance to a rematch of the previous year's ALCS in the ALCS against the Yankees. The ALCS started very poorly for the Red Sox, as they lost the first three games (including a crushing 19–8 home loss in game 3). In Game 4, the Red Sox found themselves facing elimination, trailing 4–3 in the ninth with Mariano Rivera in to close for the Yankees. After Rivera issued a walk to Millar, Roberts came on to pinch run and promptly stole second base. He then scored on an RBI single by Bill Mueller, sending the game into extra innings. The Red Sox went on to win the game 6–4 on a two-run home run by Ortiz in the 12th inning. The odds were still very much against the Sox in the series, but Ortiz also made the walk-off hit in the 14th inning of Game 5. The comeback continued with a victory from an injured Schilling in Game 6. Three sutures being used to stabilize the tendon in Schilling's right ankle bled throughout the game, famously making his sock appear bloody red. With it, Boston became the first team in MLB history to force a series-deciding Game 7 after trailing 3–0 in games. The Red Sox completed their historic comeback in Game 7 with a 10–3 victory over the Yankees. Ortiz began the scoring with a two-run homer. Along with his game-winning runs batted in during games 4 and 5, he was named ALCS Most Valuable Player. 
The Red Sox joined the 1942 Toronto Maple Leafs and the 1975 New York Islanders as the only North American professional sports teams in history at the time to win a best-of-seven games series after being down 3–0. (The 2010 Philadelphia Flyers and the 2014 Los Angeles Kings would later accomplish the feat). The Red Sox swept the St. Louis Cardinals in the 2004 World Series. The Red Sox never trailed throughout the series; Mark Bellhorn hit a game-winning home run off Pesky's Pole in game 1, and Schilling pitched another bloodied-sock victory in game 2, followed by similarly masterful pitching performances by Martinez and Derek Lowe. It was the Red Sox' first championship in 86 years. Manny Ramírez was named World Series MVP. To add a final, surreal touch to Boston's championship season, on the night of Game 4 a total lunar eclipse colored the moon red over Busch Stadium. The Red Sox earned many accolades from the sports media and throughout the nation for their season, such as in December, when Sports Illustrated named the Boston Red Sox the 2004 Sportsmen of the Year. 2007: World Series Championship The 2005 AL East was decided on the last weekend of the season, with the Yankees coming to Fenway Park with a one-game lead in the standings. The Red Sox won two of the three games to finish the season with the same record as the Yankees, 95–67. However, a playoff was not needed, as the loser of such a playoff would still make the playoffs as a wild card team. As the Yankees had won the season series, they were awarded the division title, and the Red Sox competed in the playoffs as the wild card team. Boston failed to defend their championship, and was swept in three games by the eventual 2005 World Series champion Chicago White Sox in the first round of the playoffs. In 2006 David Ortiz broke Jimmie Foxx's single-season Red Sox home run record by hitting 54 homers. However, Boston failed to make the playoffs after compiling a 9–21 record in the month of August due to several injuries in the club's roster. Theo Epstein's first step toward restocking the team for 2007 was to pursue one of the most anticipated acquisitions in baseball history. On November 14, MLB announced that Boston had won the bid for the rights to negotiate a contract with Japanese Nippon Professional Baseball superstar pitcher Daisuke Matsuzaka. Boston placed a bid of $51.1 million to negotiate with Matsuzaka and completed a 6-year, $52 million contract after they were announced as the winning bid. The Red Sox moved into first place in the AL East by mid-April and never relinquished their division lead. Initially, rookie second baseman Dustin Pedroia under-performed, hitting below .200 in April. Manager Terry Francona refused to bench him and his patience paid off as Pedroia eventually won the AL Rookie of the Year Award for his performance that season, which included 165 hits and a .317 batting average. On the mound, Josh Beckett emerged as the ace of the staff with his first 20-win season, as fellow starting pitchers Schilling, Matsuzaka, Wakefield and Julián Tavárez all struggled at times. Relief pitcher Hideki Okajima, another recent arrival from the NPB, posted an ERA of 0.88 through the first half and was selected for the All-Star Game. Okajima finished the season with a 2.22 ERA and 5 saves, emerging as one of baseball's top relievers. Minor league call-up Clay Buchholz provided a spark on September 1 by pitching a no-hitter in his second career start. The Red Sox captured their first AL East title since 1995. 
The Red Sox swept the Angels in the ALDS. Facing the Cleveland Indians in the ALCS, the Red Sox fell in games 2, 3, and 4 before Beckett picked up his second victory of the series in game 5, starting a comeback. The Red Sox captured their twelfth American League pennant by outscoring the Indians 30–5 over the final three games. The Red Sox faced the Colorado Rockies in the 2007 World Series, and swept the Rockies in four games. In Game 4, Wakefield gave up his spot in the rotation to a recovered Jon Lester, who gave the Red Sox an impressive start, pitching shutout innings. Key home runs late in the game by third baseman Mike Lowell and pinch-hitter Bobby Kielty secured the Red Sox' second title in four years, as Lowell was named Most Valuable Player in the World Series. 2008–2012: Injuries and collapses The Red Sox began their season by participating in the third opening day game in MLB history to be played in Japan, where they defeated the Oakland A's in the Tokyo Dome. On May 19, Jon Lester threw the 18th no-hitter in team history, defeating the Kansas City Royals 7–0. Down the stretch, outfielder Manny Ramirez became embroiled in controversy surrounding public incidents with fellow players and other team employees, as well as criticism of ownership and not playing, which some claimed was due to laziness and nonexistent injuries. The front office decided to move the disgruntled outfielder at the July 31 trade deadline, shipping him to the Dodgers in a three-way deal with the Pittsburgh Pirates that landed them Jason Bay to replace him in left field. With Ramirez gone, and Bay providing a new spark in the lineup, the Red Sox improved vastly and made the playoffs as the AL Wild Card. The Red Sox defeated the Angels in the 2008 ALDS three games to one. The Red Sox then took on their AL East rivals the Tampa Bay Rays in the ALCS. Down three games to one in the 5th game of the ALCS, Boston mounted a comeback from trailing 7–0 in the 7th inning to win 8–7. They tied the series at three games apiece with a Game 6 victory before losing Game 7, 3–1, thus becoming the eighth team in a row since 2000 to fail to repeat as world champions. The Red Sox returned to postseason play in 2009 but were swept in the ALDS by the Los Angeles Angels. In 2010 they placed third in the division and failed to make the playoffs. In 2011 the Red Sox collapsed, becoming the first team in MLB history to blow a 9-game lead in the division heading into September, as they went 7–20 in the final month and failed again to make the playoffs. In December 2011, Bobby Valentine was hired as a new manager. The 2012 season marked the centennial of Fenway Park, and on April 20, past and present Red Sox players and coaches assembled to celebrate the park's anniversary. However, the collapse that they endured in September 2011 carried over into the season. The Red Sox struggled throughout the season due to injuries, inconsistent play, and off-field news. They finished 69–93 for their first losing season since 1997, and their worst season since 1965. Boston Strong: 2013 World Series Champions Boston, which finished last in the American League East with a 69–93 record in 2012 (26 games behind the Yankees), became the 11th team in major league history to go from worst in the division to first the next season when it clinched the A.L. East division title on September 20, 2013. Many credit the team's turnaround with the hiring of manager John Farrell, the former Red Sox pitching coach under Terry Francona from 2007 to 2010. 
As a former member of the staff, he had the respect of influential players such as Lester, Pedroia, and Ortiz. But there were other moves made in the offseason by general manager Ben Cherington, who targeted "character" players to fill the team's needs. These acquisitions included veteran catcher David Ross, Jonny Gomes, Mike Napoli, and Shane Victorino. While some questioned these players as "re-treads", it was clear that Cherington was trying to move past 2011–2012 by bringing in "clubhouse players". Essential to the turnaround, however, was the pitching staff. With ace veteran John Lackey coming off Tommy John surgery and both Jon Lester and Clay Buchholz returning to their prior form, the team was able to rely less on its bullpen. Everything seemed in danger of collapsing, however, when both closers, Joel Hanrahan and Andrew Bailey, went down early with season-ending injuries. Farrell gave the closing job to Koji Uehara on June 21, and he delivered with a 1.09 ERA and an MLB-record 0.565 WHIP. On September 11, the 38-year-old right-hander set a new Red Sox record when he retired 33 straight batters. Other reasons for the turnaround included the trade-deadline acquisition of pitcher Jake Peavy when the Red Sox were in second place in the AL East, the depth of the bench with players such as Mike Carp and rookies Jackie Bradley Jr. and Xander Bogaerts, and the re-emergence of players such as Will Middlebrooks and Daniel Nava. On September 28, 2013, the team secured home field advantage throughout the American League playoffs when their closest competition, the Oakland Athletics, lost. The next day, the team finished the season at 97–65, the best record in the American League and tied with the St. Louis Cardinals for the best record in baseball. They proceeded to defeat the St. Louis Cardinals in the 2013 World Series, four games to two. The Red Sox became the first team since the 1991 Minnesota Twins to win the World Series a year after finishing in last place, and only the second overall. The 2012 Red Sox's .426 winning percentage was the lowest by a team in the season preceding a World Series championship. Throughout the season, the Red Sox players and organization formed a close association with the city of Boston and its people in relation to the Boston Marathon bombing that occurred on April 15, 2013. On April 20, the day after the alleged bombers were captured, David Ortiz gave a pre-game speech following a ceremony honoring the victims and local law enforcement, in which he stated, "This is our fucking city! And nobody is going to dictate our freedom! Stay strong!" For the entirety of the season, the team wore an additional arm patch that exhibited the Red Sox "B" logo and the word "Strong" within a blue circle. The team also hung in the dugout a custom jersey that read "Boston Strong" with the number 617, representing the city of Boston's area code. On many occasions during the season, victims of the attack and the law enforcement officers involved were given the honor of throwing the ceremonial first pitch. Following their victory in the 2013 World Series, the first one clinched at home at Fenway Park since 1918, Red Sox players Jonny Gomes and Jarrod Saltalamacchia performed a ceremony during the team's traditional duck boat victory parade, in which they placed the World Series trophy and the custom 617 jersey on the Boston Marathon finish line on Boylston Street, followed by a moment of silence and the singing of "God Bless America". 
This ceremony helped the city "reclaim" the spirit that had been lost after the bombing. Overall, the Red Sox team and organization played a role in the healing process after the tragedy, owing to the team's unifying effect on the city. 2014–2017 Following the 2013 championship, the team finished last in the AL East during 2014 with a record of 71–91, and again in 2015 with a record of 78–84. On September 12, 2015, David Ortiz hit his 500th career home run off Matt Moore at Tropicana Field, becoming the 27th player in MLB history to achieve that prestigious milestone; in November 2015, Ortiz announced that the 2016 season was to be his last. The Red Sox had a record of 93–69 and won their division in 2016, with six American League All-Stars, the AL Cy Young Award winner in Rick Porcello, and the runner-up for the AL Most Valuable Player Award, Mookie Betts. Rookie Andrew Benintendi established himself in the Red Sox outfield, and Steven Wright emerged as one of the year's biggest surprises. The Red Sox grabbed the lead in the AL East early and held on to it throughout the year, a season in which many opposing teams honored Ortiz ahead of his retirement. Despite the success, the team lost five of their last six games of the regular season and were swept in the ALDS by the eventual American League champion Cleveland Indians. The Red Sox once again finished with a record of 93–69 in 2017 and repeated as division champions. The team went 5–5 in their last ten regular-season games and were eliminated by the Houston Astros in the ALDS in four games. The Red Sox subsequently fired their manager, John Farrell, and hired Alex Cora, signing him to a three-year deal. "Damage done": 2018 World Series Championship The Red Sox finished the 2018 regular season with a 108–54 record, winning the American League East division title for the third consecutive season, eight games ahead of the second-place New York Yankees, and were the first team to clinch a berth in the 2018 postseason. The Red Sox surpassed the 100-win mark for the first time since 1946, broke the franchise record of 105 wins that had been set in 1912, and won the most games of any MLB team since the 2001 Seattle Mariners won 116. The 2018 Red Sox were led by All-Stars Mookie Betts, J. D. Martinez, Chris Sale, and Craig Kimbrel. Betts led baseball in batting average and slugging percentage, while Martinez led in runs batted in. Sale tossed only 158 innings due to a shoulder injury late in the year, but was otherwise superb, posting a 2.11 earned run average to go along with 237 strikeouts. Kimbrel saved 42 games and struck out 96 batters. The Red Sox entered the postseason as the top seed in the American League, and defeated the New York Yankees (100–62) in four games in the Division Series. Next, they defeated the defending champion Houston Astros (103–59) in five games in the League Championship Series. Boston then defeated the Los Angeles Dodgers (92–71) in five games in the World Series, for the team's fourth championship in 15 years and ninth in franchise history. The team's motto during the season, "do damage", became "damage done" upon their victory. Based on these exploits, the team has been considered the best MLB team of the 2010s, one of the best Red Sox teams ever, and one of the best baseball teams since the 1998 New York Yankees. 2019–present: Decline and struggles Despite retaining most players from the 2018 championship team, the 2019 Red Sox won 24 fewer games, finishing third in the division and missing the playoffs for the first time since 2015. 
President of Baseball Operations Dave Dombrowski was dismissed following a September loss to the Yankees. On October 28, the Red Sox hired Chaim Bloom as his replacement on a five-year contract, with the title of Chief Baseball Officer. On January 7, 2020, it was reported in The Athletic that the Red Sox had used their video replay room to steal signs during their 2018 season. On January 15, the Red Sox and manager Alex Cora agreed to mutually part ways after he was named in MLB's report about the Houston Astros sign stealing scandal, which occurred during his tenure as bench coach with the 2017 Astros. Ron Roenicke was subsequently named Boston's interim manager. On February 10, a trade of Mookie Betts and David Price to the Los Angeles Dodgers was made official, in a move seen as a salary dump by analysts, although denied by Red Sox executives. In March, the start of the MLB season was indefinitely postponed, due to the COVID-19 pandemic. In April, MLB's investigation into 2018 sign-stealing resulted in a finding of improper actions by the team's replay operator, who as a result was suspended for the 2020 season, and the team forfeited their second-round selection in the 2020 MLB draft. The "interim" tag was subsequently removed from Roenicke's title. The team struggled throughout their abbreviated 60-game regular season, contested July 24 through September 27, finishing in last place in the AL East division, with a record of 24–36. Prior to the final regular season game, management announced that Roenicke would not return as manager for the 2021 season. Alex Cora returned as manager for the 2021 season, with the team finishing at 92–70 and qualifying for the postseason as the fourth seed in the AL. The Red Sox defeated the Yankees in the AL Wild Card Game, and defeated the Rays in the Division Series, but were eliminated by the Astros in the League Championship Series. The 2022 season was much less successful, with the team finishing in last place within their division with a 78–84 record, the first losing record for the team in a 162-game season since 2015. Bloom was fired on September 14, 2023. His replacement, Craig Breslow, an executive with the Chicago Cubs and former pitcher for the Red Sox, was hired on October 25, 2023. Awards For major MLB awards, voted by the Baseball Writers' Association of America (BBWAA), Red Sox players have won the MVP Award 12 times, most recently by Mookie Betts in 2018; the Cy Young Award seven times, most recently by Rick Porcello in 2016; Rookie of the Year six times, most recently by Dustin Pedroia in 2007; and Manager of the Year twice, most recently by Jimy Williams in 1999. Roster Regular season home attendance Fenway Park Due to the COVID-19 pandemic, the 2020 season was contested behind closed doors, and some 2021 games were contested with limited attendance per local ordinances. Source: Uniforms Spring training The franchise's first spring training was held in Charlottesville, Virginia, in 1901, when the team was known as the Boston Americans. Since 1993, the city of Fort Myers, Florida, has hosted Boston's spring training, first at City of Palms Park, and since 2012 at JetBlue Park at Fenway South. JetBlue Park In October 2008, the Lee County, Florida, Board of Commissioners approved an agreement with the Red Sox to build a new spring training facility for the team. In November 2008, the Red Sox signed an agreement with Lee County intended to keep their spring training home in the Fort Myers area for 30 more years. 
In April 2009, the Red Sox announced that the new stadium would be located on a lot north of Southwest Florida International Airport. In March 2011, the team and JetBlue Airlines officials announced that the new field would be named JetBlue Park at Fenway South. JetBlue Park opened in March 2012. Many characteristics of the stadium have been taken from Fenway Park, including a Green Monster wall in left field. Included in the wall is a restored version of the manual scoreboard that was housed at Fenway for almost 30 years, beginning in the 1970s. The field dimensions are identical to those at Fenway. Truck Day The unofficial beginning of the spring training season for the Red Sox is Truck Day, the day a tractor-trailer filled with equipment leaves Fenway Park bound for the team's spring training facility in Florida. 2021's Truck Day was February 8. Rivalries New York Yankees The Red Sox and New York Yankees have been rivals for more than 100 years. The rivalry is often considered one of the oldest, fiercest and most famous rivalries in professional sports. The rivalry is often a heated subject of conversation in the Northeastern United States. Since the inception of the wild card team and an added Division Series, every postseason except for 2014 and 2023 has featured one or both of the American League East rivals. The two teams have squared off in the American League Championship Series (ALCS) three times, with the Yankees winning in 1999 and 2003 and the Sox winning in 2004. The teams have faced off in one American League Division Series (ALDS); 2018, won by the Red Sox in four games. The teams have played one American League Wild Card Game on October 5, 2021, which the Red Sox won as well. The teams have twice met in the last regular-season series to decide the league title, in 1904 (which the Red Sox won) and 1949 (which the Yankees won). The teams also finished tied for first in 1978, when the Yankees won a high-profile one-game playoff for the division title. The 1978 division race is memorable for the Red Sox having held a 14-game lead over the Yankees more than halfway through the season. In 2003, The Red Sox lost in Game 7 of the ALCS on Aaron Boone's walk-off home run. Similarly, the 2004 ALCS is notable for the Yankees leading 3 games to 0 and ultimately losing the best-of-seven series. The Red Sox comeback was the first time in major league history that a team came back from an 0–3 deficit to win a series. The rivalry is often termed "the best" and "greatest rivalry in all of sports." Games between the two teams often generate a great deal of interest and get extensive media coverage, including being broadcast on national television. Tampa Bay Rays The rivalry between Boston and the Tampa Bay Rays developed in the late 2000s, after the two clubs had their first postseason meeting in the 2008 ALCS. Since then, both teams have won the American League East division a combined seven times. While the rivalry is more recent than Sox' rivalry with the Yankees, it has been called one of the most competitive in modern baseball. The teams have met three times in the MLB postseason, with the Rays winning the 2008 ALCS and the Red Sox winning the 2013 ALDS and 2021 ALDS. Radio and television The flagship radio station of the Red Sox is WEEI-FM 93.7. Joe Castiglione has broadcast Red Sox games since 1983 (initially assisting Ken Coleman) and has been the lead play-by-play announcer since 1993. 
Tim Neverett worked with him from 2016 through 2018, but in 2019, WEEI opted for a more conversational format with a variety of commentators alongside Castiglione. Former Red Sox player Lou Merloni has provided color commentary since 2013. Castiglione's predecessors include Curt Gowdy and Ned Martin. He has also worked with play-by-play veterans Bob Starr and Jerry Trupiano. Many stations throughout New England and beyond carry the broadcasts. All Red Sox telecasts not shown nationally are available on New England Sports Network (NESN), with Dave O'Brien calling play-by-play, and Kevin Youkilis, Kevin Millar and Will Middlebrooks splitting color commentary duties. Jerry Remy, a former Red Sox second baseman, served as color analyst from 1988 until his death in 2021. Remy had lung cancer, and would at times step away from broadcasting duties to focus on his health. Dennis Eckersley, a former Red Sox pitcher, worked as a color commentator for NESN until his retirement following the 2022 season. Several local television stations, including the original WHDH-TV, WNAC-TV (now the current WHDH), WBZ-TV, WSBK-TV, WLVI, WABU, and WFXT, broadcast Red Sox games prior to 2006, when NESN became the exclusive home of the team. Music The integration of music into the culture of the Red Sox dates back to the Americans era, which saw the first use of the popular 1902 showtune "Tessie" as a rallying cry by fans. The tune saw a resurgence in popularity when a new version by Boston-area band the Dropkick Murphys was featured in the 2005 film Fever Pitch, which tells the story of an obsessive Red Sox fan. The song is frequently played after home wins and inspired the name of Red Sox mascot Wally the Green Monster's "sister" Tessie. The band's song "I'm Shipping Up to Boston" has been used to signify the entrance of Boston's closing pitcher. Another song associated with the team and its fanbase is Neil Diamond's 1969 single "Sweet Caroline". The song was first introduced to Fenway Park in 1997, and by 2002 its play had been established as a nightly occurrence. It continues to be played at every home game during the 8th inning, sung along to by those in attendance. In 2007, Diamond revealed that the song was written for Caroline Kennedy, an American diplomat and the daughter of Boston icon President John F. Kennedy. Caroline Kennedy's great-grandfather, John F. Fitzgerald, threw Fenway Park's first-ever ceremonial opening pitch on April 20, 1912. When Diamond was named a Kennedy Center Honors recipient in 2011, Red Sox executive assistant Claire Durant arranged for 80 Red Sox fans to travel to Washington for the ceremony, which culminated in them singing the song behind Smokey Robinson onstage. Retired numbers Previously, the Red Sox published three official requirements for a player to have his number retired, listed on their website and in their annual media guides: election to the National Baseball Hall of Fame, at least 10 years played with the Red Sox, and having finished his career with the club. These requirements were reconsidered after the election of Carlton Fisk to the Hall of Fame in 2000, as Fisk met the first two requirements but played the second half of his career with the Chicago White Sox. As a means of meeting the criteria, then-GM Dan Duquette hired Fisk for one day as a special assistant, which allowed Fisk to technically finish his career with the Red Sox. In 2008, the Red Sox made an "exception" by retiring number 6 for Johnny Pesky. 
Pesky neither spent ten years as a player nor was elected to the Baseball Hall of Fame; however, Red Sox ownership cited "... his versatility of his contributions—on the field, off the field, [and] in the dugout ...", including as a manager, scout, and special instructor, and decided that the honor had been well earned. Pesky spent 57 years with the Red Sox organization: as a minor league player (1940–1941), major league player (1942, 1946–1952), minor league manager (1961–1962, 1990), major league manager (1963–1964, 1980), broadcaster (1969–1974), major league coach (1975–1984), and as a special instructor and assistant general manager (1985–2012). In 2015, the Red Sox chose to forgo the official criteria and retire Pedro Martínez's number 45. Martínez spent only seven of his 18 seasons in Boston. In justifying the number's retirement, Red Sox principal owner John Henry stated, "To be elected into the Baseball Hall of Fame upon his first year of eligibility speaks volumes regarding Pedro's outstanding career, and is a testament to the respect and admiration so many in baseball have for him." After announcing Martínez's number retirement, the official criteria no longer appeared on the team website or in future media guides. In 2017, less than eight months after he played the final game of his illustrious career, David Ortiz had his number 34 retired by the Red Sox. Ortiz was elected to the Hall of Fame in his first year of eligibility in 2022. To date, Ortiz is the only Red Sox player to have been on the active playoff roster of three World Series championship teams (2004, 2007, 2013) since the issuance of jersey numbers starting in 1931. The number 42 was officially retired by Major League Baseball in 1997, but Mo Vaughn was one of a handful of players to continue wearing number 42 due to a grandfather clause. He last wore it for the team in 1998. In commemoration of Jackie Robinson Day, MLB invited players to wear the number 42 for games played on April 15; Coco Crisp (CF), David Ortiz (DH), and DeMarlo Hale (coach) did so in 2007 and again in 2008. Starting in 2009, MLB had all uniformed players for all teams wear number 42 for Jackie Robinson Day. While not officially retired, several numbers have not been issued by the Red Sox since the departure of prominent figures who wore them, specifically:
15 – Dustin Pedroia 2B (MLB 2006–2019; all with Boston)
21 – Roger Clemens RHP (MLB 1984–2007; Boston 1984–1996)
33 – Jason Varitek C (MLB 1997–2011; all with Boston); Varitek reclaimed his #33 when he became a coach in 2021.
49 – Tim Wakefield RHP (MLB 1992–1993, 1995–2011; Boston 1995–2011)
There has also been debate in Boston media circles and among fans about the potential retiring of Tony Conigliaro's number 25. Nonetheless, since Conigliaro's last full season in Boston, 1970, the number has never been taken out of circulation and has been issued to multiple players—notably Troy O'Leary from 1995 to 2001—along with coach Dwight Evans in 2002 and manager Bobby Valentine in 2012. Until the late 1990s, the retired numbers hung on the right-field facade in the order in which they were retired: 9–4–1–8. It was pointed out that the numbers, when read as a date (9/4/18), marked the eve of the first game of the 1918 World Series, the last championship series that the Red Sox won before 2004. After the facade was repainted, the numbers were rearranged in numerical order. In 2012, the numbers were rearranged again in chronological order of retirement (9, 4, 1, 8, 27, 6, 14), followed by Robinson's 42. 
As additional numbers were retired, Robinson's 42 was moved to the right so it remains the right-most number hanging. Baseball Hall of Famers Ford C. Frick Award recipients BBWAA Career Excellence Award recipients Several baseball writers, professionally based in Boston while writing about the Red Sox, have been recipients of the BBWAA Career Excellence Award (formerly the J. G. Taylor Spink Award), given for "meritorious contributions to baseball writing". Each of these writers spent at least part of their career with The Boston Globe. Boston Red Sox Hall of Fame Since 1995, the team has maintained its own hall of fame, recognizing distinguished careers of former uniformed and non-uniformed team personnel. Red Sox personnel inducted into the National Baseball Hall of Fame are automatically inducted into the team's hall of fame. Other honorees are chosen via a 15-member selection committee. Minor league affiliations As of the 2021 season, Boston's farm system consists of six minor league affiliates, fielding seven minor league teams (the Red Sox have two teams in the Dominican Summer League). Other notable seasons and team records Nomar Garciaparra hit .372 in 2000, the club record for a right-handed hitter. David Ortiz set the franchise record for home runs in a season with 54 in 2006, surpassing Jimmie Foxx's record of 50 home runs set in 1938. On April 22, 2007, Manny Ramírez, J. D. Drew, Mike Lowell, and Jason Varitek hit four consecutive home runs in the 3rd inning off 10 pitches from Chase Wright of the New York Yankees in his second Major League start and his fourth above Single-A ball. This was the fifth time in Major League history, and the first time in Red Sox history, that this feat had occurred. Notably, J. D. Drew had previously contributed to a four consecutive home run sequence on September 18, 2006, while with the Los Angeles Dodgers (coincidentally, he was also the second batter in that sequence). Additionally, then-Red Sox manager Terry Francona's father, Tito Francona, was also part of such a four consecutive home run sequence for the Cleveland Indians in 1963. The overall regular-season winning percentage since club inception in 1901 is .519, a record of 9,605–8,912 for games played through July 30, 2020. On September 1, 2007, Clay Buchholz no-hit the Baltimore Orioles in his second Major League start. He is the first Red Sox rookie and 17th Red Sox pitcher to throw a no-hitter. On September 22, 2007, with a victory over the Tampa Bay Devil Rays, the Red Sox clinched a spot in the postseason for the fourth time in five years, the first time in club history this had happened. Also, with this postseason berth, manager Terry Francona became the first manager in team history to lead the club to three playoff appearances. Between May 15, 2003, and April 10, 2013, the Red Sox sold out every home game. The 820-game streak is a record for all major American sports, narrowly passing the Portland Trail Blazers' record of 814 between 1977 and 1995. The previous Major League Baseball record had been held by the Cleveland Indians, who sold out 455 games between June 12, 1995, and April 2, 2001. (A sellout covers only ticket sales, not spectators in physical seats.) On May 21, 2011, the Red Sox played against the Chicago Cubs at Fenway Park for the first time since the 1918 World Series (they had faced each other at Chicago's Wrigley Field in 2005). Both teams wore uniforms that matched the style worn in 1918. 
In 2016, David Ortiz set all-time records for most home runs and runs batted in in a player's final MLB season. Ortiz finished the season with 38 homers, which surpassed Dave Kingman's 35 in 1986, and 127 runs batted in, which surpassed Shoeless Joe Jackson's 123 in 1920. The Red Sox set a team record for wins in a regular season with 108 in 2018, surpassing the 106-year-old record of 105 wins set in 1912. Including playoffs, the Red Sox won a total of 119 games, the third most total wins in an MLB season. On October 8, 2018, Brock Holt became the first player in MLB history to hit for the cycle in the postseason, doing so in a 16–1 win over the New York Yankees in Game 3 of the 2018 American League Division Series. See also General information History of the Boston Red Sox Red Sox Nation Tony Conigliaro Award The Jimmy Fund Sports in Massachusetts Sports in Boston Lists Boston Red Sox all-time roster List of Boston Red Sox award winners List of Boston Red Sox coaches List of Boston Red Sox managers List of Boston Red Sox seasons List of Boston Red Sox team records List of Major League Baseball franchise postseason streaks Media Game 6 – a film covering the team's ultimately unsuccessful 1986 World Series championship run Red Sox Rule – a 2008 book written by Michael Holley
The Baltimore Orioles (also known as the O's) are an American professional baseball team based in Baltimore, Maryland. The Orioles compete in Major League Baseball (MLB) as a member of the American League (AL) East division. As one of the American League's eight charter teams in 1901, the franchise spent its first year as a major league club in Milwaukee as the Milwaukee Brewers before moving to St. Louis to become the St. Louis Browns in 1902. After 52 years in St. Louis, the franchise was purchased in 1953 by a syndicate of Baltimore business and civic interests led by attorney and civic activist Clarence Miles and Mayor Thomas D'Alesandro Jr. The team's current owner is American trial lawyer Peter Angelos. The Orioles' home ballpark is Oriole Park at Camden Yards, which opened in 1992 in downtown Baltimore. The oriole is the official state bird of Maryland; the name has been used by several baseball clubs in the city, including another AL charter member franchise which moved to New York in 1903 and became the Yankees. Nicknames for the team include the "O's" and the "Birds". The franchise's first World Series appearance came in 1944 when the Browns lost to the St. Louis Cardinals. The Orioles went on to make six World Series appearances from 1966 to 1983, winning three in 1966, 1970, and 1983. This era of the club featured several future Hall of Famers who would later be inducted representing the Orioles, such as third baseman Brooks Robinson, outfielder Frank Robinson, starting pitcher Jim Palmer, first baseman Eddie Murray, shortstop Cal Ripken Jr., and manager Earl Weaver. The Orioles have won a total of ten division championships (1969, 1970, 1971, 1973, 1974, 1979, 1983, 1997, 2014, 2023), seven pennants (1944 while in St. Louis, 1966, 1969, 1970, 1971, 1979, 1983), and three wild card berths (1996, 2012, 2016). The franchise was the last charter member of the American League to win a pennant, and the last charter member to win a World Series. After 14 consecutive losing seasons between 1998 and 2011, the team qualified for the postseason three times under manager Buck Showalter and general manager Dan Duquette, including a division title and advancement to the American League Championship Series for the first time in 17 years in 2014. Four years later, the Orioles lost 115 games, the most in franchise history. The Orioles chose not to renew the expired contracts of Showalter and Duquette after the season, ending their respective tenures with Baltimore. The Orioles' current manager is Brandon Hyde, while Mike Elias serves as general manager and executive vice president. Two years after finishing 52-110 in 2021, the Orioles went 101-61 in 2023, en route to winning the AL East for the first time since 2014. From 1901 through the end of 2023, the franchise's overall win–loss record is 9,029–10,013 (). Since moving to Baltimore in 1954, the Orioles have an overall win–loss record of 5,567–5,459 () through the end of 2023. History The modern Orioles franchise can trace its roots back to the original Milwaukee Brewers of the minor Western League, beginning in 1877, when the league reorganized. The Brewers were there when the Western League renamed itself the American League in 1900. Milwaukee Brewers (1901) At the end of the 1900 season, the American League removed itself from baseball's National Agreement (the formal understanding between the NL and the minor leagues). Two months later, the AL declared itself a competing major league. 
As a result of several franchise shifts, the Brewers were one of only two Western League teams that didn't fold, move or get kicked out of the league (the other being the Detroit Tigers). In its first game in the American League, the team lost to the Detroit Tigers 14–13 after surrendering a nine-run lead in the 9th inning. To this day, it is a major league record for the biggest deficit overcome that late in the game. In the first American League season in 1901, they finished last (eighth place) with a record of 48–89. During its lone Major League season, the team played at Lloyd Street Grounds, between 16th and 18th Streets in Milwaukee. St. Louis Browns (1902–1953) After one year in Milwaukee, the club relocated to St. Louis, and for a while enjoyed some success, especially in the 1920s behind Hall of Fame first baseman George Sisler. However, the team's fortunes declined from then on, as playing success and gate receipts instead went increasingly to the Browns' own tenants at Sportsman's Park, the National League Cardinals, who became perennial NL contenders in the 1920s due to organizational innovations by team president Branch Rickey, a former player and manager for the Browns. Through World War II, the Browns won only one pennant, in the 1944 season stocked with wartime replacement players, and lost to the Cardinals in the third World Series played entirely in one ballpark, and the last until the COVID-19 pandemic forced the 2020 Series into a single venue. In 1953, with the Browns unable to afford even basic stadium upkeep and facing potential condemnation of the park by safety inspectors, owner Bill Veeck sold Sportsman's Park to the Cardinals and attempted to move the club back to Milwaukee, but this was vetoed by the other American League owners. Instead, Veeck sold his franchise to a partnership of Baltimore businessmen. Because Veeck was unpopular with fellow American League owners, his leaving baseball was a condition for the AL owners to approve the move. Baltimore Orioles (since 1954) The Miles-Krieger (Gunther Brewing Company)-Hoffberger group renamed their new team the Baltimore Orioles soon after taking control of the franchise. The nickname has a rich history in Baltimore, having been used by a National League club in the 1890s, an American League club (1901–02), and an International League club (AAA) from 1903 to 1953. The IL Orioles' most famous player was a local Baltimore product, hard-hitting left-handed pitcher Babe Ruth. When Oriole Park burned down in 1944, the team moved to a temporary home, Municipal Stadium, where they won the Junior World Series. Their large postseason crowds caught the attention of the major leagues, eventually leading to a new MLB franchise in Baltimore. First years in Baltimore (1954–1965) The new AL Orioles took about six years to become competitive even after jettisoning most of the holdovers from St. Louis. Under the guidance of Paul Richards, who served as both field manager and general manager from 1955 to 1958 (the first man since John McGraw to hold both positions simultaneously), the Orioles began a slow climb to respectability. While they posted a .500 record only once in their first five years (76–76), they were a success at the gate. In their first season, for instance, they drew more than 1.06 million fans – more than five times what they had ever drawn in their tenures in Milwaukee and St. Louis. This came amid slight turnover in the ownership group. Miles served as team president for two years, then stepped down in favor of developer James Keelty. 
In turn, Keelty gave way to financier Joe Iglehart. By the early 1960s, stars such as Brooks Robinson, John "Boog" Powell, and Dave McNally were being developed by a strong farm system. The Orioles first made themselves heard when they finished 89–65, good enough for second in the American League. While they were still eight games behind the Yankees, it was the first time they had been a factor in a pennant race that late in the season since 1944. It was also the first season of a 26-year stretch in which the team would have only two losing seasons. Shortstop Ron Hansen was named AL Rookie of the Year, and first-year pitcher Chuck Estrada tied for the league lead in wins with 18, finishing second to Hansen in the Rookie of the Year balloting. After the 1965 season, Hoffberger acquired controlling interest in the Orioles from Iglehart and installed himself as president. He had been serving as a silent partner over the past decade despite being the largest shareholder. Frank Cashen, advertising chief of Hoffberger's brewery, became executive vice-president. Best years in Baltimore (1966–1983) The Orioles' farm system had begun to produce a number of high-quality players and coaches who formed the core of winning teams; from 1966 to 1983, the Orioles won three World Series titles (1966, 1970, and 1983), six American League pennants (1966, 1969, 1970, 1971, 1979, and 1983), and five of the first six American League East titles. The first of those titles, in 1966, made the Orioles the last of the eight teams that made up the American League from 1903 to 1960 to win a World Series. During this time, the Orioles were known for playing baseball the Oriole Way, an organizational ethic best described by longtime farmhand and coach Cal Ripken, Sr.'s phrase "perfect practice makes perfect!" The Oriole Way was a belief that hard work, professionalism, and a strong understanding of fundamentals were the keys to success at the major league level. It was based on the belief that if every coach, at every level, taught the game the same way, the organization could produce "replacement parts" that could be substituted seamlessly into the big league club with little or no adjustment. This led to a run of success from 1966 to 1983 which saw the Orioles become the envy of the league and the winningest team in baseball. During this stretch, three different Orioles were named Most Valuable Player (Frank Robinson in 1966, Boog Powell in 1970, and Cal Ripken Jr. in 1983), four Oriole pitchers combined for six Cy Young Awards (Mike Cuellar in 1969, Jim Palmer in 1973, 1975, and 1976, Mike Flanagan in 1979, and Steve Stone in 1980), and three players were named Rookie of the Year (Al Bumbry in 1973, Eddie Murray in 1977, and Cal Ripken Jr. in 1982). It was also during this time that the Orioles severed their last remaining financial link to their era in St. Louis. In 1979, Hoffberger sold the Orioles to his longtime friend, Washington attorney Edward Bennett Williams. As part of the deal, Williams bought the publicly traded shares Donald Barnes had issued in 1936 while the team was still in St. Louis, making the franchise privately held once again and severing one of the few remaining links with the Orioles' past in St. Louis. During this rise to prominence, Weaver Ball came into vogue. Named for fiery manager Earl Weaver, it was defined by the Oriole trifecta of "Pitching, Defense, and the Three-Run Home Run." 
When an Oriole GM was told by a reporter that Weaver, as the skipper of a very talented team, was a "push-button manager", he replied, "Earl built the machine and installed all the buttons!" As Frank and Brooks Robinson grew older, newer stars emerged, including multiple Cy Young Award winner Jim Palmer and switch-hitting first baseman Eddie Murray. With the decline and eventual departure of two other professional sports teams in the area, the NFL's Baltimore Colts and baseball's Washington Senators, the Orioles' excellence paid off at the gate, as the team cultivated a large and rabid fan base at Memorial Stadium. Final seasons at Memorial Stadium (1984–1991) After winning the 1983 World Series, the Orioles spent the next five years in steady decline, finishing 1986 in last place for the first time since the franchise moved to Baltimore. The team hit bottom in 1988 when it started the season 0–21, en route to 107 losses and the worst record in the majors that year. The "Why Not?" Orioles surprised the baseball world the following year by spending most of the summer in first place until September, when the Toronto Blue Jays overtook them and seized the AL East title on the final weekend of the regular season. The next two years were spent below the .500 mark, highlighted only by Cal Ripken Jr. winning his second AL MVP Award in 1991. The Orioles said goodbye to Memorial Stadium, the team's home for 38 years, at the end of the 1991 campaign. Camden Yards opens and Ripken's record (1992–1995) Opening to much fanfare in 1992, Oriole Park at Camden Yards was an instant success, inspiring other retro-style major league ballparks over the following decades. The stadium was the site of the 1993 All-Star Game. The Orioles returned to contention in the first two seasons at Camden Yards, only to finish in third place both times. In 1993, with then-owner Eli Jacobs forced to divest himself of the franchise, Baltimore-based attorney Peter Angelos, along with the ownership syndicate he headed, was awarded the Orioles in bankruptcy court in New York City, returning the team to local ownership for the first time since 1979. Angelos' partners included author Tom Clancy and comic book distributor Steve Geppi. The Orioles, who spent all of 1994 chasing the New York Yankees, occupied second place in the new five-team AL East when the players' strike, which began on August 11, forced the eventual cancellation of the season. The labor impasse would continue into the spring of 1995. Almost all the major league clubs held spring training using replacement players, with the intention of beginning the season with them. The Orioles, whose owner was a labor union lawyer, were the lone dissenters against creating an ersatz team, choosing instead to sit out spring training and possibly the entire season. Had they fielded a substitute team, Cal Ripken Jr.'s consecutive games streak would have been jeopardized. The replacement-player question became moot when the strike was finally settled. The Ripken countdown resumed once the season began. Ripken finally broke Lou Gehrig's consecutive games streak of 2,130 games in a nationally televised game against the California Angels on September 6. This was later voted the all-time baseball moment of the 20th century by fans from around the country in 1999. Ripken finished his streak with 2,632 straight games, finally sitting on September 20, 1998, the Orioles' final home game of the season against the Yankees at Camden Yards. 
Playoff years (1996–1997) Before the 1996 season, Angelos hired Pat Gillick as general manager. Given the green light to spend heavily on established talent, Gillick signed several premium players like B. J. Surhoff, Randy Myers, David Wells and Roberto Alomar. Under new manager Davey Johnson and on the strength of a then-major league record 257 home runs in a single season, the Orioles returned to the playoffs after a 12-year absence by clinching the AL wild card berth. Alomar set off a firestorm in September when he spat into home plate umpire John Hirschbeck's face during an argument in Toronto. He was later suspended for the first five games of the 1997 season, even though most wanted him banned from the postseason. After dethroning the defending American League champion Cleveland Indians 3–1 in the Division Series, the Orioles fell to the Yankees 4–1 in an ALCS notable for right field umpire Rich Garcia's failure to call fan interference in the first game of the series, when 12-year-old Yankee fan Jeffrey Maier reached over the outfield wall to catch an in-play ball, which was scored as a home run for Derek Jeter, tying the game at 4–4 in the eighth inning. Absent Maier's interference, it appeared as if the ball might have been off the wall or caught by right fielder Tony Tarasco. The Yankees went on to win the game in extra innings on an ensuing walk-off home run by Bernie Williams. The Orioles went "wire-to-wire" (first place from start to finish) in winning the AL East title in 1997. After eliminating the Seattle Mariners 3–1 in the Division Series, the team lost again in the ALCS, this time to the underdog Indians 4–2, with each Oriole loss by only a run. Johnson resigned as manager after the season, largely due to a spat with Angelos concerning Alomar's fine for missing a team function being donated to Johnson's wife's charity. Pitching coach Ray Miller replaced Johnson. Downturn (1998–2006) With Miller at the helm, the Orioles found themselves not only out of the playoffs, but also with a losing season. When Gillick's contract expired in 1998, it was not renewed. Angelos brought in Frank Wren to take over as GM. The Orioles added volatile slugger Albert Belle, but the team's woes continued in the 1999 season, with stars like Rafael Palmeiro, Roberto Alomar, and Eric Davis leaving in free agency. After a second straight losing season, Angelos fired both Miller and Wren. He named Syd Thrift the new GM and brought in former Cleveland manager Mike Hargrove. In a rare event on March 28, 1999, the Orioles staged an exhibition series against the Cuban national team in Havana. The Orioles won the game 3–2 in 11 innings. They were the first Major League team to play in Cuba since 1959, when the Los Angeles Dodgers faced the Orioles in an exhibition. The Cuban team visited Baltimore in May 1999 (winning 10–6). The first decade of the 21st century saw the Orioles struggle due to the combination of lackluster play on the team's part, a string of ineffective management, and the ascent of New York and Boston to the top of the game – each rival having a clear advantage in financial flexibility due to their larger media market size. Further complicating the situation for the Orioles was the relocation of the National League's Montreal Expos franchise to nearby Washington, D.C., in 2004. Orioles owner Peter Angelos demanded compensation from Major League Baseball, as the new Washington Nationals threatened to carve into the Orioles fan base and television dollars. 
However, there was some hope that having competition in the larger Baltimore-Washington metro market would spur the Orioles to field a better product to compete for fans with the Nationals. Rebuilding years & arrival of Buck Showalter (2007–2011) A new President of Baseball Operations, Andy MacPhail, was brought in about halfway through the 2007 season. MacPhail spent the remainder of the 2007 season assessing the talent level of the Orioles, and determined that significant steps needed to be taken if the Orioles were ever to be a contender again in the American League East. He completed two blockbuster trades during the next off-season, each sending a premium player away in return for five prospects (or younger, less expensive players). Miguel Tejada, who had hit .296 with 18 HR and 81 RBI in 2007, went to the Houston Astros in exchange for outfielder Luke Scott, pitchers Matt Albers, Troy Patton, and Dennis Sarfate, and third baseman Mike Costanzo. Also, the newly designated ace of the Orioles' rotation, Érik Bédard, who went 13–5 with a 3.16 ERA and 221 strikeouts in 2007, was sent to the Seattle Mariners in exchange for top outfield prospect Adam Jones, left-handed pitcher George Sherrill, and three minor league pitchers, Chris Tillman, Kam Mickolio, and Tony Butler. The Bédard trade in particular would go down as one of the most lopsided and successful trades in the history of the franchise. While MacPhail would find success in most of the trades he made for the Orioles over the long term, the stopgap veteran acquisitions he made often did not pan out, and as a result, the team never finished higher than 4th place in the AL East, or with more than 69 wins, while MacPhail was in charge. Although some of his free agent signings made positive contributions (such as reliever Koji Uehara), most gave mediocre returns, at best. In particular, the Orioles never managed to cobble together a successful pitching staff during this time. Their most consistent starting pitcher from 2008 to 2011 was the late bloomer Jeremy Guthrie, who was named the Opening Day starter in 3 of the 4 seasons and had a cumulative 4.12 ERA during this stretch. Following Davey Johnson's departure after the 1997 playoff season, Orioles ownership struggled to find a manager they liked, and this time period was no exception. Dave Trembley was brought on as an interim manager in June 2007, and had the interim tag removed later that year. Trembley was at the helm again in 2008 and 2009 but was never able to lead the team out of the cellar in the AL East. After starting the 2010 season a dismal 15–39, Dave Trembley was fired and third base coach Juan Samuel was named the interim manager. The Orioles were seeking a more permanent solution at manager as the 2010 season continued to unfold, and two-time AL Manager of the Year Buck Showalter was eventually hired in July 2010. The Orioles went 34–23 after Showalter took over, foreshadowing that a brighter future might be on the horizon, and giving Orioles fans renewed hope and optimism for the team's future. The Orioles made some aggressive moves to improve the team in 2011 in the hopes of securing their first playoff berth since 1997. Andy MacPhail completed trades to bring in established veterans like Mark Reynolds and J. J. Hardy from the Diamondbacks and Twins, respectively. Veteran free agents Derrek Lee and Vladimir Guerrero were also brought in to help improve the offense. 
At the 2011 trade deadline, fan favorite Koji Uehara was sent to the Texas Rangers in exchange for Chris Davis and Tommy Hunter, a move that would not pay immediate dividends, but would be crucial to the team's later success. While these moves had varying impacts, the Orioles did score 95 more runs in 2011 than they had the previous year. The team still finished last in the AL East due to the utter failures of the team's pitching staff. Brian Matusz compiled one of the highest single-season ERAs in MLB history (10.69 over 12 starts) and every pitcher who started a game for the Orioles in 2011 ended the season with an ERA of 4.50 or higher except for Jeremy Guthrie. The Orioles finished 30th out of 30 MLB teams that year with a 4.89 team ERA. Andy MacPhail's contract was not renewed in October 2011 and a search for a new GM began. After a public interview process where several candidates declined to take the position, ex-GM Dan Duquette was brought in to serve as the Executive Vice-President of Baseball Operations. Return to success under Showalter (2012–2016) Duquette wasted no time in overhauling the Orioles roster, especially the MLB-worst pitching staff. He traded fan favorite Jeremy Guthrie to the Colorado Rockies in exchange for Jason Hammel. He brought in new free agent starting pitcher Wei-Yin Chen from the Nippon Professional Baseball league, and Miguel González was signed as a minor league free agent. Nate McLouth was signed to a minor league deal in June 2012 and would prove to make a significant impact down the stretch. This year also marked the debut of the much hyped prospect Manny Machado. The Orioles won 93 games in 2012 (after winning 69 in the previous year) thanks in large part to a 29–9 record in one-run games, and a 16–2 record in extra inning games. The difference between this Orioles bullpen and bullpens past was like night and day, led by Jim Johnson and his 51 saves. He finished with a 2.49 ERA that season with Darren O'Day, Luis Ayala, Pedro Strop, and Troy Patton all finishing as well with ERAs under 3.00. Experts were amazed as the team continued to outperform expectations, but regression never came that year. They battled with the New York Yankees for first place in the AL East up until September, and would earn their first playoff berth in 15 years by winning the second wildcard spot in the American League. In the 'sudden death' wildcard game against the Texas Rangers, Joe Saunders (acquired in August of that year in exchange for Matt Lindstrom) defeated Yu Darvish to help the Orioles advance to the divisional round, where they faced a familiar opponent, the Yankees. The Orioles forced the series to go five games (losing games 1 and 3 of the series, while winning 2 and 4), but CC Sabathia outpitched the Orioles Jason Hammel in Game 5 and the Orioles were eliminated from the playoffs. While the Orioles would ultimately miss the playoffs in 2013, they finished with a record of 85–77, tying the Yankees for third place in the AL East. By posting winning records in 2012 and 2013, the Orioles achieved the feat of back-to-back winning seasons for the first time since 1996 and 1997. On September 16, 2014, the Orioles clinched the division for the first time since 1997 with a win against the Toronto Blue Jays as well as making it back to the postseason for the second time in three years. The Orioles finished the 2014 season with a 96–66 record and went on to sweep the Detroit Tigers in the ALDS. The O's were then in turn swept by the Kansas City Royals in the ALCS. 
Out of an abundance of caution, the Baltimore Orioles announced the postponement of the April 27 and 28 games in 2015 against the Chicago White Sox after violent riots broke out in West Baltimore following the death of Freddie Gray. Following the announcement of the second postponement, the Orioles also announced that the third game in the series, scheduled for Wednesday, April 29, would be closed to the public and televised only, apparently the first time in 145 years of Major League Baseball that a game had no spectators, breaking the previous 131-year-old record for lowest paid attendance at an official game (the previous record being 6). The Orioles beat the White Sox, 8–2. The Orioles said the make-up games would be played Thursday, May 28, as a double-header. In addition, the weekend games against the Tampa Bay Rays were moved to the Rays' home stadium in St. Petersburg, where Baltimore played as the home team. Downfall and final years under Showalter (2017–2018) Although the 2016 season was another above-.500 season for the Orioles, they failed to win their division but were able to secure a Wild Card spot. However, they lost to the Toronto Blue Jays in the AL Wild Card Game. This was the last postseason appearance for Baltimore until 2023. In 2017, the Orioles started with a 22–10 record. Despite the early season success, the Orioles suffered their first losing season since 2011. The Orioles would suffer one of Major League Baseball's worst seasons in 2018, en route to going 47–115. 2018 proved to be the final season in Baltimore for general manager Duquette and Showalter, as their contracts were not renewed after the season. Brandon Hyde era (2019–present) The Orioles began their rebuild by trading away fan favorites Manny Machado, Zach Britton, Jonathan Schoop, Brad Brach, Kevin Gausman, and Darren O'Day in July 2018. In 2019, the Orioles finished 54–108, making them the second Orioles team to surpass the 1988 team's loss total. In 2020, the Orioles showed some progress in their rebuild, as they finished 25–35 in a season shortened by the COVID-19 pandemic, their best finish since 2017. In 2021, the Orioles endured two separate losing streaks of at least 14 games, en route to a 52–110 finish. 2021 was the third 110-loss season in franchise history; the first came in 1939, when the team was known as the St. Louis Browns, and the second in 2018 as the Baltimore Orioles. On June 9, 2022, Louis Angelos sued his brother, Orioles chairman and CEO John P. Angelos, and mother Georgia Angelos in Baltimore County Circuit Court. Louis Angelos claims that their father intended for the brothers and their mother to share control of the team. The lawsuit states the elder Angelos collapsed in 2017 due to heart problems and established a trust with his wife and sons as co-trustees. Louis Angelos is seeking to have his brother and mother removed as co-trustees of the trust that controls the Orioles and removed as co-agents of Peter Angelos' power of attorney. The suit claims Georgia Angelos wants to sell the team and that an advisor attempted to negotiate a sale in 2020 but John Angelos vetoed a potential deal. The suit claims Angelos unilaterally fired long-time employees loyal to his father, including former center fielder Brady Anderson, the longtime special assistant to the executive vice president for baseball operations. 
The suit claims John Angelos transferred tens of millions of dollars worth of property out of his father's law firm and into a limited liability company controlled by his personal attorney. In separate statements released by the team, Georgia and John Angelos refuted the claims. In the event of any sale, Major League Baseball has reportedly encouraged Cal Ripken Jr to be part of any incoming ownership group that may take control of the team. In April 2023, the Orioles went 19–9, setting a franchise record for wins in the month of April. By August 2023, the Orioles, led by a core of first-and-second-year players Adley Rutschman, Gunnar Henderson, Félix Bautista and Kyle Bradish, were in first place in the division and described in The Athletic as "young, fun and arguably the best story in baseball." Much of their good will was overshadowed, however, when it was reported that play-by-play announcer Kevin Brown had been suspended indefinitely by the Orioles for his pregame remarks on MASN, the team-owned network, two weeks earlier. During a "seemingly benign" introduction to a series against the Tampa Bay Rays, Brown observed that the team had struggled to win a series in Tampa in the past several seasons. It was described in The Athletic as a "petty" move by John Angelos, "the only person [in the organization] with enough power that no one dare question the validity of anything he says and does, no matter how foolish it is." Several broadcasters came to Brown's defense after the news broke. Gary Cohen said the team had "draped itself in utter humiliation" and Michael Kay said the suspension made "the Orioles look so small and insignificant and minor league." Regular season home attendance Memorial Stadium Oriole Park at Camden Yards Logos and uniforms The Orioles' home uniform is white with the word "Orioles" written across the chest. The road uniform is gray with the word "Baltimore" written across the chest. This style, with noticeable changes in the script, striping and materials, has been worn for much of the team's history, but with a few exceptions: In 1954, 1989–94 (road) and 1995–2003 (home), the scripted word "Orioles" and block letters are rendered in black with orange trim. The 1995–2003 style featured orange numbers in front but black letters in the back. From 1963 to 1965, the home uniforms featured "Orioles" in block lettering instead of the more familiar cursive script style. It was also rendered in black with orange trim. The underline below the word "Orioles" disappeared from 1966 to 1988. Road uniforms bore the team name from 1954 to 1955 and from 1973 to 2008. Extra white trim was added to the road and alternate uniforms from 1995 to 2000. Sleeveless home alternate uniforms were used in the 1968 and 1969 seasons. Player names were added to the uniforms in 1966, but the home uniforms originally featured black block letters. It would not match the road uniform lettering until 1971, which were orange with black trim. A long campaign of several decades was waged by numerous fans and sportswriters to return the name of the city to the "away" jerseys which was used since the 1950s and had been formerly dropped during the 1970s era of Edward Bennett Williams when the ownership was continuing to market the team also to fans in the nation's capital region after the moving of the former Washington Senators in 1971. After several decades, approximately 20% of the team's attendance came from the metro Washington area. 
An alternate uniform is black with the word "Orioles" written across the chest. They first wore black uniforms in the 1993 season and have continued to do so since; the current style, with the letters lacking additional trim, was first used in 2000. The Orioles wear their black alternate jerseys for Friday night games with the alternate "O's" cap (first introduced in 2005), whether at home or on the road; the regular batting helmet is still used with this uniform. In 2017, the Orioles began to use their batting practice caps for select games with the black uniforms. The aforementioned caps resemble their regular road caps save for the black bill. Occasionally, the Orioles would also wear the black alternates on other days of the week, often pairing them with the home or road "cartoon bird" caps. After the "City Connect" uniforms became the team's Friday home uniform (see below), the black alternates were only used on Friday road games and on home games depending on the preference of the starting pitcher. The Orioles also wore orange alternate uniforms at various points in their history. The orange alternates were first used in the 1971 season and were paired with orange pants, but these lasted only two seasons. The second orange uniform, which was a pullover style, was worn from 1975 to 1987, but was not worn at all in the 1983, 1985 and 1986 seasons. A third orange uniform was used from 1988 to 1992, returning to the button-down style. In 2012, the Orioles brought back the orange uniforms as a second alternate uniform; the team currently wears them on Saturdays at home or on the road, though they've also worn them on other days of the week either due to a pitcher's preference or a previously postponed contest. The Orioles' cap designs have alternated between the team's iconic "cartoon bird" logo and the full-bodied bird logo. Initially, the caps had the full-bodied bird logo between 1954 and 1965, alternating between an all-black cap and an orange-brimmed black cap. They also wore a black cap with an orange block-letter "B" for part of the 1963 season. The "cartoon bird" was first used in 1966, and with minor tweaks, was prominently featured on the team's caps until 1988. Initially, the Orioles kept the orange-brimmed black cap with the "cartoon bird", but switched to a white-paneled black cap with an orange brim in 1975. Also that same year, they wore orange-paneled black caps to pair with the orange alternates, but these lasted only two seasons. In 1989, the full-bodied bird logo returned along with the all-black cap, with a few tweaks along the way. Initially the cap was used for both home and road games, but from 2002 until 2008 it was worn only on the road. An orange-brimmed variety was also introduced in 1995. Initially exclusive to the team's black uniforms, this style became the home cap in 2002 and became the team's regular cap (home or away) from 2009 to 2011. In 2012, the Orioles brought back a modernized version of the "cartoon bird" along with the white-paneled and orange-brimmed black cap for home games and the orange-brimmed black cap for road games. In 2013, ESPN ran a "Battle of the Uniforms" contest between all 30 Major League clubs. Despite entering as a #13 seed in the contest's ranking system, the Birds beat the #1 seed Cardinals in the championship round. In 2023, the Orioles introduced a new City Connect jersey, inspired by the art and culture of Baltimore and its neighborhoods. 
Radio and television coverage Radio In Baltimore, Orioles radio broadcasts can be heard on WBAL-AM and WIYY, both owned by Hearst Television. Geoff Arnold, Melanie Newman, Brett Hollander, Scott Garceau and Kevin Brown alternate as play-by-play announcers. WBAL feeds the games to a network of 36 stations, covering Washington, D.C., and all or portions of Maryland, Pennsylvania, Delaware, Virginia, West Virginia, and North Carolina. This is WBAL's fourth stint as the Orioles' flagship. WBAL has carried Orioles games for most of the team's time in Baltimore. Prior to WBAL and WIYY, Orioles games were broadcast locally on WJZ-FM from 2015 to 2021. WJZ had earlier carried broadcasts from 2007 to 2010. Six former Orioles franchise radio announcers have received the Hall of Fame's Ford C. Frick Award for excellence in broadcasting: Chuck Thompson (who was also the voice of the old NFL Baltimore Colts), Jon Miller (now with the San Francisco Giants), Ernie Harwell, Herb Carneal, Bob Murphy, and Harry Caray (as a St. Louis Browns announcer in the 1940s). Other former Baltimore announcers include Josh Lewin (currently with the New York Mets), Bill O'Donnell, Tom Marr, Scott Garceau (returned in the 2020 season), Mel Proctor, Michael Reghi, former major league catcher Buck Martinez (now Toronto Blue Jays play-by-play), and former Oriole players including Brooks Robinson, pitcher Mike Flanagan and outfielder John Lowenstein. In 1991, the Orioles experimented with longtime TV writer/producer Ken Levine as a play-by-play broadcaster. Levine was best noted for his work on TV shows such as Cheers and M*A*S*H, but lasted only one season in the Orioles broadcast booth. Television MASN, co-owned by the Orioles and the Washington Nationals, is the team's exclusive television broadcaster. MASN airs almost the entire slate of regular season games. Some exceptions include Saturday games on either Fox (via its Baltimore affiliate, WBFF) or Fox Sports 1, or Sunday Night Baseball on ESPN. Many MASN telecasts in conflict with Nationals' game telecasts air on an alternate MASN2 feed. Veteran sportscaster Gary Thorne served as lead television announcer from 2007 to 2019, with Jim Hunter as his backup, along with Hall of Fame member and former Orioles pitcher Jim Palmer and former Oriole infielder Mike Bordick as color analysts, who almost always worked separately. In 2020, Thorne and Palmer were removed from the television booth due to COVID-19 concerns, and replaced with Scott Garceau. In 2021, MASN let go Thorne, Hunter, analysts Mike Bordick and Rick Dempsey, and studio host Tom Davis, and added Ben McDonald as a secondary analyst. Starting in 2022, Kevin Brown became the primary TV play-by-play announcer, with Garceau, Arnold or Newman the backups. The Orioles severed their ties with Comcast SportsNet Mid-Atlantic (now NBC Sports Washington) at the end of the 2006 season in favor of MASN, a joint venture with the Washington Nationals. It had been the Orioles' cable partner since 1984, when it was known as Home Team Sports. The Orioles and the Washington Nationals have been in a dispute over MASN rights fees since the early 2010s; MASN is owned by both teams, with the Orioles holding an 80% stake. The dispute, which is ongoing as of October 2020, centers on the Nationals' contention that they deserve a greater fee from MASN due to the team's recent success and market growth. When fees paid to each team were first negotiated, both teams were paid the same fees. WJZ-TV was the Orioles' broadcast TV home, completing its latest stint from 1994 through 2017. 
Since MASN acquired rights in 2007, its coverage was simulcast on WJZ-TV under the branding "MASN on WJZ 13". MASN elected not to syndicate any Orioles or Washington Nationals games to broadcast television for the 2018 season, marking the first time since the Orioles' arrival that their games are not on local broadcast television. Previously, WJZ-TV carried the team from their arrival in Baltimore in 1954 through 1978. In the first four seasons, WJZ-TV shared coverage with Baltimore's other two stations, WMAR-TV and WBAL-TV. The games moved to WMAR from 1979 through 1993 before returning to WJZ-TV. From 1994 to 2009, some Orioles games aired on WNUV. Musical traditions "O!" Since its introduction at games by the "Roar from 34", led by Wild Bill Hagy and others, in the late 1970s, it has been a tradition at Orioles games for fans to yell out the "Oh" in the line "Oh, say does that Star-Spangled Banner yet wave" in "The Star-Spangled Banner". "The Star-Spangled Banner" has special meaning to Baltimore historically, as it was written during the Battle of Baltimore in the War of 1812 by Francis Scott Key, a Baltimorean. The tradition is often carried out at other sporting events, both professional and amateur, and even sometimes at non-sporting events where the anthem is played, throughout the Baltimore/Washington area and beyond. Fans in Norfolk, Virginia, chanted "O!" even before the Tides became an Orioles affiliate. The practice caught some attention in the spring of 2005, when fans performed the "O!" cry at Washington Nationals games at RFK Stadium. The "O!" chant is also common at sporting events for the various Maryland Terrapins teams at the University of Maryland, College Park. At Cal Ripken Jr.'s induction into the National Baseball Hall of Fame, the crowd, composed mostly of Orioles fans, carried out the "O!" tradition during Tony Gwynn's daughter's rendition of "The Star-Spangled Banner". Additionally, a faint but audible "O!" could be heard on the television broadcast of Barack Obama's pre-inaugural visit to Baltimore as the national anthem played before his entrance. A resounding "O!" bellowed from the nearly 30,000 Ravens fans who attended the November 21, 2010, away game at the Carolina Panthers' Bank of America Stadium in Charlotte, North Carolina. A similar loud "O!" was heard from fans attending Super Bowl XLVII between the Baltimore Ravens and the San Francisco 49ers. The "O!" chant was also heard during the 2016 Summer Olympics in Rio de Janeiro, Brazil, when Baltimore native Michael Phelps received his gold medal for the freestyle on August 9, 2016. In recent years, when the Orioles host the Toronto Blue Jays, fans have begun to shout out the multiple instances of the word "O" in "O Canada". Washington Capitals fans will do the same when they play one of the NHL's Canadian teams. "Thank God I'm a Country Boy" It has been an Orioles tradition since 1975 to play John Denver's "Thank God I'm a Country Boy" during the seventh-inning stretch. In the edition of July 5, 2007, of Baltimore's weekly sports publication Press Box, an article by Mike Gibbons covered the apocryphal details of how this tradition came to be. During "Thank God I'm a Country Boy", Charlie Zill, then an usher, would put on overalls, a straw hat, and false teeth and dance around the club level section (244) that he tended to. He also has an orange violin that spins for the fiddle solos. He went by the name Zillbilly and had done the skit from the 1999 season until shortly before he died in early 2013. 
Of course, that does nothing to explain why the Orioles' audio staff began playing the song during every game's seventh-inning stretch beginning in August 1975. In reality, the song was tremendously successful nationwide, topping the Billboard Hot 100 for one week in 1975, and was played in stadiums across the country. The Orioles were chasing the Red Sox for the American League East Division title and incorporated numerous "good luck charms." After an inspiring comeback win, Oriole staff began playing this song at the seventh-inning stretch of every home game as one of the good-luck charms, beginning in August. During a nationally televised game on September 20, 1997, Denver himself danced to the song atop the Orioles' dugout, one of his final public appearances before dying in a plane crash three weeks later. "Orioles Magic" and other songs Songs from notable games in the team's history include "One Moment in Time" for Cal Ripken's record-breaking game in 1995, as well as the theme from Pearl Harbor, "There You'll Be" by Faith Hill, during his final game in 2001. The theme from Field of Dreams was played at the last game at Memorial Stadium in 1991, and the song "Magic to Do" from the stage musical Pippin was used that season to commemorate "Orioles Magic" on 33rd Street. During the Orioles' heyday in the 1970s, a club song, appropriately titled "Orioles Magic (Feel It Happen)", was composed by Walt Woodward and was played as the team ran out onto the field until Opening Day of 2008. Since then, the song (a favorite among all fans, who appreciated its references to Wild Bill Hagy and Earl Weaver) has been played (along with a video featuring several Orioles stars performing the song) only after wins. "Seven Nation Army" is played as a hype song while the fans chant the signature bass riff as a rally cry during key moments of a game or after a walk-off hit. The First Army Band During the Orioles' final homestand of the season, it is a tradition to display a replica of the 15-star, 15-stripe American flag at Camden Yards. Prior to 1992, the 15-star, 15-stripe flag flew from Memorial Stadium's center-field flagpole in place of the 50-star, 13-stripe flag during the final homestand. Since the move to Camden Yards, the former flag has been displayed on the batters' eye. During the Orioles' final home game of the season, The United States Army Field Band from Fort Meade performs the National Anthem prior to the start of the game. The band has also played the National Anthem at the finales of three World Series in which the Orioles played: 1970, 1971 and 1979. They are introduced as the "First Army Band" during the pregame ceremonies. PA announcer For 23 years, Rex Barney was the PA announcer for the Orioles. His voice became a fixture of both Memorial Stadium and Camden Yards, and his expression "Give that fan a contract", uttered whenever a fan caught a foul ball, was one of his trademarks; the other was his distinct "Thank Yooooou..." following every announcement. (He was also known on occasion to say "Give that fan an error" after a dropped foul ball.) Barney died on August 12, 1997, and in his honor that night's game at Camden Yards against the Oakland Athletics was held without a public-address announcer. Barney was replaced as Camden Yards' PA announcer by Dave McGowan, who held the position until December 2011. Lifelong Orioles fan and former MLB Fan Cave resident Ryan Wagner soon took over as the PA announcer. He was chosen out of a field of more than 670 applicants in the 2011–12 offseason. 
As of the 2022 season, Adrienne Roberson is the current Orioles PA announcer. Postseason appearances Of the eight original American League teams, the Orioles were the last of the eight to win the World Series, doing so in 1966 with its four–game sweep of the heavily favored Los Angeles Dodgers. When the Orioles were the St. Louis Browns, they played in only one World Series, the 1944 matchup against their Sportsman's Park tenants, the Cardinals. The Orioles won the first-ever American League Championship Series in 1969, and in 2012 the Orioles beat the Texas Rangers in the inaugural American League Wild Card game, where for the first time two Wild Card teams faced each other during postseason play. Baseball Hall of Famers Ford C. Frick Award (broadcasters only) Retired numbers The Orioles will retire a number only when a player has been inducted into the Hall of Fame with Cal Ripken Jr. being the only exception. However, the Orioles have placed moratoriums on other former Orioles' numbers following their deaths (see note below). To date, the Orioles have retired the following numbers: Note: Cal Ripken Sr.'s number 7, Elrod Hendricks' number 44, and Mike Flanagan's number 46 have not officially been retired, but a moratorium has been placed on them and they have not been issued by the team since their deaths. †Jackie Robinson's number 42 is retired throughout Major League Baseball Maryland State Athletic Hall of Fame Baltimore Orioles Hall of Fame The Orioles' official team hall of fame is located on display on Eutaw Street at Camden Yards. Team captains 33 Eddie Murray, 1B/DH, 1986–1988 Roster Minor league affiliates The Baltimore Orioles farm system consists of seven minor league affiliates. Franchise records and award winners Season records Individual records – batting Highest batting average: .340, Melvin Mora (2004) Most at bats: 673, B. J. 
Surhoff (1999) Most plate appearances: 749, Brady Anderson (1992) Most games: 163, Brooks Robinson (1961, 1964) and Cal Ripken (1996) Most runs: 132, Roberto Alomar (1996) Most hits: 214, Miguel Tejada (2006) Most total bases: 370, Chris Davis (2013) Highest slugging %: .646, Jim Gentile (1961) Highest on-base %: .442, Bob Nieman (1956) Most singles: 158, Al Bumbry (1980) Most doubles: 56, Brian Roberts (2009) Most triples: 12, Paul Blair (1967) Most home runs, RHB: 49, Frank Robinson (1966) Most home runs, LHB: 53, Chris Davis (2013) Most home runs, leadoff hitter: 35, Brady Anderson (1996) Most home runs, leading off game: 12, Brady Anderson (1996) Most consecutive games leading off with a home run: 4, Brady Anderson (April 18–21, 1996) Most extra base hits: 96, Chris Davis (2013) Most RBI, LHB: 142, Rafael Palmeiro (1996) Most RBI, RHB: 150, Miguel Tejada (2004) Most RBI, switch: 124, Eddie Murray (1985) Most RBI, month: 37, Albert Belle (June 2000) Most GWRBI: 25, Rafael Palmeiro (1998) Most consecutive games hit safely: 30, Eric Davis (1998) Most sac hits: 23, Mark Belanger (1975) Most sac flies: 17, Bobby Bonilla (1996) Most stolen bases: 57, Luis Aparicio (1964) Most walks: 118, Ken Singleton (1975) Most intentional walks: 25, Eddie Murray (1984) Most strikeouts: 219, Chris Davis (2016) Fewest strikeouts: 19, Rich Dauer (1980) Most hit by pitch: 24, Brady Anderson (1999) Most GIDP: 32, Cal Ripken (1985) Most pinch hits: 24, Dave Philley (1961) Most consecutive pinch hits: 6, Bob Johnson (1964) Most pinch hit RBI: 18, Dave Philley (1961) Individual records – pitching Most games: 81, Jamie Walker (2007) Most games, rookie: 67, Jorge Julio (2002) Most games, started: 40, Dave McNally (1969–70), Mike Cuellar (1970), Jim Palmer (1976), and Mike Flanagan (1978) Most games started, rookie: 36, Bob Milacki (1989) Most complete games: 25, Jim Palmer (1975) Most games finished: 63, Jim Johnson (2012–13) Most wins: 25, Steve Stone (1980) Most wins, rookie: 19, Wally Bunker (1964) Most losses: 21, Don Larsen (1954) Best won-lost %: .808, Dave McNally (1971) Most bases on balls: 181, Bob Turley (1954) Most hit batsmen: 18, Daniel Cabrera (2008) Most strikeouts: 221, Érik Bédard (2007) Most innings pitched: 323, Jim Palmer (1975) Most innings pitched, rookie: 243, Bob Milacki (1989) Most shutouts: 10, Jim Palmer (1975) Most consecutive shutout innings: 36, Hal Brown (July 7 – August 8, 1961) Most home runs allowed: 35, 4 times; last: Jeremy Guthrie (2009) Fewest home runs allowed (by qualifier): 8, Milt Pappas (209 IP) (1959) and Billy Loes (155 IP) (1957) Lowest ERA (by qualifier): 1.95, Dave McNally (1968) Highest ERA (by qualifier): 5.90, Rodrigo Lopez (2006) Most saves: 51, Jim Johnson (2012) Most saves, rookie: 27, Gregg Olson (1989) Most wins, reliever: 14, Stu Miller (1965) Most relief points: 131, Randy Myers (1997) Most innings pitched by reliever: 140.1, Sammy Stewart (1983) Most consecutive wins: 15, Dave McNally (April 12 – August 3, 1969) Most consecutive losses: 10, Jay Tibbs (July 10 – October 1, 1988) Most consecutive losses, start of season: 8, Mike Boddicker (1988) and Jason Johnson (2000) Most wins vs. one club: 6, Wally Bunker vs. Kansas City (1964) Most losses vs. one club: 5 Don Larsen vs. White Sox (1954), Joe Coleman vs. Yankees (1954), and Jim Wilson vs. 
Cleveland (1955) Most wins by opponent: 6, Andy Pettitte, Yankees (2003) and Bud Daley, Kansas City (1959) Most losses by opponent: 5, Ned Garver, Kansas City (1957), Dick Stigman, Minnesota (1963), Stan Williams, Cleveland (1969), and Catfish Hunter, Yankees (1976) Rivalries The Orioles have a regional rivalry with the nearby Washington Nationals nicknamed the Beltway Series or Battle of the Beltways. Baltimore currently leads the series with a 55–39 record over the Nationals.
6
BBC Radio 1 is a British national radio station owned and operated by the BBC. It specialises in modern popular music and current chart hits throughout the day. The station provides alternative genres at night, including electronica, dance, hip hop and indie, while its sister station 1Xtra plays black contemporary music, including hip hop and R&B. Radio 1 also runs two online streams, Radio 1 Dance, dedicated to dance music, and Radio 1 Relax, dedicated to chill-out music; both are available to listen to only on BBC Sounds. Radio 1 broadcasts throughout the UK on FM, digital radio, digital TV and BBC Sounds. It was launched in 1967 to meet the demand for music generated by pirate radio stations, when the average age of the UK population was 27. The BBC claims that it targets the 15–29 age group, and the average age of its UK audience since 2009 is 30. BBC Radio 1 started 24-hour broadcasting on 1 May 1991. According to RAJAR, the station broadcasts to a weekly audience of 7.7 million with a listening share of 4.8% as of September 2023. History First broadcast Radio 1 was established in 1967 (along with the more middle-of-the-road BBC Radio 2) as a successor to the BBC Light Programme, which had broadcast popular music and other entertainment since 1945. Radio 1 was conceived as a direct response to the popularity of offshore pirate radio stations such as Radio Caroline and Radio London, which had been outlawed by Act of Parliament. The new service was initially promoted in the summer of 1967 by trails (voiced by Kenny Everett) which referred to it as "Radio 247", the station's temporary working title. Radio 1 was launched at 7:00am on Saturday 30 September 1967. Broadcasts were on AM (247 metres), using a network of transmitters which had carried the Light Programme. Most were of comparatively low power, at less than 50 kilowatts, leading to patchy coverage of the country. The first disc jockey to broadcast on the new station was Tony Blackburn, who had previously been on Radio Caroline and Radio London, and presented what became known as the Radio 1 Breakfast Show. The first words on Radio 1 – after a countdown by the Controller of Radios 1 and 2, Robin Scott, and a jingle, recorded at PAMS in Dallas, Texas, beginning "The voice of Radio 1" – were Blackburn's greeting welcoming listeners to the new station. This was the first use of US-style jingles on BBC radio, but the style was familiar to listeners who were acquainted with Blackburn and other DJs from their days on pirate radio. The reason jingles from PAMS were used was that the Musicians' Union would not agree to a single fee for the singers and musicians if the jingles were made "in-house" by the BBC; they wanted repeat fees each time one was played. The first music to be heard on the station was an extract from "Beefeaters" by Johnny Dankworth. "Theme One", specially composed for the launch by George Martin, was played for the first time before Radio 1 officially launched at 7 am. The first complete record played on Radio 1 was "Flowers in the Rain" by The Move, the number 2 record in that week's Top 20 (the number 1 record, "The Last Waltz" by Engelbert Humperdinck, would have been inappropriate for the station's sound). The second single was "Massachusetts" by the Bee Gees. The breakfast show remains the most prized slot in the Radio 1 schedule, with every change of breakfast show presenter generating considerable media interest.
The initial rota of staff included John Peel, Pete Myers, and a number of others, some transferred from pirate stations, such as Keith Skues, Ed Stewart, Mike Raven, David Ryder, Jim Fisher, Jimmy Young, Dave Cash, Kenny Everett, Simon Dee, Terry Wogan, Duncan Johnson, Doug Crawford, Tommy Vance, Chris Denning, and Emperor Rosko. Many of the most popular pirate radio voices, such as Simon Dee, had only a one-hour slot per week ("Midday Spin"). 1970s Initially, the station was unpopular with some of its target audience who, it is claimed, disliked the fact that much of its airtime was shared with Radio 2 and that it was less unequivocally aimed at a young audience than the offshore stations, with some DJs such as Jimmy Young being in their 40s. The very fact that it was part of an "establishment" institution such as the BBC was a turn-off for some, and needle time restrictions prevented it from playing as many records as offshore stations had. It also had limited finances and often, as in January 1975, suffered disproportionately when the BBC had to make financial cutbacks, strengthening an impression that it was regarded as a lower priority by senior BBC executives. Despite this, it gained massive audiences, becoming the most listened-to station in the world, with audiences of over ten million claimed for some of its shows (up to twenty million for some of the combined Radio 1 and Radio 2 shows). In the early-to-mid-1970s Radio 1 presenters were rarely out of the British tabloids, thanks to the Publicity Department's high-profile work. The touring summer live broadcasts called the Radio 1 Roadshow – usually as part of the BBC 'Radio Weeks' promotions that took Radio 1, 2 and 4 shows on the road – drew some of the largest crowds of the decade. The station undoubtedly played a role in maintaining the high sales of 45 rpm single records, although it benefited from a lack of competition, apart from Radio Luxembourg, and from Manx Radio in the Isle of Man. (Independent Local Radio did not begin until October 1973, took many years to cover virtually all of the UK and was initially a mixture of music and talk). Alan Freeman's "Saturday Rock Show" was voted "Best Radio Show" five years running by readers of a national music publication, and was then axed by controller Derek Chinnery. News coverage on the station was boosted in 1973 when Newsbeat bulletins aired for the first time, and Richard Skinner joined the station as one of the new programme's presenters. On air, 1978 was the busiest year of the decade. David Jensen replaced Dave Lee Travis as host of the weekday drivetime programme so that DLT could replace Noel Edmonds as presenter of the Radio 1 Breakfast show. Later in the year the Sunday teatime chart show was extended from a Top 20 countdown to a Top 40 countdown, and Tommy Vance, one of the station's original presenters, rejoined the station to present a new programme, The Friday Rock Show. On 23 November, Radio 1 moved from 247m (1214 kHz) to 275 & 285m (1053 & 1089 kHz) medium wave as part of a plan to improve national AM reception, and to conform with the Geneva Frequency Plan of 1975. Annie Nightingale, whose first Radio 1 programme aired on 5 October 1969, was Britain's first national female DJ (the earliest record presenter is thought to be Jean Metcalfe of Family Favourites, but given that Metcalfe only presented the programme she is not considered a "true" DJ) and is now the longest-serving presenter, having constantly evolved her musical tastes with the times.
In 1978, Al Matthews became the first black disc jockey to join Radio 1. His Saturday night show Discovatin was broadcast for over two years. During the summer months a Wednesday show was also broadcast featuring live acts. 1980s At the start of 1981, Mike Read took over The Radio 1 Breakfast Show from Dave Lee Travis. Towards the end of the year, Steve Wright started the long-running Steve Wright in the Afternoon show. In 1982, the new Radio 1's Weekend Breakfast Show started, initially with Tony Blackburn supported by Maggie Philbin and Keith Chegwin. Adrian John and Pat Sharp also joined for the early weekend shows. Gary Davies and Janice Long also joined, hosting Saturday night late and evening shows respectively. In 1984, Robbie Vincent joined to host a Sunday evening soul show. Mike Smith left for a while to present BBC1's Breakfast Time; Gary Davies then took over the weekday lunchtime slot. Bruno Brookes joined and replaced Peter Powell as presenter of the teatime show, with Powell replacing Blackburn on a new weekend breakfast show. In 1985, Radio 1 relocated from its studios in Broadcasting House to Egton House. In March 1985, Ranking Miss P became the first black female DJ on the station, hosting a reggae programme. In July, Andy Kershaw also joined the station. Simon Mayo joined the station in 1986, while Smith re-joined to replace Read on the breakfast show. In response to the growth in dance and rap music, Jeff Young joined in October 1987 with the Big Beat show. At the end of the year Nicky Campbell, Mark Goodier and Liz Kershaw all joined, and Janice Long left. Mayo replaced Smith on the breakfast show in May 1988. In September, Goodier and Kershaw took over weekend breakfasts with Powell departing. Campbell took over weekday evenings as part of a move into night-time broadcasting as 1 October 1988 saw Radio 1 extend broadcast hours until 02:00; previously the station had closed for the night at midnight. From September 1988, Radio 1 began its FM switch-on, with further major transmitter switch-ons in 1989 and 1990. It was not until the mid-1990s that all existing BBC radio transmitters had Radio 1 added. Previously, Radio 1 had "borrowed" Radio 2's VHF/FM frequencies for around 25 hours each week. 1990s On 1 May 1991, Radio 1 began 24-hour broadcasting, although only on FM, as the station's MW transmitters were switched off between midnight and 06:00. In 1992, Radio 1, for the first and only time, covered a general election. Their coverage was presented by Nicky Campbell. In his last few months as controller, Johnny Beerling commissioned a handful of new shows that in some ways set the tone for what was to come under Matthew Bannister. One of these "Loud'n'proud" was the UK's first national radio series aimed at a gay audience, which was produced in Manchester and aired from August 1993. Far from being a "parting quirk", the show was a surprise hit and led to the network's first coverage of the large outdoor Gay Pride event in 1994. The Man Ezeke became Radio 1's first black regular daytime presenter when he began hosting on Sunday lunchtimes in January 1993. Bannister took the reins fully in October 1993. His aim was to rid the station of its "Smashie and Nicey" image in order to appeal to the under-25s. Although originally launched as a youth station, by the early 1990s, its loyal listeners and DJs had aged with the station over its 25-year history. 
Many long-standing DJs, such as Simon Bates, Dave Lee Travis, Alan Freeman, Bob Harris, Paul Gambaccini, Gary Davies, and later Steve Wright, Bruno Brookes and Johnnie Walker left the station or were dismissed, and in January 1995, older music (typically anything recorded before 1990) was dropped from the daytime playlist. Many listeners rebelled as the first new DJs to be introduced represented a crossover from other parts of the BBC (notably Bannister and Trevor Dann's former colleagues at the BBC's London station, GLR), such as Emma Freud and Danny Baker. Another problem was that, at the time, Radio 2 was sticking resolutely to a format which appealed mainly to those who had been listening since the days of the Light Programme, and commercial radio, which was targeting the "Radio 1 and a half" audience, consequently enjoyed a massive increase in its audience share at Radio 1's expense. After the departure of Steve Wright, who had been unsuccessfully moved from his long-running afternoon show to the breakfast show in January 1994, Bannister hired Chris Evans to present the breakfast show in April 1995. Evans was a popular presenter but was dismissed in 1997 after he demanded to present the breakfast show for only four days per week. Evans was replaced from 17 February 1997 by Mark and Lard – Mark Radcliffe and his sidekick Marc Riley – who found the slick, mass-audience style required for a breakfast show did not come naturally to them. They were replaced by Zoe Ball and Kevin Greening eight months later in October 1997; Greening soon moved on, leaving Ball as sole presenter. The reinvention of the station happened at a fortuitous time, with the rise of Britpop in the mid-1990s – bands like Oasis, Blur and Pulp were popular and credible at the time, and the station's popularity rose with them. Documentaries like John Peel's Lost in Music, which looked at the influence that drug use has had on popular musicians, received critical acclaim but were slated inside Broadcasting House. At just before 09:00 on 1 July 1994, Radio 1 broadcast on medium wave for the final time. In March 1995, Radio 1 hosted an "Interactive Radio Night" with Jo Whiley and Steve Lamacq broadcasting from Cyberia, an internet café, featuring live performances by Orbital via ISDN. Later in the 1990s the Britpop boom declined, and manufactured chart pop (boy bands and acts aimed at sub-teenagers) came to dominate the charts. New-genre music occupied the evenings (indie on weekdays and dance at weekends), with a mix of specialist shows and playlist fillers through late nights. The rise of rave culture through the late 1980s and early 1990s gave the station the opportunity to move into a controversial and youth-orientated movement by bringing in club DJ Pete Tong amongst others. There had been a dance music programme on Radio 1 since 1987 and Pete Tong was the second DJ to present an all-dance-music show. This quickly gave birth to the Essential Mix where underground DJs mix electronic and club based music in a two-hour slot. Dance and urban music have been a permanent feature on Radio 1 since, with club DJs such as Judge Jules, Danny Rampling, Trevor Nelson, and the Dreem Teem all moving from London's Kiss 100 to the station.
Notably, the station has received praise for shows such as The Surgery, Bobby Friction and Nihal's show, The Evening Session and its successor, Zane Lowe's show. Its website has also been well received. However, the breakfast show and the UK Top 40 continued to struggle. In 2000, Zoe Ball was replaced in the mornings by close friend and fellow ladette Sara Cox, but, despite heavy promotion, listening figures for the breakfast show continued to fall. In 2004 Cox was replaced by Chris Moyles. The newly rebranded breakfast show was known as The Chris Moyles Show and it increased its audience, moving ahead of the Today programme on Radio 4 to become the second most popular breakfast show (after The Chris Evans Breakfast Show on Radio 2). Moyles continued to use inappropriate stunts to try to tempt listeners away from the Wake Up to Wogan show, for example creating a "SAY NO TO WOGAN" campaign live on air in 2006. This angered the BBC hierarchy, though the row simmered down when it was clear that the 'campaign' had totally failed to alter the listening trends of the time – Wogan still increased figures at a faster rate than Moyles. The chart show's ratings fell amid falling single sales in the UK, and had already declined in 2002, while long-time host Mark Goodier was still presenting, allowing commercial radio's Network Chart to overtake it in the ratings for the first time. When Goodier departed, the BBC denied he was being sacked. Before July 2015, when the chart release day was changed to Friday, the BBC show competed with networked commercial radio's The Big Top 40 Show which was broadcast at the same time. Many DJs either ousted by Bannister or who left during his tenure (such as Johnnie Walker, Bob Harris and Steve Wright) have joined Radio 2, which has now overtaken Radio 1 as the UK's most popular radio station, using a style that Radio 1 had until the early 1990s. The success of Moyles' show has come alongside increased success for the station in general. In 2006, DJs Scott Mills and Zane Lowe won gold Sony Radio Awards, while the station itself came away with the best station award. A new evening schedule was introduced in September 2006, dividing the week by genre. Monday was mainly pop-, funk- and rock-oriented, Tuesday was R&B and hip-hop, Thursdays and Fridays were primarily dance, with specialist R&B and reggae shows. Following the death of John Peel in October 2004, Annie Nightingale became the longest-serving presenter, having worked there since 1970. 2010s The licence-fee funding of Radio 1, alongside Radio 2, is often criticised by the commercial sector. In the first quarter of 2011 Radio 1 was part of an efficiency review conducted by John Myers. His role, according to Andrew Harrison, the chief executive of RadioCentre, was "to identify both areas of best practice and possible savings." The controller of Radio 1 and sister station 1Xtra changed to Ben Cooper on 28 October 2011, following the departure of Andy Parfitt. Ben Cooper answered to the Director of BBC Audio and Music, Tim Davie. On 7 December 2011, Ben Cooper's first major changes to the station were announced. Skream & Benga, Toddla T, Charlie Sloth and Friction replaced Judge Jules, Gilles Peterson, Kissy Sell Out and Fabio & Grooverider. A number of shows were shuffled to incorporate the new line-up. On 28 February 2012, further changes were announced. Greg James and Scott Mills swapped shows and Jameela Jamil, Gemma Cairney and Danny Howard joined the station.
The new line-up of DJs for In New DJs We Trust was also announced with B.Traits, Mosca, Jordan Suckley and Julio Bashmore hosting shows on a four weekly rotation. This new schedule took effect on Monday, 2 April 2012. In September 2012, Nick Grimshaw replaced Chris Moyles as host of "Radio 1's Breakfast Show". Grimshaw previously hosted Mon-Thurs 10pm-Midnight, Weekend Breakfast and Sunday evenings alongside Annie Mac. Grimshaw was replaced by Phil Taggart and Alice Levine on the 10pm-Midnight show. In November 2012, another series of changes was announced. This included the departure of Reggie Yates and Vernon Kay. Jameela Jamil was announced as the new presenter of The Official Chart. Matt Edmondson moved to weekend mornings with Tom Deacon briefly replacing him on Wednesday nights. Dan Howell and Phil Lester, famous YouTubers and video bloggers, joined the station. The changes took effect in January 2013. Former presenter Sara Cox hosted her last show on Radio 1 in February 2014 before moving back to Radio 2. In March 2014, Gemma Cairney left the weekend breakfast show to host the weekday early breakfast slot, swapping shows with Dev. In September 2014, Radio 1 made a series of changes to its output which saw many notable presenters leave the station – including Edith Bowman, Nihal and Rob da Bank. Huw Stephens gained a new show hosting 10pm–1am Monday–Wednesday, with Alice Levine presenting weekends 1pm–4pm. Radio 1's Residency also expanded with Skream joining the rotational line-up on Thursday nights (10pm–1am). From December 2014 to April 2016, Radio 1 included a weekly late-night show, The Internet Takeover, presented each week by a well-known Internet personality. Shows were presented by various YouTubers such as Jim Chapman and Hannah Witton. In January 2015, Clara Amfo replaced Jameela Jamil as host of The Official Chart on Sundays (4pm–7pm) and in March, Zane Lowe left Radio 1 and was replaced by Annie Mac on the new music evening show. In May 2015, Fearne Cotton left the station after 10 years of broadcasting. Her weekday mid-morning show was taken over by Clara Amfo. Adele Roberts also joined the weekday schedule line-up, hosting the Early Breakfast show. In July 2015, the Official Chart moved to a Friday from 4pm to 5:45pm, hosted by Greg James. The move took place to take into account the changes to the release dates of music globally. Cel Spellman joined the station to host Sunday evenings. In September 2017, a new weekend slot, Radio 1's Greatest Hits, was introduced for 10am–1pm. The show started on 2 September 2017. On 30 September 2017, Radio 1 celebrated its 50th birthday. Commemorations included a three-day pop-up station, 'Radio 1 Vintage', celebrating the station's presenters, and special on-air programmes on the day itself, including a special breakfast show co-presented by the station's launch DJ Tony Blackburn, which was also broadcast on BBC Radio 2. In October 2017, another major schedule change was announced. Friction left the station, and Charlie Sloth gained a new slot called 'The 8th', which aired Mon-Thu 9–11pm. MistaJam took over Dance Anthems from Danny Howard, who would instead host a new show on Fridays 11pm–1am, while Huw Stephens's show was pushed back to 11pm–1am. Katie Thistleton joined Cel Spellman on Sunday evenings to present 'Life Hacks' (4–6pm), which features content from the Radio 1 Surgery, and Most Played (6–7pm).
Kan D Man and DJ Limelight joined the station to host a weekly Asian Beats show on Sundays between 1–3am, and Rene LaVice joined the station with the Drum & Bass show on Tuesdays 1–3am. Phil Taggart presented the Hype Chart on Tuesdays 3–4am. In February 2018, the first major schedule change of the year was made to the weekend line-up. Maya Jama and Jordan North joined BBC Radio 1 to present Radio 1's Greatest Hits on Saturday and Sunday respectively. Alice Levine moved to the breakfast slot to join Dev, while Matt Edmondson took over Levine's original afternoon slot, joined by a different guest co-presenter each week. The changes took effect on 24 February 2018. In April 2018, another major schedule change was made when the weekend schedule was extended to include Fridays. This meant that Nick Grimshaw, Clara Amfo and Greg James would host four days a week. Scott Mills became the new host of The Official Chart and Dance Anthems, replacing Greg James, and Maya Jama would present Radio 1's Greatest Hits from 10am to 1pm. Mollie King officially joined Matt Edmondson on the 1–4pm slot, billed as 'Matt and Mollie'. The changes took effect on 15 June 2018. In May 2018, it was announced that Nick Grimshaw would leave the Breakfast Show after six years, the second-longest run hosting the show in its history (behind only Chris Moyles). However, Grimshaw did not leave the station, but swapped slots with Greg James, who had been hosting the weekday home-time show from 4–7pm. This change took place from 20 August 2018 for the Radio 1 Breakfast Show (which was then renamed Radio 1 Breakfast); Grimshaw's new show started on 3 September 2018. In June 2018, another series of schedule changes was announced. This saw the BBC Introducing Show with Huw Stephens move to Sundays 11pm–1am. Jack Saunders joined the station to present the Radio 1 Indie Show from Monday to Thursday, 11pm–1am. Other changes included a rearrangement of Sunday evenings: Phil Taggart's Chillest Show moved to 7–9pm, followed by The Rock Show with Daniel P Carter at 9–11pm. The changes took effect in September 2018. In October 2018, Charlie Sloth announced that he was leaving Radio 1 and 1Xtra after serving the station for nearly 10 years. He was hosting The 8th and The Rap Show at that point, and his last show was expected to be on 3 November 2018. However, Sloth had been in the spotlight for storming the stage and delivering a sweary, Kanye West-esque rant directed at Edith Bowman at the Audio & Radio Industry Awards (ARIAS) on Thursday 18 October 2018. Sloth had been nominated for best specialist music show at the ARIAS, a category he lost to Soundtracking with Edith Bowman, prompting him to appear on stage during her acceptance speech. He apologised on Twitter, and Radio 1 agreed with Sloth that he would not do the 10 remaining shows that had originally been planned, meaning that his last show was on 18 October 2018. From 20 October 2018 onwards, Seani B filled his Rap Show slot from 9pm–11pm, and Dev covered The 8th from 22 October 2018. In the same month, B.Traits announced that she was leaving BBC Radio 1 after six years. She said she felt she could no longer devote the necessary time needed to make the show the best it could be, and was moving on to focus on new projects and adventures. Her last show was on 26 October 2018.
Radio 1's Essential Mix was then shifted earlier, to 1am–3am, followed by Radio 1's Wind-Down from 3 am to 6 am. The changes took effect from 2 November 2018 onwards. At the end of October 2018, Dev's takeover of The 8th resulted in Matt Edmondson and Mollie King swapping shows with Dev and Alice Levine. This meant that Matt and Mollie became the new Weekend Breakfast hosts, and Dev and Alice became the afternoon show hosts. The changes came into effect on 16 November 2018. On 15 November 2018, Radio 1 announced that Tiffany Calver, who had previously hosted a dedicated hip-hop show on the new-music station KissFresh, would join the station and host the Rap Show. The change took effect from 5 January 2019. On 26 November 2018, Radio 1 announced that the new hosts for the evening slot previously hosted by Charlie Sloth would be Rickie Haywood-Williams, Melvin Odoom, and Charlie Hedges. The trio previously presented on Kiss's breakfast show. The change took effect in April 2019. In July 2019 it was announced that there would be two new weekend shows, the Weekend Early Breakfast Show and Best New Pop, both of which started on 6 September 2019. The Weekend Early Breakfast Show, hosted by Arielle Free, is broadcast between 04:00–06:00 on Friday and Saturday and between 05:00–07:00 on Sunday. Best New Pop, hosted by Mollie King, is broadcast between 06:00–06:30 on Friday mornings. This in turn changed the timing of the Weekend Breakfast Show hosted by Mollie King and Matt Edmondson, which is now broadcast 06:30–10:00 on Friday and 07:00–10:00 on Saturday and Sunday. 2020s Due to the COVID-19 pandemic, temporary schedule changes were made. In March 2020, Radio 1 Breakfast began later, running from 7 am to 11 am. Scott Mills presented his show from 1 pm to 3 pm, with Nick Grimshaw following until 6 pm. BBC Radio 1 Dance Anthems started from 3 pm with two hours of Classic Anthems and ended at 7 pm. In July 2020, Alice Levine and Cel Spellman announced their resignation from BBC Radio 1. In September, Vick Hope was announced as joining Katie Thistleton, replacing Spellman. In September 2020, a new schedule was announced. The Radio 1 Breakfast Show was extended by 30 minutes, until 10:30 am, while Scott Mills' show was shortened by 30 minutes, ending at 3:30 pm rather than 4 pm. Toddla T was also announced to be leaving after 11 years. Annie Mac's evening show moved from 7 pm to 6 pm, with Rickie, Melvin and Charlie from 8 pm. Jack Saunders would host a new show called Radio 1's Future Artists with Jack Saunders from Monday to Wednesday. A new Friday schedule was also announced: Radio 1 Party Anthems moved from 6 pm to 3 pm, hosted by Dev, and the Annie Mac, Danny Howard, Pete Tong and Essential Mix shows all moved one hour earlier. Dance Anthems on Saturday was confirmed as returning to its original 4 pm time slot. On 26 September 2020, MistaJam left BBC Radio 1 and BBC Radio 1Xtra after 15 years. It was announced that Charlie Hedges would take over Dance Anthems from 3 October 2020. BBC Radio 1 Dance launched on Friday 9 October. The station is broadcast exclusively on BBC Sounds. In November 2020 it was confirmed that Dev Griffin, Huw Stephens, and Phil Taggart would all be leaving the station at the end of the year.
From January 2021, Radio 1 Breakfast was to return to five days per week, while Arielle Free would host Early Breakfast (Mon-Thu 0500–0700) and three new presenters were to take turns hosting the early breakfast slot on Fridays. Adele Roberts left Early Breakfast after five years, moving to Weekend Breakfast (Sat-Sun 0700–1030). Matt Edmondson and Mollie King returned to Weekend Afternoons (Fri-Sun 1300–1600). On Sunday evenings, Sian Eleri replaced Phil Taggart as host of the Chillest Show and Gemma Bradley replaced Huw Stephens on BBC Introducing. On 9 April 2021, BBC Radio 1 and other BBC radio stations were cut at 12:10pm for the national anthem following the death of Prince Philip, Duke of Edinburgh, and the stations then carried the BBC Radio News special programme until 4pm. Radio 1 then played music without vocals, and on 10 and 11 April 2021 played downtempo and chilled music. The Official Chart was not aired, only the second time that had happened, the first being after the death of Princess Diana. On 20 April 2021, Annie Mac tweeted that she would leave BBC Radio 1 after 17 years. It was also announced that Diplo would be leaving after 10 years. On weeknights, Clara Amfo replaced Annie on Radio 1's Future Sounds (Mon-Thu 1800–2000). On Fridays, Danny Howard replaced Annie at 6 pm, with Sarah Story, a former Capital FM presenter, hosting from 8 pm. Rickie, Melvin and Charlie were announced as new hosts of the Live Lounge slot, replacing Clara Amfo. Jack Saunders also moved to an earlier time slot (Mon-Thu 2000–2200), replacing Rickie, Melvin and Charlie. Sian Eleri gained three new shows per week, hosting Radio 1's Power Down Playlist from 10pm–11pm Mon-Wed; BBC Introducing Dance with Jaguar airs in this time slot on Thursday evenings. On 21 April 2021, Radio 1 Relax launched on BBC Sounds, playing relaxing music and sounds such as wind and rain. After 14 years on BBC Radio 1, Nick Grimshaw announced he would be leaving the station, with Vick Hope and Jordan North taking over the time slot. Grimshaw broadcast his final show on 12 August 2021. Vick and Jordan's new show first aired on 6 September 2021. Vick continued to co-host Life Hacks alongside Katie Thistleton, while Dean McCullough joined BBC Radio 1 to host Friday-Sunday 1030–1300. In September 2022, DJ Target and René LaVice left the station, prompting a number of schedule changes. Radio 1's Soundsystem with Jeremiah Asiamah moved from its Saturday 2300 to Sunday 0100 slot to Saturdays 1900–2100, replacing DJ Target. Radio 1's Drum & Bass Show took over the Saturday 2300 to Sunday 0100 slot, now presented by Charlie Tee. Radio 1's Indie Show with Jack Saunders moved from Thursdays 2000–2200 to Sundays 2100–2300, while Future Artists continued to be broadcast Mon-Wed 2000–2200. Radio 1's Rock Show with Daniel P Carter moved from Sundays 2100–2300 to Mondays 2300 to Tuesdays 0100, followed by a new 'Future Rock' show with Alyx Holcombe on Tuesdays 0100–0200, and Future Alternative with Nels Hylton at 0200–0300, moving from Thursdays 0300–0400. A new programme, Future Pop with Mollie King, was introduced on Thursdays 2000–2200, with King continuing to host weekends 1300–1600 alongside co-host Matt Edmondson. On 25 August 2022, Scott Mills and co-host Chris Stark aired their final show. Their time slots were given to Dean McCullough and Vicky Hawksworth, while the role of hosting the Official Chart was given to Jack Saunders.
On 8 September 2022, Radio 1 and the other radio stations were cut at 6:32pm to report the death of Queen Elizabeth II and carried a BBC Radio News special. Radio 1 resumed broadcasts at 7am on 9 September 2022, playing downtempo music throughout the day and over the weekend. The Official Chart did not air on the Friday, the third time this had happened and the second in under two years, following the death of the Duke of Edinburgh. Radio 1 returned to normal programming on 11 September 2022. Broadcast Studios From its inception, and for over 20 years, Radio 1 broadcast from an adjacent pair of continuity suites (originally Con A and Con B) in the main control room of Broadcasting House. These cons were configured to allow DJs to operate the equipment themselves and play their own records and jingle cartridges (called self-op). This was a departure from traditional BBC practice, where a studio manager would play in discs from the studio control cubicle. Due to needle time restrictions, much of the music was played from tapes of BBC session recordings. The DJs were assisted by one or more technical operators (TOs) who would set up tapes and control sound levels during broadcasts. In 1985, Radio 1 moved across the road from Broadcasting House to Egton House. The station moved to Yalding House in 1996, and Egton House was demolished in 2003 to make way for an extension to Broadcasting House. This extension would eventually be renamed the Egton Wing, and then the Peel Wing. Until recently, the studios were located in the basement of Yalding House (near to BBC Broadcasting House) on Great Portland Street in central London. The station broadcast from two main studios in the basement, Y2 and Y3 (there was also a smaller studio, YP1, used mainly for production). These two main studios were separated by the "Live Lounge", although it was mainly used as an office; live sets were rarely recorded from it, as Maida Vale Studios was used instead for larger set-ups. The studios were linked by webcams and windows through the "Live Lounge", allowing DJs to see each other when changing between shows. Y2 was the studio from which The Chris Moyles Show was broadcast and was also the studio rigged with static cameras for when the station broadcast on the "Live Cam". In December 2012, Radio 1 moved from Yalding House to new studios on the 8th floor of the new BBC Broadcasting House, Portland Place, just a few metres away from the "Peel Wing", formerly the "Egton Wing", which occupies the land on which Egton House previously stood: it was renamed the "Peel Wing" in 2012 in honour of the long-serving BBC Radio 1 presenter, John Peel, who broadcast on the station from its launch in 1967 until his death in 2004. Programmes have also regularly been broadcast from other regions, notably The Mark and Lard Show, broadcast every weekday from New Broadcasting House, Oxford Road, Manchester for over a decade (October 1993–March 2004) – the longest regular broadcast on the network from outside the capital. In August 2022, studio 82A (from which Radio 1 broadcasts) was renamed 82Mills, following the departure of long-running DJ Scott Mills. UK analogue frequencies Radio 1 originally broadcast on 1214 kHz AM (247 metres). On 23 November 1978, the station was moved to 1053 and 1089 kHz (275 and 285 m). The BBC had been allocated three FM frequency ranges in 1955, for the then Light Programme (now BBC Radio 2), Third Programme (now BBC Radio 3) and Home Service (now BBC Radio 4) stations.
Thus, when Radio 1 was launched, there was no FM frequency range allocated for the station. The official reason was that there was no space, even though no commercial stations had yet been launched on FM. To solve this issue, from launch until the end of the 1980s Radio 1 was allocated Radio 2's FM transmitters for a few hours per week. These were Saturday afternoons, Sunday teatime and evening – most notably for the Top 40 Singles Chart on Sunday afternoons and up until midnight; 10pm to midnight on weeknights including Sounds of the Seventies until 1975, and thereafter the John Peel show (Mon–Thurs), The Friday Rock Show with Tommy Vance and most Bank Holiday afternoons when Radio 2 was broadcasting a Bank Holiday edition of Sport on 2. Full-time FM broadcasting Due to the rising competition from commercial FM stations, the BBC began to draw up plans for Radio 1 to broadcast on FM full time. This process began in London on 31 October 1987, at low power on a temporary frequency of 104.8 MHz. The Home Office in the UK began to free up FM police communication bandwidths, which at the time were operating from 97.9 MHz to 102.0 MHz, in preparation for new FM radio stations planned for the future, which included BBC Radio 1. The BBC acquired 97.9 FM to 99.8 FM specifically for Radio 1. The rollout of Radio 1 on FM nationally began on 1 September 1988, starting with Central Scotland (98.6 MHz), the Midlands (98.4 MHz) and the north of England (98.8 MHz). On 24 November 1988, Belfast was added to the network on another temporary frequency of 96.0 MHz. Due to the expansion of Radio 1's FM broadcast hours, Radio 1 scaled back its airtime on Radio 2's FM frequencies, ending its slots on weeknights (10pm–midnight), Saturday afternoons (1pm–7pm) and Sunday evenings (7pm–midnight). The only programme continuing to broadcast on Radio 2's FM frequency was the UK Top 40, which broadcast between 5pm and 7pm on Sunday; when the Top 40 show finished, the FM transmitters were handed back to Radio 2 at 7pm. By September 1990, with further expansion of Radio 1 FM's frequencies, after 23 years all usage of Radio 2's FM frequencies came to an end, resulting in BBC Radio 2 transmitting on FM full-time. This was due to the then-new BBC Radio 5 (later Radio 5 Live) broadcasting on Radio 2's former AM frequencies of 693 and 909 kHz. Radio 1 made great efforts to promote its new FM service, renaming itself on-air initially as 'Radio 1 FM' and later as '1FM' until 1995. After reorganisation and reallocation of the FM frequencies, especially in London (from 104.8 to 98.8 MHz), the Midlands (98.4 to 97.9 MHz) and Belfast (96.0 to 99.7 MHz), the engineering programme was completed in 1995. End of medium wave broadcasting - 1053 / 1089 kHz The Conservative government decided to increase competition on AM and disallowed the simulcasting of services on both AM and FM, affecting both BBC and Independent Local Radio. Radio 1's medium wave frequencies were reallocated to Independent National Radio. Radio 1's last broadcast on MW was on 1 July 1994, with Stephen Duffy's "Kiss Me" being the last record played on MW just before 9am. For those who continued to listen, just after 9am, Radio 1 jingles were played in reverse chronological order ending with its first jingle from 30 September 1967. In the initial months after this closure, a pre-recorded message by Mark Goodier was played to advise listeners that Radio 1 was now an "FM-only" station and to retune to the FM frequency.
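The AM allocations above are quoted interchangeably as wavelengths (247 m, 275 m and 285 m) and frequencies (1214, 1053 and 1089 kHz); the two are related by wavelength = speed of light / frequency. A minimal sketch of that conversion, written in plain Python purely for illustration (it is not part of any broadcast system), is:

```python
# Convert the medium-wave frequencies mentioned above into approximate
# wavelengths, showing why 1214 kHz was announced as "247 metres", and so on.
SPEED_OF_LIGHT_M_S = 299_792_458

def wavelength_m(freq_khz: float) -> float:
    """Return the free-space wavelength in metres for a frequency in kHz."""
    return SPEED_OF_LIGHT_M_S / (freq_khz * 1_000)

for khz in (1214, 1089, 1053):
    print(f"{khz} kHz ≈ {wavelength_m(khz):.0f} m")
# Expected output:
# 1214 kHz ≈ 247 m
# 1089 kHz ≈ 275 m
# 1053 kHz ≈ 285 m
```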
Around this time, Radio 1 began broadcasting on spare audio subcarriers of Sky Television's analogue service on the SES Astra satellite; initially in mono (on the UK Gold transponder) and later in stereo (on UK Living). The 1053 and 1089 kHz frequencies were allocated to the then newly created Talk Radio UK. Digital distribution The BBC launched its national radio stations on DAB digital radio in 1995; however, the technology was expensive at the time and so was not marketed, instead being used as a test for future technologies. DAB was "officially" launched in 2002 as sets became cheaper. Today it can also be heard on UK digital TV services Freeview, Virgin Media, Sky and the Internet as well as FM. In July 2005, Sirius Satellite Radio began simulcasting Radio 1 across the United States as channel 11 on its own service and channel 6011 on Dish Network satellite TV. Sirius Canada began simulcasting Radio 1 when it was launched on 1 December 2005 (also on channel 11). The Sirius simulcasts were time shifted five hours to allow US and Canadian listeners in the Eastern Time Zone to hear Radio 1 at the same time of day as UK listeners. On 12 November 2008, Radio 1 made its debut on XM Satellite Radio in both the US and Canada on channel 29, moving to XM 15 and Sirius 15 on 4 May 2011. Until the full station was removed in August 2011, Radio 1 could be heard by approximately 20.6 million listeners in North America on satellite radio alone. BBC Radio 1 can be heard on cable in the Netherlands at 105.10 FM. SiriusXM cancellation in North America At midnight on 9 August 2011, Sirius XM ceased carrying BBC Radio 1 programming with no prior warning. On 10 August 2011 the BBC issued the following statement: The BBC's commercial arm BBC Worldwide has been in partnership with SIRIUS Satellite Radio to broadcast Radio 1 on their main network, since 2005. This agreement has now unfortunately come to an end and BBC Worldwide are in current discussions with the satellite radio station to find ways to continue to bring popular music channel, BBC Radio 1, to the US audience. We will keep you posted. Thousands of angry Sirius XM customers began a campaign on Facebook and other social media to reinstate BBC Radio 1 on Sirius XM Radio. One week later, Sirius and the BBC agreed on a new carriage agreement that saw Radio 1 broadcast on a time-shifted format on the Sirius XM Internet Radio platform only, on channel 815. Starting on 15 January 2012, The Official Chart Show began broadcasting on SiriusXM 20on20 channel 3, at 4pm and 9pm Eastern Standard Time. Regionalisation From 1999 until 2012, Radio 1 split its output in the home nations for localised programming in Scotland, Wales and Northern Ireland, to allow the broadcast of a showcase programme for regional talent. Most recently, these shows were under the BBC Introducing brand. Scotland, Wales and Northern Ireland had their own shows, which were broadcast on a 3-week rotational basis in England. From January 2011 until June 2012, Scotland's show was presented by Ally McCrae. Previously it was hosted by Vic Galloway (who also presents for BBC Radio Scotland), who had presented the show solo since 2004, after his original co-host Gill Mills departed. Wales's show was hosted by Jen Long from January 2011 until May 2012. Previously the slot was occupied by Bethan Elfyn, who had at one time hosted alongside Huw Stephens until Stephens left to join the national network, although he still broadcasts a show for Wales – a Welsh-language music show on BBC Radio Cymru on Thursday evenings.
Phil Taggart presented the Northern Ireland programme between November 2011 and May 2012. The show was formerly presented by Rory McConnell. Before joining the national network, Colin Murray was a presenter on The Session in Northern Ireland, along with Donna Legge; after Murray's promotion to the network Legge hosted alone for a time, and on her departure McConnell took her place. The regional opt-outs originally went out from 8pm to 10pm on Thursdays (the Evening Sessions time slot) and were known as the "Session in the Nations" (the "Session" tag was later dropped due to the demise of the Evening Session); they later moved to run from 7:30pm to 9pm, with the first half-hour of Zane Lowe's programme going out across the whole of the UK. On 18 October 2007 the regional programmes moved to a Wednesday night/Thursday morning slot from midnight to 2am under the BBC Introducing banner, allowing Lowe's Thursday show to be aired across the network; prior to this change Huw Stephens had presented the Wednesday midnight show nationally. In January 2011, BBC Introducing was moved to the new time slot of midnight to 2am on Monday mornings, and the Scottish and Welsh shows were given new presenters in the form of Ally McCrae and Jen Long. The opt-outs were only available to listeners on the FM frequencies. Because of the way the DAB and digital TV services of Radio 1 are broadcast (a single-frequency network on DAB and a single broadcast feed of Radio 1 on TV platforms), the digital version of the station was not regionalised. The BBC Trust announced in May 2012 that the regional music programmes on Radio 1 would be replaced with a single programme offering a UK-wide platform for new music as part of a series of cost-cutting measures across the BBC. In June 2012, the regional shows ended and were replaced by a single BBC Introducing show presented by Jen Long and Ally McCrae. Content Music Because of its youth-orientated nature, Radio 1 plays a broad mix of current and potential future hits, including independent/alternative, hip hop, rock, dance/electronica and pop. This made the station stand out from other top 40 stations, both in the UK and across the world. Owing to its progressive view of modern electronic music, Radio 1 is well liked and well known in the worldwide drum and bass community, frequently hosting producers and DJs like Hybrid Minds or Wilkinson. Due to restrictions on the amount of commercial music that could be played on radio in the UK until 1988 (the "needle time" limitation) the station has recorded many live performances. Studio sessions (recordings of about four tracks made in a single day) also supplemented the live music content, many of them finding their way to commercially available LPs and CDs. The sessions recorded for John Peel's late night programme are particularly renowned. The station has continued to record live music with its Live Lounge feature and the Piano Sessions, which started in November 2014. The station also broadcasts documentaries and interviews. Although this type of programming arose from necessity it has given the station diversity. The needletime restrictions meant the station tended to have a higher level of speech by DJs. While the station is often criticised for "waffling" by presenters, an experimental "more music day" in 1988 was declared a failure after only a third of callers favoured it.
News and current affairs Radio 1 has a public service broadcasting obligation to provide news, which it fulfils through Newsbeat bulletins throughout the day. Shared with 1Xtra and Asian Network, short news summaries are provided roughly hourly on the half-hour between 06:30 and 16:30, with two additional 15-minute bulletins at 12:45 and 17:45, and nine summaries between 07:30 and 15:30 at weekends and on Bank Holidays. Online visualisation and social media In recent years Radio 1 has used social media to help reach a younger audience. Its YouTube channel now has over 7.5 million subscribers. The highest viewed videos on the channel are predominantly live music performances from the Live Lounge. The station also has a heavy presence on social media, with audience interaction occurring through Facebook and Twitter as well as text messaging. It was announced in 2013 that Radio 1 had submitted plans to launch its own dedicated video channel on the BBC iPlayer where videos of live performances as well as some features and shows would be streamed in a central location. Plans were approved by the BBC Trust in November 2014 and the channel launched on 10 November 2014. Special programming Bank Holiday programming Radio 1 provides alternative programming on some Bank Holidays. Programmes have included 'The 10 Hour Takeover', a request-based special in which the DJs on air would encourage listeners to select any available track to play, 'One Hit Wonder Day' and 'The Chart of the Decade', where the 150 biggest-selling singles of the previous 10 years were counted down and played in full. Anniversary programming On Sunday 30 September 2007, Radio 1 celebrated its 40th birthday. To mark this anniversary, Radio 1 hosted a week of special features, including a re-creation of Simon Bates' Golden Hour, and 40 different artists performing 40 different covers, one from each year since Radio 1 was established. On Saturday 30 September 2017, Radio 1 celebrated its 50th birthday. Tony Blackburn recreated the first ever Radio 1 broadcast on Radio 2, simulcast on pop-up station Radio 1 Vintage, followed by The Radio 1 Breakfast Show celebration, tricast on Radio 1, Radio 2 and Radio 1 Vintage, presented by Tony Blackburn and Nick Grimshaw, featuring former presenters Simon Mayo, Sara Cox and Mike Read as guests. Charity Radio 1 regularly supports the BBC's in-house charities Comic Relief, Sport Relief and Children in Need. On 18 March 2011, BBC Radio 1's longest-serving breakfast DJ, Chris Moyles, and sidekick Dave Vitty broadcast for 52 hours as part of a Guinness World Record attempt, in aid of Comic Relief. The pair stayed on air for 52 hours in total, setting a new world record for 'Radio DJ Endurance Marathon (Team)', having already broken Simon Mayo's 12-year record for Radio 1's longest show of 37 hours, which he set in 1999, also for Comic Relief. The presenters started on 16 March 2011 and came off air at 10:30am on 18 March 2011. During the broadcast, Fearne Cotton made a bet with Chris Moyles that if they raised over £2,000,000 she would appear on the show in a swimsuit. After passing the £2,000,000 mark, Cotton appeared on the studio webcam in a stripy monochrome swimsuit. The appearance of Cotton between 10:10am and 10:30am caused the Radio 1 website to crash due to a high volume of traffic. In total the event raised £2,622,421 for Comic Relief. Drama In 1981, Radio 1 broadcast a radio adaptation of the space opera film, Star Wars.
The 13-episode serial was adapted for radio by the author Brian Daley and directed by John Madden, and was a co-production between the BBC and the American broadcaster NPR. In 1994, Radio 1 broadcast a radio adaptation of the Batman comic book storyline Knightfall, as part of the Mark Goodier show, featuring Michael Gough recreating his movie role as Alfred. Later that same year, Radio 1 also broadcast a re-edited version of the Radio 4 Superman radio drama. Events Radio 1 Roadshows The Radio 1 Roadshow, which usually involved Radio 1 DJs and pop stars travelling around popular UK seaside destinations, began in 1973 as a response to the imminent introduction of local commercial radio stations. The first roadshow was hosted by Alan Freeman in Newquay, Cornwall, and the final one was held at Heaton Park, Manchester in 1999. Although the Roadshow attracted large crowds and the style changed with the style of the station itself – such as the introduction of whistlestop audio postcards of each location in 1994 ("2minuteTour") – they were still rooted in the older style of the station, and therefore fit for retirement. BBC Radio 1's Big Weekend In March 2000, Radio 1 decided to change the Roadshow format, renaming it One Big Sunday in the process. Several of these Sundays were held in large city-centre parks. In 2003, the event changed again and was rebranded One Big Weekend, with each event held twice a year and covering two days. Under this name, it visited Derry in Northern Ireland, as part of the Music Lives campaign, and Perry Park in Birmingham. The most recent change occurred in 2005 when the event was yet again renamed and the decision taken to hold only one per year, this time as Radio 1's Big Weekend. Venues under this title have included Herrington Country Park, Camperdown Country Park, Moor Park – which was the first Weekend to feature a third stage – Mote Park, Lydiard Park, Bangor and Carlisle Airport. Tickets for each Big Weekend are given away free of charge, making it the largest free ticketed music festival in Europe. BBC Radio 1's Big Weekend was replaced by a larger festival in 2012, named 'Radio 1's Hackney Weekend', with a crowd capacity of 100,000. The Hackney Weekend took place over the weekend of 23–24 June 2012 in Hackney Marshes, Hackney, London. The event was to celebrate the 2012 Cultural Olympiad in London and had artists such as Rihanna, Jay-Z and Florence and the Machine. In 2013, Radio 1's Big Weekend returned to Derry as part of the City of Culture 2013 celebrations. So far, Derry is the only city to have hosted the Big Weekend twice. In May 2014, Radio 1's Big Weekend was held in Glasgow, Scotland. Acts which played at the event included Rita Ora, The 1975, Katy Perry, Jake Bugg and Pharrell Williams. The event was opened on the Friday with a dance set in George Square, featuring Radio 1 Dance DJs such as Danny Howard and Pete Tong, and other well-known acts such as Martin Garrix and Tiesto. In 2015, the event was held in Norwich and featured performances from Taylor Swift, Muse, David Guetta, Years & Years and others. 2016 saw the event make its way to Exeter. It was headlined by Coldplay, who closed the weekend on the Sunday evening. The event was in Hull in 2017 and saw performances by artists such as Zara Larsson, Shawn Mendes, Stormzy, Katy Perry, Little Mix, Sean Paul, Rita Ora, The Chainsmokers, Clean Bandit and Kings of Leon. To take advantage of Glastonbury Festival's fallow year in 2018, four separate Big Weekends were held simultaneously between 25 and 28 May.
Stylized as "BBC Music's Biggest Weekend", events were held in Swansea (with a line-up curated by Radio 1), Coventry and Perth (both curated by Radio 2) and Belfast (curated by Radio 6 Music). Tickets sold out for the Swansea, Perth and Coventry Big Weekends. In 2020, the Big Weekend at Dundee was cancelled as a result of the COVID-19 pandemic. In May 2020, Radio 1 announced a virtual Big Weekend. It took place from 22 to 24 May and featured performances from artists like Mabel and Anne-Marie. Ibiza Weekend Radio 1 has annually held a dance music weekend broadcast live from Ibiza since the 1990s. The weekend is usually the first weekend in August and has performances from world-famous DJs and Radio 1's own dance music talent such as Pete Tong and Annie Mac. BBC Radio 1's Teen Awards In September 2008, Radio 1 launched an annual music event for teenagers aged 14 to 17 years. Originally named BBC Switch Live, the first event was held on 12 October 2008 at the Hammersmith Apollo. In 2009, the event became an annual awards ceremony and the following year was renamed BBC Radio 1's Teen Awards. The awards honoured inspirational teens alongside the best music, movies, TV and sport stars in a variety of categories. In 2011, it was moved to Wembley Arena and later Studio 1 at Television Centre, London. Highlights of the event has been broadcast across BBC Television. Despite the awards ceremony not taking place since 2019, the main award, "Teen Hero", has continued to be awarded by Radio 1 as Teen Heroes. Presenters The event has been hosted by various Radio 1 DJs and guest co-hosts. Performances Edinburgh Festival Radio 1 often has a presence at the Edinburgh Festival Fringe. Past events have included 'The Fun and Filth Cabaret' and 'Scott Mills: The Musical'. Europe's Biggest Dance Show Europe's Biggest Dance Show is a series of dance music oriented radio specials produced by Radio 1. The first, Europe's Biggest Dance Show 2019, was broadcast on Friday 11 October 2019 where Radio 1 joined with several European radio stations, all members of the European Broadcasting Union, including Swedish SR P3, German 1LIVE and RBB Fritz, Belgian VRT Studio Brussel, Irish RTÉ 2fm, French Radio France Mouv and Dutch NPO 3FM. A second show, Europe's Biggest Dance Show 2020, was broadcast on Friday 8 May 2020. It had the same contributing stations as 2019; however, it had begun at 7 pm BST, rather than 8 pm as the previous year. The third installment of Europe's Biggest Dance Show was broadcast on Friday 23 October 2020. French Mouv' dropped out of the broadcast until further notice while Finnish YleX and Norwegian NRK mP3 joined the show. A fourth show, Europe's Biggest Dance Show 2021, was broadcast on Friday 29 October 2021. It saw the first contribution of Austrian station FM4, while the Dutch NPO 3FM dropped out. The fifth installment, Europe's Biggest Dance Show 2022, was broadcast on Friday 14 October 2022. It saw the first contribution the Ukrainian Radio Promin of UA:PBC and the return of Dutch NPO 3FM to the show. Radio 1's summer stunts Since 2018, BBC Radio 1 has performed format-breaking listener stunts. In 2018, Greg James and Nick Grimshaw announced to play Hide and Seek on the radio to be found 22 hours later at the Royal Liver Building in Liverpool. In 2019 James and Grimshaw hid at the Grand Pier, Weston-super-Mare for almost 26 hours. In the summer of 2021 Radio 1 held Radio 1's Summer Breakout, where James was locked inside a camper van and had to escape by entering a password. 
James escaped the van after 62 hours. The following year, James was booted off the Radio 1 Breakfast Show and had to complete a giant 20-piece jigsaw puzzle, finding the pieces scattered across the United Kingdom. After six days, James completed the puzzle and was reinstated as host of the Breakfast show. In the summer of 2023, all DJs other than Greg James went into hiding, with James and the listeners asked to piece back the schedule and find all 30 DJs. On 20 July, James and the listeners were informed that if any DJs were still missing by noon (UK time) on 21 July, the station would go off air. Mollie King was still hidden at this time, so the station went off air for five minutes, between 12:00 and 12:05 pm, before returning to broadcasting. Online-only sister stations On 17 September 2020, the BBC announced that it would launch an online-only sister station for BBC Radio 1, called BBC Radio 1 Dance, which would primarily play dance music of all kinds. The station was launched on 9 October 2020 at 6 pm BST. A second online-only sister station, BBC Radio 1 Relax, was launched on 22 April 2021. The station plays a selection of relaxation and well-being focused shows. Controllers/Head of Station Former logos Awards and nominations International Dance Music Awards Radio 1 has won the International Dance Music Awards' Best Radio Station every year from 2002 to 2020, with the exception of 2010. See also List of BBC radio stations Radio 1 Podcasts BBC Radio Triple J References Sources Further reading External links Contemporary hit radio stations in the United Kingdom Radio stations established in 1967 1967 establishments in the United Kingdom Radio stations in the United Kingdom
A backplane (or "backplane system") is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications. A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing. Usage Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards. Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time that a card is added or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". Serial backplane technology transmits information using low-voltage differential signaling. In addition, there are bus expansion cables which extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two. Active versus passive backplanes Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style, where all the connectors were connected to a common bus. Due to limitations inherent in the Peripheral Component Interconnect (PCI) specification for driving slots, backplanes are now offered as passive and active. True passive backplanes offer no active bus driving circuitry; any desired arbitration logic is placed on the daughter cards. Active backplanes include chips which buffer the various signals to the slots. The distinction between the two is not always clear, but it may become an important issue if a whole system is expected to have no single point of failure (SPOF). A common assumption, something of a myth, is that a passive backplane, even if it is the only one in the system, is not a SPOF; active backplanes are more complicated and thus carry a non-zero risk of malfunction. However, one situation that can cause disruption with both active and passive backplanes is maintenance: while swapping boards there is always a possibility of damaging the pins or connectors on the backplane, which may cause a full system outage, since all boards mounted on the backplane must be removed in order to repair it. For this reason, newer architectures interconnect system boards point to point over high-speed redundant links, so that there is no single point of failure anywhere in the system. 
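As a rough illustration of the contrast between a shared-bus backplane and the point-to-point redundant approach described above, the following Python sketch models both in a toy fashion; the class and method names are hypothetical and purely illustrative, not part of any real specification or vendor API.

# Illustrative toy model only: a shared-bus (passive) backplane versus
# point-to-point links between boards. All names here are hypothetical.

class SharedBusBackplane:
    """Every connector exposes the same bus lines, so a damaged pin or a
    faulty bus driver can affect every plugged-in card at once."""

    def __init__(self, num_slots: int, bus_width: int):
        self.bus = [0] * bus_width        # one set of signal lines shared by all slots
        self.slots = [None] * num_slots

    def plug(self, slot: int, card: str) -> None:
        self.slots[slot] = card           # every card sees self.bus directly


class PointToPointFabric:
    """Each pair of boards gets its own dedicated link, so one failed link
    isolates only that path rather than the whole system."""

    def __init__(self):
        self.links = {}                   # (board_a, board_b) -> link healthy?

    def connect(self, board_a: str, board_b: str) -> None:
        self.links[(board_a, board_b)] = True

    def fail_link(self, board_a: str, board_b: str) -> None:
        self.links[(board_a, board_b)] = False   # other links keep working

In the shared-bus model, any fault on the common bus is visible to every slot; in the fabric model, marking one link as failed leaves the other links untouched, which is the redundancy property the newer point-to-point architectures rely on.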
Backplanes versus motherboards When a backplane is used with a plug-in single-board computer (SBC) or system host board (SHB), the combination provides the same functionality as a motherboard, providing processing power, memory, I/O and slots for plug-in cards. While there are a few motherboards that offer more than 8 slots, that is the traditional limit. In addition, as technology progresses, the availability and number of a particular slot type may be limited in terms of what is currently offered by motherboard manufacturers. However, backplane architecture is somewhat unrelated to the SBC technology plugged into it. There are some limitations to what can be constructed, in that the SBC chip set and processor have to provide the capability of supporting the slot types. In addition, a virtually unlimited number of slots can be provided, with 20 (including the SBC slot) as a practical, though not absolute, limit. Thus, a PICMG backplane can provide any number and any mix of ISA, PCI, PCI-X, and PCIe slots, limited only by the ability of the SBC to interface to and drive those slots. For example, an SBC with the latest i7 processor could interface with a backplane providing up to 19 ISA slots to drive legacy I/O cards. Midplane Some backplanes are constructed with slots for connecting to devices on both sides, and are referred to as midplanes. This ability to plug cards into either side of a midplane is often useful in larger systems made up primarily of modules attached to the midplane. Midplanes are often used in computers, mostly in blade servers, where server blades reside on one side and the peripheral (power, networking, and other I/O) and service modules reside on the other. Midplanes are also popular in networking and telecommunications equipment where one side of the chassis accepts system processing cards and the other side of the chassis accepts network interface cards. Orthogonal midplanes connect vertical cards on one side to horizontal boards on the other side. One common orthogonal midplane connects many vertical telephone line cards on one side, each one connected to copper telephone wires, to a horizontal communications card on the other side. A "virtual midplane" is an imaginary plane between vertical cards on one side that directly connect to horizontal boards on the other side; the card-slot aligners of the card cage and self-aligning connectors on the cards hold the cards in position. Some people use the term "midplane" to describe a board that sits between and connects a hard drive hot-swap backplane and redundant power supplies. Backplanes in storage Servers commonly have a backplane to attach hot-swappable hard disk drives and solid-state drives; backplane pins pass directly into hard drive sockets without cables. They may have a single connector for one disk array controller, or multiple connectors that can be connected to one or more controllers in an arbitrary way. Backplanes are commonly found in disk enclosures, disk arrays, and servers. Backplanes for SAS and SATA HDDs most commonly use the SGPIO protocol as the means of communication between the host adapter and the backplane. Alternatively, SCSI Enclosure Services can be used. With Parallel SCSI subsystems, SAF-TE is used. Platforms PICMG A single-board computer meeting the PICMG 1.3 specification and compatible with a PICMG 1.3 backplane is referred to as a System Host Board. 
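To make the slot-compatibility constraint described under "Backplanes versus motherboards" concrete, here is a minimal, hypothetical sketch of the idea that a backplane may offer any mix of slot types, but only the ones the plugged-in SBC's chipset can drive are usable. The slot-type strings, the example SBC, and the helper function are illustrative assumptions, not part of any PICMG specification.

# Minimal sketch under simplified assumptions: a backplane offers a mix of
# slot types, but only those the SBC's chipset can drive are usable.
# The slot-type strings, example SBC, and helper function are hypothetical.

def usable_slots(backplane_slots, sbc_supported):
    """Return the subset of backplane slots the SBC can actually drive."""
    return [slot for slot in backplane_slots if slot in sbc_supported]

# A hypothetical PICMG 1.3-style backplane offering a mixed set of slots:
backplane = ["PCIe", "PCIe", "PCIe", "PCI-X", "PCI", "ISA", "ISA"]

# A hypothetical modern SBC whose chipset drives PCIe and PCI but not ISA or PCI-X:
sbc_supported = {"PCIe", "PCI"}

print(usable_slots(backplane, sbc_supported))   # ['PCIe', 'PCIe', 'PCIe', 'PCI']

The point of the sketch is only that the backplane's slot mix and the SBC's capabilities are decided separately, and the usable system is their intersection.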
In the Intel Single-Board Computer world, PICMG provides standards for the backplane interface: PICMG 1.0, 1.1 and 1.2 provide ISA and PCI support, with 1.2 adding PCI-X support. PICMG 1.3 provides PCI Express support. See also Motherboard Switched fabric Daughterboard M-Module SS-50 Bus STD Bus STEbus Eurocard (printed circuit board) VXI
The Battle of Waterloo () was fought on Sunday 18 June 1815, near Waterloo (at that time in the United Kingdom of the Netherlands, now in Belgium), marking the end of the Napoleonic Wars. A French army under the command of Napoleon was defeated by two armies of the Seventh Coalition. One of these was a British-led force with units from the United Kingdom, the Netherlands, Hanover, Brunswick, and Nassau, under the command of the Duke of Wellington (often referred to as the Anglo-allied army or Wellington's army). The other comprised three corps of the Prussian army under Field Marshal von Blücher (the fourth corps of this army fought at the Battle of Wavre on the same day). The battle was known contemporarily as the Battle of Mont Saint-Jean in France or La Belle Alliance ("the Beautiful Alliance") in Prussia. Upon Napoleon's return to power in March 1815 (beginning the Hundred Days), many states that had previously opposed him formed the Seventh Coalition and hurriedly mobilised their armies. Wellington's and Blücher's armies were cantoned close to the northeastern border of France. Napoleon planned to attack them separately in the hope of destroying them before they could join in a coordinated invasion of France with other members of the coalition. On 16 June, Napoleon successfully attacked the bulk of the Prussian army at the Battle of Ligny with his main force, causing the Prussians to withdraw northwards on 17 June, but parallel to Wellington and in good order. Meanwhile, a small portion of the French army contested the Battle of Quatre Bras and prevented the Anglo-allied army from reinforcing the Prussians at Ligny as planned. The Anglo-allied army held their ground at Quatre Bras on 16 June, but the withdrawal of the Prussians from Ligny caused Wellington to withdraw north to Waterloo on 17 June. Napoleon sent a third of his forces to pursue the Prussians, which resulted in the separate Battle of Wavre with the Prussian rear-guard on 18-19 June, and prevented that French force from participating at Waterloo. Upon learning that the Prussian army was able to support him, Wellington decided to offer battle on the Mont-Saint-Jean escarpment across the Brussels road, near the village of Waterloo. Here he withstood repeated attacks by the French throughout the afternoon of 18 June and almost lost the battle, but was eventually aided by the progressively arriving 50,000 Prussians who attacked the French flank and inflicted heavy casualties. In the evening, Napoleon assaulted the Anglo-allied line with his last reserves, the senior infantry battalions of the Imperial Guard. With the Prussians breaking through on the French right flank, the Anglo-allied army repulsed the Imperial Guard, and the French army was routed. Waterloo was the decisive engagement of the Waterloo campaign and Napoleon's last. It was also the second bloodiest single day battle of the Napoleonic Wars, after Borodino. According to Wellington, the battle was "the nearest-run thing you ever saw in your life". Napoleon abdicated four days later, and coalition forces entered Paris on 7 July. The defeat at Waterloo marked the end of Napoleon's Hundred Days return from exile. It precipitated Napoleon's second and definitive abdication as Emperor of the French, and ended the First French Empire. It set a historical milestone between serial European wars and decades of relative peace, often referred to as the Pax Britannica. In popular culture, the phrase "... Waterloo." 
is a reference implying that someone has met a final, decisive defeat. The battlefield is located in the Belgian municipalities of Braine-l'Alleud and Lasne, about south of Brussels, and about from the town of Waterloo. The site of the battlefield today is dominated by the monument of the Lion's Mound, a large artificial hill constructed from earth taken from the battlefield itself; the topography of the battlefield near the mound has not been preserved. Prelude On 13 March 1815, six days before Napoleon reached Paris, the powers at the Congress of Vienna declared him an outlaw. Four days later, the United Kingdom, Russia, Austria, and Prussia mobilised armies to defeat Napoleon. Critically outnumbered, Napoleon knew that once his attempts at dissuading one or more members of the Seventh Coalition from invading France had failed, his only chance of remaining in power was to attack before the coalition mobilised. Had Napoleon succeeded in destroying the existing coalition forces south of Brussels before they were reinforced, he might have been able to drive the British back to the sea and knock the Prussians out of the war. Crucially, this would have bought him time to recruit and train more men before turning his armies against the Austrians and Russians. An additional consideration for Napoleon was that a French victory might cause French-speaking sympathisers in Belgium to launch a friendly revolution. Also, coalition troops in Belgium were largely second-line, as many units were of dubious quality and loyalty, and most of the British veterans of the Peninsular War had been sent to North America to fight in the War of 1812. The initial dispositions of Wellington, the British commander, were intended to counter the threat of Napoleon enveloping the Coalition armies by moving through Mons to the south-west of Brussels. This would have pushed Wellington closer to the Prussian forces, led by Gebhard Leberecht von Blücher, but might have cut Wellington's communications with his base at Ostend. In order to delay Wellington's deployment, Napoleon spread false intelligence which suggested that Wellington's supply chain from the Channel ports would be cut. By June, Napoleon had raised a total army strength of about 300,000 men. The force at his disposal at Waterloo was less than one third that size, but the rank and file were mostly loyal and experienced soldiers. Napoleon divided his army into a left wing commanded by Marshal Ney, a right wing commanded by Marshal Grouchy and a reserve under his command (although all three elements remained close enough to support one another). Crossing the frontier near Charleroi before dawn on 15 June, the French rapidly overran Coalition outposts, securing Napoleon's "central position" between Wellington's and Blücher's armies. He hoped this would prevent them from combining, and he would be able to destroy first the Prussians' army, then Wellington's. Only very late on the night of 15 June was Wellington certain that the Charleroi attack was the main French thrust. In the early hours of 16 June, at the Duchess of Richmond's ball in Brussels, he received a dispatch from the Prince of Orange and was shocked by the speed of Napoleon's advance. He hastily ordered his army to concentrate on Quatre Bras, where the Prince of Orange, with the brigade of Prince Bernhard of Saxe-Weimar, was holding a tenuous position against the soldiers of Ney's left wing. 
Ney's orders were to secure the crossroads of Quatre Bras, so that he could later swing east and reinforce Napoleon if necessary. Ney found the crossroads of Quatre Bras lightly held by the Prince of Orange, who repelled Ney's initial attacks but was gradually driven back by overwhelming numbers of French troops. First reinforcements, and then Wellington arrived. He took command and drove Ney back, securing the crossroads by early evening, too late to send help to the Prussians, who had already been defeated. Meanwhile, on 16 June, Napoleon attacked and defeated Blücher's Prussians at the Battle of Ligny using part of the reserve and the right wing of his army. The Prussian centre gave way under heavy French assaults, but the flanks held their ground. The Prussian retreat from Ligny went uninterrupted and seemingly unnoticed by the French. The bulk of their rearguard units held their positions until about midnight, and some elements did not move out until the following morning, ignored by the French. Crucially, the Prussians did not retreat to the east, along their own lines of communication. Instead, they, too, fell back northwards—parallel to Wellington's line of march, still within supporting distance and in communication with him throughout. The Prussians rallied on Bülow's IV Corps, which had not been engaged at Ligny and was in a strong position south of Wavre. With the Prussian retreat from Ligny, Wellington's position at Quatre Bras was untenable. The next day he withdrew northwards, to a defensive position that he had reconnoitred the previous year—the low ridge of Mont-Saint-Jean, south of the village of Waterloo and the Sonian Forest. Napoleon, with the reserves, made a late start on 17 June and joined Ney at Quatre Bras at 13:00 to attack Wellington's army but found the position empty. The French pursued Wellington's retreating army to Waterloo; however, due to bad weather, mud and the head start that Napoleon's tardy advance had allowed Wellington, there was no substantial engagement, apart from a cavalry action at Genappe. Before leaving Ligny, Napoleon had ordered Grouchy, who commanded the right wing, to follow up the retreating Prussians with 33,000 men. A late start, uncertainty about the direction the Prussians had taken, and the vagueness of the orders given to him, meant that Grouchy was too late to prevent the Prussian army reaching Wavre, from where it could march to support Wellington. More importantly, the heavily outnumbered Prussian rear-guard was able to use the River Dyle to enable a savage and prolonged action to delay Grouchy. As 17 June drew to a close, Wellington's army had arrived at its position at Waterloo, with the main body of Napoleon's army following. Blücher's army was gathering in and around Wavre, around to the east of the town. Early on the morning of the 18th, Wellington received an assurance from Blücher that the Prussian army would support him. He decided to hold his ground and give battle. Armies Three armies participated in the battle: Napoleon's Armée du Nord, a multinational army under Wellington, and a Prussian army under General Blücher. The French army of around 69,000 consisted of 48,000 infantry, 14,000 cavalry, and 7,000 artillery with 250 guns. Napoleon had used conscription to fill the ranks of the French army throughout his rule, but he did not conscript men for the 1815 campaign. His troops were mainly veterans with considerable experience and a fierce devotion to their Emperor. 
The cavalry in particular was both numerous and formidable, and included fourteen regiments of armoured heavy cavalry, and seven of highly versatile lancers who were armed with lances, sabres and firearms. However, as the army took shape, French officers were allocated to units as they presented themselves for duty, so that many units were commanded by officers the soldiers did not know, and often did not trust. Crucially, some of these officers had little experience in working together as a unified force, so that support for other units was often not given. The French army was forced to march through rain and black coal-dust mud to reach Waterloo, and then to contend with mud and rain as it slept in the open. Little food was available for the soldiers, but nevertheless the veteran French soldiers were fiercely loyal to Napoleon. Wellington later said that he had "an infamous army, very weak and ill-equipped, and a very inexperienced Staff". His troops consisted of 67,000 men: 50,000 infantry, 11,000 cavalry, and 6,000 artillery with 150 guns. Of these, 24,000 were British, with another 6,000 from the King's German Legion (KGL). All of the British Army troops were regular soldiers and the majority of them had served in the Peninsula. Of the 23 British regiments in action, only 4 (the 14th, 33rd, 69th, and 73rd Foot) had not served in the Peninsula, and a similar level of experience was to be found in the British cavalry and artillery. In addition, there were 17,000 Dutch and Belgian troops, 11,000 from Hanover, 6,000 from Brunswick, and 3,000 from Nassau. Many of the troops in the Coalition armies were inexperienced. The Dutch army had been re-established in 1815, following the earlier defeat of Napoleon. With the exception of the British and some from Hanover and Brunswick who had fought with the British army in Spain, many of the professional soldiers in the Coalition armies had spent some of their time in the French army or in armies allied to the Napoleonic regime. The historian Alessandro Barbero states that in this heterogeneous army the difference between British and foreign troops did not prove significant under fire. Wellington was also acutely short of heavy cavalry, having only seven British and three Dutch regiments. The Duke of York imposed many of his staff officers on Wellington, including his second-in-command the Earl of Uxbridge. Uxbridge commanded the cavalry and had carte blanche from Wellington to commit these forces at his discretion. Wellington stationed a further 17,000 troops at Halle, away to the west. They were mostly composed of Dutch troops under the Prince of Orange's younger brother Prince Frederick of the Netherlands. They were placed as a guard against any possible wide flanking movement by the French forces, and also to act as a rearguard if Wellington was forced to retreat towards Antwerp and the coast. The Prussian army was in the throes of reorganisation. In 1815, the former Reserve regiments, Legions, and Freikorps volunteer formations from the wars of 1813–1814 were in the process of being absorbed into the line, along with many Landwehr (militia) regiments. The Landwehr were mostly untrained and unequipped when they arrived in Belgium. The Prussian cavalry were in a similar state. Its artillery was also reorganising and did not give its best performance—guns and equipment continued to arrive during and after the battle. Offsetting these handicaps, the Prussian Army had excellent and professional leadership in its General Staff organisation. 
These officers came from four schools developed for this purpose and thus worked to a common standard of training. This system was in marked contrast to the conflicting, vague orders issued by the French army. This staff system ensured that before Ligny, three-quarters of the Prussian army concentrated for battle with 24 hours' notice. After Ligny, the Prussian army, although defeated, was able to realign its supply train, reorganise itself, and intervene decisively on the Waterloo battlefield within 48 hours. Two-and-a-half Prussian army corps, or 48,000 men, were engaged at Waterloo; two brigades under Bülow, commander of IV Corps, attacked Lobau at 16:30, while Zieten's I Corps and parts of Pirch I's II Corps engaged at about 18:00. Battlefield The Waterloo position chosen by Wellington was a strong one. It consisted of a long ridge running east–west, perpendicular to, and bisected by, the main road to Brussels. Along the crest of the ridge ran the Ohain road, a deep sunken lane. Near the crossroads with the Brussels road was a large elm tree that was roughly in the centre of Wellington's position and served as his command post for much of the day. Wellington deployed his infantry in a line just behind the crest of the ridge following the Ohain road. Using the reverse slope, as he had many times previously, Wellington concealed his strength from the French, with the exception of his skirmishers and artillery. The length of front of the battlefield was also relatively short at . This allowed Wellington to draw up his forces in depth, which he did in the centre and on the right, all the way towards the village of Braine-l'Alleud, in the expectation that the Prussians would reinforce his left during the day. In front of the ridge, there were three positions that could be fortified. On the extreme right were the château, garden, and orchard of Hougoumont. This was a large and well-built country house, initially hidden in trees. The house faced north along a sunken, covered lane (usually described by the British as "the hollow-way") along which it could be supplied. On the extreme left was the hamlet of Papelotte. Both Hougoumont and Papelotte were fortified and garrisoned, and thus anchored Wellington's flanks securely. Papelotte also commanded the road to Wavre that the Prussians would use to send reinforcements to Wellington's position. On the western side of the main road, and in front of the rest of Wellington's line, was the farmhouse and orchard of La Haye Sainte, which was garrisoned with 400 light infantry of the King's German Legion. On the opposite side of the road was a disused sand quarry, where the 95th Rifles were posted as sharpshooters. The positioning of Wellington's forces presented a formidable challenge to any attacking force. Any attempt to turn Wellington's right would entail taking the entrenched Hougoumont position. Any attack on his right centre would mean the attackers would have to march between enfilading fire from Hougoumont and La Haye Sainte. On the left, any attack would also be enfiladed by fire from La Haye Sainte and its adjoining sandpit, and any attempt at turning the left flank would entail fighting through the lanes and hedgerows surrounding Papelotte and the other garrisoned buildings on that flank, and some very wet ground in the Smohain defile. The French army formed on the slopes of another ridge to the south. Napoleon could not see Wellington's positions, so he drew his forces up symmetrically about the Brussels road. 
On the right was I Corps under d'Erlon with 16,000 infantry and 1,500 cavalry, plus a cavalry reserve of 4,700. On the left was II Corps under Reille with 13,000 infantry, and 1,300 cavalry, and a cavalry reserve of 4,600. In the centre about the road south of the inn La Belle Alliance were a reserve including Lobau's VI Corps with 6,000 men, the 13,000 infantry of the Imperial Guard, and a cavalry reserve of 2,000. In the right rear of the French position was the substantial village of Plancenoit, and at the extreme right, the Bois de Paris wood. Napoleon initially commanded the battle from Rossomme farm, where he could see the entire battlefield, but moved to a position near La Belle Alliance early in the afternoon. Command on the battlefield (which was largely hidden from his view) was delegated to Ney. Battle Preparation Wellington rose at around 02:00 or 03:00 on 18 June, and wrote letters until dawn. He had earlier written to Blücher confirming that he would give battle at Mont-Saint-Jean if Blücher could provide him with at least one corps; otherwise he would retreat towards Brussels. At a late-night council, Blücher's chief of staff, August Neidhardt von Gneisenau, had been distrustful of Wellington's strategy, but Blücher persuaded him that they should march to join Wellington's army. In the morning Wellington duly received a reply from Blücher, promising to support him with three corps. From 06:00 Wellington was in the field supervising the deployment of his forces. At Wavre, the Prussian IV Corps under Bülow was designated to lead the march to Waterloo as it was in the best shape, not having been involved in the Battle of Ligny. Although they had not taken casualties, IV Corps had been marching for two days, covering the retreat of the three other corps of the Prussian army from the battlefield of Ligny. They had been posted farthest away from the battlefield, and progress was very slow. The roads were in poor condition after the night's heavy rain, and Bülow's men had to pass through the congested streets of Wavre and move 88 artillery pieces. Matters were not helped when a fire broke out in Wavre, blocking several streets along Bülow's intended route. As a result, the last part of the corps left at 10:00, six hours after the leading elements had moved out towards Waterloo. Bülow's men were followed to Waterloo first by I Corps and then by II Corps. Napoleon breakfasted off silver plate at Le Caillou, the house where he had spent the night. When Soult suggested that Grouchy should be recalled to join the main force, Napoleon said, "Just because you have all been beaten by Wellington, you think he's a good general. I tell you Wellington is a bad general, the English are bad troops, and this affair is nothing more than eating breakfast". Napoleon's seemingly dismissive remark may have been strategic, given his maxim "in war, morale is everything". He had acted similarly in the past, and on the morning of the battle of Waterloo may have been responding to the pessimism and objections of his chief of staff and senior generals. Later on, being told by his brother, Jerome, of some gossip overheard by a waiter between British officers at lunch at the 'King of Spain' inn in Genappe that the Prussians were to march over from Wavre, Napoleon declared that the Prussians would need at least two days to recover and would be dealt with by Grouchy. 
Surprisingly, Jerome's overheard gossip aside, the French commanders present at the pre-battle conference at Le Caillou had no information about the alarming proximity of the Prussians and did not suspect that Blücher's men would start erupting onto the field of battle in great numbers just five hours later. Napoleon had delayed the start of the battle owing to the sodden ground, which would have made manoeuvring cavalry and artillery difficult. In addition, many of his forces had bivouacked well to the south of La Belle Alliance. At 10:00, in response to a dispatch he had received from Grouchy six hours earlier, he sent a reply telling Grouchy to "head for Wavre [to Grouchy's north] in order to draw near to us [to the west of Grouchy]" and then "push before him" the Prussians to arrive at Waterloo "as soon as possible". At 11:00, Napoleon drafted his general order: Reille's Corps on the left and d'Erlon's Corps to the right were to attack the village of Mont-Saint-Jean and keep abreast of one another. This order assumed Wellington's battle-line was in the village, rather than at the more forward position on the ridge. To enable this, Jerome's division would make an initial attack on Hougoumont, which Napoleon expected would draw in Wellington's reserves, since its loss would threaten his communications with the sea. A grande batterie of the reserve artillery of I, II, and VI Corps was to then bombard the centre of Wellington's position from about 13:00. D'Erlon's corps would then attack Wellington's left, break through, and roll up his line from east to west. In his memoirs, Napoleon wrote that his intention was to separate Wellington's army from the Prussians and drive it back towards the sea. Hougoumont Historian Andrew Roberts notes that "It is a curious fact about the Battle of Waterloo that no one is absolutely certain when it actually began". Wellington recorded in his dispatches that at "about ten o'clock [Napoleon] commenced a furious attack upon our post at Hougoumont". Other sources state that the attack began around 11:30. The house and its immediate environs were defended by four light companies of Guards, and the wood and park by Hanoverian Jäger and the 1/2nd Nassau. The initial attack by Bauduin's brigade emptied the wood and park, but was driven back by heavy British artillery fire, and cost Bauduin his life. As the British guns were distracted by a duel with French artillery, a second attack by Soye's brigade and what had been Bauduin's succeeded in reaching the north gate of the house. Sous-Lieutenant Legros, a French officer, broke the gate open with an axe, and some French troops managed to enter the courtyard. The Coldstream Guards and the Scots Guards arrived to support the defence. There was a fierce melee, and the British managed to close the gate on the French troops streaming in. The Frenchmen trapped in the courtyard were all killed. Only a young drummer boy was spared. Fighting continued around Hougoumont all afternoon. Its surroundings were heavily invested by French light infantry, and coordinated attacks were made against the troops behind Hougoumont. Wellington's army defended the house and the hollow way running north from it. In the afternoon, Napoleon personally ordered the house to be shelled to set it on fire, resulting in the destruction of all but the chapel. Du Plat's brigade of the King's German Legion was brought forward to defend the hollow way, which they had to do without senior officers. 
Eventually they were relieved by the 71st Highlanders, a British infantry regiment. Adam's brigade was further reinforced by Hugh Halkett's 3rd Hanoverian Brigade, and successfully repulsed further infantry and cavalry attacks sent by Reille. Hougoumont held out until the end of the battle. The fighting at Hougoumont has often been characterised as a diversionary attack to draw in Wellington's reserves which escalated into an all-day battle and drew in French reserves instead. In fact there is a good case to believe that both Napoleon and Wellington thought that holding Hougoumont was key to winning the battle. Hougoumont was a part of the battlefield that Napoleon could see clearly, and he continued to direct resources towards it and its surroundings all afternoon (33 battalions in all, 14,000 troops). Similarly, though the house never contained a large number of troops, Wellington devoted 21 battalions (12,000 troops) over the course of the afternoon in keeping the hollow way open to allow fresh troops and ammunition to reach the buildings. He moved several artillery batteries from his hard-pressed centre to support Hougoumont, and later stated that "the success of the battle turned upon closing the gates at Hougoumont". Much like the fight for Little Round Top during the Battle of Gettysburg in the US Civil War some fifty years later, the struggle for Hougoumont became the key battle within the battle. Hougoumont proved to be decisive terrain. The Grand Battery starts its bombardment The 80 guns of Napoleon's grande batterie drew up in the centre. These opened fire at 11:50, according to Lord Hill (commander of the Anglo-allied II Corps), while other sources put the time between noon and 13:30. The grande batterie was too far back to aim accurately, and the only other troops they could see were skirmishers of the regiments of Kempt and Pack, and Perponcher's 2nd Dutch division (the others were employing Wellington's characteristic "reverse slope defence"). The bombardment caused a large number of casualties. Although some projectiles buried themselves in the soft soil, most found their marks on the reverse slope of the ridge. The bombardment forced the cavalry of the Union Brigade (in third line) to move to its left, to reduce their casualty rate. Napoleon spots the Prussians At about 13:15, Napoleon saw the first columns of Prussians around the village of Lasne-Chapelle-Saint-Lambert, away from his right flank—about three hours march for an army. Napoleon's reaction was to have Marshal Soult send a message to Grouchy telling him to come towards the battlefield and attack the arriving Prussians. Grouchy, however, had been executing Napoleon's previous orders to follow the Prussians "with your sword against his back" towards Wavre, and was by then too far away to reach Waterloo. Grouchy was advised by his subordinate, Gérard, to "march to the sound of the guns", but stuck to his orders and engaged the Prussian III Corps rear guard under the command of Lieutenant-General Baron von Thielmann at the Battle of Wavre. Moreover, Soult's letter ordering Grouchy to move quickly to join Napoleon and attack Bülow would not actually reach Grouchy until after 20:00. First French infantry attack A little after 13:00, I Corps' attack began in large columns. 
Bernard Cornwell writes "[column] suggests an elongated formation with its narrow end aimed like a spear at the enemy line, while in truth it was much more like a brick advancing sideways and d'Erlon's assault was made up of four such bricks, each one a division of French infantry". Each division, with one exception, was drawn up in huge masses, consisting of the eight or nine battalions of which they were formed, deployed, and placed in a column one behind the other, with only five paces interval between the battalions. The one exception was the 1st Division (Commanded by Quiot, the leader of the 1st Brigade). Its two brigades were formed in a similar manner, but side by side instead of behind one another. This was done because, being on the left of the four divisions, it was ordered to send one (Quiot's brigade) against the south and west of La Haye Sainte, while the other (Bourgeois') was to attack the eastern side of the same post. The divisions were to advance in echelon from the left at a distance of 400 paces apart—the 2nd Division (Donzelot's) on the right of Bourgeois' brigade, the 3rd Division (Marcognet's) next, and the 4th Division (Durutte's) on the right. They were led by Ney to the assault, each column having a front of about a hundred and sixty to two hundred files. The leftmost division advanced on the walled farmhouse compound La Haye Sainte. The farmhouse was defended by the King's German Legion. While one French battalion engaged the defenders from the front, the following battalions fanned out to either side and, with the support of several squadrons of cuirassiers, succeeded in isolating the farmhouse. The King's German Legion resolutely defended the farmhouse. Each time the French tried to scale the walls the outnumbered Germans somehow held them off. The Prince of Orange saw that La Haye Sainte had been cut off and tried to reinforce it by sending forward the Hanoverian Lüneburg Battalion in line. Cuirassiers concealed in a fold in the ground caught and destroyed it in minutes and then rode on past La Haye Sainte, almost to the crest of the ridge, where they covered d'Erlon's left flank as his attack developed. At about 13:30, d'Erlon started to advance his three other divisions, some 14,000 men over a front of about , against Wellington's left wing. At the point they aimed for they faced 6,000 men: the first line consisted of the 1st brigade ("Van Bylandt's brigade") of the 2nd Netherlands division, flanked by the British brigades of Kempt and Pack on either side. The second line consisted of British and Hanoverian troops under Sir Thomas Picton, who were lying down in dead ground behind the ridge. All had suffered badly at Quatre Bras. In addition, the Bylandt brigade had been ordered to deploy its skirmishers in the hollow road and on the forward slope. The rest of the brigade was lying down just behind the road. At the moment these skirmishers were rejoining their parent battalions, the brigade was ordered to its feet and started to return fire. On the left of the brigade, where the 7th Dutch Militia stood, a "few files were shot down and an opening in the line thus occurred". The battalion had no reserves and was unable to close the gap. D'Erlon's troops pushed through this gap in the line and the remaining battalions in the Bylandt brigade (8th Dutch Militia and Belgian 7th Line Battalion) were forced to retreat to the square of the 5th Dutch Militia, which was in reserve between Picton's troops, about 100 paces to the rear. 
There they regrouped under the command of Colonel Van Zuylen van Nijevelt. A moment later the Prince of Orange ordered a counterattack, which actually occurred around 10 minutes later. Bylandt was wounded and retired off the field, passing command of the brigade to Lt. Kol. De Jongh. D'Erlon's men ascended the slope and advanced on the sunken road, Chemin d'Ohain, that ran from behind La Haye Sainte and continued east. It was lined on both sides by thick hedges, with Bylandt's brigade just across the road while the British brigades had been lying down some 100 yards back from the road, Pack's to Bylandt's left and Kempt's to Bylandt's right. Kempt's 1,900 men were engaged by Bourgeois' brigade of 1,900 men of Quiot's division. In the centre, Donzelot's division had pushed back Bylandt's brigade. On the right of the French advance was Marcognet's division led by Grenier's brigade consisting of the 45e Régiment de Ligne and followed by the 25e Régiment de Ligne, somewhat less than 2,000 men, and behind them, Nogue's brigade of the 21e and 45e regiments. Opposing them on the other side of the road was Pack's 9th Brigade consisting of the 44th Foot and three Scottish regiments: the Royal Scots, the 42nd Black Watch, and the 92nd Gordons, totalling something over 2,000 men. A very even fight between British and French infantry was about to occur. The French advance drove in the British skirmishers and reached the sunken road. As they did so, Pack's men stood up, formed into a four deep line formation for fear of the French cavalry, advanced, and opened fire. However, a firefight had been anticipated and the French infantry had accordingly advanced in more linear formation. Now, fully deployed into line, they returned fire and successfully pressed the British troops; although the attack faltered at the centre, the line in front of d'Erlon's right started to crumble. Picton was killed shortly after ordering the counter-attack and the British and Hanoverian troops also began to give way under the pressure of numbers. Pack's regiments, all four ranks deep, advanced to attack the French in the road but faltered and began to fire on the French instead of charging. The 42nd Black Watch halted at the hedge and the resulting fire-fight drove back the British 92nd Foot while the leading French 45e Ligne burst through the hedge cheering. Along the sunken road, the French were forcing the Anglo-allies back, the British line was dispersing, and at two o'clock in the afternoon Napoleon was winning the Battle of Waterloo. Reports from Baron von Müffling, the Prussian liaison officer attached to Wellington's army, relate that: "After 3 o'clock the Duke's situation became critical, unless the succour of the Prussian army arrived soon". Charge of the British heavy cavalry At this crucial juncture, Uxbridge ordered his two brigades of British heavy cavalry—formed unseen behind the ridge—to charge in support of the hard-pressed infantry. The 1st Brigade, known as the Household Brigade, commanded by Major-General Lord Edward Somerset, consisted of guards regiments: the 1st and 2nd Life Guards, the Royal Horse Guards (the Blues), and the 1st (King's) Dragoon Guards. The 2nd Brigade, also known as the Union Brigade, commanded by Major-General Sir William Ponsonby, was so called as it consisted of an English (the 1st or The Royals), a Scottish (2nd Scots Greys), and an Irish (6th or Inniskilling) regiment of heavy dragoons. 
More than 20 years of warfare had eroded the numbers of suitable cavalry mounts available on the European continent; this resulted in the British heavy cavalry entering the 1815 campaign with the finest horses of any contemporary cavalry arm. British cavalry troopers also received excellent mounted swordsmanship training. They were, however, inferior to the French in manoeuvring in large formations, were cavalier in attitude, and, unlike the infantry, some units had scant experience of warfare. The Scots Greys, for example, had not been in action since 1795. According to Wellington, though they were superior individual horsemen, they were inflexible and lacked tactical ability. "I considered one squadron a match for two French, I didn't like to see four British opposed to four French: and as the numbers increased and order, of course, became more necessary I was the more unwilling to risk our men without having a superiority in numbers." The two brigades had a combined field strength of about 2,000 (2,651 official strength); they charged with the 47-year-old Uxbridge leading them and a very inadequate number of squadrons held in reserve. There is evidence that Uxbridge gave an order, the morning of the battle, to all cavalry brigade commanders to commit their commands on their own initiative, as direct orders from himself might not always be forthcoming, and to "support movements to their front". It appears that Uxbridge expected the brigades of Sir John Ormsby Vandeleur, Hussey Vivian, and the Dutch cavalry to provide support to the British heavies. Uxbridge later regretted leading the charge in person, saying "I committed a great mistake", when he should have been organising an adequate reserve to move forward in support. The Household Brigade crossed the crest of the Anglo-allied position and charged downhill. The cuirassiers guarding d'Erlon's left flank were still dispersed, and so were swept over the deeply sunken main road and then routed. Sir Walter Scott, in Paul's Letters to his Kinsfolk, described the following scene: Sir John Elley, who led the charge of the heavy brigade, was [...] at one time surrounded by several of the cuirassiers; but, being a tall and uncommonly powerful man, completely master of his sword and horse, he cut his way out, leaving several of his assailants on the ground, marked with wounds, indicating the unusual strength of the arm which inflicted them. Indeed, had not the ghastly evidence remained on the field, many of the blows dealt upon this occasion would have seemed borrowed from the annals of knight-errantry [...] Continuing their attack, the squadrons on the left of the Household Brigade then destroyed Aulard's brigade. Despite attempts to recall them, they continued past La Haye Sainte and found themselves at the bottom of the hill on blown horses facing Schmitz's brigade formed in squares. To their left, the Union Brigade suddenly swept through the infantry lines, giving rise to the legend that some of the 92nd Gordon Highland Regiment clung onto their stirrups and accompanied them into the charge. From the centre leftwards, the Royal Dragoons destroyed Bourgeois' brigade, capturing the eagle of the 105e Ligne. The Inniskillings routed the other brigade of Quiot's division, and the Scots Greys came upon the lead French regiment, 45e Ligne, as it was still reforming after having crossed the sunken road and broken through the hedgerow in pursuit of the British infantry. The Greys captured the eagle of the 45e Ligne and overwhelmed Grenier's brigade. 
These would be the only two French eagles captured by the British during the battle. On Wellington's extreme left, Durutte's division had time to form squares and fend off groups of Greys. As with the Household Cavalry, the officers of the Royals and Inniskillings found it very difficult to rein back their troops, who lost all cohesion. Having taken casualties, and still trying to reorder themselves, the Scots Greys and the rest of the Union Brigade found themselves before the main French lines. Their horses were blown, and they were still in disorder without any idea of what their next collective objective was. Some attacked nearby gun batteries of the grande batterie. Although the Greys had neither the time nor the means to disable the cannon or carry them off, they put very many out of action as the gun crews were killed or fled the battlefield. Sergeant Major Dickinson of the Greys stated that his regiment was rallied before going on to attack the French artillery: Hamilton, the regimental commander, rather than holding them back, cried out to his men "Charge, charge the guns!" Napoleon promptly responded by ordering a counter-attack by the cuirassier brigades of Farine and Travers and Jaquinot's two Chevau-léger (lancer) regiments in the I Corps light cavalry division. Disorganized and milling about the bottom of the valley between Hougoumont and La Belle Alliance, the Scots Greys and the rest of the British heavy cavalry were taken by surprise by the countercharge of Milhaud's cuirassiers, joined by lancers from Baron Jaquinot's 1st Cavalry Division. As Ponsonby tried to rally his men against the French cuirassiers, he was attacked by Jaquinot's lancers and captured. A nearby party of Scots Greys saw the capture and attempted to rescue their brigade commander. The French lancer who had captured Ponsonby killed him and then used his lance to kill three of the Scots Greys who had attempted the rescue. By the time Ponsonby died, the momentum had entirely returned in favour of the French. Milhaud's and Jaquinot's cavalrymen drove the Union Brigade from the valley. The result was very heavy losses for the British cavalry. A countercharge, by British light dragoons under Major-General Vandeleur and Dutch–Belgian light dragoons and hussars under Major-General Ghigny on the left wing, and Dutch–Belgian carabiniers under Major-General Trip in the centre, repelled the French cavalry. All figures quoted for the losses of the cavalry brigades as a result of this charge are estimates, as casualties were only noted down after the day of the battle and were for the battle as a whole. Some historians, Barbero for example, believe the official rolls tend to overestimate the number of cavalrymen present in their squadrons on the field of battle and that the proportionate losses were, as a result, considerably higher than the numbers on paper might suggest. The Union Brigade lost heavily in both officers and men killed (including its commander, William Ponsonby, and Colonel Hamilton of the Scots Greys) and wounded. The 2nd Life Guards and the King's Dragoon Guards of the Household Brigade also lost heavily (with Colonel Fuller, commander of the King's DG, killed). However, the 1st Life Guards, on the extreme right of the charge, and the Blues, who formed a reserve, had kept their cohesion and consequently suffered significantly fewer casualties. 
On the rolls, the official (or paper) strength for both brigades is given as 2,651, while Barbero and others estimate the actual strength at around 2,000; the officially recorded losses for the two heavy cavalry brigades during the battle were 1,205 troopers and 1,303 horses. Some historians, such as Chandler, Weller, Uffindell, and Corum, assert that the British heavy cavalry were destroyed as a viable force following their first, epic charge. Barbero states that the Scots Greys were practically wiped out and that the other two regiments of the Union Brigade suffered comparable losses. Other historians, such as Clark-Kennedy and Wood, citing British eyewitness accounts, describe the continuing role of the heavy cavalry after their charge. The heavy brigades, far from being ineffective, continued to provide valuable services. They countercharged French cavalry numerous times (both brigades), halted a combined cavalry and infantry attack (Household Brigade only), were used to bolster the morale of those units in their vicinity at times of crisis, and filled gaps in the Anglo-allied line caused by high casualties in infantry formations (both brigades). This service was rendered at a very high cost, as close combat with French cavalry, carbine fire, infantry musketry, and—more deadly than all of these—artillery fire steadily eroded the number of effectives in the two brigades. At 6 o'clock in the afternoon the whole Union Brigade could field only three squadrons, though these countercharged French cavalry, losing half their number in the process. At the end of the fighting, the two brigades, by this time combined, could muster one squadron. Fourteen thousand French troops of d'Erlon's I Corps had been committed to this attack. The I Corps had been driven in rout back across the valley, costing Napoleon 3,000 casualties including over 2,000 prisoners taken. Some valuable time was also lost, as the charge had dispersed numerous units and it would take until 16:00 for d'Erlon's shaken corps to reform. Although elements of the Prussians now began to appear on the field to his right, Napoleon had already ordered Lobau's VI Corps to move to the right flank to hold them back before d'Erlon's attack began. The French cavalry attack A little before 16:00, Ney noted an apparent exodus from Wellington's centre. He mistook the movement of casualties to the rear for the beginnings of a retreat, and sought to exploit it. Following the defeat of d'Erlon's Corps, Ney had few infantry reserves left, as most of the infantry had been committed either to the futile Hougoumont attack or to the defence of the French right. Ney therefore tried to break Wellington's centre with cavalry alone. Initially, Milhaud's reserve cavalry corps of cuirassiers and Lefebvre-Desnoëttes' light cavalry division of the Imperial Guard, some 4,800 sabres, were committed. When these were repulsed, Kellermann's heavy cavalry corps and Guyot's heavy cavalry of the Guard were added to the massed assault, a total of around 9,000 cavalry in 67 squadrons. When Napoleon saw the charge, he said it was an hour too soon. Wellington's infantry responded by forming squares (hollow box-formations four ranks deep). Squares were much smaller than usually depicted in paintings of the battle—a 500-man battalion square would have been no more than in length on a side. 
Infantry squares that stood their ground were deadly to cavalry, as cavalry could not engage with soldiers behind a hedge of bayonets, but were themselves vulnerable to fire from the squares. Horses would not charge a square, nor could they be outflanked, but they were vulnerable to artillery or infantry. Wellington ordered his artillery crews to take shelter within the squares as the cavalry approached, and to return to their guns and resume fire as they retreated. Witnesses in the British infantry recorded as many as 12 assaults, though this probably includes successive waves of the same general attack; the number of general assaults was undoubtedly far fewer. Kellermann, recognising the futility of the attacks, tried to hold the elite carabinier brigade back from joining in, but eventually Ney spotted them and insisted on their involvement. A British eyewitness of the first French cavalry attack, an officer in the Foot Guards, recorded his impressions very lucidly and somewhat poetically. In essence, this type of massed cavalry attack relied almost entirely on psychological shock for effect. Close artillery support could disrupt infantry squares and allow cavalry to penetrate; at Waterloo, however, co-operation between the French cavalry and artillery was not impressive. The French artillery did not get close enough to the Anglo-allied infantry in sufficient numbers to be decisive. Artillery fire between charges did produce mounting casualties, but most of this fire was at relatively long range and was often indirect, at targets beyond the ridge. If infantry being attacked held firm in their square defensive formations, and were not panicked, cavalry on their own could do very little damage to them. The French cavalry attacks were repeatedly repelled by the steadfast infantry squares, the harrying fire of British artillery as the French cavalry recoiled down the slopes to regroup, and the decisive countercharges of Wellington's light cavalry regiments, the Dutch heavy cavalry brigade, and the remaining effectives of the Household Cavalry. At least one artillery officer disobeyed Wellington's order to seek shelter in the adjacent squares during the charges. Captain Mercer, who commanded 'G' Troop, Royal Horse Artillery, thought the Brunswick troops on either side of him so shaky that he kept his battery of six nine-pounders in action against the cavalry throughout, to great effect. For reasons that remain unclear, no attempt was made to spike other Anglo-allied guns while they were in French possession. In line with Wellington's orders, gunners were able to return to their pieces and fire into the French cavalry as they withdrew after each attack. After numerous costly but fruitless attacks on the Mont-Saint-Jean ridge, the French cavalry was spent. Their casualties cannot easily be estimated. Senior French cavalry officers, in particular the generals, experienced heavy losses. Four divisional commanders were wounded, nine brigadiers wounded, and one killed—testament to their courage and their habit of leading from the front. Illustratively, Houssaye reports that the Grenadiers à Cheval numbered 796 of all ranks on 15 June, but just 462 on 19 June, while the Empress Dragoons lost 416 of 816 over the same period. Overall, Guyot's Guard heavy cavalry division lost 47% of its strength. Second French infantry attack Eventually it became obvious, even to Ney, that cavalry alone were achieving little. 
Belatedly, he organised a combined-arms attack, using Bachelu's division and Tissot's regiment of Foy's division from Reille's II Corps (about 6,500 infantrymen) plus those French cavalry that remained in a fit state to fight. This assault was directed along much the same route as the previous heavy cavalry attacks (between Hougoumont and La Haye Sainte). It was halted by a charge of the Household Brigade cavalry led by Uxbridge. The British cavalry were unable, however, to break the French infantry, and fell back with losses from musketry fire. Uxbridge recorded that he tried to lead the Dutch Carabiniers, under Major-General Trip, to renew the attack and that they refused to follow him. Other members of the British cavalry staff also commented on this occurrence. However, there is no support for this incident in Dutch or Belgian sources, and Wellington wrote in his Dispatch to Secretary for War Bathurst on 19 June 1815 that General Trip had "conducted himself much to my satisfaction". Uxbridge then ordered a charge by three squadrons of the 3rd Hussars of the King's German Legion. They broke through the French cavalry, but became hemmed in, were cut off and suffered severe losses. Meanwhile, Bachelu's and Tissot's men and their cavalry supports were being hard hit by fire from artillery and from Adam's infantry brigade, and they eventually fell back. Although the French cavalry caused few direct casualties to Wellington's centre, artillery fire onto his infantry squares caused many. Wellington's cavalry, except for Sir John Vandeleur's and Sir Hussey Vivian's brigades on the far left, had all been committed to the fight, and had taken significant losses. The situation appeared so desperate that the Cumberland Hussars, the only Hanoverian cavalry regiment present, fled the field spreading alarm all the way to Brussels. French capture of La Haye Sainte At approximately the same time as Ney's combined-arms assault on the centre-right of Wellington's line, rallied elements of D'Erlon's I Corps, spearheaded by the 13th Légère, renewed the attack on La Haye Sainte and this time were successful, partly because the King's German Legion's ammunition ran out. However, the Germans had held the centre of the battlefield for almost the entire day, and this had stalled the French advance. With La Haye Sainte captured, Ney then moved skirmishers and horse artillery up towards Wellington's centre. French artillery began to pulverise the infantry squares at short range with canister. The 30th and 73rd Regiments suffered such heavy losses that they had to combine to form a viable square. The success Napoleon needed to continue his offensive had occurred. Ney was on the verge of breaking the Anglo-allied centre. Along with this artillery fire a multitude of French tirailleurs occupied the dominant positions behind La Haye Sainte and poured an effective fire into the squares. The situation for the Anglo-allies was now so dire that the 33rd Regiment's colours and all of Halkett's brigade's colours were sent to the rear for safety, described by historian Alessandro Barbero as, "... a measure that was without precedent". Wellington, noticing the slackening of fire from La Haye Sainte, with his staff rode closer to it. French skirmishers appeared around the building and fired on the British command as it struggled to get away through the hedgerow along the road. The Prince of Orange then ordered a single battalion of the KGL, the Fifth, to recapture the farm despite the obvious presence of enemy cavalry. 
Their colonel, Christian Friedrich Wilhelm von Ompteda, obeyed and led the battalion down the slope, chasing off some French skirmishers until French cuirassiers fell on his open flank, killed him, destroyed his battalion and took its colour. A Dutch–Belgian cavalry regiment ordered to charge instead retreated from the field, fired on by its own infantry. Merlen's Light Cavalry Brigade charged the French artillery taking position near La Haye Sainte but was shot to pieces and the brigade fell apart. The Netherlands Cavalry Division, Wellington's last cavalry reserve behind the centre, having lost half its strength, was now useless, and the French cavalry, despite its losses, were masters of the field, compelling the Anglo-allied infantry to remain in square. More and more French artillery was brought forward. A French battery advanced to within 300 yards of the 1/1st Nassau square, causing heavy casualties. When the Nassauers attempted to attack the battery they were ridden down by a squadron of cuirassiers. Yet another battery deployed on the flank of Mercer's battery, shot up its horses and limbers, and pushed Mercer back, an episode Mercer later recalled in his journal. French tirailleurs occupied the dominant positions, especially one on a knoll overlooking the square of the 27th. Unable to break square to drive off the French infantry because of the presence of French cavalry and artillery, the 27th had to remain in that formation and endure the fire of the tirailleurs. That fire nearly annihilated the 27th Foot, the Inniskillings, who lost two thirds of their strength within those three or four hours. During this time many of Wellington's generals and aides were killed or wounded, including FitzRoy Somerset, Canning, de Lancey, Alten and Cooke. The situation was now critical, and Wellington, trapped in an infantry square and ignorant of events beyond it, was desperate for the arrival of help from the Prussians, as he later wrote. Arrival of the Prussian IV Corps: Plancenoit The Prussian IV Corps (Bülow's) was the first to arrive in strength. Bülow's objective was Plancenoit, which the Prussians intended to use as a springboard into the rear of the French positions. Blücher intended to secure his right upon the Château Frichermont using the Bois de Paris road. Blücher and Wellington had been exchanging communications since 10:00 and had agreed to this advance on Frichermont if Wellington's centre was under attack. General Bülow noted that the way to Plancenoit lay open and that the time was 16:30. At about this time, the Prussian 15th Brigade was sent to link up with the Nassauers of Wellington's left flank in the Frichermont-La Haie area, with the brigade's horse artillery battery and additional brigade artillery deployed to its left in support. Napoleon sent Lobau's corps to stop the rest of Bülow's IV Corps proceeding to Plancenoit. The 15th Brigade threw Lobau's troops out of Frichermont with a determined bayonet charge, then proceeded up the Frichermont heights, battering French Chasseurs with 12-pounder artillery fire, and pushed on to Plancenoit. This sent Lobau's corps into retreat to the Plancenoit area, driving Lobau past the rear of the Armée du Nord's right flank and directly threatening its only line of retreat. Hiller's 16th Brigade also pushed forward with six battalions against Plancenoit. Napoleon had dispatched all eight battalions of the Young Guard to reinforce Lobau, who was now seriously pressed. 
The Young Guard counter-attacked and, after very hard fighting, secured Plancenoit, but were themselves counter-attacked and driven out. Napoleon sent two battalions of the Middle/Old Guard into Plancenoit and after ferocious bayonet fighting—they did not deign to fire their muskets—this force recaptured the village. Zieten's flank march Throughout the late afternoon, the Prussian I Corps (Zieten's) had been arriving in greater strength in the area just north of La Haie. General Müffling, the Prussian liaison to Wellington, rode to meet Zieten. Zieten had by this time brought up the Prussian 1st Brigade (Steinmetz's), but had become concerned at the sight of stragglers and casualties from the Nassau units on Wellington's left and from the Prussian 15th Brigade (Laurens'). These troops appeared to be withdrawing and Zieten, fearing that his own troops would be caught up in a general retreat, was starting to move away from Wellington's flank and towards the Prussian main body near Plancenoit. Zieten had also received a direct order from Blücher to support Bülow, which Zieten obeyed, starting to march to Bülow's aid. Müffling saw this movement away and persuaded Zieten to support Wellington's left flank. Müffling warned Zieten that "The battle is lost if the corps does not keep on the move and immediately support the English army." Zieten resumed his march to support Wellington directly, and the arrival of his troops allowed Wellington to reinforce his crumbling centre by moving cavalry from his left. The French were expecting Grouchy to march to their support from Wavre, and when Prussian I Corps (Zieten's) appeared at Waterloo instead of Grouchy, "the shock of disillusionment shattered French morale" and "the sight of Zieten's arrival caused turmoil to rage in Napoleon's army". I Corps proceeded to attack the French troops before Papelotte and by 19:30 the French position was bent into a rough horseshoe shape. The ends of the line were now based on Hougoumont on the left, Plancenoit on the right, and the centre on La Haie. Durutte had taken the positions of La Haie and Papelotte in a series of attacks, but now retreated behind Smohain without opposing the Prussian 24th Regiment (Laurens') as it retook both. The 24th advanced against the new French position, was repulsed, and returned to the attack supported by Silesian Schützen (riflemen) and the F/1st Landwehr. The French initially fell back before the renewed assault, but now began seriously to contest ground, attempting to regain Smohain and hold on to the ridgeline and the last few houses of Papelotte. The Prussian 24th Regiment linked up with a Highlander battalion on its far right and along with the 13th Landwehr Regiment and cavalry support threw the French out of these positions. Further attacks by the 13th Landwehr and the 15th Brigade drove the French from Frichermont. Durutte's division, finding itself about to be charged by massed squadrons of Zieten's I Corps cavalry reserve, retreated from the battlefield. The rest of d'Erlon's I Corps also broke and fled in panic, while to the west the French Middle Guard were assaulting Wellington's centre. The Prussian I Corps then advanced towards the Brussels road, the only line of retreat available to the French. Attack of the Imperial Guard Meanwhile, with Wellington's centre exposed by the fall of La Haye Sainte and the Plancenoit front temporarily stabilised, Napoleon committed his last reserve, the hitherto-undefeated Imperial Guard infantry. 
This attack, mounted at around 19:30, was intended to break through Wellington's centre and roll up his line away from the Prussians. Although it is one of the most celebrated passages of arms in military history, it has long been unclear which units actually participated. It appears that it was mounted by five battalions of the Middle Guard, and not by the grenadiers or chasseurs of the Old Guard. Three Old Guard battalions did move forward and formed the attack's second line, though they remained in reserve and did not directly assault the Anglo-allied line. Napoleon himself oversaw the initial deployment of the Middle and Old Guard. The Middle Guard formed in battalion squares, each about 550 men strong, with the 1st/3rd Grenadiers, led by Generals Friant and Poret de Morvan, on the right along the road; to their left and rear was General Harlet leading the square of the 4th Grenadiers, then the 1st/3rd Chasseurs under General Michel, next the 2nd/3rd Chasseurs, and finally the large single square of two battalions, 800 soldiers of the 4th Chasseurs, led by General Henrion. Two batteries of Imperial Guard Horse Artillery accompanied them with sections of two guns between the squares. Each square was led by a general, and Marshal Ney, mounted on his fifth horse of the day, led the advance. Behind them, in reserve, were the three battalions of the Old Guard, from right to left the 1st/2nd Grenadiers, 2nd/2nd Chasseurs and 1st/2nd Chasseurs. Napoleon left Ney to conduct the assault; however, Ney led the Middle Guard on an oblique towards the Anglo-allied centre right instead of attacking straight up the centre. Napoleon sent Ney's senior ADC, Colonel Crabbé, to order Ney to adjust, but Crabbé was unable to get there in time. Other troops rallied to support the advance of the Guard. On the left, infantry from Reille's corps that was not engaged with Hougoumont advanced, together with cavalry. On the right, all the now rallied elements of d'Erlon's corps once again ascended the ridge and engaged the Anglo-allied line. French artillery also moved forward in support, with Duchand's battery in particular inflicting losses on Colin Halkett's brigade. Halkett's front line, consisting of the 30th Foot and the 73rd, traded fire with the 1st/3rd and 4th Grenadiers but was driven back in confusion onto the 33rd and 69th Regiments; Halkett was shot in the face and seriously wounded, and the whole brigade, having been ordered to pull back, retreated in a mob. Other Anglo-allied troops began to give way as well. A counterattack by the Nassauers and the remains of Kielmansegge's brigade from the Anglo-allied second line, led by the Prince of Orange, was also thrown back, and the Prince of Orange was seriously wounded. The survivors of Halkett's brigade were reformed and engaged the French in a firefight. The Dutch divisional commander Chassé, on his own initiative, decided at this critical moment to advance with his relatively fresh Dutch division. Chassé first ordered his artillery forward: a battery of Dutch horse artillery commanded by Captain Krahmer de Bichin opened a destructive fire into the 1st/3rd Grenadiers' flank. This still did not stop the Guard's advance, so Chassé, who was affectionately called "Generaal Bajonet" by his soldiers, ordered his first brigade, commanded by Colonel Hendrik Detmers, to charge the outnumbered French with the bayonet. As the Guard wavered, Chassé galloped among his men and found Captain De Haan with a few soldiers of the 19th Militia, whom he ordered into a flank attack. 
According to Chassé's own account, the French grenadiers then faltered and broke. The 4th Grenadiers, seeing their comrades retreat and having suffered heavy casualties themselves, now wheeled right about and retired. To the left of the 4th Grenadiers were the two squares of the 1st/3rd and 2nd/3rd Chasseurs, who angled further to the west and had suffered more from artillery fire than the grenadiers. But as their advance mounted the ridge they found it apparently abandoned and covered with dead. Suddenly 1,500 British Foot Guards under Peregrine Maitland, who had been lying down to protect themselves from the French artillery, rose and devastated them with point-blank volleys. The chasseurs deployed to answer the fire, but some 300 fell from the first volley, including Colonel Mallet and General Michel, and both battalion commanders. A bayonet charge by the Foot Guards then broke the leaderless squares, which fell back onto the following column. The 4th Chasseurs battalion, 800 strong, now came up onto the exposed battalions of British Foot Guards, who lost all cohesion and dashed back up the slope as a disorganized crowd, with the chasseurs in pursuit. At the crest the chasseurs came upon the battery that had caused severe casualties to the 1st/3rd and 2nd/3rd Chasseurs. They opened fire and swept away the gunners. The left flank of their square now came under fire from a heavy formation of British skirmishers, which the chasseurs drove back. But the skirmishers were replaced by the 52nd Light Infantry (2nd Division), led by John Colborne, which wheeled in line onto the chasseurs' flank and poured a devastating fire into them. The chasseurs returned a very sharp fire which killed or wounded some 150 men of the 52nd. The 52nd then charged, and under this onslaught the chasseurs broke. The last of the Guard retreated headlong. A ripple of panic passed through the French lines as the astounding news spread: "La Garde recule. Sauve qui peut!" ("The Guard is retreating. Every man for himself!") Wellington now stood up in Copenhagen's stirrups and waved his hat in the air to signal a general advance. His army rushed forward from the lines and threw itself upon the retreating French. The surviving Imperial Guard rallied on their three reserve battalions (some sources say four) just south of La Haye Sainte for a last stand. A charge from Adam's Brigade and the Hanoverian Landwehr Osnabrück Battalion, plus Vivian's and Vandeleur's relatively fresh cavalry brigades to their right, threw them into confusion. Those left in semi-cohesive units retreated towards La Belle Alliance. It was during this retreat that some of the Guards were invited to surrender, eliciting the famous, if apocryphal, retort "La Garde meurt, elle ne se rend pas!" ("The Guard dies, it does not surrender!"). Prussian capture of Plancenoit At about the same time, the Prussian 5th, 14th, and 16th Brigades were starting to push through Plancenoit, in the third assault of the day. The church was by now on fire, while its graveyard—the French centre of resistance—had corpses strewn about "as if by a whirlwind". Five Guard battalions were deployed in support of the Young Guard, virtually all of which was now committed to the defence, along with remnants of Lobau's corps. The key to the Plancenoit position proved to be the Chantelet woods to the south. Pirch's II Corps had arrived with two brigades and reinforced the attack of IV Corps, advancing through the woods. 
The 25th Regiment's musketeer battalions threw the 1/2e Grenadiers (Old Guard) out of the Chantelet woods, outflanking Plancenoit and forcing a retreat. The Old Guard retreated in good order until they met the mass of troops retreating in panic, and became part of that rout. The Prussian IV Corps advanced beyond Plancenoit to find masses of French retreating in disorder from British pursuit. The Prussians were unable to fire for fear of hitting Wellington's units. This was the fifth and final time that Plancenoit changed hands. French forces not retreating with the Guard were surrounded in their positions and eliminated, neither side asking for nor offering quarter. The French Young Guard Division reported 96 per cent casualties, and two-thirds of Lobau's Corps ceased to exist. French disintegration The French right, left, and centre had all now failed. The last cohesive French force consisted of two battalions of the Old Guard stationed around La Belle Alliance; they had been so placed to act as a final reserve and to protect Napoleon in the event of a French retreat. He hoped to rally the French army behind them, but as retreat turned into rout, they too were forced to withdraw, one on either side of La Belle Alliance, in square as protection against Coalition cavalry. Until persuaded that the battle was lost and he should leave, Napoleon commanded the square to the left of the inn. Adam's Brigade charged and forced back this square, while the Prussians engaged the other. As dusk fell, both squares withdrew in relatively good order, but the French artillery and everything else fell into the hands of the Prussian and Anglo-allied armies. The retreating Guards were surrounded by thousands of fleeing, broken French troops. Coalition cavalry harried the fugitives until about 23:00, with Gneisenau pursuing them as far as Genappe before ordering a halt. There, Napoleon's abandoned carriage was captured, still containing an annotated copy of Machiavelli's The Prince, and diamonds left behind in the rush to escape. These diamonds became part of King Friedrich Wilhelm of Prussia's crown jewels; one Major Keller of the F/15th received the Pour le Mérite with oak leaves for the feat. By this time 78 guns and 2,000 prisoners had also been taken, including more generals. Sources generally agree that the meeting of the two commanders, Wellington and Blücher, took place near La Belle Alliance at around 21:00. Aftermath Waterloo cost Wellington around 17,000 dead or wounded, and Blücher some 7,000 (810 of which were suffered by just one unit: the 18th Regiment, which served in Bülow's 15th Brigade, had fought at both Frichermont and Plancenoit, and won 33 Iron Crosses). Napoleon's losses were 24,000 to 26,000 killed or wounded, including 6,000 to 7,000 captured, with an additional 15,000 deserting in the days following the battle. At 10:30 on 19 June, General Grouchy, still following his orders, defeated General Thielemann at Wavre and withdrew in good order, though his 33,000 French troops never reached the Waterloo battlefield. Wellington sent his official dispatch describing the battle to England on 19 June 1815; it arrived in London on 21 June 1815 and was published as a London Gazette Extraordinary on 22 June. Wellington, Blücher and other Coalition forces advanced upon Paris. Following his defeat, Napoleon fled to Paris, arriving at 5:30 am on 21 June. 
While fleeing from the Waterloo battlefield, Napoleon wrote to Joseph, his brother and regent in Paris, believing that he could still raise an army to fight back the Anglo-Prussian forces. Napoleon believed he could rally French supporters to his cause and call upon conscripts to hold off the invading forces until General Grouchy's army could reinforce him in Paris. However, following the defeat at Waterloo, Napoleon's support from the French public and his own army waned; even Marshal Ney believed that Paris would fall if Napoleon remained in power. Napoleon's brother Lucien and Marshal Louis-Nicolas Davout advised him to continue fighting, to dissolve the Chamber of Deputies inherited from Louis XVIII's constitutional government, and to rule France as a military dictator, as he had in effect done under the guise of Emperor of the French from 1804 until 1814. To forestall any attempt by Napoleon to overthrow the Chamber of Deputies, and to avert a possible French civil war, the Chamber voted to declare itself in permanent session on 21 June after persuasion from Lafayette. On 22 June, realizing that he lacked military, public, and governmental support for his claim to continue to rule France, Napoleon sought to abdicate in favour of his son, Napoleon II. Napoleon's proposal for the instatement of his son was swiftly rejected by the legislature. Napoleon announced his second abdication on 24 June 1815. In the final skirmish of the Napoleonic Wars, Marshal Davout, Napoleon's minister of war, was defeated by Blücher at Issy on 3 July 1815. Allegedly, Napoleon tried to escape to North America, but the Royal Navy was blockading French ports to forestall such a move. He finally surrendered to Captain Frederick Maitland of HMS Bellerophon on 15 July. There was a campaign against the French fortresses that still held out; Longwy capitulated on 13 September 1815, the last to do so. Louis XVIII was restored to the throne of France, and Napoleon was exiled to Saint Helena, where he died in 1821. The Treaty of Paris was signed on 20 November 1815. Peregrine Maitland's 1st Foot Guards, who had defeated the Chasseurs of the Middle Guard, were mistakenly thought to have defeated the Grenadiers of the Old Guard. They were thus awarded the title of Grenadier Guards in recognition of their feat and adopted bearskins in the style of the Grenadiers. Britain's Household Cavalry likewise adopted the cuirass in 1821 in recognition of their success against their armoured French counterparts. The effectiveness of the lance was noted by all participants, and this weapon subsequently became more widespread throughout Europe; the British converted their first light cavalry regiment to lancers in 1816, with uniforms, of Polish origin, based on those of the Imperial Guard lancers. The teeth of tens of thousands of dead soldiers were removed by surviving troops, locals or even scavengers who had travelled there from Britain, and were then used for making denture replacements in Britain and elsewhere. The so-called "Waterloo teeth" were in demand because they came from relatively healthy young men. Despite the efforts of scavengers both human and otherwise, human remains could still be seen at Waterloo a year after the battle. Analysis Historical importance Waterloo proved a decisive battle in more than one sense. 
Each generation in Europe up to the outbreak of the First World War looked back at Waterloo as the turning point that dictated the course of subsequent world history, seeing it in retrospect as the event that ushered in the Concert of Europe and an era characterised by relative peace, material prosperity and technological progress. The battle definitively ended the series of wars that had convulsed Europe—and involved other regions of the world—since the French Revolution of the early 1790s. It also ended the First French Empire and the political and military career of Napoleon Bonaparte, one of the greatest commanders and statesmen in history. There followed almost four decades of international peace in Europe. No further major international conflict occurred until the Crimean War of 1853–1856. Changes to the configuration of European states, as refashioned in the aftermath of Waterloo, included the formation of the Holy Alliance of reactionary governments intent on repressing revolutionary and democratic ideas, and the reshaping of the former Holy Roman Empire into a German Confederation increasingly marked by the political dominance of Prussia. The bicentenary of Waterloo prompted renewed attention to the geopolitical and economic legacy of the battle and to the century of relative transatlantic peace which followed. Views on the reasons for Napoleon's defeat General Antoine-Henri, Baron Jomini, one of the leading military writers on the Napoleonic art of war, had a number of very cogent explanations of the reasons behind Napoleon's defeat at Waterloo. The Prussian soldier, historian, and theorist Carl von Clausewitz, who as a young colonel had served as chief-of-staff to Thielmann's Prussian III Corps during the Waterloo campaign, offered his own assessment, as did Wellington in his dispatch to London. In his famous study of the Campaign of 1815, Clausewitz does not agree with Wellington's assessment. Indeed, he claims that if Bonaparte had attacked in the morning, the battle would probably have been decided by the time the Prussians arrived, and an attack by Blücher, while not impossible or useless, would have been much less certain of success. Parkinson (2000) adds: "Neither army beat Napoleon alone. But whatever the part played by Prussian troops in the actual moment when the Imperial Guard was repulsed, it is difficult to see how Wellington could have staved off defeat, when his centre had been almost shattered, his reserves were almost all committed, the French right remained unmolested and the Imperial Guard intact. ... Blücher may not have been totally responsible for victory over Napoleon, but he deserved full credit for preventing a British defeat". Steele (2014) writes: "Blücher's arrival not only diverted vital reinforcements, but also forced Napoleon to accelerate his effort against Wellington. The tide of battle had been turned by the hard-driving Blücher. As his Prussians pushed in Napoleon's flank, Wellington was able to shift to the offensive". It has also been noted that Wellington's maps of the battlefield were based on a recent reconnaissance and were therefore more up to date than those used by Napoleon, who had to rely on Ferraris-Capitaine maps of 1794. Legacy The battlefield today Landmarks Some portions of the terrain on the battlefield have been altered from their 1815 appearance. 
Tourism began the day after the battle, with Captain Mercer noting that on 19 June "a carriage drove on the ground from Brussels, the inmates of which, alighting, proceeded to examine the field". In 1820, the Netherlands' King William I ordered the construction of a monument. The Lion's Mound, a giant artificial hill, was constructed on the battlefield using earth taken from the ridge at the centre of the British line, effectively removing the southern bank of Wellington's sunken road. The alleged remark by Wellington about the alteration of the battlefield as described by Hugo was never documented, however. Other terrain features and notable landmarks on the field have remained virtually unchanged since the battle. These include the rolling farmland to the east of the Brussels–Charleroi Road as well as the buildings at Hougoumont, La Haye Sainte, and La Belle Alliance. Monuments Apart from the Lion's Mound, there are several more conventional but noteworthy monuments throughout the battlefield. A cluster of monuments at the Brussels–Charleroi and Braine L'Alleud–Ohain crossroads marks the mass graves of British, Dutch, Hanoverian and King's German Legion troops. A monument to the French dead, entitled L'Aigle blessé ("The Wounded Eagle"), marks the location where it is believed one of the Imperial Guard units formed a square during the closing moments of the battle. A monument to the Prussian dead is located in the village of Plancenoit on the site where one of their artillery batteries took position. The Duhesme mausoleum is one of the few graves of the fallen. It is located at the side of Saint Martin's Church in Ways, a hamlet in the municipality of Genappe. Seventeen fallen officers are buried in the crypt of the British Monument in the Brussels Cemetery in Evere. Had the French won the Battle of Waterloo, Napoleon planned to commemorate the victory by building a pyramid of white stones, akin to the pyramids he had seen during his invasion of Egypt in 1798. Remains After the battle, the bodies of the tens of thousands who died were hastily buried in mass graves across the battlefield, a process that took at least ten days, according to accounts by those who visited the battlefield just after the battle. Remarkably, there is no record of any such mass grave being discovered in the 20th and 21st centuries; only two complete human skeletons have been found. The remains of a soldier thought to be 23-year-old Friederich Brandt were discovered in 2012. He was a slightly hunchbacked infantryman who had been hit in the chest by a French bullet. His coins, rifle and position on the battlefield identified him as a Hanoverian fighting in the King's German Legion. In 2022 a second skeleton was found in a ditch near a former field hospital by the Waterloo Uncovered charity. In December 2022, the historians Dr. Bernard Wilkin (Belgium) and Robin Schäfer (Germany), assisted by Belgian archaeologist Dominique Bosquet, discovered and recovered the largest assembly of remains of Waterloo battlefield casualties found in recent times. In the aftermath of the historians' research into the fate of the Waterloo battlefield (see below), several local individuals who were in possession of human remains recovered from it came forward. Forensic examination has shown that these remains belonged to at least four soldiers, some of whom are likely to be Prussian. 
Another set of human remains, initially discovered on the central battlefield by illegal metal detecting and consisting of the remains of six British soldiers, was also recovered by the team. Objects found with the casualties on the central battlefield indicate that at least one of them served in the First Foot Guards. The reason most often given for the absence of human remains in any quantity has been that European battlefields of the time were often scoured for bones to make bone meal, which was much in demand as a fertilizer before the discovery of superphosphates in the 1840s. This theory, however, is questionable, as actual evidence for it is lacking. In 2022, however, the historians Dr. Bernard Wilkin and Robin Schäfer, supported by the British archaeologist Professor Tony Pollard, concluded that in the aftermath of the conflict, local farmers dug up the corpses of horses and men and sold them to the Waterloo sugar factory. There, the ground-down bones were fired in kilns to make bone-char, which was then used to filter sugar syrup as part of the production process. Coin controversy As part of the bicentennial celebration of the battle, in 2015 Belgium minted a two-euro coin depicting the Lion monument over a map of the field of battle. France officially protested against this issue of coins, while the Belgian government noted that the French mint sells souvenir medals at Waterloo. After 180,000 coins were minted but not released, the issue was melted down. Instead, Belgium issued an identical commemorative coin in the non-standard denomination of 2.5 euros. Legally valid only within the issuing country (but unlikely to circulate), it was minted in brass, packaged, and sold by the Belgian mint for 6 euros. A ten-euro coin, showing Wellington, Blücher, their troops and the silhouette of Napoleon, was also available in silver for 42 euros. See also Military career of Napoleon Bonaparte Timeline of the Napoleonic era List of Napoleonic battles Waterloo Medal awarded to those soldiers of the British Army who fought at the battle. Battle of Waterloo reenactment Lord Uxbridge's leg was shattered by a grape-shot at the Battle of Waterloo and removed by a surgeon. The artificial leg used by Uxbridge for the rest of his life was donated to a Waterloo Museum after his death. There is also a second leg on display at his house, Plas Newydd, on Anglesey. Waterloo (1970 film) directed by Sergei Bondarchuk Notes References Homann, Arne; Wilkin, Bernard; Schäfer, Robin. "Die Toten von Waterloo: Aus dem Massengrab in die Zuckerfabrik?". Archäologie in Deutschland 2023 (3, Juni–Juli): 44–45. Further reading Articles Bijl, Marco. 8th Dutch Militia, a history of the 8th Dutch Militia battalion and the Bylandt Brigade, of which it was a part, in the 1815 campaign (using original sources from the Dutch and Belgian national archives). de Wit, Pierre. The Campaign of 1815: A Study, based on sources from all participating armies. Books Buttery, David. Waterloo Battlefield Guide (Pen and Sword, 2018). This on-line text contains Clausewitz's 58-chapter study of the Campaign of 1815 and Wellington's lengthy 1842 essay written in response to Clausewitz, as well as supporting documents and essays by the editors. Esdaile, Charles J. Walking Waterloo: A Guide (2019). A study of Agincourt, Waterloo and the Somme. Historiography and memory Balen, Malcolm. 
A Model Victory: Waterloo and the Battle for History (Harper Perennial, 2006). Bridoux, Jeff. "'Next to a battle lost, the greatest misery is a battle gained': the Battle of Waterloo-myth and reality". Intelligence and National Security 36.5 (2021): 754–770. Esdaile, Charles J. "Napoleon at Waterloo: The events of 18 June 1815 analyzed via historical simulation". JAMS: Journal of Advanced Military Studies 12#2 (2021): 11–44. Evans, Mark, et al. "Waterloo Uncovered: From discoveries in conflict archaeology to military veteran collaboration and recovery on one of the world's most famous battlefields", in Historic Landscapes and Mental Well-Being (2019): 253–265. online Francois, Pieter. "'The Best Way to See Waterloo is with your Eyes Shut': British 'Histourism,' Authenticity and Commercialism in the Mid-Nineteenth Century". Anthropological Journal of European Cultures 22#1 (2013): 25–41. Heffernan, Julian Jimenez. "Lying Epitaphs: 'Vanity Fair', Waterloo, and the Cult of the Dead". Victorian Literature and Culture 40#1 (2012): 25–45. Kennaway, James. "Military surgery as national romance: the memory of British heroic fortitude at Waterloo". War & Society 39.2 (2020): 77–92. online Keirstead, Christopher and Marysa Demoor, eds. "Special Issue: Waterloo and Its Afterlife in the Nineteenth-Century Periodical and Newspaper Press". Victorian Periodicals Review 48#4 (2015). Mongin, Philippe. "A game-theoretic analysis of the Waterloo campaign and some comments on the analytic narrative project". Cliometrica 12.3 (2018): 451–480. online Reynolds, Luke Alexander Lewis. "Who Owned Waterloo? Wellington's Veterans and the Battle for Relevance" (PhD diss., City University of New York, 2019). online Rigney, Ann. "Reframing Waterloo: Memory, mediation, experience", in The Varieties of Historical Experience (Routledge, 2019): 121–139. Seaton, A.V. "War and Thanatourism: Waterloo 1815–1914". Annals of Tourism Research 26#1 (1999): 130–158. Scott, Walter. Scott on Waterloo, edited by Paul O'Keeffe (Vintage Books, 2015). Shaw, Philip. Waterloo and the Romantic Imagination (Palgrave, 2002). Turner, Harry. Courage, Blood & Luck: Poems of Waterloo (Pen and Sword Military, 2013). Maps The map from the 1911 edition is also available online. Battle of Waterloo maps and diagrams Map of the battlefield on modern Google map and satellite photographs showing main locations of the battlefield 1816 Map of the battlefield with initial dispositions by Willem Benjamin Craan Primary sources Glover, Gareth, ed. Letters from the Battle of Waterloo: Unpublished Correspondence by Allied Officers from the Siborne Papers (Casemate Publishers, 2018). Earliest report of the battle in a London newspaper, from The Morning Post, 22 June 1815. Casualty returns and records of medals awarded for service before 1914 (sourced from WO 100). Staff, Empire and Sea Power: The Battle of Waterloo, retrieved on 9 June 2006. BBC History: Waterloo, retrieved on 9 June 2006. Uniforms French, Prussian and Anglo-allied uniforms during the Battle of Waterloo: Mont-Saint-Jean (FR) External links Records and images from the UK Parliament Collections Interview with Andrew Roberts on Napoleon & Wellington: The Battle of Waterloo and the Great Commanders Who Fought It Official guides of the Waterloo battlefield (British site) George Nafziger collection of Waterloo ORBATs for French and Allied forces. 
A boomerang is a thrown tool typically constructed with aerofoil sections and designed to spin about an axis perpendicular to the direction of its flight. A returning boomerang is designed to return to the thrower, while a non-returning boomerang is designed as a weapon to be thrown straight and is traditionally used by some Aboriginal Australians for hunting. Historically, boomerangs have been used for hunting, sport, and entertainment and are made in various shapes and sizes to suit different purposes. Although considered an Australian icon, ancient boomerangs have also been discovered elsewhere in Africa, the Americas, and Eurasia. Description A boomerang is a throwing stick with aerodynamic properties, traditionally made of wood, but also of bone, horn, tusks and even iron. Modern boomerangs used for sport may be made from plywood or plastics such as ABS, polypropylene, phenolic paper, or carbon fibre-reinforced plastics. Boomerangs come in many shapes and sizes depending on their geographic or tribal origins and intended function, including the traditional Australian type, the cross-stick, the pinwheel, the tumble-stick, the Boomabird, and other less common types. Boomerangs return to the thrower, distinguishing them from throwing sticks. Returning boomerangs fly and are examples of the earliest heavier-than-air human-made flight. A returning boomerang has two or more aerofoil section wings arranged so that when spinning they create unbalanced aerodynamic forces that curve its path into an ellipse, returning to its point of origin when thrown correctly. Their typical L-shape makes them the most recognisable form of boomerang. Although used primarily for leisure or recreation, returning boomerangs are also used to decoy birds of prey, thrown above the long grass to frighten game birds into flight and into waiting nets. Non-traditional, modern, competition boomerangs come in many shapes, sizes and materials. Throwing sticks, valari, or kylies, are primarily used as weapons. They lack the aerofoil sections, are generally heavier, and are designed to travel as straight and forcefully as possible to the target to bring down game. The Tamil valari variant, of ancient origin and mentioned in the Tamil Sangam literature "Purananuru", was one of these. The usual form of the valari is two limbs set at an angle: one thin and tapering, the other rounded as a handle. Valaris come in many shapes and sizes. They are usually cast from iron in moulds. Some may have wooden limbs tipped with iron or with lethally sharpened edges or with special double-edged and razor-sharp daggers known as kattari. Etymology The origin of the term is uncertain. One source asserts that the term entered the language in 1827, adapted from an extinct Aboriginal language of New South Wales, Australia, but mentions a variant, wo-mur-rang, which it dates to 1798. The first recorded encounter with a boomerang by Europeans was at Farm Cove (Port Jackson), in December 1804, when a weapon was witnessed during a tribal skirmish. David Collins had listed "Wo-mur-rāng" as one of eight Aboriginal "Names of clubs" in 1798, but was probably referring to the woomera, which is actually a spear-thrower. An anonymous 1790 manuscript on Aboriginal languages of New South Wales reported "Boo-mer-rit" as "the Scimiter". In 1822, it was described in detail and recorded as a "bou-mar-rang" in the language of the Turuwal people (a sub-group of the Darug) of the Georges River near Port Jackson. 
The Turawal used other words for their hunting sticks but used "boomerang" to refer to a returning throw-stick. History Boomerangs were, historically, used as hunting weapons, percussive musical instruments, battle clubs, fire-starters, decoys for hunting waterfowl, and as recreational play toys. The smallest boomerang may be less than from tip to tip, and the largest over in length. Tribal boomerangs may be inscribed or painted with designs meaningful to their makers. Most boomerangs seen today are of the tourist or competition sort, and are almost invariably of the returning type. Depictions of boomerangs being thrown at animals, such as kangaroos, appear in some of the oldest rock art in the world, the Indigenous Australian rock art of the Kimberley region, which is potentially up to 50,000 years old. Stencils and paintings of boomerangs also appear in the rock art of West Papua, including on Bird's Head Peninsula and Kaimana, likely dating to the Last Glacial Maximum, when lower sea levels led to cultural continuity between Papua and Arnhem Land in Northern Australia. The oldest surviving Australian Aboriginal boomerangs come from a cache found in a peat bog in the Wyrie Swamp of South Australia and date to 10,000 BC. Although traditionally thought of as Australian, boomerangs have been found also in ancient Europe, Egypt, and North America. There is evidence of the use of non-returning boomerangs by the Native Americans of California and Arizona, and inhabitants of South India for killing birds and rabbits. Some boomerangs were not thrown at all, but were used in hand to hand combat by Indigenous Australians. Ancient Egyptian examples, however, have been recovered, and experiments have shown that they functioned as returning boomerangs. Hunting sticks discovered in Europe seem to have formed part of the Stone Age arsenal of weapons. One boomerang that was discovered in Obłazowa Cave in the Carpathian Mountains in Poland was made of mammoth's tusk and is believed, based on AMS dating of objects found with it, to be about 30,000 years old. In the Netherlands, boomerangs have been found in Vlaardingen and Velsen from the first century BC. King Tutankhamun, the famous pharaoh of ancient Egypt, who died over 3,300 years ago, owned a collection of boomerangs of both the straight flying (hunting) and returning variety. No one knows for sure how the returning boomerang was invented, but some modern boomerang makers speculate that it developed from the flattened throwing stick, still used by Aboriginal Australians and other indigenous peoples around the world, including the Navajo in North America. A hunting boomerang is delicately balanced and much harder to make than a returning one. The curving flight characteristic of returning boomerangs was probably first noticed by early hunters trying to "tune" their throwing sticks to fly straight. It is thought by some that the shape and elliptical flight path of the returning boomerang makes it useful for hunting birds and small animals, or that noise generated by the movement of the boomerang through the air, or, by a skilled thrower, lightly clipping leaves of a tree whose branches house birds, would help scare the birds towards the thrower. It is further supposed by some that this was used to frighten flocks or groups of birds into nets that were usually strung up between trees or thrown by hidden hunters. 
In southeastern Australia, it is claimed that boomerangs were made to hover over a flock of ducks; mistaking it for a hawk, the ducks would dive away, toward hunters armed with nets or clubs. Traditionally, most boomerangs used by Aboriginal groups in Australia were non-returning. These weapons, sometimes called "throwsticks" or "kylies", were used for hunting a variety of prey, from kangaroos to parrots; at a range of about , a non-returning boomerang could inflict mortal injury to a large animal. A throwstick thrown nearly horizontally may fly in a nearly straight path and could fell a kangaroo on impact to the legs or knees, while the long-necked emu could be killed by a blow to the neck. Hooked non-returning boomerangs, known as "beaked kylies", used in northern Central Australia, have been claimed to kill multiple birds when thrown into a dense flock. Throwsticks are used as multi-purpose tools by today's Aboriginal peoples, and besides throwing could be wielded as clubs, used for digging, used to start friction fires, and are sonorous when two are struck together. Recent evidence also suggests that boomerangs were used as war weapons. Modern use Today, boomerangs are mostly used for recreation. There are different types of throwing contests: accuracy of return; Aussie round; trick catch; maximum time aloft; fast catch; and endurance (see below). The modern sport boomerang (often referred to as a 'boom' or 'rang') is made of Finnish birch plywood, hardwood, plastic or composite materials and comes in many different shapes and colours. Most sport boomerangs typically weigh less than , with MTA boomerangs (boomerangs used for the maximum-time-aloft event) often under . Boomerangs have also been suggested as an alternative to clay pigeons in shotgun sports, where the flight of the boomerang better mimics the flight of a bird offering a more challenging target. The modern boomerang is often computer-aided designed with precision airfoils. The number of "wings" is often more than 2 as more lift is provided by 3 or 4 wings than by 2. Among the latest inventions is a round-shaped boomerang, which has a different look but using the same returning principle as traditional boomerangs. This allows for safer catch for players. In 1992, German astronaut Ulf Merbold performed an experiment aboard Spacelab that established that boomerangs function in zero gravity as they do on Earth. French Astronaut Jean-François Clervoy aboard Mir repeated this in 1997. In 2008, Japanese astronaut Takao Doi again repeated the experiment on board the International Space Station. Beginning in the later part of the twentieth century, there has been a bloom in the independent creation of unusually designed art boomerangs. These often have little or no resemblance to the traditional historical ones and on first sight some of these objects may not look like boomerangs at all. The use of modern thin plywoods and synthetic plastics have greatly contributed to their success. Designs are very diverse and can range from animal inspired forms, humorous themes, complex calligraphic and symbolic shapes, to the purely abstract. Painted surfaces are similarly richly diverse. Some boomerangs made primarily as art objects do not have the required aerodynamic properties to return. Aerodynamics A returning boomerang is a rotating wing. It consists of two or more arms, or wings, connected at an angle; each wing is shaped as an airfoil section. 
Although it is not a requirement that a boomerang be in its traditional shape, it is usually flat. Boomerangs can be made for right- or left-handed throwers. The difference between right and left is subtle, the planform is the same but the leading edges of the aerofoil sections are reversed. A right-handed boomerang makes a counter-clockwise, circular flight to the left while a left-handed boomerang flies clockwise to the right. Most sport boomerangs weigh between , have a wingspan, and a range. A falling boomerang starts spinning, and most then fall in a spiral. When the boomerang is thrown with high spin, a boomerang flies in a curved rather than a straight line. When thrown correctly, a boomerang returns to its starting point. As the wing rotates and the boomerang moves through the air, the airflow over the wings creates lift on both "wings". However, during one-half of each blade's rotation, it sees a higher airspeed, because the rotation tip speed and the forward speed add, and when it is in the other half of the rotation, the tip speed subtracts from the forward speed. Thus if thrown nearly upright, each blade generates more lift at the top than the bottom. While it might be expected that this would cause the boomerang to tilt around the axis of travel, because the boomerang has significant angular momentum, the gyroscopic precession causes the plane of rotation to tilt about an axis that is 90 degrees to the direction of flight, causing it to turn. When thrown in the horizontal plane, as with a Frisbee, instead of in the vertical, the same gyroscopic precession will cause the boomerang to fly violently, straight up into the air and then crash. Fast Catch boomerangs usually have three or more symmetrical wings (seen from above), whereas a Long Distance boomerang is most often shaped similar to a question mark. Maximum Time Aloft boomerangs mostly have one wing considerably longer than the other. This feature, along with carefully executed bends and twists in the wings help to set up an "auto-rotation" effect to maximise the boomerang's hover time in descending from the highest point in its flight. Some boomerangs have turbulators — bumps or pits on the top surface that act to increase the lift as boundary layer transition activators (to keep attached turbulent flow instead of laminar separation). Throwing technique Boomerangs are generally thrown in unobstructed, open spaces at least twice as large as the range of the boomerang. The flight direction to the left or right depends upon the design of the boomerang itself, not the thrower. A right-handed or left-handed boomerang can be thrown with either hand, but throwing a boomerang with the non-matching hand requires a throwing motion that many throwers find awkward. The following technique applies to a right-handed boomerang; the directions are mirrored for a left-handed boomerang. Different boomerang designs have different flight characteristics and are suitable for different conditions. The accuracy of the throw depends on understanding the weight and aerodynamics of that particular boomerang, and the strength, consistency and direction of the wind; from this, the thrower chooses the angle of tilt, the angle against the wind, the elevation of the trajectory, the degree of spin and the strength of the throw. A great deal of trial and error is required to perfect the throw over time. 
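The gyroscopic turning described under Aerodynamics above can be illustrated with a rough order-of-magnitude estimate. The sketch below is only a minimal illustration, not a flight model: the mass, span, forward speed, spin rate and lift imbalance are all assumed, illustrative values, and the boomerang is treated crudely as a thin rod; it simply applies the standard precession relation (precession rate equals applied torque divided by spin angular momentum) to estimate how tightly the flight path curves.

import math

# Illustrative, assumed values for a typical sport boomerang (not measured data)
m = 0.1           # mass in kilograms
span = 0.4        # tip-to-tip span in metres
v = 20.0          # forward speed in metres per second
spin = 10.0       # spin rate in revolutions per second
delta_lift = 0.5  # assumed net lift difference between advancing and retreating halves, in newtons

omega = 2 * math.pi * spin         # spin angular velocity in radians per second
I = m * span ** 2 / 12             # moment of inertia, treating the boomerang as a thin rod about its centre
torque = delta_lift * span / 4     # net torque, with the extra lift acting about half-way out along one arm

precession = torque / (I * omega)  # gyroscopic precession rate of the spin plane, in radians per second
turn_radius = v / precession       # radius of the resulting curved flight path

print(f"precession rate ~ {precession:.2f} rad/s")
print(f"turn radius     ~ {turn_radius:.0f} m")

With these assumed numbers the estimate gives a turn radius of a few tens of metres, broadly consistent with the circular flight paths described here; a real boomerang is considerably more complicated, since the lift imbalance itself varies with forward speed, spin and the angle of layover.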
A properly thrown boomerang will travel out parallel to the ground, sometimes climbing gently, perform a graceful, anti-clockwise, circular or tear-drop shaped arc, flatten out and return in a hovering motion, coming in from the left or spiralling in from behind. Ideally, the hover will allow a practiced catcher to clamp their hands shut horizontally on the boomerang from above and below, sandwiching the centre between their hands. The grip used depends on size and shape; smaller boomerangs are held between finger and thumb at one end, while larger, heavier or wider boomerangs need one or two fingers wrapped over the top edge in order to induce a spin. The aerofoil-shaped section must face the inside of the thrower, and the flatter side outwards. It is usually inclined outwards, from a nearly vertical position to 20° or 30°; the stronger the wind, the closer to vertical. The elbow of the boomerang can point forwards or backwards, or it can be gripped for throwing; it just needs to start spinning on the required inclination, in the desired direction, with the right force. The boomerang is aimed to the right of the oncoming wind; the exact angle depends on the strength of the wind and the boomerang itself. Left-handed boomerangs are thrown to the left of the wind and will fly a clockwise flight path. The trajectory is either parallel to the ground or slightly upwards. The boomerang can return without the aid of any wind, but even very slight winds must be taken into account however calm they might seem. Little or no wind is preferable for an accurate throw, light winds up to are manageable with skill. If the wind is strong enough to fly a kite, then it may be too strong unless a skilled thrower is using a boomerang designed for stability in stronger winds. Gusty days are a great challenge, and the thrower must be keenly aware of the ebb and flow of the wind strength, finding appropriate lulls in the gusts to launch their boomerang. Competitions and records A world record achievement was made on 3 June 2007 by Tim Lendrum in Aussie Round. Lendrum scored 96 out of 100, giving him a national record as well as an equal world record throwing an "AYR" made by expert boomerang maker Adam Carroll. In international competition, a world cup is held every second year. , teams from Germany and the United States dominated international competition. The individual World Champion title was won in 2000, 2002, 2004, 2012, and 2016 by Swiss thrower Manuel Schütz. In 1992, 1998, 2006, and 2008 Fridolin Frost from Germany won the title. The team competitions of 2012 and 2014 were won by Boomergang (an international team). World champions were Germany in 2012 and Japan in 2014 for the first time. Boomergang was formed by individuals from several countries, including the Colombian Alejandro Palacio. In 2016 USA became team world champion. Competition disciplines Modern boomerang tournaments usually involve some or all of the events listed below In all disciplines the boomerang must travel at least from the thrower. Throwing takes place individually. The thrower stands at the centre of concentric rings marked on an open field. Events include: Aussie Round: considered by many to be the ultimate test of boomeranging skills. The boomerang should ideally cross the circle and come right back to the centre. Each thrower has five attempts. Points are awarded for distance, accuracy and the catch. Accuracy: points are awarded according to how close the boomerang lands to the centre of the rings. 
The thrower must not touch the boomerang after it has been thrown. Each thrower has five attempts. In major competitions there are two accuracy disciplines: Accuracy 100 and Accuracy 50. Endurance: points are awarded for the number of catches achieved in 5 minutes. Fast Catch: the time taken to throw and catch the boomerang five times. The winner has the fastest timed catches. Trick Catch/Doubling: points are awarded for trick catches behind the back, between the feet, and so on. In Doubling, the thrower has to throw two boomerangs at the same time and catch them in sequence in a special way. Consecutive Catch: points are awarded for the number of catches achieved before the boomerang is dropped. The event is not timed. MTA 100 (Maximal Time Aloft, ): points are awarded for the length of time spent by the boomerang in the air. The field is normally a circle measuring 100 m. An alternative to this discipline, without the 100 m restriction is called MTA unlimited. Long Distance: the boomerang is thrown from the middle point of a baseline. The furthest distance travelled by the boomerang away from the baseline is measured. On returning, the boomerang must cross the baseline again but does not have to be caught. A special section is dedicated to LD below. Juggling: as with Consecutive Catch, only with two boomerangs. At any given time one boomerang must be in the air. World records Guinness World Record – Smallest Returning Boomerang Non-discipline record: Smallest Returning Boomerang: Sadir Kattan of Australia in 1997 with long and wide. This tiny boomerang flew the required , before returning to the accuracy circles on 22 March 1997 at the Australian National Championships. Guinness World Record – Longest Throw of Any Object by a Human A boomerang was used to set a Guinness World Record with a throw of by David Schummy on 15 March 2005 at Murarrie Recreation Ground, Australia. This broke the record set by Erin Hemmings who threw an Aerobie on 14 July 2003 at Fort Funston, San Francisco. Long-distance versions Long-distance boomerang throwers aim to have the boomerang go the furthest possible distance while returning close to the throwing point. In competition the boomerang must intersect an imaginary surface defined as an infinite vertical projection of a line centred on the thrower. Outside of competitions, the definition is not so strict, and throwers may be happy simply not to walk too far to recover the boomerang. General properties Long-distance boomerangs are optimised to have minimal drag while still having enough lift to fly and return. For this reason, they have a very narrow throwing window, which discourages many beginners from continuing with this discipline. For the same reason, the quality of manufactured long-distance boomerangs is often difficult to determine. Today's long-distance boomerangs have almost all an S or ? – question mark shape and have a beveled edge on both sides (the bevel on the bottom side is sometimes called an undercut). This is to minimise drag and lower the lift. Lift must be low because the boomerang is thrown with an almost total layover (flat). Long-distance boomerangs are most frequently made of composite material, mainly fibre glass epoxy composites. Flight path The projection of the flight path of long-distance boomerang on the ground resembles a water drop. 
For older types of long-distance boomerangs (all types of so-called big hooks), the first and last third of the flight path are very low, while the middle third is a fast climb followed by a fast descent. Nowadays, boomerangs are made in a way that their whole flight path is almost planar with a constant climb during the first half of the trajectory and then a rather constant descent during the second half. From theoretical point of view, distance boomerangs are interesting also for the following reason: for achieving a different behaviour during different flight phases, the ratio of the rotation frequency to the forward velocity has a U-shaped function, i.e., its derivative crosses 0. Practically, it means that the boomerang being at the furthest point has a very low forward velocity. The kinetic energy of the forward component is then stored in the potential energy. This is not true for other types of boomerangs, where the loss of kinetic energy is non-reversible (the MTAs also store kinetic energy in potential energy during the first half of the flight, but then the potential energy is lost directly by the drag). Related terms In Noongar language, kylie is a flat curved piece of wood similar in appearance to a boomerang that is thrown when hunting for birds and animals. "Kylie" is one of the Aboriginal words for the hunting stick used in warfare and for hunting animals. Instead of following curved flight paths, kylies fly in straight lines from the throwers. They are typically much larger than boomerangs, and can travel very long distances; due to their size and hook shapes, they can cripple or kill an animal or human opponent. The word is perhaps an English corruption of a word meaning "boomerang" taken from one of the Western Desert languages, for example, the Warlpiri word "karli". Cultural references Trademarks of Australian companies using the boomerang as a symbol, emblem or logo proliferate, usually removed from Aboriginal context and symbolising "returning" or to distinguish an Australian brand. Early examples included Bain's White Ant Exterminator (1896); Webendorfer Bros. explosives (1898); E. A. Adams Foods (1920); and by the (still current) Boomerang Cigarette Papers Pty. Ltd. "Aboriginalia", including the boomerang, as symbols of Australia dates from the late 1940s and early 1950s and was in widespread use by a largely European arts, crafts and design community. By the 1960s, the Australian tourism industry extended it to the very branding of Australia, particularly to overseas and domestic tourists as souvenirs and gifts and thus Aboriginal culture. At the very time when Aboriginal people and culture were subject to policies that removed them from their traditional lands and sought to assimilate them (physiologically and culturally) into mainstream white Australian culture, causing the Stolen Generations, Aboriginalia found an ironically "nostalgic", entry point into Australian popular culture at important social locations: holiday resorts and in Australian domestic interiors. In the 21st century, souvenir objects depicting Aboriginal peoples, symbolism and motifs including the boomerang, from the 1940s–1970s, regarded as kitsch and sold largely to tourists in the first instance, became highly sought after by both Aboriginal and non-Aboriginal collectors and has captured the imagination of Aboriginal artists and cultural commentators. 
See also
List of premodern combat weapons
List of martial arts weapons
Australian Aboriginal artefacts
Batarang
Bat'leth
Captain Boomerang
Chakram
CAC Boomerang, a World War II fighter plane
Flying wing, a tailless, boomerang-shaped aircraft
Frisbee
Googie, boomerang-shaped architecture
Shuriken
Throwing stick
Valari
Melee weapon
Further reading
Boomerang (Encyclopedia.com)
Nishiyama, Yutaka (2012). "Why do boomerangs come back?". International Journal of Pure and Applied Mathematics 78(3): 335–347.
Valde-Nowak et al. (1987). "Upper Palaeolithic boomerang made of a mammoth tusk in south Poland". Nature 329: 436–438 (1 October 1987). doi:10.1038/329436a0.
External links
International Federation of Boomerang Associations
Boomerang aerodynamics: an online dissertation
Explanation of the origin of the word 'Boomerang'
How to Throw a Boomerang
Bodybuilding is the practice of progressive resistance exercise to build, control, and develop one's muscles via hypertrophy. An individual who engages in this activity is referred to as a bodybuilder. It is primarily undertaken for aesthetic purposes over functional ones, distinguishing it from similar activities such as powerlifting, which focuses solely on increasing the physical load one can exert. In professional bodybuilding, competitors appear onstage in line-ups and perform specified poses (and later individual posing routines) for a panel of judges who rank them based on conditioning, muscularity, posing, size, stage presentation, and symmetry. Bodybuilders prepare for competitions by exercising and eliminating non-essential body fat. This is enhanced at the final stage by a combination of carbohydrate loading and dehydration to achieve maximum muscle definition and vascularity. Some bodybuilders also tan and shave their bodies prior to competition. Bodybuilding requires significant time and effort to reach the desired results. A novice bodybuilder may be able to gain of muscle per year if they lift weights for seven hours per week, but muscle gains begin to slow down after the first two years to about per year. After five years, gains can decrease to as little as per year. Some bodybuilders use anabolic steroids and other performance-enhancing drugs to build muscles and recover from injuries faster, however using performance enhancing drugs can have serious health risks. Furthermore most competitions prohibit the use of these substances. Despite some calls for drug testing to be implemented, the National Physique Committee (considered the leading amateur bodybuilding federation) does not require testing. The winner of the annual IFBB Mr. Olympia contest is recognized as the world's top male professional bodybuilder. Since 1950, the NABBA Universe Championships have been considered the top amateur bodybuilding contests, with notable winners including Reg Park, Lee Priest, Steve Reeves, and Arnold Schwarzenegger. History Early history Stone-lifting competitions were practiced in ancient Egypt, Greece, and Tamilakam. Western weightlifting developed in Europe from 1880 to 1953, with strongmen displaying feats of strength for the public and challenging each other. The focus was not on their physique, and they possessed relatively large bellies and fatty limbs compared to bodybuilders of today. Eugen Sandow Bodybuilding developed in the late 19th century, promoted in England by German Eugen Sandow, now considered as the "Father of Modern Bodybuilding". He allowed audiences to enjoy viewing his physique in "muscle display performances". Although audiences were thrilled to see a well-developed physique, the men simply displayed their bodies as part of strength demonstrations or wrestling matches. Sandow had a stage show built around these displays through his manager, Florenz Ziegfeld. The Oscar-winning 1936 musical film The Great Ziegfeld depicts the beginning of modern bodybuilding, when Sandow began to display his body for carnivals. Sandow was so successful at flexing and posing his physique that he later created several businesses around his fame, and was among the first to market products branded with his name. He was credited with inventing and selling the first exercise equipment for the masses: machined dumbbells, spring pulleys, and tension bands. Even his image was sold by the thousands in "cabinet cards" and other prints. 
First large-scale bodybuilding competition Sandow organized the first bodybuilding contest on September 14, 1901, called the "Great Competition". It was held at the Royal Albert Hall in London. Judged by Sandow, Sir Charles Lawes, and Sir Arthur Conan Doyle, the contest was a great success and many bodybuilding enthusiasts were turned away due to the overwhelming number of audience members. The trophy presented to the winner was a gold statue of Sandow sculpted by Frederick Pomeroy. The winner was William L. Murray of Nottingham. The silver Sandow trophy was presented to second-place winner D. Cooper. The bronze Sandow trophy now the most famous of all was presented to third-place winner A.C. Smythe. In 1950, this same bronze trophy was presented to Steve Reeves for winning the inaugural NABBA Mr. Universe contest. It would not resurface again until 1977 when the winner of the IFBB Mr. Olympia contest, Frank Zane, was presented with a replica of the bronze trophy. Since then, Mr. Olympia winners have been consistently awarded a replica of the bronze Sandow. The first large-scale bodybuilding competition in America took place from December 28, 1903 to January 2, 1904, at Madison Square Garden in New York City. The competition was promoted by Bernarr Macfadden, the father of physical culture and publisher of original bodybuilding magazines such as Health & Strength. The winner was Al Treloar, who was declared "The Most Perfectly Developed Man in the World". Treloar won a thousand dollar cash prize, a substantial sum at that time. Two weeks later, Thomas Edison made a film of Treloar's posing routine. Edison had also made two films of Sandow a few years before. Those were the first three motion pictures featuring a bodybuilder. In the early 20th century, Macfadden and Charles Atlas continued to promote bodybuilding across the world. Notable early bodybuilders Many other important bodybuilders in the early history of bodybuilding prior to 1930 include: Earle Liederman (writer of some of bodybuilding's earliest books), Zishe Breitbart, Georg Hackenschmidt, Emy Nkemena, George F. Jowett, Finn Hateral (a pioneer in the art of posing), Frank Saldo, Monte Saldo, William Bankier, Launceston Elliot, Sig Klein, Sgt. Alfred Moss, Joe Nordquist, Lionel Strongfort ("Strongfortism"), Gustav Frištenský, Ralph Parcaut (a champion wrestler who also authored an early book on "physical culture"), and Alan P. Mead (who became a muscle champion despite the fact that he lost a leg in World War I). Actor Francis X. Bushman, who was a disciple of Sandow, started his career as a bodybuilder and sculptor's model before beginning his famous silent movie career. 1950s1960s Bodybuilding became more popular in the 1950s and 1960s with the emergence of strength and gymnastics champions, and the simultaneous popularization of bodybuilding magazines, training principles, nutrition for bulking up and cutting down, the use of protein and other food supplements, and the opportunity to enter physique contests. The number of bodybuilding organizations grew, and most notably the International Federation of Bodybuilders (IFBB) was founded in 1946 by Canadian brothers Joe and Ben Weider. Other bodybuilding organizations included the Amateur Athletic Union (AAU), National Amateur Bodybuilding Association (NABBA), and the World Bodybuilding Guild (WBBG). Consequently, the contests grew both in number and in size. Besides the many "Mr. XXX" (insert town, city, state, or region) championships, the most prestigious titles were Mr. 
America, Mr. World, Mr. Universe, Mr. Galaxy, and ultimately Mr. Olympia, which was started in 1965 by the IFBB and is now considered the most important bodybuilding competition in the world. During the 1950s, the most successful and most famous competing bodybuilders were Bill Pearl, Reg Park, Leroy Colbert, and Clarence Ross. Certain bodybuilders rose to fame thanks to the relatively new medium of television, as well as cinema. The most notable were Jack LaLanne, Steve Reeves, Reg Park, and Mickey Hargitay. While there were well-known gyms throughout the country during the 1950s (such as Vince's Gym in North Hollywood, California and Vic Tanny's chain gyms), there were still segments of the United States that had no "hardcore" bodybuilding gyms until the advent of Gold's Gym in the mid-1960s. Finally, the famed Muscle Beach in Santa Monica continued its popularity as the place to be for witnessing acrobatic acts, feats of strength, and the like. The movement grew more in the 1960s with increased TV and movie exposure, as bodybuilders were typecast in popular shows and movies. 1970s1990s New organizations In the 1970s, bodybuilding had major publicity thanks to the appearance of Arnold Schwarzenegger, Franco Columbu, Lou Ferrigno, Mike Mentzer and others in the 1977 docudrama Pumping Iron. By this time, the IFBB dominated the competitive bodybuilding landscape and the Amateur Athletic Union (AAU) took a back seat. The National Physique Committee (NPC) was formed in 1981 by Jim Manion, who had just stepped down as chairman of the AAU Physique Committee. The NPC has gone on to become the most successful bodybuilding organization in the United States and is the amateur division of the IFBB. The late 1980s and early 1990s saw the decline of AAU-sponsored bodybuilding contests. In 1999, the AAU voted to discontinue its bodybuilding events. Anabolic/androgenic steroid use This period also saw the rise of anabolic steroids in bodybuilding and many other sports. More significant use began with Arnold Schwarzenegger, Sergio Oliva, and Lou Ferrigno in the late 1960s and early 1970s, and continuing through the 1980s with Lee Haney, the 1990s with Dorian Yates, Ronnie Coleman, and Markus Rühl, and up to the present day. Bodybuilders such as Greg Kovacs attained mass and size never seen previously but were not successful at the pro level. Others were renowned for their spectacular development of a particular body part, like Tom Platz or Paul Demayo for their leg muscles. At the time of shooting Pumping Iron, Schwarzenegger, while never admitting to steroid use until long after his retirement, said, "You have to do anything you can to get the advantage in competition". He would later say that he did not regret using steroids. To combat anabolic steroid use and in the hopes of becoming a member of the IOC, the IFBB introduced doping tests for both steroids and other banned substances. Although doping tests occurred, the majority of professional bodybuilders still used anabolic steroids for competition. During the 1970s, the use of anabolic steroids was openly discussed, partly due to the fact they were legal. In the Anabolic Steroid Control Act of 1990, U.S. Congress placed anabolic steroids into Schedule III of the Controlled Substances Act (CSA). In Canada, steroids are listed under Schedule IV of the Controlled Drugs and Substances Act, enacted by the federal Parliament in 1996. 
World Bodybuilding Federation In 1990, professional wrestling promoter Vince McMahon attempted to form his own bodybuilding organization, the World Bodybuilding Federation (WBF). It operated as a sister organization to the World Wrestling Federation (WWF, now WWE), which provided cross-promotion via its performers and personalities. Tom Platz served as the WBF's director of talent development and announced the new organization during an ambush of that year's Mr. Olympia (which, unbeknownst to organizers, McMahon and Platz had attended as representatives of an accompanying magazine, Bodybuilding Lifestyles). The WBF touted efforts to bring bigger prize money and more "dramatic" events to the sport of bodybuilding, which resulted in its championships being held as pay-per-view events with WWF-inspired sports-entertainment features and showmanship. The organization signed high-value contracts with a number of IFBB regulars. The WBF's inaugural championship in June 1991 (won by Gary Strydom) received mixed reviews. The WBF was indirectly affected by a steroid scandal involving the WWF, prompting the organization to impose a drug-testing policy prior to the 1992 championship. The policy hampered the quality of the 1992 championship, while attempts to increase interest by hiring WCW wrestler Lex Luger as a figurehead (hosting a WBF television program on USA Network, and planning to make a guest pose during the 1992 championship before being injured in a motorcycle accident) and by attempting to sign Lou Ferrigno (who left the organization shortly after the drug-testing policy was announced) did not come to fruition. The second pay-per-view drew a minuscule audience, and the WBF dissolved a month later, in July 1992. 
These ambassadors, often in the form of fitness influencers or personal trainers, promote the brand by sharing their workout routines, dietary plans, and gym clothing. YouTube in particular has seen a surge in fitness content, ranging from gym vlogs to detailed discussions on workout attire. This not only provides consumers with an abundance of free resources to aid their fitness journey, but also creates a more informed consumer base. Another growing trend with gym-related social media is the phenomenon of gym-shaming; a video posted by content creator Jessica Fernandez on Twitch that went viral showed her lifting weights in a gym while a man in the background stared at her, sparking a widespread debate about narcissism and an increasingly toxic gym culture in the age of social media. The video led to criticism of an emerging trend in which gyms, once known as places for focused workouts, are now being treated as filming locations for aspiring or established influencers with bystanders being unintentionally placed under the public eye in the process. Bodybuilder Joey Swoll, who voiced his concerns over this culture, addressed the controversy by stating that while harassment in gyms needs to be addressed, the man in Fernandez's video was not guilty of it. Although social media is giving more attention to the world of bodybuilding, there are still some areas that are controversial. Areas Professional bodybuilding In the modern bodybuilding industry, the term "professional" generally means a bodybuilder who has won qualifying competitions as an amateur and has earned a "pro card" from their respective organization. Professionals earn the right to compete in competitions that include monetary prizes. A pro card also prohibits the athlete from competing in federations other than the one from which they have received the pro card. Depending on the level of success, these bodybuilders may receive monetary compensation from sponsors, much like athletes in other sports. Natural bodybuilding Due to the growing concerns of the high cost, health consequences, and illegal nature of some steroids, many organizations have formed in response and have deemed themselves "natural" bodybuilding competitions. In addition to the concerns noted, many promoters of bodybuilding have sought to shed the "freakish" perception that the general public has of bodybuilding and have successfully introduced a more mainstream audience to the sport of bodybuilding by including competitors whose physiques appear much more attainable and realistic. In natural contests, the testing protocol ranges among organizations from lie detectors to urinalysis. Penalties also range from organization to organization from suspensions to strict bans from competition. It is also important to note that natural organizations also have their own list of banned substances and it is important to refer to each organization's website for more information about which substances are banned from competition. There are many natural bodybuilding organizations; some of the larger ones include: MuscleMania, Ultimate Fitness Events (UFE), INBF/WNBF, and INBA/PNBA. These organizations either have an American or worldwide presence and are not limited to the country in which they are headquartered. Men's physique Due to those who found open-bodybuilding to be "too big" or "ugly" and unhealthy, a new category was started in 2013. The first Men's Physique Olympia winner was Mark Wingson, who was followed by Jeremy Buendia for four consecutive years. 
Like open-bodybuilding, the federations in which bodybuilders can compete are natural divisions as well as normal ones. The main difference between the two is that men's physique competitors pose in board shorts rather than a traditional posing suit and open-bodybuilders are much larger and are more muscular than the men's physique competitors. Open-bodybuilders have an extensive routine for posing while the Physique category is primarily judged by the front and back poses. Many of the men's physique competitors are not above 200 lbs and have a bit of a more attainable and aesthetic physique in comparison to open-bodybuilders. Although this category started off slowly, it has grown tremendously, and currently men's physique seems to be a more popular class than open-bodybuilding. Classic physique This is the middle ground of both Men's Physique and Bodybuilding. The competitors in this category are not nearly as big as bodybuilders but not as small as men's physique competitors. They pose and perform in men's boxer briefs to show off the legs, unlike Men's Physique which hide the legs in board shorts. Classic physique started in 2016. Danny Hester was the first classic physique Mr. Olympia and as of 2022, Chris Bumstead is the 4x reigning Mr. Olympia. Female bodybuilding The female movement of the 1960s, combined with Title IX and the all around fitness revolution, gave birth to new alternative perspectives of feminine beauty that included an athletic physique of toned muscle. This athletic physique was found in various popular media outlets such as fashion magazines. Female bodybuilders changed the limits of traditional femininity as their bodies showed that muscles are not only just for men. The first U.S. Women's National Physique Championship, promoted by Henry McGhee and held in 1978 in Canton, Ohio, is generally regarded as the first true female bodybuilding contest—that is, the first contest where the entrants were judged solely on muscularity. In 1980, the first Ms. Olympia (initially known as the "Miss" Olympia), the most prestigious contest for professionals, was held. The first winner was Rachel McLish, who had also won the NPC's USA Championship earlier in the year. The contest was a major turning point for female bodybuilding. McLish inspired many future competitors to start training and competing. In 1985, the documentary Pumping Iron II: The Women was released. It documented the preparation of several women for the 1983 Caesars Palace World Cup Championship. Competitors prominently featured in the film were Kris Alexander, Lori Bowen, Lydia Cheng, Carla Dunlap, Bev Francis, and McLish. At the time, Francis was actually a powerlifter, though she soon made a successful transition to bodybuilding, becoming one of the leading competitors of the late 1980s and early 1990s. The related areas of fitness and figure competition increased in popularity, surpassing that of female bodybuilding, and provided an alternative for women who choose not to develop the level of muscularity necessary for bodybuilding. McLish would closely resemble what is thought of today as a fitness and figure competitor, instead of what is now considered a female bodybuilder. Fitness competitions also adopted gymnastic elements. E. Wilma Conner competed in the 2011 NPC Armbrust Pro Gym Warrior Classic Championships in Loveland, Colorado, at the age of 75 years and 349 days. Competition In competitive bodybuilding, bodybuilders aspire to present an "aesthetically pleasing" body on stage. 
In prejudging, competitors do a series of mandatory poses: the front lat spread, rear lat spread, front double biceps, back double biceps, side chest, side triceps, Most Muscular (men only), abdominals and thighs. Each competitor also performs a personal choreographed routine to display their physique. A posedown is usually held at the end of a posing round, while judges are finishing their scoring. Bodybuilders usually spend a lot of time practising their posing in front of mirrors or under the guidance of their coach. In contrast to strongman or powerlifting competitions, where physical strength is paramount, or to Olympic weightlifting, where the main point is equally split between strength and technique, bodybuilding competitions typically emphasize condition, size, and symmetry. Different organizations emphasize particular aspects of competition, and sometimes have different categories in which to compete. Preparations Bulking and cutting The general strategy adopted by most present-day competitive bodybuilders is to make muscle gains for most of the year (known as the "off-season") and, approximately 12–14 weeks from competition, lose a maximum of body fat (referred to as "cutting") while preserving as much muscular mass as possible. The bulking phase entails remaining in a net positive energy balance (calorie surplus). The amount of a surplus in which a person remains is based on the person's goals, as a bigger surplus and longer bulking phase will create more fat tissue. The surplus of calories relative to one's energy balance will ensure that muscles remain in a state of anabolism. The cutting phase entails remaining in a net negative energy balance (calorie deficit). The main goal of cutting is to oxidize fat while preserving as much muscle as possible. The larger the calorie deficit, the faster one will lose weight. However, a large calorie deficit will also create the risk of losing muscle tissue. The bulking and cutting strategy is effective because there is a well-established link between muscle hypertrophy and being in a state of positive energy balance. A sustained period of caloric surplus will allow the athlete to gain more fat-free mass than they could otherwise gain under eucaloric conditions. Some gain in fat mass is expected, which athletes seek to oxidize in a cutting period while maintaining as much lean mass as possible. Clean bulking The attempt to increase muscle mass in one's body without any gain in fat is called clean bulking. Competitive bodybuilders focus their efforts to achieve a peak appearance during a brief "competition season". Clean bulking takes longer and is a more refined approach to achieving the body fat and muscle mass percentage a person is looking for. A common tactic for keeping fat low and muscle mass high is to have higher calorie and lower calorie days to maintain a balance between gain and loss. Many clean bulk diets start off with a moderate amount of carbs, moderate amount of protein, and a low amount of fats. To maintain a clean bulk, it is important to reach calorie goals every day. Macronutrient goals (carbs, fats, and proteins) will be different for each person, but it is ideal to get as close as possible. Dirty bulking "Dirty bulking" is the process of eating at a massive caloric surplus without trying to figure out the exact amount of ingested macronutrients. Weightlifters who attempt to gain mass quickly with no aesthetic concerns often choose to do this. 
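The arithmetic behind these phases can be sketched in a few lines of code. This is a minimal illustration only: the maintenance intake, the sizes of the surplus and deficit, and the 40/30/30 carbohydrate/protein/fat split are assumed example figures rather than recommendations, while the 4 kcal/g and 9 kcal/g energy densities are standard values.

```python
# Minimal sketch of the bulking/cutting calorie bookkeeping described above.
# All specific numbers (maintenance calories, surplus/deficit sizes, macro split)
# are illustrative assumptions, not prescriptions.

def daily_targets(maintenance_kcal, phase, split=(0.40, 0.30, 0.30)):
    """Return (calories, carb_g, protein_g, fat_g) for a given phase.

    phase: "bulk" (modest calorie surplus) or "cut" (modest calorie deficit).
    split: fraction of calories from (carbohydrate, protein, fat).
    """
    if phase == "bulk":
        kcal = maintenance_kcal + 300   # a small surplus limits fat gain
    elif phase == "cut":
        kcal = maintenance_kcal - 500   # a deficit; too large risks muscle loss
    else:
        raise ValueError("phase must be 'bulk' or 'cut'")

    carb_frac, protein_frac, fat_frac = split
    carb_g = kcal * carb_frac / 4        # 4 kcal per gram of carbohydrate
    protein_g = kcal * protein_frac / 4  # 4 kcal per gram of protein
    fat_g = kcal * fat_frac / 9          # 9 kcal per gram of fat
    return kcal, round(carb_g), round(protein_g), round(fat_g)

if __name__ == "__main__":
    for phase in ("bulk", "cut"):
        kcal, c, p, f = daily_targets(2800, phase)
        print(f"{phase}: {kcal} kcal -> {c} g carbs, {p} g protein, {f} g fat")
```

Run with an assumed 2,800 kcal maintenance intake, the sketch prints gram targets for a modest bulk and a modest cut; in practice, as the text notes, the targets would be adjusted to the individual and to how body composition responds over time.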
Muscle growth Bodybuilders use three main strategies to maximize muscle hypertrophy: Strength training through weights or elastic/hydraulic resistance. Specialized nutrition, incorporating extra protein and supplements when necessary. Adequate rest, including sleep and recuperation between workouts. Weight training Intensive weight training causes micro-tears to the muscles being trained; this is generally known as microtrauma. These micro-tears in the muscle contribute to the soreness felt after exercise, called delayed onset muscle soreness (DOMS). It is the repair of these micro-traumas that results in muscle growth. Normally, this soreness becomes most apparent a day or two after a workout. However, as muscles become adapted to the exercises, soreness tends to decrease. Weight training aims to build muscle by prompting two different types of hypertrophy: sarcoplasmic and myofibrillar. Sarcoplasmic hypertrophy leads to larger muscles and so is favored by bodybuilders more than myofibrillar hypertrophy, which builds athletic strength. Sarcoplasmic hypertrophy is triggered by increasing repetitions, whereas myofibrillar hypertrophy is triggered by lifting heavier weight. In either case, there is an increase in both size and strength of the muscles (compared to what happens if that same individual does not lift weights at all), although the emphasis is different. Nutrition The high levels of muscle growth and repair achieved by bodybuilders require a specialized diet. Generally speaking, bodybuilders require more calories than the average person of the same weight to provide the protein and energy requirements needed to support their training and increase muscle mass. In preparation of a contest, a sub-maintenance level of food energy is combined with cardiovascular exercise to lose body fat. Proteins, carbohydrates and fats are the three major macronutrients that the human body needs in order to build muscle. The ratios of calories from carbohydrates, proteins, and fats vary depending on the goals of the bodybuilder. Carbohydrates Carbohydrates play an important role for bodybuilders. They give the body energy to deal with the rigors of training and recovery. Carbohydrates also promote secretion of insulin, a hormone enabling cells to get the glucose they need. Insulin also carries amino acids into cells and promotes protein synthesis. Insulin has steroid-like effects in terms of muscle gains. It is impossible to promote protein synthesis without the existence of insulin, which means that without ingesting carbohydrates or protein—which also induces the release of insulin—it is impossible to add muscle mass. Bodybuilders seek out low-glycemic polysaccharides and other slowly digesting carbohydrates, which release energy in a more stable fashion than high-glycemic sugars and starches. This is important as high-glycemic carbohydrates cause a sharp insulin response, which places the body in a state where it is likely to store additional food energy as fat. However, bodybuilders frequently do ingest some quickly digesting sugars (often in form of pure dextrose or maltodextrin) just before, during, and/or just after a workout. This may help to replenish glycogen stored within the muscle, and to stimulate muscle protein synthesis. Protein The motor proteins actin and myosin generate the forces exerted by contracting muscles. Cortisol decreases amino acid uptake by muscle and inhibits protein synthesis. 
Current recommendations suggest that bodybuilders should consume 25–30% of protein per total calorie intake to further their goal of maintaining and improving their body composition. This is a widely debated topic, with many arguing that 1 gram of protein per pound of body weight per day is ideal, some suggesting that less is sufficient, while others recommending 1.5, 2, or more. It is believed that protein needs to be consumed frequently throughout the day, especially during/after a workout, and before sleep. There is also some debate concerning the best type of protein to take. Chicken, turkey, beef, pork, fish, eggs and dairy foods are high in protein, as are some nuts, seeds, beans, and lentils. Casein or whey are often used to supplement the diet with additional protein. Whey is the type of protein contained in many popular brands of protein supplements and is preferred by many bodybuilders because of its high biological value (BV) and quick absorption rates. Whey protein also has a bigger effect than casein on insulin levels, triggering about double the amount of insulin release. That effect is somewhat overcome by combining casein and whey. Bodybuilders were previously thought to require protein with a higher BV than that of soy, which was additionally avoided due to its alleged estrogenic (female hormone) properties, though more recent studies have shown that soy actually contains phytoestrogens which compete with estrogens in the male body and can block estrogenic actions. Soy, flax, and other plant-based foods that contain phytoestrogens are also beneficial because they can inhibit some pituitary functions while stimulating the liver's P450 system (which eliminates hormones, drugs, and waste from the body) to more actively process and excrete excess estrogen. Meals Some bodybuilders often split their food intake into 5 to 7 meals of equal nutritional content and eat at regular intervals (e.g., every 2 to 3 hours). This approach serves two purposes: to limit overindulging in the cutting phase, and to allow for the consumption of large volumes of food during the bulking phase. Eating more frequently does not increase basal metabolic rate when compared to 3 meals a day. While food does have a metabolic cost to digest, absorb, and store, called the thermic effect of food, it depends on the quantity and type of food, not how the food is spread across the meals of the day. Well-controlled studies using whole-body calorimetry and doubly labeled water have demonstrated that there is no metabolic advantage to eating more frequently. Dietary supplements The important role of nutrition in building muscle and losing fat means bodybuilders may consume a wide variety of dietary supplements. Various products are used in an attempt to augment muscle size, increase the rate of fat loss, improve joint health, increase natural testosterone production, enhance training performance and prevent potential nutrient deficiencies. Performance-enhancing substances Some bodybuilders use drugs such as anabolic steroids and precursor substances such as prohormones to increase muscle hypertrophy. Anabolic steroids cause hypertrophy of both types (I and II) of muscle fibers, likely caused by an increased synthesis of muscle proteins. They also provoke undesired side effects including hepatotoxicity, gynecomastia, acne, the early onset of male pattern baldness and a decline in the body's own testosterone production, which can cause testicular atrophy. 
Other performance-enhancing substances used by competitive bodybuilders include human growth hormone (HGH). HGH is also used by female bodybuilders to obtain bigger muscles "while maintaining a 'female appearance'". Muscle growth is more difficult to achieve in older adults than in younger adults because of biological aging, which leads to many metabolic changes detrimental to muscle growth, for instance by diminishing growth hormone and testosterone levels. Some recent clinical studies have shown that low-dose HGH treatment for adults with HGH deficiency changes body composition by increasing muscle mass, decreasing fat mass, and increasing bone density and muscle strength, improves cardiovascular parameters, and affects quality of life without significant side effects. In rodents, knockdown of metallothionein gene expression results in activation of the Akt pathway and increases in myotube size, in type IIb fiber hypertrophy, and ultimately in muscle strength. Injecting oil into muscles Some bodybuilders inject oils or other compounds into their muscles (sometimes known as "synthol") in order to enhance their size or appearance. This practice can have serious health consequences. Rest Although muscle stimulation occurs when lifting weights, muscle growth occurs afterward, during rest periods. Some bodybuilders add a massage at the end of each workout to their routine as a method of recovery. Overtraining Overtraining occurs when a bodybuilder has trained to the point where their workload exceeds their recovery capacity. There are many reasons why overtraining occurs, including lack of adequate nutrition, lack of recovery time between workouts, insufficient sleep, and training at high intensity for too long (a lack of splitting apart workouts). Training at high intensity too frequently also stimulates the central nervous system (CNS) and can result in a hyperadrenergic state that interferes with sleep patterns. To avoid overtraining, intense, frequent training must be met with at least an equal amount of purposeful recovery. Timely provision of carbohydrates, proteins, and various micronutrients such as vitamins, minerals, phytochemicals, and even nutritional supplements is critical. A mental disorder, informally called bigorexia (by analogy with anorexia), may account for overtraining in some individuals. Sufferers feel as if they are never big enough or muscular enough, which drives them to overtrain in an attempt to reach their goal physique. An article in Muscle & Fitness magazine, "Overtrain for Big Gains", claimed that overtraining for a brief period can be beneficial. Overtraining can be used advantageously, as when a bodybuilder is purposely overtrained for a brief period of time to supercompensate during a regeneration phase. These are known as "shock micro-cycles" and were a key training technique used by Soviet athletes.
Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare. Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC. Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential. Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism. Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the provisions of both the BWC and the Chemical Weapons Convention. Toxins and psychochemical weapons are often referred to as midspectrum agents. Unlike bioweapons, these midspectrum agents do not reproduce in their host and are typically characterized by shorter incubation periods. Overview A biological attack could conceivably result in large numbers of civilian casualties and cause severe disruption to economic and societal infrastructure. A nation or group that can pose a credible threat of mass casualty has the ability to alter the terms under which other nations or groups interact with it. When indexed to weapon mass and cost of development and storage, biological weapons possess destructive potential and loss of life far in excess of nuclear, chemical or conventional weapons. Accordingly, biological agents are potentially useful as strategic deterrents, in addition to their utility as offensive weapons on the battlefield. As a tactical weapon for military use, a significant problem with biological warfare is that it would take days to be effective, and therefore might not immediately stop an opposing force. Some biological agents (smallpox, pneumonic plague) have the capability of person-to-person transmission via aerosolized respiratory droplets. This feature can be undesirable, as the agent(s) may be transmitted by this mechanism to unintended populations, including neutral or even friendly forces. 
Worse still, such a weapon could "escape" the laboratory where it was developed, even if there was no intent to use it – for example by infecting a researcher who then transmits it to the outside world before realizing that they were infected. Several cases are known of researchers becoming infected and dying of Ebola, which they had been working with in the lab (though nobody else was infected in those cases) – while there is no evidence that their work was directed towards biological warfare, it demonstrates the potential for accidental infection even of careful researchers fully aware of the dangers. While containment of biological warfare is less of a concern for certain criminal or terrorist organizations, it remains a significant concern for the military and civilian populations of virtually all nations. History Antiquity and Middle Ages Rudimentary forms of biological warfare have been practiced since antiquity. The earliest documented incident of the intention to use biological weapons is recorded in Hittite texts of 1500–1200 BCE, in which victims of tularemia were driven into enemy lands, causing an epidemic. The Assyrians poisoned enemy wells with the fungus ergot, though with unknown results. Scythian archers dipped their arrows and Roman soldiers their swords into excrements and cadavers – victims were commonly infected by tetanus as result. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa. Specialists disagree about whether this operation was responsible for the spread of the Black Death into Europe, Near East and North Africa, resulting in the deaths of approximately 25 million Europeans. Biological agents were extensively used in many parts of Africa from the sixteenth century AD, most of the time in the form of poisoned arrows, or powder spread on the war front as well as poisoning of horses and water supply of the enemy forces. In Borgu, there were specific mixtures to kill, hypnotize, make the enemy bold, and to act as an antidote against the poison of the enemy as well. The creation of biologicals was reserved for a specific and professional class of medicine-men. 18th to 19th century During the French and Indian War, in June 1763 a group of Native Americans laid siege to British-held Fort Pitt. The commander of Fort Pitt, Simeon Ecuyer, ordered his men to take smallpox-infested blankets from the infirmary and give it to a Lenape delegation during the siege. A reported outbreak that began the spring before left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear whether the smallpox was a result of the Fort Pitt incident or the virus was already present among the Delaware people as outbreaks happened on their own every dozen or so years and the delegates were met again later and seemingly had not contracted smallpox. During the American Revolutionary War, Continental Army officer George Washington mentioned to the Continental Congress that he had heard a rumor from a sailor that his opponent during the Siege of Boston, General William Howe, had deliberately sent civilians out of the city in the hopes of spreading the ongoing smallpox epidemic to American lines; Washington, remaining unconvinced, wrote that he "could hardly give credit to" the claim. Washington had already inoculated his soldiers, diminishing the effect of the epidemic. 
Some historians have claimed that a detachment of the Corps of Royal Marines stationed in New South Wales, Australia, deliberately used smallpox there in 1789. Dr Seth Carus states: "Ultimately, we have a strong circumstantial case supporting the theory that someone deliberately introduced smallpox in the Aboriginal population." World War I By 1900 the germ theory and advances in bacteriology brought a new level of sophistication to the techniques for possible use of bio-agents in war. Biological sabotage in the form of anthrax and glanders was undertaken on behalf of the Imperial German government during World War I (1914–1918), with indifferent results. The Geneva Protocol of 1925 prohibited the first use of chemical and biological weapons against enemy nationals in international armed conflicts. World War II With the onset of World War II, the Ministry of Supply in the United Kingdom established a biological warfare program at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill and soon tularemia, anthrax, brucellosis, and botulism toxins had been effectively weaponized. In particular, Gruinard Island in Scotland, was contaminated with anthrax during a series of extensive tests for the next 56 years. Although the UK never offensively used the biological weapons it developed, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production. Other nations, notably France and Japan, had begun their own biological weapons programs. When the United States entered the war, Allied resources were pooled at the request of the British. The U.S. then established a large research program and industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck. The biological and chemical weapons developed during that period were tested at the Dugway Proving Grounds in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulism toxins, although the war was over before these weapons could be of much operational use. The most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This biological warfare research unit conducted often fatal human experiments on prisoners, and produced biological weapons for combat use. Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against Chinese soldiers and civilians in several military campaigns. In 1940, the Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague. Many of these operations were ineffective due to inefficient delivery systems, although up to 400,000 people may have died. During the Zhejiang-Jiangxi Campaign in 1942, around 1,700 Japanese troops died out of a total 10,000 Japanese soldiers who fell ill with disease when their own biological weapons attack rebounded on their own forces. During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night. The plan was set to launch on 22 September 1945, but it was not executed because of Japan's surrender on 15 August 1945. 
Cold War In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses, but the programme was unilaterally cancelled in 1956. The United States Army Biological Warfare Laboratories weaponized anthrax, tularemia, brucellosis, Q-fever and others. In 1969, US President Richard Nixon decided to unilaterally terminate the offensive biological weapons program of the US, allowing only scientific research for defensive measures. This decision increased the momentum of the negotiations for a ban on biological warfare, which took place from 1969 to 1972 in the United Nation's Conference of the Committee on Disarmament in Geneva. These negotiations resulted in the Biological Weapons Convention, which was opened for signature on 10 April 1972 and entered into force on 26 March 1975 after its ratification by 22 states. Despite being a party and depositary to the BWC, the Soviet Union continued and expanded its massive offensive biological weapons program, under the leadership of the allegedly civilian institution Biopreparat. The Soviet Union attracted international suspicion after the 1979 Sverdlovsk anthrax leak killed approximately 65 to 100 people. 1948 Arab–Israeli War According to historians Benny Morris and Benjamin Kedar, Israel conducted a biological warfare operation codenamed "Cast Thy Bread" during the 1948 Arab–Israeli War. The Haganah initially used typhoid bacteria to contaminate water wells in newly-cleared Arab villages to prevent the population including militiamen from returning. Later, the biological warfare campaign expanded to include Jewish settlements that were in imminent danger of being captured by Arab troops and inhabited Arab towns not slated for capture. There was also plans to expand the biological warfare campaign into other Arab states including Egypt, Lebanon and Syria, but they were not carried out. International law International restrictions on biological warfare began with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of biological and chemical weapons in international armed conflicts. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only. The 1972 Biological Weapons Convention (BWC) supplements the Geneva Protocol by prohibiting the development, production, acquisition, transfer, stockpiling and use of biological weapons. Having entered into force on 26 March 1975, the BWC was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. As of March 2021, 183 states have become party to the treaty. The BWC is considered to have established a strong global norm against biological weapons, which is reflected in the treaty's preamble, stating that the use of biological weapons would be "repugnant to the conscience of mankind". The BWC's effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance. In 1985, the Australia Group was established, a multilateral export control regime of 43 countries aiming to prevent the proliferation of chemical and biological weapons. 
In 2004, the United Nations Security Council passed Resolution 1540, which obligates all UN Member States to develop and enforce appropriate legal and regulatory measures against the proliferation of chemical, biological, radiological, and nuclear weapons and their means of delivery, in particular, to prevent the spread of weapons of mass destruction to non-state actors. Bioterrorism Biological weapons are difficult to detect, economical and easy to use, making them appealing to terrorists. The cost of a biological weapon is estimated to be about 0.05 percent the cost of a conventional weapon in order to produce similar numbers of mass casualties per kilometer square. Moreover, their production is very easy as common technology can be used to produce biological warfare agents, like that used in production of vaccines, foods, spray devices, beverages and antibiotics. A major factor in biological warfare that attracts terrorists is that they can easily escape before the government agencies or secret agencies have even started their investigation. This is because the potential organism has an incubation period of 3 to 7 days, after which the results begin to appear, thereby giving terrorists a lead. A technique called Clustered, Regularly Interspaced, Short Palindromic Repeat (CRISPR-Cas9) is now so cheap and widely available that scientists fear that amateurs will start experimenting with them. In this technique, a DNA sequence is cut off and replaced with a new sequence, e.g. one that codes for a particular protein, with the intent of modifying an organism's traits. Concerns have emerged regarding do-it-yourself biology research organizations due to their associated risk that a rogue amateur DIY researcher could attempt to develop dangerous bioweapons using genome editing technology. In 2002, when CNN went through Al-Qaeda's (AQ's) experiments with crude poisons, they found out that AQ had begun planning ricin and cyanide attacks with the help of a loose association of terrorist cells. The associates had infiltrated many countries like Turkey, Italy, Spain, France and others. In 2015, to combat the threat of bioterrorism, a National Blueprint for Biodefense was issued by the Blue-Ribbon Study Panel on Biodefense. Also, 233 potential exposures of select biological agents outside of the primary barriers of the biocontainment in the US were described by the annual report of the Federal Select Agent Program. Though a verification system can reduce bioterrorism, an employee, or a lone terrorist having adequate knowledge of a bio-technology company's facilities, can cause potential danger by utilizing, without proper oversight and supervision, that company's resources. Moreover, it has been found that about 95% of accidents that have occurred due to low security have been done by employees or those who had a security clearance. Entomology Entomological warfare (EW) is a type of biological warfare that uses insects to attack the enemy. The concept has existed for centuries and research and development have continued into the modern era. EW has been used in battle by Japan and several other nations have developed and been accused of using an entomological warfare program. EW may employ insects in a direct attack or as vectors to deliver a biological agent, such as plague. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas. 
The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method uses uninfected insects, such as bees or wasps, to directly attack the enemy. Genetics Theoretically, novel approaches in biotechnology, such as synthetic biology could be used in the future to design novel types of biological warfare agents. Would demonstrate how to render a vaccine ineffective; Would confer resistance to therapeutically useful antibiotics or antiviral agents; Would enhance the virulence of a pathogen or render a nonpathogen virulent; Would increase the transmissibility of a pathogen; Would alter the host range of a pathogen; Would enable the evasion of diagnostic/detection tools; Would enable the weaponization of a biological agent or toxin. Most of the biosecurity concerns in synthetic biology are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. Recently, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space. By target Anti-personnel Ideal characteristics of a biological agent to be used as a weapon against humans are high infectivity, high virulence, non-availability of vaccines and availability of an effective and efficient delivery system. Stability of the weaponized agent (the ability of the agent to retain its infectivity and virulence after a prolonged period of storage) may also be desirable, particularly for military applications, and the ease of creating one is often considered. Control of the spread of the agent may be another desired characteristic. The primary difficulty is not the production of the biological agent, as many biological agents used in weapons can be manufactured relatively quickly, cheaply and easily. Rather, it is the weaponization, storage, and delivery in an effective vehicle to a vulnerable target that pose significant problems. For example, Bacillus anthracis is considered an effective agent for several reasons. First, it forms hardy spores, perfect for dispersal aerosols. Second, this organism is not considered transmissible from person to person, and thus rarely if ever causes secondary infections. A pulmonary anthrax infection starts with ordinary influenza-like symptoms and progresses to a lethal hemorrhagic mediastinitis within 3–7 days, with a fatality rate that is 90% or higher in untreated patients. Finally, friendly personnel and civilians can be protected with suitable antibiotics. Agents considered for weaponization, or known to be weaponized, include bacteria such as Bacillus anthracis, Brucella spp., Burkholderia mallei, Burkholderia pseudomallei, Chlamydophila psittaci, Coxiella burnetii, Francisella tularensis, some of the Rickettsiaceae (especially Rickettsia prowazekii and Rickettsia rickettsii), Shigella spp., Vibrio cholerae, and Yersinia pestis. 
Many viral agents have been studied and/or weaponized, including some of the Bunyaviridae (especially Rift Valley fever virus), Ebolavirus, many of the Flaviviridae (especially Japanese encephalitis virus), Machupo virus, Coronaviruses, Marburg virus, Variola virus, and yellow fever virus. Fungal agents that have been studied include Coccidioides spp. Toxins that can be used as weapons include ricin, staphylococcal enterotoxin B, botulinum toxin, saxitoxin, and many mycotoxins. These toxins and the organisms that produce them are sometimes referred to as select agents. In the United States, their possession, use, and transfer are regulated by the Centers for Disease Control and Prevention's Select Agent Program. The former US biological warfare program categorized its weaponized anti-personnel bio-agents as either Lethal Agents (Bacillus anthracis, Francisella tularensis, Botulinum toxin) or Incapacitating Agents (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, Staphylococcal enterotoxin B). Anti-agriculture Anti-crop/anti-vegetation/anti-fisheries The United States developed an anti-crop capability during the Cold War that used plant diseases (bioherbicides, or mycoherbicides) for destroying enemy agriculture. Biological weapons also target fisheries as well as water-based vegetation. It was believed that the destruction of enemy agriculture on a strategic scale could thwart Sino-Soviet aggression in a general war. Diseases such as wheat blast and rice blast were weaponized in aerial spray tanks and cluster bombs for delivery to enemy watersheds in agricultural regions to initiate epiphytotic (epidemics among plants). On the other hand, some sources report that these agents were stockpiled but never weaponized. When the United States renounced its offensive biological warfare program in 1969 and 1970, the vast majority of its biological arsenal was composed of these plant diseases. Enterotoxins and Mycotoxins were not affected by Nixon's order. Though herbicides are chemicals, they are often grouped with biological warfare and chemical warfare because they may work in a similar manner as biotoxins or bioregulators. The Army Biological Laboratory tested each agent and the Army's Technical Escort Unit was responsible for the transport of all chemical, biological, radiological (nuclear) materials. Biological warfare can also specifically target plants to destroy crops or defoliate vegetation. The United States and Britain discovered plant growth regulators (i.e., herbicides) during the Second World War, which were then used by the UK in the counterinsurgency operations of the Malayan Emergency. Inspired by the use in Malaysia, the US military effort in the Vietnam War included a mass dispersal of a variety of herbicides, famously Agent Orange, with the aim of destroying farmland and defoliating forests used as cover by the Viet Cong. Sri Lanka deployed military defoliants in its prosecution of the Eelam War against Tamil insurgents. Anti-livestock During World War I, German saboteurs used anthrax and glanders to sicken cavalry horses in U.S. and France, sheep in Romania, and livestock in Argentina intended for the Entente forces. One of these German saboteurs was Anton Dilger. Also, Germany itself became a victim of similar attacks – horses bound for Germany were infected with Burkholderia by French operatives in Switzerland. During World War II, the U.S. and Canada secretly investigated the use of rinderpest, a highly lethal disease of cattle, as a bioweapon. 
In the 1980s, the Soviet Ministry of Agriculture successfully developed variants of foot-and-mouth disease and rinderpest for use against cattle, African swine fever for pigs, and psittacosis for chickens. These agents were prepared to be sprayed from tanks attached to airplanes over hundreds of miles. The secret program was code-named "Ecology".

During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle.

Defensive operations

Medical countermeasures

In 2010, at the Meeting of the States Parties to the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and Their Destruction in Geneva, sanitary epidemiological reconnaissance was suggested as a well-tested means of enhancing the monitoring of infectious and parasitic agents and of putting the International Health Regulations (2005) into practice. The aim was to prevent and minimize the consequences of natural outbreaks of dangerous infectious diseases as well as the threat of the alleged use of biological weapons against BTWC States Parties.

Many countries require their active-duty military personnel to be vaccinated against certain diseases that could potentially be used as bioweapons, such as anthrax.

Public health and disease surveillance

Most classical and modern biological weapons pathogens can be obtained from naturally infected plants or animals. In the largest known biological weapons accident, the 1979 anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union, sheep became ill with anthrax as far as 200 kilometers from the release point at a military facility in the southeastern portion of the city, a site still off-limits to visitors today (see Sverdlovsk anthrax leak).

Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill. For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with compromised immune systems or who received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). The incubation period for humans is estimated to be about 11.8 to 12.1 days; this estimate comes from the first model that is independently consistent with data from the largest known human outbreak. These projections refine previous estimates of the distribution of early-onset cases after a release and support a recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses of anthrax. By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.
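To make the timing argument concrete, the following minimal sketch (an illustrative assumption, not the model used in the cited studies) treats the inhalational anthrax incubation period as lognormally distributed around the roughly 12-day median quoted above and asks what share of an exposed cohort would still be presymptomatic, and therefore still able to benefit from prophylactic antibiotics, at various days after a release. The shape parameter SIGMA and the chosen example days are hypothetical.

```python
# Minimal illustrative sketch, not the model from the cited studies.
# Assumes a lognormal incubation-period distribution for inhalational
# anthrax with a median of ~12 days (the figure quoted in the text);
# the shape parameter SIGMA is a hypothetical assumption.
import math

MEDIAN_DAYS = 12.0          # approximate median incubation period
SIGMA = 0.7                 # assumed lognormal shape parameter (illustrative)
MU = math.log(MEDIAN_DAYS)  # lognormal location parameter

def fraction_symptomatic(day: float) -> float:
    """Lognormal CDF: fraction of exposed people symptomatic by a given day."""
    if day <= 0:
        return 0.0
    z = (math.log(day) - MU) / (SIGMA * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

for day in (2, 5, 10, 20, 40, 60):
    sick = fraction_symptomatic(day)
    print(f"day {day:>2}: {sick:6.1%} symptomatic, "
          f"{1.0 - sick:6.1%} still presymptomatic (candidates for prophylaxis)")
```

Under these assumed parameters, only a few percent of those exposed would be symptomatic within the first two days, which is the window in which timely reporting would let the remaining majority begin antibiotics before onset.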
Common epidemiological warnings

From most specific to least specific:
A single case of disease caused by an uncommon agent, with no epidemiological explanation.
An unusual, rare, or genetically engineered strain of an agent.
High morbidity and mortality rates among patients with the same or similar symptoms.
Unusual presentation of the disease.
Unusual geographic or seasonal distribution.
A stable endemic disease with an unexplained increase in incidence.
Transmission through unusual routes (aerosols, food, water).
No illness in people who are not exposed to common ventilation systems (i.e., who have separate, closed ventilation systems), when illness is seen in persons in close proximity who share a common ventilation system.
Different, unexplained diseases coexisting in the same patient.
A rare illness that affects a large, disparate population (respiratory disease might suggest the pathogen or agent was inhaled).
Illness that is unusual for the population or age group in which it occurs.
Unusual trends of death and/or illness in animal populations, preceding or accompanying illness in humans.
Many affected people seeking treatment at the same time.
Similar genetic makeup of agents in affected individuals.
Simultaneous clusters of similar illness in non-contiguous areas, domestic or foreign.
An abundance of cases of unexplained diseases and deaths.

Bioweapon identification

The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries, private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets to provide layered defenses against biological weapons attacks. During the first Gulf War, the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.

The traditional approach to protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.

The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires. In the Netherlands, the company TNO has designed the Bioaerosol Single Particle Recognition eQuipment (BiosparQ); this system would be incorporated into the national response plan for bioweapons attacks in the Netherlands. Researchers at Ben Gurion University in Israel are developing a device called the BioPen, essentially a "Lab-in-a-Pen", which can detect known biological agents in under 20 minutes using an adaptation of ELISA, a widely employed immunological technique, that in this case incorporates fiber optics. An example of how immunoassay readouts of this kind are turned into concentration estimates is sketched below.
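The sketch below back-calculates an analyte concentration from an absorbance reading using a four-parameter logistic (4PL) standard curve, the fit commonly used to quantify ELISA data. It is a generic illustration of the quantification step only; the curve parameters and the sample reading are hypothetical values and are not characteristics of the BiosparQ or BioPen instruments described above.

```python
# Illustrative sketch of ELISA-style quantification with a four-parameter
# logistic (4PL) standard curve. All parameter values are hypothetical and
# are not taken from the instruments described in the text.
import math

# Assumed 4PL parameters (normally obtained by fitting known standards):
A = 0.05   # response at zero concentration (background absorbance)
D = 2.00   # response at saturating concentration
C = 1.50   # inflection point of the curve, in ng/mL
B = 1.20   # slope factor

def response(conc_ng_ml: float) -> float:
    """Forward 4PL model: predicted absorbance for a given concentration."""
    return D + (A - D) / (1.0 + (conc_ng_ml / C) ** B)

def concentration(absorbance: float) -> float:
    """Invert the 4PL curve to estimate concentration from an absorbance."""
    if not (min(A, D) < absorbance < max(A, D)):
        raise ValueError("reading outside the quantifiable range of the curve")
    return C * ((A - D) / (absorbance - D) - 1.0) ** (1.0 / B)

sample_od = 0.80  # hypothetical absorbance of an unknown sample
est = concentration(sample_od)
print(f"absorbance {sample_od:.2f} -> estimated {est:.2f} ng/mL")
print(f"check: predicted absorbance at that concentration = {response(est):.2f}")
```

In practice the four parameters are refitted to a dilution series of known standards on each plate, and readings near the flat ends of the curve are reported as below or above the quantifiable range rather than interpolated.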
List of programs, projects and sites by country

United States
Fort Detrick, Maryland
U.S. Army Biological Warfare Laboratories (1943–69)
Building 470
One-Million-Liter Test Sphere
Operation Sea-Spray
Operation Whitecoat (1954–73)
U.S. entomological warfare program
Operation Big Itch
Operation Big Buzz
Operation Drop Kick
Operation May Day
Project Bacchus
Project Clear Vision
Project SHAD
Project 112
Horn Island Testing Station
Fort Terry
Granite Peak Installation
Vigo Ordnance Plant

United Kingdom
Porton Down
Gruinard Island
Nancekuke
Operation Vegetarian (1942–1944)
Open-air field tests:
Operation Harness off Antigua, 1948–1950.
Operation Cauldron off Stornoway, 1952.
Operation Hesperus off Stornoway, 1953.
Operation Ozone off Nassau, 1954.
Operation Negation off Nassau, 1954–5.

Soviet Union and Russia
Biopreparat (18 labs and production centers)
Stepnogorsk Scientific and Technical Institute for Microbiology, Stepnogorsk, northern Kazakhstan
Institute of Ultra Pure Biochemical Preparations, Leningrad, a weaponized plague center
Vector State Research Center of Virology and Biotechnology (VECTOR), a weaponized smallpox center
Institute of Applied Biochemistry, Omutninsk
Kirov bioweapons production facility, Kirov, Kirov Oblast
Zagorsk smallpox production facility, Zagorsk
Berdsk bioweapons production facility, Berdsk
Bioweapons research facility, Obolensk
Sverdlovsk bioweapons production facility (Military Compound 19), Sverdlovsk, a weaponized anthrax center
Institute of Virus Preparations
Poison laboratory of the Soviet secret services
Vozrozhdeniya
Project Bonfire
Project Factor

Japan
Unit 731
Zhongma Fortress
Kaimingjie germ weapon attack
Khabarovsk War Crime Trials
Epidemic Prevention and Water Purification Department

Iraq
Al Hakum
Salman Pak facility
Al Manal facility

South Africa
Project Coast
Delta G Scientific Company
Roodeplaat Research Laboratories
Protechnik

Rhodesia

Canada
Grosse Isle, Quebec, site (1939–45) of research into anthrax and other agents
DRDC Suffield, Suffield, Alberta

List of associated people

Bioweaponeers (includes scientists and administrators):
Shyh-Ching Lo
Kanatjan Alibekov, known as Ken Alibek
Ira Baldwin
Wouter Basson
Kurt Blome
Eugen von Haagen
Anton Dilger
Paul Fildes
Arthur Galston (unwittingly)
Kurt Gutzeit
Riley D. Housewright
Shiro Ishii
Elvin A. Kabat
George W. Merck
Frank Olson
Vladimir Pasechnik
William C. Patrick III
Sergei Popov
Theodor Rosebury
Rihab Rashid Taha
Prince Tsuneyoshi Takeda
Huda Salih Mahdi Ammash
Nassir al-Hindawi
Erich Traub
Auguste Trillat
Baron Otto von Rosen
Yujiro Wakamatsu
Yazid Sufaat

Writers and activists:
Daniel Barenblatt
Leonard A. Cole
Stephen Endicott
Arthur Galston
Jeanne Guillemin
Edward Hagerman
Sheldon H. Harris
Nicholas D. Kristof
Joshua Lederberg
Matthew Meselson
Toby Ord
Richard Preston
Ed Regis
Mark Wheelis
David Willman
Aaron Henderson

In popular culture

See also
Animal-borne bomb attacks
Antibiotic resistance
Asymmetric warfare
Baker Island
Bioaerosol
Biological contamination
Biological pest control
Biosecurity
Chemical weapon
Counterinsurgency
Discredited AIDS origins theories
Enterotoxin
Entomological warfare
Ethnic bioweapon
Herbicidal warfare
Hittite plague
Human experimentation in the United States
John W. Powell
Johnston Atoll Chemical Agent Disposal System
List of CBRN warfare forces
McNeill's law
Military animal
Mycotoxin
Plum Island Animal Disease Center
Project 112
Project AGILE
Project SHAD
Rhodesia and weapons of mass destruction
Trichothecene
Yellow rain

References

Further reading
Counterproliferation Paper No. 53, USAF Counterproliferation Center, Air University, Maxwell Air Force Base, Alabama, USA.
External links
Biological weapons and international humanitarian law, ICRC
WHO: Health Aspects of Biological and Chemical Weapons
USAMRIID – U.S. Army Medical Research Institute of Infectious Diseases